Wolfram Burgard

Albert-Ludwigs-Universität Freiburg

H-index: 132


About Wolfram Burgard

Wolfram Burgard is a distinguished researcher at Albert-Ludwigs-Universität Freiburg with an exceptional h-index of 132 overall and 82 since 2020. He specializes in Robotics, Artificial Intelligence, AI, Machine Learning, and Computer Vision.

His recent articles reflect a diverse array of research interests and contributions to the field:

Automatic Target-Less Camera-LiDAR Calibration From Motion and Deep Point Correspondences

Evaluation of a Smart Mobile Robotic System for Industrial Plant Inspection and Supervision

Centergrasp: Object-aware implicit representation learning for simultaneous shape reconstruction and 6-dof grasp estimation

uPLAM: Robust Panoptic Localization and Mapping Leveraging Perception Uncertainties

Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation

Language, affordance and physics in robot cognition and intelligent systems

Bayesian Optimization for Sample-Efficient Policy Improvement in Robotic Manipulation

BEVCar: Camera-Radar Fusion for BEV Map and Object Segmentation

Wolfram Burgard Information

University: Albert-Ludwigs-Universität Freiburg
Position: Professor of Computer Science
Citations (all): 111,331
Citations (since 2020): 40,780
Cited by: 88,350
h-index (all): 132
h-index (since 2020): 82
i10-index (all): 535
i10-index (since 2020): 371
University profile page: Albert-Ludwigs-Universität Freiburg

Wolfram Burgard Skills & Research Interests

Robotics

Artificial Intelligence

AI

Machine Learning

Computer Vision

Top articles of Wolfram Burgard

Automatic Target-Less Camera-LiDAR Calibration From Motion and Deep Point Correspondences

Authors

Kürsat Petek, Niclas Vödisch, Johannes Meyer, Daniele Cattaneo, Abhinav Valada, Wolfram Burgard

Journal

arXiv preprint arXiv:2404.17298

Published Date

2024/4/26

Sensor setups of robotic platforms commonly include both camera and LiDAR as they provide complementary information. However, fusing these two modalities typically requires a highly accurate calibration between them. In this paper, we propose MDPCalib, a novel method for camera-LiDAR calibration that requires neither human supervision nor any specific target objects. Instead, we utilize sensor motion estimates from visual and LiDAR odometry as well as deep learning-based 2D-pixel-to-3D-point correspondences that are obtained without in-domain retraining. We represent the camera-LiDAR calibration as a graph optimization problem and minimize the costs induced by constraints from sensor motion and point correspondences. In extensive experiments, we demonstrate that our approach yields highly accurate extrinsic calibration parameters and is robust to random initialization. Additionally, our approach generalizes to a wide range of sensor setups, which we demonstrate by employing it on various robotic platforms including a self-driving perception car, a quadruped robot, and a UAV. To make our calibration method publicly accessible, we release the code on our project website at http://calibration.cs.uni-freiburg.de.
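
To make the motion-based part of the optimization concrete, the sketch below solves a hand-eye-style calibration in SE(2) with SciPy: given paired camera and LiDAR odometry motions, it minimizes the constraint A X = X B over the unknown extrinsic X. This is a simplified reading of the paper's graph optimization; MDPCalib works in SE(3) and additionally includes point-correspondence cost terms, and all data here is synthetic.

```python
# Minimal hand-eye-style sketch of motion-based camera-LiDAR calibration in SE(2).
# Simplifying assumption: only motion constraints, no point-correspondence terms.
import numpy as np
from scipy.optimize import least_squares

def se2(x, y, yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

def log_se2(T):
    # Minimal error vector: translation plus heading angle.
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def residuals(params, cam_motions, lidar_motions):
    X = se2(*params)  # unknown camera-from-LiDAR extrinsic
    # Hand-eye constraint A @ X = X @ B for each motion pair (A, B).
    return np.concatenate([log_se2(np.linalg.inv(A @ X) @ (X @ B))
                           for A, B in zip(cam_motions, lidar_motions)])

# Synthetic data: a ground-truth extrinsic and consistent odometry pairs.
X_true = se2(0.5, -0.2, 0.1)
rng = np.random.default_rng(0)
lidar_motions = [se2(*rng.uniform(-1, 1, 2), rng.uniform(-0.5, 0.5)) for _ in range(30)]
cam_motions = [X_true @ B @ np.linalg.inv(X_true) for B in lidar_motions]

sol = least_squares(residuals, x0=[0.0, 0.0, 0.0], args=(cam_motions, lidar_motions))
print("estimated extrinsic (x, y, yaw):", sol.x)  # approaches (0.5, -0.2, 0.1)
```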

Evaluation of a Smart Mobile Robotic System for Industrial Plant Inspection and Supervision

Authors

Georg KJ Fischer, Max Bergau, D Adriana Gómez-Rosal, Andreas Wachaja, Johannes Gräter, Matthias Odenweller, Uwe Piechottka, Fabian Hoeflinger, Nikhil Gosala, Niklas Wetzel, Daniel Büscher, Abhinav Valada, Wolfram Burgard

Journal

arXiv preprint arXiv:2402.07691

Published Date

2024/2/12

Automated and autonomous industrial inspection is a longstanding research field, driven by the necessity to enhance safety and efficiency within industrial settings. In addressing this need, we introduce an autonomously navigating robotic system designed for comprehensive plant inspection. This innovative system comprises a robotic platform equipped with a diverse array of sensors integrated to facilitate the detection of various process and infrastructure parameters. These sensors encompass optical (LiDAR, Stereo, UV/IR/RGB cameras), olfactory (electronic nose), and acoustic (microphone array) capabilities, enabling the identification of factors such as methane leaks, flow rates, and infrastructural anomalies. The proposed system underwent individual evaluation at a wastewater treatment site within a chemical plant, providing a practical and challenging environment for testing. The evaluation process encompassed key aspects such as object detection, 3D localization, and path planning. Furthermore, specific evaluations were conducted for optical methane leak detection and localization, as well as acoustic assessments focusing on pump equipment and gas leak localization.

Centergrasp: Object-aware implicit representation learning for simultaneous shape reconstruction and 6-dof grasp estimation

Authors

Eugenio Chisari, Nick Heppert, Tim Welschehold, Wolfram Burgard, Abhinav Valada

Journal

IEEE Robotics and Automation Letters

Published Date

2024/4/15

Reliable object grasping is a crucial capability for autonomous robots. However, many existing grasping approaches focus on general clutter removal without explicitly modeling objects, thus relying only on the visible local geometry. We introduce CenterGrasp, a novel framework that combines object awareness and holistic grasping. CenterGrasp learns a general object prior by encoding shapes and valid grasps in a continuous latent space. It consists of an RGB-D image encoder that leverages recent advances to detect objects and infer their pose and latent code, and a decoder to predict shape and grasps for each object in the scene. We perform extensive experiments on simulated as well as real-world cluttered scenes and demonstrate strong scene reconstruction and 6-DoF grasp-pose estimation performance. Compared to the state of the art, CenterGrasp achieves an improvement of 38.5 mm in shape …

uPLAM: Robust Panoptic Localization and Mapping Leveraging Perception Uncertainties

Authors

Kshitij Sirohi, Daniel Büscher, Wolfram Burgard

Journal

arXiv preprint arXiv:2402.05840

Published Date

2024/2/8

The availability of a reliable map and a robust localization system is critical for the operation of an autonomous vehicle. In a modern system, both mapping and localization solutions generally employ convolutional neural network (CNN)-based perception. Hence, any algorithm should consider potential errors in perception for safe and robust functioning. In this work, we present uncertainty-aware panoptic Localization and Mapping (uPLAM), which employs perception uncertainty as a bridge to fuse the perception information with classical localization and mapping approaches. We introduce an uncertainty-based map aggregation technique to create a long-term panoptic bird's eye view map and provide an associated mapping uncertainty. Our map consists of surface semantics and landmarks with unique IDs. Moreover, we present panoptic uncertainty-aware particle filter-based localization. To this end, we propose an uncertainty-based particle importance weight calculation for the adaptive incorporation of perception information into localization. We also present a new dataset for evaluating long-term panoptic mapping and map-based localization. Extensive evaluations showcase that our proposed uncertainty incorporation leads to better mapping with reliable uncertainty estimates and accurate localization. We make our dataset and code available at http://uplam.cs.uni-freiburg.de.
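
One plausible way to picture the uncertainty-based importance weighting is to let per-detection perception uncertainty inflate the measurement noise in the particle filter update, so that uncertain perception influences localization less. The 1-D sketch below assumes exactly that; the paper's actual weight formulation may differ, and `expected` is a hypothetical measurement model.

```python
# Toy uncertainty-weighted particle update; assumes perception uncertainty
# inflates the measurement noise (a simplification of the paper's formulation).
import numpy as np

def update_weights(particles, weights, z, expected, sigma_meas, perception_unc):
    # Down-weight uncertain detections by inflating the measurement sigma.
    sigma = sigma_meas * (1.0 + perception_unc)
    lik = np.exp(-0.5 * ((z - expected(particles)) / sigma) ** 2)
    w = weights * lik
    return w / w.sum()

rng = np.random.default_rng(1)
particles = rng.normal(0.0, 2.0, size=500)       # 1-D pose hypotheses
weights = np.full(500, 1 / 500)
z = 1.0                                          # e.g. a landmark-derived observation
w_sharp = update_weights(particles, weights, z, lambda p: p, 0.3, perception_unc=0.0)
w_soft = update_weights(particles, weights, z, lambda p: p, 0.3, perception_unc=2.0)
print("posterior mean, certain perception:  ", np.sum(w_sharp * particles))
print("posterior mean, uncertain perception:", np.sum(w_soft * particles))
```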

Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation

Authors

Abdelrhman Werby, Chenguang Huang, Martin Büchner, Abhinav Valada, Wolfram Burgard

Published Date

2024/3/26

Typically, robotic mapping relies on highly accurate dense representations obtained via approaches to simultaneous localization and mapping. While these maps allow for point/voxel-level features, they do not provide language grounding within large-scale environments due to the sheer number of points. In this work, we present HOV-SG, a hierarchical open-vocabulary 3D scene graph mapping approach for robot navigation. Using open-vocabulary vision foundation models, we first obtain state-of-the-art open-vocabulary maps in 3D. We then perform floor as well as room segmentation and identify room names. Finally, we construct a 3D scene graph hierarchy. Our approach is able to represent multi-story buildings and allows robots to traverse them by providing feasible links among floors. We demonstrate long-horizon robotic navigation in large-scale indoor environments from long queries using large language models based on the obtained scene graph tokens and outperform previous baselines.
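
A toy version of the hierarchy and its language-grounded lookup: node features are aggregated bottom-up, and a query embedding is routed greedily from floors to rooms to objects. The `embed` function here is a deterministic stand-in for a real vision-language encoder such as CLIP, so the similarities are only illustrative.

```python
# Minimal floor -> room -> object hierarchy with greedy open-vocabulary routing.
import hashlib
import numpy as np

def embed(text):
    # Placeholder for a vision-language encoder; deterministic per string.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "little")
    v = np.random.default_rng(seed).normal(size=64)
    return v / np.linalg.norm(v)

class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)
        # Aggregate child features bottom-up so queries can be routed top-down.
        f = (np.mean([c.feat for c in self.children], axis=0)
             if self.children else embed(name))
        self.feat = f / np.linalg.norm(f)

def descend(node, query):
    # Greedy top-down routing: pick the most query-similar child at each level.
    while node.children:
        node = max(node.children, key=lambda c: float(c.feat @ query))
    return node

graph = Node("building", [
    Node("floor 0", [Node("kitchen", [Node("coffee machine"), Node("sink")])]),
    Node("floor 1", [Node("office", [Node("desk"), Node("bookshelf")])]),
])
print(descend(graph, embed("coffee machine")).name)  # -> coffee machine
```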

Language, affordance and physics in robot cognition and intelligent systems

Authors

Nutan Chen, Walterio W Mayol-Cuevas, Maximilian Karl, Elie Aljalbout, Andy Zeng, Aurelio Cortese, Wolfram Burgard, Herke van Hoof

Published Date

2024/1/9

Humans can learn new skills and recognize new objects quickly from a small number of data points. This could be attributed to our ability to generalize concepts and transfer from one task to another. For instance, humans can easily recognize if a cuboid can be sat on even if they have never seen or used it before, an ability known as affordance perception. Likewise, humans can precisely estimate the trajectory of a moving ball by perceiving and predicting physical laws. This Research Topic asks whether robots could use similarly layered cognitive systems to learn efficiently. Recent progress has been made in this area, but many unsolved problems exist for efficient robot cognition and learning. This Research Topic discusses comprehensive updates and high-quality practices concerning machine-learning-based robot cognition. A generalist agent could strongly benefit from combining high-level affordances, intermediate-level human or robot language, and low-level prediction and recognition of physical equations (matching or learning an observed phenomenon with known physical laws) to perform in a large variety of tasks and environments (Figure 1). The goal is to improve the state of the art in language integration, affordances, and physics-based inductive biases and representations or their combination. In particular, affordances allow collecting action possibilities, enabling fast discovery and learning of the environment, often from one or a few observations. In addition, natural language provides a simple and promising approach to robotic communication and cognition tasks. On top of that, machine learning provides a common framework for …

Bayesian Optimization for Sample-Efficient Policy Improvement in Robotic Manipulation

Authors

Adrian Röfer, Iman Nematollahi, Tim Welschehold, Wolfram Burgard, Abhinav Valada

Journal

arXiv preprint arXiv:2403.14305

Published Date

2024/3/21

Sample-efficient learning of manipulation skills poses a major challenge in robotics. While recent approaches demonstrate impressive advances in the type of task that can be addressed and the sensing modalities that can be incorporated, they still require large amounts of training data. Especially with regard to learning actions on robots in the real world, this poses a major problem due to the high costs associated with both demonstrations and real-world robot interactions. To address this challenge, we introduce BOpt-GMM, a hybrid approach that combines imitation learning with the robot's own experience collection. We first learn a skill model as a dynamical system encoded in a Gaussian Mixture Model from a few demonstrations. We then improve this model with Bayesian optimization building on a small number of autonomous skill executions in a sparse reward setting. We demonstrate the sample efficiency of our approach on multiple complex manipulation skills in both simulations and real-world experiments. Furthermore, we make the code and pre-trained models publicly available at http://bopt-gmm.cs.uni-freiburg.de.
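
The sample-efficiency argument rests on Bayesian optimization needing only a handful of episode returns. Below is a generic Gaussian-process-plus-expected-improvement loop over a single scalar policy parameter with a made-up return landscape; BOpt-GMM optimizes GMM skill-model parameters, which this sketch does not model.

```python
# Generic Bayesian-optimization loop: GP surrogate + expected improvement (EI).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def episode_return(theta):
    # Hypothetical return landscape standing in for real skill executions.
    return float(np.exp(-(theta - 0.6) ** 2 / 0.02))

X, y = [[0.0], [1.0]], [episode_return(0.0), episode_return(1.0)]
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
grid = np.linspace(0, 1, 200).reshape(-1, 1)

for _ in range(15):  # each iteration costs exactly one rollout
    gp.fit(np.array(X), np.array(y))
    mu, sd = gp.predict(grid, return_std=True)
    best = max(y)
    z = (mu - best) / (sd + 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    theta = float(grid[np.argmax(ei)][0])
    X.append([theta]); y.append(episode_return(theta))

print("best parameter found:", X[int(np.argmax(y))][0])  # close to 0.6
```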

BEVCar: Camera-Radar Fusion for BEV Map and Object Segmentation

Authors

Jonas Schramm, Niclas Vödisch, Kürsat Petek, B Ravi Kiran, Senthil Yogamani, Wolfram Burgard, Abhinav Valada

Journal

arXiv preprint arXiv:2403.11761

Published Date

2024/3/18

Semantic scene segmentation from a bird's-eye-view (BEV) perspective plays a crucial role in facilitating planning and decision-making for mobile robots. Although recent vision-only methods have demonstrated notable advancements in performance, they often struggle under adverse illumination conditions such as rain or nighttime. While active sensors offer a solution to this challenge, the prohibitively high cost of LiDARs remains a limiting factor. Fusing camera data with automotive radars is a less expensive alternative but has received less attention in prior research. In this work, we aim to advance this promising avenue by introducing BEVCar, a novel approach for joint BEV object and map segmentation. The core novelty of our approach lies in first learning a point-based encoding of raw radar data, which is then leveraged to efficiently initialize the lifting of image features into the BEV space. We perform extensive experiments on the nuScenes dataset and demonstrate that BEVCar outperforms the current state of the art. Moreover, we show that incorporating radar information significantly enhances robustness in challenging environmental conditions and improves segmentation performance for distant objects. To foster future research, we provide the weather split of the nuScenes dataset used in our experiments, along with our code and trained models at http://bevcar.cs.uni-freiburg.de.
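
The "point-based encoding of raw radar data" can be pictured as a PointNet-style shared MLP followed by permutation-invariant pooling, sketched below in plain NumPy with random weights. The actual BEVCar encoder and its BEV feature lifting are more involved, and the 4-D point layout here is an assumption.

```python
# PointNet-style toy encoder for radar points (x, y, doppler, rcs):
# a shared per-point MLP followed by a permutation-invariant max-pool.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.1, (4, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.1, (32, 64)), np.zeros(64)

def encode_radar(points):
    # points: (N, 4) array; the same MLP is applied to every point.
    h = np.maximum(points @ W1 + b1, 0.0)   # ReLU
    h = np.maximum(h @ W2 + b2, 0.0)
    return h.max(axis=0)                    # order-independent global descriptor

feat = encode_radar(rng.normal(size=(120, 4)))
print(feat.shape)  # (64,) radar descriptor that could seed the BEV lifting
```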

Informatics in Control, Automation and Robotics: 19th International Conference, ICINCO 2022 Lisbon, Portugal, July 14-16, 2022 Revised Selected Papers

Authors

Giuseppina Gini, Henk Nijmeijer, Wolfram Burgard, Dimitar Filev

Published Date

2023/11/29

The book presents the latest research and development efforts in the fields of control, robotics, and automation. Through ten revised and extended articles, it aims to provide an up-to-date view of the state of the art in these fields, allowing researchers, Ph.D. students, and engineers not only to update their knowledge but also to draw inspiration from the selected articles. The editors' deliberate intention to cover both the theoretical facets of these fields and their practical accomplishments and implementations offers the benefit of gathering, in a single volume, a factual and well-balanced overview of current research in these topics. A special focus on "Intelligent Robots and Control" further distinguishes this book.

Covio: Online continual learning for visual-inertial odometry

Authors

Niclas Vödisch, Daniele Cattaneo, Wolfram Burgard, Abhinav Valada

Published Date

2023

Visual odometry is a fundamental task for many applications on mobile devices and robotic platforms. Since such applications are oftentimes not limited to predefined target domains and learning-based vision systems are known to generalize poorly to unseen environments, methods for continual adaptation during inference time are of significant interest. In this work, we introduce CoVIO for online continual learning of visual-inertial odometry. CoVIO effectively adapts to new domains while mitigating catastrophic forgetting by exploiting experience replay. In particular, we propose a novel sampling strategy to maximize image diversity in a fixed-size replay buffer that targets the limited storage capacity of embedded devices. We further provide an asynchronous version that decouples the odometry estimation from the network weight update step enabling continuous inference in real time. We extensively evaluate CoVIO on various real-world datasets demonstrating that it successfully adapts to new domains while outperforming previous methods. The code of our work is publicly available at http://continual-slam.cs.uni-freiburg.de.
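
One plausible reading of the diversity-maximizing replay buffer, offered only as an illustration (the paper's sampling rule may differ): when the fixed-size buffer overflows, evict whichever stored frame is closest to its nearest neighbor in feature space, i.e. the most redundant one.

```python
# Toy diversity-keeping replay buffer with an evict-the-most-redundant rule.
import numpy as np

class DiversityBuffer:
    def __init__(self, capacity):
        self.capacity, self.feats = capacity, []

    def add(self, feat):
        self.feats.append(np.asarray(feat, float))
        if len(self.feats) > self.capacity:
            F = np.stack(self.feats)
            d = np.linalg.norm(F[:, None] - F[None, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            # Drop the frame with the smallest nearest-neighbor distance.
            self.feats.pop(int(d.min(axis=1).argmin()))

buf = DiversityBuffer(capacity=4)
for f in np.random.default_rng(0).normal(size=(10, 8)):  # fake frame features
    buf.add(f)
print(len(buf.feats))  # 4 frames, kept roughly spread out in feature space
```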

Grounding language with visual affordances over unstructured data

Authors

Oier Mees, Jessica Borja-Diaz, Wolfram Burgard

Published Date

2023/5/4

Recent works have shown that Large Language Models (LLMs) can be applied to ground natural language to a wide variety of robot skills. However, in practice, learning multi-task, language-conditioned robotic skills typically requires large-scale data collection and frequent human intervention to reset the environment or help correct the current policies. In this work, we propose a novel approach to efficiently learn general-purpose language-conditioned robot skills from unstructured, offline and reset-free data in the real world by exploiting a self-supervised visuo-lingual affordance model, which requires annotating as little as 1% of the total data with language. We evaluate our method in extensive experiments in both simulated and real-world robotic tasks, achieving state-of-the-art performance on the challenging CALVIN benchmark and learning over 25 distinct visuomotor manipulation tasks with a single policy …

Pov-slam: Probabilistic object-aware variational slam in semi-static environments

Authors

Jingxing Qian, Veronica Chatrath, James Servos, Aaron Mavrinac, Wolfram Burgard, Steven L Waslander, Angela P Schoellig

Journal

arXiv preprint arXiv:2307.00488

Published Date

2023/7/2

Simultaneous localization and mapping (SLAM) in slowly varying scenes is important for long-term robot task completion. Failing to detect scene changes may lead to inaccurate maps and, ultimately, lost robots. Classical SLAM algorithms assume static scenes, and recent works take dynamics into account, but require scene changes to be observed in consecutive frames. Semi-static scenes, wherein objects appear, disappear, or move slowly over time, are often overlooked, yet are critical for long-term operation. We propose an object-aware, factor-graph SLAM framework that tracks and reconstructs semi-static object-level changes. Our novel variational expectation-maximization strategy is used to optimize factor graphs involving a Gaussian-Uniform bimodal measurement likelihood for potentially-changing objects. We evaluate our approach alongside the state-of-the-art SLAM solutions in simulation and on our novel real-world SLAM dataset captured in a warehouse over four months. Our method improves the robustness of localization in the presence of semi-static changes, providing object-level reasoning about the scene.
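
The Gaussian-Uniform bimodal likelihood has a compact E-step: each observation of an object receives a responsibility for "unchanged" (Gaussian around the mapped pose) versus "changed" (uniform over the workspace). The 1-D toy below shows that step only, not the paper's full variational EM over factor graphs; all constants are made up.

```python
# E-step for a Gaussian-Uniform measurement mixture (1-D toy).
import numpy as np

def responsibilities(residuals, sigma, p_static, support):
    # Gaussian component: object unchanged, small residual to the mapped pose.
    g = p_static * np.exp(-0.5 * (residuals / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    # Uniform component: object changed; density spread over the workspace extent.
    u = (1 - p_static) / support
    return g / (g + u)

r = np.array([0.05, 0.1, 2.5])   # two consistent observations, one large shift
print(responsibilities(r, sigma=0.2, p_static=0.8, support=10.0))
# ~[1, 1, ~0]: the shifted object is explained by the uniform ("changed") mode
```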

Uncertainty-aware panoptic segmentation

Authors

Kshitij Sirohi, Sajad Marvi, Daniel Büscher, Wolfram Burgard

Journal

IEEE Robotics and Automation Letters

Published Date

2023/3/14

Reliable scene understanding is indispensable for modern autonomous systems. Current learning-based methods typically try to maximize their performance based on segmentation metrics that only consider the quality of the segmentation. However, for the safe operation of a system in the real world it is crucial to consider the uncertainty in the prediction as well. In this work, we introduce the novel task of uncertainty-aware panoptic segmentation, which aims to predict per-pixel semantic and instance segmentations, together with per-pixel uncertainty estimates. We define two novel metrics to facilitate its quantitative analysis, the uncertainty-aware Panoptic Quality (uPQ) and the panoptic Expected Calibration Error (pECE). We further propose the novel top-down Evidential Panoptic Segmentation Network (EvPSNet) to solve this task. Our architecture employs a simple yet effective panoptic fusion module that …
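
While the exact definitions of uPQ and pECE are in the paper, the calibration idea behind pECE can be illustrated with a standard per-pixel expected calibration error: bin pixel confidences and compare each bin's mean confidence with its empirical accuracy. The sketch below is that generic ECE, not the paper's metric.

```python
# Generic per-pixel expected calibration error (ECE) over confidence bins.
import numpy as np

def pixel_ece(conf, correct, n_bins=10):
    ece, edges = 0.0, np.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (conf > lo) & (conf <= hi)
        if m.any():
            # Weight each bin's |confidence - accuracy| gap by its pixel share.
            ece += m.mean() * abs(conf[m].mean() - correct[m].mean())
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 10_000)                  # per-pixel confidences
correct = (rng.uniform(size=10_000) < conf**2)        # a miscalibrated predictor
print(pixel_ece(conf, correct.astype(float)))         # nonzero calibration gap
```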

Few-shot panoptic segmentation with foundation models

Authors

Markus Käppeler, Kürsat Petek, Niclas Vödisch, Wolfram Burgard, Abhinav Valada

Journal

arXiv preprint arXiv:2309.10726

Published Date

2023/9/19

Current state-of-the-art methods for panoptic segmentation require an immense amount of annotated training data that is both arduous and expensive to obtain, posing a significant challenge for their widespread adoption. Concurrently, recent breakthroughs in visual representation learning have sparked a paradigm shift leading to the advent of large foundation models that can be trained with completely unlabeled images. In this work, we propose to leverage such task-agnostic image features to enable few-shot panoptic segmentation by presenting Segmenting Panoptic Information with Nearly 0 labels (SPINO). In detail, our method combines a DINOv2 backbone with lightweight network heads for semantic segmentation and boundary estimation. We show that our approach, albeit being trained with only ten annotated images, predicts high-quality pseudo-labels that can be used with any existing panoptic segmentation method. Notably, we demonstrate that SPINO achieves competitive results compared to fully supervised baselines while using less than 0.3% of the ground truth labels, paving the way for learning complex visual recognition tasks leveraging foundation models. To illustrate its general applicability, we further deploy SPINO on real-world robotic vision systems for both outdoor and indoor environments. To foster future research, we make the code and trained models publicly available at http://spino.cs.uni-freiburg.de.
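
The few-shot recipe itself is simple: freeze a strong self-supervised backbone and train only lightweight heads on the few labels. The sketch below imitates that with random prototype features standing in for DINOv2 patch embeddings and a linear classifier as the "head"; everything data-related is synthetic.

```python
# Frozen-backbone few-shot sketch: a light linear head on fixed features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_classes, dim = 5, 384
prototypes = rng.normal(size=(n_classes, dim))      # surrogate class structure

def backbone(labels):
    # Fake "frozen DINOv2" patch features: noisy samples around prototypes.
    return prototypes[labels] + 0.5 * rng.normal(size=(len(labels), dim))

few_labels = rng.integers(0, n_classes, 200)        # patches from ~10 labeled images
head = LogisticRegression(max_iter=1000).fit(backbone(few_labels), few_labels)

test_labels = rng.integers(0, n_classes, 2000)
print("pseudo-label accuracy:", head.score(backbone(test_labels), test_labels))
```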

Care3D: An Active 3D Object Detection Dataset of Real Robotic-Care Environments

Authors

Michael G Adam, Sebastian Eger, Martin Piccolrovazzi, Maged Iskandar, Joern Vogel, Alexander Dietrich, Seongjien Bien, Jon Skerlj, Abdeldjallil Naceri, Eckehard Steinbach, Alin Albu-Schaeffer, Sami Haddadin, Wolfram Burgard

Journal

arXiv preprint arXiv:2310.05600

Published Date

2023/10/9

As the labor shortage in the health sector increases, the demand for assistive robotics grows. However, the test data needed to develop those robots is scarce, especially for the application of active 3D object detection, where no real data exists at all. This short paper counters this by introducing such an annotated dataset of real environments. The captured environments represent areas which are already in use in the field of robotic health care research. We further provide ground truth data within one room for assessing SLAM algorithms running directly on a health care robot.

AutoGraph: Predicting Lane Graphs from Traffic Observations

Authors

Jannik Zürn, Ingmar Posner, Wolfram Burgard

Journal

IEEE Robotics and Automation Letters

Published Date

2023/11/9

Lane graph estimation is a long-standing problem in the context of autonomous driving. Previous works aimed at solving this problem by relying on large-scale, hand-annotated lane graphs, introducing a data bottleneck for training models to solve this task. To overcome this limitation, we propose to use the motion patterns of traffic participants as lane graph annotations. In our AutoGraph approach, we employ a pre-trained object tracker to collect the tracklets of traffic participants such as vehicles and trucks. Based on the location of these tracklets, we predict the successor lane graph from an initial position using overhead RGB images only, not requiring any human supervision. In a subsequent stage, we show how the individual successor predictions can be aggregated into a consistent lane graph. We demonstrate the efficacy of our approach on the UrbanLaneGraph dataset and perform extensive quantitative …

Learning and aggregating lane graphs for urban automated driving

Authors

Martin Büchner, Jannik Zürn, Ion-George Todoran, Abhinav Valada, Wolfram Burgard

Published Date

2023

Lane graph estimation is an essential and highly challenging task in automated driving and HD map learning. Existing methods using either onboard or aerial imagery struggle with complex lane topologies, out-of-distribution scenarios, or significant occlusions in the image space. Moreover, merging overlapping lane graphs to obtain consistent large-scale graphs remains difficult. To overcome these challenges, we propose a novel bottom-up approach to lane graph estimation from aerial imagery that aggregates multiple overlapping graphs into a single consistent graph. Due to its modular design, our method allows us to address two complementary tasks: predicting ego-respective successor lane graphs from arbitrary vehicle positions using a graph neural network and aggregating these predictions into a consistent global lane graph. Extensive experiments on a large-scale lane graph dataset demonstrate that our approach yields highly accurate lane graphs, even in regions with severe occlusions. The presented approach to graph aggregation proves to eliminate inconsistent predictions while increasing the overall graph quality. We make our large-scale urban lane graph dataset and code publicly available at http://urbanlanegraph.cs.uni-freiburg.de.
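
A bare-bones version of the aggregation step, under the assumption that merging reduces to snapping nodes of overlapping successor graphs within a merge radius while keeping edge structure; real aggregation must also reconcile conflicting edges.

```python
# Merge overlapping directed lane-graph predictions by snapping nearby nodes.
import numpy as np
import networkx as nx

def aggregate(graphs, radius=1.0):
    merged, anchors = nx.DiGraph(), []   # anchors: representative node positions

    def key_for(pos):
        for i, a in enumerate(anchors):
            if np.linalg.norm(np.asarray(pos) - a) < radius:
                return i                 # snap to an existing merged node
        anchors.append(np.asarray(pos, float))
        return len(anchors) - 1

    for g in graphs:
        ids = {n: key_for(g.nodes[n]["pos"]) for n in g}
        for u, v in g.edges:
            merged.add_edge(ids[u], ids[v])
    return merged

g1, g2 = nx.DiGraph(), nx.DiGraph()
g1.add_node(0, pos=(0, 0)); g1.add_node(1, pos=(5, 0)); g1.add_edge(0, 1)
g2.add_node(0, pos=(5.3, 0.2)); g2.add_node(1, pos=(10, 0)); g2.add_edge(0, 1)
print(aggregate([g1, g2]).edges)  # two edges through one shared, merged node
```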

Audio visual language maps for robot navigation

Authors

Chenguang Huang, Oier Mees, Andy Zeng, Wolfram Burgard

Journal

arXiv preprint arXiv:2303.07522

Published Date

2023/3/13

While interacting in the world is a multi-sensory experience, many robots continue to predominantly rely on visual perception to map and navigate in their environments. In this work, we propose Audio-Visual-Language Maps (AVLMaps), a unified 3D spatial map representation for storing cross-modal information from audio, visual, and language cues. AVLMaps integrate the open-vocabulary capabilities of multimodal foundation models pre-trained on Internet-scale data by fusing their features into a centralized 3D voxel grid. In the context of navigation, we show that AVLMaps enable robot systems to index goals in the map based on multimodal queries, e.g., textual descriptions, images, or audio snippets of landmarks. In particular, the addition of audio information enables robots to more reliably disambiguate goal locations. Extensive experiments in simulation show that AVLMaps enable zero-shot multimodal goal navigation from multimodal prompts and provide 50% better recall in ambiguous scenarios. These capabilities extend to mobile robots in the real world - navigating to landmarks referring to visual, audio, and spatial concepts. Videos and code are available at: https://avlmaps.github.io.
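
Conceptually, goal indexing reduces to a similarity lookup over a feature-carrying voxel grid: embed the query (text, image, or audio snippet) into the shared space and take the best-matching voxel. The sketch below stubs out the encoders with random vectors; AVLMaps fuses real foundation-model features.

```python
# Multimodal goal lookup in a voxel grid of fused features (encoders stubbed).
import numpy as np

rng = np.random.default_rng(0)
voxel_feats = rng.normal(size=(20, 20, 20, 32))           # fused per-voxel features
voxel_feats /= np.linalg.norm(voxel_feats, axis=-1, keepdims=True)

def localize(query_feat):
    # Cosine similarity of every voxel feature against the query embedding.
    sim = voxel_feats @ (query_feat / np.linalg.norm(query_feat))
    return np.unravel_index(np.argmax(sim), sim.shape)    # best-matching voxel

goal = rng.normal(size=32)   # stand-in for an embedded query, e.g. "door bell"
print("navigate toward voxel:", localize(goal))
```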

Method and system for behavioral cloning of autonomous driving policies for safe autonomous agents

Published Date

2023/4/18

A method for behavior cloned vehicle trajectory planning is described. The method includes perceiving vehicles proximate an ego vehicle in a driving environment, including a scalar confidence value of each perceived vehicle. The method also includes generating a bird's-eye-view (BEV) grid showing the ego vehicle and each perceived vehicle based on each of the scalar confidence values. The method further includes ignoring at least one of the perceived vehicles when the scalar confidence value of the at least one of the perceived vehicles is less than a predetermined value. The method also includes selecting an ego vehicle trajectory based on a cloned expert vehicle behavior policy according to remaining perceived vehicles.
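
The thresholding step described above fits in a few lines; the confidence value below is hypothetical, since the patent only specifies "a predetermined value".

```python
# Drop low-confidence perceived vehicles before trajectory selection.
MIN_CONFIDENCE = 0.4  # hypothetical; the patent says only "predetermined value"

def filter_perceptions(perceived):
    # perceived: list of (vehicle_id, scalar_confidence) pairs from the BEV grid
    return [(vid, c) for vid, c in perceived if c >= MIN_CONFIDENCE]

print(filter_perceptions([("car_1", 0.9), ("car_2", 0.15), ("truck_1", 0.55)]))
```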

Geometric Regularity with Robot Intrinsic Symmetry in Reinforcement Learning

Authors

Shengchao Yan, Yuan Zhang, Baohe Zhang, Joschka Boedecker, Wolfram Burgard

Journal

arXiv preprint arXiv:2306.16316

Published Date

2023/6/28

Geometric regularity, which leverages data symmetry, has been successfully incorporated into deep learning architectures such as CNNs, RNNs, GNNs, and Transformers. While this concept has been widely applied in robotics to address the curse of dimensionality when learning from high-dimensional data, the inherent reflectional and rotational symmetry of robot structures has not been adequately explored. Drawing inspiration from cooperative multi-agent reinforcement learning, we introduce novel network structures for deep learning algorithms that explicitly capture this geometric regularity. Moreover, we investigate the relationship between the geometric prior and the concept of Parameter Sharing in multi-agent reinforcement learning. Through experiments conducted on various challenging continuous control tasks, we demonstrate the significant potential of the proposed geometric regularity in enhancing robot learning capabilities.
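
One generic way to exploit reflectional symmetry, shown purely as an illustration (the paper designs dedicated network structures instead): symmetrize an arbitrary policy by averaging its action with the mirrored action taken on the mirrored state, which makes the result equivariant by construction.

```python
# Policy symmetrization: pi_sym(s) = 0.5 * (pi(s) + mirror(pi(mirror(s)))).
import numpy as np

def mirror_state(s):
    return s * np.array([1, -1])   # toy: negate the lateral state component

def mirror_action(a):
    return -a                      # toy: the mirrored action is the negation

def symmetric_policy(policy, s):
    return 0.5 * (policy(s) + mirror_action(policy(mirror_state(s))))

policy = lambda s: np.tanh(s[0] + 0.3 * s[1] + 0.7)   # arbitrary asymmetric policy
s = np.array([0.2, 0.5])
a1 = symmetric_policy(policy, s)
a2 = symmetric_policy(policy, mirror_state(s))
print(a1, a2, "equivariance holds:", np.isclose(a1, -a2))
```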


Wolfram Burgard FAQs

What is Wolfram Burgard's h-index at Albert-Ludwigs-Universität Freiburg?

Wolfram Burgard's h-index is 132 in total and 82 since 2020.

What are Wolfram Burgard's top articles?

The top articles of Wolfram Burgard at Albert-Ludwigs-Universität Freiburg include:

Automatic Target-Less Camera-LiDAR Calibration From Motion and Deep Point Correspondences

Evaluation of a Smart Mobile Robotic System for Industrial Plant Inspection and Supervision

Centergrasp: Object-aware implicit representation learning for simultaneous shape reconstruction and 6-dof grasp estimation

uPLAM: Robust Panoptic Localization and Mapping Leveraging Perception Uncertainties

Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation

Language, affordance and physics in robot cognition and intelligent systems

Bayesian Optimization for Sample-Efficient Policy Improvement in Robotic Manipulation

BEVCar: Camera-Radar Fusion for BEV Map and Object Segmentation

...


What are Wolfram Burgard's research interests?

The research interests of Wolfram Burgard are: Robotics, Artificial Intelligence, AI, Machine Learning, and Computer Vision.

What is Wolfram Burgard's total number of citations?

Wolfram Burgard has 111,331 citations in total.

What are the co-authors of Wolfram Burgard?

The co-authors of Wolfram Burgard are Dieter Fox, Howie Choset, Cyrill Stachniss, Frank Dellaert, Seth Hutchinson, and Kevin Lynch.

Co-Authors

Dieter Fox (University of Washington), H-index: 128
Howie Choset (Carnegie Mellon University), H-index: 78
Cyrill Stachniss (Rheinische Friedrich-Wilhelms-Universität Bonn), H-index: 76
Frank Dellaert (Georgia Institute of Technology), H-index: 75
Seth Hutchinson (Georgia Institute of Technology), H-index: 52
Kevin Lynch (Northwestern University), H-index: 50
