Author: stachnis

2018-12: Code Available: Surfel-based Mapping using 3D Laser Range Data by Jens Behley

SuMa – Surfel-based Mapping using 3D Laser Range Data

SuMa is available on GitHub

SuMa performs mapping of 3D laser range data from a rotating laser range scanner, e.g., the Velodyne HDL-64E. To represent the map, we use surfels, which enable fast rendering of the map for point-to-plane ICP and loop closure detection. If you use our implementation in your academic work, please cite the corresponding paper listed below.
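
To illustrate the point-to-plane objective used during scan registration, here is a minimal NumPy sketch of a single linearized alignment step. It is not taken from the SuMa implementation: correspondence search, the surfel map, and the projective data association are omitted, and the small-angle Gauss-Newton update is an assumption of the sketch.

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane ICP step (small-angle approximation).

    src:     (N, 3) source points, one per correspondence
    dst:     (N, 3) corresponding destination points
    normals: (N, 3) unit normals at the destination points
    Returns a 4x4 incremental transform reducing the sum of squared
    point-to-plane residuals n^T (p - q).
    """
    r = np.sum((src - dst) * normals, axis=1)          # residuals, shape (N,)
    J = np.hstack([np.cross(src, normals), normals])   # Jacobian, shape (N, 6)
    delta = np.linalg.solve(J.T @ J, -J.T @ r)         # Gauss-Newton update
    wx, wy, wz, tx, ty, tz = delta

    T = np.eye(4)
    T[:3, :3] = np.array([[1.0, -wz,  wy],             # I + skew(w), first-order rotation
                          [ wz, 1.0, -wx],
                          [-wy,  wx, 1.0]])
    T[:3, 3] = [tx, ty, tz]
    return T
```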

This code is related to the following publications:
J. Behley, C. Stachniss. Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments, Proc. of Robotics: Science and Systems (RSS), 2018 (pdf).

2018-11: Igor Bogoslavskyi defended his PhD Thesis

Igor Bogoslavskyi successfully defended his PhD thesis entitled “Robot mapping and navigation in real-world environments” at the Photogrammetry & Robotics Lab of the University of Bonn.

Download the PhD thesis

Robots can perform various tasks, such as mapping hazardous sites, taking part in search-and-rescue scenarios, or delivering goods and people. Robots operating in the real world face many challenges on the way to completing their mission. Essential capabilities required for the operation of such robots are mapping, localization, and navigation. Solving all of these tasks robustly presents a substantial difficulty, as these components are usually interconnected: a robot that starts without any knowledge about the environment must simultaneously build a map, localize itself in it, analyze its surroundings, and plan a path to efficiently explore an unknown environment. In addition to the interconnections between these tasks, they highly depend on the sensors used by the robot and on the type of environment in which the robot operates. For example, an RGB camera can be used in an outdoor scene for computing visual odometry or detecting dynamic objects, but becomes less useful in an environment that does not provide enough light for cameras to operate. The software that controls the behavior of the robot must seamlessly process all the data coming from different sensors. This often leads to systems that are tailored to a particular robot and a particular set of sensors. In this thesis, we challenge this concept by developing and implementing methods for a typical robot navigation pipeline that work seamlessly with different types of sensors, both in indoor and outdoor environments. With the emergence of new range-sensing RGBD and LiDAR sensors, there is an opportunity to build a single system that operates robustly in indoor and outdoor environments equally well and thus extends the application areas of mobile robots.
The techniques presented in this thesis aim to be usable with both RGBD and LiDAR sensors without adaptations for individual sensor models by using a range image representation, and aim to provide methods for navigation and scene interpretation in both static and dynamic environments. For a static world, we present a number of approaches that address the core components of a typical robot navigation pipeline. At the core of building a consistent map of the environment with a mobile robot lies point cloud matching. To this end, we present a method for photometric point cloud matching that treats RGBD and LiDAR sensors in a uniform fashion and is able to accurately register point clouds at the frame rate of the sensor. This method serves as a building block for the rest of the mapping pipeline. In addition to the matching algorithm, we present a method for traversability analysis of the currently observed terrain in order to guide an autonomous robot to the safe parts of the surrounding environment. A source of danger when navigating difficult-to-access sites is the fact that the robot may fail in building a correct map of the environment. This dramatically impacts the ability of an autonomous robot to navigate towards its goal in a robust way; it is therefore important for the robot to be able to detect these situations and to return home without relying on any kind of map. To address this challenge, we present a method for analyzing the quality of the map that the robot has built to date, and for safely returning the robot to the starting point in case the map is found to be in an inconsistent state.
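
To illustrate the range image representation mentioned above, here is a minimal NumPy sketch of a spherical projection of a LiDAR point cloud. It is not the thesis implementation, and the field-of-view and image-size values are rough, sensor-dependent placeholders; for an RGBD camera, the same image layout is obtained directly from the depth image via the pinhole model.

```python
import numpy as np

def to_range_image(points, fov_up_deg=3.0, fov_down_deg=-25.0, width=900, height=64):
    """Project an (N, 3) point cloud into a spherical range image of shape (height, width).

    The vertical field of view and the image size are sensor-dependent;
    the defaults are only rough placeholders for a 64-beam spinning LiDAR.
    """
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                      # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1, 1))  # elevation

    u = ((yaw + np.pi) / (2 * np.pi) * width).astype(int) % width
    v = np.clip(((fov_up - pitch) / fov * height).astype(int), 0, height - 1)

    image = np.full((height, width), -1.0)   # -1 marks pixels without a measurement
    image[v, u] = r                          # if several points hit a pixel, the last one wins
    return image
```
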
The scenes in dynamic environments are vastly different from the ones experienced in static environments. In a dynamic setting, objects can be moving, which makes static traversability estimates insufficient. With the approaches developed in this thesis, we aim at identifying distinct objects and tracking them to aid navigation and scene understanding. We target these challenges by providing a method for clustering a scene taken with a LiDAR scanner and a measure of similarity between clustered objects that can aid tracking performance.
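
As a toy illustration of clustering on a range image, the following sketch grows connected components over neighboring pixels. The simple range-difference test used here is only a stand-in for the more robust criterion developed in the thesis, and the threshold value is arbitrary.

```python
from collections import deque
import numpy as np

def cluster_range_image(image, max_diff=0.3):
    """Toy connected-component clustering on a range image.

    Neighboring pixels join the same cluster if their ranges differ by
    less than max_diff (in meters); pixels with value -1 are empty.
    """
    height, width = image.shape
    labels = np.zeros((height, width), dtype=int)   # 0 means "not yet labeled"
    current = 0
    for sv in range(height):
        for su in range(width):
            if labels[sv, su] != 0 or image[sv, su] < 0:
                continue
            current += 1                            # start a new cluster
            labels[sv, su] = current
            queue = deque([(sv, su)])
            while queue:                            # breadth-first region growing
                v, u = queue.popleft()
                for dv, du in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nv, nu = v + dv, (u + du) % width   # wrap around in azimuth
                    if not 0 <= nv < height:
                        continue
                    if labels[nv, nu] != 0 or image[nv, nu] < 0:
                        continue
                    if abs(image[nv, nu] - image[v, u]) < max_diff:
                        labels[nv, nu] = current
                        queue.append((nv, nu))
    return labels
```
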
All methods presented in this thesis are capable of supporting real-time robot operation, rely on RGBD or LiDAR sensors, and have been tested on real robots in real-world environments and on real-world datasets. All approaches have been published in peer-reviewed conference papers and journal articles. In addition, most of the presented contributions have been released publicly as open-source software.

2018-10: New Dataset Release: Agricultural Sugar Beet Datasets with Annotations

We released 12,340 labeled images containing pixel-wise annotations of sugar beets and weeds. The labels belong to the dataset of our previously published IJRR paper, “Agricultural robot dataset for plant classification, localization, and mapping on sugar beet fields.” On average, we recorded data three times per week over six weeks within the season, which captures the period of interest for weed control, starting at the emergence of the plants. The robot carried a 4-channel multi-spectral camera.

Link to the dataset
www.ipb.uni-bonn.de/data/sugarbeets2016/

Link to the new labels:
www.ipb.uni-bonn.de/datasets_IJRR2017/annotations/cropweed/
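
As a hedged example of how such pixel-wise annotations can be inspected, the sketch below loads one image and its label mask with OpenCV and counts the annotation values. The file names are hypothetical, and the mapping from label values to the sugar beet and weed classes is defined in the dataset documentation rather than assumed here.

```python
import cv2
import numpy as np

# Hypothetical file names -- adjust them to the directory layout described
# on the dataset pages linked above.
image = cv2.imread("images/frame_0001.png", cv2.IMREAD_UNCHANGED)        # camera image
label = cv2.imread("annotations/frame_0001.png", cv2.IMREAD_UNCHANGED)   # pixel-wise mask

# Flatten the mask to one row per pixel (works for single-channel and
# color-coded masks alike) and count how often each annotation value occurs.
pixels = label.reshape(-1, label.shape[2]) if label.ndim == 3 else label.reshape(-1, 1)
values, counts = np.unique(pixels, axis=0, return_counts=True)
for value, count in zip(values, counts):
    print(value.tolist(), int(count))
```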

2018-09-27: Cluster of Excellence PhenoRob got Accepted

Our Cluster of Excellence proposal “PhenoRob – Robotics and Phenotyping for Sustainable Crop Production” has been accepted today.

One of the greatest challenges for humanity is to produce sufficient food, feed, fiber, and fuel for an ever-growing world population while simultaneously reducing the environmental footprint of agricultural production. Arable land is limited, and the input of agro-chemicals needs to be reduced to curb environmental pollution and halt the decline in biodiversity. Climate change poses additional constraints on crop farming. Achieving sustainable crop production with limited resources is, thus, a task of immense proportions.

Our main hypothesis is that a major shift toward sustainable crop production can be achieved via two approaches: (1) multi-scale monitoring of plants and their environment using autonomous robots with automated and individualized intervention, together with big data analytics combined with machine learning to improve our understanding of the relation between input and output parameters of crop production, and (2) assessing, modeling, and optimizing the implications of the developed technical innovations in a systemic manner.

To realize our vision, we will take a technology-driven approach to address the challenging scientific objectives. We foresee novel ways of growing crops and managing fields, and aim at reducing the environmental footprint of crop production, maintaining the quality of soil and arable land, and analyzing the best routes to improve the adoption of technology.

The novel approach of PhenoRob is characterized by the integration of robotics, digitalization, and machine learning on one hand, and modern phenotyping, modeling, and crop production on the other. First, we will systematically monitor all essential aspects of crop production using sensor networks as well as ground and aerial robots. This is expected to provide detailed spatially and temporally aligned information at the level of individual plants, nutrient and disease status, soil information as well as ecosystem parameters, such as vegetation diversity. This will enable a more targeted management of inputs (genetic resources, crop protection, fertilization) for optimizing outputs (yield, growth, environmental impact). Second, we will develop novel technologies to enable real-time control of weeds and selective spraying and fertilization of individual plants in field stands. This will help reduce the environmental footprint by reducing chemical input. Third, machine learning applied to crop data will improve our understanding and modeling of plant growth and resource efficiencies and will further assist in the identification of correlations. Furthermore, we will develop integrated multi-scale models for the soil-crop-atmosphere system. These technologies and the gained knowledge will change crop production on all levels. Fourth, in addition to the impact on management decisions at the farm level, we will investigate the requirements for technology adoption as well as socioeconomic and environmental impact of the innovations resulting from upscaling.

2018-07: GPU Grant from NVIDIA

We gratefully acknowledge the support of NVIDIA, who provided us with a GPU grant to support our research on semantic segmentation and object instance detection for scene understanding and agricultural robotics.

2018-03: Code Available: Bonnet – Tensorflow Convolutional Semantic Segmentation Pipeline by Andres Milioto and Cyrill Stachniss

Bonnet: Tensorflow Convolutional Semantic Segmentation pipeline by Andres Milioto and Cyrill Stachniss

Bonnet is available on GitHub

Bonnet provides a framework to easily add architectures and datasets in order to train and deploy CNNs on a robot. It contains a full training pipeline in Python using TensorFlow and OpenCV, and it also provides C++ apps to deploy a frozen protobuf in ROS and standalone. The C++ library is designed to allow adding other backends (such as TensorRT and MvNCS), but only TensorFlow and TensorRT are implemented for now. For now, we will keep it this way because we are mostly interested in deployment for the Jetson and Drive platforms, but if you have a specific need, we accept pull requests!
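
As a rough sketch of what deploying a frozen protobuf looks like with the TensorFlow 1.x Python API (this is not Bonnet's actual interface; the graph path and tensor names below are placeholders):

```python
import cv2
import numpy as np
import tensorflow as tf  # TensorFlow 1.x API

# Placeholder names -- the actual frozen graph and tensor names are defined
# by the Bonnet export scripts, not assumed here.
GRAPH_PB = "frozen_model.pb"
INPUT_TENSOR = "input:0"
OUTPUT_TENSOR = "predictions:0"

graph_def = tf.GraphDef()
with tf.gfile.GFile(GRAPH_PB, "rb") as f:
    graph_def.ParseFromString(f.read())           # load the serialized graph

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")       # import nodes without a name prefix

image = cv2.imread("example.png").astype(np.float32)[np.newaxis, ...]  # add batch dimension

with tf.Session(graph=graph) as sess:
    pred = sess.run(OUTPUT_TENSOR, feed_dict={INPUT_TENSOR: image})
print(pred.shape)   # per-pixel class predictions
```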

The included networks are based on many other architectures (see below), but are not an exact copy of any of them. As shown in the videos, they run very fast on both GPU and CPU, and they are designed with performance in mind, at the cost of a slight loss in accuracy. Feel free to use them as a model to implement your own architecture.

All scripts have been tested on the following configurations:

  • x86 Ubuntu 16.04 with an NVIDIA GeForce 940MX GPU (nvidia-384, CUDA8, CUDNN6, TF 1.4.1, TensorRT3)
  • x86 Ubuntu 16.04 with an NVIDIA GTX1080Ti GPU (nvidia-375, CUDA8, CUDNN6, TF 1.4.1, TensorRT3)
  • x86 Ubuntu 16.04 and 14.04 with no GPU (TF 1.4.1, running on CPU in NHWC mode, no TensorRT support)
  • Jetson TX2 (full Jetpack 3.2)

We also provide a Dockerfile to make it easy to run without worrying about the dependencies, which is based on the official nvidia/cuda image containing CUDA 9 and cuDNN 7.

This code is related to the following publications:

A. Milioto and C. Stachniss, “Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics using CNNs”, arXiv preprint arXiv:1802.08960, 2018.

2018-02: Code Available: Fast Change Detection by Emanuele Palazzolo and Cyrill Stachniss

Fast Change Detection by Emanuele Palazzolo and Cyrill Stachniss

Fast Change Detection is available on GitHub

The program identifies, in real time, changes with respect to a 3D model from a sequence of images. The idea is to first detect inconsistencies between pairs of images by reprojecting one image onto another via the 3D model. Ambiguities about possible inconsistencies resulting from this process are then resolved by combining multiple images. Finally, the 3D location of the change is estimated by projecting these inconsistencies into 3D.
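
To illustrate the reprojection step in the description above, here is a minimal NumPy sketch that warps one image into another camera's view using known geometry. It is not the released implementation: the pinhole intrinsics K, the relative pose T_ab, and a dense depth map rendered from the 3D model are all assumptions of the sketch. Comparing the warped image against the actually observed one then highlights candidate inconsistencies.

```python
import numpy as np

def reproject(image_b, depth_a, K, T_ab):
    """Warp image B into camera A's view.

    depth_a: dense depth of the 3D model rendered from A's viewpoint, shape (H, W)
    K:       3x3 pinhole intrinsics shared by both cameras (assumption)
    T_ab:    4x4 pose taking points from A's camera frame to B's camera frame
    """
    h, w = depth_a.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N homogeneous pixels

    # back-project A's pixels to 3D using the model depth, then move them into B's frame
    pts_a = (np.linalg.inv(K) @ pix) * depth_a.reshape(1, -1)
    pts_b = T_ab[:3, :3] @ pts_a + T_ab[:3, 3:4]

    # project into B and sample with nearest neighbor
    proj = K @ pts_b
    with np.errstate(divide="ignore", invalid="ignore"):
        ub = np.round(proj[0] / proj[2]).astype(int)
        vb = np.round(proj[1] / proj[2]).astype(int)
    valid = (proj[2] > 0) & (ub >= 0) & (ub < w) & (vb >= 0) & (vb < h)

    warped = np.zeros((h, w), dtype=image_b.dtype)
    warped.reshape(-1)[valid] = image_b[vb[valid], ub[valid]]
    return warped
```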

This code is related to the following publications:
E. Palazzolo and C. Stachniss, “Fast Image-Based Geometric Change Detection Given a 3D Model”, in Proceedings of the IEEE Int. Conf. on Robotics and Automation (ICRA), 2018.