Author: stachnis

2021-10: Rodrigo Marcuzzi wins Challenge at ICCV’21 Workshops on 4D LiDAR Panoptic Segmentation

The work “Contrastive Instance Association for 4D Panoptic Segmentation” by Rodrigo Marcuzzi et al. won the 4D LiDAR Panoptic Segmentation Challenge of the 6th ICCV Workshop on Benchmarking Multi-Target Tracking. The challenge was based on the SemanticKITTI dataset, which is densely labeled in both the spatial and the temporal domain. The task was to assign a semantic label and a unique instance label to every 3D LiDAR point.

2021-07: Faculty Award 2021 for Geodesy for Nived Chebrolu

Nived Chebrolu won the Faculty Award 2021 for Geodesy for the paper “Adaptive Robust Kernels for Non-Linear Least Squares Problems.”

N. Chebrolu, T. Läbe, O. Vysotska, J. Behley, and C. Stachniss, “Adaptive Robust Kernels for Non-Linear Least Squares Problems,” IEEE Robotics and Automation Letters (RA-L), vol. 6, pp. 2240-2247, 2021. doi:10.1109/LRA.2021.3061331. PDF: https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/chebrolu2021ral.pdf
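
For context, the paper builds on a generalized family of robust kernels (following Barron’s formulation) in which the shape of the loss itself becomes a parameter. As a rough sketch in our own notation, not copied from the paper, the squared residual r^2 of standard least squares is replaced by

\rho(r; \alpha, c) = \frac{|\alpha - 2|}{\alpha} \left( \left( \frac{(r/c)^2}{|\alpha - 2|} + 1 \right)^{\alpha/2} - 1 \right)

where c is a scale parameter and alpha controls the shape of the kernel: in the limit, alpha = 2 yields the standard quadratic loss, alpha = 0 the Cauchy kernel, alpha = -2 Geman-McClure, and alpha -> -infinity the Welsch kernel. Estimating alpha jointly with the state lets the optimizer adapt its robustness to the outlier distribution at hand instead of committing to a fixed kernel, which is the adaptivity the paper’s title refers to.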

2021-07: Wolfgang Förstner and Bernhard Wrobel receive the Karl-Kraus-Medal from ISPRS

The winner of the ISPRS Karl-Kraus-Medal for 2020 is the textbook Photogrammetric Computer Vision – Statistics, Geometry, Orientation and Reconstruction by Prof. Dr. Wolfgang Förstner (Emeritus Professor, University of Bonn, Germany) and Prof. Dr. Bernhard P. Wrobel (Retired Professor, Technical University of Darmstadt, Germany). The Photogrammetric Computer Vision textbook is published by Springer International Publishing, Switzerland.

2021-05: Nived Chebrolu Defended His PhD Thesis

Summary

A critical challenge that we face today is to meet the rising demand for food, feed, fiber, and fuel from an ever-growing world population. We must meet this demand within the limited arable land available to us and do so in the aggravated situation caused by climate change. Moreover, present-day levels of agro-chemical usage are unsustainable: they lead to large-scale environmental pollution and adverse effects on the biodiversity of our planet. A promising way to meet this challenge is to intensify production sustainably, using existing resources and novel technology in combination. Robotic systems deployed in agricultural fields are seen as a potential solution to achieve this goal. These systems can increase productivity by providing high-quality, site-specific treatment at the level of an individual plant through continuous monitoring and timely intervention in the field, while drastically reducing or eliminating the use of agro-chemicals. The development of such automated robotic systems is envisioned to play an essential role in the future of agricultural plant production.
Agricultural robots are ideal platforms to monitor the plants in the field with a high spatial and temporal frequency and to provide intervention capability whenever an action is required. In this thesis, we focus on the fundamental task of registration, which forms the core of such robotic systems. The goal of registration is to bring two sets of measurements into a common coordinate frame, which forms the basis for associating data separated in space and time. It is a core building block for solving several state estimation problems in robotics, geodesy, and photogrammetry. As a result, the registration of sensor data has been extensively studied in the literature of multiple disciplines. However, existing techniques fail to perform reliably in the agricultural domain due to a unique set of challenges. These challenges range from the large change in the visual appearance of the field over time, to the structural change of individual plants as they grow over the crop season, to the vastly differing viewpoints from which data is captured by the multiple platforms of an aerial-ground robotic system.
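To make the notion of registration concrete, the following is a minimal sketch of the classical building block underneath many registration pipelines: least-squares rigid alignment of two point sets with known correspondences (the Kabsch/Procrustes solution). This is a generic textbook illustration, not the technique developed in the thesis; the function name align_rigid and its interface are our own example, assuming NumPy.

import numpy as np

def align_rigid(P, Q):
    # Least-squares rigid alignment (Kabsch/Procrustes):
    # find rotation R and translation t such that Q ≈ R @ P + t,
    # given two (N, 3) arrays of corresponding 3D points.
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

In practice, the correspondences are unknown, so pipelines such as ICP alternate between matching closest points and re-solving this alignment; the thesis addresses the harder agricultural cases where appearance and structure change over time, which such a vanilla scheme cannot handle.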
Our main contribution in this thesis is a set of novel registration techniques that explicitly consider the challenges brought forward by the spatio-temporal nature of the task in agricultural applications. We show that our registration techniques perform reliably in challenging conditions and demonstrate their advantages over state-of-the-art registration approaches. We apply these registration techniques to the long-term monitoring of crops in the field, to the accurate localization of ground robots navigating crop fields, and to automated phenotyping that analyzes the growth of individual plant parts from high-fidelity point cloud data. We also study the effect of outliers in the data on registration and state estimation problems and propose a general solution for robust state estimation in the presence of the different outlier distributions that occur in these tasks. The registration techniques developed in this thesis contribute to the robust operation of autonomous robots in crop fields over long periods of time and form the backbone of applications interested in tracking the spatio-temporal traits of plants.
In sum, this thesis makes several contributions in the context of spatio-temporal registration in the agricultural domain of plant production. Compared to the current state-of-the-art, the approaches presented in this thesis allow for a more robust and longer-term registration of data captured by robots in the fields and effectively handle the challenges resulting from plant growth. All approaches described in this thesis have been published in peer-reviewed conference papers and journal articles. In addition to that, we have released most of the techniques developed in this thesis as open-source software and also published three challenging datasets for long-term spatio-temporal registration tasks.

2021-01: Andres Milioto Defended His PhD Thesis

Summary

Over the last few years, robots have been slowly making their way into our everyday lives. From robotic vacuum cleaners already picking up after us in our homes to the fleets of robo-taxis and self-driving vehicles on the horizon, all of these robots are designed to operate in conjunction with us and in environments designed for us, humans. This means that, unlike traditional robots working in industrial settings where the world is designed around them, these “mobile robots” need to acquire an accurate understanding of their surroundings in order to operate safely and reliably. We call this type of knowledge about the surroundings of the robot “semantic scene understanding.” This understanding serves as the first layer of interpretation of the robot’s raw sensor data and provides other tasks with useful and complete information about the status of the surroundings. These tasks include the avoidance of obstacles, the localization of the robot in the world, the mapping of an unknown environment for later use, the planning of trajectories, and the manipulation of objects in the scene, among others.

In this thesis, we focus on semantic scene understanding for mobile robots. As their mobility usually requires these robots to be powered by batteries, the key characteristic they require from perception algorithms is to be computationally as well as energy efficient. Efficient means that the approach can exploit all the information available to it and run fast enough for the robot’s online operation, on both power- and compute-constrained embedded computers. We approach this goal through three different avenues. First, in all of the algorithms presented in this thesis, we exploit background knowledge about the task we are trying to solve to make our algorithms fast to execute and, at the same time, more accurate.
Second, we instruct the approaches to exploit peculiarities of the particular sensor used in each application in order to make the processing more efficient.
Finally, we present a software infrastructure that serves as an example of how to implement such scene understanding approaches on real robots, exploiting commercially available hardware accelerators for the task and allowing for scalability. Because of this, every method presented in this thesis is capable of running faster than the frame rate of the sensor, whether using cameras or laser sensors.

All parts of this thesis have been published in proceedings of international conferences or as journal articles, undergoing a thorough peer-reviewing process. Furthermore, the work presented in this thesis resulted in the publication of a large-scale dataset and benchmark for the community to develop, share, and compare their semantic scene understanding approaches, as well as four open-source libraries for this task, using multiple sensor modalities.

2021-01: Philipp Lottes Defended His PhD Thesis

Summary

Due to a continually growing world population, the demand for food and energy keeps rising. As a central source of food, feed, and energy, crop production is therefore called upon to produce higher yields. To achieve high crop yields, weed control, fertilization, and disease control are essential tasks. Nowadays, these tasks are performed by uniformly applying large amounts of agrochemicals, such as herbicides and fertilizers, to our fields. At the same time, we need to reduce the ecological footprint of agricultural production to achieve the sustainability required to protect our environment for future generations.

Autonomous agricultural field robots offer the potential for a drastic reduction of applied agrochemicals by selectively treating individual plants and weeds in the field. For selective weeding or fertilizing, a robot can be equipped with different actuators such as selective sprayers, mechanical tools, or even lasers. A prerequisite for selective and plant-specific treatment is that the robots can distinguish and locate the plants and weeds in the field. With this information, the robots can decide where and when to trigger the actuators to perform the treatment selectively. In contrast to ground robots, unmanned aerial vehicles (UAVs) can monitor farmland on a larger scale without interacting with the soil. In combination with a vision-based system for the classification of plants, UAVs offer excellent capabilities for retrieving the status of a field on a per-plant basis in a short amount of time.

In this thesis, we develop novel vision-based plant classification systems that enable agricultural ground robots to perform online in-field interventions and aerial robots to accurately monitor the plantation. We investigate traditional and more modern machine-learning approaches based on random forests and fully convolutional neural networks to perform the necessary classification of crop plants and weeds. We propose a coupled plant and stem classification system that jointly classifies crop plants and weeds, further distinguishes herbs from grasses, and additionally provides the precise stem locations at the same time. Based on the classification output, the robot can select the most effective treatment for the current situation in the field.

A major challenge for vision-based classification systems is that agricultural robots need to operate in different field environments under drastic changes in the visual appearance of the plants, weeds, and soil. For real-world applications, plant classification systems not only need to provide high performance in known fields but also need to be robust to new and changing field conditions. This thesis aims at improving the generalization performance of plant classification systems for robots that operate under different environmental conditions. We propose two vision-based classification systems that, in addition to visual information, exploit the spatial arrangement of the plants in the case of row crops. Such geometric information is typically similar within and across fields and thus less dependent on the visual appearance of the plants. Our approaches exploiting the spatial arrangement of plants provide superior generalization capabilities to changing field conditions compared to state-of-the-art vision-based classifiers.
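As a purely hypothetical illustration of such a geometric cue (not the classifiers developed in the thesis), the dominant row direction of a field can be estimated from detected plant positions alone, for instance with a principal-component fit; the function below and its interface are our own example, assuming NumPy.

import numpy as np

def row_direction(stem_xy):
    # Estimate the dominant crop-row direction from (N, 2) plant/stem
    # positions via a principal-component fit: row crops form a strongly
    # anisotropic point pattern, so the principal axis of the positions
    # approximates the row orientation.
    X = stem_xy - stem_xy.mean(axis=0)
    cov = X.T @ X / len(X)                    # 2x2 covariance of positions
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    return eigvecs[:, np.argmax(eigvals)]     # unit vector along the rows

A cue of this kind is appearance-independent: offsets perpendicular to the estimated row axis can, for example, help separate plants lying on a row (likely crops) from plants growing between rows (likely weeds).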

A further challenge for scalable development of robust plant classification systems is their requirement for large and diverse training datasets. Typically, a classifier needs to be adapted with additional labeled data representing the conditions of the new field environment. However, this procedure comes at the cost of a continuous effort to label new data. We present a semi-supervised online learning approach that combines purely visual classification with geometric classification exploiting the plant arrangement. We show that with only a one-minute labeling effort, our approach provides a classification performance on the same level as classically re-trained classifiers.

We conduct a comprehensive experimental evaluation of the classification systems under real-world conditions using a wide range of field datasets. We collected a large and diverse database in various field environments located in central Europe, consisting of around 26,500 labeled images acquired by different field and aerial robots. Using our database, we evaluate different aspects of the plant classifiers, considering their performance, generalization capabilities, required labeling effort, and exploitation of additional near-infrared information, and we explicitly compare the random forest performance with that obtained by fully convolutional neural networks. Our experiments suggest that fully convolutional neural networks are well suited for the plant classification task. They provide better performance than random-forest-based approaches, are more robust to changing field conditions, and deliver results faster by exploiting dedicated hardware.

All plant classification systems presented in this thesis have been published in peer-reviewed conference papers and journal articles. One of our UAV-based plant classification systems won the Best Automation Paper Award at the IEEE International Conference on Robotics and Automation (ICRA). Our semi-supervised online learning approach for plant classification was a finalist for the Best Application Paper Award at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).