Given two sequences of images represented by descriptors, the code constructs a data association graph and performs a search within this graph: for every query image, it computes a matching hypothesis to an image in the database sequence, as well as matching hypotheses for the previous images. The matching procedure can be performed in two modes: feature-based and cost-matrix-based. For the theoretical details, please refer to our paper “Lazy Data Association For Image Sequences Matching Under Substantial Appearance Changes”.
This code is related to the following publication:
O. Vysotska and C. Stachniss, “Lazy Data Association For Image Sequences Matching Under Substantial Appearance Changes,” IEEE Robotics and Automation Letters (RA-L) and IEEE International Conference on Robotics & Automation (ICRA), vol. 1, iss. 1, pp. 1-8, 2016. doi:10.1109/LRA.2015.2512936.
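To give an idea of the cost-matrix-based mode, here is a minimal sketch, not the released implementation: it builds a pairwise cost matrix from descriptor distances and recovers a consistent matching path with plain dynamic programming, whereas the released code expands the data association graph lazily during the search. The descriptor distance, the band width `fanout`, and all names are illustrative assumptions.

```python
import numpy as np

def match_sequences(query_desc, db_desc, fanout=1):
    """Sketch of cost-matrix-mode sequence matching: for every query
    image, return the index of the matched database image along the
    lowest-cost path through the cost matrix."""
    # Cost matrix: Euclidean distance between every query/database pair.
    cost = np.linalg.norm(
        query_desc[:, None, :] - db_desc[None, :, :], axis=2)

    n_q, n_d = cost.shape
    acc = np.full((n_q, n_d), np.inf)   # accumulated path cost
    acc[0] = cost[0]
    parent = np.zeros((n_q, n_d), dtype=int)
    for i in range(1, n_q):
        for j in range(n_d):
            # Allow the database index to move by at most `fanout`
            # between consecutive query images (sequence assumption).
            lo, hi = max(0, j - fanout), min(n_d, j + fanout + 1)
            k = lo + int(np.argmin(acc[i - 1, lo:hi]))
            acc[i, j] = cost[i, j] + acc[i - 1, k]
            parent[i, j] = k
    # Backtrack: one matching hypothesis per query image.
    j = int(np.argmin(acc[-1]))
    path = [j]
    for i in range(n_q - 1, 0, -1):
        j = int(parent[i, j])
        path.append(j)
    return path[::-1]
```

On perfectly aligned sequences this recovers the diagonal of the cost matrix; the lazy graph search in the paper reaches the same kind of path without evaluating the full matrix.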
The work “UAV-Based Crop and Weed Classification for Smart Farming” by Philipp Lottes, Raghav Khanna, Johannes Pfeifer, Roland Siegwart and Cyrill Stachniss received the Best Paper Award in Automation of the IEEE Robotics and Automation Society at ICRA 2017.
Abstract: Unmanned aerial vehicles (UAVs) and other robots in smart farming applications offer the potential to monitor farmland on a per-plant basis, which in turn can reduce the amount of herbicides and pesticides that must be applied. Key information for the farmer as well as for autonomous agricultural robots is knowledge about the type and distribution of the weeds in the field. In this regard, UAVs offer excellent survey capabilities at low cost. In this paper, we address the problem of detecting value crops such as sugar beets as well as typical weeds using a camera installed on a lightweight UAV. We propose a system that performs vegetation detection, plant-tailored feature extraction, and classification to obtain an estimate of the distribution of crops and weeds in the field. We implemented and evaluated our system using UAVs on two farms, one in Germany and one in Switzerland, and demonstrate that our approach allows for analyzing the field and classifying individual plants.
We released our code that implements a fast and robust algorithm to segment point clouds taken with a Velodyne sensor into objects. It works with all available Velodyne sensors, i.e., the 16-, 32-, and 64-beam models. See a video that shows all objects with a bounding box volume of less than 10 cubic meters:
This code is related to the following publications:
I. Bogoslavskyi and C. Stachniss, “Efficient Online Segmentation for Sparse 3D Laser Scans,” PFG — Journal of Photogrammetry, Remote Sensing and Geoinformation Science, pp. 1-12, 2017.
as well as
I. Bogoslavskyi and C. Stachniss, “Fast Range Image-Based Segmentation of Sparse 3D Laser Scans for Online Operation,” In Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2016.
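To illustrate the underlying range-image idea, here is a minimal sketch, not the released implementation: two neighboring range-image pixels are assigned to the same object if the angle formed by the line through their measured points and the ray of the farther point exceeds a threshold, and segments are grown by flood fill. The angular resolutions, the threshold, and all names are illustrative assumptions.

```python
import math
from collections import deque

def segment_range_image(ranges, angle_threshold_deg=10.0,
                        h_res_deg=0.2, v_res_deg=2.0):
    """Sketch of angle-based range-image segmentation: label each
    valid pixel of a range image with an object id."""
    rows, cols = len(ranges), len(ranges[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 1

    def beta(d1, d2, alpha):
        # Angle between the line through the two measured points and
        # the laser ray of the farther one; small beta indicates a
        # depth discontinuity between objects.
        far, near = max(d1, d2), min(d1, d2)
        return math.atan2(near * math.sin(alpha),
                          far - near * math.cos(alpha))

    thresh = math.radians(angle_threshold_deg)
    for r0 in range(rows):
        for c0 in range(cols):
            if labels[r0][c0] or ranges[r0][c0] <= 0:
                continue
            # Breadth-first flood fill over the 4-neighborhood.
            labels[r0][c0] = next_label
            queue = deque([(r0, c0)])
            while queue:
                r, c = queue.popleft()
                for dr, dc, alpha in ((1, 0, math.radians(v_res_deg)),
                                      (-1, 0, math.radians(v_res_deg)),
                                      (0, 1, math.radians(h_res_deg)),
                                      (0, -1, math.radians(h_res_deg))):
                    rn, cn = r + dr, c + dc
                    if not (0 <= rn < rows and 0 <= cn < cols):
                        continue
                    if labels[rn][cn] or ranges[rn][cn] <= 0:
                        continue
                    if beta(ranges[r][c], ranges[rn][cn], alpha) > thresh:
                        labels[rn][cn] = next_label
                        queue.append((rn, cn))
            next_label += 1
    return labels
```

On a toy range image with two flat surfaces at clearly different depths, the sketch produces two separate labels, which is the behavior that makes the method fast enough for online operation on sparse scans.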
Cyrill Stachniss and Heiner Kuhlmann submitted together with several principal investigators and collaborators from the University of Bonn and the Research Center Jülich a Cluster of Excellence draft proposal to the DFG called “PhenoRob – Robotics and Phenotyping for Sustainable Crop Production”. More details can be found at http://www.phenorob.de.
An EXIST-funded startup focusing on an autonomous laser-based weeding system started in February 2017 at the Photogrammetry & Robotics Lab.
Abstract – For decades, herbicides have been the main means of weed control. The extensive use of chemical substances in agriculture has devastating effects on our environment and also affects human health. Additionally, several weed varieties have developed a natural resistance to the applied chemicals, so new and more potent herbicides have to be developed.
Due to these severe ecological and health effects, the world is demanding chemical-free crops from our fields. Laser-based weeding is an excellent non-herbicide solution for weed control that will enable the production of organic certified products.
The proposed system uses multi-spectral sensors and state-of-the-art computer vision algorithms to detect and classify all plants in the field. After identifying the weed plants, a laser beam is used to eliminate or seriously damage them. In this way, value crops can grow without competition from weeds and achieve higher yields, because the available nutrients do not have to be shared.
This technology has the potential to establish a new generation of sustainable crop production farms. With the use of the laser-based weeding system at large scale, herbicide use can be substantially reduced. Our goal is that within the foreseeable future this weeding method will be used by every major crop producer, thus protecting the environment and the health of the population.
Julio Pastrana (firstname.lastname@example.org) and Tim Wigbels (email@example.com)
The kickoff meeting of the EUROPA project on urban robot navigation was held in February 2009 in Freiburg, and on January 20, 2017, the final review of the successor project EUROPA2 ended with an excellent evaluation. A big thank you to all our partners and team members from the Universities of Freiburg and Oxford, KU Leuven, ETH Zürich, RWTH Aachen, as well as GeoAutomation and BlueBotics.
Demo of the sugar beet and weed classification system at a test field of the ETH Crop Science lab in Eschikon.
For more details on the approach see: P. Lottes, M. Hoeferlin, S. Sanders, and C. Stachniss: “Effective Vision-Based Classification for Separating Sugar Beets and Weeds for Precision Farming”, Journal of Field Robotics, 2016 on our publication page.
The EC-funded project ROVINA has the goal of providing new means for accessing and digitizing cultural heritage sites. It is targeted at developing innovative digital preservation tools and, at the same time, novel technologies that strengthen the robustness of robots operating in previously unknown and unstructured 3D environments. On September 26, 2016, the final review of the 42-month project was successful, so that ROVINA has been evaluated excellently in all four review meetings.
The collaborative project has been conducted by a consortium coordinated by Cyrill Stachniss at the University of Bonn that includes experts from robotics, computer vision and archeology: KU Leuven, La Sapienza University of Rome, RWTH Aachen, University of Freiburg, Algorithmica and the International Council of Monuments and Sites.