Author: stachnis

2017-12: We will be organizing the ICRA’18 workshop “Robotic Vision and Action in Agriculture” in Brisbane jointly with ETH Zürich and the Australian Centre of Excellence for Robotic Vision

Robotic Vision and Action in Agriculture: the future of agri-food systems and its deployment to the real-world

Link to the workshop page

This workshop will bring together researchers and practitioners to discuss advances in robotics applications and how these advances intersect with agricultural practice. As such, the workshop will focus not only on recent advances in vision systems and action but will also explore what this means for agricultural practice and how robotics can be used to better manage and understand the crop and environment.

Motivation and Objectives

Agricultural robotics faces a number of unique challenges and operates at the intersection of applied robotic vision, manipulation, and crop science. Robotics will play a key role in improving productivity, increasing crop quality, and even enabling individualised weed and crop treatment. All of these advances are integral to feeding a growing population expected to reach 9 billion by 2050, which will require agricultural production to double in order to meet food demands.

This workshop brings together researchers and industry working on novel approaches for long-term operation across changing agricultural environments, including broad-acre crops, orchard crops, nurseries and greenhouses, and horticulture. It will also host Prof. Achim Walter, an internationally renowned crop scientist, who will provide a unique perspective on the far future of how robotics can further revolutionise agriculture.

The goal of the workshop is to discuss the future of agricultural robotics and how thinking and acting with a robot in the field enables a range of different applications and approaches. Particular emphasis will be placed on vision and action that work in the field by coping with changes in the appearance and geometry of the environment. Learning how to interact within this complicated environment will also be of special interest to the workshop, as will the alternative applications enabled by better understanding and exploiting the link between robotics and crop science.

List of Topics

Topics of interest to this workshop include, but are not necessarily limited to:

  • Novel perception for agricultural robots including passive and active methods
  • Manipulators for harvesting, soil preparation and crop protection
  • Long-term autonomy and navigation in unstructured environments
  • Data analytics and real-time decision making with robots-in-the-loop
  • Low-cost sensing and algorithms for day/night operation
  • User interfaces for end-users

Invited Presenters

The workshop will feature the following distinguished experts for invited talks:

Prof. Achim Walter (ETHZ Department of Environmental Systems Science)
Prof. Qin Zhang (Washington State University)

Organisers

Chris McCool
Australian Centre of Excellence for Robotic Vision
Queensland University of Technology
c.mccool@qut.edu.au

Chris Lehnert
Australian Centre of Excellence for Robotic Vision
Queensland University of Technology
c.lehnert@qut.edu.au

Inkyu Sa
ETH Zurich
Autonomous Systems Laboratory
inkyu.sa@mavt.ethz.ch

Juan Nieto
ETH Zurich
Autonomous Systems Laboratory
jnieto@ethz.ch

Cyrill Stachniss
University of Bonn
Photogrammetry, IGG
cyrill.stachniss@igg.uni-bonn.de

2017-11: Code Available: Extended Version of Visual Place Recognition using Hashing by Olga Vysotska

Visual Place Recognition using Hashing by Olga Vysotska and Cyrill Stachniss

Localization system is available on GitHub

Given two sequences of images represented by descriptors, the code constructs a data association graph and performs a search within this graph so that, for every query image, it computes a matching hypothesis to an image in the database sequence as well as matching hypotheses for the previous images. The matching procedure can be performed in two modes: feature-based and cost-matrix-based. The new version uses its own hashing approach to quickly relocalize, as sketched below.
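
To give a feel for the cost-matrix-based mode, the following minimal C++ sketch matches a query sequence against a database sequence by picking, for each query descriptor, the cheapest database match within a window around the previous match. This is a simplification for illustration only: the descriptor type, cosine cost, and fixed window are assumptions, and the released code instead performs an efficient search in the data association graph and uses hashing for relocalization.

```cpp
// Minimal sketch of sequence matching on a cost matrix (illustration
// only; the released code searches a data association graph and uses
// hashing for relocalization). Descriptors and cost are assumptions.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

using Descriptor = std::vector<float>;

// Matching cost between two descriptors: 1 - cosine similarity.
float matchingCost(const Descriptor& a, const Descriptor& b) {
  float dot = 0.f, na = 0.f, nb = 0.f;
  for (std::size_t i = 0; i < a.size(); ++i) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1.f - dot / (std::sqrt(na) * std::sqrt(nb) + 1e-9f);
}

// For every query image, pick the cheapest database image within a
// window around the previous match, enforcing sequence consistency.
std::vector<int> matchSequences(const std::vector<Descriptor>& query,
                                const std::vector<Descriptor>& database,
                                int window = 10) {
  std::vector<int> matches;
  int prev = 0;
  for (const Descriptor& q : query) {
    const int lo = std::max(0, prev - window);
    const int hi =
        std::min(static_cast<int>(database.size()) - 1, prev + window);
    int best = lo;
    float bestCost = matchingCost(q, database[lo]);
    for (int j = lo + 1; j <= hi; ++j) {
      const float c = matchingCost(q, database[j]);
      if (c < bestCost) { bestCost = c; best = j; }
    }
    matches.push_back(best);  // hypothesis for the current query image
    prev = best;
  }
  return matches;
}
```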

This code is related to the following publications:
O. Vysotska and C. Stachniss, “Relocalization under Substantial Appearance Changes using Hashing,” in Proc. of the 9th Workshop on Planning, Perception, and Navigation for Intelligent Vehicles at the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2017.

2017-09: Work on Semi-Supervised Crop-Weed Detection becomes IROS 2017 Best Application Paper Finalist

The work “Semi-Supervised Online Visual Crop and Weed Classification in Precision Farming Exploiting Plant Arrangement” by Philipp Lottes and Cyrill Stachniss, presented at IROS 2017 in Vancouver, was selected as a finalist for the IROS 2017 Best Application Paper Award.

Abstract – Precision farming robots offer a great potential for reducing the amount of agro-chemicals that is required in the fields through a targeted, per-plant intervention. To achieve this, robots must be able to reliably distinguish crops from weeds on different fields and across growth stages. In this paper, we tackle the problem of separating crops from weeds reliably while requiring only a minimal amount of training data through a semi-supervised approach. We exploit the fact that most crops are planted in rows with a similar spacing along the row, which in turn can be used to initialize a vision-based classifier requiring only minimal user efforts to adapt it to a new field. We implemented our approach using C++ and ROS and thoroughly tested it on real farm robots operating in different countries. The experiments presented in this paper show that with around 1 min of labeling time, we can achieve classification results with an accuracy of more than 95% in real sugar beet fields in Germany and Switzerland.
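
The plant-arrangement prior at the heart of the approach can be illustrated with a small sketch: if crops are sown along a row at a roughly constant spacing, detections that fall close to that regular lattice are likely crops, and the remaining ones weeds. The C++ snippet below is only a caricature of this idea; the positions, spacing, and tolerance parameters are hypothetical, and the actual system uses the arrangement to bootstrap and adapt a visual classifier online.

```cpp
// Caricature of the plant-arrangement prior (illustration only; the
// positions, spacing, and tolerance below are hypothetical, and the
// actual system uses the prior to train a visual classifier online).
#include <algorithm>
#include <cmath>
#include <vector>

struct Detection {
  double x;     // stem position along the crop row [m]
  bool isCrop;  // label assigned by the arrangement prior
};

// Crops are sown at a roughly constant spacing along the row, so a
// detection close to a multiple of that spacing is likely a crop.
void labelByRowSpacing(std::vector<Detection>& detections,
                       double spacing,      // expected inter-crop distance [m]
                       double tolerance) {  // allowed deviation [m]
  for (Detection& d : detections) {
    const double r = std::fmod(d.x, spacing);
    const double offset = std::min(r, spacing - r);
    d.isCrop = offset < tolerance;  // everything else is treated as weed
  }
}
```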

2017-09: Code Available: MPR: Multi-Cue Photometric Registration by B. Della Corte, I. Bogoslavskyi, C. Stachniss, and G. Grisetti

A General Framework for Flexible Multi-Cue Photometric Point Cloud Registration by Bartolomeo Della Corte, Igor Bogoslavskyi, Cyrill Stachniss, and Giorgio Grisetti

MPR: Multi-Cue Photometric Registration is available on GitLab

The ability to build maps is a key functionality for the majority of mobile robots. A central ingredient to most mapping systems is the registration or alignment of the recorded sensor data. In this paper, we present a general methodology for photometric registration that can deal with multiple different cues. We provide examples for registering RGBD as well as 3D LIDAR data. In contrast to popular point cloud registration approaches such as ICP, our method does not rely on explicit data association and exploits multiple modalities such as raw range and image data streams. Color, depth, and normal information are handled in a uniform manner and the registration is obtained by minimizing the pixel-wise difference between two multi-channel images. We developed a flexible and general framework and implemented our approach inside that framework. We also released our implementation as open source C++ code. The experiments show that our approach allows for an accurate registration of the sensor data without requiring an explicit data association or model-specific adaptations to datasets or sensors. Our approach exploits the different cues in a natural and consistent way and the registration can be done at framerate for a typical range or imaging sensor.
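
The objective behind this formulation can be summarized in a short sketch: stack the cues (e.g., intensity, depth, normal components) into a multi-channel image and sum the weighted squared per-pixel differences between the reference and the current image. The C++ snippet below only evaluates this error for two already-aligned images, with an assumed data layout and hypothetical channel weights; the released framework warps the current data with the pose estimate and minimizes the error with iterative least squares.

```cpp
// Sketch of the multi-cue photometric objective (illustration only;
// the data layout and weights are assumptions, and the released
// framework minimizes this error over the sensor pose with iterative
// least squares after warping the current data with the pose estimate).
#include <vector>

struct MultiChannelImage {
  int width = 0, height = 0, channels = 0;  // e.g. intensity, depth, normals
  std::vector<float> data;                  // row-major, interleaved channels
  float at(int u, int v, int c) const {
    return data[(v * width + u) * channels + c];
  }
};

// Weighted sum of squared per-channel pixel differences. The weights
// balance cues of different scale, e.g. meters for depth vs. [0,1]
// for intensity.
double photometricError(const MultiChannelImage& ref,
                        const MultiChannelImage& cur,
                        const std::vector<float>& channelWeights) {
  double error = 0.0;
  for (int v = 0; v < ref.height; ++v)
    for (int u = 0; u < ref.width; ++u)
      for (int c = 0; c < ref.channels; ++c) {
        const double d = ref.at(u, v, c) - cur.at(u, v, c);
        error += channelWeights[c] * d * d;
      }
  return error;
}
```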

This code is related to the following publication:
B. Della Corte, I. Bogoslavskyi, C. Stachniss, and G. Grisetti, “A General Framework for Flexible Multi-Cue Photometric Point Cloud Registration,” submitted to ICRA 2018, available as arXiv:1709.05945.