Author: stachnis

2018-10: New Dataset Release: Agricultural Sugar Beet Datasets with Annotations

We released 12,340 labeled images containing pixel-wise annotations of sugar beets and weeds. The labels complement our previously published IJRR dataset paper, “Agricultural robot dataset for plant classification, localization, and mapping on sugar beet fields.” On average, we recorded data three times per week over 6 weeks within the season, which captures the interesting period for weed control starting at the emergence of the plants. The robot carried a 4-channel multi-spectral camera.

Link to the dataset:
www.ipb.uni-bonn.de/data/sugarbeets2016/

Link to the new labels:
www.ipb.uni-bonn.de/datasets_IJRR2017/annotations/cropweed/

2018-09-27: Cluster of Excellence PhenoRob Accepted

Our Cluster of Excellence proposal “PhenoRob – Robotics and Phenotyping for Sustainable Crop Production” has been accepted today.

One of the greatest challenges for humanity is to produce sufficient food, feed, fiber, and fuel for an ever-growing world population while simultaneously reducing the environmental footprint of agricultural production. Arable land is limited, and the input of agro-chemicals needs to be reduced to curb environmental pollution and halt the decline in biodiversity. Climate change poses additional constraints on crop farming. Achieving sustainable crop production with limited resources is, thus, a task of immense proportions.

Our main hypothesis is that a major shift toward sustainable crop production can be achieved via two approaches: (1) multi-scale monitoring of plants and their environment using autonomous robots with automated and individualized intervention and big data analytics combined with machine learning to improve our understanding of the relation between input and output parameters of crop production, and (2) assessing, modeling, and optimizing the implications of the developed technical innovations in a systemic manner.

To realize our vision, we will take a technology-driven approach to address the challenging scientific objectives. We foresee novel ways of growing crops and managing fields, and aim at reducing the environmental footprint of crop production, maintaining the quality of soil and arable land, and analyzing the best routes to improve the adoption of technology.

The novel approach of PhenoRob is characterized by the integration of robotics, digitalization, and machine learning on the one hand, and modern phenotyping, modeling, and crop production on the other.

First, we will systematically monitor all essential aspects of crop production using sensor networks as well as ground and aerial robots. This is expected to provide detailed, spatially and temporally aligned information at the level of individual plants, on nutrient and disease status, on soil conditions, and on ecosystem parameters such as vegetation diversity. This will enable a more targeted management of inputs (genetic resources, crop protection, fertilization) for optimizing outputs (yield, growth, environmental impact).

Second, we will develop novel technologies to enable real-time control of weeds and selective spraying and fertilization of individual plants in field stands. This will help reduce the environmental footprint by reducing chemical input.

Third, machine learning applied to crop data will improve our understanding and modeling of plant growth and resource efficiencies and will further assist in the identification of correlations. Furthermore, we will develop integrated multi-scale models for the soil-crop-atmosphere system. These technologies and the gained knowledge will change crop production on all levels.

Fourth, in addition to the impact on management decisions at the farm level, we will investigate the requirements for technology adoption as well as the socioeconomic and environmental impact of the innovations resulting from upscaling.

2018-07: GPU Grant from NVIDIA

We gratefully acknowledge NVIDIA's support in the form of a GPU grant for our research on semantic segmentation and object instance detection for scene understanding and agricultural robotics.

2018-03: Code Available: Bonnet – Tensorflow Convolutional Semantic Segmentation Pipeline by Andres Milioto and Cyrill Stachniss

Bonnet: Tensorflow Convolutional Semantic Segmentation pipeline by Andres Milioto and Cyrill Stachniss

Bonnet is available on GitHub

Bonnet provides a framework to easily add architectures and datasets in order to train and deploy CNNs for a robot. It contains a full training pipeline in Python using TensorFlow and OpenCV, as well as C++ apps to deploy a frozen protobuf in ROS and standalone. The C++ library is designed so that other backends (such as TensorRT and MvNCS) can be added, but only TensorFlow and TensorRT are implemented for now. We will keep it this way because we are mostly interested in deployment for the Jetson and Drive platforms, but if you have a specific need, we accept pull requests!
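To give a feel for the standalone deployment path, here is a minimal sketch of running a frozen TensorFlow 1.x protobuf on a single image. The file name, the tensor names ("input:0", "prediction:0"), and the input resolution are placeholders invented for this illustration and are not Bonnet's actual interface; please check the repository for the real API.

```python
# Minimal sketch: inference with a frozen TensorFlow 1.x graph.
# File name, tensor names, and input size below are placeholders, not Bonnet's API.
import cv2
import numpy as np
import tensorflow as tf

# Load the frozen protobuf into a GraphDef
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    img = cv2.imread("frame.png")                        # BGR uint8 image
    img = cv2.resize(img, (512, 256))                    # example input size
    batch = img[np.newaxis].astype(np.float32) / 255.0   # 1 x H x W x C
    mask = sess.run("prediction:0", feed_dict={"input:0": batch})
    print(mask.shape)                                     # per-pixel class map
```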

The included networks are based on many other architectures (see below), but are not an exact copy of any of them. As seen in the videos, they run very fast on both GPU and CPU, and they are designed with performance in mind, at the cost of a slight accuracy loss. Feel free to use them as a model to implement your own architecture.

All scripts have been tested on the following configurations:

  • x86 Ubuntu 16.04 with an NVIDIA GeForce 940MX GPU (nvidia-384, CUDA8, CUDNN6, TF 1.4.1, TensorRT3)
  • x86 Ubuntu 16.04 with an NVIDIA GTX1080Ti GPU (nvidia-375, CUDA8, CUDNN6, TF 1.4.1, TensorRT3)
  • x86 Ubuntu 16.04 and 14.04 with no GPU (TF 1.4.1, running on CPU in NHWC mode, no TensorRT support)
  • Jetson TX2 (full Jetpack 3.2)

We also provide a Dockerfile to make it easy to run without worrying about the dependencies; it is based on the official nvidia/cuda image with CUDA 9 and cuDNN 7.

This code is related to the following publications:

A. Milioto and C. Stachniss, “Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics using CNNs”, arXiv preprint arXiv:1802.08960, 2018.

2018-02: Code Available: Fast Change Detection by Emanuele Palazzolo and Cyrill Stachniss

Fast Change Detection by Emanuele Palazzolo and Cyrill Stachniss

Fast Change Detection is available on GitHub

The program identifies, in real time, changes in the environment with respect to a 3D model from a sequence of images. The idea is to first detect inconsistencies between pairs of images by reprojecting one image onto another via the 3D model. Ambiguities about possible inconsistencies resulting from this process are then resolved by combining multiple images. Finally, the 3D location of the change is estimated by projecting these inconsistencies into 3D.
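To make the reprojection step more concrete, here is a minimal NumPy sketch that warps the pixels of one image into another through a depth map rendered from the 3D model. It is only an illustration of the geometry under assumed known intrinsics and camera poses, not the released C++ implementation.

```python
# Minimal sketch of image-to-image reprojection through a 3D model,
# represented here by a depth map rendered for camera A.
# Illustration only; not the released Fast Change Detection code.
import numpy as np

def reproject_a_to_b(depth_a, K, T_wa, T_wb):
    """Return, for every pixel of image A, its pixel coordinates in image B.

    depth_a : HxW depth of image A rendered from the 3D model
    K       : 3x3 camera intrinsics (same camera assumed for A and B)
    T_wa    : 4x4 camera-to-world pose of camera A
    T_wb    : 4x4 camera-to-world pose of camera B
    """
    H, W = depth_a.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)  # 3 x N homogeneous pixels
    rays = np.linalg.inv(K) @ pix                                   # back-projected viewing rays
    pts_a = rays * depth_a.reshape(1, -1)                           # 3D points in camera A frame
    pts_a_h = np.vstack([pts_a, np.ones((1, pts_a.shape[1]))])      # homogeneous coordinates
    pts_b = (np.linalg.inv(T_wb) @ T_wa @ pts_a_h)[:3]              # same points in camera B frame
    proj = K @ pts_b
    uv_b = proj[:2] / proj[2]                                       # perspective division
    return uv_b.T.reshape(H, W, 2)
```

Sampling the second image at the returned coordinates and comparing it with the first highlights pixels that disagree with the model; such inconsistencies are then accumulated over multiple views before the change is located in 3D, as described above.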

This code is related to the following publications:
E. Palazzolo and C. Stachniss, “Fast Image-Based Geometric Change Detection Given a 3D Model”, in Proceedings of the IEEE Int. Conf. on Robotics and Automation (ICRA), 2018.

2017-12: We will be organizing the ICRA’18 workshop “Robotic Vision and Action in Agriculture” in Brisbane jointly with ETH Zürich and the Australian Centre of Excellence for Robotic Vision

Robotic Vision and Action in Agriculture: the future of agri-food systems and its deployment to the real world

Link to the workshop page

This workshop will bring together researchers and practitioners to discuss advances in robotics applications and the intersection of these advances with agricultural practices. As such, the workshop will focus not only on recent advancements in vision systems and action but will also explore what this means for agricultural practices and how robotics can be used to better manage and understand crops and the environment.

Motivation and Objectives

Agricultural robotics faces a number of unique challenges and operates at the intersection of applied robotic vision, manipulation, and crop science. Robotics will play a key role in improving productivity, increasing crop quality, and even enabling individualised weed and crop treatment. All of these advancements are integral to providing food for a growing population expected to reach 9 billion by 2050, which will require agricultural production to double in order to meet food demands.

This workshop brings together researchers and industry working on novel approaches for long-term operation across changing agricultural environments, including broad-acre crops, orchard crops, nurseries and greenhouses, and horticulture. It will also host Prof. Achim Walter, an internationally renowned crop scientist, who will provide a unique perspective on the far future of agriculture and how robotics can further revolutionise it.

The goal of the workshop is to discuss the future of agricultural robotics and how thinking and acting with a robot in the field enables a range of different applications and approaches. Particular emphasis will be placed on vision and action that work in the field by coping with changes in the appearance and geometry of the environment. Learning how to interact within this complicated environment will also be of special interest to the workshop, as will the alternative applications enabled by better understanding and exploiting the link between robotics and crop science.

List of Topics

Topics of interest to this workshop include, but are not necessarily limited to:

  • Novel perception for agricultural robots including passive and active methods
  • Manipulators for harvesting, soil preparation and crop protection
  • Long-term autonomy and navigation in unstructured environments
  • Data analytics and real-time decision making with robots-in-the-loop
  • Low-cost sensing and algorithms for day/night operation, and
  • User interfaces for end-users

Invited Presenters

The workshop will feature the following distinguished experts for invited talks:

Prof. Achim Walter (ETHZ Department of Environmental Systems Science)
Prof. Qin Zhang (Washington State University)

Organisers

Chris McCool
Australian Centre of Excellence for Robotic Vision
Queensland University of Technology
c.mccool@qut.edu.au

Chris Lehnert
Australian Centre of Excellence for Robotic Vision
Queensland University of Technology
c.lehnert@qut.edu.au

Inkyu Sa
ETH Zurich
Autonomous Systems Laboratory
inkyu.sa@mavt.ethz.ch

Juan Nieto
ETH Zurich
Autonomous Systems Laboratory
jnieto@ethz.ch

Cyrill Stachniss
University of Bonn
Photogrammetry, IGG
cyrill.stachniss@igg.uni-bonn.de

2017-11: Code Available: Extended Version of Visual Place Recognition using Hashing by Olga Vysotska

Visual Place Recognition using Hashing by Olga Vysotska and Cyrill Stachniss

The localization system is available on GitHub

Given two sequences of images represented by descriptors, the code constructs a data association graph and performs a search within this graph, so that for every query image it computes a matching hypothesis to an image in the database sequence, as well as matching hypotheses for the preceding images. The matching procedure can be performed in two modes: feature-based and cost-matrix-based. The new version uses our own hashing approach to quickly relocalize.
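As a rough analogue of the cost-matrix mode, the sketch below matches a query sequence against a database sequence with a simple dynamic program over pairwise descriptor distances. It only illustrates why exploiting the sequence order helps; it is neither the released data association graph search nor the hashing-based relocalization.

```python
# Minimal sketch: sequence matching over a query-vs-database cost matrix.
# Illustrative dynamic program only; not the released graph search or hashing code.
import numpy as np

def match_sequences(cost, max_step=2):
    """cost[i, j] = descriptor distance between query image i and database image j.
    Returns one database index per query image, assuming the database index can
    only stay or move forward by at most max_step between consecutive queries."""
    Q, D = cost.shape
    acc = np.full((Q, D), np.inf)          # accumulated cost of the best path ending at (i, j)
    back = np.zeros((Q, D), dtype=int)     # backpointers for path recovery
    acc[0] = cost[0]
    for i in range(1, Q):
        for j in range(D):
            lo = max(0, j - max_step)
            k = lo + int(np.argmin(acc[i - 1, lo:j + 1]))   # best admissible predecessor
            acc[i, j] = cost[i, j] + acc[i - 1, k]
            back[i, j] = k
    path = [int(np.argmin(acc[-1]))]        # best final database match
    for i in range(Q - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]                       # database match index for each query image

# Example with random descriptors and Euclidean distances
q = np.random.rand(10, 32)
d = np.random.rand(50, 32)
cost = np.linalg.norm(q[:, None, :] - d[None, :, :], axis=2)
print(match_sequences(cost))
```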

This code is related to the following publications:
O. Vysotska and C. Stachniss, “Relocalization under Substantial Appearance Changes using Hashing,” in Proc. of the 9th Workshop on Planning, Perception, and Navigation for Intelligent Vehicles at the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2017.