MoD – Mapping on Demand (DFG Research Unit)
Sub-project P4 ‘Incremental Mapping from Image Sequences’
Sub-project P8 ‘Exploration for UAVs’
The goal of the project is the development and testing of procedures and algorithms for the fast three-dimensional identification and mensuration of inaccessible objects on the basis of a semantically specified user inquiry. The sensor platform is a lightweight, autonomously flying drone. It uses the visual information from cameras for navigation, obstacle detection, exploration, and object acquisition. The methods to be developed focus on the autonomous acquisition of probabilistic models that capture spatio-temporal patterns including semantics. The ability to cope with noisy sensor data and to explicitly represent uncertainty is a central design element. The targeted approaches can be summarized by the term Mapping on Demand. This includes all processes, techniques, and tools to acquire the sensor data and to process and interpret it with the goal of building a semantically annotated 3D model for the user. The user obtains the model in a timely manner, and it supports his or her decision-making process. Within the second phase, the research unit focuses on techniques that are incremental in nature and suitable for onboard execution.
Flourish (EC, H2020)
To feed a growing world population with the given amount of available farm land, we must develop new methods of sustainable farming that increase yield while reducing reliance on herbicides and pesticides. Precision agricultural techniques seek to address this challenge by monitoring key indicators of crop health and targeting treatment only to plants that need it. This is a time-consuming and expensive activity, and while there has been great progress on autonomous farm robots, most systems have been developed to solve only specialized tasks. This lack of flexibility poses a high risk of no return on investment for farmers. The goal of the Flourish project is to bridge the gap between the current and desired capabilities of agricultural robots by developing an adaptable robotic solution for precision farming. By combining the aerial survey capabilities of a small autonomous multi-copter Unmanned Aerial Vehicle (UAV) with a multi-purpose agricultural Unmanned Ground Vehicle, the system will be able to survey a field from the air, perform targeted intervention on the ground, and provide detailed information for decision support, all with minimal user intervention. The system can be adapted to a wide range of crops by choosing different sensors and ground treatment packages. This development requires improvements in technological abilities for safe, accurate navigation within farms, coordinated multi-robot mission planning that enables large field surveys even with short UAV flight times, multispectral three-dimensional mapping with high temporal and spatial resolution, ground intervention tools and techniques, data analysis tools for crop monitoring and weed detection, and user interface design to support agricultural decision making. As these aspects are addressed in Flourish, the project will unlock new prospects for commercial agricultural robotics in the near future.
RobDREAM (EC, H2020)
Sleep! For hominids and most other mammals, sleep means more than regeneration. Sleep positively affects working memory, which in turn improves higher-level cognitive functions such as decision making and reasoning. This is the inspiration of RobDREAM! What if robots could also improve their capabilities in their inactive phases – by processing experiences made during the working day and by exploring – or “dreaming” of – possible future situations and how to solve them best? In RobDREAM we will improve industrial mobile manipulators’ perception, navigation, manipulation, and grasping capabilities by automatically optimizing parameters and strategies and selecting tools within a portfolio of key algorithms for perception, navigation, manipulation, and grasping, by means of learning and simulation, and through use-case-driven evaluation. As a result, mobile manipulation systems will adapt more quickly to new tasks, jobs, parts, areas of operation, and various other constraints. From a scientific perspective, the RobDREAM robots will feature increased adaptability, dependability, flexibility, configurability, and decisional autonomy, as well as improved abilities in perception, interaction, manipulation, and motion. The technology readiness level (TRL) of the related key technologies will be increased by means of frequent and iterative real-world testing, validation, and improvement phases from the very beginning of the project. From an economic perspective, the Quality of Service and the Overall Equipment Efficiency will increase, while at the same time the Total Cost of Ownership for setup, programming, and parameter tuning will decrease. These advantages will support the competitiveness of Europe’s manufacturing sector, in particular in SME-like settings with higher product variety and smaller lot sizes. They also support the head start of technology providers adopting RobDREAM’s technologies to conquer market shares in industrial and professional service robotics.
Rovina – Robots for Exploration, Digital Preservation and Visualization of Archeological Sites (EC, FP7, 2013-2016)
Mapping and digitizing archeological sites is an important task to preserve cultural heritage and to make it accessible to the public. Current systems for digitizing sites typically build upon static 3D laser scanning technology that is brought into archeological sites by humans. This is acceptable in general, but it prevents the digitization of sites that are inaccessible to humans. In the field of robotics, however, there has recently been tremendous progress in the development of autonomous robots that can access hazardous areas. ROVINA aims at extending this line of research with respect to reliability, accuracy, and autonomy to enable the novel application scenario of autonomously mapping areas of high archeological value that are hardly accessible. ROVINA will develop methods for building accurate, textured 3D models of large sites including annotations and semantic information. To construct the detailed model, it will combine innovative techniques to interpret vision and depth data. ROVINA will furthermore develop advanced techniques for safe navigation in cultural heritage sites. To actively control the robot, ROVINA will provide interfaces with different levels of robot autonomy. Already during the exploration mission, we will visualize relevant environmental aspects to the end users so that they can interact appropriately and provide direct feedback.
EUROPA2 (EC, FP7, 2013-2016)
The goal of the EUROPA2 project, which builds on top of the results of the successfully completed FP7 project EUROPA (see below), is to bridge this gap and to develop the foundations for robots designed to autonomously navigate in urban environments outdoors as well as in shopping malls and shops, for example, to provide various services to humans. Based on the combination of publicly available maps and the data gathered with the robot’s sensors, the robot will acquire, maintain, and revise a detailed model of the environment including semantic information, detect and track moving objects in the environment, adapt its navigation behavior according to the current situation, and anticipate interactions with users during navigation. A central aspect of the project is life-long operation and reduced deployment effort by avoiding the need to build maps with the robot before it can operate. EUROPA2 is targeted at developing novel technologies that will open new perspectives for commercial applications of service robots in the future.
Previous Projects by C. Stachniss as a PI conducted at the University of Freiburg
- STAMINA – Sustainable and Reliable Robotics for Part Handling in Manufacturing Automation
- AdvancedEDC – Advanced Intracortical Neural Probes with Electronic Depth Control (within EXC-BrainLinks-BrainTools)
- SFB/TR-8 – Spatial Cognition
- TAPAS – Robotics-enabled Logistics and Assistive Services for the Transformable Factory of the Future
- First-MM – Flexible Skill Acquisition and Intuitive Robot Tasking for Mobile Manipulation in the Real World
- EUROPA – European Robotic Pedestrian Assistant
Previous Projects by W. Förstner and B. Waske
- Modelling the spatio-temporal variability of crop and cropping system processes under heterogeneous field conditions
- Structural-ecological mapping of river courses, using TerraSAR-X and RapidEye data
- Remote sensing based retrieval of biomethane potential (BMP) of crops, with regard to the EnMAP mission
- Monitoring Farmland Abandonment by multitemporal and multisensor remote sensing imagery (MOFA) (transferred to FU Berlin 01.01.2014)
- Semi-automatic Generation of Highly Detailed Textured Building Models
- eTRIMS – E-Training for Interpreting Images of Man-Made Scenes (2006-2009)
- Ontological Scales
- A Control Point Model Database for Automatic Exterior Orientation (together with Survey Department NRW) (1994-2000)
- Semantic Modeling and Extraction of Spatial Objects from Images and Maps (SM) (1993-1999)
- Image Processing for Automatic Cartographic Tools (IMPACT)
- Photogrammetric investigations with MOMS-02 imagery
- Calibration of a fringe projection (structured light) sensor system (in German)
- Automatic Geometric and Semantic Reconstruction of Buildings from Images by Extraction of 3D-Corners and their 3D-Aggregation
- Photogrammetric Eye
- A Generic Adjustment Module
- Photogrammetric Observation and Reconstruction of the Development of a Fluvial Sediment Surface (in German)
- Semi-automatic Building Acquisition
- A Photogrammetric Scanner for the Analytical Plotter P3