Dr.-Ing. Johannes Schneider

Research associate
Contact:
Email: johannes.schneider@deepup.de
Tel: +49 228 73-2713
Fax: +49 228 73-2712
Office: Nussallee 15, 1st floor
Address:
University of Bonn
Photogrammetry, IGG
Nussallee 15
53115 Bonn

Research Interests

  • Bundle Adjustment
  • Visual Odometry
  • Multi-camera Systems
  • Mapping on Demand

Short CV

Johannes Schneider studied Geodesy and Geoinformation at the University of Bonn and received his master’s degree in 2011. Since 2012 he has been a PhD student supervised by Wolfgang Förstner and a research associate at the University of Bonn, Department of Photogrammetry in the Institute of Geodesy and Geoinformation. His area of research is visual online SLAM with omnidirectional multi-camera systems. He works in the research project Mapping on Demand, funded by the German Research Foundation (DFG) under research unit FOR 1505, headed by Cyrill Stachniss.

Software Downloads

  • BACS (Bundle Adjustment for Camera Systems)

Teaching Activities

  • Lectures “3D Coordinate Systems”, winter term 2017/18, basis, eCampus
  • Exercises “3D Coordinate Systems”, winter term 2016/17, basis, eCampus
  • Exercises “3D Coordinate Systems”, winter term 2015/16, basis, eCampus
  • Exercises “3D Coordinate Systems”, winter term 2014/15, basis, eCampus
  • Project for master students “Test field for kinematic multi-sensor systems”, summer/winter term 2014/15, basis, eCampus
  • Exercises “3D Coordinate Systems”, winter term 2013/14, basis, eCampus
  • Project for master students “Determination of high-resolution building models with flying robots”, winter term 2013/14, basis, eCampus
  • Exercises “Projective Geometry and Statistics”, summer term 2013, basis, eCampus
  • Exercises “3D Coordinate Systems”, winter term 2012/13, basis, eCampus
  • Exercises “Projective Geometry and Statistics”, summer term 2012, eCampus

Awards

  • Karl Kraus Young Scientist Award 2013 (1st prize)

Publications

2017

  • C. Beekmans, J. Schneider, T. Läbe, M. Lennefer, C. Stachniss, and C. Simmer, “3D-Cloud Morphology and Motion from Dense Stereo for Fisheye Cameras,” in Proc. of the European Geosciences Union General Assembly (EGU), 2017.
    [BibTeX] [PDF]
    @InProceedings{beekmans2017egu,
    title = {3D-Cloud Morphology and Motion from Dense Stereo for Fisheye Cameras},
    author = {C. Beekmans and J. Schneider and T. L\"abe and M. Lennefer and C. Stachniss and C. Simmer},
    booktitle = {Proc. of the European Geosciences Union General Assembly (EGU)},
    year = {2017},
    }

  • J. Schneider, C. Stachniss, and W. Förstner, “On the Quality and Efficiency of Approximate Solutions to Bundle Adjustment with Epipolar and Trifocal Constraints,” in ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017, pp. 81-88. doi:10.5194/isprs-annals-IV-2-W3-81-2017
    [BibTeX] [PDF]

    Bundle adjustment is a central part of most visual SLAM and Structure from Motion systems and thus a relevant component of UAVs equipped with cameras. This paper makes two contributions to bundle adjustment. First, we present a novel approach which exploits trifocal constraints, i.e., constraints resulting from corresponding points observed in three camera images, which allows to estimate the camera pose parameters without 3D point estimation. Second, we analyze the quality loss compared to the optimal bundle adjustment solution when applying different types of approximations to the constrained optimization problem to increase efficiency. We implemented and thoroughly evaluated our approach using a UAV performing mapping tasks in outdoor environments. Our results indicate that the complexity of the constrained bundle adjustment can be decreased without losing too much accuracy.

    @InProceedings{schneider2017uavg,
    title = {On the Quality and Efficiency of Approximate Solutions to Bundle Adjustment with Epipolar and Trifocal Constraints},
    author = {J. Schneider and C. Stachniss and W. F\"orstner},
    booktitle = {ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences},
    year = {2017},
    pages = {81-88},
    volume = {IV-2/W3},
    abstract = {Bundle adjustment is a central part of most visual SLAM and Structure from Motion systems and thus a relevant component of UAVs equipped with cameras. This paper makes two contributions to bundle adjustment. First, we present a novel approach which exploits trifocal constraints, i.e., constraints resulting from corresponding points observed in three camera images, which allows to estimate the camera pose parameters without 3D point estimation. Second, we analyze the quality loss compared to the optimal bundle adjustment solution when applying different types of approximations to the constrained optimization problem to increase efficiency. We implemented and thoroughly evaluated our approach using a UAV performing mapping tasks in outdoor environments. Our results indicate that the complexity of the constrained bundle adjustment can be decreased without losing too much accuracy.},
    doi = {10.5194/isprs-annals-IV-2-W3-81-2017},
    url = {https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/IV-2-W3/81/2017/isprs-annals-IV-2-W3-81-2017.pdf},
    }

2016

  • C. Beekmans, J. Schneider, T. Läbe, M. Lennefer, C. Stachniss, and C. Simmer, “Cloud Photogrammetry with Dense Stereo for Fisheye Cameras,” Atmospheric Chemistry and Physics (ACP), vol. 16, iss. 22, pp. 14231-14248, 2016. doi:10.5194/acp-16-14231-2016
    [BibTeX] [PDF]

    We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km² using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows to recover a detailed and more complete cloud morphology compared to previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived, and used for radiation closure under cloudy conditions, for example. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view with a single image. However, the computation of dense 3-D information is more complicated and standard implementations for dense 3-D stereo reconstruction cannot be easily applied. Together with an appropriate camera calibration, which includes internal camera geometry, global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of clouds located around the cameras. We implement and evaluate the proposed approach using real world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated from a Lidar-ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.

    @Article{beekmans16acp,
    title = {Cloud Photogrammetry with Dense Stereo for Fisheye Cameras},
    author = {C. Beekmans and J. Schneider and T. L\"abe and M. Lennefer and C. Stachniss and C. Simmer},
    journal = {Atmospheric Chemistry and Physics (ACP)},
    year = {2016},
    number = {22},
    pages = {14231-14248},
    volume = {16},
    abstract = {We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km² using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows to recover a detailed and more complete cloud morphology compared to previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived, and used for radiation closure under cloudy conditions, for example. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view with a single image. However, the computation of dense 3-D information is more complicated and standard implementations for dense 3-D stereo reconstruction cannot be easily applied. Together with an appropriate camera calibration, which includes internal camera geometry, global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of clouds located around the cameras. We implement and evaluate the proposed approach using real world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated from a Lidar-ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.},
    doi = {10.5194/acp-16-14231-2016},
    url = {https://www.ipb.uni-bonn.de/pdfs/beekmans16acp.pdf},
    }

  • J. Schneider, C. Eling, L. Klingbeil, H. Kuhlmann, W. Förstner, and C. Stachniss, “Fast and Effective Online Pose Estimation and Mapping for UAVs,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2016, pp. 4784–4791. doi:10.1109/ICRA.2016.7487682
    [BibTeX] [PDF]

    Online pose estimation and mapping in unknown environments is essential for most mobile robots. Especially autonomous unmanned aerial vehicles require good pose estimates at comparably high frequencies. In this paper, we propose an effective system for online pose and simultaneous map estimation designed for light-weight UAVs. Our system consists of two components: (1) real-time pose estimation combining RTK-GPS and IMU at 100 Hz and (2) an effective SLAM solution running at 10 Hz using image data from an omnidirectional multi-fisheye-camera system. The SLAM procedure combines spatial resection computed based on the map that is incrementally refined through bundle adjustment and combines the image data with raw GPS observations and IMU data on keyframes. The overall system yields a real-time, georeferenced pose at 100 Hz in GPS-friendly situations. Additionally, we obtain a precise pose and feature map at 10 Hz even in cases where the GPS is not observable or underconstrained. Our system has been implemented and thoroughly tested on a 5 kg copter and yields accurate and reliable pose estimation at high frequencies. We compare the point cloud obtained by our method with a model generated from georeferenced terrestrial laser scanner.

    @InProceedings{schneider16icra,
    title = {Fast and Effective Online Pose Estimation and Mapping for UAVs},
    author = {J. Schneider and C. Eling and L. Klingbeil and H. Kuhlmann and W. F\"orstner and C. Stachniss},
    booktitle = icra,
    year = {2016},
    pages = {4784--4791},
    abstract = {Online pose estimation and mapping in unknown environments is essential for most mobile robots. Especially autonomous unmanned aerial vehicles require good pose estimates at comparably high frequencies. In this paper, we propose an effective system for online pose and simultaneous map estimation designed for light-weight UAVs. Our system consists of two components: (1) real-time pose estimation combining RTK-GPS and IMU at 100 Hz and (2) an effective SLAM solution running at 10 Hz using image data from an omnidirectional multi-fisheye-camera system. The SLAM procedure combines spatial resection computed based on the map that is incrementally refined through bundle adjustment and combines the image data with raw GPS observations and IMU data on keyframes. The overall system yields a real-time, georeferenced pose at 100 Hz in GPS-friendly situations. Additionally, we obtain a precise pose and feature map at 10 Hz even in cases where the GPS is not observable or underconstrained. Our system has been implemented and thoroughly tested on a 5 kg copter and yields accurate and reliable pose estimation at high frequencies. We compare the point cloud obtained by our method with a model generated from georeferenced terrestrial laser scanner.},
    doi = {10.1109/ICRA.2016.7487682},
    url = {https://www.ipb.uni-bonn.de/pdfs/schneider16icra.pdf},
    }

  • J. Schneider, C. Stachniss, and W. Förstner, “Dichtes Stereo mit Fisheye-Kameras,” in UAV 2016 – Vermessung mit unbemannten Flugsystemen, 2016, pp. 247-264.
    [BibTeX]
    @InProceedings{schneider16dvw,
    title = {Dichtes Stereo mit Fisheye-Kameras},
    author = {J. Schneider and C. Stachniss and W. F\"orstner},
    booktitle = {UAV 2016 -- Vermessung mit unbemannten Flugsystemen},
    year = {2016},
    pages = {247-264},
    publisher = {Wi{\ss}ner Verlag},
    series = {Schriftenreihe des DVW},
    volume = {82},
    }

  • J. Schneider, C. Stachniss, and W. Förstner, “On the Accuracy of Dense Fisheye Stereo,” IEEE Robotics and Automation Letters (RA-L), vol. 1, iss. 1, pp. 227-234, 2016. doi:10.1109/LRA.2016.2516509
    [BibTeX] [PDF]

    Fisheye cameras offer a large field of view, which is important for several robotics applications as a larger field of view allows for covering a large area with a single image. In contrast to classical cameras, however, fisheye cameras cannot be approximated well using the pinhole camera model and this renders the computation of depth information from fisheye stereo image pairs more complicated. In this work, we analyze the combination of an epipolar rectification model for fisheye stereo cameras with existing dense methods. This has the advantage that existing dense stereo systems can be applied as a black-box even with cameras that have field of view of more than 180 deg to obtain dense disparity information. We thoroughly investigate the accuracy potential of such fisheye stereo systems using image data from our UAV. The empirical analysis is based on image pairs of a calibrated fisheye stereo camera system and two state-of-the-art algorithms for dense stereo applied to adequately rectified image pairs from fisheye stereo cameras. The canonical stochastic model for sensor points assumes homogeneous uncertainty and we generalize this model based on an empirical analysis using a test scene consisting of mutually orthogonal planes. We show (1) that the combination of adequately rectified fisheye image pairs and dense methods provides dense 3D point clouds at 6-7 Hz on our autonomous multi-copter UAV, (2) that the uncertainty of points depends on their angular distance from the optical axis, (3) how to estimate the variance component as a function of that distance, and (4) how the improved stochastic model improves the accuracy of the scene points.

    @Article{schneider16ral,
    title = {On the Accuracy of Dense Fisheye Stereo},
    author = {J. Schneider and C. Stachniss and W. F\"orstner},
    journal = ral,
    year = {2016},
    number = {1},
    pages = {227-234},
    volume = {1},
    abstract = {Fisheye cameras offer a large field of view, which is important for several robotics applications as a larger field of view allows for covering a large area with a single image. In contrast to classical cameras, however, fisheye cameras cannot be approximated well using the pinhole camera model and this renders the computation of depth information from fisheye stereo image pairs more complicated. In this work, we analyze the combination of an epipolar rectification model for fisheye stereo cameras with existing dense methods. This has the advantage that existing dense stereo systems can be applied as a black-box even with cameras that have field of view of more than 180 deg to obtain dense disparity information. We thoroughly investigate the accuracy potential of such fisheye stereo systems using image data from our UAV. The empirical analysis is based on image pairs of a calibrated fisheye stereo camera system and two state-of-the-art algorithms for dense stereo applied to adequately rectified image pairs from fisheye stereo cameras. The canonical stochastic model for sensor points assumes homogeneous uncertainty and we generalize this model based on an empirical analysis using a test scene consisting of mutually orthogonal planes. We show (1) that the combination of adequately rectified fisheye image pairs and dense methods provides dense 3D point clouds at 6-7 Hz on our autonomous multi-copter UAV, (2) that the uncertainty of points depends on their angular distance from the optical axis, (3) how to estimate the variance component as a function of that distance, and (4) how the improved stochastic model improves the accuracy of the scene points.},
    doi = {10.1109/LRA.2016.2516509},
    url = {https://www.ipb.uni-bonn.de/pdfs/schneider16ral.pdf},
    }
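Note that, unlike the other records on this page, the `schneider16icra` and `schneider16ral` entries above use the abbreviations `booktitle = icra` and `journal = ral` rather than literal strings, so they only compile if matching `@String` macros are defined in the same .bib file. A minimal sketch of such definitions, with the expansions taken from the rendered citation lines above (the macro bodies themselves are an assumption, not part of the original file):

```bibtex
% @String macros assumed by the schneider16icra and schneider16ral
% entries; expansions follow the citation text rendered on this page.
@String{icra = {Proc. of the IEEE Intl. Conf. on Robotics \& Automation (ICRA)}}
@String{ral  = {IEEE Robotics and Automation Letters (RA-L)}}
```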

2014

  • L. Klingbeil, M. Nieuwenhuisen, J. Schneider, C. Eling, D. Droeschel, D. Holz, T. Läbe, W. Förstner, S. Behnke, and H. Kuhlmann, “Towards Autonomous Navigation of an UAV-based Mobile Mapping System,” in 4th International Conf. on Machine Control & Guidance, 2014, pp. 136–147.
    [BibTeX] [PDF]

    For situations, where mapping is neither possible from high altitudes nor from the ground, we are developing an autonomous micro aerial vehicle able to fly at low altitudes in close vicinity of obstacles. This vehicle is based on a MikroKopter™ octocopter platform (maximum total weight: 5kg), and contains a dual frequency GPS board, an IMU, a compass, two stereo camera pairs with fisheye lenses, a rotating 3D laser scanner, 8 ultrasound sensors, a real-time processing unit, and a compact PC for on-board ego-motion estimation and obstacle detection for autonomous navigation. A high-resolution camera is used for the actual mapping task, where the environment is reconstructed in three dimensions from images, using a highly accurate bundle adjustment. In this contribution, we describe the sensor system setup and present results from the evaluation of several aspects of the different subsystems as well as initial results from flight tests.

    @InProceedings{klingbeil14mcg,
    title = {Towards Autonomous Navigation of an UAV-based Mobile Mapping System},
    author = {Klingbeil, Lasse and Nieuwenhuisen, Matthias and Schneider, Johannes and Eling, Christian and Droeschel, David and Holz, Dirk and L\"abe, Thomas and F\"orstner, Wolfgang and Behnke, Sven and Kuhlmann, Heiner},
    booktitle = {4th International Conf. on Machine Control \& Guidance},
    year = {2014},
    pages = {136--147},
    abstract = {For situations, where mapping is neither possible from high altitudes nor from the ground, we are developing an autonomous micro aerial vehicle able to fly at low altitudes in close vicinity of obstacles. This vehicle is based on a MikroKopter{\texttrademark} octocopter platform (maximum total weight: 5kg), and contains a dual frequency GPS board, an IMU, a compass, two stereo camera pairs with fisheye lenses, a rotating 3D laser scanner, 8 ultrasound sensors, a real-time processing unit, and a compact PC for on-board ego-motion estimation and obstacle detection for autonomous navigation. A high-resolution camera is used for the actual mapping task, where the environment is reconstructed in three dimensions from images, using a highly accurate bundle adjustment. In this contribution, we describe the sensor system setup and present results from the evaluation of several aspects of the different subsystems as well as initial results from flight tests.},
    url = {https://www.ipb.uni-bonn.de/pdfs/klingbeil14mcg.pdf},
    }

  • J. Schneider and W. Förstner, “Real-time Accurate Geo-localization of a MAV with Omnidirectional Visual Odometry and GPS,” in Computer Vision – ECCV 2014 Workshops, 2014, pp. 271–282. doi:10.1007/978-3-319-16178-5_18
    [BibTeX] [PDF]

    This paper presents a system for direct geo-localization of a MAV in an unknown environment using visual odometry and precise real time kinematic (RTK) GPS information. Visual odometry is performed with a multi-camera system with four fisheye cameras that cover a wide field of view which leads to better constraints for localization due to long tracks and a better intersection geometry. Visual observations from the acquired image sequences are refined with a high accuracy on selected keyframes by an incremental bundle adjustment using the iSAM2 algorithm. The optional integration of GPS information yields long-time stability and provides a direct geo-referenced solution. Experiments show the high accuracy which is below 3 cm standard deviation in position.

    @InProceedings{schneider14eccv-ws,
    title = {Real-time Accurate Geo-localization of a MAV with Omnidirectional Visual Odometry and GPS},
    author = {J. Schneider and W. F\"orstner},
    booktitle = {Computer Vision - ECCV 2014 Workshops},
    year = {2014},
    pages = {271--282},
    abstract = {This paper presents a system for direct geo-localization of a MAV in an unknown environment using visual odometry and precise real time kinematic (RTK) GPS information. Visual odometry is performed with a multi-camera system with four fisheye cameras that cover a wide field of view which leads to better constraints for localization due to long tracks and a better intersection geometry. Visual observations from the acquired image sequences are refined with a high accuracy on selected keyframes by an incremental bundle adjustment using the iSAM2 algorithm. The optional integration of GPS information yields long-time stability and provides a direct geo-referenced solution. Experiments show the high accuracy which is below 3 cm standard deviation in position.},
    doi = {10.1007/978-3-319-16178-5_18},
    url = {https://www.ipb.uni-bonn.de/pdfs/schneider14eccv-ws.pdf},
    }

  • J. Schneider, T. Läbe, and W. Förstner, “Real-Time Bundle Adjustment with an Omnidirectional Multi-Camera System and GPS,” in Proc. of the 4th International Conf. on Machine Control & Guidance, 2014, pp. 98–103.
    [BibTeX] [PDF]

    In this paper we present our system for visual odometry that performs a fast incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. It is applicable to image streams of a calibrated multi-camera system with omnidirectional cameras. In this paper we use an autonomously flying octocopter that is equipped for visual odometry and obstacle detection with four fisheye cameras, which provide a large field of view. For real-time ego-motion estimation the platform is equipped, besides the cameras, with a dual frequency GPS board, an IMU and a compass. In this paper we show how we apply our system for visual odometry using the synchronized video streams of the four fisheye cameras. The position and orientation information from the GPS-unit and the inertial sensors can optionally be integrated into our system. We will show the obtained accuracy of pure odometry and compare it with the solution from GPS/INS.

    @InProceedings{schneider14mcg,
    title = {Real-Time Bundle Adjustment with an Omnidirectional Multi-Camera System and GPS},
    author = {J. Schneider and T. L\"abe and W. F\"orstner},
    booktitle = {Proc. of the 4th International Conf. on Machine Control \& Guidance},
    year = {2014},
    pages = {98--103},
    abstract = {In this paper we present our system for visual odometry that performs a fast incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. It is applicable to image streams of a calibrated multi-camera system with omnidirectional cameras. In this paper we use an autonomously flying octocopter that is equipped for visual odometry and obstacle detection with four fisheye cameras, which provide a large field of view. For real-time ego-motion estimation the platform is equipped, besides the cameras, with a dual frequency GPS board, an IMU and a compass. In this paper we show how we apply our system for visual odometry using the synchronized video streams of the four fisheye cameras. The position and orientation information from the GPS-unit and the inertial sensors can optionally be integrated into our system. We will show the obtained accuracy of pure odometry and compare it with the solution from GPS/INS.},
    address = {Braunschweig},
    url = {https://www.ipb.uni-bonn.de/pdfs/schneider14mcg.pdf},
    }

2013

  • M. Nieuwenhuisen, D. Droeschel, J. Schneider, D. Holz, T. Läbe, and S. Behnke, “Multimodal Obstacle Detection and Collision Avoidance for Micro Aerial Vehicles,” in Proc. of the 6th European Conf. on Mobile Robots (ECMR), 2013. doi:10.1109/ECMR.2013.6698812
    [BibTeX] [PDF]

    Reliably perceiving obstacles and avoiding collisions is key for the fully autonomous application of micro aerial vehicles (MAVs). Limiting factors for increasing autonomy and complexity of MAVs (without external sensing and control) are limited onboard sensing and limited onboard processing power. In this paper, we propose a complete system with a multimodal sensor setup for omnidirectional obstacle perception. We developed a lightweight 3D laser scanner setup and visual obstacle detection using wide-angle stereo cameras. Together with our fast reactive collision avoidance approach based on local egocentric grid maps of the environment we aim at safe operation in the vicinity of structures like buildings or vegetation.

    @InProceedings{nieuwenhuisen13ecmr,
    title = {Multimodal Obstacle Detection and Collision Avoidance for Micro Aerial Vehicles},
    author = {Nieuwenhuisen, Matthias and Droeschel, David and Schneider, Johannes and Holz, Dirk and L\"abe, Thomas and Behnke, Sven},
    booktitle = {Proc. of the 6th European Conf. on Mobile Robots (ECMR)},
    year = {2013},
    abstract = {Reliably perceiving obstacles and avoiding collisions is key for the fully autonomous application of micro aerial vehicles (MAVs). Limiting factors for increasing autonomy and complexity of MAVs (without external sensing and control) are limited onboard sensing and limited onboard processing power. In this paper, we propose a complete system with a multimodal sensor setup for omnidirectional obstacle perception. We developed a lightweight 3D laser scanner setup and visual obstacle detection using wide-angle stereo cameras. Together with our fast reactive collision avoidance approach based on local egocentric grid maps of the environment we aim at safe operation in the vicinity of structures like buildings or vegetation.},
    address = {Barcelona},
    doi = {10.1109/ECMR.2013.6698812},
    url = {https://www.ais.uni-bonn.de/papers/ECMR_2013_Nieuwenhuisen_Multimodal_Obstacle_Avoidance.pdf},
    }

  • J. Schneider and W. Förstner, “Bundle Adjustment and System Calibration with Points at Infinity for Omnidirectional Camera Systems,” Z. f. Photogrammetrie, Fernerkundung und Geoinformation, vol. 4, pp. 309–321, 2013. doi:10.1127/1432-8364/2013/0179
    [BibTeX] [PDF]

    We present a calibration method for multi-view cameras that provides a rigorous maximum likelihood estimation of the mutual orientation of the cameras within a rigid multi-camera system. No calibration targets are needed, just a movement of the multi-camera system taking synchronized images of a highly textured and static scene. Multi-camera systems with non-overlapping views have to be rotated within the scene so that corresponding points are visible in different cameras at different times of exposure. By using an extended version of the projective collinearity equation all estimates can be optimized in one bundle adjustment where we constrain the relative poses of the cameras to be fixed. For stabilizing camera orientations – especially rotations – one should generally use points at the horizon within the bundle adjustment, which classical bundle adjustment programs are not capable of. We use a minimal representation of homogeneous coordinates for image and scene points which allows us to use images of omnidirectional cameras with single viewpoint like fisheye cameras and scene points at a large distance from the camera or even at infinity. We show results of our calibration method on (1) the omnidirectional multi-camera system Ladybug 3 from Point Grey, (2) a camera-rig with five cameras used for the acquisition of complex 3D structures and (3) a camera-rig mounted on a UAV consisting of four fisheye cameras which provide a large field of view and which is used for visual odometry and obstacle detection in the project MoD (DFG-Project FOR 1505 “Mapping on Demand”).

    @Article{schneider13pfg,
    title = {Bundle Adjustment and System Calibration with Points at Infinity for Omnidirectional Camera Systems},
    author = {J. Schneider and W. F\"orstner},
    journal = {Z. f. Photogrammetrie, Fernerkundung und Geoinformation},
    year = {2013},
    pages = {309--321},
    volume = {4},
    abstract = {We present a calibration method for multi-view cameras that provides a rigorous maximum likelihood estimation of the mutual orientation of the cameras within a rigid multi-camera system. No calibration targets are needed, just a movement of the multi-camera system taking synchronized images of a highly textured and static scene. Multi-camera systems with non-overlapping views have to be rotated within the scene so that corresponding points are visible in different cameras at different times of exposure. By using an extended version of the projective collinearity equation all estimates can be optimized in one bundle adjustment where we constrain the relative poses of the cameras to be fixed. For stabilizing camera orientations - especially rotations - one should generally use points at the horizon within the bundle adjustment, which classical bundle adjustment programs are not capable of. We use a minimal representation of homogeneous coordinates for image and scene points which allows us to use images of omnidirectional cameras with single viewpoint like fisheye cameras and scene points at a large distance from the camera or even at infinity. We show results of our calibration method on (1) the omnidirectional multi-camera system Ladybug 3 from Point Grey, (2) a camera-rig with five cameras used for the acquisition of complex 3D structures and (3) a camera-rig mounted on a UAV consisting of four fisheye cameras which provide a large field of view and which is used for visual odometry and obstacle detection in the project MoD (DFG-Project FOR 1505 "Mapping on Demand").},
    doi = {10.1127/1432-8364/2013/0179},
    url = {https://www.dgpf.de/pfg/2013/pfg2013_4_schneider.pdf},
    }

  • J. Schneider, T. Läbe, and W. Förstner, “Incremental Real-time Bundle Adjustment for Multi-camera Systems with Points at Infinity,” in ISPRS Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2013, pp. 355-360. doi:10.5194/isprsarchives-XL-1-W2-355-2013
    [BibTeX] [PDF]

    This paper presents a concept and first experiments on a keyframe-based incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. In order to avoid periodic batch steps, we use the software iSAM2 for sparse nonlinear incremental optimization, which is highly efficient through incremental variable reordering and fluid relinearization. We adapted the software to allow for (1) multi-view cameras by taking the rigid transformation between the cameras into account, (2) omni-directional cameras as it can handle arbitrary bundles of rays and (3) scene points at infinity, which improve the estimation of the camera orientation as points at the horizon can be observed over long periods of time. The real-time bundle adjustment refers to sets of keyframes, consisting of frames, one per camera, taken in a synchronized way, that are initiated if a minimal geometric distance to the last keyframe set is exceeded. It uses interest points in the keyframes as observations, which are tracked in the synchronized video streams of the individual cameras and matched across the cameras, if possible. First experiments show the potential of the incremental bundle adjustment w.r.t. time requirements. Our experiments are based on a multi-camera system with four fisheye cameras, which are mounted on a UAV as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view.

    @InProceedings{schneider13isprs,
    title = {Incremental Real-time Bundle Adjustment for Multi-camera Systems with Points at Infinity},
    author = {J. Schneider and T. L\"abe and W. F\"orstner},
    booktitle = {ISPRS Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences},
    year = {2013},
    pages = {355--360},
    volume = {XL-1/W2},
    abstract = {This paper presents a concept and first experiments on a keyframe-based incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. In order to avoid periodic batch steps, we use the software iSAM2 for sparse nonlinear incremental optimization, which is highly efficient through incremental variable reordering and fluid relinearization. We adapted the software to allow for (1) multi-view cameras by taking the rigid transformation between the cameras into account, (2) omni-directional cameras as it can handle arbitrary bundles of rays and (3) scene points at infinity, which improve the estimation of the camera orientation as points at the horizon can be observed over long periods of time. The real-time bundle adjustment refers to sets of keyframes, consisting of frames, one per camera, taken in a synchronized way, that are initiated if a minimal geometric distance to the last keyframe set is exceeded. It uses interest points in the keyframes as observations, which are tracked in the synchronized video streams of the individual cameras and matched across the cameras, if possible. First experiments show the potential of the incremental bundle adjustment with respect to time requirements. Our experiments are based on a multi-camera system with four fisheye cameras, which are mounted on a UAV as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view.},
    doi = {10.5194/isprsarchives-XL-1-W2-355-2013},
    url = {https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-1-W2/355/2013/isprsarchives-XL-1-W2-355-2013.pdf},
    }

2012

  • S. Gehrig, A. Barth, N. Schneider, and J. Siegemund, “A Multi-Cue Approach for Stereo-Based Object Confidence Estimation,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, 2012, pp. 3055–3060. doi:10.1109/IROS.2012.6385455
    [BibTeX]

    In this contribution we present an approach to compute object confidences for stereo-vision-based object tracking schemes. Meaningful object confidences help to reduce false alarm rates of safety systems and improve the downstream system performance for modules such as sensor fusion and situation analysis. Several cues from stereo vision and from the tracking process are fused in a Bayesian manner. An evaluation on a 38,000-frame urban drive shows the effectiveness of the approach compared to the same object tracking scheme with simple heuristics for the object confidence. The evaluation also considers the relevance of occurring phantoms by computing the collision risk. The proposed confidence measures reduce the number of predicted imminent collisions from 86 to 0 while maintaining almost the same system availability.

    @InProceedings{gehrig2012multi,
    title = {A Multi-Cue Approach for Stereo-Based Object Confidence Estimation},
    author = {Gehrig, Stefan and Barth, Alexander and Schneider, Nicolai and Siegemund, Jan},
    booktitle = iros,
    year = {2012},
    address = {Vilamoura, Portugal},
    pages = {3055--3060},
    abstract = {In this contribution we present an approach to compute object confidences for stereo-vision-based object tracking schemes. Meaningful object confidences help to reduce false alarm rates of safety systems and improve the downstream system performance for modules such as sensor fusion and situation analysis. Several cues from stereo vision and from the tracking process are fused in a Bayesian manner. An evaluation on a 38,000 frames urban drive shows the effectiveness of the approach compared to the same object tracking scheme with simple heuristics for the object confidence. Within the evaluation, also the relevance of occurring phantoms is considered by computing the collision risk. The proposed confidence measures reduce the number of predicted imminent collisions from 86 to 0 maintaining almost the same system availability.},
    doi = {10.1109/IROS.2012.6385455},
    }

  • J. Schneider, F. Schindler, T. Läbe, and W. Förstner, “Bundle Adjustment for Multi-camera Systems with Points at Infinity,” in ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2012, pp. 75–80. doi:10.5194/isprsannals-I-3-75-2012
    [BibTeX] [PDF]

    We present a novel approach for a rigorous bundle adjustment for omnidirectional and multi-view cameras, which enables an efficient maximum-likelihood estimation with image and scene points at infinity. Multi-camera systems are used to increase the resolution, to combine cameras with different spectral sensitivities (Z/I DMC, Vexcel Ultracam) or, like omnidirectional cameras, to augment the effective aperture angle (Blom Pictometry, Rollei Panoscan Mark III). Additionally, multi-camera systems are gaining importance for the acquisition of complex 3D structures. For stabilizing camera orientations, especially rotations, one should generally use points at the horizon, observed over long periods of time, within the bundle adjustment, which classical bundle adjustment programs are not capable of. We use a minimal representation of homogeneous coordinates for image and scene points. Instead of eliminating the scale factor of the homogeneous vectors by Euclidean normalization, we normalize the homogeneous coordinates spherically. This way we can use images of omnidirectional cameras with a single viewpoint, like fisheye cameras, and scene points which are far away or at infinity. We demonstrate the feasibility and the potential of our approach on real data taken with a single camera, the stereo camera FinePix Real 3D W3 from Fujifilm and the multi-camera system Ladybug3 from Point Grey.

    @InProceedings{schneider12isprs,
    title = {Bundle Adjustment for Multi-camera Systems with Points at Infinity},
    author = {J. Schneider and F. Schindler and T. L\"abe and W. F\"orstner},
    booktitle = {ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences},
    year = {2012},
    pages = {75--80},
    volume = {I-3},
    abstract = {We present a novel approach for a rigorous bundle adjustment for omnidirectional and multi-view cameras, which enables an efficient maximum-likelihood estimation with image and scene points at infinity. Multi-camera systems are used to increase the resolution, to combine cameras with different spectral sensitivities (Z/I DMC, Vexcel Ultracam) or - like omnidirectional cameras - to augment the effective aperture angle (Blom Pictometry, Rollei Panoscan Mark III). Additionally multi-camera systems gain in importance for the acquisition of complex 3D structures. For stabilizing camera orientations - especially rotations - one should generally use points at the horizon over long periods of time within the bundle adjustment that classical bundle adjustment programs are not capable of. We use a minimal representation of homogeneous coordinates for image and scene points. Instead of eliminating the scale factor of the homogeneous vectors by Euclidean normalization, we normalize the homogeneous coordinates spherically. This way we can use images of omnidirectional cameras with single-view point like fisheye cameras and scene points, which are far away or at infinity. We demonstrate the feasibility and the potential of our approach on real data taken with a single camera, the stereo camera FinePix Real 3D W3 from Fujifilm and the multi-camera system Ladybug3 from Point Grey.},
    city = {Melbourne},
    doi = {10.5194/isprsannals-I-3-75-2012},
    url = {https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/I-3/75/2012/isprsannals-I-3-75-2012.pdf},
    }
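
    Several of the abstracts above contrast spherical normalization of homogeneous coordinates with the classical Euclidean normalization. The difference can be sketched in a few lines of plain NumPy (an illustrative sketch only; the function names are ours and are not part of the BACS software):

    ```python
    import numpy as np

    def spherical_normalize(x):
        # Scale the homogeneous vector to unit Euclidean length. This stays
        # well defined even when the last (homogeneous) component is zero,
        # i.e. for points at infinity.
        x = np.asarray(x, dtype=float)
        return x / np.linalg.norm(x)

    def euclidean_normalize(x):
        # Classical normalization: divide by the last component.
        # Fails (division by zero) for points at infinity.
        x = np.asarray(x, dtype=float)
        return x / x[-1]

    finite = spherical_normalize([3.0, 0.0, 0.0, 1.0])     # finite scene point
    direction = spherical_normalize([1.0, 0.0, 0.0, 0.0])  # point at infinity
    ```

    Because Euclidean normalization divides by the homogeneous component, it cannot represent points whose last component is zero; the unit-length representation keeps such points, e.g. stable points at the horizon, usable as observations in the bundle adjustment.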

2011

  • J. Schneider, F. Schindler, and W. Förstner, “Bündelausgleichung für Multikamerasysteme,” in Proc. of the 31st DGPF Conf., 2011.
    [BibTeX] [PDF]

    We present an approach for a rigorous bundle adjustment for multi-camera systems. To this end, we use a minimal representation of homogeneous coordinate vectors for a maximum likelihood estimation. Instead of eliminating the scale factor of homogeneous vectors by using Euclidean quantities, the homogeneous coordinates are normalized spherically, so that image and object points at infinity remain representable. This also makes it possible to handle images of omnidirectional cameras with a single viewpoint, such as fisheye cameras, as well as points that are far away or at infinity. Points at the horizon in particular can be observed over long periods of time and thus provide stable direction information. We demonstrate the practical application of the approach on an image sequence taken with the multi-camera system Ladybug3 from Point Grey, which covers 80% of the full sphere with six cameras.

    @InProceedings{schneider11dgpf,
    title = {B\"undelausgleichung f\"ur Multikamerasysteme},
    author = {J. Schneider and F. Schindler and W. F\"orstner},
    booktitle = {Proc. of the 31st DGPF Conf.},
    year = {2011},
    abstract = {Wir stellen einen Ansatz f\"ur eine strenge B\"undelausgleichung f\"ur Multikamerasysteme vor. Hierzu verwenden wir eine minimale Repr\"asentation von homogenen Koordinatenvektoren f\"ur eine Maximum-Likelihood-Sch\"atzung. Statt den Skalierungsfaktor von homogenen Vektoren durch Verwendung von euklidischen Gr\"o{\ss}en zu eliminieren, werden die homogenen Koordinaten sph\"arisch normiert, so dass Bild- und Objektpunkte im Unendlichen repr\"asentierbar bleiben. Dies erm\"oglicht auch Bilder omnidirektionaler Kameras mit Einzelblickpunkt, wie Fisheyekameras, und weit entfernte bzw. unendlich ferne Punkte zu behandeln. Speziell Punkte am Horizont k\"onnen \"uber lange Zeitr\"aume beobachtet werden und liefern somit eine stabile Richtungsinformation. Wir demonstrieren die praktische Umsetzung des Ansatzes anhand einer Bildfolge mit dem Multikamerasystem Ladybug3 von Point Grey, welches mit sechs Kameras 80 % der gesamten Sph\"are abbildet.},
    city = {Mainz},
    url = {https://www.ipb.uni-bonn.de/pdfs/schneider11dgpf.pdf},
    }

2010

  • F. Korč, D. Schneider, and W. Förstner, “On Nonparametric Markov Random Field Estimation for Fast Automatic Segmentation of MRI Knee Data,” in Proc. of the 4th Medical Image Analysis for the Clinic – A Grand Challenge workshop, MICCAI, 2010, pp. 261–270.
    [BibTeX] [PDF]

    We present a fast, automatic and reproducible method for 3D semantic segmentation of magnetic resonance images of the knee. We formulate a single global model that allows us to jointly segment all classes. The model estimation was performed automatically, without manual interaction or parameter tuning. The segmentation of a magnetic resonance image with 11 million voxels took approximately one minute. Our labeling results by far do not reach the performance of complex state-of-the-art approaches designed to produce clinically relevant results, but could potentially be useful for rough visualization or for initializing computationally demanding methods. Our main contribution is to provide insights into possible strategies when employing global statistical models.

    @InProceedings{korvc2010nonparametric,
    title = {On Nonparametric Markov Random Field Estimation for Fast Automatic Segmentation of MRI Knee Data},
    author = {Kor{\vc}, Filip and Schneider, David and F\"orstner, Wolfgang},
    booktitle = {Proc. of the 4th Medical Image Analysis for the Clinic - A Grand Challenge workshop, MICCAI},
    year = {2010},
    note = {Beijing},
    pages = {261--270},
    abstract = {We present a fast automatic reproducible method for 3d semantic segmentation of magnetic resonance images of the knee. We formulate a single global model that allows to jointly segment all classes. The model estimation was performed automatically without manual interaction and parameter tuning. The segmentation of a magnetic resonance image with 11 Mio voxels took approximately one minute. Our labeling results by far do not reach the performance of complex state of the art approaches designed to produce clinically relevant results. Our results could potentially be useful for rough visualization or initialization of computationally demanding methods. Our main contribution is to provide insights in possible strategies when employing global statistical models},
    url = {https://www.ipb.uni-bonn.de/pdfs/Korvc2010Nonparametric.pdf},
    }

2009

  • A. Schneider, J. Sturm, C. Stachniss, M. Reisert, H. Burkhardt, and W. Burgard, “Object Identification with Tactile Sensors Using Bag-of-Features,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2009.
    [BibTeX] [PDF]
    @InProceedings{schneider2009,
    title = {Object Identification with Tactile Sensors Using Bag-of-Features},
    author = {A. Schneider and J. Sturm and C. Stachniss and M. Reisert and H. Burkhardt and W. Burgard},
    booktitle = iros,
    year = {2009},
    abstract = {[none]},
    timestamp = {2014.04.24},
    url = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/wurm09iros.pdf},
    }

2005

  • W. Burgard, M. Moors, C. Stachniss, and F. Schneider, “Coordinated Multi-Robot Exploration,” IEEE Transactions on Robotics, vol. 21, iss. 3, pp. 376–378, 2005.
    [BibTeX] [PDF]
    @Article{burgard2005a,
    title = {Coordinated Multi-Robot Exploration},
    author = {W. Burgard and M. Moors and C. Stachniss and F. Schneider},
    journal = ieeetransrob,
    year = {2005},
    number = {3},
    pages = {376--378},
    volume = {21},
    abstract = {[none]},
    timestamp = {2014.04.24},
    url = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/burgard05tro.pdf},
    }