Publications

2017

  • J. Rosentreter, R. Hagensieker, A. Okujeni, R. Roscher, and B. Waske, “Sub-pixel mapping of urban areas using EnMAP data and multioutput support vector regression,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2017.
    [BibTeX]
    @Article{Rosentreter2017Subpixel,
    Title = {Sub-pixel mapping of urban areas using EnMAP data and multioutput support vector regression},
    Author = {Rosentreter, J. and Hagensieker, R. and Okujeni, A. and Roscher, R. and Waske, B.},
    Journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
    Year = {2017},
    Note = {to appear},
    }

  • F. Liebisch, M. Popovic, J. Pfeifer, R. Khanna, P. Lottes, C. Stachniss, A. Pretto, I. Sa, J. Nieto, R. Siegwart, and A. Walter, “Automatic UAV-based field inspection campaigns for weeding in row crops,” in Proceedings of the 10th EARSeL SIG Imaging Spectroscopy Workshop, 2017.
    [BibTeX]
    @InProceedings{liebisch17earsel,
    author = {F. Liebisch and M. Popovic and J. Pfeifer and R. Khanna and P. Lottes and C. Stachniss and A. Pretto and I. Sa and J. Nieto and R. Siegwart and A. Walter},
    title = {Automatic UAV-based field inspection campaigns for weeding in row crops},
    booktitle = {Proceedings of the 10th EARSeL SIG Imaging Spectroscopy Workshop},
    year = {2017},
    }

  • P. Lottes, R. Khanna, J. Pfeifer, R. Siegwart, and C. Stachniss, “UAV-Based Crop and Weed Classification for Smart Farming,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA), 2017.
    [BibTeX] [PDF]
    @InProceedings{lottes17icra,
    author = {P. Lottes and R. Khanna and J. Pfeifer and R. Siegwart and C. Stachniss},
    title = {UAV-Based Crop and Weed Classification for Smart Farming},
    booktitle = {Proceedings of the IEEE Int. Conf. on Robotics \& Automation (ICRA)},
    year = {2017},
    Url = {http://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/lottes17icra.pdf}
    }

  • C. Beekmans, J. Schneider, T. Laebe, M. Lennefer, C. Stachniss, and C. Simmer, “3D-Cloud Morphology and Motion from Dense Stereo for Fisheye Cameras,” in Proceedings of the European Geosciences Union General Assembly (EGU), 2017.
    [BibTeX]
    @InProceedings{beekmans17egu,
    author = {Ch. Beekmans and J. Schneider and T. Laebe and M. Lennefer and C. Stachniss and C. Simmer},
    title = {3D-Cloud Morphology and Motion from Dense Stereo for Fisheye Cameras},
    booktitle = {Proceedings of the European Geosciences Union General Assembly (EGU)},
    year = {2017},
    }

  • I. Bogoslavskyi and C. Stachniss, “Efficient Online Segmentation for Sparse 3D Laser Scans,” PFG — Journal of Photogrammetry, Remote Sensing and Geoinformation Science, pp. 1-12, 2017.
    [BibTeX] [PDF]
    The ability to extract individual objects in the scene is key for a large number of autonomous navigation systems such as mobile robots or autonomous cars. Such systems navigating in dynamic environments need to be aware of objects that may change or move. In most perception cues, a pre-segmentation of the current image or laser scan into individual objects is the first processing step before a further analysis is performed. In this paper, we present an effective method that first removes the ground from the scan and then segments the 3D data in a range image representation into different objects. A key focus of our work is a fast execution with several hundred Hertz. Our implementation has small computational demands so that it can run online on most mobile systems. We explicitly avoid the computation of the 3D point cloud and operate directly on a 2.5D range image, which enables a fast segmentation for each 3D scan. This approach can furthermore handle sparse 3D data well, which is important for scanners such as the new Velodyne VLP-16 scanner. We implemented our approach in C++ and ROS, thoroughly tested it using different 3D scanners, and will release the source code of our implementation. Our method can operate at frame rates that are substantially higher than those of the sensors while using only a single core of a mobile CPU and producing high-quality segmentation results.

    @Article{bogoslavskyi17pfg,
    author = {Bogoslavskyi, Igor and Stachniss, Cyrill},
    title = {Efficient Online Segmentation for Sparse 3D Laser Scans},
    journal = {PFG -- Journal of Photogrammetry, Remote Sensing and Geoinformation Science},
    year = {2017},
    pages = {1--12},
    abstract = {The ability to extract individual objects in the scene is key for a large number of autonomous navigation systems such as mobile robots or autonomous cars. Such systems navigating in dynamic environments need to be aware of objects that may change or move. In most perception cues, a pre-segmentation of the current image or laser scan into individual objects is the first processing step before a further analysis is performed. In this paper, we present an effective method that first removes the ground from the scan and then segments the 3D data in a range image representation into different objects. A key focus of our work is a fast execution with several hundred Hertz. Our implementation has small computational demands so that it can run online on most mobile systems. We explicitly avoid the computation of the 3D point cloud and operate directly on a 2.5D range image, which enables a fast segmentation for each 3D scan. This approach can furthermore handle sparse 3D data well, which is important for scanners such as the new Velodyne VLP-16 scanner. We implemented our approach in C++ and ROS, thoroughly tested it using different 3D scanners, and will release the source code of our implementation. Our method can operate at frame rates that are substantially higher than those of the sensors while using only a single core of a mobile CPU and producing high-quality segmentation results.},
    url = {http://link.springer.com/article/10.1007/s41064-016-0003-y},
    }
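    The angle-based range-image criterion this abstract summarizes can be sketched in a few lines. This is an illustrative reimplementation, not the authors' released C++/ROS code; the 4-neighborhood traversal, the parameter names, and the thresholds are our own simplification, and ground removal is omitted:

    ```python
    import math
    from collections import deque

    import numpy as np

    def segment_range_image(ranges, theta_deg=10.0, alpha_deg=0.4):
        """Label connected components in a 2.5D range image.

        Two neighboring pixels are merged when the angle beta spanned by
        their range readings exceeds the threshold theta, indicating a
        shallow range jump (likely the same object surface).
        `alpha_deg` is the angular resolution between adjacent beams.
        Pixels with range <= 0 are treated as missing returns.
        """
        h, w = ranges.shape
        labels = np.zeros((h, w), dtype=int)
        alpha = math.radians(alpha_deg)
        theta = math.radians(theta_deg)
        next_label = 0
        for r0 in range(h):
            for c0 in range(w):
                if labels[r0, c0] or ranges[r0, c0] <= 0.0:
                    continue  # already labeled or no return
                next_label += 1
                labels[r0, c0] = next_label
                queue = deque([(r0, c0)])
                while queue:  # breadth-first flood fill over the image grid
                    r, c = queue.popleft()
                    for rn, cn in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                        if not (0 <= rn < h and 0 <= cn < w):
                            continue
                        if labels[rn, cn] or ranges[rn, cn] <= 0.0:
                            continue
                        d1 = max(ranges[r, c], ranges[rn, cn])
                        d2 = min(ranges[r, c], ranges[rn, cn])
                        beta = math.atan2(d2 * math.sin(alpha),
                                          d1 - d2 * math.cos(alpha))
                        if beta > theta:  # shallow jump: same segment
                            labels[rn, cn] = next_label
                            queue.append((rn, cn))
        return labels
    ```

    Operating on the range image directly, as the abstract stresses, avoids building the 3D point cloud at all: the neighborhood structure is given by the image grid, so the flood fill touches each pixel once.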

  • W. Förstner, Some Comments on the Relations of Photogrammetry and Industry, 2017.
    [BibTeX] [PDF]
    @UNPUBLISHED{foerstner17:some,
    title = {{Some Comments on the Relations of Photogrammetry and Industry}},
    Author = {W. F{\"o}rstner},
    note = {Note for Photogrammetric Record},
    year = {2017},
    Url = {http://www.ipb.uni-bonn.de/pdfs/foerstner17comments.pdf}
    }

  • C. Merfels and C. Stachniss, “Sensor Fusion for Self-Localisation of Automated Vehicles,” PFG — Journal of Photogrammetry, Remote Sensing and Geoinformation Science, 2017.
    [BibTeX] [PDF]
    @Article{merfels17pfg,
    author = {Merfels, C. and Stachniss, C.},
    title = {Sensor Fusion for Self-Localisation of Automated Vehicles},
    journal = {PFG -- Journal of Photogrammetry, Remote Sensing and Geoinformation Science},
    year = {2017},
    url = {http://link.springer.com/article/10.1007/s41064-017-0008-1},
    }

  • O. Vysotska and C. Stachniss, “Improving SLAM by Exploiting Building Information from Publicly Available Maps and Localization Priors,” PFG — Journal of Photogrammetry, Remote Sensing and Geoinformation Science, vol. 85, iss. 1, pp. 53-65, 2017.
    [BibTeX] [PDF]
    @Article{vysotska17pfg,
    author = {Vysotska, O. and Stachniss, C.},
    title = {Improving SLAM by Exploiting Building Information from Publicly Available Maps and Localization Priors},
    journal = {PFG -- Journal of Photogrammetry, Remote Sensing and Geoinformation Science},
    year = {2017},
    volume = {85},
    number = {1},
    pages = {53--65},
    url = {http://link.springer.com/article/10.1007/s41064-017-0006-3},
    }

2016

  • N. Abdo, C. Stachniss, L. Spinello, and W. Burgard, “Organizing Objects by Predicting User Preferences Through Collaborative Filtering,” The International Journal of Robotics Research, 2016.
    [BibTeX] [PDF]
    @Article{abdo16ijrr,
    Title = {Organizing Objects by Predicting User Preferences Through Collaborative Filtering},
    Author = {N. Abdo and C. Stachniss and L. Spinello and W. Burgard},
    Journal = {The International Journal of Robotics Research},
    Year = {2016},
    Note = {arXiv:1512.06362},
    Url = {http://arxiv.org/abs/1512.06362}
    }

  • C. Beekmans, J. Schneider, T. Läbe, M. Lennefer, C. Stachniss, and C. Simmer, “Cloud Photogrammetry with Dense Stereo for Fisheye Cameras,” Atmospheric Chemistry and Physics (ACP), vol. 16, iss. 22, pp. 14231-14248, 2016. doi:10.5194/acp-16-14231-2016
    [BibTeX] [PDF]
    We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km2 using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows to recover a detailed and more complete cloud morphology compared to previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived, and used for radiation closure under cloudy conditions, for example. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view with a single image. However, the computation of dense 3-D information is more complicated and standard implementations for dense 3-D stereo reconstruction cannot be easily applied. Together with an appropriate camera calibration, which includes internal camera geometry, global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of clouds located around the cameras. We implement and evaluate the proposed approach using real world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated from a Lidar-ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.

    @Article{beekmans16acp,
    Title = {Cloud Photogrammetry with Dense Stereo for Fisheye Cameras},
    Author = {C. Beekmans and J. Schneider and T. L\"abe and M. Lennefer and C. Stachniss and C. Simmer},
    Journal = {Atmospheric Chemistry and Physics (ACP)},
    Year = {2016},
    Number = {22},
    Pages = {14231-14248},
    Volume = {16},
    Abstract = {We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km2 using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows to recover a detailed and more complete cloud morphology compared to previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type and spacing can be derived, and used for radiation closure under cloudy conditions, for example.
    Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view with a single image. However, the computation of dense 3-D information is more complicated and standard implementations for dense 3-D stereo reconstruction cannot be easily applied.
    Together with an appropriate camera calibration, which includes internal camera geometry, global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of clouds located around the cameras.
    We implement and evaluate the proposed approach using real world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated from a Lidar-ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.},
    Doi = {10.5194/acp-16-14231-2016},
    Url = {http://www.ipb.uni-bonn.de/pdfs/beekmans16acp.pdf}
    }

  • I. Bogoslavskyi, M. Mazuran, and C. Stachniss, “Robust Homing for Autonomous Robots,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA), 2016.
    [BibTeX] [PDF]
    @InProceedings{bogoslavskyi16icra,
    Title = {Robust Homing for Autonomous Robots},
    Author = {I. Bogoslavskyi and M. Mazuran and C. Stachniss},
    Booktitle = {Proceedings of the IEEE Int. Conf. on Robotics \& Automation (ICRA)},
    Year = {2016},
    Url = {http://www.ipb.uni-bonn.de/pdfs/bogoslavskyi16icra.pdf}
    }

  • I. Bogoslavskyi and C. Stachniss, “Fast Range Image-Based Segmentation of Sparse 3D Laser Scans for Online Operation,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2016.
    [BibTeX] [PDF]
    @InProceedings{bogoslavskyi16iros,
    Title = {Fast Range Image-Based Segmentation of Sparse 3D Laser Scans for Online Operation},
    Author = {I. Bogoslavskyi and C. Stachniss},
    Booktitle = {Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
    Year = {2016},
    Url = {http://www.ipb.uni-bonn.de/pdfs/bogoslavskyi16iros.pdf}
    }

  • W. Förstner, “A Future for Learning Semantic Models of Man-Made Environments,” in Proc. of Int. Conf. on Pattern Recognition (ICPR), 2016.
    [BibTeX] [PDF]
    Deriving semantic 3D models of man-made environments hitherto has not reached the desired maturity which makes human interaction obsolete. Man-made environments play a central role in navigation, city planning, building management systems, disaster management or augmented reality. They are characterised by rich geometric and semantic structures. These cause conceptual problems when learning generic models or when developing automatic acquisition systems. The problems appear to be caused by (1) the incoherence of the models for signal analysis, (2) the type of interplay between discrete and continuous geometric representations, (3) the inefficiency of the interaction between crisp models, such as partonomies and taxonomies, and soft models, mostly having a probabilistic nature, and (4) the vagueness of the used notions in the envisaged application domains. The paper wants to encourage the development and learning of generative models, specifically for man-made objects, to be able to understand, reason about, and explain interpretations.

    @InProceedings{Foerstner2016Future,
    Title = {{A Future for Learning Semantic Models of Man-Made Environments}},
    Author = {W. F{\"o}rstner},
    Booktitle = {Proc. of Int. Conf. on Pattern Recognition (ICPR)},
    Year = {2016},
    Abstract = {Deriving semantic 3D models of man-made environments hitherto has not reached the desired maturity which makes human interaction obsolete. Man-made environments play a central role in navigation, city planning, building management systems, disaster management or augmented reality. They are characterised by rich geometric and semantic structures. These cause conceptual problems when learning generic models or when developing automatic acquisition systems. The problems appear to be caused by (1) the incoherence of the models for signal analysis, (2) the type of interplay between discrete and continuous geometric representations, (3) the inefficiency of the interaction between crisp models, such as partonomies and taxonomies, and soft models, mostly having a probabilistic nature, and (4) the vagueness of the used notions in the envisaged application domains. The paper wants to encourage the development and learning of generative models, specifically for man-made objects, to be able to understand, reason about, and explain interpretations.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/foerstner16Future.pdf}
    }

  • W. Förstner and B. P. Wrobel, Photogrammetric Computer Vision — Statistics, Geometry, Orientation and Reconstruction, Springer, 2016.
    [BibTeX]
    @Book{Foerstner2016Photogrammetric,
    Title = {{Photogrammetric Computer Vision -- Statistics, Geometry, Orientation and Reconstruction}},
    Author = {W. F{\"o}rstner and B. P. Wrobel},
    Publisher = {Springer},
    Year = {2016}
    }

  • B. Franke, J. Plante, R. Roscher, A. Lee, C. Smyth, A. Hatefi, F. Chen, E. Gil, A. Schwing, A. Selvitella, M. M. Hoffman, R. Grosse, D. Hendricks, and N. Reid, “Statistical Inference, Learning and Models in Big Data,” International Statistical Review, 2016.
    [BibTeX] [PDF]
    Big data provides big opportunities for statistical inference, but perhaps even bigger challenges, often related to differences in volume, variety, velocity, and veracity of information when compared to smaller carefully collected datasets. From January to June, 2015, the Canadian Institute of Statistical Sciences organized a thematic program on Statistical Inference, Learning and Models in Big Data. This paper arose from presentations and discussions that took place during the thematic program.

    @Article{Franke2016BigData,
    Title = {Statistical Inference, Learning and Models in Big Data},
    Author = {Franke, Beate and Plante, Jean-Fran\c{c}ois and Roscher, Ribana and Lee, Annie and Smyth, Cathal and Hatefi, Armin and Chen, Fuqi and Gil, Einat and Schwing, Alex and Selvitella, Alessandro and Hoffman, Michael M. and Grosse, Roger and Hendricks, Dieter and Reid, Nancy},
    Journal = {International Statistical Review},
    Year = {2016},
    Note = {to appear},
    Abstract = {Big data provides big opportunities for statistical inference, but perhaps even bigger challenges, often related to differences in volume, variety, velocity, and veracity of information when compared to smaller carefully collected datasets. From January to June, 2015, the Canadian Institute of Statistical Sciences organized a thematic program on Statistical Inference, Learning and Models in Big Data. This paper arose from presentations and discussions that took place during the thematic program.},
    Url = {http://onlinelibrary.wiley.com/doi/10.1111/insr.12176/full}
    }

  • F. Liebisch, J. Pfeifer, R. Khanna, P. Lottes, C. Stachniss, T. Falck, S. Sander, R. Siegwart, A. Walter, and E. Galceran, “Flourish — A robotic approach for automation in crop management,” in Proceedings of the Workshop für Computer-Bildanalyse und unbemannte autonom fliegende Systeme in der Landwirtschaft, 2016.
    [BibTeX] [PDF]
    @InProceedings{liebisch16wslw,
    Title = {Flourish -- A robotic approach for automation in crop management},
    Author = {F. Liebisch and J. Pfeifer and R. Khanna and P. Lottes and C. Stachniss and T. Falck and S. Sander and R. Siegwart and A. Walter and E. Galceran},
    Booktitle = {Proceedings of the Workshop f\"ur Computer-Bildanalyse und unbemannte autonom fliegende Systeme in der Landwirtschaft},
    Year = {2016},
    Url = {http://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/liebisch16cbaws.pdf}
    }

  • P. Lottes, M. Höferlin, S. Sander, M. Müter, P. Schulze-Lammers, and C. Stachniss, “An Effective Classification System for Separating Sugar Beets and Weeds for Precision Farming Applications,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA), 2016.
    [BibTeX] [PDF]
    @InProceedings{lottes16icra,
    Title = {An Effective Classification System for Separating Sugar Beets and Weeds for Precision Farming Applications},
    Author = {P. Lottes and M. H\"oferlin and S. Sander and M. M\"uter and P. Schulze-Lammers and C. Stachniss},
    Booktitle = {Proceedings of the IEEE Int. Conf. on Robotics \& Automation (ICRA)},
    Year = {2016},
    Url = {http://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/lottes16icra.pdf}
    }

  • P. Lottes, M. Höferlin, S. Sander, and C. Stachniss, “Effective Vision-based Classification for Separating Sugar Beets and Weeds for Precision Farming,” Journal of Field Robotics, 2016. doi:10.1002/rob.21675
    [BibTeX] [PDF]
    @Article{lottes16jfr,
    Title = {Effective Vision-based Classification for Separating Sugar Beets and Weeds for Precision Farming},
    Author = {Lottes, Philipp and H\"oferlin, Markus and Sander, Slawomir and Stachniss, Cyrill},
    Journal = {Journal of Field Robotics},
    Year = {2016},
    Doi = {10.1002/rob.21675},
    ISSN = {1556-4967},
    Url = {http://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/lottes16jfr.pdf}
    }

  • B. Mack, R. Roscher, S. Stenzel, H. Feilhauer, S. Schmidtlein, and B. Waske, “Mapping raised bogs with an iterative one-class classification approach,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 120, pp. 53-64, 2016. doi:10.1016/j.isprsjprs.2016.07.008
    [BibTeX] [PDF]
    Land use and land cover maps are one of the most commonly used remote sensing products. In many applications the user only requires a map of one particular class of interest, e.g. a specific vegetation type or an invasive species. One-class classifiers are appealing alternatives to common supervised classifiers because they can be trained with labeled training data of the class of interest only. However, training an accurate one-class classification (OCC) model is challenging, particularly when facing a large image, a small class and few training samples. To tackle these problems we propose an iterative OCC approach. The presented approach uses a biased Support Vector Machine as core classifier. In an iterative pre-classification step a large part of the pixels not belonging to the class of interest is classified. The remaining data is classified by a final classifier with a novel model and threshold selection approach. The specific objective of our study is the classification of raised bogs in a study site in southeast Germany, using multi-seasonal RapidEye data and a small number of training samples. Results demonstrate that the iterative OCC outperforms other state of the art one-class classifiers and approaches for model selection. The study highlights the potential of the proposed approach for an efficient and improved mapping of small classes such as raised bogs. Overall the proposed approach constitutes a feasible approach and useful modification of a regular one-class classifier.

    @Article{Mack2016Raised,
    Title = {Mapping raised bogs with an iterative one-class classification approach},
    Author = {Mack, Benjamin and Roscher, Ribana and Stenzel, Stefanie and Feilhauer, Hannes and Schmidtlein, Sebastian and Waske, Bj{\"o}rn},
    Journal = {{ISPRS} Journal of Photogrammetry and Remote Sensing},
    Year = {2016},
    Pages = {53--64},
    Volume = {120},
    Abstract = {Land use and land cover maps are one of the most commonly used remote sensing products. In many applications the user only requires a map of one particular class of interest, e.g. a specific vegetation type or an invasive species. One-class classifiers are appealing alternatives to common supervised classifiers because they can be trained with labeled training data of the class of interest only. However, training an accurate one-class classification (OCC) model is challenging, particularly when facing a large image, a small class and few training samples. To tackle these problems we propose an iterative OCC approach. The presented approach uses a biased Support Vector Machine as core classifier. In an iterative pre-classification step a large part of the pixels not belonging to the class of interest is classified. The remaining data is classified by a final classifier with a novel model and threshold selection approach. The specific objective of our study is the classification of raised bogs in a study site in southeast Germany, using multi-seasonal RapidEye data and a small number of training samples. Results demonstrate that the iterative OCC outperforms other state of the art one-class classifiers and approaches for model selection. The study highlights the potential of the proposed approach for an efficient and improved mapping of small classes such as raised bogs. Overall the proposed approach constitutes a feasible approach and useful modification of a regular one-class classifier.},
    Doi = {10.1016/j.isprsjprs.2016.07.008},
    ISSN = {0924-2716},
    Keywords = {Remote sensing},
    Url = {http://www.sciencedirect.com/science/article/pii/S0924271616302180}
    }

  • C. Merfels and C. Stachniss, “Pose Fusion with Chain Pose Graphs for Automated Driving,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2016.
    [BibTeX] [PDF]
    @InProceedings{merfels16iros,
    Title = {Pose Fusion with Chain Pose Graphs for Automated Driving},
    Author = {Ch. Merfels and C. Stachniss},
    Booktitle = {Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
    Year = {2016},
    Url = {http://www.ipb.uni-bonn.de/pdfs/merfels16iros.pdf}
    }

  • L. Nardi and C. Stachniss, “Experience-Based Path Planning for Mobile Robots Exploiting User Preferences,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2016.
    [BibTeX] [PDF]
    @InProceedings{nardi16iros,
    Title = {Experience-Based Path Planning for Mobile Robots Exploiting User Preferences},
    Author = {L. Nardi and C. Stachniss},
    Booktitle = {Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
    Year = {2016},
    Url = {http://www.ipb.uni-bonn.de/pdfs/nardi16iros.pdf}
    }

  • S. Osswald, M. Bennewitz, W. Burgard, and C. Stachniss, “Speeding-Up Robot Exploration by Exploiting Background Information,” IEEE Robotics and Automation Letters (RA-L) and IEEE International Conference on Robotics & Automation (ICRA), 2016.
    [BibTeX] [PDF]
    @Article{osswald16ral,
    Title = {Speeding-Up Robot Exploration by Exploiting Background Information},
    Author = {S. Osswald and M. Bennewitz and W. Burgard and C. Stachniss},
    Journal = {IEEE Robotics and Automation Letters (RA-L) and IEEE International Conference on Robotics \& Automation (ICRA)},
    Year = {2016},
    Url = {http://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/osswald16ral.pdf}
    }

  • D. Perea-Ström, I. Bogoslavskyi, and C. Stachniss, “Robust Exploration and Homing for Autonomous Robots,” Robotics and Autonomous Systems, 2016.
    [BibTeX] [PDF]
    @Article{perea16jras,
    Title = {Robust Exploration and Homing for Autonomous Robots},
    Author = {D. Perea-Str{\"o}m and I. Bogoslavskyi and C. Stachniss},
    Journal = {Robotics and Autonomous Systems},
    Year = {2016},
    Url = {http://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/perea16jras.pdf}
    }

  • R. Roscher, J. Behmann, A.-K. Mahlein, J. Dupuis, H. Kuhlmann, and L. Plümer, “Detection of Disease Symptoms on Hyperspectral 3D Plant Models,” in ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2016, pp. 89-96.
    [BibTeX]
    We analyze the benefit of combining hyperspectral image information with 3D geometry information for the detection of Cercospora leaf spot disease symptoms on sugar beet plants. Besides commonly used one-class Support Vector Machines, we utilize an unsupervised sparse representation-based approach with a group sparsity prior. Geometry information is incorporated by representing each sample of interest with an inclination-sorted dictionary, which can be seen as a 1D topographic dictionary. We compare this approach with a sparse representation-based approach without geometry information and One-Class Support Vector Machines. One-Class Support Vector Machines are applied to hyperspectral data without geometry information as well as to hyperspectral images with additional pixelwise inclination information. Our results show a gain in accuracy when using geometry information besides spectral information regardless of the approach used. However, both methods have different demands on the data when applied to new test data sets. One-Class Support Vector Machines require full inclination information on test and training data, whereas the topographic dictionary approach only needs spectral information for reconstruction of test data once the dictionary is built from spectra with inclination.

    @InProceedings{Roscher2016detection,
    Title = {Detection of Disease Symptoms on Hyperspectral {3D} Plant Models},
    Author = {Roscher, R. and Behmann, J. and Mahlein, A.-K. and Dupuis, J. and Kuhlmann, H. and Pl{\"u}mer, L.},
    Booktitle = {ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2016},
    Pages = {89--96},
    Abstract = {We analyze the benefit of combining hyperspectral image information with 3D geometry information for the detection of Cercospora leaf spot disease symptoms on sugar beet plants. Besides commonly used one-class Support Vector Machines, we utilize an unsupervised sparse representation-based approach with a group sparsity prior. Geometry information is incorporated by representing each sample of interest with an inclination-sorted dictionary, which can be seen as a 1D topographic dictionary. We compare this approach with a sparse representation-based approach without geometry information and One-Class Support Vector Machines. One-Class Support Vector Machines are applied to hyperspectral data without geometry information as well as to hyperspectral images with additional pixelwise inclination information. Our results show a gain in accuracy when using geometry information besides spectral information regardless of the approach used. However, both methods have different demands on the data when applied to new test data sets. One-Class Support Vector Machines require full inclination information on test and training data, whereas the topographic dictionary approach only needs spectral information for reconstruction of test data once the dictionary is built from spectra with inclination.}
    }

  • R. Roscher, J. Behmann, A.-K. Mahlein, and L. Plümer, “On the Benefit of Topographic Dictionaries for Detecting Disease Symptoms on Hyperspectral 3D Plant Models,” in Workshop on Hyperspectral Image and Signal Processing, 2016.
    [BibTeX]
    We analyze the benefit of using topographic dictionaries for a sparse representation (SR) approach for the detection of Cercospora leaf spot disease symptoms on sugar beet plants. Topographic dictionaries are an arranged set of basis elements in which neighboring dictionary elements tend to cause similar activations in the SR approach. In this paper, the dictionary is obtained from samples of a healthy plant and partly built in a topographic way by using hyperspectral as well as geometry information, i.e. depth and inclination. It turns out that hyperspectral signals of leaves show a typical structure depending on depth and inclination and thus, both influences can be disentangled in our approach. Rare signals which do not fit into this model, e.g. leaf veins, are also captured in the dictionary in a non-topographic way. A reconstruction error index is used as an indicator by which disease symptoms can be distinguished from healthy plant regions. The advantage of the presented approach is that full spectral and geometry information is needed only once to build the dictionary, whereas the sparse reconstruction is done solely on hyperspectral information.

    @InProceedings{Roscher2016Topographic,
    Title = {On the Benefit of Topographic Dictionaries for Detecting Disease Symptoms on Hyperspectral 3D Plant Models},
    Author = {Roscher, R. and Behmann, J. and Mahlein, A.-K. and Pl{\"u}mer, L.},
    Booktitle = {Workshop on Hyperspectral Image and Signal Processing},
    Year = {2016},
    Abstract = {We analyze the benefit of using topographic dictionaries for a sparse representation (SR) approach for the detection of Cercospora leaf spot disease symptoms on sugar beet plants. Topographic dictionaries are an arranged set of basis elements in which neighbored dictionary elements tend to cause similar activations in the SR approach. In this paper, the dictionary is obtained from samples of a healthy plant and partly built in a topographic way by using hyperspectral as well as geometry information, i.e. depth and inclination. It turns out that hyperspectral signals of leaves show a typical structure depending on depth and inclination and thus, both influences can be disentangled in our approach. Rare signals which do not fit into this model, e.g. leaf veins, are also captured in the dictionary in a non-topographic way. A reconstruction error index is used as an indicator by which disease symptoms can be distinguished from healthy plant regions. The advantage of the presented approach is that full spectral and geometry information is needed only once to build the dictionary, whereas the sparse reconstruction is done solely on hyperspectral information.},
    Owner = {ribana},
    Timestamp = {2016.06.20}
    }

  • R. Roscher, S. Wenzel, and B. Waske, “Discriminative Archetypal Self-taught Learning for Multispectral Landcover Classification,” in Proc. of Pattern Recognition in Remote Sensing 2016 (PRRS), Workshop at ICPR; to appear in IEEE Xplore , 2016.
    [BibTeX] [PDF]
    Self-taught learning (STL) has become a promising paradigm to exploit unlabeled data for classification. The most commonly used approach to self-taught learning is sparse representation, in which it is assumed that each sample can be represented by a weighted linear combination of elements of an unlabeled dictionary. This paper proposes discriminative archetypal self-taught learning for the application of landcover classification, in which unlabeled discriminative archetypal samples are selected to build a powerful dictionary. Our main contribution is to present an approach which utilizes a reversible jump Markov chain Monte Carlo method to jointly determine the best set of archetypes and the number of elements to build the dictionary. Experiments are conducted using synthetic data, a multi-spectral Landsat 7 image of a study area in Ukraine and the Zurich benchmark data set comprising 20 multispectral Quickbird images. Our results confirm that the proposed approach can learn discriminative features for classification and show better classification results compared to self-taught learning with the original feature representation and compared to randomly initialized archetypal dictionaries.

    @InProceedings{Roscher2016Discriminative,
    Title = {Discriminative Archetypal Self-taught Learning for Multispectral Landcover Classification},
    Author = {Roscher, R. and Wenzel, S. and Waske, B.},
    Booktitle = {Proc. of Pattern Recognition in Remote Sensing 2016 (PRRS), Workshop at ICPR; to appear in IEEE Xplore},
    Year = {2016},
    Abstract = {Self-taught learning (STL) has become a promising paradigm to exploit unlabeled data for classification. The most commonly used approach to self-taught learning is sparse representation, in which it is assumed that each sample can be represented by a weighted linear combination of elements of an unlabeled dictionary. This paper proposes discriminative archetypal self-taught learning for the application of landcover classification, in which unlabeled discriminative archetypal samples are selected to build a powerful dictionary. Our main contribution is to present an approach which utilizes a reversible jump Markov chain Monte Carlo method to jointly determine the best set of archetypes and the number of elements to build the dictionary. Experiments are conducted using synthetic data, a multi-spectral Landsat 7 image of a study area in Ukraine and the Zurich benchmark data set comprising 20 multispectral Quickbird images. Our results confirm that the proposed approach can learn discriminative features for classification and show better classification results compared to self-taught learning with the original feature representation and compared to randomly initialized archetypal dictionaries.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Roscher2016Discriminative.pdf}
    }

  • J. Schneider, C. Eling, L. Klingbeil, H. Kuhlmann, W. Förstner, and C. Stachniss, “Fast and Effective Online Pose Estimation and Mapping for UAVs,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , 2016, pp. 4784-4791. doi:10.1109/ICRA.2016.7487682
    [BibTeX] [PDF]
    Online pose estimation and mapping in unknown environments is essential for most mobile robots. Especially autonomous unmanned aerial vehicles require good pose estimates at comparably high frequencies. In this paper, we propose an effective system for online pose and simultaneous map estimation designed for light-weight UAVs. Our system consists of two components: (1) real-time pose estimation combining RTK-GPS and IMU at 100 Hz and (2) an effective SLAM solution running at 10 Hz using image data from an omnidirectional multi-fisheye-camera system. The SLAM procedure combines spatial resection computed based on the map that is incrementally refined through bundle adjustment and combines the image data with raw GPS observations and IMU data on keyframes. The overall system yields a real-time, georeferenced pose at 100 Hz in GPS-friendly situations. Additionally, we obtain a precise pose and feature map at 10 Hz even in cases where the GPS is not observable or underconstrained. Our system has been implemented and thoroughly tested on a 5 kg copter and yields accurate and reliable pose estimation at high frequencies. We compare the point cloud obtained by our method with a model generated from a georeferenced terrestrial laser scanner.

    @InProceedings{schneider16icra,
    Title = {Fast and Effective Online Pose Estimation and Mapping for UAVs},
    Author = {J. Schneider and C. Eling and L. Klingbeil and H. Kuhlmann and W. F\"orstner and C. Stachniss},
    Booktitle = icra,
    Year = {2016},
    Pages = {4784--4791},
    Abstract = {Online pose estimation and mapping in unknown environments is essential for most mobile robots. Especially autonomous unmanned aerial vehicles require good pose estimates at comparably high frequencies. In this paper, we propose an effective system for online pose and simultaneous map estimation designed for light-weight UAVs. Our system consists of two components: (1) real-time pose estimation combining RTK-GPS and IMU at 100 Hz and (2) an effective SLAM solution running at 10 Hz using image data from an omnidirectional multi-fisheye-camera system. The SLAM procedure combines spatial resection computed based on the map that is incrementally refined through bundle adjustment and combines the image data with raw GPS observations and IMU data on keyframes. The overall system yields a real-time, georeferenced pose at 100 Hz in GPS-friendly situations. Additionally, we obtain a precise pose and feature map at 10 Hz even in cases where the GPS is not observable or underconstrained. Our system has been implemented and thoroughly tested on a 5 kg copter and yields accurate and reliable pose estimation at high frequencies. We compare the point cloud obtained by our method with a model generated from a georeferenced terrestrial laser scanner.},
    Doi = {10.1109/ICRA.2016.7487682},
    Url = {http://www.ipb.uni-bonn.de/pdfs/schneider16icra.pdf}
    }

  • J. Schneider, C. Stachniss, and W. Förstner, “Dichtes Stereo mit Fisheye-Kameras,” in UAV 2016 — Vermessung mit unbemannten Flugsystemen , 2016, pp. 247-264.
    [BibTeX]
    @InProceedings{schneider16dvw,
    Title = {Dichtes Stereo mit Fisheye-Kameras},
    Author = {J. Schneider and C. Stachniss and W. F\"orstner},
    Booktitle = {UAV 2016 -- Vermessung mit unbemannten Flugsystemen},
    Year = {2016},
    Pages = {247-264},
    Publisher = {Wi{\ss}ner Verlag},
    Series = {Schriftenreihe des DVW},
    Volume = {82}
    }

  • J. Schneider, C. Stachniss, and W. Förstner, “On the Accuracy of Dense Fisheye Stereo,” IEEE Robotics and Automation Letters (RA-L) and IEEE International Conference on Robotics & Automation (ICRA), vol. 1, iss. 1, pp. 227-234, 2016. doi:10.1109/LRA.2016.2516509
    [BibTeX] [PDF]
    Fisheye cameras offer a large field of view, which is important for several robotics applications as a larger field of view allows for covering a large area with a single image. In contrast to classical cameras, however, fisheye cameras cannot be approximated well using the pinhole camera model and this renders the computation of depth information from fisheye stereo image pairs more complicated. In this work, we analyze the combination of an epipolar rectification model for fisheye stereo cameras with existing dense methods. This has the advantage that existing dense stereo systems can be applied as a black box even with cameras that have a field of view of more than 180 deg to obtain dense disparity information. We thoroughly investigate the accuracy potential of such fisheye stereo systems using image data from our UAV. The empirical analysis is based on image pairs of a calibrated fisheye stereo camera system and two state-of-the-art algorithms for dense stereo applied to adequately rectified image pairs from fisheye stereo cameras. The canonical stochastic model for sensor points assumes homogeneous uncertainty and we generalize this model based on an empirical analysis using a test scene consisting of mutually orthogonal planes. We show (1) that the combination of adequately rectified fisheye image pairs and dense methods provides dense 3D point clouds at 6-7 Hz on our autonomous multi-copter UAV, (2) that the uncertainty of points depends on their angular distance from the optical axis, (3) how to estimate the variance component as a function of that distance, and (4) how the improved stochastic model improves the accuracy of the scene points.

    @Article{schneider16ral,
    Title = {On the Accuracy of Dense Fisheye Stereo},
    Author = {J. Schneider and C. Stachniss and W. F\"orstner},
    Journal = {IEEE Robotics and Automation Letters (RA-L) and IEEE International Conference on Robotics \& Automation (ICRA)},
    Year = {2016},
    Number = {1},
    Pages = {227-234},
    Volume = {1},
    Abstract = {Fisheye cameras offer a large field of view, which is important for several robotics applications as a larger field of view allows for covering a large area with a single image. In contrast to classical cameras, however, fisheye cameras cannot be approximated well using the pinhole camera model and this renders the computation of depth information from fisheye stereo image pairs more complicated. In this work, we analyze the combination of an epipolar rectification model for fisheye stereo cameras with existing dense methods. This has the advantage that existing dense stereo systems can be applied as a black box even with cameras that have a field of view of more than 180 deg to obtain dense disparity information. We thoroughly investigate the accuracy potential of such fisheye stereo systems using image data from our UAV. The empirical analysis is based on image pairs of a calibrated fisheye stereo camera system and two state-of-the-art algorithms for dense stereo applied to adequately rectified image pairs from fisheye stereo cameras. The canonical stochastic model for sensor points assumes homogeneous uncertainty and we generalize this model based on an empirical analysis using a test scene consisting of mutually orthogonal planes. We show (1) that the combination of adequately rectified fisheye image pairs and dense methods provides dense 3D point clouds at 6-7 Hz on our autonomous multi-copter UAV, (2) that the uncertainty of points depends on their angular distance from the optical axis, (3) how to estimate the variance component as a function of that distance, and (4) how the improved stochastic model improves the accuracy of the scene points.},
    Doi = {10.1109/LRA.2016.2516509},
    Url = {http://www.ipb.uni-bonn.de/pdfs/schneider16ral.pdf}
    }

  • T. Schubert, S. Wenzel, R. Roscher, and C. Stachniss, “Investigation of Latent Traces Using Infrared Reflectance Hyperspectral Imaging,” in ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences , 2016, pp. 97-102. doi:10.5194/isprs-annals-III-7-97-2016
    [BibTeX] [PDF]
    The detection of traces is a main task of forensic science. A potential method is hyperspectral imaging (HSI), from which we expect to capture more fluorescence effects than with common Forensic Light Sources (FLS). Specimens of blood, semen and saliva traces in several dilution steps are prepared on a cardboard substrate. As our key result we successfully make latent traces visible up to the highest available dilution (1:8000). We can attribute most of the detectability to interference of electromagnetic light with the water content of the traces in the Shortwave Infrared region of the spectrum. In a classification task we use several dimensionality reduction methods (PCA and LDA) in combination with a Maximum Likelihood (ML) classifier assuming normally distributed data. Random Forest provides a competitive approach. The classifiers retrieve the exact positions of labeled trace preparation up to the highest dilution and determine posterior probabilities. By modeling the classification with a Markov Random Field we obtain smoothed results.

    @InProceedings{Schubert2016Investigation,
    Title = {{Investigation of Latent Traces Using Infrared Reflectance Hyperspectral Imaging}},
    Author = {Schubert, Till and Wenzel, Susanne and Roscher, Ribana and Stachniss, Cyrill},
    Booktitle = {ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2016},
    Pages = {97--102},
    Volume = {III-7},
    Abstract = {The detection of traces is a main task of forensic science. A potential method is hyperspectral imaging (HSI), from which we expect to capture more fluorescence effects than with common Forensic Light Sources (FLS). Specimens of blood, semen and saliva traces in several dilution steps are prepared on a cardboard substrate. As our key result we successfully make latent traces visible up to the highest available dilution (1:8000). We can attribute most of the detectability to interference of electromagnetic light with the water content of the traces in the Shortwave Infrared region of the spectrum. In a classification task we use several dimensionality reduction methods (PCA and LDA) in combination with a Maximum Likelihood (ML) classifier assuming normally distributed data. Random Forest provides a competitive approach. The classifiers retrieve the exact positions of labeled trace preparation up to the highest dilution and determine posterior probabilities. By modeling the classification with a Markov Random Field we obtain smoothed results.},
    Doi = {10.5194/isprs-annals-III-7-97-2016},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Schubert2016Investigation.pdf}
    }

  • C. Stachniss, “Simultaneous Localization and Mapping,” in Springer Handbook of Robotics, Springer, 2016.
    [BibTeX]
    @InBook{springerbook-photo-slamchapter,
    Title = {Springer Handbook of Robotics},
    Author = {C. Stachniss},
    Chapter = {Simultaneous Localization and Mapping},
    Publisher = {Springer},
    Year = {2016},
    Timestamp = {2016.04.25}
    }

  • O. Vysotska and C. Stachniss, “Exploiting Building Information from Publicly Available Maps in Graph-Based SLAM,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , 2016.
    [BibTeX] [PDF]
    @InProceedings{vysotska16iros,
    Title = {Exploiting Building Information from Publicly Available Maps in Graph-Based SLAM},
    Author = {O. Vysotska and C. Stachniss},
    Booktitle = iros,
    Year = {2016},
    Url = {http://www.ipb.uni-bonn.de/pdfs/vysotska16iros.pdf}
    }

  • O. Vysotska and C. Stachniss, “Lazy Data Association For Image Sequences Matching Under Substantial Appearance Changes,” IEEE Robotics and Automation Letters (RA-L) and IEEE International Conference on Robotics & Automation (ICRA), vol. 1, iss. 1, pp. 1-8, 2016. doi:10.1109/LRA.2015.2512936
    [BibTeX] [PDF]
    Localization is an essential capability for mobile robots and the ability to localize in changing environments is key to robust outdoor navigation. Robots operating over extended periods of time should be able to handle substantial appearance changes such as those occurring over seasons or under different weather conditions. In this letter, we investigate the problem of efficiently coping with seasonal appearance changes in online localization. We propose a lazy data association approach for matching streams of incoming images to a reference image sequence in an online fashion. We present a search heuristic to quickly find matches between the current image sequence and a database using a data association graph. Our experiments conducted under substantial seasonal changes suggest that our approach can efficiently match image sequences while requiring a comparably small number of image-to-image comparisons.

    @Article{vysotska16ral,
    Title = {Lazy Data Association For Image Sequences Matching Under Substantial Appearance Changes},
    Author = {O. Vysotska and C. Stachniss},
    Journal = {IEEE Robotics and Automation Letters (RA-L) and IEEE International Conference on Robotics \& Automation (ICRA)},
    Year = {2016},
    Number = {1},
    Pages = {1-8},
    Volume = {1},
    Abstract = {Localization is an essential capability for mobile robots and the ability to localize in changing environments is key to robust outdoor navigation. Robots operating over extended periods of time should be able to handle substantial appearance changes such as those occurring over seasons or under different weather conditions. In this letter, we investigate the problem of efficiently coping with seasonal appearance changes in online localization. We propose a lazy data association approach for matching streams of incoming images to a reference image sequence in an online fashion. We present a search heuristic to quickly find matches between the current image sequence and a database using a data association graph. Our experiments conducted under substantial seasonal changes suggest that our approach can efficiently match image sequences while requiring a comparably small number of image-to-image comparisons.},
    Doi = {10.1109/LRA.2015.2512936},
    Timestamp = {2016.04.18},
    Url = {http://www.ipb.uni-bonn.de/pdfs/vysotska16ral-icra.pdf}
    }

  • S. Wenzel, “High-Level Facade Image Interpretation using Marked Point Processes,” PhD Thesis, 2016.
    [BibTeX] [PDF]
    In this thesis, we address facade image interpretation as one essential ingredient for the generation of highly detailed, semantically meaningful, three-dimensional city models. Given a single rectified facade image, we detect relevant facade objects such as windows, entrances, and balconies, which yields a description of the image in terms of the accurate position and size of these objects. Urban digital three-dimensional reconstruction and documentation is an active area of research with several potential applications, e.g., in the area of digital mapping for navigation, urban planning, emergency management, disaster control or the entertainment industry. A detailed building model which is not just a geometric object enriched with texture allows for semantic requests such as the number of floors or the location of balconies and entrances. Facade image interpretation is one essential step in order to yield such models. In this thesis, we propose the interpretation of facade images by combining evidence for the occurrence of individual object classes, which we derive from data, and prior knowledge which guides the image interpretation in its entirety. We present a three-step procedure which generates features that are suited to describe relevant objects, learns a representation that is suited for object detection, and enables the image interpretation using the results of object detection while incorporating prior knowledge about typical configurations of facade objects, which we learn from training data. According to these three sub-tasks, our major achievements are: We propose a novel method for facade image interpretation based on a marked point process. Therefore, we develop a model for the description of typical configurations of facade objects and propose an image interpretation system which combines evidence derived from data and prior knowledge about typical configurations of facade objects.
In order to generate evidence from data, we propose a feature type which we call shapelets. They are scale invariant and provide large distinctiveness for facade objects. Segments of lines, arcs, and ellipses serve as basic features for the generation of shapelets. Therefore, we propose a novel line simplification approach which approximates given pixel-chains by a sequence of lines, circular arcs, and elliptical arcs. Among others, it is based on an adaptation of Douglas-Peucker’s algorithm that uses circles as basic geometric elements. We evaluate each step separately. We show the effects of polyline segmentation and simplification on several images with comparably good or even better results compared to a state-of-the-art algorithm, which proves their large distinctiveness for facade objects. Using shapelets we achieve a reasonable classification performance on a challenging dataset, including intra-class variations, clutter, and scale changes. Finally, we show promising results for the facade interpretation system on several datasets and provide a qualitative evaluation which demonstrates the capability of complete and accurate detection of facade objects.

    @PhdThesis{Wenzel2016High-Level,
    Title = {High-Level Facade Image Interpretation using Marked Point Processes},
    Author = {Wenzel, Susanne},
    School = {Department of Photogrammetry, University of Bonn},
    Year = {2016},
    Abstract = {In this thesis, we address facade image interpretation as one essential ingredient for the generation of highly detailed, semantically meaningful, three-dimensional city models. Given a single rectified facade image, we detect relevant facade objects such as windows, entrances, and balconies, which yields a description of the image in terms of the accurate position and size of these objects.
    Urban digital three-dimensional reconstruction and documentation is an active area of research with several potential applications, e.g., in the area of digital mapping for navigation, urban planning, emergency management, disaster control or the entertainment industry. A detailed building model which is not just a geometric object enriched with texture allows for semantic requests such as the number of floors or the location of balconies and entrances. Facade image interpretation is one essential step in order to yield such models.
    In this thesis, we propose the interpretation of facade images by combining evidence for the occurrence of individual object classes, which we derive from data, and prior knowledge which guides the image interpretation in its entirety. We present a three-step procedure which generates features that are suited to describe relevant objects, learns a representation that is suited for object detection, and enables the image interpretation using the results of object detection while incorporating prior knowledge about typical configurations of facade objects, which we learn from training data.
    According to these three sub-tasks, our major achievements are: We propose a novel method for facade image interpretation based on a marked point process. Therefore, we develop a model for the description of typical configurations of facade objects and propose an image interpretation system which combines evidence derived from data and prior knowledge about typical configurations of facade objects. In order to generate evidence from data, we propose a feature type which we call shapelets. They are scale invariant and provide large distinctiveness for facade objects. Segments of lines, arcs, and ellipses serve as basic features for the generation of shapelets. Therefore, we propose a novel line simplification approach which approximates given pixel-chains by a sequence of lines, circular arcs, and elliptical arcs. Among others, it is based on an adaptation of Douglas-Peucker's algorithm that uses circles as basic geometric elements.
    We evaluate each step separately. We show the effects of polyline segmentation and simplification on several images with comparably good or even better results compared to a state-of-the-art algorithm, which proves their large distinctiveness for facade objects. Using shapelets we achieve a reasonable classification performance on a challenging dataset, including intra-class variations, clutter, and scale changes. Finally, we show promising results for the facade interpretation system on several datasets and provide a qualitative evaluation which demonstrates the capability of complete and accurate detection of facade objects.},
    City = {Bonn},
    Url = {http://hss.ulb.uni-bonn.de/2016/4412/4412.htm}
    }

  • S. Wenzel and W. Förstner, “Facade Interpretation Using a Marked Point Process,” in ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences , 2016, pp. 363-370. doi:10.5194/isprs-annals-III-3-363-2016
    [BibTeX] [PDF]
    Our objective is the interpretation of facade images in a top-down manner, using a Markov marked point process formulated as a Gibbs process. Given single rectified facade images we aim at the accurate detection of relevant facade objects such as windows and entrances, using prior knowledge about their possible configurations within facade images. We represent facade objects by a simplified rectangular object model and present an energy model which evaluates the agreement of a proposed configuration with the given image and the statistics about typical configurations which we learned from training data. We show promising results on different datasets and provide a quantitative evaluation, which demonstrates the capability of complete and accurate detection of facade objects.

    @InProceedings{Wenzel2016Facade,
    Title = {{Facade Interpretation Using a Marked Point Process}},
    Author = {Wenzel, Susanne and F{\"o}rstner, Wolfgang},
    Booktitle = {ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2016},
    Pages = {363--370},
    Volume = {III-3},
    Abstract = {Our objective is the interpretation of facade images in a top-down manner, using a Markov marked point process formulated as a Gibbs process. Given single rectified facade images we aim at the accurate detection of relevant facade objects such as windows and entrances, using prior knowledge about their possible configurations within facade images. We represent facade objects by a simplified rectangular object model and present an energy model which evaluates the agreement of a proposed configuration with the given image and the statistics about typical configurations which we learned from training data. We show promising results on different datasets and provide a quantitative evaluation, which demonstrates the capability of complete and accurate detection of facade objects.},
    Doi = {10.5194/isprs-annals-III-3-363-2016},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Wenzel2016Facade.pdf}
    }

  • C. Siedentop, V. Laukhart, B. Krastev, D. Kasper, A. Wenden, G. Breuel, and C. Stachniss, “Autonomous Parking Using Previous Paths,” in Advanced Microsystems for Automotive Applications 2015: Smart Systems for Green and Automated Driving. Lecture Notes in Mobility., T. Schulze, B. Müller, and G. Meyer, Eds., Springer, 2016, pp. 3-14. doi:10.1007/978-3-319-20855-8_1
    [BibTeX]
    @InBook{siedentop16lnib,
    pages = {3-14},
    title = {Autonomous Parking Using Previous Paths},
    publisher = {Springer},
    year = {2016},
    author = {C. Siedentop and V. Laukhart and B. Krastev and D. Kasper and A. Wenden and G. Breuel and C. Stachniss},
    editor = {T. Schulze and B. M{\"u}ller and G. Meyer},
    booktitle = {Advanced Microsystems for Automotive Applications 2015: Smart Systems for Green and Automated Driving. Lecture Notes in Mobility.},
    doi = {10.1007/978-3-319-20855-8_1},
    }

2015

  • N. Abdo, C. Stachniss, L. Spinello, and W. Burgard, “Collaborative Filtering for Predicting User Preferences for Organizing Objects,” arxiv–CoRR, vol. abs/1512.06362, 2015.
    [BibTeX] [PDF]
    @Article{abdo15arxiv,
    Title = {Collaborative Filtering for Predicting User Preferences for Organizing Objects},
    Author = {N. Abdo and C. Stachniss and L. Spinello and W. Burgard},
    Journal = {arxiv--CoRR},
    Year = {2015},
    Note = {arXiv:1512.06362 [cs.RO]},
    Volume = {abs/1512.06362},
    Timestamp = {2016.04.18},
    Url = {http://arxiv.org/abs/1512.06362}
    }

  • N. Abdo, C. Stachniss, L. Spinello, and W. Burgard, “Robot, Organize my Shelves! Tidying up Objects by Predicting User Preferences,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , 2015, pp. 1557-1564. doi:10.1109/ICRA.2015.7139396
    [BibTeX] [PDF]
    As service robots become more and more capable of performing useful tasks for us, there is a growing need to teach robots how we expect them to carry out these tasks. However, learning our preferences is a nontrivial problem, as many of them stem from a variety of factors including personal taste, cultural background, or common sense. Obviously, such factors are hard to formulate or model a priori. In this paper, we present a solution for tidying up objects in containers, e.g., shelves or boxes, by following user preferences. We learn the user preferences using collaborative filtering based on crowdsourced and mined data. First, we predict pairwise object preferences of the user. Then, we subdivide the objects in containers by modeling a spectral clustering problem. Our solution is easy to update, does not require complex modeling, and improves with the amount of user data. We evaluate our approach using crowdsourcing data from over 1,200 users and demonstrate its effectiveness for two tidy-up scenarios. Additionally, we show that a real robot can reliably predict user preferences using our approach.

    @InProceedings{abdo15icra,
    Title = {Robot, Organize my Shelves! Tidying up Objects by Predicting User Preferences},
    Author = {N. Abdo and C. Stachniss and L. Spinello and W. Burgard},
    Booktitle = ICRA,
    Year = {2015},
    Pages = {1557-1564},
    Abstract = {As service robots become more and more capable of performing useful tasks for us, there is a growing need to teach robots how we expect them to carry out these tasks. However, learning our preferences is a nontrivial problem, as many of them stem from a variety of factors including personal taste, cultural background, or common sense. Obviously, such factors are hard to formulate or model a priori. In this paper, we present a solution for tidying up objects in containers, e.g., shelves or boxes, by following user preferences. We learn the user preferences using collaborative filtering based on crowdsourced and mined data. First, we predict pairwise object preferences of the user. Then, we subdivide the objects in containers by modeling a spectral clustering problem. Our solution is easy to update, does not require complex modeling, and improves with the amount of user data. We evaluate our approach using crowdsourcing data from over 1,200 users and demonstrate its effectiveness for two tidy-up scenarios. Additionally, we show that a real robot can reliably predict user preferences using our approach.},
    Doi = {10.1109/ICRA.2015.7139396},
    Url = {http://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/abdo15icra.pdf}
    }
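
    The two-stage pipeline described in the abstract — predict pairwise object preferences, then group objects into containers via spectral clustering — can be illustrated with a minimal sketch. All similarity scores below are invented for illustration; in the paper they are predicted by collaborative filtering over crowdsourced data:

    ```python
    import numpy as np

    # Invented pairwise "these belong together" scores for six objects,
    # e.g. {mug, plate, bowl, pen, stapler, notebook}; the paper predicts
    # such scores rather than hand-coding them.
    S = np.array([
        [0, 9, 8, 1, 1, 0],
        [9, 0, 9, 0, 1, 1],
        [8, 9, 0, 1, 0, 1],
        [1, 0, 1, 0, 8, 9],
        [1, 1, 0, 8, 0, 8],
        [0, 1, 1, 9, 8, 0],
    ], dtype=float)

    # Graph Laplacian L = D - S of the similarity graph.
    L = np.diag(S.sum(axis=1)) - S

    # The eigenvector of the second-smallest eigenvalue (the Fiedler vector)
    # encodes the best two-way cut; its sign assigns objects to containers.
    _, vecs = np.linalg.eigh(L)
    labels = (vecs[:, 1] > 0).astype(int)
    ```

    Splitting on the sign of the Fiedler vector is the classic two-way spectral cut; the paper's formulation generalizes this to an arbitrary number of containers.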

  • I. Bogoslavskyi, L. Spinello, W. Burgard, and C. Stachniss, “Where to Park? Minimizing the Expected Time to Find a Parking Space,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , 2015, pp. 2147-2152. doi:10.1109/ICRA.2015.7139482
    [BibTeX] [PDF]
    Quickly finding a free parking spot that is close to a desired target location can be a difficult task. This holds for human drivers and autonomous cars alike. In this paper, we investigate the problem of predicting the occupancy of parking spaces and exploiting this information during route planning. We propose an MDP-based planner that considers route information as well as the occupancy probabilities of parking spaces to compute the path that minimizes the expected total time for finding an unoccupied parking space and for walking from the parking location to the target destination. We evaluated our system on real world data gathered over several days in a real parking lot. We furthermore compare our approach to three parking strategies and show that our method outperforms the alternative behaviors.

    @InProceedings{bogoslavskyi15icra,
    Title = {Where to Park? Minimizing the Expected Time to Find a Parking Space},
    Author = {I. Bogoslavskyi and L. Spinello and W. Burgard and C. Stachniss},
    Booktitle = ICRA,
    Year = {2015},
    Pages = {2147-2152},
    Abstract = {Quickly finding a free parking spot that is close to a desired target location can be a difficult task. This holds for human drivers and autonomous cars alike. In this paper, we investigate the problem of predicting the occupancy of parking spaces and exploiting this information during route planning. We propose an MDP-based planner that considers route information as well as the occupancy probabilities of parking spaces to compute the path that minimizes the expected total time for finding an unoccupied parking space and for walking from the parking location to the target destination. We evaluated our system on real world data gathered over several days in a real parking lot. We furthermore compare our approach to three parking strategies and show that our method outperforms the alternative behaviors.},
    Doi = {10.1109/ICRA.2015.7139482},
    Timestamp = {2015.06.29},
    Url = {http://www.ipb.uni-bonn.de/pdfs/bogoslavskyi15icra.pdf}
    }
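
    The quantity the planner minimizes — the expected total time of a parking strategy — can be illustrated with a toy calculation. All times and occupancy probabilities below are invented, and a fixed visiting order stands in for the paper's MDP policy, which is computed from occupancy estimates learned on real parking-lot data:

    ```python
    from itertools import permutations

    # Hypothetical parking spots: time to drive the leg to the spot,
    # probability that it is free, and walking time to the destination.
    spots = [
        (30.0, 0.2, 60.0),
        (45.0, 0.6, 90.0),
        (70.0, 0.9, 150.0),
    ]

    # A far-away fallback lot that is always free: (drive_s, walk_s).
    GUARANTEED = (120.0, 300.0)

    def expected_time(order):
        """Expected total time when trying spots in the given order and
        falling back to the guaranteed lot if every spot is taken."""
        total, p_reach = 0.0, 1.0
        for drive, p_free, walk in order:
            # The drive is paid whenever this spot is reached; the walk
            # only if the spot turns out to be free.
            total += p_reach * (drive + p_free * walk)
            p_reach *= 1.0 - p_free
        g_drive, g_walk = GUARANTEED
        total += p_reach * (g_drive + g_walk)
        return total

    # Brute-force search over visiting orders for this tiny instance.
    best = min(permutations(spots), key=expected_time)
    ```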

  • F. M. Carlucci, L. Nardi, L. Iocchi, and D. Nardi, “Explicit Representation of Social Norms for Social Robots,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , 2015, pp. 4191-4196. doi:10.1109/IROS.2015.7353970
    [BibTeX] [PDF]
    As robots are expected to become more and more available in everyday environments, interaction with humans is assuming a central role. Robots working in populated environments are thus expected to demonstrate socially acceptable behaviors and to follow social norms. However, most of the recent works in this field do not address the problem of explicit representation of the social norms and their integration in the reasoning and the execution components of a cognitive robot. In this paper, we address the design of robotic systems that support some social behavior by implementing social norms. We present a framework for planning and execution of social plans, in which social norms are described in a domain and language independent form. A full implementation of the proposed framework is described and tested in a realistic scenario with non-expert and non-recruited users.

    @InProceedings{carlucci15iros,
    Title = {Explicit Representation of Social Norms for Social Robots},
    Author = {F.M. Carlucci and L. Nardi and L. Iocchi and D. Nardi},
    Booktitle = IROS,
    Year = {2015},
    Pages = {4191-4196},
    Abstract = {As robots are expected to become more and more available in everyday environments, interaction with humans is assuming a central role. Robots working in populated environments are thus expected to demonstrate socially acceptable behaviors and to follow social norms. However, most of the recent works in this field do not address the problem of explicit representation of the social norms and their integration in the reasoning and the execution components of a cognitive robot. In this paper, we address the design of robotic systems that support some social behavior by implementing social norms. We present a framework for planning and execution of social plans, in which social norms are described in a domain and language independent form. A full implementation of the proposed framework is described and tested in a realistic scenario with non-expert and non-recruited users.},
    Doi = {10.1109/IROS.2015.7353970},
    Timestamp = {2016.04.19},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Carlucci2015Explicit.pdf}
    }

  • T. Naseer, M. Ruhnke, L. Spinello, C. Stachniss, and W. Burgard, “Robust Visual SLAM Across Seasons,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , 2015, pp. 2529-2535. doi:10.1109/IROS.2015.7353721
    [BibTeX] [PDF]
    In this paper, we present an appearance-based visual SLAM approach that focuses on detecting loop closures across seasons. Given two image sequences, our method first extracts one descriptor per image for both sequences using a deep convolutional neural network. Then, we compute a similarity matrix by comparing each image of a query sequence with a database. Finally, based on the similarity matrix, we formulate a flow network problem and compute matching hypotheses between sequences. In this way, our approach can handle partially matching routes, loops in the trajectory and different speeds of the robot. With a matching hypothesis as loop closure information and the odometry information of the robot, we formulate a graph based SLAM problem and compute a joint maximum likelihood trajectory.

    @InProceedings{naseer15iros,
    Title = {Robust Visual SLAM Across Seasons},
    Author = {Naseer, Tayyab and Ruhnke, Michael and Spinello, Luciano and Stachniss, Cyrill and Burgard, Wolfram},
    Booktitle = IROS,
    Year = {2015},
    Pages = {2529-2535},
    Abstract = {In this paper, we present an appearance-based visual SLAM approach that focuses on detecting loop closures across seasons. Given two image sequences, our method first extracts one descriptor per image for both sequences using a deep convolutional neural network. Then, we compute a similarity matrix by comparing each image of a query sequence with a database. Finally, based on the similarity matrix, we formulate a flow network problem and compute matching hypotheses between sequences. In this way, our approach can handle partially matching routes, loops in the trajectory and different speeds of the robot. With a matching hypothesis as loop closure information and the odometry information of the robot, we formulate a graph based SLAM problem and compute a joint maximum likelihood trajectory.},
    Doi = {10.1109/IROS.2015.7353721},
    Timestamp = {2016.04.19},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Naseer2015Robust.pdf}
    }
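
    The per-image comparison step — one descriptor per image, compared across the two sequences to form a similarity matrix — can be sketched in a few lines. The descriptors below are random stand-ins for the CNN features used in the paper, with a planted overlap between the sequences, and a greedy per-image argmax replaces the paper's flow-network matching, which additionally enforces sequence consistency:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in descriptors: the query sequence revisits database
    # images 2..5 under slight "seasonal" perturbation.
    db = rng.normal(size=(8, 16))
    query = db[2:6] + 0.01 * rng.normal(size=(4, 16))

    def cosine_similarity_matrix(A, B):
        """Row-wise cosine similarities between two descriptor sets."""
        A = A / np.linalg.norm(A, axis=1, keepdims=True)
        B = B / np.linalg.norm(B, axis=1, keepdims=True)
        return A @ B.T

    S = cosine_similarity_matrix(query, db)

    # Greedy matching hypothesis: best database image per query image.
    matches = S.argmax(axis=1)
    ```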

  • D. Perea-Ström, F. Nenci, and C. Stachniss, “Predictive Exploration Considering Previously Mapped Environments,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , 2015, pp. 2761-2766. doi:10.1109/ICRA.2015.7139574
    [BibTeX] [PDF]
    The ability to explore an unknown environment is an important prerequisite for building truly autonomous robots. The central decision that a robot needs to make when exploring an unknown environment is to select the next view point(s) for gathering observations. In this paper, we consider the problem of how to select view points that support the underlying mapping process. We propose a novel approach that makes predictions about the structure of the environments in the unexplored areas by relying on maps acquired previously. Our approach seeks to find similarities between the current surroundings of the robot and previously acquired maps stored in a database in order to predict how the environment may expand in the unknown areas. This allows us to predict potential future loop closures early. This knowledge is used in the view point selection to actively close loops and in this way reduce the uncertainty in the robot’s belief. We implemented and tested the proposed approach. The experiments indicate that our method improves the ability of a robot to explore challenging environments and improves the quality of the resulting maps.

    @InProceedings{perea15icra,
    Title = {Predictive Exploration Considering Previously Mapped Environments},
    Author = {D. Perea-Str{\"o}m and F. Nenci and C. Stachniss},
    Booktitle = ICRA,
    Year = {2015},
    Pages = {2761-2766},
    Abstract = {The ability to explore an unknown environment is an important prerequisite for building truly autonomous robots. The central decision that a robot needs to make when exploring an unknown environment is to select the next view point(s) for gathering observations. In this paper, we consider the problem of how to select view points that support the underlying mapping process. We propose a novel approach that makes predictions about the structure of the environments in the unexplored areas by relying on maps acquired previously. Our approach seeks to find similarities between the current surroundings of the robot and previously acquired maps stored in a database in order to predict how the environment may expand in the unknown areas. This allows us to predict potential future loop closures early. This knowledge is used in the view point selection to actively close loops and in this way reduce the uncertainty in the robot's belief. We implemented and tested the proposed approach. The experiments indicate that our method improves the ability of a robot to explore challenging environments and improves the quality of the resulting maps.},
    Doi = {10.1109/ICRA.2015.7139574},
    Url = {http://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/perea15icra.pdf}
    }

  • R. Roscher, C. Römer, B. Waske, and L. Plümer, “Landcover classification with self-taught learning on archetypal dictionaries,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) , 2015, pp. 2358-2361. doi:10.1109/IGARSS.2015.7326282
    [BibTeX]
    @InProceedings{Roscher2015Selftaught,
    Title = {Landcover classification with self-taught learning on archetypal dictionaries},
    Author = {Roscher, R. and R\"omer, C. and Waske, B. and Pl\"umer, L.},
    Booktitle = {{IEEE} International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2015},
    Month = {July},
    Pages = {2358-2361},
    Doi = {10.1109/IGARSS.2015.7326282}
    }

  • R. Roscher, B. Uebbing, and J. Kusche, “Spatio-temporal altimeter waveform retracking via sparse representation and conditional random fields,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) , 2015, pp. 1234-1237. doi:10.1109/IGARSS.2015.7325996
    [BibTeX]
    @InProceedings{Roscher2015Altimeter,
    Title = {Spatio-temporal altimeter waveform retracking via sparse representation and conditional random fields},
    Author = {Roscher, R. and Uebbing, B. and Kusche, J.},
    Booktitle = {{IEEE} International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2015},
    Month = {July},
    Pages = {1234-1237},
    Doi = {10.1109/IGARSS.2015.7325996}
    }

  • R. Roscher and B. Waske, “Shapelet-Based Sparse Representation for Landcover Classification of Hyperspectral Images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, iss. 3, pp. 1623-1634, 2015. doi:10.1109/TGRS.2015.2484619
    [BibTeX]
    This paper presents a sparse-representation-based classification approach with a novel dictionary construction procedure. By using the constructed dictionary, sophisticated prior knowledge about the spatial nature of the image can be integrated. The approach is based on the assumption that each image patch can be factorized into characteristic spatial patterns, also called shapelets, and patch-specific spectral information. A set of shapelets is learned in an unsupervised way, and spectral information is embodied by training samples. A combination of shapelets and spectral information is represented in an undercomplete spatial-spectral dictionary for each individual patch, where the elements of the dictionary are linearly combined to a sparse representation of the patch. The patch-based classification is obtained by means of the representation error. Experiments are conducted on three well-known hyperspectral image data sets. They illustrate that our proposed approach shows superior results in comparison to sparse-representation-based classifiers that use only limited spatial information and behaves competitively with or better than state-of-the-art classifiers utilizing spatial information and kernelized sparse-representation-based classifiers.

    @Article{Roscher2015Shapelet,
    Title = {Shapelet-Based Sparse Representation for Landcover Classification of Hyperspectral Images},
    Author = {Roscher, R. and Waske, B.},
    Journal = {IEEE Transactions on Geoscience and Remote Sensing},
    Year = {2015},
    Number = {3},
    Pages = {1623--1634},
    Volume = {54},
    Abstract = {This paper presents a sparse-representation-based classification approach with a novel dictionary construction procedure. By using the constructed dictionary, sophisticated prior knowledge about the spatial nature of the image can be integrated. The approach is based on the assumption that each image patch can be factorized into characteristic spatial patterns, also called shapelets, and patch-specific spectral information. A set of shapelets is learned in an unsupervised way, and spectral information is embodied by training samples. A combination of shapelets and spectral information is represented in an undercomplete spatial-spectral dictionary for each individual patch, where the elements of the dictionary are linearly combined to a sparse representation of the patch. The patch-based classification is obtained by means of the representation error. Experiments are conducted on three well-known hyperspectral image data sets. They illustrate that our proposed approach shows superior results in comparison to sparse-representation-based classifiers that use only limited spatial information and behaves competitively with or better than state-of-the-art classifiers utilizing spatial information and kernelized sparse-representation-based classifiers.},
    Doi = {10.1109/TGRS.2015.2484619},
    ISSN = {0196-2892}
    }
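
    The decision rule at the heart of the approach — classify a patch by which dictionary reconstructs it with the smallest representation error — can be illustrated with a toy example. Ordinary least squares stands in for the sparse solver, and the dictionaries here are random; the paper builds them from learned shapelets combined with training spectra:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy per-class dictionaries with atoms as columns.
    D = {
        "water": rng.normal(size=(20, 3)),
        "urban": rng.normal(size=(20, 3)),
    }

    def classify(y):
        """Pick the class whose dictionary reconstructs y with the
        least residual; least squares replaces the sparse coder."""
        best_label, best_err = None, np.inf
        for label, atoms in D.items():
            coeff, *_ = np.linalg.lstsq(atoms, y, rcond=None)
            err = np.linalg.norm(y - atoms @ coeff)
            if err < best_err:
                best_label, best_err = label, err
        return best_label

    # A patch synthesized from the "water" atoms is reconstructed
    # essentially perfectly by that dictionary and poorly by the other.
    sample = D["water"] @ np.array([0.5, -1.0, 0.2])
    ```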

  • T. Schubert, “Investigation of Latent Traces Using Hyperspectral Imaging,” Bachelor Thesis, Institute of Photogrammetry, University of Bonn, 2015.
    [BibTeX]
    The detection of traces is a main task of forensic science. A potential method is hyperspectral imaging (HSI) which is the process of recording many narrowband intensity images across a wide range of the light spectrum. From this technique we expect to capture more fluorescence effects than with common Forensic Light Sources (FLS). Specimen of blood, semen and saliva traces in several dilution steps are prepared on cardboard substrate. The hyperspectral images are acquired by scanning with two line sensors of visible and infrared light over the specimen. After an image normalization step we obtain reflectance values arranged as an image plane for each wavelength. The atomic process is initiated by excitation with illumination light such that absorption and elastic scattering cause emission of trace-specific light. In a spectroscopic investigation we can attribute most of the trace-specific signal to chemical interaction of infrared light with the water content of the traces. Image planes (i.e. band images) at infrared wavelengths allow detectability to a much higher level than light of the visible region. Ratio images provide definition of new features which can be established as spectral indices. By these arithmetic operations with image planes we can account for variations in the tissue and make traces even more highlighted towards the fabric. The spectral regions which we obtain at a maximal measure of discriminative power indicate regions known as absorption peaks for biological components such as hemoglobin and water. In this thesis we make latent traces, i.e. non-visible for the human eye, visible up to highest available dilution (1:8000) in infrared data. Hyperspectral images in the region of visible light achieve to detect traces only marginally beyond visibility by human eye. In order to evaluate the detectability of traces we exploit several classifiers to labeled data. 
We use several dimensionality reduction methods (PCA, LDA, band image and ratio image) in combination with a Maximum Likelihood (ML) classifier assuming normally distributed data. Random Forest builds a competitive approach. In the classification task we retrieve the exact positions of labeled trace preparation up to highest dilution. PCA prior to LDA and ML decision function achieves best results for classifying trace against background. Random Forest is preferable in multiclass classification. On the contrary, neither spectral indices nor classification approaches yield adequate achievements for the application of methods learned on labeled data to other images of specimen with arbitrary fabrics. Customized preprocessing and dimensionality reduction methods achieve no significant reduction of background influence. The proportion of trace-specific signal in the data is not sufficient for this task. We suggest supervision of the illumination light to pointedly initiate trace-specific interference. Concerning field usage of HSI we prefer area-scanning cameras (i.e. image plane acquisition with spectral scanning by a wavelength-tunable bandpass filter). Band and ratio images at established spectral indices qualify for live view screening on an external screen.

    @MastersThesis{Schubert2015,
    Title = {Investigation of Latent Traces Using Hyperspectral Imaging},
    Author = {Till Schubert},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2015},
    Type = {bachelor thesis},
    Abstract = {The detection of traces is a main task of forensic science. A potential method is hyperspectral imaging (HSI) which is the process of recording many narrowband intensity images across a wide range of the light spectrum. From this technique we expect to capture more fluorescence effects than with common Forensic Light Sources (FLS). Specimen of blood, semen and saliva traces in several dilution steps are prepared on cardboard substrate. The hyperspectral images are acquired by scanning with two line sensors of visible and infrared light over the specimen. After an image normalization step we obtain reflectance values arranged as an image plane for each wavelength. The atomic process is initiated by excitation with illumination light such that absorption and elastic scattering cause emission of trace-specific light. In a spectroscopic investigation we can attribute most of the trace-specific signal to chemical interaction of infrared light with the water content of the traces. Image planes (i.e. band images) at infrared wavelengths allow detectability to a much higher level than light of the visible region. Ratio images provide definition of new features which can be established as spectral indices. By these arithmetic operations with image planes we can account for variations in the tissue and make traces even more highlighted towards the fabric. The spectral regions which we obtain at a maximal measure of discriminative power indicate regions known as absorption peaks for biological components such as hemoglobin and water. In this thesis we make latent traces, i.e. non-visible for the human eye, visible up to highest available dilution (1:8000) in infrared data. Hyperspectral images in the region of visible light achieve to detect traces only marginally beyond visibility by human eye. In order to evaluate the detectability of traces we exploit several classifiers to labeled data. 
We use several dimensionality reduction methods (PCA, LDA, band image and ratio image) in combination with a Maximum Likelihood (ML) classifier assuming normally distributed data. Random Forest builds a competitive approach. In the classification task we retrieve the exact positions of labeled trace preparation up to highest dilution. PCA prior to LDA and ML decision function achieves best results for classifying trace against background. Random Forest is preferable in multiclass classification. On the contrary, neither spectral indices nor classification approaches yield adequate achievements for the application of methods learned on labeled data to other images of specimen with arbitrary fabrics. Customized preprocessing and dimensionality reduction methods achieve no significant reduction of background influence. The proportion of trace-specific signal in the data is not sufficient for this task. We suggest supervision of the illumination light to pointedly initiate trace-specific interference. Concerning field usage of HSI we prefer area-scanning cameras (i.e. image plane acquisition with spectral scanning by a wavelength-tunable bandpass filter). Band and ratio images at established spectral indices qualify for live view screening on an external screen.},
    Timestamp = {2015.09.28}
    }

  • O. Vysotska, T. Naseer, L. Spinello, W. Burgard, and C. Stachniss, “Efficient and Effective Matching of Image Sequences Under Substantial Appearance Changes Exploiting GPS Prior,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , 2015, pp. 2774-2779. doi:10.1109/ICRA.2015.7139576
    [BibTeX] [PDF]
    The ability to localize a robot is an important capability and matching of observations under substantial changes is a prerequisite for robust long-term operation. This paper investigates the problem of efficiently coping with seasonal changes in image data. We present an extension of a recent approach [15] to visual image matching using sequence information. Our extension allows for exploiting GPS priors in the matching process to overcome the main computational bottleneck of the previous method and to handle loops within the image sequences. We present an experimental evaluation using real world data containing substantial seasonal changes and show that our approach outperforms the previous method in case a noisy GPS pose prior is available.

    @InProceedings{vysotska15icra,
    Title = {Efficient and Effective Matching of Image Sequences Under Substantial Appearance Changes Exploiting GPS Prior},
    Author = {O. Vysotska and T. Naseer and L. Spinello and W. Burgard and C. Stachniss},
    Booktitle = ICRA,
    Year = {2015},
    Pages = {2774-2779},
    Abstract = {The ability to localize a robot is an important capability and matching of observations under substantial changes is a prerequisite for robust long-term operation. This paper investigates the problem of efficiently coping with seasonal changes in image data. We present an extension of a recent approach [15] to visual image matching using sequence information. Our extension allows for exploiting GPS priors in the matching process to overcome the main computational bottleneck of the previous method and to handle loops within the image sequences. We present an experimental evaluation using real world data containing substantial seasonal changes and show that our approach outperforms the previous method in case a noisy GPS pose prior is available.},
    Doi = {10.1109/ICRA.2015.7139576},
    Timestamp = {2015.06.29},
    Url = {http://www.ipb.uni-bonn.de/pdfs/vysotska15icra.pdf}
    }
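
    The efficiency gain from the GPS prior comes from scoring each query image only against database images near the prior position, rather than against the full database. A minimal sketch with random stand-in descriptors and an index-based prior (the paper uses actual GPS poses and sequence information):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # 100 database images; the 10 query images revisit db images 40..49.
    db = rng.normal(size=(100, 16))
    query = db[40:50] + 0.01 * rng.normal(size=(10, 16))

    def match_with_prior(query, db, prior_idx, radius=5):
        """Score each query image only against database images whose
        index lies within `radius` of its (noisy) prior index."""
        matches = []
        for q, p in zip(query, prior_idx):
            lo, hi = max(0, p - radius), min(len(db), p + radius + 1)
            cand = db[lo:hi]
            sims = cand @ q / (np.linalg.norm(cand, axis=1) * np.linalg.norm(q))
            matches.append(lo + int(sims.argmax()))
        return matches

    # A prior that is consistently off by two frames still suffices,
    # since the true match stays inside the candidate window.
    noisy_prior = [i + 2 for i in range(40, 50)]
    matches = match_with_prior(query, db, noisy_prior)
    ```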

  • O. Vysotska and C. Stachniss, “Lazy Sequences Matching Under Substantial Appearance Changes,” in Workshop on Visual Place Recognition in Changing Environments at the IEEE Int. Conf. on Robotics & Automation (ICRA) , 2015.
    [BibTeX] [PDF]
    @InProceedings{vysotska15icraws,
    Title = {Lazy Sequences Matching Under Substantial Appearance Changes},
    Author = {O. Vysotska and C. Stachniss},
    Booktitle = {Workshop on Visual Place Recognition in Changing Environments at the IEEE Int. Conf. on Robotics \& Automation (ICRA)},
    Year = {2015},
    Timestamp = {2015.06.29},
    Url = {http://www.ipb.uni-bonn.de/pdfs/vysotska15icra-ws.pdf}
    }

  • C. Siedentop, R. Heinze, D. Kasper, G. Breuel, and C. Stachniss, “Path-Planning for Autonomous Parking with Dubins Curves,” in Proceedings of the Workshop Fahrerassistenzsysteme , 2015.
    [BibTeX]
    @InProceedings{siedentop15fas,
    author = {C. Siedentop and R. Heinze and D. Kasper and G. Breuel and C. Stachniss},
    title = {Path-Planning for Autonomous Parking with Dubins Curves},
    booktitle = {Proceedings of the Workshop Fahrerassistenzsysteme},
    year = {2015},
    }

2014

  • B. Frank, C. Stachniss, R. Schmedding, M. Teschner, and W. Burgard, “Learning object deformation models for robot motion planning,” Robotics and Autonomous Systems, 2014. doi:10.1016/j.robot.2014.04.005
    [BibTeX] [PDF]
    @Article{Frank2014,
    Title = {Learning object deformation models for robot motion planning},
    Author = {Barbara Frank and Cyrill Stachniss and R\"{u}diger Schmedding and Matthias Teschner and Wolfram Burgard},
    Journal = {Robotics and Autonomous Systems},
    Year = {2014},
    Doi = {10.1016/j.robot.2014.04.005},
    ISSN = {0921-8890},
    Keywords = {Mobile robots},
    Url = {http://www.sciencedirect.com/science/article/pii/S0921889014000797}
    }

  • N. Abdo, L. Spinello, W. Burgard, and C. Stachniss, “Inferring What to Imitate in Manipulation Actions by Using a Recommender System,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Hong Kong, China, 2014.
    [BibTeX] [PDF]
    @InProceedings{Abdo2014,
    Title = {Inferring What to Imitate in Manipulation Actions by Using a Recommender System},
    Author = {N. Abdo and L. Spinello and W. Burgard and C. Stachniss},
    Booktitle = ICRA,
    Year = {2014},
    Address = {Hong Kong, China},
    Timestamp = {2014.04.24},
    Url = {http://www2.informatik.uni-freiburg.de/~stachnis/pdf/abdo14icra.pdf}
    }

  • P. Agarwal, W. Burgard, and C. Stachniss, “Helmert’s and Bowie’s Geodetic Mapping Methods and Their Relation to Graph-Based SLAM,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Hong Kong, China, 2014.
    [BibTeX] [PDF]
    @InProceedings{Agarwal2014,
    Title = {Helmert's and Bowie's Geodetic Mapping Methods and Their Relation to Graph-Based SLAM},
    Author = {P. Agarwal and W. Burgard and C. Stachniss},
    Booktitle = ICRA,
    Year = {2014},
    Address = {Hong Kong, China},
    Timestamp = {2014.04.24},
    Url = {http://www.lifelong-navigation.eu/files/agarwal14bicra.pdf}
    }

  • P. Agarwal, W. Burgard, and C. Stachniss, “A Survey of Geodetic Approaches to Mapping and the Relationship to Graph-Based SLAM,” IEEE Robotics and Automation Magazine, vol. 21, pp. 63-80, 2014. doi:10.1109/MRA.2014.2322282
    [BibTeX] [PDF]
    The ability to simultaneously localize a robot and build a map of the environment is central to most robotics applications, and the problem is often referred to as simultaneous localization and mapping (SLAM). Robotics researchers have proposed a large variety of solutions allowing robots to build maps and use them for navigation. In addition, the geodetic community has addressed large-scale map building for centuries, computing maps that span across continents. These large-scale mapping processes had to deal with several challenges that are similar to those of the robotics community. In this article, we explain key geodetic map building methods that we believe are relevant for robot mapping. We also aim at providing a geodetic perspective on current state-of-the-art SLAM methods and identifying similarities both in terms of challenges faced and the solutions proposed by both communities. The central goal of this article is to connect both fields and enable future synergies between them.

    @Article{Agarwal2014b,
    Title = {A Survey of Geodetic Approaches to Mapping and the Relationship to Graph-Based SLAM},
    Author = {Pratik Agarwal and Wolfram Burgard and Cyrill Stachniss},
    Journal = {IEEE Robotics and Automation Magazine},
    Year = {2014},
    Pages = {63-80},
    Volume = {21},
    Abstract = {The ability to simultaneously localize a robot and build a map of the environment is central to most robotics applications, and the problem is often referred to as simultaneous localization and mapping (SLAM). Robotics researchers have proposed a large variety of solutions allowing robots to build maps and use them for navigation. In addition, the geodetic community has addressed large-scale map building for centuries, computing maps that span across continents. These large-scale mapping processes had to deal with several challenges that are similar to those of the robotics community. In this article, we explain key geodetic map building methods that we believe are relevant for robot mapping. We also aim at providing a geodetic perspective on current state-of-the-art SLAM methods and identifying similarities both in terms of challenges faced and the solutions proposed by both communities. The central goal of this article is to connect both fields and enable future synergies between them.},
    Doi = {10.1109/MRA.2014.2322282},
    Timestamp = {2014.09.18},
    Url = {http://www2.informatik.uni-freiburg.de/~stachnis/pdf/agarwal14ram-preprint.pdf}
    }

  • P. Agarwal, G. Grisetti, G. D. Tipaldi, L. Spinello, W. Burgard, and C. Stachniss, “Experimental Analysis of Dynamic Covariance Scaling for Robust Map Optimization Under Bad Initial Estimates,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Hong Kong, China, 2014.
    [BibTeX] [PDF]
    @InProceedings{Agarwal2014a,
    Title = {Experimental Analysis of Dynamic Covariance Scaling for Robust Map Optimization Under Bad Initial Estimates},
    Author = {P. Agarwal and G. Grisetti and G.D. Tipaldi and L. Spinello and W. Burgard and C. Stachniss},
    Booktitle = ICRA,
    Year = {2014},
    Address = {Hong Kong, China},
    Timestamp = {2014.04.24},
    Url = {http://www2.informatik.uni-freiburg.de/~stachnis/pdf/agarwal14icra_dcs.pdf}
    }

  • M. Flick, “Localisation Using Open Street Map Data,” Bachelor Thesis, Institute of Photogrammetry, University of Bonn, 2014.
    [BibTeX]
    The goal of this project is to build an online localisation system that localises a vehicle using OpenStreetMap data and a record of the driven path. The Global Positioning System (GPS) can only be used reliably when the satellite signal is received clearly, i.e. outside of buildings and away from interfering signals, so it is only accessible to a certain group of users. Furthermore, it can only be used under the conditions of the US government, which maintains the GPS system. Our project develops an alternative for localisation using independent data: OpenStreetMap data and measurements of the driven vehicle, i.e. the odometry. The approach uses a particle filter to localise the vehicle, a sampling approach that approximates complex posterior densities over state spaces. Samples, called particles, are resampled according to the likelihood of the vehicle being at their position. To compute this likelihood, each particle is weighted: a chamfer matching function compares the driven odometry to the OpenStreetMap data and finds the best matches. Chamfer matching evaluates how well a set of edges matches a query image; the more similar the current odometry is to the query image, the better the match. The importance of a particle is measured by the Euclidean distance to its nearest match. The particle filter loops over time; with each measurement update the particles move according to the motion update and concentrate on the most likely position. Provided that this approach works in real time and with high accuracy, it is usable on its own with freely accessible and up-to-date geodata. For this purpose the vehicle tracks its driven path, for example by wheel odometry, and both the track and the OpenStreetMap data are evaluated during the runtime of the program to obtain the current position.
We show that the particle filter compensates for measurement uncertainties by performing a robust sampling update. A novel aspect of this approach is that the type of odometry does not matter: the chamfer matching, together with the robustness of the particle filter, can overcome differences between odometry sources. We evaluate the localised position of the vehicle by comparing it to the GPS position to show the difference and the accuracy. We also compare the runtime efficiency of GPS to that of the combined particle filter and chamfer matching approach.

    @MastersThesis{Flick2014Localisation,
    Title = {Localisation Using Open Street Map Data},
    Author = {Mareike Flick},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2014},
    Type = {bachelor thesis},
    Abstract = {The goal of this project is to build an online localisation system that localises a vehicle using OpenStreetMap data and a record of the driven path. The Global Positioning System (GPS) can only be used reliably when the satellite signal is received clearly, i.e. outside of buildings and away from interfering signals, so it is only accessible to a certain group of users. Furthermore, it can only be used under the conditions of the US government, which maintains the GPS system. Our project develops an alternative for localisation using independent data: OpenStreetMap data and measurements of the driven vehicle, i.e. the odometry. The approach uses a particle filter to localise the vehicle, a sampling approach that approximates complex posterior densities over state spaces. Samples, called particles, are resampled according to the likelihood of the vehicle being at their position. To compute this likelihood, each particle is weighted: a chamfer matching function compares the driven odometry to the OpenStreetMap data and finds the best matches. Chamfer matching evaluates how well a set of edges matches a query image; the more similar the current odometry is to the query image, the better the match. The importance of a particle is measured by the Euclidean distance to its nearest match. The particle filter loops over time; with each measurement update the particles move according to the motion update and concentrate on the most likely position. Provided that this approach works in real time and with high accuracy, it is usable on its own with freely accessible and up-to-date geodata. For this purpose the vehicle tracks its driven path, for example by wheel odometry, and both the track and the OpenStreetMap data are evaluated during the runtime of the program to obtain the current position.
We show that the particle filter compensates for measurement uncertainties by performing a robust sampling update. A novel aspect of this approach is that the type of odometry does not matter: the chamfer matching, together with the robustness of the particle filter, can overcome differences between odometry sources. We evaluate the localised position of the vehicle by comparing it to the GPS position to show the difference and the accuracy. We also compare the runtime efficiency of GPS to that of the combined particle filter and chamfer matching approach.},
    Timestamp = {2015.01.19}
    }
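
The particle weighting described in this abstract, i.e. scoring each particle by how well the driven odometry track matches OpenStreetMap edges, can be sketched roughly as follows. This is a minimal illustration under simplifying assumptions, not the thesis code: the chamfer distance is reduced to a mean nearest-point distance between 2D point sets, particles are plain 2D offsets without orientation, and all names and data are hypothetical.

```python
import numpy as np

def chamfer_distance(track, map_edges):
    """Mean distance from each odometry track point to its nearest map edge point."""
    d = np.linalg.norm(track[:, None, :] - map_edges[None, :, :], axis=2)
    return d.min(axis=1).mean()

def update_weights(particles, track, map_edges, sigma=1.0):
    """Weight each particle by how well the track, shifted to the particle's
    position, matches the map edges (smaller chamfer distance = higher weight)."""
    weights = np.array([
        np.exp(-chamfer_distance(track + p, map_edges) ** 2 / (2 * sigma ** 2))
        for p in particles
    ])
    return weights / weights.sum()

def resample(particles, weights, rng):
    """Importance resampling: draw particles in proportion to their weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```

A localisation loop would then alternate a motion update of the particles (applying the odometry increment plus noise) with `update_weights` and `resample`, so that the particle set concentrates on the most likely position.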

  • K. Franz, “Bestimmung der Trajektorie des ATV-4 bei der Separation von der Ariane-5 Oberstufe aus einer Stereo-Bildsequenz,” Bachelor Thesis, Institute of Photogrammetry, University of Bonn, 2014.
    [BibTeX]
    The successful launch of the spacecraft ATV-4 on June 5, 2013 by the German Aerospace Center (DLR) and the European Space Agency (ESA) is of particular interest from a photogrammetric point of view. For the first time, the separation process and the first seconds of space flight of an automated transfer vehicle could be recorded, tracked and supervised by mounting a stereo camera system on the Ariane rocket. This monitoring task includes the reconstruction of the ATV’s trajectory from the stereo image sequence. As the main goal of this bachelor thesis we developed a routine that derives this trajectory. Our approach is based on object tracking with a KLT tracker. First, interest points are detected in a region of interest and tracked over time. The homologous points in the stereo partner image are extracted by template matching with subpixel precision. Subsequently, the object coordinates are calculated by spatial intersection. From the resulting 3D point clouds the motion can be computed. However, numerous analyses have shown that the reconstruction of the ATV’s trajectory is insufficient due to the mechanical constellation and a missing photogrammetric calibration. To overcome this and to obtain more suitable data, it was necessary to create a test scenario, whose data allow a more realistic validation of our approach. The recordings of the test scenario were made with a stereo camera system comparable to the DLR configuration, but with a photogrammetric calibration performed for this system. In addition, long distances to the camera system were avoided, since such distances cause problems in the DLR sequence, where the stereoscopic evaluation can quickly fail. Nevertheless, the trajectory of the ATV is reconstructable. For this purpose, the stereo image sequences have to be shortened; only about two thirds of the sequence can be used as input for this method.
Because of the missing stochastic information, the resulting uncertainties cannot be adjusted. In particular, the application to the test data showed that our approach generates reasonable trajectories.

    @MastersThesis{Franz2014,
    Title = {Bestimmung der Trajektorie des ATV-4 bei der Separation von der Ariane-5 Oberstufe aus einer Stereo-Bildsequenz},
    Author = {Katharina Franz},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2014},
    Type = {bachelor thesis},
    Abstract = {The successful launch of the spacecraft ATV-4 on June 5, 2013 by the German Aerospace Center (DLR) and the European Space Agency (ESA) is of particular interest from a photogrammetric point of view. For the first time, the separation process and the first seconds of space flight of an automated transfer vehicle could be recorded, tracked and supervised by mounting a stereo camera system on the Ariane rocket. This monitoring task includes the reconstruction of the ATV's trajectory from the stereo image sequence. As the main goal of this bachelor thesis we developed a routine that derives this trajectory. Our approach is based on object tracking with a KLT tracker. First, interest points are detected in a region of interest and tracked over time. The homologous points in the stereo partner image are extracted by template matching with subpixel precision. Subsequently, the object coordinates are calculated by spatial intersection. From the resulting 3D point clouds the motion can be computed. However, numerous analyses have shown that the reconstruction of the ATV's trajectory is insufficient due to the mechanical constellation and a missing photogrammetric calibration. To overcome this and to obtain more suitable data, it was necessary to create a test scenario, whose data allow a more realistic validation of our approach. The recordings of the test scenario were made with a stereo camera system comparable to the DLR configuration, but with a photogrammetric calibration performed for this system. In addition, long distances to the camera system were avoided, since such distances cause problems in the DLR sequence, where the stereoscopic evaluation can quickly fail. Nevertheless, the trajectory of the ATV is reconstructable. For this purpose, the stereo image sequences have to be shortened; only about two thirds of the sequence can be used as input for this method.
Because of the missing stochastic information, the resulting uncertainties cannot be adjusted. In particular, the application to the test data showed that our approach generates reasonable trajectories.},
    Timestamp = {2014.09.30}
    }
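
The spatial intersection step mentioned in the abstract, recovering 3D object coordinates from homologous points in the two stereo images, can be sketched with a standard linear (DLT) triangulation. This is a generic textbook formulation assuming known 3×4 projection matrices, not the implementation used in the thesis; the example matrices below are made up.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear spatial intersection: recover a 3D point from its projections
    x1, x2 (in normalized image coordinates) in two calibrated cameras with
    3x4 projection matrices P1, P2. Solves the homogeneous system A X = 0."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The null vector of A (last right singular vector) is the homogeneous point.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Applied to each tracked point pair per frame, this yields the 3D point clouds from which the ATV motion is then computed.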

  • R. Hagensieker, R. Roscher, and B. Waske, “Texture-based classification of a tropical forest area using multi-temporal ASAR data,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) , 2014.
    [BibTeX]
    @InProceedings{Hagensieker2014Texture,
    Title = {Texture-based classification of a tropical forest area using multi-temporal ASAR data},
    Author = {Hagensieker, Ron and Roscher, Ribana and Waske, Bj{\"o}rn},
    Booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2014},
    Abstract = {[none]},
    Owner = {ribana},
    Timestamp = {2014.11.04}
    }

  • K. Herzog, R. Roscher, M. Wieland, A. Kicherer, T. Läbe, W. Förstner, H. Kuhlmann, and R. Töpfer, “Initial steps for high-throughput phenotyping in vineyards,” VITIS – Journal of Grapevine Research, vol. 53, iss. 1, pp. 1-8, 2014.
    [BibTeX]
    The evaluation of phenotypic characters of grapevines is required directly in the vineyard and is strongly limited by time, costs and the subjectivity of the person in charge. Sensor-based techniques are a prerequisite to allow non-invasive phenotyping of individual plant traits, to increase the quantity of object records and to reduce error variation. Thus, a Prototype-Image-Acquisition-System (PIAS) was developed for semi-automated capture of geo-referenced RGB images in an experimental vineyard. Different strategies were tested for image interpretation using Matlab. The interpretation of images from the vineyard with the real background is more practice-oriented but requires the calculation of depth maps. Images were utilised to verify the phenotyping results of two semi-automated and one automated prototype image interpretation framework. The semi-automated procedures enable contactless and non-invasive detection of bud burst and quantification of shoots at an early developmental stage (BBCH 10) and enable fast and accurate determination of the grapevine berry size at BBCH 89. Depending on the time of image acquisition at BBCH 10, up to 94 % of green shoots were visible in images. The mean berry size (BBCH 89) was recorded non-invasively with a precision of 1 mm.

    @Article{Herzog2014Initial,
    Title = {Initial steps for high-throughput phenotyping in vineyards},
    Author = {Herzog, Katja and Roscher, Ribana and Wieland, Markus and Kicherer,Anna and L\"abe, Thomas and F\"orstner, Wolfgang and Kuhlmann, Heiner and T\"opfer, Reinhard},
    Journal = {VITIS - Journal of Grapevine Research},
    Year = {2014},
    Month = jan,
    Number = {1},
    Pages = {1--8},
    Volume = {53},
    Abstract = {The evaluation of phenotypic characters of grapevines is required directly in the vineyard and is strongly limited by time, costs and the subjectivity of the person in charge. Sensor-based techniques are a prerequisite to allow non-invasive phenotyping of individual plant traits, to increase the quantity of object records and to reduce error variation. Thus, a Prototype-Image-Acquisition-System (PIAS) was developed for semi-automated capture of geo-referenced RGB images in an experimental vineyard. Different strategies were tested for image interpretation using Matlab. The interpretation of images from the vineyard with the real background is more practice-oriented but requires the calculation of depth maps. Images were utilised to verify the phenotyping results of two semi-automated and one automated prototype image interpretation framework. The semi-automated procedures enable contactless and non-invasive detection of bud burst and quantification of shoots at an early developmental stage (BBCH 10) and enable fast and accurate determination of the grapevine berry size at BBCH 89. Depending on the time of image acquisition at BBCH 10, up to 94 \% of green shoots were visible in images. The mean berry size (BBCH 89) was recorded non-invasively with a precision of 1 mm.}
    }

  • S. Ito, F. Endres, M. Kuderer, G. D. Tipaldi, C. Stachniss, and W. Burgard, “W-RGB-D: Floor-Plan-Based Indoor Global Localization Using a Depth Camera and WiFi,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Hong Kong, China, 2014.
    [BibTeX] [PDF]
    @InProceedings{Ito2014,
    Title = {W-RGB-D: Floor-Plan-Based Indoor Global Localization Using a Depth Camera and WiFi},
    Author = {S. Ito and F. Endres and M. Kuderer and G.D. Tipaldi and C. Stachniss and W. Burgard},
    Booktitle = ICRA,
    Year = {2014},
    Address = {Hong Kong, China},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www2.informatik.uni-freiburg.de/~tipaldi/papers/ito14icra.pdf}
    }

  • R. Kümmerle, M. Ruhnke, B. Steder, C. Stachniss, and W. Burgard, “Autonomous Robot Navigation in Highly Populated Pedestrian Zones,” Journal of Field Robotics, 2014. doi:10.1002/rob.21534
    [BibTeX] [PDF]
    @Article{kummerle14jfr,
    Title = {Autonomous Robot Navigation in Highly Populated Pedestrian Zones},
    Author = {K{\"u}mmerle, Rainer and Ruhnke, Michael and Steder, Bastian and Stachniss,Cyrill and Burgard, Wolfram},
    Journal = jfr,
    Year = {2014},
    Abstract = {[none]},
    Doi = {10.1002/rob.21534},
    Timestamp = {2015.01.22},
    Url = {http://ais.informatik.uni-freiburg.de/publications/papers/kuemmerle14jfr.pdf}
    }

  • A. Kicherer, R. Roscher, K. Herzog, W. Förstner, and R. Töpfer, “Image based Evaluation for the Detection of Cluster Parameters in Grapevine,” in Acta horticulturae , 2014.
    [BibTeX]
    @InProceedings{Kicherer2014Evaluation,
    Title = {Image based Evaluation for the Detection of Cluster Parameters in Grapevine},
    Author = {Kicherer, A. and Roscher, R. and Herzog, K. and F\"orstner, W. and T\"opfer, R.},
    Booktitle = {Acta horticulturae},
    Year = {2014},
    Owner = {ribana},
    Timestamp = {2016.06.20}
    }

  • L. Klingbeil, M. Nieuwenhuisen, J. Schneider, C. Eling, D. Droeschel, D. Holz, T. Läbe, W. Förstner, S. Behnke, and H. Kuhlmann, “Towards Autonomous Navigation of an UAV-based Mobile Mapping System,” in 4th International Conference on Machine Control & Guidance , 2014, pp. 136-147.
    [BibTeX] [PDF]
    For situations where mapping is neither possible from high altitudes nor from the ground, we are developing an autonomous micro aerial vehicle able to fly at low altitudes in close vicinity of obstacles. The vehicle is based on a MikroKopter™ octocopter platform (maximum total weight: 5 kg) and contains a dual-frequency GPS board, an IMU, a compass, two stereo camera pairs with fisheye lenses, a rotating 3D laser scanner, 8 ultrasound sensors, a real-time processing unit, and a compact PC for on-board ego-motion estimation and obstacle detection for autonomous navigation. A high-resolution camera is used for the actual mapping task, where the environment is reconstructed in three dimensions from images using a highly accurate bundle adjustment. In this contribution, we describe the sensor system setup and present results from the evaluation of several aspects of the different subsystems as well as initial results from flight tests.

    @InProceedings{klingbeil14mcg,
    Title = {Towards Autonomous Navigation of an UAV-based Mobile Mapping System},
    Author = {Klingbeil, Lasse and Nieuwenhuisen, Matthias and Schneider, Johannes and Eling, Christian and Droeschel, David and Holz, Dirk and L\"abe, Thomas and F\"orstner, Wolfgang and Behnke, Sven and Kuhlmann, Heiner},
    Booktitle = {4th International Conference on Machine Control \& Guidance},
    Year = {2014},
    Pages = {136--147},
    Abstract = {For situations, where mapping is neither possible from high altitudes nor from the ground, we are developing an autonomous micro aerial vehicle able to fly at low altitudes in close vicinity of obstacles. This vehicle is based on a MikroKopterTM octocopter platform (maximum total weight: 5kg), and contains a dual frequency GPS board, an IMU, a compass, two stereo camera pairs with fisheye lenses, a rotating 3D laser scanner, 8 ultrasound sensors, a real-time processing unit, and a compact PC for on-board ego-motion estimation and obstacle detection for autonomous navigation. A high-resolution camera is used for the actual mapping task, where the environment is reconstructed in three dimensions from images, using a highly accurate bundle adjustment. In this contribution, we describe the sensor system setup and present results from the evaluation of several aspects of the different subsystems as well as initial results from flight tests.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/klingbeil14mcg.pdf}
    }

  • B. Mack, R. Roscher, and B. Waske, “Can I trust my one-class classification?,” Remote Sensing, vol. 6, iss. 9, pp. 8779-8802, 2014.
    [BibTeX] [PDF]
    Contrary to binary and multi-class classifiers, the purpose of a one-class classifier for remote sensing applications is to map only one specific land use/land cover class of interest. Training these classifiers requires reference data exclusively for the class of interest, while training data for other classes is not required. Thus, the acquisition of reference data can be significantly reduced. However, one-class classification is fraught with uncertainty and full automation is difficult, due to the limited reference information that is available for classifier training. Thus, a user-oriented one-class classification strategy is proposed, which is based, among other elements, on the visualization and interpretation of the one-class classifier outcomes during the data processing. Careful interpretation of the diagnostic plots fosters the understanding of the classification outcome, e.g., the class separability and the suitability of a particular threshold. In the absence of complete and representative validation data, which is typically the case in a real one-class classification application, such information is valuable for evaluating and improving the classification. The potential of the proposed strategy is demonstrated by classifying different crop types with hyperspectral data from Hyperion.

    @Article{Mack2014Can,
    Title = {Can I trust my one-class classification?},
    Author = {Mack, Benjamin and Roscher, Ribana and Waske, Bj{\"o}rn},
    Journal = {Remote Sensing},
    Year = {2014},
    Number = {9},
    Pages = {8779--8802},
    Volume = {6},
    Abstract = {Contrary to binary and multi-class classifiers, the purpose of a one-class classifier for remote sensing applications is to map only one specific land use/land cover class of interest. Training these classifiers exclusively requires reference data for the class of interest, while training data for other classes is not required. Thus, the acquisition of reference data can be significantly reduced. However, one-class classification is fraught with uncertainty and full automatization is difficult, due to the limited reference information that is available for classifier training. Thus, a user-oriented one-class classification strategy is proposed, which is based among others on the visualization and interpretation of the one-class classifier outcomes during the data processing. Careful interpretation of the diagnostic plots fosters the understanding of the classification outcome, e.g., the class separability and suitability of a particular threshold. In the absence of complete and representative validation data, which is the fact in the context of a real one-class classification application, such information is valuable for evaluation and improving the classification. The potential of the proposed strategy is demonstrated by classifying different crop types with hyperspectral data from Hyperion.},
    Owner = {ribana},
    Timestamp = {2014.11.04},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Mack2014Can.pdf}
    }
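
As a rough illustration of the one-class setting discussed in the abstract, the sketch below fits a model to samples of the class of interest only and classifies by thresholding a score. This is a deliberately simple Gaussian model with a user-chosen threshold, not the strategy proposed in the paper; all names and data are hypothetical.

```python
import numpy as np

def fit_one_class(X):
    """Fit a Gaussian to training samples of the single class of interest.
    Returns the mean and the (regularized) precision matrix."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, np.linalg.inv(cov)

def score(x, mu, prec):
    """Negative squared Mahalanobis distance: higher = more class-like."""
    d = x - mu
    return -d @ prec @ d

def classify(x, mu, prec, threshold):
    """Accept x as the class of interest if its score exceeds the threshold."""
    return score(x, mu, prec) >= threshold
```

The paper's point is precisely that choosing such a threshold is uncertain when no representative validation data exist, which is why it advocates inspecting diagnostic plots of the score distribution before fixing it.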

  • M. Mazuran, G. D. Tipaldi, L. Spinello, W. Burgard, and C. Stachniss, “A Statistical Measure for Map Consistency in SLAM,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Hong Kong, China, 2014.
    [BibTeX] [PDF]
    @InProceedings{Mazuran2014,
    Title = {A Statistical Measure for Map Consistency in SLAM},
    Author = {M. Mazuran and G.D. Tipaldi and L. Spinello and W. Burgard and C. Stachniss},
    Booktitle = ICRA,
    Year = {2014},
    Address = {Hong Kong, China},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www2.informatik.uni-freiburg.de/~stachnis/pdf/mazuran14icra.pdf}
    }

  • T. Naseer, L. Spinello, W. Burgard, and C. Stachniss, “Robust Visual Robot Localization Across Seasons using Network Flows,” in Proceedings of the National Conference on Artificial Intelligence (AAAI) , 2014.
    [BibTeX] [PDF]
    @InProceedings{Naseer2014,
    Title = {Robust Visual Robot Localization Across Seasons using Network Flows},
    Author = {Naseer, T. and Spinello, L. and Burgard, W. and Stachniss, C.},
    Booktitle = aaai,
    Year = {2014},
    Abstract = {[none]},
    Timestamp = {2014.05.12},
    Url = {http://www2.informatik.uni-freiburg.de/~stachnis/pdf/naseer14aaai.pdf}
    }

  • F. Nenci, L. Spinello, and C. Stachniss, “Effective Compression of Range Data Streams for Remote Robot Operations using H.264,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , 2014.
    [BibTeX] [PDF]
    @InProceedings{Nenci2014,
    Title = {Effective Compression of Range Data Streams for Remote Robot Operations using H.264},
    Author = {Fabrizio Nenci and Luciano Spinello and Cyrill Stachniss},
    Booktitle = iros,
    Year = {2014},
    Abstract = {[none]},
    Timestamp = {2014.09.18},
    Url = {http://www2.informatik.uni-freiburg.de/~stachnis/pdf/nenci14iros.pdf}
    }

  • S. Oßwald, H. Kretzschmar, W. Burgard, and C. Stachniss, “Learning to Give Route Directions from Human Demonstrations,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Hong Kong, China, 2014.
    [BibTeX] [PDF]
    @InProceedings{Osswald2014,
    Title = {Learning to Give Route Directions from Human Demonstrations},
    Author = {S. O{\ss}wald and H. Kretzschmar and W. Burgard and C. Stachniss},
    Booktitle = ICRA,
    Year = {2014},
    Address = {Hong Kong, China},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www2.informatik.uni-freiburg.de/~kretzsch/pdf/osswald14icra.pdf}
    }

  • R. Roscher, K. Herzog, A. Kunkel, A. Kicherer, R. Töpfer, and W. Förstner, “Automated image analysis framework for high-throughput determination of grapevine berry sizes using conditional random fields,” Computers and Electronics in Agriculture, vol. 100, pp. 148-158, 2014. doi:10.1016/j.compag.2013.11.008
    [BibTeX]
    @Article{Roscher2014Automated,
    Title = {Automated image analysis framework for high-throughput determination of grapevine berry sizes using conditional random fields},
    Author = {Roscher, Ribana and Herzog, Katja and Kunkel, Annemarie and Kicherer, Anna and T{\"o}pfer, Reinhard and F{\"o}rstner, Wolfgang},
    Journal = {Computers and Electronics in Agriculture},
    Year = {2014},
    Pages = {148--158},
    Volume = {100},
    Doi = {10.1016/j.compag.2013.11.008},
    Publisher = {Elsevier}
    }

  • R. Roscher and B. Waske, “Shapelet-based sparse image representation for landcover classification of hyperspectral data,” in IAPR Workshop on Pattern Recognition in Remote Sensing , 2014, pp. 1-6.
    [BibTeX] [PDF]
    This paper presents a novel sparse representation-based classifier for landcover mapping of hyperspectral image data. Each image patch is factorized into segmentation patterns, also called shapelets, and patch-specific spectral features. The combination of both is represented in a patch-specific spatial-spectral dictionary, which is used in a sparse coding procedure for the reconstruction and classification of image patches. Hereby, each image patch is sparsely represented by a linear combination of elements of the dictionary. The set of shapelets is learned specifically for each image in an unsupervised way in order to capture the image structure. The spectral features are assumed to be the training data. The experiments show that the proposed approach yields superior results in comparison to sparse representation-based classifiers that use no or only limited spatial information, and performs competitively with or better than state-of-the-art classifiers utilizing spatial information and kernelized sparse representation-based classifiers.

    @InProceedings{Roscher2014Shapelet,
    Title = {Shapelet-based sparse image representation for landcover classification of hyperspectral data},
    Author = {Roscher, Ribana and Waske, Bj{\"o}rn},
    Booktitle = {IAPR Workshop on Pattern Recognition in Remote Sensing},
    Year = {2014},
    Pages = {1--6},
    Abstract = {This paper presents a novel sparse representation-based classifier for landcover mapping of hyperspectral image data. Each image patch is factorized into segmentation patterns, also called shapelets, and patch-specific spectral features. The combination of both is represented in a patch-specific spatial-spectral dictionary, which is used for a sparse coding procedure for the reconstruction and classification of image patches. Hereby, each image patch is sparsely represented by a linear combination of elements out of the dictionary. The set of shapelets is specifically learned for each image in an unsupervised way in order to capture the image structure. The spectral features are assumed to be the training data. The experiments show that the proposed approach shows superior results in comparison to sparse-representation based classifiers that use no or only limited spatial information and behaves competitive or better than state-of-the-art classifiers utilizing spatial information and kernelized sparse representation-based classifiers.},
    Owner = {ribana},
    Timestamp = {2014.11.04},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Roscher2014Shapelet.pdf}
    }
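
The sparse coding step described in the abstract, representing a sample by a sparse linear combination of dictionary elements and classifying by the class-wise reconstruction residual, can be sketched as follows. This is a generic sparse-representation classifier with a simple orthogonal matching pursuit, not the shapelet-based method of the paper; the dictionary and labels in the example are hypothetical, and the dictionary atoms (columns of D) are assumed to be unit-normalized.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: sparse-code y over dictionary D
    (columns = atoms) using at most k atoms."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # re-fit on the support
    x[support] = coef
    return x

def src_classify(D, labels, y, k=3):
    """Sparse-representation classification: reconstruct y from each class's
    atoms only and pick the class with the smallest residual."""
    x = omp(D, y, k)
    best, best_err = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)
        err = np.linalg.norm(y - D @ xc)
        if err < best_err:
            best, best_err = c, err
    return best
```

In the paper's setting the dictionary is built per patch from shapelets and training spectra rather than from raw training pixels, but the residual-based class decision follows the same pattern.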

  • R. Roscher and B. Waske, “Superpixel-based classification of hyperspectral data using sparse representation and conditional random fields,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) , 2014.
    [BibTeX] [PDF]
    This paper presents a superpixel-based classifier for landcover mapping of hyperspectral image data. The approach relies on the sparse representation of each pixel by a weighted linear combination of the training data. Spatial information is incorporated by using a coarse patch-based neighborhood around each pixel as well as data-adapted superpixels. The classification is done via a hierarchical conditional random field, which utilizes the sparse-representation output and models spatial and hierarchical structures in the hyperspectral image. The experiments show that the proposed approach results in superior accuracies in comparison to sparse-representation based classifiers that solely use a patch-based neighborhood.

    @InProceedings{Roscher2014Superpixel,
    Title = {Superpixel-based classification of hyperspectral data using sparse representation and conditional random fields},
    Author = {Roscher, Ribana and Waske, Bj{\"o}rn},
    Booktitle = {{IEEE} International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2014},
    Abstract = {This paper presents a superpixel-based classifier for landcover mapping of hyperspectral image data. The approach relies on the sparse representation of each pixel by a weighted linear combination of the training data. Spatial information is incorporated by using a coarse patch-based neighborhood around each pixel as well as data-adapted superpixels. The classification is done via a hierarchical conditional random field, which utilizes the sparse-representation output and models spatial and hierarchical structures in the hyperspectral image. The experiments show that the proposed approach results in superior accuracies in comparison to sparse-representation based classifiers that solely use a patch-based neighborhood.},
    Owner = {ribana},
    Timestamp = {2014.11.04},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Roscher2014Superpixel.pdf}
    }

  • J. Schneider and W. Förstner, “Real-time Accurate Geo-localization of a MAV with Omnidirectional Visual Odometry and GPS,” in Computer Vision – ECCV 2014 Workshops , 2014, pp. 271-282. doi:10.1007/978-3-319-16178-5_18
    [BibTeX] [PDF]
    This paper presents a system for direct geo-localization of a MAV in an unknown environment using visual odometry and precise real-time kinematic (RTK) GPS information. Visual odometry is performed with a multi-camera system with four fisheye cameras that cover a wide field of view, which leads to better constraints for localization due to long tracks and a better intersection geometry. Visual observations from the acquired image sequences are refined with high accuracy on selected keyframes by an incremental bundle adjustment using the iSAM2 algorithm. The optional integration of GPS information yields long-term stability and provides a directly geo-referenced solution. Experiments show a high accuracy, with a standard deviation in position below 3 cm.

    @InProceedings{schneider14eccv-ws,
    Title = {Real-time Accurate Geo-localization of a MAV with Omnidirectional Visual Odometry and GPS},
    Author = {J. Schneider and W. F\"orstner},
    Booktitle = {Computer Vision - ECCV 2014 Workshops},
    Year = {2014},
    Pages = {271--282},
    Abstract = {This paper presents a system for direct geo-localization of a MAV in an unknown environment using visual odometry and precise real time kinematic (RTK) GPS information. Visual odometry is performed with a multi-camera system with four fisheye cameras that cover a wide field of view which leads to better constraints for localization due to long tracks and a better intersection geometry. Visual observations from the acquired image sequences are refined with a high accuracy on selected keyframes by an incremental bundle adjustment using the iSAM2 algorithm. The optional integration of GPS information yields long-time stability and provides a direct geo-referenced solution. Experiments show the high accuracy which is below 3 cm standard deviation in position.},
    Doi = {10.1007/978-3-319-16178-5_18},
    Url = {http://www.ipb.uni-bonn.de/pdfs/schneider14eccv-ws.pdf}
    }

  • J. Schneider, T. Läbe, and W. Förstner, “Real-Time Bundle Adjustment with an Omnidirectional Multi-Camera System and GPS,” in Proceedings of the 4th International Conference on Machine Control & Guidance , 2014, pp. 98-103.
    [BibTeX] [PDF]
    In this paper we present our system for visual odometry, which performs a fast incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. It is applicable to image streams of a calibrated multi-camera system with omnidirectional cameras. We use an autonomously flying octocopter that is equipped for visual odometry and obstacle detection with four fisheye cameras, which provide a large field of view. For real-time ego-motion estimation the platform is equipped, besides the cameras, with a dual-frequency GPS board, an IMU and a compass. We show how we apply our system for visual odometry using the synchronized video streams of the four fisheye cameras. The position and orientation information from the GPS unit and the inertial sensors can optionally be integrated into our system. We show the obtained accuracy of pure odometry and compare it with the solution from GPS/INS.

    @InProceedings{schneider14mcg,
    Title = {Real-Time Bundle Adjustment with an Omnidirectional Multi-Camera System and GPS},
    Author = {J. Schneider and T. L\"abe and W. F\"orstner},
    Booktitle = {Proceedings of the 4th International Conference on Machine Control \& Guidance},
    Year = {2014},
    Pages = {98--103},
    Abstract = {In this paper we present our system for visual odometry that performs a fast incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. It is applicable to image streams of a calibrated multi-camera system with omnidirectional cameras. In this paper we use an autonomously flying octocopter that is equipped for visual odometry and obstacle detection with four fisheye cameras, which provide a large field of view. For real-time ego-motion estimation the platform is equipped, besides the cameras, with a dual frequency GPS board, an IMU and a compass. In this paper we show how we apply our system for visual odometry using the synchronized video streams of the four fisheye cameras. The position and orientation information from the GPS-unit and the inertial sensors can optionally be integrated into our system. We will show the obtained accuracy of pure odometry and compare it with the solution from GPS/INS.},
    City = {Braunschweig},
    Url = {http://www.ipb.uni-bonn.de/pdfs/schneider14mcg.pdf}
    }

  • C. Stachniss and W. Burgard, “Particle Filters for Robot Navigation,” Foundations and Trends in Robotics, vol. 3, iss. 4, pp. 211-282, 2014. doi:10.1561/2300000013
    [BibTeX] [PDF]
    [none]
    @Article{Stachniss2014,
    Title = {Particle Filters for Robot Navigation},
    Author = {C. Stachniss and W. Burgard},
    Journal = fntr,
    Year = {2014},
    Month = {2012, published 2014},
    Number = {4},
    Pages = {211--282},
    Volume = {3},
    Abstract = {[none]},
    Doi = {10.1561/2300000013},
    Timestamp = {2014.04.24},
    Url = {http://www.nowpublishers.com/articles/foundations-and-trends-in-robotics/ROB-013}
    }

  • J. Stefanski, O. Chaskovskyy, and B. Waske, “Mapping and monitoring of land use changes in post-Soviet western Ukraine using remote sensing data,” Applied Geography, vol. 55, pp. 155-164, 2014. doi:10.1016/j.apgeog.2014.08.003
    [BibTeX]
    While agriculture is expanded and intensified in many parts of the world, decreases in land use intensity and farmland abandonment take place in other parts. Eastern Europe experienced widespread changes of agricultural land use after the collapse of the Soviet Union in 1991, however, rates and patterns of these changes are still not well understood. Our objective was to map and analyze changes of land management regimes, including large-scale cropland, small-scale cropland, and abandoned farmland. Monitoring land management regimes is a promising avenue to better understand the temporal and spatial patterns of land use intensity changes. For mapping and change detection, we used an object-based approach with Superpixel segmentation for delineating objects and a Random Forest classifier. We applied this approach to Landsat and ERS SAR data for the years 1986, 1993, 1999, 2006, and 2010 to estimate change trajectories for this time period in western Ukraine. The first period during the 1990s was characterized by post-socialist transition processes including farmland abandonment and substantial subsistence agriculture. Later on, recultivation processes and the recurrence of industrial, large-scale farming were triggered by global food prices that have led to a growing interest in this region.

    @Article{Stefanski2014Mapping2,
    Title = {Mapping and monitoring of land use changes in post-Soviet western Ukraine using remote sensing data},
    Author = {Stefanski, Jan and Chaskovskyy, Oleh and Waske, Bj{\"o}rn},
    Journal = {Applied Geography},
    Year = {2014},
    Pages = {155--164},
    Volume = {55},
    Abstract = {While agriculture is expanded and intensified in many parts of the world, decreases in land use intensity and farmland abandonment take place in other parts. Eastern Europe experienced widespread changes of agricultural land use after the collapse of the Soviet Union in 1991, however, rates and patterns of these changes are still not well understood. Our objective was to map and analyze changes of land management regimes, including large-scale cropland, small-scale cropland, and abandoned farmland. Monitoring land management regimes is a promising avenue to better understand the temporal and spatial patterns of land use intensity changes. For mapping and change detection, we used an object-based approach with Superpixel segmentation for delineating objects and a Random Forest classifier. We applied this approach to Landsat and ERS SAR data for the years 1986, 1993, 1999, 2006, and 2010 to estimate change trajectories for this time period in western Ukraine. The first period during the 1990s was characterized by post-socialist transition processes including farmland abandonment and substantial subsistence agriculture. Later on, recultivation processes and the recurrence of industrial, large-scale farming were triggered by global food prices that have led to a growing interest in this region.},
    Doi = {10.1016/j.apgeog.2014.08.003},
    ISSN = {01436228}
    }

  • J. Stefanski, T. Kuemmerle, O. Chaskovskyy, P. Griffiths, V. Havryluk, J. Knorn, N. Korol, A. Sieber, and B. Waske, “Mapping Land Management Regimes in Western Ukraine Using Optical and SAR Data,” Remote Sensing, vol. 6, iss. 6, pp. 5279-5305, 2014. doi:10.3390/rs6065279
    [BibTeX]
    The global demand for agricultural products is surging due to population growth, more meat-based diets, and the increasing role of bioenergy. Three strategies can increase agricultural production: (1) expanding agriculture into natural ecosystems; (2) intensifying existing farmland; or (3) recultivating abandoned farmland. Because agricultural expansion entails substantial environmental trade-offs, intensification and recultivation are currently gaining increasing attention. Assessing where these strategies may be pursued, however, requires improved spatial information on land use intensity, including where farmland is active and fallow. We developed a framework to integrate optical and radar data in order to advance the mapping of three farmland management regimes: (1) large-scale, mechanized agriculture; (2) small-scale, subsistence agriculture; and (3) fallow or abandoned farmland. We applied this framework to our study area in western Ukraine, a region characterized by marked spatial heterogeneity in management intensity due to the legacies from Soviet land management, the breakdown of the Soviet Union in 1991, and the recent integration of this region into world markets. We mapped land management regimes using a hierarchical, object-based framework. Image segmentation for delineating objects was performed by using the Superpixel Contour algorithm. We then applied Random Forest classification to map land management regimes and validated our map using randomly sampled in-situ data, obtained during an extensive field campaign. Our results showed that farmland management regimes were mapped reliably, resulting in a final map with an overall accuracy of 83.4%. Comparing our land management regimes map with a soil map revealed that most fallow land occurred on soils marginally suited for agriculture, but some areas within our study region contained considerable potential for recultivation. 
Overall, our study highlights the potential for an improved, more nuanced mapping of agricultural land use by combining imagery of different sensors.

    @Article{Stefanski2014Mapping,
    Title = {Mapping Land Management Regimes in Western Ukraine Using Optical and SAR Data},
    Author = {Stefanski, Jan and Kuemmerle, Tobias and Chaskovskyy, Oleh and Griffiths, Patrick and Havryluk, Vassiliy and Knorn, Jan and Korol, Nikolas and Sieber, Anika and Waske, Bj{\"o}rn},
    Journal = {Remote Sensing},
    Year = {2014},
    Number = {6},
    Pages = {5279--5305},
    Volume = {6},
    Abstract = {The global demand for agricultural products is surging due to population growth, more meat-based diets, and the increasing role of bioenergy. Three strategies can increase agricultural production: (1) expanding agriculture into natural ecosystems; (2) intensifying existing farmland; or (3) recultivating abandoned farmland. Because agricultural expansion entails substantial environmental trade-offs, intensification and recultivation are currently gaining increasing attention. Assessing where these strategies may be pursued, however, requires improved spatial information on land use intensity, including where farmland is active and fallow. We developed a framework to integrate optical and radar data in order to advance the mapping of three farmland management regimes: (1) large-scale, mechanized agriculture; (2) small-scale, subsistence agriculture; and (3) fallow or abandoned farmland. We applied this framework to our study area in western Ukraine, a region characterized by marked spatial heterogeneity in management intensity due to the legacies from Soviet land management, the breakdown of the Soviet Union in 1991, and the recent integration of this region into world markets. We mapped land management regimes using a hierarchical, object-based framework. Image segmentation for delineating objects was performed by using the Superpixel Contour algorithm. We then applied Random Forest classification to map land management regimes and validated our map using randomly sampled in-situ data, obtained during an extensive field campaign. Our results showed that farmland management regimes were mapped reliably, resulting in a final map with an overall accuracy of 83.4%. Comparing our land management regimes map with a soil map revealed that most fallow land occurred on soils marginally suited for agriculture, but some areas within our study region contained considerable potential for recultivation. 
Overall, our study highlights the potential for an improved, more nuanced mapping of agricultural land use by combining imagery of different sensors.},
    Doi = {10.3390/rs6065279},
    ISSN = {2072-4292},
    Owner = {JanS}
    }

  • O. Vysotska, B. Frank, I. Ulbert, O. Paul, P. Ruther, C. Stachniss, and W. Burgard, “Automatic Channel Selection and Neural Signal Estimation across Channels of Neural Probes,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , Chicago, USA, 2014.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Vysotska2014,
    Title = {Automatic Channel Selection and Neural Signal Estimation across Channels of Neural Probes},
    Author = {O. Vysotska and B. Frank and I. Ulbert and O. Paul and P. Ruther and C. Stachniss and W. Burgard},
    Booktitle = iros,
    Year = {2014},
    Address = {Chicago, USA},
    Abstract = {[none]},
    Timestamp = {2014.09.22},
    Url = {http://www2.informatik.uni-freiburg.de/~stachnis/pdf/vysotska14iros.pdf}
    }

  • V. A. Ziparo, G. Castelli, L. Van Gool, G. Grisetti, B. Leibe, M. Proesmans, and C. Stachniss, “The ROVINA Project. Robots for Exploration, Digital Preservation and Visualization of Archeological sites,” in Proc. of the 18th ICOMOS General Assembly and Scientific Symposium “Heritage and Landscape as Human Values” , 2014.
    [BibTeX]
    [none]
    @InProceedings{ziparo14icomosga,
    Title = {The ROVINA Project. Robots for Exploration, Digital Preservation and Visualization of Archeological sites},
    Author = {Ziparo, V.A. and Castelli, G. and Van Gool, L. and Grisetti, G. and Leibe, B. and Proesmans, M. and Stachniss, C.},
    Booktitle = {Proc. of the 18th ICOMOS General Assembly and Scientific Symposium ``Heritage and Landscape as Human Values''},
    Year = {2014},
    Abstract = {[none]},
    Timestamp = {2015.03.02}
    }

2013

  • N. Abdo, H. Kretzschmar, L. Spinello, and C. Stachniss, “Learning Manipulation Actions from a Few Demonstrations,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Karlsruhe, Germany, 2013.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Abdo2013,
    Title = {Learning Manipulation Actions from a Few Demonstrations},
    Author = {N. Abdo and H. Kretzschmar and L. Spinello and C. Stachniss},
    Booktitle = ICRA,
    Year = {2013},
    Address = {Karlsruhe, Germany},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/abdo13icra.pdf}
    }

  • P. Agarwal, G. D. Tipaldi, L. Spinello, C. Stachniss, and W. Burgard, “Dynamic Covariance Scaling for Robust Robotic Mapping,” in ICRA Workshop on Robust and Multimodal Inference in Factor Graphs , Karlsruhe, Germany, 2013.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Agarwal2013,
    Title = {Dynamic Covariance Scaling for Robust Robotic Mapping},
    Author = {P. Agarwal and G.D. Tipaldi and L. Spinello and C. Stachniss and W. Burgard},
    Booktitle = {ICRA Workshop on Robust and Multimodal Inference in Factor Graphs},
    Year = {2013},
    Address = {Karlsruhe, Germany},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/agarwal13icraws.pdf}
    }

  • P. Agarwal, G. D. Tipaldi, L. Spinello, C. Stachniss, and W. Burgard, “Robust Map Optimization using Dynamic Covariance Scaling,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Karlsruhe, Germany, 2013.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Agarwal2013a,
    Title = {Robust Map Optimization using Dynamic Covariance Scaling},
    Author = {P. Agarwal and G.D. Tipaldi and L. Spinello and C. Stachniss and W. Burgard},
    Booktitle = ICRA,
    Year = {2013},
    Address = {Karlsruhe, Germany},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/agarwal13icra.pdf}
    }

  • K. Böhm, “Tiefenbildsegmentierung mit Hilfe geodätischer Distanztransformation,” Bachelor Thesis, 2013.
    [BibTeX]
    [none]
    @MastersThesis{Bohm2013,
    Title = {Tiefenbildsegmentierung mit Hilfe geod\"atischer Distanztransformation},
    Author = {B\"ohm, Karsten},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2013},
    Type = {bachelor thesis},
    Abstract = {[none]},
    Timestamp = {2014.01.20}
    }

  • A. Barth, J. Siegemund, and J. Schwehr, “Fast and precise localization at stop intersections,” in Intelligent Vehicles Symposium Workshops (IV Workshops) , Gold Coast, Australia, 2013, pp. 75-80.
    [BibTeX] [PDF]
    This article presents a practical solution for fast and precise localization of a vehicle’s position and orientation with respect to stop sign controlled intersections based on video sequences and mapped data. It consists of two steps. First, an intersection map is generated offline based on street-level imagery and GPS data, collected by a vehicle driving through an intersection from different directions. The map contains both landmarks for localization and information about stop line positions. This information is used in the second step to precisely and efficiently derive a vehicle’s pose in real-time when approaching a mapped intersection. At this point, we only need coarse GPS information to be able to load the proper map data.

    @InProceedings{Barth2013Fast,
    Title = {Fast and precise localization at stop intersections},
    Author = {Barth, Alexander and Siegemund, Jan and Schwehr, Julian},
    Booktitle = {Intelligent Vehicles Symposium Workshops (IV Workshops)},
    Year = {2013},
    Address = {Gold Coast, Australia},
    Pages = {75--80},
    Publisher = {IEEE},
    Abstract = {This article presents a practical solution for fast and precise localization of a vehicle's position and orientation with respect to stop sign controlled intersections based on video sequences and mapped data. It consists of two steps. First, an intersection map is generated offline based on street-level imagery and GPS data, collected by a vehicle driving through an intersection from different directions. The map contains both landmarks for localization and information about stop line positions. This information is used in the second step to precisely and efficiently derive a vehicle's pose in real-time when approaching a mapped intersection. At this point, we only need coarse GPS information to be able to load the proper map data.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Barth2013Fast.pdf}
    }

  • I. Bogoslavskyi, O. Vysotska, J. Serafin, G. Grisetti, and C. Stachniss, “Efficient Traversability Analysis for Mobile Robots using the Kinect Sensor,” in Proceedings of the European Conference on Mobile Robots (ECMR) , Barcelona, Spain, 2013.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Bogoslavskyi2013,
    Title = {Efficient Traversability Analysis for Mobile Robots using the Kinect Sensor},
    Author = {I. Bogoslavskyi and O. Vysotska and J. Serafin and G. Grisetti and C. Stachniss},
    Booktitle = ECMR,
    Year = {2013},
    Address = {Barcelona, Spain},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/bogoslavskyi13ecmr.pdf}
    }

  • W. Burgard and C. Stachniss, “Gestatten, Obelix!,” Forschung — Das Magazin der Deutschen Forschungsgemeinschaft, vol. 1, 2013.
    [BibTeX] [PDF]
    [none]
    @Article{Burgard2013,
    Title = {Gestatten, Obelix!},
    Author = {W. Burgard and C. Stachniss},
    Journal = {Forschung -- Das Magazin der Deutschen Forschungsgemeinschaft},
    Year = {2013},
    Note = {In German, invited},
    Volume = {1},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/forschung_2013_01-pg4-9.pdf}
    }

  • D. Chai, W. Förstner, and F. Lafarge, “Recovering Line-Networks in Images by Junction-Point Processes,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , 2013, pp. 1894-1901. doi:10.1109/CVPR.2013.247
    [BibTeX] [PDF]
    [none]
    @InProceedings{chai13recovering,
    Title = {Recovering Line-Networks in Images by Junction-Point Processes},
    Author = {D. Chai and W. F\"orstner and F. Lafarge},
    Booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
    Year = {2013},
    Pages = {1894--1901},
    Abstract = {[none]},
    Doi = {10.1109/CVPR.2013.247},
    Timestamp = {2015.07.14},
    Url = {http://www.ipb.uni-bonn.de/pdfs/chai13recovering.pdf}
    }

  • T. Dickscheid and W. Förstner, “A Trainable Markov Random Field for Low-Level Image Feature Matching with Spatial Relationships,” Photogrammetrie, Fernerkundung, Geoinformation (PFG), vol. 4, pp. 269-284, 2013. doi:10.1127/1432-8364/2013/0176
    [BibTeX]
    Many vision applications rely on local features for image analysis, notably in the areas of object recognition, image registration and camera calibration. Important examples in photogrammetry are fully automatic algorithms for relative image orientation. Such applications rely on a matching algorithm to extract a sufficient number of correct feature correspondences at acceptable outlier rates, which is most often based on the similarity of feature descriptions. When the number of detected features is low, it is advisable to use multiple feature detectors with complementary properties. When feature similarity is not sufficient for matching, spatial feature relationships provide valuable information. In this work, a highly generic matching algorithm is proposed which is based on a trainable Markov random field (MRF). It is able to incorporate almost arbitrary combinations of features, similarity measures and pairwise spatial relationships, and has a clear statistical interpretation. A major novelty is its ability to compensate for weaknesses in one information cue by implicitly exploiting the strengths of others.

    @Article{Dickscheid2013Trainable,
    Title = {A Trainable Markov Random Field for Low-Level Image Feature Matching with Spatial Relationships},
    Author = {Dickscheid, Timo and F\"orstner, Wolfgang},
    Journal = {Photogrammetrie, Fernerkundung, Geoinformation (PFG)},
    Year = {2013},
    Pages = {269--284},
    Volume = {4},
    Abstract = {Many vision applications rely on local features for image analysis, notably in the areas of object recognition, image registration and camera calibration. Important examples in photogrammetry are fully automatic algorithms for relative image orientation. Such applications rely on a matching algorithm to extract a sufficient number of correct feature correspondences at acceptable outlier rates, which is most often based on the similarity of feature descriptions. When the number of detected features is low, it is advisable to use multiple feature detectors with complementary properties. When feature similarity is not sufficient for matching, spatial feature relationships provide valuable information. In this work, a highly generic matching algorithm is proposed which is based on a trainable Markov random field (MRF). It is able to incorporate almost arbitrary combinations of features, similarity measures and pairwise spatial relationships, and has a clear statistical interpretation. A major novelty is its ability to compensate for weaknesses in one information cue by implicitly exploiting the strengths of others.},
    Doi = {10.1127/1432-8364/2013/0176}
    }

  • W. Förstner, “Graphical Models in Geodesy and Photogrammetry,” Photogrammetrie, Fernerkundung, Geoinformation (PFG), vol. 4, pp. 255-268, 2013. doi:10.1127/1432-8364/2013/0175
    [BibTeX]
    The paper gives an introduction to graphical models and their use in specifying stochastic models in geodesy and photogrammetry. Basic tasks in adjustment theory can be described and analysed intuitively using graphical models. The paper shows that geodetic networks and bundle adjustments can be interpreted as graphical models, both as Bayesian networks and as conditional random fields. In particular, hidden Markov random fields and conditional random fields are demonstrated to be versatile models for parameter estimation and classification.

    @Article{Foerstner2013Graphical,
    Title = {Graphical Models in Geodesy and Photogrammetry},
    Author = {F\"orstner, Wolfgang},
    Journal = {Photogrammetrie, Fernerkundung, Geoinformation (PFG)},
    Year = {2013},
    Pages = {255--268},
    Volume = {4},
    Abstract = {The paper gives an introduction to graphical models and their use in specifying stochastic models in geodesy and photogrammetry. Basic tasks in adjustment theory can be described and analysed intuitively using graphical models. The paper shows that geodetic networks and bundle adjustments can be interpreted as graphical models, both as Bayesian networks and as conditional random fields. In particular, hidden Markov random fields and conditional random fields are demonstrated to be versatile models for parameter estimation and classification.},
    Doi = {10.1127/1432-8364/2013/0175}
    }

  • W. Förstner, “Photogrammetrische Forschung – Eine Zwischenbilanz aus Bonner Sicht,” Photogrammetrie, Fernerkundung, Geoinformation (PFG), vol. 4, pp. 251-254, 2013. doi:10.1127/1432-8364/2013/0186
    [BibTeX]
    Photogrammetrische Forschung – Eine Zwischenbilanz aus Bonner Sicht

    @Article{Foerstner2013Photogrammetrische,
    Title = {Photogrammetrische Forschung - Eine Zwischenbilanz aus Bonner Sicht},
    Author = {F\"orstner, Wolfgang},
    Journal = {Photogrammetrie, Fernerkundung, Geoinformation (PFG)},
    Year = {2013},
    Pages = {251--254},
    Volume = {4},
    Abstract = {Photogrammetrische Forschung - Eine Zwischenbilanz aus Bonner Sicht},
    Doi = {10.1127/1432-8364/2013/0186}
    }

  • A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard, “OctoMap: An Efficient Probabilistic 3D Mapping Framework Based on Octrees,” Autonomous Robots, vol. 34, pp. 189-206, 2013.
    [BibTeX] [PDF]
    [none]
    @Article{Hornung2013,
    Title = {{OctoMap}: An Efficient Probabilistic 3D Mapping Framework Based on Octrees},
    Author = {A. Hornung and K.M. Wurm and M. Bennewitz and C. Stachniss and W. Burgard},
    Journal = auro,
    Year = {2013},
    Pages = {189--206},
    Volume = {34},
    Abstract = {[none]},
    Issue = {3},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/hornung13auro.pdf}
    }

  • R. Kümmerle, M. Ruhnke, B. Steder, C. Stachniss, and W. Burgard, “A Navigation System for Robots Operating in Crowded Urban Environments,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Karlsruhe, Germany, 2013.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Kummerle2013,
    Title = {A Navigation System for Robots Operating in Crowded Urban Environments},
    Author = {R. K\"ummerle and M. Ruhnke and B. Steder and C. Stachniss and W. Burgard},
    Booktitle = ICRA,
    Year = {2013},
    Address = {Karlsruhe, Germany},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/kuemmerle13icra.pdf}
    }

  • A. Kicherer, R. Roscher, K. Herzog, S. Šimon, W. Förstner, and R. Töpfer, “BAT (Berry Analysis Tool): A high-throughput image interpretation tool to acquire the number, diameter, and volume of grapevine berries,” Vitis, vol. 52, iss. 3, pp. 129-135, 2013.
    [BibTeX]
    QTL analysis (quantitative trait loci) and marker development rely on efficient phenotyping techniques. Objectivity and precision of a phenotypic data evaluation are crucial but time consuming. In the present study, a high-throughput image interpretation tool was developed to automatically acquire the number, size, and volume of grape berries from RGB (red-green-blue) images. Individual berries of one cluster were placed on a defined construction to take an RGB image from the top. The image interpretation of one dataset with an arbitrary number of images occurs automatically by starting the BAT (Berry Analysis Tool) developed in MATLAB. For validation of results, the number of berries was counted and their size was measured using a digital calliper. A measuring cylinder was used to reliably determine the berry volume by displacement of water. All placed berries were counted correctly (100%) by BAT. Manual ratings compared with BAT ratings showed strong correlations of r = 0.964 for mean berry diameter per image and r = 0.984 for berry volume.

    @Article{Kicherer2013,
    Title = {BAT (Berry Analysis Tool): A high-throughput image interpretation tool to acquire the number, diameter, and volume of grapevine berries},
    Author = {Kicherer, A. and Roscher, R. and Herzog, K. and {\vS}imon, S. and F\"orstner, W. and T\"opfer, R.},
    Journal = {Vitis},
    Year = {2013},
    Number = {3},
    Pages = {129--135},
    Volume = {52},
    Abstract = {QTL analysis (quantitative trait loci) and marker development rely on efficient phenotyping techniques. Objectivity and precision of a phenotypic data evaluation are crucial but time consuming. In the present study, a high-throughput image interpretation tool was developed to automatically acquire the number, size, and volume of grape berries from RGB (red-green-blue) images. Individual berries of one cluster were placed on a defined construction to take an RGB image from the top. The image interpretation of one dataset with an arbitrary number of images occurs automatically by starting the BAT (Berry Analysis Tool) developed in MATLAB. For validation of results, the number of berries was counted and their size was measured using a digital calliper. A measuring cylinder was used to reliably determine the berry volume by displacement of water. All placed berries were counted correctly (100\%) by BAT. Manual ratings compared with BAT ratings showed strong correlations of r = 0.964 for mean berry diameter per image and r = 0.984 for berry volume.},
    Owner = {ribana1},
    Timestamp = {2013.08.14}
    }

  • D. Maier, C. Stachniss, and M. Bennewitz, “Vision-Based Humanoid Navigation Using Self-Supervised Obstacle Detection,” The Int. Journal of Humanoid Robotics (IJHR), vol. 10, 2013.
    [BibTeX] [PDF]
    [none]
    @Article{Maier2013,
    Title = {Vision-Based Humanoid Navigation Using Self-Supervised Obstacle Detection},
    Author = {D. Maier and C. Stachniss and M. Bennewitz},
    Journal = ijhr,
    Year = {2013},
    Volume = {10},
    Abstract = {[none]},
    Issue = {2},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/maier13ijhr.pdf}
    }

  • M. Nieuwenhuisen, D. Droeschel, J. Schneider, D. Holz, T. Läbe, and S. Behnke, “Multimodal Obstacle Detection and Collision Avoidance for Micro Aerial Vehicles,” in Proceedings of the 6th European Conference on Mobile Robots (ECMR) , 2013. doi:10.1109/ECMR.2013.6698812
    [BibTeX] [PDF]
    Reliably perceiving obstacles and avoiding collisions is key for the fully autonomous application of micro aerial vehicles (MAVs). Limiting factors for increasing autonomy and complexity of MAVs (without external sensing and control) are limited onboard sensing and limited onboard processing power. In this paper, we propose a complete system with a multimodal sensor setup for omnidirectional obstacle perception. We developed a lightweight 3D laser scanner setup and visual obstacle detection using wide-angle stereo cameras. Together with our fast reactive collision avoidance approach based on local egocentric grid maps of the environment we aim at safe operation in the vicinity of structures like buildings or vegetation.

    @InProceedings{nieuwenhuisen13ecmr,
    Title = {Multimodal Obstacle Detection and Collision Avoidance for Micro Aerial Vehicles},
    Author = {Nieuwenhuisen, Matthias and Droeschel, David and Schneider, Johannes and Holz, Dirk and L\"abe, Thomas and Behnke, Sven},
    Booktitle = {Proceedings of the 6th European Conference on Mobile Robots (ECMR)},
    Year = {2013},
    Abstract = {Reliably perceiving obstacles and avoiding collisions is key for the fully autonomous application of micro aerial vehicles (MAVs). Limiting factors for increasing autonomy and complexity of MAVs (without external sensing and control) are limited onboard sensing and limited onboard processing power. In this paper, we propose a complete system with a multimodal sensor setup for omnidirectional obstacle perception. We developed a lightweight 3D laser scanner setup and visual obstacle detection using wide-angle stereo cameras. Together with our fast reactive collision avoidance approach based on local egocentric grid maps of the environment we aim at safe operation in the vicinity of structures like buildings or vegetation.},
    City = {Barcelona},
    Doi = {10.1109/ECMR.2013.6698812},
    Url = {http://www.ais.uni-bonn.de/papers/ECMR_2013_Nieuwenhuisen_Multimodal_Obstacle_Avoidance.pdf}
    }

  • J. C. Rose, “Automatische Lokalisierung einer Drohne in einer Karte,” Master Thesis, 2013.
    [BibTeX]
    \textbf{Summary} The number of scientific contributions dealing with automatic vision-based localization of mobile robots is significant. For a long time, contributions focused almost solely on mobile ground robots, but with the new availability of civilly usable UAVs (Unmanned Aerial Vehicles), interest in adapting the known methods to airworthy vehicles has risen. This work deals with developing a program system called LOCALIZE for determining the full 6DOF (Degrees of Freedom) pose of a UAV in a metric map using vision, whereby the metric map is constituted by landmarks derived from SIFT points. Position determination is achieved by solving the correspondence problem between SIFT points detected in the current image of the vision sensor and the landmarks. The potential of LOCALIZE concerning precision and accuracy of the determined position is evaluated in empirical studies using two vision sensors. Experiments demonstrate that the precision depends on the quality of the vision sensor. When using a high-quality sensor, a point error in position determination of about 1-3 cm and an accuracy of 1-7 cm can be reached. \textbf{Zusammenfassung} Die Anzahl wissenschaftlicher Beiträge zur automatischen Lokalisierung mobiler Roboter mittels Bildsensoren ist beträchtlich. Viele Beiträge fokussierten sich dabei lange auf die Untersuchung bodenbeschränkter Roboter. Im Laufe der letzten Jahre wuchs jedoch die Bedeutung der UAVs (Unmanned Aerial Vehicle) auch für zivile Anwendungen und damit das Interesse an einer Adaption der bisherigen Methoden an flugfähigen Robotern. In dieser Arbeit wird ein Programmsystem LOCALIZE für die 3D-Positionsbestimmung (mit allen 6 Freiheitsgraden) eines UAV mittels eines optischen Systems entworfen und sein Potential in empirischen Testszenarien evaluiert. 
Die Positionierung des Roboters geschieht dabei innerhalb einer a priori erstellten metrischen Karte eines Innenraums, die sich aus über den SIFT-Algorithmus abgeleiteten Landmarken konstituiert. Die Lokalisierung geschieht über Korrespondenzfindung zwischen den im aktuellen Bild des Roboters extrahierten und den Landmarken. Anhand der korrespondierenden Punkte in beiden Systemen wird ein iterativer Räumlicher Rückwärtsschnitt zur Positionsbestimmung verwendet. LOCALIZE wird anhand zweier Bildsensoren hinsichtlich potentieller Präzision und Richtigkeit der Positionsbestimmung untersucht. Die Experimente demonstrieren eine Abhängigkeit der Präzision von der Qualität des Bildsensors. Bei Verwendung eines hochwertigen Bildsensors kann ein Punktfehler der Positionierung von rund 1-3 cm und eine Richtigkeit von 1-7 cm erreicht werden.

    @MastersThesis{Rose2013Automatische,
    Title = {Automatische Lokalisierung einer Drohne in einer Karte},
    Author = {Rose, Johann Christian},
    School = {University of Bonn},
    Year = {2013},
    Note = {Betreuung: Prof. Dr. Bj\"orn Waske, Johannes Schneider},
    Abstract = {\textbf{Summary} The number of scientific contributions dealing with automatic vision based localization of mobile robots is significant. For a long time contributions have focused on mobile ground robots almost solely but with the new availability of civilly useable UAVs (Unmanned Aerial Vehicles) an interest in adapting the known methods for airworthy vehicles has risen. This work deals with developing a program system called LOCALZE for determining a full 6DOF (Degree of Freedom) position of an UAV in a metric map using vision whereby the metric map is constituted landmarks derived of SIFT-points. Position determination is reached over solving the correspondence problem between SIFT-points detected in the current image of the vision sensor and the landmarks. The potential of LOCALIZE concerning precision and accuracy of the determined position is evaluated in empirical studies using two vision sensors. Experiments demonstrate a dependency of the precision from the quality of the vision sensor. When using a high quality sensor a point error in position determination of about 1-3 cm and an accuracy of 1-7 cm can be reached. \textbf{Zusammenfassung} Die Anzahl wissenschaftlicher Beitr\"age zur automatischen Lokalisierung mobiler Roboter mittels Bildsensoren ist betr\"achtlich. Viele Beitr\"age fokussierten sich dabei lange auf die Untersuchung bodenbeschr\"ankter Roboter. Im Laufe der letzten Jahre wuchs jedoch die Bedeutung der UAVs (Unmanned Aerial Vehicle) auch f\"ur zivile Anwendungen und damit das Interesse an einer Adaption der bisherigen Methoden an flugf\"ahigen Robotern. In dieser Arbeit wird ein Programmsystem LOCALIZE f\"ur die 3D-Positionsbestimmung (mit allen 6 Freiheitsgraden) eines UAV mittels eines optischen Systems entworfen und sein Potential in empirischen Testszenarien evaluiert. 
Die Positionierung des Roboters geschieht dabei innerhalb einer a priori erstellten metrischen Karte eines Innenraums, die sich aus \"uber den SIFT-Algorithmus abgeleiteten Landmarken konstituiert. Die Lokalisierung geschieht \"uber Korrespondenzfindung zwischen den im aktuellen Bild des Roboters extrahierten und den Landmarken. Anhand der korrespondierenden Punkte in beiden Systemen wird ein iterativer R\"aumlicher R\"uckw\"artsschnitt zur Positionsbestimmung verwendet. LOCALIZE wird anhand zweier Bildsensoren hinsichtlich potentieller Pr\"azision und Richtigkeit der Positionsbestimmung untersucht. Die Experimente demonstrieren eine Abh\"angigkeit der Pr\"azision von der Qualit\"at des Bildsensors. Bei Verwendung eines hochwertigen Bildsensors kann ein Punktfehler der Positionierung von rund 1-3 cm und eine Richtigkeit von 1-7 cm erreicht werden.},
    City = {Bonn}
    }

  • S. Schallenberg, “Erfassung des Landbedeckungswandels im Rheinischen Braunkohlerevier mittels Landsat-Satellitendaten,” Bachelor Thesis, 2013.
    [BibTeX]
    @MastersThesis{Schallenberg2013,
    Title = {Erfassung des Landbedeckungswandels im Rheinischen Braunkohlerevier mittels Landsat-Satellitendaten},
    Author = {Schallenberg, Sebastian},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2013},
    Note = {Betreuung: Prof. Dr. Bj\"orn Waske, M.Sc. Jan Stefanski},
    Type = {bachelor thesis},
    Timestamp = {2014.01.20}
    }

  • F. Schindler, “Man-Made Surface Structures from Triangulated Point-Clouds,” PhD Thesis, 2013.
    [BibTeX] [PDF]
    Photogrammetry aims at reconstructing shape and dimensions of objects captured with cameras, 3D laser scanners or other spatial acquisition systems. While many acquisition techniques deliver triangulated point clouds with millions of vertices within seconds, the interpretation is usually left to the user. Especially when reconstructing man-made objects, one is interested in the underlying surface structure, which is not inherently present in the data. This includes the geometric shape of the object, e.g. cubical or cylindrical, as well as corresponding surface parameters, e.g. width, height and radius. Applications are manifold and range from industrial production control to architectural on-site measurements to large-scale city models. The goal of this thesis is to automatically derive such surface structures from triangulated 3D point clouds of man-made objects. They are defined as a compound of planar or curved geometric primitives. Model knowledge about typical primitives and relations between adjacent pairs of them should affect the reconstruction positively. After formulating a parametrized model for man-made surface structures, we develop a reconstruction framework with three processing steps: During a fast pre-segmentation exploiting local surface properties we divide the given surface mesh into planar regions. Making use of a model selection scheme based on minimizing the description length, this surface segmentation is free of control parameters and automatically yields an optimal number of segments. A subsequent refinement introduces a set of planar or curved geometric primitives and hierarchically merges adjacent regions based on their joint description length. A global classification and constraint parameter estimation combines the data-driven segmentation with high-level model knowledge. 
Therefore, we represent the surface structure with a graphical model and formulate factors based on likelihood as well as prior knowledge about parameter distributions and class probabilities. We infer the most probable setting of surface and relation classes with belief propagation and estimate an optimal surface parametrization with constraints induced by inter-regional relations. The process is specifically designed to work on noisy data with outliers and a few exceptional freeform regions not describable with geometric primitives. It yields full 3D surface structures with watertightly connected surface primitives of different types. The performance of the proposed framework is experimentally evaluated on various data sets. On small synthetically generated meshes we analyze the accuracy of the estimated surface parameters, the sensitivity w.r.t. various properties of the input data and w.r.t. model assumptions as well as the computational complexity. Additionally we demonstrate the flexibility w.r.t. different acquisition techniques on real data sets. The proposed method turns out to be accurate, reasonably fast and little sensitive to defects in the data or imprecise model assumptions.

    @PhdThesis{Schindler2013:Man-Made,
    Title = {Man-Made Surface Structures from Triangulated Point-Clouds},
    Author = {Schindler, Falko},
    School = {Department of Photogrammetry, University of Bonn},
    Year = {2013},
    Abstract = {Photogrammetry aims at reconstructing shape and dimensions of objects captured with cameras, 3D laser scanners or other spatial acquisition systems. While many acquisition techniques deliver triangulated point clouds with millions of vertices within seconds, the interpretation is usually left to the user. Especially when reconstructing man-made objects, one is interested in the underlying surface structure, which is not inherently present in the data. This includes the geometric shape of the object, e.g. cubical or cylindrical, as well as corresponding surface parameters, e.g. width, height and radius. Applications are manifold and range from industrial production control to architectural on-site measurements to large-scale city models. The goal of this thesis is to automatically derive such surface structures from triangulated 3D point clouds of man-made objects. They are defined as a compound of planar or curved geometric primitives. Model knowledge about typical primitives and relations between adjacent pairs of them should affect the reconstruction positively. After formulating a parametrized model for man-made surface structures, we develop a reconstruction framework with three processing steps: During a fast pre-segmentation exploiting local surface properties we divide the given surface mesh into planar regions. Making use of a model selection scheme based on minimizing the description length, this surface segmentation is free of control parameters and automatically yields an optimal number of segments. A subsequent refinement introduces a set of planar or curved geometric primitives and hierarchically merges adjacent regions based on their joint description length. A global classification and constraint parameter estimation combines the data-driven segmentation with high-level model knowledge. 
Therefore, we represent the surface structure with a graphical model and formulate factors based on likelihood as well as prior knowledge about parameter distributions and class probabilities. We infer the most probable setting of surface and relation classes with belief propagation and estimate an optimal surface parametrization with constraints induced by inter-regional relations. The process is specifically designed to work on noisy data with outliers and a few exceptional freeform regions not describable with geometric primitives. It yields full 3D surface structures with watertightly connected surface primitives of different types. The performance of the proposed framework is experimentally evaluated on various data sets. On small synthetically generated meshes we analyze the accuracy of the estimated surface parameters, the sensitivity w.r.t. various properties of the input data and w.r.t. model assumptions as well as the computational complexity. Additionally we demonstrate the flexibility w.r.t. different acquisition techniques on real data sets. The proposed method turns out to be accurate, reasonably fast and little sensitive to defects in the data or imprecise model assumptions.},
    Timestamp = {2013.11.26},
    Url = {http://hss.ulb.uni-bonn.de/2013/3435/3435.htm}
    }
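The hierarchical merging step described in the abstract above (adjacent regions are merged when their joint description length is shorter) can be illustrated with a small sketch. This is not the method from the thesis; it is a toy two-part MDL score with a hypothetical fixed model cost and a coding cost based on the residuals of a least-squares plane fit:

```python
import numpy as np

def description_length(points):
    """Toy two-part MDL score for a planar region: a fixed model cost
    plus a residual coding cost (n * log2 of the RMS plane-fit residual).
    Costs and units are hypothetical, for illustration only."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Smallest singular value of the centered points gives the
    # residual of the best-fitting plane (total least squares).
    _, s, _ = np.linalg.svd(pts - centroid)
    resid = s[-1] / np.sqrt(len(pts))   # RMS distance to the fitted plane
    model_cost = 4.0                    # fixed cost for plane parameters (hypothetical)
    data_cost = len(pts) * np.log2(max(resid, 1e-9))
    return model_cost + data_cost

def merge_if_shorter(region_a, region_b):
    """Merge two adjacent regions when their joint description is shorter
    than the sum of the separate descriptions."""
    joint = description_length(region_a + region_b)
    separate = description_length(region_a) + description_length(region_b)
    return joint < separate
```

With this criterion, two coplanar patches are merged (one plane explains both more compactly), while two patches on different planes are kept separate.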

  • F. Schindler, Ein LaTeX-Kochbuch, 2013.
    [BibTeX] [PDF]
    This document summarizes the most important LaTeX commands and constructs needed for writing scientific theses. Pointers to current and more comprehensive documentation are given. The installation of LaTeX and an editor (recommendation: TeX-Maker) as well as the basic compilation process are not covered. All examples are listed in full and attached to the document as TEX files (accessible via the paper-clip symbol in the page margin). They should compile without problems and produce the result shown next to or below them. Only the page margins have been cropped more or less generously for reasons of space.

    @Misc{Schindler2013Latex,
    Title = {Ein LaTeX-Kochbuch},
    Author = {Falko Schindler},
    Month = mar,
    Year = {2013},
    Abstract = {Dieses Dokument fasst die wichtigsten LaTeX-Befehle und -Konstrukte zusammen, die man f\"ur das Verfassen von wissenschaftlichen Arbeiten ben\"otigt. Auf aktuelle und umfangreichere Dokumentationen wird verwiesen. Auf die Installation von LaTeX und einem Editor (Empfehlung: TeX-Maker) sowie den grunds\"atzlichen Kompiliervorgang wird nicht weiter eingegangen. Alle Beispiele sind vollst\"andig aufgef\"uhrt und dem Dokument als TEX-Datei angeh\"angt (Aufruf \"uber B\"uroklammer-Symbol am Seitenrand). Sie sollten sich problemlos \"ubersetzen lassen und liefern das daneben oder darunter abgebildete Ergebnis. Lediglich die Seitenr\"ander wurden aus Platzgr\"unden mehr oder weniger gro{\ss}z\"ugig abgeschnitten.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Schindler2013Latex.pdf}
    }

  • F. Schindler and W. Förstner, “DijkstraFPS: Graph Partitioning in Geometry and Image Processing,” Photogrammetrie, Fernerkundung, Geoinformation (PFG), vol. 4, pp. 285-296, 2013. doi:10.1127/1432-8364/2013/0177
    [BibTeX]
    Data partitioning is a common problem in the field of point cloud and image processing applicable to segmentation and clustering. The general principle is to have high similarity of two data points, e.g. pixels or 3D points, within one region and low similarity among regions. This pair-wise similarity between data points can be represented in an attributed graph. In this article we propose a novel graph partitioning algorithm. It integrates a sampling strategy known as farthest point sampling with Dijkstra’s algorithm for deriving a distance transform on a general graph, which does not need to be embedded in some space. According to the pair-wise attributes a Voronoi diagram on the graph is generated yielding the desired segmentation. We demonstrate our approach on various applications such as surface triangulation, surface segmentation, clustering and image segmentation.

    @Article{Schindler2013DijkstraFPS,
    Title = {DijkstraFPS: Graph Partitioning in Geometry and Image Processing},
    Author = {Schindler, Falko and F\"orstner, Wolfgang},
    Journal = {Photogrammetrie, Fernerkundung, Geoinformation (PFG)},
    Year = {2013},
    Pages = {285--296},
    Volume = {4},
    Abstract = { Data partitioning is a common problem in the field of point cloud and image processing applicable to segmentation and clustering. The general principle is to have high similarity of two data points, e.g. pixels or 3D points, within one region and low similarity among regions. This pair-wise similarity between data points can be represented in an attributed graph. In this article we propose a novel graph partitioning algorithm. It integrates a sampling strategy known as farthest point sampling with Dijkstra's algorithm for deriving a distance transform on a general graph, which does not need to be embedded in some space. According to the pair-wise attributes a Voronoi diagram on the graph is generated yielding the desired segmentation. We demonstrate our approach on various applications such as surface triangulation, surface segmentation, clustering and image segmentation. },
    Doi = {10.1127/1432-8364/2013/0177}
    }
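The partitioning principle described in the abstract above (farthest point sampling combined with Dijkstra's algorithm, yielding a Voronoi diagram on the graph) can be sketched in a few lines. This is a minimal re-implementation of the general idea, not the authors' code; the adjacency-list representation and the seed-selection loop are assumptions:

```python
import heapq

def dijkstra_fps(neighbors, num_regions, start=0):
    """Partition a weighted graph into Voronoi-like regions.

    neighbors: dict mapping node -> list of (neighbor, edge_weight).
    Seeds are chosen by farthest point sampling: each new seed is the
    node with the largest graph distance to all current seeds.
    Returns (label, seeds) where label[v] is the seed owning node v.
    """
    dist = {v: float("inf") for v in neighbors}
    label = {v: None for v in neighbors}
    seeds = []

    def grow(seed):
        # Multi-source Dijkstra update: the new seed claims every node
        # that is closer to it than to any previously grown seed.
        dist[seed] = 0.0
        label[seed] = seed
        pq = [(0.0, seed)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue  # stale queue entry
            for v, w in neighbors[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    label[v] = label[u]
                    heapq.heappush(pq, (d + w, v))

    grow(start)
    seeds.append(start)
    for _ in range(num_regions - 1):
        far = max(dist, key=dist.get)   # farthest point sampling step
        seeds.append(far)
        grow(far)
    return label, seeds
```

On a path graph with unit weights, two regions seeded this way split the path in the middle, as the second seed lands at the opposite end.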

  • J. Schneider and W. Förstner, “Bundle Adjustment and System Calibration with Points at Infinity for Omnidirectional Camera Systems,” Z. f. Photogrammetrie, Fernerkundung und Geoinformation, vol. 4, pp. 309-321, 2013. doi:10.1127/1432-8364/2013/0179
    [BibTeX] [PDF]
    We present a calibration method for multi-view cameras that provides a rigorous maximum likelihood estimation of the mutual orientation of the cameras within a rigid multi-camera system. No calibration targets are needed, just a movement of the multi-camera system taking synchronized images of a highly textured and static scene. Multi-camera systems with non-overlapping views have to be rotated within the scene so that corresponding points are visible in different cameras at different times of exposure. By using an extended version of the projective collinearity equation all estimates can be optimized in one bundle adjustment where we constrain the relative poses of the cameras to be fixed. For stabilizing camera orientations – especially rotations – one should generally use points at the horizon within the bundle adjustment, which classical bundle adjustment programs are not capable of. We use a minimal representation of homogeneous coordinates for image and scene points which allows us to use images of omnidirectional cameras with single viewpoint like fisheye cameras and scene points at a large distance from the camera or even at infinity. We show results of our calibration method on (1) the omnidirectional multi-camera system Ladybug 3 from Point Grey, (2) a camera-rig with five cameras used for the acquisition of complex 3D structures and (3) a camera-rig mounted on a UAV consisting of four fisheye cameras which provide a large field of view and which is used for visual odometry and obstacle detection in the project MoD (DFG-Project FOR 1505 "Mapping on Demand").

    @Article{schneider13pfg,
    Title = {Bundle Adjustment and System Calibration with Points at Infinity for Omnidirectional Camera Systems},
    Author = {J. Schneider and W. F\"orstner},
    Journal = {Z. f. Photogrammetrie, Fernerkundung und Geoinformation},
    Year = {2013},
    Pages = {309--321},
    Volume = {4},
    Abstract = {We present a calibration method for multi-view cameras that provides a rigorous maximum likelihood estimation of the mutual orientation of the cameras within a rigid multi-camera system. No calibration targets are needed, just a movement of the multi-camera system taking synchronized images of a highly textured and static scene. Multi-camera systems with non-overlapping views have to be rotated within the scene so that corresponding points are visible in different cameras at different times of exposure. By using an extended version of the projective collinearity equation all estimates can be optimized in one bundle adjustment where we constrain the relative poses of the cameras to be fixed. For stabilizing camera orientations - especially rotations - one should generally use points at the horizon within the bundle adjustment, which classical bundle adjustment programs are not capable of. We use a minimal representation of homogeneous coordinates for image and scene points which allows us to use images of omnidirectional cameras with single viewpoint like fisheye cameras and scene points at a large distance from the camera or even at infinity. We show results of our calibration method on (1) the omnidirectional multi-camera system Ladybug 3 from Point Grey, (2) a camera-rig with five cameras used for the acquisition of complex 3D structures and (3) a camera-rig mounted on a UAV consisting of four fisheye cameras which provide a large field of view and which is used for visual odometry and obstacle detection in the project MoD (DFG-Project FOR 1505 "Mapping on Demand").},
    Doi = {10.1127/1432-8364/2013/0179},
    Url = {http://www.ipb.uni-bonn.de/pdfs/schneider13pfg.pdf}
    }

  • J. Schneider, T. Läbe, and W. Förstner, “Incremental Real-time Bundle Adjustment for Multi-camera Systems with Points at Infinity,” in ISPRS Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences , 2013, pp. 355-360. doi:10.5194/isprsarchives-XL-1-W2-355-2013
    [BibTeX] [PDF]
    This paper presents a concept and first experiments on a keyframe-based incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. In order to avoid periodic batch steps, we use the software iSAM2 for sparse nonlinear incremental optimization, which is highly efficient through incremental variable reordering and fluid relinearization. We adapted the software to allow for (1) multi-view cameras by taking the rigid transformation between the cameras into account, (2) omni-directional cameras as it can handle arbitrary bundles of rays and (3) scene points at infinity, which improve the estimation of the camera orientation as points at the horizon can be observed over long periods of time. The real-time bundle adjustment refers to sets of keyframes, consisting of frames, one per camera, taken in a synchronized way, that are initiated if a minimal geometric distance to the last keyframe set is exceeded. It uses interest points in the keyframes as observations, which are tracked in the synchronized video streams of the individual cameras and matched across the cameras, if possible. First experiments show the potential of the incremental bundle adjustment w.r.t. time requirements. Our experiments are based on a multi-camera system with four fisheye cameras, which are mounted on a UAV as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view.

    @InProceedings{schneider13isprs,
    Title = {Incremental Real-time Bundle Adjustment for Multi-camera Systems with Points at Infinity},
    Author = {J. Schneider and T. L\"abe and W. F\"orstner},
    Booktitle = {ISPRS Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2013},
    Pages = {355-360},
    Volume = {XL-1/W2},
    Abstract = {This paper presents a concept and first experiments on a keyframe-based incremental bundle adjustment for real-time structure and motion estimation in an unknown scene. In order to avoid periodic batch steps, we use the software iSAM2 for sparse nonlinear incremental optimization, which is highly efficient through incremental variable reordering and fluid relinearization. We adapted the software to allow for (1) multi-view cameras by taking the rigid transformation between the cameras into account, (2) omni-directional cameras as it can handle arbitrary bundles of rays and (3) scene points at infinity, which improve the estimation of the camera orientation as points at the horizon can be observed over long periods of time. The real-time bundle adjustment refers to sets of keyframes, consisting of frames, one per camera, taken in a synchronized way, that are initiated if a minimal geometric distance to the last keyframe set is exceeded. It uses interest points in the keyframes as observations, which are tracked in the synchronized video streams of the individual cameras and matched across the cameras, if possible. First experiments show the potential of the incremental bundle adjustment \wrt time requirements. Our experiments are based on a multi-camera system with four fisheye cameras, which are mounted on a UAV as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view.},
    Doi = {10.5194/isprsarchives-XL-1-W2-355-2013},
    Url = {http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-1-W2/355/2013/isprsarchives-XL-1-W2-355-2013.pdf}
    }
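The keyframe rule in the abstract above (a new keyframe set is initiated once a minimal geometric distance to the last keyframe set is exceeded) reduces to a simple distance check over the pose stream. A minimal sketch, with the threshold value and the 2D point representation as assumptions:

```python
import math

def select_keyframes(poses, min_dist=0.5):
    """Keep a pose as a new keyframe only when its geometric distance
    to the last accepted keyframe exceeds min_dist (hypothetical threshold).

    poses: iterable of point tuples, e.g. (x, y) positions along a trajectory.
    """
    keyframes = []
    for pose in poses:
        if not keyframes or math.dist(pose, keyframes[-1]) >= min_dist:
            keyframes.append(pose)
    return keyframes
```

For a trajectory sampled densely along a line, this keeps roughly one pose per `min_dist` of travel and drops the in-between frames.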

  • J. Siegemund, “Street Surfaces and Boundaries from Depth Image Sequences using Probabilistic Models,” PhD Thesis, 2013.
    [BibTeX] [PDF]
    This thesis presents an approach for the detection and reconstruction of street surfaces and boundaries from depth image sequences. Active driver assistance systems which monitor and interpret the environment based on vehicle mounted sensors to support the driver embody a current research focus of the automotive industry. An essential task of these systems is the modeling of the vehicle’s static environment. This comprises the determination of the vertical slope and curvature characteristics of the street surface as well as the robust detection of obstacles and, thus, the free drivable space (alias free-space). In this regard, obstacles of low height, e.g. curbs, are of special interest since they often embody the first geometric delimiter of the free-space. The usage of depth images acquired from stereo camera systems becomes more important in this context due to the high data rate and affordable price of the sensor. However, recent approaches for object detection are often limited to the detection of objects which are distinctive in height, such as cars and guardrails, or explicitly address the detection of particular object classes. These approaches are usually based on extremely restrictive assumptions, such as planar street surfaces, in order to deal with the high measurement noise. The main contribution of this thesis is the development, analysis and evaluation of an approach which detects the free-space in the immediate maneuvering area in front of the vehicle and explicitly models the free-space boundary by means of a spline curve. The approach considers in particular obstacles of low height (higher than 10 cm) without limitation on particular object classes. Furthermore, the approach has the ability to cope with various slope and curvature characteristics of the observed street surface and is able to reconstruct this surface by means of a flexible spline model. 
In order to allow for robust results despite the flexibility of the model and the high measurement noise, the approach employs probabilistic models for the preprocessing of the depth map data as well as for the detection of the drivable free-space. An elevation model is computed from the depth map considering the paths of the optical rays and the uncertainty of the depth measurements. Based on this elevation model, an iterative two step approach is performed which determines the drivable free-space by means of a Markov Random Field and estimates the spline parameters of the free-space boundary curve and the street surface. Outliers in the elevation data are explicitly modeled. The performance of the overall approach and the influence of key components are systematically evaluated within experiments on synthetic and real world test scenarios. The results demonstrate the ability of the approach to accurately model the boundary of the drivable free-space as well as the street surface even in complex scenarios with multiple obstacles or strong curvature of the street surface. The experiments further reveal the limitations of the approach, which are discussed in detail. Zusammenfassung Schätzung von Straßenoberflächen und -begrenzungen aus Sequenzen von Tiefenkarten unter Verwendung probabilistischer Modelle Diese Arbeit präsentiert ein Verfahren zur Detektion und Rekonstruktion von Straßenoberflächen und -begrenzungen auf der Basis von Tiefenkarten. Aktive Fahrerassistenzsysteme, welche mit der im Fahrzeug verbauten Sensorik die Umgebung erfassen, interpretieren und den Fahrer unterstützen, sind ein aktueller Forschungsschwerpunkt der Fahrzeugindustrie. Eine wesentliche Aufgabe dieser Systeme ist die Modellierung der statischen Fahrzeugumgebung. Dies beinhaltet die Bestimmung der vertikalen Neigungs- und Krümmungseigenschaften der Fahrbahn, sowie die robuste Detektion von Hindernissen und somit des befahrbaren Freiraumes.
Hindernisse von geringer Höhe, wie z.B. Bordsteine, sind in diesem Zusammenhang von besonderem Interesse, da sie häufig die erste geometrische Begrenzung des Fahrbahnbereiches darstellen. In diesem Kontext gewinnt die Verwendung von Tiefenkarten aus Stereo-Kamera-Systemen wegen der hohen Datenrate und relativ geringen Kosten des Sensors zunehmend an Bedeutung. Aufgrund des starken Messrauschens beschränken sich herkömmliche Verfahren zur Hinderniserkennung jedoch meist auf erhabene Objekte wie Fahrzeuge oder Leitplanken, oder aber adressieren einzelne Objektklassen wie Bordsteine explizit. Dazu werden häufig extrem restriktive Annahmen verwendet wie z.B. planare Straßenoberflächen. Der Hauptbeitrag dieser Arbeit besteht in der Entwicklung, Analyse und Evaluation eines Verfahrens, welches den befahrbaren Freiraum im Nahbereich des Fahrzeugs detektiert und dessen Begrenzung mit Hilfe einer Spline-Kurve explizit modelliert. Das Verfahren berücksichtigt insbesondere Hindernisse geringer Höhe (größer als 10 cm) ohne Beschränkung auf bestimmte Objektklassen. Weiterhin ist das Verfahren in der Lage, mit verschiedenartigen Neigungs- und Krümmungseigenschaften der vor dem Fahrzeug liegenden Fahrbahnoberfläche umzugehen und diese durch Verwendung eines flexiblen Spline-Modells zu rekonstruieren. Um trotz der hohen Flexibilität des Modells und des hohen Messrauschens robuste Ergebnisse zu erzielen, verwendet das Verfahren probabilistische Modelle zur Vorverarbeitung der Eingabedaten und zur Detektion des befahrbaren Freiraumes. Aus den Tiefenkarten wird unter Berücksichtigung der Strahlengänge und Unsicherheiten der Tiefenmessungen ein Höhenmodell berechnet. In einem iterativen Zwei-Schritt-Verfahren werden anhand dieses Höhenmodells der befahrbare Freiraum mit Hilfe eines Markov-Zufallsfeldes bestimmt sowie die Parameter der begrenzenden Spline-Kurve und Straßenoberfläche geschätzt.
Ausreißer in den Höhendaten werden dabei explizit modelliert. Die Leistungsfähigkeit des Gesamtverfahrens sowie der Einfluss zentraler Komponenten, wird im Rahmen von Experimenten auf synthetischen und realen Testszenen systematisch analysiert. Die Ergebnisse demonstrieren die Fähigkeit des Verfahrens, die Begrenzung des befahrbaren Freiraumes sowie die Fahrbahnoberfläche selbst in komplexen Szenarien mit multiplen Hindernissen oder starker Fahrbahnkrümmung akkurat zu modellieren. Weiterhin werden die Grenzen des Verfahrens aufgezeigt und detailliert untersucht.

    @PhdThesis{Siegemund2013,
    Title = {Street Surfaces and Boundaries from Depth Image Sequences using Probabilistic Models},
    Author = {Siegemund, Jan},
    School = {Department of Photogrammetry, University of Bonn},
    Year = {2013},
    Abstract = {This thesis presents an approach for the detection and reconstruction of street surfaces and boundaries from depth image sequences. Active driver assistance systems which monitor and interpret the environment based on vehicle mounted sensors to support the driver embody a current research focus of the automotive industry. An essential task of these systems is the modeling of the vehicle's static environment. This comprises the determination of the vertical slope and curvature characteristics of the street surface as well as the robust detection of obstacles and, thus, the free drivable space (alias free-space). In this regard, obstacles of low height, e.g. curbs, are of special interest since they often embody the first geometric delimiter of the free-space. The usage of depth images acquired from stereo camera systems becomes more important in this context due to the high data rate and affordable price of the sensor. However, recent approaches for object detection are often limited to the detection of objects which are distinctive in height, such as cars and guardrails, or explicitly address the detection of particular object classes. These approaches are usually based on extremely restrictive assumptions, such as planar street surfaces, in order to deal with the high measurement noise. The main contribution of this thesis is the development, analysis and evaluation of an approach which detects the free-space in the immediate maneuvering area in front of the vehicle and explicitly models the free-space boundary by means of a spline curve. The approach considers in particular obstacles of low height (higher than 10 cm) without limitation on particular object classes. Furthermore, the approach has the ability to cope with various slope and curvature characteristics of the observed street surface and is able to reconstruct this surface by means of a flexible spline model. 
In order to allow for robust results despite the flexibility of the model and the high measurement noise, the approach employs probabilistic models for the preprocessing of the depth map data as well as for the detection of the drivable free-space. An elevation model is computed from the depth map considering the paths of the optical rays and the uncertainty of the depth measurements. Based on this elevation model, an iterative two step approach is performed which determines the drivable free-space by means of a Markov Random Field and estimates the spline parameters of the free-space boundary curve and the street surface. Outliers in the elevation data are explicitly modeled. The performance of the overall approach and the influence of key components are systematically evaluated within experiments on synthetic and real world test scenarios. The results demonstrate the ability of the approach to accurately model the boundary of the drivable free-space as well as the street surface even in complex scenarios with multiple obstacles or strong curvature of the street surface. The experiments further reveal the limitations of the approach, which are discussed in detail. Zusammenfassung: Sch\"atzung von Stra\ss{}enoberfl\"achen und -begrenzungen aus Sequenzen von Tiefenkarten unter Verwendung probabilistischer Modelle. Diese Arbeit pr\"asentiert ein Verfahren zur Detektion und Rekonstruktion von Stra\ss{}enoberfl\"achen und -begrenzungen auf der Basis von Tiefenkarten. Aktive Fahrerassistenzsysteme, welche mit der im Fahrzeug verbauten Sensorik die Umgebung erfassen, interpretieren und den Fahrer unterst\"utzen, sind ein aktueller Forschungsschwerpunkt der Fahrzeugindustrie. Eine wesentliche Aufgabe dieser Systeme ist die Modellierung der statischen Fahrzeugumgebung. Dies beinhaltet die Bestimmung der vertikalen Neigungs- und Kr\"ummungseigenschaften der Fahrbahn, sowie die robuste Detektion von Hindernissen und somit des befahrbaren Freiraumes. 
Hindernisse von geringer H\"ohe, wie z.B. Bordsteine, sind in diesem Zusammenhang von besonderem Interesse, da sie h\"aufig die erste geometrische Begrenzung des Fahrbahnbereiches darstellen. In diesem Kontext gewinnt die Verwendung von Tiefenkarten aus Stereo-Kamera-Systemen wegen der hohen Datenrate und relativ geringen Kosten des Sensors zunehmend an Bedeutung. Aufgrund des starken Messrauschens beschr\"anken sich herk\"ommliche Verfahren zur Hinderniserkennung jedoch meist auf erhabene Objekte wie Fahrzeuge oder Leitplanken, oder aber adressieren einzelne Objektklassen wie Bordsteine explizit. Dazu werden h\"aufig extrem restriktive Annahmen verwendet wie z.B. planare Stra\ss{}enoberfl\"achen. Der Hauptbeitrag dieser Arbeit besteht in der Entwicklung, Analyse und Evaluation eines Verfahrens, welches den befahrbaren Freiraum im Nahbereich des Fahrzeugs detektiert und dessen Begrenzung mit Hilfe einer Spline-Kurve explizit modelliert. Das Verfahren ber\"ucksichtigt insbesondere Hindernisse geringer H\"ohe (gr\"o\ss{}er als 10 cm) ohne Beschr\"ankung auf bestimmte Objektklassen. Weiterhin ist das Verfahren in der Lage, mit verschiedenartigen Neigungs- und Kr\"ummungseigenschaften der vor dem Fahrzeug liegenden Fahrbahnoberfl\"ache umzugehen und diese durch Verwendung eines flexiblen Spline-Modells zu rekonstruieren. Um trotz der hohen Flexibilit\"at des Modells und des hohen Messrauschens robuste Ergebnisse zu erzielen, verwendet das Verfahren probabilistische Modelle zur Vorverarbeitung der Eingabedaten und zur Detektion des befahrbaren Freiraumes. Aus den Tiefenkarten wird unter Ber\"ucksichtigung der Strahleng\"ange und Unsicherheiten der Tiefenmessungen ein H\"ohenmodell berechnet. In einem iterativen Zwei-Schritt-Verfahren werden anhand dieses H\"ohenmodells der befahrbare Freiraum mit Hilfe eines Markov-Zufallsfeldes bestimmt sowie die Parameter der begrenzenden Spline-Kurve und Stra\ss{}enoberfl\"ache gesch\"atzt. 
Ausrei\ss{}er in den H\"ohendaten werden dabei explizit modelliert. Die Leistungsf\"ahigkeit des Gesamtverfahrens sowie der Einfluss zentraler Komponenten wird im Rahmen von Experimenten auf synthetischen und realen Testszenen systematisch analysiert. Die Ergebnisse demonstrieren die F\"ahigkeit des Verfahrens, die Begrenzung des befahrbaren Freiraumes sowie die Fahrbahnoberfl\"ache selbst in komplexen Szenarien mit multiplen Hindernissen oder starker Fahrbahnkr\"ummung akkurat zu modellieren. Weiterhin werden die Grenzen des Verfahrens aufgezeigt und detailliert untersucht.},
    Timestamp = {2013.10.07},
    Url = {http://hss.ulb.uni-bonn.de/2013/3436/3436.htm}
    }
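The abstract above describes representing the free-space boundary by a flexible spline fitted to elevation-derived measurements. A toy sketch of that representation step (not the thesis implementation; the sinusoidal boundary, noise level, and smoothing factor are invented):

```python
# Toy sketch: noisy free-space boundary points approximated by a smoothing
# spline curve. All names and values are invented for illustration.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
x = np.linspace(0.0, 20.0, 80)                 # lateral position [m]
true_boundary = 8.0 + 2.0 * np.sin(0.3 * x)    # distance to the first obstacle
measured = true_boundary + 0.3 * rng.normal(size=x.size)

# s balances fidelity against smoothness; n * sigma^2 is a common choice.
spline = UnivariateSpline(x, measured, k=3, s=x.size * 0.3**2)
boundary = spline(x)
```

The smoothing factor is what keeps the flexible model from chasing the measurement noise, the same robustness concern the thesis addresses with probabilistic models.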

  • J. Stefanski, B. Mack, and B. Waske, “Optimization of object-based image analysis with Random Forests for land cover mapping,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, iss. 6, pp. 2492-2504, 2013. doi:10.1109/JSTARS.2013.2253089
    [BibTeX]
    A prerequisite for object-based image analysis is the generation of adequate segments. However, the parameters for the image segmentation algorithms are often manually defined. Therefore, the generation of an ideal segmentation level is usually costly and user-dependent. In this paper a strategy for a semi-automatic optimization of object-based classification of multitemporal data is introduced by using Random Forests (RF) and a novel segmentation algorithm. The Superpixel Contour (SPc) algorithm is used to generate a set of different levels of segmentation, using various combinations of parameters in a user-defined range. Finally, the best parameter combination is selected based on the cross-validation-like out-of-bag (OOB) error that is provided by RF. Therefore, the quality of the parameters and the corresponding segmentation level can be assessed in terms of the classification accuracy, without providing additional independent test data. To evaluate the potential of the proposed concept, we focus on land cover classification of two study areas, using multitemporal RapidEye and SPOT 5 images. A classification that is based on eCognition’s widely used Multiresolution Segmentation algorithm (MRS) is used for comparison. Experimental results underline that the two segmentation algorithms SPc and MRS perform similarly in terms of accuracy and visual interpretation. The proposed strategy that uses the OOB error for the selection of the ideal segmentation level provides similar classification accuracies when compared to the results achieved by manual-based image segmentation. Overall, the proposed strategy is operational and easy to handle and thus reduces the effort of finding optimal segmentation parameters for the Superpixel Contour algorithm.

    @Article{Stefanski2013Optimization,
    Title = {Optimization of object-based image analysis with Random Forests for land cover mapping},
    Author = {Stefanski, Jan and Mack, Benjamin and Waske, Bj\"orn},
    Journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
    Year = {2013},
    Number = {6},
    Pages = {2492--2504},
    Volume = {6},
    Abstract = {A prerequisite for object-based image analysis is the generation of adequate segments. However, the parameters for the image segmentation algorithms are often manually defined. Therefore, the generation of an ideal segmentation level is usually costly and user-dependent. In this paper a strategy for a semi-automatic optimization of object-based classification of multitemporal data is introduced by using Random Forests (RF) and a novel segmentation algorithm. The Superpixel Contour (SPc) algorithm is used to generate a set of different levels of segmentation, using various combinations of parameters in a user-defined range. Finally, the best parameter combination is selected based on the cross-validation-like out-of-bag (OOB) error that is provided by RF. Therefore, the quality of the parameters and the corresponding segmentation level can be assessed in terms of the classification accuracy, without providing additional independent test data. To evaluate the potential of the proposed concept, we focus on land cover classification of two study areas, using multitemporal RapidEye and SPOT 5 images. A classification that is based on eCognition's widely used Multiresolution Segmentation algorithm (MRS) is used for comparison. Experimental results underline that the two segmentation algorithms SPc and MRS perform similarly in terms of accuracy and visual interpretation. The proposed strategy that uses the OOB error for the selection of the ideal segmentation level provides similar classification accuracies when compared to the results achieved by manual-based image segmentation. Overall, the proposed strategy is operational and easy to handle and thus reduces the effort of finding optimal segmentation parameters for the Superpixel Contour algorithm.},
    Doi = {10.1109/JSTARS.2013.2253089},
    ISSN = {1939-1404},
    Owner = {JanS},
    Timestamp = {2013.03.14}
    }
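The selection strategy in the abstract above — train a Random Forest per candidate segmentation level and keep the level with the lowest out-of-bag error — can be sketched as follows. This is not the authors' code; the candidate feature sets and parameter names are synthetic stand-ins:

```python
# Sketch of OOB-error-based model selection: one synthetic feature matrix
# per candidate segmentation level; the level whose features classify best
# (lowest OOB error) wins. Candidate names and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def oob_error(features, labels):
    """Train an RF with OOB scoring and return 1 - OOB accuracy."""
    rf = RandomForestClassifier(
        n_estimators=100, oob_score=True, bootstrap=True, random_state=0
    )
    rf.fit(features, labels)
    return 1.0 - rf.oob_score_

labels = rng.integers(0, 2, size=200)
candidates = {
    "scale=10": rng.normal(size=(200, 5)),                          # pure noise
    "scale=20": labels[:, None] + 0.5 * rng.normal(size=(200, 5)),  # informative
}

errors = {name: oob_error(X, labels) for name, X in candidates.items()}
best = min(errors, key=errors.get)
```

The point of the OOB error, as in the paper, is that no held-out test set is needed: each tree is evaluated on the samples it did not see during bootstrapping.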

  • S. Wenzel and W. Förstner, “Finding Poly-Curves of Straight Line and Ellipse Segments in Images,” Photogrammetrie, Fernerkundung, Geoinformation (PFG), vol. 4, pp. 297-308, 2013. doi:10.1127/1432-8364/2013/0178
    [BibTeX]
    Simplification of given polygons has attracted many researchers. Especially, finding circular and elliptical structures in images is relevant in many applications. Given pixel chains from edge detection, this paper proposes a method to segment them into straight line and ellipse segments. We propose an adaptation of Douglas-Peucker’s polygon simplification algorithm using circle segments instead of straight line segments and partition the sequence of points instead of the sequence of edges. It is robust and decreases the complexity of given polygons better than the original algorithm. In a second step, we further simplify the poly-curve by merging neighbouring segments to straight line and ellipse segments. Merging is based on the evaluation of variation of entropy for proposed geometric models, which turns out as a combination of hypothesis testing and model selection. We demonstrate the results of {\tt circlePeucker} as well as merging on several images of scenes with significant circular structures and compare them with the method of {\sc Patraucean} et al. (2012).

    @Article{Wenzel2013Finding,
    Title = {Finding Poly-Curves of Straight Line and Ellipse Segments in Images},
    Author = {Wenzel, Susanne and F\"orstner, Wolfgang},
    Journal = {Photogrammetrie, Fernerkundung, Geoinformation (PFG)},
    Year = {2013},
    Pages = {297--308},
    Volume = {4},
    Abstract = {Simplification of given polygons has attracted many researchers. Especially, finding circular and elliptical structures in images is relevant in many applications. Given pixel chains from edge detection, this paper proposes a method to segment them into straight line and ellipse segments. We propose an adaptation of Douglas-Peucker's polygon simplification algorithm using circle segments instead of straight line segments and partition the sequence of points instead of the sequence of edges. It is robust and decreases the complexity of given polygons better than the original algorithm. In a second step, we further simplify the poly-curve by merging neighbouring segments to straight line and ellipse segments. Merging is based on the evaluation of variation of entropy for proposed geometric models, which turns out as a combination of hypothesis testing and model selection. We demonstrate the results of {\tt circlePeucker} as well as merging on several images of scenes with significant circular structures and compare them with the method of {\sc Patraucean} et al. (2012).},
    Doi = {10.1127/1432-8364/2013/0178},
    File = {Technical Report:Wenzel2013Finding.pdf}
    }
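The abstract above builds on Douglas-Peucker simplification. A minimal sketch of the classic algorithm (not the paper's code): the proposed {\tt circlePeucker} variant replaces the point-to-line distance below with the distance to a fitted circle segment and partitions the point sequence rather than the edge sequence.

```python
# Minimal Douglas-Peucker: recursively keep the point farthest from the
# chord between the endpoints while that distance exceeds a tolerance.
import math

def point_line_dist(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    den = math.hypot(bx - ax, by - ay)
    return num / den if den else math.hypot(px - ax, py - ay)

def douglas_peucker(points, tol):
    """Keep the farthest point and recurse while it exceeds tol."""
    if len(points) < 3:
        return list(points)
    dists = [point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: i + 1], tol)
    right = douglas_peucker(points[i:], tol)
    return left[:-1] + right            # split point appears in both halves

poly = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(poly, 1.0)
```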

  • S. Wenzel and W. Förstner, “Finding Poly-Curves of Straight Line and Ellipse Segments in Images,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2013-02, 2013.
    [BibTeX] [PDF]
    Simplification of given polygons has attracted many researchers. Especially, finding circular and elliptical structures in images is relevant in many applications. Given pixel chains from edge detection, this paper proposes a method to segment them into straight line and ellipse segments. We propose an adaptation of Douglas-Peucker’s polygon simplification algorithm using circle segments instead of straight line segments and partition the sequence of points instead of the sequence of edges. It is robust and decreases the complexity of given polygons better than the original algorithm. In a second step, we further simplify the poly-curve by merging neighbouring segments to straight line and ellipse segments. Merging is based on the evaluation of variation of entropy for proposed geometric models, which turns out as a combination of hypothesis testing and model selection. We demonstrate the results of {\tt circlePeucker} as well as merging on several images of scenes with significant circular structures and compare them with the method of {\sc Patraucean} et al. (2012).

    @TechReport{Wenzel2013FindingTR,
    Title = {Finding Poly-Curves of Straight Line and Ellipse Segments in Images},
    Author = {Wenzel, Susanne and F\"orstner, Wolfgang},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2013},
    Month = {July},
    Number = {TR-IGG-P-2013-02},
    Abstract = {Simplification of given polygons has attracted many researchers. Especially, finding circular and elliptical structures in images is relevant in many applications. Given pixel chains from edge detection, this paper proposes a method to segment them into straight line and ellipse segments. We propose an adaptation of Douglas-Peucker's polygon simplification algorithm using circle segments instead of straight line segments and partition the sequence of points instead of the sequence of edges. It is robust and decreases the complexity of given polygons better than the original algorithm. In a second step, we further simplify the poly-curve by merging neighbouring segments to straight line and ellipse segments. Merging is based on the evaluation of variation of entropy for proposed geometric models, which turns out as a combination of hypothesis testing and model selection. We demonstrate the results of {\tt circlePeucker} as well as merging on several images of scenes with significant circular structures and compare them with the method of {\sc Patraucean} et al. (2012).},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Wenzel2013Finding.pdf}
    }

  • K. M. Wurm, C. Dornhege, B. Nebel, W. Burgard, and C. Stachniss, “Coordinating Heterogeneous Teams of Robots using Temporal Symbolic Planning,” Autonomous Robots, vol. 34, 2013.
    [BibTeX] [PDF]
    [none]
    @Article{Wurm2013,
    Title = {Coordinating Heterogeneous Teams of Robots using Temporal Symbolic Planning},
    Author = {K.M. Wurm and C. Dornhege and B. Nebel and W. Burgard and C. Stachniss},
    Journal = auro,
    Year = {2013},
    Volume = {34},
    Abstract = {[none]},
    Number = {4},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/wurm13auro.pdf}
    }

  • K. M. Wurm, H. Kretzschmar, R. Kümmerle, C. Stachniss, and W. Burgard, “Identifying Vegetation from Laser Data in Structured Outdoor Environments,” Robotics and Autonomous Systems, 2013.
    [BibTeX] [PDF]
    [none]
    @Article{Wurm2013a,
    Title = {Identifying Vegetation from Laser Data in Structured Outdoor Environments},
    Author = {K.M. Wurm and H. Kretzschmar and R. K{\"u}mmerle and C. Stachniss and W. Burgard},
    Journal = jras,
    Year = {2013},
    Note = {In press},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/wurm13ras.pdf}
    }

2012

  • N. Abdo, H. Kretzschmar, and C. Stachniss, “From Low-Level Trajectory Demonstrations to Symbolic Actions for Planning,” in Proceedings of the ICAPS Workshop on Combining Task and Motion Planning for Real-World Applications (TAMPRA) , 2012.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Abdo2012,
    Title = {From Low-Level Trajectory Demonstrations to Symbolic Actions for Planning},
    Author = {N. Abdo and H. Kretzschmar and C. Stachniss},
    Booktitle = {Proceedings of the ICAPS Workshop on Combining Task and Motion Planning for Real-World Applications (TAMPRA)},
    Year = {2012},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/abdo12tampra.pdf}
    }

  • P. A. Becker, “3D Rekonstruktion symmetrischer Objekte aus Tiefenbildern,” Bachelor Thesis, Institute of Photogrammetry, University of Bonn, 2012.
    [BibTeX]
    none

    @MastersThesis{Becker2012Rekonstruktion,
    Title = {3D Rekonstruktion symmetrischer Objekte aus Tiefenbildern},
    Author = {Becker, Philip Alexander},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2012},
    Type = {bachelor thesis},
    Abstract = {none},
    Timestamp = {2013.04.16}
    }

  • D. Chai, W. Förstner, and M. Ying Yang, “Combine Markov Random Fields and Marked Point Processes to extract Building from Remotely Sensed Images,” in ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences , 2012. doi:10.5194/isprsannals-I-3-365-2012
    [BibTeX] [PDF]
    Automatic building extraction from remotely sensed images is a research topic much more significant than ever. One of the key issues is object and image representation. Markov random fields usually referring to the pixel level cannot represent high-level knowledge well. On the contrary, marked point processes cannot represent low-level information well even though they are a powerful model at object level. We propose to combine Markov random fields and marked point processes to represent both low-level information and high-level knowledge, and present a combined framework of modelling and estimation for building extraction from a single remotely sensed image. At high level, rectangles are used to represent buildings, and a marked point process is constructed to represent the buildings in the ground scene. Interactions between buildings are introduced into the model to represent their relationships. At the low level, a MRF is used to represent the statistics of the image appearance. Histograms of colours are adopted to represent the building’s appearance. The high-level model and the low-level model are combined by establishing correspondences between marked points and nodes of the MRF. We adopt reversible jump Markov Chain Monte Carlo (RJMCMC) techniques to explore the configuration space at the high level, and adopt a Graph Cut algorithm to optimize the configuration at the low level. We propose a top-down schema to use results from high level to guide the optimization at low level, and propose a bottom-up schema to use results from low level to drive the sampling at high level. Experimental results demonstrate that better results can be achieved by adopting such hybrid representation.

    @InProceedings{chai*12:combine,
    Title = {Combine Markov Random Fields and Marked Point Processes to extract Building from Remotely Sensed Images},
    Author = {Chai, D. and F\"orstner, W. and Ying Yang, M.},
    Booktitle = {ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2012},
    Abstract = {Automatic building extraction from remotely sensed images is a research topic much more significant than ever. One of the key issues is object and image representation. Markov random fields usually referring to the pixel level cannot represent high-level knowledge well. On the contrary, marked point processes cannot represent low-level information well even though they are a powerful model at object level. We propose to combine Markov random fields and marked point processes to represent both low-level information and high-level knowledge, and present a combined framework of modelling and estimation for building extraction from a single remotely sensed image. At high level, rectangles are used to represent buildings, and a marked point process is constructed to represent the buildings in the ground scene. Interactions between buildings are introduced into the model to represent their relationships. At the low level, a MRF is used to represent the statistics of the image appearance. Histograms of colours are adopted to represent the building's appearance. The high-level model and the low-level model are combined by establishing correspondences between marked points and nodes of the MRF. We adopt reversible jump Markov Chain Monte Carlo (RJMCMC) techniques to explore the configuration space at the high level, and adopt a Graph Cut algorithm to optimize the configuration at the low level. We propose a top-down schema to use results from high level to guide the optimization at low level, and propose a bottom-up schema to use results from low level to drive the sampling at high level. Experimental results demonstrate that better results can be achieved by adopting such hybrid representation.},
    Doi = {10.5194/isprsannals-I-3-365-2012},
    Timestamp = {2015.07.09},
    Url = {http://www.ipb.uni-bonn.de/pdfs/isprsannals-I-3-365-2012.pdf}
    }

  • W. Förstner, “Minimal Representations for Testing and Estimation in Projective Spaces,” Z. f. Photogrammetrie, Fernerkundung und Geoinformation, vol. 3, pp. 209-220, 2012. doi:10.1127/1432-8364/2012/0112
    [BibTeX]
    Testing and estimation using homogeneous coordinates and matrices has to cope with obstacles such as singularities of covariance matrices and redundant parametrizations. The paper proposes a representation of the uncertainty of all types of geometric entities which (1) only requires the minimum number of parameters, (2) is free of singularities, (3) makes it possible to exploit the simplicity of homogeneous coordinates to represent geometric constraints and (4) allows handling geometric entities which are at infinity or at least very far away. We develop the concept, discuss its usefulness for bundle adjustment and demonstrate its applicability for determining 3D lines from observed image line segments in a multi view setup.

    @Article{Forstner2012Minimal,
    Title = {Minimal Representations for Testing and Estimation in Projective Spaces},
    Author = {F\"orstner, Wolfgang},
    Journal = {Z. f. Photogrammetrie, Fernerkundung und Geoinformation},
    Year = {2012},
    Pages = {209--220},
    Volume = {3},
    Abstract = {Testing and estimation using homogeneous coordinates and matrices has to cope with obstacles such as singularities of covariance matrices and redundant parametrizations. The paper proposes a representation of the uncertainty of all types of geometric entities which (1) only requires the minimum number of parameters, (2) is free of singularities, (3) makes it possible to exploit the simplicity of homogeneous coordinates to represent geometric constraints and (4) allows handling geometric entities which are at infinity or at least very far away. We develop the concept, discuss its usefulness for bundle adjustment and demonstrate its applicability for determining 3D lines from observed image line segments in a multi view setup.},
    Doi = {10.1127/1432-8364/2012/0112},
    File = {Technical Report:Forstner2012Minimal.pdf},
    Timestamp = {2013.01.09}
    }
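The core idea of the abstract above — a spherically normalized homogeneous vector has a singular covariance, and projecting onto the tangent space gives a minimal, singularity-free parametrization — can be sketched numerically. This is an illustrative toy (the covariance values are invented), not the paper's derivation:

```python
# A spherically normalized homogeneous 2D point x (unit 3-vector) leaves
# only 2 degrees of freedom; a basis J of null(x^T) maps its uncertainty
# to a regular 2x2 covariance, the "minimal representation".
import numpy as np

def null_basis(x):
    """Orthonormal 3x2 basis of the plane orthogonal to the unit vector x."""
    _, _, vt = np.linalg.svd(x[None, :])
    return vt[1:].T                      # rows 2 and 3 of V^T span null(x^T)

x = np.array([3.0, 4.0, 1.0])
x /= np.linalg.norm(x)                   # spherical normalization

J = null_basis(x)                        # J^T x = 0, J^T J = I_2
sigma = 0.01 * np.eye(3)                 # toy full-rank homogeneous uncertainty
sigma_r = J.T @ sigma @ J                # minimal representation: regular 2x2
sigma_s = J @ sigma_r @ J.T              # back-projected: singular, rank 2
```

The 2x2 matrix is invertible and so usable in testing and estimation, while the back-projected 3x3 covariance is rank-deficient by construction, exactly the singularity the minimal representation avoids.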

  • W. Förstner, “Minimal Representations for Testing and Estimation in Projective Spaces,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2012-03, 2012.
    [BibTeX] [PDF]
    Testing and estimation using homogeneous coordinates and matrices has to cope with obstacles such as singularities of covariance matrices and redundant parametrizations. The paper proposes a representation of the uncertainty of all types of geometric entities which (1) only requires the minimum number of parameters, (2) is free of singularities, (3) makes it possible to exploit the simplicity of homogeneous coordinates to represent geometric constraints and (4) allows handling geometric entities which are at infinity or at least very far away. We develop the concept, discuss its usefulness for bundle adjustment and demonstrate its applicability for determining 3D lines from observed image line segments in a multi view setup.

    @TechReport{Forstner2012MinimalReport,
    Title = {Minimal Representations for Testing and Estimation in Projective Spaces},
    Author = {F\"orstner, Wolfgang},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2012},
    Number = {TR-IGG-P-2012-03},
    Abstract = {Testing and estimation using homogeneous coordinates and matrices has to cope with obstacles such as singularities of covariance matrices and redundant parametrizations. The paper proposes a representation of the uncertainty of all types of geometric entities which (1) only requires the minimum number of parameters, (2) is free of singularities, (3) makes it possible to exploit the simplicity of homogeneous coordinates to represent geometric constraints and (4) allows handling geometric entities which are at infinity or at least very far away. We develop the concept, discuss its usefulness for bundle adjustment and demonstrate its applicability for determining 3D lines from observed image line segments in a multi view setup.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Forstner2012Minimal.pdf}
    }

  • S. Gehrig, A. Barth, N. Schneider, and J. Siegemund, “A Multi-Cue Approach for Stereo-Based Object Confidence Estimation,” in Intelligent Robots and Systems (IROS) , Vilamoura, Portugal, 2012, pp. 3055-3060. doi:10.1109/IROS.2012.6385455
    [BibTeX]
    In this contribution we present an approach to compute object confidences for stereo-vision-based object tracking schemes. Meaningful object confidences help to reduce false alarm rates of safety systems and improve the downstream system performance for modules such as sensor fusion and situation analysis. Several cues from stereo vision and from the tracking process are fused in a Bayesian manner. An evaluation on a 38,000-frame urban drive shows the effectiveness of the approach compared to the same object tracking scheme with simple heuristics for the object confidence. Within the evaluation, the relevance of occurring phantoms is also considered by computing the collision risk. The proposed confidence measures reduce the number of predicted imminent collisions from 86 to 0 while maintaining almost the same system availability.

    @InProceedings{Gehrig2012Multi,
    Title = {A Multi-Cue Approach for Stereo-Based Object Confidence Estimation},
    Author = {Gehrig, Stefan and Barth, Alexander and Schneider, Nicolai and Siegemund, Jan},
    Booktitle = {Intelligent Robots and Systems (IROS)},
    Year = {2012},
    Address = {Vilamoura, Portugal},
    Pages = {3055--3060},
    Abstract = {In this contribution we present an approach to compute object confidences for stereo-vision-based object tracking schemes. Meaningful object confidences help to reduce false alarm rates of safety systems and improve the downstream system performance for modules such as sensor fusion and situation analysis. Several cues from stereo vision and from the tracking process are fused in a Bayesian manner. An evaluation on a 38,000-frame urban drive shows the effectiveness of the approach compared to the same object tracking scheme with simple heuristics for the object confidence. Within the evaluation, the relevance of occurring phantoms is also considered by computing the collision risk. The proposed confidence measures reduce the number of predicted imminent collisions from 86 to 0 while maintaining almost the same system availability.},
    Doi = {10.1109/IROS.2012.6385455}
    }
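Fusing several cues "in a Bayesian manner" into one object confidence, as the abstract above describes, is often done in log-odds space under a naive independence assumption. A toy sketch (the cue likelihood ratios are invented numbers, not values from the paper):

```python
# Toy naive-Bayes cue fusion: each cue contributes a likelihood ratio
# P(cue | object) / P(cue | phantom); products become sums in log-odds.
import math

def fuse_confidence(prior, likelihood_ratios):
    """Fuse independent cues in log-odds space; returns P(object | cues)."""
    log_odds = math.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Three supporting cues and one mildly contradicting cue.
conf = fuse_confidence(prior=0.5, likelihood_ratios=[4.0, 2.5, 3.0, 0.8])
```

With a neutral prior the posterior odds equal the product of the ratios (here 24:1, i.e. a confidence of 0.96); a cue with ratio below 1 pulls the confidence down.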

  • G. Grisetti, L. Iocchi, B. Leibe, V. A. Ziparo, and C. Stachniss, “Digitization of Inaccessible Archeological Sites with Autonomous Mobile Robots,” in Conference on Robotics Innovation for Cultural Heritage , 2012.
    [BibTeX]
    [none]
    @InProceedings{Grisetti2012,
    Title = {Digitization of Inaccessible Archeological Sites with Autonomous Mobile Robots},
    Author = {G. Grisetti and L. Iocchi and B. Leibe and V.A. Ziparo and C. Stachniss},
    Booktitle = {Conference on Robotics Innovation for Cultural Heritage},
    Year = {2012},
    Abstract = {[none]},
    Note = {Extended abstract},
    Timestamp = {2014.04.24}
    }

  • M. Hans, “Die Verbesserung einer Bildsegmentierung unter Verwendung von 3D Merkmalen,” Bachelor Thesis, Institute of Photogrammetry, University of Bonn, 2012.
    [BibTeX] [PDF]
    The goal of a partitional image segmentation is to divide an image into regions, with every pixel assigned to exactly one region. Under unfavorable lighting conditions, a segmentation based on image data alone is not sufficient, since adjacent object parts with similar color values cannot be distinguished. Such image segmentations can be improved with the help of 3D features. The focus of this work is on segmented aerial images showing roof surfaces. Assuming that the roofs are composed of first-order surfaces, two planes are first estimated in the 3D points assigned to each pre-segmented image region, using random sample consensus (RANSAC, Fischler and Bolles (1981)). We restrict ourselves to the separating edge of two roof surfaces that meet at a known angle $\varphi$ and have the same slope, so the plane parameters can already be computed from four suitable points in object coordinates. Using the planes estimated in the point cloud, the corresponding image region can be split; to this end, a linear discriminative model is applied in order to draw a straight edge as a separation in the image segmentation. A visual evaluation of the results shows that the presented methods split the roof regions at a sensible position; the methods are tested on images with different roof shapes. Their performance depends above all on the configuration of the points selected by RANSAC. This work thus describes methods that improve a region-based segmentation of roof surfaces in aerial images by using 3D features.

    @MastersThesis{Hans2010Die,
    Title = {Die Verbesserung einer Bildsegmentierung unter Verwendung von 3D Merkmalen},
    Author = {Hans, Mathias},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2012},
    Note = {Betreuung: Prof. Dr.-Ing Wolfgang F\"orstner, Dipl.-Ing. Ribana Roscher},
    Type = {bachelor thesis},
    Abstract = {Ziel einer partionellen Bildsegmentierung ist die Einteilung eines Bildes in Regionen. Dabei wird jedes Pixel zu je einer Region zugeordnet. Liegen ung\"unstige Beleuchtungsverh\"altnisse im Bild vor, ist eine Segmentierung einzig basierend auf Bilddaten nicht ausreichend, da aneinandergrenzende Objekteile mit \"ahnlichen Farbwerten nicht unterschieden werden k\"onnen. Mit Hilfe von 3D-Merkmalen k\"onnen wir solche Bildsegmentierungen verbessern. Dabei liegt der Fokus der Arbeit auf segmentierten Luftbildern mit Dachfl\"achen. Mit der Annahme, dass sich die D\"acher aus Fl\"achen erster Ordnung zusammensetzen, werden in den vorsegmentierten Bildregionen zun\"achst zwei Ebenen in den zugeordneten Punkten einer 3D-Punktwolke gesch\"atzt. Hierzu wird der random sample consensus (RANSAC, Fischler and Bolles (1981)) verwendet. Wir beschr\"anken uns auf die Trennkante zweier Dachfl\"achen, die in einem bekannten Winkel $\varphi$ zueinander stehen und die gleiche Neigung haben. Die Berechnung der Ebenenparameter ist somit bereits mit vier geeigneten Punkten der Objektkoordinaten m\"oglich. Mit den gesch\"atzten Ebenen in der Punktwolke segmentierte Bildregion kann diese aufgesplittet werden. Hierzu wenden wir ein lineares diskriminatives Modell an, um eine lineare Kante als Trennung in der Bildsegmentierung einzeichnen zu k\"onnen. Eine visuelle Evaluierung der Ergebnisse zeigt, dass die hier vorgestellten Verfahren eine Trennung der Dachregionen an einer sinnvollen Stelle erm\"oglichen. Dabei werden die Verfahren an Bildern mit unterschiedlichen Dachformen getestet. Die Leistungsf\"ahigkeit der Verfahren h\"angt vor Allem von der Punktkonfiguration der von RANSAC ausgew\"ahlten Punkte ab. Diese Arbeit beschreibt uns somit Verfahren, die eine regionenbasierende Segmentierung von Dachfl\"achen auf Luftbildern unter der Verwendung von 3D Merkmalen verbessern.},
    City = {Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Hans2010Die.pdf}
    }
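The RANSAC plane estimation the thesis abstract relies on (Fischler and Bolles, 1981) can be sketched on a synthetic point cloud. This is a generic illustration, not the thesis code; the plane, noise level, and thresholds are invented:

```python
# Minimal RANSAC plane fit: sample 3 points, count inliers by point-to-plane
# distance, refit on the best consensus set. All values are toy choices.
import numpy as np

rng = np.random.default_rng(1)

def fit_plane(pts):
    """Least-squares plane through points: unit normal n and offset d (n.p = d)."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]                               # direction of smallest spread
    return n, n @ centroid

def ransac_plane(pts, n_iter=200, thresh=0.05):
    best_inliers = None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n, d = fit_plane(sample)
        inliers = np.abs(pts @ n - d) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_plane(pts[best_inliers])      # refit on the consensus set

# Noisy points on the plane z = 0.2x + 0.1y, plus 20% gross outliers.
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + 0.005 * rng.normal(size=200)
pts = np.column_stack([xy, z])
pts[:40] += rng.uniform(-1, 1, size=(40, 3))
n, d = ransac_plane(pts)
```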

  • D. Joho, G. D. Tipaldi, N. Engelhard, C. Stachniss, and W. Burgard, “Nonparametric Bayesian Models for Unsupervised Scene Analysis and Reconstruction,” in Proceedings of Robotics: Science and Systems (RSS) , 2012.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Joho2012,
    Title = {Nonparametric {B}ayesian Models for Unsupervised Scene Analysis and Reconstruction},
    Author = {D. Joho and G.D. Tipaldi and N. Engelhard and C. Stachniss and W. Burgard},
    Booktitle = rss,
    Year = {2012},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/joho12rss.pdf}
    }

  • S. Klemenjak, B. Waske, S. Valero, and J. Chanussot, “Automatic Detection of Rivers in High-Resolution SAR Data,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, iss. 5, pp. 1364-1372, 2012. doi:10.1109/JSTARS.2012.2189099
    [BibTeX]
    Remote sensing plays a major role in supporting decision-making and surveying compliance of several multilateral environmental treaties. In this paper, we present an approach for supporting monitoring compliance of river networks in the context of the European Water Framework Directive. Only a few approaches have been developed for extracting river networks from satellite data and usually they require manual input, which does not seem feasible for automatic and operational application. We propose a method for the automatic extraction of river structures in TerraSAR-X data. The method is based on mathematical morphology and supervised image classification, using automatically selected training samples. The method is applied on TerraSAR-X images from two different study sites. In addition, the results are compared to an alternative method, which requires manual user interaction. The detailed accuracy assessment shows that the proposed method achieves accurate results (Kappa $\sim$ 0.7) and performs almost similarly in terms of accuracy when compared to the alternative approach. Moreover, the proposed method can be applied on various datasets (e.g., multitemporal, multisensoral and multipolarized) and does not require any additional user input. Thus, the highly flexible approach is interesting in terms of operational monitoring systems and large-scale applications.

    @Article{Klemenjak2012Automatic,
    Title = {Automatic Detection of Rivers in High-Resolution SAR Data},
    Author = {Klemenjak, Sascha and Waske, Bj\"orn and Valero, Silvia and Chanussot, Jocelyn},
    Journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
    Year = {2012},
    Month = oct,
    Number = {5},
    Pages = {1364--1372},
    Volume = {5},
    Abstract = {Remote sensing plays a major role in supporting decision-making and surveying compliance of several multilateral environmental treaties. In this paper, we present an approach for supporting monitoring compliance of river networks in the context of the European Water Framework Directive. Only a few approaches have been developed for extracting river networks from satellite data, and they usually require manual input, which is hardly feasible for automatic and operational application. We propose a method for the automatic extraction of river structures in TerraSAR-X data. The method is based on mathematical morphology and supervised image classification, using automatically selected training samples. The method is applied on TerraSAR-X images from two different study sites. In addition, the results are compared to an alternative method, which requires manual user interaction. The detailed accuracy assessment shows that the proposed method achieves accurate results (Kappa $\sim$ 0.7) and performs almost similarly in terms of accuracy when compared to the alternative approach. Moreover, the proposed method can be applied on various datasets (e.g., multitemporal, multisensoral and multipolarized) and does not require any additional user input. Thus, the highly flexible approach is interesting in terms of operational monitoring systems and large-scale applications.},
    Doi = {10.1109/JSTARS.2012.2189099},
    ISSN = {1939-1404},
    Owner = {waske},
    Timestamp = {2012.09.06}
    }

  • F. Korč, “Tractable Learning for a Class of Global Discriminative Models for Context Sensitive Image Interpretation,” PhD Thesis, 2012.
    [BibTeX] [PDF]
    @PhdThesis{Korvc2012Tractable,
    Title = {Tractable Learning for a Class of Global Discriminative Models for Context Sensitive Image Interpretation},
    Author = {Kor{\vc}, Filip},
    School = {Department of Photogrammetry, University of Bonn},
    Year = {2012},
    Url = {http://hss.ulb.uni-bonn.de/2012/3010/3010.htm}
    }

  • H. Kretzschmar and C. Stachniss, “Information-Theoretic Pose Graph Compression for Laser-based SLAM,” The International Journal of Robotics Research, vol. 31, iss. 11, pp. 1219-1230, 2012.
    [BibTeX] [PDF]
    @Article{Kretzschmar2012,
    Title = {Information-Theoretic Pose Graph Compression for Laser-based {SLAM}},
    Author = {H. Kretzschmar and C. Stachniss},
    Journal = ijrr,
    Year = {2012},
    Pages = {1219--1230},
    Volume = {31},
    Number = {11},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/kretzschmar12ijrr.pdf}
    }

  • J. Roewekaemper, C. Sprunk, G. D. Tipaldi, C. Stachniss, P. Pfaff, and W. Burgard, “On the Position Accuracy of Mobile Robot Localization based on Particle Filters combined with Scan Matching,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , 2012.
    [BibTeX] [PDF]
    @InProceedings{Roewekaemper2012,
    Title = {On the Position Accuracy of Mobile Robot Localization based on Particle Filters combined with Scan Matching},
    Author = {J. Roewekaemper and C. Sprunk and G.D. Tipaldi and C. Stachniss and P. Pfaff and W. Burgard},
    Booktitle = IROS,
    Year = {2012},
    Timestamp = {2014.04.24},
    Url = {http://ais.informatik.uni-freiburg.de/publications/papers/roewekaemper12iros.pdf}
    }

  • R. Roscher, “Sequential Learning using Incremental Import Vector Machines for Semantic Segmentation,” PhD Thesis, 2012.
    [BibTeX] [PDF]
    We propose an innovative machine learning algorithm called incremental import vector machines that is used for classification purposes. The classifier is specifically designed for the task of sequential learning, in which the data samples are successively presented to the classifier. The motivation for our work comes from the effort to formulate a classifier that can manage the major challenges of sequential learning problems, while being a powerful classifier in terms of classification accuracy, efficiency and meaningful output. One challenge of sequential learning is that data samples are not completely available to the learner at a given point of time and generally, waiting for a representative number of data is undesirable and impractical. Thus, in order to allow for a classification of given data samples at any time, the learning phase of the classifier model needs to start immediately, even if not all training samples are available. Another challenge is that the number of sequentially arriving data samples can be very large or even infinite and thus, not all samples can be stored. Furthermore, the distribution of the samples can vary over time, and the classifier model needs to remain stable and unchanged with respect to irrelevant samples while being plastic to new, important samples. Therefore, our key contribution is to develop, analyze and evaluate a powerful incremental learner for sequential learning which we call incremental import vector machines (I2VMs). The classifier is based on the batch machine learning algorithm import vector machines, which was developed by Zhu and Hastie (2005). I2VM is a kernel-based, discriminative classifier and thus is able to deal with complex data distributions. Additionally, the learner is sparse for efficient training and testing and has a probabilistic output. A key achievement of this thesis is the verification and analysis of the discriminative and reconstructive model components of IVM and I2VM. While discriminative classifiers try to separate the classes as well as possible, classifiers with a reconstructive component aspire to have a high information content in order to approximate the distribution of the data samples. Both properties are necessary for a powerful incremental classifier. A further key achievement is the formulation of the incremental learning strategy of I2VM. The strategy deals with adding and removing data samples and the update of the current set of model parameters. Furthermore, new classes and features can also be incorporated. The learning strategy adapts the model continuously, while keeping it stable and efficient. In our experiments we use I2VM for the semantic segmentation of images from an image database, for large area land cover classification of overlapping remote sensing images and for object tracking in image sequences. We show that I2VM results in classification accuracies superior or competitive to those of comparable classifiers. A substantial achievement of the thesis is that I2VM’s performance is independent of the ordering of the data samples and that reconsidering already encountered samples for learning is not necessary. A further achievement is that I2VM is able to deal with very long data streams without a loss in efficiency. Furthermore, as another achievement, we show that I2VM provides reliable posterior probabilities, since samples with high class probabilities are accurately classified, whereas relatively low class probabilities more likely correspond to misclassified samples.

    @PhdThesis{Roscher2012Sequential,
    Title = {Sequential Learning using Incremental Import Vector Machines for Semantic Segmentation},
    Author = {Roscher, Ribana},
    School = {Department of Photogrammetry, University of Bonn},
    Year = {2012},
    Abstract = {We propose an innovative machine learning algorithm called incremental import vector machines that is used for classification purposes. The classifier is specifically designed for the task of sequential learning, in which the data samples are successively presented to the classifier. The motivation for our work comes from the effort to formulate a classifier that can manage the major challenges of sequential learning problems, while being a powerful classifier in terms of classification accuracy, efficiency and meaningful output. One challenge of sequential learning is that data samples are not completely available to the learner at a given point of time and generally, waiting for a representative number of data is undesirable and impractical. Thus, in order to allow for a classification of given data samples at any time, the learning phase of the classifier model needs to start immediately, even if not all training samples are available. Another challenge is that the number of sequentially arriving data samples can be very large or even infinite and thus, not all samples can be stored. Furthermore, the distribution of the samples can vary over time, and the classifier model needs to remain stable and unchanged with respect to irrelevant samples while being plastic to new, important samples. Therefore, our key contribution is to develop, analyze and evaluate a powerful incremental learner for sequential learning which we call incremental import vector machines (I2VMs). The classifier is based on the batch machine learning algorithm import vector machines, which was developed by Zhu and Hastie (2005). I2VM is a kernel-based, discriminative classifier and thus is able to deal with complex data distributions. Additionally, the learner is sparse for efficient training and testing and has a probabilistic output. A key achievement of this thesis is the verification and analysis of the discriminative and reconstructive model components of IVM and I2VM. While discriminative classifiers try to separate the classes as well as possible, classifiers with a reconstructive component aspire to have a high information content in order to approximate the distribution of the data samples. Both properties are necessary for a powerful incremental classifier. A further key achievement is the formulation of the incremental learning strategy of I2VM. The strategy deals with adding and removing data samples and the update of the current set of model parameters. Furthermore, new classes and features can also be incorporated. The learning strategy adapts the model continuously, while keeping it stable and efficient. In our experiments we use I2VM for the semantic segmentation of images from an image database, for large area land cover classification of overlapping remote sensing images and for object tracking in image sequences. We show that I2VM results in classification accuracies superior or competitive to those of comparable classifiers. A substantial achievement of the thesis is that I2VM's performance is independent of the ordering of the data samples and that reconsidering already encountered samples for learning is not necessary. A further achievement is that I2VM is able to deal with very long data streams without a loss in efficiency. Furthermore, as another achievement, we show that I2VM provides reliable posterior probabilities, since samples with high class probabilities are accurately classified, whereas relatively low class probabilities more likely correspond to misclassified samples.},
    City = {Bonn},
    Url = {http://hss.ulb.uni-bonn.de/2012/3009/3009.htm}
    }

  • R. Roscher, W. Förstner, and B. Waske, “I²VM: Incremental import vector machines,” Image and Vision Computing, vol. 30, iss. 4-5, pp. 263-278, 2012. doi:10.1016/j.imavis.2012.04.004
    [BibTeX]
    We introduce an innovative incremental learner called incremental import vector machines (I²VM). The kernel-based discriminative approach is able to deal with complex data distributions. Additionally, the learner is sparse for efficient training and testing and has a probabilistic output. We particularly investigate the reconstructive component of import vector machines, in order to use it for robust incremental learning. By performing incremental update steps, we are able to add and remove data samples, as well as update the current set of model parameters for incremental learning. By using various standard benchmarks, we demonstrate how I²VM is competitive or superior to other incremental methods. It is also shown that our approach is capable of managing concept drifts in the data distributions.

    @Article{Roscher2012I2VM,
    Title = {I²VM: Incremental import vector machines},
    Author = {Roscher, Ribana and F\"orstner, Wolfgang and Waske, Bj\"orn},
    Journal = {Image and Vision Computing},
    Year = {2012},
    Month = may,
    Number = {4-5},
    Pages = {263--278},
    Volume = {30},
    Abstract = {We introduce an innovative incremental learner called incremental import vector machines (I²VM). The kernel-based discriminative approach is able to deal with complex data distributions. Additionally, the learner is sparse for efficient training and testing and has a probabilistic output. We particularly investigate the reconstructive component of import vector machines, in order to use it for robust incremental learning. By performing incremental update steps, we are able to add and remove data samples, as well as update the current set of model parameters for incremental learning. By using various standard benchmarks, we demonstrate how I²VM is competitive or superior to other incremental methods. It is also shown that our approach is capable of managing concept drifts in the data distributions.},
    Doi = {10.1016/j.imavis.2012.04.004},
    Owner = {waske},
    ISSN = {0262-8856},
    Timestamp = {2012.09.04}
    }

  • R. Roscher, J. Siegemund, F. Schindler, and W. Förstner, “Object Tracking by Segmentation Using Incremental Import Vector Machines,” Department of Photogrammetry, University of Bonn, Tech. Rep., 2012.
    [BibTeX] [PDF]
    We propose a framework for object tracking in image sequences, following the concept of tracking-by-segmentation. The separation of object and background is achieved by a consecutive semantic superpixel segmentation of the images, yielding tight object boundaries. I.e., in the first image a model of the object’s characteristics is learned from an initial, incomplete annotation. This model is used to classify the superpixels of subsequent images to object and background employing graph-cut. We assume the object boundaries to be tight-fitting and the object motion within the image to be affine. To adapt the model to radiometric and geometric changes we utilize an incremental learner in a co-training scheme. We evaluate our tracking framework qualitatively and quantitatively on several image sequences.

    @TechReport{Roscher2012Object,
    Title = {Object Tracking by Segmentation Using Incremental Import Vector Machines},
    Author = {Roscher, Ribana and Siegemund, Jan and Schindler, Falko and F\"orstner, Wolfgang},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2012},
    Abstract = {We propose a framework for object tracking in image sequences, following the concept of tracking-by-segmentation. The separation of object and background is achieved by a consecutive semantic superpixel segmentation of the images, yielding tight object boundaries. I.e., in the first image a model of the object's characteristics is learned from an initial, incomplete annotation. This model is used to classify the superpixels of subsequent images to object and background employing graph-cut. We assume the object boundaries to be tight-fitting and the object motion within the image to be affine. To adapt the model to radiometric and geometric changes we utilize an incremental learner in a co-training scheme. We evaluate our tracking framework qualitatively and quantitatively on several image sequences.},
    City = {Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Roscher2012Object.pdf}
    }

  • R. Roscher, B. Waske, and W. Förstner, “Evaluation of Import Vector Machines for Classifying Hyperspectral Data,” Department of Photogrammetry, University of Bonn, Tech. Rep., 2012.
    [BibTeX] [PDF]
    We evaluate the performance of Import Vector Machines (IVM), a sparse Kernel Logistic Regression approach, for the classification of hyperspectral data. The IVM classifier is applied on two different data sets, using different numbers of training samples. The performance of IVM is compared to Support Vector Machines (SVM) in terms of accuracy and sparsity. Moreover, the impact of the training sample set on the accuracy and stability of IVM was investigated. The results underline that the IVM performs similarly to the popular SVM in terms of accuracy. Moreover, the number of import vectors from the IVM is significantly lower when compared to the number of support vectors from the SVM. Thus, the classification process of the IVM is faster. These findings are independent of the study site, the number of training samples and specific classes. Consequently, the proposed IVM approach is a promising classification method for hyperspectral imagery.

    @TechReport{Roscher2012Evaluation,
    Title = {Evaluation of Import Vector Machines for Classifying Hyperspectral Data},
    Author = {Roscher, Ribana and Waske, Bj\"orn and F\"orstner, Wolfgang},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2012},
    Abstract = {We evaluate the performance of Import Vector Machines (IVM), a sparse Kernel Logistic Regression approach, for the classification of hyperspectral data. The IVM classifier is applied on two different data sets, using different numbers of training samples. The performance of IVM is compared to Support Vector Machines (SVM) in terms of accuracy and sparsity. Moreover, the impact of the training sample set on the accuracy and stability of IVM was investigated. The results underline that the IVM performs similarly to the popular SVM in terms of accuracy. Moreover, the number of import vectors from the IVM is significantly lower when compared to the number of support vectors from the SVM. Thus, the classification process of the IVM is faster. These findings are independent of the study site, the number of training samples and specific classes. Consequently, the proposed IVM approach is a promising classification method for hyperspectral imagery.},
    City = {Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Roscher2012Evaluation.pdf}
    }

  • R. Roscher, B. Waske, and W. Förstner, “Incremental Import Vector Machines for Classifying Hyperspectral Data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, iss. 9, pp. 3463-3473, 2012. doi:10.1109/TGRS.2012.2184292
    [BibTeX]
    In this paper, we propose an incremental learning strategy for import vector machines (IVM), which is a sparse kernel logistic regression approach. We use the procedure for the concept of self-training for sequential classification of hyperspectral data. The strategy comprises the inclusion of new training samples to increase the classification accuracy and the deletion of noninformative samples to be memory and runtime efficient. Moreover, we update the parameters in the incremental IVM model without retraining from scratch. Therefore, the incremental classifier is able to deal with large data sets. The performance of the IVM in comparison to support vector machines (SVM) is evaluated in terms of accuracy, and experiments are conducted to assess the potential of the probabilistic outputs of the IVM. Experimental results demonstrate that the IVM and SVM perform similarly in terms of classification accuracy. However, the number of import vectors is significantly lower when compared to the number of support vectors, and thus, the computation time during classification can be decreased. Moreover, the probabilities provided by IVM are more reliable when compared to the probabilistic information derived from an SVM’s output. In addition, the proposed self-training strategy can increase the classification accuracy. Overall, the IVM and its incremental version are worthwhile for the classification of hyperspectral data.

    @Article{Roscher2012Incremental,
    Title = {Incremental Import Vector Machines for Classifying Hyperspectral Data},
    Author = {Roscher, Ribana and Waske, Bj\"orn and F\"orstner, Wolfgang},
    Journal = {IEEE Transactions on Geoscience and Remote Sensing},
    Year = {2012},
    Month = sep,
    Number = {9},
    Pages = {3463--3473},
    Volume = {50},
    Abstract = {In this paper, we propose an incremental learning strategy for import vector machines (IVM), which is a sparse kernel logistic regression approach. We use the procedure for the concept of self-training for sequential classification of hyperspectral data. The strategy comprises the inclusion of new training samples to increase the classification accuracy and the deletion of noninformative samples to be memory and runtime efficient. Moreover, we update the parameters in the incremental IVM model without retraining from scratch. Therefore, the incremental classifier is able to deal with large data sets. The performance of the IVM in comparison to support vector machines (SVM) is evaluated in terms of accuracy, and experiments are conducted to assess the potential of the probabilistic outputs of the IVM. Experimental results demonstrate that the IVM and SVM perform similarly in terms of classification accuracy. However, the number of import vectors is significantly lower when compared to the number of support vectors, and thus, the computation time during classification can be decreased. Moreover, the probabilities provided by IVM are more reliable when compared to the probabilistic information derived from an SVM's output. In addition, the proposed self-training strategy can increase the classification accuracy. Overall, the IVM and its incremental version are worthwhile for the classification of hyperspectral data.},
    Doi = {10.1109/TGRS.2012.2184292},
    ISSN = {0196-2892},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • F. Schindler and W. Förstner, “Real-time Camera Guidance for 3D Scene Reconstruction,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 2012.
    [BibTeX] [PDF]
    We propose a framework for multi-view stereo reconstruction exploiting the possibility of interactively guiding the operator during the image acquisition process. Multi-view stereo is a commonly used method to reconstruct both camera trajectory and 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that following the camera movements suggested by our system the final scene reconstruction with the automatically extracted key frames is both more complete and more accurate. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots as well as online flight planning of unmanned aerial vehicles.

    @InProceedings{Schindler2012Real,
    Title = {Real-time Camera Guidance for {3D} Scene Reconstruction},
    Author = {Falko Schindler and Wolfgang F\"orstner},
    Booktitle = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2012},
    Volume = {I-3},
    Abstract = {We propose a framework for multi-view stereo reconstruction exploiting the possibility of interactively guiding the operator during the image acquisition process. Multi-view stereo is a commonly used method to reconstruct both camera trajectory and 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that following the camera movements suggested by our system the final scene reconstruction with the automatically extracted key frames is both more complete and more accurate. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots as well as online flight planning of unmanned aerial vehicles.},
    Keywords = {Three-dimensional Reconstruction, Bundle Adjustment, Camera Orientation, Real-time Planning},
    Url = {http://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/I-3/69/2012/isprsannals-I-3-69-2012.pdf}
    }

  • J. Schneider, F. Schindler, T. Läbe, and W. Förstner, “Bundle Adjustment for Multi-camera Systems with Points at Infinity,” in ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences , 2012, pp. 75-80. doi:10.5194/isprsannals-I-3-75-2012
    [BibTeX] [PDF]
    We present a novel approach for a rigorous bundle adjustment for omnidirectional and multi-view cameras, which enables an efficient maximum-likelihood estimation with image and scene points at infinity. Multi-camera systems are used to increase the resolution, to combine cameras with different spectral sensitivities (Z/I DMC, Vexcel Ultracam) or – like omnidirectional cameras – to augment the effective aperture angle (Blom Pictometry, Rollei Panoscan Mark III). Additionally multi-camera systems gain in importance for the acquisition of complex 3D structures. For stabilizing camera orientations – especially rotations – one should generally use points at the horizon over long periods of time within the bundle adjustment that classical bundle adjustment programs are not capable of. We use a minimal representation of homogeneous coordinates for image and scene points. Instead of eliminating the scale factor of the homogeneous vectors by Euclidean normalization, we normalize the homogeneous coordinates spherically. This way we can use images of omnidirectional cameras with single-view point like fisheye cameras and scene points, which are far away or at infinity. We demonstrate the feasibility and the potential of our approach on real data taken with a single camera, the stereo camera FinePix Real 3D W3 from Fujifilm and the multi-camera system Ladybug3 from Point Grey.

    @InProceedings{schneider12isprs,
    Title = {Bundle Adjustment for Multi-camera Systems with Points at Infinity},
    Author = {J. Schneider and F. Schindler and T. L\"abe and W. F\"orstner},
    Booktitle = {ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2012},
    Pages = {75--80},
    Volume = {I-3},
    Abstract = {We present a novel approach for a rigorous bundle adjustment for omnidirectional and multi-view cameras, which enables an efficient maximum-likelihood estimation with image and scene points at infinity. Multi-camera systems are used to increase the resolution, to combine cameras with different spectral sensitivities (Z/I DMC, Vexcel Ultracam) or - like omnidirectional cameras - to augment the effective aperture angle (Blom Pictometry, Rollei Panoscan Mark III). Additionally multi-camera systems gain in importance for the acquisition of complex 3D structures. For stabilizing camera orientations - especially rotations - one should generally use points at the horizon over long periods of time within the bundle adjustment that classical bundle adjustment programs are not capable of. We use a minimal representation of homogeneous coordinates for image and scene points. Instead of eliminating the scale factor of the homogeneous vectors by Euclidean normalization, we normalize the homogeneous coordinates spherically. This way we can use images of omnidirectional cameras with single-view point like fisheye cameras and scene points, which are far away or at infinity. We demonstrate the feasibility and the potential of our approach on real data taken with a single camera, the stereo camera FinePix Real 3D W3 from Fujifilm and the multi-camera system Ladybug3 from Point Grey.},
    City = {Melbourne},
    Doi = {10.5194/isprsannals-I-3-75-2012},
    Url = {http://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/I-3/75/2012/isprsannals-I-3-75-2012.pdf}
    }

  • L. Spinello, C. Stachniss, and W. Burgard, “Scene in the Loop: Towards Adaptation-by-Tracking in RGB-D Data,” in Proceedings of the RSS Workshop RGB-D: Advanced Reasoning with Depth Cameras , 2012.
    [BibTeX] [PDF]
    @InProceedings{Spinello2012,
    Title = {Scene in the Loop: Towards Adaptation-by-Tracking in RGB-D Data},
    Author = {L. Spinello and C. Stachniss and W. Burgard},
    Booktitle = {Proceedings of the RSS Workshop RGB-D: Advanced Reasoning with Depth Cameras},
    Year = {2012},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/spinello12rssws.pdf}
    }

  • T. Stroth, “Kartierung landwirtschaftlicher Kulturarten mittels multitemporaler RapidEye und TerraSAR-X Daten,” Bachelor Thesis, 2012.
    [BibTeX]

    @MastersThesis{Stroth2012Kartierung,
    Title = {Kartierung landwirtschaftlicher Kulturarten mittels multitemporaler RapidEye und TerraSAR-X Daten},
    Author = {Stroth, Tobias},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2012},
    Type = {Bachelor Thesis},
    Timestamp = {2013.04.15}
    }

  • B. Waske, J. Benediktsson, and J. Sveinsson, “Random Forest Classification of Remote Sensing Data,” in Signal and Image Processing for Remote Sensing, 2nd ed., C. H. Chen, Ed., CRC Press, 2012, pp. 365-374. doi:10.1201/b11656-21
    [BibTeX]
    Land cover classification is perhaps the most widely used application in the context of remote sensing. The recent development of remote sensing systems, including numerous bands, high spatial resolution and increased repetition rates, as well as the availability of more diverse remote sensing imagery, increases the potential of remote sensing based land cover classifications. Nevertheless, recent data sets demand more sophisticated classifiers, and the development of adequate methods is an ongoing research topic in the field of remote sensing. In this context the potential of the ensemble technique Random Forest (RF) for classifying hyperspectral and multisensor remote sensing data is demonstrated. The classification is done on two different data sets, comprising (i) multispectral and SAR data and (ii) hyperspectral imagery. The results are compared to well-known algorithms (e.g., Maximum Likelihood Classifier, Spectral Angle Mapper) as well as recent developments such as Support Vector Machines (SVM). Overall the results demonstrate that RF can be considered desirable for the classification of hyperspectral as well as multisensor data sets. RF significantly outperforms common methods in terms of accuracy and is comparable to SVM. RF achieves high accuracies, even with small training samples, and is simple to handle, because it mainly depends on two user-defined values.

    @InBook{Waske2012Signal,
    Title = {Signal and Image Processing for Remote Sensing},
    Author = {Waske, Bj\"orn and Benediktsson, Jon and Sveinsson, Johannes},
    Chapter = {Random Forest Classification of Remote Sensing Data},
    Editor = {Chen, Chi Hau},
    Pages = {365--374},
    Publisher = {CRC Press},
    Year = {2012},
    Edition = {2nd},
    Month = feb,
    Abstract = {Land cover classification is perhaps the most widely used application in the context of remote sensing. The recent development of remote sensing systems, including numerous bands, high spatial resolution and increased repetition rates, as well as the availability of more diverse remote sensing imagery, increase the potential of remote sensing based land cover classifications. Nevertheless, recent data sets demand more sophisticated classifiers, and the development of adequate methods is an ongoing research topic in the field of remote sensing. In this context the potential of the ensemble technique Random Forest (RF) for classifying hyperspectral and multisensor remote sensing data is demonstrated. The classification is done on two different data sets, comprising (i) multispectral and SAR data and (ii) hyperspectral imagery. The results are compared to well-known algorithms (e.g. Maximum Likelihood Classifier, Spectral Angle Mapper) as well as recent developments such as Support Vector Machines (SVM). Overall the results demonstrate that RF can be considered desirable for the classification of hyperspectral as well as multisensor data sets. RF significantly outperforms common methods in terms of accuracy and is comparable to SVM. RF achieves high accuracies, even with small training samples, and is simple to handle, because it mainly depends on two user-defined values.},
    Booktitle = {Signal and Image Processing for Remote Sensing, Second Edition},
    Doi = {10.1201/b11656-21},
    ISBN = {978-1-4398-5596-6},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • B. Waske, S. van der Linden, C. Oldenburg, B. Jakimow, A. Rabe, and P. Hostert, “imageRF — A user-oriented implementation for remote sensing image analysis with Random Forests,” Environmental Modelling & Software, vol. 35, pp. 192-193, 2012. doi:10.1016/j.envsoft.2012.01.014
    [BibTeX]
    An IDL implementation for the classification and regression analysis of remote sensing images with Random Forests is introduced. The tool, called imageRF, is platform and license independent and uses generic image file formats. It works well with default parameterization, yet all relevant parameters can be defined in intuitive GUIs. This makes it a user-friendly image processing tool, which is implemented as an add-on in the free EnMAP-Box and may be used in the commercial IDL/ENVI software. (C) 2012 Elsevier Ltd. All rights reserved.

    @Article{Waske2012imageRF,
    Title = {imageRF -- A user-oriented implementation for remote sensing image analysis with Random Forests},
    Author = {Waske, Bj\"orn and van der Linden, Sebastian and Oldenburg, Carsten and Jakimow, Benjamin and Rabe, Andreas and Hostert, Patrick},
    Journal = {Environmental Modelling \& Software},
    Year = {2012},
    Month = jul,
    Pages = {192--193},
    Volume = {35},
    Abstract = {An IDL implementation for the classification and regression analysis of remote sensing images with Random Forests is introduced. The tool, called imageRF, is platform and license independent and uses generic image file formats. It works well with default parameterization, yet all relevant parameters can be defined in intuitive GUIs. This makes it a user-friendly image processing tool, which is implemented as an add-on in the free EnMAP-Box and may be used in the commercial IDL/ENVI software. (C) 2012 Elsevier Ltd. All rights reserved.},
    Doi = {10.1016/j.envsoft.2012.01.014},
    Owner = {waske},
    ISSN = {1364-8152},
    Timestamp = {2012.09.04}
    }

  • S. Wenzel and W. Förstner, “Learning a compositional representation for facade object categorization,” in ISPRS Annals of Photogrammetry, Remote Sensing and the Spatial Information Sciences; Proc. of 22nd Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS) , 2012, pp. 197-202. doi:10.5194/isprsannals-I-3-197-2012
    [BibTeX] [PDF]
    Our objective is the categorization of the most dominant objects in facade images, like windows, entrances and balconies. In order to execute an image interpretation of complex scenes we need an interaction between low-level bottom-up feature detection and high-level inference from the top down. A top-down approach would use results of a bottom-up detection step as evidence for some high-level inference of scene interpretation. We present a statistically founded object categorization procedure that is suited for bottom-up object detection. Instead of choosing a bag of features in advance and learning models based on these features, it is more natural to learn which features best describe the target object classes. Therefore we learn increasingly complex aggregates of line junctions in image sections from man-made scenes. We present a method for the classification of image sections by using the histogram of diverse types of line aggregates.

    @InProceedings{Wenzel2012Learning,
    Title = {Learning a compositional representation for facade object categorization},
    Author = {Wenzel, Susanne and F\"orstner, Wolfgang},
    Booktitle = {ISPRS Annals of Photogrammetry, Remote Sensing and the Spatial Information Sciences; Proc. of 22nd Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS)},
    Year = {2012},
    Pages = {197--202},
    Volume = {I-3},
    Abstract = {Our objective is the categorization of the most dominant objects in facade images, like windows, entrances and balconies. In order to execute an image interpretation of complex scenes we need an interaction between low-level bottom-up feature detection and high-level inference from the top down. A top-down approach would use results of a bottom-up detection step as evidence for some high-level inference of scene interpretation. We present a statistically founded object categorization procedure that is suited for bottom-up object detection. Instead of choosing a bag of features in advance and learning models based on these features, it is more natural to learn which features best describe the target object classes. Therefore we learn increasingly complex aggregates of line junctions in image sections from man-made scenes. We present a method for the classification of image sections by using the histogram of diverse types of line aggregates.},
    City = {Melbourne},
    Doi = {10.5194/isprsannals-I-3-197-2012},
    Proceeding = {ISPRS Annals of Photogrammetry, Remote Sensing and the Spatial Information Sciences; Proc. of 22nd Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS)},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Wenzel2012Learning.pdf}
    }

  • Spatial Cognition VIII, C. Stachniss, K. Schill, and D. Uttal, Eds., Springer, 2012.
    [BibTeX]
    [none]
    @Book{Stachniss2012a,
    Title = {Spatial Cognition VIII},
    Editor = {C. Stachniss and K. Schill and D. Uttal},
    Publisher = {Springer},
    Year = {2012},
    Month = {August},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

2011

  • S. Asadi, M. Reggente, C. Stachniss, C. Plagemann, and A. J. Lilienthal, “Statistical Gas Distribution Modelling using Kernel Methods,” in Intelligent Systems for Machine Olfaction: Tools and Methodologies, E. L. Hines and M. S. Leeson, Eds., IGI Global, 2011, pp. 153-179.
    [BibTeX]
    [none]
    @InBook{Asadi2011,
    Title = {Intelligent Systems for Machine Olfaction: Tools and Methodologies},
    Author = {S. Asadi and M. Reggente and C. Stachniss and C. Plagemann and A.J. Lilienthal},
    Chapter = {Statistical Gas Distribution Modelling using Kernel Methods},
    Editor = {E.L. Hines and M.S. Leeson},
    Pages = {153-179},
    Publisher = {{IGI} {G}lobal},
    Year = {2011},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

  • S. D. Bauer, F. Korč, and W. Förstner, “The potential of automatic methods of classification to identify leaf diseases from multispectral images,” Precision Agriculture, vol. 12, iss. 3, pp. 361-377, 2011. doi:10.1007/s11119-011-9217-6
    [BibTeX] [PDF]
    Three methods of automatic classification of leaf diseases are described based on high-resolution multispectral stereo images. Leaf diseases are economically important as they can cause a loss of yield. Early and reliable detection of leaf diseases has important practical relevance, especially in the context of precision agriculture for localized treatment with fungicides. We took stereo images of single sugar beet leaves with two cameras (RGB and multispectral) in a laboratory under well controlled illumination conditions. The leaves were either healthy or infected with the leaf spot pathogen Cercospora beticola or the rust fungus Uromyces betae. To fuse information from the two sensors, we generated 3-D models of the leaves. We discuss the potential of two pixelwise methods of classification: k-nearest neighbour and an adaptive Bayes classification with minimum risk assuming a Gaussian mixture model. The medians of pixelwise classification rates achieved in our experiments are 91% for Cercospora beticola and 86% for Uromyces betae. In addition, we investigated the potential of contextual classification with the so-called conditional random field method, which seemed to eliminate the typical errors of pixelwise classification.

    @Article{Bauer2011potential,
    Title = {The potential of automatic methods of classification to identify leaf diseases from multispectral images},
    Author = {Bauer, Sabine Daniela and Kor{\vc}, Filip and F\"orstner, Wolfgang},
    Journal = {Precision Agriculture},
    Year = {2011},
    Number = {3},
    Pages = {361--377},
    Volume = {12},
    Abstract = {Three methods of automatic classification of leaf diseases are described based on high-resolution multispectral stereo images. Leaf diseases are economically important as they can cause a loss of yield. Early and reliable detection of leaf diseases has important practical relevance, especially in the context of precision agriculture for localized treatment with fungicides. We took stereo images of single sugar beet leaves with two cameras (RGB and multispectral) in a laboratory under well controlled illumination conditions. The leaves were either healthy or infected with the leaf spot pathogen Cercospora beticola or the rust fungus Uromyces betae. To fuse information from the two sensors, we generated 3-D models of the leaves. We discuss the potential of two pixelwise methods of classification: k-nearest neighbour and an adaptive Bayes classification with minimum risk assuming a Gaussian mixture model. The medians of pixelwise classification rates achieved in our experiments are 91% for Cercospora beticola and 86% for Uromyces betae. In addition, we investigated the potential of contextual classification with the so-called conditional random field method, which seemed to eliminate the typical errors of pixelwise classification.},
    Doi = {10.1007/s11119-011-9217-6},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Bauer2011potential.pdf}
    }

  • J. Becker, C. Bersch, D. Pangercic, B. Pitzer, T. Rühr, B. Sankaran, J. Sturm, C. Stachniss, M. Beetz, and W. Burgard, “Mobile Manipulation of Kitchen Containers,” in Proceedings of the IROS’11 Workshop on Results, Challenges and Lessons Learned in Advancing Robots with a Common Platform , San Francisco, CA, USA, 2011.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Becker2011,
    Title = {Mobile Manipulation of Kitchen Containers},
    Author = {J. Becker and C. Bersch and D. Pangercic and B. Pitzer and T. R\"uhr and B. Sankaran and J. Sturm and C. Stachniss and M. Beetz and W. Burgard},
    Booktitle = {Proceedings of the IROS'11 Workshop on Results, Challenges and Lessons Learned in Advancing Robots with a Common Platform},
    Year = {2011},
    Address = {San Francisco, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/becker11irosws.pdf}
    }

  • M. Bennewitz, D. Maier, A. Hornung, and C. Stachniss, “Integrated Perception and Navigation in Complex Indoor Environments,” in Proceedings of the IEEE-RAS Int. Conf. on Humanoid Robots (HUMANOIDS) , 2011.
    [BibTeX]
    [none]
    @InProceedings{Bennewitz2011,
    Title = {Integrated Perception and Navigation in Complex Indoor Environments},
    Author = {M. Bennewitz and D. Maier and A. Hornung and C. Stachniss},
    Booktitle = {Proceedings of the IEEE-RAS Int. Conf. on Humanoid Robots (HUMANOIDS)},
    Year = {2011},
    Note = {Invited presentation at the workshop on Humanoid service robot navigation in crowded and dynamic environments},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

  • T. Dickscheid, F. Schindler, and W. Förstner, “Coding Images with Local Features,” International Journal of Computer Vision, vol. 94, iss. 2, pp. 154-174, 2011. doi:10.1007/s11263-010-0340-z
    [BibTeX] [PDF]
    We present a scheme for measuring completeness of local feature extraction in terms of image coding. Completeness is here considered as good coverage of relevant image information by the features. As each feature requires a certain number of bits which are representative for a certain subregion of the image, we interpret the coverage as a sparse coding scheme. The measure is therefore based on a comparison of two densities over the image domain: An entropy density p_H(x) based on local image statistics, and a feature coding density p_c(x) which is directly computed from each particular set of local features. Motivated by the coding scheme in JPEG, the entropy distribution is derived from the power spectrum of local patches around each pixel position in a statistically sound manner. As the total number of bits for coding the image and for representing it with local features may be different, we measure incompleteness by the Hellinger distance between p_H(x) and p_c(x). We will derive a procedure for measuring incompleteness of possibly mixed sets of local features and show results on standard datasets using some of the most popular region and keypoint detectors, including Lowe, MSER and the recently published SFOP detectors. Furthermore, we will draw some interesting conclusions about the complementarity of detectors.

    @Article{Dickscheid2011Coding,
    Title = {Coding Images with Local Features},
    Author = {Dickscheid, Timo and Schindler, Falko and F\"orstner, Wolfgang},
    Journal = {International Journal of Computer Vision},
    Year = {2011},
    Number = {2},
    Pages = {154--174},
    Volume = {94},
    Abstract = {We present a scheme for measuring completeness of local feature extraction in terms of image coding. Completeness is here considered as good coverage of relevant image information by the features. As each feature requires a certain number of bits which are representative for a certain subregion of the image, we interpret the coverage as a sparse coding scheme. The measure is therefore based on a comparison of two densities over the image domain: An entropy density p_H(x) based on local image statistics, and a feature coding density p_c(x) which is directly computed from each particular set of local features. Motivated by the coding scheme in JPEG, the entropy distribution is derived from the power spectrum of local patches around each pixel position in a statistically sound manner. As the total number of bits for coding the image and for representing it with local features may be different, we measure incompleteness by the Hellinger distance between p_H(x) and p_c(x). We will derive a procedure for measuring incompleteness of possibly mixed sets of local features and show results on standard datasets using some of the most popular region and keypoint detectors, including Lowe, MSER and the recently published SFOP detectors. Furthermore, we will draw some interesting conclusions about the complementarity of detectors.},
    Doi = {10.1007/s11263-010-0340-z},
    ISSN = {0920-5691},
    Issue = {2},
    Publisher = {Springer Netherlands},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Dickscheid2011Coding.pdf}
    }

  • T. F. Dominicus, “Vergleich von Verfahren zur Rekonstruktion von Oberflächen,” bachelor thesis, Institute of Photogrammetry, University of Bonn, 2011.
    [BibTeX]
    Summary: There is a growing demand for digital 3D-models in various disciplines. Dense point clouds are often the basis for these. These point clouds can be generated by a variety of different methods. One possible method is stereo matching, for which different approaches exist. In this thesis, we examine three different stereo matching algorithms and compare their qualities with respect to accuracy, point density and point distribution. The algorithms used are the Patch-based Multi-view Stereo software (PMVS), Semi-global Matching (SGM) and 3-Image Semi-global Matching. In order to test these methods, we conduct two experiments. Each method is used to create a dense point cloud, which we then compare to a reference cloud. The reference clouds are predetermined in the first experiment and gathered with a laser triangulation scanner in the second. The resulting point cloud is then analyzed. We predicted that both SGM algorithms perform better than the PMVS in all examined characteristics. However, our experiments show that this is only true under certain conditions. While the point density and distribution are considerably better in the first experiment, the accuracy is slightly lower compared to the PMVS. Both SGM methods show even worse results in the second experiment. Here, the density of the results of the SGM is lower and the distribution is slightly better. The accuracy of the SGM is on the same level as the PMVS. The 3-Image SGM produced only a very sparse point cloud with a high number of outliers. We could not calculate an accuracy rating for this method. However, we assume that these findings are due to poor camera orientation in the second experiment. Zusammenfassung: Der Bedarf an digitalen 3D-Modellen in verschiedenen Disziplinen nimmt stetig zu. Grundlage dafür sind oft dichte Punktwolken. Diese Punktwolken können mit Hilfe verschiedener Verfahren erstellt werden. Eine Möglichkeit ist das Stereomatching. Dabei gibt es verschiedene Ansätze. In dieser Arbeit untersuchen wir drei verschiedene Stereomatching-Algorithmen und vergleichen deren Eigenschaften in Bezug auf Genauigkeit, Punktdichte und Punktverteilung. Die verwendeten Verfahren sind die Multi-view Stereo Software, das Semi-global Matching und das 3-Bild Semi-global Matching. Um diese Verfahren zu untersuchen, haben wir zwei Experimente durchgeführt. Wir verwenden jede dieser Methoden, um eine dichte Punktwolke aus mehreren Bildern einer Szene zu erstellen. Diese Punktwolken vergleichen wir dann mit einer Referenzpunktwolke. Im ersten Experiment ist diese Referenz vorgegeben. Im zweiten Experiment erstellen wir diese Referenz, indem wir die Szene mit einem Lasertriangulationsscanner erfassen. Wir hatten erwartet, dass die beiden SGM-Algorithmen in allen drei Eigenschaften dem PMVS überlegen sind. Unsere Experimente zeigen jedoch, dass dies nur unter bestimmten Bedingungen der Fall ist. Während die Punktdichte im ersten Experiment beim SGM deutlich höher und die Punktverteilung besser ist, ist die Genauigkeit etwas geringer als die des PMVS. Beide SGM-Verfahren bringen im zweiten Experiment noch schlechtere Ergebnisse. Die Punktdichte in den Punktwolken des SGM ist geringer und die Punktverteilung leicht besser. Die Genauigkeit des SGM ist leicht schlechter als die des PMVS. Das 3-Bild SGM berechnet hier nur eine sehr dünne Punktwolke mit einer hohen Zahl an Ausreißern. Wir konnten keine Punktwolke erstellen, bei der die Berechnung der Genauigkeit sinnvoll gewesen wäre. Wir vermuten jedoch, dass dies nicht am Algorithmus, sondern an einer schlechten Orientierung der Kameras im zweiten Experiment liegt.

    @MastersThesis{Dominicus2011Vergleich,
    Title = {Vergleich von Verfahren zur Rekonstruktion von Oberfl\"achen},
    Author = {Dominicus, Tim Florian},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2011},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Inf. Jan Siegemund},
    Type = {bachelor thesis},
    Abstract = {\textbf{Summary} There is a growing demand for digital 3D-models in various disciplines. Dense point clouds are often the basis for these. These point clouds can be generated by a variety of different methods. One possible method is stereo matching, for which different approaches exist. In this thesis, we examine three different stereo matching algorithms and compare their qualities with respect to accuracy, point density and point distribution. The algorithms used are the Patch-based Multi-view Stereo software (PMVS), Semi-global Matching (SGM) and 3-Image Semi-global Matching. In order to test these methods, we conduct two experiments. Each method is used to create a dense point cloud, which we then compare to a reference cloud. The reference clouds are predetermined in the first experiment and gathered with a laser triangulation scanner in the second. The resulting point cloud is then analyzed. We predicted that both SGM algorithms perform better than the PMVS in all examined characteristics. However, our experiments show that this is only true under certain conditions. While the point density and distribution are considerably better in the first experiment, the accuracy is slightly lower compared to the PMVS. Both SGM methods show even worse results in the second experiment. Here, the density of the results of the SGM is lower and the distribution is slightly better. The accuracy of the SGM is on the same level as the PMVS. The 3-Image SGM produced only a very sparse point cloud with a high number of outliers. We could not calculate an accuracy rating for this method. However, we assume that these findings are due to poor camera orientation in the second experiment. \textbf{Zusammenfassung} Der Bedarf an digitalen 3D-Modellen in verschiedenen Disziplinen nimmt stetig zu. Grundlage daf\"ur sind oft dichte Punktwolken. Diese Punktwolken k\"onnen mit Hilfe verschiedener Verfahren erstellt werden. Eine M\"oglichkeit ist das Stereomatching. Dabei gibt es verschiedene Ans\"atze. In dieser Arbeit untersuchen wir drei verschiedene Stereomatching-Algorithmen und vergleichen deren Eigenschaften in Bezug auf Genauigkeit, Punktdichte und Punktverteilung. Die verwendeten Verfahren sind die Multi-view Stereo Software, das Semi-global Matching und das 3-Bild Semi-global Matching. Um diese Verfahren zu untersuchen, haben wir zwei Experimente durchgef\"uhrt. Wir verwenden jede dieser Methoden, um eine dichte Punktwolke aus mehreren Bildern einer Szene zu erstellen. Diese Punktwolken vergleichen wir dann mit einer Referenzpunktwolke. Im ersten Experiment ist diese Referenz vorgegeben. Im zweiten Experiment erstellen wir diese Referenz, indem wir die Szene mit einem Lasertriangulationsscanner erfassen. Wir hatten erwartet, dass die beiden SGM-Algorithmen in allen drei Eigenschaften dem PMVS \"uberlegen sind. Unsere Experimente zeigen jedoch, dass dies nur unter bestimmten Bedingungen der Fall ist. W\"ahrend die Punktdichte im ersten Experiment beim SGM deutlich h\"oher und die Punktverteilung besser ist, ist die Genauigkeit etwas geringer als die des PMVS. Beide SGM-Verfahren bringen im zweiten Experiment noch schlechtere Ergebnisse. Die Punktdichte in den Punktwolken des SGM ist geringer und die Punktverteilung leicht besser. Die Genauigkeit des SGM ist leicht schlechter als die des PMVS. Das 3-Bild SGM berechnet hier nur eine sehr d\"unne Punktwolke mit einer hohen Zahl an Ausrei{\ss}ern. Wir konnten keine Punktwolke erstellen, bei der die Berechnung der Genauigkeit sinnvoll gewesen w\"are. Wir vermuten jedoch, dass dies nicht am Algorithmus, sondern an einer schlechten Orientierung der Kameras im zweiten Experiment liegt.},
    City = {Bonn}
    }

  • B. Frank, C. Stachniss, N. Abdo, and W. Burgard, “Using Gaussian Process Regression for Efficient Motion Planning in Environments with Deformable Objects,” in Proc. of the AAAI-11 Workshop on Automated Action Planning for Autonomous Mobile Robots (PAMR) , San Francisco, CA, USA, 2011.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Frank2011,
    Title = {Using Gaussian Process Regression for Efficient Motion Planning in Environments with Deformable Objects},
    Author = {B. Frank and C. Stachniss and N. Abdo and W. Burgard},
    Booktitle = {Proc. of the AAAI-11 Workshop on Automated Action Planning for Autonomous Mobile Robots (PAMR)},
    Year = {2011},
    Address = {San Francisco, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/frank11pamr.pdf}
    }

  • B. Frank, C. Stachniss, N. Abdo, and W. Burgard, “Efficient Motion Planning for Manipulation Robots in Environments with Deformable Objects,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , San Francisco, CA, USA, 2011.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Frank2011a,
    Title = {Efficient Motion Planning for Manipulation Robots in Environments with Deformable Objects},
    Author = {B. Frank and C. Stachniss and N. Abdo and W. Burgard},
    Booktitle = IROS,
    Year = {2011},
    Address = {San Francisco, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/frank11iros.pdf}
    }

  • M. Hans and R. Roscher, “Zuordnen radiometrischer Informationen zu Laserscandaten von Weintrauben,” Department of Photogrammetry, University of Bonn, 2011.
    [BibTeX] [PDF]
    In diesem Report stellen wir zwei Verfahren vor, die radiometrische Informationen 3D-Scandaten zuordnen. Radiometrische Informationen unterstützen und verbessern die Anwendungen der Merkmalserfassung von Objekten, da sie weitere Kenntnisse über das gescannte Objekt liefern.

    @TechReport{Hans2011Zuordnen,
    Title = {Zuordnen radiometrischer Informationen zu Laserscandaten von Weintrauben},
    Author = {Hans, Mathias and Roscher, Ribana},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2011},
    Abstract = {In diesem Report stellen wir zwei Verfahren vor, die radiometrische Informationen 3D-Scandaten zuordnen. Radiometrische Informationen unterst\"utzen und verbessern die Anwendungen der Merkmalserfassung von Objekten, da sie weitere Kenntnisse \"uber das gescannte Objekt liefern.},
    City = {Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Hans2011Zuordnen.pdf}
    }

  • R. Kümmerle, G. Grisetti, C. Stachniss, and W. Burgard, “Simultaneous Parameter Calibration, Localization, and Mapping for Robust Service Robotics,” in Proceedings of the IEEE Workshop on Advanced Robotics and its Social Impacts , Half-Moon Bay, CA, USA, 2011.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Kummerle2011,
    Title = {Simultaneous Parameter Calibration, Localization, and Mapping for Robust Service Robotics},
    Author = {R. K\"ummerle and G. Grisetti and C. Stachniss and W. Burgard},
    Booktitle = {Proceedings of the IEEE Workshop on Advanced Robotics and its Social Impacts},
    Year = {2011},
    Address = {Half-Moon Bay, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/kuemmerle11arso.pdf}
    }

  • S. Klemenjak and B. Waske, “Classifying Multilevel Segmented TerraSAR-X Data, using Support Vector Machines,” in 4th TerraSAR-X Science Team Meeting , 2011.
    [BibTeX] [PDF]
    Segmenting an image with strongly varying object sizes generally results in under-segmentation of small structures or over-segmentation of large ones, which leads to poor classification accuracies. A strategy is shown that produces multiple segmentations of one image and subsequently classifies this segmentation stack with support vector machines (SVM).

    @InProceedings{Klemenjak2011Classifying,
    Title = {Classifying Multilevel Segmented TerraSAR-X Data, using Support Vector Machines},
    Author = {Klemenjak, Sascha and Waske, Bj\"orn},
    Booktitle = {4th TerraSAR-X Science Team Meeting},
    Year = {2011},
    Abstract = {Segmenting an image with strongly varying object sizes generally results in under-segmentation of small structures or over-segmentation of large ones, which leads to poor classification accuracies. A strategy is shown that produces multiple segmentations of one image and subsequently classifies this segmentation stack with support vector machines (SVM).},
    Owner = {waske},
    Timestamp = {2012.09.05},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Klemenjak2011Classifying.pdf}
    }

  • H. Kretzschmar and C. Stachniss, “Pose Graph Compression for Laser-based SLAM,” in Proceedings of the Int. Symposium of Robotics Research (ISRR) , Flagstaff, AZ, USA, 2011.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Kretzschmar2011a,
    Title = {Pose Graph Compression for Laser-based {SLAM}},
    Author = {H. Kretzschmar and C. Stachniss},
    Booktitle = ISRR,
    Year = {2011},
    Address = {Flagstaff, AZ, USA},
    Note = {Invited presentation},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss11isrr.pdf}
    }

  • H. Kretzschmar, C. Stachniss, and G. Grisetti, “Efficient Information-Theoretic Graph Pruning for Graph-Based SLAM with Laser Range Finders,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , San Francisco, CA, USA, 2011.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Kretzschmar2011,
    Title = {Efficient Information-Theoretic Graph Pruning for Graph-Based {SLAM} with Laser Range Finders},
    Author = {H. Kretzschmar and C. Stachniss and G. Grisetti},
    Booktitle = IROS,
    Year = {2011},
    Address = {San Francisco, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/kretzschmar11iros.pdf}
    }

  • B. Mack and B. Waske, “Optimizing support vector data description by automatically generated outliers,” in 7th Workshop of the EARSeL Special Interest Group Imaging Spectroscopy , 2011.
    [BibTeX]
    [none]
    @InProceedings{Mack2011Optimizing,
    Title = {Optimizing support vector data description by automatically generated outliers},
    Author = {Mack, Benjamin and Waske, Bj{\"o}rn},
    Booktitle = {7th Workshop of the EARSeL Special Interest Group Imaging Spectroscopy},
    Year = {2011},
    Abstract = {[none]},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • D. Maier, M. Bennewitz, and C. Stachniss, “Self-supervised Obstacle Detection for Humanoid Navigation Using Monocular Vision and Sparse Laser Data,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Shanghai, China, 2011.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Maier2011,
    Title = {Self-supervised Obstacle Detection for Humanoid Navigation Using Monocular Vision and Sparse Laser Data},
    Author = {D. Maier and M. Bennewitz and C. Stachniss},
    Booktitle = ICRA,
    Year = {2011},
    Address = {Shanghai, China},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/maier11icra.pdf}
    }

  • R. Roscher, F. Schindler, and W. Förstner, “What would you look like in Springfield? Linear Transformations between High-Dimensional Spaces,” Tech. Rep., Department of Photogrammetry, University of Bonn, 2011.
    [BibTeX] [PDF]
    High-dimensional data structures occur in many fields of computer vision and machine learning. Transformation between two high-dimensional spaces usually involves the determination of a large number of parameters and requires much labeled data to be given. There is much interest in reducing dimensionality if a lower-dimensional structure underlies the data points. We present a procedure to enable the determination of a low-dimensional, projective transformation between two data sets, making use of state-of-the-art dimensionality reduction algorithms. We evaluate multiple algorithms during several experiments with different objectives. We demonstrate the use of this procedure for applications like classification and assignments between two given data sets. Our procedure is semi-supervised in that all labeled and unlabeled points are used for the dimensionality reduction, but only a few of them have to be labeled. Using test data we evaluate the quantitative and qualitative performance of different algorithms with respect to the classification and assignment task. We show that with these algorithms and our transformation approach high-dimensional data sets can be related to each other. Finally we can use this procedure to match real world facial images with cartoon images from Springfield, home town of the famous Simpsons.

    @TechReport{Roscher2011What,
    Title = {What would you look like in Springfield? Linear Transformations between High-Dimensional Spaces},
    Author = {Roscher, Ribana and Schindler, Falko and F\"orstner, Wolfgang},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2011},
    Abstract = {High-dimensional data structures occur in many fields of computer vision and machine learning. Transformation between two high-dimensional spaces usually involves the determination of a large amount of parameters and requires much labeled data to be given. There is much interest in reducing dimensionality if a lower-dimensional structure is underlying the data points. We present a procedure to enable the determination of a low-dimensional, projective transformation between two data sets, making use of state-of-the-art dimensional reduction algorithms. We evaluate multiple algorithms during several experiments with different objectives. We demonstrate the use of this procedure for applications like classification and assignments between two given data sets. Our procedure is semi-supervised due to the fact that all labeled and unlabeled points are used for the dimensionality reduction, but only few them have to be labeled. Using test data we evaluate the quantitative and qualitative performance of different algorithms with respect to the classification and assignment task. We show that with these algorithms and our transformation approach high-dimensional data sets can be related to each other. Finally we can use this procedure to match real world facial images with cartoon images from Springfield, home town of the famous Simpsons.},
    City = {Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Roscher2011What.pdf}
    }

  • R. Roscher, B. Waske, and W. Förstner, “Incremental import vector machines for large area land cover classification,” in IEEE International Conference on Computer Vision Workshops (ICCV Workshops) , 2011. doi:10.1109/ICCVW.2011.6130249
    [BibTeX]
    The classification of large areas consisting of multiple scenes is challenging regarding the handling of large and therefore mostly inhomogeneous data sets. Moreover, large data sets demand computationally efficient methods. We propose a method which enables the efficient multi-class classification of large neighboring Landsat scenes. We use an incremental realization of the import vector machines, called I2VM, in combination with self-training to update an initially learned classifier with new training data acquired in the overlapping areas between neighboring Landsat scenes. We show in our experiments that I2VM is a suitable classifier for large area land cover classification.

    @InProceedings{Roscher2011Incremental,
    Title = {Incremental import vector machines for large area land cover classification},
    Author = {Roscher, Ribana and Waske, Bj\"orn and F\"orstner, Wolfgang},
    Booktitle = {{IEEE} International Conference on Computer Vision Workshops (ICCV Workshops)},
    Year = {2011},
    Abstract = {The classification of large areas consisting of multiple scenes is challenging regarding the handling of large and therefore mostly inhomogeneous data sets. Moreover, large data sets demand for computational efficient methods. We propose a method, which enables the efficient multi-class classification of large neighboring Landsat scenes. We use an incremental realization of the import vector machines, called I2VM, in combination with self-training to update an initial learned classifier with new training data acquired in the overlapping areas between neighboring Landsat scenes. We show in our experiments, that I2VM is a suitable classifier for large area land cover classification.},
    Doi = {10.1109/ICCVW.2011.6130249},
    Keywords = {incremental import vector machines;inhomogeneous data sets;land cover classification;neighboring Landsat scenes;scenes classification;training data acquisition;data acquisition;geophysical image processing;image classification;natural scenes;support vector machines;terrain mapping;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • H. Sardemann, “Registrierung von Bildern mit 3D-Punktwolken,” Bachelor Thesis, Institute of Photogrammetry, University of Bonn, 2011.
    [BibTeX]
    @MastersThesis{Sardemann2011Registrierung,
    Title = {Registrierung von Bildern mit 3D-Punktwolken},
    Author = {Sardemann, Hannes},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2011},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.- Ing Falko Schindler},
    Type = {bachelor thesis},
    City = {Bonn}
    }

  • F. Schindler and W. Förstner, “Fast Marching for Robust Surface Segmentation,” in LNCS, Photogrammetric Image Analysis , Munich, 2011, pp. 147-158. doi:10.1007/978-3-642-24393-6
    [BibTeX] [PDF]
    We propose a surface segmentation method based on Fast Marching Farthest Point Sampling designed for noisy, visually reconstructed point clouds or laser range data. Adjusting the distance metric between neighboring vertices we obtain robust, edge-preserving segmentations based on local curvature. We formulate a cost function given a segmentation in terms of a description length to be minimized. An incremental-decremental segmentation procedure approximates a global optimum of the cost function and prevents under- as well as strong over-segmentation. We demonstrate the proposed method on various synthetic and real-world data sets.

    @InProceedings{Schindler2011Fast,
    Title = {Fast Marching for Robust Surface Segmentation},
    Author = {Schindler, Falko and F\"orstner, Wolfgang},
    Booktitle = {LNCS, Photogrammetric Image Analysis},
    Year = {2011},
    Address = {Munich},
    Note = {Volume Editors: Stilla, Uwe and Rottensteiner, Franz and Mayer, Helmut and Jutzi, Boris and Butenuth, Matthias},
    Pages = {147--158},
    Abstract = {We propose a surface segmentation method based on Fast Marching Farthest Point Sampling designed for noisy, visually reconstructed point clouds or laser range data. Adjusting the distance metric between neighboring vertices we obtain robust, edge-preserving segmentations based on local curvature. We formulate a cost function given a segmentation in terms of a description length to be minimized. An incremental-decremental segmentation procedure approximates a global optimum of the cost function and prevents from under- as well as strong over-segmentation. We demonstrate the proposed method on various synthetic and real-world data sets.},
    Doi = {10.1007/978-3-642-24393-6},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Schindler2011Fast.pdf}
    }

  • F. Schindler, W. Förstner, and J. Frahm, “Classification and Reconstruction of Surfaces from Point Clouds of Man-made Objects,” in International Conference on Computer Vision, IEEE/ISPRS Workshop on Computer Vision for Remote Sensing of the Environment , Barcelona, 2011, pp. 257-263. doi:10.1109/ICCVW.2011.6130251
    [BibTeX] [PDF]
    We present a novel surface model and reconstruction method for man-made environments that take prior knowledge about topology and geometry into account. The model favors but is not limited to horizontal and vertical planes that are pairwise orthogonal. The reconstruction method does not require one particular class of sensors, as long as a triangulated point cloud is available. It delivers a complete 3D segmentation, parametrization and classification for both surface regions and inter-plane relations. By working on a pre-segmentation we reduce the computational cost and increase robustness to noise and outliers. All reasoning is statistically motivated, based on a few decision variables with meaningful interpretation in measurement space. We demonstrate our reconstruction method for visual reconstructions and laser range data.

    @InProceedings{Schindler2011Classification,
    Title = {Classification and Reconstruction of Surfaces from Point Clouds of Man-made Objects},
    Author = {Schindler, Falko and F\"orstner, Wolfgang and Frahm, Jan-Michael},
    Booktitle = {International Conference on Computer Vision, IEEE/ISPRS Workshop on Computer Vision for Remote Sensing of the Environment},
    Year = {2011},
    Address = {Barcelona},
    Note = {Organizers: Schindler, Konrad and F\"orstner, Wolfgang and Paparoditis, Nicolas},
    Pages = {257--263},
    Abstract = {We present a novel surface model and reconstruction method for man-made environments that take prior knowledge about topology and geometry into account. The model favors but is not limited to horizontal and vertical planes that are pairwise orthogonal. The reconstruction method does not require one particular class of sensors, as long as a triangulated point cloud is available. It delivers a complete 3D segmentation, parametrization and classification for both surface regions and inter-plane relations. By working on a pre-segmentation we reduce the computational cost and increase robustness to noise and outliers. All reasoning is statistically motivated, based on a few decision variables with meaningful interpretation in measurement space. We demonstrate our reconstruction method for visual reconstructions and laser range data.},
    City = {Barcelona},
    Doi = {10.1109/ICCVW.2011.6130251},
    Proceeding = {ICCV Workshop on Computer Vision for Remote Sensing of the Environment},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Schindler2011Classification.pdf}
    }

  • J. Schittenhelm, “Empirische Untersuchungen zum Einsatz des SFOP-Punktdetektors zur Objektdetektion,” Diploma Thesis, University of Bonn, 2011.
    [BibTeX] [PDF]
    @MastersThesis{Schittenhelm2011Empirische,
    Title = {Empirische Untersuchungen zum Einsatz des SFOP-Punktdetektors zur Objektdetektion},
    Author = {Schittenhelm, J\"org},
    School = {University of Bonn},
    Year = {2011},
    Type = {diploma thesis},
    City = {Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Schittenhelm2011Empirische.pdf}
    }

  • B. Schmeing, T. Läbe, and W. Förstner, “Trajectory Reconstruction Using Long Sequences of Digital Images From an Omnidirectional Camera,” in Proceedings of the 31st DGPF Conference (Jahrestagung), Mainz, 2011, pp. 443-452.
    [BibTeX] [PDF]
    We present a method to perform bundle adjustment using long sequences of digital images from an omnidirectional camera. We use the Ladybug3 camera from PointGrey, which consists of six individual cameras pointing in different directions. There is large overlap between successive images but only a few loop closures provide connections between distant camera positions. We face two challenges: (1) to perform a bundle adjustment with images of an omnidirectional camera and (2) implement outlier detection and estimation of initial parameters for the geometry described above. Our program combines the Ladybug's individual cameras to a single virtual camera and uses a spherical imaging model within the bundle adjustment, solving problem (1). Outlier detection (2) is done using bundle adjustments with small subsets of images followed by a robust adjustment of all images. Approximate values in our context are taken from an on-board inertial navigation system.

    @InProceedings{Schmeing2011Trajectory,
    Title = {Trajectory Reconstruction Using Long Sequences of Digital Images From an Omnidirectional Camera},
    Author = {Schmeing, Benno and L\"abe, Thomas and F\"orstner, Wolfgang},
    Booktitle = {Proceedings of the 31st DGPF Conference (Jahrestagung)},
    Year = {2011},
    Address = {Mainz},
    Pages = {443--452},
    Abstract = {We present a method to perform bundle adjustment using long sequences of digital images from an omnidirectional camera. We use the Ladybug3 camera from PointGrey, which consists of six individual cameras pointing in different directions. There is large overlap between successive images but only a few loop closures provide connections between distant camera positions. We face two challenges: (1) to perform a bundle adjustment with images of an omnidirectional camera and (2) implement outlier detection and estimation of initial parameters for the geometry described above. Our program combines the Ladybug's individual cameras to a single virtual camera and uses a spherical imaging model within the bundle adjustment, solving problem (1). Outlier detection (2) is done using bundle adjustments with small subsets of images followed by a robust adjustment of all images. Approximate values in our context are taken from an on-board inertial navigation system.},
    City = {Mainz},
    Proceeding = {Proceedings of the 31st DGPF Conference (Jahrestagung)},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Schmeing2011Trajectory.pdf}
    }

  • J. Schneider, F. Schindler, and W. Förstner, “Bündelausgleichung für Multikamerasysteme,” in Proceedings of the 31st DGPF Conference, 2011.
    [BibTeX] [PDF]
    We present an approach for a rigorous bundle adjustment for multi-camera systems. To this end we use a minimal representation of homogeneous coordinate vectors for a maximum-likelihood estimation. Instead of eliminating the scale factor of homogeneous vectors by using Euclidean quantities, the homogeneous coordinates are spherically normalized, so that image and object points at infinity remain representable. This also makes it possible to handle images of omnidirectional single-viewpoint cameras, such as fisheye cameras, as well as very distant or ideal points. In particular, points on the horizon can be observed over long periods of time and thus provide stable direction information. We demonstrate the practical application of the approach on an image sequence taken with the multi-camera system Ladybug3 from Point Grey, which images 80% of the full sphere with its six cameras.

    @InProceedings{schneider11dgpf,
    Title = {B\"undelausgleichung f\"ur Multikamerasysteme},
    Author = {J. Schneider and F. Schindler and W. F\"orstner},
    Booktitle = {Proceedings of the 31st DGPF Conference},
    Year = {2011},
    Abstract = {Wir stellen einen Ansatz f\"ur eine strenge B\"undelausgleichung f\"ur Multikamerasysteme vor. Hierzu verwenden wir eine minimale Repr\"asentation von homogenen Koordinatenvektoren f\"ur eine Maximum-Likelihood-Sch\"atzung. Statt den Skalierungsfaktor von homogenen Vektoren durch Verwendung von euklidischen Gr\"o{\ss}en zu eliminieren, werden die homogenen Koordinaten sph\"arisch normiert, so dass Bild- und Objektpunkte im Unendlichen repr\"asentierbar bleiben. Dies erm\"oglicht auch Bilder omnidirektionaler Kameras mit Einzelblickpunkt, wie Fisheyekameras, und weit entfernte bzw. unendlich ferne Punkte zu behandeln. Speziell Punkte am Horizont k\"onnen \"uber lange Zeitr\"aume beobachtet werden und liefern somit eine stabile Richtungsinformation. Wir demonstrieren die praktische Umsetzung des Ansatzes anhand einer Bildfolge mit dem Multikamerasystem Ladybug3 von Point Grey, welches mit sechs Kameras 80 % der gesamten Sph\"are abbildet.},
    City = {Mainz},
    Url = {http://www.ipb.uni-bonn.de/pdfs/schneider11dgpf.pdf}
    }

  • S. Schoppohl, “Klassifikation von Multispektralen und Hyperspektralen Fernerkundungsdaten mittels sequentieller Klassifikationsverfahren,” Bachelor Thesis, Institute of Photogrammetry, 2011.
    [BibTeX]
    Geography, climate and vegetation – elements of today's changing world. These changes have to be observed and analyzed in detail. To stay up to date, the classification of image data is a common procedure in remote sensing. For the implementation of image data classification, many classification methods have been developed and modified over the past years. The classification methods, the image data and the study area mainly affect the classification accuracy. In particular, increasing the amount of training data has been shown to boost classification accuracy. However, the costs and expenditure of time for acquiring such training data are very high. So-called semi-supervised classification methods try to resolve this problem. In this bachelor thesis the focus is set on the Random Forest developed by Breiman. This classifier is combined with an incremental method, after which the classifier is able to generate new training data; hence we implement the self-training method. To create an incremental Random Forest we proceed in several phases. First, we train a conventional Random Forest with a small set of training data. In a second phase, the predicted classification is made: pixels whose land use classes are unknown are provided with pseudo-classes. At the same time, the accuracy assessment is made on the trained Random Forest, using the predefined test data from the given dataset. In a third stage, the new training data is selected. We define a threshold so that the new training data is not randomly selected; the confidence level of the new training data is measured against this threshold. If there is a sufficient number of new training samples that reach or exceed this confidence level, they are added to the existing training data. On this basis a new Random Forest can be trained. This sequential process runs for a specified number of iterations, or is stopped prematurely by a stopping criterion. Afterwards it is possible to classify a multispectral and hyperspectral dataset. The assessment concluded that the combination parameters of the incremental Random Forest have a crucial impact on the classification results. Depending on the data set, various configurations of parameters have to be tested. When comparing the conventional Random Forest with the incremental Random Forest, partly significant differences in the classification results are obvious. Furthermore it should be noted that only a few class accuracies could be increased with the incremental Random Forest. Nevertheless, the present thesis provides a good foundation for exploiting the potential of the incremental Random Forest in further investigations.

    @MastersThesis{Schoppohl2011Klassifikation,
    Title = {Klassifikation von Multispektralen und Hyperspektralen Fernerkundungsdaten mittels sequentieller Klassifikationsverfahren},
    Author = {Schoppohl, Sebastian-Alexander},
    School = {Institute of Photogrammetry},
    Year = {2011},
    Note = {Betreuung: Prof. Dr. Bj\"orn Waske, Dipl.-Ing. Ribana Roscher},
    Type = {bachelor thesis},
    Abstract = {Geography, climate and vegetation - elements in today's changing. These changes have to be observed and analyzed in detail. To assure being up-to-date the classification of image data is a common procedure in remote sensing. For the implementation of image data classification many classification methods were developed and modified over the past years. The classification methods, the image data and the study area mainly affect the classification accuracy. In particular the progress of increasing training data showed a boost of classification accuracy. Though the costs and expenditure of time are very high in purchasing such training data. Nevertheless so called semi-supervised classification methods try to resolve this problem. In this bachelor thesis the focus is set on the Random Forest developed by Breiman. This classifier is combined with an incremental method. After this the classifier is able to generate new training data. Hence we implement the self-training method. To create an incremental Random Forest we proceed in several phases. First we train a conventional Random Forest with a small set of training data. In a second Phase the predicted classification is made. This allows pixel whose land use classes are unknown to be provided with pseudo-classes. At the same time the accuracy assessment is made on the trained Random Forest. For this we use the predefined test data from the given dataset. In a third stage the selection of the new training data is made. We define a threshold, so the new training data is not randomly selected. The confidence level of the new training data is measured on this threshold. If there is a sufficient number of new training data, which reach or exceed this confidence level, the new training data is added to the existing training data. On this basis a new Random Forest can be trained. This sequential process is determined by a specified iteration, or is stopped prematurely by a stopping criterion. 
Afterwards it is possible to classify a multi-spectral and hyperspectral dataset The assessment concluded that the combination parameters of the incremental Random Forest have a crucial impact on the classification results. Depending on the data set various configurations of parameters have to be tested. While comparing the conventional Random Forest with the incremental Random Forest partly significant differences in the classification results are obvious. Furthermore it should be noted that only a few class accuracy could be increased with the incremental Random Forest. Though the present thesis provides a good foundation to exploit the potential of the incremental Random Forest for further investigations.},
    City = {Bonn}
    }

  • J. Siegemund, U. Franke, and W. Förstner, “A Temporal Filter Approach for Detection and Reconstruction of Curbs and Road Surfaces based on Conditional Random Fields,” in IEEE Intelligent Vehicles Symposium (IV) , 2011, pp. 637-642. doi:10.1109/IVS.2011.5940447
    [BibTeX] [PDF]
    A temporal filter approach for real-time detection and reconstruction of curbs and road surfaces from 3D point clouds is presented. Instead of local thresholding, as used in many other approaches, a 3D curb model is extracted from the point cloud. The 3D points are classified to different parts of the model (i.e. road and sidewalk) using a temporally integrated Conditional Random Field (CRF). The parameters of curb and road surface are then estimated from the respectively assigned points, providing a temporal connection via a Kalman filter. In this contribution, we employ dense stereo vision for data acquisition. Other sensors capturing point cloud data, e.g. lidar, would also be suitable. The system was tested on real-world scenarios, showing the advantages over a temporally unfiltered version, due to robustness, accuracy and computation time. Further, the lateral accuracy of the system is evaluated. The experiments show the system to yield highly accurate results, for curved and straight-line curbs, up to distances of 20 meters from the camera.

    @InProceedings{Siegemund2011Temporal,
    Title = {A Temporal Filter Approach for Detection and Reconstruction of Curbs and Road Surfaces based on Conditional Random Fields},
    Author = {Siegemund, Jan and Franke, Uwe and F\"orstner, Wolfgang},
    Booktitle = {IEEE Intelligent Vehicles Symposium (IV)},
    Year = {2011},
    Month = {June},
    Pages = {637-642},
    Publisher = {IEEE Computer Society},
    Abstract = {A temporal filter approach for real-time detection and reconstruction of curbs and road surfaces from 3D point clouds is presented. Instead of local thresholding, as used in many other approaches, a 3D curb model is extracted from the point cloud. The 3D points are classified to different parts of the model (i.e. road and sidewalk) using a temporally integrated Conditional Random Field (CRF). The parameters of curb and road surface are then estimated from the respectively assigned points, providing a temporal connection via a Kalman filter. In this contribution, we employ dense stereo vision for data acquisition. Other sensors capturing point cloud data, e.g. lidar, would also be suitable. The system was tested on real-world scenarios, showing the advantages over a temporally unfiltered version, due to robustness, accuracy and computation time. Further, the lateral accuracy of the system is evaluated. The experiments show the system to yield highly accurate results, for curved and straight-line curbs, up to distances of 20 meters from the camera.},
    Doi = {10.1109/IVS.2011.5940447},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Siegemund2011Temporal.pdf}
    }

  • J. Sturm, C. Stachniss, and W. Burgard, “A Probabilistic Framework for Learning Kinematic Models of Articulated Objects,” Journal of Artificial Intelligence Research, vol. 41, pp. 477-526, 2011.
    [BibTeX] [PDF]
    @Article{Sturm2011,
    Title = {A Probabilistic Framework for Learning Kinematic Models of Articulated Objects},
    Author = {J. Sturm and C. Stachniss and W. Burgard},
    Journal = jair,
    Year = {2011},
    Pages = {477--526},
    Volume = {41},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/sturm11jair.pdf}
    }

  • B. Uebbing, “Untersuchung zur Nutzung wiederholter Strukturen für die 3D Rekonstruktion aus Einzelaufnahmen,” Bachelor Thesis, Institute of Photogrammetry, University of Bonn, 2011.
    [BibTeX]
    Summary: The goal of this work is the derivation of 3D information from single images. To this end, identical repeated structures are used. These structures are common in man-made scenes. The repeated structures can be seen as multiple pictures of a single object. At first we simplify the problem by projecting it from 3D to 2D. Thus we introduce 1D cameras by taking the rows and columns of the image sections showing the repeated structures. By rectifying the image we can assume the normal case. By reconstructing and intersecting the projection rays of corresponding points from three 1D cameras, 2D profiles of the repeated structure can be recovered. Using these profiles we can derive depth information and its uncertainty. By combining more than one profile in horizontal and vertical direction, even a 3D model of the repeated structure can be recovered. We pursue this approach in two ways. First, we discuss a simulation program which applies the developed concept under optimal circumstances; furthermore, we verify our estimate of the theoretical uncertainty by performing an empirical test. Second, we test our approach on real images, using images of building facades in which geometrically identical windows serve as repeated objects. In this process, edge-feature extraction and matching of these features play a major role with real images. We examine our results and conclude that our approach performs very well in the theoretical environment of the simulation program. There it is possible to create 2D profiles with a relative depth uncertainty of 0.04% to 2%, depending on the assumption of the theoretical uncertainty. The reconstruction of 3D information of the model used in the simulation also performs very well. The results on real images lack completeness and precision, caused by uncertainties during the edge-feature extraction and the subsequent matching of the 1D edge points, and are therefore not very reliable or meaningful. This is mostly due to the relatively small depth of the repeated structures; usually only horizontal 2D profiles can be recovered, because there are rarely three identical windows on top of each other. Other major sources of uncertainty are the incidence of light, radial image distortion, and disturbing objects behind the windows or reflections of objects. Our approach is therefore only of limited use on the images used by us. To produce good results, our approach requires certain conditions, such as a high-resolution image, so that the repeated structures are also displayed at high resolution; furthermore, the repeated objects should have a certain amount of depth, so that the parallax is significant.

    @MastersThesis{Uebbing2011Untersuchung,
    Title = {Untersuchung zur Nutzung wiederholter Strukturen f\"ur die 3D Rekonstruktion aus Einzelaufnahmen},
    Author = {Uebbing, Bernd},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2011},
    Type = {bachelor thesis},
    Abstract = {\textbf{Summary} The goal of this work is the derivation of 3D information from single images. For this purpose, identical repeated structures are used. These structures are common in man-made scenes. The repeated structures can be seen as multiple pictures of a single object. At first we simplify the problem by projecting it from 3D to 2D: we introduce 1D cameras by taking the rows and columns of the image sections showing the repeated structures. By rectifying the image we can assume the normal case. By reconstructing and intersecting the projection rays of corresponding points from three 1D cameras, 2D profiles of the repeated structure can be recovered. Using these profiles we can derive depth information and its uncertainty. By combining more than one profile in the horizontal and vertical directions, even a 3D model of the repeated structure can be recovered. We pursue this approach in two ways. First we discuss a simulation program which applies the developed concept under optimal circumstances. Furthermore, we verify our estimate of the theoretical uncertainty by performing an empirical test. Second we test our approach on real images, using images of building facades in which geometrically identical windows serve as repeated objects. With real images, edge-feature extraction and the matching of these features play a major role. We examine our results and conclude that our approach performs very well in the theoretical environment of the simulation program. There it is possible to create 2D profiles with a relative depth uncertainty of 0.04% to 2%, depending on the assumption of the theoretical uncertainty. The reconstruction of 3D information of the model used in the simulation also performs very well. The results on real images lack completeness and precision, caused by uncertainties during edge-feature extraction and the subsequent matching of the 1D edge points. The results are not very reliable or meaningful. 
This is mostly due to the relatively small depth of the repeated structures. In most cases only horizontal 2D profiles can be recovered, because there are usually not three identical windows on top of each other. Other major sources of uncertainty are incident light, radial image distortion, and disturbing objects behind the windows or reflections of objects. Our approach is therefore of limited use on the images we tested. Good results require certain conditions, such as a high-resolution image in which the repeated structures are also rendered at high resolution. Furthermore, the repeated objects should have sufficient depth so that the parallax is significant. \textbf{Zusammenfassung} Ziel dieser Arbeit ist die Ableitung von 3D-Informationen aus Einzelaufnahmen. Dazu werden identische, wiederholte Strukturen verwendet. Diese treten in von Menschenhand geschaffenen Objekten sehr h\"aufig auf. Wir betrachten diese wiederholten Strukturen als mehrere Aufnahmen eines Objektes. Zun\"achst vereinfachen wir die Problemstellung, indem wir die 3D Rekonstruktion von Punkten und Linien auf eine 2D Rekonstruktion von Punkten reduzieren. Dazu werden 1D Kameras eingef\"uhrt. Die Zeilen und Spalten von Bildausschnitten wiederholter Objekte werden dabei als Aufnahmen von 1D Kameras betrachtet. Aufgrund der Rektifizierung der Bilder k\"onnen wir das Vorliegen des Normalfalls annehmen. Durch Rekonstruktion und Verschneiden der Abbildungsstrahlen von korrespondierenden Punkten aus drei 1D Kameras werden 2D Profile rekonstruiert. Aus diesen lassen sich Tiefeninformationen und deren Genauigkeit ableiten. Durch Kombination mehrerer Profile in horizontaler und vertikaler Richtung lassen sich unter optimalen Bedingungen 3D Modelle der wiederholten Strukturen erstellen. Wir verfolgen diesen Ansatz auf zwei Wegen. Zun\"achst wird ein Simulationsprogramm behandelt, welches das entwickelte Konzept an einem Modell unter optimalen Bedingungen testet. 
Dabei wird zudem die Annahme der theoretischen Genauigkeit empirisch \"uberpr\"uft. In einem n\"achsten Schritt wird der Ansatz f\"ur die Anwendung auf echte Bilder \"ubertragen. Dazu verwenden wir Aufnahmen von Geb\"audefassaden, bei denen wir geometrisch identische Fenster als wiederholte Strukturen betrachten. Dabei spielen besonders Aspekte wie Kantenextraktion und eine korrekte Zuordnung korrespondierender Kanten eine Rolle. Letztendlich stellen wir fest, dass der von uns verfolgte Ansatz in der Theorie des Simulationsprogramms sehr gute Ergebnisse liefert. Es ist m\"oglich 2D Profile mit einer relativen Tiefengenauigkeit von 0.04% bis 2%, je nach Annahme der theoretischen Genauigkeit, zu erstellen. Die Rekonstruktion der 3D Informationen des im Simulationsprogramm verwendeten Modells gelingt sehr gut. Die Anwendung auf echte Bilder liefert weniger gute Resultate. Durch Ungenauigkeiten in der Kantenextraktion und der Zuordnung am Rand der wiederholten Strukturen und einer zu geringen Tiefe der verwendeten Testobjekte sind die Ergebnisse nicht sehr akkurat und aussagekr\"aftig. In der Regel werden nur horizontale 2D Profile erstellt, da meist nicht drei identische Fensterstrukturen \"ubereinander liegen. Zudem spielen weitere Faktoren wie Lichteinfall, Verzeichnungen und St\"orobjekte in den von uns verwendeten Fenstern eine Rolle. Unser entwickeltes Verfahren l\"asst sich daher nur bedingt zur Rekonstruktion auf den von uns verwendeten Bildern benutzen.},
    City = {Bonn}
    }

  • B. Waske, R. Roscher, and S. Klemenjak, “Import Vector Machines Based Classification of Multisensor Remote Sensing Data,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) , 2011. doi:10.1109/IGARSS.2011.6049829
    [BibTeX]
    The classification of multisensor data sets, consisting of multitemporal SAR data and multispectral imagery, is addressed. In the present study, Import Vector Machines (IVM) are applied on two data sets, consisting of (i) Envisat ASAR/ERS-2 SAR data and a Landsat 5 TM scene, and (ii) TerraSAR-X data and a RapidEye scene. The performance of IVM for classifying multisensor data is evaluated, and the method is compared to Support Vector Machines (SVM) in terms of accuracy and complexity. In general, the experimental results demonstrate that the classification accuracy is improved by the multisensor data set. Moreover, IVM and SVM perform similarly in terms of classification accuracy. However, the number of import vectors is considerably smaller than the number of support vectors, and thus the computation time of the IVM classification is lower. IVM can be applied directly to multi-class problems and provides probabilistic outputs. Overall, IVM constitutes a feasible method and alternative to SVM.

    @InProceedings{Waske2011Import,
    Title = {Import Vector Machines Based Classification of Multisensor Remote Sensing Data},
    Author = {Waske, Bj\"orn and Roscher, Ribana and Klemenjak, Sascha},
    Booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2011},
    Abstract = {The classification of multisensor data sets, consisting of multitemporal SAR data and multispectral imagery, is addressed. In the present study, Import Vector Machines (IVM) are applied on two data sets, consisting of (i) Envisat ASAR/ERS-2 SAR data and a Landsat 5 TM scene, and (ii) TerraSAR-X data and a RapidEye scene. The performance of IVM for classifying multisensor data is evaluated, and the method is compared to Support Vector Machines (SVM) in terms of accuracy and complexity. In general, the experimental results demonstrate that the classification accuracy is improved by the multisensor data set. Moreover, IVM and SVM perform similarly in terms of classification accuracy. However, the number of import vectors is considerably smaller than the number of support vectors, and thus the computation time of the IVM classification is lower. IVM can be applied directly to multi-class problems and provides probabilistic outputs. Overall, IVM constitutes a feasible method and alternative to SVM.},
    Doi = {10.1109/IGARSS.2011.6049829},
    Keywords = {Envisat ASAR ERS-2 SAR data;IVM;Landsat 5 TM scene;RapidEye scene;SVM comparison;TerraSAR-X data;computation time;data classification;import vector machines;multisensor remote sensing data;multispectral data;multitemporal SAR data;support vector machines;geophysical image processing;image classification;knowledge engineering;radar imaging;remote sensing by radar;spaceborne radar;synthetic aperture radar;}
    }

  • K. M. Wurm, D. Hennes, D. Holz, R. B. Rusu, C. Stachniss, K. Konolige, and W. Burgard, “Hierarchies of Octrees for Efficient 3D Mapping,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , San Francisco, CA, USA, 2011.
    [BibTeX] [PDF]
    @InProceedings{Wurm2011,
    Title = {Hierarchies of Octrees for Efficient 3D Mapping},
    Author = {K.M. Wurm and D. Hennes and D. Holz and R.B. Rusu and C. Stachniss and K. Konolige and W. Burgard},
    Booktitle = IROS,
    Year = {2011},
    Address = {San Francisco, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/wurm11iros.pdf}
    }

  • M. Y. Yang, “Hierarchical and Spatial Structures for Interpreting Images of Man-made Scenes Using Graphical Models,” PhD Thesis, 2011.
    [BibTeX] [PDF]
\textbf{Summary} The task of semantic scene interpretation is to label the regions of an image and their relations into meaningful classes. Such a task is a key ingredient of many computer vision applications, including object recognition, 3D reconstruction and robotic perception. It is challenging partly due to the ambiguities inherent in the image data. Images of man-made scenes, e.g. building facade images, exhibit strong contextual dependencies in the form of spatial and hierarchical structures. Modelling these structures is central to such an interpretation task. Graphical models provide a consistent framework for statistical modelling. Bayesian networks and random fields are two popular types of graphical models, frequently used for capturing such contextual information. The motivation for our work comes from the belief that we can find a generic formulation for scene interpretation that combines the benefits of random fields and Bayesian networks and has clear semantic interpretability. Our key contribution is therefore the development of a generic statistical graphical model for scene interpretation, which seamlessly integrates different types of image features with the spatial and hierarchical structural information defined over a multi-scale image segmentation. It unifies the ideas of existing approaches, e.g. conditional random fields (CRF) and Bayesian networks (BN), and has a clear statistical interpretation as the maximum a posteriori (MAP) estimate of a multi-class labelling problem. Given the graphical model structure, we derive the probability distribution of the model based on the factorization property implied in the model structure. The statistical model leads to an energy function that can be optimized approximately by either loopy belief propagation or a graph-cut based move-making algorithm. 
The particular type of features, spatial structure, and hierarchical structure, however, is not prescribed. In the experiments, we concentrate on terrestrial man-made scenes as a particularly difficult problem. We demonstrate the application of the proposed graphical model to the task of multi-class classification of building facade image regions. By incorporating the spatial and hierarchical structures, the framework for scene interpretation allows for significantly better classification results on man-made scenes than the standard local classification approach. We investigate the performance of the algorithms on a public dataset to show the relative importance of the information from the spatial structure and the hierarchical structure. As a baseline for the region classification, we use an efficient randomized decision forest classifier. Two specific models are derived from the proposed graphical model, namely the hierarchical CRF and the hierarchical mixed graphical model. We show that these two models produce better classification results than both the baseline region classifier and the flat CRF. \textbf{Zusammenfassung} Ziel der semantischen Bildinterpretation ist es, Bildregionen und ihre gegenseitigen Beziehungen zu kennzeichnen und in sinnvolle Klassen einzuteilen. Dies ist eine der Hauptaufgabe in vielen Bereichen des maschinellen Sehens, wie zum Beispiel der Objekterkennung, 3D Rekonstruktion oder der Wahrnehmung von Robotern. Insbesondere Bilder anthropogener Szenen, wie z.B. Fassadenaufnahmen, sind durch starke räumliche und hierarchische Strukturen gekennzeichnet. Diese Strukturen zu modellieren ist zentrale Teil der Interpretation, für deren statistische Modellierung graphische Modelle ein geeignetes konsistentes Werkzeug darstellen. Bayes Netze und Zufallsfelder sind zwei bekannte und häufig genutzte Beispiele für graphische Modelle zur Erfassung kontextabhängiger Informationen. 
Die Motivation dieser Arbeit liegt in der Überzeugung, dass wir eine generische Formulierung der Bildinterpretation mit klarer semantischer Bedeutung finden können, die die Vorteile von Bayes Netzen und Zufallsfeldern verbindet. Der Hauptbeitrag der vorliegenden Arbeit liegt daher in der Entwicklung eines generischen statistischen graphischen Modells zur Bildinterpretation, welches unterschiedlichste Typen von Bildmerkmalen und die räumlichen sowie hierarchischen Strukturinformationen über eine multiskalen Bildsegmentierung integriert. Das Modell vereinheitlicht die existierender Arbeiten zugrunde liegenden Ideen, wie bedingter Zufallsfelder (conditional random field (CRF)) und Bayesnetze (Bayesian network (BN)). Dieses Modell hat eine klare statistische Interpretation als Maximum a posteriori (MAP) Schätzer eines mehr klassen Zuordnungsproblems. Gegeben die Struktur des graphischen Modells und den dadurch definierten Faktorisierungseigenschaften leiten wir die Wahrscheinlichkeitsverteilung des Modells ab. Dies führt zu einer Energiefunktion, die näherungsweise optimiert werden kann. Der jeweilige Typ der Bildmerkmale, die räumliche sowie hierarchische Struktur ist von dieser Formulierung unabhängig. Wir zeigen die Anwendung des vorgeschlagenen graphischen Modells anhand der mehrklassen Zuordnung von Bildregionen in Fassadenaufnahmen. Wir demonstrieren, dass das vorgeschlagene Verfahren zur Bildinterpretation, durch die Berücksichtigung räumlicher sowie hierarchischer Strukturen, signifikant bessere Klassifikationsergebnisse zeigt, als klassische lokale Klassifikationsverfahren. Die Leistungsfähigkeit des vorgeschlagenen Verfahrens wird anhand eines öffentlich verfügbarer Datensatzes evaluiert. Zur Klassifikation der Bildregionen nutzen wir ein Verfahren basierend auf einem effizienten Random Forest Klassifikator. 
Aus dem vorgeschlagenen allgemeinen graphischen Modell werden konkret zwei spezielle Modelle abgeleitet, ein hierarchisches bedingtes Zufallsfeld (hierarchical CRF) sowie ein hierarchisches gemischtes graphisches Modell. Wir zeigen, dass beide Modelle bessere Klassifikationsergebnisse erzeugen als die zugrunde liegenden lokalen Klassifikatoren oder die einfachen bedingten Zufallsfelder.

    @PhdThesis{Yang2011Hierarchical,
    Title = {Hierarchical and Spatial Structures for Interpreting Images of Man-made Scenes Using Graphical Models},
    Author = {Michael Ying Yang},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2011},
    Abstract = {\textbf{Summary} The task of semantic scene interpretation is to label the regions of an image and their relations into meaningful classes. Such a task is a key ingredient of many computer vision applications, including object recognition, 3D reconstruction and robotic perception. It is challenging partly due to the ambiguities inherent in the image data. Images of man-made scenes, e.g. building facade images, exhibit strong contextual dependencies in the form of spatial and hierarchical structures. Modelling these structures is central to such an interpretation task. Graphical models provide a consistent framework for statistical modelling. Bayesian networks and random fields are two popular types of graphical models, frequently used for capturing such contextual information. The motivation for our work comes from the belief that we can find a generic formulation for scene interpretation that combines the benefits of random fields and Bayesian networks and has clear semantic interpretability. Our key contribution is therefore the development of a generic statistical graphical model for scene interpretation, which seamlessly integrates different types of image features with the spatial and hierarchical structural information defined over a multi-scale image segmentation. It unifies the ideas of existing approaches, e.g. conditional random fields (CRF) and Bayesian networks (BN), and has a clear statistical interpretation as the maximum a posteriori (MAP) estimate of a multi-class labelling problem. Given the graphical model structure, we derive the probability distribution of the model based on the factorization property implied in the model structure. The statistical model leads to an energy function that can be optimized approximately by either loopy belief propagation or a graph-cut based move-making algorithm. 
The particular type of features, spatial structure, and hierarchical structure, however, is not prescribed. In the experiments, we concentrate on terrestrial man-made scenes as a particularly difficult problem. We demonstrate the application of the proposed graphical model to the task of multi-class classification of building facade image regions. By incorporating the spatial and hierarchical structures, the framework for scene interpretation allows for significantly better classification results on man-made scenes than the standard local classification approach. We investigate the performance of the algorithms on a public dataset to show the relative importance of the information from the spatial structure and the hierarchical structure. As a baseline for the region classification, we use an efficient randomized decision forest classifier. Two specific models are derived from the proposed graphical model, namely the hierarchical CRF and the hierarchical mixed graphical model. We show that these two models produce better classification results than both the baseline region classifier and the flat CRF. \textbf{Zusammenfassung} Ziel der semantischen Bildinterpretation ist es, Bildregionen und ihre gegenseitigen Beziehungen zu kennzeichnen und in sinnvolle Klassen einzuteilen. Dies ist eine der Hauptaufgabe in vielen Bereichen des maschinellen Sehens, wie zum Beispiel der Objekterkennung, 3D Rekonstruktion oder der Wahrnehmung von Robotern. Insbesondere Bilder anthropogener Szenen, wie z.B. Fassadenaufnahmen, sind durch starke r\"aumliche und hierarchische Strukturen gekennzeichnet. Diese Strukturen zu modellieren ist zentrale Teil der Interpretation, f\"ur deren statistische Modellierung graphische Modelle ein geeignetes konsistentes Werkzeug darstellen. Bayes Netze und Zufallsfelder sind zwei bekannte und h\"aufig genutzte Beispiele f\"ur graphische Modelle zur Erfassung kontextabh\"angiger Informationen. 
Die Motivation dieser Arbeit liegt in der \"Uberzeugung, dass wir eine generische Formulierung der Bildinterpretation mit klarer semantischer Bedeutung finden k\"onnen, die die Vorteile von Bayes Netzen und Zufallsfeldern verbindet. Der Hauptbeitrag der vorliegenden Arbeit liegt daher in der Entwicklung eines generischen statistischen graphischen Modells zur Bildinterpretation, welches unterschiedlichste Typen von Bildmerkmalen und die r\"aumlichen sowie hierarchischen Strukturinformationen \"uber eine multiskalen Bildsegmentierung integriert. Das Modell vereinheitlicht die existierender Arbeiten zugrunde liegenden Ideen, wie bedingter Zufallsfelder (conditional random field (CRF)) und Bayesnetze (Bayesian network (BN)). Dieses Modell hat eine klare statistische Interpretation als Maximum a posteriori (MAP) Sch\"atzer eines mehr klassen Zuordnungsproblems. Gegeben die Struktur des graphischen Modells und den dadurch definierten Faktorisierungseigenschaften leiten wir die Wahrscheinlichkeitsverteilung des Modells ab. Dies f\"uhrt zu einer Energiefunktion, die n\"aherungsweise optimiert werden kann. Der jeweilige Typ der Bildmerkmale, die r\"aumliche sowie hierarchische Struktur ist von dieser Formulierung unabh\"angig. Wir zeigen die Anwendung des vorgeschlagenen graphischen Modells anhand der mehrklassen Zuordnung von Bildregionen in Fassadenaufnahmen. Wir demonstrieren, dass das vorgeschlagene Verfahren zur Bildinterpretation, durch die Ber\"ucksichtigung r\"aumlicher sowie hierarchischer Strukturen, signifikant bessere Klassifikationsergebnisse zeigt, als klassische lokale Klassifikationsverfahren. Die Leistungsf\"ahigkeit des vorgeschlagenen Verfahrens wird anhand eines \"offentlich verf\"ugbarer Datensatzes evaluiert. Zur Klassifikation der Bildregionen nutzen wir ein Verfahren basierend auf einem effizienten Random Forest Klassifikator. 
Aus dem vorgeschlagenen allgemeinen graphischen Modell werden konkret zwei spezielle Modelle abgeleitet, ein hierarchisches bedingtes Zufallsfeld (hierarchical CRF) sowie ein hierarchisches gemischtes graphisches Modell. Wir zeigen, dass beide Modelle bessere Klassifikationsergebnisse erzeugen als die zugrunde liegenden lokalen Klassifikatoren oder die einfachen bedingten Zufallsfelder.},
    Url = {http://hss.ulb.uni-bonn.de/2012/2765/2765.htm}
    }

  • M. Y. Yang and W. Förstner, “Feature Evaluation for Building Facade Images – An Empirical Study,” International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXIX-B3, pp. 513-518, 2011. doi:10.5194/isprsarchives-XXXIX-B3-513-2012
    [BibTeX] [PDF]
    The classification of building facade images is a challenging problem that receives a great deal of attention in the photogrammetry community. Image classification is critically dependent on the features. In this paper, we perform an empirical feature evaluation task for building facade images. Feature sets we choose are basic features, color features, histogram features, Peucker features, texture features, and SIFT features. We present an approach for region-wise labeling using an efficient randomized decision forest classifier and local features. We conduct our experiments with building facade image classification on the eTRIMS dataset, where our focus is the object classes building, car, door, pavement, road, sky, vegetation, and window.

    @Article{Yang2011Feature,
    Title = {Feature Evaluation for Building Facade Images - An Empirical Study},
    Author = {Yang, Michael Ying and F\"orstner, Wolfgang},
    Journal = {International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Year = {2011},
    Pages = {513--518},
    Volume = {XXXIX-B3},
    Abstract = {The classification of building facade images is a challenging problem that receives a great deal of attention in the photogrammetry community. Image classification is critically dependent on the features. In this paper, we perform an empirical feature evaluation task for building facade images. Feature sets we choose are basic features, color features, histogram features, Peucker features, texture features, and SIFT features. We present an approach for region-wise labeling using an efficient randomized decision forest classifier and local features. We conduct our experiments with building facade image classification on the eTRIMS dataset, where our focus is the object classes building, car, door, pavement, road, sky, vegetation, and window.},
    Doi = {10.5194/isprsarchives-XXXIX-B3-513-2012},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Yang2011Feature.pdf}
    }

  • M. Y. Yang and W. Förstner, “A Hierarchical Conditional Random Field Model for Labeling and Classifying Images of Man-made Scenes,” in International Conference on Computer Vision, IEEE/ISPRS Workshop on Computer Vision for Remote Sensing of the Environment , 2011. doi:10.1109/ICCVW.2011.6130243
    [BibTeX] [PDF]
    Semantic scene interpretation as a collection of meaningful regions in images is a fundamental problem in both photogrammetry and computer vision. Images of man-made scenes exhibit strong contextual dependencies in the form of spatial and hierarchical structures. In this paper, we introduce a hierarchical conditional random field to deal with the problem of image classification by modeling spatial and hierarchical structures. The probability outputs of an efficient randomized decision forest classifier are used as unary potentials. The spatial and hierarchical structures of the regions are integrated into pairwise potentials. The model is built on multi-scale image analysis in order to aggregate evidence from local to global level. Experimental results are provided to demonstrate the performance of the proposed method using images from the eTRIMS dataset, where our focus is the object classes building, car, door, pavement, road, sky, vegetation, and window.

    @InProceedings{Yang2011Hierarchicala,
    Title = {A Hierarchical Conditional Random Field Model for Labeling and Classifying Images of Man-made Scenes},
    Author = {Yang, Michael Ying and F\"orstner, Wolfgang},
    Booktitle = {International Conference on Computer Vision, IEEE/ISPRS Workshop on Computer Vision for Remote Sensing of the Environment},
    Year = {2011},
    Abstract = {Semantic scene interpretation as a collection of meaningful regions in images is a fundamental problem in both photogrammetry and computer vision. Images of man-made scenes exhibit strong contextual dependencies in the form of spatial and hierarchical structures. In this paper, we introduce a hierarchical conditional random field to deal with the problem of image classification by modeling spatial and hierarchical structures. The probability outputs of an efficient randomized decision forest classifier are used as unary potentials. The spatial and hierarchical structures of the regions are integrated into pairwise potentials. The model is built on multi-scale image analysis in order to aggregate evidence from local to global level. Experimental results are provided to demonstrate the performance of the proposed method using images from the eTRIMS dataset, where our focus is the object classes building, car, door, pavement, road, sky, vegetation, and window.},
    Doi = {10.1109/ICCVW.2011.6130243},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Yang2011Hierarchical.pdf}
    }

  • M. Y. Yang and W. Förstner, “Regionwise Classification of Building Facade Images,” in Photogrammetric Image Analysis (PIA2011) , 2011, pp. 209-220. doi:10.1007/978-3-642-24393-6_18
    [BibTeX] [PDF]
    In recent years, the classification of building facade images has received a great deal of attention in the photogrammetry community. In this paper, we present an approach for regionwise classification using an efficient randomized decision forest classifier and local features. A conditional random field is then introduced to enforce spatial consistency between neighboring regions. Experimental results are provided to illustrate the performance of the proposed methods using images from the eTRIMS database, where our focus is the object classes building, car, door, pavement, road, sky, vegetation, and window.

    @InProceedings{Yang2011Regionwise,
    Title = {Regionwise Classification of Building Facade Images},
    Author = {Yang, Michael Ying and F\"orstner, Wolfgang},
    Booktitle = {Photogrammetric Image Analysis (PIA2011)},
    Year = {2011},
    Note = {Stilla, Uwe / Rottensteiner, Franz / Mayer, H. / Jutzi, Boris / Butenuth, Matthias (Hg.); Munich},
    Pages = {209 -- 220},
    Publisher = {Springer},
    Series = {LNCS 6952},
    Abstract = {In recent years, the classification of building facade images has received a great deal of attention in the photogrammetry community. In this paper, we present an approach for regionwise classification using an efficient randomized decision forest classifier and local features. A conditional random field is then introduced to enforce spatial consistency between neighboring regions. Experimental results are provided to illustrate the performance of the proposed methods using images from the eTRIMS database, where our focus is the object classes building, car, door, pavement, road, sky, vegetation, and window.},
    Doi = {10.1007/978-3-642-24393-6_18},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Yang2011Regionwise.pdf}
    }

  • J. Ziegler, H. Kretzschmar, C. Stachniss, G. Grisetti, and W. Burgard, “Accurate Human Motion Capture in Large Areas by Combining IMU- and Laser-based People Tracking,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , San Francisco, CA, USA, 2011.
    [BibTeX] [PDF]
    @InProceedings{Ziegler2011,
    Title = {Accurate Human Motion Capture in Large Areas by Combining IMU- and Laser-based People Tracking},
    Author = {J. Ziegler and H. Kretzschmar and C. Stachniss and G. Grisetti and W. Burgard},
    Booktitle = IROS,
    Year = {2011},
    Address = {San Francisco, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/ziegler11iros.pdf}
    }

2010

  • M. Albrecht, “Erkennung bewegter Objekte auf fluktuierendem Hintergrund in Bildfolgen,” Master Thesis, 2010.
    [BibTeX] [PDF]
    @MastersThesis{Albrecht2010Erkennung,
    Title = {Erkennung bewegter Objekte auf fluktuierendem Hintergrund in Bildfolgen},
    Author = {Albrecht, Markus},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2010},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Ing. Ribana Roscher},
    Abstract = {[none]},
    City = {Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Albrecht2010Erkennung.pdf}
    }

  • A. Barth and U. Franke, “Tracking Oncoming and Turning Vehicles at Intersections,” in Intelligent Transportation Systems, IEEE Conference on , Madeira Island, Portugal, 2010, pp. 861-868. doi:10.1109/ITSC.2010.5624969
    [BibTeX] [PDF]
    This article addresses the reliable tracking of oncoming traffic at urban intersections from a moving platform with a stereo vision system. Motion and depth information are combined to estimate the pose and motion parameters of an oncoming vehicle, including the yaw rate, by means of Kalman filtering. Vehicle tracking at intersections is particularly challenging since vehicles can turn quickly. A single-filter approach cannot cover the dynamic range of a vehicle sufficiently. We propose a real-time multi-filter approach for vehicle tracking at intersections. A gauge consistency criterion as well as a robust outlier detection method allows for dealing with sudden accelerations and self-occlusions during turn maneuvers. The system is evaluated on both synthetic and real-world data.

    @InProceedings{Barth2010Tracking,
    Title = {Tracking Oncoming and Turning Vehicles at Intersections},
    Author = {Barth, Alexander and Franke, Uwe},
    Booktitle = {Intelligent Transportation Systems, IEEE Conference on},
    Year = {2010},
    Address = {Madeira Island, Portugal},
    Pages = {861--868},
    Abstract = {This article addresses the reliable tracking of oncoming traffic at urban intersections from a moving platform with a stereo vision system. Both motion and depth information is combined to estimate the pose and motion parameters of an oncoming vehicle, including the yaw rate, by means of Kalman filtering. Vehicle tracking at intersections is particularly challenging since vehicles can turn quickly. A single filter approach cannot cover the dynamic range of a vehicle sufficiently. We propose a real-time multi-filter approach for vehicle tracking at intersections. A gauge consistency criteria as well as a robust outlier detection method allow for dealing with sudden accelerations and self-occlusions during turn maneuvers. The system is evaluated both on synthetic and real-world data.},
    Doi = {10.1109/ITSC.2010.5624969},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Barth2010Tracking.pdf}
    }

  • A. Barth, J. Siegemund, A. Meißner, U. Franke, and W. Förstner, “Probabilistic Multi-Class Scene Flow Segmentation for Traffic Scenes,” in Pattern Recognition (Symposium of DAGM) , 2010, pp. 503-512. doi:10.1007/978-3-642-15986-2_51
    [BibTeX] [PDF]
    A multi-class traffic scene segmentation approach based on scene flow data is presented. Opposed to many other approaches using color or texture features, our approach is purely based on dense depth and 3D motion information. Using prior knowledge on tracked objects in the scene and the pixel-wise uncertainties of the scene flow data, each pixel is assigned to either a particular moving object class (tracked/unknown object), the ground surface, or static background. The global topological order of classes, such as objects are above ground, is locally integrated into a conditional random field by an ordering constraint. The proposed method yields very accurate segmentation results on challenging real world scenes, which we made publicly available for comparison.

    @InProceedings{Barth2010Probabilistic,
    Title = {Probabilistic Multi-Class Scene Flow Segmentation for Traffic Scenes},
    Author = {Barth, Alexander and Siegemund, Jan and Mei{\ss}ner, Annemarie and Franke, Uwe and F\"orstner, Wolfgang},
    Booktitle = {Pattern Recognition (Symposium of DAGM)},
    Year = {2010},
    Editor = {Goesele, M. and Roth, S. and Kuijper, A. and Schiele, B. and Schindler, K.},
    Note = {Darmstadt},
    Pages = {503--512},
    Publisher = {Springer},
    Abstract = {A multi-class traffic scene segmentation approach based on scene flow data is presented. Opposed to many other approaches using color or texture features, our approach is purely based on dense depth and 3D motion information. Using prior knowledge on tracked objects in the scene and the pixel-wise uncertainties of the scene flow data, each pixel is assigned to either a particular moving object class (tracked/unknown object), the ground surface, or static background. The global topological order of classes, such as objects are above ground, is locally integrated into a conditional random field by an ordering constraint. The proposed method yields very accurate segmentation results on challenging real world scenes, which we made publicly available for comparison.},
    Doi = {10.1007/978-3-642-15986-2_51},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Barth2010Probabilistic.pdf}
    }

  • W. Burgard, K. M. Wurm, M. Bennewitz, C. Stachniss, A. Hornung, R. B. Rusu, and K. Konolige, “Modeling the World Around Us: An Efficient 3D Representation for Personal Robotics,” in Workshop on Defining and Solving Realistic Perception Problems in Personal Robotics at the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems , Taipei, Taiwan, 2010.
    [BibTeX]
    @InProceedings{Burgard2010,
    Title = {Modeling the World Around Us: An Efficient 3D Representation for Personal Robotics},
    Author = {Burgard, W. and Wurm, K.M. and Bennewitz, M. and Stachniss, C. and Hornung, A. and Rusu, R.B. and Konolige, K.},
    Booktitle = {Workshop on Defining and Solving Realistic Perception Problems in Personal Robotics at the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems},
    Year = {2010},
    Address = {Taipei, Taiwan},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

  • T. Castaings, B. Waske, J. A. Benediktsson, and J. Chanussot, “On the influence of feature reduction for the classification of hyperspectral images based on the extended morphological profile,” International Journal of Remote Sensing, vol. 31, iss. 22, pp. 5975-5991, 2010. doi:10.1080/01431161.2010.512313
    [BibTeX]
    In this study we investigated the classification of hyperspectral data with high spatial resolution. Previously, methods that generate a so-called extended morphological profile (EMP) from the principal components of an image have been proposed to create base images for morphological transformations. However, it can be assumed that the feature reduction (FR) may have a significant effect on the accuracy of the classification of the EMP. We therefore investigated the effect of different FR methods on the generation and classification of the EMP of hyperspectral images from urban areas, using a machine learning-based algorithm for classification. The applied FR methods include: principal component analysis (PCA), nonparametric weighted feature extraction (NWFE), decision boundary feature extraction (DBFE), Gaussian kernel PCA (KPCA) and Bhattacharyya distance feature selection (BDFS). Experiments were run with two classification algorithms: the support vector machine (SVM) and random forest (RF) algorithms. We demonstrate that the commonly used PCA approach seems to be nonoptimal in a large number of cases in terms of classification accuracy, and the other FR methods may be more suitable as preprocessing approaches for the EMP.

    @Article{Castaings2010influence,
    Title = {On the influence of feature reduction for the classification of hyperspectral images based on the extended morphological profile},
    Author = {Castaings, Thibaut and Waske, Bj\"orn and Benediktsson, Jon Atli and Chanussot, Jocelyn},
    Journal = {International Journal of Remote Sensing},
    Year = {2010},
    Number = {22},
    Pages = {5975--5991},
    Volume = {31},
    Abstract = {In this study we investigated the classification of hyperspectral data with high spatial resolution. Previously, methods that generate a so-called extended morphological profile (EMP) from the principal components of an image have been proposed to create base images for morphological transformations. However, it can be assumed that the feature reduction (FR) may have a significant effect on the accuracy of the classification of the EMP. We therefore investigated the effect of different FR methods on the generation and classification of the EMP of hyperspectral images from urban areas, using a machine learning-based algorithm for classification. The applied FR methods include: principal component analysis (PCA), nonparametric weighted feature extraction (NWFE), decision boundary feature extraction (DBFE), Gaussian kernel PCA (KPCA) and Bhattacharyya distance feature selection (BDFS). Experiments were run with two classification algorithms: the support vector machine (SVM) and random forest (RF) algorithms. We demonstrate that the commonly used PCA approach seems to be nonoptimal in a large number of cases in terms of classification accuracy, and the other FR methods may be more suitable as preprocessing approaches for the EMP.},
    Doi = {10.1080/01431161.2010.512313},
    Owner = {waske},
    Sn = {0143-1161},
    Tc = {4},
    Timestamp = {2012.09.04},
    Ut = {WOS:000284956500011},
    Z8 = {0},
    Z9 = {4},
    Zb = {1}
    }

  • X. Ceamanos, B. Waske, J. A. Benediktsson, J. Chanussot, M. Fauvel, and J. R. Sveinsson, “A classifier ensemble based on fusion of support vector machines for classifying hyperspectral data,” International Journal of Image and Data Fusion, vol. 1, iss. 4, pp. 293-307, 2010. doi:10.1080/19479832.2010.485935
    [BibTeX]
    Classification of hyperspectral data using a classifier ensemble that is based on support vector machines (SVMs) is addressed. First, the hyperspectral data set is decomposed into a few data sources according to the similarity of the spectral bands. Then, each source is processed separately by performing classification based on SVM. Finally, all outputs are used as input for final decision fusion performed by an additional SVM classifier. Results of the experiments underline how the proposed SVM fusion ensemble outperforms a standard SVM classifier in terms of overall and class accuracies, the improvement being irrespective of the size of the training sample set. The definition of the data sources resulting from the original data set is also studied.

    @Article{Ceamanos2010classifier,
    Title = {A classifier ensemble based on fusion of support vector machines for classifying hyperspectral data},
    Author = {Ceamanos, Xavier and Waske, Bj\"orn and Benediktsson, Jon Atli and Chanussot, Jocelyn and Fauvel, Mathieu and Sveinsson, Johannes R.},
    Journal = {International Journal of Image and Data Fusion},
    Year = {2010},
    Number = {4},
    Pages = {293--307},
    Volume = {1},
    Abstract = {Classification of hyperspectral data using a classifier ensemble that is based on support vector machines (SVMs) is addressed. First, the hyperspectral data set is decomposed into a few data sources according to the similarity of the spectral bands. Then, each source is processed separately by performing classification based on SVM. Finally, all outputs are used as input for final decision fusion performed by an additional SVM classifier. Results of the experiments underline how the proposed SVM fusion ensemble outperforms a standard SVM classifier in terms of overall and class accuracies, the improvement being irrespective of the size of the training sample set. The definition of the data sources resulting from the original data set is also studied.},
    Doi = {10.1080/19479832.2010.485935}
    }

  • M. Dalla Mura, J. A. Benediktsson, B. Waske, and L. Bruzzone, “Extended profiles with morphological attribute filters for the analysis of hyperspectral data,” International Journal of Remote Sensing, vol. 31, iss. 22, pp. 5975-5991, 2010. doi:10.1080/01431161.2010.512425
    [BibTeX]
    Extended attribute profiles and extended multi-attribute profiles are presented for the analysis of hyperspectral high-resolution images. These extended profiles are based on morphological attribute filters and, through a multi-level analysis, are capable of extracting spatial features that can better model the spatial information, with respect to conventional extended morphological profiles. The features extracted by the proposed extended profiles were considered for a classification task. Two hyperspectral high-resolution datasets acquired for the city of Pavia, Italy, were considered in the analysis. The effectiveness of the introduced operators in modelling the spatial information was proved by the higher classification accuracies obtained with respect to those achieved by a conventional extended morphological profile.

    @Article{DallaMura2010Extended,
    Title = {Extended profiles with morphological attribute filters for the analysis of hyperspectral data},
    Author = {Dalla Mura, Mauro and Benediktsson, Jon Atli and Waske, Bj\"orn and Bruzzone, Lorenzo},
    Journal = {International Journal of Remote Sensing},
    Year = {2010},
    Number = {22},
    Pages = {5975--5991},
    Volume = {31},
    Abstract = {Extended attribute profiles and extended multi-attribute profiles are presented for the analysis of hyperspectral high-resolution images. These extended profiles are based on morphological attribute filters and, through a multi-level analysis, are capable of extracting spatial features that can better model the spatial information, with respect to conventional extended morphological profiles. The features extracted by the proposed extended profiles were considered for a classification task. Two hyperspectral high-resolution datasets acquired for the city of Pavia, Italy, were considered in the analysis. The effectiveness of the introduced operators in modelling the spatial information was proved by the higher classification accuracies obtained with respect to those achieved by a conventional extended morphological profile.},
    Doi = {10.1080/01431161.2010.512425},
    Owner = {waske},
    Sn = {0143-1161},
    Tc = {7},
    Timestamp = {2012.09.04},
    Ut = {WOS:000284956500013},
    Z8 = {0},
    Z9 = {7},
    Zb = {0}
    }

  • M. Dalla Mura, J. A. Benediktsson, B. Waske, and L. Bruzzone, “Morphological Attribute Profiles for the Analysis of Very High Resolution Images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, iss. 10, pp. 3747-3762, 2010. doi:10.1109/TGRS.2010.2048116
    [BibTeX]
    Morphological attribute profiles (APs) are defined as a generalization of the recently proposed morphological profiles (MPs). APs provide a multilevel characterization of an image created by the sequential application of morphological attribute filters that can be used to model different kinds of the structural information. According to the type of the attributes considered in the morphological attribute transformation, different parametric features can be modeled. The generation of APs, thanks to an efficient implementation, strongly reduces the computational load required for the computation of conventional MPs. Moreover, the characterization of the image with different attributes leads to a more complete description of the scene and to a more accurate modeling of the spatial information than with the use of conventional morphological filters based on a predefined structuring element. Here, the features extracted by the proposed operators were used for the classification of two very high resolution panchromatic images acquired by Quickbird on the city of Trento, Italy. The experimental analysis proved the usefulness of APs in modeling the spatial information present in the images. The classification maps obtained by considering different APs result in a better description of the scene (both in terms of thematic and geometric accuracy) than those obtained with an MP.

    @Article{DallaMura2010Morphological,
    Title = {Morphological Attribute Profiles for the Analysis of Very High Resolution Images},
    Author = {Dalla Mura, Mauro and Benediktsson, Jon Atli and Waske, Bj\"orn and Bruzzone, Lorenzo},
    Journal = {IEEE Transactions on Geoscience and Remote Sensing},
    Year = {2010},
    Month = oct,
    Number = {10},
    Pages = {3747--3762},
    Volume = {48},
    Abstract = {Morphological attribute profiles (APs) are defined as a generalization of the recently proposed morphological profiles (MPs). APs provide a multilevel characterization of an image created by the sequential application of morphological attribute filters that can be used to model different kinds of the structural information. According to the type of the attributes considered in the morphological attribute transformation, different parametric features can be modeled. The generation of APs, thanks to an efficient implementation, strongly reduces the computational load required for the computation of conventional MPs. Moreover, the characterization of the image with different attributes leads to a more complete description of the scene and to a more accurate modeling of the spatial information than with the use of conventional morphological filters based on a predefined structuring element. Here, the features extracted by the proposed operators were used for the classification of two very high resolution panchromatic images acquired by Quickbird on the city of Trento, Italy. The experimental analysis proved the usefulness of APs in modeling the spatial information present in the images. The classification maps obtained by considering different APs result in a better description of the scene (both in terms of thematic and geometric accuracy) than those obtained with an MP.},
    Doi = {10.1109/TGRS.2010.2048116},
    Owner = {waske},
    Sn = {0196-2892},
    Tc = {15},
    Timestamp = {2012.09.04},
    Ut = {WOS:000283349400014},
    Z8 = {0},
    Z9 = {15},
    Zb = {1}
    }

  • W. Förstner, “Minimal Representations for Uncertainty and Estimation in Projective Spaces,” in Proc. of Asian Conference on Computer Vision , 2010, pp. 619-633, Part II. doi:10.1007/978-3-642-19309-5_48
    [BibTeX] [PDF]
    Estimation using homogeneous entities has to cope with obstacles such as singularities of covariance matrices and redundant parametrizations which do not allow an immediate definition of maximum likelihood estimation and lead to estimation problems with more parameters than necessary. The paper proposes a representation of the uncertainty of all types of geometric entities and estimation procedures for geometric entities and transformations which (1) only require the minimum number of parameters, (2) are free of singularities, (3) allow for a consistent update within an iterative procedure, (4) enable to exploit the simplicity of homogeneous coordinates to represent geometric constraints and (5) allow to handle geometric entities which are at infinity or at least very far, avoiding the usage of concepts like the inverse depth. Such representations are already available for transformations such as rotations, motions (Rosenhahn 2002), homographies (Begelfor 2005), or the projective correlation with fundamental matrix (Bartoli 2004) all being elements of some Lie group. The uncertainty is represented in the tangent space of the manifold, namely the corresponding Lie algebra. However, to our knowledge no such representations are developed for the basic geometric entities such as points, lines and planes, as in addition to use the tangent space of the manifolds we need transformation of the entities such that they stay on their specific manifold during the estimation process. We develop the concept, discuss its usefulness for bundle adjustment and demonstrate (a) its superiority compared to more simple methods for vanishing point estimation, (b) its rigour when estimating 3D lines from 3D points and (c) its applicability for determining 3D lines from observed image line segments in a multi view setup.

    @InProceedings{Forstner2010Minimal,
    Title = {Minimal Representations for Uncertainty and Estimation in Projective Spaces},
    Author = {F\"orstner, Wolfgang},
    Booktitle = {Proc. of Asian Conference on Computer Vision},
    Year = {2010},
    Note = {Queenstown, New Zealand},
    Pages = {619--633, Part II},
    Abstract = {Estimation using homogeneous entities has to cope with obstacles such as singularities of covariance matrices and redundant parametrizations which do not allow an immediate definition of maximum likelihood estimation and lead to estimation problems with more parameters than necessary. The paper proposes a representation of the uncertainty of all types of geometric entities and estimation procedures for geometric entities and transformations which (1) only require the minimum number of parameters, (2) are free of singularities, (3) allow for a consistent update within an iterative procedure, (4) enable to exploit the simplicity of homogeneous coordinates to represent geometric constraints and (5) allow to handle geometric entities which are at infinity or at least very far, avoiding the usage of concepts like the inverse depth. Such representations are already available for transformations such as rotations, motions (Rosenhahn 2002), homographies (Begelfor 2005), or the projective correlation with fundamental matrix (Bartoli 2004) all being elements of some Lie group. The uncertainty is represented in the tangent space of the manifold, namely the corresponding Lie algebra. However, to our knowledge no such representations are developed for the basic geometric entities such as points, lines and planes, as in addition to use the tangent space of the manifolds we need transformation of the entities such that they stay on their specific manifold during the estimation process. We develop the concept, discuss its usefulness for bundle adjustment and demonstrate (a) its superiority compared to more simple methods for vanishing point estimation, (b) its rigour when estimating 3D lines from 3D points and (c) its applicability for determining 3D lines from observed image line segments in a multi view setup.},
    Doi = {10.1007/978-3-642-19309-5_48},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Forstner2010Minimal.pdf}
    }

  • W. Förstner, “Optimal Vanishing Point Detection and Rotation Estimation of Single Images of a Legolandscene,” in Int. Archives of Photogrammetry and Remote Sensing , 2010, pp. 157-163, Part A.
    [BibTeX] [PDF]
    The paper presents a method for automatically and optimally determining the vanishing points of a single image, and in case the interior orientation is given, the rotation of an image with respect to the intrinsic coordinate system of a lego land scene. We perform rigorous testing and estimation in order to be as independent on control parameters as possible. This refers to (1) estimating vanishing points from line segments and the rotation matrix, (2) to testing during RANSAC and during boosting lines and (3) to classifying the line segments w. r. t. their vanishing point. Spherically normalized homogeneous coordinates are used for line segments and especially for vanishing points to allow for points at infinity. We propose a minimal representation for the uncertainty of homogeneous coordinates of 2D points and 2D lines and rotations to avoid the use of singular covariance matrices of observed line segments. This at the same time allows to estimate the parameters with a minimal representation. The vanishing point detection method is experimentally validated on a set of 292 images.

    @InProceedings{Forstner2010Optimal,
    Title = {Optimal Vanishing Point Detection and Rotation Estimation of Single Images of a Legolandscene},
    Author = {F\"orstner, Wolfgang},
    Booktitle = {Int. Archives of Photogrammetry and Remote Sensing},
    Year = {2010},
    Organization = {ISPRS Symposium Comm. III, Paris},
    Pages = {157--163, Part A.},
    Abstract = {The paper presents a method for automatically and optimally determining the vanishing points of a single image, and in case the interior orientation is given, the rotation of an image with respect to the intrinsic coordinate system of a lego land scene. We perform rigorous testing and estimation in order to be as independent on control parameters as possible. This refers to (1) estimating vanishing points from line segments and the rotation matrix, (2) to testing during RANSAC and during boosting lines and (3) to classifying the line segments w. r. t. their vanishing point. Spherically normalized homogeneous coordinates are used for line segments and especially for vanishing points to allow for points at infinity. We propose a minimal representation for the uncertainty of homogeneous coordinates of 2D points and 2D lines and rotations to avoid the use of singular covariance matrices of observed line segments. This at the same time allows to estimate the parameters with a minimal representation. The vanishing point detection method is experimentally validated on a set of 292 images.},
    Location = {wf},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Forstner2010Optimal.pdf}
    }

  • B. Frank, R. Schmedding, C. Stachniss, M. Teschner, and W. Burgard, “Learning Deformable Object Models for Mobile Robot Path Planning using Depth Cameras and a Manipulation Robot,” in Proceedings of the Workshop RGB-D: Advanced Reasoning with Depth Cameras at Robotics: Science and Systems (RSS) , Zaragoza, Spain, 2010.
    [BibTeX] [PDF]
    @InProceedings{Frank2010,
    Title = {Learning Deformable Object Models for Mobile Robot Path Planning using Depth Cameras and a Manipulation Robot},
    Author = {B. Frank and R. Schmedding and C. Stachniss and M. Teschner and W. Burgard},
    Booktitle = {Proceedings of the Workshop RGB-D: Advanced Reasoning with Depth Cameras at Robotics: Science and Systems (RSS)},
    Year = {2010},
    Address = {Zaragoza, Spain},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/frank10rssws.pdf}
    }

  • B. Frank, R. Schmedding, C. Stachniss, M. Teschner, and W. Burgard, “Learning the Elasticity Parameters of Deformable Objects with a Manipulation Robot,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , Taipei, Taiwan, 2010.
    [BibTeX] [PDF]
    @InProceedings{Frank2010a,
    Title = {Learning the Elasticity Parameters of Deformable Objects with a Manipulation Robot},
    Author = {B. Frank and R. Schmedding and C. Stachniss and M. Teschner and W. Burgard},
    Booktitle = iros,
    Year = {2010},
    Address = {Taipei, Taiwan},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/frank10iros.pdf}
    }

  • G. Grisetti, R. Kümmerle, C. Stachniss, and W. Burgard, “A Tutorial on Graph-based SLAM,” IEEE Intelligent Transportation Systems Magazine, vol. 2, pp. 31-43, 2010.
    [BibTeX] [PDF]
    @Article{Grisetti2010a,
    Title = {A Tutorial on Graph-based {SLAM}},
    Author = {G. Grisetti and R. K{\"u}mmerle and C. Stachniss and W. Burgard},
    Journal = {IEEE Intelligent Transportation Systems Magazine},
    Year = {2010},
    Pages = {31--43},
    Volume = {2},
    Abstract = {[none]},
    Issue = {4},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/grisetti10titsmag.pdf}
    }

  • G. Grisetti, R. Kümmerle, C. Stachniss, U. Frese, and C. Hertzberg, “Hierarchical Optimization on Manifolds for Online 2D and 3D Mapping,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Anchorage, Alaska, 2010.
    [BibTeX] [PDF]
    @InProceedings{Grisetti2010,
    Title = {Hierarchical Optimization on Manifolds for Online 2D and 3D Mapping},
    Author = {G. Grisetti and R. K{\"u}mmerle and C. Stachniss and U. Frese and C. Hertzberg},
    Booktitle = icra,
    Year = {2010},
    Address = {Anchorage, Alaska},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/grisetti10icra.pdf}
    }

  • A. Hüsgen, “Multi-Modal Segmentation of Anatomical and Functional Image of the Brain,” Diploma Thesis, 2010.
    [BibTeX] [PDF]
    @MastersThesis{Husgen2010Multi,
    Title = {Multi-Modal Segmentation of Anatomical and Functional Image of the Brain},
    Author = {H\"usgen, Andreas},
    School = {University of Bonn},
    Year = {2010},
    Note = {Supervision: Prof. Dr.-Ing. W. F\"orstner, Privatdozent Dr. Volker Steinhage},
    Type = {Diploma Thesis},
    Abstract = {[none]},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Husgen2010Multi.pdf}
    }

  • A. Hecheltjen, B. Waske, F. Thonfeld, M. Braun, and G. Menz, “Support Vector Machines for Multitemporal and Multisensor Change Detection,” in ESA’s Living Planet Symposium (ESA SP-686) , 2010.
    [BibTeX]
    @InProceedings{Hecheltjen2010Support,
    Title = {Support Vector Machines for Multitemporal and Multisensor Change Detection},
    Author = {Hecheltjen, Antje and Waske, Bj\"orn and Thonfeld, Frank and Braun, Matthias and Menz, Gunter},
    Booktitle = {ESA's Living Planet Symposium (ESA SP-686)},
    Year = {2010},
    Abstract = {[none]},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • A. Hornung, M. Bennewitz, C. Stachniss, H. Strasdat, S. Oßwald, and W. Burgard, “Learning Adaptive Navigation Strategies for Resource-Constrained Systems,” in Proceedings of the Int. Workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems , Lisbon, Portugal, 2010.
    [BibTeX] [PDF]
    @InProceedings{Hornung2010,
    Title = {Learning Adaptive Navigation Strategies for Resource-Constrained Systems},
    Author = {A. Hornung and M. Bennewitz and C. Stachniss and H. Strasdat and S. O{\ss}wald and W. Burgard},
    Booktitle = {Proceedings of the Int. Workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems},
    Year = {2010},
    Address = {Lisbon, Portugal},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/hornung10erlars.pdf}
    }

  • M. Karg, K. M. Wurm, C. Stachniss, K. Dietmayer, and W. Burgard, “Consistent Mapping of Multistory Buildings by Introducing Global Constraints to Graph-based SLAM,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Anchorage, Alaska, 2010.
    [BibTeX] [PDF]
    @InProceedings{Karg2010,
    Title = {Consistent Mapping of Multistory Buildings by Introducing Global Constraints to Graph-based {SLAM}},
    Author = {M. Karg and K.M. Wurm and C. Stachniss and K. Dietmayer and W. Burgard},
    Booktitle = icra,
    Year = {2010},
    Address = {Anchorage, Alaska},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/karg10icra.pdf}
    }

  • F. Korč, D. Schneider, and W. Förstner, “On Nonparametric Markov Random Field Estimation for Fast Automatic Segmentation of MRI Knee Data,” in Proceedings of the 4th Medical Image Analysis for the Clinic – A Grand Challenge workshop, MICCAI , 2010, pp. 261-270.
    [BibTeX] [PDF]
    We present a fast automatic reproducible method for 3d semantic segmentation of magnetic resonance images of the knee. We formulate a single global model that allows to jointly segment all classes. The model estimation was performed automatically without manual interaction and parameter tuning. The segmentation of a magnetic resonance image with 11 million voxels took approximately one minute. Our labeling results by far do not reach the performance of complex state of the art approaches designed to produce clinically relevant results. Our results could potentially be useful for rough visualization or initialization of computationally demanding methods. Our main contribution is to provide insights into possible strategies when employing global statistical models.

    @InProceedings{Korvc2010Nonparametric,
    Title = {On Nonparametric Markov Random Field Estimation for Fast Automatic Segmentation of MRI Knee Data},
    Author = {Kor{\vc}, Filip and Schneider, David and F\"orstner, Wolfgang},
    Booktitle = {Proceedings of the 4th Medical Image Analysis for the Clinic - A Grand Challenge workshop, MICCAI},
    Year = {2010},
    Note = {Beijing},
    Pages = {261--270},
    Abstract = {We present a fast automatic reproducible method for 3d semantic segmentation of magnetic resonance images of the knee. We formulate a single global model that allows to jointly segment all classes. The model estimation was performed automatically without manual interaction and parameter tuning. The segmentation of a magnetic resonance image with 11 million voxels took approximately one minute. Our labeling results by far do not reach the performance of complex state of the art approaches designed to produce clinically relevant results. Our results could potentially be useful for rough visualization or initialization of computationally demanding methods. Our main contribution is to provide insights into possible strategies when employing global statistical models.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Korvc2010Nonparametric.pdf}
    }

  • H. Kretzschmar, G. Grisetti, and C. Stachniss, “Lifelong Map Learning for Graph-based SLAM in Static Environments,” KI — Künstliche Intelligenz, vol. 24, pp. 199-206, 2010.
    [BibTeX]
    @Article{Kretzschmar2010,
    Title = {Lifelong Map Learning for Graph-based {SLAM} in Static Environments},
    Author = {H. Kretzschmar and G. Grisetti and C. Stachniss},
    Journal = {{KI} -- {K}\"unstliche {I}ntelligenz},
    Year = {2010},
    Pages = {199--206},
    Volume = {24},
    Abstract = {[none]},
    Issue = {3},
    Timestamp = {2014.04.24}
    }

  • J. Müller, C. Stachniss, K. O. Arras, and W. Burgard, “Socially Inspired Motion Planning for Mobile Robots in Populated Environments,” in Cognitive Systems, Springer, 2010.
    [BibTeX]
    @InCollection{Muller2010,
    Title = {Socially Inspired Motion Planning for Mobile Robots in Populated Environments},
    Author = {M\"{u}ller, J. and Stachniss, C. and Arras, K.O. and Burgard, W.},
    Booktitle = {Cognitive Systems},
    Publisher = springer,
    Year = {2010},
    Note = {In press},
    Series = {Cognitive Systems Monographs},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

  • T. Mewes, B. Waske, J. Franke, and G. Menz, “Derivation of stress severities in wheat from hyperspectral data using support vector regression,” in 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS 2010) , 2010. doi:10.1109/WHISPERS.2010.5594921
    [BibTeX]
    The benefits and limitations of crop stress detection by hyperspectral data analysis have been examined in detail. It could thereby be demonstrated that even a differentiation between healthy and fungal infected wheat stands is possible and profits by analyzing entire spectra or specifically selected spectral bands/ranges. For reasons of practicability in agriculture, spatial information about the health status of crop plants beyond a binary classification would be a major benefit. Thus, the potential of hyperspectral data for the derivation of several disease severity classes or moreover the derivation of continual disease severity has to be further examined. In the present study, a state-of-the-art regression approach using support vector machines (SVM) has been applied to hyperspectral AISA-Dual data to derive the disease severity caused by leaf rust (Puccinia recondita) in wheat. Ground truth disease ratings were realized within an experimental field. A mean correlation coefficient of r=0.69 between severities and support vector regression predicted severities could be achieved using independent training and test data. The results show that the SVR is generally suitable for the derivation of continual disease severity values, but the crucial point is the uncertainty in the reference severity data, which is used to train the regression.

    @InProceedings{Mewes2010Derivation,
    Title = {Derivation of stress severities in wheat from hyperspectral data using support vector regression},
    Author = {Mewes, T. and Waske, Bj\"orn and Franke, J. and Menz, G.},
    Booktitle = {2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS 2010)},
    Year = {2010},
    Abstract = {The benefits and limitations of crop stress detection by hyperspectral data analysis have been examined in detail. It could thereby be demonstrated that even a differentiation between healthy and fungal infected wheat stands is possible and profits by analyzing entire spectra or specifically selected spectral bands/ranges. For reasons of practicability in agriculture, spatial information about the health status of crop plants beyond a binary classification would be a major benefit. Thus, the potential of hyperspectral data for the derivation of several disease severity classes or moreover the derivation of continual disease severity has to be further examined. In the present study, a state-of-the-art regression approach using support vector machines (SVM) has been applied to hyperspectral AISA-Dual data to derive the disease severity caused by leaf rust (Puccinia recondita) in wheat. Ground truth disease ratings were realized within an experimental field. A mean correlation coefficient of r=0.69 between severities and support vector regression predicted severities could be achieved using independent training and test data. The results show that the SVR is generally suitable for the derivation of continual disease severity values, but the crucial point is the uncertainty in the reference severity data, which is used to train the regression.},
    Doi = {10.1109/WHISPERS.2010.5594921},
    Keywords = {AISA-Dual data;Puccinina recondita;agriculture;binary classification;crop stress detection;fungal infected wheat;hyperspectral data;leaf rust;stress severity derivation;support vector machine;support vector regression;agriculture;crops;geophysical techniques;regression analysis;support vector machines;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • M. Muffert, “Verwendung eines mosaikbasierten Kamerasystems zur Bestimmung von räumlichen Orientierungsänderungen von mobilen Objekten,” Master Thesis, 2010.
    [BibTeX] [PDF]
    The estimation of relative spatial positions and orientations is one of the most important tasks of engineering geodesy. For example, we need these parameters in precision farming or for controlling the driving direction of construction vehicles. Multi-sensor systems, often combining GPS sensors with Inertial Navigation Systems (INS), are commonly used in these applications. An optimal solution for the sought parameters can be achieved using filtering processes.

    @MastersThesis{Muffert2010Verwendung,
    Title = {Verwendung eines mosaikbasierten Kamerasystems zur Bestimmung von r\"aumlichen Orientierungs\"anderungen von mobilen Objekten},
    Author = {Muffert, Maximilian},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2010},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Prof. Dr.-Ing. Heiner Kuhlmann},
    Abstract = {The estimation of relative spatial positions and orientations is one of the most important tasks of engineering geodesy. For example, we need these parameters in precision farming or controlling the driving direction of construction vehicles. It is usual to use multi-sensor systems in these applications which are often a combination of GPS-sensors with Inertial Navigation Systems (INS). An optimal solution for the searched parameters could be achieved using filtering processes.},
    City = {Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Muffert2010Verwendung.pdf}
    }

  • M. Muffert, J. Siegemund, and W. Förstner, “The estimation of spatial positions by using an omnidirectional camera system,” in 2nd International Conference on Machine Control & Guidance , 2010, pp. 95-104.
    [BibTeX] [PDF]
    With an omnidirectional camera system, it is possible to take 360-degree views of the surrounding area at each camera position. These systems are used particularly in robotic applications, in autonomous navigation and supervision technology for ego-motion estimation. In addition to the visual capture of the environment itself, we can compute the parameters of orientation and position from image sequences, i.e. we get three parameters of position and three of orientation (yaw rate, pitch and roll angle) at each time of acquisition. The aim of the presented project is to investigate the quality of the spatial trajectory of a mobile survey vehicle from the recorded image sequences. In this paper, we explain the required photogrammetric background and show the advantages of omnidirectional camera systems for this task. We present the first results on our test set and discuss alternative applications for omnidirectional cameras.

    @InProceedings{Muffert2010estimation,
    Title = {The estimation of spatial positions by using an omnidirectional camera system},
    Author = {Muffert, Maximilian and Siegemund, Jan and F\"orstner, Wolfgang},
    Booktitle = {2nd International Conference on Machine Control \& Guidance},
    Year = {2010},
    Month = mar,
    Pages = {95--104},
    Abstract = {With an omnidirectional camera system, it is possible to take 360-degree views of the surrounding area at each camera position. These systems are used particularly in robotic applications, in autonomous navigation and supervision technology for ego-motion estimation. In addition to the visual capture of the environment itself, we can compute the parameters of orientation and position from image sequences, i.e. we get three parameters of position and three of orientation (yaw rate, pitch and roll angle) at each time of acquisition. The aim of the presented project is to investigate the quality of the spatial trajectory of a mobile survey vehicle from the recorded image sequences. In this paper, we explain the required photogrammetric background and show the advantages of omnidirectional camera systems for this task. We present the first results on our test set and discuss alternative applications for omnidirectional cameras.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Muffert2010estimation.pdf}
    }

  • C. Plagemann, C. Stachniss, J. Hess, F. Endres, and N. Franklin, “A Nonparametric Learning Approach to Range Sensing from Omnidirectional Vision,” Robotics and Autonomous Systems, vol. 58, pp. 762-772, 2010.
    [BibTeX]
    @Article{Plagemann2010,
    Title = {A Nonparametric Learning Approach to Range Sensing from Omnidirectional Vision},
    Author = {C. Plagemann and C. Stachniss and J. Hess and F. Endres and N. Franklin},
    Journal = jras,
    Year = {2010},
    Pages = {762--772},
    Volume = {58},
    Abstract = {[none]},
    Issue = {6},
    Timestamp = {2014.04.24}
    }

  • M. Röder-Sorge, “Konzeption und Anwendung von Entscheidungsnetzwerken im Städtebau,” Diploma Thesis, 2010.
    [BibTeX]
    In this thesis, the program Netica is used to build a decision network that determines the optimal decisions about the future development of six buildings of a housing complex in Leipzig-Grünau. The network incorporates the interests of the tenants, the city administration, and the housing companies of Grünau; with the latter, interviews were conducted on the weighting of the influencing factors in urban redevelopment. Netica is only suitable for modeling and decision making in urban redevelopment with limitations, since no more than six buildings can be modeled and, as with all other decision network programs, the free-rider problem cannot be represented.

    @MastersThesis{Roder-Sorge2010Konzeption,
    Title = {Konzeption und Anwendung von Entscheidungsnetzwerken im St\"adtebau},
    Author = {R\"oder-Sorge, Marisa},
    School = {University of Bonn},
    Year = {2010},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Prof. Dr.-Ing. Theo K\"otter},
    Type = {Diploma Thesis},
    Abstract = {In dieser Arbeit wird mit dem Programm Netica ein Entscheidungsnetzwerk aufgestellt, das f\"ur sechs Geb\"aude eines Wohnkomplexes in Leipzig-Gr\"unau die optimalen Entscheidungen \"uber deren zuk\"unftige Entwicklung ermittelt. In das Netzwerk werden die Interessen der Mieter, der Stadtverwaltung und der Wohnungsunternehmen Gr\"unaus mit einbezogen, wobei mit letzteren Interviews \"uber die Gewichtung der Einflussfaktoren im Stadtumbau gef\"uhrt wurden. Netica eignet sich nur mit Einschr\"ankungen f\"ur die Modellierung und Entscheidungsfindung im Stadtumbau, da nicht mehr als sechs Geb\"aude modelliert werden k\"onnen und, genau wie mit allen anderen Entscheidungsnetzwerkprogrammen, die Darstellung des Free-Rider-Problems nicht m\"oglich ist.}
    }

  • R. Roscher, F. Schindler, and W. Förstner, “High Dimensional Correspondences from Low Dimensional Manifolds — An Empirical Comparison of Graph-based Dimensionality Reduction Algorithms,” in The 3rd International Workshop on Subspace Methods, in conjunction with ACCV2010 , 2010, p. 10. doi:10.1007/978-3-642-22819-3_34
    [BibTeX] [PDF]
    We discuss the utility of dimensionality reduction algorithms to put data points in high dimensional spaces into correspondence by learning a transformation between assigned data points on a lower dimensional structure. We assume that similar high dimensional feature spaces are characterized by a similar underlying low dimensional structure. To enable the determination of an affine transformation between two data sets we make use of well-known dimensionality reduction algorithms. We demonstrate this procedure for applications like classification and assignments between two given data sets and evaluate six well-known algorithms during several experiments with different objectives. We show that with these algorithms and our transformation approach high dimensional data sets can be related to each other. We also show that linear methods turn out to be more suitable for assignment tasks, whereas graph-based methods appear to be superior for classification tasks.

    @InProceedings{Roscher2010High,
    Title = {High Dimensional Correspondences from Low Dimensional Manifolds -- An Empirical Comparison of Graph-based Dimensionality Reduction Algorithms},
    Author = {Roscher, Ribana and Schindler, Falko and F\"orstner, Wolfgang},
    Booktitle = {The 3rd International Workshop on Subspace Methods, in conjunction with ACCV2010},
    Year = {2010},
    Note = {Queenstown, New Zealand},
    Pages = {10},
    Abstract = {We discuss the utility of dimensionality reduction algorithms to put data points in high dimensional spaces into correspondence by learning a transformation between assigned data points on a lower dimensional structure. We assume that similar high dimensional feature spaces are characterized by a similar underlying low dimensional structure. To enable the determination of an affine transformation between two data sets we make use of well-known dimensional reduction algorithms. We demonstrate this procedure for applications like classification and assignments between two given data sets and evaluate six well-known algorithms during several experiments with different objectives. We show that with these algorithms and our transformation approach high dimensional data sets can be related to each other. We also show that linear methods turn out to be more suitable for assignment tasks, whereas graph-based methods appear to be superior for classification tasks.},
    Doi = {10.1007/978-3-642-22819-3_34},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Roscher2010High.pdf;Poster:Roscher2010High_Poster.pdf}
    }

  • R. Roscher, B. Waske, and W. Förstner, “Kernel Discriminative Random Fields for land cover classification,” in IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) , 2010. doi:10.1109/PRRS.2010.5742801
    [BibTeX] [PDF]
    Logistic Regression has become a commonly used classifier, not only due to its probabilistic output and its direct usage in multi-class cases. We use a sparse Kernel Logistic Regression approach – the Import Vector Machines – for land cover classification. We improve our segmentation results by applying a Discriminative Random Field framework to the probabilistic classification output. We assess the performance with regard to classification accuracy and complexity, and compare it to Gaussian Maximum Likelihood classification and Support Vector Machines.

    @InProceedings{Roscher2010Kernel,
    Title = {Kernel Discriminative Random Fields for land cover classification},
    Author = {Roscher, Ribana and Waske, Bj\"orn and F\"orstner, Wolfgang},
    Booktitle = {IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)},
    Year = {2010},
    Note = {Istanbul, Turkey},
    Abstract = {Logistic Regression has become a commonly used classifier, not only due to its probabilistic output and its direct usage in multi-class cases. We use a sparse Kernel Logistic Regression approach - the Import Vector Machines - for land cover classification. We improve our segmentation results applying a Discriminative Random Field framework on the probabilistic classification output. We consider the performance regarding to the classification accuracy and the complexity and compare it to the Gaussian Maximum Likelihood classification and the Support Vector Machines.},
    Doi = {10.1109/PRRS.2010.5742801},
    Keywords = {Gaussian maximum likelihood classification;image segmentation;import vector machine;kernel discriminative random fields;land cover classification;logistic regression;probabilistic classification;support vector machines;geophysical image processing;image classification;image segmentation;support vector machines;terrain mapping;},
    Owner = {waske},
    Timestamp = {2012.09.05},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Roscher2010Kernel.pdf;Slides:Roscher2010Kernel_Slides.pdf}
    }

  • J. Siegemund, D. Pfeiffer, U. Franke, and W. Förstner, “Curb Reconstruction using Conditional Random Fields,” in IEEE Intelligent Vehicles Symposium (IV) , 2010, pp. 203-210. doi:10.1109/IVS.2010.5548096
    [BibTeX] [PDF]
    This paper presents a generic framework for curb detection and reconstruction in the context of driver assistance systems. Based on a 3D point cloud, we estimate the parameters of a 3D curb model, incorporating also the curb adjacent surfaces, e.g. street and sidewalk. We apply an iterative two step approach. First, the measured 3D points, e.g., obtained from dense stereo vision, are assigned to the curb adjacent surfaces using loopy belief propagation on a Conditional Random Field. Based on this result, we reconstruct the surfaces and in particular the curb. Our system is not limited to straight-line curbs, i.e. it is able to deal with curbs of different curvature and varying height. The proposed algorithm runs in real-time on our demonstrator vehicle and is evaluated in urban real-world scenarios. It yields highly accurate results even for low curbs up to 20 m distance.

    @InProceedings{Siegemund2010Curb,
    Title = {Curb Reconstruction using Conditional Random Fields},
    Author = {Siegemund, Jan and Pfeiffer, David and Franke, Uwe and F\"orstner, Wolfgang},
    Booktitle = {IEEE Intelligent Vehicles Symposium (IV)},
    Year = {2010},
    Month = jun,
    Pages = {203--210},
    Publisher = {IEEE Computer Society},
    Abstract = {This paper presents a generic framework for curb detection and reconstruction in the context of driver assistance systems. Based on a 3D point cloud, we estimate the parameters of a 3D curb model, incorporating also the curb adjacent surfaces, e.g. street and sidewalk. We apply an iterative two step approach. First, the measured 3D points, e.g., obtained from dense stereo vision, are assigned to the curb adjacent surfaces using loopy belief propagation on a Conditional Random Field. Based on this result, we reconstruct the surfaces and in particular the curb. Our system is not limited to straight-line curbs, i.e. it is able to deal with curbs of different curvature and varying height. The proposed algorithm runs in real-time on our demonstrator vehicle and is evaluated in urban real-world scenarios. It yields highly accurate results even for low curbs up to 20 m distance.},
    Doi = {10.1109/IVS.2010.5548096},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Siegemund2010Curb.pdf}
    }

  • R. Steffen, J. Frahm, and W. Förstner, “Relative Bundle Adjustment based on Trifocal Constraints,” in ECCV Workshop on Reconstruction and Modeling of Large-Scale 3D Virtual Environments , 2010. doi:10.1007/978-3-642-35740-4_22
    [BibTeX] [PDF]
    In this paper we propose a novel approach to bundle adjustment for large-scale camera configurations. The method does not need to include the 3D points in the optimization as parameters. Additionally, we model the parameters of a camera only relative to a nearby camera to achieve a stable estimation of all cameras. This guarantees to yield a normal equation system with a numerical condition, which practically is independent of the number of images. Secondly, instead of using the classical perspective relation between object point, camera and image point, we use epipolar and trifocal constraints to implicitly establish the relations between the cameras via the object structure. This avoids the explicit reference to 3D points thereby handling points far from the camera in a numerically stable fashion. We demonstrate the resulting stability and high convergence rates using synthetic and real data.

    @InProceedings{Steffen2010Relative,
    Title = {Relative Bundle Adjustment based on Trifocal Constraints},
    Author = {Steffen, Richard and Frahm, Jan-Michael and F\"orstner, Wolfgang},
    Booktitle = {ECCV Workshop on Reconstruction and Modeling of Large-Scale 3D Virtual Environments},
    Year = {2010},
    Organization = {ECCV 2010 Crete, Greece},
    Abstract = {In this paper we propose a novel approach to bundle adjustment for large-scale camera configurations. The method does not need to include the 3D points in the optimization as parameters. Additionally, we model the parameters of a camera only relative to a nearby camera to achieve a stable estimation of all cameras. This guarantees to yield a normal equation system with a numerical condition, which practically is independent of the number of images. Secondly, instead of using the classical perspective relation between object point, camera and image point, we use epipolar and trifocal constraints to implicitly establish the relations between the cameras via the object structure. This avoids the explicit reference to 3D points thereby handling points far from the camera in a numerically stable fashion. We demonstrate the resulting stability and high convergence rates using synthetic and real data.},
    Doi = {10.1007/978-3-642-35740-4_22},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Steffen2010Relative.pdf}
    }

  • J. Sturm, A. Jain, C. Stachniss, C. C. Kemp, and W. Burgard, “Robustly Operating Articulated Objects based on Experience,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , Taipei, Taiwan, 2010.
    [BibTeX] [PDF]
    @InProceedings{Sturm2010b,
    Title = {Robustly Operating Articulated Objects based on Experience},
    Author = {J. Sturm and A. Jain and C. Stachniss and C.C. Kemp and W. Burgard},
    Booktitle = iros,
    Year = {2010},
    Address = {Taipei, Taiwan},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/sturm10iros.pdf}
    }

  • J. Sturm, K. Konolige, C. Stachniss, and W. Burgard, “Vision-based Detection for Learning Articulation Models of Cabinet Doors and Drawers in Household Environments,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Anchorage, Alaska, 2010.
    [BibTeX] [PDF]
    @InProceedings{Sturm2010,
    Title = {Vision-based Detection for Learning Articulation Models of Cabinet Doors and Drawers in Household Environments},
    Author = {J. Sturm and K. Konolige and C. Stachniss and W. Burgard},
    Booktitle = icra,
    Year = {2010},
    Address = {Anchorage, Alaska},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/sturm10icra.pdf}
    }

  • J. Sturm, K. Konolige, C. Stachniss, and W. Burgard, “3D Pose Estimation, Tracking and Model Learning of Articulated Objects from Dense Depth Video using Projected Texture Stereo,” in Proceedings of the Workshop RGB-D: Advanced Reasoning with Depth Cameras at Robotics: Science and Systems (RSS) , Zaragoza, Spain, 2010.
    [BibTeX] [PDF]
    @InProceedings{Sturm2010a,
    Title = {3D Pose Estimation, Tracking and Model Learning of Articulated Objects from Dense Depth Video using Projected Texture Stereo},
    Author = {J. Sturm and K. Konolige and C. Stachniss and W. Burgard},
    Booktitle = {Proceedings of the Workshop RGB-D: Advanced Reasoning with Depth Cameras at Robotics: Science and Systems (RSS)},
    Year = {2010},
    Address = {Zaragoza, Spain},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/sturm10rssws.pdf}
    }

  • S. Valero, J. Chanussot, J. A. Benediktsson, H. Talbot, and B. Waske, “Advanced directional mathematical morphology for the detection of the road network in very high resolution remote sensing images,” Pattern Recognition Letters, vol. 31, iss. 10, pp. 1120-1127, 2010. doi:10.1016/j.patrec.2009.12.018
    [BibTeX]
    Very high spatial resolution (VHR) images allow to feature man-made structures such as roads and thus enable their accurate analysis. Geometrical characteristics can be extracted using mathematical morphology. However, the prior choice of a reference shape (structuring element) introduces a shape-bias. This paper presents a new method for extracting roads in Very High Resolution remotely sensed images based on advanced directional morphological operators. The proposed approach introduces the use of Path Openings and Path Closings in order to extract structural pixel information. These morphological operators remain flexible enough to fit rectilinear and slightly curved structures since they do not depend on the choice of a structural element shape. As a consequence, they outperform standard approaches using rotating rectangular structuring elements. The method consists in building a granulometry chain using Path Openings and Path Closing to construct Morphological Profiles. For each pixel, the Morphological Profile constitutes the feature vector on which our road extraction is based.

    @Article{Valero2010Advanced,
    Title = {Advanced directional mathematical morphology for the detection of the road network in very high resolution remote sensing images},
    Author = {Valero, Silvia and Chanussot, Jocelyn and Benediktsson, Jon Atli and Talbot, Hugues and Waske, Bj\"orn},
    Journal = {Pattern Recognition Letters},
    Year = {2010},
    Month = jul,
    Number = {10},
    Pages = {1120--1127},
    Volume = {31},
    Abstract = {Very high spatial resolution (VHR) images allow to feature man-made structures such as roads and thus enable their accurate analysis. Geometrical characteristics can be extracted using mathematical morphology. However, the prior choice of a reference shape (structuring element) introduces a shape-bias. This paper presents a new method for extracting roads in Very High Resolution remotely sensed images based on advanced directional morphological operators. The proposed approach introduces the use of Path Openings and Path Closings in order to extract structural pixel information. These morphological operators remain flexible enough to fit rectilinear and slightly curved structures since they do not depend on the choice of a structural element shape. As a consequence, they outperform standard approaches using rotating rectangular structuring elements. The method consists in building a granulometry chain using Path Openings and Path Closing to construct Morphological Profiles. For each pixel, the Morphological Profile constitutes the feature vector on which our road extraction is based. (C) 2009 Published by Elsevier B.V.},
    Doi = {10.1016/j.patrec.2009.12.018},
    Owner = {waske},
    Timestamp = {2012.09.04}
    }

  • B. Waske, S. van der Linden, J. A. Benediktsson, A. Rabe, and P. Hostert, “Sensitivity of Support Vector Machines to Random Feature Selection in Classification of Hyperspectral Data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, iss. 7, pp. 2880-2889, 2010. doi:10.1109/TGRS.2010.2041784
    [BibTeX]
    The accuracy of supervised land cover classifications depends on factors such as the chosen classification algorithm, adequate training data, the input data characteristics, and the selection of features. Hyperspectral imaging provides more detailed spectral and spatial information on the land cover than other remote sensing resources. Over the past ten years, traditional and formerly widely accepted statistical classification methods have been superseded by more recent machine learning algorithms, e.g., support vector machines (SVMs), or by multiple classifier systems (MCS). This can be explained by limitations of statistical approaches with regard to high-dimensional data, multimodal classes, and often limited availability of training data. In the presented study, MCSs based on SVM and random feature selection (RFS) are applied to explore the potential of a synergetic use of the two concepts. We investigated how the number of selected features and the size of the MCS influence classification accuracy using two hyperspectral data sets, from different environmental settings. In addition, experiments were conducted with a varying number of training samples. Accuracies are compared with regular SVM and random forests. Experimental results clearly demonstrate that the generation of an SVM-based classifier system with RFS significantly improves overall classification accuracy as well as producer’s and user’s accuracies. In addition, the ensemble strategy results in smoother, i.e., more realistic, classification maps than those from stand-alone SVM. Findings from the experiments were successfully transferred onto an additional hyperspectral data set.

    @Article{Waske2010Sensitivity,
    Title = {Sensitivity of Support Vector Machines to Random Feature Selection in Classification of Hyperspectral Data},
    Author = {Waske, Bj\"orn and van der Linden, Sebastian and Benediktsson, Jon Atli and Rabe, Andreas and Hostert, Patrick},
    Journal = {IEEE Transactions on Geoscience and Remote Sensing},
    Year = {2010},
    Month = jul,
    Number = {7},
    Pages = {2880--2889},
    Volume = {48},
    Abstract = {The accuracy of supervised land cover classifications depends on factors such as the chosen classification algorithm, adequate training data, the input data characteristics, and the selection of features. Hyperspectral imaging provides more detailed spectral and spatial information on the land cover than other remote sensing resources. Over the past ten years, traditional and formerly widely accepted statistical classification methods have been superseded by more recent machine learning algorithms, e.g., support vector machines (SVMs), or by multiple classifier systems (MCS). This can be explained by limitations of statistical approaches with regard to high-dimensional data, multimodal classes, and often limited availability of training data. In the presented study, MCSs based on SVM and random feature selection (RFS) are applied to explore the potential of a synergetic use of the two concepts. We investigated how the number of selected features and the size of the MCS influence classification accuracy using two hyperspectral data sets, from different environmental settings. In addition, experiments were conducted with a varying number of training samples. Accuracies are compared with regular SVM and random forests. Experimental results clearly demonstrate that the generation of an SVM-based classifier system with RFS significantly improves overall classification accuracy as well as producer's and user's accuracies. In addition, the ensemble strategy results in smoother, i.e., more realistic, classification maps than those from stand-alone SVM. Findings from the experiments were successfully transferred onto an additional hyperspectral data set.},
    Doi = {10.1109/TGRS.2010.2041784},
    Owner = {waske},
    Timestamp = {2012.09.04}
    }

  • S. Wenzel and L. Hotz, “The Role of Sequences for Incremental Learning,” in ICAART 2010 – Proceedings of the International Conference on Agents and Artificial Intelligence , Valencia, Spain, 2010, pp. 434-439.
    [BibTeX] [PDF]
    In this paper, we point out the role of sequences of samples for training an incremental learning method. We define characteristics of incremental learning methods to describe the influence of sample ordering on the performance of a learned model. We show the influence of sequence for two different types of incremental learning. One is aimed on learning structural models, the other on learning models to discriminate object classes. In both cases, we show the possibility to find good sequences before starting the training.

    @InProceedings{Wenzel2010Role,
    Title = {The Role of Sequences for Incremental Learning},
    Author = {Wenzel, Susanne and Hotz, Lothar},
    Booktitle = {ICAART 2010 - Proceedings of the International Conference on Agents and Artificial Intelligence},
    Year = {2010},
    Address = {Valencia, Spain},
    Editor = {Joaquim Filipe and Ana L. N. Fred and Bernadette Sharp},
    Month = jan,
    Pages = {434--439},
    Publisher = {INSTICC Press},
    Volume = {1},
    Abstract = {In this paper, we point out the role of sequences of samples for training an incremental learning method. We define characteristics of incremental learning methods to describe the influence of sample ordering on the performance of a learned model. We show the influence of sequence for two different types of incremental learning. One is aimed on learning structural models, the other on learning models to discriminate object classes. In both cases, we show the possibility to find good sequences before starting the training.},
    ISBN = {978-989-674-021-4},
    Timestamp = {2011.01.18},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Wenzel2010Role.pdf}
    }
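
    The entry above argues that the order in which samples arrive influences an incrementally learned model. A minimal way to see this effect, not taken from the paper, is an online perceptron: its weight vector is updated one sample at a time, so different orderings of the same training set can yield different final models.

    ```python
    import numpy as np

    def train_perceptron_incremental(samples, labels, lr=1.0):
        """Plain online perceptron (labels in {-1, +1}). Weights are updated
        sample by sample, so the result depends on the sample ordering."""
        w = np.zeros(samples.shape[1] + 1)          # weights plus bias term
        for x, t in zip(samples, labels):
            xb = np.append(x, 1.0)                  # append bias input
            if t * (w @ xb) <= 0:                   # misclassified -> update
                w += lr * t * xb
        return w
    ```

    Feeding the same three linearly separable points in forward and reverse order produces two different (both valid) separating hyperplanes, which is the order dependence the paper's sequence characteristics are meant to capture.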

  • K. M. Wurm, C. Dornhege, P. Eyerich, C. Stachniss, B. Nebel, and W. Burgard, “Coordinated Exploration with Marsupial Teams of Robots using Temporal Symbolic Planning,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , Taipei, Taiwan, 2010.
    [BibTeX] [PDF]
    @InProceedings{Wurm2010a,
    Title = {Coordinated Exploration with Marsupial Teams of Robots using Temporal Symbolic Planning},
    Author = {K.M. Wurm and C. Dornhege and P. Eyerich and C. Stachniss and B. Nebel and W. Burgard},
    Booktitle = iros,
    Year = {2010},
    Address = {Taipei, Taiwan},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/wurm10iros.pdf}
    }

  • K. M. Wurm, A. Hornung, M. Bennewitz, C. Stachniss, and W. Burgard, “OctoMap: A Probabilistic, Flexible, and Compact 3D Map Representation for Robotic Systems,” in Proc. of the ICRA 2010 Workshop on Best Practice in 3D Perception and Modeling for Mobile Manipulation , Anchorage, AK, USA, 2010.
    [BibTeX] [PDF]
    @InProceedings{Wurm2010,
    Title = {{OctoMap}: A Probabilistic, Flexible, and Compact {3D} Map Representation for Robotic Systems},
    Author = {K.M. Wurm and A. Hornung and M. Bennewitz and C. Stachniss and W. Burgard},
    Booktitle = {Proc. of the ICRA 2010 Workshop on Best Practice in 3D Perception and Modeling for Mobile Manipulation},
    Year = {2010},
    Address = {Anchorage, AK, USA},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/wurm10icraws.pdf}
    }

  • K. M. Wurm, C. Stachniss, and G. Grisetti, “Bridging the Gap Between Feature- and Grid-based SLAM,” Robotics and Autonomous Systems, vol. 58, iss. 2, pp. 140-148, 2010. doi:10.1016/j.robot.2009.09.009
    [BibTeX] [PDF]
    @Article{Wurm2010b,
    Title = {Bridging the Gap Between Feature- and Grid-based SLAM},
    Author = {Wurm, K.M. and Stachniss, C. and Grisetti, G.},
    Journal = jras,
    Year = {2010},
    Number = {2},
    Pages = {140--148},
    Volume = {58},
    Doi = {10.1016/j.robot.2009.09.009},
    ISSN = {0921-8890},
    Timestamp = {2014.04.24},
    Url = {http://ais.informatik.uni-freiburg.de/publications/papers/wurm10ras.pdf}
    }

  • M. Y. Yang, Y. Cao, W. Förstner, and J. McDonald, “Robust wide baseline scene alignment based on 3D viewpoint normalization,” in International Conference on Advances in Visual Computing , 2010, pp. 654-665. doi:10.1007/978-3-642-17289-2_63
    [BibTeX] [PDF]
    This paper presents a novel scheme for automatically aligning two widely separated 3D scenes via the use of viewpoint invariant features. The key idea of the proposed method is following. First, a number of dominant planes are extracted in the SfM 3D point cloud using a novel method integrating RANSAC and MDL to describe the underlying 3D geometry in urban settings. With respect to the extracted 3D planes, the original camera viewing directions are rectified to form the front-parallel views of the scene. Viewpoint invariant features are extracted on the canonical views to provide a basis for further matching. Compared to the conventional 2D feature detectors (e.g. SIFT, MSER), the resulting features have following advantages: (1) they are very discriminative and robust to perspective distortions and viewpoint changes due to exploiting scene structure; (2) the features contain useful local patch information which allow for efficient feature matching. Using the novel viewpoint invariant features, wide-baseline 3D scenes are automatically aligned in terms of robust image matching. The performance of the proposed method is comprehensively evaluated in our experiments. It’s demonstrated that 2D image feature matching can be significantly improved by considering 3D scene structure.

    @InProceedings{Yang2010Robust,
    Title = {Robust wide baseline scene alignment based on 3D viewpoint normalization},
    Author = {Yang, Michael Ying and Cao, Yanpeng and F\"orstner, Wolfgang and McDonald, John},
    Booktitle = {International Conference on Advances in Visual Computing},
    Year = {2010},
    Pages = {654--665},
    Publisher = {Springer-Verlag},
    Abstract = {This paper presents a novel scheme for automatically aligning two widely separated 3D scenes via the use of viewpoint invariant features. The key idea of the proposed method is following. First, a number of dominant planes are extracted in the SfM 3D point cloud using a novel method integrating RANSAC and MDL to describe the underlying 3D geometry in urban settings. With respect to the extracted 3D planes, the original camera viewing directions are rectified to form the front-parallel views of the scene. Viewpoint invariant features are extracted on the canonical views to provide a basis for further matching. Compared to the conventional 2D feature detectors (e.g. SIFT, MSER), the resulting features have following advantages: (1) they are very discriminative and robust to perspective distortions and viewpoint changes due to exploiting scene structure; (2) the features contain useful local patch information which allow for efficient feature matching. Using the novel viewpoint invariant features, wide-baseline 3D scenes are automatically aligned in terms of robust image matching. The performance of the proposed method is comprehensively evaluated in our experiments. It's demonstrated that 2D image feature matching can be significantly improved by considering 3D scene structure.},
    Doi = {10.1007/978-3-642-17289-2_63},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Yang2010Robust.pdf}
    }

  • M. Y. Yang and W. Förstner, “Plane Detection in Point Cloud Data,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2010-01, 2010.
    [BibTeX] [PDF]
    Plane detection is a prerequisite to a wide variety of vision tasks. RANdom SAmple Consensus (RANSAC) algorithm is widely used for plane detection in point cloud data. Minimum description length (MDL) principle is used to deal with several competing hypothesis. This paper presents a new approach to the plane detection by integrating RANSAC and MDL. The method could avoid detecting wrong planes due to the complex geometry of the 3D data. The paper tests the performance of proposed method on both synthetic and real data.

    @TechReport{Yang2010Plane,
    Title = {Plane Detection in Point Cloud Data},
    Author = {Yang, Michael Ying and F\"orstner, Wolfgang},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2010},
    Number = {TR-IGG-P-2010-01},
    Abstract = {Plane detection is a prerequisite to a wide variety of vision tasks. RANdom SAmple Consensus (RANSAC) algorithm is widely used for plane detection in point cloud data. Minimum description length (MDL) principle is used to deal with several competing hypothesis. This paper presents a new approach to the plane detection by integrating RANSAC and MDL. The method could avoid detecting wrong planes due to the complex geometry of the 3D data. The paper tests the performance of proposed method on both synthetic and real data.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Yang2010Plane.pdf}
    }
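
    Both plane-detection entries above build on RANSAC for fitting planes to SfM point clouds. A minimal sketch of that core step is given below; it is not the report's implementation, and the MDL-based selection among competing hypotheses is omitted for brevity.

    ```python
    import numpy as np

    def ransac_plane(points, n_iters=200, threshold=0.05, rng=None):
        """Fit a plane n.x + d = 0 to a 3D point cloud with RANSAC.
        Returns (unit normal n, offset d, boolean inlier mask)."""
        rng = np.random.default_rng(rng)
        best_inliers = np.zeros(len(points), dtype=bool)
        best_model = None
        for _ in range(n_iters):
            # Hypothesize a plane from a random minimal sample of 3 points.
            sample = points[rng.choice(len(points), size=3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(n)
            if norm < 1e-12:                 # degenerate (collinear) sample
                continue
            n = n / norm
            d = -n @ sample[0]
            # Score by counting points within a distance threshold.
            inliers = np.abs(points @ n + d) < threshold
            if inliers.sum() > best_inliers.sum():
                best_inliers, best_model = inliers, (n, d)
        return best_model[0], best_model[1], best_inliers
    ```

    In the papers, planes extracted this way are kept or discarded by the MDL criterion, which trades the coding cost of a plane hypothesis against the cost of the residuals it explains; that model-selection step is what prevents spurious planes in complex geometry.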

  • M. Y. Yang, W. Förstner, and M. Drauschke, “Hierarchical Conditional Random Field for Multi-class Image Classification,” in International Conference on Computer Vision Theory and Applications (VISSAPP) , 2010, pp. 464-469.
    [BibTeX] [PDF]
    Multi-class image classification has made significant advances in recent years through the combination of local and global features. This paper proposes a novel approach called hierarchical conditional random field (HCRF) that explicitly models region adjacency graph and region hierarchy graph structure of an image. This allows to set up a joint and hierarchical model of local and global discriminative methods that augments conditional random field to a multi-layer model. Region hierarchy graph is based on a multi-scale watershed segmentation.

    @InProceedings{Yang2010Hierarchical,
    Title = {Hierarchical Conditional Random Field for Multi-class Image Classification},
    Author = {Yang, Michael Ying and F\"orstner, Wolfgang and Drauschke, Martin},
    Booktitle = {International Conference on Computer Vision Theory and Applications (VISSAPP)},
    Year = {2010},
    Pages = {464--469},
    Abstract = {Multi-class image classification has made significant advances in recent years through the combination of local and global features. This paper proposes a novel approach called hierarchical conditional random field (HCRF) that explicitly models region adjacency graph and region hierarchy graph structure of an image. This allows to set up a joint and hierarchical model of local and global discriminative methods that augments conditional random field to a multi-layer model. Region hierarchy graph is based on a multi-scale watershed segmentation.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Yang2011Hierarchical.pdf}
    }

2009

  • B. Abendroth and M. zur Mühlen, “Genauigkeitsbeurteilung und Untersuchungen der Zuverlässigkeit von optischen Onlinemessungen,” Diploma Thesis, Institute of Photogrammetry, University of Bonn, 2009.
    [BibTeX] [PDF]
    Preface. The title “Genauigkeitsbeurteilung und Untersuchungen der Zuverlässigkeit von optischen Onlinemessungen” (accuracy assessment and reliability analysis of optical online measurements) spans a wide range of possible investigations. This general introduction gives an overview of the aspects examined in this thesis; besides the motivation that led to this diploma thesis, it outlines the main topics covered.
    Motivation. The design of photogrammetric measurement systems for industrial tasks is a new field within close-range photogrammetry. AICON 3D Systems GmbH, in cooperation with which this thesis was written, specializes in the development of such systems and is one of the world's leading companies in optical, camera-based 3D metrology. Its aim is to develop and maintain highly accurate and efficient products for inspection and testing. Founded in 1990, the company sells its products mainly to the automotive, aerospace, plant-engineering, and shipbuilding industries. To capture dynamic processes, it offers real-time optical measurement systems which, depending on their configuration, can record individual signalized points as 3D coordinates or track the motion of a rigid body. For these photogrammetric systems to hold their own against competing measurement systems, they must be refined continuously, with cost-effectiveness, reliability, and accuracy in the foreground. It is also necessary to characterize individual measurement systems in order to make them comparable and to identify their fields of application. Besides the criteria of accuracy, reliability, and cost-effectiveness named above, this includes the range of applications with their system-specific constraints; hardware and software modules offer a further basis for comparison. In the remainder of this thesis, accuracy and the software modules are examined more closely. An accuracy study reveals the limits of the measurement systems, knowledge of which is important for further development; to improve the software, its existing algorithms are analyzed and compared with alternative computation methods. The starting points for these investigations are the two online measurement systems WHEELWATCH and MoveINSPECT of AICON 3D Systems GmbH. AICON's motivation for proposing a diploma topic on accuracy assessment is to combine the theoretical knowledge of the university with the practical application of the online measurement systems; the same holds for the further development of the algorithms. To keep the systems competitive in the future, they must be developed continuously, which is why this thesis investigates two different problems arising within the systems' algorithms.
    Problem statement. The overall goal of this diploma thesis is the improvement and further development of online measurement systems: theoretical methods taught at the university are adapted and modified for the specific measurement systems WHEELWATCH and MoveINSPECT of AICON 3D Systems GmbH. Three aspects of the online measurement systems are addressed. First, an accuracy study with respect to the motion capture of rigid bodies is carried out; statistical foundations serve to validate the accuracy figures previously stated by AICON 3D Systems GmbH with a statistically sound specification. The general task is to develop a test procedure for specifying the accuracy of the detected motion of rigid objects in the close range of the measurement system, where close range here means a maximum distance of up to 3 m. The remaining two aspects consist of assessing the existing algorithms and improving them through alternative approaches, which concerns two general problems: for the single-camera system, direct solutions of spatial resection are presented; for the two-camera system, the point correspondence is improved, based in particular on taking the surface of the objects into account. These subtasks are specified in detail in the introductory sections of the three parts.
    Structure of the thesis. This diploma thesis deals with three different aspects of the online measurement systems WHEELWATCH and MoveINSPECT, and the document is therefore divided into three major parts. Part I, “Genauigkeitsbeurteilung von optischen Onlinemesssystemen” (accuracy assessment of optical online measurement systems), develops two test procedures designed specifically for WHEELWATCH and MoveINSPECT; they examine accuracy with respect to distances on the one hand and angles on the other, and the exact procedure is described in the first part of the thesis. The two following parts deal with improving the algorithms of the online measurement systems. Part II presents alternatives to the spatial resection (Räumlicher Rückwärtsschnitt, RRS) of the WHEELWATCH system; its goal is to complement the iterative RRS procedure implemented by AICON 3D Systems GmbH with a direct solution, which then provides the initial values for the iterative procedure. Part III is concerned with the algorithms of the MoveINSPECT system and presents new approaches to reducing the problem of matching uncoded targets in a two-camera system.

    @MastersThesis{Abendroth2009Genauigkeitsbeurteilung,
    Title = {Genauigkeitsbeurteilung und Untersuchungen der Zuverl\"assigkeit von optischen Onlinemessungen},
    Author = {Abendroth, Birgit and zur M\"uhlen, Miriam},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2009},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Inform. Timo Dickscheid, Dipl.-Ing. Robert Godding},
    Type = {Diploma Thesis},
    Abstract = {Vorwort Der Titel "Genauigkeitsbeurteilung und Untersuchungen der Zuverl\"assigkeit von optischen Onlinemessungen" impliziert eine weite Bandbreite an Untersuchungsm\"oglichkeiten. Diese allgemeine Einf\"uhrung gibt einen \"Uberblick \"uber die untersuchten Aspekte dieser Arbeit. Neben der Motivation, die zu der Entstehung dieser Diplomarbeit gef\"uhrt hat, beinhaltet diese Einf\"uhrung eine grobe Gliederung der behandelten Themenschwerpunkte. Motivation Ein neues Aufgabengebiet innerhalb der Nahbereichsphotogrammetrie bietet die Konzeption von photogrammetrischen Messsystemen f\"ur industrielle Aufgabenstellungen. Die Firma AICON 3D Systems GmbH, mit deren Kooperation diese Arbeit entstand, hat sich auf die Entwicklung solcher Systeme spezialisiert. Sie geh\"ort zu den weltweit f\"uhrenden Unternehmen im Bereich der optischen kamerabasierten 3D-Vermessung. Ihr Anspruch ist es, hochgenaue und effiziente Produkte im Bereich von Inspektion und Pr\"ufung zu entwickeln und zu \"uberwachen. Ihre Produkte vertreibt das 1990 gegr\"undete Unternehmen \"uberwiegend in der Automobil-, Luft- und Raumfahrtindustrie sowie im Anlagen- und Schiffsbau. Zur Erfassung von dynamischen Vorg\"angen bietet das Unternehmen echtzeitf\"ahige optische Messsysteme an, die je nach Konfiguration in der Lage sind einzelne signalisierte Punkte als 3D-Koordinaten zu erfassen oder die Bewegung eines Starrk\"orpers aufzunehmen. Damit diese photogrammetrischen Systeme gegen\"uber anderen Messsystemen im Konkurrenzkampf bestehen k\"onnen, m\"ussen sich diese einer st\"andigen Weiterentwicklung und Verbesserung unterziehen. Dabei steht insbesondere die Wirtschaftlichkeit, Zuverl\"assigkeit und die Genauigkeit der Systeme im Vordergrund. Au{\ss}erdem ist es n\"otig einzelne Messsysteme zu charakterisieren, um sie vergleichbar zu machen und die Einsatzm\"oglichkeiten aufzuzeigen. 
Dazu geh\"ort neben den oben genannten Kriterien der Genauigkeit, Zuverl\"assigkeit und Wirtschaftlichkeit auch das Spektrum der Einsatzm\"oglichkeiten mit systemspezifischen Rahmenbedingungen. Des Weiteren kann ein Vergleich \"uber Hardware- und Software-Module geschehen. Im weiteren Verlauf dieser Arbeit werden die Eigenschaften der Genauigkeit und der Software-Module n\"aher untersucht. Dabei zeigt eine Genauigkeitsuntersuchung die Grenzen der Messsysteme auf, deren Kenntnis f\"ur die Weiterentwicklung von Bedeutung ist. F\"ur die Verbesserung der Software wird diese anhand ihrer vorhandenen Algorithmik untersucht und mit alternativen Berechnungverfahren verglichen. Als Ausgangspunkt f\"ur diese Untersuchungen dienen dabei die beiden Onlinemesssysteme WHEELWATCH und MoveINSPECT der Firma AICON 3D Systems GmbH. Die Motivation der Firma AICON 3D Systems GmbH ein Diplomarbeitsthema im Bereich einer Genauigkeitsuntersuchung zu stellen, liegt darin, das vorhandene theoretische 'Wissen der Universit\"at mit dem praktischen Anwendungsbeispiel der Onlinemesssysteme zu verbinden. Dies gilt auch f\"ur den Bereich der Weiterentwicklung der Algorithmik. Damit die Systeme auch in Zukunft wettbewerbsf\"ahig sind, m\"ussen diese st\"andig weiter entwickelt werden. Aus diesem Grund beinhaltet diese Arbeit die Untersuchung von zwei verschiedenen Problemstellungen, die sich innerhalb der Algorithmen der Systeme ergeben. Aufgabenstellung Das Ziel dieser gesamten Diplomarbeit besteht in der Verbesserung und Weiterentwicklung von Onlinemesssystemen. Dabei sollen theoretische Verfahren, die an der Universit\"at vermittelt werden, auf die speziellen Messsysteme WHEELWATCH und MOVEINSPECT der Firma AICON 3D Systems GmbH angepasst und modifiziert werden. Insbesonders geht es um drei Aspekte der Onlinemesssysteme. Als erstes soll eine Gnauigkeitsuntersuchung der Onlinemesssysteme in Hinblick auf die Bewegungserfassung von Starrk\"orpern durchgef\"uhrt werden. 
Hierbei dienen statistische Grundlagen dazu, die bisherigen Genauigkeitsangaben von AICON 3D Systems GmbH durch eine statistisch fundierte Angabe zu validieren. Die allgemeine Problemstellung bezieht sich auf die Entwicklung eines Testverfahrens f\"ur die Spezifikation von Genauigkeitsangaben der detektierten Bewegung von starren Objekten, die sich im Nahbereich des Messsystems befinden. Unter Nahbereich ist hier eine maximale Entfernung von bis zu 3m zu verstehen. Die n\"achsten zwei Teilaspekte bestehen in der Beurteilung der bestehenden Algorithmik und dessen Verbesserung durch alternative L\"osungsans\"atze. Hier handelt es sich um zwei allgemeine Probleme. F\"ur das Einkamerasystem werden direkte L\"osungsm\"oglichkeiten des R\"aumlichen R\"uckw\"artsschnittes aufgezeigt. Im Fall eines Zweikamerasystems findet eine Verbesserung der Punktzuordnung statt. Diese basiert insbesondere auf der Ber\"ucksichtigung der Oberfl\"ache der Objekte. Diese Teilaufgaben werden in den einleitenden Abschnitten der drei Teile genau spezifiziert. Aufbau der Arbeit Diese Diplomarbeit befasst sich mit drei verschiedenen Aspekten der Onlinemesssysteme WHEELWATCH und MoveINSPECT. Aus diesem Grund besteht dieses Dokument aus drei gro{\ss}en Teilen. Der Teil I tr\"agt den Titel "Genauigkeitsbeurteilung von optischen Onlinemesssystemen". Darunter befindet sich die Entwicklung von zwei Testverfahren, die speziell f\"ur die Systeme WHEELWATCH und MoveINSPECT konzipiert werden. Dabei beziehen sich die Testverfahren zur Untersuchung der Genauigkeit zum einen auf Strecken und zum anderen auf Winkel. Die genaue Vorgehensweise ist dem ersten Teil dieser Arbeit zu entnehmen. Die beiden folgenden Teile befassen sich mit der Verbesserung der Algorithmik der Onlinemesssysteme. Dabei stellt der Teil Il Alternativen zum R\"aumlicher R\"uckw\"artsschnitt (RRS) des Systems WHEEL WATCH vor. 
Das Ziel dieses Abschnitts ist es das bisher von der Firma AICON 3D Systems GmbH implementierte iterative Verfahren des RRS durch ein direktes zu erweitern. Die direkte L\"osung des RRS dient dann zur Bestimmung der N\"aherungswerte f\"ur das iterative Verfahren. Mit der Algorithmik des Systems MoveINSPECT befasst sich der Teil III. Hier werden neue Ansatzm\"oglichkeiten aufgezeigt, um das Problem der Zuordnung von uncodierten Marken bei einem Zweikamerasystem zu verringern.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Abendroth2009Genauigkeitsbeurteilung.pdf}
    }

  • A. Barth, J. Siegemund, U. Franke, and W. Förstner, “Simultaneous Estimation of Pose and Motion at Highly Dynamic Turn Maneuvers,” in 31st Annual Symposium of the German Association for Pattern Recognition (DAGM) , Jena, Germany, 2009, pp. 262-271. doi:10.1007/978-3-642-03798-6_27
    [BibTeX] [PDF]
    The (Extended) Kalman filter has been established as a standard method for object tracking. While a constraining motion model stabilizes the tracking results given noisy measurements, it limits the ability to follow an object in non-modeled maneuvers. In the context of a stereo-vision based vehicle tracking approach, we propose and compare three different strategies to automatically adapt the dynamics of the filter to the dynamics of the object. These strategies include an IMM-based multi-filter setup, an extension of the motion model considering higher order terms, as well as the adaptive parametrization of the filter variances using an independent maximum likelihood estimator. For evaluation, various recorded real world trajectories and simulated maneuvers, including skidding, are used. The experimental results show significant improvements in the simultaneous estimation of pose and motion.

    @InProceedings{Barth2009Simultaneous,
    Title = {Simultaneous Estimation of Pose and Motion at Highly Dynamic Turn Maneuvers},
    Author = {Barth, Alexander and Siegemund, Jan and Franke, Uwe and F\"orstner, Wolfgang},
    Booktitle = {31st Annual Symposium of the German Association for Pattern Recognition (DAGM)},
    Year = {2009},
    Address = {Jena, Germany},
    Editor = {Denzler, J. and Notni, G.},
    Pages = {262--271},
    Publisher = {Springer},
    Abstract = {The (Extended) Kalman filter has been established as a standard method for object tracking. While a constraining motion model stabilizes the tracking results given noisy measurements, it limits the ability to follow an object in non-modeled maneuvers. In the context of a stereo-vision based vehicle tracking approach, we propose and compare three different strategies to automatically adapt the dynamics of the filter to the dynamics of the object. These strategies include an IMM-based multi-filter setup, an extension of the motion model considering higher order terms, as well as the adaptive parametrization of the filter variances using an independent maximum likelihood estimator. For evaluation, various recorded real world trajectories and simulated maneuvers, including skidding, are used. The experimental results show significant improvements in the simultaneous estimation of pose and motion.},
    Doi = {10.1007/978-3-642-03798-6_27},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Bart2009Simultaneous.pdf}
    }
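
    One of the strategies compared in the entry above is adapting the filter variances to the object's dynamics. As a toy illustration of that idea (not the paper's IMM setup or maximum likelihood estimator, and with assumed parameter names throughout), here is a 1D constant-velocity Kalman filter whose uncertainty is inflated whenever the normalized innovation exceeds a chi-square gate, i.e. when a maneuver is suspected:

    ```python
    import numpy as np

    def adaptive_kf(measurements, dt=1.0, r=1.0, q0=0.01, inflate=10.0, gate=9.0):
        """1D constant-velocity Kalman filter over position measurements.
        When the normalized innovation squared exceeds `gate`, the state
        covariance is inflated so the filter can follow a sudden maneuver."""
        F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (pos, vel)
        H = np.array([[1.0, 0.0]])                # we observe position only
        x = np.zeros(2)
        P = np.eye(2) * 10.0
        estimates = []
        for z in measurements:
            # Predict.
            x = F @ x
            P = F @ P @ F.T + q0 * np.eye(2)
            # Innovation and its variance.
            nu = z - H @ x
            S = H @ P @ H.T + r
            if (nu**2 / S) > gate:                # maneuver suspected
                P = P + inflate * q0 * np.eye(2)  # inflate uncertainty
                S = H @ P @ H.T + r
            # Update.
            K = (P @ H.T) / S
            x = x + (K * nu).ravel()
            P = (np.eye(2) - K @ H) @ P
            estimates.append(x.copy())
        return np.array(estimates)
    ```

    The gating threshold of 9.0 corresponds roughly to a 3-sigma test on the scalar innovation; the paper's strategies (IMM, higher-order motion terms, ML variance estimation) are more principled versions of the same trade-off between smoothing and responsiveness.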

  • S. D. Bauer, F. Korč, and W. Förstner, “Investigation into the classification of diseases of sugar beet leaves using multispectral images,” in Precision Agriculture 2009 , Wageningen, 2009, pp. 229-238.
    [BibTeX] [PDF]
    This paper reports on methods for the automatic detection and classification of leaf diseases based on high resolution multispectral images. Leaf diseases are economically important as they could cause a yield loss. Early and reliable detection of leaf diseases therefore is of utmost practical relevance – especially in the context of precision agriculture for localized treatment with fungicides. Our interest is the analysis of sugar beet due to their economical impact. Leaves of sugar beet may be infected by several diseases, such as rust (Uromyces betae), powdery mildew (Erysiphe betae) and other leaf spot diseases (Cercospora beticola and Ramularia beticola). In order to obtain best classification results we apply conditional random fields. In contrast to pixel based classifiers we are able to model the local context and contrary to object centred classifiers we simultaneously segment and classify the image. In a first investigation we analyse multispectral images of single leaves taken in a lab under well controlled illumination conditions. The photographed sugar beet leaves are healthy or either infected with the leaf spot pathogen Cercospora beticola or with the rust fungus Uromyces betae. We compare the classification methods pixelwise maximum posterior classification (MAP), objectwise MAP as well as global MAP and global maximum posterior marginal classification using the spatial context within a conditional random field model.

    @InProceedings{Bauer2009Investigation,
    Title = {Investigation into the classification of diseases of sugar beet leaves using multispectral images},
    Author = {Bauer, Sabine Daniela and Kor{\vc}, Filip and F\"orstner, Wolfgang},
    Booktitle = {Precision Agriculture 2009},
    Year = {2009},
    Address = {Wageningen},
    Pages = {229--238},
    Abstract = {This paper reports on methods for the automatic detection and classification of leaf diseases based on high resolution multispectral images. Leaf diseases are economically important as they could cause a yield loss. Early and reliable detection of leaf diseases therefore is of utmost practical relevance - especially in the context of precision agriculture for localized treatment with fungicides. Our interest is the analysis of sugar beet due to their economical impact. Leaves of sugar beet may be infected by several diseases, such as rust (Uromyces betae), powdery mildew (Erysiphe betae) and other leaf spot diseases (Cercospora beticola and Ramularia beticola). In order to obtain best classification results we apply conditional random fields. In contrast to pixel based classifiers we are able to model the local context and contrary to object centred classifiers we simultaneously segment and classify the image. In a first investigation we analyse multispectral images of single leaves taken in a lab under well controlled illumination conditions. The photographed sugar beet leaves are healthy or either infected with the leaf spot pathogen Cercospora beticola or with the rust fungus Uromyces betae. We compare the classification methods pixelwise maximum posterior classification (MAP), objectwise MAP as well as global MAP and global maximum posterior marginal classification using the spatial context within a conditional random field model.},
    City = {Bonn},
    Proceeding = {Precision Agriculture},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Bauer2009Investigation.pdf}
    }
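
    The entry above contrasts pixelwise MAP labeling with classification that uses spatial context in a conditional random field. The difference can be sketched with a toy example; Iterated Conditional Modes (ICM) with a Potts smoothness prior is used here as a cheap, illustrative stand-in for the paper's CRF inference, and is not the authors' method.

    ```python
    import numpy as np

    def pixelwise_map(log_lik):
        """Per-pixel MAP labeling: argmax over class log-likelihoods (H, W, C)."""
        return log_lik.argmax(axis=-1)

    def icm_smooth(log_lik, labels, beta=1.0, n_sweeps=3):
        """ICM with a Potts prior: each pixel's label also earns a bonus
        `beta` for every 4-neighbor that currently carries the same label."""
        H, W, C = log_lik.shape
        labels = labels.copy()
        for _ in range(n_sweeps):
            for i in range(H):
                for j in range(W):
                    score = log_lik[i, j].copy()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            score[labels[ni, nj]] += beta
                    labels[i, j] = score.argmax()
        return labels
    ```

    A lone pixel whose likelihood weakly prefers the wrong class is kept by pixelwise MAP but flipped by the context term, which is the qualitative behavior behind the smoother maps reported for the CRF-based classifiers.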

  • D. Bender, “3D-Rekonstruktion von Blatträndern,” Diploma Thesis, 2009.
    [BibTeX]
    Introduction: Crop production in agriculture is increasingly automated. Among other techniques, image-processing methods are used, for example to monitor the growth, diseases, or ripeness of plants and to detect weeds. Building on these results, production can be optimised and yield increased, which already explains the strong interest in image-processing methods in crop farming. 1.1 Task: The goal of this diploma thesis is to develop an application for the automatic 3D reconstruction of leaf margins. Building on the 2D contours of several images of a leaf, an energy-minimisation approach is developed that computes the optimal 3D contour. The method is tested on real images of sugar-beet leaves, which, however, yields only a visual impression of the quality of the results. To allow well-founded statements about that quality, a procedure for generating synthetic scenes is then devised and used for a statistical evaluation. 1.2 Motivation: The silhouette is the outline of an imaged body. It is easy to extract in most images and is often the strongest cue to the imaged 3D model, which is why several methods for full 3D reconstruction use the projection of the 3D model onto the silhouettes of the images as a criterion [PZF05]. In images of leaves without apparent contours, the silhouette coincides with the contour of the leaf, which makes the reconstruction of the leaf margin by the algorithm described in this thesis possible.
From this curve in 3D space, statements about the growth of a plant can already be made and certain classifications performed. Furthermore, a complete 3D reconstruction can be built on the computed 3D contour. For leaves in particular, the influence of illumination and the resulting specular reflections within the leaf must be taken into account, since they considerably complicate a complete 3D reconstruction; passing in the known 3D contour may improve its quality. 1.3 Related work: The reconstruction of 3D curves from their projections in several images is a problem that has not yet been researched comprehensively, although isolated works address the question. Two of them, likewise focused on the 3D reconstruction of leaf margins, are briefly described here. [ZWZY08] presents a method for the 3D reconstruction of maize plants in which the reconstruction of a leaf (Figure 1.1) is based on two images. Edges are extracted with the Canny algorithm [Can86] and a selection among them is made automatically, but homologous edges are still matched manually. The leaf margin and the central leaf vein of the maize plant are then reconstructed in 3D by intersecting the curves in space, and a surface is triangulated from the resulting 3D contours. This is possible because maize leaves are very narrow, so the reconstructed 3D contours lie close together. [Nie04] reconstructs the leaves of young maize plants with NURBS [PT96].
The leaf contours are first marked manually and then serve as input for the construction of the 3D model. The theoretical basis is a method that, for a special configuration of three cameras, constructs a 3D NURBS curve of a free-form, line-like object [DXP+03]. The projection of the object in each image is first approximated by a 2D NURBS curve; if these curves are represented in all images by the same number of control points and a matching knot vector, the control points can be reconstructed in 3D space and yield the sought reconstruction as a 3D NURBS curve. 1.4 Outline: Chapter 2 describes the preprocessing of the input images: the leaf is first segmented with a graph-cut method and its contour extracted, after which a distance transform of the contour image assigns every pixel its distance to the contour. Chapter 3 describes the algorithm for the 3D reconstruction of the leaf margin proposed in this thesis. Chapter 4 presents the generation of a synthetic scene for evaluating the results and the statistical tools used to analyse the errors. Chapter 5 presents and discusses the results of experiments on real and synthetic images. Chapter 6 closes the thesis with a summary and an outlook on possible extensions of and alternatives to the proposed method.

    @MastersThesis{Bender20093D,
    Title = {3D-Rekonstruktion von Blattr\"andern},
    Author = {Bender, Daniel},
    School = {University of Bonn},
    Year = {2009},
    Note = {Betreuung: Prof.Dr.-Ing. Wolfgang F\"orstner, Prof.Dr. Daniel Cremers},
    Type = {Diploma Thesis},
    Abstract = {\textbf{Einleitung} Der Anbau von Pflanzen in der Landwirtschaft ist durch eine zunehmende Automatisierung gepr\"agt. Unter anderem werden hierbei Verfahren der Bildverarbeitung eingesetzt, welche zum Beispiel eine Beobachtung von Wachstum, Krankheiten oder Reifegrad der Pflanze sowie die Erkennung von Unkraut erm\"oglichen. Ausgehend von den Ergebnissen kann eine optimierte Produktion vollzogen und infolgedessen der Ertrag erh\"oht werden. Bereits an diesen Einsatzgebieten l\"asst sich erkennen, warum ein gro{\ss}es Interesse an der Verwendung von Bildverarbeitungsverfahren beim Pflanzenanbau besteht. \textbf{1.1 AufgabensteIlung} Ziel dieser Diplomarbeit ist es, eine Anwendung zu entwickeln, welche die automatische 3-D-Rekonstruktion von Blattr\"andern erm\"oglicht. Dazu wird, aufbauend auf die 2-D-Konturen mehrerer Aufnahmen eines Blattes, ein Energieminimierungsansatz entwickelt, durch den die optimale 3-D-Kontur berechnet werden kann. Dieses Verfahren wird mit realen Aufnahmen von R\"ubenbl\"attern getestet, wodurch jedoch nur ein visueller Eindruck \"uber die Qualit\"at der Ergebnisse gewonnen werden kann. Um fundierte Aussagen \"uber die Qualit\"at der Ergebnisse treffen zu k\"onnen, soll im Anschluss ein Verfahren zur Erstellung von synthetischen Szenen erarbeitet und mit diesen eine statistische Auswertung vollzogen werden. \textbf{1.2 Motivation} Als Silhouette wird der Umriss eines abgebildeten K\"orpers beschrieben. Sie ist in den meisten Aufnahmen leicht zu extrahieren und h\"aufig der st\"arkste Hinweis f\"ur das abgebildete 3-D-Modell. Aus diesem Grund wird in verschiedenen Verfahren zur vollst\"andigen 3-D-Rekonstruktion die Projektion des 3-D-Modells auf die Silhouetten der Aufnahmen als Kriterium verwendet [PZF05]. Bei Abbildungen von Bl\"attern stimmt f\"ur Aufnahmen ohne Scheinkonturen die Silhouette mit der jeweiligen Kontur des Blattes \"uberein. 
Dies erm\"oglicht eine Rekonstruktion des Blattrandes durch den in der vorliegenden Arbeit beschriebenen Algorithmus. Ausgehend von dieser Kurve im 3-D-Raum k\"onnen bereits Aussagen \"uber das Wachstum einer Pflanze getroffen oder bestimmte Klassifizierungen vorgenommen werden. Des Weiteren kann basierend auf der berechneten 3- D- Kontur eine vollst\"andige 3- D- Rekonstruktion vollzogen werden. Insbesondere bei Bl\"attern sind hierbei der Einfluss der Beleuchtung und hierdurch auftretende Spiegelungen innerhalb des Blattes zu beachten, welche die komplette 3-D-Rekonstruktion erheblich erschweren. M\"oglicherweise kann die Qualit\"at einer kompletten 3-D-Rekonstruktion durch die \"Ubergabe der bekannten 3-D-Kontur verbessert werden. \textbf{1.3 Verwandte Arbeiten} Bei der Rekonstruktion von 3-D-Kurven durch ihre Abbildungen in mehrere Bilder handelt es sich um ein Problem, welches bis zum aktuellen Zeitpunkt noch nicht umfassend erforscht worden ist. Jedoch sind vereinzelt Arbeiten zu finden, welche die Fragestellung bearbeiten. Von diesen werden im Folgenden zwei Arbeiten kurz beschrieben, deren Hauptaugenmerk ebenfalls auf der 3-D-Rekonstruktion von Bl\"attr\"andern liegt: In [ZWZY08] wird ein Verfahren zur 3-D-Rekonstruktion von Maispflanzen vorgestellt, wobei die Rekonstruktion eines Blattes (Abbildung 1.1) auf zwei Aufnahmen basiert. In diesen werden mithilfe des Canny-Algorithmus [Can86] Kanten extrahiert, aus welchen eine automatische Auswahl getroffen wird. Die anschlie{\ss}ende Zuordnung homologer Kanten wird jedoch manuell vollzogen. Es folgen die 3-D-Rekonstruktionen des Blattrandes und der in einer Maispflanze zentral verlaufenden Blattader durch ein Schneiden der Kurven im Raum. Anschlie{\ss}end wird in der Arbeit eine Oberfl\"ache ausgehend von den gefundenen 3-D-Konturen triangliert. Dies ist m\"oglich, da Bl\"atter von Maispflanzen sehr schmal sind und daher die rekonstruierten 3-D-Konturen nahe beieinander liegen. 
In [Nie04] wird die 3-D-Rekonstruktion von Bl\"attern junger Maispflanzen mit NURBS [PT96] vollzogen. Dabei werden zun\"achst die Konturen der Bl\"atter manuell gekennzeichnet, um anschlie{\ss}end als Eingabe f\"ur die Konstruktion des 3-D-Modells zu dienen. Die theoretische Grundlage ist ein Verfahren, das f\"ur eine spezielle Konfiguration von drei Kameras eine 3-D-NURBS-Kurve eines freigeformten, linien\"ahnlichen Objektes konstruiert [DXP+\"u3]. Zun\"achst wird dazu die Abbildung des Objektes in den jeweiligen Bildern in Form von 2-D-NURBS-Kurven approximiert. Sind diese in allen Bildern durch eine gleiche Anzahl von Kontrollpunkten und einen \"ubereinstimmenden Knotenvektor dargestellt, so k\"onnen die Kontrollpunkte im 3-D-Raum rekonstruiert werden und f\"uhren zur gesuchten Rekonstruktion durch eine 3-D-NURBS-Kurve. \textbf{1.4 Aufbau der Arbeit} Zu Beginn wird in Kapitel 2 die Vorverarbeitung der Eingabebilder beschrieben. In diesen wird zun\"achst mit einem Graph-Cut- Verfahren das Blatt segmentiert und anschlie{\ss}end seine Kontur extrahiert. Es folgt die Berechnung einer Distanztransformation des Konturbildes, wodurch f\"ur jeden Bildpunkt der Abstand zur Kontur angegeben wird. In Kapitel 3 wird der in dieser Arbeit vorgestellte Algorithmus zur 3-D-Rekonstruktion des Blattrandes beschrieben. Anschlie{\ss}end werden in Kapitel 4 die Erstellung einer synthetischen Szene zur Bewertung der Ergebnisse und die verwendeten Mittel zur statistischen Auswertung der Fehler dargestellt. In Kapitel 5 werden f\"ur reale und synthetische Bilder die Ergebnisse durchgef\"uhrter Experimente pr\"asentiert und er\"ortert. Zum Abschluss der Arbeit folgen in Kapitel 6 eine Zusammenfassung und ein Ausblick auf m\"ogliche Weiterf\"uhrungen und Alternativen des vorgestellten Verfahrens.}
    }
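The preprocessing pipeline sketched in the outline above (segment the leaf, extract its contour, then compute a distance transform that assigns every pixel its distance to the contour) can be illustrated in NumPy. This is a minimal sketch, not the thesis code: the graph-cut segmentation mask is assumed given, the helper names are made up, and the brute-force transform merely stands in for an efficient implementation.

```python
import numpy as np

def extract_contour(mask):
    """Foreground pixels with at least one 4-neighbour outside the region."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

def distance_to_contour(mask):
    """Euclidean distance of every pixel to the region's contour.

    Brute force, quadratic in the number of pixels -- fine for a sketch,
    not for full-resolution images.
    """
    contour = extract_contour(mask)
    ys, xs = np.nonzero(contour)
    gy, gx = np.mgrid[:mask.shape[0], :mask.shape[1]]
    d2 = (gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2
    return np.sqrt(d2.min(axis=-1))
```

A real implementation would use a linear-time distance transform (e.g. the Felzenszwalb-Huttenlocher algorithm) instead of the brute-force minimum.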

  • M. Bennewitz, C. Stachniss, S. Behnke, and W. Burgard, “Utilizing Reflection Properties of Surfaces to Improve Mobile Robot Localization,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Kobe, Japan, 2009.
    [BibTeX]
    @InProceedings{Bennewitz2009,
    Title = {Utilizing Reflection Properties of Surfaces to Improve Mobile Robot Localization},
    Author = {M. Bennewitz and Stachniss, C. and Behnke, S. and Burgard, W.},
    Booktitle = icra,
    Year = {2009},
    Address = {Kobe, Japan},
    Timestamp = {2014.04.24}
    }

  • W. Burgard, C. Stachniss, G. Grisetti, B. Steder, R. Kümmerle, C. Dornhege, M. Ruhnke, A. Kleiner, and J. D. Tardós, “A Comparison of SLAM Algorithms Based on a Graph of Relations,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , 2009.
    [BibTeX] [PDF]
    @InProceedings{Burgard2009,
    Title = {A Comparison of {SLAM} Algorithms Based on a Graph of Relations},
    Author = {W. Burgard and C. Stachniss and G. Grisetti and B. Steder and R. K\"ummerle and C. Dornhege and M. Ruhnke and A. Kleiner and J.D. Tard\'os},
    Booktitle = iros,
    Year = {2009},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/burgard09iros.pdf}
    }

  • T. Dickscheid and W. Förstner, “Evaluating the Suitability of Feature Detectors for Automatic Image Orientation Systems,” in 7th International Conference on Computer Vision Systems (ICVS’09) , Liege, Belgium, 2009, pp. 305-314. doi:10.1007/978-3-642-04667-4_31
    [BibTeX] [PDF]
    We investigate the suitability of different local feature detectors for the task of automatic image orientation under different scene texturings. Building on an existing system for image orientation, we vary the applied operators while keeping the strategy fixed, and evaluate the results. An emphasis is put on the effect of combining detectors for calibrating difficult datasets. Besides some of the most popular scale and affine invariant detectors available, we include two recently proposed operators in the setup: A scale invariant junction detector and a scale invariant detector based on the local entropy of image patches. After describing the system, we present a detailed performance analysis of the different operators on a number of image datasets. We both analyze ground-truth-deviations and results of a final bundle adjustment, including observations, 3D object points and camera poses. The paper concludes with hints on the suitability of the different combinations of detectors, and an assessment of the potential of such automatic orientation procedures.

    @InProceedings{Dickscheid2009Evaluating,
    Title = {Evaluating the Suitability of Feature Detectors for Automatic Image Orientation Systems},
    Author = {Dickscheid, Timo and F\"orstner, Wolfgang},
    Booktitle = {7th International Conference on Computer Vision Systems (ICVS'09)},
    Year = {2009},
    Address = {Liege, Belgium},
    Editor = {Mario Fritz and Bernt Schiele and Justus H. Piater},
    Pages = {305--314},
    Publisher = {Springer},
    Series = {Lecture Notes in Computer Science},
    Volume = {5815},
    Abstract = {We investigate the suitability of different local feature detectors for the task of automatic image orientation under different scene texturings. Building on an existing system for image orientation, we vary the applied operators while keeping the strategy fixed, and evaluate the results. An emphasis is put on the effect of combining detectors for calibrating difficult datasets. Besides some of the most popular scale and affine invariant detectors available, we include two recently proposed operators in the setup: A scale invariant junction detector and a scale invariant detector based on the local entropy of image patches. After describing the system, we present a detailed performance analysis of the different operators on a number of image datasets. We both analyze ground-truth-deviations and results of a final bundle adjustment, including observations, 3D object points and camera poses. The paper concludes with hints on the suitability of the different combinations of detectors, and an assessment of the potential of such automatic orientation procedures.},
    Doi = {10.1007/978-3-642-04667-4_31},
    ISBN = {978-3-642-04666-7},
    Location = {Heidelberg},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Dickscheid2009Evaluating.pdf}
    }

  • M. Drauschke, “Documentation: Segmentation and Graph Construction of HMRF,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2009-03, 2009.
    [BibTeX] [PDF]
    This technical report documents the segmentation and the graph construction of a hierarchical Markov random field (HMRF). The segmentation is based on multiscale analysis and watershed regions as presented in [Drauschke et al., 2006]. The development of the regions is tracked over the scales, which defines a region hierarchy graph. This graph is used to improve the segmentation by reforming the regions in a geometrically more precise way. This work is taken from [Drauschke, 2009]. Furthermore, we determine a region adjacency graph from each image partition of all scales. The detected image regions, their adjacent regions and their hierarchical neighbors are saved into an XML file for convenient output.

    @TechReport{Drauschke2009Documentation,
    Title = {Documentation: Segmentation and Graph Construction of HMRF},
    Author = {Drauschke, Martin},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2009},
    Number = {TR-IGG-P-2009-03},
    Abstract = {This technical report documents the segmentation and the graph construction of a hierarchical Markov random field (HMRF). The segmentation is based on multiscale analysis and watershed regions as presented in [Drauschke et al., 2006]. The development of the regions is tracked over the scales, which defines a region hierarchy graph. This graph is used to improve the segmentation by reforming the regions in a geometrically more precise way. This work is taken from [Drauschke, 2009]. Furthermore, we determine a region adjacency graph from each image partition of all scales. The detected image regions, their adjacent regions and their hierarchical neighbors are saved into an XML file for convenient output.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2009Documentation.pdf}
    }

  • M. Drauschke, “An Irregular Pyramid for Multi-scale Analysis of Objects and their Parts,” in 7th IAPR-TC-15 Workshop on Graph-based Representations in Pattern Recognition , Venice, Italy, 2009, pp. 293-303. doi:10.1007/978-3-642-02124-4_30
    [BibTeX] [PDF]
    We present an irregular image pyramid which is derived from multi-scale analysis of segmented watershed regions. Our framework is based on the development of regions in the Gaussian scale-space, which is represented by a region hierarchy graph. Using this structure, we are able to determine geometrically precise borders of our segmented regions using a region focusing. In order to handle the complexity, we select only stable regions and regions resulting from a merging event, which enables us to keep the hierarchical structure of the regions. Using this framework, we are able to detect objects of various scales in an image. Finally, the hierarchical structure is used for describing these detected regions as aggregations of their parts. We investigate the usefulness of the regions for interpreting images showing building facades with parts like windows, balconies or entrances.

    @InProceedings{Drauschke2009Irregular,
    Title = {An Irregular Pyramid for Multi-scale Analysis of Objects and their Parts},
    Author = {Drauschke, Martin},
    Booktitle = {7th IAPR-TC-15 Workshop on Graph-based Representations in Pattern Recognition},
    Year = {2009},
    Address = {Venice, Italy},
    Pages = {293--303},
    Abstract = {We present an irregular image pyramid which is derived from multi-scale analysis of segmented watershed regions. Our framework is based on the development of regions in the Gaussian scale-space, which is represented by a region hierarchy graph. Using this structure, we are able to determine geometrically precise borders of our segmented regions using a region focusing. In order to handle the complexity, we select only stable regions and regions resulting from a merging event, which enables us to keep the hierarchical structure of the regions. Using this framework, we are able to detect objects of various scales in an image. Finally, the hierarchical structure is used for describing these detected regions as aggregations of their parts. We investigate the usefulness of the regions for interpreting images showing building facades with parts like windows, balconies or entrances.},
    City = {Bonn},
    Doi = {10.1007/978-3-642-02124-4_30},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2009Irregular.pdf}
    }

  • M. Drauschke, W. Förstner, and A. Brunn, “Multidodging: Ein effizienter Algorithmus zur automatischen Verbesserung von digitalisierten Luftbildern,” in Publikationen der DGPF, Band 18: Zukunft mit Tradition , Jena, 2009, pp. 61-68.
    [BibTeX] [PDF]
    We have developed an efficient, automatic procedure for enhancing digitised aerial images. The MULTIDODGING procedure is used in the context of visually preparing historical imagery from the Second World War. For enhancement with MULTIDODGING, the scanned image is first divided into non-overlapping rectangular tiles. A histogram equalisation is performed in each tile, which in general strengthens the contrast. Because the image is modified regionally, visible borders arise between the tiles; these are removed by interpolation. Applying the procedure in this form showed that the contrast becomes too strong in many local spots, so the range of the grey values can additionally be reduced in a final step, with this contrast adjustment computed regionally from the gradients in the tile. This contribution describes and analyses the procedure in detail.

    @InProceedings{Drauschke2009Multidodging,
    Title = {Multidodging: Ein effizienter Algorithmus zur automatischen Verbesserung von digitalisierten Luftbildern},
    Author = {Drauschke, Martin and F\"orstner, Wolfgang and Brunn, Ansgar},
    Booktitle = {Publikationen der DGPF, Band 18: Zukunft mit Tradition},
    Year = {2009},
    Address = {Jena},
    Pages = {61--68},
    Abstract = {Wir haben ein effizientes, automatisches Verfahren zur Verbesserung von digitalisierten Luftbildern entwickelt. Das Verfahren MULTIDODGING dient im Kontext der visuellen Aufbereitung von historischen Aufnahmen aus dem 2. Weltkrieg. Bei der Bildverbesserung mittels MULTIDODGING wird das eingescannte Bild zun\"achst in sich nicht \"uberlappende rechteckige Bildausschnitte unterteilt. In jedem Bildausschnitt wird eine Histogrammverebnung durchgef\"uhrt, die im Allgemeinen zu einer Verst\"arkung des Kontrasts f\"uhrt. Durch die regionale Ver\"anderung des Bildes entstehen sichtbare Grenzen zwischen den Bildausschnitten, die durch eine Interpolation entfernt werden. In der Anwendung des bisherigen Verfahrens hat sich gezeigt, dass der Kontrast in vielen lokalen Stellen zu stark ist. Deshalb kann zum Abschluss die Spannweite der Grauwerte zus\"atzlich reduziert werden, wobei diese Kontrastanpassung regional aus den Gradienten im Bildausschnitt berechnet wird. Dieser Beitrag beschreibt und analysiert das Verfahren im Detail.},
    City = {Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2009Multidodging.pdf}
    }
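The core of the procedure described in the abstract, histogram equalisation applied per tile, can be sketched as follows. This is an illustrative reimplementation, not the authors' code; it omits the interpolation across tile borders and the gradient-based contrast damping that the paper adds on top, and the tile size is an arbitrary choice here.

```python
import numpy as np

def equalize(tile, levels=256):
    """Histogram equalisation of one grey-value tile via its CDF."""
    hist = np.bincount(tile.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / tile.size
    lut = np.round(cdf * (levels - 1)).astype(tile.dtype)
    return lut[tile]

def multidodge(image, tile=64, levels=256):
    """Tile-wise histogram equalisation over non-overlapping tiles.

    Sketch of the first stage of MULTIDODGING only: the visible tile
    borders it produces would be removed by interpolation, and the
    contrast damped from local gradients, in the full procedure.
    """
    out = image.copy()
    for r in range(0, image.shape[0], tile):
        for c in range(0, image.shape[1], tile):
            block = image[r:r + tile, c:c + tile]
            out[r:r + tile, c:c + tile] = equalize(block, levels)
    return out
```

On a low-contrast tile (grey values clustered in a narrow band), the per-tile CDF mapping spreads the values over the full range, which is exactly the regional contrast boost the abstract describes.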

  • M. Drauschke, R. Roscher, T. Läbe, and W. Förstner, “Improving Image Segmentation using Multiple View Analysis,” in Object Extraction for 3D City Models, Road Databases and Traffic Monitoring – Concepts, Algorithms and Evaluation (CMRT09) , 2009, pp. 211-216.
    [BibTeX] [PDF]
    In our contribution, we improve image segmentation by integrating depth information from multi-view analysis. We assume the object surface in each region can be represented by a low order polynomial, and estimate the best fitting parameters of a plane using those points of the point cloud, which are mapped to the specific region. We can merge adjacent image regions, which cannot be distinguished geometrically. We demonstrate the approach for finding spatially planar regions on aerial images. Furthermore, we discuss the possibilities of extending of our approach towards segmenting terrestrial facade images.

    @InProceedings{Drauschke2009Improving,
    Title = {Improving Image Segmentation using Multiple View Analysis},
    Author = {Drauschke, Martin and Roscher, Ribana and L\"abe, Thomas and F\"orstner, Wolfgang},
    Booktitle = {Object Extraction for 3D City Models, Road Databases and Traffic Monitoring - Concepts, Algorithms and Evaluation (CMRT09)},
    Year = {2009},
    Pages = {211-216},
    Abstract = {In our contribution, we improve image segmentation by integrating depth information from multi-view analysis. We assume the object surface in each region can be represented by a low order polynomial, and estimate the best fitting parameters of a plane using those points of the point cloud, which are mapped to the specific region. We can merge adjacent image regions, which cannot be distinguished geometrically. We demonstrate the approach for finding spatially planar regions on aerial images. Furthermore, we discuss the possibilities of extending of our approach towards segmenting terrestrial facade images.},
    City = {Paris},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2009Improving.pdf}
    }
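The geometric criterion in the abstract, estimating a best-fitting plane from the 3D points that map into one image region, reduces to an ordinary least-squares problem. A minimal sketch under that reading (the region-merging decision built on the residuals is not reproduced, and the function name is illustrative):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through a 3D point set.

    Returns the parameters (a, b, c) and the RMS of the z-residuals,
    which a merging test could compare against a noise threshold.
    """
    pts = np.asarray(points, float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    params, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    rms = np.sqrt(np.mean((A @ params - pts[:, 2]) ** 2))
    return params, rms
```

Two adjacent regions whose pooled points still fit a single plane with small RMS cannot be distinguished geometrically, which is the condition under which the paper merges them.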

  • F. Endres, J. Hess, N. Franklin, C. Plagemann, C. Stachniss, and W. Burgard, “Estimating Range Information from Monocular Vision,” in Workshop Regression in Robotics – Approaches and Applications at Robotics: Science and Systems (RSS) , Seattle, WA, USA, 2009.
    [BibTeX]
    @InProceedings{Endres2009,
    Title = {Estimating Range Information from Monocular Vision},
    Author = {Endres, F. and Hess, J. and Franklin, N. and Plagemann, C. and Stachniss, C. and Burgard, W.},
    Booktitle = {Workshop Regression in Robotics - Approaches and Applications at Robotics: Science and Systems (RSS)},
    Year = {2009},
    Address = {Seattle, WA, USA},
    Timestamp = {2014.04.24}
    }

  • F. Endres, C. Plagemann, C. Stachniss, and W. Burgard, “Scene Analysis using Latent Dirichlet Allocation,” in Proceedings of Robotics: Science and Systems (RSS) , Seattle, WA, USA, 2009.
    [BibTeX] [PDF]
    @InProceedings{Endres2009a,
    Title = {Scene Analysis using Latent Dirichlet Allocation},
    Author = {F. Endres and C. Plagemann and Stachniss, C. and Burgard, W.},
    Booktitle = RSS,
    Year = {2009},
    Address = {Seattle, WA, USA},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/endres09rss-draft.pdf}
    }

  • C. Eppner, J. Sturm, M. Bennewitz, C. Stachniss, and W. Burgard, “Imitation Learning with Generalized Task Descriptions,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Kobe, Japan, 2009.
    [BibTeX]
    @InProceedings{Eppner2009,
    Title = {Imitation Learning with Generalized Task Descriptions},
    Author = {C. Eppner and J. Sturm and M. Bennewitz and Stachniss, C. and Burgard, W.},
    Booktitle = icra,
    Year = {2009},
    Address = {Kobe, Japan},
    Timestamp = {2014.04.24}
    }

  • W. Förstner, “Computer Vision and Remote Sensing – Lessons Learned,” in Photogrammetric Week 2009 , Heidelberg, 2009, pp. 241-249.
    [BibTeX] [PDF]
    Photogrammetry has significantly been influenced by its two neighbouring fields, namely Computer Vision and Remote Sensing. Today, Photogrammetry has become a part of Remote Sensing. The paper reflects its growing relations with Computer Vision, based on the author's more than 25 years of experience with the fascinating field between cognitive, natural and engineering science, which stimulated his own research and turned him into a wanderer between two worlds.

    @InProceedings{Forstner2009Computer,
    Title = {Computer Vision and Remote Sensing - Lessons Learned},
    Author = {F\"orstner, Wolfgang},
    Booktitle = {Photogrammetric Week 2009},
    Year = {2009},
    Address = {Heidelberg},
    Pages = {241--249},
    Abstract = {Photogrammetry has significantly been influenced by its two neighbouring fields, namely Computer Vision and Remote Sensing. Today, Photogrammetry has become a part of Remote Sensing. The paper reflects its growing relations with Computer Vision, based on the author's more than 25 years of experience with the fascinating field between cognitive, natural and engineering science, which stimulated his own research and turned him into a wanderer between two worlds.},
    City = {Stuttgart},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Forstner2009Computer_slides.pdf;http://www.ipb.uni-bonn.de/pdfs/Forstner2009Computer.pdf}
    }

  • W. Förstner, “Mustererkennung in der Fernerkundung,” in Publikationen der DGPF, Band 18: Zukunft mit Tradition , Jena, 2009, pp. 129-136.
    [BibTeX] [PDF]
    This contribution examines research in photogrammetry and remote sensing from the perspective of the methods required to solve the central task of both fields, image interpretation, with a view both to integrating the two fields and to organising joint research efficiently. Ingredients of successful research in this area are a focus on topics that can be handled within roughly a decade, close cooperation with the adjacent disciplines of pattern recognition and machine learning, competitive benchmarking, software exchange, and the integration of the research topics into teaching. The contribution sketches a research programme on 'pattern recognition in remote sensing' and 'interpretation of LIDAR data' which, being interdisciplinary in design, could increasingly interweave photogrammetry with its immediate neighbouring disciplines and which, in the author's view, is urgently needed to preserve the field's innovative strength.

    @InProceedings{Forstner2009Mustererkennung,
    Title = {Mustererkennung in der Fernerkundung},
    Author = {F\"orstner, Wolfgang},
    Booktitle = {Publikationen der DGPF, Band 18: Zukunft mit Tradition},
    Year = {2009},
    Address = {Jena},
    Pages = {129--136},
    Abstract = {Der Beitrag beleuchtet die Forschung in Photogrammetrie und Fernerkundung unter dem Blickwinkel der Methoden, die f\"ur die L\"osung der zentrale Aufgabe beider Fachgebiete, der Bildinterpretation, erforderlich sind, sowohl zur Integration beider Gebiete, wie zu einer effizienten Gestaltung gemeinsamerer Forschung. Ingredienzien f\"ur erfolgreiche Forschung in diesem Bereich sind Fokussierung auf Themen, die in ca. eine Dekade bearbeitet werden k\"onnen, enge Kooperation mit den fachlich angrenzenden Disziplinen - der Mustererkennung und dem maschinellen Lernen - , kompetetives Benchmarking, Softwareaustausch und Integration der Forschungsthemen in die Ausbildung. Der Beitrag skizziert ein Forschungsprogamm mit den Themen 'Mustererkennung in der Fernerkundung' und 'Interpretation von LIDARDaten' das, interdisziplin\"ar ausgerichtet, die Photogrammetrie mit den unmittelbaren Nachbardisziplinen zunehmend verweben k\"onnte, und - nach Ansicht des Autors - zur Erhaltung der Innovationskraft auch dringend erforderlich ist.},
    City = {Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Forstner2009Mustererkennung.pdf}
    }

  • W. Förstner, T. Dickscheid, and F. Schindler, “On the Completeness of Coding with Image Features,” in 20th British Machine Vision Conference , London, UK, 2009. doi:10.5244/C.23.1
    [BibTeX] [PDF]
    We present a scheme for measuring completeness of local feature extraction in terms of image coding. Completeness is here considered as good coverage of relevant image information by the features. As each feature requires a certain number of bits which are representative for a certain subregion of the image, we interpret the coverage as a sparse coding scheme. The measure is therefore based on a comparison of two densities over the image domain: An entropy density pH(x) based on local image statistics, and a feature coding density pc(x) which is directly computed from each particular set of local features. Motivated by the coding scheme in JPEG, the entropy distribution is derived from the power spectrum of local patches around each pixel position in a statistically sound manner. As the total number of bits for coding the image and for representing it with local features may be different, we measure incompleteness by the Hellinger distance between pH(x) and pc(x). We will derive a procedure for measuring incompleteness of possibly mixed sets of local features and show results on standard datasets using some of the most popular region and keypoint detectors, including Lowe, MSER and the recently published SFOP detectors. Furthermore, we will draw some interesting conclusions about the complementarity of detectors.

    @InProceedings{Forstner2009Completeness,
    Title = {On the Completeness of Coding with Image Features},
    Author = {F\"orstner, Wolfgang and Dickscheid, Timo and Schindler, Falko},
    Booktitle = {20th British Machine Vision Conference},
    Year = {2009},
    Address = {London, UK},
    Abstract = {We present a scheme for measuring completeness of local feature extraction in terms of image coding. Completeness is here considered as good coverage of relevant image information by the features. As each feature requires a certain number of bits which are representative for a certain subregion of the image, we interpret the coverage as a sparse coding scheme. The measure is therefore based on a comparison of two densities over the image domain: An entropy density pH(x) based on local image statistics, and a feature coding density pc(x) which is directly computed from each particular set of local features. Motivated by the coding scheme in JPEG, the entropy distribution is derived from the power spectrum of local patches around each pixel position in a statistically sound manner. As the total number of bits for coding the image and for representing it with local features may be different, we measure incompleteness by the Hellinger distance between pH(x) and pc(x). We will derive a procedure for measuring incompleteness of possibly mixed sets of local features and show results on standard datasets using some of the most popular region and keypoint detectors, including Lowe, MSER and the recently published SFOP detectors. Furthermore, we will draw some interesting conclusions about the complementarity of detectors.},
    Doi = {10.5244/C.23.1},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Forstner2009Completeness.pdf}
    }
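The incompleteness measure above is the Hellinger distance between the entropy density pH(x) and the feature coding density pc(x). For discrete (histogram) densities this reduces to a one-liner; the following sketch is a generic illustration of that distance, not code from the paper, and assumes both densities are given as normalized histograms over the image domain:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete densities p and q.

    Both inputs are assumed normalized (entries sum to 1). Returns a
    value in [0, 1]: 0 for identical densities, 1 for disjoint support.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
```

For example, two identical densities yield distance 0, while densities with disjoint support yield distance 1, matching the interpretation of complete vs. missing feature coverage.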

  • W. Förstner, T. Dickscheid, and F. Schindler, “Detecting Interpretable and Accurate Scale-Invariant Keypoints,” in 12th IEEE International Conference on Computer Vision (ICCV’09) , Kyoto, Japan, 2009, pp. 2256-2263. doi:10.1109/ICCV.2009.5459458
    [BibTeX] [PDF]
    This paper presents a novel method for detecting scale invariant keypoints. It fills a gap in the set of available methods, as it proposes a scale-selection mechanism for junction-type features. The method is a scale-space extension of the detector proposed by Förstner (1994) and uses the general spiral feature model of Bigün (1990) to unify different types of features within the same framework. By locally optimising the consistency of image regions with respect to the spiral model, we are able to detect and classify image structures with complementary properties over scale-space, especially star and circular shapes as interpretable and identifiable subclasses. Our motivation comes from calibrating images of structured scenes with poor texture, where blob detectors alone cannot find sufficiently many keypoints, while existing corner detectors fail due to the lack of scale invariance. The procedure can be controlled by semantically clear parameters. One obtains a set of keypoints with position, scale, type and consistency measure. We characterise the detector and show results on common benchmarks. It competes in repeatability with the Lowe detector, but finds more stable keypoints in poorly textured areas, and shows comparable or higher accuracy than other recent detectors. This makes it useful for both object recognition and camera calibration.

    @InProceedings{Forstner2009Detecting,
    Title = {Detecting Interpretable and Accurate Scale-Invariant Keypoints},
    Author = {F\"orstner, Wolfgang and Dickscheid, Timo and Schindler, Falko},
    Booktitle = {12th IEEE International Conference on Computer Vision (ICCV'09)},
    Year = {2009},
    Address = {Kyoto, Japan},
    Pages = {2256--2263},
    Abstract = {This paper presents a novel method for detecting scale invariant keypoints. It fills a gap in the set of available methods, as it proposes a scale-selection mechanism for junction-type features. The method is a scale-space extension of the detector proposed by F\"orstner (1994) and uses the general spiral feature model of Big\"un (1990) to unify different types of features within the same framework. By locally optimising the consistency of image regions with respect to the spiral model, we are able to detect and classify image structures with complementary properties over scale-space, especially star and circular shapes as interpretable and identifiable subclasses. Our motivation comes from calibrating images of structured scenes with poor texture, where blob detectors alone cannot find sufficiently many keypoints, while existing corner detectors fail due to the lack of scale invariance. The procedure can be controlled by semantically clear parameters. One obtains a set of keypoints with position, scale, type and consistency measure. We characterise the detector and show results on common benchmarks. It competes in repeatability with the Lowe detector, but finds more stable keypoints in poorly textured areas, and shows comparable or higher accuracy than other recent detectors. This makes it useful for both object recognition and camera calibration.},
    Doi = {10.1109/ICCV.2009.5459458},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Forstner2009Detectinga.pdf}
    }

  • B. Frank, C. Stachniss, R. Schmedding, W. Burgard, and M. Teschner, “Real-world Robot Navigation amongst Deformable Obstacles,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Kobe, Japan, 2009.
    [BibTeX]
    [none]
    @InProceedings{Frank2009,
    Title = {Real-world Robot Navigation amongst Deformable Obstacles},
    Author = {B. Frank and C. Stachniss and R. Schmedding and W. Burgard and M. Teschner},
    Booktitle = icra,
    Year = {2009},
    Address = {Kobe, Japan},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

  • G. Grisetti, C. Stachniss, and W. Burgard, “Non-linear Constraint Network Optimization for Efficient Map Learning,” IEEE Transactions on Intelligent Transportation Systems, vol. 10, iss. 3, pp. 428-439, 2009.
    [BibTeX] [PDF]
    [none]
    @Article{Grisetti2009,
    Title = {Non-linear Constraint Network Optimization for Efficient Map Learning},
    Author = {Grisetti, G. and Stachniss, C. and Burgard, W.},
    Journal = ieeeits,
    Year = {2009},
    Number = {3},
    Pages = {428--439},
    Volume = {10},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/grisetti09its.pdf}
    }

  • F. Korč and W. Förstner, “eTRIMS Image Database for Interpreting Images of Man-Made Scenes,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2009-01, 2009.
    [BibTeX] [PDF]
    We describe ground truth data that we provide to serve as a basis for evaluation and comparison of supervised learning approaches to image interpretation. The provided ground truth, the eTRIMS Image Database, is a collection of annotated images of real world street scenes. Typical objects in these images are variable in shape and appearance, in the number of their parts, and appear in a variety of configurations. The domain of man-made scenes is thus well suited for evaluation and comparison of a variety of interpretation approaches, including those that employ structure models. The provided pixelwise ground truth assigns each image pixel both a class label and an object label, and thus offers ground truth annotation both on the level of pixels and regions. While we believe that such ground truth is of general interest in supervised learning, such data may be of further relevance in emerging real world applications involving automation of man-made scene interpretation.

    @TechReport{Korvc2009eTRIMS,
    Title = {{eTRIMS} Image Database for Interpreting Images of Man-Made Scenes},
    Author = {Kor{\vc}, Filip and F\"orstner, Wolfgang},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2009},
    Month = apr,
    Number = {TR-IGG-P-2009-01},
    Abstract = {We describe ground truth data that we provide to serve as a basis for evaluation and comparison of supervised learning approaches to image interpretation. The provided ground truth, the eTRIMS Image Database, is a collection of annotated images of real world street scenes. Typical objects in these images are variable in shape and appearance, in the number of their parts, and appear in a variety of configurations. The domain of man-made scenes is thus well suited for evaluation and comparison of a variety of interpretation approaches, including those that employ structure models. The provided pixelwise ground truth assigns each image pixel both a class label and an object label, and thus offers ground truth annotation both on the level of pixels and regions. While we believe that such ground truth is of general interest in supervised learning, such data may be of further relevance in emerging real world applications involving automation of man-made scene interpretation.},
    Institute = {Dept. of Photogrammetry, University of Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Korvc2009eTRIMS.pdf}
    }

  • R. Kuemmerle, B. Steder, C. Dornhege, M. Ruhnke, G. Grisetti, C. Stachniss, and A. Kleiner, “On measuring the accuracy of SLAM algorithms,” Autonomous Robots, vol. 27, p. 387ff, 2009.
    [BibTeX] [PDF]
    [none]
    @Article{Kuemmerle2009,
    Title = {On measuring the accuracy of {SLAM} algorithms},
    Author = {R. Kuemmerle and B. Steder and C. Dornhege and M. Ruhnke and G. Grisetti and C. Stachniss and A. Kleiner},
    Journal = auro,
    Year = {2009},
    Pages = {387ff},
    Volume = {27},
    Abstract = {[none]},
    Issue = {4},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/kuemmerle09auro.pdf}
    }

  • J. Meidow, C. Beder, and W. Förstner, “Reasoning with uncertain points, straight lines, and straight line segments in 2D,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 64, iss. 2, pp. 125-139, 2009. doi:10.1016/j.isprsjprs.2008.09.013
    [BibTeX] [PDF]
    Decisions based on basic geometric entities can only be optimal if their uncertainty is propagated through the entire reasoning chain. This concerns the construction of new entities from given ones, the testing of geometric relations between geometric entities, and the parameter estimation of geometric entities based on spatial relations which have been found to hold. Basic feature extraction procedures often provide measures of uncertainty. These uncertainties should be incorporated into the representation of geometric entities; this permits statistical testing, eliminates the necessity of specifying non-interpretable thresholds, and enables statistically optimal parameter estimation. Using the calculus of homogeneous coordinates, the power of algebraic projective geometry can be exploited in these steps of image analysis. This review collects, discusses and evaluates the various representations of uncertain geometric entities in 2D together with their conversions. The representations are extended to achieve a consistent set of representations allowing geometric reasoning. The statistical testing of geometric relations is presented. Furthermore, a generic estimation procedure is provided for multiple uncertain geometric entities based on possibly correlated observed geometric entities and geometric constraints.

    @Article{Meidow2009Reasoning,
    Title = {Reasoning with uncertain points, straight lines, and straight line segments in 2D},
    Author = {Meidow, Jochen and Beder, Christian and F\"orstner, Wolfgang},
    Journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
    Year = {2009},
    Number = {2},
    Pages = {125--139},
    Volume = {64},
    Abstract = {Decisions based on basic geometric entities can only be optimal if their uncertainty is propagated through the entire reasoning chain. This concerns the construction of new entities from given ones, the testing of geometric relations between geometric entities, and the parameter estimation of geometric entities based on spatial relations which have been found to hold. Basic feature extraction procedures often provide measures of uncertainty. These uncertainties should be incorporated into the representation of geometric entities; this permits statistical testing, eliminates the necessity of specifying non-interpretable thresholds, and enables statistically optimal parameter estimation. Using the calculus of homogeneous coordinates, the power of algebraic projective geometry can be exploited in these steps of image analysis. This review collects, discusses and evaluates the various representations of uncertain geometric entities in 2D together with their conversions. The representations are extended to achieve a consistent set of representations allowing geometric reasoning. The statistical testing of geometric relations is presented. Furthermore, a generic estimation procedure is provided for multiple uncertain geometric entities based on possibly correlated observed geometric entities and geometric constraints.},
    City = {Bonn},
    Doi = {10.1016/j.isprsjprs.2008.09.013},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Meidow2009Reasoning.pdf}
    }
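The review above represents points and lines in homogeneous coordinates, where constructing new entities reduces to bilinear operations. The deterministic part of that calculus (without the paper's uncertainty propagation) can be sketched as follows; this is a generic illustration under the standard projective conventions, not code from the paper:

```python
import numpy as np

def join(x1, x2):
    """Line through two homogeneous 2D points: l = x1 x x2."""
    return np.cross(x1, x2)

def meet(l1, l2):
    """Intersection point of two homogeneous 2D lines: x = l1 x l2."""
    return np.cross(l1, l2)

def euclidean(x):
    """Dehomogenize a point; undefined for points at infinity (x[2] = 0)."""
    return x[:2] / x[2]

# Incidence of a point x on a line l is the bilinear test l . x = 0,
# which is where statistical testing with propagated covariances enters.
```

For example, the line through (0,0) and (1,1) is `join([0,0,1], [1,1,1])`, and intersecting the lines y = 0 and x = 1 via `meet` recovers the point (1, 0) after dehomogenization.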

  • J. Meidow, W. Förstner, and C. Beder, “Optimal Parameter Estimation with Homogeneous Entities and Arbitrary Constraints,” in Pattern Recognition (Symposium of DAGM) , Jena, Germany, 2009, pp. 292-301. doi:10.1007/978-3-642-03798-6_30
    [BibTeX] [PDF]
    Well-known estimation techniques in computational geometry usually deal only with single geometric entities as unknown parameters and do not account for constrained observations within the estimation. The estimation model proposed in this paper is much more general, as it can handle multiple homogeneous vectors as well as multiple constraints. Furthermore, it allows the consistent handling of arbitrary covariance matrices for the observed and the estimated entities. The major novelty is the proper handling of singular observation covariance matrices made possible by additional constraints within the estimation. These properties are of special interest, for instance, in the calculus of algebraic projective geometry, where singular covariance matrices arise naturally from the non-minimal parameterizations of the entities. The validity of the proposed adjustment model will be demonstrated by the estimation of a fundamental matrix from synthetic data and compared to heteroscedastic regression [?], which is considered the state-of-the-art estimator for this task. As the latter is unable to simultaneously estimate multiple entities, we will also demonstrate the usefulness and the feasibility of our approach by the constrained estimation of three vanishing points from observed uncertain image line segments.

    @InProceedings{Meidow2009Optimal,
    Title = {Optimal Parameter Estimation with Homogeneous Entities and Arbitrary Constraints},
    Author = {Meidow, Jochen and F\"orstner, Wolfgang and Beder, Christian},
    Booktitle = {Pattern Recognition (Symposium of DAGM)},
    Year = {2009},
    Address = {Jena, Germany},
    Editor = {Denzler, J. and Notni, G.},
    Pages = {292--301},
    Publisher = {Springer},
    Series = {LNCS},
    Abstract = {Well-known estimation techniques in computational geometry usually deal only with single geometric entities as unknown parameters and do not account for constrained observations within the estimation. The estimation model proposed in this paper is much more general, as it can handle multiple homogeneous vectors as well as multiple constraints. Furthermore, it allows the consistent handling of arbitrary covariance matrices for the observed and the estimated entities. The major novelty is the proper handling of singular observation covariance matrices made possible by additional constraints within the estimation. These properties are of special interest, for instance, in the calculus of algebraic projective geometry, where singular covariance matrices arise naturally from the non-minimal parameterizations of the entities. The validity of the proposed adjustment model will be demonstrated by the estimation of a fundamental matrix from synthetic data and compared to heteroscedastic regression [?], which is considered the state-of-the-art estimator for this task. As the latter is unable to simultaneously estimate multiple entities, we will also demonstrate the usefulness and the feasibility of our approach by the constrained estimation of three vanishing points from observed uncertain image line segments.},
    Doi = {10.1007/978-3-642-03798-6_30},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Meidow2009Optimal.pdf}
    }

  • M. D. Mura, J. A. Benediktsson, B. Waske, and L. Bruzzone, “Morphological attribute filters for the analysis of very high resolution remote sensing images,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2009) , 2009. doi:10.1109/IGARSS.2009.5418096
    [BibTeX]
    This paper proposes the use of morphological attribute profiles as an effective alternative to the conventional morphological operators based on the geodesic reconstruction for modeling the spatial information in very high resolution images. Attribute profiles, used in multilevel approaches, result particularly effective in terms of computational complexity and capabilities in characterizing the objects in the image. In addition they are more flexible than operators by reconstruction, thanks to the definition of possible different attributes. Experimental results obtained on a Quickbird panchromatic very high resolution image proved the effectiveness of the presented attribute filters and pointed out their main properties.

    @InProceedings{Mura2009Morphological,
    Title = {Morphological attribute filters for the analysis of very high resolution remote sensing images},
    Author = {Mura, M.D. and Benediktsson, J.A. and Waske, Bj\"orn and Bruzzone, L.},
    Booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2009)},
    Year = {2009},
    Abstract = {This paper proposes the use of morphological attribute profiles as an effective alternative to the conventional morphological operators based on the geodesic reconstruction for modeling the spatial information in very high resolution images. Attribute profiles, used in multilevel approaches, result particularly effective in terms of computational complexity and capabilities in characterizing the objects in the image. In addition they are more flexible than operators by reconstruction, thanks to the definition of possible different attributes. Experimental results obtained on a Quickbird panchromatic very high resolution image proved the effectiveness of the presented attribute filters and pointed out their main properties.},
    Doi = {10.1109/IGARSS.2009.5418096},
    Keywords = {Quickbird panchromatic imagery;computational complexity;geodesic reconstruction;morphological attribute filters;morphological operators;spatial information;very high resolution remote sensing images;computational complexity;geophysical image processing;image reconstruction;mathematical morphology;remote sensing;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • M. Pilger, “Automatische Bestimmung skalierungsinvarianter Fenster für markante Bildpunkte,” Diploma Thesis, Institute of Photogrammetry, University of Bonn, 2009.
    [BibTeX]
    Building on the interest operator of Förstner and Gülch, we implemented a scale-invariant operator in Matlab that extracts precisely localizable edge intersection points and their scales from images. To this end, C++ libraries for noise estimation and for the fast computation of convolutions by the method of Deriche were made available for Matlab. Unfortunately, the convolutions with the Deriche filter turned out to be unsuitable for our particular application: artifacts arise in our optimization function, so that a reliable evaluation cannot be guaranteed. By computing our functions via convolutions in the frequency domain, we were initially able to extract edge intersection points with corresponding scales from test images. In an experiment we could not demonstrate perfect scale invariance under changes of image scale: the detected scale of an edge intersection point did not grow fast enough with the increasing image scale. Nevertheless, we achieved good results on real images and, on two images differing by known geometric or radiometric transformations, detected a similar percentage of corresponding points and scales as existing scale-invariant interest operators. Measured by the absolute number of detections, our operator lies far behind the SIFT operator and the Harris-Laplace operator; both usually detect more than twice as many points on real images as our operator. However, our operator can be extended to a further type of interest point, namely centers of circularly symmetric image features, or more generally spiral-like features. This may in the future overcome the shortcoming of the low number of detections. Without a threshold, our operator also detects randomly distributed points and scales in homogeneous image regions.
We have shown that it is sensible to use a homogeneity measure to suppress detections in homogeneous image regions while still retaining less well localized points that can nevertheless contribute to an image orientation. At its current stage of development, our operator leaves room for extensions: besides the already mentioned inclusion of further point features, for color images the information of all three channels can be incorporated into the detection, similar to Fuchs (1997), without reducing the image to a single brightness channel with a loss of information. Furthermore, it could be investigated whether oversampling the image before computing the squared gradients, as proposed by Köthe (2003), has a beneficial effect on the point detections. A substantial speed-up would also be important for practical applications. Depending on the image size, the number of detected point candidates, and the discretization density of scale space, detection for an image of size 800 x 800 pixels with subpixel estimation enabled can take 15 minutes on a 2.4 GHz computer. Most of this time is spent on the convolutions and on the cubic interpolation for the subpixel estimation. The time for the convolutions could be reduced by switching to a pyramid representation of the image in scale space.

    @MastersThesis{Pilger2009Automatische,
    Title = {Automatische Bestimmung skalierungsinvarianter Fenster f\"ur markante Bildpunkte},
    Author = {Pilger, Marko},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2009},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Inform. Timo Dickscheid},
    Type = {Diploma Thesis},
    Abstract = {Wir haben basierend auf dem Interestoperator von F\"orstner und G\"ulch einen skaleninvarianten Operator in Matlab implementiert, der m\"oglichst pr\"azise lokalisierbare Kantenschnittpunkte und ihre Skalen aus Bildern extrahiert. Dazu wurden C++-Bibliotheken zur Rauschsch\"atzung und zur schnellen Berechnung von Faltungen nach der Methode von Deriche f\"ur Matlab verf\"ugbar gemacht. Leider hat sich herausgestellt, dass die Faltungen mit dem Deriche-Filter f\"ur unsere spezielle Anwendung nicht geeignet ist: Es entstehen Artefakte in unserer Optimierungsfunktion, so dass eine zuverl\"assige Auswertung nicht gew\"ahrleistet ist. Indem wir unsere Funktionen durch Faltungen im Frequenzbereich berechnet haben, konnten wir zun\"achst auf Testbildern Kantenschnittpunkte mit entsprechenden Skalen extrahieren. Perfekte Skaleninvarianz bei Ma{\ss}stabs\"anderung des Bildes konnten wir in einem Experiment nicht nachweisen: die detektierte Skala eines Kantenschnittpunktes wuchs im Experiment nicht schnell genug mit dem gr\"o{\ss}er werdenden Bildma{\ss}stab mit. Dennoch erzielten wir auf realen Bildern gute Ergebnisse und detektierten auf zwei Bildern, die sich durch bekannte geometrische oder radiometrische Transformationen unterscheiden, prozentual \"ahnlich viele korrespondierende Punkte und Skalen wie existierende skaleninvarianteInterestoperatoren. Gemessen an der absoluten Zahl der Detektionen liegt unser Operator weit hinter dem SIFT-Operator und dem Harris-Laplace Operator - beide entdecken auf realen Bildern meist mehr als doppelt so viele Punkte wie unser Operator. Allerdings kann unser Operator auf einen weiteren Typus von Interestpunkten erweitert werden, das sind Zentren kreissymmetrischer Bildmerkmale, oder allgemeiner auch auf spiralartige Merkmale. Damit kann in Zukunft m\"oglicherweise das Manko der geringen Anzahl an Detektionen \"uberwunden werden. 
Ohne einen Schwellwert detektiert unser Operator auch zuf\"allig verteilte Punkte und Skalen in homogenen Bildbereichen. Wir haben gezeigt, dass es sinnvoll ist, ein Homogenit\"atsma{\ss} zu benutzen, um Detektionen auf homogenen Bildbereichen zu unterdr\"ucken, und dennoch auch nicht so gut lokalisierte Punkte, die aber zu einer Bildorientierung beitragen k\"onnen, zu erhalten. Unser Operator l\"asst im derzeitigen Entwicklungsstadium noch Raum f\"ur Erweiterungen: neben der schon erw\"ahnten Einbeziehung weiterer Punktmerkmale kann bei Farbbildern die Information aller drei Kan\"ale in die Detektion mit einbezogen werden, \"ahnlich wie bei Fuchs (1997), ohne das Bild unter Informationsverlust auf einen Helligkeitskanal zu reduzieren. Au{\ss}erdem k\"onnte untersucht werden, ob sich ein Oversampling des Bildes vor der Berechnung der quadratischen Gradienten, wie es K\"othe (2003) vorschl\"agt, vorteilhaft auf die Punktdetektionen auswirkt. Wichtig f\"ur Anwendungen in der Praxis w\"are auch eine deutliche Geschwindigkeitssteigerung. Abh\"angig von Bildgr\"o{\ss}e, Anzahl detektierter Punktkandidaten und Diskretisierungsdichte des Skalenraums kann die Detektion f\"ur ein Bild der Gr\"o{\ss}e (800 x 800pel) bei eingeschalteter Subpixelsch\"atzung auf einem 2,4 GHz Computer 15 Minuten dauern. Die meiste Zeit beanspruchen dabei die Faltungen und die kubische Interpolation bei der Subpixelsch\"atzung. Die Zeit f\"ur die Faltungen k\"onnte durch einen \"Ubergang auf eine Pyramidendarstellung des Bildes im Skalenraum reduziert werden.}
    }

  • R. Roscher and W. Förstner, “Multiclass Bounded Logistic Regression — Efficient Regularization with Interior Point Method,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2009-02, 2009.
    [BibTeX] [PDF]
    Logistic regression has been widely used in classification tasks for many years. Its optimization in the case of linearly separable data has received extensive study due to the problem of a monotone likelihood. This paper presents a new approach, called bounded logistic regression (BLR), by solving the logistic regression as a convex optimization problem with constraints. The paper tests the accuracy of BLR by evaluating nine well-known datasets and compares it to the closely related support vector machine approach (SVM).

    @TechReport{Roscher2009Multiclass,
    Title = {Multiclass Bounded Logistic Regression -- Efficient Regularization with Interior Point Method},
    Author = {Roscher, Ribana and F\"orstner, Wolfgang},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2009},
    Number = {TR-IGG-P-2009-02},
    Abstract = {Logistic regression has been widely used in classification tasks for many years. Its optimization in the case of linearly separable data has received extensive study due to the problem of a monotone likelihood. This paper presents a new approach, called bounded logistic regression (BLR), by solving the logistic regression as a convex optimization problem with constraints. The paper tests the accuracy of BLR by evaluating nine well-known datasets and compares it to the closely related support vector machine approach (SVM).},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Roscher2009Multiclass.pdf}
    }

  • J. Schmittwilken, M. Y. Yang, W. Förstner, and L. Plümer, “Integration of conditional random fields and attribute grammars for range data interpretation of man-made objects,” Annals of GIS, vol. 15, iss. 2, pp. 117-126, 2009. doi:10.1080/19475680903464696
    [BibTeX] [PDF]
    A new concept for the integration of low- and high-level reasoning for the interpretation of images of man-made objects is described. The focus is on the 3D reconstruction of facades, especially the transition area between buildings and the surrounding ground. The aim is the identification of semantically meaningful objects such as stairs, entrances, and windows. A low-level module based on the random sample consensus (RANSAC) algorithm generates planar polygonal patches. Conditional random fields (CRFs) are used for their classification, based on local neighborhood and priors from the grammar. An attribute grammar is used to represent semantic knowledge including object partonomy and observable geometric constraints. The AND-OR tree-based parser uses the precision of the classified patches to control the reconstruction process and to optimize the sampling mechanism of RANSAC. Although CRFs are close to the data, attribute grammars make the high-level structure of objects explicit and translate semantic knowledge into observable geometric constraints. Our approach combines top-down and bottom-up reasoning by integrating CRFs and attribute grammars and thus exploits the complementary strengths of these methods.

    @Article{Schmittwilken2009Integration,
    Title = {Integration of conditional random fields and attribute grammars for range data interpretation of man-made objects},
    Author = {Schmittwilken, J\"org and Yang, Michael Ying and F\"orstner, Wolfgang and Pl\"umer, Lutz},
    Journal = {Annals of GIS},
    Year = {2009},
    Number = {2},
    Pages = {117--126},
    Volume = {15},
    Abstract = {A new concept for the integration of low- and high-level reasoning for the interpretation of images of man-made objects is described. The focus is on the 3D reconstruction of facades, especially the transition area between buildings and the surrounding ground. The aim is the identification of semantically meaningful objects such as stairs, entrances, and windows. A low-level module based on the random sample consensus (RANSAC) algorithm generates planar polygonal patches. Conditional random fields (CRFs) are used for their classification, based on local neighborhood and priors from the grammar. An attribute grammar is used to represent semantic knowledge including object partonomy and observable geometric constraints. The AND-OR tree-based parser uses the precision of the classified patches to control the reconstruction process and to optimize the sampling mechanism of RANSAC. Although CRFs are close to the data, attribute grammars make the high-level structure of objects explicit and translate semantic knowledge into observable geometric constraints. Our approach combines top-down and bottom-up reasoning by integrating CRFs and attribute grammars and thus exploits the complementary strengths of these methods.},
    Doi = {10.1080/19475680903464696},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Schmittwilken2009Integration.pdf}
    }
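The low-level module above generates planar patches with RANSAC. A minimal RANSAC plane fit for 3D points, sketched here as a generic illustration (function names and parameters are ours, not the paper's), repeatedly hypothesizes a plane from three random points and keeps the hypothesis with the largest consensus set:

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a plane n.x + d = 0 to an (N, 3) point array with RANSAC.

    Returns (unit normal, offset d, boolean inlier mask) of the
    hypothesis with the largest consensus set.
    """
    rng = np.random.default_rng(rng)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        # Minimal sample: three distinct points define a plane.
        i, j, k = rng.choice(len(points), size=3, replace=False)
        p1, p2, p3 = points[i], points[j], points[k]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-12:            # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p1
        # Consensus set: points within `threshold` of the plane.
        inliers = np.abs(points @ n + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers
```

In the paper the sampling mechanism is additionally steered by the grammar-derived priors; the sketch above shows only the plain, unguided variant.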

  • A. Schneider, J. Sturm, C. Stachniss, M. Reisert, H. Burkhardt, and W. Burgard, “Object Identification with Tactile Sensors Using Bag-of-Features,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , 2009.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Schneider2009,
    Title = {Object Identification with Tactile Sensors Using Bag-of-Features},
    Author = {A. Schneider and J. Sturm and C. Stachniss and M. Reisert and H. Burkhardt and W. Burgard},
    Booktitle = iros,
    Year = {2009},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/wurm09iros.pdf}
    }

  • R. Schultz, “Orientierung einer Kamera in einer Legolandszene,” Bachelor Thesis, Institute of Photogrammetry, University of Bonn, 2009.
    [BibTeX]
    This thesis investigates a method for determining the exterior orientation of a camera. For many applications in photogrammetry it is of interest to estimate the exterior orientation of the camera with little effort. The exterior orientation describes the spatial pose of the camera in the object coordinate system and can be determined via the vanishing points. In a Legoland scene, the vanishing points can be estimated from parallel object edges. In a Legoland scene, all objects consist of polyhedra with exclusively right angles, and the polyhedra are arranged parallel to one another. Legoland scenes are a simplification of real images; they are intended to aid learning to recognize structures, in this case object edges. The goal is to develop a method for finding object edges in the image that lead to the same vanishing point. Based on these edges, the exterior orientation of the camera can be determined. A method for determining the exterior orientation of the camera exists under the assumption that the interior orientation is known; it was developed at the University of Bonn by Prof. Förstner. The task of this bachelor thesis is to improve this method with respect to its edge selection. Edges were segmented in the images, and edge pairs among them were manually examined as to whether they lead to the same vanishing point. This data set was divided into a test set and a training set. The training data were used to investigate, on the basis of geometric properties, whether an edge pair leads to the same vanishing point. The distance and the angle between two edges as well as their overlap were examined. Furthermore, a triangular mesh over the extracted edges was constructed by a constrained Delaunay triangulation, with whose help an edge assignment procedure was developed.
These geometric properties were first examined individually and later in combination by means of a decision tree. The criteria determined for these properties were verified on the test set. For the data examined, an angle between 13 and 19 degrees proved effective. This achieved 58% of the theoretically maximal utility of error-free classification, in contrast to 10% for the original method.

    @MastersThesis{Schultz2009Orientierung,
    Title = {Orientierung einer Kamera in einer Legolandszene},
    Author = {Schultz, Rebekka},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2009},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.- Inform. Timo Dickscheid},
    Type = {Bachelor Thesis},
    Abstract = {Diese Arbeit untersucht ein Verfahren zur Bestimmung der \"au{\ss}eren Orientierung einer Kamera. F\"ur viele Anwendungen in der Photogrammetrie ist es interessant, die \"au{\ss}ere Orientierung der Kamera mit geringem Aufwand sch\"atzen zu k\"onnen. Die \"au{\ss}ere Orientierung beschreibt die r\"aumliche Lage der Kamera im Objektkoordinatensystem und l\"asst sich \"uber die Fluchtpunkte bestimmen. Die Fluchtpunkte lassen sich in einer Legolandszene durch parallele Objektkanten sch\"atzen. In einer Legolandszene bestehen alle Objekte aus Polyedern, die ausschlie{\ss}lich rechte Winkel haben. Hierbei sind die Polyeder parallel zueinander angeordnet. Legolandszenen sind eine Vereinfachung realer Bilder. Sie sollen dem Erlernen des Erkennens von Strukturen, in diesem Falle von Objektkanten dienen. Ziel ist es, eine Methode zu entwickeln, mit deren Hilfe im Bild Objektkanten, die zum gleichen Fluchtpunkt f\"uhren, gefunden werden k\"onnen. Auf Grundlage dieser Kanten kann die \"au{\ss}ere Orientierung der Kamera bestimmt werden. Es existiert ein Verfahren zur Bestimmung der \"au{\ss}eren Orientierung der Kamera, unter der Voraussetzung, dass die innere Orientierung bekannt ist. Dieses Verfahren wurde an der Universit\"at Bonn von Prof. F\"orstner entwickelt. Aufgabe der Bachelorarbeit ist es, dieses Verfahren bez\"uglich seiner Kantenwahl zu verbessern. Es wurden in den Bildern Kanten segmentiert, unter welchen Kantenpaare manuell dahingehend untersucht wurden, ob sie zum gleichen Fluchtpunkt f\"uhren. Diese Datenmenge wurde in eine Test- und Trainingsmenge unterteilt. Die Daten der Trainingsmenge wurden verwendet, um anhand von geometrischen Eigenschaften zu untersuchen, ob ein Kantenpaar zum gleichen Fluchtpunkt f\"uhrt. Es wurden der Abstand und der Winkel zwischen zwei Kanten sowie deren \"Uberlappung untersucht. 
Weiterhin wurde zu den extrahierten Kanten eine Dreiecksvermaschung durch eine bedingte Delaunay- Triangulierung konstruiert, mit deren Hilfe ein Kantenzuordnungsverfahren entwickelt wurde. Diese geometrischen Eigenschaften wurden vorerst einzeln und sp\"ater in Kombination mittels eines Entscheidungsbaumes untersucht. Die f\"ur die Eigenschaften ermittelten Kriterien wurden mit den Daten der Testmenge \"uberpr\"uft. Bei den untersuchten Daten erwies sich ein Winkel zwischen 13 Grad und 19 Grad als effektiv. Hiermit wurden 58 % der theoretisch maximalen Utility durch fehlerfreie Klassifikation erreicht, im Kontrast zu 10 % des urspr\"unglichen Verfahrens.}
    }

  • C. Stachniss, “Spatial Modeling and Robot Navigation,” Habilitation Thesis, 2009.
    [BibTeX] [PDF]
    [none]
    @PhdThesis{Stachniss2009,
    Title = {Spatial Modeling and Robot Navigation},
    Author = {C. Stachniss},
    School = {University of Freiburg, Department of Computer Science},
    Year = {2009},
    Type = {Habilitation},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss-habil.pdf}
    }

  • C. Stachniss, Robotic Mapping and Exploration, Springer, 2009, vol. 55.
    [BibTeX]
    [none]
    @Book{Stachniss2009a,
    Title = {Robotic Mapping and Exploration},
    Author = {C. Stachniss},
    Publisher = {Springer},
    Year = {2009},
    Series = springerstaradvanced,
    Volume = {55},
    Abstract = {[none]},
    ISBN = {978-3-642-01096-5},
    Timestamp = {2014.04.24}
    }

  • C. Stachniss, O. Martinez Mozos, and W. Burgard, “Efficient Exploration of Unknown Indoor Environments using a Team of Mobile Robots,” Annals of Mathematics and Artificial Intelligence, vol. 52, p. 205ff, 2009.
    [BibTeX]
    [none]
    @Article{Stachniss2009b,
    Title = {Efficient Exploration of Unknown Indoor Environments using a Team of Mobile Robots},
    Author = {Stachniss, C. and Martinez Mozos, O. and Burgard, W.},
    Journal = {Annals of Mathematics and Artificial Intelligence},
    Year = {2009},
    Pages = {205ff},
    Volume = {52},
    Abstract = {[none]},
    Issue = {2},
    Timestamp = {2014.04.24}
    }

  • C. Stachniss, C. Plagemann, and A. J. Lilienthal, “Gas Distribution Modeling using Sparse Gaussian Process Mixtures,” Autonomous Robots, vol. 26, p. 187ff, 2009.
    [BibTeX]
    [none]
    @Article{Stachniss2009c,
    Title = {Gas Distribution Modeling using Sparse Gaussian Process Mixtures},
    Author = {Stachniss, C. and Plagemann, C. and Lilienthal, A.J.},
    Journal = auro,
    Year = {2009},
    Pages = {187ff},
    Volume = {26},
    Abstract = {[none]},
    Issue = {2},
    Timestamp = {2014.04.24}
    }

  • R. Steffen, “Visual SLAM from image sequences acquired by unmanned aerial vehicles,” PhD Thesis, 2009.
    [BibTeX]
    For years, triangulation has been used in the development of autonomous systems to solve the problem of simultaneous localization and mapping. Because of the real-time requirements of these systems, recursive estimation methods, in particular Kalman-filter-based approaches, have become very popular. Unfortunately, the nonlinearity of the triangulation introduces effects that substantially affect the consistency and accuracy of the solution with respect to the estimated parameters. The literature offers several interesting approaches for minimizing these accuracy-related effects. This thesis is motivated by the claim that a Kalman-filter-based solution of the triangulation for localization and mapping from image sequences of unmanned aerial vehicles is feasible. In contrast to classical aerial triangulation, additional aspects then come to the fore, which are examined in this thesis. The first contribution of this work is the derivation of a general procedure for the recursive update in the Kalman filter with implicit observation equations; we show that the classical Kalman filter update procedures are a special case of our approach. In the second contribution, we extend the classical modeling to a single-camera model in the Kalman filter and formulate linearly computable motion models. Besides various methods from the literature for initializing new points in the Kalman filter, we present a new method as the third main contribution. As the fourth contribution, using image sequences from an unmanned aerial vehicle, we show what accuracy of localization and mapping is achievable by triangulation. Finally, empirical studies on simulated and real data of an image sequence of a photogrammetric strip show and compare the influence of the initialization methods for new points in the Kalman filter and the accuracies achievable in these scenarios.

    @PhdThesis{Steffen2009Visual,
    Title = {Visual SLAM from image sequences acquired by unmanned aerial vehicles},
    Author = {Steffen, Richard},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2009},
    Abstract = {Die Verwendung der Triangulation zur L\"osung des Problems der gleichzeitigen Lokalisierung und Kartierung findet seit Jahren ihren Eingang in die Entwicklung autonomer Systeme. Aufgrund von Echtzeitanforderungen dieser Systeme erreichen rekursive Sch\"atzverfahren, insbesondere Kalmanfilter basierte Ans\"atze, gro{\ss}e Beliebtheit. Bedauerlicherweise, treten dabei durch die Nichtlinearit\"at der Triangulation einige Effekte auf, welche die Konsistenz und Genauigkeit der L\"osung hinsichtlich der gesch\"atzten Parameter ma{\ss}geblich beeinflussen. In der Literatur existieren dazu einige interessante L\"osungsans\"atze, um diese genauigkeitsrelevanten Effekte zu minimieren. Die Motivation dieser Arbeit ist die These, dass die KaImanfilter basierte L\"osung der Triangulation zur Lokalisierung und Kartierung aus Bildfolgen von unbemannten Drohnen realisierbar ist. Im Gegensatz zur klassischen Aero-Triangulation treten dadurch zus\"atzliche Aspekte in den Vordergrund, die in dieser Arbeit beleuchtet werden. Der erste Beitrag dieser Arbeit besteht in der Herleitung eines generellen Verfahrens zum rekursiven Verbessern im KaImanfilter mit impliziten Beobachtungsgleichungen. Wir zeigen, dass die klassischen Verfahren im Kalmanfilter eine Spezialisierung unseres Ansatzes darstellen. Im zweite Beitrag erweitern wir die klassische Modellierung f\"ur ein Einkameramodell im Kalmanfilter und formulieren linear berechenbare Bewegungsmodelle. Neben verschiedenen Verfahren zur Initialisierung von Neupunkten im Kalmanfilter aus der Literatur stellen wir in einem dritten Hauptbeitrag ein neues Verfahren vor. Am Beispiel von Bildfolgen eines unbemannten Flugobjektes zeigen wir in dieser Arbeit als vierten Beitrag, welche Genauigkeit zur Lokalisierung und Kartierung durch Triangulation m\"oglich ist. 
Schlie{\ss}lich wird anhand von empirischen Untersuchungen unter Verwendung simulierter und realer Daten einer Bildfolge eines photogrammetrischen Streifens gezeigt und verglichen, welchen Einflu{\ss} die Initialisierungsmethoden f\"ur Neupunkte im Kalmanfilter haben und welche Genauigkeiten f\"ur diese Szenarien erreichbar sind.}
    }
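The thesis's first contribution generalizes the recursive Kalman filter update to implicit observation equations. As a point of reference, here is a minimal sketch of the classical explicit measurement update that is generalized there; the matrices and data below are purely illustrative, not taken from the thesis.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Classical linear Kalman filter measurement update.

    x: state estimate (n,), P: state covariance (n, n),
    z: measurement (m,), H: observation matrix (m, n), R: noise cov (m, m).
    """
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Example: estimate a 2D position from one direct, noisy position measurement,
# starting from a very uncertain prior.
x = np.zeros(2)
P = np.eye(2) * 10.0
z = np.array([1.0, 2.0])
H = np.eye(2)
R = np.eye(2) * 0.1
x, P = kalman_update(x, P, z, H, R)
```

With the large prior uncertainty above, the updated state moves almost all the way to the measurement and the covariance shrinks accordingly; an implicit-equation formulation replaces `z - H @ x` by a general constraint `g(x, z) = 0` linearized in both arguments.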

  • H. Strasdat, C. Stachniss, and W. Burgard, “Which Landmark is Useful? Learning Selection Policies for Navigation in Unknown Environments,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Kobe, Japan, 2009.
    [BibTeX]
    [none]
    @InProceedings{Strasdat2009,
    Title = {Which Landmark is Useful? Learning Selection Policies for Navigation in Unknown Environments},
    Author = {H. Strasdat and Stachniss, C. and Burgard, W.},
    Booktitle = icra,
    Year = {2009},
    Address = {Kobe, Japan},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

  • J. Sturm, V. Pradeep, C. Stachniss, C. Plagemann, K. Konolige, and W. Burgard, “Learning Kinematic Models for Articulated Objects,” in Proceedings of the Int. Joint Conf. on Artificial Intelligence (IJCAI) , Pasadena, CA, USA, 2009.
    [BibTeX]
    [none]
    @InProceedings{Sturm2009a,
    Title = {Learning Kinematic Models for Articulated Objects},
    Author = {J. Sturm and V. Pradeep and Stachniss, C. and C. Plagemann and K. Konolige and Burgard, W.},
    Booktitle = ijcai,
    Year = {2009},
    Address = {Pasadena, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

  • J. Sturm, C. Stachniss, V. Pradeep, C. Plagemann, K. Konolige, and W. Burgard, “Learning Kinematic Models for Articulated Objects,” in Online Proc. of the Learning Workshop (Snowbird) , Clearwater, FL, USA, 2009.
    [BibTeX]
    [none]
    @InProceedings{Sturm2009,
    Title = {Learning Kinematic Models for Articulated Objects},
    Author = {J. Sturm and Stachniss, C. and V. Pradeep and C. Plagemann and K. Konolige and Burgard, W.},
    Booktitle = {Online Proc. of the Learning Workshop (Snowbird)},
    Year = {2009},
    Address = {Clearwater, FL, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

  • J. Sturm, C. Stachniss, V. Pradeep, C. Plagemann, K. Konolige, and W. Burgard, “Towards Understanding Articulated Objects,” in Workshop Integrating Mobility and Manipulation at Robotics: Science and Systems (RSS) , Seattle, WA, USA, 2009.
    [BibTeX]
    [none]
    @InProceedings{Sturm2009b,
    Title = {Towards Understanding Articulated Objects},
    Author = {J. Sturm and Stachniss, C. and V. Pradeep and C. Plagemann and K. Konolige and Burgard, W.},
    Booktitle = {Workshop Integrating Mobility and Manipulation at Robotics: Science and Systems (RSS)},
    Year = {2009},
    Address = {Seattle, WA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

  • J. R. Sveinsson, B. Waske, and J. A. Benediktsson, “Speckle reduction of TerraSAR-X imagery using TV segmentation,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) , 2009. doi:10.1109/IGARSS.2009.5417412
    [BibTeX]
    The nonsubsampled contourlet transform (NSCT) is a new image representation approach that has a sparser representation at both spatial and directional resolution and thus captures smooth contours in images. On the other hand, the wavelet transform has a sparser representation of homogeneous areas. In this paper, we use three combinations of undecimated wavelet and nonsubsampled contourlet transforms that have previously been used for denoising of TerraSAR-X images. Two of the methods use the undecimated wavelet transform to denoise homogeneous areas and the nonsubsampled contourlet transform to denoise areas with edges. The segmentation between homogeneous areas and areas with edges is done using total variation segmentation. The third method is a linear averaging of the two denoising methods. Thresholding in the wavelet and contourlet domains is done by non-linear functions adapted to each selected subband; the non-linear functions are based on sigmoid functions. Simulation results suggest that these denoising schemes achieve good, clean images.

    @InProceedings{Sveinsson2009Speckle,
    Title = {Speckle reduction of TerraSAR-X imagery using TV segmentation},
    Author = {Sveinsson, J.R. and Waske, Bj\"orn and Benediktsson, J.A.},
    Booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2009},
    Abstract = {The nonsubsampled contourlet transform (NSCT) is a new image representation approach that has sparser representation at both spatial and directional resolution and thus captures smooth contours in images. On the other hand, wavelet transform has sparser representation of homogeneous areas. In this paper, we are going to use the three combinations of undecimated wavelet and nonsubsampled contourlet transforms that was used in for denoising of TerraSAR-X images. Two of the methods use the undecimated wavelet transform to de-noise homogeneous areas and the nonsubsampled contourlet transform to denoise areas with edges. The segmentation between homogeneous areas and areas with edges is done by using total variation segmentation. The third method is a linear averaging of the two denoising methods. A thresholding in the wavelet and contourlet domain is done by non-linear functions which are adapted for each selected subband. The non-linear functions are based on sigmoid functions. Simulation results suggested that these denoising schemes achieve good and clean images.},
    Doi = {10.1109/IGARSS.2009.5417412},
    Keywords = {TV segmentation;TerraSAR-X imagery;directional resolution;image contours;image denoising;image representation;image segmentation;linear averaging;nonlinear functions;nonsubsampled contourlet transforms;sigmoid functions;spatial resolution;speckle reduction;total variation segmentation;undecimated wavelet transforms;feature extraction;geophysical image processing;geophysical techniques;image denoising;image representation;image resolution;image segmentation;radar imaging;remote sensing by radar;synthetic aperture radar;wavelet transforms;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • T. Udelhoven, S. van der Linden, B. Waske, M. Stellmes, and L. Hoffmann, “Hypertemporal Classification of Large Areas Using Decision Fusion,” IEEE Geoscience and Remote Sensing Letters, vol. 6, iss. 3, pp. 592-596, 2009. doi:10.1109/LGRS.2009.2021960
    [BibTeX]
    A novel multiannual land-cover-classification scheme for classifying hypertemporal image data is suggested, which is based on a supervised decision fusion (DF) approach. This DF approach comprises two steps: First, separate support vector machines (SVMs) are trained for normalized difference vegetation index (NDVI) time-series and mean annual temperature values of three consecutive years. In the second step, the information of the preliminary continuous SVM outputs, which represent posterior probabilities of the class assignments, is fused using a second-level SVM classifier. We tested the approach using the 10-day maximum-value NDVI composites from the "Mediterranean Extended Daily one-km Advanced Very High Resolution Radiometer Data Set" (MEDOKADS). The approach increases the classification accuracy and robustness compared with another DF method (simple majority voting) and with a single SVM expert that is trained for the same multiannual periods. The results clearly demonstrate that DF is a reliable technique for large-area mapping using hypertemporal data sets.

    @Article{Udelhoven2009Hypertemporal,
    Title = {Hypertemporal Classification of Large Areas Using Decision Fusion},
    Author = {Udelhoven, Thomas and van der Linden, Sebastian and Waske, Bj\"orn and Stellmes, Marion and Hoffmann, Lucien},
    Journal = {IEEE Geoscience and Remote Sensing Letters},
    Year = {2009},
    Month = jul,
    Number = {3},
    Pages = {592--596},
    Volume = {6},
    Abstract = {A novel multiannual land-cover-classification scheme for classifying hypertemporal image data is suggested, which is based on a supervised decision fusion (DF) approach. This DF approach comprises two steps: First, separate support vector machines (SVMs) are trained for normalized difference vegetation index (NDVI) time-series and mean annual temperature values of three consecutive years. In the second step, the information of the preliminary continuous SVM outputs, which represent posterior probabilities of the class assignments, is fused using a second-level SVM classifier. We tested the approach using the 10-day maximum-value NDVI composites from the "Mediterranean Extended Daily one-km Advanced Very High Resolution Radiometer Data Set" (MEDOKADS). The approach increases the classification accuracy and robustness compared with another DF method (simple majority voting) and with a single SVM expert that is trained for the same multiannual periods. The results clearly demonstrate that DF is a reliable technique for large-area mapping using hypertemporal data sets.},
    Doi = {10.1109/LGRS.2009.2021960},
    Owner = {waske},
    Sn = {1545-598X},
    Tc = {2},
    Timestamp = {2012.09.04},
    Ut = {WOS:000267764800048},
    Z8 = {0},
    Z9 = {2},
    Zb = {0}
    }
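The two-step decision-fusion scheme described in the abstract above — per-year experts producing class posteriors, which a second-level classifier then fuses — can be sketched structurally in a few lines. In this illustration, simple nearest-mean classifiers stand in for the paper's SVMs, and the "NDVI time series" are synthetic; none of this is the MEDOKADS data or the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_nearest_mean(X, y):
    """Level-0/level-1 stand-in classifier: one mean vector per class."""
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def posteriors(means, X):
    """Softmax over negative distances as a crude class-posterior estimate."""
    d = np.stack([np.linalg.norm(X - m, axis=1) for m in means], axis=1)
    e = np.exp(-(d - d.min(axis=1, keepdims=True)))  # shift for stability
    return e / e.sum(axis=1, keepdims=True)

# Synthetic per-year feature blocks (3 "years", 4 features each).
n = 200
y = rng.integers(0, 2, n)
years = [rng.normal(y[:, None] * (i + 1) * 0.8, 2.0, (n, 4)) for i in range(3)]

# Step 1: train one expert per year, collect its posterior outputs.
experts = [fit_nearest_mean(Xy, y) for Xy in years]
meta = np.hstack([posteriors(m, Xy) for m, Xy in zip(experts, years)])

# Step 2: fuse the stacked posteriors with a second-level classifier.
fusion = fit_nearest_mean(meta, y)
pred = posteriors(fusion, meta).argmax(axis=1)
acc = np.mean(pred == y)
```

The key structural point matches the paper's pipeline: the second-level classifier sees only the experts' posterior probabilities, so it can learn which year's evidence to trust, unlike a fixed rule such as simple majority voting.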

  • S. Valero, J. Chanussot, J. A. Benediktsson, H. Talbot, and B. Waske, “Directional mathematical morphology for the detection of the road network in Very High Resolution remote sensing images,” in 16th IEEE International Conference on Image Processing (ICIP) , 2009. doi:10.1109/ICIP.2009.5414344
    [BibTeX]
    This paper presents a new method for extracting roads in Very High Resolution remotely sensed images based on advanced directional morphological operators. The proposed approach introduces the use of Path Openings and Closings in order to extract structural pixel information. These morphological operators remain flexible enough to fit rectilinear and slightly curved structures since they do not depend on the choice of a structural element shape and hence outperform standard approaches using rotating rectangular structuring elements. The method consists in building a granulometry chain using Path Openings and Closing to perform Morphological Profiles. For each pixel, the Morphological Profile constitutes the feature vector on which our road extraction is based.

    @InProceedings{Valero2009Directional,
    Title = {Directional mathematical morphology for the detection of the road network in Very High Resolution remote sensing images},
    Author = {Valero, S. and Chanussot, J. and Benediktsson, J.A. and Talbot, H. and Waske, Bj\"orn},
    Booktitle = {16th IEEE International Conference on Image Processing (ICIP)},
    Year = {2009},
    Abstract = {This paper presents a new method for extracting roads in Very High Resolution remotely sensed images based on advanced directional morphological operators. The proposed approach introduces the use of Path Openings and Closings in order to extract structural pixel information. These morphological operators remain flexible enough to fit rectilinear and slightly curved structures since they do not depend on the choice of a structural element shape and hence outperform standard approaches using rotating rectangular structuring elements. The method consists in building a granulometry chain using Path Openings and Closing to perform Morphological Profiles. For each pixel, the Morphological Profile constitutes the feature vector on which our road extraction is based.},
    Doi = {10.1109/ICIP.2009.5414344},
    ISSN = {1522-4880},
    Keywords = {directional mathematical morphology;morphological profiles;path closings;path openings;rectilinear structures;road network detection;slightly curved structures;structural pixel information;very high resolution remote sensing images;geophysical image processing;geophysical techniques;mathematical morphology;remote sensing;roads;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • X. Wang, B. Waske, and J. A. Benediktsson, “Ensemble methods for spectral-spatial classification of urban hyperspectral data,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) , 2009. doi:10.1109/IGARSS.2009.5417534
    [BibTeX]
    Classification of hyperspectral data with high spatial resolution from urban areas is investigated. The approach is an extension of existing approaches, using both spectral and spatial information for classification. The spatial information is derived by mathematical morphology and principal components of the hyperspectral data set, generating a set of different morphological profiles. The whole data set is classified by the Random Forest algorithm. However, the computational complexity as well as the increased dimensionality and redundancy of data sets based on morphological profiles are potential drawbacks. Thus, in the presented study, feature selection is applied, using nonparametric weighted feature extraction and the variable importance of the random forests. The proposed approach is applied to ROSIS data from an urban area. The experimental results demonstrate that a feature reduction is useful in terms of accuracy. Moreover, the proposed approach also shows excellent results with a limited training set.

    @InProceedings{Wang2009Ensemble,
    Title = {Ensemble methods for spectral-spatial classification of urban hyperspectral data},
    Author = {Xin-Lu Wang and Waske, Bj\"orn and Benediktsson, J.A.},
    Booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2009},
    Abstract = {Classification of hyperspectral data with high spatial resolution from urban areas is investigated. The approach is an extension of existing approaches, using both spectral and spatial information for classification. The spatial information is derived by mathematical morphology and principal components of the hyperspectral data set, generating a set of different morphological profiles. The whole data set is classified by the Random Forest algorithm. However, the computational complexity as well as the increased dimensionality and redundancy of data sets based on morphological profiles are potential drawbacks. Thus, in the presented study, feature selection is applied, using nonparametric weighted feature extraction and the variable importance of the random forests. The proposed approach is applied to ROSIS data from an urban area. The experimental results demonstrate that a feature reduction is useful in terms of accuracy. Moreover, the proposed approach also shows excellent results with a limited training set.},
    Doi = {10.1109/IGARSS.2009.5417534},
    Keywords = {ROSIS data;computational complexity;data dimensionality;data redundancy;ensemble methods;feature selection;hyperspectral data classification;mathematical morphology;nonparametric weighted feature extraction;principal component analysis;random forest algorithm;spatial information classification;spectral information classification;urban hyperspectral data;decision trees;feature extraction;geophysical image processing;image classification;principal component analysis;remote sensing;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • B. Waske, J. A. Benediktsson, K. Arnason, and J. R. Sveinsson, “Mapping of hyperspectral AVIRIS data using machine-learning algorithms,” Canadian Journal of Remote Sensing, vol. 35, pp. 106-116, 2009. doi:10.5589/m09-018
    [BibTeX]
    Hyperspectral imaging provides detailed spectral and spatial information from the land cover that enables a precise differentiation between various surface materials. On the other hand, the performance of traditional and widely used statistical classification methods is often limited in this context, and thus alternative methods are required. In the study presented here, the performance of two machine-learning techniques, namely support vector machines (SVMs) and random forests (RFs), is investigated and the classification results are compared with those from well-known methods (i.e., maximum likelihood classifier and spectral angle mapper). The classifiers are applied to an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) dataset that was acquired near the Hekla volcano in Iceland. The results clearly show the advantages of the two proposed classifier algorithms in terms of accuracy. They significantly outperform the other methods and achieve overall accuracies of approximately 90%. Although SVM and RF show some diversity in the classification results, the global performance of the two classifiers is very similar. Thus, both methods can be considered attractive for the classification of hyperspectral data.

    @Article{Waske2009Mapping,
    Title = {Mapping of hyperspectral AVIRIS data using machine-learning algorithms},
    Author = {Waske, Bj\"orn and Benediktsson, Jon Atli and Arnason, Kolbeinn and Sveinsson, Johannes R.},
    Journal = {Canadian Journal of Remote Sensing},
    Year = {2009},
    Pages = {106--116},
    Volume = {35},
    Abstract = {Hyperspectral imaging provides detailed spectral and spatial information from the land cover that enables a precise differentiation between various surface materials. on the other hand, the performance of traditional and widely used statistical classification methods is often limited in this context, and thus alternative methods are required. In the study presented here, the performance of two machine-learning techniques, namely support vector machines (SVMs) and random forests (RFs), is investigated and the classification results are compared with those from well-known methods (i.e., maximum likelihood classifier and spectral angle mapper). The classifiers are applied to an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) dataset that was acquired near the Hekla volcano in Iceland. The results clearly show the advantages of the two proposed classifier algorithms in terms of accuracy. They significantly outperform the other methods and achieve overall accuracies of approximately 90%. Although SVM and RF show some diversity in the classification results, the global performance of the two classifiers is very similar. Thus, both methods can be considered attractive for the classification of hyperspectral data.},
    Doi = {10.5589/m09-018},
    Owner = {waske},
    Si = {SP},
    Sn = {1712-7971},
    Su = {1},
    Tc = {3},
    Timestamp = {2012.09.04},
    Ut = {WOS:000275720100008},
    Z8 = {1},
    Z9 = {3},
    Zb = {1}
    }

  • B. Waske, J. A. Benediktsson, and J. R. Sveinsson, “Fusion of multisource data sets from agricultural areas for improved land cover classification,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) , 2009. doi:10.1109/IGARSS.2009.5417536
    [BibTeX]
    An approach for spectral-spatial classification of multisource remote sensing data from agricultural areas is addressed. Mathematical morphology is used to derive the spatial information from the data sets. The different data sources (i.e., SAR and multispectral) are classified by support vector machines (SVM). Afterwards, the SVM outputs are transferred to probability measurements. These probability values are combined by different fusion strategies, to derive the final classification result. Comparing the results based on mathematical morphology the total accuracy increased by 6% compared to the pure-pixel classification results. Moreover the transfer of the SVM outputs into probability values and the subsequent fusion further increases the classification accuracy, resulting in an accuracy of 78.5%.

    @InProceedings{Waske2009Fusion,
    Title = {Fusion of multisource data sets from agricultural areas for improved land cover classification},
    Author = {Waske, Bj\"orn and Benediktsson, J.A. and Sveinsson, J.R.},
    Booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2009},
    Abstract = {An approach for spectral-spatial classification of multisource remote sensing data from agricultural areas is addressed. Mathematical morphology is used to derive the spatial information from the data sets. The different data sources (i.e., SAR and multispectral) are classified by support vector machines (SVM). Afterwards, the SVM outputs are transferred to probability measurements. These probability values are combined by different fusion strategies, to derive the final classification result. Comparing the results based on mathematical morphology the total accuracy increased by 6% compared to the pure-pixel classification results. Moreover the transfer of the SVM outputs into probability values and the subsequent fusion further increases the classification accuracy, resulting in an accuracy of 78.5%.},
    Doi = {10.1109/IGARSS.2009.5417536},
    Keywords = {SAR remote sensing data;SVM;agricultural land cover classification;mathematical morphology;multisource data sets;multisource remote sensing data;multispectral remote sensing data;probability measurements;pure-pixel classification;spectral-spatial classification;support vector machines;geophysical image processing;geophysical techniques;image classification;mathematical morphology;remote sensing by radar;support vector machines;synthetic aperture radar;terrain mapping;vegetation mapping;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • B. Waske and M. Braun, “Classifier ensembles for land cover mapping using multitemporal SAR imagery,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 64, iss. 5, pp. 450-457, 2009. doi:10.1016/j.isprsjprs.2009.01.003
    [BibTeX]
    SAR data are almost independent of weather conditions, and thus are well suited for mapping seasonally changing variables such as land cover. In regard to recent and upcoming missions, multitemporal and multi-frequency approaches become even more attractive. In the present study, classifier ensembles (i.e., boosted decision trees and random forests) are applied to multi-temporal C-band SAR data from different study sites and years. A detailed accuracy assessment shows that classifier ensembles, in particular random forests, outperform standard approaches like a single decision tree and a conventional maximum likelihood classifier by more than 10%, independently of the site and year. They reach up to almost 84% overall accuracy in rural areas with large plots. Visual interpretation confirms the statistical accuracy assessment and reveals that typical random noise is also considerably reduced. In addition, the results demonstrate that random forests are less sensitive to the number of training samples and perform well even with only a small number. Random forests are computationally highly efficient and are hence considered very well suited for land cover classifications of future multifrequency and multitemporal stacks of SAR imagery. (C) 2009 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.

    @Article{Waske2009Classifier,
    Title = {Classifier ensembles for land cover mapping using multitemporal SAR imagery},
    Author = {Waske, Bj\"orn and Braun, Matthias},
    Journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
    Year = {2009},
    Month = sep,
    Number = {5},
    Pages = {450--457},
    Volume = {64},
    Abstract = {SAR data are almost independent from weather conditions, and thus are well suited for mapping of seasonally changing variables such as land cover. In regard to recent and upcoming missions, multitemporal and multi-frequency approaches become even more attractive. In the present study, classifier ensembles (i.e., boosted decision tree and random forests) are applied to multi-temporal C-band SAR data, from different study sites and years. A detailed accuracy assessment shows that classifier ensembles, in particular random forests, outperform standard approaches like a single decision tree and a conventional maximum likelihood classifier by more than 10% independently from the site and year. They reach up to almost 84% of overall accuracy in rural areas with large plots. Visual interpretation confirms the statistical accuracy assessment and reveals that also typical random noise is considerably reduced. In addition, the results demonstrate that random forests are less sensitive to the number of training samples and perform well even with only a small number. Random forests are computationally highly efficient and are hence considered very well suited for land cover classifications of future multifrequency and multitemporal stacks of SAR imagery. (C) 2009 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.},
    Doi = {10.1016/j.isprsjprs.2009.01.003},
    Owner = {waske},
    Sn = {0924-2716},
    Tc = {10},
    Timestamp = {2012.09.04},
    Ut = {WOS:000273381000003},
    Z8 = {0},
    Z9 = {10},
    Zb = {2}
    }

  • B. Waske, M. Chi, J. A. Benediktsson, S. van der Linden, and B. Koetz, “Algorithms and Applications for Land Cover Classification - A Review,” in Geospatial Technology for Earth Observation, D. Li, J. Shan, and J. Gong, Eds., Springer US, 2009, pp. 203-233. doi:10.1007/978-1-4419-0050-0_8
    [BibTeX]
    During the last decades the manner in which the Earth is being observed was revolutionized. Earth Observation (EO) systems became a valuable and powerful tool to monitor the Earth and had significant impact on the acquisition and analysis of environmental data (Rosenquist et al. 2003). Currently, EO data play a major role in supporting decision-making and surveying compliance of several multilateral environmental treaties, such as the Kyoto Protocol, the Convention on Biological Diversity, or the European initiative Global Monitoring for Environment and Security, GMES (Peter 2004, Rosenquist et al. 2003, Backhaus and Beule 2005). However, the need for such long-term monitoring of the Earth’s surface requires the standardized and coordinated use of global EO data sets, which has led, e.g., to the international Global Earth Observation System of Systems (GEOSS) initiative as well as to the Global Climate Observation System (GCOS) implementation plan (GCOS 2004, GEO 2005). The evolving EO technologies together with the requirements and standards arising from their exploitation demand increasingly improving algorithms, especially in the field of land cover classification.

    @InBook{Waske2009Geospatial,
    Title = {Geospatial Technology for Earth Observation},
    Author = {Waske, Bj\"orn and Chi, Mingmin and Benediktsson, Jon Atli and van der Linden, Sebastian and Koetz, Benjamin},
    Chapter = {Algorithms and Applications for Land Cover Classification - A Review},
    Editor = {Li, Deren and Shan, Jie and Gong, Jianya},
    Pages = {203--233},
    Publisher = {Springer US},
    Year = {2009},
    Abstract = {During the last decades the manner in which the Earth is being observed was revolutionized. Earth Observation (EO) systems became a valuable and powerful tool to monitor the Earth and had significant impact on the acquisition and analysis of environmental data (Rosenquist et al. 2003). Currently, EO data play a major role in supporting decision-making and surveying compliance of several multilateral environmental treaties, such as the Kyoto Protocol, the Convention on Biological Diversity, or the European initiative Global Monitoring for Environment and Security, GMES (Peter 2004, Rosenquist et al. 2003, Backhaus and Beule 2005). However, the need for such long-term monitoring of the Earth's surface requires the standardized and coordinated use of global EO data sets, which has led, e.g., to the international Global Earth Observation System of Systems (GEOSS) initiative as well as to the Global Climate Observation System (GCOS) implementation plan (GCOS 2004, GEO 2005). The evolving EO technologies together with the requirements and standards arising from their exploitation demand increasingly improving algorithms, especially in the field of land cover classification.},
    Affiliation = {Faculty of Electrical and Computer Engineering, University of Iceland, 107 Reykjavik, Iceland},
    Booktitle = {Geospatial Technology for Earth Observation},
    Doi = {10.1007/978-1-4419-0050-0_8},
    ISBN = {978-1-4419-0050-0},
    Keyword = {Earth and Environmental Science},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • B. Waske, M. Fauvel, J. A. Benediktsson, and J. Chanussot, “Machine Learning Techniques in Remote Sensing Data Analysis,” in Kernel Methods for Remote Sensing Data Analysis, G. Camps-Valls and L. Bruzzone, Eds., John Wiley & Sons, Ltd, 2009, pp. 1-24. doi:10.1002/9780470748992.ch1
    [BibTeX]
    Several applications have been developed in the field of remote sensing image analysis during the last decades. Besides well-known statistical approaches, many recent methods are based on techniques taken from the field of machine learning. A major aim of machine learning algorithms in remote sensing is supervised classification, which is perhaps the most widely used image classification approach. In this chapter a brief introduction to machine learning and the different paradigms in remote sensing is given. Moreover this chapter briefly discusses the use of recent developments in supervised classification techniques such as neural networks, support vector machines and multiple classifier systems.

    @InBook{Waske2009Machine,
    Title = {Machine Learning Techniques in Remote Sensing Data Analysis},
    Author = {Waske, Bj\"orn and Fauvel, Mathieu and Benediktsson, Jon Atli and Chanussot, Jocelyn},
    Chapter = {Machine Learning Techniques in Remote Sensing Data Analysis},
    Editor = {Camps-Valls, Gustavo and Bruzzone, Lorenzo},
    Pages = {1--24},
    Publisher = {John Wiley \& Sons, Ltd},
    Year = {2009},
    Abstract = {Several applications have been developed in the field of remote sensing image analysis during the last decades. Besides well-known statistical approaches, many recent methods are based on techniques taken from the field of machine learning. A major aim of machine learning algorithms in remote sensing is supervised classification, which is perhaps the most widely used image classification approach. In this chapter a brief introduction to machine learning and the different paradigms in remote sensing is given. Moreover this chapter briefly discusses the use of recent developments in supervised classification techniques such as neural networks, support vector machines and multiple classifier systems.},
    Booktitle = {Kernel Methods for Remote Sensing Data Analysis},
    Doi = {10.1002/9780470748992.ch1},
    ISBN = {9780470748992},
    Keywords = {machine learning techniques in remote sensing data analysis, machine learning algorithms in remote sensing and supervised classification, remote sensing challenges, machine learning (ML) - artificial intelligence area and learning from data, remote sensing paradigms, feature extraction and feature selection and dimensionality reduction, Tasseled Cap Transformation, ISODATA (iterative self-organizing data analysis), neural networks (NN) in pattern recognition and remote sensing context, development in field of (supervised) classification machine learning concepts},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • B. Waske, S. van der Linden, J. A. Benediktsson, A. Rabe, and P. Hostert, “Impact of different morphological profiles on the classification accuracy of urban hyperspectral data,” in First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS) , 2009. doi:10.1109/WHISPERS.2009.5289078
    [BibTeX]
    We present a detailed study on the classification of urban hyperspectral data with morphological profiles (MP). Although such a spectral-spatial classification approach may significantly increase achieved accuracy, the computational complexity as well as the increased dimensionality and redundancy of such data sets are potential drawbacks. This can be overcome by feature selection. Moreover it is useful to derive detailed information on the contribution of different components from MP to the classification accuracy by evaluating these subsets. We apply a wrapper approach for feature selection based on support vector machines (SVM) with sequential feature forward selection (FFS) search strategy to two hyperspectral data sets that contain the first principal components (PC) and various corresponding MP from an urban area. In doing so, we identify feature subsets of increasing size that perform best in terms of kappa for the given setup. Results clearly demonstrate that maximum classification accuracies are achieved already on small feature subsets with few morphological profiles.

    @InProceedings{Waske2009Impact,
    Title = {Impact of different morphological profiles on the classification accuracy of urban hyperspectral data},
    Author = {Waske, Bj\"orn and van der Linden, S. and Benediktsson, J.A. and Rabe, A. and Hostert, P.},
    Booktitle = {First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)},
    Year = {2009},
    Abstract = {We present a detailed study on the classification of urban hyperspectral data with morphological profiles (MP). Although such a spectral-spatial classification approach may significantly increase achieved accuracy, the computational complexity as well as the increased dimensionality and redundancy of such data sets are potential drawbacks. This can be overcome by feature selection. Moreover it is useful to derive detailed information on the contribution of different components from MP to the classification accuracy by evaluating these subsets. We apply a wrapper approach for feature selection based on support vector machines (SVM) with sequential feature forward selection (FFS) search strategy to two hyperspectral data sets that contain the first principal components (PC) and various corresponding MP from an urban area. In doing so, we identify feature subsets of increasing size that perform best in terms of kappa for the given setup. Results clearly demonstrate that maximum classification accuracies are achieved already on small feature subsets with few morphological profiles.},
    Doi = {10.1109/WHISPERS.2009.5289078},
    Keywords = {FFS search;computational complexity;feature forward selection;hyperspectral image;mathematical morphology;morphological profile;principal component;spectral-spatial classification;support vector machine;urban hyperspectral data classification;wrapper approach;feature extraction;image classification;mathematical morphology;principal component analysis;support vector machines;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • S. Wenzel and W. Förstner, “The Role of Sequences for Incremental Learning,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2009-04, 2009.
    [BibTeX] [PDF]
    This report points out the role of sequences of samples for training an incremental learning method. We define characteristics of incremental learning methods to describe the influence of sample ordering on the performance of a learned model. Different types of experiments evaluate these properties for two different datasets and two different incremental learning methods. We show how to find sequences of classes for training just based on the data to get always best possible error rates. This is based on the estimation of Bayes error bounds.

    @TechReport{Wenzel2009Role,
    Title = {The Role of Sequences for Incremental Learning},
    Author = {Wenzel, Susanne and F\"orstner, Wolfgang},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2009},
    Month = oct,
    Number = {TR-IGG-P-2009-04},
    Abstract = {This report points out the role of sequences of samples for training an incremental learning method. We define characteristics of incremental learning methods to describe the influence of sample ordering on the performance of a learned model. Different types of experiments evaluate these properties for two different datasets and two different incremental learning methods. We show how to find sequences of classes for training just based on the data to get always best possible error rates. This is based on the estimation of Bayes error bounds.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Wenzel2009Role.pdf}
    }

  • K. M. Wurm, R. Kuemmerle, C. Stachniss, and W. Burgard, “Improving Robot Navigation in Structured Outdoor Environments by Identifying Vegetation from Laser Data,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , 2009.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Wurm2009,
    Title = {Improving Robot Navigation in Structured Outdoor Environments by Identifying Vegetation from Laser Data},
    Author = {K.M. Wurm and R. Kuemmerle and Stachniss, C. and Burgard, W.},
    Booktitle = iros,
    Year = {2009},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/wurm09iros.pdf}
    }

  • M. Y. Yang, “Multiregion Level-set Segmentation of Synthetic Aperture Radar Images,” in IEEE International Conference on Image Processing , Cairo, 2009, pp. 1717-1720. doi:10.1109/ICIP.2009.5413378
    [BibTeX] [PDF]
    Due to the presence of speckle, segmentation of SAR images is generally acknowledged as a difficult problem. A large effort has been done in order to cope with the influence of speckle noise on image segmentation such as edge detection or direct global segmentation. Recent works address this problem by using statistical image representation and deformable models. We suggest a novel variational approach to SAR image segmentation, which consists of minimizing a functional containing an original observation term derived from maximum a posteriori (MAP) estimation framework and a Gamma image representation. The minimization is carried out efficiently by a new multiregion method which embeds a simple partition assumption directly in curve evolution to guarantee a partition of the image domain from an arbitrary initial partition. Experiments on both synthetic and real images show the effectiveness of the proposed method.

    @InProceedings{Yang2009Multiregion,
    Title = {Multiregion Level-set Segmentation of Synthetic Aperture Radar Images},
    Author = {Yang, Michael Ying},
    Booktitle = {IEEE International Conference on Image Processing},
    Year = {2009},
    Address = {Cairo},
    Pages = {1717--1720},
    Abstract = {Due to the presence of speckle, segmentation of SAR images is generally acknowledged as a difficult problem. A large effort has been done in order to cope with the influence of speckle noise on image segmentation such as edge detection or direct global segmentation. Recent works address this problem by using statistical image representation and deformable models. We suggest a novel variational approach to SAR image segmentation, which consists of minimizing a functional containing an original observation term derived from maximum a posteriori (MAP) estimation framework and a Gamma image representation. The minimization is carried out efficiently by a new multiregion method which embeds a simple partition assumption directly in curve evolution to guarantee a partition of the image domain from an arbitrary initial partition. Experiments on both synthetic and real images show the effectiveness of the proposed method.},
    Doi = {10.1109/ICIP.2009.5413378},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Yang2009Multiregion.pdf}
    }

  • Y. Yang, “Remote sensing image registration via active contour model,” International Journal of Electronics and Communications, vol. 65, pp. 227-234, 2009. doi:10.1016/j.aeue.2008.01.003
    [BibTeX]
    Image registration is the process by which we determine a transformation that provides the most accurate match between two images. The search for the matching transformation can be automated with the use of a suitable metric, but it can be very time-consuming and tedious. In this paper, we introduce a registration algorithm that combines active contour segmentation together with mutual information. Our approach starts with a segmentation procedure. It is formed by a novel geometric active contour, which incorporates edge knowledge, namely Edgeflow, into active contour model. Two edgemap images filled with closed contours are obtained. After ruling out mismatched curves, we use mutual information (MI) as a similarity measure to register two edgemap images. Experimental results are provided to illustrate the performance of the proposed registration algorithm using both synthetic and multisensor images. Quantitative error analysis is also provided and several images are shown for subjective evaluation.

    @Article{Yang2009Remote,
    Title = {Remote sensing image registration via active contour model},
    Author = {Yang, Ying},
    Journal = {International Journal of Electronics and Communications},
    Year = {2009},
    Pages = {227--234},
    Volume = {65},
    Abstract = {Image registration is the process by which we determine a transformation that provides the most accurate match between two images. The search for the matching transformation can be automated with the use of a suitable metric, but it can be very time-consuming and tedious. In this paper, we introduce a registration algorithm that combines active contour segmentation together with mutual information. Our approach starts with a segmentation procedure. It is formed by a novel geometric active contour, which incorporates edge knowledge, namely Edgeflow, into active contour model. Two edgemap images filled with closed contours are obtained. After ruling out mismatched curves, we use mutual information (MI) as a similarity measure to register two edgemap images. Experimental results are provided to illustrate the performance of the proposed registration algorithm using both synthetic and multisensor images. Quantitative error analysis is also provided and several images are shown for subjective evaluation.},
    Doi = {10.1016/j.aeue.2008.01.003}
    }

2008

  • C. Beder and R. Steffen, “Incremental estimation without specifying a-priori covariance matrices for the novel parameters,” in VLMP Workshop on CVPR , Anchorage, USA, 2008. doi:10.1109/CVPRW.2008.4563139
    [BibTeX] [PDF]
    We will present a novel incremental algorithm for the task of online least-squares estimation. Our approach aims at combining the accuracy of least-squares estimation and the fast computation of recursive estimation techniques like the Kalman filter. Analyzing the structure of least-squares estimation we devise a novel incremental algorithm, which is able to introduce new unknown parameters and observations into an estimation simultaneously and is equivalent to the optimal overall estimation in case of linear models. It constitutes a direct generalization of the well-known Kalman filter allowing to augment the state vector inside the update step. In contrast to classical recursive estimation techniques no artificial initial covariance for the new unknown parameters is required here. We will show, how this new algorithm allows more flexible parameter estimation schemes especially in the case of scene and motion reconstruction from image sequences. Since optimality is not guaranteed in the non-linear case we will also compare our incremental estimation scheme to the optimal bundle adjustment on a real image sequence. It will be shown that competitive results are achievable using the proposed technique.

    @InProceedings{Beder2008Incremental,
    Title = {Incremental estimation without specifying a-priori covariance matrices for the novel parameters},
    Author = {Beder, Christian and Steffen, Richard},
    Booktitle = {VLMP Workshop on CVPR},
    Year = {2008},
    Address = {Anchorage, USA},
    Abstract = {We will present a novel incremental algorithm for the task of online least-squares estimation. Our approach aims at combining the accuracy of least-squares estimation and the fast computation of recursive estimation techniques like the Kalman filter. Analyzing the structure of least-squares estimation we devise a novel incremental algorithm, which is able to introduce new unknown parameters and observations into an estimation simultaneously and is equivalent to the optimal overall estimation in case of linear models. It constitutes a direct generalization of the well-known Kalman filter allowing to augment the state vector inside the update step. In contrast to classical recursive estimation techniques no artificial initial covariance for the new unknown parameters is required here. We will show, how this new algorithm allows more flexible parameter estimation schemes especially in the case of scene and motion reconstruction from image sequences. Since optimality is not guaranteed in the non-linear case we will also compare our incremental estimation scheme to the optimal bundle adjustment on a real image sequence. It will be shown that competitive results are achievable using the proposed technique.},
    Doi = {10.1109/CVPRW.2008.4563139},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Beder2008Incremental.pdf}
    }

  • J. A. Benediktsson, X. Ceamanos Garcia, B. Waske, J. Chanussot, J. R. Sveinsson, and M. Fauvel, “Ensemble Methods for Classification of Hyperspectral Data,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) , 2008. doi:10.1109/IGARSS.2008.4778793
    [BibTeX]
    The classification of hyperspectral data is addressed using a classifier ensemble based on Support Vector Machines (SVM). First of all, the hyperspectral data set is decomposed into few sources according to the spectral bands correlation. Then, each source is treated separately and classified by an SVM classifier. Finally, all outputs are used as inputs for the final decision fusion, performed by an additional SVM classifier. The results of experiments clearly show that the proposed SVM-based decision fusion outperforms a single SVM classifier in terms of overall accuracies.

    @InProceedings{Benediktsson2008Ensemble,
    Title = {Ensemble Methods for Classification of Hyperspectral Data},
    Author = {Benediktsson, Jon Atli and Ceamanos Garcia, X. and Waske, Bj\"orn and Chanussot, J. and Sveinsson, J.R. and Fauvel, M.},
    Booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2008},
    Abstract = {The classification of hyperspectral data is addressed using a classifier ensemble based on Support Vector Machines (SVM). First of all, the hyperspectral data set is decomposed into few sources according to the spectral bands correlation. Then, each source is treated separately and classified by an SVM classifier. Finally, all outputs are used as inputs for the final decision fusion, performed by an additional SVM classifier. The results of experiments clearly show that the proposed SVM-based decision fusion outperforms a single SVM classifier in terms of overall accuracies.},
    Doi = {10.1109/IGARSS.2008.4778793},
    Keywords = {Gaussian maximum likelihood method;SVM classifier;Support Vector Machines;decision fusion;ensemble classifier method;hyperspectral data classification;multisensor image classification;pattern recognition;spectral band correlation;geophysical techniques;geophysics computing;image classification;image processing;maximum likelihood estimation;pattern recognition;remote sensing;support vector machines;},
    Timestamp = {2012.09.05}
    }

  • T. Dickscheid, T. Läbe, and W. Förstner, “Benchmarking Automatic Bundle Adjustment Results,” in 21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS) , Beijing, China, 2008, pp. 7–12, Part B3a.
    [BibTeX] [PDF]
    In classical photogrammetry, point observations are manually determined by an operator for performing the bundle adjustment of a sequence of images. In such cases, a comparison of different estimates is usually carried out with respect to the estimated 3D object points. Today, a broad range of automatic methods are available for extracting and matching point features across images, even in the case of widely separated views and under strong deformations. This allows for fully automatic solutions to the relative orientation problem, and even to the bundle triangulation in case that manually measured control points are available. However, such systems often contain random subprocedures like RANSAC for eliminating wrong correspondences, yielding different 3D points but hopefully similar orientation parameters. This causes two problems for the evaluation: First, the randomness of the algorithm has an influence on its stability, and second, we are constrained to compare the orientation parameters instead of the 3D points. We propose a method for benchmarking automatic bundle adjustments which takes these constraints into account and uses the orientation parameters directly. Given sets of corresponding orientation parameters, we require our benchmark test to address their consistency of the form deviation and the internal precision and their precision level related to the precision of a reference data set. Besides comparing different bundle adjustment methods, the approach may be used to safely evaluate effects of feature operators, matching strategies, control parameters and other design decisions for a particular method. The goal of this paper is to derive appropriate measures to cover these aspects, describe a coherent benchmarking scheme and show the feasibility of the approach using real data.

    @InProceedings{Dickscheid2008Benchmarking,
    Title = {Benchmarking Automatic Bundle Adjustment Results},
    Author = {Dickscheid, Timo and L\"abe, Thomas and F\"orstner, Wolfgang},
    Booktitle = {21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS)},
    Year = {2008},
    Address = {Beijing, China},
    Pages = {7--12, Part B3a},
    Abstract = {In classical photogrammetry, point observations are manually determined by an operator for performing the bundle adjustment of a sequence of images. In such cases, a comparison of different estimates is usually carried out with respect to the estimated 3D object points. Today, a broad range of automatic methods are available for extracting and matching point features across images, even in the case of widely separated views and under strong deformations. This allows for fully automatic solutions to the relative orientation problem, and even to the bundle triangulation in case that manually measured control points are available. However, such systems often contain random subprocedures like RANSAC for eliminating wrong correspondences, yielding different 3D points but hopefully similar orientation parameters. This causes two problems for the evaluation: First, the randomness of the algorithm has an influence on its stability, and second, we are constrained to compare the orientation parameters instead of the 3D points. We propose a method for benchmarking automatic bundle adjustments which takes these constraints into account and uses the orientation parameters directly. Given sets of corresponding orientation parameters, we require our benchmark test to address their consistency of the form deviation and the internal precision and their precision level related to the precision of a reference data set. Besides comparing different bundle adjustment methods, the approach may be used to safely evaluate effects of feature operators, matching strategies, control parameters and other design decisions for a particular method. The goal of this paper is to derive appropriate measures to cover these aspects, describe a coherent benchmarking scheme and show the feasibility of the approach using real data.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Dickscheid2008Benchmarking.pdf}
    }

  • M. Drauschke, “Description of Stable Regions IPM,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2008-03, 2008.
    [BibTeX] [PDF]
    The Stable Regions Image Processing Module is a low-level region detector. It delivers image parts of interest without any further interpretation. These image parts are all regions of an image which do not change much over a certain range in scale space of the image. The output of this IPM is a list of polygons of any shape and their rectangular bounding boxes, which both are saved into an xml-file.

    @TechReport{Drauschke2008Description,
    Title = {Description of Stable Regions IPM},
    Author = {Drauschke, Martin},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2008},
    Month = mar,
    Number = {TR-IGG-P-2008-03},
    Abstract = {The Stable Regions Image Processing Module is a low-level region detector. It delivers image parts of interest without any further interpretation. These image parts are all regions of an image which do not change much over a certain range in scale space of the image. The output of this IPM is a list of polygons of any shape and their rectangular bounding boxes, which both are saved into an xml-file.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2008Description.pdf}
    }

  • M. Drauschke, “Feature Subset Selection with Adaboost and ADTboost,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2008-04, 2008.
    [BibTeX] [PDF]
    This technical report presents feature subset selection methods for two boosting classification frameworks: Adaboost and ADTboost.

    @TechReport{Drauschke2008Feature,
    Title = {Feature Subset Selection with Adaboost and ADTboost},
    Author = {Drauschke, Martin},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2008},
    Month = mar,
    Number = {TR-IGG-P-2008-04},
    Abstract = {This technical report presents feature subset selection methods for two boosting classification frameworks: Adaboost and ADTboost.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2008Feature.pdf}
    }

  • M. Drauschke, “Multi-class ADTboost,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2008-06, 2008.
    [BibTeX] [PDF]
    This technical report gives a short review on boosting with alternating decision trees (ADTboost), which has been proposed by Freund & Mason (1999) and refined by De Comite et al. (2001). This approach is designed for two-class problems, and we extend it towards multi-class classification. The advantage of a multi-class boosting algorithm is its usage in scene interpretation with various kinds of objects. In these cases, two-class approaches will lead to several one class versus background (the other classes) classifications, where we must resolve inappropriate results like "always background" or "two or more valid classes" for a sample.

    @TechReport{Drauschke2008Multi,
    Title = {Multi-class ADTboost},
    Author = {Drauschke, Martin},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2008},
    Month = aug,
    Number = {TR-IGG-P-2008-06},
    Abstract = {This technical report gives a short review on boosting with alternating decision trees (ADTboost), which has been proposed by Freund & Mason (1999) and refined by De Comite et al. (2001). This approach is designed for two-class problems, and we extend it towards multi-class classification. The advantage of a multi-class boosting algorithm is its usage in scene interpretation with various kinds of objects. In these cases, two-class approaches will lead to several one class versus background (the other classes) classifications, where we must resolve inappropriate results like "always background" or "two or more valid classes" for a sample.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2008Multi.pdf}
    }

  • M. Drauschke, “Verbesserung des Multi-Dodgings mittels bikubischer Interpolation,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2008-07, 2008.
    [BibTeX] [PDF]
    Task: digitized 16-bit aerial images are to be enhanced automatically. For this purpose we proposed the multi-dodging approach in (1) and (2). In this procedure, an image is partitioned into non-overlapping patches, and a histogram equalization is performed in each patch. Since this procedure leaves the patch boundaries visible in the enhanced image, a bilinear interpolation between the patches was applied as a final step. This work investigates whether using a bicubic interpolation in place of the bilinear one leads to better results.

    @TechReport{Drauschke2008Verbesserung,
    Title = {Verbesserung des Multi-Dodgings mittels bikubischer Interpolation},
    Author = {Drauschke, Martin},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2008},
    Number = {TR-IGG-P-2008-07},
    Abstract = {Aufgabenstellung: Digitalisierte 16-Bit-Luftbilder sollen automatisch verbessert werden. Dazu haben wir in (1) und (2) den Multi-Dodging-Ansatz vorgeschlagen. In diesem Verfahren wird ein Bild in sich nicht \"uberlappende Ausschnitte (Patches) zerlegt. Dann wird in jedem dieser Bildausschnitte eine Histogrammverebnung durchgef\"uhrt. Da dieses Vorgehen die Patchgrenzen im verbesserten Bild hinterl\"asst, wurde abschlie{\ss}end zwischen den Patches bilinear interpoliert. In dieser Arbeit wird untersucht, ob die Verwendung einer bikubischen Interpolation an Stelle der bilinearen zu besseren Ergebnissen f\"uhrt.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2008Verbesserung.pdf}
    }
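
The multi-dodging procedure summarized above (partition the image into non-overlapping patches, equalize the histogram of each patch, then interpolate between patches) can be sketched in a few lines. This is a minimal illustration, not the report's implementation: it shows only the patchwise equalization step, omits the bilinear/bicubic blending between patches that the report compares, and the names (`equalize`, `multi_dodge`) are illustrative.

```python
import numpy as np

def equalize(patch, levels=256):
    """Histogram equalization of a single integer-valued patch."""
    hist = np.bincount(patch.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-empty bin's cumulative count
    lut = np.round((cdf - cdf_min) / max(patch.size - cdf_min, 1) * (levels - 1))
    return lut.clip(0, levels - 1).astype(patch.dtype)[patch]

def multi_dodge(image, size=64, levels=256):
    """Equalize each non-overlapping size x size patch independently.

    The blending between patches (bilinear vs. bicubic in the report)
    is intentionally left out of this sketch, so patch borders remain.
    """
    out = image.copy()
    H, W = image.shape
    for y in range(0, H, size):
        for x in range(0, W, size):
            out[y:y + size, x:x + size] = equalize(image[y:y + size, x:x + size], levels)
    return out
```

An image whose patches already have flat histograms passes through unchanged, which is a convenient sanity check.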

  • M. Drauschke and W. Förstner, “Comparison of Adaboost and ADTboost for Feature Subset Selection,” in PRIS 2008 , Barcelona, Spain, 2008, pp. 113-122.
    [BibTeX] [PDF]
    This paper addresses the problem of feature selection within classification processes. We present a comparison of a feature subset selection with respect to two boosting methods, Adaboost and ADTboost. In our evaluation, we have focused on three different criteria: the classification error and the efficiency of the process depending on the number of most appropriate features and the number of training samples. Therefore, we discuss both techniques and sketch their functionality, where we restrict both boosting approaches to linear weak classifiers. We propose a feature subset selection method, which we evaluate on synthetic and on benchmark data sets.

    @InProceedings{Drauschke2008Comparison,
    Title = {Comparison of Adaboost and ADTboost for Feature Subset Selection},
    Author = {Drauschke, Martin and F\"orstner, Wolfgang},
    Booktitle = {PRIS 2008},
    Year = {2008},
    Address = {Barcelona, Spain},
    Pages = {113--122},
    Abstract = {This paper addresses the problem of feature selection within classification processes. We present a comparison of a feature subset selection with respect to two boosting methods, Adaboost and ADTboost. In our evaluation, we have focused on three different criteria: the classification error and the efficiency of the process depending on the number of most appropriate features and the number of training samples. Therefore, we discuss both techniques and sketch their functionality, where we restrict both boosting approaches to linear weak classifiers. We propose a feature subset selection method, which we evaluate on synthetic and on benchmark data sets.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2008Comparison.pdf}
    }
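
Both boosting reports above restrict the weak learners to a threshold on a single feature, which makes the feature-subset idea easy to illustrate: after training, the distinct features used by the stumps form the selected subset. Below is a minimal, generic AdaBoost-with-stumps sketch in Python/NumPy under that assumption; it is not the authors' code, and the toy data and function names are illustrative.

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=10):
    """AdaBoost with one-feature threshold stumps; labels y in {-1, +1}.

    Returns a list of (feature, threshold, polarity, alpha) stumps.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None                  # (error, feature, threshold, polarity, pred)
        for j in range(d):
            for t in np.unique(X[:, j]):
                for s in (1, -1):    # polarity of the threshold test
                    pred = np.where(s * (X[:, j] - t) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s, pred)
        err, j, t, s, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)          # reweight samples
        w /= w.sum()
        stumps.append((j, t, s, alpha))
    return stumps

def selected_features(stumps):
    """The feature subset = distinct features picked by the stumps."""
    return sorted({j for j, _, _, _ in stumps})
```

On toy data where only one feature carries the class label, boosting keeps reusing that feature, so the selected subset collapses to it.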

  • M. Drauschke and W. Förstner, “Selecting appropriate features for detecting buildings and building parts,” in 21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS) , Beijing, China, 2008, pp. 447-452, Part B3b-1.
    [BibTeX] [PDF]
    The paper addresses the problem of feature selection during classification of image regions within the context of interpreting images showing highly structured objects such as buildings. We present a feature selection scheme that is connected with the classification framework Adaboost, cf. (Schapire and Singer, 1999). We restricted our weak learners to threshold classification on a single feature. Our experiments showed that the classification with Adaboost is based on relatively small subsets of features. Thus, we are able to find sets of appropriate features. We present our results on manually annotated and automatically segmented regions from facade images of the eTRIMS database, where our focus was on the object classes facade, roof, windows and window panes.

    @InProceedings{Drauschke2008Selecting,
    Title = {Selecting appropriate features for detecting buildings and building parts},
    Author = {Drauschke, Martin and F\"orstner, Wolfgang},
    Booktitle = {21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS)},
    Year = {2008},
    Address = {Beijing, China},
    Pages = {447--452 Part B3b-1},
    Abstract = {The paper addresses the problem of feature selection during classification of image regions within the context of interpreting images showing highly structured objects such as buildings. We present a feature selection scheme that is connected with the classification framework Adaboost, cf. (Schapire and Singer, 1999). We restricted our weak learners to threshold classification on a single feature. Our experiments showed that the classification with Adaboost is based on relatively small subsets of features. Thus, we are able to find sets of appropriate features. We present our results on manually annotated and automatically segmented regions from facade images of the eTRIMS database, where our focus was on the object classes facade, roof, windows and window panes.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2008Selecting.pdf}
    }

  • B. Frank, M. Becker, C. Stachniss, M. Teschner, and W. Burgard, “Learning Cost Functions for Mobile Robot Navigation in Environments with Deformable Objects,” in Workshop on Path Planning on Cost Maps at the IEEE Int. Conf. on Robotics & Automation (ICRA) , Pasadena, CA, USA, 2008.
    [BibTeX] [PDF]
    @InProceedings{Frank2008,
    Title = {Learning Cost Functions for Mobile Robot Navigation in Environments with Deformable Objects},
    Author = {Frank, B. and Becker, M. and Stachniss, C. and Teschner, M. and Burgard, W.},
    Booktitle = icrawsplanning,
    Year = {2008},
    Address = {Pasadena, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/frank08icraws.pdf}
    }

  • B. Frank, M. Becker, C. Stachniss, M. Teschner, and W. Burgard, “Efficient Path Planning for Mobile Robots in Environments with Deformable Objects,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Pasadena, CA, USA, 2008.
    [BibTeX] [PDF]
    @InProceedings{Frank2008a,
    Title = {Efficient Path Planning for Mobile Robots in Environments with Deformable Objects},
    Author = {Frank, B. and Becker, M. and Stachniss, C. and Teschner, M. and Burgard, W.},
    Booktitle = ICRA,
    Year = {2008},
    Address = {Pasadena, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/frank08icra.pdf}
    }

  • G. Grisetti, D. Lordi Rizzini, C. Stachniss, E. Olson, and W. Burgard, “Online Constraint Network Optimization for Efficient Maximum Likelihood Map Learning,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Pasadena, CA, USA, 2008.
    [BibTeX] [PDF]
    @InProceedings{Grisetti2008,
    Title = {Online Constraint Network Optimization for Efficient Maximum Likelihood Map Learning},
    Author = {Grisetti, G. and Lordi Rizzini, D. and Stachniss, C. and Olson, E. and Burgard, W.},
    Booktitle = ICRA,
    Year = {2008},
    Address = {Pasadena, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/grisetti08icra.pdf}
    }

  • L. Jensen, “Automatische Detektion von Bombentrichtern,” Bachelor Thesis, Institute of Photogrammetry, University of Bonn, 2008.
    [BibTeX] [PDF]
    The explosive ordnance disposal service of the Arnsberg district government uses aerial images from the Second World War to detect unexploded bombs. Owing to the large number of images (over 300,000), the search is very laborious. The analysts' work could be eased if they had a map showing the density of the bombing. To create such a map, a method is needed that automatically detects bomb craters in the images; such a method was realized in this bachelor thesis. Since the craters vary strongly in shape and size, a detection approach must be chosen that can cope with these variations. The algorithm performs a candidate search by cross-correlating the image with a representative crater template at several sizes and then classifies the candidates found. Classification is based on the probability densities of the distributions of the classes crater and background. To estimate the distribution parameters, the feature space of the training data is reduced in dimension with a principal component analysis (PCA) and Fisher's linear discriminant analysis (LDA), followed by projection into the subspace. In this thesis the method was implemented with a single crater class, but it can readily be extended to several crater classes. The bomb-crater detection algorithm was implemented in Matlab. After preprocessing the image material, training images first had to be annotated to create the template. Furthermore, several parameters had to be determined, such as the template sizes for the candidate search, the dimension of the PCA space, and the image-patch size used for classification. To assess the results, the algorithm was tested on the training images and the results were compared with the reference data.
Depending on whether four or five template sizes are used, roughly 75% or 80% of the craters are captured with the created template. After classification, the implemented algorithm detects between 70% and 64% of the craters depending on the configuration, but the precision is very low: at most about 31% of the image patches classified as craters are actually bomb craters. Analysis of the false positives on test images showed that certain image structures, such as house roofs, shadows along roads, textures in fields, or forest structures, are repeatedly misclassified as craters. Examining the undetected bomb craters yielded crater classes that cannot be detected with the created template. The test images were also used to investigate whether the bomb-crater detection can sort the images into the categories light, medium, and heavy bombing; 73% of the images were assigned to the correct category. With higher precision and the annotation of further test images, a better categorization can be expected. Overall, this thesis presents a promising, readily extensible approach to bomb-crater detection.

    @MastersThesis{Jensen2008Automatische,
    Title = {Automatische Detektion von Bombentrichtern},
    Author = {Jensen, Laura},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2008},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Inform. Martin Drauschke},
    Type = {Bachelor Thesis},
    Abstract = {Der Kampfmittelbeseitigungsdienst der Bezirksregierung Arnsberg nutzt Luftbilder aus dem Zweiten Weltkrieg zur Detektion von Blindg\"angern. Aufgrund der gro{\ss}en Anzahl an Bildern (\"uber 300000) ist die Suche sehr aufw\"andig. Die Arbeit der Auswerter k\"onnte erleichtert werden, wenn sie eine Karte h\"atten, auf der die Dichte der Bombardierung dargestellt ist. Um diese Karte zu erstellen, ist ein Verfahren notwendig, das die Bombentrichter auf den Bildern automatisch detektiert. Dieses wurde in der vorliegenden Bachelorarbeit realisiert. Da die Trichter sich in ihrer Gestalt und Gr\"o{\ss}e stark unterscheiden, muss ein Ansatz zur Detektion gew\"ahlt werden, der mit diesen Variationen umgehen kann. Der Algorithmus f\"uhrt eine Kandidatensuche mittels Kreuzkorrelation des Bildes mit einem repr\"asentativen Trichter-Template in verschiedenen Gr\"o{\ss}en durch und klassifiziert die gefundenen Kandidaten anschlie{\ss}end. Die Klassifizierung erfolgt mit Hilfe der Wahrscheinlichkeitsdichte der Verteilungen der Klassen Trichter und Hintergrund. Um die Verteilungsparameter zu sch\"atzen, ist die Dimensionsreduktion des Merkmalsraums der Trainingsdaten mit einer Hauptkomponentenanalyse (PCA) und einer linearen Diskriminanzanalyse nach Fisher (LDA) und anschlie{\ss}ender Projektion in den Unterraum notwendig. In dieser Arbeit wurde das Verfahren mit einer Trichterklasse implementiert, es kann aber gut auf verschiedene Trichterklassen erweitert werden. Der Algorithmus zur Bombentrichterdetektion wurde in Matlab implementiert. Nach der Vorverarbeitung des Bildmaterials mussten zur Erstellung des Templates zun\"achst Trainingsbilder annotiert werden. Au{\ss}erdem waren bei der Umsetzung verschiedene Parameter, wie z.B. die Templategr\"o{\ss}en zur Kandidatensuche, die Dimension des PCA-Raums und die Bildausschnittsgr\"o{\ss}e bei der Klassifikation zu bestimmen. 
Zur Beurteilung der Ergebnisse wurde der Algorithmus auf den Trainingsbildern getestet und die Ergebnisse mit den Referenzdaten verglichen. Je nachdem ob vier oder f\"unf Templategr\"o{\ss}en verwendet werden, k\"onnen mit dem erstellten Template etwa 75% oder 80% der Trichter erfasst werden. Nach der Klassifikation werden mit dem implementierten Algorithmus je nach Konfiguration zwischen 70% und 64% der Trichter detektiert, dabei ist die Relevanz allerdings sehr gering. Maximal sind etwa 31% der als Trichter klassifizierten Bildausschnitte auch tats\"achlich Bombentrichter. Bei der Analyse der false positives auf Testbildern ergab sich, dass bestimmte Bildstrukturen, wie Hausd\"acher, Schattenwurf an Stra{\ss}en, Texturen in Feldern oder Waldstrukturen immer wieder f\"alschlicherweise als Trichter klassifiziert werden. Bei der Untersuchung der nicht detektierten Bombentrichter konnten Trichterklassen abgeleitet werden, die mit dem erstellten Template nicht detektiert werden. Mit den Testbildern wurde au{\ss}erdem die M\"oglichkeit untersucht, die Bilder mit Hilfe der Bombentrichterdetektion in die Kategorien schwache, mittlere und starke Bombardierung einzuordnen. Hierbei wurden 73% der Bilder der richtigen Kategorie zugeordnet. Bei einer Steigerung der Relevanz und der Annotation weiterer Testbilder ist eine bessere Einordnung zu erwarten. Insgesamt liegt mit dieser Arbeit ein vielversprechender Ansatz zur Bombentrichterdetektion mit gro{\ss}er Erweiterungsm\"oglichkeit vor.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Jensen2008Automatische.pdf}
    }
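
The candidate-search step of the thesis above (cross-correlating the image with a crater template at several sizes) can be sketched with a plain normalized cross-correlation. This is an illustrative stand-in, not the thesis's Matlab code; the subsequent PCA/LDA classification of candidates is omitted, and the function names are assumptions.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Dense NCC map between a 2-D image and a smaller template.

    Scores lie in [-1, 1]; peaks mark candidate positions.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    H = image.shape[0] - th + 1
    W = image.shape[1] - tw + 1
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = image[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = t_norm * np.sqrt((p ** 2).sum())
            out[i, j] = (p * t).sum() / denom if denom > 0 else 0.0
    return out

def candidates(image, templates, threshold=0.7):
    """Run the NCC search at several template sizes, keep strong peaks."""
    hits = []
    for tpl in templates:
        ncc = normalized_cross_correlation(image, tpl)
        ys, xs = np.where(ncc >= threshold)
        hits += [(y, x, tpl.shape, ncc[y, x]) for y, x in zip(ys, xs)]
    return hits
```

Embedding the template itself into an empty image recovers a score of 1.0 at the correct offset, which is the expected behaviour of NCC.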

  • L. Jensen, “Schattenentfernung aus Farbbildern mit dem Retinex-Algorithmus,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2008-01, 2008.
    [BibTeX] [PDF]
    @TechReport{Jensen2008Schattenentfernung,
    Title = {Schattenentfernung aus Farbbildern mit dem Retinex-Algorithmus},
    Author = {Jensen, Laura},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2008},
    Number = {TR-IGG-P-2008-01},
    Abstract = {[none]},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Jensen2008Schattenentfernung.pdf}
    }

  • F. Korč and W. Förstner, “Approximate Parameter Learning in Conditional Random Fields: An Empirical Investigation,” in 30th Annual Symposium of the German Association for Pattern Recognition (DAGM) , Munich, Germany, 2008, pp. 11-20. doi:10.1007/978-3-540-69321-5_2
    [BibTeX] [PDF]
    We investigate maximum likelihood parameter learning in Conditional Random Fields (CRF) and present an empirical study of pseudo-likelihood (PL) based approximations of the parameter likelihood gradient. We show that these parameter learning methods can be improved, and we evaluate the resulting performance employing different inference techniques. We show that the approximation based on penalized pseudo-likelihood (PPL) in combination with Maximum A Posteriori (MAP) inference yields results comparable to other state-of-the-art approaches, while providing low complexity and the advantages of formulating parameter learning as a convex optimization problem. Finally, we demonstrate applicability on the task of detecting man-made structures in natural images.

    @InProceedings{Korvc2008Approximate,
    Title = {Approximate Parameter Learning in Conditional Random Fields: An Empirical Investigation},
    Author = {Kor{\vc}, Filip and F\"orstner, Wolfgang},
    Booktitle = {30th Annual Symposium of the German Association for Pattern Recognition (DAGM)},
    Year = {2008},
    Address = {Munich, Germany},
    Editor = {G. Rigoll},
    Number = {5096},
    Pages = {11--20},
    Publisher = {Springer},
    Series = {LNCS},
    Abstract = {We investigate maximum likelihood parameter learning in Conditional Random Fields (CRF) and present an empirical study of pseudo-likelihood (PL) based approximations of the parameter likelihood gradient. We show that these parameter learning methods can be improved and evaluate the resulting performance employing different inference techniques. We show that the approximation based on penalized pseudo-likelihood (PPL) in combination with the Maximum A Posteriori (MAP) inference yields results comparable to other state of the art approaches, while providing low complexity and advantages to formulating parameter learning as a convex optimization problem. Eventually, we demonstrate applicability on the task of detecting man-made structures in natural images.},
    Doi = {10.1007/978-3-540-69321-5_2},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Korvc2008Approximate.pdf}
    }

  • F. Korč and W. Förstner, “Finding Optimal Non-Overlapping Subset of Extracted Image Objects,” in Proc. of the 12th International Workshop on Combinatorial Image Analysis (IWCIA) , Buffalo, USA, 2008.
    [BibTeX] [PDF]
    We present a solution to the following discrete optimization problem. Given a set of independent, possibly overlapping image regions and a non-negative likeliness of the individual regions, we select a non-overlapping subset that is optimal with respect to the following requirements: First, every region is either part of the solution or has an overlap with it. Second, the degree of overlap of the solution with the rest of the regions is maximized together with the likeliness of the solution. Third, the likeliness of the individual regions influences the overall solution proportionally to the degree of overlap with neighboring regions. We represent the problem as a graph and solve the task by reduction to a constrained binary integer programming problem. The problem involves minimizing a linear objective function subject to linear inequality constraints. Both the objective function and the constraints exploit the structure of the graph. We illustrate the validity and the relevance of the proposed formulation by applying the method to the problem of facade window extraction. We generalize our formulation to the case where a set of hypotheses is given together with a binary similarity relation and similarity measure. Our formulation then exploits combination of degree and structure of hypothesis similarity and likeliness of individual hypotheses. In this case, we present a solution with non-similar hypotheses which can be viewed as a non-redundant representation.

    @InProceedings{Korvc2008Finding,
    Title = {Finding Optimal Non-Overlapping Subset of Extracted Image Objects},
    Author = {Kor{\vc}, Filip and F\"orstner, Wolfgang},
    Booktitle = {Proc. of the 12th International Workshop on Combinatorial Image Analysis (IWCIA)},
    Year = {2008},
    Address = {Buffalo, USA},
    Abstract = {We present a solution to the following discrete optimization problem. Given a set of independent, possibly overlapping image regions and a non-negative likeliness of the individual regions, we select a non-overlapping subset that is optimal with respect to the following requirements: First, every region is either part of the solution or has an overlap with it. Second, the degree of overlap of the solution with the rest of the regions is maximized together with the likeliness of the solution. Third, the likeliness of the individual regions influences the overall solution proportionally to the degree of overlap with neighboring regions. We represent the problem as a graph and solve the task by reduction to a constrained binary integer programming problem. The problem involves minimizing a linear objective function subject to linear inequality constraints. Both the objective function and the constraints exploit the structure of the graph. We illustrate the validity and the relevance of the proposed formulation by applying the method to the problem of facade window extraction. We generalize our formulation to the case where a set of hypotheses is given together with a binary similarity relation and similarity measure. Our formulation then exploits combination of degree and structure of hypothesis similarity and likeliness of individual hypotheses. In this case, we present a solution with non-similar hypotheses which can be viewed as a non-redundant representation.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Korvc2008Finding.pdf}
    }
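
The selection problem described above can be made concrete with a tiny brute-force stand-in for the paper's binary integer program: enumerate all 0/1 indicator vectors, keep those satisfying the non-overlap and coverage constraints, and maximize the likeliness of the selection plus the number of regions it covers. The objective and names here are a simplified illustration, not the paper's exact formulation.

```python
from itertools import product

def select_non_overlapping(n, likeliness, overlaps):
    """Pick x in {0,1}^n such that (i) no two selected regions overlap
    and (ii) every region is selected or overlaps a selected one,
    maximizing total likeliness plus the number of covered regions.

    overlaps: list of index pairs (i, j) of overlapping regions.
    """
    best_x, best_val = None, float("-inf")
    for x in product((0, 1), repeat=n):
        if any(x[i] and x[j] for i, j in overlaps):
            continue  # constraint (i): selection must be non-overlapping
        covered = ({j for i, j in overlaps if x[i]}
                   | {i for i, j in overlaps if x[j]})
        if any(not x[i] and i not in covered for i in range(n)):
            continue  # constraint (ii): every region selected or covered
        val = sum(likeliness[i] for i in range(n) if x[i]) + len(covered)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

For three regions in a chain (0 overlaps 1, 1 overlaps 2), selecting the two outer regions covers the middle one and beats selecting the middle region alone when the outer likeliness values are high. A real instance would replace the enumeration with an ILP solver, as in the paper.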

  • F. Korč and W. Förstner, “Interpreting Terrestrial Images of Urban Scenes Using Discriminative Random Fields,” in Proc. of the 21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS) , Beijing, China, 2008, pp. 291-296, Part B3a.
    [BibTeX] [PDF]
    We investigate Discriminative Random Fields (DRF), which provide a principled approach for combining local discriminative classifiers that allow the use of arbitrary overlapping features with adaptive, data-dependent smoothing over the label field. We discuss the differences between a traditional Markov Random Field (MRF) formulation and the DRF model, and compare the performance of the two models and an independent sitewise classifier. Further, we present results suggesting the potential for performance enhancement by improving state-of-the-art parameter learning methods. Finally, we demonstrate application feasibility on both synthetic and natural images.

    @InProceedings{Korvc2008Interpreting,
    Title = {Interpreting Terrestrial Images of Urban Scenes Using Discriminative Random Fields},
    Author = {Kor{\vc}, Filip and F\"orstner, Wolfgang},
    Booktitle = {Proc. of the 21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS)},
    Year = {2008},
    Address = {Beijing, China},
    Pages = {291--296 Part B3a},
    Abstract = {We investigate Discriminative Random Fields (DRF) which provide a principled approach for combining local discriminative classifiers that allow the use of arbitrary overlapping features, with adaptive data-dependent smoothing over the label field. We discuss the differences between a traditional Markov Random Field (MRF) formulation and the DRF model, and compare the performance of the two models and an independent sitewise classifier. Further, we present results suggesting the potential for performance enhancement by improving state of the art parameter learning methods. Eventually, we demonstrate the application feasibility on both synthetic and natural images.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Korvc2008Interpreting.pdf}
    }

  • H. Kretzschmar, C. Stachniss, C. Plagemann, and W. Burgard, “Estimating Landmark Locations from Geo-Referenced Photographs,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , Nice, France, 2008.
    [BibTeX] [PDF]
    @InProceedings{Kretzschmar2008,
    Title = {Estimating Landmark Locations from Geo-Referenced Photographs},
    Author = {Kretzschmar, H. and Stachniss, C. and Plagemann, C. and W. Burgard},
    Booktitle = iros,
    Year = {2008},
    Address = {Nice, France},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/kretzschmar08iros.pdf}
    }

  • T. Läbe, T. Dickscheid, and W. Förstner, “On the Quality of Automatic Relative Orientation Procedures,” in 21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS) , Beijing, China, 2008, pp. 37-42, Part B3b-1.
    [BibTeX] [PDF]
    This paper presents an empirical investigation into the quality of automatic relative orientation procedures. The results of the in-house developed automatic orientation software aurelo (Laebe and Foerstner, 2006) are evaluated. For this evaluation, a recently proposed consistency measure for two sets of orientation parameters (Dickscheid et al., 2008) and the ratio of two covariance matrices are used. Thus we evaluate the consistency of bundle block adjustments and the achievable precision level. We use different sets of orientation results related to the same set of images but computed under differing conditions. As reference datasets, results at a much higher image resolution and ground-truth data from artificial images rendered with computer graphics software are used. Six different effects are analysed: varying results due to random procedures in aurelo, computations on different image pyramid levels and with or without points with only two or three observations, the effect of replacing the SIFT operator with an approximation of SIFT features called SURF, repetitive patterns in the scene, and remaining non-linear distortions. These experiments show under which conditions the bundle adjustment results reflect the true errors, and thus give valuable hints for the use of automatic relative orientation procedures and possible improvements of the software.

    @InProceedings{Labe2008Quality,
    Title = {On the Quality of Automatic Relative Orientation Procedures},
    Author = {L\"abe, Thomas and Dickscheid, Timo and F\"orstner, Wolfgang},
    Booktitle = {21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS)},
    Year = {2008},
    Address = {Beijing, China},
    Pages = {37--42 Part B3b-1},
    Abstract = {This paper presents an empirical investigation into the quality of automatic relative orientation procedures. The results of an in-house developed automatic orientation software called aurelo (Laebe and Foerstner, 2006) are evaluated. For this evaluation a recently proposed consistency measure for two sets of orientation parameters (Dickscheid et. al., 2008) and the ratio of two covariances matrices is used. Thus we evaluate the consistency of bundle block adjustments and the precision level achievable. We use different sets of orientation results related to the same set of images but computed under differing conditions. As reference datasets results on a much higher image resolution and ground truth data from artificial images rendered with computer graphics software are used. Six different effects are analysed: varying results due to random procedures in aurelo, computations on different image pyramid levels and with or without points with only two or three observations, the effect of replacing the used SIFT operator with an approximation of SIFT features, called SURF, repetitive patterns in the scene and remaining non-linear distortions. These experiments show under which conditions the bundle adjustment results reflect the true errors and thus give valuable hints for the use of automatic relative orientation procedures and possible improvements of the software.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Labe2008Quality.pdf}
    }

  • J. Müller, C. Stachniss, K. O. Arras, and W. Burgard, “Socially Inspired Motion Planning for Mobile Robots in Populated Environments,” in International Conference on Cognitive Systems (CogSys) , Baden-Baden, Germany, 2008.
    [BibTeX]
    @InProceedings{Muller2008,
    Title = {Socially Inspired Motion Planning for Mobile Robots in Populated Environments},
    Author = {M\"uller, J. and Stachniss, C. and Arras, K.O. and Burgard, W.},
    Booktitle = COGSYS,
    Year = {2008},
    Address = {Baden Baden, Germany},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

  • M. Muffert, “Durchführung von Untersuchungen zur Bewertung der Messqualität eines Faro-Messarms des Typs "Titanium",” Bachelor Thesis, University of Bonn, 2008.
    [BibTeX]
    In the following study we carry out initial research on the FaroArm Titanium. The results allow conclusions regarding the accuracy and reliability of measurement depending on the measurement position. In order to draw conclusions about the accuracy we have developed and applied several measuring techniques. The FaroArm Titanium (FaroArm) is a mobile precision measurement arm which can be described as an industrial robot. It is used in particular in quality control and, in mechanical engineering, in what is known as reverse engineering. According to company specifications, single-point accuracies of 0.05mm are achieved. Mobile precision measurement arms consist of links of various lengths which are usually connected by six or seven revolute joints. The number of axes of rotation gives the number of degrees of freedom of the measurement arm. Company specifications regarding the lengths of the axes do not lie within the required range of accuracy. The orientation of the revolute joints is unknown. In this study we deal with fundamental mathematical and statistical procedures for spatial orientation. The Denavit-Hartenberg convention is of particular importance in robotic forward kinematics. The best estimation of spatial orientation is crucial for the measurements to be taken. The main part of this study deals with the development and application of measurement concepts which yield the first information about the measurement accuracy of the FaroArm. First we modelled the forward kinematics of the robot by means of rough estimates of the axis lengths. A direct comparison between the self-chosen coordinates and the nominal coordinates is impossible. For results on the measurement accuracy, we compared the scatter plots of the FaroArm with reference scatter plots of the Lasertracker Smart 310 from the Leica Company. An exact definition of the points is therefore required, which we have achieved by centring an aluminium plate.
The plate has metal cylinders into which cones are inserted; the cones serve to define the points. The different centring positions on the plate were measured with both measuring systems. For both systems the spatial transformations between different plate positions were determined and compared. In this way we obtained our own information about the measuring quality and accuracy of the FaroArm. The comparison between the two measuring systems reveals gross errors in the observations. These can be attributed to incorrect operation or uncertainty in the centring of the measuring plate. A direct outlier check has to be carried out after every measurement.

    @MastersThesis{Muffert2008Durchfuhrung,
    Title = {Durchf\"uhrung von Untersuchungen zur Bewertung der Messqualit\"at eines Faro-Messarms des Typs "Titanium"},
    Author = {Muffert, Maximilian},
    School = {University of Bonn In Zusammenarbeit mit dem Lehrstuhl f\"ur Geod\"asie des IGG},
    Year = {2008},
    Note = {Betreuung: Dr.-Ing. Wolfgang Schauerte, Prof. Dr.-Ing. Wolfgang F\"orstner},
    Type = {Bachelor Thesis},
    Abstract = {In the following study we carry out initial research on the FaroArm Titanium. The results allow conclusions regarding the accuracy and reliability of measurement depending on the measurement position. In order to draw conclusions about the accuracy we have developed and applied several measuring techniques. The FaroArm Titanium (FaroArm) is a mobile precision measurement arm which can be described as an industrial robot. It has particular use in quality control and, in mechanical engineering, in what is known as reverse engineering. In accordance with company specifications, single point accuracies of 0.05 mm are achieved. Mobile precision measurement arms consist of links of various lengths which are usually connected by six or seven revolute joints. The number of the axes of rotation gives the number of degrees of freedom of the measurement arm. Company specifications regarding the lengths of the axes do not lie within the required range of accuracy. The orientation of the revolute joints is unknown. In this study we deal with fundamental mathematical and statistical procedures for spatial orientation. The Denavit-Hartenberg convention is of particular importance in robotic forward kinematics. The best estimation of spatial orientation is crucial in the measurements to be taken. The main part of this study deals with the development and application of measurement concepts which will result in the first information about the accuracy of measurement of the FaroArm. First we modelled the forward kinematics of the robot by means of rough estimates of the axis lengths. A direct comparison between the self-chosen coordinates and the nominal coordinates is impossible. For results of the accuracy of measurement, we compared the scatter plots of the FaroArm with reference scatter plots of the Lasertracker Smart 310 from the Leica Company. An exact definition of the points is therefore required, which we have achieved by centring an aluminium plate. The plate has metal cylinders in which cones are inserted. The cones serve to define the points. The different centring positions on the board were measured with both measuring systems. For both systems the spatial transformations between different plate positions were determined and compared. In this way we obtained our own information about measuring quality and accuracy of the FaroArm. The comparison between the two different measuring systems reveals gross errors in the observations. These can be attributed to incorrect operation or uncertainty in the centring of the measuring plate. A direct outlier control has to be performed after every measurement.}
    }

  • P. Pfaff, C. Stachniss, C. Plagemann, and W. Burgard, “Efficiently Learning High-dimensional Observation Models for Monte-Carlo Localization using Gaussian Mixtures,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , Nice, France, 2008.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Pfaff2008,
    Title = {Efficiently Learning High-dimensional Observation Models for Monte-Carlo Localization using Gaussian Mixtures},
    Author = {Pfaff, P. and Stachniss, C. and Plagemann, C. and Burgard, W.},
    Booktitle = IROS,
    Year = {2008},
    Address = {Nice, France},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/pfaff08iros.pdf}
    }

  • C. Plagemann, F. Endres, J. Hess, C. Stachniss, and W. Burgard, “Monocular Range Sensing: A Non-Parametric Learning Approach,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Pasadena, CA, USA, 2008.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Plagemann2008,
    Title = {Monocular Range Sensing: A Non-Parametric Learning Approach},
    Author = {Plagemann, C. and Endres, F. and Hess, J. and Stachniss, C. and Burgard, W.},
    Booktitle = ICRA,
    Year = {2008},
    Address = {Pasadena, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/plagemann08icra.pdf}
    }

  • R. Roscher, “Bestimmung von 3D-Merkmalen von Bildregionen aus Stereobildern,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2008-05, 2008.
    [BibTeX] [PDF]
    Dieser Report erläutert die Bestimmung von 3D-Merkmalen von Bildregionen durch Zuordnung von Bildpunkten in diesen Regionen zu den Objektpunkten in einer Punktwolke. Die Umsetzung erfolgt in einer grafischen Benutzeroberfläche in Matlab, deren Bedienung in diesem Report veranschaulicht werden soll.

    @TechReport{Roscher2008Bestimmung,
    Title = {Bestimmung von 3D-Merkmalen von Bildregionen aus Stereobildern},
    Author = {Roscher, Ribana},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2008},
    Number = {TR-IGG-P-2008-05},
    Abstract = {Dieser Report erl\"autert die Bestimmung von 3D-Merkmalen von Bildregionen durch Zuordnung von Bildpunkten in diesen Regionen zu den Objektpunkten in einer Punktwolke. Die Umsetzung erfolgt in einer grafischen Benutzeroberfl\"ache in Matlab, deren Bedienung in diesem Report veranschaulicht werden soll.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Roscher2008Bestimmung.pdf}
    }

  • R. Roscher, “Lernen linearer probabilistischer diskriminativer Modelle für die semantische Bildsegmentierung,” Diploma Thesis, 2008.
    [BibTeX] [PDF]
    [none]
    @MastersThesis{Roscher2008Lernen,
    Title = {Lernen linearer probabilistischer diskriminativer Modelle f\"ur die semantische Bildsegmentierung},
    Author = {Roscher, Ribana},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2008},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Ing. Filip Kor{\vc}},
    Type = {Diploma Thesis},
    Abstract = {[none]},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Roscher2008Lernen.pdf}
    }

  • B. Schmeing, “Analyse des Bewegungsmusters von Objekten,” Bachelor Thesis, 2008.
    [BibTeX]
    Die vorliegende Arbeit beschäftigt sich mit der Analyse der Bewegungsmuster von Fußgängern. Wir entwickeln Methoden zur Erfassung, Modellierung und Klassifikation der Bewegung und untersuchen ihre Eignung an realen Daten. Für das implementierte Klassifikationsverfahren wählten wir einen diskriminativen Ansatz, d.h. Ziel der Klassifikation ist einzig die Unterscheidung verschiedener Bewegungsmuster. Auf die Realisierung eines generativen Ansatzes, der auch die Erzeugung synthetischer Bewegungsmuster erlaubt, wurde verzichtet. Der Algorithmus soll die Bewegungsmuster "Gehen", "Hinken" und "Laufen" anhand von Merkmalen unterscheiden, die aus der Eigenbewegung mehrerer Probanden abgeleitet sind. Als Datengrundlage dienen mittels einer fest an der Brust angebrachten Kamera aufgenommene Rotationszeitreihen der Eigenbewegung der Probanden. Für jedes Bewegungsmuster stehen ca. 200 Videosequenzen von 5 Sekunden Länge (je 150 Bilder) zur Verfügung; die Rotationszeitreihen werden aus den Rotationen zwischen aufeinanderfolgenden Bildern erzeugt. Als Merkmalsvektoren für die Klassifikation dienen die Leistungsspektren der Rotationszeitreihen. Die Klassifikation basiert auf Fisher’s Linearer Diskriminante. Dabei werden in der Trainingsphase die Merkmalsvektoren von 378 Bildfolgen bearbeitet. Im ersten Schritt findet eine Projektion in den Entscheidungsraum mittels Linearer Diskriminanzanalyse (LDA) statt. Die Projektion ist so gewählt, dass sich die Klassen im Entscheidungsraum maximal unterscheiden. Im zweiten Schritt wird die Verteilung der projizierten Datenpunkte für jede Klasse bestimmt. Eine zuvor durchgeführte Dimensionsreduktion mittels Hauptkomponentenanalyse (PCA) reduziert die Dimension des Klassifikationsproblems und verbessert so die Numerik bei der LDA. Nun können weitere Daten in den Entscheidungsraum projiziert und klassifiziert werden. Dabei wird jeder Datenpunkt der Klasse zugeordnet, bei der die Mahalanobis-Distanz zum Mittelpunkt der Klasse minimal ist.
Sowohl die Aufnahme der Bewegung als auch die Bestimmung der Rotationszeitreihen funktioniert unter den bei der Bachelorarbeit vorliegenden Bedingungen zuverlässig. Es kam allerdings bei der Bewegungsart "Laufen" bei 8 aus 200 Bildfolgen zu Fehlern bei der Rotationsbestimmung; diese Bildfolgen wurden aus der Datenmenge ausgeschieden. Als Ergebnis der Trainingsphase ergaben sich neben der Projektionsmatrix in den Entscheidungsraum die Verteilungen der projizierten Trainingsdaten. Die Klassen sind im Entscheidungsraum gut unterscheidbar. Lediglich zwischen den Klassen "Gehen" und "Hinken" existieren Ausreißer, die nahe am Mittelwert der jeweils anderen Klasse liegen. Der implementierte Algorithmus ist in der Lage, die einzelnen Bewegungsmuster zuverlässig zu entscheiden. Von 201 Testdatensätzen konnten 199 korrekt zugeordnet werden. Die Fehlzuordnungen traten zwischen den Klassen "Gehen" und "Hinken" auf. Der zur Klassifikation der Bewegungsmuster verwendete Ansatz lässt sich gut auf die Analyse weiterer Bewegungsmuster ausweiten.

    @MastersThesis{Schmeing2008Analyse,
    Title = {Analyse des Bewegungsmusters von Objekten},
    Author = {Schmeing, Benno},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2008},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Ing. Richard Steffen},
    Type = {Bachelor Thesis},
    Abstract = {Die vorliegende Arbeit besch\"aftigt sich mit der Analyse der Bewegungsmuster von Fu{\ss}g\"angern. Wir entwickeln Methoden zur Erfassung, Modellierung und Klassifikation der Bewegung und untersuchen ihre Eignung an realen Daten. F\"ur das implementierte Klassifikationsverfahren w\"ahlten wir einen diskriminativen Ansatz, d.h. Ziel der Klassifikation ist einzig die Unterscheidung verschiedener Bewegungsmuster. Auf die Realisierung eines generativen Ansatzes, der auch die Erzeugung synthetischer Bewegungsmuster erlaubt, wurde verzichtet. Der Algorithmus soll die Bewegungsmuster "Gehen", "Hinken" und "Laufen" anhand von Merkmalen unterscheiden, die aus der Eigenbewegung mehrerer Probanden abgeleitet sind. Als Datengrundlage dienen mittels einer fest an der Brust angebrachten Kamera aufgenommene Rotationszeitreihen der Eigenbewegung der Probanden. F\"ur jedes Bewegungsmuster stehen ca. 200 Videosequenzen von 5 Sekunden L\"ange (je 150 Bilder) zur Verf\"ugung; die Rotationszeitreihen werden aus den Rotationen zwischen aufeinanderfolgenden Bildern erzeugt. Als Merkmalsvektoren f\"ur die Klassifikation dienen die Leistungsspektren der Rotationszeitreihen. Die Klassifikation basiert auf Fisher's Linearer Diskriminante. Dabei werden in der Trainingsphase die Merkmalsvektoren von 378 Bildfolgen bearbeitet. Im ersten Schritt findet eine Projektion in den Entscheidungsraum mittels Linearer Diskriminanzanalyse (LDA) statt. Die Projektion ist so gew\"ahlt, dass sich die Klassen im Entscheidungsraum maximal unterscheiden. Im zweiten Schritt wird die Verteilung der projizierten Datenpunkte f\"ur jede Klasse bestimmt. Eine zuvor durchgef\"uhrte Dimensionsreduktion mittels Hauptkomponentenanalyse (PCA) reduziert die Dimension des Klassifikationsproblems und verbessert so die Numerik bei der LDA. Nun k\"onnen weitere Daten in den Entscheidungsraum projiziert und klassifiziert werden.
Dabei wird jeder Datenpunkt der Klasse zugeordnet, bei der die Mahalanobis-Distanz zum Mittelpunkt der Klasse minimal ist. Sowohl die Aufnahme der Bewegung als auch die Bestimmung der Rotationszeitreihen funktioniert unter den bei der Bachelorarbeit vorliegenden Bedingungen zuverl\"assig. Es kam allerdings bei der Bewegungsart "Laufen" bei 8 aus 200 Bildfolgen zu Fehlern bei der Rotationsbestimmung; diese Bildfolgen wurden aus der Datenmenge ausgeschieden. Als Ergebnis der Trainingsphase ergaben sich neben der Projektionsmatrix in den Entscheidungsraum die Verteilungen der projizierten Trainingsdaten. Die Klassen sind im Entscheidungsraum gut unterscheidbar. Lediglich zwischen den Klassen "Gehen" und "Hinken" existieren Ausrei{\ss}er, die nahe am Mittelwert der jeweils anderen Klasse liegen. Der implementierte Algorithmus ist in der Lage, die einzelnen Bewegungsmuster zuverl\"assig zu entscheiden. Von 201 Testdatens\"atzen konnten 199 korrekt zugeordnet werden. Die Fehlzuordnungen traten zwischen den Klassen "Gehen" und "Hinken" auf. Der zur Klassifikation der Bewegungsmuster verwendete Ansatz l\"asst sich gut auf die Analyse weiterer Bewegungsmuster ausweiten.}
    }

  • J. Siegemund, “Trajektorienrekonstruktion von bewegten Objekten aus Stereobildfolgen,” Diploma Thesis, 2008.
    [BibTeX] [PDF]
    Die vorliegende Arbeit beschäftigt sich mit der Rekonstruktion der räumlichen Trajektorienparameter bewegter Objekte anhand von kalibrierten Stereobildsequenzen. Zur Lösung dieses Problems wird ein Verfahren auf der Grundlage eines robusten Ausgleichungsmodells eingeführt. Als Eingabedaten dienen vorsegmentierte Bildpunkte des Objektes mit bekannter stereoskopischer und temporaler Zuordnung. Auf Basis dieser Bildinformation wird zusätzlich zu den Trajektorienparametern eine dreidimensionale Punktwolke in einem lokalen Objektsystem geschätzt, welche Hinweise auf Form und Ausmaße des beobachteten Objektes liefert. Darüber hinaus werden Techniken zur Steigerung der Effizienz und Robustheit des Verfahrens vorgestellt und es wird erläutert, wie mögliches Vorwissen in den Ausgleichungsprozess eingebracht werden kann. Der Anwendungsfokus in Beispielen und Ergebnissen liegt auf der Bestimmung der Trajektorien von Fremdfahrzeugen mittels Eigenfahrzeugsensorik zum Zwecke der Kollisionsvermeidung. Diese Informationen sind für Fahrassistenzsysteme von großer Bedeutung und für die Daimler AG als Kooperationspartner dieser Arbeit von besonderem Interesse. Das Verfahren selbst wird jedoch auf kein spezielles Anwendungsgebiet beschränkt. Anhand von Experimenten auf simulierten Szenen wird ein systematischer Fehler in den geschätzten Objektpositionen beobachtet. Das Auftreten dieses Fehlers wird motiviert und Methoden zur Behebung werden vorgestellt. Weiterhin zeigen Experimente auf realen Aufnahmen die Notwendigkeit einer zeitlichen Glättung der geschätzten Trajektorienparameter. Aus diesem Grund wird eine adaptive Glättungsmethode eingeführt, deren Strenge darüber hinaus anwendungsbezogen gesteuert werden kann.
Die Ergebnisse zeigen, dass das Verfahren, trotz hoher Ausreißeranteile in den Eingabedaten, im Stande ist, die Bewegungstrajektorie eines Objektes mit hoher Genauigkeit und Robustheit zu bestimmen und gleichzeitig die dreidimensionale Form des beobachteten Objektes zu rekonstruieren.

    @MastersThesis{Siegemund2008Trajektorienrekonstruktion,
    Title = {Trajektorienrekonstruktion von bewegten Objekten aus Stereobildfolgen},
    Author = {Siegemund, Jan},
    School = {University of Bonn In Zusammenarbeit mit dem Institut f\"ur Informatik der Universit\"at Bonn},
    Year = {2008},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Prof. Dr. Daniel Cremers},
    Type = {Diploma Thesis},
    Abstract = {Die vorliegende Arbeit besch\"aftigt sich mit der Rekonstruktion der r\"aumlichen Trajektorienparameter bewegter Objekte anhand von kalibrierten Stereobildsequenzen. Zur L\"osung dieses Problems wird ein Verfahren auf der Grundlage eines robusten Ausgleichungsmodells eingef\"uhrt. Als Eingabedaten dienen vorsegmentierte Bildpunkte des Objektes mit bekannter stereoskopischer und temporaler Zuordnung. Auf Basis dieser Bildinformation wird zus\"atzlich zu den Trajektorienparametern eine dreidimensionale Punktwolke in einem lokalen Objektsystem gesch\"atzt, welche Hinweise auf Form und Ausma{\ss}e des beobachteten Objektes liefert. Dar\"uber hinaus werden Techniken zur Steigerung der Effizienz und Robustheit des Verfahrens vorgestellt und es wird erl\"autert, wie m\"ogliches Vorwissen in den Ausgleichungsprozess eingebracht werden kann. Der Anwendungsfokus in Beispielen und Ergebnissen liegt auf der Bestimmung der Trajektorien von Fremdfahrzeugen mittels Eigenfahrzeugsensorik zum Zwecke der Kollisionsvermeidung. Diese Informationen sind f\"ur Fahrassistenzsysteme von gro{\ss}er Bedeutung und f\"ur die Daimler AG als Kooperationspartner dieser Arbeit von besonderem Interesse. Das Verfahren selbst wird jedoch auf kein spezielles Anwendungsgebiet beschr\"ankt. Anhand von Experimenten auf simulierten Szenen wird ein systematischer Fehler in den gesch\"atzten Objektpositionen beobachtet. Das Auftreten dieses Fehlers wird motiviert und Methoden zur Behebung werden vorgestellt. Weiterhin zeigen Experimente auf realen Aufnahmen die Notwendigkeit einer zeitlichen Gl\"attung der gesch\"atzten Trajektorienparameter. Aus diesem Grund wird eine adaptive Gl\"attungsmethode eingef\"uhrt, deren Strenge dar\"uber hinaus anwendungsbezogen gesteuert werden kann.
Die Ergebnisse zeigen, dass das Verfahren, trotz hoher Ausrei{\ss}eranteile in den Eingabedaten, im Stande ist, die Bewegungstrajektorie eines Objektes mit hoher Genauigkeit und Robustheit zu bestimmen und gleichzeitig die dreidimensionale Form des beobachteten Objektes zu rekonstruieren.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Siegemund2008Trajektorienrekonstruktion.pdf}
    }

  • C. Stachniss, M. Bennewitz, G. Grisetti, S. Behnke, and W. Burgard, “How to Learn Accurate Grid Maps with a Humanoid,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Pasadena, CA, USA, 2008.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Stachniss2008,
    Title = {How to Learn Accurate Grid Maps with a Humanoid},
    Author = {Stachniss, C. and Bennewitz, M. and Grisetti, G. and Behnke, S. and Burgard, W.},
    Booktitle = ICRA,
    Year = {2008},
    Address = {Pasadena, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss08icra.pdf}
    }

  • C. Stachniss, C. Plagemann, A. Lilienthal, and W. Burgard, “Gas Distribution Modeling using Sparse Gaussian Process Mixture Models,” in Proceedings of Robotics: Science and Systems (RSS) , Zurich, Switzerland, 2008.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Stachniss2008a,
    Title = {Gas Distribution Modeling using Sparse Gaussian Process Mixture Models},
    Author = {Stachniss, C. and Plagemann, C. and Lilienthal, A. and Burgard, W.},
    Booktitle = RSS,
    Year = {2008},
    Address = {Zurich, Switzerland},
    Note = {To appear},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss08rss.pdf}
    }

  • B. Steder, G. Grisetti, C. Stachniss, and W. Burgard, “Learning Visual Maps using Cameras and Inertial Sensors,” in Workshop on Robotic Perception, International Conference on Computer Vision Theory and Applications , Funchal, Madeira, Portugal, 2008.
    [BibTeX]
    [none]
    @InProceedings{Steder2008,
    Title = {Learning Visual Maps using Cameras and Inertial Sensors},
    Author = {Steder, B. and Grisetti, G. and Stachniss, C. and Burgard, W.},
    Booktitle = {Workshop on Robotic Perception, International Conference on Computer Vision Theory and Applications},
    Year = {2008},
    Address = {Funchal, Madeira, Portugal},
    Note = {To appear},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

  • R. Steffen, “A Robust Iterative Kalman Filter Based On Implicit Measurement Equations,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2008-08, 2008.
    [BibTeX] [PDF]
    In the field of robotics and computer vision, recursive estimation of time dependent processes is one of the key tasks. Usually Kalman filter based techniques are used, which rely on explicit model functions that directly and explicitly describe the effect of the parameters on the observations. However, some problems naturally result in implicit constraints between the observations and the parameters, for instance all those resulting in homogeneous equation systems. By implicit we mean that the constraints are given by equations that are not easily solvable for the observation vector. We derive an iterative extended Kalman filter framework based on implicit measurement equations. In a wide field of applications the possibility to use implicit constraints simplifies the process of specifying suitable measurement equations. As an extension we introduce a robustification technique similar to [Ting et al. 2007] and [Huber 1981], which allows the presented estimation scheme to cope with outliers. Furthermore we present results for the application of the proposed framework to the structure-from-motion task in the case of an image sequence acquired by an airborne vehicle.

    @TechReport{Steffen2008Robust,
    Title = {A Robust Iterative Kalman Filter Based On Implicit Measurement Equations},
    Author = {Steffen, Richard},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2008},
    Month = aug,
    Number = {TR-IGG-P-2008-08},
    Abstract = {In the field of robotics and computer vision, recursive estimation of time dependent processes is one of the key tasks. Usually Kalman filter based techniques are used, which rely on explicit model functions that directly and explicitly describe the effect of the parameters on the observations. However, some problems naturally result in implicit constraints between the observations and the parameters, for instance all those resulting in homogeneous equation systems. By implicit we mean that the constraints are given by equations that are not easily solvable for the observation vector. We derive an iterative extended Kalman filter framework based on implicit measurement equations. In a wide field of applications the possibility to use implicit constraints simplifies the process of specifying suitable measurement equations. As an extension we introduce a robustification technique similar to [Ting et al. 2007] and [Huber 1981], which allows the presented estimation scheme to cope with outliers. Furthermore we present results for the application of the proposed framework to the structure-from-motion task in the case of an image sequence acquired by an airborne vehicle.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Steffen2009Robust.pdf}
    }

  • R. Steffen and W. Förstner, “On Visual Real Time Mapping for Unmanned Aerial Vehicles,” in 21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS) , Beijing, China, 2008, p. 57-62 Part B3a.
    [BibTeX] [PDF]
    This paper addresses the challenge of a real-time capable vision system in the task of trajectory and surface reconstruction from aerial image sequences. The goal is to present the design, methods and strategies of a real-time capable vision system solving the mapping task for secure navigation of small UAVs with a single camera. This includes the estimation process, map representation, initialization processes, loop closing detection and exploration strategies. The estimation process is based on the Kalman filter and a landmark based map representation. We introduce a new initialization method for newly observed landmarks. We show that the initialization process and the exploration strategy have a significant effect on the accuracy of the estimated camera trajectory and of the map.

    @InProceedings{Steffen2008Visual,
    Title = {On Visual Real Time Mapping for Unmanned Aerial Vehicles},
    Author = {Steffen, Richard and F\"orstner, Wolfgang},
    Booktitle = {21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS)},
    Year = {2008},
    Address = {Beijing, China},
    Pages = {57-62 Part B3a},
    Abstract = {This paper addresses the challenge of a real-time capable vision system in the task of trajectory and surface reconstruction from aerial image sequences. The goal is to present the design, methods and strategies of a real-time capable vision system solving the mapping task for secure navigation of small UAVs with a single camera. This includes the estimation process, map representation, initialization processes, loop closing detection and exploration strategies. The estimation process is based on the Kalman filter and a landmark based map representation. We introduce a new initialization method for newly observed landmarks. We show that the initialization process and the exploration strategy have a significant effect on the accuracy of the estimated camera trajectory and of the map.},
    City = {Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Steffen2008Visual.pdf}
    }

  • S. Steneberg, “Robuste Relative Orientierung kalibrierter Kameras mit Bildkanten,” Diploma Thesis, 2008.
    [BibTeX]
    [none]
    @MastersThesis{Steneberg2008Robuste,
    Title = {Robuste Relative Orientierung kalibrierter Kameras mit Bildkanten},
    Author = {Steneberg, Stephan},
    School = {University of Bonn, University of Koblenz In Zusammenarbeit mit der Arbeitsgruppe Aktives Sehen der Universit\"at Koblenz},
    Year = {2008},
    Note = {Betreuung: Dipl.-Inform. Timo Dickscheid, Prof. Dr.-Ing. Wolfgang F\"orstner},
    Type = {Diploma Thesis},
    Abstract = {[none]}
    }

  • T. Udelhoven, B. Waske, S. van der Linden, and S. Heitz, “Land-Cover Classification of Hypertemporal Data using Ensemble Systems,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) , 2008. doi:10.1109/IGARSS.2008.4779524
    [BibTeX]
    This study addresses the problem of multiannual supervised land-cover classification using hypertemporal data from the "Mediterranean Extended Daily One Km AVHRR Data Set" (MEDOKADS) and a decision fusion approach. 10-day NDVI maximum value composite data from the Iberian Peninsula for every year in the observation period (1989 to 2004) were preprocessed using Minimum Noise Fraction (MNF) transformation. The MNF scores from each year were then individually pre-classified using support vector machines (SVM). The continuous outputs from the SVM, which can be interpreted in terms of posterior probabilities, were used to train a second-order SVM classifier to merge the information within consecutive years. The decision fusion strategy significantly increased the classification accuracy compared to pre-classification results. Increasing the temporal range in decision fusion from a two-year to a five-year period enhanced the total accuracy. The outcomes from the selected approach were compared with another ensemble method (majority voting) and with a single SVM expert that was trained for comparable multiannual periods. The results suggest that decision fusion is superior to the other methods.

    @InProceedings{Udelhoven2008Land,
    Title = {Land-Cover Classification of Hypertemporal Data using Ensemble Systems},
    Author = {Udelhoven, T. and Waske, Bj\"orn and van der Linden, Sebastian and Heitz, S.},
    Booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2008},
    Abstract = {This study addresses the problem of multiannual supervised land-cover classification using hypertemporal data from the "Mediterranean Extended Daily One Km AVHRR Data Set" (MEDOKADS) and a decision fusion approach. 10-day NDVI maximum value composite data from the Iberian Peninsula for every year in the observation period (1989 to 2004) were preprocessed using Minimum Noise Fraction (MNF) transformation. The MNF scores from each year were then individually pre-classified using support vector machines (SVM). The continuous outputs from the SVM, which can be interpreted in terms of posterior probabilities, were used to train a second-order SVM classifier to merge the information within consecutive years. The decision fusion strategy significantly increased the classification accuracy compared to pre-classification results. Increasing the temporal range in decision fusion from a two-year to a five-year period enhanced the total accuracy. The outcomes from the selected approach were compared with another ensemble method (majority voting) and with a single SVM expert that was trained for comparable multiannual periods. The results suggest that decision fusion is superior to the other methods.},
    Doi = {10.1109/IGARSS.2008.4779524},
    Keywords = {AD 1989 to 2004;Iberian Peninsula;MEDOKADS;MNF-transformation;Mediterranean Extended Daily One Km AVHRR Data Set;Minimum Noise Fraction transformation;NDVI;decision fusion approach;ensemble systems;hypertemporal data;second-order SVM classifier;supervised land-cover classification;support-vector machines;geophysics computing;image classification;sensor fusion;support vector machines;vegetation;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • B. Waske and J. A. Benediktsson, “Semi-Supervised Classifier Ensembles for Classifying Remote Sensing Data,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) , 2008. doi:10.1109/IGARSS.2008.4778938
    [BibTeX]
    The analysis of data sets acquired within different time periods over the same geographical region is interesting for updating land cover maps and operational monitoring systems. In this context an adequate and temporally stable classification approach is worthwhile. In the presented study a classifier ensemble (i.e., random forests) is trained on a multispectral image from an agricultural region from one year and is successively modified and adapted to classify a data set from another year. A detailed accuracy assessment clearly demonstrates that the proposed modification of the classifier significantly improves the overall accuracy, whereas a simple transfer of a classifier to a data set from another year is limited and results in decreased accuracy. Thus the proposed approach can be recommended for classifying multiannual data sets and updating land cover maps.

    @InProceedings{Waske2008Semi,
    Title = {Semi-Supervised Classifier Ensembles for Classifying Remote Sensing Data},
    Author = {Waske, Bj\"orn and Benediktsson, Jon Atli},
    Booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2008},
    Abstract = {The analysis of data sets acquired within different time periods over the same geographical region is interesting for updating land cover maps and operational monitoring systems. In this context an adequate and temporally stable classification approach is worthwhile. In the presented study a classifier ensemble (i.e., random forests) is trained on a multispectral image from an agricultural region from one year and is successively modified and adapted to classify a data set from another year. A detailed accuracy assessment clearly demonstrates that the proposed modification of the classifier significantly improves the overall accuracy, whereas a simple transfer of a classifier to a data set from another year is limited and results in decreased accuracy. Thus the proposed approach can be recommended for classifying multiannual data sets and updating land cover maps.},
    Doi = {10.1109/IGARSS.2008.4778938},
    Keywords = {agricultural region;data analysis;land cover maps;multispectral image;operational monitoring systems;temporally stable classification approach;agriculture;data analysis;image classification;terrain mapping;vegetation mapping;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • B. Waske and S. van der Linden, “Classifying multilevel imagery from SAR and optical sensors by decision fusion,” IEEE Transactions on Geoscience and Remote Sensing, vol. 46, iss. 5, pp. 1457-1466, 2008. doi:10.1109/TGRS.2008.916089
    [BibTeX]
    A strategy for the joint classification of multiple segmentation levels from multisensor imagery is introduced by using synthetic aperture radar and optical data. At first, the two data sets are separately segmented, creating independent aggregation levels at different scales. Each individual level from the two sensors is then preclassified by a support vector machine (SVM). The original outputs of each SVM, i.e., images showing the distances of the pixels to the hyperplane fitted by the SVM, are used in a decision fusion to determine the final classes. The fusion strategy is based on the application of an additional classifier, which is applied on the preclassification results. Both a second SVM and random forests (RF) were tested for the decision fusion. The results are compared with SVM and RF applied to the full data set without preclassification. Both the integration of multilevel information and the use of multisensor imagery increase the overall accuracy. It is shown that the classification of multilevel-multisource data sets with SVM and RF is feasible and does not require a definition of ideal aggregation levels. The proposed decision fusion approach that applies RF to the preclassification outperforms all other approaches.

    @Article{Waske2008Classifying,
    Title = {Classifying multilevel imagery from SAR and optical sensors by decision fusion},
    Author = {Waske, Bj\"orn and van der Linden, Sebastian},
    Journal = {IEEE Transactions on Geoscience and Remote Sensing},
    Year = {2008},
    Month = may,
    Number = {5},
    Pages = {1457--1466},
    Volume = {46},
    Abstract = {A strategy for the joint classification of multiple segmentation levels from multisensor imagery is introduced by using synthetic aperture radar and optical data. At first, the two data sets are separately segmented, creating independent aggregation levels at different scales. Each individual level from the two sensors is then preclassified by a support vector machine (SVM). The original outputs of each SVM, i.e., images showing the distances of the pixels to the hyperplane fitted by the SVM, are used in a decision fusion to determine the final classes. The fusion strategy is based on the application of an additional classifier, which is applied on the preclassification results. Both a second SVM and random forests (RF) were tested for the decision fusion. The results are compared with SVM and RF applied to the full data set without preclassification. Both the integration of multilevel information and the use of multisensor imagery increase the overall accuracy. It is shown that the classification of multilevel-multisource data sets with SVM and RF is feasible and does not require a definition of ideal aggregation levels. The proposed decision fusion approach that applies RF to the preclassification outperforms all other approaches.},
    Doi = {10.1109/TGRS.2008.916089},
    Owner = {waske},
    Timestamp = {2012.09.04}
    }

  • S. Wenzel, M. Drauschke, and W. Förstner, “Detection of repeated structures in facade images,” Pattern Recognition and Image Analysis, vol. 18, iss. 3, pp. 406-411, 2008. doi:10.1134/S1054661808030073
    [BibTeX] [PDF]
    We present a method for detecting repeated structures, which is applied on facade images for describing the regularity of their windows. Our approach finds and explicitly represents repetitive structures and thus gives initial representation of facades. No explicit notion of a window is used; thus, the method also appears to be able to identify other manmade structures, e.g., paths with regular tiles. A method for detection of dominant symmetries is adapted for detection of multiply repeated structures. A compact description of the repetitions is derived from the detected translations in the image by a heuristic search method and the criterion of the minimum description length.

    @Article{Wenzel2008Detection,
    Title = {Detection of repeated structures in facade images},
    Author = {Wenzel, Susanne and Drauschke, Martin and F\"orstner, Wolfgang},
    Journal = {Pattern Recognition and Image Analysis},
    Year = {2008},
    Month = sep,
    Number = {3},
    Pages = {406--411},
    Volume = {18},
    Abstract = {We present a method for detecting repeated structures, which is applied on facade images for describing the regularity of their windows. Our approach finds and explicitly represents repetitive structures and thus gives initial representation of facades. No explicit notion of a window is used; thus, the method also appears to be able to identify other manmade structures, e.g., paths with regular tiles. A method for detection of dominant symmetries is adapted for detection of multiply repeated structures. A compact description of the repetitions is derived from the detected translations in the image by a heuristic search method and the criterion of the minimum description length.},
    Doi = {10.1134/S1054661808030073},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Wenzel2008Detection.pdf}
    }

  • S. Wenzel and W. Förstner, “Semi-supervised incremental learning of hierarchical appearance models,” in 21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS), Beijing, China, 2008, pp. 399–404, Part B3b-2.
    [BibTeX] [PDF]
    We propose an incremental learning scheme for learning a class hierarchy for objects that typically occur multiple times in images. Given one example of an object that appears several times in the image, e.g. as part of a repetitive structure, we propose a method for identifying prototypes using an unsupervised clustering procedure. These prototypes are used for building a hierarchical appearance-based model of the envisaged class in a supervised manner. For the classification of new instances detected in new images we use linear subspace methods that combine discriminative and reconstructive properties. The methods used are chosen to be capable of an incremental update. We test our approach on facade images with repetitive windows and balconies. We use the learned object models to find new instances in other images, e.g. the neighbouring facade, and update the already learned models with the new instances.

    @InProceedings{Wenzel2008Semi,
    Title = {Semi-supervised incremental learning of hierarchical appearance models},
    Author = {Wenzel, Susanne and F\"orstner, Wolfgang},
    Booktitle = {21st Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS)},
    Year = {2008},
    Address = {Beijing, China},
    Pages = {399--404 Part B3b-2},
    Abstract = {We propose an incremental learning scheme for learning a class hierarchy for objects typically occurring multiple in images. Given one example of an object that appears several times in the image, e.g. is part of a repetitive structure, we propose a method for identifying prototypes using an unsupervised clustering procedure. These prototypes are used for building a hierarchical appearance based model of the envisaged class in a supervised manner. For classification of new instances detected in new images we use linear subspace methods that combine discriminative and reconstructive properties. The used methods are chosen to be capable for an incremental update. We test our approach on facade images with repetitive windows and balconies. We use the learned object models to find new instances in other images, e. g. the neighbouring facade and update already learned models with the new instances.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Wenzel2008Semi.pdf}
    }

  • K. M. Wurm, C. Stachniss, and W. Burgard, “Coordinated Multi-Robot Exploration using a Segmentation of the Environment,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Nice, France, 2008.
    [BibTeX] [PDF]
    @InProceedings{Wurm2008,
    Title = {Coordinated Multi-Robot Exploration using a Segmentation of the Environment},
    Author = {K.M. Wurm and Stachniss, C. and W. Burgard},
    Booktitle = {Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
    Year = {2008},
    Address = {Nice, France},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/wurm08iros.pdf}
    }

  • Robotics: Science and Systems III, W. Burgard, O. Brock, and C. Stachniss, Eds., MIT Press, 2008.
    [BibTeX]
    @Book{Burgard2008,
    Title = {Robotics: Science and Systems III},
    Editor = {Burgard, W. and Brock, O. and Stachniss, C.},
    Publisher = {MIT Press},
    Year = {2008},
    Month = {March},
    Note = {In press},
    ISBN = {0262524848},
    Timestamp = {2014.04.24}
    }

2007

  • W. Burgard, C. Stachniss, and D. Haehnel, “Mobile Robot Map Learning from Range Data in Dynamic Environments,” in Autonomous Navigation in Dynamic Environments, C. Laugier and R. Chatila, Eds., Springer, 2007, vol. 35.
    [BibTeX]
    @InCollection{Burgard2007,
    Title = {Mobile Robot Map Learning from Range Data in Dynamic Environments},
    Author = {Burgard, W. and Stachniss, C. and Haehnel, D.},
    Booktitle = {Autonomous Navigation in Dynamic Environments},
    Publisher = {Springer},
    Year = {2007},
    Editor = {Laugier, C. and Chatila, R.},
    Series = {Springer Tracts in Advanced Robotics},
    Volume = {35},
    Timestamp = {2014.04.24}
    }

  • F. De Sanctis, “Untersuchungen zur automatisierten Generierung von digitalen Oberflächenmodellen aus mehreren extrem großmaßstäbigen Luftbildern,” Master Thesis, 2007.
    [BibTeX]
    This thesis investigates two automatic methods for dense surface reconstruction from large-scale images, assuming known interior and exterior orientation. The methods are available as the program MATCH-T by Inpho GmbH and as Blockmatch, an implementation of semi-global matching. Particular attention is paid to the height accuracy achievable from image configurations in the standard aerial case.

    @MastersThesis{DeSanctis2007Untersuchungen,
    Title = {Untersuchungen zur automatisierten Generierung von digitalen Oberfl\"achenmodellen aus mehreren extrem gro{\ss}ma{\ss}st\"abigen Luftbildern},
    Author = {De Sanctis, Federica},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2007},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Ing. Richard Steffen},
    Abstract = {Die vorliegende Arbeit untersucht zwei automatische Verfahren zur dichten Oberfl\"achenrekonstruktion mit gro{\ss}ma{\ss}st\"abigen Bildern. Dabei wird von einer bekannten inneren sowie \"au{\ss}ere Orientierung ausgegangen. Die Verfahren liegen mit den Programmen MATCH-T der Firma Inpho GmbH sowie eine Implementation des Semi-Global-Matching Blockmatch vor. Insbesondere soll auf die zu erreichende H\"ohengenauigkeit aus Bildanordnungen f\"ur den Standard-Luftbildfall eingegangen werden.},
    City = {Bonn}
    }

  • M. Drauschke, A. Brunn, K. Kulschewski, and W. Förstner, “Automatic Dodging of Aerial Images,” in Publikationen der DGPF: Von der Medizintechnik bis zur Planetenforschung – Photogrammetrie und Fernerkundung für das 21. Jahrhundert, Muttenz, Basel, 2007, pp. 173-180.
    [BibTeX] [PDF]
    We present an automated approach for the dodging of images, with which we edit digital images as is usually done with analogue images in dark-rooms. Millions of aerial images of all battlefields were taken during the Second World War. They were used intensively, e.g. for the observation of military movements, the documentation of the success and failure of military operations, and further planning. Today, the information in these images supports the removal of explosives from the Second World War and the identification of dangerous waste in the soil. In North Rhine-Westphalia, approximately 300,000 aerial images are being scanned to handle the huge amount of available data efficiently. The scanning is done with a gray value depth of 12 bits and a pixel size of 21 µm to gain both a high radiometric and a high geometric resolution of the images. Due to the photographic process used in the 1930s and 1940s and several reproductions, the digitized images are exposed very differently locally. Therefore, the images shall be improved by automated dodging. Global approaches mostly returned unsatisfying results; we therefore present a new approach based on local histogram equalization. Other methods, such as spreading the histogram or linear transformations of the histogram, manipulate the images either too much or not enough. For the implementation of our approach, we focus not only on the quality of the resulting images, but also on the robustness and performance of the algorithm. Thus, the technique can also be used for other applications concerning image improvement.

    @InProceedings{Drauschke2007Automatic,
    Title = {Automatic Dodging of Aerial Images},
    Author = {Drauschke, Martin and Brunn, Ansgar and Kulschewski, Kai and F\"orstner, Wolfgang},
    Booktitle = {Publikationen der DGPF: Von der Medizintechnik bis zur Planetenforschung - Photogrammetrie und Fernerkundung f\"ur das 21. Jahrhundert},
    Year = {2007},
    Address = {Muttenz, Basel},
    Editor = {Seyfert, Eckhardt},
    Month = jun,
    Pages = {173--180},
    Publisher = {DGPF},
    Volume = {16},
    Abstract = {We present an automated approach for the dodging of images, with which we edit digital images as it is usually done with analogue images in dark-rooms. Millions of aerial images of all battle fields were taken during the Second World War. They were intensively used, e.g. for the observation of military movements, the documentation of success and failure of military operations and further planning. Today, the information of these images supports the removal of explosives of the Second World War and the identification of dangerous waste in the soil. In North Rhine-Westphalia, approximately 300.000 aerial images are scanned to handle the huge amount of available data efficiently. The scanning is done with a gray value depth of 12 bits and a pixel size of 21 {\mu}m to gain both, a high radiometric and a high geometric resolution of the images. Due to the photographic process used in the 1930s and 1940s and several reproductions, the digitized images are exposed locally very differently. Therefore, the images shall be improved by automated dodging. Global approaches mostly returned unsatisfying results. Therefore, we present a new approach, which is based on local histogram equalization. Other methods as spreading the histogram or linear transformations of the histogram manipulate the images either too much or not enough. For the implementation of our approach, we focus not only on the quality of the resulting images, but also on robustness and performance of the algorithm. Thus, the technique can also be used for other applications concerning image improvements.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2007Automatic.pdf}
    }

  • W. Förstner and R. Steffen, “Online geocoding and evaluation of large scale imagery without GPS,” in Photogrammetric Week, D. Fritsch, Ed., Heidelberg: Wichmann Verlag, 2007.
    [BibTeX] [PDF]
    Large scale imagery will be increasingly available due to the low cost of video cameras and unmanned aerial vehicles. Its uses are broad: the documentation of traffic accidents, the effects of thunderstorms on agricultural farms, the 3D structure of industrial plants, or the monitoring of archeological excavations. The value of imagery depends on the availability of (1) information about the place and date of data capture, (2) information about the 3D structure of the object, and (3) information about the class or identity of the objects in the scene. Geocoding, problem (1), usually relies on the availability of GPS information, which however limits the use of imagery to outdoor applications. The paper discusses methods for the geocoding and geometrical evaluation of such imagery and especially addresses the question to what extent these methods can do without GPS.

    @Article{Forstner2007Online,
    Title = {Online geocoding and evaluation of large scale imagery without GPS},
    Author = {F\"orstner, Wolfgang and Steffen, Richard},
    Journal = {Photogrammetric Week, Heidelberg},
    Year = {2007},
    Publisher = {Wichmann Verlag},
    Abstract = {Large scale imagery will be increasingly available due to the low cost of video cameras and unmanned aerial vehicles. Their use is broad: the documentation of traffic accidents, the effects of thunderstorms onto agricultural farms, the 3Dstructure of industrial plants or the monitoring of archeological excavation. The value of imagery depends on the availability of (1) information about the place and date during data capture, (2) of information about the 3D-structure of the object and (3) of information about the class or identity of the objects in the scene. Geocoding, problem (1), usually relies the availability of GPS-information, which however limits the use of imagery to outdoor applications. The paper discusses methods for geocoding and geometrical evaluation of such imagery and especially adresses the question in how far the methods can do without GPS.},
    Editor = {D. Fritsch},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Forstner2007Online.pdf}
    }

  • N. Fischer, “3D-Reconstruction from Multiple Images on the GPU,” Diplomarbeit Master Thesis, 2007.
    [BibTeX]
    The automatic reconstruction of the visible surface of an object from multiple images is, in its full generality, an unsolved problem; under favourable conditions, however, successful approaches exist. Fast implementations of such approaches on graphics processors (GPUs) have become an attractive option thanks to the development of powerful interfaces and programming languages. This requires analysing the algorithms for their parallelizability, specifically with respect to the structures provided by GPUs. In this thesis, a surface reconstruction method is to be examined conceptually for its suitability for implementation on a GPU, realized as a prototype, and evaluated.

    @MastersThesis{Fischer20073D,
    Title = {3D-Reconstruction from Multiple Images on the GPU},
    Author = {Fischer, Norbert},
    School = {Institute of Photogrammetry, University of Bonn In Zusammenarbeit mit dem Institut f\"ur Informatik der Universit\"at Bonn},
    Year = {2007},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, PD Dr. Volker Steinhage},
    Type = {Diplomarbeit},
    Abstract = {Die automatische Rekonstruktion der sichtbaren Oberfl\"ache eines Objekts aus mehreren Bildern stellt ein in seiner Allgemeinheit ungel\"ostes Problem. Unter g\"unstigen Bedingungen sind jedoch erfolgreiche Ans\"atze vorhanden. Die schnelle Implementation solcher Ans\"atze auf Graphischen Prozessoren (GPU's) stellt wegen der Entwicklung leistungsf\"ahiger Schnittstellen und Programmiersprachen einen interessanten Ansatz dar. Dazu sind jedoch die Algorithmen auf ihre Parallelisierbarkeit zu untersuchen und zwar speziell in Bezug auf die von GPU's bereitgestellten Strukturen. In der Arbeit soll ein Verfahren zur Oberfl\"achenrekonstruktion in Hinblick auf seine Eignung f\"ur die Implementation auf einer GPU konzeptionell untersucht, prototypisch realisiert und untersucht werden.},
    City = {Bonn}
    }

  • C. Garvert, “Untersuchungen des SURF-Deskriptors zur Bildfolgenanalyse,” Diplomarbeit Master Thesis, 2007.
    [BibTeX]
    In the research field of simultaneous localization and mapping from monocular image sequences, tracking image points is an essential component. This requires that the image points be identifiable in every subsequent frame and that disparities remain small; occlusions or fast camera rotations can cause the point tracking to fail. In recent years, various descriptors for characterizing a point's neighbourhood have been developed that make it possible to match points even under extremely large disparities. Rotation- and scale-invariant descriptors in particular have gained enormously in importance. In this thesis, the rotation- and scale-invariant point descriptor SURF proposed by Bay et al. (2006) is to be implemented. In contrast to the SIFT descriptor of Lowe (2004), SURF uses integral images for considerably faster computation. The thesis investigates which parameters of the point descriptor influence accuracy and speed. Since the descriptor is based on a different kind of point feature, it is to be examined for which types of image data the SURF descriptor is superior or inferior to the SIFT descriptor. The SURF descriptor is to be tested and evaluated on synthetic data and, where possible, on real data (aerial images).

    @MastersThesis{Garvert2007Untersuchungen,
    Title = {Untersuchungen des SURF-Deskriptors zur Bildfolgenanalyse},
    Author = {Garvert, Christina},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2007},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Ing. Richard Steffen},
    Type = {Diplomarbeit},
    Abstract = {Im Forschungsbereich der gleichzeitigen Lokalisierung und Kartierung aus monokularen Bildfolgen ist das Verfolgen von Bildpunkten ein wesentlicher Bestandteil. Dies erfordert, dass die Bildpunkte in jedem Folgebild identifizierbar sind und nur geringe Disparit\"aten vorliegen. Abschattungen oder schnelle Rotationen der Kamera k\"onnen den Verlust der Bildpunktverfolgung bedeuten. In den letzten Jahren wurden verschiedene Deskriptoren zur Beschreibung der Punktumgebung entwickelt, mit denen es m\"oglich ist, eine Zuordnung von Punkten auch bei extrem gro{\ss}en Disparit\"aten zu erm\"oglichen. Insbesondere rotations- und skaleninvariante Deskriptoren haben in den letzten Jahren massiv an Bedeutung gewonnen. In der Diplomarbeit soll der von Bay et al. (2006) vorgestellte rotations- und skaleninvariante Punktdeskriptor SURF implementiert werden. Im Gegensatz zum Sift Deskriptor von Lowe (2004) werden beim SURF-Deskriptor Integral-Bilder zur wesentlich schnelleren Berechnung verwendet. In der Arbeit soll untersucht werden, welche Parameter des Punktdeskriptors Genauigkeit und Geschwindigkeit beeinflussen. Da der Deskriptor auf einer anderen Art von Punktmerkmalen basiert, soll \"uberpr\"uft werden, bei welchen Typen von Bilddaten der SURF Deskriptor zum Sift-Deskriptor \"uber- bzw. unterlegen ist. Der SURF Deskriptor soll an k\"unstlichen Daten und wenn m\"oglich an realen Daten (Luftbilder) getestet und evaluiert werden.},
    City = {Bonn}
    }

  • S. Grau, “Untersuchungen zur Rekonstruktion von Bohrungen aus Stereobildern,” Diplomarbeit Master Thesis, 2007.
    [BibTeX]
    In industry, stereoscopic measurement techniques are already used successfully for inspecting workpieces, with the highest demands placed on accuracy. Metallic surfaces in particular pose a special challenge due to their hard-to-predict reflection behaviour. Surface reconstruction is today generally solved pointwise using structured light; while this allows surfaces to be measured with high accuracy, determining precise coordinates of boreholes has remained an unsolved problem. The goal of this thesis is to precisely determine, in the photogrammetric system, the position of a borehole (a circle in space) observed with a stereo system. The reconstruction is based on subpixel-accurate edges of the borehole, among which edges caused by specular reflections also occur. In a first step, it is investigated how approximate values can be determined; in a second step, a robust adjustment model is to be implemented. The thesis examines under which conditions which accuracies of the reconstructed borehole coordinates can be achieved.

    @MastersThesis{Grau2007Untersuchungen,
    Title = {Untersuchungen zur Rekonstruktion von Bohrungen aus Stereobildern},
    Author = {Grau, Stephan},
    Year = {2007},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Ing. Richard Steffen},
    Type = {Diplomarbeit},
    Abstract = {In der Industrie werden stereoskopische Messtechniken zur Pr\"ufung von Werkst\"ucken bereits erfolgreich eingesetzt. Dabei werden h\"ochste Anforderungen an die Genauigkeit gestellt. Insbesondere metallene Oberfl\"achen stellen durch ihr schwer vorhersehbares Reflektionsverhalten eine besondere Herausforderung dar. Eine Oberfl\"achenrekonstruktion wird heute im Allgemeinen punktweise durch Einsatz von strukturiertem Licht gel\"ost. Damit werden zwar hoch genau Oberfl\"achen vermessen, jedoch bleibt die Bestimmung von pr\"azisen Koordinaten von Bohrl\"ochern ein bisher ungel\"ostes Problem. Diese Diplomarbeit setzt sich zum Ziel, die Position eines mit einem Stereosystem beobachteten Bohrlochs (Kreis im Raum) im photogrammetrischen System pr\"azise zu bestimmen. Grundlage der Rekonstruktion sind subpixelgenaue Kanten des Bohrlochs. Dabei treten auch Kanten aus Spiegelungen auf. In einem ersten Schritt soll untersucht werden, wie N\"aherungswerte bestimmt werden k\"onnen. In einem zweiten Schritt ist ein robustes Ausgleichsmodell zu realisieren. Es soll untersucht werden, unter welchen Bedingungen welche Genauigkeiten der Rekonstruktion der Bohrloch-Koordinate erreicht werden k\"onnen.},
    City = {Bonn}
    }

  • G. Grisetti, S. Grzonka, C. Stachniss, P. Pfaff, and W. Burgard, “Efficient Estimation of Accurate Maximum Likelihood Maps in 3D,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), San Diego, CA, USA, 2007.
    [BibTeX] [PDF]
    @InProceedings{Grisetti2007c,
    Title = {Efficient Estimation of Accurate Maximum Likelihood Maps in 3D},
    Author = {Grisetti, G. and Grzonka, S. and Stachniss, C. and Pfaff, P. and Burgard, W.},
    Booktitle = {Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
    Year = {2007},
    Address = {San Diego, CA, USA},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/grisetti07iros.pdf}
    }

  • G. Grisetti, C. Stachniss, and W. Burgard, “Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters,” IEEE Transactions on Robotics, vol. 23, iss. 1, pp. 34-46, 2007.
    [BibTeX] [PDF]
    @Article{Grisetti2007a,
    Title = {Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters},
    Author = {Grisetti, G. and Stachniss, C. and Burgard, W.},
    Journal = {IEEE Transactions on Robotics},
    Year = {2007},
    Number = {1},
    Pages = {34--46},
    Volume = {23},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/grisetti07tro.pdf}
    }

  • G. Grisetti, C. Stachniss, S. Grzonka, and W. Burgard, “A Tree Parameterization for Efficiently Computing Maximum Likelihood Maps using Gradient Descent,” in Proceedings of Robotics: Science and Systems (RSS) , Atlanta, GA, USA, 2007.
    [BibTeX] [PDF]
    @InProceedings{Grisetti2007b,
    Title = {A Tree Parameterization for Efficiently Computing Maximum Likelihood Maps using Gradient Descent},
    Author = {Grisetti, G. and Stachniss, C. and Grzonka, S. and Burgard, W.},
    Booktitle = {Proceedings of Robotics: Science and Systems (RSS)},
    Year = {2007},
    Address = {Atlanta, GA, USA},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/grisetti07rss.pdf}
    }

  • G. Grisetti, G. D. Tipaldi, C. Stachniss, W. Burgard, and D. Nardi, “Fast and Accurate SLAM with Rao-Blackwellized Particle Filters,” Robotics and Autonomous Systems, vol. 55, iss. 1, pp. 30-38, 2007.
    [BibTeX] [PDF]
    @Article{Grisetti2007,
    Title = {Fast and Accurate {SLAM} with Rao-Blackwellized Particle Filters},
    Author = {Grisetti, G. and Tipaldi, G.D. and Stachniss, C. and Burgard, W. and Nardi, D.},
    Journal = {Robotics and Autonomous Systems},
    Year = {2007},
    Number = {1},
    Pages = {30--38},
    Volume = {55},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/grisetti07jras.pdf}
    }

  • V. Heinzel, B. Waske, M. Braun, and G. Menz, “Remote sensing data assimilation for regional crop growth modelling in the region of Bonn (Germany),” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2007. doi:10.1109/IGARSS.2007.4423636
    [BibTeX]
    The study investigates the possibilities of improving the performance of the CERES-Wheat crop growth model by assimilating information derived from optical and SAR Earth observation data. Biophysical parameter retrieval was done with the water cloud model for SAR data, and the CLAIR model was applied to multispectral imagery. The CERES-Wheat model was calibrated using ground truth information. The re-initialization method with an adjustable planting date was selected as the assimilation strategy. Modelling results generally improved when using all the different kinds of remote sensing data. However, the best results were achieved by using information from the optical sensors only, and not by a synergetic time series of all available data.

    @InProceedings{Heinzel2007Remote,
    Title = {Remote sensing data assimilation for regional crop growth modelling in the region of Bonn (Germany)},
    Author = {Heinzel, V. and Waske, Bj\"orn and Braun, M. and Menz, Gunter},
    Booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2007},
    Abstract = {The study investigates the possibilities to improve the performance of CERES-Wheat crop growth model by assimilating information derived by optical and SAR Earth observation data. Biophysical parameter retrieval was done with the water cloud model for SAR data and the CLAIR model was applied to multispectral imagery. The CERES -Wheat model was calibrated using ground truth information. The re-initialization method with an adjustable planting date was selected as assimilation strategy. Modelling results generally improved by using all different kind of remote sensing data. However, best results were achieved by using information of the optical sensors only and not by a synergetic time series of all available data.},
    Doi = {10.1109/IGARSS.2007.4423636},
    Keywords = {Bonn;CERES-wheat crop growth model;CLAIR model;Germany;SAR Earth observation data;biophysical parameter retrieval;data assimilation;ground truth information;information assimilation;multispectral imagery;optical data;optical sensors;regional crop growth modelling;remote sensing;water cloud model;crops;data assimilation;geophysical signal processing;radar imaging;remote sensing by radar;spectral analysis;synthetic aperture radar;vegetation mapping;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • K. Herms, “Aufbau einer Datenbank unter Matlab zur Verwaltung von Bildsegmenten,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2007-05, 2007.
    [BibTeX] [PDF]
    Ein großer Aufgabenbereich der Bildverarbeitung ist die Merkmalsextraktion. Hierbei ist es zunächst erforderlich, die Bilder durch eine Segmentierung in konsistente Landkarten zu überführen. Wir verwenden zur Segmentierung einen Wasserscheidenalgorithmus. Die Verwaltung der Landkarten sollte möglichst effizient erfolgen. Der vorliegende Report erläutert zunächst unterschiedliche Speicherstrukturen und geht auf einen möglichen Ansatz zur konsistenten Umwandlung von Rasterdaten in Vektordaten ein. In einem zweiten Teil beschäftigen wir uns mit dem Aufbau einer Datenbank zur Verwaltung dieser Vektordaten von Matlab aus.

    @TechReport{Herms2007Aufbau,
    Title = {Aufbau einer Datenbank unter Matlab zur Verwaltung von Bildsegmenten},
    Author = {Herms, Kerstin},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2007},
    Month = {August},
    Number = {TR-IGG-P-2007-05},
    Abstract = {Ein gro{\ss}er Aufgabenbereich der Bildverarbeitung ist die Merkmalsextraktion. Hierbei ist es zun\"achst erforderlich, die Bilder durch eine Segmentierung in konsistente Landkarten zu \"uberf\"uhren. Wir verwenden zur Segmentierung einen Wasserscheidenalgorithmus. Die Verwaltung der Landkarten sollte m\"oglichst effizient erfolgen. Der vorliegende Report erl\"autert zun\"achst unterschiedliche Speicherstrukturen und geht auf einen m\"oglichen Ansatz zur konsistenten Umwandlung von Rasterdaten in Vektordaten ein. In einem zweiten Teil besch\"aftigen wir uns mit dem Aufbau einer Datenbank zur Verwaltung dieser Vektordaten von Matlab aus.},
    Keywords = {Segmentation, Algorithmic Geometry, GIS},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Herms2007Aufbau.pdf}
    }

  • K. Herms, “Exploration des Skalenraumes bezüglich der Gebäudeextraktion in terrestrischen Farbbildern,” Diplomarbeit Master Thesis, 2007.
    [BibTeX] [PDF]
    Die Gebäudedetektion in digitalen Bildern stellt wegen der Komplexität der Objekte ein schwieriges Problem der Mustererkennung dar. In neueren Ansätzen zur Gebäudeextraktion wird das Bild in verschiedenen Auflösungsstufen, im sog. Skalenraum analysiert. Auf diese Weise können für die Bildinterpretation hinderliche Details ausgeblendet werden. Dabei spielen stabile Regionen, d. s. Regionen die sich bei Veränderung der Auflösung wenig ändern, eine besondere Rolle. Von stabilen Regionen im Skalenraum kann man auf kontraststarke Übergänge zwischen Objekten im Bild schließen [Drauschke et al. 2006: Stabilität von Regionen im Skalenraum]. Diese Diplomarbeit soll untersuchen, ob über stabilen Bildregionen eine Klassifikation von Gebäuden und anderen Objekten durchgeführt werden kann. Dazu sollen Merkmale der stabilen Regionen ausgewählt und bestimmt werden und diese Merkmale auf ihre Skalenabhängigkeit hin überprüft werden. Mit Hilfe eines geeignet gewählten Klassifikators sollen Gebäude und andere Objekte identifiziert werden. An Hand von terrestrischen Bildern soll bewertet werden, ob die u. U. skalenabhängigen Merkmale für die Gebäudeextraktion geeignet sind.

    @MastersThesis{Herms2007Exploration,
    Title = {Exploration des Skalenraumes bez\"uglich der Geb\"audeextraktion in terrestrischen Farbbildern},
    Author = {Herms, Kerstin},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2007},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Inform. Martin Drauschke},
    Type = {Diplomarbeit},
    Abstract = {Die Geb\"audedetektion in digitalen Bildern stellt wegen der Komplexit\"at der Objekte ein schwieriges Problem der Mustererkennung dar. In neueren Ans\"atzen zur Geb\"audeextraktion wird das Bild in verschiedenen Aufl\"osungsstufen, im sog. Skalenraum analysiert. Auf diese Weise k\"onnen f\"ur die Bildinterpretation hinderliche Details ausgeblendet werden. Dabei spielen stabile Regionen, d. s. Regionen die sich bei Ver\"anderung der Aufl\"osung wenig \"andern, eine besondere Rolle. Von stabilen Regionen im Skalenraum kann man auf kontraststarke \"Uberg\"ange zwischen Objekten im Bild schlie{\ss}en [Drauschke et al. 2006: Stabilit\"at von Regionen im Skalenraum]. Diese Diplomarbeit soll untersuchen, ob \"uber stabilen Bildregionen eine Klassifikation von Geb\"auden und anderen Objekten durchgef\"uhrt werden kann. Dazu sollen Merkmale der stabilen Regionen ausgew\"ahlt und bestimmt werden und diese Merkmale auf ihre Skalenabh\"angigkeit hin \"uberpr\"uft werden. Mit Hilfe eines geeignet gew\"ahlten Klassifikators sollen Geb\"aude und andere Objekte identifiziert werden. An Hand von terrestrischen Bildern soll bewertet werden, ob die u. U. skalenabh\"angigen Merkmale f\"ur die Geb\"audeextraktion geeignet sind.},
    City = {Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Herms2007Exploration.pdf}
    }

  • K. Herms, “Extraktion relevanter Bildkanten für die Gebäudedetektion (umgesetzt in Matlab),” Department of Photogrammetry, University of Bonn, TR-IGG-P-2007-05, 2007.
    [BibTeX] [PDF]
    Um Gebäude in Bildern detektieren zu können, greifen wir als ein Merkmal auf Bildkanten zurück. Kanten liegen zum einen aus der Bildsegmentierung vor, können aber auch gezielt mit einer Kantenextraktion aus dem quadratischen Gradientenbild gewonnen werden. Durch die Auswahl von Kanten, die in beiden Fällen auftreten, wollen wir eine Einschränkung auf möglichst relevante Kanten vornehmen. Diese Arbeit beschäftigt sich mit dem Auffinden (und der Gewichtung) von Kantenzügen eines segmentierten Bildes, die einer Kante im quadratischen Gradientenbild zugeordnet werden können.

    @TechReport{Herms2007Extraktion,
    Title = {Extraktion relevanter Bildkanten f\"ur die Geb\"audedetektion (umgesetzt in Matlab)},
    Author = {Herms, Kerstin},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2007},
    Number = {TR-IGG-P-2007-05},
    Abstract = {Um Geb\"aude in Bildern detektieren zu k\"onnen, greifen wir als ein Merkmal auf Bildkanten zur\"uck. Kanten liegen zum einen aus der Bildsegmentierung vor, k\"onnen aber auch gezielt mit einer Kantenextraktion aus dem quadratischen Gradientenbild gewonnen werden. Durch die Auswahl von Kanten, die in beiden F\"allen auftreten, wollen wir eine Einschr\"ankung auf m\"oglichst relevante Kanten vornehmen. Diese Arbeit besch\"aftigt sich mit dem Auffinden (und der Gewichtung) von Kantenz\"ugen eines segmentierten Bildes, die einer Kante im quadratischen Gradientenbild zugeordnet werden k\"onnen.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Herms2007Extraktion.pdf}
    }

  • A. Janz, S. van der Linden, B. Waske, and P. Hostert, “imageSVM – A user-oriented tool for advanced classification of hyperspectral data using support vector machines,” in 5th Workshop of the EARSeL Special Interest Group Imaging Spectroscopy , 2007.
    [BibTeX] [PDF]
    An implementation for the classification of remote sensing images with support vector machines (SVM) is introduced. This tool, called imageSVM, allows user-friendly work, especially with large, highly resolved data sets, in the ENVI/IDL environment. imageSVM uses LIBSVM for the training of the SVM in combination with a user-defined grid search. Parameter settings can be set flexibly during the entire workflow, and time-efficient processing becomes possible. First tests underline the high accuracy of SVM classification using heterogeneous hyperspectral data and the good performance of SVM in the context of multi-sensoral studies.

    @InProceedings{Janz2007imageSVM,
    Title = {imageSVM - A user-oriented tool for advanced classification of hyperspectral data using support vector machines},
    Author = {Janz, Andreas and van der Linden, Sebastian and Waske, Bj\"orn and Hostert, Patrick},
    Booktitle = {5th Workshop of the EARSeL Special Interest Group Imaging Spectroscopy},
    Year = {2007},
    Abstract = {An implementation for the classification of remote sensing images with support vector machines (SVM) is introduced. This tool, called imageSVM, allows user-friendly work, especially with large, highly resolved data sets, in the ENVI/IDL environment. imageSVM uses LIBSVM for the training of the SVM in combination with a user-defined grid search. Parameter settings can be set flexibly during the entire workflow, and time-efficient processing becomes possible. First tests underline the high accuracy of SVM classification using heterogeneous hyperspectral data and the good performance of SVM in the context of multi-sensoral studies.},
    Owner = {waske},
    Timestamp = {2012.09.05},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Janz2007imageSVM.pdf}
    }

  • D. Joho, C. Stachniss, P. Pfaff, and W. Burgard, “Autonomous Exploration for 3D Map Learning,” in Autonome Mobile Systeme , Kaiserslautern, Germany, 2007.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Joho2007,
    Title = {Autonomous Exploration for 3D Map Learning},
    Author = {Joho, D. and Stachniss, C. and Pfaff, P. and Burgard, W.},
    Booktitle = AMS,
    Year = {2007},
    Address = {Kaiserslautern, Germany},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/joho07ams.pdf}
    }

  • F. Klughardt, “Einführung eines neuen Photokonsistenz-Maßes zur Oberflächenrekonstruktion in Luftbildern mittels eines Multilabel-Graphcut-Verfahrens,” Diplomarbeit Master Thesis, 2007.
    [BibTeX]
    Die Zuordnung von Bildern im Rahmen einer dreidimensionalen Rekonstruktion der abgebildeten Szene stellt in seiner Allgemeinheit ein bisher – im Vergleich zum visuellen System des Menschen – nur sehr unzureichend gelöstes Problem dar. Gleichzeitig gibt es eine große Zahl erfolgreicher Ansätze zur Lösung des Problems unter wohl definierten Bedingungen und eine beträchtliche Zahl in der Praxis angewendeter Verfahren. Die Rekonstruktion von Oberflächenmodellen aus Luftbildern stellt eine in jüngster Zeit mit den Entwicklungen von Google Earth und Virtual Earth zunehmend beachtete Problemstellung dar. Die Randbedingungen für eine Stereorekonstruktion sind hier wegen der meist günstig gewählten Lichtverhältnisse bei der Bildaufnahme und der meist vorhandenen diffusen Reflexionseigenschaften der Oberflächen vergleichsweise homogen, wenn man von Schatteneffekten und gelegentlichen spiegelnden Reflektionen absieht. Das in diesem Bereich übliche Maß zur Kennzeichnung der Ähnlichkeit zugeordneter Bildbereiche sind der normalisierte Korrelationskoeffizient und die Summe der quadratischen Intensitätsdifferenzen. Die beiden Maße stellen Extreme bzgl. der Invarianz gegen Beleuchtungsveränderung dar: Der Korrelationskoeffizient ist völlig invariant, die Summe der quadratischen Intensitätsdifferenzen nicht invariant. Das zentrale Anliegen der vorliegenden Arbeit ist die Entwicklung eines neuen Photokonsistenzmaßes, das zwischen diesen beiden Extremen zu vermitteln in der Lage ist. Für eine genäherte Oberflächenrekonstruktion wird das Multi-Level-Graphcut-Verfahren eingesetzt, das vergleichsweise effizient das komplexe Problem der Oberflächenrekonstruktion lösen kann und das neben dem neuen Photokonsistenzmaß flexibel Vorinformation über die Oberfläche integrieren und so das Problem von Unstetigkeiten angehen kann.

    @MastersThesis{Klughardt2007Einfuhrung,
    Title = {Einf\"uhrung eines neuen Photokonsistenz-Ma{\ss}es zur Oberfl\"achenrekonstruktion in Luftbildern mittels eines Multilabel-Graphcut-Verfahrens},
    Author = {Klughardt, Frank},
    School = {Institute of Photogrammetry, University of Bonn In Zusammenarbeit mit dem Institut f\"ur Informatik der Universit\"at Bonn},
    Year = {2007},
    Note = {Betreuung: Prof. Dr. Daniel Cremers, Prof. Dr.-Ing. Wolfgang F\"orstner},
    Type = {Diplomarbeit},
    Abstract = {Die Zuordnung von Bildern im Rahmen einer dreidimensionalen Rekonstruktion der abgebildeten Szene stellt in seiner Allgemeinheit ein bisher - im Vergleich zum visuellen System des Menschen - nur sehr unzureichend gel\"ostes Problem dar. Gleichzeitig gibt es eine gro{\ss}e Zahl erfolgreicher Ans\"atze zur L\"osung des Problems unter wohl definierten Bedingungen und eine betr\"achtliche Zahl in der Praxis angewendeter Verfahren. Die Rekonstruktion von Oberfl\"achenmodellen aus Luftbildern stellt eine in j\"ungster Zeit mit den Entwicklungen von Google Earth und Virtual Earth zunehmend beachtete Problemstellung dar. Die Randbedingungen f\"ur eine Stereorekonstruktion sind hier wegen der meist g\"unstig gew\"ahlten Lichtverh\"altnisse bei der Bildaufnahme und der meist vorhandenen diffusen Reflexionseigenschaften der Oberfl\"achen vergleichsweise homogen, wenn man von Schatteneffekten und gelegentlichen spiegelnden Reflektionen absieht. Das in diesem Bereich \"ubliche Ma{\ss} zur Kennzeichnung der \"Ahnlichkeit zugeordneter Bildbereiche sind der normalisierte Korrelationskoeffizient und die Summe der quadratischen Intensit\"atsdifferenzen. Die beiden Ma{\ss}e stellen Extreme bzgl. der Invarianz gegen Beleuchtungsver\"anderung dar: Der Korrelationskoeffizient ist v\"ollig invariant, die Summe der quadratischen Intensit\"atsdifferenzen nicht invariant. Das zentrale Anliegen der vorliegenden Arbeit ist die Entwicklung eines neuen Photokonsistenzma{\ss}es, das zwischen diesen beiden Extremen zu vermitteln in der Lage ist. F\"ur eine gen\"aherte Oberfl\"achenrekonstruktion wird das Multi-Level-Graphcut-Verfahren eingesetzt, das vergleichsweise effizient das komplexe Problem der Oberfl\"achenrekonstruktion l\"osen kann und das neben dem neuen Photokonsistenzma{\ss} flexibel Vorinformation \"uber die Oberfl\"ache integrieren und so das Problem von Unstetigkeiten angehen kann.},
    City = {Bonn}
    }

  • F. Korč and V. Hlaváč, “Detection and Tracking of Humans in Single View Sequences Using 2D Articulated Model,” in Human Motion – Understanding, Modeling, Capture and Animation, 1 ed., B. Rosenhahn, R. Klette, and D. Metaxas, Eds., Springer, 2007, vol. 36, pp. 105-130.
    [BibTeX] [PDF]
    This work contributes to detection and tracking of walking or running humans in surveillance video sequences. We propose a 2D model-based approach to the whole body tracking in a video sequence captured from a single camera view. An extended six-link biped human model is employed. We assume that a static camera observes the scene horizontally or obliquely. Persons can be seen from a continuum of views ranging from a lateral to a frontal one. We do not expect humans to be the only moving objects in the scene and to appear at the same scale at different image locations.

    @InBook{Korvc2007Human,
    Title = {Human Motion - Understanding, Modeling, Capture and Animation},
    Author = {Kor{\vc}, Filip and Hlav{\'a}{\vc}, V{\'a}clav},
    Chapter = {Detection and Tracking of Humans in Single View Sequences Using 2D Articulated Model},
    Editor = {Rosenhahn, Bodo and Klette, Reinhard and Metaxas, Dimitris},
    Pages = {105--130},
    Publisher = {Springer},
    Year = {2007},
    Edition = {1},
    Series = {Computational Imaging and Vision},
    Volume = {36},
    Abstract = {This work contributes to detection and tracking of walking or running humans in surveillance video sequences. We propose a 2D model-based approach to the whole body tracking in a video sequence captured from a single camera view. An extended six-link biped human model is employed. We assume that a static camera observes the scene horizontally or obliquely. Persons can be seen from a continuum of views ranging from a lateral to a frontal one. We do not expect humans to be the only moving objects in the scene and to appear at the same scale at different image locations.},
    ISBN = {978-1-4020-6692-4},
    Keywords = {human detection in video, model-based human detection},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Korvc2007Human.pdf}
    }

  • S. van der Linden, A. Janz, B. Waske, M. Eiden, and P. Hostert, “Classifying segmented hyperspectral data from a heterogeneous urban environment using support vector machines,” Journal of Applied Remote Sensing, vol. 1, p. 13543, 2007. doi:10.1117/1.2813466
    [BibTeX]
    Classifying remotely sensed images from urban environments is challenging. Urban land cover classes are spectrally heterogeneous and materials from different classes have similar spectral properties. Image segmentation has become a common preprocessing step that helped to overcome such problems. However, little attention has been paid to impacts of segmentation on the data’s spectral information content. Here, urban hyperspectral data is spectrally classified using support vector machines (SVM). By training a SVM on pixel information and applying it to the image before segmentation and after segmentation at different levels, the classification framework is maintained and the influence of the spectral generalization during image segmentation hence directly investigated. In addition, a straightforward multi-level approach was performed, which combines information from different levels into one final map. A stratified accuracy assessment by urban structure types is applied. The classification of the unsegmented data achieves an overall accuracy of 88.7%. Accuracy of the segment-based classification is lower and decreases with increasing segment size. Highest accuracies for the different urban structure types are achieved at varying segmentation levels. The accuracy of the multi-level approach is similar to that of unsegmented data but comprises the positive effects of more homogeneous segment-based classifications at different levels in one map.

    @Article{Linden2007Classifying,
    Title = {Classifying segmented hyperspectral data from a heterogeneous urban environment using support vector machines},
    Author = {van der Linden, Sebastian and Janz, Andreas and Waske, Bj\"orn and Eiden, Michael and Hostert, Patrick},
    Journal = {Journal of Applied Remote Sensing},
    Year = {2007},
    Pages = {013543},
    Volume = {1},
    Abstract = {Classifying remotely sensed images from urban environments is challenging. Urban land cover classes are spectrally heterogeneous and materials from different classes have similar spectral properties. Image segmentation has become a common preprocessing step that helped to overcome such problems. However, little attention has been paid to impacts of segmentation on the data's spectral information content. Here, urban hyperspectral data is spectrally classified using support vector machines (SVM). By training a SVM on pixel information and applying it to the image before segmentation and after segmentation at different levels, the classification framework is maintained and the influence of the spectral generalization during image segmentation hence directly investigated. In addition, a straightforward multi-level approach was performed, which combines information from different levels into one final map. A stratified accuracy assessment by urban structure types is applied. The classification of the unsegmented data achieves an overall accuracy of 88.7\%. Accuracy of the segment-based classification is lower and decreases with increasing segment size. Highest accuracies for the different urban structure types are achieved at varying segmentation levels. The accuracy of the multi-level approach is similar to that of unsegmented data but comprises the positive effects of more homogeneous segment-based classifications at different levels in one map.},
    Doi = {10.1117/1.2813466},
    Owner = {waske},
    Sn = {1931-3195},
    Tc = {11},
    Timestamp = {2012.09.04},
    Ut = {WOS:000260914300007},
    Z8 = {0},
    Z9 = {11},
    Zb = {2}
    }

  • S. van der Linden, B. Waske, and P. Hostert, “Towards an optimized use of the spectral angle space,” in 5th Workshop of the EARSeL Special Interest Group Imaging Spectroscopy , 2007.
    [BibTeX] [PDF]
    The concept of spectral angle mapping (SAM) is extended in this work by the use of self-learning decision trees (DT) to evaluate rule images. We test whether the performance of the SAM can be improved to achieve the quality of more recent machine learning classifiers in spectrally heterogeneous environments. Results show that the integration of the DT significantly increases the accuracy of the SAM of urban hyperspectral data. However, the accuracy of support vector machines is not achieved. Despite this lower accuracy, the spectral angle space as constituted by the SAM rule images appears to be a useful class-specific transformation of the data, which might be used similar to common transformations in future works.

    @InProceedings{Linden2007Towards,
    Title = {Towards an optimized use of the spectral angle space},
    Author = {van der Linden, Sebastian and Waske, Bj\"orn and Hostert, Patrick},
    Booktitle = {5th Workshop of the EARSeL Special Interest Group Imaging Spectroscopy},
    Year = {2007},
    Abstract = {The concept of spectral angle mapping (SAM) is extended in this work by the use of self-learning decision trees (DT) to evaluate rule images. We test whether the performance of the SAM can be improved to achieve the quality of more recent machine learning classifiers in spectrally heterogeneous environments. Results show that the integration of the DT significantly increases the accuracy of the SAM of urban hyperspectral data. However, the accuracy of support vector machines is not achieved. Despite this lower accuracy, the spectral angle space as constituted by the SAM rule images appears to be a useful class-specific transformation of the data, which might be used similar to common transformations in future works.},
    Owner = {waske},
    Timestamp = {2012.09.05},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Linden2007Towards.pdf}
    }

  • O. Martínez-Mozos, C. Stachniss, A. Rottmann, and W. Burgard, “Using AdaBoost for Place Labelling and Topological Map Building,” in Robotics Research, S. Thrun, R. Brooks, and H. Durrant-Whyte, Eds., Springer, 2007, vol. 28.
    [BibTeX] [PDF]
    [none]
    @InCollection{Mart'inez-Mozos2007,
    Title = {Using AdaBoost for Place Labelling and Topological Map Building},
    Author = {Mart\'{i}nez-Mozos, O. and Stachniss, C. and Rottmann, A. and Burgard, W.},
    Booktitle = {Robotics Research},
    Publisher = springer,
    Year = {2007},
    Editor = {Thrun, S. and Brooks, R. and Durrant-Whyte, H.},
    Series = springerstaradvanced,
    Volume = {28},
    Abstract = {[none]},
    ISBN = {978-3-540-48110-2},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/martinez07springer.pdf}
    }

  • P. Pfaff, R. Kuemmerle, D. Joho, C. Stachniss, R. Triebel, and W. Burgard, “Navigation in Combined Outdoor and Indoor Environments using Multi-Level Surface Maps,” in Workshop on Safe Navigation in Open and Dynamic Environments at the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , San Diego, CA, USA, 2007.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Pfaff2007a,
    Title = {Navigation in Combined Outdoor and Indoor Environments using Multi-Level Surface Maps},
    Author = {Pfaff, P. and Kuemmerle, R. and Joho, D. and Stachniss, C. and Triebel, R. and Burgard, W.},
    Booktitle = iroswsnav,
    Year = {2007},
    Address = {San Diego, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/pfaff07irosws.pdf}
    }

  • P. Pfaff, R. Triebel, C. Stachniss, P. Lamon, W. Burgard, and R. Siegwart, “Towards Mapping of Cities,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Rome, Italy, 2007.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Pfaff2007,
    Title = {Towards Mapping of Cities},
    Author = {Pfaff, P. and Triebel, R. and Stachniss, C. and Lamon, P. and Burgard, W. and Siegwart, R.},
    Booktitle = icra,
    Year = {2007},
    Address = {Rome, Italy},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/pfaff07icra.pdf}
    }

  • J. Saatkamp and J. Schmittwilken, “Generative Models and Markov Chain Monte Carlo Techniques for Detection and Reconstruction of Stairs from Point Clouds,” in Proceedings of the ISPRS Workshop on Updating Geo-spatial Databases with Imagery & The 5th ISPRS Workshop on Dynamic and Multi-dimensional GIS , Urumqi, China, 2007, pp. 111-119.
    [BibTeX] [PDF]
    The paper describes an approach for the automatic reconstruction of homogeneous straight stairs from point cloud data by using a generative model and Markov Chain Monte Carlo techniques for estimating the parameters. Parameters for a generative model for stairs are presented. The six parameters of this 2D model are determined with a maximum-a-posteriori estimation approach. For all parameters prior probability distributions are chosen. Two types of likelihood functions are introduced. It is shown that four of the parameters under certain conditions can be determined via MCMC. Some results are presented.

    @InProceedings{Saatkamp2007Generative,
    Title = {Generative Models and Markov Chain Monte Carlo Techniques for Detection and Reconstruction of Stairs from Point Clouds},
    Author = {Saatkamp, Jens and Schmittwilken, J\"org},
    Booktitle = {Proceedings of the ISPRS Workshop on Updating Geo-spatial Databases with Imagery \& The 5th ISPRS Workshop on Dynamic and Multi-dimensional GIS},
    Year = {2007},
    Address = {Urumqi, China},
    Editor = {Jiang, Jie and Zhao, Renliang},
    Month = aug,
    Number = {part 4/W54},
    Organization = {ISPRS},
    Pages = {111--119},
    Series = {The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
    Volume = {XXXVI},
    Abstract = {The paper describes an approach for the automatic reconstruction of homogeneous straight stairs from point cloud data by using a generative model and Markov Chain Monte Carlo techniques for estimating the parameters. Parameters for a generative model for stairs are presented. The six parameters of this 2D model are determined with a maximum-a-posteriori estimation approach. For all parameters prior probability distributions are chosen. Two types of likelihood functions are introduced. It is shown that four of the parameters under certain conditions can be determined via MCMC. Some results are presented.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Saatkamp2007Generative.pdf}
    }

  • J. Schmittwilken, J. Saatkamp, W. Förstner, T. Kolbe, and L. Plümer, “A Semantic Model of Stairs in Building Collars,” Photogrammetrie, Fernerkundung, Geoinformation PFG, pp. 415-428, 2007.
    [BibTeX] [PDF]
    The automated extraction of high resolution 3D building models from imagery and laser scanner data requires strong models for all features which are observable at a large scale. In this paper we give a semantic model of stairs. They play a prominent role in the transition from buildings to the surrounding terrain or infrastructure. We name the transition area between terrain and building collar, and the focus is on stairs in building collars. Simple and complex stairways are represented by UML class diagrams along with constraints reflecting semantic and functional aspects in OCL. A systematic derivation of an attribute grammar consisting of production and semantic rules from UML/OCL is presented. Finally, we show how hypotheses with comprehensive predictions may be derived from observations using mixed integer/real programming driven by grammar rules.

    @Article{Schmittwiken2007Semantic,
    Title = {A Semantic Model of Stairs in Building Collars},
    Author = {Schmittwilken, J\"org and Saatkamp, Jens and F\"orstner, Wolfgang and Kolbe, Thomas and Pl\"umer, Lutz},
    Journal = {Photogrammetrie, Fernerkundung, Geoinformation PFG},
    Year = {2007},
    Pages = {415--428},
    Abstract = {The automated extraction of high resolution 3D building models from imagery and laser scanner data requires strong models for all features which are observable at a large scale. In this paper we give a semantic model of stairs. They play a prominent role in the transition from buildings to the surrounding terrain or infrastructure. We name the transition area between terrain and building collar, and the focus is on stairs in building collars. Simple and complex stairways are represented by UML class diagrams along with constraints reflecting semantic and functional aspects in OCL. A systematic derivation of an attribute grammar consisting of production and semantic rules from UML/OCL is presented. Finally, we show how hypotheses with comprehensive predictions may be derived from observations using mixed integer/real programming driven by grammar rules.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Schmittwiken2007Semantic.pdf}
    }

  • C. Schmitz, “Untersuchungen zur Genauigkeit der gleichzeitigen Lokalisierung und Kartierung aus monokularen Bildfolgen,” Diplomarbeit Master Thesis, 2007.
    [BibTeX]
    [none]
    @MastersThesis{Schmitz2007Untersuchungen,
    Title = {Untersuchungen zur Genauigkeit der gleichzeitigen Lokalisierung und Kartierung aus monokularen Bildfolgen},
    Author = {Schmitz, Cornelia},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2007},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Ing. Richard Steffen},
    Type = {Diplomarbeit},
    Abstract = {[none]},
    City = {Bonn}
    }

  • C. Stachniss, G. Grisetti, W. Burgard, and N. Roy, “Evaluation of Gaussian Proposal Distributions for Mapping with Rao-Blackwellized Particle Filters,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , San Diego, CA, USA, 2007.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Stachniss2007a,
    Title = {Evaluation of Gaussian Proposal Distributions for Mapping with Rao-Blackwellized Particle Filters},
    Author = {Stachniss, C. and Grisetti, G. and Burgard, W. and Roy, N.},
    Booktitle = iros,
    Year = {2007},
    Address = {San Diego, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss07iros.pdf}
    }

  • C. Stachniss, G. Grisetti, O. Martínez-Mozos, and W. Burgard, “Efficiently Learning Metric and Topological Maps with Autonomous Service Robots,” it — Information Technology, vol. 49, iss. 4, pp. 232-238, 2007.
    [BibTeX]
    [none]
    @Article{Stachniss2007,
    Title = {Efficiently Learning Metric and Topological Maps with Autonomous Service Robots},
    Author = {Stachniss, C. and Grisetti, G. and Mart\'{i}nez-Mozos, O. and Burgard, W.},
    Journal = {it -- Information Technology},
    Year = {2007},
    Number = {4},
    Pages = {232--238},
    Volume = {49},
    Abstract = {[none]},
    Editor = {Buss, M. and Lawitzki, G.},
    Timestamp = {2014.04.24}
    }

  • B. Steder, G. Grisetti, S. Grzonka, C. Stachniss, A. Rottmann, and W. Burgard, “Learning Maps in 3D using Attitude and Noisy Vision Sensors,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , San Diego, CA, USA, 2007.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Steder2007,
    Title = {Learning Maps in 3D using Attitude and Noisy Vision Sensors},
    Author = {Steder, B. and Grisetti, G. and Grzonka, S. and Stachniss, C. and Rottmann, A. and Burgard, W.},
    Booktitle = iros,
    Year = {2007},
    Address = {San Diego, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/steder07iros.pdf}
    }

  • B. Steder, A. Rottmann, G. Grisetti, C. Stachniss, and W. Burgard, “Autonomous Navigation for Small Flying Vehicles,” in Workshop on Micro Aerial Vehicles at the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , San Diego, CA, USA, 2007.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Steder,
    Title = {Autonomous Navigation for Small Flying Vehicles},
    Author = {Steder, B. and Rottmann, A. and Grisetti, G. and Stachniss, C. and Burgard, W.},
    Booktitle = iroswsfly,
    Year = {2007},
    Address = {San Diego, CA, USA},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~steder/publications/steder07irosws.pdf}
    }

  • R. Steffen and C. Beder, “Recursive Estimation with Implicit Constraints,” in Proceedings of the DAGM 2007 , Heidelberg, 2007, pp. 194-203. doi:10.1007/978-3-540-74936-3_20
    [BibTeX] [PDF]
    Recursive estimation or Kalman filtering usually relies on explicit model functions that directly and explicitly describe the effect of the parameters on the observations. However, many problems in computer vision, including all those resulting in homogeneous equation systems, are easier described using implicit constraints between the observations and the parameters. By implicit we mean that the constraints are given by equations that are not easily solvable for the observation vector. We present a framework that allows such implicit constraints to be incorporated as measurement equations into a Kalman filter. The algorithm may be used as a black box, simplifying the process of specifying suitable measurement equations for many problems. As a byproduct of the possibility of specifying model equations non-explicitly, some non-linearities may be avoided and better results can be achieved for certain problems.

    @InProceedings{Steffen2007Recursive,
    Title = {Recursive Estimation with Implicit Constraints},
    Author = {Steffen, Richard and Beder, Christian},
    Booktitle = {Proceedings of the DAGM 2007},
    Year = {2007},
    Address = {Heidelberg},
    Editor = {F.A. Hamprecht and C. Schn\"orr and B. J\"ahne},
    Number = {4713},
    Pages = {194--203},
    Publisher = {Springer},
    Series = {LNCS},
    Abstract = {Recursive estimation or Kalman filtering usually relies on explicit model functions that directly and explicitly describe the effect of the parameters on the observations. However, many problems in computer vision, including all those resulting in homogeneous equation systems, are easier described using implicit constraints between the observations and the parameters. By implicit we mean that the constraints are given by equations that are not easily solvable for the observation vector. We present a framework that allows such implicit constraints to be incorporated as measurement equations into a Kalman filter. The algorithm may be used as a black box, simplifying the process of specifying suitable measurement equations for many problems. As a byproduct of the possibility of specifying model equations non-explicitly, some non-linearities may be avoided and better results can be achieved for certain problems.},
    Doi = {10.1007/978-3-540-74936-3_20},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Steffen2007Recursive.pdf}
    }

  • H. Strasdat, C. Stachniss, M. Bennewitz, and W. Burgard, “Visual Bearing-Only Simultaneous Localization and Mapping with Improved Feature Matching,” in Autonome Mobile Systeme , Kaiserslautern, Germany, 2007.
    [BibTeX] [PDF]
    @InProceedings{Strasdat2007,
    Title = {Visual Bearing-Only Simultaneous Localization and Mapping with Improved Feature Matching},
    Author = {Strasdat, H. and Stachniss, C. and Bennewitz, M. and Burgard, W.},
    Booktitle = AMS,
    Year = {2007},
    Address = {Kaiserslautern, Germany},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/strasdat07ams.pdf}
    }

  • B. Waske and J. A. Benediktsson, “Decision Fusion of Multitemporal SAR and Multispectral Imagery for Improved Land Cover Classification,” in ISPRS Mapping without the sun , 2007.
    [BibTeX]
    @InProceedings{Waske2007Decision,
    Title = {Decision Fusion of Multitemporal SAR and Multispectral Imagery for Improved Land Cover Classification},
    Author = {Waske, Bj\"orn and Benediktsson, Jon Atli},
    Booktitle = {ISPRS Mapping without the sun},
    Year = {2007},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • B. Waske and J. A. Benediktsson, “Fusion of support vector machines for classification of multisensor data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 45, iss. 12, pp. 3858-3866, 2007. doi:10.1109/TGRS.2007.898446
    [BibTeX]
    The classification of multisensor data sets, consisting of multitemporal synthetic aperture radar data and optical imagery, is addressed. The concept is based on the decision fusion of different outputs. Each data source is treated separately and classified by a support vector machine (SVM). Instead of fusing the final classification outputs (i.e., land cover classes), the original outputs of each SVM discriminant function are used in the subsequent fusion process. This fusion is performed by another SVM, which is trained on the a priori outputs. In addition, two voting schemes are applied to create the final classification results. The results are compared with well-known parametric and nonparametric classifier methods, i.e., decision trees, the maximum-likelihood classifier, and classifier ensembles. The proposed SVM-based fusion approach outperforms all other approaches and significantly improves the results of a single SVM, which is trained on the whole multisensor data set.

    @Article{Waske2007Fusion,
    Title = {Fusion of support vector machines for classification of multisensor data},
    Author = {Waske, Bj\"orn and Benediktsson, Jon Atli},
    Journal = {IEEE Transactions on Geoscience and Remote Sensing},
    Year = {2007},
    Month = dec,
    Number = {12},
    Pages = {3858--3866},
    Volume = {45},
    Abstract = {The classification of multisensor data sets, consisting of multitemporal synthetic aperture radar data and optical imagery, is addressed. The concept is based on the decision fusion of different outputs. Each data source is treated separately and classified by a support vector machine (SVM). Instead of fusing the final classification outputs (i.e., land cover classes), the original outputs of each SVM discriminant function are used in the subsequent fusion process. This fusion is performed by another SVM, which is trained on the a priori outputs. In addition, two voting schemes are applied to create the final classification results. The results are compared with well-known parametric and nonparametric classifier methods, i.e., decision trees, the maximum-likelihood classifier, and classifier ensembles. The proposed SVM-based fusion approach outperforms all other approaches and significantly improves the results of a single SVM, which is trained on the whole multisensor data set.},
    Doi = {10.1109/TGRS.2007.898446},
    Owner = {waske},
    Timestamp = {2012.09.04},
    }

  • B. Waske, M. Braun, and G. Menz, “A segment-based speckle filter using multisensoral remote sensing imagery,” IEEE Geoscience and Remote Sensing Letters, vol. 4, iss. 2, pp. 231-235, 2007. doi:10.1109/LGRS.2006.888849
    [BibTeX]
    In the proposed approach, the well-known enhanced Lee filter is modified to allow the integration of feature outlines previously extracted from segmented optical images. The filter is applied to several ENVISAT ASAR images that cover urban, agricultural, and forest areas during different plant phenological stages. The performance of this segment-based speckle filter is compared to those of other filters using ratio images, visual interpretation, and statistical indexes. The approach reduces the loss of radiometry and spatial information. It performs comparably to more complex methods and outperforms common techniques.

    @Article{Waske2007segment,
    Title = {A segment-based speckle filter using multisensoral remote sensing imagery},
    Author = {Waske, Bj\"orn and Braun, Matthias and Menz, Gunter},
    Journal = {IEEE Geoscience and Remote Sensing Letters},
    Year = {2007},
    Month = apr,
    Number = {2},
    Pages = {231--235},
    Volume = {4},
    Abstract = {In the proposed approach, the well-known enhanced Lee filter is modified to allow the integration of feature outlines previously extracted from segmented optical images. The filter is applied to several ENVISAT ASAR images that cover urban, agricultural, and forest areas during different plant phenological stages. The performance of this segment-based speckle filter is compared to those of other filters using ratio images, visual interpretation, and statistical indexes. The approach reduces the loss of radiometry and spatial information. It performs comparably to more complex methods and outperforms common techniques.},
    Doi = {10.1109/LGRS.2006.888849},
    Owner = {waske},
    Timestamp = {2012.09.04},
    }

  • B. Waske, V. Heinzel, M. Braun, and G. Menz, “Random Forests for Classifying multi-temporal SAR Data,” in ESA’s ENVISAT Symposium , 2007.
    [BibTeX] [PDF]
    The accuracy of supervised land cover classifications depends on several factors, such as the chosen algorithm, adequate training data, and the selection of features. For multi-temporal remote sensing imagery, statistical classifiers are often not applicable. In the study presented here, a Random Forest was applied to a SAR data set consisting of 15 acquisitions. A detailed accuracy assessment shows that the Random Forest significantly increases the efficiency of the single decision tree and can outperform other classifiers in terms of accuracy. A visual interpretation confirms the statistical accuracy assessment. The imagery is classified into more homogeneous regions and the noise is significantly decreased. The additional time needed for the generation of Random Forests is small and can be justified; it is still much faster than other state-of-the-art classifiers.

    @InProceedings{Waske2007Random,
    Title = {Random Forests for Classifying multi-temporal SAR Data},
    Author = {Waske, Bj\"orn and Heinzel, Vanessa and Braun, Matthias and Menz, Gunter},
    Booktitle = {ESA's ENVISAT Symposium},
    Year = {2007},
    Abstract = {The accuracy of supervised land cover classifications depends on several factors, such as the chosen algorithm, adequate training data, and the selection of features. For multi-temporal remote sensing imagery, statistical classifiers are often not applicable. In the study presented here, a Random Forest was applied to a SAR data set consisting of 15 acquisitions. A detailed accuracy assessment shows that the Random Forest significantly increases the efficiency of the single decision tree and can outperform other classifiers in terms of accuracy. A visual interpretation confirms the statistical accuracy assessment. The imagery is classified into more homogeneous regions and the noise is significantly decreased. The additional time needed for the generation of Random Forests is small and can be justified; it is still much faster than other state-of-the-art classifiers.},
    Owner = {waske},
    Timestamp = {2012.09.05},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Waske2007Random.pdf}
    }

  • B. Waske, G. Menz, and J. A. Benediktsson, “Fusion of support vector machines for classifying SAR and multispectral imagery from agricultural areas,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) , 2007. doi:10.1109/IGARSS.2007.4423945
    [BibTeX]
    A concept for classifying multisensor data sets, consisting of multispectral and SAR imagery is introduced. Each data source is separately classified by a support vector machine (SVM). In a decision fusion the outputs of the preliminary SVMs are used to determine the final class memberships. This fusion is performed by another SVM as well as two common voting schemes. The results are compared with well-known parametric and nonparametric classifier methods. The proposed SVM-based fusion approach outperforms all other concepts and significantly improves the results of a single SVM that is trained on the whole multisensor data set.

    @InProceedings{Waske2007Fusiona,
    Title = {Fusion of support vector machines for classifying SAR and multispectral imagery from agricultural areas},
    Author = {Waske, Bj\"orn and Menz, Gunter and Benediktsson, Jon Atli},
    Booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2007},
    Abstract = {A concept for classifying multisensor data sets, consisting of multispectral and SAR imagery is introduced. Each data source is separately classified by a support vector machine (SVM). In a decision fusion the outputs of the preliminary SVMs are used to determine the final class memberships. This fusion is performed by another SVM as well as two common voting schemes. The results are compared with well-known parametric and nonparametric classifier methods. The proposed SVM-based fusion approach outperforms all other concepts and significantly improves the results of a single SVM that is trained on the whole multisensor data set.},
    Doi = {10.1109/IGARSS.2007.4423945},
    Keywords = {SAR imagery classification;SVM-based fusion approach;Support Vector Machines;agricultural areas;common voting schemes;multisensor data sets classification;multispectral imagery classification;nonparametric classifier method;parametric classifier method;agriculture;image classification;sensor fusion;support vector machines;synthetic aperture radar;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • S. Wenzel, “Spiegelung und Zuordnung der SIFT-Feature Deskriptoren für die Detektion von Symmetrien und wiederholten Strukturen in Bildern,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2007-04, 2007.
    [BibTeX] [PDF]
    This report describes the details of mirroring the descriptors of SIFT features. We show how the mirrored versions are derived by simply re-sorting the descriptor elements. Furthermore, we describe the matching of features within an image. The peculiarity of this task is the search for more than one match – not only the best one – within a single image. The presented methods are based on the work of (Wenzel2006,Detektion). After the introduction, the functionality of the SIFT feature detector is outlined and the development of the descriptors is described in detail. The following sections describe the details of mirroring and matching the features. Dieser Bericht geht auf die Details zur Spiegelung von SIFT-Feature-Deskriptoren ein. Es wird gezeigt, wie durch einfaches Umsortieren der Elemente des Feature-Deskriptors gespiegelte Versionen der Deskriptoren erlangt werden können. Des Weiteren wird erläutert, wie Features innerhalb eines Bildes zugeordnet werden können. Die Besonderheit dieser Aufgabenstellung liegt in der gesuchten Zuordnung nicht eines – des besten – Matches, sondern in der Zuordnung aller Matches in einem Bild. Die vorgestellten Methoden basieren auf (Wenzel2006,Detektion).

    @TechReport{Wenzel2007Spiegelung,
    Title = {Spiegelung und Zuordnung der SIFT-Feature Deskriptoren f\"ur die Detektion von Symmetrien und wiederholten Strukturen in Bildern},
    Author = {Wenzel, Susanne},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2007},
    Month = aug,
    Number = {TR-IGG-P-2007-04},
    Abstract = {This report describes the details of mirroring the descriptors of SIFT features. We show how the mirrored versions are derived by simply re-sorting the descriptor elements. Furthermore, we describe the matching of features within an image. The peculiarity of this task is the search for more than one match - not only the best one - within a single image. The presented methods are based on the work of (Wenzel2006,Detektion). After the introduction, the functionality of the SIFT feature detector is outlined and the development of the descriptors is described in detail. The following sections describe the details of mirroring and matching the features. Dieser Bericht geht auf die Details zur Spiegelung von SIFT-Feature-Deskriptoren ein. Es wird gezeigt, wie durch einfaches Umsortieren der Elemente des Feature-Deskriptors gespiegelte Versionen der Deskriptoren erlangt werden k\"onnen. Des Weiteren wird erl\"autert, wie Features innerhalb eines Bildes zugeordnet werden k\"onnen. Die Besonderheit dieser Aufgabenstellung liegt in der gesuchten Zuordnung nicht eines - des besten - Matches, sondern in der Zuordnung aller Matches in einem Bild. Die vorgestellten Methoden basieren auf (Wenzel2006,Detektion).},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Wenzel2007Spiegelung.pdf}
    }

  • S. Wenzel, M. Drauschke, and W. Förstner, “Detektion wiederholter und symmetrischer Strukturen in Fassadenbildern,” in Publikationen der DGPF: Von der Medizintechnik bis zur Planetenforschung – Photogrammetrie und Fernerkundung für das 21. Jahrhundert , Muttenz, Basel, 2007, pp. 119-126.
    [BibTeX] [PDF]
    Regelmäßige Strukturen und Symmetrien kennzeichnen viele Gebäudefassaden oder Objekte im Umfeld von Gebäuden. Für die automatisierte Bildinterpretation weisen diese Strukturen auf künstliche Objekte hin, führen aber auch zu Schwierigkeiten bei klassischen Bildzuordnungsverfahren. Die Suche und Gruppierung zusammengehöriger Merkmale kann daher sowohl zur Identifikation künstlicher Objekte als auch zur Verbesserung von Zuordnungsverfahren dienen. Für die Analyse von entzerrten Fassadenaufnahmen haben wir das Verfahren von [LOY 2006] zur Detektion symmetrischer Bildstrukturen zu einem Verfahren zur Detektion verschiedener, sich wiederholender Bildstrukturen erweitert und aus den detektierten wiederholten Objekten eine minimale Beschreibung der Struktur der Fassadenelemente in Form von achsenparallelen Basiselementen abgeleitet.

    @InProceedings{Wenzel2007Detektion,
    Title = {Detektion wiederholter und symmetrischer Strukturen in Fassadenbildern},
    Author = {Wenzel, Susanne and Drauschke, Martin and F\"orstner, Wolfgang},
    Booktitle = {Publikationen der DGPF: Von der Medizintechnik bis zur Planetenforschung - Photogrammetrie und Fernerkundung f\"ur das 21. Jahrhundert},
    Year = {2007},
    Address = {Muttenz, Basel},
    Editor = {Seyfert, Eckhardt},
    Month = jun,
    Pages = {119-126},
    Publisher = {DGPF},
    Volume = {16},
    Abstract = {Regelm\"a{\ss}ige Strukturen und Symmetrien kennzeichnen viele Geb\"audefassaden oder Objekte im Umfeld von Geb\"auden. F\"ur die automatisierte Bildinterpretation weisen diese Strukturen auf k\"unstliche Objekte hin, f\"uhren aber auch zu Schwierigkeiten bei klassischen Bildzuordnungsverfahren. Die Suche und Gruppierung zusammengeh\"origer Merkmale kann daher sowohl zur Identifikation k\"unstlicher Objekte als auch zur Verbesserung von Zuordnungsverfahren dienen. F\"ur die Analyse von entzerrten Fassadenaufnahmen haben wir das Verfahren von [LOY 2006] zur Detektion symmetrischer Bildstrukturen zu einem Verfahren zur Detektion verschiedener, sich wiederholender Bildstrukturen erweitert und aus den detektierten wiederholten Objekten eine minimale Beschreibung der Struktur der Fassadenelemente in Form von achsenparallelen Basiselementen abgeleitet.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Wenzel2007Detektion.pdf}
    }

  • S. Wenzel, M. Drauschke, and W. Förstner, “Detection and Description of Repeated Structures in Rectified Facade Images,” Photogrammetrie, Fernerkundung, Geoinformation (PFG), vol. 7, pp. 481-490, 2007.
    [BibTeX] [PDF]
    We present a method for detecting repeated structures, which is applied to facade images for describing the regularity of their windows. Our approach finds and explicitly represents repetitive structures and thus gives an initial representation of facades. No explicit notion of a window is used; thus the method also appears to be able to identify other man-made structures, e.g. paths with regular tiles. A method for the detection of dominant symmetries is adapted for the detection of multiple repeated structures. A compact description of repetitions is derived from translations detected in an image by a heuristic search method and the model selection criterion of the minimum description length.

    @Article{Wenzel2007Detection,
    Title = {Detection and Description of Repeated Structures in Rectified Facade Images},
    Author = {Wenzel, Susanne and Drauschke, Martin and F\"orstner, Wolfgang},
    Journal = {Photogrammetrie, Fernerkundung, Geoinformation (PFG)},
    Year = {2007},
    Pages = {481--490},
    Volume = {7},
    Abstract = {We present a method for detecting repeated structures, which is applied to facade images for describing the regularity of their windows. Our approach finds and explicitly represents repetitive structures and thus gives an initial representation of facades. No explicit notion of a window is used; thus the method also appears to be able to identify other man-made structures, e.g. paths with regular tiles. A method for the detection of dominant symmetries is adapted for the detection of multiple repeated structures. A compact description of repetitions is derived from translations detected in an image by a heuristic search method and the model selection criterion of the minimum description length.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Wenzel2007Detectiona.pdf}
    }

  • S. Wenzel, M. Drauschke, and W. Förstner, “Detection of repeated structures in facade images,” in Proceedings of the OGRW-7-2007, 7th Open German/Russian Workshop on Pattern Recognition and Image Understanding. August 20-23, 2007. Ettlingen, Germany , 2007. doi:10.1134/S1054661808030073
    [BibTeX] [PDF]
    We present a method for detecting repeated structures, which is applied to facade images for describing the regularity of their windows. Our approach finds and explicitly represents repetitive structures and thus gives an initial representation of facades. No explicit notion of a window is used; thus the method also appears to be able to identify other man-made structures, e.g. paths with regular tiles. A method for the detection of dominant symmetries is adapted for the detection of multiply repeated structures. A compact description of the repetitions is derived from the detected translations in the image by a heuristic search method and the criterion of the minimum description length.

    @InProceedings{Wenzel2007Detectiona,
    Title = {Detection of repeated structures in facade images},
    Author = {Wenzel, Susanne and Drauschke, Martin and F\"orstner, Wolfgang},
    Booktitle = {Proceedings of the OGRW-7-2007, 7th Open German/Russian Workshop on Pattern Recognition and Image Understanding. August 20-23, 2007. Ettlingen, Germany},
    Year = {2007},
    Abstract = {We present a method for detecting repeated structures, which is applied to facade images for describing the regularity of their windows. Our approach finds and explicitly represents repetitive structures and thus gives an initial representation of facades. No explicit notion of a window is used; thus the method also appears to be able to identify other man-made structures, e.g. paths with regular tiles. A method for the detection of dominant symmetries is adapted for the detection of multiply repeated structures. A compact description of the repetitions is derived from the detected translations in the image by a heuristic search method and the criterion of the minimum description length.},
    Doi = {10.1134/S1054661808030073},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Wenzel2007Detection.pdf}
    }

  • K. M. Wurm, C. Stachniss, G. Grisetti, and W. Burgard, “Improved Simultaneous Localization and Mapping using a Dual Representation of the Environment,” in Proceedings of the European Conference on Mobile Robots (ECMR) , Freiburg, Germany, 2007.
    [BibTeX] [PDF]
    @InProceedings{Wurm2007,
    Title = {Improved Simultaneous Localization and Mapping using a Dual Representation of the Environment},
    Author = {Wurm, K.M. and Stachniss, C. and Grisetti, G. and Burgard, W.},
    Booktitle = ECMR,
    Year = {2007},
    Address = {Freiburg, Germany},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/wurm07ecmr.pdf}
    }

  • L. Zug, “Untersuchungen zur Genauigkeit der automatischen Punkt- und Orientierungsbestimmung aus extrem großmaßstäbigen Luftbildern,” Master Thesis, Institute of Photogrammetry, University of Bonn, 2007.
    [BibTeX]
    Die vorliegende Arbeit untersucht die durch ein vollautomatisches Orientierungsverfahren (Läbe & Förstner) erreichbaren Genauigkeiten eines Bildverbandes anhand der Genauigkeit rekonstruierter Objektpunktkoordinaten. Für diese liegen aus einer unabhängigen terrestrischen Messung genaue Referenzkoordinaten vor. Zum Vergleich der Widersprüche zwischen Referenzkoordinaten und rekonstruierten Koordinaten in einem photogrammetrischen Modell sollte in der Arbeit eine Koordinatentransformation basierend auf der K- und S-Transformation erstellt werden.

    @MastersThesis{Zug2007Untersuchungen,
    Title = {Untersuchungen zur Genauigkeit der automatischen Punkt- und Orientierungsbestimmung aus extrem gro{\ss}ma{\ss}st\"abigen Luftbildern},
    Author = {Zug, Laura},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2007},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Ing. Richard Steffen},
    Abstract = {Die vorliegende Arbeit untersucht die durch ein vollautomatisches Orientierungsverfahren (L\"abe & F\"orstner) erreichbaren Genauigkeiten eines Bildverbandes anhand der Genauigkeit rekonstruierter Objektpunktkoordinaten. F\"ur diese liegen aus einer unabh\"angigen terrestrischen Messung genaue Referenzkoordinaten vor. Zum Vergleich der Widerspr\"uche zwischen Referenzkoordinaten und rekonstruierten Koordinaten in einem photogrammetrischen Modell sollte in der Arbeit eine Koordinatentransformation basierend auf der K- und S-Transformation erstellt werden.},
    City = {Bonn}
    }

2006

  • C. Beder and W. Förstner, “Direct Solutions for Computing Cylinders from Minimal Sets of 3D Points,” in Proceedings of the European Conference on Computer Vision , Graz, Austria, 2006, pp. 135-146. doi:10.1007/11744023_11
    [BibTeX] [PDF]
    Efficient direct solutions for the determination of a cylinder from points are presented. The solutions range from the well-known direct solution of a quadric to the minimal solution of a cylinder with five points. In contrast to the approach of G. Roth and M. D. Levine (1990), who used polynomial bases for representing the geometric entities, we use algebraic constraints on the quadric representing the cylinder. The solutions for six to eight points directly determine all the cylinder parameters in one step: (1) The eight-point-solution, similar to the estimation of the fundamental matrix, requires to solve for the roots of a 3rd-order-polynomial. (2) The seven-point-solution, similar to the six-point-solution for the relative orientation by J. Philip (1996), yields a linear equation system. (3) The six-point-solution, similar to the five-point-solution for the relative orientation by D. Nister (2003), yields a ten-by-ten eigenvalue problem. The new minimal five-point-solution first determines the direction and then the position and the radius of the cylinder. The search for the zeros of the resulting 6th order polynomials is efficiently realized using 2D-Bernstein polynomials. Also direct solutions for the special cases with the axes of the cylinder parallel to a coordinate plane or axis are given. The method is used to find cylinders in range data of an industrial site.

    @InProceedings{Beder2006Direct,
    Title = {Direct Solutions for Computing Cylinders from Minimal Sets of 3D Points},
    Author = {Beder, Christian and F\"orstner, Wolfgang},
    Booktitle = {Proceedings of the European Conference on Computer Vision},
    Year = {2006},
    Address = {Graz, Austria},
    Editor = {A. Leonardis and H. Bischof and A. Pinz},
    Number = {3951},
    Pages = {135--146},
    Publisher = {Springer},
    Series = {LNCS},
    Abstract = {Efficient direct solutions for the determination of a cylinder from points are presented. The solutions range from the well-known direct solution of a quadric to the minimal solution of a cylinder with five points. In contrast to the approach of G. Roth and M. D. Levine (1990), who used polynomial bases for representing the geometric entities, we use algebraic constraints on the quadric representing the cylinder. The solutions for six to eight points directly determine all the cylinder parameters in one step: (1) The eight-point-solution, similar to the estimation of the fundamental matrix, requires to solve for the roots of a 3rd-order-polynomial. (2) The seven-point-solution, similar to the six-point-solution for the relative orientation by J. Philip (1996), yields a linear equation system. (3) The six-point-solution, similar to the five-point-solution for the relative orientation by D. Nister (2003), yields a ten-by-ten eigenvalue problem. The new minimal five-point-solution first determines the direction and then the position and the radius of the cylinder. The search for the zeros of the resulting 6th order polynomials is efficiently realized using 2D-Bernstein polynomials. Also direct solutions for the special cases with the axes of the cylinder parallel to a coordinate plane or axis are given. The method is used to find cylinders in range data of an industrial site.},
    Doi = {10.1007/11744023_11},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Beder2006Direct.pdf}
    }

  • C. Beder and W. Förstner, “Direkte Bestimmung von Zylindern aus 3D-Punkten ohne Nutzung von Oberflächennormalen,” in Photogrammetrie – Laserscanning – Optische 3D-Messtechnik , Oldenburg, 2006, pp. 206-213.
    [BibTeX] [PDF]
    Die automatische Extraktion von Zylindern aus 3D-Punktwolken ist von zentraler Bedeutung bei der Auswertung von Laserscannerdaten insbesondere bei Industrieanlagen. Das robuste Schätzverfahren RANSAC benötigt direkte Lösungen aus so wenig Datenpunkten wie möglich, um effizient zu arbeiten. Wir werden die algebraischen Bedingungen, die quadratische Formen erfüllen müssen, um einen Zylinder darzustellen, analysieren und verschiedene Verfahren für die Lösung dieses Problems vorstellen. Insbesondere werden wir eine minimale Lösung mit nur fünf 3D Punkten präsentieren. Anders als andere Ansätze benötigen wir keine Oberflächennormalen, deren Bestimmung im Allgemeinen schwierig ist.

    @InProceedings{Beder2006Direkte,
    Title = {Direkte Bestimmung von Zylindern aus 3D-Punkten ohne Nutzung von Oberfl\"achennormalen},
    Author = {Beder, Christian and F\"orstner, Wolfgang},
    Booktitle = {Photogrammetrie - Laserscanning - Optische 3D-Messtechnik},
    Year = {2006},
    Address = {Oldenburg},
    Editor = {Thomas Luhmann and Christina M\"uller},
    Pages = {206--213},
    Publisher = {Herbert Wichmann Verlag},
    Abstract = {Die automatische Extraktion von Zylindern aus 3D-Punktwolken ist von zentraler Bedeutung bei der Auswertung von Laserscannerdaten insbesondere bei Industrieanlagen. Das robuste Sch\"atzverfahren RANSAC ben\"otigt direkte L\"osungen aus so wenig Datenpunkten wie m\"oglich, um effizient zu arbeiten. Wir werden die algebraischen Bedingungen, die quadratische Formen erf\"ullen m\"ussen, um einen Zylinder darzustellen, analysieren und verschiedene Verfahren f\"ur die L\"osung dieses Problems vorstellen. Insbesondere werden wir eine minimale L\"osung mit nur f\"unf 3D Punkten pr\"asentieren. Anders als andere Ans\"atze ben\"otigen wir keine Oberfl\"achennormalen, deren Bestimmung im Allgemeinen schwierig ist.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Beder2006Direkte.pdf}
    }
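The minimal five-point cylinder solver described above is involved; as a hedged illustration of the surrounding RANSAC idea only (not the paper's solver), the following sketch assumes the simplified special case of a known, z-aligned cylinder axis, so the minimal sample shrinks to three points fixing a circle in the xy-projection. All names, tolerances and iteration counts are illustrative.

```python
import math
import random

def circle_from_3pts(p1, p2, p3):
    # Circumcircle of three 2D points (perpendicular-bisector intersection).
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None  # (nearly) collinear sample, no unique circle
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def ransac_cylinder(points_3d, tol=0.05, iters=200, seed=0):
    # Assumed axis = z: project to the xy-plane and RANSAC a circle,
    # keeping the model with the largest inlier count.
    rng = random.Random(seed)
    pts = [(x, y) for x, y, _ in points_3d]
    best, best_inl = None, -1
    for _ in range(iters):
        model = circle_from_3pts(*rng.sample(pts, 3))
        if model is None:
            continue
        (cx, cy), r = model
        inl = sum(1 for x, y in pts
                  if abs(math.hypot(x - cx, y - cy) - r) < tol)
        if inl > best_inl:
            best, best_inl = model, inl
    return best, best_inl
```

With exact points on a radius-2 cylinder plus a handful of outliers, the loop recovers centre and radius; the paper's five-point solution removes the known-axis assumption and, unlike many alternatives, needs no surface normals.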

  • C. Beder and R. Steffen, “Determining an initial image pair for fixing the scale of a 3d reconstruction from an image sequence,” in Pattern Recognition , Berlin, 2006, pp. 657-666. doi:10.1007/11861898_66
    [BibTeX] [PDF]
    Algorithms for metric 3d reconstruction of scenes from calibrated image sequences always require an initialization phase for fixing the scale of the reconstruction. Usually this is done by selecting two frames from the sequence and fixing the length of their base-line. In this paper a quality measure, that is based on the uncertainty of the reconstructed scene points, for the selection of such a stable image pair is proposed. Based on this quality measure a fully automatic initialization phase for simultaneous localization and mapping algorithms is derived. The proposed algorithm runs in real-time and some results for synthetic as well as real image sequences are shown.

    @InProceedings{Beder2006Determining,
    Title = {Determining an initial image pair for fixing the scale of a 3d reconstruction from an image sequence},
    Author = {Beder, Christian and Steffen, Richard},
    Booktitle = {Pattern Recognition},
    Year = {2006},
    Address = {Berlin},
    Editor = {K. Franke and K.-R. M\"uller and B. Nickolay and R. Sch\"afer},
    Number = {4174},
    Pages = {657--666},
    Publisher = {Springer},
    Series = {LNCS},
    Abstract = {Algorithms for metric 3d reconstruction of scenes from calibrated image sequences always require an initialization phase for fixing the scale of the reconstruction. Usually this is done by selecting two frames from the sequence and fixing the length of their base-line. In this paper a quality measure, that is based on the uncertainty of the reconstructed scene points, for the selection of such a stable image pair is proposed. Based on this quality measure a fully automatic initialization phase for simultaneous localization and mapping algorithms is derived. The proposed algorithm runs in real-time and some results for synthetic as well as real image sequences are shown.},
    Doi = {10.1007/11861898_66},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Beder2006Determining.pdf}
    }
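The paper's quality measure is based on the full uncertainty of the reconstructed scene points; as a simplified stand-in (an assumption, not the authors' criterion), the sketch below scores candidate frame pairs with the standard first-order stereo depth error sigma_Z = Z^2 * sigma_px / (f * B) and keeps the best-conditioned pair that still has enough matches. All function names and the match threshold are illustrative.

```python
def depth_std(Z, baseline, focal_px, sigma_px=0.5):
    # First-order stereo depth uncertainty: wider baselines and shorter
    # depths give better-conditioned triangulation.
    return (Z * Z * sigma_px) / (focal_px * baseline)

def select_initial_pair(candidates, focal_px=1000.0, min_matches=50):
    # candidates: (frame_i, frame_j, baseline, mean_depth, n_matches)
    best_score, best_pair = None, None
    for i, j, baseline, depth, n_matches in candidates:
        if n_matches < min_matches:   # too few correspondences: unstable
            continue
        score = depth_std(depth, baseline, focal_px)
        if best_score is None or score < best_score:
            best_score, best_pair = score, (i, j)
    return best_pair
```

For example, among pairs with baselines 0.1 and 0.5 at 5 units mean depth, the wider pair wins, while an even wider pair with only 30 matches is rejected.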

  • M. Bennewitz, C. Stachniss, W. Burgard, and S. Behnke, “Metric Localization with Scale-Invariant Visual Features using a Single Perspective Camera,” in European Robotics Symposium 2006 , 2006, pp. 143-157.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Bennewitz2006,
    Title = {Metric Localization with Scale-Invariant Visual Features using a Single Perspective Camera},
    Author = {Bennewitz, M. and Stachniss, C. and Burgard, W. and Behnke, S.},
    Booktitle = {European Robotics Symposium 2006},
    Year = {2006},
    Editor = {H.I. Christensen},
    Pages = {143--157},
    Publisher = {Springer-Verlag Berlin Heidelberg, Germany},
    Series = springerstaradvanced,
    Volume = {22},
    Abstract = {[none]},
    ISBN = {3-540-32688-X},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/bennewitz06euros.pdf}
    }

  • T. Dickscheid, “Markerlose Selbstlokalisation durch Fusion von Sensordaten,” Diplomarbeit (Master Thesis), 2006.
    [BibTeX]
    [none]
    @MastersThesis{Dickscheid2006Markerlose,
    Title = {Markerlose Selbstlokalisation durch Fusion von Sensordaten},
    Author = {Dickscheid, Timo},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2006},
    Note = {Betreuung: Prof. Dr.-Ing. Dietrich Paulus (Universit\"at Koblenz), Dr.-Ing. Chunrong Yuan (Fraunhofer FIT)},
    Type = {Diplomarbeit},
    Abstract = {[none]},
    City = {Bonn}
    }

  • M. Drauschke, “Automatisches Dodging von Luftbildern,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2006-01, 2006.
    [BibTeX] [PDF]
    Das Problem stellt sich wie folgt dar: Die Luftbilder wurden mit einem Vexcel-Scanner digitalisiert und als 16-Bit-Bilder abgespeichert. Die Bilder sollen automatisch nachbereitet werden.

    @TechReport{Drauschke2006Automatisches,
    Title = {Automatisches Dodging von Luftbildern},
    Author = {Drauschke, Martin},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2006},
    Number = {TR-IGG-P-2006-01},
    Abstract = {Das Problem stellt sich wie folgt dar: Die Luftbilder wurden mit einem Vexcel-Scanner digitalisiert und als 16-Bit-Bilder abgespeichert. Die Bilder sollen automatisch nachbereitet werden.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2006Automatisches.pdf}
    }

  • M. Drauschke, H. Schuster, and W. Förstner, “Detectibility of Buildings in Aerial Images over Scale Space,” in Symposium of ISPRS Commission III: Photogrammetric Computer Vision , Bonn, 2006, pp. 7-12.
    [BibTeX] [PDF]
    Automatic scene interpretation of aerial images is a major purpose of photogrammetry. Therefore, we want to improve building detection by exploring the "life-time" of stable and relevant image features in scale space. We use watersheds for feature extraction to gain a topologically consistent map. We will show that characteristic features for building detection can be found in all considered scales, so that no optimal scale can be selected for building recognition. Nevertheless, many of these features "live" in a wide scale interval, so that a combination of a small number of scales can be used for automatic building detection.

    @InProceedings{Drauschke2006Detectibility,
    Title = {Detectibility of Buildings in Aerial Images over Scale Space},
    Author = {Drauschke, Martin and Schuster, Hanns-Florian and F\"orstner, Wolfgang},
    Booktitle = {Symposium of ISPRS Commission III: Photogrammetric Computer Vision},
    Year = {2006},
    Address = {Bonn},
    Editor = {Wolfgang F\"orstner and Richard Steffen},
    Month = sep,
    Number = {Part 3},
    Organization = {ISPRS},
    Pages = {7--12},
    Publisher = {ISPRS},
    Volume = {XXXVI},
    Abstract = {Automatic scene interpretation of aerial images is a major purpose of photogrammetry. Therefore, we want to improve building detection by exploring the "life-time" of stable and relevant image features in scale space. We use watersheds for feature extraction to gain a topologically consistent map. We will show that characteristic features for building detection can be found in all considered scales, so that no optimal scale can be selected for building recognition. Nevertheless, many of these features "live" in a wide scale interval, so that a combination of a small number of scales can be used for automatic building detection.},
    Keywords = {Building Detection, Scale Space, Feature Extraction, Stable Regions},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2006Detectibility.pdf}
    }

  • M. Drauschke, H. Schuster, and W. Förstner, “Stabilität von Regionen im Skalenraum,” in Publikationen der DGPF: Geoinformatik und Erdbeobachtung , Berlin, 2006, pp. 29-36.
    [BibTeX] [PDF]
    Für die automatische Erfassung von Gebäuden aus Luftbildern ist es nützlich, Bildstrukturen im Skalenraum, d. h. über mehrere Auflösungsstufen zu beobachten, um für die Objekterkennung hinderliche Details ausblenden zu können. Große Bedeutung messen wir dabei den homogenen Regionen sowie deren Nachbarschaften zu. Regionen betrachten wir als stabil, wenn sie über mehrere Skalenstufen invariant bleiben. Sie haben spezielle Eigenschaften: Beim Vergrößern der Skala verschmelzen benachbarte Regionen, wobei eine Region immer vollständig in der anderen aufgeht. Diese spezielle Eigenschaft erleichtert das Bestimmen der Nachbarschaften in einer vorgegebenen Skala, denn der Regionennachbarschaftsgraph (RNG) muss nur einmal auf der untersten Ebene des Skalenraums berechnet werden. Die RNGs in den anderen Ebenen können leicht aus der untersten Ebene berechnet werden.

    @InProceedings{Drauschke2006Stabilitat,
    Title = {Stabilit\"at von Regionen im Skalenraum},
    Author = {Drauschke, Martin and Schuster, Hanns-Florian and F\"orstner, Wolfgang},
    Booktitle = {Publikationen der DGPF: Geoinformatik und Erdbeobachtung},
    Year = {2006},
    Address = {Berlin},
    Editor = {Eckhardt Seyfert},
    Month = sep,
    Pages = {29--36},
    Publisher = {DGPF},
    Volume = {15},
    Abstract = {F\"ur die automatische Erfassung von Geb\"auden aus Luftbildern ist es n\"utzlich, Bildstrukturen im Skalenraum, d. h. \"uber mehrere Aufl\"osungsstufen zu beobachten, um f\"ur die Objekterkennung hinderliche Details ausblenden zu k\"onnen. Gro{\ss}e Bedeutung messen wir dabei den homogenen Regionen sowie deren Nachbarschaften zu. Regionen betrachten wir als stabil, wenn sie \"uber mehrere Skalenstufen invariant bleiben. Sie haben spezielle Eigenschaften: Beim Vergr\"o{\ss}ern der Skala verschmelzen benachbarte Regionen, wobei eine Region immer vollst\"andig in der anderen aufgeht. Diese spezielle Eigenschaft erleichtert das Bestimmen der Nachbarschaften in einer vorgegebenen Skala, denn der Regionennachbarschaftsgraph (RNG) muss nur einmal auf der untersten Ebene des Skalenraums berechnet werden. Die RNGs in den anderen Ebenen k\"onnen leicht aus der untersten Ebene berechnet werden.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Drauschke2006Stabilitaet.pdf}
    }
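Because neighbouring regions merge completely into one another as the scale grows, the region adjacency graph (RNG) only has to be computed once at the finest scale; every coarser RNG follows by contracting merged regions. A minimal sketch of that contraction step (illustrative names, assuming the merge relation at the coarser scale is given as a lookup table):

```python
def contract_rag(edges, merge_map):
    # edges: adjacency pairs of the finest-scale RNG.
    # merge_map: region id -> surviving region id at the coarser scale.
    coarse = set()
    for a, b in edges:
        ca, cb = merge_map[a], merge_map[b]
        if ca != cb:  # edges inside a merged region disappear
            coarse.add((min(ca, cb), max(ca, cb)))
    return sorted(coarse)
```

Every coarser-level graph is thus a cheap projection of the finest one, which is the property the abstract exploits.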

  • A. Gil, O. Reinoso, O. Martínez-Mozos, C. Stachniss, and W. Burgard, “Improving Data Association in Vision-based SLAM,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , Beijing, China, 2006.
    [BibTeX]
    [none]
    @InProceedings{Gil2006,
    Title = {Improving Data Association in Vision-based {SLAM}},
    Author = {Gil, A. and Reinoso, O. and Mart\'{i}nez-Mozos, O. and Stachniss, C. and Burgard, W.},
    Booktitle = iros,
    Year = {2006},
    Address = {Beijing, China},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

  • G. Grisetti, G. D. Tipaldi, C. Stachniss, W. Burgard, and D. Nardi, “Speeding-Up Rao-Blackwellized SLAM,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Orlando, FL, USA, 2006, pp. 442-447.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Grisetti2006,
    Title = {Speeding-Up Rao-Blackwellized {SLAM}},
    Author = {Grisetti, G. and Tipaldi, G.D. and Stachniss, C. and Burgard, W. and Nardi, D.},
    Booktitle = icra,
    Year = {2006},
    Address = {Orlando, FL, USA},
    Pages = {442--447},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/grisetti06icra.pdf}
    }

  • H. Hellwich, “Bestimmung der Eigenbewegung anhand einer monokularen Bildfolge,” Diplomarbeit (Master Thesis), 2006.
    [BibTeX]
    [none]
    @MastersThesis{Hellwich2006Bestimmung,
    Title = {Bestimmung der Eigenbewegung anhand einer monokularen Bildfolge},
    Author = {Hellwich, Hendrik},
    School = {Institute of Photogrammetry, University of Bonn In Zusammenarbeit mit dem Institut f\"ur Informatik der Universit\"at Bonn},
    Year = {2006},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Ing. Richard Steffen},
    Type = {Diplomarbeit},
    Abstract = {[none]},
    City = {Bonn}
    }

  • A. Kesting, “Bild-basierte Baumkronenmodellierung mit Kugelflächenfunktionen,” Diplomarbeit (Master Thesis), 2006.
    [BibTeX]
    [none]
    @MastersThesis{Kesting2006Bild,
    Title = {Bild-basierte Baumkronenmodellierung mit Kugelfl\"achenfunktionen},
    Author = {Kesting, Arne},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2006},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Inform. Christian Beder},
    Type = {Diplomarbeit},
    Abstract = {[none]},
    City = {Bonn}
    }

  • T. Läbe and W. Förstner, “Automatic Relative Orientation of Images,” in Proceedings of the 5th Turkish-German Joint Geodetic Days , Berlin, 2006.
    [BibTeX] [PDF]
    This paper presents a new fully automatic approach for the relative orientation of several digital images taken with a calibrated camera. This approach uses new algorithms for feature extraction and relative orientation developed in the last few years. There is no need for special markers in the scene or for approximate values of the orientation data. We use the point operator developed by D. G. Lowe (2004), which extracts points with scale- and rotation-invariant descriptors (SIFT features). These descriptors allow a successful matching of image points even when dealing with highly convergent or rotated images. The approach consists of the following steps: After extracting image points on all images, a matching between every image pair is calculated using the SIFT parameters only. No prior information about the pose of the images or the overlapping parts of the images is used. For every image pair a relative orientation is computed with the help of a RANSAC procedure. Here we use the new 5-point algorithm from D. Nister (2004). Out of this set of orientations, approximate values for the orientation parameters and the object coordinates are calculated by computing the relative scales and transforming the models into a common coordinate system. Several tests are made in order to get a reliable input for the final step: a bundle block adjustment. The paper discusses the practical impacts of the used algorithms. Examples of different indoor and outdoor scenes, including a data set of oblique images taken from a helicopter, are presented, and the results of the approach applied to these data sets are evaluated. These results show that the approach can be used for a wide range of scenes with different types of image geometry, taken with different types of cameras including inexpensive consumer cameras. In particular, we investigate the robustness of the algorithms, e.g. in geometric tests on image triplets. Further developments, like the use of image pyramids with a modified matching, are discussed in the outlook. Literature: David G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60, 2 (2004), pp. 91-110. D. Nister, An efficient solution to the five-point relative pose problem, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 26(6):756-770, June 2004.

    @InProceedings{Labe2006Automatic,
    Title = {Automatic Relative Orientation of Images},
    Author = {L\"abe, Thomas and F\"orstner, Wolfgang},
    Booktitle = {Proceedings of the 5th Turkish-German Joint Geodetic Days},
    Year = {2006},
    Address = {Berlin},
    Abstract = {This paper presents a new fully automatic approach for the relative orientation of several digital images taken with a calibrated camera. This approach uses new algorithms for feature extraction and relative orientation developed in the last few years. There is no need for special markers in the scene or for approximate values of the orientation data. We use the point operator developed by D. G. Lowe (2004), which extracts points with scale- and rotation-invariant descriptors (SIFT features). These descriptors allow a successful matching of image points even when dealing with highly convergent or rotated images. The approach consists of the following steps: After extracting image points on all images, a matching between every image pair is calculated using the SIFT parameters only. No prior information about the pose of the images or the overlapping parts of the images is used. For every image pair a relative orientation is computed with the help of a RANSAC procedure. Here we use the new 5-point algorithm from D. Nister (2004). Out of this set of orientations, approximate values for the orientation parameters and the object coordinates are calculated by computing the relative scales and transforming the models into a common coordinate system. Several tests are made in order to get a reliable input for the final step: a bundle block adjustment. The paper discusses the practical impacts of the used algorithms. Examples of different indoor and outdoor scenes, including a data set of oblique images taken from a helicopter, are presented, and the results of the approach applied to these data sets are evaluated. These results show that the approach can be used for a wide range of scenes with different types of image geometry, taken with different types of cameras including inexpensive consumer cameras. In particular, we investigate the robustness of the algorithms, e.g. in geometric tests on image triplets. Further developments, like the use of image pyramids with a modified matching, are discussed in the outlook. Literature: David G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60, 2 (2004), pp. 91-110. D. Nister, An efficient solution to the five-point relative pose problem, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 26(6):756-770, June 2004.},
    City = {Bonn},
    Proceeding = {Proceedings of the 5th Turkish-German Joint Geodetic Days},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Labe2006Automatic.pdf}
    }
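The matching step above relies on SIFT descriptors alone. A minimal sketch of such a descriptor-matching stage with Lowe's ratio test (a toy re-implementation, not the authors' code; real SIFT descriptors are 128-dimensional and would be matched with a k-d tree rather than brute force):

```python
def match_descriptors(desc_a, desc_b, ratio=0.8):
    # Lowe's ratio test: accept a putative match only if its best
    # candidate is clearly better than the second best.
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    matches = []
    for i, da in enumerate(desc_a):
        ranked = sorted((dist2(da, db), j) for j, db in enumerate(desc_b))
        if len(ranked) >= 2 and ranked[0][0] < (ratio ** 2) * ranked[1][0]:
            matches.append((i, ranked[0][1]))
    return matches
```

The surviving matches would then feed the RANSAC loop around the five-point relative orientation and, finally, the bundle block adjustment.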

  • P. Lamon, C. Stachniss, R. Triebel, P. Pfaff, C. Plagemann, G. Grisetti, S. Kolski, W. Burgard, and R. Siegwart, “Mapping with an Autonomous Car,” in Workshop on Safe Navigation in Open and Dynamic Environments at the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , Beijing, China, 2006.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Lamon2006,
    Title = {Mapping with an Autonomous Car},
    Author = {Lamon, P. and Stachniss, C. and Triebel, R. and Pfaff, P. and Plagemann, C. and Grisetti, G. and Kolski, S. and Burgard, W. and Siegwart, R.},
    Booktitle = iroswsnav,
    Year = {2006},
    Address = {Beijing, China},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/lamon06iros.pdf}
    }

  • D. Meier, C. Stachniss, and W. Burgard, “Cooperative Exploration With Multiple Robots Using Low Bandwidth Communication,” in Informationsfusion in der Mess- und Sensortechnik , 2006, pp. 145-157.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Meier2006,
    Title = {Cooperative Exploration With Multiple Robots Using Low Bandwidth Communication},
    Author = {Meier, D. and Stachniss, C. and Burgard, W.},
    Booktitle = {Informationsfusion in der Mess- und Sensortechnik},
    Year = {2006},
    Editor = {Beyerer, J. and Puente Le\'{o}n, F. and Sommer, K.-D.},
    Pages = {145--157},
    Abstract = {[none]},
    ISBN = {3-86644-053-7},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/meier06sensor.pdf}
    }

  • C. Plagemann, C. Stachniss, and W. Burgard, “Efficient Failure Detection for Mobile Robots using Mixed-Abstraction Particle Filters,” in European Robotics Symposium 2006 , 2006, pp. 93-107.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Plagemann2006,
    Title = {Efficient Failure Detection for Mobile Robots using Mixed-Abstraction Particle Filters},
    Author = {Plagemann, C. and Stachniss, C. and Burgard, W.},
    Booktitle = {European Robotics Symposium 2006},
    Year = {2006},
    Editor = {H.I. Christensen},
    Pages = {93--107},
    Publisher = {Springer-Verlag Berlin Heidelberg, Germany},
    Series = springerstaradvanced,
    Volume = {22},
    Abstract = {[none]},
    ISBN = {3-540-32688-X},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/plagemann06euros.pdf}
    }

  • D. Sonntag, S. Stachniss-Carp, C. Stachniss, and V. Stachniss, “Determination of Root Canal Curvatures before and after Canal Preparation (Part II): A Method based on Numeric Calculus,” Aust Endod J, vol. 32, pp. 16-25, 2006.
    [BibTeX] [PDF]
    [none]
    @Article{Sonntag2006,
    Title = {Determination of Root Canal Curvatures before and after Canal Preparation (Part II): A Method based on Numeric Calculus},
    Author = {Sonntag, D. and Stachniss-Carp, S. and Stachniss, C. and Stachniss, V.},
    Journal = {Aust Endod J},
    Year = {2006},
    Pages = {16--25},
    Volume = {32},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/sonntag06endod.pdf}
    }

  • C. Stachniss, “Exploration and Mapping with Mobile Robots,” PhD Thesis, 2006.
    [BibTeX] [PDF]
    [none]
    @PhdThesis{Stachniss2006a,
    Title = {Exploration and Mapping with Mobile Robots},
    Author = {Stachniss, C.},
    School = {University of Freiburg, Department of Computer Science},
    Year = {2006},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss06phd.pdf}
    }

  • C. Stachniss, O. Martínez-Mozos, and W. Burgard, “Speeding-Up Multi-Robot Exploration by Considering Semantic Place Information,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Orlando, FL, USA, 2006, pp. 1692-1697.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Stachniss2006,
    Title = {Speeding-Up Multi-Robot Exploration by Considering Semantic Place Information},
    Author = {Stachniss, C. and Mart\'{i}nez-Mozos, O. and Burgard, W.},
    Booktitle = icra,
    Year = {2006},
    Address = {Orlando, FL, USA},
    Pages = {1692--1697},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss06icra.pdf}
    }

  • J. Thielmann, “Entwurf und Evaluierung eines Verfahrens zur Detektion wiederholter Bildstrukturen,” Diplomarbeit (Master Thesis), 2006.
    [BibTeX]
    Wiederholte Strukturen sind charakteristisch für künstliche Objekte, verursachen jedoch gleichzeitig für Zuordnungsverfahren eine sehr hohe algorithmische Komplexität, weshalb Verfahren zur Identifikation wiederholter Strukturen von besonderem Interesse sind. In der Arbeit soll das Verfahren von Schaffalitzky und Zisserman (2000) auf seine Eignung für die Detektion wiederholter Strukturen in Bildern von Gebäuden untersucht und bewertet werden.

    @MastersThesis{Thielmann2006Entwurf,
    Title = {Entwurf und Evaluierung eines Verfahrens zur Detektion wiederholter Bildstrukturen},
    Author = {Thielmann, Jan},
    Year = {2006},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Inform. Martin Drauschke},
    Type = {Diplomarbeit},
    Abstract = {Wiederholte Strukturen sind charakteristisch f\"ur k\"unstliche Objekte, verursachen jedoch gleichzeitig f\"ur Zuordnungsverfahren eine sehr hohe algorithmische Komplexit\"at, weshalb Verfahren zur Identifikation wiederholter Strukturen von besonderem Interesse sind. In der Arbeit soll das Verfahren von Schaffalitzky und Zisserman (2000) auf seine Eignung f\"ur die Detektion wiederholter Strukturen in Bildern von Geb\"auden untersucht und bewertet werden.},
    City = {Bonn}
    }

  • B. Waske and S. Schiefer, “Classifying segmented multitemporal SAR data from agricultural areas using support vector machines,” in 2nd Workshop of the EARSeL Special Interest Group on Land Use and Land Cover , 2006.
    [BibTeX] [PDF]
    In the presented study, the performance of support vector machines (SVM) for classifying segmented multi-temporal SAR data is investigated. Results show that multi-temporal SAR data from an area dominated by agriculture can be successfully classified using SVM. Classification accuracy (78.2%) and the degree of differentiation between land cover types are similar to or better than results achieved with a decision tree classifier. A positive influence of image segmentation on classification results can be reported, which varies with object size. A comparison of classification results derived at different aggregation levels shows that a medium segment size should be preferred: it is better to work with segments that are smaller than the natural features of interest, while segments that are larger than natural features should be avoided.

    @InProceedings{Waske2006Classifying,
    Title = {Classifying segmented multitemporal SAR data from agricultural areas using support vector machines},
    Author = {Waske, Bj\"orn and Schiefer, Sebastian},
    Booktitle = {2nd Workshop of the EARSeL Special Interest Group on Land Use and Land Cover},
    Year = {2006},
    Abstract = {In the presented study, the performance of support vector machines (SVM) for classifying segmented multi-temporal SAR data is investigated. Results show that multi-temporal SAR data from an area dominated by agriculture can be successfully classified using SVM. Classification accuracy (78.2%) and the degree of differentiation between land cover types are similar to or better than results achieved with a decision tree classifier. A positive influence of image segmentation on classification results can be reported, which varies with object size. A comparison of classification results derived at different aggregation levels shows that a medium segment size should be preferred: it is better to work with segments that are smaller than the natural features of interest, while segments that are larger than natural features should be avoided.},
    Owner = {waske},
    Timestamp = {2012.09.05},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Waske2006Classifying.pdf}
    }

  • B. Waske, S. Schiefer, and M. Braun, “Random Feature Selection for Decision Tree Classification of Multi-temporal SAR Data,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS) , 2006. doi:10.1109/IGARSS.2006.48
    [BibTeX]
    [none]
    @InProceedings{Waske2006Random,
    Title = {Random Feature Selection for Decision Tree Classification of Multi-temporal SAR Data},
    Author = {Waske, Bj\"orn and Schiefer, S. and Braun, M.},
    Booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2006},
    Abstract = {[none]},
    Doi = {10.1109/IGARSS.2006.48},
    Keywords = {decision tree classification;multiple classifiers size;multitemporal SAR images;random feature selection;supervised land cover classifications;visual inspection;decision trees;feature extraction;geophysics computing;image classification;synthetic aperture radar;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • S. Wenzel, “Detektion wiederholter und symmetrischer Strukturen von Objekten in Bildern,” Diplomarbeit (Master Thesis), 2006.
    [BibTeX] [PDF]
    Sich wiederholende bzw. symmetrische Strukturen sind Hinweise auf künstliche Objekte, führen aber auch zu Schwierigkeiten bei klassischen Bildzuordnungsverfahren. Die Suche und Gruppierung zusammengehöriger Features kann daher zur Identifikation künstlicher Objekte oder zur Verbesserung von Zuordnungsverfahren dienen. Darüber hinaus kann man aus einem Bild eines im Raum symmetrischen Objekts auf die 3D-Struktur dieses Objekts schließen. Die Diplomarbeit soll das von Loy und Eklundh auf der ECCV 2006 vorgestellte Verfahren zur Detektion symmetrischer und wiederholter Bildbereiche implementieren und hinsichtlich seiner Verwendbarkeit für photogrammetrische Gebäudeaufnahmen überprüfen. Insbesondere geht es um die Detektierbarkeit regelmäßiger Fassadenstrukturen in Abhängigkeit von ihrer Komplexität. Darüber hinaus ist zu klären, wie mehrfache Symmetrien identifiziert und ggf. für die 3D-Rekonstruktion des regelmäßigen Teils der Fassadenstruktur genutzt werden können.

    @MastersThesis{Wenzel2006Detektion,
    Title = {Detektion wiederholter und symmetrischer Strukturen von Objekten in Bildern},
    Author = {Wenzel, Susanne},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2006},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Inform. Martin Drauschke},
    Type = {Diplomarbeit},
    Abstract = {Sich wiederholende bzw. symmetrische Strukturen sind Hinweise auf k\"unstliche Objekte, f\"uhren aber auch zu Schwierigkeiten bei klassischen Bildzuordnungsverfahren. Die Suche und Gruppierung zusammengeh\"origer Features kann daher zur Identifikation k\"unstlicher Objekte oder zur Verbesserung von Zuordnungsverfahren dienen. Dar\"uber hinaus kann man aus einem Bild eines im Raum symmetrischen Objekts auf die 3D-Struktur dieses Objekts schlie{\ss}en. Die Diplomarbeit soll das von Loy und Eklundh auf der ECCV 2006 vorgestellte Verfahren zur Detektion symmetrischer und wiederholter Bildbereiche implementieren und hinsichtlich seiner Verwendbarkeit f\"ur photogrammetrische Geb\"audeaufnahmen \"uberpr\"ufen. Insbesondere geht es um die Detektierbarkeit regelm\"a{\ss}iger Fassadenstrukturen in Abh\"angigkeit von ihrer Komplexit\"at. Dar\"uber hinaus ist zu kl\"aren, wie mehrfache Symmetrien identifiziert und ggf. f\"ur die 3D-Rekonstruktion des regelm\"a{\ss}igen Teils der Fassadenstruktur genutzt werden k\"onnen.},
    City = {Bonn},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Wenzel2006Detektion.pdf}
    }

  • K. Wolff, “Zur Approximation allgemeiner optischer Abbildungsmodelle und deren Anwendung auf eine geometrisch basierte Mehrbildzuordnung am Beispiel einer Mehrmedienabbildung,” PhD Thesis, 2006.
    [BibTeX]
    Summary: Non-perspective mappings include geometric elements which have a direct influence on their geometric characteristics but which are more or less unknown in the field of photogrammetry. An example of such an element is the type of viewing point, which can be a single point, a line or a surface (non-single points). Another example is the way of mapping rays. Both influence the image distortions depending on the position of the image point in image space or of the object point in object space. This work fundamentally deals with the geometry of viewing points, the resulting image distortions and their relevance for a three-dimensional photogrammetric reconstruction. The work focuses on the following four points: Introduction of a new representation and taxonomy for optical imaging systems. One feature of optical mappings is the type of image distortion, which is important for the complexity of a photogrammetric process. It may be based either on the position of the image point in image space or on that of the object point in object space, which is much more complex. The image distortion and its modeling are directly influenced by a single-point or non-single-point viewing point. In this context it is important that an imaging system may have more than one viewing point. A corresponding analysis of imaging systems has so far been carried out in photogrammetry only with respect to some of these aspects. Here, a global analysis is given which takes all these aspects into account, and a resulting new representation and taxonomy for optical imaging systems is introduced. Development of a general, efficient approximation method for imaging systems. Non-perspective mappings, which have object-space-based image distortions, might have very complex and specialised mapping models. This motivates the development of an efficient and general approximation method that represents complex mappings with object-space-based distortion by a simplified model.
Given suitable requirements for its use, a significant influence of the approximation error on the quality of the final results can be prevented; a-priori quality analyses of a 3D reconstruction support this assumption. Development of a geometrically based matching algorithm for multiple views. The approximation method is used in a new geometry-based matching algorithm for 3D reconstruction, which is also presented here. Different tests using synthetic and real data analyse and evaluate the methodology of the approximation, of the image matching and of the 3D object reconstruction for multiple views, and show the efficiency of both methods. Quality tests by a photogrammetric reconstruction of a fluvial sediment surface with multi-media geometry. As an application example of the methods, a fluvial sediment surface is observed through the three optical media air, Perspex and water and is reconstructed in 3D. This application also tests the use of photogrammetric methods for preparing input data for a synthetic analysis of the sedimentation process. The results indicate that photogrammetric methods are applicable for this task. Zusammenfassung: Nicht perspektivische Abbildungen besitzen geometrische Elemente, die auf ihre geometrischen Eigenschaften einen direkten Einfluss haben, im Bereich der Photogrammetrie aber kaum bekannt sind. Ein solches Element ist zum Beispiel die Art eines Projektionszentrums, das punkt-, linien- oder flächenförmig sein kann, sowie der Verlauf der Abbildungsstrahlen; beide beeinflussen die Abhängigkeit der Verzeichnung von der Position des Bildpunktes im Bildraum oder des zugehörigen Objektpunktes im Objektraum. Diese Arbeit behandelt grundlegend die Geometrie von Projektionszentren, die daraus resultierenden Bildverzeichnungen und ihre Bedeutung für eine dreidimensionale photogrammetrische Auswertung.
Zusammenfassend umfasst diese Arbeit die folgenden vier Schwerpunkte: Einführung einer neuen Repräsentation und Taxonomie optischer Abbildungssysteme. Ein für die Komplexität einer photogrammetrischen Auswertung entscheidendes Merkmal optischer Abbildungen ist die Art ihrer Bildfehler, die bildraumbasiert oder objektraumbasiert sein können. Die Bildfehler und ihre Modellierung werden direkt durch das punktförmige oder nicht punktförmige Projektionszentrum des abbildenden Strahlenbündels beeinflusst. Dabei kann ein Abbildungssystem streng genommen mehr als ein Projektionszentrum besitzen. Eine entsprechende Analyse optischer Systeme ist im Bereich der Photogrammetrie bisher nur unter Berücksichtigung von einzelnen dieser Aspekte durchgeführt worden und wird hier zusammen mit einer resultierenden neuen Repräsentation und Taxonomie umfassender dargestellt. Vorstellung eines allgemeinen, effizienten Approximationsmodells optischer Abbildungssysteme. Nicht perspektivische Abbildungen, die eine objektraumbasierte Verzeichnung besitzen, können sehr komplexe und spezialisierte Abbildungsmodelle besitzen. Dies ist Motivation für die Entwicklung eines effizienten und allgemein gültigen Approximationsmodells komplexer, objektraumbasiert verzeichneter Abbildungen durch ein einfaches Modell. Durch bestimmte Anforderungen an den Einsatz des Modells wird ein signifikanter Einfluss auf die Endergebnisse verhindert. Apriori Qualitätsuntersuchungen für eine dreidimensionale Objektrekonstruktion bestätigen dies. Entwicklung eines geometrisch basierten Zuordnungsalgorithmus im Mehrbildverband. Das Approximationsmodell wird innerhalb eines ebenfalls hier vorgestellten, geometrisch basierten Zuordnungsverfahrens innerhalb einer Objektrekonstruktion angewendet. Verschiedene Tests mit synthetischen und realen Daten analysieren und bewerten die Methodik der Approximation, der Bildzuordnung und Objektrekonstruktion im Mehrbildverband und zeigen einen effektiven Einsatz der beiden Verfahren. 
Qualitätsanalysen mittels photogrammetrischer Rekonstruktion von fluvialen Sedimentoberflächen unter optischen Mehrmedienbedingungen. Als Anwendungsbeispiel wird eine fluviale Sedimentoberfläche durch die Medien Luft, Plexiglas und Wasser hindurch beobachtet und rekonstruiert. Mit diesem Beispiel wird gleichzeitig die Einsatzmöglichkeiten photogrammetrischer Methoden für die Erzeugung von Eingangsdaten für eine Analyse eines dynamischen, fluvialen Sedimentationsprozesses geprüft. Die Ergebnisse zeigen, dass photogrammetrische Methoden für die Lösung dieser Aufgabenstellung grundsätzlich anwendbar sind.

    @PhdThesis{Wolff2006Zur,
    Title = {Zur Approximation allgemeiner optischer Abbildungsmodelle und deren Anwendung auf eine geometrisch basierte Mehrbildzuordnung am Beispiel einer Mehrmedienabbildung},
    Author = {Wolff, Kirsten},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2006},
    Abstract = {\textbf{Summary} Non perspective mappings include geometric elements, which have an direct influence on their geometric characteristics but which are more or less unknown in the field of photogrammetry. An example for such an element is the type of a viewing point, which can be a single point, a line or a surface (non single points). Another example is the way of mapping rays. Both influence the image distortions depending on the position of the image point in image space or the object point in object space. This work basically deals with the geometry of viewing points, the resulting image distortion and their relevance for a three dimensional photogrammetric reconstruction. The work focuses the following four points: Introduction of a new representation and taxonomy for optical imaging systems. One feature of optical mappings is the type of image distortion, which is important for the complexity of a photogrammetric process. It may either base on the position of the image point in image space or the object point in object space, which is much more complex. The image distortion and its modeling are being directly influenced by a single point or non single point viewpoint. In this context it is important that an imaging system may have more than one viewing point. A corresponding analysis of imaging systems is realised in the field of photogrammetry only under condition of some of these aspects. Here, a global analysis is given, which takes all these mentioned aspects into account, and a new resulting representation and taxonomy for optical imaging systems is introduced. Development of a general, efficient approximation method for imaging systems. Non perspective mappings, which have object space based image distortions, might have very complex and specialised mapping models. This is motivation for the development of an efficient and general approximation method for complex, object space based distorted mappings by a simplified model. 
By a definition of special requirements for the realisation, a significant influence of the approximation error on the quality of the final results can be prevented. A priori quality analyses of a 3d reconstruction support this assumption. Development of a geometrically based matching algorithm for multiple views. The approximation method is used for a new geometry based matching algorithm for 3D reconstruction, which will also be presented here. Different tests using synthetic and real data analyse and evaluate the methodology of the approximation, of the image matching and the 3D object reconstruction for multiple views and show the efficiency of both methods. Quality tests by a photogrammetric reconstruction of a fluvial sediment surface with multi media geometry. As an application example of the methods, a fluvial sediment surface is observed through the three optical media air, Perspex and water and is reconstructed in 3D. This application is also a test of the use of photogrammetric methods for preparing input data for a synthetic analysis of the sedimentation process. The results indicate that photogrammetric methods are applicable for this task. \textbf{Zusammenfassung} Nicht perspektivische Abbildungen besitzen geometrische Elemente, die auf ihre geometrischen Eigenschaften einen direkten Einfluss haben, im Bereich der Photogrammetrie aber kaum bekannt sind. Ein solches Element ist zum Beispiel die Art eines Projektionszentrums, das punkt-, linien- oder fl\"achenf\"ormig sein kann und der Verlauf der Abbildungsstrahlen, die beide die Abh\"angigkeit der Verzeichnung von der Position des Bildpunktes im Bildraum oder des zugeh\"origen Objektpunktes im Objektraum beeinflussen. Diese Arbeit behandelt grundlegend die Geometrie von Projektionszentren, den daraus resultierenden Bildverzeichnungen und ihre Bedeutung f\"ur eine dreidimensionale photogrammetrische Auswertung. 
Zusammenfassend umfasst diese Arbeit die folgenden vier Schwerpunkte: Einf\"uhrung einer neuen Repr\"asentation und Taxonomie optischer Abbildungssysteme. Ein f\"ur die Komplexit\"at einer photogrammetrischen Auswertung entscheidendes Merkmal optischer Abbildungen ist die Art ihrer Bildfehler, die bildraumbasiert oder objektraumbasiert sein k\"onnen. Die Bildfehler und ihre Modellierung werden direkt durch das punktf\"ormige oder nicht punktf\"ormige Projektionszentrum des abbildenden Strahlenb\"undels beeinflusst. Dabei kann ein Abbildungssystem streng genommen mehr als ein Projektionszentrum besitzen. Eine entsprechende Analyse optischer Systeme ist im Bereich der Photogrammetrie bisher nur unter Ber\"ucksichtigung von einzelnen dieser Aspekte durchgef\"uhrt worden und wird hier zusammen mit einer resultierenden neuen Repr\"asentation und Taxonomie umfassender dargestellt. Vorstellung eines allgemeinen, effizienten Approximationsmodells optischer Abbildungssysteme. Nicht perspektivische Abbildungen, die eine objektraumbasierte Verzeichnung besitzen, k\"onnen sehr komplexe und spezialisierte Abbildungsmodelle besitzen. Dies ist Motivation f\"ur die Entwicklung eines effizienten und allgemein g\"ultigen Approximationsmodells komplexer, objektraumbasiert verzeichneter Abbildungen durch ein einfaches Modell. Durch bestimmte Anforderungen an den Einsatz des Modells wird ein signifikanter Einfluss auf die Endergebnisse verhindert. Apriori Qualit\"atsuntersuchungen f\"ur eine dreidimensionale Objektrekonstruktion best\"atigen dies. Entwicklung eines geometrisch basierten Zuordnungsalgorithmus im Mehrbildverband. Das Approximationsmodell wird innerhalb eines ebenfalls hier vorgestellten, geometrisch basierten Zuordnungsverfahrens innerhalb einer Objektrekonstruktion angewendet. 
Verschiedene Tests mit synthetischen und realen Daten analysieren und bewerten die Methodik der Approximation, der Bildzuordnung und Objektrekonstruktion im Mehrbildverband und zeigen einen effektiven Einsatz der beiden Verfahren. Qualit\"atsanalysen mittels photogrammetrischer Rekonstruktion von fluvialen Sedimentoberfl\"achen unter optischen Mehrmedienbedingungen. Als Anwendungsbeispiel wird eine fluviale Sedimentoberfl\"ache durch die Medien Luft, Plexiglas und Wasser hindurch beobachtet und rekonstruiert. Mit diesem Beispiel wird gleichzeitig die Einsatzm\"oglichkeiten photogrammetrischer Methoden f\"ur die Erzeugung von Eingangsdaten f\"ur eine Analyse eines dynamischen, fluvialen Sedimentationsprozesses gepr\"uft. Die Ergebnisse zeigen, dass photogrammetrische Methoden f\"ur die L\"osung dieser Aufgabenstellung grunds\"atzlich anwendbar sind.}
    }
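
The multi-media setup in this thesis observes the sediment surface through air, Perspex and water, so every imaging ray is refracted twice before reaching the object. As a minimal sketch of the geometry such an object-space-based distortion model has to capture, the refraction at the parallel interfaces can be written with Snell's law (the refractive indices below are assumed textbook values, not taken from the thesis):

```python
import math

def refract_angle(theta_in, n_in, n_out):
    """Snell's law: n_in * sin(theta_in) = n_out * sin(theta_out)."""
    s = n_in * math.sin(theta_in) / n_out
    if abs(s) > 1.0:
        raise ValueError("total internal reflection")
    return math.asin(s)

# A ray entering at 30 degrees in air, passing through Perspex into water.
n_air, n_perspex, n_water = 1.000, 1.49, 1.333  # assumed textbook indices
theta_air = math.radians(30.0)
theta_perspex = refract_angle(theta_air, n_air, n_perspex)
theta_water = refract_angle(theta_perspex, n_perspex, n_water)
```

For parallel planar interfaces the Perspex layer only offsets the ray laterally: entry angle in air and exit angle in water satisfy Snell's law directly, so the intermediate medium shifts the ray's intersection point without changing its final direction.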

2005

  • S. Abraham and W. Förstner, “Fish-eye-stereo calibration and epipolar rectification,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 59, iss. 5, pp. 278-288, 2005.
    [BibTeX] [PDF]
    The paper describes calibration and epipolar rectification for stereo with fish-eye optics. While stereo processing with classical cameras is state of the art for many applications, stereo with fish-eye cameras has been much less discussed in the literature. This paper discusses the geometric calibration and the epipolar rectification as a prerequisite for stereo processing with fish-eyes. First, it surveys mathematical models to describe the projection. Then the paper presents a method of generating epipolar images which are suitable for stereo processing with a field of view larger than 180 degrees in vertical and horizontal viewing directions. One example with 3D point measurement from real fish-eye images demonstrates the feasibility of the calibration and rectification procedure. Keywords: fish-eye camera calibration; fish-eye stereo; epipolar rectification

    @Article{Steffen2005Fish,
    Title = {Fish-eye-stereo calibration and epipolar rectification},
    Author = {Steffen Abraham and Wolfgang F\"orstner},
    Journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
    Year = {2005},
    Number = {5},
    Pages = {278--288},
    Volume = {59},
    Abstract = {The paper describes calibration and epipolar rectification for stereo with fish-eye optics. While stereo processing of classical cameras is state of the art for many applications, stereo with fish-eye cameras have been much less discussed in literature. This paper discusses the geometric calibration and the epipolar rectification as pre-requisite for stereo processing with fish-eyes. First, it surveys mathematical models to describe the projection. Then the paper presents a method of generating epipolar images which are suitable for stereo-processing with a field of view larger than 180 degrees in vertical and horizontal viewing directions. One example with 3D-point measuring from real fish-eye images demonstrates the feasibility of the calibration and rectification procedure. *Keywords: *fish-eye camera calibration; fish-eye stereo; epipolar rectification},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Steffen2005Fish.pdf}
    }
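
Among the projection models such a survey typically covers, the equidistant model is a common description of fish-eye optics; a minimal sketch (the concrete model calibrated in the paper may differ) of why incidence angles at and beyond 90 degrees, i.e. a field of view larger than 180 degrees, still map to finite image radii:

```python
import math

def equidistant_project(theta, f):
    """Equidistant fish-eye model: image radius r = f * theta, where theta is
    the angle between the incoming ray and the optical axis (radians)."""
    return f * theta

def equidistant_unproject(r, f):
    """Inverse mapping: recover the incidence angle from the image radius."""
    return r / f

f = 300.0  # focal length in pixels (illustrative value)
r_90 = equidistant_project(math.radians(90.0), f)    # ray perpendicular to the axis
r_100 = equidistant_project(math.radians(100.0), f)  # beyond 180 deg FOV, still finite
```

Under a perspective model, r = f * tan(theta) diverges at 90 degrees; the equidistant mapping stays linear in theta, which is what makes rectification for fields of view over 180 degrees possible at all.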

  • C. Beder, “Agglomerative Grouping of Observations by Bounding Entropy Variation,” in Pattern Recognition , Vienna, Austria, 2005, pp. 101-108. doi:10.1007/11550518_13
    [BibTeX] [PDF]
    An information theoretic framework for grouping observations is proposed. The entropy change incurred by new observations is analyzed using the Kalman filter update equations. It is found that the entropy variation is caused by a positive similarity term and a negative proximity term. Bounding the similarity term in the spirit of the minimum description length principle and the proximity term in the spirit of maximum entropy inference, a robust and efficient grouping procedure is devised. Some of its properties are demonstrated for the exemplary task of edgel grouping.

    @InProceedings{Beder2005Agglomerative,
    Title = {Agglomerative Grouping of Observations by Bounding Entropy Variation},
    Author = {Beder, Christian},
    Booktitle = {Pattern Recognition},
    Year = {2005},
    Address = {Vienna, Austria},
    Editor = {Kropatsch, Walter and Sablatnig, Robert and Hanbury, Allan},
    Number = {3663},
    Organization = {DAGM},
    Pages = {101-108},
    Publisher = {Springer},
    Series = {LNCS},
    Abstract = {An information theoretic framework for grouping observations is proposed. The entropy change incurred by new observations is analyzed using the Kalman filter update equations. It is found, that the entropy variation is caused by a positive similarity term and a negative proximity term. Bounding the similarity term in the spirit of the minimum description length principle and the proximity term in the spirit of maximum entropy inference a robust and efficient grouping procedure is devised. Some of its properties are demonstrated for the exemplary task of edgel grouping.},
    Doi = {10.1007/11550518_13},
    File = {beder05.agglomerative.pdf:http\://www.ipb.uni-bonn.de/papers/2005/beder05.agglomerative.pdf:PDF},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Beder2005Agglomerative.pdf}
    }
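
The entropy variation the abstract refers to can be made concrete for a Gaussian estimate updated by a Kalman filter: the differential entropy depends on the determinant of the covariance, which the measurement update shrinks. A minimal numeric sketch with illustrative matrices (not the paper's grouping criterion itself):

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of an n-dimensional Gaussian:
    0.5 * ln((2*pi*e)^n * det(cov))."""
    n = cov.shape[0]
    return 0.5 * np.log((2.0 * np.pi * np.e) ** n * np.linalg.det(cov))

def kalman_update_cov(P, H, R):
    """Covariance part of the Kalman measurement update: P' = (I - K H) P."""
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    return (np.eye(P.shape[0]) - K @ H) @ P

P = np.diag([2.0, 2.0])     # prior covariance of a group estimate
H = np.array([[1.0, 0.0]])  # a new observation of the first component
R = np.array([[0.5]])       # measurement noise
P_post = kalman_update_cov(P, H, R)
delta_H = gaussian_entropy(P_post) - gaussian_entropy(P)  # negative: entropy decreases
```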

  • W. Burgard, M. Moors, C. Stachniss, and F. Schneider, “Coordinated Multi-Robot Exploration,” IEEE Transactions on Robotics, vol. 21, iss. 3, pp. 376-378, 2005.
    [BibTeX] [PDF]
    [none]
    @Article{Burgard2005a,
    Title = {Coordinated Multi-Robot Exploration},
    Author = {W. Burgard and M. Moors and C. Stachniss and F. Schneider},
    Journal = ieeetransrob,
    Year = {2005},
    Number = {3},
    Pages = {376--378},
    Volume = {21},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/burgard05tro.pdf}
    }

  • W. Burgard, C. Stachniss, and G. Grisetti, “Information Gain-based Exploration Using Rao-Blackwellized Particle Filters,” in Proc. of the Learning Workshop (Snowbird) , Snowbird, UT, USA, 2005.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Burgard2005,
    Title = {Information Gain-based Exploration Using Rao-Blackwellized Particle Filters},
    Author = {Burgard, W. and Stachniss, C. and Grisetti, G.},
    Booktitle = {Proc. of the Learning Workshop (Snowbird)},
    Year = {2005},
    Address = {Snowbird, UT, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/burgard05snowbird.pdf}
    }

  • G. Grisetti, C. Stachniss, and W. Burgard, “Improving Grid-based SLAM with Rao-Blackwellized Particle Filters by Adaptive Proposals and Selective Resampling,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Barcelona, Spain, 2005, pp. 2443-2448.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Grisetti2005,
    Title = {Improving Grid-based {SLAM} with Rao-Blackwellized Particle Filters by Adaptive Proposals and Selective Resampling},
    Author = {Grisetti, G. and Stachniss, C. and Burgard, W.},
    Booktitle = ICRA,
    Year = {2005},
    Address = {Barcelona, Spain},
    Pages = {2443--2448},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/grisetti05icra.pdf}
    }

  • V. Heinzel, B. Waske, M. Braun, and G. Menz, “The potential of multitemporal and multisensoral remote sensing data for the extraction of biophysical parameters of wheat,” in SPIE Remote Sensing Europe , 2005. doi:10.1117/12.627336
    [BibTeX]
    Satellite-based monitoring of agricultural activities requires a very high temporal resolution due to the highly dynamic processes on the observed surfaces. The use of optical data alone is restricted by its dependence on weather conditions. Hence, the synergetic use of SAR and optical data has a very high potential for agricultural applications such as biomass monitoring or yield estimation. Synthetic Aperture Radar data of ERS-2 offer the chance of bi-weekly data acquisitions. Additionally, Landsat-5 Thematic Mapper (TM) and high-resolution optical data from the Quickbird satellite shall help to verify the derived information. The Advanced Synthetic Aperture Radar (ASAR) of the European environmental satellite (ENVISAT) enables several acquisitions per week due to the availability of different incidence angles. Moreover, the ASAR sensor offers the possibility to acquire alternating-polarization data, providing HH/HV and VV/VH images. This will help to fill time gaps and bring an additional information gain in further studies. In the present study the temporal development of biomass of two winter wheat fields is modeled based on multitemporal and multisensoral satellite data. For this purpose comprehensive ground truth information (e.g. biomass, LAI, vegetation height) was recorded at weekly intervals for the vegetation period of 2005. A positive relationship between the normalized difference vegetation index (NDVI) of optical data and biomass could be shown. The backscatter of SAR data is negatively related to the biomass. Regression coefficients of biomass models based on satellite data and the collected biomass vary between r2=0.49 for ERS-2 and r2=0.86 for Quickbird. The study is a first step in the synergetic use of optical and SAR data for biomass modeling and yield estimation over agricultural sites in Central Europe.

    @InProceedings{Heinzel2005potential,
    Title = {The potential of multitemporal and multisensoral remote sensing data for the extraction of biophysical parameters of wheat},
    Author = {Heinzel, Vanessa and Waske, Bj\"orn and Braun, Matthias and Menz, Gunter},
    Booktitle = {SPIE Remote Sensing Europe},
    Year = {2005},
    Abstract = {Satellite based monitoring of agricultural activities requires a very high temporal resolution, due to the highly dynamic processes on viewed surfaces. The solitary use of optical data is restricted by its dependency on weather conditions. Hence, the synergetic use of SAR and optical data has a very high potential for agricultural applications such as biomass monitoring or yield estimation. Synthetic Aperture Radar data of the ERS-2 offer the chance of bi-weekly data acquisitions. Additionally, Landsat-5 Thematic Mapper (TM) and high-resolution optical data from the Quickbird satellite shall help to verify the derived information. The Advanced Synthetic Aperture Radar (ASAR) of the European environmental satellite (ENVISAT) enables several acquisitions per week, due to the availability of different incidence angles. Moreover, the ASAR sensor offers the possibility to acquire alternating polarization data, providing HH/HV and VV/VH images. This will help to fill time gaps and bring an additional information gain in further studies. In the present study the temporal development of biomass from two winter wheat fields is modeled based on multitemporal and multisensoral satellite data. For this purpose comprehensive ground truth information (e.g. biomass, LAI, vegetation height) was recorded in weekly intervals for the vegetation period of 2005. A positive relationship between the normalized difference vegetation index (NDVI) of optical data and biomass could be shown. The backscatter of SAR data is negatively related to the biomass. Regression coefficients of models for biomass based on satellite data and the collected biomass vary between r2=0.49 for ERS-2 and r2=0.86 for Quickbird. The study is a first step in the synergetic use of optical and SAR data for biomass modeling and yield estimation over agricultural sites in Central Europe.},
    Doi = {10.1117/12.627336},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }
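
The two quantities the study relates, the NDVI computed from optical bands and the coefficient of determination r2 of a biomass regression, can be sketched as follows (the values used here are illustrative, not the study's field measurements):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared and red reflectance."""
    return (nir - red) / (nir + red)

def r_squared(y, y_pred):
    """Coefficient of determination of a fitted model."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_pred))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

# Dense vegetation reflects strongly in the near-infrared band.
vegetated = ndvi(nir=0.5, red=0.1)
bare_soil = ndvi(nir=0.3, red=0.25)
```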

  • S. Krömeke, “Extraktion affininvarianter Bildmerkmale,” Diplomarbeit (Master's Thesis), 2005.
    [BibTeX]
    [none]
    @MastersThesis{Kromeke2005Extraktion,
    Title = {Extraktion affininvarianter Bildmerkmale},
    Author = {Kr\"omeke, Sven},
    School = {Institute of Photogrammetry, University of Bonn In Zusammenarbeit mit dem Institut f\"ur Informatik der Universit\"at Bonn},
    Year = {2005},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, PD Dr. Volker Steinhage},
    Type = {Diplomarbeit},
    Abstract = {[none]},
    City = {Bonn}
    }

  • T. Läbe and W. Förstner, “Erfahrungen mit einem neuen vollautomatischen Verfahren zur Orientierung digitaler Bilder,” in Proceedings of DGPF Conference , Rostock, Germany, 2005.
    [BibTeX] [PDF]
    The paper presents a new fully automatic procedure for the relative orientation of multiple digital images from calibrated cameras. It exploits algorithms developed in recent years in the fields of feature extraction and image geometry, and requires neither artificial targets nor approximate values. It is based on automatically extracted points computed with the scale-invariant feature extraction method proposed by D. Lowe (2004), which allows point matching even for strongly convergent views. To obtain approximate values for the final bundle adjustment, the direct solution by D. Nister (2004) is used for the relative orientation of the image pairs. The paper discusses practical experience with these algorithms on example data sets of both indoor and outdoor images.

    @InProceedings{Labe2005Erfahrungen,
    Title = {Erfahrungen mit einem neuen vollautomatischen Verfahren zur Orientierung digitaler Bilder},
    Author = {L\"abe, Thomas and F\"orstner, Wolfgang},
    Booktitle = {Proceedings of DGPF Conference},
    Year = {2005},
    Address = {Rostock, Germany},
    Abstract = {Der Aufsatz pr\"asentiert ein neues vollautomatisches Verfahren zur relativen Orientierung mehrerer digitaler Bilder kalibrierter Kameras. Es nutzt die in den letzten Jahren neu entwickelten Algorithmen im Bereich der Merkmalsextraktion und der Bildgeometrie und erfordert weder das Anbringen von k\"unstlichen Zielmarken noch die Angabe von N\"aherungswerten. Es basiert auf automatisch extrahierten Punkten, die mit dem von D. Lowe (2004) vorgeschlagenen Verfahren zur Extraktion skaleninvarianter Bildmerkmale berechnet werden. Diese erm\"oglichen eine Punktzuordnung auch bei stark konvergenten Aufnahmen. F\"ur die Bestimmung von N\"aherungswerten der abschlie{\ss}enden B\"undelausgleichung wird bei der relativen Orientierung der Bildpaare das direkte L\"osungsverfahren von D. Nister (2004) verwendet. Der Aufsatz diskutiert die praktischen Erfahrungen mit den verwendeten Algorithmen anhand von Beispieldatens\"atzen sowohl von Innenraum- als auch von Aussnaufnahmen.},
    City = {Bonn},
    Proceeding = {Proceedings of DGPF Conference},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Labe2005Erfahrungen.pdf}
    }

  • O. Martínez-Mozos, C. Stachniss, and W. Burgard, “Supervised Learning of Places from Range Data using Adaboost,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA) , Barcelona, Spain, 2005, pp. 1742-1747.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Mart'inez-Mozos2005,
    Title = {Supervised Learning of Places from Range Data using Adaboost},
    Author = {Mart\'{i}nez-Mozos, O. and Stachniss, C. and W. Burgard},
    Booktitle = ICRA,
    Year = {2005},
    Address = {Barcelona, Spain},
    Pages = {1742--1747},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/martinez05icra.pdf}
    }

  • J. Meidow and H. Schuster, “Voxel-based Quality Evaluation of Photogrammetric Building Acquisitions.” 2005.
    [BibTeX] [PDF]
    Automatic quality evaluation of photogrammetric building acquisitions is important to reveal deficiencies of acquisition approaches, to compare different acquisition approaches, and to check compliance with contractual specifications. For the decision-makers a procedure is suggested that takes a few, readily interpretable quality measures into account. Therefore, useful quality measures have to be identified by the formulation of criteria. These quantities can be derived from the comparison of a test data set and a reference data set capturing the same scene. The acquired topology is usually uncertain, as, for instance, two adjacent buildings may be acquired as one building or as two buildings. Thus a screening of the registered area is suggested to compute the quantities. The approach is independent of the acquisition method used. For the application to large data sets the corresponding data structures are explained. In experimental tests the buildings registered by two commercial acquisition systems are compared by quality measures determined in 2D and 3D.

    @InProceedings{Meidow2005Voxel,
    Title = {Voxel-based Quality Evaluation of Photogrammetric Building Acquisitions},
    Author = {Meidow, Jochen and Schuster, Hanns-Florian},
    Year = {2005},
    Abstract = {Automatic quality evaluation of photogrammetric building acquisitions is important to realize deficiencies of acquisition approaches, tocompare different acquisitions approaches and to check the keeping of contractual specifications. For the decision-makers a procedure will be suggested taking a few, good interpretable quality measures into account. Therefore, useful quality measures have to be identifiedby the formulation of criteria. These quantities can be derived from the comparison of a test data set and a reference data set capturing the same scene. The acquired topology is usually uncertain as for instance two adjacent buildings may be acquired as one building ortwo buildings. Thus a screening of the registered area is suggested to compute the quantities. The approach is independent of the used acquisition method. For the application of large data sets the corresponding data structures will be explained. In experimental tests thebuildings registered by two commercial acquisition systems will be compared by the quality measures determined in 2D and 3D.},
    City = {Bonn},
    Proceeding = {Proceedings of the CMRT05-Workshop},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Meidow2005Voxel.pdf}
    }
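
The screening of the registered area suggests a per-voxel comparison of test and reference data; a minimal sketch, assuming both data sets are given as sets of occupied voxel indices and using completeness/correctness as example measures (the paper's exact quality measures may differ):

```python
def voxel_quality(test, reference):
    """Completeness: fraction of reference voxels present in the test set.
    Correctness: fraction of test voxels confirmed by the reference set."""
    overlap = len(test & reference)
    return overlap / len(reference), overlap / len(test)

# Occupied voxels of a test acquisition and of the reference acquisition.
test = {(0, 0, 0), (0, 1, 0), (1, 1, 0)}
reference = {(0, 0, 0), (0, 1, 0), (0, 2, 0), (1, 2, 0)}
completeness, correctness = voxel_quality(test, reference)  # 2/4 and 2/3
```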

  • D. Meier, C. Stachniss, and W. Burgard, “Coordinating Multiple Robots During Exploration Under Communication With Limited Bandwidth,” in Proceedings of the European Conference on Mobile Robots (ECMR) , Ancona, Italy, 2005, pp. 26-31.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Meier2005,
    Title = {Coordinating Multiple Robots During Exploration Under Communication With Limited Bandwidth},
    Author = {Meier, D. and Stachniss, C. and Burgard, W.},
    Booktitle = ECMR,
    Year = {2005},
    Address = {Ancona, Italy},
    Pages = {26--31},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/meier05ecmr.pdf}
    }

  • A. Rottmann, O. Martínez-Mozos, C. Stachniss, and W. Burgard, “Place Classification of Indoor Environments with Mobile Robots using Boosting,” in Proceedings of the National Conference on Artificial Intelligence (AAAI) , Pittsburgh, PA, USA, 2005, pp. 1306-1311.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Rottmann2005,
    Title = {Place Classification of Indoor Environments with Mobile Robots using Boosting},
    Author = {Rottmann, A. and Mart\'{i}nez-Mozos, O. and Stachniss, C. and Burgard, W.},
    Booktitle = aaai,
    Year = {2005},
    Address = {Pittsburgh, PA, USA},
    Pages = {1306--1311},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/rottmann05aaai.pdf}
    }

  • H. Schuster, “Detection of man-made-objects based on spatial aggregations,” Department of Photogrammetry, University of Bonn, TR-IGG-P-2006-01, 2005.
    [BibTeX] [PDF]
    This paper presents a method for detecting complex man-made objects in images. The detection model is a Bayesian net that aggregates cliques of image regions which may cover a complex object. Observable attributes of the regions are derived from a rich symbolic image description containing points, lines and regions as basic features, including their relations. The model captures the dependency of the region aggregates on the features and their relations with respect to observability due to occlusions and perspective deformations. Cliques are classified using MAP estimation. Up to now, the model captures cliques with one, two and three regions, which is sufficient for detecting polyhedral objects. The model allows detecting and locating multiple appearances of object classes. The joint distribution of the Bayesian net is determined in a supervised learning step based on images with annotated regions. The method is realized and demonstrated for the detection of building roofs in aerial images.

    @TechReport{Schuster2005Detection,
    Title = {Detection of man-made-objects based on spatial aggregations},
    Author = {Schuster,Hanns-Florian},
    Institution = {Department of Photogrammetry, University of Bonn},
    Year = {2005},
    Number = {TR-IGG-P-2006-01},
    Abstract = {This paper presents a method for detecting complex man-made-objects in images. The detection model is a bayesian net that aggregates cliques of image regions which may cover a complex object. Observable attributes of the regions are derived from a rich symbolic image description containing points, lines and regions as basic features including their relations. The model captures the dependency of the region aggregates on the features and their relations with respect to observability due to occlusions and to perspective deformations. Cliques are classified using MAP estimation. Up to now, the model captures cliques with one, two and three regions which is sufficient for detecting polyhedral objects. The model allows to detect and locate multiple appearances of object classes. The joint distribution of the Bayesian net is determined in a supervised learning step based on images with annotated regions. The method is realized and demonstrated for the detection of building roofs in aerial images.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Schuster2005Detection.pdf}
    }
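
The MAP classification of region cliques can be sketched as picking the class that maximizes prior times likelihood of the observed attributes; the classes, attribute and probabilities below are invented for illustration, whereas the paper learns the joint distribution from annotated images:

```python
def map_classify(observation, priors, likelihoods):
    """MAP estimation: return the argmax over classes c of P(c) * P(observation | c)."""
    return max(priors, key=lambda c: priors[c] * likelihoods[c](observation))

# Invented classes, attribute and probabilities, for illustration only.
priors = {"roof": 0.3, "background": 0.7}
likelihoods = {
    "roof": lambda obs: 0.9 if obs["parallel_edges"] else 0.1,
    "background": lambda obs: 0.2,
}
label = map_classify({"parallel_edges": True}, priors, likelihoods)  # "roof"
```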

  • C. Stachniss and W. Burgard, “Mobile Robot Mapping and Localization in Non-Static Environments,” in Proceedings of the National Conference on Artificial Intelligence (AAAI) , Pittsburgh, PA, USA, 2005, pp. 1324-1329.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Stachniss2005,
    Title = {Mobile Robot Mapping and Localization in Non-Static Environments},
    Author = {Stachniss, C. and Burgard, W.},
    Booktitle = aaai,
    Year = {2005},
    Address = {Pittsburgh, PA, USA},
    Pages = {1324--1329},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss05aaai.pdf}
    }

  • C. Stachniss, G. Grisetti, and W. Burgard, “Information Gain-based Exploration Using Rao-Blackwellized Particle Filters,” in Proceedings of Robotics: Science and Systems (RSS) , Cambridge, MA, USA, 2005, pp. 65-72.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Stachniss2005a,
    Title = {Information Gain-based Exploration Using Rao-Blackwellized Particle Filters},
    Author = {Stachniss, C. and Grisetti, G. and Burgard, W.},
    Booktitle = RSS,
    Year = {2005},
    Address = {Cambridge, MA, USA},
    Pages = {65--72},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss05rss.pdf}
    }

  • C. Stachniss, G. Grisetti, and W. Burgard, “Recovering Particle Diversity in a Rao-Blackwellized Particle Filter for SLAM after Actively Closing Loops,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA), Barcelona, Spain, 2005, pp. 667-672.
    [BibTeX] [PDF]
    @InProceedings{Stachniss2005d,
    Title = {Recovering Particle Diversity in a Rao-Blackwellized Particle Filter for {SLAM} after Actively Closing Loops},
    Author = {Stachniss, C. and Grisetti, G. and Burgard, W.},
    Booktitle = ICRA,
    Year = {2005},
    Address = {Barcelona, Spain},
    Pages = {667--672},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss05icra.pdf}
    }

  • C. Stachniss, D. Hähnel, W. Burgard, and G. Grisetti, “On Actively Closing Loops in Grid-based FastSLAM,” Advanced Robotics, vol. 19, iss. 10, pp. 1059-1080, 2005.
    [BibTeX] [PDF]
    @Article{Stachniss2005c,
    Title = {On Actively Closing Loops in Grid-based {FastSLAM}},
    Author = {Stachniss, C. and H\"{a}hnel, D. and Burgard, W. and Grisetti, G.},
    Journal = advancedrobotics,
    Year = {2005},
    Number = {10},
    Pages = {1059--1080},
    Volume = {19},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss05ar.pdf}
    }

  • C. Stachniss, O. Martínez-Mozos, A. Rottmann, and W. Burgard, “Semantic Labeling of Places,” in Proceedings of the Int. Symposium of Robotics Research (ISRR), San Francisco, CA, USA, 2005.
    [BibTeX] [PDF]
    @InProceedings{Stachniss2005b,
    Title = {Semantic Labeling of Places},
    Author = {Stachniss, C. and Mart\'{i}nez-Mozos, O. and Rottmann, A. and Burgard, W.},
    Booktitle = isrr,
    Year = {2005},
    Address = {San Francisco, CA, USA},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss05isrr.pdf}
    }

  • P. Trahanias, W. Burgard, A. Argyros, D. Hähnel, H. Baltzakis, P. Pfaff, and C. Stachniss, “TOURBOT and WebFAIR: Web-Operated Mobile Robots for Tele-Presence in Populated Exhibitions,” IEEE Robotics & Automation Magazine, vol. 12, iss. 2, pp. 77-89, 2005.
    [BibTeX] [PDF]
    @Article{Trahanias2005,
    Title = {{TOURBOT} and {WebFAIR}: Web-Operated Mobile Robots for Tele-Presence in Populated Exhibitions},
    Author = {Trahanias, P. and Burgard, W. and Argyros, A. and H\"{a}hnel, D. and Baltzakis, H. and Pfaff, P. and Stachniss, C.},
    Journal = {IEEE Robotics \& Automation Magazine},
    Year = {2005},
    Number = {2},
    Pages = {77--89},
    Volume = {12},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://ieeexplore.ieee.org/iel5/100/31383/01458329.pdf?arnumber=1458329}
    }

  • B. Waske, V. Heinzel, M. Braun, and G. Menz, “Object-based speckle filtering using multisensoral remote sensing data,” in SPIE Remote Sensing Europe, 2005. doi:10.1117/12.626513
    [BibTeX]
    Speckle – appearing in SAR Images as random noise – hampers image processing techniques like segmentation and classification. Several algorithms have been developed to suppress the speckle effect. One disadvantage, even with optimized speckle reduction algorithms, is a blurring of the image. This effect, which appears especially along the edges of structures, is leading to further problems in subsequent image interpretation. To prevent a loss of information, the knowledge of structures in the image could be an advantage. Therefore the proposed methodology combines common filtering techniques with results from a segmentation of optical images for an object-based speckle filtering. The performance of the adapted algorithm is compared to those of common speckle filters. The accuracy assessment is based on statistical criteria and visual interpretation of the images. The results show that the efficiency of the speckle filter algorithm can be increased while a loss of information can be reduced using the boundary during the filtering process.

    @InProceedings{Waske2005Object,
    Title = {Object-based speckle filtering using multisensoral remote sensing data},
    Author = {Waske, Bj\"orn and Heinzel, Vanessa and Braun, Matthias and Menz, Gunter},
    Booktitle = {SPIE Remote Sensing Europe},
    Year = {2005},
    Abstract = {Speckle - appearing in SAR Images as random noise - hampers image processing techniques like segmentation and classification. Several algorithms have been developed to suppress the speckle effect. One disadvantage, even with optimized speckle reduction algorithms, is a blurring of the image. This effect, which appears especially along the edges of structures, is leading to further problems in subsequent image interpretation. To prevent a loss of information, the knowledge of structures in the image could be an advantage. Therefore the proposed methodology combines common filtering techniques with results from a segmentation of optical images for an object-based speckle filtering. The performance of the adapted algorithm is compared to those of common speckle filters. The accuracy assessment is based on statistical criteria and visual interpretation of the images. The results show that the efficiency of the speckle filter algorithm can be increased while a loss of information can be reduced using the boundary during the filtering process.},
    Doi = {10.1117/12.626513},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • T. Wellen, “Shadow Removal from Aerial Views for Realistic Terrain Rendering,” Diplomarbeit (Master thesis), Institute of Photogrammetry, University of Bonn, 2005.
    [BibTeX]
    @MastersThesis{Wellen2005Shadow,
    Title = {Shadow Removal from Aerial Views for Realistic Terrain Rendering},
    Author = {Wellen, Thomas},
    School = {Institute of Photogrammetry, University of Bonn In Zusammenarbeit mit dem Institut f\"ur Informatik der Universit\"at Bonn},
    Year = {2005},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Prof. Dr. Reinhard Klein},
    Type = {Diplomarbeit},
    Abstract = {[none]},
    City = {Bonn}
    }

2004

  • C. Beder, “Fast Statistically Geometric Reasoning About Uncertain Line Segments in 2D- and 3D-Space,” in Proceedings of the DAGM Symposium, Tübingen, 2004, pp. 375-382.
    [BibTeX] [PDF]
    This work addresses the two major drawbacks of current statistical uncertain geometric reasoning approaches. In the first part a framework is presented that allows representing uncertain line segments in 2D- and 3D-space and performing statistical tests with these practically very important types of entities. The second part addresses the issue of performance of geometric reasoning. A data structure is introduced that allows the efficient processing of large amounts of statistical tests involving geometric entities. The running times of this approach are finally evaluated experimentally.

    @InProceedings{Beder2004Fast,
    Title = {Fast Statistically Geometric Reasoning About Uncertain Line Segments in 2D- and 3D-Space},
    Author = {Beder, Christian},
    Booktitle = {Proceedings of the DAGM Symposium},
    Year = {2004},
    Address = {T\"ubingen},
    Editor = {C.E.Rasmussen and H.H.B\"ulthoff and B.Sch\"olkopf and M.A.Giese},
    Number = {3175},
    Organization = {DAGM},
    Pages = {375--382},
    Publisher = {Springer},
    Series = {LNCS},
    Abstract = {This work addresses the two major drawbacks of current statistical uncertain geometric reasoning approaches. In the first part a framework is presented that allows representing uncertain line segments in 2D- and 3D-space and performing statistical tests with these practically very important types of entities. The second part addresses the issue of performance of geometric reasoning. A data structure is introduced that allows the efficient processing of large amounts of statistical tests involving geometric entities. The running times of this approach are finally evaluated experimentally.},
    File = {beder04.fast.pdf:http\://www.ipb.uni-bonn.de/papers/2004/beder04.fast.pdf:PDF},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Beder2004Fast.pdf}
    }

  • C. Beder, “A Unified Framework for the Automatic Matching of Points and Lines in Multiple Oriented Images,” in Proc. 20th ISPRS Congress, Istanbul, Turkey, 2004, pp. 1109-1113.
    [BibTeX] [PDF]
    The accurate reconstruction of the three-dimensional structure from multiple images is still a challenging problem, so that most current approaches are based on semi-automatic procedures. Therefore the introduction of accurate and reliable automation for this classical problem is one of the key goals of photogrammetric research. This work deals with the problem of matching points and lines across multiple views, in order to gain a highly accurate reconstruction of the depicted object in three-dimensional space. In order to achieve this goal, a novel framework is introduced, that draws a sharp boundary between feature extraction, feature matching based on geometric constraints and feature matching based on radiometric constraints. The isolation of these three parts allows direct control and therefore better understanding of the different kinds of influences on the results. Most image feature matching approaches heavily depend on the radiometric properties of the features and only incorporate geometry information to improve performance and stability. The extracted radiometric descriptors of the features often assume a local planar or smooth object, which is by definition neither present at object corners nor edges. Therefore it would be desirable to use only descriptors that are rigorously founded for the given object model. Unfortunately the task of feature matching based on radiometric properties becomes extremely difficult for these much weaker descriptors. Hence a key feature of the presented framework is the consistent and rigorous use of statistical properties of the extracted geometric entities in the matching process, allowing a unified algorithm for matching points and lines in multiple views using solely the geometric properties of the extracted features. The results are stabilized by the use of many images to compensate for the lack of radiometric information. 
Radiometric descriptors may be consistently included into the framework for stabilization as well. Results from the application of the presented framework to the task of fully automatic reconstruction of points and lines from multiple images are shown.

    @InProceedings{Beder2004Unified,
    Title = {A Unified Framework for the Automatic Matching of Points and Lines in Multiple Oriented Images},
    Author = {Beder, Christian},
    Booktitle = {Proc. 20th ISPRS Congress},
    Year = {2004},
    Address = {Istanbul, Turkey},
    Organization = {ISPRS},
    Pages = {1109--1113},
    Abstract = {The accurate reconstruction of the three-dimensional structure from multiple images is still a challenging problem, so that most current approaches are based on semi-automatic procedures. Therefore the introduction of accurate and reliable automation for this classical problem is one of the key goals of photogrammetric research. This work deals with the problem of matching points and lines across multiple views, in order to gain a highly accurate reconstruction of the depicted object in three-dimensional space. In order to achieve this goal, a novel framework is introduced, that draws a sharp boundary between feature extraction, feature matching based on geometric constraints and feature matching based on radiometric constraints. The isolation of these three parts allows direct control and therefore better understanding of the different kinds of influences on the results. Most image feature matching approaches heavily depend on the radiometric properties of the features and only incorporate geometry information to improve performance and stability. The extracted radiometric descriptors of the features often assume a local planar or smooth object, which is by definition neither present at object corners nor edges. Therefore it would be desirable to use only descriptors that are rigorously founded for the given object model. Unfortunately the task of feature matching based on radiometric properties becomes extremely difficult for these much weaker descriptors. Hence a key feature of the presented framework is the consistent and rigorous use of statistical properties of the extracted geometric entities in the matching process, allowing a unified algorithm for matching points and lines in multiple views using solely the geometric properties of the extracted features. The results are stabilized by the use of many images to compensate for the lack of radiometric information. 
Radiometric descriptors may be consistently included into the framework for stabilization as well. Results from the application of the presented framework to the task of fully automatic reconstruction of points and lines from multiple images are shown.},
    File = {beder04.unified.pdf:http\://www.ipb.uni-bonn.de/papers/2004/beder04.unified.pdf:PDF},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Beder2004Unified.pdf}
    }

  • T. Dickscheid, “Automatische Referenzpunktverfeinerung in Panoramabildern mittels SIFT-Operator,” Bachelor thesis, 2004.
    [BibTeX]
    @MastersThesis{Dickscheid2004Automatische,
    Title = {Automatische Referenzpunktverfeinerung in Panoramabildern mittels SIFT-Operator},
    Author = {Dickscheid, Timo},
    Year = {2004},
    Note = {Betreuung: Dipl.-Inf. Detlev Droege},
    Type = {Bachelor Thesis},
    Abstract = {[none]},
    City = {Bonn}
    }

  • W. Förstner, “Uncertainty and Projective Geometry,” in Handbook of Computational Geometry for Pattern Recognition, Computer Vision, Neurocomputing and Robotics, E. Bayro-Corrochano, Ed., Springer, 2004, pp. 493-535. doi:10.1007/3-540-28247-5_15
    [BibTeX] [PDF]
    Geometric reasoning in Computer Vision always is performed under uncertainty. The great potential of both, projective geometry and statistics, can be integrated easily for propagating uncertainty through reasoning chains, for making decisions on uncertain spatial relations and for optimally estimating geometric entities or transformations. This is achieved by (1) exploiting the potential of statistical estimation and testing theory and by (2) choosing a representation of projective entities and relations which supports this integration. The redundancy of the representation of geometric entities with homogeneous vectors and matrices requires a discussion on the equivalence of uncertain projective entities. The multi-linearity of the geometric relations leads to simple expressions also in the presence of uncertainty. The non-linearity of the geometric relations finally requires to analyze the degree of approximation as a function of the noise level and of the embedding of the vectors in projective spaces. The paper discusses a basic link of statistics and projective geometry, based on a carefully chosen representation, and collects the basic relations in 2D and 3D and for single view geometry.

    @InCollection{Forstner2004Uncertainty,
    Title = {Uncertainty and Projective Geometry},
    Author = {F\"orstner, Wolfgang},
    Booktitle = {Handbook of Computational Geometry for Pattern Recognition, Computer Vision, Neurocomputing and Robotics},
    Publisher = {Springer},
    Year = {2004},
    Editor = {E. Bayro-Corrochano},
    Pages = {493--535},
    Abstract = {Geometric reasoning in Computer Vision always is performed under uncertainty. The great potential of both, projective geometry and statistics, can be integrated easily for propagating uncertainty through reasoning chains, for making decisions on uncertain spatial relations and for optimally estimating geometric entities or transformations. This is achieved by (1) exploiting the potential of statistical estimation and testing theory and by (2) choosing a representation of projective entities and relations which supports this integration. The redundancy of the representation of geometric entities with homogeneous vectors and matrices requires a discussion on the equivalence of uncertain projective entities. The multi-linearity of the geometric relations leads to simple expressions also in the presence of uncertainty. The non-linearity of the geometric relations finally requires to analyze the degree of approximation as a function of the noise level and of the embedding of the vectors in projective spaces. The paper discusses a basic link of statistics and projective geometry, based on a carefully chosen representation, and collects the basic relations in 2D and 3D and for single view geometry.},
    Doi = {10.1007/3-540-28247-5_15},
    Optpages = {to appear},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Forstner2004Uncertainty.pdf}
    }

  • W. Förstner, “Projective Geometry for Photogrammetric Orientation Procedures II,” in Proc. 20th ISPRS Congress, Istanbul, Turkey, 2004.
    [BibTeX] [PDF]
    @InProceedings{Forstner2004Projective,
    Title = {Projective Geometry for Photogrammetric Orientation Procedures II},
    Author = {F\"orstner, Wolfgang},
    Booktitle = {Proc. 20th ISPRS Congress},
    Year = {2004},
    Address = {Istanbul, Turkey},
    Abstract = {[none]},
    City = {Bonn},
    Proceeding = {Tutorial notes from the tutorial held at the ISPRS Congress Istanbul},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Forstner2004Projectivea.pdf}
    }

  • W. Förstner, “Projective Geometry for Photogrammetric Orientation Procedures I,” in Tutorial notes from the tutorial held at the ISPRS Congress, Istanbul, Turkey, 2004.
    [BibTeX] [PDF]
    @InProceedings{Forstner2004Projectivea,
    Title = {Projective Geometry for Photogrammetric Orientation Procedures I},
    Author = {F\"orstner, Wolfgang},
    Booktitle = {Tutorial notes from the tutorial held at the ISPRS Congress},
    Year = {2004},
    Address = {Istanbul, Turkey},
    Abstract = {[none]},
    City = {Bonn},
    Proceeding = {Tutorial notes from the tutorial held at the ISPRS Congress Istanbul},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Forstner2004Projective.pdf}
    }

  • T. Läbe and W. Förstner, “Geometric Stability of Low-Cost Digital Consumer Cameras,” in Proc. 20th ISPRS Congress, Istanbul, Turkey, 2004, pp. 528-535.
    [BibTeX] [PDF]
    During the last years the number of available low-cost digital consumer cameras has significantly increased while their prices decrease. Therefore for many applications with no high-end accuracy requirements it is an important consideration whether to use low-cost cameras. This paper investigates the use of consumer cameras for photogrammetric measurements and vision systems. An important aspect of the suitability of these cameras is their geometric stability. Two aspects should be considered: The change of calibration parameters when using the camera’s features such as zoom or auto focus and the time invariance of the calibration parameters. Therefore laboratory calibrations of different cameras have been carried out at different times. The resulting calibration parameters, especially the principal distance and the principal point, and their accuracies are given. The usefulness of the information given in the image header, especially the focal length, is compared to the results of the calibration.

    @InProceedings{Labe2004Geometric,
    Title = {Geometric Stability of Low-Cost Digital Consumer Cameras},
    Author = {L\"abe, Thomas and F\"orstner, Wolfgang},
    Booktitle = {Proc. 20th ISPRS Congress},
    Year = {2004},
    Address = {Istanbul, Turkey},
    Pages = {528--535},
    Abstract = {During the last years the number of available low-cost digital consumer cameras has significantly increased while their prices decrease. Therefore for many applications with no high-end accuracy requirements it is an important consideration whether to use low-cost cameras. This paper investigates the use of consumer cameras for photogrammetric measurements and vision systems. An important aspect of the suitability of these cameras is their geometric stability. Two aspects should be considered: The change of calibration parameters when using the camera's features such as zoom or auto focus and the time invariance of the calibration parameters. Therefore laboratory calibrations of different cameras have been carried out at different times. The resulting calibration parameters, especially the principal distance and the principal point, and their accuracies are given. The usefulness of the information given in the image header, especially the focal length, is compared to the results of the calibration.},
    City = {Bonn},
    Proceeding = {Proc. of XXth ISPRS Congress 2004},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Labe2004Geometric.pdf}
    }

  • A. Löw, B. Waske, R. Ludwig, and W. Mauser, “Derivation of near surface soil moisture patterns from multiscale Envisat ASAR data,” in EGU General Assembly, Geophysical Research Abstracts, 2004. doi:10.1109/IGARSS.2005.1526194
    [BibTeX]
    Water and energy fluxes at the interface between the land surface and atmosphere are strongly depending on the surface soil moisture content which is highly variable in space and time. It has been shown in numerous studies that microwave remote sensing can provide spatially distributed patterns of surface soil moisture. New sensor generations as ENVISAT ASAR or RADARSAT allow for image acquisitions in different imaging modes and geometries. Imaging modes with large area coverage capabilities as the wide swath mode of ENVISAT ASAR are of special interest for practical applications in this context. The paper presents a semiempirical soil moisture inversion scheme for ENVISAT ASAR data. Different land cover types as well as mixed image pixels are taken into account in the soil moisture retrieval process. The inversion results are validated against in situ measurements and a sensitivity analysis of the model is conducted.

    @InProceedings{Low2004Derivation,
    Title = {Derivation of near surface soil moisture patterns from multiscale Envisat ASAR data},
    Author = {L\"ow, A. and Waske, Bj\"orn and Ludwig, R. and Mauser, W.},
    Booktitle = {EGU General Assembly, Geophysical Research Abstracts},
    Year = {2004},
    Abstract = {Water and energy fluxes at the interface between the land surface and atmosphere are strongly depending on the surface soil moisture content which is highly variable in space and time. It has been shown in numerous studies that microwave remote sensing can provide spatially distributed patterns of surface soil moisture. New sensor generations as ENVISAT ASAR or RADARSAT allow for image acquisitions in different imaging modes and geometries. Imaging modes with large area coverage capabilities as the wide swath mode of ENVISAT ASAR are of special interest for practical applications in this context. The paper presents a semiempirical soil moisture inversion scheme for ENVISAT ASAR data. Different land cover types as well as mixed image pixels are taken into account in the soil moisture retrieval process. The inversion results are validated against in situ measurements and a sensitivity analysis of the model is conducted.},
    Doi = {10.1109/IGARSS.2005.1526194},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • A. Löw, B. Waske, R. Ludwig, and W. Mauser, “Derivation of near surface soil moisture patterns from ENVISAT ASAR Wide Swath data,” in 4th International Symposium on Retrieval of Bio- and Geophysical parameters from SAR data for land Applications, 2004.
    [BibTeX]
    @InProceedings{Low2004Derivationa,
    Title = {Derivation of near surface soil moisture patterns from ENVISAT ASAR Wide Swath data},
    Author = {L\"ow, A. and Waske, Bj\"orn and Ludwig, R. and Mauser, W.},
    Booktitle = {4th International Symposium on Retrieval of Bio- and Geophysical parameters from SAR data for land Applications},
    Year = {2004},
    Abstract = {[none]},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • A. Löw, B. Waske, R. Ludwig, and W. Mauser, “Derivation of hydrological parameters from ENVISAT ASAR wide swath data,” in IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2004. doi:10.1109/IGARSS.2004.1370469
    [BibTeX]
    Spatially distributed information about the current state of the land surface can be obtained from remote sensing measurements. These may be used with great benefit for the understanding of hydrological processes on the landscape level, where in situ measurements must fail due to lacking spatial coverage. The potential to quantify soil moisture conditions of the top soil layer, as well as the derivation of snow parameters by means of active microwave imagery has been successfully demonstrated in numerous studies. In contrast to earlier and rather experimental research efforts, data acquired from the ENVISAT ASAR sensor firstly enables to continuously monitor large areas with high temporal frequency and high spatial resolution. The different operation modes of ASAR allow the derivation of soil moisture maps on both, the field and the regional scale. The paper presents new methods to derive soil moisture and snow covered area information from ASAR wide swath (WSM) datasets. The presented approaches allocate a robust, yet practicable and reliable technique to derive near-surface soil moisture and snow patterns, being the key prerequisite for an operational application in hydrologic modelling.

    @InProceedings{Low2004Derivationb,
    Title = {Derivation of hydrological parameters from ENVISAT ASAR wide swath data},
    Author = {L\"ow, A. and Waske, Bj\"orn and Ludwig, R. and Mauser, W.},
    Booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
    Year = {2004},
    Abstract = {Spatially distributed information about the current state of the land surface can be obtained from remote sensing measurements. These may be used with great benefit for the understanding of hydrological processes on the landscape level, where in situ measurements must fail due to lacking spatial coverage. The potential to quantify soil moisture conditions of the top soil layer, as well as the derivation of snow parameters by means of active microwave imagery has been successfully demonstrated in numerous studies. In contrast to earlier and rather experimental research efforts, data acquired from the ENVISAT ASAR sensor firstly enables to continuously monitor large areas with high temporal frequency and high spatial resolution. The different operation modes of ASAR allow the derivation of soil moisture maps on both, the field and the regional scale. The paper presents new methods to derive soil moisture and snow covered area information from ASAR wide swath (WSM) datasets. The presented approaches allocate a robust, yet practicable and reliable technique to derive near-surface soil moisture and snow patterns, being the key prerequisite for an operational application in hydrologic modelling.},
    Doi = {10.1109/IGARSS.2004.1370469},
    Keywords = {ENVISAT ASAR; WSM; active microwave imagery; hydrological process; parameter inversion; remote sensing; snow covered area; soil moisture; synthetic aperture radar; wide swath data; data acquisition; hydrological techniques; microwave imaging; microwave measurement; remote sensing by radar; snow; soil; synthetic aperture radar;},
    Owner = {waske},
    Timestamp = {2012.09.05}
    }

  • M. Luxen, “Performance Evaluation in Natural and Controlled Environments applied to Feature Extraction Procedures,” in Proc. 20th ISPRS Congress, Istanbul, Turkey, 2004, pp. 1061-1067.
    [BibTeX] [PDF]
    The paper highlights approaches to reference data acquisition in real environments for the purpose of performance evaluation of image analysis procedures. Reference data for the input and for the output of an algorithm is obtained by a) exploiting the noise characteristics of Gaussian image pyramids and b) exploiting multiple views. The approaches are employed exemplarily in the context of evaluating low level feature extraction algorithms.

    @InProceedings{Luxen2004Performance,
    Title = {Performance Evaluation in Natural and Controlled Environments applied to Feature Extraction Procedures},
    Author = {Luxen, Marc},
    Booktitle = {Proc. 20th ISPRS Congress, Istanbul, Turkey},
    Year = {2004},
    Address = {Istanbul, Turkey},
    Editor = {M. Orhan ALTAN},
    Number = {B3},
    Organization = {ISPRS},
    Pages = {1061--1067},
    Series = {{The International Archives of The Photogrammetry, Remote Sensing and Spatial Information Sciences}},
    Volume = {XXXV, Part B3},
    Abstract = {The paper highlights approaches to reference data acquisition in real environments for the purpose of performance evaluation of image analysis procedures. Reference data for the input and for the output of an algorithm is obtained by a) exploiting the noise characteristics of Gaussian image pyramids and b) exploiting multiple views. The approaches are employed exemplarily in the context of evaluating low level feature extraction algorithms.},
    File = {luxen04.performance.pdf:http\://www.ipb.uni-bonn.de/papers/2004/luxen04.performance.pdf:PDF},
    Postscript = {http://www.ipb.uni-bonn.de/papers/2004/luxen04.performance.ps.gz},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Luxen2004Performance.pdf}
    }

  • J. Meidow, “Calibration of Stationary Cameras by Observing Objects of Equal Heights on a Ground Plane,” in Proc. 20th ISPRS Congress, Istanbul, Turkey, 2004, pp. 1067-1072.
    [BibTeX] [PDF]
    With the increasing number of cameras the need for plug-and-play calibration procedures arises to realize a subsequent automatic geometric evaluation of observed scenes. An easy calibration procedure is proposed for a non-zooming stationary camera observing objects of initially equal and known heights above a ground plane. The image coordinates of the corresponding foot and head points of these objects serve as observations. For the interior and exterior orientation of the camera a minimal parametrization is introduced with the height of the camera above the ground plane, its pitch and roll angle and the principal distance. With the idea of corresponding foot and head trajectories being homologous, the situation can be reformulated with a virtual second camera observing the scene. Therefore a plane induced homography can be established for the observation model. This special planar homology can be parametrized with the unknown calibration quantities. Initially the calibration is estimated by observing foot and head points of objects with known heights. In the subsequent evaluation phase the height and positions of unknown objects can be determined. With the same procedure the calibration can be checked and updated if needed. The approach is evaluated with a real scene.

    @InProceedings{Meidow2004Calibration,
    Title = {Calibration of Stationary Cameras by Observing Objects of Equal Heights on a Ground Plane},
    Author = {Meidow, Jochen},
    Booktitle = {Proc. 20th ISPRS Congress, Istanbul, Turkey},
    Year = {2004},
    Address = {Istanbul, Turkey},
    Organization = {ISPRS},
    Pages = {1067--1072},
    Abstract = {With the increasing number of cameras the need for plug-and-play calibration procedures arises to realize a subsequent automatic geometric evaluation of observed scenes. An easy calibration procedure is proposed for a non-zooming stationary camera observing objects of initially equal and known heights above a ground plane. The image coordinates of the corresponding foot and head points of these objects serve as observations. For the interior and exterior orientation of the camera a minimal parametrization is introduced with the height of the camera above the ground plane, its pitch and roll angle and the principal distance. With the idea of corresponding foot and head trajectories being homologous, the situation can be reformulated with a virtual second camera observing the scene. Therefore a plane induced homography can be established for the observation model. This special planar homology can be parametrized with the unknown calibration quantities. Initially the calibration is estimated by observing foot and head points of objects with known heights. In the subsequent evaluation phase the height and positions of unknown objects can be determined. With the same procedure the calibration can be checked and updated if needed. The approach is evaluated with a real scene.},
    File = {meidow04.calibration.pdf:http\://www.ipb.uni-bonn.de/papers/2004/meidow04.calibration.pdf:PDF},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Meidow2004Calibration.pdf}
    }

  • H. Schuster, “Segmentation Of LIDAR Data Using The Tensor Voting Framework,” in Proc. 20th ISPRS Congress, Istanbul, Turkey , Istanbul, Turkey, 2004, pp. 1073-1078.
    [BibTeX] [PDF]
    We present an investigation on the use of Tensor Voting for categorizing LIDAR data into outliers, line elements (e.g. high-voltage power lines), surface patches (e.g. roofs) and volumetric elements (e.g. vegetation). The reconstruction of man-made objects is a main task of photogrammetry. With the increasing quality and availability of LIDAR sensors, range data is becoming more and more important. With LIDAR sensors it is possible to quickly acquire huge amounts of data. But in contrast to classical systems, where the measurement points are chosen by an operator, the data points do not explicitly correspond to meaningful points of the object, i.e. edges, corners, junctions. To extract these features it is necessary to segment the data into homogeneous regions which can be processed afterwards. Our approach consists of a two-step segmentation. The first step uses the Tensor Voting algorithm. It encodes every data point as a particle which sends out a vector field. This can be used to categorize the pointness, edgeness and surfaceness of the data points. After the categorization of the given LIDAR data points, the regions between the data points are also rated. Meaningful regions like edges and junctions, given by the inherent structure of the data, are extracted. In a second step the so-labeled points are merged subject to a similarity constraint. This similarity constraint is based on a minimum description length principle, encoding and comparing different geometrical models. The output of this segmentation consists of non-overlapping geometric objects in three-dimensional space. The approach is evaluated with some examples of LIDAR data.

    @InProceedings{Schuster2004Segmentation,
    Title = {Segmentation Of LIDAR Data Using The Tensor Voting Framework},
    Author = {Schuster, Hanns-Florian},
    Booktitle = {Proc. 20th ISPRS Congress, Istanbul, Turkey},
    Year = {2004},
    Address = {Istanbul, Turkey},
    Organization = {ISPRS},
    Pages = {1073--1078},
    Abstract = {We present an investigation on the use of Tensor Voting for categorizing LIDAR data into outliers, line elements (e.g. high-voltage power lines), surface patches (e.g. roofs) and volumetric elements (e.g. vegetation). The reconstruction of man-made objects is a main task of photogrammetry. With the increasing quality and availability of LIDAR sensors, range data is becoming more and more important. With LIDAR sensors it is possible to quickly acquire huge amounts of data. But in contrast to classical systems, where the measurement points are chosen by an operator, the data points do not explicitly correspond to meaningful points of the object, i.e. edges, corners, junctions. To extract these features it is necessary to segment the data into homogeneous regions which can be processed afterwards. Our approach consists of a two-step segmentation. The first step uses the Tensor Voting algorithm. It encodes every data point as a particle which sends out a vector field. This can be used to categorize the pointness, edgeness and surfaceness of the data points. After the categorization of the given LIDAR data points, the regions between the data points are also rated. Meaningful regions like edges and junctions, given by the inherent structure of the data, are extracted. In a second step the so-labeled points are merged subject to a similarity constraint. This similarity constraint is based on a minimum description length principle, encoding and comparing different geometrical models. The output of this segmentation consists of non-overlapping geometric objects in three-dimensional space. The approach is evaluated with some examples of LIDAR data.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Schuster2004Segmentation.pdf}
    }

  • C. Stachniss, G. Grisetti, D. Hähnel, and W. Burgard, “Improved Rao-Blackwellized Mapping by Adaptive Sampling and Active Loop-Closure,” in Proceedings of the Workshop on Self-Organization of AdaptiVE behavior (SOAVE) , Ilmenau, Germany, 2004, pp. 1-15.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Stachniss2004a,
    Title = {Improved Rao-Blackwellized Mapping by Adaptive Sampling and Active Loop-Closure},
    Author = {Stachniss, C. and Grisetti, G. and H\"{a}hnel, D. and Burgard, W.},
    Booktitle = SOAVE,
    Year = {2004},
    Address = {Ilmenau, Germany},
    Pages = {1--15},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss04soave.pdf}
    }

  • C. Stachniss, D. Hähnel, and W. Burgard, “Exploration with Active Loop-Closing for FastSLAM,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , Sendai, Japan, 2004, pp. 1505-1510.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Stachniss2004,
    Title = {Exploration with Active Loop-Closing for {FastSLAM}},
    Author = {Stachniss, C. and H\"{a}hnel, D. and Burgard, W.},
    Booktitle = IROS,
    Year = {2004},
    Address = {Sendai, Japan},
    Pages = {1505--1510},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss04iros.pdf}
    }

  • M. Thöle, “Evaluierung verschiedener Ansätze zur Schätzung und Repräsentation unsicherer Geraden im Raum,” Diplomarbeit Master Thesis, 2004.
    [BibTeX]
    [none]
    @MastersThesis{Thole2004Evaluierung,
    Title = {Evaluierung verschiedener Ans\"atze zur Sch\"atzung und Repr\"asentation unsicherer Geraden im Raum},
    Author = {Th\"ole, Markus},
    School = {Institute of Photogrammetry, University of Bonn},
    Year = {2004},
    Note = {Betreuung: Prof. Dr.-Ing. Wolfgang F\"orstner, Dipl.-Ing. Marc Luxen},
    Type = {Diplomarbeit},
    Abstract = {[none]},
    City = {Bonn}
    }

2003

  • M. Appel and U. Weidner, “A New Approach Towards Quantitative Quality Evaluation of 3D Building Models,” in ISPRS Commission IV Joint Workshop Challenges in Geospatial Analysis, Integration and Visualization II , Stuttgart, 2003.
    [BibTeX] [PDF]
    The need for describing the quality of data ranges from data acquisition to the use of the data in geoinformation systems. The contractor should verify that the data he captured meets the specifications, and the end user wants to know if the data is suited for a specific task at hand. Both are interested in quantifying the quality, possibly by simple and meaningful measures, which can be easily computed without efforts prohibitive with respect to the involved labour and related costs. Much work has already been done on the standardization of principles of quality evaluation, reports and metadata (cf. ISO standards 19113, 19114 and 19115), but only few contributions deal with the question of defining quality measures for a specific application, which possibly may be generalized for others as well. A recent project in cooperation with the Surveying Office of North Rhine-Westphalia investigates the quality evaluation of photogrammetrically captured building models, with the aim of identifying useful quality measures which can be used for contract specifications and of implementing an approach for automated quality control based on a comparison of measurement and reference data. This paper presents the concept of the approach and first results.

    @InProceedings{Appel2003New,
    Title = {A New Approach Towards Quantitative Quality Evaluation of 3D Building Models},
    Author = {Appel, Mirko and Weidner, Uwe},
    Booktitle = {ISPRS Commission IV Joint Workshop Challenges in Geospatial Analysis, Integration and Visualization II},
    Year = {2003},
    Address = {Stuttgart},
    Abstract = {The need for describing the quality of data ranges from data acquisition to the use of the data in geoinformation systems. The contractor should verify that the data he captured meets the specifications, and the end user wants to know if the data is suited for a specific task at hand. Both are interested in quantifying the quality, possibly by simple and meaningful measures, which can be easily computed without efforts prohibitive with respect to the involved labour and related costs. Much work has already been done on the standardization of principles of quality evaluation, reports and metadata (cf. ISO standards 19113, 19114 and 19115), but only few contributions deal with the question of defining quality measures for a specific application, which possibly may be generalized for others as well. A recent project in cooperation with the Surveying Office of North Rhine-Westphalia investigates the quality evaluation of photogrammetrically captured building models, with the aim of identifying useful quality measures which can be used for contract specifications and of implementing an approach for automated quality control based on a comparison of measurement and reference data. This paper presents the concept of the approach and first results.},
    City = {Bonn},
    Proceeding = {ISPRS},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Appel2003New.pdf}
    }

  • W. Förstner, “Notions of Scale in Geosciences,” in Dynamics of Multi-Scale Earth Systems , 2003, pp. 17-39. doi:10.1007/3-540-45256-7_2
    [BibTeX] [PDF]
    The paper discusses the notion of scale within the geosciences. The high complexity of the developed models and the wide range of participating disciplines go along with different notions of scale used during data acquisition and model building. The paper collects the different notions of scale and shows the close relations between them: map scale, resolution, window size, average wavelength, level of aggregation, level of abstraction. Finally, the problem of identifying scale in models is discussed. A synopsis of the continuous measures for scale links the different notions.

    @InProceedings{Forstner2003Notions,
    Title = {Notions of Scale in Geosciences},
    Author = {F\"orstner, Wolfgang},
    Booktitle = {Dynamics of Multi-Scale Earth Systems},
    Year = {2003},
    Editor = {Neugebauer, Horst J. and Simmer, Clemens},
    Pages = {17--39},
    Abstract = {The paper discusses the notion of scale within the geosciences. The high complexity of the developed models and the wide range of participating disciplines go along with different notions of scale used during data acquisition and model building. The paper collects the different notions of scale and shows the close relations between them: map scale, resolution, window size, average wavelength, level of aggregation, level of abstraction. Finally, the problem of identifying scale in models is discussed. A synopsis of the continuous measures for scale links the different notions.},
    City = {Bonn},
    Doi = {10.1007/3-540-45256-7_2},
    Proceeding = {Dynamics of Multi-Scale Earth Systems},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Forstner2003Notions.pdf}
    }

  • W. Förstner and T. Läbe, “Learning Optimal Parameters for Self-diagnosis in a System for Automatic Exterior Orientation,” in Vision Systems (ICVS) 2003 , Graz, 2003, pp. 236-246. doi:10.1007/3-540-36592-3_23
    [BibTeX] [PDF]
    The paper describes the automatic learning of parameters for self-diagnosis of a system for automatic orientation of single aerial images used by the State Survey Department of North Rhine-Westphalia. The orientation is based on 3D lines as ground control features, and uses a sequence of probabilistic clustering, search and ML-estimation for robustly estimating the 6 parameters of the exterior orientation of an aerial image. The system is interpreted as a classifier, making an internal evaluation of its success. The classification is based on a number of parameters possibly relevant for self-diagnosis. A hand-designed classifier reached 11% false negatives and 2% false positives on approx. 17000 images. A first version of a new classifier using support vector machines is evaluated. Based on approx. 650 images the classifier reaches 2% false negatives and 4% false positives, indicating an increase in performance.

    @InProceedings{Forstner2003Learning,
    Title = {Learning Optimal Parameters for Self-diagnosis in a System for Automatic Exterior Orientation},
    Author = {F\"orstner, Wolfgang and L\"abe, Thomas},
    Booktitle = {Vision Systems (ICVS) 2003},
    Year = {2003},
    Address = {Graz},
    Editor = {Crowley, James L. and Piater, Justus H. and Vincze, M. and Paletta, L.},
    Pages = {236--246},
    Abstract = {The paper describes the automatic learning of parameters for self-diagnosis of a system for automatic orientation of single aerial images used by the State Survey Department of North Rhine-Westphalia. The orientation is based on 3D lines as ground control features, and uses a sequence of probabilistic clustering, search and ML-estimation for robustly estimating the 6 parameters of the exterior orientation of an aerial image. The system is interpreted as a classifier, making an internal evaluation of its success. The classification is based on a number of parameters possibly relevant for self-diagnosis. A hand-designed classifier reached 11% false negatives and 2% false positives on approx. 17000 images. A first version of a new classifier using support vector machines is evaluated. Based on approx. 650 images the classifier reaches 2% false negatives and 4% false positives, indicating an increase in performance.},
    City = {Bonn},
    Doi = {10.1007/3-540-36592-3_23},
    Proceeding = {Computer Vision Systems (ICVS) 2003},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Forstner2003Learning.pdf}
    }

  • M. Luxen, “Variance Component Estimation in Performance Characteristics Applied to Feature Extraction Procedures,” in Pattern Recognition, 25th DAGM Symposium , Magdeburg, Germany, 2003, pp. 498-506. doi:10.1007/978-3-540-45243-0_64
    [BibTeX] [PDF]
    The paper proposes variance component estimation (VCE) for empirical quality evaluation in computer vision. An outline is given for the scope of variance component estimation in the context of quality evaluation. The principle of variance component estimation is explained and the approach is applied to results of low level feature extraction. Ground truth is only partly needed for estimating the precision, accuracy and bias of extracted points and straight line segments. The results of diverse feature extraction modules are compared.

    @InProceedings{Luxen2003Variance,
    Title = {Variance Component Estimation in Performance Characteristics Applied to Feature Extraction Procedures},
    Author = {Luxen, Marc},
    Booktitle = {Pattern Recognition, 25th DAGM Symposium},
    Year = {2003},
    Address = {Magdeburg, Germany},
    Editor = {Bernd Michaelis and Gerald Krell},
    Month = sep,
    Pages = {498--506},
    Publisher = {Springer},
    Series = {Lecture Notes in Computer Science},
    Volume = {2781},
    Abstract = {The paper proposes variance component estimation (VCE) for empirical quality evaluation in computer vision. An outline is given for the scope of variance component estimation in the context of quality evaluation. The principle of variance component estimation is explained and the approach is applied to results of low level feature extraction. Ground truth is only partly needed for estimating the precision, accuracy and bias of extracted points and straight line segments. The results of diverse feature extraction modules are compared.},
    Bibsource = {DBLP, http://dblp.uni-trier.de},
    Doi = {10.1007/978-3-540-45243-0_64},
    ISBN = {3-540-40861-4},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Luxen2003Variance.pdf}
    }

  • M. Luxen and A. Brunn, “Parameterschätzung aus unvollständigen Beobachtungsdaten mittels des EM-Algorithmus,” Zeitschrift für Geodäsie, Geoinformation und Landmanagement (ZfV), iss. 02, pp. 71-79, 2003.
    [BibTeX] [PDF]
    The paper gives an introduction into the problem of parameter estimation from incomplete data and presents the Expectation Maximization Algorithm as a method for solving such problems. The algorithm is put in relation to geodetic estimation problems. Its practicability is shown by an example of line extraction from digital images.

    @Article{Luxen2003Parameterschatzung,
    Title = {Parametersch\"atzung aus unvollst\"andigen Beobachtungsdaten mittels des EM-Algorithmus},
    Author = {Luxen, Marc and Brunn, Ansgar},
    Journal = {Zeitschrift f\"ur Geod\"asie, Geoinformation und Landmanagement (ZfV)},
    Year = {2003},
    Number = {02},
    Pages = {71--79},
    Abstract = {The paper gives an introduction into the problem of parameter estimation from incomplete data and presents the Expectation Maximization Algorithm as a method for solving such problems. The algorithm is put in relation to geodetic estimation problems. Its practicability is shown by an example of line extraction from digital images.},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Luxen2003Parameterschatzung.pdf}
    }

  • H. Schuster and W. Förstner, “Segmentierung, Rekonstruktion und Datenfusion bei der Objekterfassung mit Entfernungsdaten – ein Überblick,” in Proceedings 2. Oldenburger 3D-Tage , Oldenburg, 2003.
    [BibTeX] [PDF]
    With the advent of area-based range data in surveying, a paradigm shift in the evaluation and processing of these data is imminent, comparable to the transition from analytical to digital photogrammetry that came with the availability of digital or digitized images. This contribution gives an overview of methods for the fusion and segmentation of range data and highlights the potential for further automation.

    @InProceedings{Schuster2003Segmentierung,
    Title = {Segmentierung, Rekonstruktion und Datenfusion bei der Objekterfassung mit Entfernungsdaten - ein \"Uberblick},
    Author = {Schuster, Hanns-Florian and F\"orstner, Wolfgang},
    Booktitle = {Proceedings 2. Oldenburger 3D-Tage},
    Year = {2003},
    Address = {Oldenburg},
    Abstract = {Mit dem Aufkommen von fl\"achig erfa{\ss}ten Entfernungsdaten im Vermessungswesen steht ein Paradigmenwechsel in der Auswertung und Verarbeitung dieser Daten an, vergleichbar dem \"Ubergang von der analytischen zur digitalen Photogrammetrie mit der Verf\"ugbarkeit digitaler bzw. digitalisierter Bilder. Der vorliegende Beitrag gibt einen \"Uberblick \"uber Verfahren zur Fusion und Segmentierung von Entfernungsdaten und verdeutlicht Potentiale zur weiteren Automatisierung},
    City = {Bonn},
    Proceeding = {Proceedings 2. Oldenburger 3D-Tage},
    Url = {http://www.ipb.uni-bonn.de/pdfs/Schuster2003Segmentierung.pdf}
    }

  • C. Stachniss and W. Burgard, “Exploring Unknown Environments with Mobile Robots using Coverage Maps,” in Proceedings of the Int. Joint Conf. on Artificial Intelligence (IJCAI) , Acapulco, Mexico, 2003, pp. 1127-1132.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Stachniss2003,
    Title = {Exploring Unknown Environments with Mobile Robots using Coverage Maps},
    Author = {Stachniss, C. and Burgard, W.},
    Booktitle = IJCAI,
    Year = {2003},
    Address = {Acapulco, Mexico},
    Pages = {1127--1132},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss03ijcai.pdf}
    }

  • C. Stachniss and W. Burgard, “Using Coverage Maps to Represent the Environment of Mobile Robots,” in Proceedings of the European Conference on Mobile Robots (ECMR) , Radziejowice, Poland, 2003, pp. 59-64.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Stachniss2003a,
    Title = {Using Coverage Maps to Represent the Environment of Mobile Robots},
    Author = {Stachniss, C. and Burgard, W.},
    Booktitle = ECMR,
    Year = {2003},
    Address = {Radziejowice, Poland},
    Pages = {59--64},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss03ecmr.pdf}
    }

  • C. Stachniss and W. Burgard, “Mapping and Exploration with Mobile Robots using Coverage Maps,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , Las Vegas, NV, USA, 2003, pp. 476-481.
    [BibTeX] [PDF]
    [none]
    @InProceedings{Stachniss2003b,
    Title = {Mapping and Exploration with Mobile Robots using Coverage Maps},
    Author = {Stachniss, C. and Burgard, W.},
    Booktitle = IROS,
    Year = {2003},
    Address = {Las Vegas, NV, USA},
    Pages = {476--481},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/stachniss03iros.pdf}
    }

  • C. Stachniss, D. Hähnel, and W. Burgard, “Grid-based FastSLAM and Exploration with Active Loop Closing,” in Online Proceedings of the Dagstuhl Seminar on Robot Navigation (Dagstuhl Seminar 03501) , Dagstuhl, Germany, 2003.
    [BibTeX]
    [none]
    @InProceedings{Stachniss2003c,
    Title = {Grid-based {FastSLAM} and Exploration with Active Loop Closing},
    Author = {Stachniss, C. and H\"{a}hnel, D. and Burgard, W.},
    Booktitle = {Online Proceedings of the Dagstuhl Seminar on Robot Navigation (Dagstuhl Seminar 03501)},
    Year = {2003},
    Address = {Dagstuhl, Germany},
    Abstract = {[none]},
    Timestamp = {2014.04.24}
    }

2002

  • M. Appel and W. Förstner, “Scene Constraints for Direct Single Image Orientation with Selfdiagnosis,” in Photogrammetric Computer Vision, Graz , 2002, pp. 42-49.
    [BibTeX] [PDF]
    In this paper we present a new method for single image orientation using an orthographic drawing or map of the scene. Environments which are dominated by man-made objects, such as industrial facilities or urban scenes, are very rich in vertical and horizontal structures. These scene constraints are reflected in symbols in an associated drawing. For example, vertical lines in the scene are usually marked as points in a drawing. The resulting orientation may be used in augmented reality systems or for initiating a subsequent bundle adjustment of all available images. In this paper we propose to use such scene constraints taken from a drawing to estimate the camera orientation. We use observed vertical lines, horizontal lines, and points to estimate the projection matrix P of the image. We describe the constraints in terms of projective geometry, which makes them straightforward and very transparent. In contrast to the work of Bondyfalat et al. 2001, we give a direct solution for P without using the fundamental matrix between image and map, as we need neither parallelism constraints between lines in a vertical plane other than for horizontal lines, nor observed perpendicular lines. We present both a direct solution for P and a statistically optimal, iterative solution, which takes the uncertainties of the constraints and the observations in the image and the drawing into account. It is a simplifying modification of the eigenvalue method of Matei/Meer 1997. The method allows the results to be evaluated statistically, namely to verify the used projection model and the assumed statistical properties of the measured image and map quantities, and to validate the achieved accuracy of the estimated projection matrix P. To demonstrate the feasibility of the approach, we present results of the application of our method to both synthetic data and real scenes in an industrial environment. Statistical tests show the performance and prove the rigour of the new method.
