Igor Bogoslavskyi

PhD Student
Contact:
Email: igor.bogoslavskyi@uni-bonn.de
Tel: +49 - 228 - 73 - 27 11
Fax: +49 - 228 - 73 - 27 12
Office: Nussallee 15, 1. OG, room 1.012
Address:
University of Bonn
Photogrammetry, IGG
Nussallee 15
53115 Bonn
Follow Igor on GitHub
Google Scholar Profile

Short CV

Igor Bogoslavskyi is a PhD student at the Photogrammetry Lab of the University of Bonn, led by Cyrill Stachniss. Before moving to Bonn, he completed a Master of Science in Applied Computer Science at the University of Freiburg, Germany, in 2011 and a Bachelor of Science in Applied Mathematics in Ukraine in 2007. During his master's studies he worked as a research assistant on the ROVINA project in the Autonomous Intelligent Systems (AIS) laboratory led by Wolfram Burgard. His current interests lie in scene interpretation, outdoor perception, and navigation.

Research Interests

  • Probabilistic robotics
  • Localization, Mapping, SLAM
  • Autonomous Navigation and Exploration
  • Dynamic Object Detection from Laser Data

Projects

  • ROVINA – Robots for Exploration, Digital Preservation and Visualization of Archeological Sites.
    I was responsible for implementing the traversability analysis, robust homing, and parts of the navigation and exploration stack that ran on the robot exploring real Roman catacombs. See the project website for details and my publication list for related publications.
  • Depth Clustering – a library for fast and robust segmentation of Velodyne-generated 3D scans.
  • catkin fetch – a new verb for catkin_tools to download project dependencies automatically.
  • EasyClangComplete – an easy to setup C/C++ completion plugin for Sublime Text.

Teaching

  • Exercises for Photogrammetry & Remote Sensing, 2014/2015
  • C++ For Image Processing, 2015
  • C++ For Image Processing, 2016
  • 3D Mapping, 2016/2017
  • C++ For Image Processing, 2017

Awards

  • MINT Excellence Network Member

Publications

2017

  • I. Bogoslavskyi and C. Stachniss, “Efficient online segmentation for sparse 3D laser scans,” PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science, pp. 1-12, 2017.
    [BibTeX] [PDF]
    The ability to extract individual objects in the scene is key for a large number of autonomous navigation systems such as mobile robots or autonomous cars. Such systems navigating in dynamic environments need to be aware of objects that may change or move. In most perception cues, a pre-segmentation of the current image or laser scan into individual objects is the first processing step before a further analysis is performed. In this paper, we present an effective method that first removes the ground from the scan and then segments the 3D data in a range image representation into different objects. A key focus of our work is a fast execution with several hundred Hertz. Our implementation has small computational demands so that it can run online on most mobile systems. We explicitly avoid the computation of the 3D point cloud and operate directly on a 2.5D range image, which enables a fast segmentation for each 3D scan. This approach can furthermore handle sparse 3D data well, which is important for scanners such as the new Velodyne VLP-16 scanner. We implemented our approach in C++ and ROS, thoroughly tested it using different 3D scanners, and will release the source code of our implementation. Our method can operate at frame rates that are substantially higher than those of the sensors while using only a single core of a mobile CPU and producing high-quality segmentation results.

    @Article{bogoslavskyi17pfg,
    Title = {Efficient Online Segmentation for Sparse 3D Laser Scans},
    Author = {Bogoslavskyi, Igor and Stachniss, Cyrill},
    Journal = {PFG -- Journal of Photogrammetry, Remote Sensing and Geoinformation Science},
    Year = {2017},
    Pages = {1--12},
    Abstract = {The ability to extract individual objects in the scene is key for a large number of autonomous navigation systems such as mobile robots or autonomous cars. Such systems navigating in dynamic environments need to be aware of objects that may change or move. In most perception cues, a pre-segmentation of the current image or laser scan into individual objects is the first processing step before a further analysis is performed. In this paper, we present an effective method that first removes the ground from the scan and then segments the 3D data in a range image representation into different objects. A key focus of our work is a fast execution with several hundred Hertz. Our implementation has small computational demands so that it can run online on most mobile systems. We explicitly avoid the computation of the 3D point cloud and operate directly on a 2.5D range image, which enables a fast segmentation for each 3D scan. This approach can furthermore handle sparse 3D data well, which is important for scanners such as the new Velodyne VLP-16 scanner. We implemented our approach in C++ and ROS, thoroughly tested it using different 3D scanners, and will release the source code of our implementation. Our method can operate at frame rates that are substantially higher than those of the sensors while using only a single core of a mobile CPU and producing high-quality segmentation results.},
    Url = {http://link.springer.com/article/10.1007/s41064-016-0003-y}
    }
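
    The angle-based neighborhood criterion described in the abstract can be sketched as a flood fill over the range image. This is a simplified illustration only, not the released implementation: the function name, the resolution parameters, and the 10-degree threshold are assumptions, and ground removal is omitted.

    ```python
    import math
    from collections import deque

    def segment_range_image(ranges, vert_res_rad, horiz_res_rad, theta_deg=10.0):
        """Label connected components in a 2.5D range image.

        Two neighboring measurements with depths d1 >= d2, separated by the
        angular step alpha between beams, are merged into one object if
        beta = atan2(d2*sin(alpha), d1 - d2*cos(alpha)) exceeds a threshold:
        a large beta means the surface is seen at a shallow depth jump,
        while a small beta indicates a discontinuity between objects."""
        rows, cols = len(ranges), len(ranges[0])
        labels = [[0] * cols for _ in range(rows)]
        theta = math.radians(theta_deg)
        next_label = 0
        for r0 in range(rows):
            for c0 in range(cols):
                if labels[r0][c0] or ranges[r0][c0] <= 0.0:
                    continue  # already labeled or no laser return
                next_label += 1
                labels[r0][c0] = next_label
                queue = deque([(r0, c0)])
                while queue:  # breadth-first flood fill over the image grid
                    r, c = queue.popleft()
                    # 4-neighborhood; columns wrap around (360-degree scan)
                    for dr, dc, alpha in ((1, 0, vert_res_rad), (-1, 0, vert_res_rad),
                                          (0, 1, horiz_res_rad), (0, -1, horiz_res_rad)):
                        nr, nc = r + dr, (c + dc) % cols
                        if not (0 <= nr < rows) or labels[nr][nc] or ranges[nr][nc] <= 0.0:
                            continue
                        d1 = max(ranges[r][c], ranges[nr][nc])
                        d2 = min(ranges[r][c], ranges[nr][nc])
                        beta = math.atan2(d2 * math.sin(alpha),
                                          d1 - d2 * math.cos(alpha))
                        if beta > theta:  # shallow jump: same object
                            labels[nr][nc] = next_label
                            queue.append((nr, nc))
        return labels
    ```

    Because the criterion only compares each pixel with its four image neighbors, the whole scan is labeled in a single linear pass over the range image, which is what makes frame rates far above the sensor rate plausible without ever building the 3D point cloud.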

2016

  • I. Bogoslavskyi, M. Mazuran, and C. Stachniss, “Robust homing for autonomous robots,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA), 2016.
    [BibTeX] [PDF]
    @InProceedings{bogoslavskyi16icra,
    Title = {Robust Homing for Autonomous Robots},
    Author = {I. Bogoslavskyi and M. Mazuran and C. Stachniss},
    Booktitle = icra,
    Year = {2016},
    Abstract = {[none]},
    Url = {http://www.ipb.uni-bonn.de/pdfs/bogoslavskyi16icra.pdf}
    }

  • I. Bogoslavskyi and C. Stachniss, “Fast range image-based segmentation of sparse 3D laser scans for online operation,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2016.
    [BibTeX] [PDF]
    @InProceedings{bogoslavskyi16iros,
    Title = {Fast Range Image-Based Segmentation of Sparse 3D Laser Scans for Online Operation},
    Author = {I. Bogoslavskyi and C. Stachniss},
    Booktitle = iros,
    Year = {2016},
    Abstract = {[none]},
    Url = {http://www.ipb.uni-bonn.de/pdfs/bogoslavskyi16iros.pdf}
    }

  • D. Perea-Ström, I. Bogoslavskyi, and C. Stachniss, “Robust exploration and homing for autonomous robots,” in Robotics and Autonomous Systems, 2016.
    [BibTeX] [PDF]
    @InProceedings{perea16jras,
    Title = {Robust Exploration and Homing for Autonomous Robots},
    Author = {D. Perea-Str{\"o}m and I. Bogoslavskyi and C. Stachniss},
    Booktitle = jras,
    Year = {2016},
    Url = {http://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/perea16jras.pdf}
    }

2015

  • I. Bogoslavskyi, L. Spinello, W. Burgard, and C. Stachniss, “Where to park? Minimizing the expected time to find a parking space,” in Proceedings of the IEEE Int. Conf. on Robotics & Automation (ICRA), 2015, pp. 2147-2152. doi:10.1109/ICRA.2015.7139482
    [BibTeX] [PDF]
    Quickly finding a free parking spot that is close to a desired target location can be a difficult task. This holds for human drivers and autonomous cars alike. In this paper, we investigate the problem of predicting the occupancy of parking spaces and exploiting this information during route planning. We propose an MDP-based planner that considers route information as well as the occupancy probabilities of parking spaces to compute the path that minimizes the expected total time for finding an unoccupied parking space and for walking from the parking location to the target destination. We evaluated our system on real world data gathered over several days in a real parking lot. We furthermore compare our approach to three parking strategies and show that our method outperforms the alternative behaviors.

    @InProceedings{bogoslavskyi15icra,
    Title = {Where to Park? Minimizing the Expected Time to Find a Parking Space},
    Author = {I. Bogoslavskyi and L. Spinello and W. Burgard and C. Stachniss},
    Booktitle = ICRA,
    Year = {2015},
    Pages = {2147-2152},
    Abstract = {Quickly finding a free parking spot that is close to a desired target location can be a difficult task. This holds for human drivers and autonomous cars alike. In this paper, we investigate the problem of predicting the occupancy of parking spaces and exploiting this information during route planning. We propose an MDP-based planner that considers route information as well as the occupancy probabilities of parking spaces to compute the path that minimizes the expected total time for finding an unoccupied parking space and for walking from the parking location to the target destination. We evaluated our system on real world data gathered over several days in a real parking lot. We furthermore compare our approach to three parking strategies and show that our method outperforms the alternative behaviors.},
    Doi = {10.1109/ICRA.2015.7139482},
    Timestamp = {2015.06.29},
    Url = {http://www.ipb.uni-bonn.de/pdfs/bogoslavskyi15icra.pdf}
    }
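
    The expected-time objective from the abstract can be illustrated with backward induction over a single lane of parking spaces. This is a toy sketch under simplifying assumptions (one straight lane, independent occupancy probabilities, hypothetical parameter names), not the route-graph MDP planner evaluated in the paper:

    ```python
    def expected_parking_time(p_occ, walk, t_drive=2.0, t_fail=300.0):
        """Backward induction over one lane of parking spaces.

        p_occ[i]: probability that space i is occupied
        walk[i]:  walking time from space i to the destination
        t_drive:  driving time between consecutive spaces
        t_fail:   penalty after driving past the last space
        Returns (expected total time, park-if-free decision per space)."""
        n = len(p_occ)
        value = t_fail  # cost incurred after the last space is passed
        decisions = [False] * n
        for i in reversed(range(n)):
            cont = t_drive + value      # keep driving toward space i+1
            park = walk[i]              # stop here and walk to the target
            decisions[i] = park <= cont  # park only if walking is cheaper
            # Occupied: forced to continue; free: take the cheaper action.
            value = p_occ[i] * cont + (1.0 - p_occ[i]) * min(park, cont)
        return value, decisions
    ```

    The recursion captures the trade-off the paper describes: a space close to the destination is worth waiting for only while the expected cost of continuing stays below the walking time from the current space.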

2013

  • I. Bogoslavskyi, O. Vysotska, J. Serafin, G. Grisetti, and C. Stachniss, “Efficient traversability analysis for mobile robots using the Kinect sensor,” in Proceedings of the European Conference on Mobile Robots (ECMR), Barcelona, Spain, 2013.
    [BibTeX] [PDF]
    @InProceedings{Bogoslavskyi2013,
    Title = {Efficient Traversability Analysis for Mobile Robots using the Kinect Sensor},
    Author = {I. Bogoslavskyi and O. Vysotska and J. Serafin and G. Grisetti and C. Stachniss},
    Booktitle = ECMR,
    Year = {2013},
    Address = {Barcelona, Spain},
    Abstract = {[none]},
    Timestamp = {2014.04.24},
    Url = {http://www.informatik.uni-freiburg.de/~stachnis/pdf/bogoslavskyi13ecmr.pdf}
    }

 

Institute of Geodesy and Geoinformation
Faculty of Agriculture
University of Bonn