Dr. Jens Behley

Postdoctoral researcher
Contact:
Email: jens.behley@igg.uni-bonn.de
Tel: +49 228 73-60190
Fax: +49 228 73-2712
Office: Nussallee 15, 1st floor, room 1.008
Address:
University of Bonn
Photogrammetry, IGG
Nussallee 15
53115 Bonn
Google Scholar Profile

Research Interests

  • Laser-based perception
  • Machine learning for robotic applications

Short CV

Jens Behley has been a postdoctoral researcher at the Department for Photogrammetry since February 2016. From September 2008 to July 2015, he worked at the Department for Computer Science III, University of Bonn, where, in January 2014, he successfully defended his PhD thesis, “Three-dimensional Laser-based Classification in Outdoor Environments,” supervised by Prof. Dr. Armin B. Cremers.

Awards

  • Diplomarbeitspreis (diploma thesis award) der Bonner Informatik Gesellschaft e.V. (2009)

Publications

2020

  • J. Behley, A. Milioto, and C. Stachniss, “A Benchmark for LiDAR-based Panoptic Segmentation based on KITTI,” arXiv preprint arXiv:2003.02371, 2020.
    [BibTeX] [PDF]

    Panoptic segmentation is the recently introduced task that tackles semantic segmentation and instance segmentation jointly. In this paper, we present an extension of SemanticKITTI, which is a large-scale dataset providing dense point-wise semantic labels for all sequences of the KITTI Odometry Benchmark, for training and evaluation of laser-based panoptic segmentation. We provide the data and discuss the processing steps needed to enrich a given semantic annotation with temporally consistent instance information, i.e., instance information that supplements the semantic labels and identifies the same instance over sequences of LiDAR point clouds. Additionally, we present two strong baselines that combine state-of-the-art LiDAR-based semantic segmentation approaches with a state-of-the-art detector enriching the segmentation with instance information and that allow other researchers to compare their approaches against. We hope that our extension of SemanticKITTI with strong baselines enables the creation of novel algorithms for LiDAR-based panoptic segmentation as much as it has for the original semantic segmentation and semantic scene completion tasks. Data, code, and an online evaluation using a hidden test set will be published on http://semantic-kitti.org.

    @article{behley2020arxiv,
    author = {J. Behley and A. Milioto and C. Stachniss},
    title = {{A Benchmark for LiDAR-based Panoptic Segmentation based on KITTI}},
    journal = arxiv,
    year = 2020,
    eprint = {2003.02371v1},
    url = {http://arxiv.org/pdf/2003.02371v1},
    keywords = {cs.CV},
    abstract = {Panoptic segmentation is the recently introduced task that tackles semantic segmentation and instance segmentation jointly. In this paper, we present an extension of SemanticKITTI, which is a large-scale dataset providing dense point-wise semantic labels for all sequences of the KITTI Odometry Benchmark, for training and evaluation of laser-based panoptic segmentation. We provide the data and discuss the processing steps needed to enrich a given semantic annotation with temporally consistent instance information, i.e., instance information that supplements the semantic labels and identifies the same instance over sequences of LiDAR point clouds. Additionally, we present two strong baselines that combine state-of-the-art LiDAR-based semantic segmentation approaches with a state-of-the-art detector enriching the segmentation with instance information and that allow other researchers to compare their approaches against. We hope that our extension of SemanticKITTI with strong baselines enables the creation of novel algorithms for LiDAR-based panoptic segmentation as much as it has for the original semantic segmentation and semantic scene completion tasks. Data, code, and an online evaluation using a hidden test set will be published on http://semantic-kitti.org.}
    }
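    The temporally consistent instance labels described in the abstract are distributed in SemanticKITTI's documented binary label format: one little-endian uint32 per point, with the semantic class in the lower 16 bits and the instance id in the upper 16 bits. A minimal sketch of splitting such labels (file name in the comment is illustrative):

    ```python
    import numpy as np

    # SemanticKITTI stores one uint32 per point: the lower 16 bits hold the
    # semantic class, the upper 16 bits a temporally consistent instance id.
    def split_labels(raw):
        raw = np.asarray(raw, dtype=np.uint32)
        semantic = raw & 0xFFFF   # semantic class per point
        instance = raw >> 16      # instance id, stable across scans
        return semantic, instance

    # Reading labels from disk would look like:
    # raw = np.fromfile("000000.label", dtype=np.uint32)
    ```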

  • P. Lottes, J. Behley, N. Chebrolu, A. Milioto, and C. Stachniss, “Robust joint stem detection and crop-weed classification using image sequences for plant-specific treatment in precision farming,” Journal of Field Robotics, vol. 37, no. 1, pp. 20-34, 2020. doi:10.1002/rob.21901
    [BibTeX] [PDF]
    @Article{lottes2020jfr,
    title = {Robust joint stem detection and crop-weed classification using image sequences for plant-specific treatment in precision farming},
    author = {Lottes, P. and Behley, J. and Chebrolu, N. and Milioto, A. and Stachniss, C.},
    journal = jfr,
    volume = {37},
    number = {1},
    pages = {20-34},
    year = {2020},
    doi = {10.1002/rob.21901},
    url = {http://www.ipb.uni-bonn.de/pdfs/lottes2019jfr.pdf},
    }

2019

  • J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall, “SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences,” in Proc. of the IEEE/CVF International Conf. on Computer Vision (ICCV), 2019.
    [BibTeX] [PDF] [Video]
    @InProceedings{behley2019iccv,
    author = {J. Behley and M. Garbade and A. Milioto and J. Quenzel and S. Behnke and C. Stachniss and J. Gall},
    title = {{SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences}},
    booktitle = iccv,
    year = {2019},
    videourl = {http://www.ipb.uni-bonn.de/html/projects/semantic_kitti/videos/teaser.mp4},
    }

  • E. Palazzolo, J. Behley, P. Lottes, P. Giguère, and C. Stachniss, “ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2019.
    [BibTeX] [PDF] [Code] [Video]
    @InProceedings{palazzolo2019iros,
    author = {E. Palazzolo and J. Behley and P. Lottes and P. Gigu\`ere and C. Stachniss},
    title = {{ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals}},
    booktitle = iros,
    year = {2019},
    url = {http://www.ipb.uni-bonn.de/pdfs/palazzolo2019iros.pdf},
    codeurl = {https://github.com/PRBonn/refusion},
    videourl = {https://youtu.be/1P9ZfIS5-p4},
    }

  • X. Chen, A. Milioto, E. Palazzolo, P. Giguère, J. Behley, and C. Stachniss, “SuMa++: Efficient LiDAR-based Semantic SLAM,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2019.
    [BibTeX] [PDF] [Code] [Video]
    @inproceedings{chen2019iros,
    author = {X. Chen and A. Milioto and E. Palazzolo and P. Giguère and J. Behley and C. Stachniss},
    title = {{SuMa++: Efficient LiDAR-based Semantic SLAM}},
    booktitle = {Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
    year = 2019,
    codeurl = {https://github.com/PRBonn/semantic_suma/},
    videourl = {https://youtu.be/uo3ZuLuFAzk},
    }

  • A. Milioto, I. Vizzo, J. Behley, and C. Stachniss, “RangeNet++: Fast and Accurate LiDAR Semantic Segmentation,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2019.
    [BibTeX] [PDF] [Code] [Video]
    @inproceedings{milioto2019iros,
    author = {A. Milioto and I. Vizzo and J. Behley and C. Stachniss},
    title = {{RangeNet++: Fast and Accurate LiDAR Semantic Segmentation}},
    booktitle = {Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
    year = 2019,
    codeurl = {https://github.com/PRBonn/lidar-bonnetal},
    videourl = {https://youtu.be/wuokg7MFZyU},
    }
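    Projection-based approaches such as RangeNet++ operate on a 2D range image obtained by spherically projecting the LiDAR point cloud. A minimal sketch of that projection, assuming a sensor with a known vertical field of view (the defaults below are typical Velodyne HDL-64E values, not taken from the paper):

    ```python
    import numpy as np

    # Spherical projection of an (N, 3) point cloud onto an H x W range image,
    # as used by projection-based LiDAR segmentation methods.
    def spherical_projection(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
        fov = np.radians(fov_up - fov_down)          # total vertical FOV
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.linalg.norm(points[:, :3], axis=1)    # range per point
        yaw = np.arctan2(y, x)                       # azimuth in [-pi, pi]
        pitch = np.arcsin(z / r)                     # elevation angle
        u = 0.5 * (1.0 - yaw / np.pi) * W            # image column
        v = (1.0 - (pitch - np.radians(fov_down)) / fov) * H  # image row
        u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
        v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)
        return u, v, r
    ```

    A point straight ahead of the sensor, e.g. (1, 0, 0), lands in the middle column of the image, which is a quick sanity check for the sign conventions.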

2018

  • P. Lottes, J. Behley, A. Milioto, and C. Stachniss, “Fully convolutional networks with sequential information for robust crop and weed detection in precision farming,” IEEE Robotics and Automation Letters (RA-L), vol. 3, no. 4, pp. 3097-3104, 2018. doi:10.1109/LRA.2018.2846289
    [BibTeX] [PDF] [Video]
    @Article{lottes2018ral,
    author = {P. Lottes and J. Behley and A. Milioto and C. Stachniss},
    title = {Fully Convolutional Networks with Sequential Information for Robust Crop and Weed Detection in Precision Farming},
    journal = {IEEE Robotics and Automation Letters (RA-L)},
    year = {2018},
    volume = {3},
    issue = {4},
    pages = {3097-3104},
    doi = {10.1109/LRA.2018.2846289},
    url = {http://www.ipb.uni-bonn.de/pdfs/lottes2018ral.pdf},
    videourl = {https://www.youtube.com/watch?v=vTepw9HRLh8},
    }

  • P. Lottes, J. Behley, N. Chebrolu, A. Milioto, and C. Stachniss, “Joint stem detection and crop-weed classification for plant-specific treatment in precision farming,” in Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2018.
    [BibTeX] [PDF] [Video]

    Applying agrochemicals is the default procedure for conventional weed control in crop production, but has negative impacts on the environment. Robots have the potential to treat every plant in the field individually and thus can reduce the required use of such chemicals. To achieve that, robots need the ability to identify crops and weeds in the field and must additionally select effective treatments. While certain types of weed can be treated mechanically, other types need to be treated by (selective) spraying. In this paper, we present an approach that provides the necessary information for effective plant-specific treatment. It outputs the stem location for weeds, which allows for mechanical treatments, and the covered area of the weed for selective spraying. Our approach uses an end-to-end trainable fully convolutional network that simultaneously estimates stem positions as well as the covered area of crops and weeds. It jointly learns the class-wise stem detection and the pixel-wise semantic segmentation. Experimental evaluations on different real-world datasets show that our approach is able to reliably solve this problem. Compared to state-of-the-art approaches, our approach not only substantially improves the stem detection accuracy, i.e., distinguishing crop and weed stems, but also provides an improvement in the semantic segmentation performance.

    @InProceedings{lottes2018iros,
    author = {P. Lottes and J. Behley and N. Chebrolu and A. Milioto and C. Stachniss},
    title = {Joint Stem Detection and Crop-Weed Classification for Plant-specific Treatment in Precision Farming},
    booktitle = {Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
    year = 2018,
    url = {http://www.ipb.uni-bonn.de/pdfs/lottes18iros.pdf},
    videourl = {https://www.youtube.com/watch?v=C9mjZxE_Sxg},
    abstract = {Applying agrochemicals is the default procedure for conventional weed control in crop production, but has negative impacts on the environment. Robots have the potential to treat every plant in the field individually and thus can reduce the required use of such chemicals. To achieve that, robots need the ability to identify crops and weeds in the field and must additionally select effective treatments. While certain types of weed can be treated mechanically, other types need to be treated by (selective) spraying. In this paper, we present an approach that provides the necessary information for effective plant-specific treatment. It outputs the stem location for weeds, which allows for mechanical treatments, and the covered area of the weed for selective spraying. Our approach uses an end-to-end trainable fully convolutional network that simultaneously estimates stem positions as well as the covered area of crops and weeds. It jointly learns the class-wise stem detection and the pixel-wise semantic segmentation. Experimental evaluations on different real-world datasets show that our approach is able to reliably solve this problem. Compared to state-of-the-art approaches, our approach not only substantially improves the stem detection accuracy, i.e., distinguishing crop and weed stems, but also provides an improvement in the semantic segmentation performance.}
    }

  • J. Behley and C. Stachniss, “Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments,” in Proceedings of Robotics: Science and Systems (RSS), 2018.
    [BibTeX] [PDF] [Video]
    @InProceedings{behley2018rss,
    author = {J. Behley and C. Stachniss},
    title = {Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments},
    booktitle = rss,
    year = 2018,
    videourl = {https://www.youtube.com/watch?v=-AEX203rXkE},
    url = {http://www.roboticsproceedings.org/rss14/p16.pdf},
    }
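Surfel-based mapping pipelines like the one above typically estimate per-point normals on a projected vertex map using neighboring pixels. A minimal sketch of that general step via central differences and a cross product (an illustration of the common technique, not the paper's exact implementation):

```python
import numpy as np

# Normal estimation on an (H, W, 3) vertex map: for each interior pixel,
# take the cross product of the horizontal and vertical neighbor
# differences, then normalize. Border pixels are left as zero vectors.
def vertex_map_normals(V):
    N = np.zeros_like(V)
    du = V[:, 2:, :] - V[:, :-2, :]   # horizontal central difference
    dv = V[2:, :, :] - V[:-2, :, :]   # vertical central difference
    n = np.cross(du[1:-1, :, :], dv[:, 1:-1, :])
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    n = np.divide(n, norm, out=np.zeros_like(n), where=norm > 0)
    N[1:-1, 1:-1, :] = n
    return N
```

For a planar vertex map the interior normals all point along the plane's normal direction, which makes the routine easy to sanity-check.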