Louis Wiesmann

Ph.D. Student
Contact:
Email: louis.wiesmann@igg.uni-bonn.de
Tel: +49 228 73 2906
Fax: +49 228 73 2712
Office: Nussallee 15, 1st floor, room 1.006
Address:
University of Bonn
Photogrammetry, IGG
Nussallee 15
53115 Bonn

Short CV

Louis Wiesmann has been a PhD student at the Photogrammetry Lab at the University of Bonn since November 2019. He received his master’s degree from the Institute of Geodesy and Geoinformation in 2019.

Research Interests

  • SLAM
  • Computer Vision
  • Machine Learning

Awards

  • Turbo-Preis 2019 of the DVW

Publications

2024

  • Y. Wu, T. Guadagnino, L. Wiesmann, L. Klingbeil, C. Stachniss, and H. Kuhlmann, “LIO-EKF: High Frequency LiDAR-Inertial Odometry using Extended Kalman Filters,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2024.
    [BibTeX]
    @inproceedings{wu2024icra,
    author = {Y. Wu and T. Guadagnino and L. Wiesmann and L. Klingbeil and C. Stachniss and H. Kuhlmann},
    title = {{LIO-EKF: High Frequency LiDAR-Inertial Odometry using Extended Kalman Filters}},
    booktitle = icra,
    year = 2024,
    note = {Accepted},
    }

  • Y. Pan, X. Zhong, L. Wiesmann, T. Posewsky, J. Behley, and C. Stachniss, “PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency,” arXiv preprint arXiv:2401.09101, 2024.
    [BibTeX] [PDF] [Code]

    Accurate and robust localization and mapping are essential components for most autonomous robots. In this paper, we propose a SLAM system for building globally consistent maps, called PIN-SLAM, that is based on an elastic and compact point-based implicit neural map representation. Taking range measurements as input, our approach alternates between incremental learning of the local implicit signed distance field and the pose estimation given the current local map using a correspondence-free, point-to-implicit model registration. Our implicit map is based on sparse optimizable neural points, which are inherently elastic and deformable with the global pose adjustment when closing a loop. Loops are also detected using the neural point features. Extensive experiments validate that PIN-SLAM is robust to various environments and versatile to different range sensors such as LiDAR and RGB-D cameras. PIN-SLAM achieves pose estimation accuracy better or on par with the state-of-the-art LiDAR odometry or SLAM systems and outperforms the recent neural implicit SLAM approaches while maintaining a more consistent, and highly compact implicit map that can be reconstructed as accurate and complete meshes. Finally, thanks to the voxel hashing for efficient neural points indexing and the fast implicit map-based registration without closest point association, PIN-SLAM can run at the sensor frame rate on a moderate GPU. Codes will be available at: https://github.com/PRBonn/PIN_SLAM.

    @article{pan2024arxiv,
    author = {Y. Pan and X. Zhong and L. Wiesmann and T. Posewsky and J. Behley and C. Stachniss},
    title = {{PIN-SLAM: LiDAR SLAM Using a Point-Based Implicit Neural Representation for Achieving Global Map Consistency}},
    journal = arxiv,
    year = 2024,
    volume = {arXiv:2401.09101},
    url = {http://arxiv.org/pdf/2401.09101v1},
    abstract = {Accurate and robust localization and mapping are essential components for most autonomous robots. In this paper, we propose a SLAM system for building globally consistent maps, called PIN-SLAM, that is based on an elastic and compact point-based implicit neural map representation. Taking range measurements as input, our approach alternates between incremental learning of the local implicit signed distance field and the pose estimation given the current local map using a correspondence-free, point-to-implicit model registration. Our implicit map is based on sparse optimizable neural points, which are inherently elastic and deformable with the global pose adjustment when closing a loop. Loops are also detected using the neural point features. Extensive experiments validate that PIN-SLAM is robust to various environments and versatile to different range sensors such as LiDAR and RGB-D cameras. PIN-SLAM achieves pose estimation accuracy better or on par with the state-of-the-art LiDAR odometry or SLAM systems and outperforms the recent neural implicit SLAM approaches while maintaining a more consistent, and highly compact implicit map that can be reconstructed as accurate and complete meshes. Finally, thanks to the voxel hashing for efficient neural points indexing and the fast implicit map-based registration without closest point association, PIN-SLAM can run at the sensor frame rate on a moderate GPU. Codes will be available at: https://github.com/PRBonn/PIN_SLAM.},
    codeurl = {https://github.com/PRBonn/PIN_SLAM}
    }
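The correspondence-free, point-to-implicit registration described in the abstract above can be illustrated with a toy numpy sketch (this is not the PIN-SLAM code; an analytic sphere SDF stands in for the learned neural distance field, and only a translation is optimized): each measured point is pushed toward the zero level set of the signed distance field by gradient descent on the pose, with no closest-point association at all.

```python
# Toy point-to-implicit registration: minimize 0.5 * mean(f(p + t)^2)
# over a translation t, where f is a signed distance field (SDF).
# Illustrative sketch only; PIN-SLAM optimizes a full SE(3) pose against
# a learned neural SDF, not an analytic one.
import numpy as np

def sdf_sphere(x, radius=1.0):
    """Analytic SDF of a sphere, standing in for a learned implicit map.

    Returns the SDF value and its gradient w.r.t. the query points x (N,3)."""
    d = np.linalg.norm(x, axis=-1)
    return d - radius, x / d[:, None]

def register_translation(points, sdf, steps=200, lr=0.5):
    """Estimate the translation that moves `points` onto the SDF zero set."""
    t = np.zeros(3)
    for _ in range(steps):
        value, grad = sdf(points + t)
        # Gradient of 0.5 * mean(f(p + t)^2) w.r.t. t via the chain rule.
        t -= lr * (value[:, None] * grad).mean(axis=0)
    return t
```

A usage sketch: sample points on the unit sphere, shift them by a known offset, and `register_translation` recovers the opposite offset that puts them back onto the zero level set.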

2023

  • R. Marcuzzi, L. Nunes, L. Wiesmann, E. Marks, J. Behley, and C. Stachniss, “Mask4D: End-to-End Mask-Based 4D Panoptic Segmentation for LiDAR Sequences,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 11, pp. 7487-7494, 2023. doi:10.1109/LRA.2023.3320020
    [BibTeX] [PDF] [Code]
    @article{marcuzzi2023ral-meem,
    author = {R. Marcuzzi and L. Nunes and L. Wiesmann and E. Marks and J. Behley and C. Stachniss},
    title = {{Mask4D: End-to-End Mask-Based 4D Panoptic Segmentation for LiDAR Sequences}},
    journal = ral,
    year = {2023},
    volume = {8},
    number = {11},
    pages = {7487-7494},
    issn = {2377-3766},
    doi = {10.1109/LRA.2023.3320020},
    codeurl = {https://github.com/PRBonn/Mask4D},
    }

  • I. Vizzo, B. Mersch, L. Nunes, L. Wiesmann, T. Guadagnino, and C. Stachniss, “Toward Reproducible Version-Controlled Perception Platforms: Embracing Simplicity in Autonomous Vehicle Dataset Acquisition,” in Proc. of the Intl. Conf. on Intelligent Transportation Systems Workshops, 2023.
    [BibTeX] [PDF] [Code]
    @inproceedings{vizzo2023itcsws,
    author = {I. Vizzo and B. Mersch and L. Nunes and L. Wiesmann and T. Guadagnino and C. Stachniss},
    title = {{Toward Reproducible Version-Controlled Perception Platforms: Embracing Simplicity in Autonomous Vehicle Dataset Acquisition}},
    booktitle = {Proc. of the Intl. Conf. on Intelligent Transportation Systems Workshops},
    year = 2023,
    codeurl = {https://github.com/ipb-car/meta-workspace},
    note = {Accepted}
    }

  • L. Wiesmann, T. Guadagnino, I. Vizzo, N. Zimmerman, Y. Pan, H. Kuang, J. Behley, and C. Stachniss, “LocNDF: Neural Distance Field Mapping for Robot Localization,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 8, pp. 4999-5006, 2023. doi:10.1109/LRA.2023.3291274
    [BibTeX] [PDF] [Code]
    @article{wiesmann2023ral-icra,
    author = {L. Wiesmann and T. Guadagnino and I. Vizzo and N. Zimmerman and Y. Pan and H. Kuang and J. Behley and C. Stachniss},
    title = {{LocNDF: Neural Distance Field Mapping for Robot Localization}},
    journal = ral,
    volume = {8},
    number = {8},
    pages = {4999--5006},
    year = 2023,
    url = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/wiesmann2023ral-icra.pdf},
    issn = {2377-3766},
    doi = {10.1109/LRA.2023.3291274},
    codeurl = {https://github.com/PRBonn/LocNDF}
    }

  • E. Marks, M. Sodano, F. Magistri, L. Wiesmann, D. Desai, R. Marcuzzi, J. Behley, and C. Stachniss, “High Precision Leaf Instance Segmentation in Point Clouds Obtained Under Real Field Conditions,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 8, pp. 4791-4798, 2023. doi:10.1109/LRA.2023.3288383
    [BibTeX] [PDF]
    @article{marks2023ral,
    author = {E. Marks and M. Sodano and F. Magistri and L. Wiesmann and D. Desai and R. Marcuzzi and J. Behley and C. Stachniss},
    title = {{High Precision Leaf Instance Segmentation in Point Clouds Obtained Under Real Field Conditions}},
    journal = ral,
    pages = {4791-4798},
    volume = {8},
    number = {8},
    issn = {2377-3766},
    year = {2023},
    doi = {10.1109/LRA.2023.3288383},
    }

  • L. Nunes, L. Wiesmann, R. Marcuzzi, X. Chen, J. Behley, and C. Stachniss, “Temporal Consistent 3D LiDAR Representation Learning for Semantic Perception in Autonomous Driving,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2023.
    [BibTeX] [PDF] [Code] [Video]
    @inproceedings{nunes2023cvpr,
    author = {L. Nunes and L. Wiesmann and R. Marcuzzi and X. Chen and J. Behley and C. Stachniss},
    title = {{Temporal Consistent 3D LiDAR Representation Learning for Semantic Perception in Autonomous Driving}},
    booktitle = cvpr,
    year = 2023,
    codeurl = {https://github.com/PRBonn/TARL},
    videourl = {https://youtu.be/0CtDbwRYLeo},
    }

  • I. Vizzo, T. Guadagnino, B. Mersch, L. Wiesmann, J. Behley, and C. Stachniss, “KISS-ICP: In Defense of Point-to-Point ICP – Simple, Accurate, and Robust Registration If Done the Right Way,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 2, pp. 1-8, 2023. doi:10.1109/LRA.2023.3236571
    [BibTeX] [PDF] [Code]
    @article{vizzo2023ral,
    author = {Vizzo, Ignacio and Guadagnino, Tiziano and Mersch, Benedikt and Wiesmann, Louis and Behley, Jens and Stachniss, Cyrill},
    title = {{KISS-ICP: In Defense of Point-to-Point ICP -- Simple, Accurate, and Robust Registration If Done the Right Way}},
    journal = ral,
    pages = {1-8},
    doi = {10.1109/LRA.2023.3236571},
    volume = {8},
    number = {2},
    year = {2023},
    codeurl = {https://github.com/PRBonn/kiss-icp},
    }
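The point-to-point ICP that KISS-ICP defends can be sketched in a few lines of numpy (a minimal illustration, not the KISS-ICP implementation, which adds adaptive thresholding, robust kernels, and a voxelized local map): alternate brute-force nearest-neighbor association with the closed-form Kabsch/SVD alignment of the matched point sets.

```python
# Minimal point-to-point ICP sketch (illustrative only): align a source
# point cloud to a target cloud by alternating data association and
# closed-form rigid alignment.
import numpy as np

def icp_point_to_point(source, target, iterations=20):
    """Align `source` (N,D) to `target` (M,D); returns R, t, aligned points."""
    src = source.copy()
    dim = source.shape[1]
    R_total, t_total = np.eye(dim), np.zeros(dim)
    for _ in range(iterations):
        # 1. Data association: brute-force nearest neighbor in the target.
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        corr = target[d2.argmin(axis=1)]
        # 2. Closed-form rigid alignment (Kabsch via SVD).
        mu_s, mu_c = src.mean(0), corr.mean(0)
        H = (src - mu_s).T @ (corr - mu_c)
        U, _, Vt = np.linalg.svd(H)
        S = np.eye(dim)
        S[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
        R = Vt.T @ S @ U.T
        t = mu_c - R @ mu_s
        # 3. Apply the increment and accumulate the total transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

With a small initial misalignment the nearest-neighbor association is mostly correct from the first iteration, so the loop converges quickly; real LiDAR pipelines replace the brute-force search with a voxel hash or k-d tree.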

  • R. Marcuzzi, L. Nunes, L. Wiesmann, J. Behley, and C. Stachniss, “Mask-Based Panoptic LiDAR Segmentation for Autonomous Driving,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 2, pp. 1141-1148, 2023. doi:10.1109/LRA.2023.3236568
    [BibTeX] [PDF] [Video]
    @article{marcuzzi2023ral,
    author = {R. Marcuzzi and L. Nunes and L. Wiesmann and J. Behley and C. Stachniss},
    title = {{Mask-Based Panoptic LiDAR Segmentation for Autonomous Driving}},
    journal = ral,
    volume = {8},
    number = {2},
    pages = {1141--1148},
    year = 2023,
    doi = {10.1109/LRA.2023.3236568},
    videourl = {https://youtu.be/I8G9VKpZux8}
    }

  • L. Wiesmann, L. Nunes, J. Behley, and C. Stachniss, “KPPR: Exploiting Momentum Contrast for Point Cloud-Based Place Recognition,” IEEE Robotics and Automation Letters (RA-L), vol. 8, iss. 2, pp. 592-599, 2023. doi:10.1109/LRA.2022.3228174
    [BibTeX] [PDF] [Code] [Video]
    @article{wiesmann2023ral,
    author = {L. Wiesmann and L. Nunes and J. Behley and C. Stachniss},
    title = {{KPPR: Exploiting Momentum Contrast for Point Cloud-Based Place Recognition}},
    journal = ral,
    volume = {8},
    number = {2},
    pages = {592-599},
    year = 2023,
    issn = {2377-3766},
    doi = {10.1109/LRA.2022.3228174},
    codeurl = {https://github.com/PRBonn/kppr},
    videourl = {https://youtu.be/bICz1sqd8Xs}
    }

  • M. Arora, L. Wiesmann, X. Chen, and C. Stachniss, “Static Map Generation from 3D LiDAR Point Clouds Exploiting Ground Segmentation,” Robotics and Autonomous Systems, vol. 159, p. 104287, 2023. doi:10.1016/j.robot.2022.104287
    [BibTeX] [PDF]
    @article{arora2023jras,
    author = {M. Arora and L. Wiesmann and X. Chen and C. Stachniss},
    title = {{Static Map Generation from 3D LiDAR Point Clouds Exploiting Ground Segmentation}},
    journal = jras,
    volume = {159},
    pages = {104287},
    year = {2023},
    issn = {0921-8890},
    doi = {10.1016/j.robot.2022.104287},
    }

2022

  • N. Zimmerman, L. Wiesmann, T. Guadagnino, T. Läbe, J. Behley, and C. Stachniss, “Robust Onboard Localization in Changing Environments Exploiting Text Spotting,” in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2022.
    [BibTeX] [PDF] [Code]
    @inproceedings{zimmerman2022iros,
    title = {{Robust Onboard Localization in Changing Environments Exploiting Text Spotting}},
    author = {N. Zimmerman and L. Wiesmann and T. Guadagnino and T. Läbe and J. Behley and C. Stachniss},
    booktitle = iros,
    year = {2022},
    codeurl = {https://github.com/PRBonn/tmcl},
    }

  • I. Vizzo, B. Mersch, R. Marcuzzi, L. Wiesmann, J. Behley, and C. Stachniss, “Make it Dense: Self-Supervised Geometric Scan Completion of Sparse 3D LiDAR Scans in Large Outdoor Environments,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 3, pp. 8534-8541, 2022. doi:10.1109/LRA.2022.3187255
    [BibTeX] [PDF] [Code] [Video]
    @article{vizzo2022ral,
    author = {I. Vizzo and B. Mersch and R. Marcuzzi and L. Wiesmann and J. Behley and C. Stachniss},
    title = {Make it Dense: Self-Supervised Geometric Scan Completion of Sparse 3D LiDAR Scans in Large Outdoor Environments},
    journal = ral,
    url = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/vizzo2022ral-iros.pdf},
    codeurl = {https://github.com/PRBonn/make_it_dense},
    year = {2022},
    volume = {7},
    number = {3},
    pages = {8534-8541},
    doi = {10.1109/LRA.2022.3187255},
    videourl = {https://youtu.be/NVjURcArHn8},
    }

  • L. Wiesmann, T. Guadagnino, I. Vizzo, G. Grisetti, J. Behley, and C. Stachniss, “DCPCR: Deep Compressed Point Cloud Registration in Large-Scale Outdoor Environments,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 3, pp. 6327-6334, 2022. doi:10.1109/LRA.2022.3171068
    [BibTeX] [PDF] [Code] [Video]
    @article{wiesmann2022ral-iros,
    author = {L. Wiesmann and T. Guadagnino and I. Vizzo and G. Grisetti and J. Behley and C. Stachniss},
    title = {{DCPCR: Deep Compressed Point Cloud Registration in Large-Scale Outdoor Environments}},
    journal = ral,
    year = 2022,
    volume = 7,
    number = 3,
    pages = {6327-6334},
    issn = {2377-3766},
    doi = {10.1109/LRA.2022.3171068},
    codeurl = {https://github.com/PRBonn/DCPCR},
    videourl = {https://youtu.be/RqLr2RTGy1s},
    }

  • L. Wiesmann, R. Marcuzzi, C. Stachniss, and J. Behley, “Retriever: Point Cloud Retrieval in Compressed 3D Maps,” in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2022.
    [BibTeX] [PDF]
    @inproceedings{wiesmann2022icra,
    author = {L. Wiesmann and R. Marcuzzi and C. Stachniss and J. Behley},
    title = {{Retriever: Point Cloud Retrieval in Compressed 3D Maps}},
    booktitle = icra,
    year = 2022,
    }

  • R. Marcuzzi, L. Nunes, L. Wiesmann, I. Vizzo, J. Behley, and C. Stachniss, “Contrastive Instance Association for 4D Panoptic Segmentation using Sequences of 3D LiDAR Scans,” IEEE Robotics and Automation Letters (RA-L), vol. 7, iss. 2, pp. 1550-1557, 2022. doi:10.1109/LRA.2022.3140439
    [BibTeX] [PDF]
    @article{marcuzzi2022ral,
    author = {R. Marcuzzi and L. Nunes and L. Wiesmann and I. Vizzo and J. Behley and C. Stachniss},
    title = {{Contrastive Instance Association for 4D Panoptic Segmentation using Sequences of 3D LiDAR Scans}},
    journal = ral,
    year = 2022,
    doi = {10.1109/LRA.2022.3140439},
    issn = {2377-3766},
    volume = 7,
    number = 2,
    pages = {1550-1557},
    }

2021

  • M. Arora, L. Wiesmann, X. Chen, and C. Stachniss, “Mapping the Static Parts of Dynamic Scenes from 3D LiDAR Point Clouds Exploiting Ground Segmentation,” in Proc. of the European Conf. on Mobile Robots (ECMR), 2021.
    [BibTeX] [PDF] [Code]
    @InProceedings{arora2021ecmr,
    author = {M. Arora and L. Wiesmann and X. Chen and C. Stachniss},
    title = {{Mapping the Static Parts of Dynamic Scenes from 3D LiDAR Point Clouds Exploiting Ground Segmentation}},
    booktitle = ecmr,
    codeurl = {https://github.com/humbletechy/Dynamic-Point-Removal},
    year = {2021},
    }

  • X. Chen, S. Li, B. Mersch, L. Wiesmann, J. Gall, J. Behley, and C. Stachniss, “Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data,” IEEE Robotics and Automation Letters (RA-L), vol. 6, pp. 6529-6536, 2021. doi:10.1109/LRA.2021.3093567
    [BibTeX] [PDF] [Code] [Video]
    @article{chen2021ral,
    title={{Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data}},
    author={X. Chen and S. Li and B. Mersch and L. Wiesmann and J. Gall and J. Behley and C. Stachniss},
    year={2021},
    volume = 6,
    number = 4,
    pages={6529-6536},
    journal=ral,
    url = {https://www.ipb.uni-bonn.de/pdfs/chen2021ral-iros.pdf},
    codeurl = {https://github.com/PRBonn/LiDAR-MOS},
    videourl = {https://youtu.be/NHvsYhk4dhw},
    doi = {10.1109/LRA.2021.3093567},
    issn = {2377-3766},
    }

  • L. Wiesmann, A. Milioto, X. Chen, C. Stachniss, and J. Behley, “Deep Compression for Dense Point Cloud Maps,” IEEE Robotics and Automation Letters (RA-L), vol. 6, pp. 2060-2067, 2021. doi:10.1109/LRA.2021.3059633
    [BibTeX] [PDF] [Code] [Video]
    @article{wiesmann2021ral,
    author = {L. Wiesmann and A. Milioto and X. Chen and C. Stachniss and J. Behley},
    title = {{Deep Compression for Dense Point Cloud Maps}},
    journal = ral,
    volume = 6,
    number = 2,
    pages = {2060-2067},
    doi = {10.1109/LRA.2021.3059633},
    year = 2021,
    url = {https://www.ipb.uni-bonn.de/pdfs/wiesmann2021ral.pdf},
    codeurl = {https://github.com/PRBonn/deep-point-map-compression},
    videourl = {https://youtu.be/fLl9lTlZrI0}
    }

2020

  • C. Stachniss, I. Vizzo, L. Wiesmann, and N. Berning, “How To Setup and Run a 100% Digital Conf.: DIGICROP 2020,” 2020.
    [BibTeX] [PDF]

    The purpose of this record is to document the setup and execution of DIGICROP 2020 and to simplify conducting future online events of that kind. DIGICROP 2020 was a 100% virtual conference run via Zoom with around 900 registered people in November 2020. It consisted of video presentations available via our website and a single-day live event for Q&A. We had around 450 people attending the Q&A session overall, most of the time 200-250 people have been online at the same time. This document is a collection of notes, instructions, and todo lists. It is not a polished manual, however, we believe these notes will be useful for other conference organizers and for us in the future.

    @misc{stachniss2020digitalconf,
    author = {C. Stachniss and I. Vizzo and L. Wiesmann and N. Berning},
    title = {{How To Setup and Run a 100\% Digital Conf.: DIGICROP 2020}},
    year = {2020},
    url = {https://www.ipb.uni-bonn.de/pdfs/stachniss2020digitalconf.pdf},
    abstract = {The purpose of this record is to document the setup and execution of DIGICROP 2020 and to simplify conducting future online events of that kind. DIGICROP 2020 was a 100\% virtual conference run via Zoom with around 900 registered people in November 2020. It consisted of video presentations available via our website and a single-day live event for Q&A. We had around 450 people attending the Q&A session overall, most of the time 200-250 people have been online at the same time. This document is a collection of notes, instructions, and todo lists. It is not a polished manual, however, we believe these notes will be useful for other conference organizers and for us in the future.},
    }