Work experience

RESEARCH ENGINEER AT "Laboratoire des Technologies Innovantes d'Amiens" (AMIENS - FRANCE)

From September 2022 to early June 2023, I worked on a European project funded by REACT-EU, a set of measures introduced to address the consequences of the Covid-19 pandemic and to boost the economy.

The project's goal is to design a robot that disinfects school classrooms with UV treatment. The robot must operate autonomously, move from room to room and, for safety reasons, stop the disinfection whenever it detects a human being.

During this nine-month contract, my objective was to enable the robot to map a low-textured indoor environment.

Choice of the camera

Initial tests were conducted to determine which camera to use, among the Intel RealSense D435, D435i and T265 models available in the laboratory. Since the robot must navigate low-textured environments such as narrow corridors, the tests were performed with algorithms such as ORB-SLAM3, OV²SLAM and RTAB-Map [1-3].

A monocular configuration was ruled out early because the algorithm must be initialized by performing a combined translation and rotation movement. This initialization is feasible on devices such as drones, but on a ground robot of a certain weight, a dedicated mechanism would have to be provided for it. Moreover, a monocular camera cannot recover metric scale on its own: an additional sensor such as an IMU would be needed to obtain a correctly scaled map, which is a further constraint.
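The scale problem mentioned above can be illustrated with a toy pinhole-camera calculation (hypothetical numbers, not from the project): scaling both the scene depth and the camera translation by the same factor leaves the observed pixel motion unchanged, so a single camera alone cannot recover metric scale.

```python
# Toy illustration (assumed pinhole model, hypothetical numbers) of the
# monocular scale ambiguity: doubling both the depth of a point and the
# camera's sideways translation produces exactly the same pixel shift,
# so metric scale needs extra information (e.g. an IMU).

def pixel_shift(depth, translation, focal=500.0):
    """Horizontal pixel displacement of a point at `depth` metres when the
    camera translates sideways by `translation` metres (pinhole model)."""
    return focal * translation / depth

print(pixel_shift(2.0, 0.1))  # point at 2 m, 10 cm of motion
print(pixel_shift(4.0, 0.2))  # scene and motion doubled: identical shift
```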

The tests were therefore carried out in stereo and RGB-D configurations.

Choice of the algorithm

Once the RealSense T265 had been chosen, the next step was to select the algorithm giving the best possible results. Three algorithms in particular were shortlisted, namely ORB-SLAM3, OV²SLAM and RTAB-Map [1-3].

ORB-SLAM3 [1] is the most popular feature-based Visual SLAM system, thanks to its support for several sensor configurations (monocular, stereo, RGB-D, mono-inertial, stereo-inertial and RGB-D-inertial) and its excellent results on datasets such as EuRoC and KITTI. However, feature-based Visual SLAM is not well suited to low-textured environments, since it must extract key points in the images and track them through feature matching.
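The low-texture limitation can be sketched with a crude, standard-library-only stand-in for a feature detector (the synthetic images and the threshold are assumptions for illustration): a keypoint needs strong local intensity gradients, which a flat corridor wall rarely provides.

```python
# Minimal sketch (synthetic images, hypothetical threshold) of why
# feature-based SLAM struggles in low-textured scenes: a feature detector
# looks for strong local intensity gradients, and a nearly uniform surface
# yields almost none.

def count_gradient_keypoints(img, threshold=10):
    """Count pixels whose horizontal+vertical gradient magnitude exceeds
    a threshold, a crude stand-in for a corner/feature detector."""
    h, w = len(img), len(img[0])
    count = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            if abs(gx) + abs(gy) > threshold:
                count += 1
    return count

# Low-texture "corridor wall": nearly uniform intensity.
flat = [[100 for _ in range(16)] for _ in range(16)]
# Textured scene: checkerboard-like intensity pattern.
textured = [[255 if (x // 2 + y // 2) % 2 else 0 for x in range(16)]
            for y in range(16)]

print(count_gradient_keypoints(flat))      # no candidate features
print(count_gradient_keypoints(textured))  # many candidate features
```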

OV²SLAM [2] is a Visual SLAM algorithm that extracts features only on key frames and tracks them with optical flow on the subsequent frames. It also uses features to track local maps but, unlike ORB-SLAM3 [1], performs this operation only for key frames. Its visual odometry is accurate and can run at up to 200 Hz, making real-time use possible.
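The key-frame tracking principle can be illustrated with a toy pure-Python sketch (the frames and patch sizes are made up for illustration): take a small patch around a feature in the key frame and find its best sum-of-squared-differences match in the next frame. Pyramidal Lucas-Kanade optical flow, as used by such front-ends, solves the same correspondence problem far more efficiently using image gradients.

```python
# Toy sketch (hypothetical frames) of patch tracking between a key frame
# and the next frame: exhaustive sum-of-squared-differences (SSD) search,
# the brute-force analogue of optical-flow feature tracking.

def track_patch(frame0, frame1, px, py, half=1, search=3):
    """Locate the (2*half+1)^2 patch centred at (px, py) in frame0 by
    exhaustive SSD search within +/- search pixels in frame1."""
    def ssd(cx, cy):
        total = 0
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                d = frame0[py + dy][px + dx] - frame1[cy + dy][cx + dx]
                total += d * d
        return total
    candidates = [(cx, cy)
                  for cy in range(py - search, py + search + 1)
                  for cx in range(px - search, px + search + 1)]
    return min(candidates, key=lambda c: ssd(*c))

# A single bright feature at (6, 6) in the key frame moves to (8, 7).
frame0 = [[0] * 16 for _ in range(16)]
frame1 = [[0] * 16 for _ in range(16)]
frame0[6][6] = 255
frame1[7][8] = 255  # frame[y][x] indexing

print(track_patch(frame0, frame1, 6, 6))  # (8, 7): moved +2 in x, +1 in y
```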

Finally, RTAB-Map [3] is one of the few Visual SLAM algorithms to provide a dense map (rather than the sparse map produced by most other algorithms), which can directly support robot navigation. Its main advantage is its memory management, which enables loop-closure detection over very long distances.

RESEARCH INTERN AT "Laboratoire des Technologies Innovantes d'Amiens" (AMIENS - FRANCE)

I did a 20-week internship at the "Laboratoire des Technologies Innovantes d'Amiens" (LTI), during which I studied the existing Visual Odometry methods (the indirect, i.e. feature-based, method as well as the direct method). Once these technologies were analyzed, tests were performed to determine in which cases each type of algorithm was best suited.

I then carried out a detailed analysis of the functioning of ORB-SLAM3 [1], one of the most popular Visual SLAM algorithms.

Tests were performed with different types of cameras to determine which method was the most robust. It was concluded that further work would focus on ORB-SLAM3, since the direct-method algorithms performed worse owing to the cameras' lack of photometric calibration.

It was also concluded, based on the scientific paper "Benchmarking cameras for open vslam indoors" [4], that the T265 camera performed better than the D435, thanks to its larger field of view.



[1] C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. Montiel, and J. D. Tardós, "ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM", IEEE Transactions on Robotics, vol. 37, no. 6, pp. 1874-1890, 2021.

[2] M. Ferrera, A. Eudes, J. Moras, M. Sanfourche, and G. Le Besnerais, "OV²SLAM: A fully online and versatile visual SLAM for real-time applications", IEEE Robotics and Automation Letters, 2021.

[3] M. Labbé and F. Michaud, "RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation", Journal of Field Robotics, vol. 36, no. 2, pp. 416-446, 2019.

[4] K. Chappellet, G. Caron, F. Kanehiro, K. Sakurada, and A. Kheddar, "Benchmarking cameras for open vslam indoors", in 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021.