Enhancing Visual Odometry with Estimated Scene Depth: Leveraging RGB-D Data with Deep Learning
[ 1 ] Instytut Robotyki i Inteligencji Maszynowej, Wydział Automatyki, Robotyki i Elektrotechniki, Politechnika Poznańska | [ D ] doctoral student | [ P ] employee
[2.2] Automation, electronics, electrical engineering and space technologies
2024
scientific article
English
- visual odometry
- RGB-D cameras
- depth estimation
- deep learning
- particle swarm optimization
EN Advances in visual odometry (VO) systems have benefited from the widespread use of affordable RGB-D cameras, improving indoor localization and mapping accuracy. However, older sensors like the Kinect v1 face challenges due to depth inaccuracies and incomplete data. This study compares indoor VO systems that use RGB-D images, exploring methods to enhance depth information. We examine conventional image inpainting techniques and a deep learning approach, utilizing newer depth data from devices like the Kinect v2. Our research highlights the importance of refining data from lower-quality sensors, which is crucial for cost-effective VO applications. By integrating deep learning models with richer context from RGB images and more comprehensive depth references, we demonstrate improved trajectory estimation compared to standard methods. This work advances budget-friendly RGB-D VO systems for indoor mobile robots, emphasizing deep learning’s role in leveraging connections between image appearance and depth data.
13.07.2024
2755-1 - 2755-20
Article number: 2755
CC BY (Attribution)
open access journal
final published version
at the time of publication
public
100
2.6 [2023 List]