Abstract
Vision systems that provide a 360-degree view are becoming increasingly common in today's vehicles. These systems are generally composed of several cameras pointing in different directions and rigidly connected to each other. Their purpose is to provide driver assistance in the form of a display, for example by building a bird's-eye view around the vehicle for parking assistance. In this context, and for reasons of cost and ease of integration, such cameras are generally not synchronized. While this lack of synchronization is not a problem for display-only use, it poses significant issues for more complex computer vision applications (3D reconstruction, motion estimation, etc.). In this article, we propose to use a network of asynchronous cameras to estimate the motion of the vehicle and to recover the 3D structure of the surrounding scene (for example, for obstacle detection). Our method relies on at least three images from two adjacent cameras. The pose of each camera is estimated independently with conventional visual odometry algorithms. We then show that the absolute scale factor can be recovered under the hypothesis that the motion of the vehicle is smooth. The results are refined through a local bundle adjustment over the scale factor and 3D points only. We evaluated our method under real conditions on the KITTI database, and we showed that it generalizes to a larger network of cameras thanks to a system developed in our lab.
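To make the scale-recovery idea concrete, here is a minimal, self-contained sketch (not the authors' implementation) of how a smooth-motion assumption can fix the scale of an up-to-scale trajectory: under piecewise-linear (smooth) motion, a reference trajectory can be interpolated at the other camera's asynchronous timestamps, and the unknown scale follows from a closed-form least-squares fit. All function names and the synthetic setup are illustrative, and the rigid baseline between cameras is omitted for simplicity.

```python
# Hypothetical sketch of scale recovery between two asynchronous trajectories.
# Assumption: the vehicle motion is smooth, so positions can be linearly
# interpolated between the reference camera's timestamps.

def lerp(p0, p1, a):
    """Linear interpolation between 3D points p0 and p1 (smooth-motion model)."""
    return [x0 + a * (x1 - x0) for x0, x1 in zip(p0, p1)]

def interpolate(times, points, t):
    """Interpolate the trajectory (times, points) at asynchronous time t."""
    for i in range(len(times) - 1):
        if times[i] <= t <= times[i + 1]:
            a = (t - times[i]) / (times[i + 1] - times[i])
            return lerp(points[i], points[i + 1], a)
    raise ValueError("t lies outside the reference trajectory")

def estimate_scale(ref_times, ref_points, times_b, points_b):
    """Closed-form least squares: s minimizing sum ||s * p_b - q||^2,
    where q is the reference position interpolated at camera B's timestamp."""
    num = den = 0.0
    for t, p in zip(times_b, points_b):
        q = interpolate(ref_times, ref_points, t)
        num += sum(pi * qi for pi, qi in zip(p, q))
        den += sum(pi * pi for pi in p)
    return num / den

# Synthetic example: vehicle drives along x at constant speed (10 m/s).
ref_times = [0.0, 0.1, 0.2, 0.3]
ref_points = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [3.0, 0.0, 0.0]]
# Camera B fires 50 ms later each frame; its monocular trajectory is off by
# an unknown scale (here the true factor is 2.5).
times_b = [0.05, 0.15, 0.25]
points_b = [[0.2, 0.0, 0.0], [0.6, 0.0, 0.0], [1.0, 0.0, 0.0]]
print(estimate_scale(ref_times, ref_points, times_b, points_b))  # → 2.5
```

In the full method this single-parameter fit would be followed by a local bundle adjustment over the scale factor and the 3D points, as described in the abstract.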
http://bit.ly/2VyoF0z