Multimotion Visual Odometry (IROS 2018)
Kevin Judd has had his paper on Multimotion Visual Odometry (MVO) accepted to IROS 2018. You can already read all about it on arXiv and we hope to see you in Madrid, Spain in October.
Publication
- Conference: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
- Pages: 3949–3956
- Location: Madrid, Spain
- Date: October 2018
Abstract
Estimating motion from images is a well-studied problem in computer vision and robotics. Previous work has developed techniques to estimate the motion of a moving camera in a largely static environment (e.g., visual odometry) and to segment or track motions in a dynamic scene using known camera motions (e.g., multiple object tracking).
It is more challenging to estimate the unknown motion of the camera and the dynamic scene simultaneously. Most previous work requires a priori object models (e.g., tracking-by-detection), motion constraints (e.g., planar motion), or fails to estimate the full SE(3) motions of the scene (e.g., scene flow). While these approaches work well in specific application domains, they are not generalizable to unconstrained motions.
This paper extends the traditional visual odometry (VO) pipeline to estimate the full SE(3) motion of both a stereo/RGB-D camera and the dynamic scene. This multimotion visual odometry (MVO) pipeline requires no a priori knowledge of the environment or the dynamic objects. Its performance is evaluated on a real-world dynamic dataset with ground truth for all motions from a motion capture system.
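To give a rough feel for the multimotion idea, here is a minimal, hypothetical sketch: given 3-D point correspondences between two stereo frames, it greedily fits one rigid SE(3) motion at a time with RANSAC, peels off the inliers, and repeats. The function names, thresholds, and greedy strategy are our own illustrative choices, not the MVO pipeline from the paper, which segments and estimates motions within an extended VO framework.

```python
# Hypothetical sketch: greedy multimodel SE(3) fitting on 3-D point tracks.
# Not the paper's MVO pipeline; only an illustration of estimating several
# rigid motions (camera + moving objects) from one set of correspondences.
import numpy as np


def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points to dst points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)          # 3x3 cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t


def ransac_se3(src, dst, iters=200, thresh=0.05, rng=None):
    """RANSAC a single rigid motion; returns (R, t, inlier_mask)."""
    rng = np.random.default_rng() if rng is None else rng
    best_mask, best_model = None, None
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = fit_rigid_transform(src[idx], dst[idx])
        residuals = np.linalg.norm(dst - (src @ R.T + t), axis=1)
        mask = residuals < thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (R, t)
    # Refit the best hypothesis on all of its inliers.
    R, t = fit_rigid_transform(src[best_mask], dst[best_mask])
    return R, t, best_mask


def segment_motions(src, dst, min_points=10):
    """Greedily extract multiple SE(3) motions from point correspondences."""
    remaining = np.arange(len(src))
    motions = []
    while len(remaining) >= min_points:
        R, t, mask = ransac_se3(src[remaining], dst[remaining])
        if mask.sum() < min_points:
            break
        motions.append((R, t, remaining[mask]))   # one motion and its point indices
        remaining = remaining[~mask]              # remove inliers, look for next motion
    return motions
```

In such a sketch, the motion supported by the most points is typically the camera's egomotion relative to the static background, while the remaining models correspond to independently moving objects; the paper's contribution is doing this jointly and robustly within the VO pipeline rather than by simple greedy peeling.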