Occlusion-Robust MVO (IROS 2020)

Kevin Judd's work extending Multimotion Visual Odometry (MVO) to robustly handle temporary occlusions has been accepted to IROS 2020. You can read the full paper on arXiv, and perhaps we'll even get to see you in Las Vegas, USA, in October.

K. M. Judd and J. D. Gammell. “Occlusion-robust MVO: Multimotion estimation through occlusion via motion closure.” In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 25–29 Oct. 2020.

Abstract

Visual motion estimation is an integral and well-studied challenge in autonomous navigation. Recent work has focused on addressing multimotion estimation, which is especially challenging in highly dynamic environments. Such environments not only comprise multiple, complex motions but also tend to exhibit significant occlusion.

Previous work in object tracking focuses on maintaining the integrity of object tracks but usually relies on specific appearance-based descriptors or constrained motion models. These approaches are very effective in specific applications but do not generalize to the full multimotion estimation problem.

This paper presents a pipeline for estimating multiple motions, including the camera egomotion, in the presence of occlusions. This approach uses an expressive motion prior to estimate the SE(3) trajectory of every motion in the scene, even during temporary occlusions, and to identify the reappearance of motions through motion closure. The performance of this occlusion-robust multimotion visual odometry (MVO) pipeline is evaluated on real-world data and the Oxford Multimotion Dataset.
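The core idea of extrapolating an SE(3) trajectory through an occlusion and re-associating a reappearing motion can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a simple constant-velocity motion prior (the paper's prior is more expressive), and the function names (`hat`, `extrapolate_occluded`, `motion_closure`) and the tolerance threshold are illustrative choices:

```python
import numpy as np
from scipy.linalg import expm, logm

def hat(xi):
    """Map a 6-vector twist (v, omega) to its 4x4 se(3) matrix."""
    v, w = xi[:3], xi[3:]
    W = np.array([[0.0, -w[2],  w[1]],
                  [w[2],  0.0, -w[0]],
                  [-w[1], w[0],  0.0]])
    X = np.zeros((4, 4))
    X[:3, :3] = W
    X[:3, 3] = v
    return X

def extrapolate_occluded(T, xi, dt, steps):
    """Propagate a pose through an occlusion under a constant-velocity
    prior: T_k = T_{k-1} @ exp(dt * xi^)."""
    poses = []
    for _ in range(steps):
        T = T @ expm(dt * hat(xi))
        poses.append(T.copy())
    return poses

def motion_closure(T_predicted, T_observed, tol=0.5):
    """Associate a reappearing motion with an extrapolated trajectory if
    the pose error (norm of the log of the relative transform) is small."""
    err = logm(np.linalg.inv(T_predicted) @ T_observed)
    return np.linalg.norm(err) < tol
```

For example, a motion translating at 1 m/s along x that is occluded for three 1 s frames would be predicted 3 m further along x, and an observation near that predicted pose would close the motion track rather than spawn a new one.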