Oxford Multimotion Dataset (RA-L 2019)
The Oxford Multimotion Dataset (OMD) has been accepted to RA-L. You can read about it in Kevin's paper (available now on arXiv) and download the data from our dataset page.
Publication
- Journal: IEEE Robotics and Automation Letters (RA-L)
- Volume: 4
- Number: 2
- Pages: 800–807
- Date:
- Notes: Presented at ICRA 2019
Abstract
Datasets advance research by posing challenging new problems and providing standardized methods of algorithm comparison. High-quality datasets exist for many important problems in robotics and computer vision, including egomotion estimation and motion/scene segmentation, but not for techniques that estimate every motion in a scene. Metric evaluation of these multimotion estimation techniques requires datasets consisting of multiple, complex motions that also contain ground truth for every moving body.
The Oxford Multimotion Dataset provides multimotion estimation problems of varying complexity, ranging from complex problems that challenge existing algorithms to simpler problems that support development. These include observations from both static and dynamic sensors, varying numbers of moving bodies, and a variety of SE(3) motions. It also provides experiments designed to isolate specific challenges of the multimotion problem, including rotation about the optical axis and occlusion.
In total, the Oxford Multimotion Dataset contains over 80 minutes of multimotion data consisting of stereo and RGB-D camera images, IMU data, and Vicon ground-truth trajectories. The dataset culminates in a complex toy car segment representative of many challenging real-world scenarios. This paper describes each experiment with a focus on its relevance to the multimotion estimation problem.
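Because every moving body has a Vicon ground-truth trajectory, metric evaluation typically reduces to comparing an estimated SE(3) pose against the ground-truth pose for each body. The following is a minimal, illustrative sketch of such a pose-error computation (it is not the paper's evaluation protocol; function names are placeholders):

```python
import numpy as np

def se3_inverse(T):
    """Invert a 4x4 homogeneous transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Tinv = np.eye(4)
    Tinv[:3, :3] = R.T
    Tinv[:3, 3] = -R.T @ t
    return Tinv

def pose_error(T_est, T_gt):
    """Translational (m) and rotational (rad) error between two SE(3) poses."""
    # Error transform: identity when the estimate matches ground truth
    E = se3_inverse(T_gt) @ T_est
    trans_err = np.linalg.norm(E[:3, 3])
    # Rotation angle recovered from the trace of the rotation block
    cos_theta = np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.arccos(cos_theta)
    return trans_err, rot_err
```

Evaluating every motion in the scene then amounts to applying this per-pose error along each body's estimated trajectory against its Vicon ground truth.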