Next best view planning with an unstructured representation
- Abstract
High-quality observations of the real world are crucial for creating realistic scene imitations and performing structural analysis. Observations can be used to produce 3D printed replicas of small-scale scenes (e.g., a toy bunny), to conduct inspections of large-scale infrastructure (e.g., a building) or to build virtual environments that provide immersive experiences for entertainment and for training robotic systems.
Scenes are observed by obtaining point measurements using a sensor from multiple views. These views can be chosen by a human operator or planned using knowledge of existing measurements or an a priori scene model. The challenge of selecting the ‘next’ view of a scene to obtain that will provide the ‘best’ improvement in an observation is known as the Next Best View (NBV) planning problem.
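The view-selection problem described above can be sketched as a greedy loop: score each candidate view by its expected improvement to the observation and take the highest-scoring one. This is a minimal illustrative sketch, not the planner developed in the thesis; `candidate_views` and `evaluate_gain` are assumed placeholders for a view-sampling scheme and a gain model tied to a particular scene representation.

```python
def plan_next_best_view(candidate_views, evaluate_gain):
    """Pick the candidate view with the highest expected gain.

    Illustrative only: a real NBV planner supplies its own candidate
    view sampling and an information-gain model derived from the
    existing measurements or an a priori scene model.
    """
    # Greedy selection: the 'next best' view maximises expected improvement.
    return max(candidate_views, key=evaluate_gain)


# Hypothetical usage: gains here are arbitrary numbers standing in for
# an expected-improvement estimate per candidate view.
gains = {"left": 3, "top": 7, "right": 5}
best = plan_next_best_view(list(gains), gains.get)
```

In practice this selection sits inside a loop: the chosen view is executed, the new measurements are folded into the observation, and candidate gains are re-evaluated before the next selection.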
This thesis presents work on NBV planning with a novel unstructured scene representation. In contrast to the existing literature on the problem, which typically uses structured representations, an unstructured representation does not impose an external structure on scene observations. There is no reduction in the fidelity of the represented information, and no simplifying assumptions are made about the scene structure.
This unstructured representation is used to create the Surface Edge Explorer (SEE), a novel NBV planning approach. Observed points are classified based on the local measurement density. Views are chosen to improve the surface coverage of an observation until a minimum point density has been attained. Experiments comparing SEE with structured approaches demonstrate that it obtains equivalent observation quality from fewer views and with lower computation time.
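The density-based classification above can be illustrated with a small sketch: each point is labelled by counting its neighbours within a fixed radius. The labels, radius and threshold below are assumptions for illustration, not the thesis's exact classification criteria.

```python
import numpy as np


def classify_points(points, radius, min_neighbours):
    """Label each point by its local measurement density.

    Illustrative sketch only: points with at least `min_neighbours`
    other points inside `radius` are treated as densely observed
    ('core'); the rest mark under-observed surface ('frontier').
    The label names and thresholding are assumptions.
    """
    points = np.asarray(points, dtype=float)
    labels = []
    for p in points:
        dists = np.linalg.norm(points - p, axis=1)
        n = int(np.sum(dists <= radius)) - 1  # exclude the point itself
        labels.append("core" if n >= min_neighbours else "frontier")
    return labels
```

Sparsely observed ('frontier') regions are then natural targets for subsequent views, which continue until every point satisfies the minimum-density requirement.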
Novel point-based techniques for considering occlusions and scene visibility are investigated. This work overcomes the raycasting constraints of the existing methods used by structured approaches. The best-performing strategies for addressing each of these challenges are integrated with SEE to create SEE++. An experimental comparison of SEE++ with SEE and structured approaches demonstrates that it achieves significantly better observation performance, requiring fewer views and shorter travel distances while maintaining reasonable computation times.
Observations of real-world scenes using SEE and SEE++ demonstrate that their capabilities transfer successfully from a simulation environment to the real world. Qualitative results show that both approaches obtain highly complete observations of several scenes of varying size and structural complexity using multiple sensor modalities. Quantitative results demonstrate that SEE++ observes the scenes more efficiently than SEE, at the cost of increased computation time.
- Publication Details
- Type
- D.Phil. Thesis
- Institution
- University of Oxford
- Manuscript
- Open-Access PDF
- https://robotic-esp.com/papers/border_dphil19.pdf
- Google Scholar
- BibTeX Entry
@phdthesis{border_dphil19,
  author = {Rowan Border},
  title = {Next best view planning with an unstructured representation},
  school = {University of Oxford},
  year = {2019},
}