
Estimation, Search, and Planning (ESP) Research Group

Surface Edge Explorer

Rowan has submitted a journal paper on the Surface Edge Explorer (SEE) to IJRR. It contains not only the latest advances to the open-source code but also detailed simulations and experiments. Please check out the trailer video and multimedia extensions on YouTube and the paper preprint on arXiv.

Authors
  1. R. Border
  2. J. D. Gammell
Title
The Surface Edge Explorer (SEE): A measurement-direct approach to next best view planning
Publication
Journal
The International Journal of Robotics Research (IJRR)
Date
Code
Videos
PDF
Digital Object Identifier (DOI)
doi: 10.1177/02783649241230098
arXiv
Google Scholar

Abstract

High-quality observations of the real world are crucial for a variety of applications, including producing 3D printed replicas of small-scale scenes and conducting inspections of large-scale infrastructure. These 3D observations are commonly obtained by combining multiple sensor measurements from different views. Guiding the selection of suitable views is known as the Next Best View (NBV) planning problem.

Most NBV approaches reason about measurements using rigid data structures (e.g., surface meshes or voxel grids). This simplifies next best view selection but can be computationally expensive, reduces real-world fidelity, and couples the selection of a next best view with the final data processing.
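As a rough illustration of the rigid-structure trade-off described above, the short Python sketch below discretises a point cloud into a voxel grid. It is a hypothetical example written for this page, not code from the paper or the open-source SEE release; the voxel size and synthetic point cloud are assumptions. Every measurement that falls inside a voxel collapses to the same cell, so the resolution chosen up front limits the surface detail available to the planner.

    # Hypothetical illustration only, not code from the SEE paper or release:
    # a minimal voxel-grid discretisation of a point cloud, the kind of rigid
    # data structure referred to above.
    import numpy as np

    def voxelise(points, voxel_size):
        """Map an (N, 3) array of measurements to the set of occupied voxel indices."""
        indices = np.floor(points / voxel_size).astype(int)
        # Every measurement inside a voxel collapses to one cell, so detail
        # finer than voxel_size is no longer visible to the planner.
        return {tuple(idx) for idx in indices}

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        cloud = rng.uniform(0.0, 1.0, size=(10_000, 3))  # synthetic measurements
        occupied = voxelise(cloud, voxel_size=0.1)
        print(f"{len(cloud)} measurements reduced to {len(occupied)} occupied voxels")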

This paper presents the Surface Edge Explorer (SEE), an NBV approach that selects new observations directly from previous sensor measurements without requiring rigid data structures. SEE uses measurement density to propose next best views that increase coverage of insufficiently observed surfaces while avoiding potential occlusions. Statistical results from simulated experiments show that SEE can attain better surface coverage in less computational time and with less sensor travel distance than the evaluated volumetric approaches on both small- and large-scale scenes. Real-world experiments demonstrate SEE autonomously observing a deer statue using a 3D sensor affixed to a robotic arm.
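
For readers curious how measurement density can drive view selection, the sketch below shows one simple way to flag under-observed surface points: estimate the local measurement spacing from k-nearest-neighbour distances and compare it against a target resolution. This is an illustration of the general idea only, not SEE's actual algorithm; the density estimate, the target_spacing threshold, and the function name are assumptions made for this example.

    # Hypothetical sketch of density-based candidate selection; not SEE's
    # actual algorithm. Requires numpy and scipy.
    import numpy as np
    from scipy.spatial import cKDTree

    def low_density_candidates(points, target_spacing, k=8):
        """Return indices of measurements whose local spacing exceeds target_spacing."""
        tree = cKDTree(points)
        # Distance to the k-th neighbour approximates the local measurement
        # spacing (k + 1 because the query includes the point itself).
        distances, _ = tree.query(points, k=k + 1)
        local_spacing = distances[:, -1]
        return np.flatnonzero(local_spacing > target_spacing)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        dense_patch = rng.uniform(0.0, 0.5, size=(5_000, 3))  # well-observed region
        sparse_patch = rng.uniform(0.5, 1.0, size=(200, 3))   # under-observed region
        cloud = np.vstack([dense_patch, sparse_patch])
        candidates = low_density_candidates(cloud, target_spacing=0.05)
        print(f"{len(candidates)} of {len(cloud)} measurements flagged for re-observation")

In a full NBV pipeline, each flagged point would then be paired with a proposed sensor pose (for example, along an estimated surface normal, while checking for occlusions); that step is omitted from this sketch.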