Abstract
Autonomous Underwater Vehicles (AUVs) can significantly extend our access to the oceans. For an AUV, a map of its surroundings may be necessary to perform certain tasks, such as path planning and collision avoidance. However, the performance of several sensors used on land is impaired underwater. At the same time, Visual Simultaneous Localization and Mapping (VSLAM) methods based purely on RGB cameras are becoming increasingly accurate. In this thesis we present a surface reconstruction system that uses the output of a VSLAM system to reconstruct a dense 3D model of the scene. Having access to a synthetic underwater dataset, we also present a novel method for emulating VSLAM data on this and similar datasets. In addition, as a step towards reconstructing the surface, we implement a per-frame depth interpolation method based on the sparse depth samples obtained from the VSLAM data. The final reconstruction is performed by third-party software, which is presented together with a selection of relevant literature. The resulting reconstruction system thus consists of three separate modules, one of which is third-party software, and each module can be exchanged for other methods. By utilizing a synthetic dataset, the modular system serves as an example of how surface reconstruction methods can be tested on simulated underwater environments.
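To illustrate the kind of per-frame depth interpolation the abstract refers to, the sketch below densifies a depth map from scattered depth samples such as those a VSLAM front end might provide for one frame. This is a generic sketch using `scipy.interpolate.griddata`, not the thesis's actual method; the image size, sample count, and synthetic depth field are invented for the demonstration.

```python
# Illustrative sketch (assumed setup, not the thesis implementation):
# interpolate a dense per-frame depth map from sparse depth samples,
# e.g. triangulated VSLAM landmarks projected into the image.
import numpy as np
from scipy.interpolate import griddata

H, W = 48, 64                      # hypothetical image resolution
rng = np.random.default_rng(0)

# Sparse samples: pixel coordinates (u, v) with known depth values.
n_samples = 200
us = rng.integers(0, W, n_samples)
vs = rng.integers(0, H, n_samples)
depths = 2.0 + 0.05 * us + 0.02 * vs   # smooth synthetic depth field

# Interpolate onto the full pixel grid. Linear interpolation leaves
# NaNs outside the convex hull of the samples, so fall back to
# nearest-neighbour values there.
grid_v, grid_u = np.mgrid[0:H, 0:W]
dense = griddata((vs, us), depths, (grid_v, grid_u), method="linear")
nearest = griddata((vs, us), depths, (grid_v, grid_u), method="nearest")
dense = np.where(np.isnan(dense), nearest, dense)

print(dense.shape)  # one depth value per pixel
```

In a full pipeline, such a dense depth map would be produced for each keyframe and then fused by the surface reconstruction module.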