Automatic scene understanding and object identification in point clouds

Author
Bae, Egil
Date Issued
2019-10-10
Keywords
Clouds
Lidar
Modelling
Permalink
http://hdl.handle.net/123456789/104314
http://hdl.handle.net/20.500.12242/2655
DOI
https://doi.org/10.1117/12.2534984
Collection
Articles
Description
Bae, Egil. Automatic scene understanding and object identification in point clouds. Proceedings of SPIE, the International Society for Optical Engineering 2019; Vol. 11160 (111600M), pp. 1-17.
1754800.pdf (Size: 3 MB)
Abstract
A ladar can acquire a dense set of 3D coordinates of a scene, a so-called point cloud, in sub-second time from ranges of several kilometers. This paper presents algorithms for segmenting a point cloud into meaningful classes of similar objects, and for identifying a specific object within its respective class. The segmentation algorithm incorporates several low-level features derived from surface patches of objects from different classes and the interfaces between them. On a mathematical level, it partitions the point cloud in a way that optimally balances these considerations by finding the global minimizer of a so-called variational problem over a graph, utilizing recently published results on general high-dimensional data classification. The subsequent recognition step makes use of higher-level features for identifying a particular object, represented by a 3D model, among the respective class of segmented objects. It measures similarity of shape between the 3D model and each observed object, considering them as two pieces in a puzzle. The simulated shadow and visibility of the 3D model are measured for consistency with the point cloud shadows. The recognition step is also formulated as an optimization problem and solved by mathematically well-founded techniques. Results demonstrate that point clouds acquired in maritime, urban and rural scenes can be segmented into meaningful object classes and that individual vessels can be identified with high confidence.
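The segmentation step described in the abstract builds a graph over the point cloud from low-level surface features and finds the global minimizer of a variational energy on that graph. The sketch below is not the paper's algorithm; it is a minimal Python illustration of the same general graph-based workflow (a k-nearest-neighbour feature graph followed by spectral clustering), with the neighbour count, the two surface features, the class count and the similarity scale all assumed for the example. The recognition step (model-to-object shape matching with shadow and visibility consistency) is not sketched here.

```python
# Minimal sketch, NOT the paper's method: the paper minimizes a variational
# energy over a graph built from low-level surface features; this stand-in
# (k-NN feature graph + spectral clustering) only illustrates the general
# graph-based workflow. K, N_CLASSES, SIGMA and the features are assumptions.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix, identity
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh
from scipy.cluster.vq import kmeans2

K = 10          # spatial neighbours per point (assumed)
N_CLASSES = 4   # number of object classes to segment into (assumed)
SIGMA = 0.5     # feature-similarity scale (assumed)

def local_features(points, tree, k=K):
    """Per-point features from local surface patches: 'flatness' (smallest
    PCA eigenvalue fraction) and height relative to the neighbourhood."""
    _, idx = tree.query(points, k=k + 1)            # self is included
    feats = np.empty((len(points), 2))
    for i, nbrs in enumerate(idx):
        patch = points[nbrs]
        evals = np.linalg.eigvalsh(np.cov(patch.T)) # ascending order
        feats[i, 0] = evals[0] / max(evals.sum(), 1e-12)
        feats[i, 1] = points[i, 2] - patch[:, 2].mean()
    return feats

def knn_feature_graph(points, feats, tree, k=K, sigma=SIGMA):
    """Sparse symmetric affinity matrix: Gaussian weights on feature distance
    between the k nearest spatial neighbours of each point."""
    _, idx = tree.query(points, k=k + 1)
    rows = np.repeat(np.arange(len(points)), k)
    cols = idx[:, 1:].ravel()                       # drop the self-neighbour
    d = np.linalg.norm(feats[rows] - feats[cols], axis=1)
    w = np.exp(-(d / sigma) ** 2)
    W = coo_matrix((w, (rows, cols)), shape=(len(points), len(points)))
    return ((W + W.T) * 0.5).tocsr()

def segment(points):
    """Assign each 3D point to one of N_CLASSES groups via the feature graph."""
    tree = cKDTree(points)
    feats = local_features(points, tree)
    W = knn_feature_graph(points, feats, tree)
    L = laplacian(W, normed=True)
    # Smallest Laplacian eigenvectors embed the points (standard spectral
    # clustering).  Eigenvalues of the normalized Laplacian lie in [0, 2],
    # so the largest eigenvectors of 2I - L are the smallest of L; this
    # converges faster than asking ARPACK for the smallest directly.
    M = 2.0 * identity(L.shape[0], format='csr') - L
    _, vecs = eigsh(M, k=N_CLASSES, which='LA')
    _, labels = kmeans2(vecs, N_CLASSES, minit='++')
    return labels

if __name__ == "__main__":
    pts = np.random.rand(2000, 3)                   # placeholder point cloud
    print(np.bincount(segment(pts)))
```

In the paper the partition is instead obtained as the global minimizer of a variational energy on the graph, which balances the low-level surface features against the boundaries between classes; the sketch above only shows how such a feature-weighted graph over a point cloud can be constructed and clustered.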