Automatic object recognition within point clouds in clustered or scattered scenes

Author
Bae, Egil
Published
2020
Keywords
Scene analysis
Lidar
Permalink
http://hdl.handle.net/20.500.12242/2817
DOI
https://doi.org/10.1117/12.2574119
Collection
Articles
Description
Bae, Egil. Automatic object recognition within point clouds in clustered or scattered scenes. Proceedings of SPIE, the International Society for Optical Engineering 2020; Volume 11538, pp. 1-20
Abstract
We consider the problem of automatically locating, classifying and identifying an object within a point cloud acquired by scanning a scene with a ladar. The recent work [E. Bae, Automatic scene understanding and object identification in point clouds, Proceedings of SPIE Volume 11160, 2019] approached the problem by first segmenting the point cloud into multiple classes of similar objects, before a more sophisticated and computationally demanding algorithm attempted to recognize/identify individual objects within the relevant class. The overall approach could find and identify partially visible objects with high confidence, but can fail if the object of interest is placed right next to other objects from the same class, or if the object of interest is scattered into several disjoint parts due to occlusions or slant view angles. This paper proposes an improvement of the algorithm that handles both clustered and scattered scenarios in a unified way. It introduces an intermediate step between segmentation and recognition that extracts objects from the relevant class based on the similarity between their distance function and the distance function of a reference shape for different view angles. The similarity measure accounts for occlusions and partial visibility naturally, and can be expressed analytically in the distance coordinate for azimuth and elevation angles within the field of view (FOV). This reduces the search from three dimensions to two. Furthermore, calculations can be limited to the parts of the FOV corresponding to the relevant segmented region. Consequently, the algorithm is computationally efficient, and it becomes feasible to match against the reference shape for multiple discrete view angles. The subsequent recognition step analyzes the extracted objects in more detail and avoids discretization and conversion errors. The algorithm is demonstrated on various maritime examples.
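The core idea of the extraction step — representing an object as distance (range) viewed as a function of azimuth and elevation, and comparing it against a reference shape while tolerating occlusion — can be sketched in a few lines of numpy. This is a minimal illustration only, not the paper's implementation: the function names `range_image` and `similarity`, the binning scheme, and the exponential similarity score are all assumptions made for the sketch.

```python
import numpy as np

def range_image(points, az_bins, el_bins,
                fov_az=(-0.5, 0.5), fov_el=(-0.25, 0.25)):
    """Project a point cloud (N x 3, sensor at the origin) to a range
    image: distance as a function of azimuth and elevation.  When several
    points fall into the same angular bin, the nearest range wins, which
    mimics a ladar seeing only the closest surface (self-occlusion)."""
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(points[:, 1], points[:, 0])
    el = np.arcsin(points[:, 2] / r)
    img = np.full((el_bins, az_bins), np.inf)
    i = ((az - fov_az[0]) / (fov_az[1] - fov_az[0]) * az_bins).astype(int)
    j = ((el - fov_el[0]) / (fov_el[1] - fov_el[0]) * el_bins).astype(int)
    ok = (i >= 0) & (i < az_bins) & (j >= 0) & (j < el_bins)
    # Unbuffered per-bin minimum, so duplicate bins are handled correctly.
    np.minimum.at(img, (j[ok], i[ok]), r[ok])
    return img

def similarity(observed, reference):
    """Occlusion-tolerant score in (0, 1]: compare ranges only where both
    images contain returns, so missing or occluded parts of the observed
    object do not penalize the match.  The exponential mapping of the mean
    range discrepancy to a score is an arbitrary choice for this sketch."""
    mask = np.isfinite(observed) & np.isfinite(reference)
    if not mask.any():
        return 0.0
    return float(np.exp(-np.mean(np.abs(observed[mask] - reference[mask]))))
```

Because the comparison lives entirely in the two angular coordinates of the FOV, a reference shape rendered from several discrete view angles can be matched against only the bins covered by a segmented region, which is the dimensionality and cost reduction the abstract describes.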