Open-world semantic segmentation for lidar point clouds




Current methods for LIDAR semantic segmentation are not robust enough for real-world applications, since they are closed-set and static. The closed-set assumption means the network can only output labels of trained classes, even for objects it has never seen before, while a static network cannot update its knowledge base according to what it has seen. Therefore, in this work, we propose the open-world semantic segmentation task for LIDAR point clouds, which aims to (1) identify both old and novel classes using open-set semantic segmentation, and (2) gradually incorporate novel objects into the existing knowledge base using incremental learning, without forgetting old classes. For this purpose, we propose a REdundAncy cLassifier (REAL) framework that provides a general architecture for both the open-set semantic segmentation and incremental learning problems. The experimental results show that REAL simultaneously achieves state-of-the-art performance in the open-set semantic segmentation task on the SemanticKITTI and nuScenes datasets, and alleviates the catastrophic forgetting problem by a large margin during incremental learning.
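The redundancy-classifier idea behind open-set prediction can be sketched as follows. This is a minimal illustration under assumed details, not the paper's implementation: it assumes the segmentation head emits K known-class logits plus R extra "redundancy" channels, and any point whose top score lands in a redundancy channel is flagged as novel.

```python
# Hedged sketch of a redundancy-classifier head for open-set segmentation.
# K, R, NOVEL, and predict_open_set are illustrative names, not from the paper.
import numpy as np

K, R = 4, 2    # assumed: 4 known classes, 2 redundancy channels
NOVEL = -1     # label assigned to points claimed by a redundancy channel

def predict_open_set(logits: np.ndarray) -> np.ndarray:
    """logits: (N, K+R) per-point scores. Returns (N,) labels,
    with NOVEL wherever the argmax falls in a redundancy channel."""
    top = logits.argmax(axis=1)
    return np.where(top < K, top, NOVEL)

# Toy example: three points; the last one scores highest on a redundancy channel.
logits = np.array([
    [2.0, 0.1, 0.0, 0.0, -1.0, -1.0],   # known class 0
    [0.0, 0.2, 3.0, 0.1, -1.0, -1.0],   # known class 2
    [0.1, 0.0, 0.0, 0.0,  2.5,  0.3],   # redundancy -> novel
])
print(predict_open_set(logits))  # prints [ 0  2 -1]
```

In an incremental-learning step, points flagged as NOVEL could later be assigned a new class index, growing K without retraining the known-class channels from scratch.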


Keywords: Incremental learning · LIDAR point clouds · Open-set semantic segmentation · Open-world semantic segmentation



Cite this paper: Cen, J. et al. Open-world semantic segmentation for lidar point clouds. Lecture Notes in Computer Science.
