not yet published
Conference article  Open Access

Breaking the 2D dependency: what limits 3D-only open-vocabulary scene understanding

D’Orsi Domenico, Carrara Fabio, Falchi Fabrizio, Tonellotto Nicola

Keywords: Open-vocabulary 3D scene understanding, 3D scene segmentation, multimodal point cloud encoder, 3D-only pipeline

Open-vocabulary 3D scene understanding, i.e., recognizing and classifying objects in 3D scenes without being limited to a predefined set of classes, is a foundational task for robotics and extended reality applications. Current leading methods often rely on 2D foundation models to extract semantics, which are then projected into 3D. This paper investigates the viability of a purely 3D-native pipeline that eliminates the dependency on 2D models and reprojection. We systematically explore various architectural combinations of established 3D components. However, our extensive experiments on benchmark datasets reveal significant performance limitations of this direct 3D-native approach, with metrics falling short of expectations. Rather than a simple failure, these outcomes provide critical insights into the current deficiencies of existing 3D models when cascaded for complex open-vocabulary tasks. We highlight the lessons learned, identify the pipeline's limitations (e.g., the segmenter-encoder domain gap and limited robustness to imperfect segmentations), and propose future research directions. We argue that a fundamental rethinking of model design and interplay is necessary to realize the potential of truly 3D-native open-vocabulary understanding.


BibTeX entry
@inproceedings{oai:iris.cnr.it:20.500.14243/562945,
	title = {Breaking the 2D dependency: what limits 3D-only open-vocabulary scene understanding},
	author = {D'Orsi, Domenico and Carrara, Fabio and Falchi, Fabrizio and Tonellotto, Nicola},
	doi = {10.5281/zenodo.17338754, 10.5281/zenodo.17338755},
	year = {9999}
}

Social and Human Centered XR