Improving 3D inline computational imaging of textureless objects using pattern illumination

Publication: Contribution in book or conference proceedings · Talk with paper in conference proceedings · Peer-reviewed

Abstract

Feature-based 3D reconstruction methods only work reliably for images with enough features (i.e., texture) that can be matched to infer a depth map. Because textureless surfaces violate this core assumption, the 3D reconstruction of such objects remains challenging. This paper explores a simple solution to this problem: adding artificial texture to such objects. In particular, we equipped a multi-view stereo based inline computational imaging system with a pattern illumination module to compensate for the absence of texture. Comparisons of 3D reconstructions from acquisitions with and without projected patterns show an increase in accuracy when using the pattern illumination.
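The core idea, that feature matchers need texture, can be illustrated with a minimal sketch. A toy Harris corner detector (an illustrative stand-in, not the paper's actual matching pipeline; all parameter values are assumptions) finds no features on a uniform surface but many on the same surface once a pseudo-random pattern is projected onto it:

```python
import numpy as np

def harris_corner_count(img, k=0.04, thresh=0.01):
    """Count Harris corner responses above an absolute threshold.

    Crude stand-in for the feature detectors used by multi-view
    stereo matchers; k and thresh are illustrative assumptions.
    """
    iy, ix = np.gradient(img.astype(float))

    def box3(a):
        # 3x3 box filter (sum over the local window) via padded slicing
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    # Structure tensor entries, smoothed over a 3x3 window
    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    # Harris response: det(M) - k * trace(M)^2
    r = sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
    return int((r > thresh).sum())

rng = np.random.default_rng(0)
flat = np.full((64, 64), 0.5)      # textureless surface: no gradients, no features
pattern = rng.random((64, 64))     # simulated projected pseudo-random pattern
print(harris_corner_count(flat), harris_corner_count(pattern))
```

On the flat image every gradient is zero, so the corner count is zero and stereo matching has nothing to work with; the patterned image produces abundant corner responses, which is precisely what the projected illumination restores.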
Original language: English
Title: Computer Vision Systems
Subtitle: Proceedings of the 14th International Conference on Computer Vision Systems (ICVS 2023)
Editors: Henrik I. Christensen, Peter Corke, Renaud Detry, Jean-Baptiste Weibel, Markus Vincze
Pages: 412-421
Number of pages: 10
Volume: 14253
ISBN (electronic): 978-3-031-44137-0
Publication status: Published - 21 Sept. 2023
Event: 14th International Conference on Computer Vision Systems (ICVS 2023) - Vienna University of Technology, Vienna, Austria
Duration: 27 Sept. 2023 - 29 Sept. 2023
https://icvs2023.conf.tuwien.ac.at/

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer Cham
Volume: 14253
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Conference

Conference: 14th International Conference on Computer Vision Systems (ICVS 2023)
Country/Territory: Austria
City: Vienna
Period: 27/09/23 - 29/09/23
Internet address: https://icvs2023.conf.tuwien.ac.at/

Research Field

  • High-Performance Vision Systems

