Abstract
Vision-based perception is a key enabling technology for converting human work processes into automated robotic workflows across diverse production and transport scenarios. Automating such workflows, however, faces several challenges arising from the diversity of these scenarios: varied objects to be handled, differing viewing conditions, partial visibility, and occlusions. In this paper we describe the concept of an occlusion-robust pallet recognition methodology that is trained entirely in the synthetic domain and copes well with varying object appearance. A key factor in our representation learning scheme is an exclusive focus on geometric traits, captured by the surface normals of dense stereo depth data. Furthermore, we adopt a local key-point detection scheme with regressed attributes that enables a bottom-up voting step for object candidates. The proposed geometric focus, combined with local key-point-based reasoning, yields a detection scheme that is independent of appearance (color, texture, material, illumination) and robust to occlusion. A quantitative evaluation of recognition accuracy for two network architectures is performed on a manually fine-annotated multi-warehouse dataset. Given the standardized pallet dimensions, spatially accurate pose estimation, tracking, and robotic path planning are carried out and demonstrated on two automated forklift demonstrators. These demonstrators consistently perform automated pick-up and drop-off of pallets carrying arbitrary items under a wide variation of settings.
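The abstract does not include implementation details; the following Python/NumPy sketch is a minimal, hypothetical illustration of the two geometric ingredients it names: surface normals computed from dense stereo depth, and a bottom-up voting step in which key-points with regressed offsets nominate object-center candidates. All names and parameters here (`backproject`, `surface_normals`, `vote_for_centers`, the intrinsics `fx`/`fy`/`cx`/`cy`, the voting cell size) are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only: the paper's learned networks and training
# pipeline are not reproduced here; all names/parameters are hypothetical.
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H x W, metres) into a 3-D point map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)          # H x W x 3

def surface_normals(points):
    """Per-pixel surface normals from central differences of the point map.

    Only geometry enters, so the result is independent of color, texture,
    material, and illumination by construction."""
    du = points[:, 2:, :] - points[:, :-2, :]        # horizontal tangent
    dv = points[2:, :, :] - points[:-2, :, :]        # vertical tangent
    n = np.cross(du[1:-1], dv[:, 1:-1])              # crop to common shape
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8
    return n                                         # (H-2) x (W-2) x 3

def vote_for_centers(keypoints, offsets, grid_shape, cell=8):
    """Bottom-up voting: each key-point casts a vote at the object center
    predicted by its regressed offset; accumulator peaks are candidates."""
    acc = np.zeros(grid_shape)
    for (u, v), (du_, dv_) in zip(keypoints, offsets):
        cu, cv = int((u + du_) // cell), int((v + dv_) // cell)
        if 0 <= cv < grid_shape[0] and 0 <= cu < grid_shape[1]:
            acc[cv, cu] += 1
    return acc

# Demo on synthetic depth; in the paper's pipeline the key-points and their
# offsets would be regressed by the learned network instead.
depth = np.random.uniform(1.0, 3.0, size=(480, 640))
pts = backproject(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
normals = surface_normals(pts)
```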
Original language | English |
---|---|
Title | Proceedings of 2023 IEEE Conference on Artificial Intelligence (CAI) |
Pages | 74-75 |
Number of pages | 2 |
ISBN (electronic) | 979-8-3503-3984-0 |
Publication status | Published - 6 June 2023 |
Research Field
- Assistive and Autonomous Systems