Abstract
Language-driven grasp detection is a fundamental yet challenging task in robotics with various industrial applications. This work presents a new approach to language-driven grasp detection that leverages lightweight diffusion models to achieve fast inference. By integrating the diffusion process with natural-language grasping prompts, our method effectively encodes visual and textual information, enabling more accurate and versatile grasp poses that align well with the text query. To overcome the long inference times of diffusion models, we use the image and text features as the condition in a consistency model, reducing the number of denoising timesteps needed during inference. Extensive experiments show that our method outperforms recent grasp detection methods and lightweight diffusion models by a clear margin. We further validate our method in real-world robotic experiments, demonstrating its fast inference in practice.
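The key idea in the abstract, conditioning a consistency model on fused image/text features so that only a handful of denoising steps are needed, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the 5-D grasp parameterisation (x, y, w, h, theta), and the placeholder network are all assumptions made for the sketch.

```python
import numpy as np

def consistency_fn(x_t, t, cond):
    """Placeholder for the trained consistency model: given a noisy grasp
    pose x_t, a timestep t, and the fused image/text condition, it would
    predict the clean grasp in a single evaluation. Here we merely damp
    the noise so the sketch stays runnable."""
    return x_t / (1.0 + t) + 0.0 * cond.mean()

def sample_grasp(cond, timesteps=(1.0, 0.5), dim=5, seed=0):
    """Few-step consistency sampling: start from Gaussian noise and apply
    the consistency function at a small fixed set of timesteps (2 here,
    versus the tens or hundreds a standard diffusion sampler uses)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)          # initial noisy grasp pose
    for t in timesteps:
        x = consistency_fn(x, t, cond)    # jump toward the clean pose
        if t != timesteps[-1]:
            # re-noise to the next, smaller timestep, as in
            # multi-step consistency sampling
            x = x + t * rng.standard_normal(dim)
    return x

cond = np.ones(512)                       # dummy fused image+text embedding
grasp = sample_grasp(cond)
print(grasp.shape)
```

Because each call to the consistency function maps directly toward a clean sample, the loop length (and hence inference time) is fixed and tiny, which is the source of the speed-up the abstract claims over standard diffusion sampling.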
Original language | English |
---|---|
Title of host publication | Proceedings of the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) |
Pages | 13719-13725 |
DOIs | |
Publication status | Published - 25 Dec 2024 |
Event | 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024) - Abu Dhabi, United Arab Emirates Duration: 14 Oct 2024 → 18 Oct 2024 |
Conference
Conference | 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024) |
---|---|
Country/Territory | United Arab Emirates |
City | Abu Dhabi |
Period | 14/10/24 → 18/10/24 |
Research Field
- Complex Dynamical Systems