See, Hear, Feel:
Smart Sensory Fusion for Robotic Manipulation

CoRL 2022

Hao Li*     Yizhi Zhang*     Junzhe Zhu     Shaoxiong Wang     Michelle A. Lee
Huazhe Xu     Edward Adelson     Li Fei-Fei     Ruohan Gao†     Jiajun Wu†

Stanford University     Massachusetts Institute of Technology

*equal contribution.       †equal advising.

[arXiv] [Supp] [Code] [Bibtex]


Abstract

Humans use all of their senses to accomplish different tasks in everyday activities. In contrast, existing work on robotic manipulation mostly relies on one modality, or occasionally two, such as vision and touch. In this work, we systematically study how visual, auditory, and tactile perception can jointly help robots solve complex manipulation tasks. We build a robot system that can see with a camera, hear with a contact microphone, and feel with a vision-based tactile sensor, with all three sensory modalities fused with a self-attention model. Results on two challenging tasks, dense packing and pouring, demonstrate the necessity and power of multisensory perception for robotic manipulation: vision conveys the global status of the robot but often suffers from occlusion, audio provides immediate feedback on key moments that may not even be visible, and touch offers precise local geometry for decision making. Leveraging all three modalities, our robotic system significantly outperforms prior methods.
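As a rough sketch of what self-attention fusion over the three sensory streams can look like, the snippet below combines one vision, one audio, and one touch feature vector (each assumed to come from its own encoder) with a single multi-head self-attention layer in PyTorch. This is a minimal illustration, not the released implementation: the feature dimension, number of heads, output size, and the MultisensoryFusion class itself are assumptions made for exposition.

# Minimal sketch (not the authors' released code) of fusing vision, audio,
# and touch features with self-attention. Each modality is assumed to have
# already been encoded into a fixed-size feature vector by its own backbone.
import torch
import torch.nn as nn

class MultisensoryFusion(nn.Module):
    def __init__(self, feat_dim: int = 256, num_heads: int = 4, out_dim: int = 7):
        super().__init__()
        # Learned embedding per modality so attention can tell the tokens apart.
        self.modality_embed = nn.Parameter(torch.zeros(3, feat_dim))
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.head = nn.Sequential(
            nn.LayerNorm(feat_dim),
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, out_dim),  # e.g., an action command (size is assumed)
        )

    def forward(self, vision, audio, touch):
        # Each input: (batch, feat_dim) feature from its modality encoder.
        tokens = torch.stack([vision, audio, touch], dim=1)  # (batch, 3, feat_dim)
        tokens = tokens + self.modality_embed                # add modality identity
        fused, _ = self.attn(tokens, tokens, tokens)         # self-attention across modalities
        return self.head(fused.mean(dim=1))                  # pool tokens and predict

# Usage with random features standing in for real encoder outputs.
model = MultisensoryFusion()
v, a, t = (torch.randn(8, 256) for _ in range(3))
action = model(v, a, t)  # shape: (8, 7)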

Qualitative Video

In the supplementary video, we show 1) the motivation for leveraging multisensory data in robotic manipulation tasks and our multisensory learning pipeline, 2) an illustration of the dense packing task with qualitative results, and 3) an illustration of the pouring task with qualitative results.


Publications

H. Li, Y. Zhang, J. Zhu, S. Wang, M. A. Lee, H. Xu, E. Adelson, L. Fei-Fei, R. Gao, J. Wu. "See, Hear, Feel: Smart Sensory Fusion for Robotic Manipulation". In CoRL, 2022. [Bibtex]

Acknowledgement

We thank Chen Wang, Kyle Hsu, Yunzhi Zhang, and Koven Yu for helpful discussions and feedback on paper drafts. This work is in part supported by the Stanford Institute for Human-Centered AI (HAI), the Toyota Research Institute (TRI), NSF RI #2211258, ONR MURI N00014-22-1-2740, Adobe, Amazon, Meta, and Samsung.