Developing embodied agents in simulation has been a key research topic in recent years. Exciting new tasks, algorithms, and benchmarks have been developed in various simulators. However, most of these simulators assume deaf agents in silent environments, whereas humans perceive the world through multiple senses. We introduce Sonicverse, a multisensory simulation platform with integrated audio-visual simulation for training household agents that can both see and hear. Sonicverse models realistic, continuous audio rendering in 3D environments in real time. Together with a new audio-visual VR interface that allows humans to interact with agents via audio, Sonicverse enables a series of embodied AI tasks that require audio-visual perception. For semantic audio-visual navigation in particular, we also propose a new multi-task learning model that achieves state-of-the-art performance. In addition, we demonstrate Sonicverse's realism via sim-to-real transfer, which has not been achieved by other simulators: an agent trained in Sonicverse can successfully perform audio-visual navigation in real-world environments.
In the supplementary video, we show 1) key features of audio simulation in Sonicverse, 2) task prototypes enabled by our audio-visual virtual reality interface, 3) examples of semantic audio-visual navigation, and 4) sim-to-real transfer results in real-world environments.
R. Gao, H. Li, G. Dharan, Z. Wang, C. Li, F. Xia, S. Savarese, L. Fei-Fei, J. Wu. "Sonicverse: A Multisensory Simulation Platform for Embodied Household Agents that See and Hear". In ICRA, 2023. [Bibtex]
This work is in part supported by ONR MURI N00014-22-1-2740, NSF #2120095, Stanford Institute for Human-Centered AI (HAI), Adobe, Amazon, Bosch, Meta, and Salesforce.