Look, Listen, and Act: Towards Audio-Visual Embodied Navigation
Chuang Gan*, Yiwei Zhang*, Jiajun Wu, Boqing Gong, Joshua B. Tenenbaum
Abstract:
A crucial ability of mobile intelligent agents is to integrate evidence from multiple sensory inputs in an environment and to plan a sequence of actions to reach their goals. In this paper, we address the problem of Audio-Visual Embodied Navigation: the task of planning the shortest path from a random starting location in an indoor scene to the sound source, given only raw egocentric visual and audio sensory data. To accomplish this task, the agent is required to learn from multiple modalities, i.e., to relate the audio signal to the visual environment. Here we describe an approach to audio-visual embodied navigation that takes advantage of both visual and audio evidence. Our solution is based on three key ideas: a visual perception mapper module that constructs a spatial memory of the environment, a sound perception module that infers the relative location of the sound source with respect to the agent, and a dynamic path planner that plans a sequence of actions based on the audio-visual observations and the spatial memory of the environment, and then navigates towards the goal. Experimental results on a newly collected Visual-Audio-Room dataset using a simulated multi-modal environment demonstrate the effectiveness of our approach over several competitive baselines.
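To make the three-module design above concrete, below is a minimal Python sketch of the agent loop: map from vision, localize the goal from audio, plan the next action. Every class, method, and environment interface here is a hypothetical placeholder chosen for illustration, assuming an `env` that returns RGB, depth, pose, and binaural audio; it is not the authors' released implementation.

```python
# A minimal sketch of the mapper / sound localizer / planner loop described
# in the abstract. All names and interfaces are illustrative assumptions.

import numpy as np


class VisualMapper:
    """Visual perception mapper: accumulates observations into a spatial memory."""

    def __init__(self, map_size=256):
        # 2D occupancy grid serving as the agent's spatial memory.
        self.occupancy = np.zeros((map_size, map_size), dtype=np.float32)

    def update(self, rgb, depth, pose):
        # In the full method, egocentric depth would be projected into the
        # top-down grid around `pose`; the projection is omitted in this sketch.
        return self.occupancy


class SoundLocalizer:
    """Sound perception module: estimates the source location relative to the agent."""

    def predict_goal(self, audio, pose):
        # In the full method, a learned model would regress the goal location
        # from binaural audio; here we return a dummy estimate.
        return np.zeros(2, dtype=np.float32)


class PathPlanner:
    """Dynamic path planner: picks the next action toward the estimated goal."""

    ACTIONS = ("MOVE_FORWARD", "TURN_LEFT", "TURN_RIGHT", "STOP")

    def plan(self, occupancy, pose, goal):
        # In the full method, a shortest path toward `goal` would be planned on
        # the occupancy grid (e.g., graph search); here we return a fixed action.
        return self.ACTIONS[0]


def navigate(env, max_steps=500):
    """Run one audio-visual navigation episode in a hypothetical `env`."""
    mapper, localizer, planner = VisualMapper(), SoundLocalizer(), PathPlanner()
    obs = env.reset()  # assumed to return rgb, depth, pose, and audio
    for _ in range(max_steps):
        memory = mapper.update(obs["rgb"], obs["depth"], obs["pose"])
        goal = localizer.predict_goal(obs["audio"], obs["pose"])
        action = planner.plan(memory, obs["pose"], goal)
        obs, done = env.step(action)
        if done or action == "STOP":
            break
```

The design choice this sketch highlights is the decoupling of the three modules: the planner only consumes the spatial memory and the estimated goal, so either perception module could be retrained or swapped without touching the others.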
Video:
Paper:
Look, Listen, and Act: Towards Audio-Visual Embodied Navigation
Chuang Gan*, Yiwei Zhang*, Jiajun Wu, Boqing Gong, Joshua Tenenbaum
ICRA 2020 (* indicates equal contributions)
[PDF]
Dataset
Coming soon.
Related Publications
Self-supervised Audio-visual Co-segmentation
Andrew Rouditchenko*, Hang Zhao*, Chuang Gan, Josh McDermott, and Antonio Torralba
ICASSP 2019 (* indicates equal contributions)