Ruohan Gao

Postdoctoral Research Fellow
Department of Computer Science
Stanford University
Email: rhgao[AT]cs[DOT]stanford[DOT]edu

CV     Google Scholar     GitHub     Short Bio

I am a SAIL Postdoctoral Fellow working with Prof. Jiajun Wu, Prof. Fei-Fei Li, and Prof. Silvio Savarese at the Stanford Vision and Learning Lab. I received my Ph.D. from The University of Texas at Austin, where I was advised by Prof. Kristen Grauman, and my B.Eng. from The Chinese University of Hong Kong. My research interests are mainly in computer vision and machine learning; in particular, I am interested in multimodal learning from videos and embodied learning with multiple modalities.

News

  • I am very honored to have received the 2021 Michael H. Granof Award for the University's Best Dissertation!

  • We are organizing the Sight and Sound Workshop at CVPR 2021.

  • We are organizing the Embodied Multimodal Learning Workshop at ICLR 2021.

Publications


    Look and Listen: From Semantic to Spatial Audio-Visual Perception

    Ruohan Gao
    Ph.D. Dissertation, UT Austin, 2021.
    Michael H. Granof Award for the University's Best Dissertation
    UT Austin Outstanding Dissertation Award in Mathematics, Engineering, Physical Science, and Biological and Life Sciences




    VisualVoice: Audio-Visual Speech Separation with Cross-Modal Consistency

    Ruohan Gao and Kristen Grauman.
    Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

    PDF Supp Project Page Code





    Learning to Set Waypoints for Audio-Visual Navigation

    Changan Chen, Sagnik Majumder, Ziad Al-Halah, Ruohan Gao, Santhosh K. Ramakrishnan, Kristen Grauman.
    International Conference on Learning Representations (ICLR), 2021.
    PDF Project Page Code




    VisualEchoes: Spatial Image Representation Learning through Echolocation

    Ruohan Gao, Changan Chen, Ziad Al-Halah, Carl Schissler, Kristen Grauman.
    European Conference on Computer Vision (ECCV), 2020.
    PDF Supp Data Project Page




    Listen to Look: Action Recognition by Previewing Audio

    Ruohan Gao, Tae-Hyun Oh, Kristen Grauman, Lorenzo Torresani.
    Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

    PDF Supp Poster Project Page Code




    Co-Separating Sounds of Visual Objects

    Ruohan Gao and Kristen Grauman.
    International Conference on Computer Vision (ICCV), 2019.

    PDF Supp Poster Project Page Code




    2.5D Visual Sound

    Ruohan Gao and Kristen Grauman.
    Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
    (Oral Presentation, Best Paper Award Finalist)
    PDF Project Page Dataset Code Media Coverage Oral Video




    Learning to Separate Object Sounds by Watching Unlabeled Video

    Ruohan Gao, Rogerio Feris, Kristen Grauman.
    European Conference on Computer Vision (ECCV), 2018.
    (Oral Presentation)
    PDF Supp Poster Project Page Code Oral Video




    ShapeCodes: Self-Supervised Feature Learning by Lifting Views to Viewgrids

    Dinesh Jayaraman, Ruohan Gao, Kristen Grauman.
    European Conference on Computer Vision (ECCV), 2018.
    PDF Supp




    Im2Flow: Motion Hallucination from Static Images for Action Recognition

    Ruohan Gao, Bo Xiong, Kristen Grauman.
    Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
    (Oral Presentation)
    PDF Supp Poster Project Page Code Oral Video




    On-Demand Learning for Deep Image Restoration

    Ruohan Gao and Kristen Grauman.
    International Conference on Computer Vision (ICCV), 2017.

    PDF Supp Poster Project Page Code




    Object-Centric Representation Learning from Unlabeled Videos

    Ruohan Gao, Dinesh Jayaraman, Kristen Grauman.
    Asian Conference on Computer Vision (ACCV), 2016.

    PDF Poster Project Page





Talks

  • Invited Talk at the UTSA AI Consortium Seminar Series, April 2020, "Look to Listen and Listen to Look: Audio-Visual Learning from Video" (PDF, PPT)

  • Invited Talk at the MIT Vision Seminar Series, Sept. 2019, "Learning to See and Hear with Unlabeled Video" (PDF, PPT)

  • Invited Talk at the Sight and Sound Workshop, CVPR'19, "Learning to See and Hear with Unlabeled Video" (PDF, PPT)

  • CVPR'19 Oral, Long Beach, "2.5D Visual Sound" (Video, PDF, PPT)

  • ECCV'18 Oral, Munich, Germany, "Learning to Separate Object Sounds by Watching Unlabeled Video" (Video, PDF, PPT)

  • CVPR'18 Oral, Salt Lake City, "Im2Flow: Motion Hallucination from Static Images for Action Recognition" (Video, PDF)

Media Coverage

  • Facebook AI Blog: New milestones in embodied AI.

  • MIT Technology Review: Deep learning turns mono recordings into immersive sound.

  • Two Minute Papers: This AI produces binaural (2.5D) audio.

  • Facebook AI Blog: Creating 2.5D visual sound for an immersive audio experience.

Undergraduate Research Publications

    Ruohan Gao, Huanle Xu, Pili Hu, Wing Cheong Lau, “Accelerating Graph Mining Algorithms via Uniform Random Edge Sampling”, IEEE ICC, 2016. [PDF]

    Ruohan Gao, Pili Hu, Wing Cheong Lau, “Graph Property Preservation under Community-Based Sampling”, IEEE Globecom, 2015. [PDF]

    Ruohan Gao, Huanle Xu, Pili Hu, Wing Cheong Lau, “Accelerating Graph Mining Algorithms via Uniform Random Edge Sampling (Poster)”, ACM Conference on Online Social Networks (COSN), 2015.