
Headed by Prof. Bryan Pardo, the Interactive Audio Lab is in the Computer Science Department of Northwestern University. We develop new methods in Machine Learning, Signal Processing, and Human-Computer Interaction to build tools for understanding and manipulating sound.

Ongoing research in the lab is applied to audio scene labeling, audio source separation, inclusive interfaces, new audio production tools, and machine audition models that learn without supervision. For more, see our projects page.


Projects

  • Eyes Free Audio Production

    Bryan Pardo, Jack Wiig, Abir Saha, Robin Brewer, Andrew Karp, Anne Marie Piper

    This project focuses on building novel accessible tools for creating audio-based content like music or podcasts. The tools should support the needs of blind creators, whether working independently or on teams with sighted collaborators.

  • ISED

    Bongjun Kim and Bryan Pardo

    Interactive Sound Event Detector (I-SED) is a human-in-the-loop interface for sound event annotation that helps users quickly label sound events of interest within a lengthy recording. Annotation is performed collaboratively by the user and the machine.

Full List of Projects