
Headed by Prof. Bryan Pardo, the Interactive Audio Lab is part of the Computer Science Department at Northwestern University. We develop new methods in Machine Learning, Signal Processing, and Human-Computer Interaction to build new tools for understanding and manipulating sound.

Ongoing research in the lab is applied to audio scene labeling, audio source separation, inclusive interfaces, new audio production tools, and machine audition models that learn without supervision. For more, see our projects page.


Projects

  • Audacity logo

    Deep Learning Tools for Audacity

    Hugo Flores Garcia, Aldo Aguilar, Ethan Manilow, Dmitry Vedenko and Bryan Pardo

    We provide a software framework that lets deep learning practitioners easily integrate their own PyTorch models into the open-source Audacity DAW. This lets ML audio researchers put tools in the hands of sound artists without doing DAW-specific development work.

  • System diagram of Cerberus

    Cerberus

    Ethan Manilow, Prem Seetharaman, Bryan Pardo

    Cerberus is a single deep learning architecture that can simultaneously separate sources in a musical mixture and transcribe those sources.

  • Voogle logo

    Voogle

    Max Morrison, Fatemeh Pishdadian, Bongjun Kim, Prem Seetharaman, Madhav Ghei, Bryan Pardo

Voogle is an audio search engine that lets users search a database of sounds by vocally imitating the sound they want or by providing a recorded example of it.
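To illustrate the Deep Learning Tools for Audacity idea, here is a minimal, hypothetical sketch of the practitioner's side of such a workflow: a PyTorch waveform-to-waveform effect is serialized with TorchScript so a host application could load it without any Python environment. The `GainModel` class and file name are illustrative inventions, not part of the actual framework.

```python
import torch
import torch.nn as nn

class GainModel(nn.Module):
    """Toy waveform-to-waveform effect: attenuates the input by 6 dB.
    Stands in for a practitioner's trained PyTorch model."""
    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (channels, samples) float tensor
        return waveform * 0.5

model = GainModel().eval()
scripted = torch.jit.script(model)   # compile to TorchScript
scripted.save("gain_model.pt")       # a serialized file a C++ host could load
```

The key design point this sketches is that TorchScript decouples model authoring (Python) from deployment (the DAW's C++ process), which is what lets researchers skip DAW-specific development.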
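The multi-task structure behind Cerberus can be sketched with plain NumPy: one shared representation of the mixture spectrogram feeds two output heads, one producing per-source separation masks and one producing per-source transcription (piano-roll) activations. The random linear layers below are placeholders for trained network weights; all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_bins, n_sources, n_pitches, n_hidden = 100, 257, 3, 88, 64

# Mixture magnitude spectrogram (random stand-in for real audio features).
X = np.abs(rng.standard_normal((n_frames, n_bins)))

# Shared "encoder": a single random layer standing in for the trained stack.
W_shared = 0.1 * rng.standard_normal((n_bins, n_hidden))
H = np.tanh(X @ W_shared)

# Head 1: per-source soft masks in [0, 1], applied to the mixture.
W_mask = 0.1 * rng.standard_normal((n_hidden, n_bins * n_sources))
masks = (1.0 / (1.0 + np.exp(-(H @ W_mask)))).reshape(n_frames, n_bins, n_sources)
separated = X[:, :, None] * masks            # one masked estimate per source

# Head 2: per-source piano-roll of pitch activation probabilities.
W_roll = 0.1 * rng.standard_normal((n_hidden, n_pitches * n_sources))
piano_roll = (1.0 / (1.0 + np.exp(-(H @ W_roll)))).reshape(n_frames, n_pitches, n_sources)
```

Because both heads read the same shared representation, training them jointly can let separation and transcription inform each other, which is the premise of a single architecture doing both tasks.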
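The query-by-example search that Voogle performs can be sketched as embedding similarity: each database clip and the query are mapped to a fixed-length vector, and the clip whose embedding is most similar to the query's wins. The crude band-energy embedding and the toy "database" below are illustrative assumptions, not Voogle's actual features.

```python
import numpy as np

def embed(signal: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Crude embedding: magnitude spectrum pooled into coarse bands, L2-normalized."""
    spectrum = np.abs(np.fft.rfft(signal))
    vec = np.array([band.mean() for band in np.array_split(spectrum, n_bands)])
    return vec / (np.linalg.norm(vec) + 1e-9)

rng = np.random.default_rng(0)
sr = 8000
t = np.arange(sr) / sr                       # one second of audio
database = {
    "beep": np.sin(2 * np.pi * 700 * t),     # high tone
    "hum":  np.sin(2 * np.pi * 60 * t),      # low mains hum
    "hiss": rng.standard_normal(sr),         # broadband noise
}

# A "vocal imitation" of the hum: same register, slightly off-pitch.
query = np.sin(2 * np.pi * 62 * t)
scores = {name: float(embed(query) @ embed(clip)) for name, clip in database.items()}
best = max(scores, key=scores.get)           # → "hum"
```

A cosine-style similarity over normalized embeddings is forgiving of exactly the mismatch a vocal imitation introduces: the query need not match any clip sample-for-sample, only land close in the embedding space.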

Full List of Projects