Headed by Prof. Bryan Pardo, the Interactive Audio Lab is part of the Computer Science Department at Northwestern University. We develop new methods in Machine Learning, Signal Processing, and Human-Computer Interaction to build tools for understanding and manipulating sound.
Ongoing research in the lab is applied to audio scene labeling, audio source separation, inclusive interfaces, new audio production tools, and machine audition models that learn without supervision. For more, see our projects page.
Latest News
New Audio Source Separation Tutorial
Nov 20, 2020
Alisa Liu wins student paper award in DCASE 2020
Nov 1, 2020
Fatemeh Pishdadian defends dissertation
Oct 27, 2020
New paper in Machine Learning For Signal Processing
Sep 21, 2020
Invited Talk at AES 2020
Sep 16, 2020
Pardo on Headroom podcast
Sep 16, 2020
Lab welcomes 2 new members
Aug 30, 2020
Projects
- Cerberus: simultaneous audio separation and transcription
Ethan Manilow, Prem Seetharaman, Bryan Pardo
Cerberus is a single deep learning architecture that can simultaneously separate sources in a musical mixture and transcribe those sources.
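The general idea behind a joint separation-and-transcription architecture is a shared representation of the mixture feeding two output heads: one that predicts per-source spectrogram masks, and one that predicts per-source pitch activations. The sketch below is a minimal, untrained NumPy illustration of that multi-task pattern, not the published Cerberus model; all layer sizes, weights, and variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: frequency bins, time frames,
# number of sources, pitch classes, hidden units.
n_freq, n_frames, n_src, n_pitch, hidden = 64, 100, 3, 88, 32

# Stand-in for a magnitude spectrogram of the mixture.
mix = np.abs(rng.standard_normal((n_frames, n_freq)))

# Random, untrained weights (a real model would learn these).
W_enc = rng.standard_normal((n_freq, hidden)) * 0.1
W_mask = rng.standard_normal((hidden, n_src * n_freq)) * 0.1
W_pitch = rng.standard_normal((hidden, n_src * n_pitch)) * 0.1

# Shared encoder: one dense layer with tanh nonlinearity.
h = np.tanh(mix @ W_enc)                                  # (frames, hidden)

# Separation head: per-source masks, softmax across sources
# so the masks at each time-frequency bin sum to one.
logits = (h @ W_mask).reshape(n_frames, n_src, n_freq)
masks = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
separated = masks * mix[:, None, :]                       # masked magnitudes

# Transcription head: per-source pitch activations via sigmoid.
pitch = 1 / (1 + np.exp(-(h @ W_pitch)))
pitch = pitch.reshape(n_frames, n_src, n_pitch)
```

Because both heads share the encoder, a training loss on either task shapes the same representation, which is the core appeal of doing separation and transcription in one network.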
- Eyes Free Audio Production
Bryan Pardo, Hugo Flores Garcia, Jack Wiig, Abir Saha, Anne Marie Piper
This project focuses on building novel accessible tools for creating audio-based content such as music and podcasts. These tools are designed to support the needs of blind creators, whether working independently or on teams with sighted collaborators.