Bryan Pardo, Hugo Flores Garcia, Jack Wiig, Abir Saha, Anne Marie Piper
This work is supported by NSF Award 1901456
This project focuses on building novel accessible tools for creating audio-based content like music or podcasts. The tools should support the needs of blind creators, whether working independently or on teams with sighted collaborators.
Advancing accessible content production tools requires rethinking the way information is processed, rendered, and interacted with, which brings critical challenges in human-computer interaction, machine learning, and collaboration to the forefront of research. This project will contribute:
novel algorithms for processing audio-based content, enabling new ways of presenting information while addressing open challenges in machine learning and audio processing
accessible interaction techniques that advance blind users' ability to understand, navigate, and edit their work
collaboration support features designed for mixed-ability teams whose members have different levels of vision.
A. Karp and B. Pardo, “HaptEQ: A Collaborative Tool for Visually Impaired Audio Producers,” in Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences, 2017, p. 39.
R. N. Brewer, M. Cartwright, A. Karp, B. Pardo, and A. M. Piper, “An Approach to Audio-Only Editing for Visually Impaired Seniors,” in Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility, 2016, pp. 307–308.