A Human-AI Collaborative Approach for Designing Sound Awareness Systems (HACS)

Published in CHI, 2024

ACM DL PDF

Abstract
HACS is a sound awareness system that classifies sounds by their characteristics (e.g., a beep) and encourages deaf and hard of hearing (DHH) users to apply contextual knowledge (e.g., location) to recognize specific sound events (e.g., a microwave). To build HACS, we collaborated with American Sign Language (ASL) interpreters to develop a novel sound classification taxonomy. We then evaluated the system with qualitative feedback from DHH individuals and with a sound recognition model; together, these evaluations demonstrated HACS's promise for a more accurate and reliable human-AI sound awareness system.

Key Contributions

  • For this project, I generated a similarity matrix from ASL interpreters' sorting of sound classes and hierarchically clustered it to derive our novel taxonomy (a sketch of this step follows the list).

  • I also generated mel-spectrograms in Python to validate that the convolutional neural network (CNN) we used could algorithmically distinguish the different sounds in our taxonomy (see the second sketch below).
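
The first contribution can be illustrated with SciPy's agglomerative clustering. Everything below is hypothetical: the sound class names, the co-sort counts, and the average-linkage choice are illustrative stand-ins, not the study's actual data or settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical sound classes; counts[i][j] = number of interpreters
# who sorted classes i and j into the same group (illustrative data).
labels = ["beep", "buzz", "chime", "knock", "thud", "hiss"]
counts = np.array([
    [10,  7,  6,  1,  0,  2],
    [ 7, 10,  5,  2,  1,  3],
    [ 6,  5, 10,  0,  1,  1],
    [ 1,  2,  0, 10,  8,  2],
    [ 0,  1,  1,  8, 10,  3],
    [ 2,  3,  1,  2,  3, 10],
])

# Normalize to a similarity matrix in [0, 1], then convert to distances.
similarity = counts / counts.max()
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)

# squareform turns the square matrix into the condensed vector
# that scipy's linkage expects.
condensed = squareform(distance, checks=False)

# Average-linkage hierarchical clustering over the distances.
Z = linkage(condensed, method="average")

# Cut the dendrogram into a small number of top-level categories.
clusters = fcluster(Z, t=2, criterion="maxclust")
for label, cluster_id in zip(labels, clusters):
    print(f"{label}: cluster {cluster_id}")
```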

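The second contribution can be sketched with librosa; the clip path, sample rate, and mel parameters below are placeholders, not the study's configuration. Plotting spectrograms like this across sound classes is one way to check that a CNN's inputs are actually distinguishable.

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Hypothetical clip path; any short recording of a sound event works.
clip_path = "microwave_beep.wav"

# Load the audio (librosa resamples to 22,050 Hz by default).
y, sr = librosa.load(clip_path)

# Compute a mel-scaled spectrogram and convert power to decibels,
# a common input representation for audio CNNs.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128, fmax=8000)
mel_db = librosa.power_to_db(mel, ref=np.max)

# Render and save the spectrogram for visual comparison across classes.
fig, ax = plt.subplots()
img = librosa.display.specshow(mel_db, sr=sr, x_axis="time",
                               y_axis="mel", fmax=8000, ax=ax)
fig.colorbar(img, ax=ax, format="%+2.0f dB")
ax.set_title("Mel-spectrogram")
fig.savefig("mel_spectrogram.png")
```
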
Recommended citation: Jeremy Zhengqi Huang, Reyna Wood, Hriday Chhabria, and Dhruv Jain. 2024. "A Human-AI Collaborative Approach for Designing Sound Awareness Systems." In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). ACM, Article 884, 1–11.
Download Paper