About Me

Hello! I am Rio, a rising undergraduate senior who’s excited to be part of Gallaudet’s 10-week REU AICT summer research program in accessible computing! This summer, outside of research, I deliver pizza for Domino’s and play in an ultimate frisbee league. I have two rabbits at home in Acton, MA, where I currently reside; one of them is pictured above.

About My Mentor

Linda Kozma-Spytek is a Senior Research Audiologist in the Technology Access Program at Gallaudet University. She also co-directs the Deaf and Hard of Hearing Technology Rehabilitation Engineering Research Center at Gallaudet, funded by the National Institute on Disability, Independent Living, and Rehabilitation Research. Her research includes the use of Internet Protocol Captioned Telephone Service (IP CTS), the compatibility of digital cellular telephones with hearing aids, and the accessibility of VoIP telephony applications for individuals with hearing loss. She is active in standards development and public policy addressing communication technology and services, and she serves as a Technical Adviser to the Hearing Loss Association of America (HLAA).

About My Project

Internet Protocol Captioned Telephone Service (IP CTS) is a federally funded telecommunications relay service that allows individuals who can speak, but who have difficulty hearing, to make calls over the internet and read captions of what the other party is saying while listening to them. Many of these services currently rely on speaker-independent automatic speech recognition (ASR) to generate captions. Studies have shown, however, that users are dissatisfied with the accuracy of ASR-generated captions in face-to-face settings such as one-on-one meetings and higher education classes. Little is known about the user experience with ASR captioning in voice-based telecommunications relay services, and few studies have examined speaker behavior when people interact with an ASR captioning system designed specifically for Deaf or Hard of Hearing (DHH) users.

The goal of this project is to investigate how showing ASR captions of the hearing individual’s speech to both members of a hearing-DHH pair on a call affects conversation quality. Specifically, we ask whether hearing individuals, upon seeing captions of their own speech, will alter their speaking behavior in ways that facilitate communication, such as self-correcting ASR captioning errors or changing acoustic properties of their speech (loudness, pitch range, or speech rate) so that the captions become more accurate. Our hopes for this study are threefold: to pursue functional equivalence in IP CTS by better understanding ASR captioning, to encourage a more equal distribution of error-correction work between DHH and hearing individuals by exposing captions to the hearing party, and to inform the design of future ASR systems.
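For the curious: the standard way to quantify how “accurate” ASR captions are is word error rate (WER), the number of substituted, inserted, and deleted words divided by the length of a reference transcript. Here is a minimal, self-contained Python sketch of that calculation; the function name and example sentences are my own illustrations, not anything from the study itself.

```python
# Toy WER calculation via word-level Levenshtein (edit) distance.
# Purely illustrative; not part of the study's actual analysis code.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference length."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One dropped word and one substitution against a 7-word reference: WER ≈ 0.29.
print(wer("please read the captions on your screen",
          "please read captions on your scream"))
```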

My Final Paper

Mutual Caption Access of Hearing Speakers’ Speech in One-on-One Voice Calls: The user experience of hearing individuals and their calling partners with hearing loss

My Blog

Check out my blog here → My Blog!

~Updated weekly~