CMU-HCII-20-110 Human-Computer Interaction Institute School of Computer Science, Carnegie Mellon University
Accessible User-Generated Social Media Cole Gleason November 2020 Ph.D. Thesis
Some unique categories of media on these platforms, such as memes and animated GIFs, are hard to describe while maintaining their humorous or emotive effects. I explored alternative methods using audio to convey this media in a richer nonvisual format beyond alternative text, and built a system to make these media accessible by re-using templates created by online volunteers. While audio-based methods should not replace textual descriptions of visual media, they can add a new, richer method to convey a similar tone and increase understanding. To address the seemingly insurmountable problem of making all of this user-generated content accessible, I built and deployed Twitter A11y to demonstrate and evaluate multiple methods for sourcing image descriptions, including text recognition, automatic image captioning, and human crowdsourcing. Participants with vision impairments who used Twitter A11y saw a drastic increase in accessible content on their accounts, with every image having a description and a majority being high-quality. By combining rich human descriptions and automatic methods, my work seeks to make visual media on social media platforms accessible at scale. Through automatic methods we can close the accessibility gap on this platform by rehabilitating inaccessible content, while still working towards the ultimate goal of helping original content authors create accessible content from the start. This work recommends that social media platforms and researchers enact a model of shared responsibility for the deluge of inaccessible content on technology platforms, requiring all actors to work towards more inclusive online spaces for people with disabilities.
Jodi Forlizzi, Head, Human-Computer Interaction Institute