CMU-HCII-20-110
Human-Computer Interaction Institute
School of Computer Science, Carnegie Mellon University
Accessible User-Generated Social Media
for People with Vision Impairments

Cole Gleason

November 2020

Ph.D. Thesis



Keywords: Social media, image descriptions, accessibility, vision impairments, screen readers, memes, GIFs, accommodations, blindness, Twitter


Social media platforms are becoming less accessible to people with vision impairments as the prevalence of user-generated images and videos increases. For example, over 25% of content on Twitter contains visual media, but I have found that only 0.1% of images contain descriptions for people with vision impairments. Through interviews with some of the few sighted social media users who currently write image descriptions, I have found that poor feature design and a lack of user education are stymieing efforts to increase accessible content on social media platforms.

Some unique categories of media on these platforms, such as memes and animated GIFs, are hard to describe while maintaining their humorous or emotive effects. I explored alternative methods using audio to convey this media in a richer nonvisual format beyond alternative text, and built a system to make these media accessible by reusing templates created by online volunteers. While audio-based methods should not replace textual descriptions of visual media, they can add a new, richer method to convey a similar tone and increase understanding. To address the seemingly insurmountable problem of making all of this user-generated content accessible, I built and deployed Twitter A11y to demonstrate and evaluate multiple methods for sourcing image descriptions, including text recognition, automatic image captioning, and human crowdsourcing. Participants with vision impairments who used Twitter A11y saw a drastic increase in accessible content on their accounts, with every image having a description and the majority being high-quality.

By combining rich human descriptions and automatic methods, my work seeks to make visual media on social media platforms accessible at scale. Automatic methods can close the accessibility gap on these platforms by rehabilitating inaccessible content, while we still work toward the ultimate goal of helping original content authors create accessible content from the start. This work recommends that social media platforms and researchers adopt a model of shared responsibility for the deluge of inaccessible content on technology platforms, requiring all actors to work toward more inclusive online spaces for people with disabilities.


Thesis Committee:
Jeffrey P. Bigham (Co-chair)
Kris M. Kitani (Co-chair)
Patrick Carrington
Chieko Asakawa (CMU / IBM Research)
Meredith Ringel Morris (Microsoft Research)

Jodi Forlizzi, Head, Human-Computer Interaction Institute
Martial Hebert, Dean, School of Computer Science


