Last week marked the 31st Annual International Technology and Persons with Disabilities Conference, the world's largest gathering of people who develop or use assistive technology and the only one hosted by a college: California State University, Northridge (CSUN). It is a critical source of inspiration and information for people with a range of disabilities, but especially for those who are blind or visually impaired.
To join the 5,000 other attendees at the CSUN conference was to be deeply impressed by the university's commitment to worldwide, universal access to internet information. Day 1 alone featured all the major vendors plus nine different sessions related to visual processing, including one from Thailand and another from Canada.
Clearly, progress is being made in web services for the visually impaired. It's encouraging that the dominant commercial screen reader, JAWS (Job Access With Speech), helps to increase international access by offering text-to-speech in 30 different languages. It's also encouraging that ARIA (Accessible Rich Internet Applications), a W3C specification showcased at CSUN, promises robust, text-based access to dynamic web content for users with disabilities, on both mobile and desktop.
Yet a look at CSUN's offerings suggests the limitations of current web-access technology for people with vision issues. The simple fact that the conference is 31 years old indicates the fundamental problem: available technology supports are still aimed solely at reading text. Worse, every screen reader works differently and is maximally compatible only with a particular browser; JAWS, for instance, pairs with Internet Explorer, while its chief competitor, NVDA, prefers Firefox. Even essential legislation only mandates that the operations or results obtained from any device with a keyboard "can be discerned textually," never mind charts, graphics, drawings or photos. Tellingly, the motto of that immense resource, the National Library Service for the Blind and Physically Handicapped, is "That all may read." It advertises itself as "a free braille and talking book library service"; images are never mentioned.
But as every modern librarian and web-surfer knows, images are important sources of information. Consider, for instance, that Facebook users upload more than 350 million photos per day. The Library of Congress hosts 13.7 million images on its site. Yet screen-reading software cannot perceive photos or any other graphics, only text. Facebook's recent addition of automated tools that generate descriptions of photo content, while a definite improvement, is actually a tacit admission of the problem. The descriptions simply convert the visual information to text.
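The point is easy to demonstrate in code. The sketch below, a minimal illustration using only Python's standard library, scans some hypothetical HTML the way assistive software must: an image carries information for a screen reader only if its author supplied a textual equivalent (here, the `alt` attribute); an image without one is simply silent. The sample markup and class name are invented for illustration.

```python
# Minimal sketch: a screen reader can only voice the text an author
# provides for an image (e.g., the alt attribute). Images without a
# text equivalent convey nothing to a blind user.
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collects what a screen reader could (and could not) announce."""
    def __init__(self):
        super().__init__()
        self.described = []  # images with alt text a reader can voice
        self.silent = []     # images with no textual equivalent

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            alt = attrs.get("alt")
            if alt:  # non-empty alt text is all the reader can announce
                self.described.append((attrs.get("src"), alt))
            else:
                self.silent.append(attrs.get("src"))

# Hypothetical page: one image with alt text, one without.
sample = """
<img src="chart.png" alt="Bar chart: web traffic doubled in 2015">
<img src="photo.jpg">
"""

auditor = AltTextAuditor()
auditor.feed(sample)
print("Announced:", auditor.described)
print("Silent:", auditor.silent)
```

Run against the sample, the first image is announced through its alt text while the second remains invisible to the reader, which is exactly the gap the automated description tools try to fill.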
To give them credit, a few sessions at CSUN targeted visuals, such as "Social Media Accessibility – Importance for Professionals and Jobseekers" and "Embedded Described Video Best Practices." But such sessions are few, and they do not solve the crucial difficulty with image perception. Their goal is admirably clear: to make web access perceivable, operable, understandable and robust, the four principles of the Web Content Accessibility Guidelines. The methods, however, remain, lamentably, the same as they have been for more than 30 years.
Joyce Johnston teaches at George Mason University and has been writing and speaking on digital intellectual property and virtual instruction for more than 20 years. As a non-librarian, but a proud member of the Virginia Association of School Librarians, she has provided updates on intellectual property at its annual conference for the past 10 years and serves on the Executive Committee for the World Conference on Educational Media and Technology (aka EdMedia).