I’d like you to consider this a “yes … and” piece to complement Joe’s post. Instead of refuting everything he’s saying, I’m pointing out some areas where AI may make real, positive impacts for people with disabilities. To be clear, I’m not saying that there aren’t true threats or pressing problems with AI that need to be addressed (there are, and we’ve needed to address them, like, yesterday), but I want to take a little time to talk about what’s possible in the hope that we’ll get there one day.
Alternative text
Joe’s article spends a lot of time addressing computer-vision models’ ability to create alternative text. He raises a lot of valid points about the current state of things. And while computer-vision models continue to improve in the quality and complexity of the information in their descriptions, the results aren’t great. As he rightly points out, the current state of AI-generated image descriptions is quite poor, especially for certain image types, in large part due to the lack of context AI systems have when looking at images (which is a result of having separate “foundation” models for text analysis and image analysis). Today’s models also aren’t trained to distinguish between images that are contextually relevant (and should probably be described) and those that are purely decorative (and might not need a description). Still, I think there’s potential in this area.
As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can step in and offer a starting point for alt text, even if the human reads it and thinks, “What is this BS? That’s not right at all … let me rewrite it,” I think that’s a win.
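As a thought experiment, here’s a minimal sketch of what that starting point could look like, assuming the OpenAI chat-completions API; the model name, prompt, and draft_alt_text function are my own illustrations, not a reference to any shipping tool.

```python
# A minimal sketch, assuming the OpenAI chat-completions API; the model
# name, prompt, and function are illustrative, not a shipping tool.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_alt_text(image_url: str) -> str:
    """Generate a first-draft image description for a human to edit."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Draft concise alt text for this image. "
                         "A human editor will review and revise it."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content
```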
If we can specifically train a model to examine image usage in context, it could help us more quickly determine which images are likely to be decorative and which ones are likely to need a description. That would help clarify which situations call for image descriptions, and it would make authors more effective at making their pages accessible.
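And here’s a companion sketch, under the same assumptions as the previous one, of the in-context triage I’m imagining; a real system would need far more nuance than a two-label answer.

```python
# A companion sketch under the same assumptions as above; the two-label
# scheme ("decorative" vs. "informative") is invented for illustration.
from openai import OpenAI

client = OpenAI()

def classify_image(image_url: str, surrounding_html: str) -> str:
    """Guess whether an image is decorative or informative in context."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Given the surrounding markup below, answer with "
                         "exactly one word, decorative or informative, for "
                         "the attached image.\n\n" + surrounding_html},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content.strip().lower()
```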
Even though complex images like charts and graphs are challenging to describe succinctly (even for humans), the image example shared in the GPT-4 announcement points to an intriguing opportunity. Let’s say you came across a chart whose description was simply the chart’s title and the kind of visualization it was: “Pie chart comparing smartphone usage to feature phone usage in US households earning under $30,000 annually.” (That would be pretty lousy alt text for a chart since it would often leave many questions about the data unanswered, but let’s assume that was the description in place.) If your browser knew that the image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the graphic (a rough sketch of how that might work follows the list):
- Are there more smartphone users than feature phone users?
- How many more?
- Is there a group of people who don’t fall under any of these categories?
- How many is that?
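Here’s a rough sketch of what that “ask the chart” interaction might look like, built on the same assumed API as the sketches above; the chart URL is a placeholder, and in a real browser this would be a native feature rather than a console loop.

```python
# A rough sketch of that interaction, built on the same assumed API as the
# earlier sketches; the chart URL is a placeholder, and in practice this
# would be a browser feature rather than a console loop.
from openai import OpenAI

client = OpenAI()
CHART_URL = "https://example.com/pie-chart.png"  # placeholder

def ask_about_chart(image_url: str, question: str) -> str:
    """Answer a question using only what's visible in the chart."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Answer from this chart alone; if it doesn't "
                         "contain the answer, say so. " + question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

while (question := input("Ask about the chart (blank to quit): ")):
    print(ask_about_chart(CHART_URL, question))
```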
Setting aside for a moment the realities of large language model (LLM) hallucinations, where a model just makes up plausible-sounding “facts,” the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks, as well as for people with various forms of color blindness, cognitive disabilities, and so on. It might also be useful in educational settings, helping people who can see these charts, as is, to understand the data they contain.
What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line on a line graph? What if you could ask it to swap the different lines’ colors so they’d better suit your form of color blindness? What if you could ask it to swap colors for patterns? Given these tools’ chat-based interfaces and our existing ability to manipulate images in today’s AI tools, that seems like a real possibility.
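As a toy illustration of that last idea, here’s a sketch that swaps a chart’s red and green channels using the Pillow library; a real assistive feature would be model-driven and far more nuanced than a blanket channel swap.

```python
# A toy sketch of one such transformation: swapping red and green channels
# with the Pillow library so a red/green line pair reads differently. The
# file names are placeholders; a real feature would be model-driven.
from PIL import Image

def swap_red_green(path_in: str, path_out: str) -> None:
    """Swap the red and green channels of a chart image."""
    image = Image.open(path_in).convert("RGB")
    r, g, b = image.split()
    Image.merge("RGB", (g, r, b)).save(path_out)

swap_red_green("line-graph.png", "line-graph-recolored.png")
```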
Imagine a specially designed model that could extract the data from that chart and convert it to another format. For example, perhaps it could turn that pie chart (or better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be amazing!
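Here’s a sketch of what that extraction might look like, again assuming the chat-completions API from the earlier sketches; the CSV-only prompt is my own assumption, and the extracted data would still need a human sanity check.

```python
# A sketch of chart-to-spreadsheet extraction, again assuming the
# chat-completions API from the earlier sketches; the CSV-only prompt is
# an assumption, and the output would need a human sanity check.
import csv
import io
from openai import OpenAI

client = OpenAI()

def chart_to_rows(image_url: str) -> list[list[str]]:
    """Ask a multimodal model to emit a chart's data as CSV rows."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Extract the data from this chart as CSV with a "
                         "header row. Output only the CSV, nothing else."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return list(csv.reader(io.StringIO(response.choices[0].message.content)))
```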
Matching algorithms
Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book focused on the ways that search engines can foster racism, I believe it’s equally true that all computer models have the potential to amplify conflict, bias, and intolerance. We all know that poorly designed and maintained algorithms are incredibly harmful, whether it’s Twitter surfacing the latest tweet from a certain billionaire, YouTube sending us down QAnon rabbit holes, or Instagram distorting our sense of what natural bodies look like. Many of these problems stem from a lack of diversity in the people who design and build these systems. When these platforms are built with inclusivity baked in, however, there’s real potential for algorithm development to help people with disabilities.
Take Mentra, for example. They are an employment network for neurodivergent people. They match job seekers with potential employers using an algorithm that considers more than 75 data points. On the job-seeker side of things, it considers each candidate’s strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. Being neurodivergent-led, Mentra decided to flip the script on typical employment sites. They reduce the emotional and physical labor on the job-seeker side of things by surfacing matching candidates to employers, who can then reach out to job seekers they’re interested in.
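To make the shape of such a system concrete (and to be clear, this is an invented toy, not Mentra’s actual algorithm), here’s a sketch in which required accommodations act as a hard filter and shared strengths and preferences raise the score.

```python
# Purely illustrative and NOT Mentra's actual algorithm: a toy matcher
# where required accommodations act as a hard filter and shared strengths
# and preferences raise the score.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    strengths: set[str]
    required_accommodations: set[str]
    preferred_accommodations: set[str] = field(default_factory=set)

@dataclass
class Job:
    needed_strengths: set[str]
    available_accommodations: set[str]

def match_score(candidate: Candidate, job: Job) -> float:
    # Hard requirement: every required accommodation must be available.
    if not candidate.required_accommodations <= job.available_accommodations:
        return 0.0
    strength_fit = len(candidate.strengths & job.needed_strengths)
    preference_fit = len(candidate.preferred_accommodations
                         & job.available_accommodations)
    return strength_fit + 0.5 * preference_fit
```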
Involving more people with disabilities in the development of algorithms can lower the likelihood that those algorithms will harm their communities. That’s why diverse teams are so important.
Imagine if a social network’s recommendation engine were tuned to prioritize follow recommendations for people who discuss topics similar to the ones you care about but who aren’t already in your sphere of influence in any meaningful way. For example, if you follow a bunch of nondisabled white men who talk about AI, it could suggest that you also follow AI-focused folks who are disabled or who aren’t white. If you took its recommendations, perhaps you’d get a more holistic and nuanced understanding of what’s happening in the AI field. These same systems should also use their understanding of the biases leveled at particular communities (including, for instance, the disability community) to make sure they aren’t recommending that any of their users follow accounts that perpetuate biases against (or, worse, spew hate toward) those groups.
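Here’s a hand-wavy sketch of what that kind of diversity-aware re-ranking could look like; the account attributes and weights are invented for illustration.

```python
# A hand-wavy sketch of diversity-aware follow recommendations: candidates
# who share your topics but add perspectives missing from your current
# follows score higher. The attributes and weights are invented.
from dataclasses import dataclass

@dataclass
class Account:
    topics: set[str]
    communities: set[str]  # e.g., {"disabled"}, {"Black"}

def rerank(candidates: list[Account], following: list[Account],
           my_topics: set[str]) -> list[Account]:
    seen: set[str] = set()
    for account in following:
        seen |= account.communities
    def score(account: Account) -> float:
        topic_fit = len(account.topics & my_topics)
        novelty = len(account.communities - seen)  # perspectives you lack
        return topic_fit + 2.0 * novelty
    return sorted(candidates, key=score, reverse=True)
```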
Other ways that AI can help people with disabilities
I’m sure I could go on and on about using AI to assist people with disabilities, but I’m going to make this last section into a bit of a lightning round. In no particular order:
- Voice preservation. You may have seen the VALL-E paper or Apple’s Global Accessibility Awareness Day announcement, or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It’s possible to train an artificial intelligence model to mimic your voice, which can be incredibly helpful for those who have ALS (Lou Gehrig’s disease), motor neuron disease, or other medical conditions that can make it difficult to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it’s something that we need to approach responsibly, but the tech has truly transformative potential.
- Voice recognition. Researchers like those working on the Speech Accessibility Project are compensating people with disabilities for their help in collecting recordings of atypical speech. As I type, they are actively recruiting people with Parkinson’s and related conditions, and they have plans to expand to other conditions as the project progresses. This research will lead to more inclusive data sets, which will enable more people with disabilities to use voice assistants, dictation software, and voice-response services, and to control their computers and other devices more effectively using only their voices.
- Text transformation. The most recent generation of LLMs is quite capable of transforming existing text, largely without introducing hallucinations. This is incredibly empowering for people with cognitive disabilities, who may benefit from text summaries or simplified versions of text, or even text that’s been prepared for Bionic Reading. (A rough sketch of this kind of transformation follows this list.)
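Here’s a minimal sketch of that kind of transformation, using the same assumed chat-completions API as the earlier sketches; the prompt wording is mine and is no substitute for vetted plain-language guidelines.

```python
# A minimal sketch of LLM-driven simplification, using the same assumed
# chat-completions API as earlier; the prompt is mine and is no substitute
# for vetted plain-language guidelines.
from openai import OpenAI

client = OpenAI()

def simplify(text: str, reading_level: str = "6th grade") -> str:
    """Rewrite text at a simpler reading level without adding new claims."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's text at a " + reading_level +
                        " reading level. Keep every fact; add nothing new."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```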
The importance of diverse teams and data
We must acknowledge that our differences matter. Our lived experiences are influenced by the intersections of the identities we exist in. These lived experiences, with all their complexities (and joys and pain), are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data used to train new models, and the people who contribute that valuable information need to be compensated for sharing it. Inclusive data sets yield more robust models that foster more equitable outcomes.
Want a model that doesn’t demean or patronize or objectify people with disabilities? Make sure that the training data includes information about disabilities written by people with a range of disabilities.
Want a model that doesn’t use ableist language? You might be able to use existing data sets to build a filter that can flag ableist language before it’s published. That said, when it comes to sensitivity reading, AI models won’t be replacing human copy editors anytime soon.
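As a bare-bones illustration, here’s a sketch of such a filter; the term list is a tiny invented stand-in, and a real filter would draw on vetted data sets and context-aware models rather than simple string matching.

```python
# A bare-bones sketch of an ableist-language flagger; the term list is a
# tiny invented stand-in, and a real filter would draw on vetted data sets
# and context-aware models rather than simple string matching.
import re

FLAGGED_TERMS = {"crazy", "lame", "crippled"}  # placeholder examples

def flag_ableist_language(text: str) -> list[str]:
    """Return flagged terms for a human editor to review in context."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted(set(words) & FLAGGED_TERMS)

print(flag_ableist_language("That deadline is crazy."))  # ['crazy']
```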
Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.
I have no doubt that AI can and will harm people … today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye toward accessibility (and, more broadly, inclusion), make thoughtful, considerate, and intentional changes in our approaches to AI that will reduce harm over time, too. Today, tomorrow, and well into the future.
Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt.