I’d like you to consider this a “yes… and” piece to complement Joe’s post. I’m not trying to contradict anything he’s saying; rather, I want to provide some context on initiatives and opportunities where AI can make a difference for people with disabilities. To be clear, I’m taking the time to talk about what’s possible in hopes that we’ll get there one day. That’s not to say there aren’t real problems to solve; there are, and we’ve needed to address them, like, yesterday.
Alt text
Joe’s article spends a good amount of time on computer-vision models’ ability to generate alternative text. He raises a number of valid points about the current state of affairs. And while computer-vision models continue to improve in the quality and level of detail of the descriptions they generate, the results aren’t great yet. He’s right that the current state of image analysis is quite poor, especially for certain image types, in large part due to the lack of context in which to evaluate images (a consequence of having separate “foundation” models for text analysis and image analysis). Today’s models also aren’t trained to distinguish between images that are contextually relevant (and should probably have descriptions) and those that are purely decorative (and might not need a description). Still, I think there’s potential in this space.
As Joe points out, human-in-the-loop editing of alt text should be a given. And if AI can step in and offer a starting point for alt text, even if that starting point amounts to, “This might not be right; treat it as a first draft,” I think that’s a win.
If we can explicitly teach a model to consider image usage in context, it might be able to help us more quickly distinguish between images that are likely to be decorative and those that are likely informative. That would clarify which situations call for image descriptions, and it would improve authors’ efficiency in making their sites more accessible.
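To make that a bit more concrete, here’s a minimal sketch of what context-aware alt text drafting might look like. It’s purely illustrative: `ask_model` is a hypothetical stand-in for whatever multimodal API you have access to, and the prompt wording is just one way to frame the question.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AltSuggestion:
    decorative: bool  # True suggests alt="" is appropriate in this usage
    draft_alt: str    # a starting point for a human editor, never final copy

def suggest_alt(
    image: bytes,
    surrounding_html: str,
    ask_model: Callable[[bytes, str], str],  # hypothetical multimodal API
) -> AltSuggestion:
    """Ask a vision-language model to judge an image *in context*."""
    prompt = (
        "Here is the HTML surrounding an image on a web page:\n"
        f"{surrounding_html}\n\n"
        "1. In THIS context, is the image purely decorative? Answer yes or no.\n"
        "2. If not, draft one sentence of alt text that serves this context."
    )
    reply = ask_model(image, prompt)
    first_line, _, rest = reply.partition("\n")
    return AltSuggestion(
        decorative="yes" in first_line.lower(),
        draft_alt=rest.strip(),
    )
```

The key move is passing the surrounding markup along with the image, so the model judges the image’s role on this particular page rather than in isolation.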
While complex images, like graphs and charts, are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT-4 announcement points to an interesting opportunity as well. Let’s say you came across a chart whose alt text was simply the title of the chart and the kind of visualization it was: Pie chart comparing smartphone use to feature phone use among US households making under $30,000 annually. (That would be pretty bad alt text for a chart, since it would typically leave many questions about the data unanswered, but let’s assume that was the description in place.) If your browser knew that the image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the image (a toy sketch of the idea follows this list):
- Do more people use smartphones or feature phones?
- How many more?
- Is there a group of people that don’t fall into either of these buckets?
- If so, how many is that?
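Here’s a toy sketch of why that would work: once a model has turned the pie chart into structured data, every one of those questions becomes simple arithmetic. The `chart` dictionary below is a hypothetical extraction result with made-up figures, not data from the actual chart.

```python
# Hypothetical output of an onboard model that recognized the pie chart
# and extracted its segments (percentages are placeholders).
chart = {
    "title": "Smartphone vs. feature phone use, US households under $30,000/yr",
    "segments": {"Smartphone": 76, "Feature phone": 15, "Neither": 9},
}
seg = chart["segments"]

# "Do more people use smartphones or feature phones?"
leader = max(("Smartphone", "Feature phone"), key=seg.get)
print(f"More people use {leader.lower()}s.")

# "How many more?"
print(f"{abs(seg['Smartphone'] - seg['Feature phone'])} percentage points more.")

# "Is there a group that doesn't fall into either bucket? How many is that?"
remainder = 100 - seg["Smartphone"] - seg["Feature phone"]
print(f"Yes: {remainder}% of households fall into neither bucket.")
```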
The opportunity to interrogate images and data in this way could be revolutionary for blind and low-vision folks, as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts, helping people who can see these charts, as is, to better understand the data they contain.
What if you could ask your browser to simplify a complex chart? What if you could ask for a line graph to be isolated down to a single line? What if you could ask your browser to transpose the colors of the different lines to work better for the form of color blindness you have? What if you could ask it to swap colors for patterns? Given chat-based interfaces and the ability of today’s AI tools to manipulate images, these seem like real possibilities.
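The color-swapping piece, at least, wouldn’t even need a generative model. Here’s a minimal sketch using Pillow that remaps a chart’s line colors to hues that are easier to distinguish with deuteranopia; the specific colors and the matching tolerance are illustrative choices, not a vetted accessible palette.

```python
from PIL import Image  # pip install pillow

# Illustrative remapping: red and green lines become orange and blue,
# which are easier to tell apart with red-green color blindness.
REMAP = {
    (228, 26, 28): (230, 159, 0),   # red   -> orange
    (77, 175, 74): (0, 114, 178),   # green -> blue
}

def transpose_colors(path_in: str, path_out: str, tolerance: int = 40) -> None:
    img = Image.open(path_in).convert("RGB")

    def close(a: tuple, b: tuple) -> bool:
        # Treat pixels within `tolerance` (summed RGB distance) as a match.
        return sum(abs(x - y) for x, y in zip(a, b)) <= tolerance

    remapped = []
    for px in img.getdata():
        for src, dst in REMAP.items():
            if close(px, src):
                px = dst
                break
        remapped.append(px)
    img.putdata(remapped)
    img.save(path_out)
```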
Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For instance, it might be able to convert that pie chart (or, better yet, a number of pie charts) into more usable (and useful) formats, like spreadsheets. That would be incredible!
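And once the data has been extracted, that last conversion step is nearly trivial. Continuing the hypothetical from the earlier sketch, this writes the (made-up) pie chart segments out as a CSV that any spreadsheet app, or screen-reader-friendly table, can consume:

```python
import csv

# Hypothetical extraction result from the pie chart (placeholder figures).
segments = {"Smartphone": 76, "Feature phone": 15, "Neither": 9}

with open("chart.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Category", "Percent of households"])
    for label, percent in segments.items():
        writer.writerow([label, percent])
```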
Matching algorithms
When Safiya Umoja Noble titled her book Algorithms of Oppression, she was on the nose. While her book focused on the ways that search engines can reinforce racism, I think it’s equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it’s Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. When these algorithms are built with inclusivity in mind, though, there’s real potential for good.
Take Mentra, for example. They are an employment network for neurodivergent people. They match job seekers with potential employers via an algorithm that weighs more than 75 data points. On the job-seeker side, it considers each candidate’s strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, the communication style associated with each job, and other factors. As a neurodivergent-led company, Mentra chose to flip the script of traditional employment sites. They use their algorithm to surface available candidates to companies, who can then reach out to the job seekers they’re interested in, reducing the emotional and physical labor on the job-seeker side of things.
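Mentra hasn’t published the details of its algorithm, and 75-plus data points is far more than a blog post can model, but here’s a purely hypothetical sketch of the general shape of that kind of match: necessary accommodations as a hard requirement, strengths as a positive signal, and environmental conflicts as a penalty.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    strengths: set = field(default_factory=set)
    required_accommodations: set = field(default_factory=set)
    sensitivities: set = field(default_factory=set)  # e.g., {"fluorescent lighting"}

@dataclass
class Role:
    needs: set = field(default_factory=set)
    offered_accommodations: set = field(default_factory=set)
    environment: set = field(default_factory=set)

def match_score(c: Candidate, r: Role) -> float:
    """Toy scoring, not Mentra's actual algorithm."""
    # Hard requirement: every necessary accommodation must be offered.
    if not c.required_accommodations <= r.offered_accommodations:
        return 0.0
    # Reward overlapping strengths; penalize environmental conflicts.
    strength_fit = len(c.strengths & r.needs) / max(len(r.needs), 1)
    conflicts = len(c.sensitivities & r.environment)
    return max(strength_fit - 0.25 * conflicts, 0.0)
```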
When more people with disabilities are involved in the creation of algorithms, that lowers the chances those algorithms will inflict harm on their communities. That’s why diverse teams are so crucial.
Imagine that a social media company’s recommendation engine was tuned to analyze who you’re following and to prioritize follow recommendations for people who talk about similar things but who differ in some key ways from your existing sphere of influence. For example, if you follow a bunch of nondisabled white men who talk about AI, it could suggest that you also follow people who talk about AI but who are disabled or who aren’t white. If you took its recommendations, perhaps you’d gain a more holistic and nuanced understanding of what’s happening in the AI field. These same systems should also use their understanding of biases about particular communities, including, for instance, the disability community, to make sure that they aren’t recommending that any of their users follow accounts that perpetuate biases against (or, worse, spew hate toward) those groups.
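As a toy illustration only (real recommenders work with learned embeddings, not string sets, and the weights here are arbitrary), “similar topics, different perspectives” might look something like this as a reranking rule:

```python
def rerank(candidates, following, topics_of, identities_of, k=5):
    """Rank follow suggestions by shared topics plus identities that are
    under-represented in the user's current graph. Purely illustrative."""
    my_topics = set().union(*(topics_of(u) for u in following))
    seen_identities = set().union(*(identities_of(u) for u in following))

    def score(user):
        topical = len(topics_of(user) & my_topics)          # similar interests
        novel = len(identities_of(user) - seen_identities)  # new perspectives
        return topical + 2 * novel                          # weight novelty up

    return sorted(candidates, key=score, reverse=True)[:k]
```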
Other ways that AI can assist people with disabilities
I’m sure I could go on and on about using AI to assist people with disabilities, but I’m going to make this last section into a bit of a lightning round. In no particular order:
- Voice preservation. You may have heard about the voice-preservation offerings from Microsoft, Acapela, or others, or seen the VALL-E paper or Apple’s Global Accessibility Awareness Day announcement. It’s possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig’s disease), motor neuron disease, or other medical conditions that can lead to an inability to talk. This tech has the potential to be truly transformative, but because it can also be used to create audio deepfakes, we need to approach it responsibly.
- Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of atypical speech. As I type, they are actively seeking out people who have Parkinson’s and related conditions, and they intend to expand the list as the project progresses. This research will result in more inclusive data sets, which will let more people with disabilities use voice assistants, dictation software, and voice-response services, and control their computers and other devices more easily, using just their voices.
- Text transformation. The current generation of LLMs is quite capable of adjusting existing text content without introducing hallucinations. This is incredibly empowering for people with cognitive disabilities who may benefit from text summaries, simplified versions of text, or even text that’s been prepared for Bionic Reading (a rough sketch of that last transform follows this list).
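Bionic Reading itself is a proprietary method, but the general idea, giving the eye a bolded fixation point at the start of each word, is easy to approximate. A rough sketch:

```python
import re

def fixation_html(text: str) -> str:
    """Wrap roughly the first half of each word in <b> tags to create
    fixation points. An approximation of the idea, not the real method."""
    def emphasize(match: re.Match) -> str:
        word = match.group(0)
        split = max(1, round(len(word) / 2))
        return f"<b>{word[:split]}</b>{word[split:]}"
    return re.sub(r"[A-Za-z]+", emphasize, text)

print(fixation_html("Reading long passages can be easier with fixation points."))
```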
The importance of diverse teams and data
Our differences matter. The intersections of the identities we inhabit shape our lived experiences. These lived experiences, with all their complexities (and joys and pain), are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data we use to train new models, and the folks who contribute that data need to be compensated for it. Inclusive data sets yield more robust models, which foster more equitable outcomes.
Want a model that doesn’t demean or patronize or objectify people with disabilities? Make sure the training data includes content about disabilities that’s been authored by people with a range of disabilities.
Want a model that doesn’t use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That said, when it comes to sensitivity reading, AI models won’t be replacing human copy editors anytime soon.
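As a trivial illustration of the interception idea, here’s a sketch that flags ableist terms and offers alternatives. A real filter would be built from vetted data sets and reviewed by the communities concerned; this word list is only a stand-in.

```python
import re

# Stand-in word list; a production filter would draw on curated data sets.
SUGGESTIONS = {
    "crazy": "wild",
    "insane": "unbelievable",
    "lame": "disappointing",
}

def flag_ableist_language(text: str):
    """Yield (position, term, suggested alternative) for each match."""
    pattern = re.compile(r"\b(" + "|".join(SUGGESTIONS) + r")\b", re.IGNORECASE)
    for m in pattern.finditer(text):
        term = m.group(1).lower()
        yield m.start(), term, SUGGESTIONS[term]

for pos, term, alt in flag_ableist_language("That deadline is insane."):
    print(f'Found "{term}" at index {pos}; consider "{alt}".')
```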
Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.
I have no delusions about the dangers AI poses to people right now. But I also believe we can acknowledge those dangers and make thoughtful, deliberate, and intentional changes to our approach to AI that reduce harm over time. Today, tomorrow, and well into the future.
Many thanks to Kartik Sawhney for supporting the development of this article, Ashley Bischoff for providing me with invaluable editorial support, and, of course, Joe Dolson for the prompt.