A blue "AI" symbol with a red triangle sign with an exclamation point next to it.

The AI Accessibility Divide: How Emerging Tech Could Create New Barriers for Disabled People

AI often promises efficiency, fairness, and convenience. Yet for disabled people, interacting with these systems reveals a different reality. Take hiring, for example. When you apply for a job today, there’s a good chance an algorithm screens your application before any human sees it.

Before we go further, it’s worth clarifying what we’re actually talking about when we say “AI.” A lot of people don’t realize that AI is essentially an umbrella term, and not all AI technology works the same way or causes the same problems. Traditional AI includes things like hiring algorithms, credit scoring systems, spam filters, and facial recognition systems that make decisions about people’s lives based on patterns in data.

Generative AI is the newer subset that creates content — chatbots like ChatGPT, Gemini, or Claude; image generators like DALL-E and Midjourney; systems that produce text, images, video, and code, all based on what they learned from massive datasets.

What AI Feels Like When You Are Disabled

Both types create problems for disabled people, but in different ways. I’ve experienced this directly when I interact with AI chatbots. I run into responses that feel off in ways that are hard to explain at first but become clear over time. The systems often give me stereotypical responses about disability — the inspiration porn framing, the tragedy narrative, the assumption that my life needs fixing rather than accommodation.

When I try to discuss my experiences as a disabled person, especially anything involving struggle or honest talk about topics like mental health, the chatbot usually blocks the conversation. It treats disability topics like they’re inherently dangerous or taboo, changing the subject or giving overly cautious, evasive responses that feel insulting. It’s like the system assumes talking about disability honestly is automatically a crisis rather than just my reality.

This happens because of how these systems get built and trained. AI learns from massive amounts of internet content. Most of that content reflects prejudice against disability: stereotypes, medical model thinking that views disability as something defective, misunderstandings about what disabled people’s lives actually look like.

AI Bias Is Blocking Disabled Talent From the Workplace

What I’m describing isn’t unique to me. It’s part of a larger pattern of how AI systems exclude and misrepresent disabled people across nearly every application where these technologies touch our lives.

Recently, companies have been using AI to screen video interviews: the software analyzes your face, voice, and eye movements to predict whether you’ll be a good employee. The technology looks fine on paper until you consider what happens when someone stutters, doesn’t make typical eye contact because they’re autistic, or has facial expressions that differ from what the system learned as “normal.”

The AI disqualifies them so they never make it to the actual interview. They usually don’t even know why they were rejected. This is actually happening. When researchers tested ChatGPT 4 by submitting resumes that mentioned disability-related achievements — leadership awards, volunteer work with disability organizations — the AI consistently ranked them lower than similar resumes without those mentions.

AI Expansion Is Making Inequality Worse Every Day

The discrimination against disabled people is bad enough on its own, but it’s just one piece of a bigger picture. As tech companies race to build AI infrastructure as fast as possible, they’re making choices about where to put data centers, how much energy to burn, and where to dump the pollution.

These decisions happen behind closed doors, often through shell companies and nondisclosure agreements, so communities don’t find out until construction starts. But it’s not only about power — it’s also about who pays the environmental and social costs of this technology boom. The data centers and power plants needed for AI are going up in predominantly Black and low-income communities, where people have the least say in stopping them.

The future isn’t hypothetical; it’s happening now. Every day more companies deploy AI hiring tools that weed out disabled applicants. Every month more data centers are built in poor neighborhoods without proper oversight. Every year the surveillance infrastructure becomes more established and harder to reverse. We’re not headed toward this problem. We’re already in it, and it gets worse every day that we pretend innovation justifies any cost.


Noa Everhart (he/him) is a writer and researcher exploring social dynamics, environmental issues, culture, technology, and innovation. Through his work, he addresses emerging challenges and promotes justice across multiple fields. 

About Rooted in Rights

Rooted in Rights exists to amplify the perspectives of the disability community. Blog posts and storyteller videos that we publish and content we re-share on social media do not necessarily reflect the opinions or values of Rooted in Rights, nor do they indicate an endorsement of a program or service by Rooted in Rights. We respect and aim to reflect the diversity of opinions and experiences of the disability community. Rooted in Rights seeks to highlight discussions, not direct them. Learn more about Rooted in Rights.
