Julie Eshleman, PhD, MEd, BCBA; LEND Fellow, Georgia State University

The promise of artificial intelligence (AI) for the disability community is often framed as a revolutionary leap in independence and autonomy. We see this in AI-powered Assistive Technologies (or AI-AT) that translate sign language in real time or help people with low vision navigate physical spaces. However, as these tools move from the lab to the living room, they enter a complex policy landscape where the stakes can be life-altering.  

As a Georgia LEND Fellow and Research Fellow at Georgia Institute of Technology, I’ve recently conducted an exploratory scoping review of the benefits, risks, and equity barriers associated with AI-AT. What is clear is that while technology is moving at light speed, our civil rights protections are still catching up. A primary example of this “catch-up” is the recently introduced AI Civil Rights Act of 2025 (H.R. 6356 / S. 3308), a bicameral bill, led by Representative Yvette Clarke (D-NY) in the House and Senator Ed Markey (D-MA) in the Senate, that attempts to place human-centered guardrails on the algorithms that increasingly govern our lives and access. 

A Benefit-Risk Paradox 

My research identifies a significant paradox: the very features that make AI-AT beneficial—its ability to personalize and adapt to unique needs—are also the sources of its greatest risks. AI excels at fostering autonomy, independence, and communication access, breaking down barriers that traditional, static AT could not. However, because AI is trained on data that often excludes or “others” disabled people, it carries high risks of bias, discrimination, privacy violations, and surveillance.  

The AI Civil Rights Act is a direct response to this paradox, utilizing deterrent-based “sticks” to establish guardrails around AI use and consequences for violations. To realize a true vision of equity, however, I believe there is also a need for incentive-based “carrots” in this policy space. 

The “Stick”: Accountability and the Private Right of Action 

The AI Civil Rights Act introduces some of the most aggressive deterrents we have seen to date. It explicitly prohibits the use of algorithms that contribute to a disparate impact on protected characteristics, including disability. For the disability community, the bill offers three critical protections already written in the text: 

  • Independent Audits: The bill requires “consequential” AI systems (those impacting health, housing, or employment) to undergo rigorous testing for bias before and after they are deployed.  
  • The Right to Human Review: This legislation affirms that automated systems should not replace human judgment. If an AI denies you a benefit or a job, you have the legal right to have a human review your appeal. 
  • Private Right of Action: Perhaps most significantly, the bill allows people to sue for violations. A private right of action empowers individuals to file a civil lawsuit against another party or a business for alleged harm. This transforms the bill from a regulatory suggestion into an enforceable civil right.  

The “Carrot”: Expanding the Vision for Equity  

While the AI Civil Rights Act provides the necessary “sticks” to prevent some harms, my research suggests that regulation alone does not guarantee access. To achieve true equity, we must look beyond this bill alone to consider future “carrots” that would make AI-AT truly available and inclusive:  

  • Incentivizing Inclusive Design: While the AI Civil Rights Act requires stakeholder consultation, we need future policy that mandates true co-design, ensuring that disabled people are the architects, not just the subjects, of these tools. 
  • Financing and Coverage: Deterring discrimination is moot if a user cannot afford the tool. We must advocate for insurance and Medicaid to cover AI-AT as “durable medical equipment,” especially as AI enables these tools to work better, and for longer, for each user.  
  • Infrastructure Investment: AI requires high-speed connectivity. Future equity means ensuring rural and low-income disabled people aren’t left behind by the digital divide.   

Call to Action: Advocacy in the Age of AI 

The AI Civil Rights Act would create a critical foundation for AI-AT policy, but it needs the voices of disabled people, families, and practitioners to ensure it doesn’t just prevent harm, but actively promotes access.  

For Disabled People and Families: Under this bill, you would have a protected right to human review—don’t be afraid to ask how AI is making decisions about your care.  

For Practitioners and Researchers: Insist on transparency from vendors about how their AI was trained and advocate for policy that treats AI-AT with the same financial urgency as other durable medical equipment. 

For Everyone: Contact your representatives. Let them know that civil rights in the 21st century must include the right to an unbiased, accessible digital world.  

The future of AI-AT depends on a policy framework that is as smart and adaptive as the technology itself. By advocating for both accountability and access, we can ensure that the AI age is one of true inclusion, not automated exclusion. 

Julie Eshleman, PhD, MEd, BCBA, is a Postdoctoral Research Fellow at Georgia Tech and a 2025-26 LEND Fellow at Georgia State University. With a PhD in Sociology and Social Policy, an MEd in Low-Incidence Disabilities and Autism, and a PG Cert in Organizational Business Psychology, she is a scholar-advocate focused on neuroaffirming approaches to inclusion through the use of AI, assistive technologies, cultural shifts, and policy changes. In her free time, Julie enjoys puzzles, being outdoors, and reading or choosing what to add to her ‘to be read’ list.