We must not let AI ‘pull the doctor out of the visit’ for low-income patients | Leah Goodridge and Oni Blackstock

Generative AI is being pushed into healthcare – and diagnostic risks may deepen the class divide

In southern California, where rates of homelessness are among the highest in the nation, a private company, Akido Labs, is running clinics for unhoused patients and others with low incomes. The caveat? The patients are seen by medical assistants who use artificial intelligence (AI) to listen to the conversations, then spit out potential diagnoses and treatment plans, which are then reviewed by a doctor. The company’s goal, its chief technology officer told the MIT Technology Review, is to “pull the doctor out of the visit”.

This is dangerous. Yet it’s part of a larger trend of generative AI being pushed into healthcare for medical professionals. In 2025, a survey by the American Medical Association reported that two out of three physicians used AI to assist with their daily work, including diagnosing patients. One AI startup raised $200m to provide medical professionals with an app dubbed “ChatGPT for doctors”. US lawmakers are considering a bill that would authorize AI to prescribe medication. While this trend of AI in healthcare affects almost all patients, it has a deeper impact on people with low incomes, who already face substantial barriers to care and higher rates of mistreatment in healthcare settings. People who are unhoused and have low incomes should not be testing grounds for AI in healthcare. Instead, their voices and priorities should drive if, how, and when AI is implemented in their care.

Leah Goodridge is a lawyer who worked in homeless prevention litigation for 12 years

Oni Blackstock, MD, MHS, is a physician, founder and executive director of Health Justice, and a Public Voices Fellow on technology in the public interest with The OpEd Project