**Artificial Intelligence Chatbots Perpetuate Racist Medical Ideas, Stanford Study Warns**
Artificial intelligence-powered chatbots used in healthcare settings may be perpetuating racist, debunked medical beliefs, according to a study by researchers at Stanford School of Medicine. The study raises concerns that these tools could exacerbate the health disparities faced by Black patients. The chatbots tested, including ChatGPT and Google’s Bard, answered the researchers’ queries with a series of misconceptions and falsehoods about Black patients. They failed to respond accurately to medical questions about kidney function, lung capacity, and skin thickness, often reinforcing long-debunked beliefs about biological differences between Black and white people. Such beliefs have contributed to lower pain ratings, misdiagnosis, and inadequate treatment for Black patients. Experts warn that deploying such chatbot systems could amplify existing forms of medical racism.
The researchers expressed concern about the real-world harms these chatbots could cause as more physicians rely on them for daily tasks such as communicating with patients and dealing with health insurers; some dermatology patients have reportedly even used chatbots to help self-diagnose their symptoms. In testing, the chatbots frequently provided erroneous information about differences in skin thickness and lung capacity between Black and white individuals, despite there being no factual basis for such claims. The researchers also probed the chatbots’ responses about a disproven, race-based method of estimating kidney function, which elicited false assertions that Black people have higher creatinine levels because of differences in muscle mass.
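The study does not name the specific formula in this summary, but the best-known example of race-based kidney-function estimation is the legacy MDRD eGFR equation, which multiplied the result by a fixed coefficient for Black patients; the 2021 CKD-EPI refit removed race from the calculation entirely. A minimal sketch of that legacy equation illustrates why the adjustment mattered clinically:

```python
def egfr_mdrd(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Legacy IDMS-traceable MDRD eGFR estimate (mL/min/1.73 m^2).

    Shown for illustration only: the race coefficient (x1.212) has been
    abandoned, and the 2021 CKD-EPI equations omit race entirely.
    """
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212  # disproven race adjustment
    return egfr

# Same creatinine, same age and sex: the race coefficient inflates the
# estimate by about 21%, which could delay diagnosis of kidney disease
# or referral for transplant in Black patients.
base = egfr_mdrd(1.0, 50, female=False, black=False)
adjusted = egfr_mdrd(1.0, 50, female=False, black=True)
```

Because a higher eGFR reads as healthier kidneys, the inflated estimate systematically understated disease severity for Black patients, which is why chatbots reproducing this reasoning is a concrete harm rather than an abstract one.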
Both OpenAI and Google responded to the study by highlighting their efforts to reduce bias in their models, and both emphasized that these chatbots are not meant to replace medical professionals or be relied upon for medical advice. Separate testing of similar language models has shown promise in helping human doctors diagnose complex cases, but diagnostic accuracy varied, and potential biases and blind spots in these models must be investigated to ensure their fairness and reliability.
AI algorithms have been used in healthcare settings for years and have been shown to perpetuate racial biases, as in documented cases where algorithms favored white patients over Black patients. Discrimination and bias in healthcare have contributed to higher rates of chronic health conditions among Black individuals, including asthma, diabetes, high blood pressure, Alzheimer’s disease, and COVID-19. The Stanford study highlights the danger of relying on chatbots and similar AI models in clinical decision-making: they can steer physicians toward biased treatment decisions.
While healthcare systems and technology companies have invested heavily in generative AI, many of these tools are still in development. The Mayo Clinic, for example, has been piloting language models such as Google’s Med-PaLM for tasks like form filling, but it emphasizes the importance of independently testing commercial AI products for fairness, equity, and safety. Mayo Clinic Platform’s president distinguishes between widely used chatbots trained on internet content and models tailored to clinicians and trained on medical literature. The clinic believes that large medical models trained on the patient experience could eventually augment human decision-making, and it plans to rigorously test and evaluate such models before deploying them with clinicians.
Overall, the Stanford study sheds light on the concerning potential for AI chatbots to perpetuate racist medical beliefs and worsen existing health disparities, particularly for Black patients. It underscores the need for continued research and improvement in AI models to ensure they are fair, accurate, and unbiased. Medical professionals, healthcare systems, and technology companies must collaborate so that AI tools contribute to equitable healthcare delivery rather than amplifying existing racial biases in medical practice.