Dr. Who?

The AI “Doctor” Is (Not) In

Even before the advent of chatbots, people were turning to the web for answers when they felt ill. Sites such as WebMD offer a convenient first stop in the search for a diagnosis, and many people use them to determine how worried they should be about their symptoms. It can sometimes seem as though a search on your symptoms returns a worst-case scenario—it has become a running joke that, according to Google, something as innocuous as a runny nose is probably a sign of cancer. But despite the potential for catastrophizing, many people continue to find these searches useful. And with the rise of AI chatbots and Google AI Overviews, researching your symptoms has gotten even easier (though whether the advice is becoming more helpful is a very different question).

Proceed with Caution

Many expect that advances in AI will transform the practice of medicine. Some medical offices (my own family doctor among them) use ambient recording in the exam room to generate post-visit summaries, cutting down on administrative burdens. UnitedHealth has 1,000 AI applications in operation across its insurance, health delivery, and pharmacy units performing such tasks as transcribing conversations, summarizing data, processing claims, and running chatbots.1 Studies are being done to test the abilities of AI to interpret medical images, diagnose illnesses, and recommend the most beneficial course of treatment. Some hope that within a few years certain doctor visits might become unnecessary—simply tell a chatbot your symptoms and it will spit out the exact diagnosis and recommended treatment, making healthcare far cheaper and more broadly accessible.

The contemporary practice of medicine is only possible because of technological advances. In some ways, AI is a technology like any other, and there are many reasons to hope that continued development will lead to further improvements in our knowledge of the human body, diseases, and treatments. That said, despite some promising advances, such as a study that found doctors who used GPT-4 performed better on a test of diagnostic reasoning than those who used only conventional resources,2 AI technology is nowhere near ready to make your doctor obsolete. Furthermore, we should seriously question whether doing so would be a good thing. The push to expand AI into healthcare is indicative of some worrying trends that should give us pause, and we should think carefully about how these technologies are being applied.

Misplaced Trust

One area of concern comes from the rise in people turning to chatbots for medical advice. While the accuracy of chatbots has been improving, they are by no means perfect, and they frequently give out incorrect information. Most popular chatbots have not been designed for medical purposes, nor have they been vetted or tested for the accuracy of the advice they dispense. In fact, if the data sets they are trained on are not appropriate for the task, they can lead to prediction models whose results are “unusable.”3 Despite this, a team of five researchers found that people tend to “over trust AI-generated medical responses,” judging AI-dispensed medical information to be just as reliable as a doctor’s advice, even when the information given by the AI was inaccurate. Participants in their study

demonstrated a preference for AI-generated responses.… Participants not only found these low-accuracy AI-generated responses to be valid, trustworthy, and complete/satisfactory but also indicated a high tendency to follow the potentially harmful medical advice and incorrectly seek unnecessary medical attention as a result of the response provided.4

Sites like Google and WebMD make it clear that they provide information only. When using WebMD’s Symptom Checker, before you can even enter any information you will see the disclaimer “This tool does not provide medical advice.”5 At one time, these disclaimers were frequently found on AI chatbots as well. Worryingly, though, a recent paper found that generative AI models from OpenAI, xAI, Anthropic, DeepSeek, and even Google are now dispensing medical advice without any disclaimers,6 reinforcing the erroneous perception that these tools are just as authoritative and trustworthy as your doctor, if not more so.

A misplaced trust in chatbots has the potential to drive a wedge between physicians and their patients. Doctors are highly trained professionals with years of education and experience. They are qualified to interpret symptoms and test results in ways that a layperson is simply not equipped to do. AI chatbots put a great deal of information at a patient’s disposal, but they don’t provide the understanding and experience necessary to interpret it. As Tom Nichols chronicles in his book The Death of Expertise, people’s ability to do their own “research” in the internet age frequently leads to a shallow, if not erroneous, understanding and an unjustified conviction that they know just as much as an expert in the field.

When patients come into their doctor’s office with the results of their latest Google search or chatbot conversation, it frequently leads to frustration on both sides. On the doctor’s side, the sentiment “don’t confuse your Google search with my medical degree” reflects irritation at patients who come to them convinced they have the correct self-diagnosis even when they are wildly off base, or at patients who insist on tests or treatments that would not be of use in their specific situation. Doctors may also have to spend additional time talking patients down, explaining why the rare or serious condition they discovered online is not something they need to be concerned about. These conversations need to be handled skillfully, or else patients may perceive their doctor as paternalistic or dismissive of their concerns. Increased reliance on at-home, AI-generated diagnoses will likely lead to more second-guessing of doctors and a further breakdown of the trust needed for a healthy physician-patient relationship.

Deskilling

Another concern is that studies are finding AI use can have a detrimental effect on doctors’ ability to perform crucial tasks. One study found that after doctors had used an AI program for just three months to help them detect potentially cancerous growths during colonoscopies, they detected fewer lesions once the program was taken away than they had before it was introduced.7 Though this study is only preliminary, it suggests that reliance on AI can actually lower doctors’ performance, a phenomenon known as “deskilling.” This is an especially serious concern for the next generation of doctors who go through training using these kinds of tools—relying on AI throughout training might lead to “never-skilling,” a failure to acquire the skills that can only come through learning unassisted by AI.8

This may not seem like a big deal—after all, once a professional starts using an AI tool, why would they ever go without it? This thought fails to recognize several key factors. First, AI tools are still in development. Doctors need the ability to spot when the AI has made an error, but if physicians are deskilling or never-skilling, this type of error is more likely to go undetected. Second, for AI to improve, it needs human inputs—its outputs will only be as good as the data sets it is trained on. If doctors become less skilled, the hoped-for progress could easily stall as doctors come to rely on what has already been developed and are poorly equipped to help develop anything further. Finally, this also raises concerns about doctors’ ability to practice medicine in situations where they don’t have access to the tools they trained on—it is unrealistic to expect that every hospital system across the country will have access to the same AI tools.

Remember the “A” in AI

At a time when mistrust of systems, institutions, and experts is at an all-time high, it is ironic that people have become more willing to place their trust in AI, not just for something as simple as checking their grammar but for something as crucial and complex as their health. One wonders if that distrust of experts is part of what is driving the embrace of AI. Complex algorithms provide an air of objectivity and competence without the politicization that has led to the distrust in the first place. It seems that many are forgetting that the “A” in AI stands for “artificial.”

While the algorithms behind chatbots are sophisticated, they are developed by people and are not necessarily value-neutral. Nor do they have agency of their own; rather, they reflect the errors and biases of those who created them. AI applications and chatbots are tools, and like any tool they can be used for benefit or harm. As the companies that develop these tools seek to expand their use and gain public trust, we should insist that our trust be earned and not simply ceded to them.

There are certainly ways in which AI tools can benefit the health professions, potentially helping doctors become more accurate in their diagnoses, more aware of which patients are at the greatest risk for diseases, and more knowledgeable about which treatments will be best for which patients. However, these tools are not without risk, and their uncritical adoption could lead to breakdowns within physician-patient relationships as well as the deskilling of medical professionals. AI provides us with useful tools, but they are not, nor should they ever be, a substitute for a trained physician.

Notes
1. Isabelle Bousquette, “UnitedHealth Now Has 1,000 AI Use Cases, Including in Claims,” The Wall Street Journal (May 5, 2025).
2. Ethan Goh et al., “GPT-4 Assistance for Improvement of Physician Performance on Patient Care Tasks: A Randomized Controlled Trial,” Nature Medicine 31 (2025), 1233–1238.
3. Wessel T. Stam et al., “The Prediction of Surgical Complications Using Artificial Intelligence in Patients Undergoing Major Abdominal Surgery: A Systematic Review,” Surgery 171, no. 4 (2022), 1014–1021.
4. Shruthi Shekar et al., “People Over-Trust AI-Generated Medical Responses and View Them to Be as Valid as Doctors, Despite Low Accuracy” (Aug. 11, 2024).
5. “WebMD Symptom Checker” (accessed Sep. 2, 2025).
6. Sonali Sharma et al., “A Systematic Analysis of Declining Medical Safety Messaging in Generative AI Models” (Jul. 8, 2025).
7. Krzysztof Budzyń et al., “Endoscopist Deskilling Risk After Exposure to Artificial Intelligence in Colonoscopy: A Multicentre, Observational Study,” The Lancet Gastroenterology & Hepatology 10, no. 10 (2025), 896–903.
8. Teddy Rosenbluth, “Are A.I. Tools Making Doctors Worse at Their Jobs?” The New York Times (Aug. 29, 2025).

is the Event & Executive Services Manager at The Center for Bioethics and Human Dignity. He holds a BA in psychology from Nyack College and MAs in church history and theological studies from Trinity Evangelical Divinity School.

This article originally appeared in Salvo, Issue #75, Winter 2025. Copyright © 2025 Salvo | www.salvomag.com | https://salvomag.com/article/salvo75/dr-who

