If AI Can Diagnose You, Do You Really Need a Doctor?
How AI actually could strengthen the doctor-patient relationship 🤝
ICYMI 👉
🙋🏻♀️ Join me TODAY, Tuesday 7/22 at 5 pm ET, for a rollicking good time! I’ll be talking with Debbie Weil about “[b]oldly exploring old age, from the blessings to the bullshit.” After 50 years as a reporter, editor, author, and podcaster, Debbie now writes a hilarious and honest column called “[Bold] Age” where she confronts big questions about identity, purpose, ambition, grief, and more.
Join us here! 👯
On a recent flight back to DC, I sat next to a woman whose nervous energy was palpable. Fidgety and restless, she abruptly turned to me before takeoff and said, “I’m a wreck about flying. How about you?” I could tell she needed to talk to someone, so we started chatting. She asked me what I do for a living. When I told her I’m a primary care doctor, she reached for her phone and pulled up her favorite health app. "Look," she said, scrolling through AI-generated insights about her anxiety symptoms, sleep patterns, and long list of medical diagnoses. "It's like having a doctor in my pocket. Why do I even need a real one anymore?"
I’m hearing this a lot. Recent headlines have been breathless about AI's diagnostic prowess—ChatGPT outperforming doctors on medical case studies, AI systems detecting diseases radiologists missed, algorithms achieving 90% accuracy while physicians using the same AI tools managed only 76%. The message seems clear: artificial intelligence is coming for healthcare.
Maybe it's exactly what our broken system needs.
But here's what those headlines miss: medicine isn't just about getting the right diagnosis. It's about getting the right diagnosis for the right person at the right time—and then translating that knowledge into a treatment plan that actually works within the messy reality of someone's life. It’s about empowering people with tools and information to manage risk and make everyday decisions. Ultimately, medicine is about relationships.
The question isn't whether AI can diagnose diseases. Increasingly, it can. The real question is whether artificial intelligence can provide what patients actually need most from healthcare: trust, understanding, and the kind of human connection that makes healing possible.

Humanism in Medicine
Let’s talk about a patient I will call Maria, a 45-year-old woman with debilitating fatigue, brain fog, and joint pain. She came to me after three visits with different specialists and a bunch of conflicting diagnoses from various AI-powered symptom checkers.
"The AI keeps telling me it might be lupus or fibromyalgia," she said, pulling up her phone to show me the latest algorithm's assessment. "But the rheumatologist says it's not lupus, the neurologist says it's not MS, and the urgent care doctor just keeps ordering more tests."
What Maria needed wasn't another diagnostic opinion—digital or human. She needed someone to listen to her whole story, to understand how her symptoms connected to her recent divorce, her caregiving responsibilities for her aging mother, and her pattern of putting everyone else's needs before her own. She needed someone to help navigate uncertainty while we figured things out together.
We organized a plan to address her chronic stress and sleep deprivation—one that actually worked with her busy life. We ruled out serious underlying conditions. Three months later, Maria's symptoms had improved significantly. No AI algorithm had suggested that her physical pain might be rooted in the accumulated weight of years of self-neglect. No diagnostic tool had recommended that we start with basic stress management and see what happened.
Or consider my patient Robert who came to me after a heart attack at age 52. His questions weren't just medical. Sure, he wanted to understand his medications and the necessary lifestyle modifications. But he also needed to process his fear of dying, his guilt about his family's financial security, and his struggle to reconcile his identity as a "healthy person" with his new reality as someone with heart disease.
No AI system, however sophisticated, could have provided what Robert needed: someone to sit with his uncertainty, to normalize his fears while taking them seriously, and to help him gradually rebuild confidence in his body's capacity for healing.
Let me be clear: I'm a huge fan of technology. AI has tremendous potential to enhance healthcare by handling routine screenings, catching subtle patterns in imaging, and freeing up physicians to spend more time on what humans do best—connecting, understanding, and healing.
A recent study in Denmark showed that AI could reliably identify about half of all normal chest X-rays, allowing radiologists to focus their expertise on the cases that actually needed human attention. In Sweden, AI-assisted mammography screening identified 20% more breast cancers while reducing radiologist workload by nearly half. This is AI at its best: amplifying human capability rather than replacing human judgment.
But the patients I see don't just want accurate diagnoses—they want to feel heard, understood, and supported through uncertainty. They want someone who can validate their experiences and help them manage the emotional weight of being sick or scared.
Trust is the Glue in Medicine
Here's what decades of research tells us about healthcare outcomes: the quality of the patient-doctor relationship is often as important as the treatment itself. Patients who trust their providers are more likely to follow treatment recommendations, report symptoms honestly, and engage in preventive care. They experience less anxiety, better pain control, and improved outcomes across virtually every measure we can track.
Trust doesn't emerge from diagnostic accuracy alone—though that's certainly important. Trust develops through consistency, transparency, and the accumulated experience of feeling seen and heard as a whole person.
Take Elena, my patient whose diabetes management had been suboptimal for years despite seeing multiple endocrinologists, all working from the same evidence-based guidelines. When she came to my practice, her defensive posture told me everything I needed to know about her past experiences.
Previous doctors had focused on her numbers—blood sugar readings, A1C levels, medication adherence—without ever exploring why she struggled to follow their recommendations.
It turned out Elena had been hiding binge-eating behaviors rather than discussing them openly because a previous provider had made her feel ashamed about her weight and eating patterns. Once we established trust—once she knew I saw her as a person rather than a set of metrics—she became an engaged participant in her care. Her glucose control improved not because I prescribed different medications, but because she felt understood.
Trust is the secret ingredient that makes medicine work. And trust is a uniquely human experience.
The Future We Actually Want
So where does this leave us in the age of AI? I believe we're heading toward a future where artificial intelligence handles what it does best—pattern recognition, data analysis, routine screening—while freeing up human physicians to do what we do best: listen, understand, support, and heal.
Imagine visiting your doctor knowing that AI has already reviewed your lab results, analyzed your symptoms against vast databases of medical knowledge, and flagged potential concerns that deserve human attention. Your physician, no longer buried in routine diagnostic work, can spend the appointment focused entirely on you—your concerns, your questions, your individual circumstances and preferences.
This is a future where AI elevates—not eliminates—the patient-doctor relationship; it's a future where technology amplifies the human elements of healthcare that patients value most. AI might identify your chest pain as likely cardiac in origin, but your doctor helps you understand what that means for your specific situation, your family history, your work demands, and your personal tolerance for risk.
The AI might flag your mammogram for further review, but your physician sits with you through the uncertainty, explains the next steps, and helps you process the emotional weight of waiting for results. The algorithm might suggest a particular medication based on your symptoms and medical history, but your doctor helps you weigh the benefits against potential side effects within the context of your actual life.
AI is poised to transform healthcare in this country but only if it works to empower doctors to deliver the kind of personalized attention and evidence-based care that patients deserve.
What This Means for You
As patients, we shouldn’t have to choose between high-tech and high-touch healthcare. The best care will integrate both, using AI to enhance accuracy and efficiency while preserving the human relationships that make healing possible.
But this future requires something from us as patients: we need to value and advocate for the relational aspects of healthcare, not just the technological ones. We need to choose providers who see us as whole humans, not just data points. We need to engage in the kind of authentic communication that builds trust over time.
Most importantly, we need to remember that being healthy isn't just about having the right diagnosis or the most sophisticated monitoring—it's about being empowered to participate in our own care, supported through uncertainty, and confident that someone is looking out for our wellbeing as complete human beings. (FYI this is what my upcoming book is about.)
The question isn't whether AI will change healthcare—it already is. The question is whether we'll leverage technology to spend more time building trust and understanding, or whether we'll let algorithms become a substitute for the irreplaceable experience of being cared for by another human being.
As for my tech-savvy airplane friend with her AI health app? I told her that her pocket physician might be great at pattern recognition, but it will never notice the stress in her voice or the fidgety restlessness that clued me in to her anxiety. It will never help her celebrate her small victories—like getting through a turbulent plane ride—the way another human can.
She gripped my arm as we landed and said, “I hope you’re right.”
If you enjoyed this post, please share it widely!! And click the ❤️ or 🔄 button below so that more people can discover it on Substack 🙏
Disclaimer: The views expressed here are entirely my own. They are not a substitute for advice from your personal physician.