Could AI in Medicine Weaken Physicians’ Skills?
Empirical evidence suggests that AI use can degrade physicians’ skill performance or reduce their opportunities to maintain necessary skills, according to the results of a scoping review published in ESMO Real World Data and Digital Oncology. The review examined whether routine use of AI in clinical care may contribute to de-skilling, or the loss of physician expertise from relying on automated tools. Reviewing literature published between 2015 and 2025, a period during which AI use increased markedly, the authors found early signs that AI can increase automation bias, weaken independent judgment, and reduce opportunities for clinicians to maintain core skills.
The authors, including lead and corresponding study author Pierre E. Heudel, MD, of the Department of Medical Oncology at Centre Léon Bérard in Lyon, France, searched PubMed, Embase, Web of Science, and Scopus, and screened relevant gray literature. Eligible studies included quantitative, qualitative, and mixed-methods research, along with reviews, editorials, and conceptual papers published through August 2025; only articles written in English were considered. They excluded studies that focused only on algorithm development and did not examine physician expertise, as well as those from nonmedical populations. Of 373 records identified, 12 studies were included in the qualitative synthesis. The review was descriptive and did not include a formal risk-of-bias assessment.
Findings By Specialty
One of the clearest findings came from gastroenterology. A multicenter, observational study published in The Lancet Gastroenterology & Hepatology in 2025, which analyzed more than 23,000 colonoscopy procedures, found that endoscopists’ adenoma detection rate dropped from 28.4% before AI introduction to 22.4% when they later worked without AI support after a period of routine AI use. With AI assistance available, the adenoma detection rate was 25.3%. The review cited this as direct evidence that repeated AI use may alter physician performance even after the tool is removed.
Similar patterns were reported in radiology and pathology. A controlled study of 27 radiologists interpreting 720 mammograms found that incorrect AI recommendations increased error rates by 12% to 15%, including among experienced readers. In computational pathology, an experimental study involving 28 pathologists found that under time pressure, participants reversed 7% of initially correct diagnoses after being shown erroneous AI suggestions, suggesting that misleading AI outputs can sway clinical judgment. The review also cited structural changes in cytology after the United Kingdom’s transition to human papillomavirus primary screening, which reduced cytology workloads by 80% to 85% and cut the number of laboratories performing the work from 45 to 8, limiting training opportunities for junior pathologists.
Additionally, the review described situations in which AI improved performance but still introduced new risk. In a randomized crossover study of 40 clinicians diagnosing anterior cruciate ligament rupture on magnetic resonance imaging, AI assistance improved overall diagnostic accuracy from 87.2% to 96.4%. However, 45.5% of the errors made with AI assistance were attributed to automation bias, regardless of clinician experience level.
Comprehensive Results
The review suggested that the risk of de-skilling may be higher when cases are ambiguous, time is limited, or clinicians are repeatedly exposed to AI output. The study authors also noted that workflow and organizational changes may reduce opportunities to practice and maintain basic skills, instead reinforcing dependence on AI.
Most of the studies were outside oncology. Still, the authors say oncology may be especially vulnerable because AI algorithms are already being used in imaging, digital pathology, radiotherapy planning, and treatment decision support. Over time, that could mean clinicians spend less time interpreting clinical findings themselves and more time working from AI-generated results, thus increasing the possibility of automation bias.
Dr. Heudel and colleagues acknowledged that the evidence base remains limited and fragmented, with studies varying in both design and rigor. Even so, they argued that preserving clinical expertise should be treated as part of AI safety.
“AI is reshaping medicine with unprecedented promise, yet clear signals of de-skilling reveal that expertise cannot be taken for granted. Preserving the art and science of clinical judgment will require deliberate choices: embedding AI literacy into training, enforcing regulatory safeguards, and fostering genuine human–AI collaboration. Without such action, progress risks eroding the very expertise on which safe and compassionate patient care depends,” the study authors concluded.
DISCLOSURES: No funding was declared. The authors reported no conflicts of interest.
ASCO AI in Oncology is published by Conexiant under a license arrangement with the American Society of Clinical Oncology, Inc. (ASCO®). The ideas and opinions expressed in ASCO AI in Oncology do not necessarily reflect those of Conexiant or ASCO. For more information, see Policies.