AMA Issues Policy to Protect Physicians From AI Deepfakes

May 11, 2026 | By ASCO AI Staff

The American Medical Association (AMA) has developed a policy framework to protect physicians from unauthorized AI-generated “deepfakes,” including synthetic images, audio, and video used to impersonate individuals.

Physicians have increasingly been impersonated through deepfakes, with AI-generated content falsely depicting them as endorsing or creating medical misinformation. Such impersonations could undermine patient-physician relationships, erode trust in the broader health-care system, and pose risks to patient safety.

“AI deepfakes that impersonate physicians are not just scams—they are a public health and safety crisis,” stated John Whyte, MD, MPH, CEO of the AMA. “When bad actors exploit a doctor’s identity, they undermine patient trust and can steer people toward harmful, unproven care. We need strong action by federal and state lawmakers to protect physicians’ identities, ensure transparency, and stop this fraud. Safeguarding professional integrity is essential to preserving trust and delivering high-quality care in a rapidly evolving digital landscape.”

The framework, developed by the AMA’s Center for Digital Health and AI, establishes seven key policy principles aimed at protecting physicians’ identities, patient safety, public trust, and professional integrity.

First, the policy framework establishes that a physician's name, image, likeness, voice, and digital replica are protected interests. It urges health institutions, service providers, and other vendors to recognize that these interests may not be used without the physician's consent, and that any AI-generated or manipulated content created without informed consent should be considered deceptive. Consent for AI-created or altered content should be voluntary and revocable, with clear disclosure of how and where a physician's identity will be used. The AMA also called for all AI-generated content to be clearly labeled.

The organization stated that preventing the misuse of physician identities is a shared responsibility and called for a clear, workable process enabling physicians, health systems, platforms, and vendors to report and remove health-related deepfakes. However, the policy also noted that physicians should not bear an undue burden for ongoing monitoring or enforcement.

The AMA also stated that it is prepared to work with lawmakers, regulators, and industry stakeholders to protect patients from the risks posed by deepfakes.

ASCO AI in Oncology is published by Conexiant under a license arrangement with the American Society of Clinical Oncology, Inc. (ASCO®). The ideas and opinions expressed in ASCO AI in Oncology do not necessarily reflect those of Conexiant or ASCO. For more information, see Policies.
