Two in three physicians (66%) are now using health AI tools, according to a new report from the American Medical Association (AMA), marking a 78% increase from 2023 and signaling a dramatic shift in the integration of artificial intelligence into clinical care.
“Physicians are rapidly shifting from skepticism to pragmatic adoption,” the AMA reports. “Tools that demonstrably save time and improve care quality are driving this transformation.”
Leading the adoption curve are primary care providers, radiologists, and specialists in pathology and dermatology, where AI enhances diagnostic interpretation and documentation efficiency. Among the most commonly adopted technologies are ambient documentation assistants, diagnostic image analyzers, and predictive analytics tools that support early identification of at-risk patients.
This trend reflects a significant decline in concern about job replacement by AI. In 2022, 41% of physicians expressed concern about being replaced by AI; today, that number has decreased to 15%. Instead, physicians increasingly view AI as a form of “augmented intelligence”—a term coined by the AMA to emphasize AI’s supportive rather than autonomous role.
“In clinical settings, AI should enhance—not replace—physician decision-making,” the AMA’s Principles for the Use of Augmented Intelligence state. “It must preserve clinical autonomy, support shared decision-making, and foster a patient-centered ethic.”
The implications extend beyond individual tools. As adoption spreads, so does the urgency for governance and policy development. In its AI Governance Toolkit, the AMA outlines an eight-step governance model that includes AI intake processes, risk evaluation, clinical vetting, and post-deployment oversight.
“AI in health care is not plug-and-play,” the toolkit notes. “It requires structured governance aligned with ethical, legal, and operational policies to ensure trust and clinical safety.”
Still, challenges remain. Many health care organizations report a lack of internal readiness, ranging from inadequate training and infrastructure to uncertainty about liability and regulatory compliance. The AMA advises organizations to begin now, even with limited resources, rather than risk falling behind.
According to the AMA’s How to Develop AI Policies guide, “Organizations that delay formal governance of AI risk falling out of compliance, undertraining their staff, and eroding patient trust.”
Looking ahead, the AMA forecasts that AI-assisted documentation will reach near-universal adoption within five years, with diagnostic support tools and predictive risk scoring close behind. More complex uses, such as generative AI for patient communication or autonomous decision-making, are expected to grow more slowly due to safety and ethical concerns.
Physicians remain central to the future of AI in medicine. “The clinician’s role in oversight and application is indispensable,” the AMA emphasizes. “AI must serve physicians, not the other way around.”
As the line between digital assistant and clinical partner blurs, physicians are urged to engage with their institutions, demand transparent and evidence-based AI tools, and advocate for governance structures that protect patients, preserve trust, and support professional integrity.