The Agentic AI Revolution in Healthcare: When Computers Start Making Medical Decisions
AI systems are now making initial medical decisions without doctor review. Here is where this is happening, what the results show, and the crucial question about who is accountable when things go wrong.
The deployment of agentic AI in healthcare settings — AI that makes initial clinical decisions without human review of individual cases — has advanced further and faster than most public awareness suggests, and the accountability frameworks that should govern these deployments are lagging behind the technology's operational reality.
The clearest current example is AI-powered diagnostic imaging: radiology AI systems at multiple major hospital networks now route urgent findings to immediate human review and non-urgent findings to a scheduled review queue, without a human radiologist approving each routing decision. The AI makes the triage call autonomously, based on pattern recognition in the image; radiologists review the outputs downstream rather than signing off on each individual decision.
This is agentic AI in a healthcare context — autonomous decision-making with defined boundaries. The AI is not deciding treatment; it is deciding which images need urgent attention. But 'which images need urgent attention' is a consequential clinical decision, and the accountability question is real: if the AI incorrectly classifies an urgent finding as non-urgent, and the delay in human review contributes to patient harm, who bears responsibility?
The current accountability framework in most healthcare systems places the ordering physician and the reviewing radiologist in the accountability chain, with the AI system operator potentially liable under emerging medical device and AI product liability frameworks. But when the AI makes the initial routing decision autonomously and a human radiologist reviews the case hours later believing the AI had appropriately prioritised it, the accountability chain becomes genuinely unclear.
For European AI regulation, the AI Act's classification of medical diagnostic AI as a 'high-risk' AI system requiring specific conformity assessment creates a framework for deployment accountability. Whether that framework is sufficient for agentic applications — where the AI makes sequential autonomous decisions rather than providing a single output for human review — is one of the most actively discussed questions in EU AI regulation in 2026.