How the FDA Approves AI Medical Devices

The FDA has authorized over 1,400 AI-enabled medical devices, from radiology scanners to smartwatch heart monitors. Here is how the approval process works, which pathways exist, and why it matters for patients.

A Quiet Revolution in Medical Regulation

Artificial intelligence is reshaping medicine—reading X-rays, flagging irregular heartbeats, guiding ultrasound probes in untrained hands. But before any AI-powered tool can touch a patient in the United States, it must pass through the Food and Drug Administration. By the end of 2025, the FDA had authorized more than 1,450 AI- and machine-learning-enabled medical devices, up from just 33 in the two decades between 1995 and 2015. The surge raises an obvious question: how exactly does a software algorithm earn the same regulatory stamp as a surgical scalpel?

Three Pathways to Market

The FDA sorts medical devices into three risk classes. Most AI tools land in Class II (moderate risk) and reach the market through one of three main routes:

  • 510(k) Clearance — The workhorse pathway. A manufacturer demonstrates that its device is "substantially equivalent" to a legally marketed predicate device—one already on the market with the same intended use and similar technology. Nearly all cleared AI devices have entered through 510(k), which avoids the need for large-scale clinical trials.
  • De Novo Classification — Used when no suitable predicate exists. The manufacturer must show the device is low-to-moderate risk and submit clinical evidence. Caption Health's AI-guided cardiac ultrasound, which lets non-experts capture diagnostic-quality heart images, was cleared through this route in 2020.
  • Premarket Approval (PMA) — The strictest path, reserved for high-risk (Class III) devices. It demands rigorous clinical trials proving safety and efficacy. Few AI devices take this expensive route.
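The pathway logic above can be summarized as a simple decision procedure. The sketch below is an illustrative simplification, not the FDA's actual determination process (which involves intended use, special controls, and much more); the function name and inputs are hypothetical.

```python
def choose_pathway(risk_class: int, has_predicate: bool) -> str:
    """Roughly which premarket pathway a device would take.

    risk_class: 1, 2, or 3 (FDA Class I/II/III)
    has_predicate: whether a substantially equivalent device is already marketed
    """
    if risk_class == 3:
        return "PMA"      # high-risk: full premarket approval with clinical trials
    if has_predicate:
        return "510(k)"   # demonstrate substantial equivalence to the predicate
    return "De Novo"      # novel low-to-moderate-risk device, no predicate exists

# Most authorized AI devices are Class II with a predicate:
print(choose_pathway(2, True))   # 510(k)
# Caption Health's AI-guided ultrasound had no predicate:
print(choose_pathway(2, False))  # De Novo
```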

The Breakthrough Device Fast Track

Since 2016, the FDA has granted "Breakthrough Device" designation to more than 1,200 devices, a growing number of them AI-powered. The voluntary program targets tools that treat or diagnose life-threatening conditions and offer significant advantages over existing options. Designation does not guarantee faster clearance, but it opens a direct line to FDA reviewers, allows more flexible evidence requirements, and gives submissions priority review.

Recent examples include RecovryAI, a generative-AI chatbot that guides patients through post-surgical recovery, and Cognita, an AI radiology model that interprets medical images. Both received Breakthrough designation in early 2026, reflecting the FDA's increasing willingness to evaluate AI that goes beyond simple pattern detection.

The Update Problem—and a New Solution

Traditional devices rarely change after approval. AI models, by contrast, can improve continuously as they ingest new data—a trait regulators have dubbed the "learning" problem. Requiring a new submission for every software tweak would bury the FDA in paperwork and slow innovation.

The agency's answer is the Predetermined Change Control Plan (PCCP). Under guidance finalized in late 2024, a PCCP lets manufacturers describe in advance what kinds of modifications they plan to make, how they will validate each change, and how they will assess its impact on safety. Once the FDA approves the plan alongside the original device, covered updates can ship without a new marketing submission. The approach mirrors how the software industry handles iterative releases—while keeping a regulatory leash on safety-critical changes.
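The core idea of a PCCP—pre-authorized change descriptions, each paired with a validation method and an impact assessment—can be modeled as a data structure. This is a conceptual sketch only; the class names, fields, and the example device are all hypothetical and do not reflect any actual FDA submission format.

```python
from dataclasses import dataclass, field

@dataclass
class PlannedChange:
    description: str        # what will change (e.g., retraining on new data)
    validation_method: str  # how the change will be verified before release
    impact_assessment: str  # how its effect on safety/effectiveness is judged

@dataclass
class PCCP:
    device_name: str
    changes: list[PlannedChange] = field(default_factory=list)

    def covers(self, description: str) -> bool:
        # An update ships without a new submission only if it matches
        # a pre-authorized change description in the approved plan.
        return any(c.description == description for c in self.changes)

plan = PCCP("ExampleCAD", [PlannedChange(
    "Retrain lesion detector on additional labeled CT scans",
    "Re-run held-out test set; sensitivity and specificity must not drop",
    "Compare performance across demographic subgroups",
)])

print(plan.covers("Retrain lesion detector on additional labeled CT scans"))  # True
print(plan.covers("Add a new intended use"))  # False — requires a new submission
```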

Where AI Devices Concentrate

Radiology dominates, accounting for roughly 76 percent of all authorized AI devices. Algorithms that detect lung nodules on CT scans, flag strokes on brain MRIs, or reduce imaging noise have become standard in many hospitals. Cardiology follows at about 9 percent, including wearable tools like Apple Watch's atrial fibrillation detector and AliveCor's Kardia Mobile ECG. Neurology, pathology, and ophthalmology round out the landscape.

What Critics Say

Not everyone is satisfied. A 2025 report submitted to the FDA argued that the 510(k) pathway creates an "illusion of safety" because it relies on equivalence to predicates rather than independent clinical proof. Transparency is another concern: many cleared AI devices do not publicly disclose the demographic makeup of their training data, raising questions about whether algorithms perform equally well across different patient populations.

The FDA has responded with guidance urging manufacturers to address bias, transparency, and cybersecurity throughout the device lifecycle. Since mid-2025, all connected-device submissions must include a software bill of materials and a plan for addressing post-market vulnerabilities.

Why It Matters

The regulatory framework the FDA builds now will shape how AI enters clinical practice for decades. Get it right, and algorithms could democratize specialist-level diagnostics in rural clinics and developing countries. Get it wrong, and patients could be harmed by opaque software that no regulator fully understands. As AI devices multiply—295 were authorized in 2025 alone—the balance between speed and safety has never been more consequential.