TruDi's Injury Reports Expose FDA's AI Blind Spot

A surgical navigation system's post-AI injury spike reveals fundamental gaps in how the FDA evaluates machine learning in medical devices.

Tags: AI Safety · regulation · medical AI · physical AI · FDA

A surgical navigation system called TruDi is now at the center of a Reuters investigation that should concern anyone building software for physical systems. The device, which guides instruments through patients' sinuses, had one reported injury before AI was added in 2021. Since then, at least 10 patients have been injured, according to FDA reports. Two of them suffered strokes and have filed lawsuits.

The company that now owns TruDi, Integra LifeSciences, says there is "no credible evidence" linking the AI to the injuries; the device was merely "in use" when the adverse events occurred. That is technically true and entirely beside the point.

The 510(k) problem

The FDA's 510(k) pathway lets manufacturers skip clinical trials if they can show their device is "substantially equivalent" to one already on the market. This works fine for bandages. It works far less well when you're adding machine learning to a tool that operates millimeters from carotid arteries and skull bases.

Before AI integration, TruDi had seven reported malfunctions and one injury. After: at least 100 malfunctions and 10 injuries. The complications are severe. Punctured skull bases. Cerebrospinal fluid leaking from noses. Damaged carotid arteries. Strokes.

One lawsuit alleges that "the product was arguably safer before integrating changes in the software." Another claims Acclarent, the original manufacturer, "set as a goal only 80 percent accuracy for some of this new technology."

Eighty percent accuracy. For a device telling surgeons where their instruments are inside someone's skull.

The FDA has now cleared at least 1,357 AI-enabled medical devices, double the number from 2022. TruDi isn't an outlier. It's a preview.

Beyond the operating room

The Hacker News discussion drew immediate parallels to Therac-25, the radiation therapy machine whose software bugs killed patients in the 1980s. That disaster eventually forced the FDA to take software quality in medical devices seriously.

We may be watching a similar inflection point for AI. The difference is scale: Therac-25 was one machine. AI is being added to thousands of device categories simultaneously, each cleared through the same predicate-device pathway that treats the AI component as essentially irrelevant.

As we covered in our Physical AI explainer, the regulatory frameworks for AI systems that act in the physical world simply don't exist yet. Georgetown's CSET identified regulatory fragmentation as one of the fundamental barriers to safe physical AI deployment. TruDi is an early, ugly data point proving them right.

The pattern is predictable. A company adds AI to an existing device. The FDA clears it based on the non-AI version. Injury reports rise. The company denies any connection. Eventually, after enough harm, regulators catch up.

One commenter in the HN thread put it bluntly: software engineers can "cause damage exceeding most doctors or lawyers" but face none of the professional accountability.

When your code runs inside someone's head, that's not just a liability question. It's an engineering ethics question the industry has been avoiding for decades.

Our read: TruDi isn't primarily an "AI gone wrong" story. It's a regulatory architecture story. The 510(k) pathway was designed for an era when medical devices were mechanical objects. Adding AI to a device fundamentally changes its risk profile in ways the current framework can't evaluate. The fix isn't banning AI in surgery; it's building evaluation infrastructure that treats AI as what it is: software that can fail in novel, unpredictable ways.

For developers shipping code that touches physical systems—from surgical tools to autonomous vehicles to warehouse robots—TruDi is the case study to remember. The regulatory gap between "software that runs on a server" and "software that runs inside a patient" is closing slowly, and the closure will be written in injury reports.