Paragon Health Institute suggests a novel approach for overseeing AI advancements in healthcare, focusing on patient safety while encouraging innovation.

Regulatory bodies overseeing medical devices in the United States may adopt a new framework for artificial intelligence (AI) innovations, drawing inspiration from the Department of Transportation’s policies on AI-equipped vehicles. The proposal comes from Paragon Health Institute, a think tank based in Washington, D.C., which focuses on fostering innovation and competition within the healthcare sector while identifying potential cost reductions. Its latest literature review highlights the distinct challenges and opportunities of regulating software that leverages machine learning and can therefore improve its capabilities over time.

Paragon’s report emphasises the necessity for regulations that not only protect patient safety but also preserve incentives that encourage continual improvement of AI-enabled medical devices. The authors note that regulatory oversight must be structured in a way that does not stifle the drive for software advancement. “Regulation must protect the incentives for software improvement, including but not limited to feature enhancements and the remediation of known software anomalies that do not impair the system’s safety or effectiveness,” the report suggests.

Report author Kev Coleman elaborates on the implications for AI systems as they evolve, particularly regarding mechanisms for addressing deficiencies. He asserts that if new regulatory burdens are imposed without regard for existing improvements, the incentive for companies to resolve issues diminishes considerably. Coleman highlights the importance of allowing AI systems to demonstrate remediation of deficiencies, stating, “Specifically, a regulatory obligation—e.g., a supplemental clinical evaluation—addressing a known AI deficiency should no longer apply to an AI system that can satisfactorily demonstrate that the issue has been successfully remediated.”

The discourse also touches on the notion of ‘hallucinations’ in AI, instances in which AI systems generate inaccurate or misleading outputs. While explicit regulations on this matter do not yet exist, Coleman points to advancements within both commercial and academic spheres that aim to mitigate such occurrences. For instance, researchers at the University of Oxford have developed techniques to assess the uncertainty of AI-generated responses, laying the groundwork for systems that utilise external data sources to validate outputs.

The risk profile associated with AI systems is a pivotal factor in their approval process by the FDA. Coleman explains that the choice of approval pathway hinges on the extent of the risk posed to patients: serious unresolved safety issues are likely to block FDA approval, whereas less significant software defects that do not endanger patient safety may still be approved.

Emerging functionalities within AI systems also necessitate careful regulatory scrutiny, as new capabilities demand fresh FDA endorsements. Coleman elaborates, “There are also improvement scenarios that pertain to neither a defect nor a new function,” pointing to a maturing spectrum of AI systems that might operate with less clinical oversight as they evolve.

The FDA’s historical approach provides valuable lessons for the prospective regulation of AI in healthcare. Coleman underscores the agency’s strategy, stating, “First and foremost, the agency’s approach does not demand perfection from medical devices but does enforce patient safety as its preeminent priority.” He notes that the FDA balances risk against potential benefits, making room for nuanced decision-making regarding advancements in medical devices.

Through their findings, Paragon Health Institute presents a framework for AI regulatory governance that remains integrated within established healthcare agencies, suggesting that while existing guidelines are pertinent, they must incorporate adaptations that address the unique characteristics of AI technologies. This conceptual model aims to facilitate a regulatory pathway that fosters innovation while maintaining safety as a primary concern.

Source: Noah Wire Services
