The EU’s new AI Act sets the stage for significant regulation in artificial intelligence, while the US and UK explore differing approaches to governance in this emerging domain.
In a significant move for the regulation of artificial intelligence (AI), the EU has approved the AI Act, which is set to have far-reaching implications for companies in the medical device, technology, and pharmaceutical sectors. Meanwhile, the US is navigating an uncharted legal landscape with no comprehensive federal AI legislation, although 45 states have introduced regulatory bills, and 31 have enacted laws pertinent to AI. In contrast, the UK government is pursuing a less formal, principles-based regulatory framework aimed at fostering innovation while navigating the complexities of AI governance.
At the recent J.P. Morgan Healthcare Conference, experts from the law firm Hogan Lovells — Jodi Scott, Penny Powell, and Dr. Matthias Schweiger — discussed these emerging international AI regulatory frameworks. They emphasised the necessity for businesses to be prepared to navigate a patchwork of regulations including the EU’s AI Act, the UK’s AI Regulatory Strategy, and updated guidance from the FDA on AI applications.
In an overview of AI regulation in the European Union, Dr. Matthias Schweiger highlighted that the EU has been at the forefront of establishing formal rules for AI, urging companies operating within its borders to consider how the AI Act will affect their operations. He noted, “Because the regulations have become particularly granular, it may become difficult for regulatory bodies to roll back portions of those rules,” suggesting that stringent regulations might put the EU at a disadvantage against economies that opt for lighter regulation. Nevertheless, he said companies investing in AI in the EU would benefit from a degree of legal certainty.
Penny Powell discussed how the UK’s regulatory approach differs from the EU’s, stressing that the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) is adopting a pro-innovation, principles-based strategy in contrast to the EU’s risk-based model. She indicated that 2025 could usher in substantial changes to the UK’s medical device and AI regulations, with new legislation anticipated to harmonise the rules between the UK and the EU. The MHRA is advancing its strategic approach to AI, building on the tenets laid out in the UK Government’s AI White Paper.
The regulatory scenario in the US was summarised by Jodi Scott, who indicated that the Food and Drug Administration (FDA) may require additional authority from Congress to enhance its capacity to regulate AI. Scott noted, however, that the FDA “has what authority it needs”, having already cleared numerous AI-enabled devices for use. She anticipated an increase in FDA approvals of AI-enabled devices, mentioning a recently released draft guidance aimed at clarifying expectations for companies under existing US law. Additionally, Scott referred to a study published in JAMA that raised concerns about AI applications in medical product development and patient care.
A novel application of AI generating considerable interest is the use of “digital twins” in clinical trials. Scott referred to FDA explanations regarding digital twins, defined as computational models that mirror an individual’s health status, enabling enhanced analysis during clinical trials. Both the UK’s MHRA and the EU have also highlighted their intentions to promote the use of digital twins for clinical purposes.
The MHRA is currently running the “AI Airlock Pilot,” a regulatory sandbox focused on AI-powered medical devices. This initiative aims to facilitate collaboration between manufacturers, the MHRA, the National Health Service, and other stakeholders to address regulatory challenges and risks facing AI device manufacturers and assess their products’ compliance with existing frameworks.
In discussing contracting trends, Powell noted a shift towards greater diversity in the warranties associated with AI-related agreements, asserting that many businesses still rely on standard compliance warranties that might not adequately mitigate future risks. Dr. Schweiger echoed these concerns, referring to the burdensome documentation requirements established by the AI Act in the EU, including access to data essential for addressing regulatory concerns.
The experts reflected on whether detailed regulations might stifle innovation while providing essential clarity regarding compliance. Powell mentioned that the UK’s new Chief Technology Officer is tasked with enhancing digital transformation and supporting the AI industry’s development.
Overall, the panelists projected an optimistic view, suggesting that “The next few years are going to be exciting.” As countries develop their regulatory frameworks and businesses adapt to these evolving standards, the interaction between innovation and regulation is poised to shape the future of AI across multiple sectors.
Source: Noah Wire Services
- https://www.artificial-intelligence-act.com – This URL provides information on the EU Artificial Intelligence Act, including its scope, prohibited AI practices, and requirements for high-risk AI systems.
- https://ai-act-law.eu – This website offers detailed insights into the AI Act, categorizing AI systems based on risk and explaining prohibited practices.
- https://www.whitecase.com/insight-alert/long-awaited-eu-ai-act-becomes-law-after-publication-eus-official-journal – This article discusses the EU AI Act’s entry into force, its provisions, and the penalties for non-compliance.
- https://www.gov.uk/government/publications/ai-regulatory-strategy – This URL provides information on the UK’s AI Regulatory Strategy, focusing on a principles-based approach to foster innovation.
- https://www.fda.gov/news-events/fda-voices/fda-approach-artificial-intelligence – This webpage outlines the FDA’s approach to regulating AI, including guidance on AI-enabled devices.
- https://www.fda.gov/regulatory-information/search-fda-guidance-documents/artificial-intelligence-and-machine-learning-software-as-medical-device – This URL contains FDA guidance on AI and machine learning software as medical devices.
- https://www.mhra.gov.uk/ai-airlock-pilot – This webpage describes the MHRA’s AI Airlock Pilot, a regulatory sandbox for AI-powered medical devices.
- https://www.jama.com – JAMA is a journal that publishes studies on medical topics, including AI applications in healthcare.
- https://www.hoganlovells.com/en/publications – This URL provides access to publications by Hogan Lovells, which may include insights on AI regulatory frameworks.
- https://www.noahwire.com – This is the source of the original article discussing international AI regulatory frameworks.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score:
8
Notes:
The narrative discusses recent developments in AI regulations across the EU, US, and UK, including the EU’s AI Act and the UK’s principles-based approach. It references ongoing discussions and anticipated changes in 2025, indicating a relatively current context. However, without specific dates for all events, it’s difficult to assess absolute freshness.
Quotes check
Score:
6
Notes:
The narrative includes quotes from experts but does not provide specific dates or original sources for these quotes. Without further verification, it’s unclear if these are original or previously published.
Source reliability
Score:
8
Notes:
The narrative originates from JD Supra, a reputable legal news platform. It references well-known entities like Hogan Lovells and the J.P. Morgan Healthcare Conference, enhancing credibility.
Plausibility check
Score:
9
Notes:
The claims about AI regulations and their implications are plausible and align with current trends in AI governance. The discussion of regulatory frameworks in the EU, US, and UK reflects ongoing real-world developments.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
The narrative appears to be current and well-researched, discussing recent AI regulatory developments across major regions. While the quotes lack specific original sources, the overall context and references to reputable entities support its credibility.