
Healthcare AI at a crossroads: Uncertainty ahead, but clearer paths for medical devices
Although healthcare AI developers face 1–2 more years of uncertainty as the EU revisits the AI Act, one major change could simplify life for medical‑device startups and SMEs.
AI solutions for healthcare are high-risk under the AI Act
Regulatory uncertainty can slow the sector and create further risk of changes
New risks for patients and providers
Guidance and support for SMEs and startups
Almost all AI solutions relating to healthcare or medical products will be classified as high-risk under the new EU AI Act. Several of the proposed changes specifically affect the high-risk classification, and the proposal also significantly delays implementation and enforcement.
The EU Commission says the changes are necessary and proportionate, citing delays in the drafting of the implementing acts and the technical specifications that were originally expected to be completed by August 2026.
Industry and market experts have repeatedly raised concerns that the technical documentation, including the EU-wide harmonised standards that serve as the gold standard for compliance, will not be ready by the deadline. The proposal is a direct response to market fears that complying with a law before its standards exist is unreasonable.
The review forms part of a large-scale review of digital regulation, the “Digital Omnibus package proposal”, announced in 2025, which aims to simplify and streamline regulation for businesses and individuals alike, boosting competitiveness and lowering compliance costs.
Medical AI remains high‑risk – but assessment moves into MDR/IVDR
One of the most anticipated (and welcomed) changes concerns how medical AI devices will be assessed. Although their classification as high-risk does not change, medical AI devices may now be assessed not under the AI Act itself but directly as part of the MDR/IVDR conformity assessment procedures. This removes earlier fears of parallel conformity assessments: developers of medical AI products can focus on the MDR/IVDR as their main conformity requirements, with fewer duplicate audits and a more coherent regulatory pathway.
Overseeing bodies and industry cherry-pick their narratives
Medical AI remains high‑risk – but assessment moves into MDR/IVDR
One of the most anticipated (and welcomed) changes is the way that medical AI devices will be assessed. Although their classification as High-risk does not change, medical AI devices may now be assessed not as part of the AI act, but directly as part of the MDR/IVDR conformity assessment procedures. Previous fears of parallel conformity assessments are thereby removed, and developers of medical AI products can more easily focus on the MDR/IVDR as their main conformity requirements, with fewer duplicate audits and a suggested regulatory pathway that is more coherent.
Overseeing bodies and industry cherry-pick their narratives
Although the justification for the changes is broadly accepted, the specific proposals have drawn a critical response from regulators. A main selling point of the AI Act is its focus on the rights of the individual and of EU citizens, in contrast to, for example, the United States and China, where economic progress is put above fundamental rights.
In response to the proposed changes, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) have released a joint opinion expressing “sincere concern” about the proposals. The AI Act was intended to be world-leading in promoting safe innovation, but the EDPB and EDPS warn that the changes risk fundamentally softening the protections afforded to the individual and giving too much leeway to large corporations.
The risk is therefore that, with every proposed change, the world-leading AI Act becomes so watered down that it fails in its original aim of safe AI innovation. Yet the medical device proposals are broadly welcomed by industry, which notes that replacing the rigid five-year certificate validity with periodic reviews, and introducing processes for resolving borderline classification decisions, should not only create clarity but also reduce the pressure on Notified Bodies, allowing certification times to shorten.
The European Commission says one of the main aims of the review is to reduce administrative burden and make compliance easier. One concrete example is the introduction of an exemption whereby a deployer of a high-risk product can, under certain conditions, unilaterally bypass the requirement to register in an open registry.
However, in their joint opinion, the EDPB and EDPS are highly critical of this move, arguing that the loss of accountability and transparency cannot be justified. Without registration, national competent authorities need not be informed before a product is placed on the market, and companies gain an undue incentive to classify their products as procedural or lower risk. Most damningly, the EDPB and EDPS note that the cost savings the measure is meant to deliver can amount to as little as 100 EUR per product, further emphasising the lack of value gained.
Regulatory uncertainty can slow the sector and create further risk of changes
Whether or not the proposed changes are seen as proportionate, the shifted enforcement timelines create a longer-than-expected period of regulatory uncertainty.
A delay in implementation extends the regulatory vacuum in which rules apply but are not enforced. More seriously, SMEs that had been working toward compliance with the approved regulation must now bear in mind that the regulation may change. Experts and consultants in the field will be less able to provide concrete answers and recommendations, potentially making an already expensive process even longer.
The proposed changes are not yet confirmed, and no deadline for their adoption or rejection has been agreed, further compounding the uncertainty. An uncertain regulatory landscape gives all actors – whether benevolent or ill-intentioned – the chance to lobby regulators into amending the regulation beyond its intended scope. In an uncertain global landscape of worldwide AI development, this risk should not be underestimated.
New risks for patients and providers
For care providers and patients, the delays to the high-risk rules mean an extension of the grandfathering clause, or grace period, for certain products placed on the market before the rules are enforced.
The deadline by which high-risk products must comply has been moved from August 2026 to December 2027. This means that more products will enter the market under the old regulation, gaining the status of legacy products.
Legacy products will not fall under the new rules unless they undergo “significant changes”. For buyers and users of medical products, this means more products legally on the market that do not meet the strict requirements of the AI Act, undermining trust and complicating procurement processes.
Guidance and support for SMEs and startups
For SMEs, the biggest challenge now is uncertainty: timelines are shifting, requirements are changing, and enforcement is partly delayed. In this context, there is a growing need for practical guidance, testing, and validation support to bridge the gap between high-level regulation and real-world implementation.
TEF-Health, as an EU-wide initiative and recognised implementation support instrument under the AI Act, is addressing this need by providing guidance to SMEs and staying updated on the latest changes.
Although TEF-Health welcomes the innovation-friendly aims of the Omnibus measures, this cannot come at the expense of the fundamental rights of EU citizens, nor inadvertently create more uncertainty for the very companies the changes are meant to support.
TEF-Health will continue to monitor and review the legislation for innovation and safety risks in order to best support SMEs, partners, and the wider healthcare ecosystem.
EU delays disrupt national rulemaking
The Swedish Authority for Privacy Protection (IMY) is the main authority in Sweden handling matters of data privacy and the rights of the individual under the GDPR, but until the matter is settled at the EU level, IMY is limited in what it can say about the practical impacts for companies and public authorities in Sweden.
Rebecka Rosenberg
Project Coordinator for Health Data and Ethics for the Swedish TEF-Health Node
Read more about European AI regulations
- EDPB-EDPS Joint Opinion 1/2026 on the Proposal for a Regulation as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI)
- The official European Commission site on the Digital Omnibus Regulation Proposal and the proposed changes to MDR/IVDR
- MEPs support postponement of certain rules on artificial intelligence
- Council agrees position to streamline rules on Artificial Intelligence