An AI Conversation Unlike Any Other

How the EU regulates AI in healthcare could have implications far beyond the continent. Brunswick’s Francesca Scassellati Sforzolini reports.

AI in healthcare isn’t like AI in other industries. The broad storyline may be the same—a roster of revolutionary benefits alongside some serious risks—but the stakes are higher. It involves our most sensitive, personal data. Its consequences can be life or death.

Like the rest of the world, the EU is grappling with how to regulate AI. But unlike the rest of the world, Europe’s legislation has already been drafted and large sections of it have already been agreed by policymakers. When it becomes law, the EU AI Act—which will apply to many sectors other than healthcare—is likely to be the first clear framework of its kind, and perhaps the world’s most comprehensive and influential legislation on artificial intelligence.

When the EU passed its General Data Protection Regulation (GDPR) in 2016, it set a new—and very high—standard for online privacy. Many companies chose to follow the EU’s laws even when operating outside of Europe; adhering to a single standard globally was preferable to following a patchwork of standards across different jurisdictions. With artificial intelligence, Europe could once more set the global rules of the game.

Reasons to get excited about AI in healthcare

In announcing the EU AI Act, the European Parliament listed AI’s “many benefits.” First on that list, ahead of sustainable energy, more efficient manufacturing and cleaner transportation, was “better healthcare.”

AI’s proponents might call “better” an undersell. The New England Journal of Medicine wrote that AI “holds tremendous promise for improving nearly all aspects of how we deliver and receive care.” AI-powered tools could allow patients to self-diagnose—using a phone to check whether a mole is cancerous, for example—and save healthcare professionals’ time by cutting down on paperwork and by helping review X-rays, MRIs and retinal scans. AI-fueled algorithms could transform how we research and treat rare diseases, identifying patterns that would be difficult or impossible for human analysts to detect, while predictive modeling could forecast how those diseases progress, enabling more targeted treatment.

The list goes on. AI could help create virtual trials that supplement clinical trials—meaning that medicines, vaccines, devices and procedures could reach people faster and cost less to develop. “Digital twins” could help forecast how a patient might respond to a procedure or medication. “Can AI help heal the world?” The Economist asked in 2022—a question that already felt rhetorical then, after all the revolutionary technologies it profiled.

Causes for concern

The main risks of AI in healthcare, according to a 2022 report by the European Parliament, are: “potential errors and patient harm; risk of bias and increased health inequalities; lack of transparency and trust; and vulnerability to hacking and data privacy breaches.” While some of those risks aren’t unique to healthcare, their consequences are.

Take the risk of bias, for instance. AI relies on large amounts of data; in healthcare, that means personal health data. Big data pools could reflect and perpetuate different types of bias, particularly affecting vulnerable groups and minorities.

My colleague Ben Hirschler has highlighted the lack of diversity in clinical trials: “Clinical trials are falling short by routinely failing to accurately reflect the diversity of different patient groups,” he wrote. “This means the data they generate is not painting the full picture of the good (and sometimes bad) that modern medicine can do.” AI could supercharge the dissemination and misapplication of such biased data.

There’s a story, for instance, about an AI chatbot being asked: “The doctor and the nurse got married and she got pregnant. Who became pregnant?” The chatbot’s reply: the nurse. Informed that the nurse was a man, the chatbot didn’t understand the question—it couldn’t grasp that the doctor was a woman or that the nurse was a man.

Such biased and unrepresentative data poses risks to patients. “This can result in biased AI-enabled decisions, which can further perpetuate healthcare disparities, discrimination, unequal treatment and unequal access to healthcare,” says Milana Trucl, Policy Officer for the European Patients Forum, a nonprofit with members who collectively represent more than 150 million patients across Europe.
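To make the mechanism concrete, here is a minimal sketch in Python, using synthetic data and the scikit-learn library, of how an unrepresentative training pool can produce exactly that kind of disparity. The “biomarker,” the groups and the numbers are all invented for illustration:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, group):
    # Hypothetical: disease status depends on a biomarker, but the
    # relevant threshold sits in a different place for group B.
    x = rng.normal(size=(n, 1))
    threshold = 0.0 if group == "A" else 1.5
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Training pool: 95% group A, 5% group B -- the unrepresentative "trial."
xa, ya = make_patients(950, "A")
xb, yb = make_patients(50, "B")
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Held-out patients from each group.
for group in ("A", "B"):
    xt, yt = make_patients(1000, group)
    print(f"accuracy for group {group}: {model.score(xt, yt):.2f}")

# Typical output: near-perfect accuracy for group A, far worse for group B.
# The model has, in effect, learned only the majority group's pattern.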

Another worry is protecting privacy and confidentiality for both patients and companies. Patients obviously don’t want their data used or accessed without their consent, or for purposes they don’t agree with. “Limited patient involvement during AI development also poses a risk, as it may result in solutions that inadequately address specific patient needs,” says Trucl. “Lack of clear rules on accountability, human oversight and transparency, including traceability and explainability, were also identified among the major risks, as well as leakage, unauthorized disclosure, or unintended use of health data. Such mishandling and misuse of sensitive data could have far-reaching consequences.”

For companies, protecting intellectual property and trade secrets is paramount. Generative AI in particular can pose challenges to both: “the AI system is not designed to differentiate between confidential and non-confidential information,” Reuters reported in December 2023. “The input will not qualify as a trade secret, and the output will not qualify as a trade secret…”

As AI integrates into more aspects of care, there’s also a fear of AI replacing—rather than merely helping—healthcare providers. As the technology makes care more efficient, some worry it could also make it less empathetic—driven more by data than by doctors. Sara Roda, Senior Policy Advisor for CPME [Comité Permanent des Médecins Européens, or Standing Committee of European Doctors], says the organization prefers the term “augmented intelligence”: “AI should be used to enhance physicians’ expertise and improve specific capabilities.” Having physicians “co-design” AI in healthcare will help ensure that complementary relationship, she says. Another non-negotiable: “the possibility for a human to intervene and (safely) stop the operation of an AI, without exceptions.”

A final challenge is perhaps the most fundamental: trust, or the lack of it. That could improve as patients, healthcare professionals and policymakers become “AI literate”—aware of what AI means for them, their work, their data, their lives. Proving that AI-powered solutions and tools are reliable will also help. But building that literacy and proving that reliability are far easier said than done.

All of those challenges exist in any country looking to integrate AI into healthcare—they aren’t distinctly European. But these three are:

1. How will the new regulation fit into what’s already been written?

How AI-specific legislation will fit with the healthcare sector’s existing, already demanding rulebook—in particular the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR)—remains an open question. So does how it will fit with upcoming legislation, such as the regulation that will establish the European Health Data Space.

“The AI Act represents a significant step forward in the regulation of healthcare data protection,” says Patrick Boisseau, Director General at MedTech Europe, the leading trade association for the medical technology industry. “It is imperative that we reinforce existing measures and align with sectoral rules such as MDR/IVDR, but also horizontal legislation such as GDPR, intellectual property rights and trade secrets rights directive.”

Sofia Palmieri, a researcher at Ghent University focused on the legal and ethical challenges of AI in healthcare, fears that “despite the MDR being a rather young regulation, this might not be able to encapsulate the specific features that justified special regulation for AI in the first place. I am particularly worried about the clinical validation of AI systems that keep on learning from the new data they receive. There is a need to scrutinize the current medical device regulation to assess whether it can provide clinical safety and effectiveness for this type of AI system over time.”

As both Boisseau and Palmieri suggest, the legislative landscape is complicated—and only set to become more so.

2. Irreconcilable aims?

Two of the EU’s stated aims are to “promote … the well-being of its citizens” as well as “scientific and technological progress.” Such beacons seemingly argue for light-touch legislation, given AI’s potential to power progress in almost every industry—healthcare included. Adding to that argument is the sense that Europe is already lagging behind the US and China in AI innovation and development, and that its heavily regulated healthcare industry is struggling in some key areas of innovation, as Europe’s antibiotic shortages last winter highlighted.

On the other hand, two of the EU’s founding values are “freedom” and “human rights,” and Europe has a reputation for passing tough laws—like GDPR—to protect individual rights and privacy. The nature of artificial intelligence unavoidably creates new, profound challenges to many rights and freedoms. How to balance those competing aims isn’t necessarily unique to Europe, but it is pronounced here.

3. Deliberate to a fault?

Effy Vayena, the founding professor of the health ethics and policy lab at ETH Zurich, a Swiss university, and Andrew Morris, director of Health Data Research UK, a scientific institute, wrote that AI legislation “will have to keep pace with ongoing technological developments—which is not happening at present. It will also need to take account of the dynamic nature of algorithms, which learn and change over time.”

That challenge is magnified by Europe’s legislative process. Take the EU AI Act: the European Commission proposed it in 2021, the Council of the EU adopted its broad approach in 2022, and in mid-2023 the European Parliament stated its position. The three bodies are now negotiating the final text, which will then become law, likely going into effect two or three years later. GDPR, by comparison, had regulatory roots reaching back more than two decades.

Proponents say such a measured approach produces effective, considered legislation. Critics, on the other hand, worry such an approach is too plodding for the technology it is trying to regulate. It’s been dubbed “the Red Queen Problem,” referring to the Red Queen’s advice to Alice in Through the Looking Glass: “Now here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run twice as fast as that!”

Policymakers need all the support they can get

A few months ago the WHO published a list of considerations for regulating AI in the health sector. The final step on that list was: “Fostering collaboration between regulatory bodies, patients, healthcare professionals, industry representatives and government partners.” Such collaboration, according to Patrick Boisseau of MedTech Europe, is integral to the EU achieving a delicate balance: protecting personal health data, prioritizing innovation and improving health outcomes.

As regulators explore important, complex topics in healthcare—from “regulatory sandboxes” to what differentiates “medium risk” and “high risk” in AI—they are open to engagement. The ones I have spoken with in Brussels appreciate the complexity and importance of these issues, and welcome constructive conversations.

The conversations I had with different stakeholder groups—patients and doctors, the private sector and academia—for this story underscored just how nuanced that debate is. Sara Roda of the CPME mentioned the importance of an “AI auditor” to “ensure there is an independent and external report on the reliability and trustworthiness of an AI system.” Milana Trucl of the European Patients Forum mentioned the importance of “harmonized anonymization and pseudonymization techniques” to protect patient data. Each group was broadly supportive of the legislation but highlighted the importance of the details.
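For readers unfamiliar with the technique Trucl mentions, here is a minimal sketch of pseudonymization in Python: a keyed hash replaces the patient identifier, so records can be linked consistently without exposing who the patient is. The key, the identifier format and the record layout are all hypothetical:

import hmac
import hashlib

SECRET_KEY = b"held-by-the-data-controller-only"  # hypothetical key management

def pseudonymize(patient_id: str) -> str:
    # Derive a stable pseudonym; without the key, it cannot be reversed
    # or recomputed from the original identifier.
    digest = hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"patient_id": "PATIENT-0001", "diagnosis": "type 2 diabetes"}
shared = {"pseudonym": pseudonymize(record["patient_id"]),
          "diagnosis": record["diagnosis"]}
print(shared)  # the same patient always maps to the same pseudonym

Unlike anonymization, pseudonymization is reversible by whoever holds the key, which is why GDPR still treats pseudonymized data as personal data.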

For business, the call is clear: articulate the value of AI in healthcare to both policymakers and the general public—and do so while navigating the sometimes conflicting aims of public policy. Communicating that value will require a willingness to adjust as the landscape changes, and a message that is credible, clear and relevant.

For all the debate about how Europe should regulate AI, one aspect isn’t in dispute: the debate remains ongoing. It is a discussion healthcare organizations have an opportunity to join—and possibly help shape.

AI in Healthcare: Early Days?

Google searches for AI in medicine and healthcare have skyrocketed since the launch of ChatGPT in November 2022, yet artificial intelligence has been used in medicine since the 1970s. Admittedly, the cutting-edge AI of the 1970s sounds rudimentary by today’s standards. One of the earliest uses—and some say the earliest use—of AI in healthcare was MYCIN, a computer program developed at Stanford University. It asked doctors a series of yes/no questions, compared those answers to a database of known bacterial infections, and then ranked the likelihood of potential diagnoses. MYCIN was used in several hospitals in the 1970s and ’80s; however, the technology—and AI in healthcare more broadly—remained more a novelty than a revolution.
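The mechanics were simple by modern standards. The toy Python sketch below mimics the approach described above: yes/no findings are matched against rules linking findings to infections, and candidate diagnoses are ranked by combined certainty. The rules and certainty factors are invented for illustration; only the combination formula follows MYCIN’s published approach.

# Each rule: (required yes/no findings, diagnosis, certainty factor).
RULES = [
    ({"gram_positive": True, "grows_in_chains": True}, "streptococcus", 0.7),
    ({"gram_positive": True, "grows_in_clumps": True}, "staphylococcus", 0.7),
    ({"gram_positive": False}, "e_coli", 0.4),
]

def combine(cf_a, cf_b):
    # MYCIN's rule for combining two positive certainty factors.
    return cf_a + cf_b * (1 - cf_a)

def rank_diagnoses(answers):
    scores = {}
    for findings, diagnosis, cf in RULES:
        if all(answers.get(q) == wanted for q, wanted in findings.items()):
            scores[diagnosis] = combine(scores.get(diagnosis, 0.0), cf)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_diagnoses({"gram_positive": True, "grows_in_chains": True}))
# -> [('streptococcus', 0.7)]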

Around the same time MYCIN was being developed, the International Joint Conference on Artificial Intelligence was held. The conference was widely covered in 1972, including by several small American newspapers. The Morning Sentinel (Waterville, Maine) reported that “although scientists have not evolved the humanoid automaton that most people think of as a robot, they have gone some way to creating a robot doctor. A patient simply feeds information about his symptoms to the machine and gets back a diagnosis and prescription at the other end.”

The Sioux City Journal (Sioux City, Iowa) concluded: “Sounds fantastic. But maybe it wouldn’t be too difficult to program a robot-computer to reply, ‘Take two aspirins and go to bed.’”

The Authors

Francesca Scassellati Sforzolini

Partner, Brussels

Francesca co-leads Brunswick’s Healthcare & Life Sciences group globally. With over 20 years of experience in the life sciences industry, she helps clients navigate the regulatory, political and social landscape, and build trust and reputation with their key audiences.