
A Legal View on AI in Healthcare: Balancing Opportunity and Risk

An interview with Linklaters’ Georgina Kon, a Partner specializing in complex domestic and international IT, sourcing and information governance matters.

As artificial intelligence (AI) rapidly evolves, regulators and policymakers are racing to catch up, seeking to impose the right safeguards without stifling innovation. While every sector is trying to understand the promise and pitfalls of AI, the stakes are especially high in healthcare. Potential errors and patient harm, data breaches, and the risk of introducing or exacerbating bias are among the concerns the industry must weigh against the massive potential AI offers to transform treatment and the delivery of care to patients.

The landscape is changing quickly: In March, the European Parliament approved the EU AI Act, wide-ranging rules to govern AI, and at the UK’s AI Safety Summit last November, global leaders signed the Bletchley Declaration to address the risks surrounding AI.

As a specialist advising a broad range of high-profile public- and private-sector clients on AI, digital, cyber and online safety, Georgina Kon, Partner at Linklaters, has a frontline perspective on the opportunities and challenges of the rapid adoption of AI across the healthcare value chain. She sat down with Brunswick Healthcare & Life Sciences specialists Ayesha Bharmal and James Paton to discuss how companies can continue to take advantage of transformative advanced technologies while balancing risk.

In the “View From” series, the Brunswick Healthcare & Life Sciences team interviews leading experts including board members, scientists, investment bankers, doctors and more on what they see as the biggest trends, challenges and opportunities for the sector.



How do healthcare and life sciences companies get the balance right between adoption of advanced technologies and safeguarding against the risks they present?

We are already seeing incredible advances in the use of AI to deliver better outcomes for patients – from diagnostic tools to supporting medicine adherence to helping clinical decision-making in surgical settings.

AI is only as good as the data behind it. The first question companies must ask themselves is: Can we get hold of a large, high-quality and diverse enough data set? The second: Can we get permission to use that data? And the third: Do we have the technology to use that data meaningfully?

There are many more challenges, of course, but historically these have been the difficult ones. That is true of all industries, but in healthcare the stakes are much higher: the sector is more heavily regulated, and both legally and reputationally there is more to lose.

Companies are rightly cautious about overstating what their AI programs can achieve. Unfortunately, we have seen some examples of companies “AI-washing” with serious consequences. And that is a risk for the whole sector as it erodes patient trust.

AI can absolutely revolutionize healthcare, but the development cycle is necessarily going to be longer than in other industries because the impact of failure could be so severe. That is why we see businesses deploy AI in safer, less high-profile areas first – to support greater efficiencies in back-office functions, for example.


How is AI changing the dynamics in dealmaking? Do acquirers need to think differently about due diligence?

There is so much more to think about when acquiring an AI asset or business. Technical due diligence is required to help companies understand: Does the AI do what we are told it does? How confident are we in the ownership rights given all the data that will have been fed into the AI, and what level of risk are we comfortable with? If the target business has drawn from “dirty” data – unlicensed data sources – that could have liability implications for the acquirer.

There may be cases where the focus is on acquiring the tool or code rather than the data. That brings with it a range of additional questions, including on ownership rights. Another major consideration is knowledge transfer and the retention of expertise; it is a dealbreaker if the acquirer doesn’t believe it will be able to use the AI to full effect post-transaction. Incentivization arrangements are therefore much more common in these types of deals, to keep the founders on board – not just to maintain the status quo post-acquisition but to help upskill the new owner.

Traditional due-diligence methods alone won’t cut it with these sorts of deals. We need to speak directly to the product team to understand the practical and legal risks. There may be some risks that acquirers can get comfortable with, knowing that work will be needed to raise data protection or cybersecurity standards. It is common to see remediation plans agreed between signing and close to ensure fixes are made, and even for payments to be held back until that work is done.

Our job is to help our clients understand: Is there a smoking gun somewhere in the target business? Is there a risk they can’t live with?

Acquiring an AI business from private equity is often appealing to healthcare companies. PE tends to have a higher risk appetite and can send in “clean teams” to address these sorts of issues, de-risking the business ahead of a sale.

Regulation around AI is evolving – do you have clarity on what the regulatory picture will be globally?

We are seeing healthcare companies become more open to sharing data for the public good, and legislators are encouraging this activity through initiatives like the European Health Data Space.

From a legislative perspective, we see some interesting geographic variation. The European AI Act is, much like the General Data Protection Regulation (GDPR), principles-based legislation focused on high-risk products. It is intended to be proportionate and to support innovation, while also protecting citizens.

The UK has intentionally taken a different approach, shying away from creating lots of new AI legislation that could be deemed unhelpful to business. Similarly – and perhaps particularly in an election year – the US government is being careful to demonstrate that it is business-friendly.

However, my sense is that regulators in most countries will find ways to hold bad AI actors to account. There is a lot of existing regulation to support this, as we have seen with several US Securities and Exchange Commission and Federal Trade Commission investigations.

Regulation is necessarily broad and – while the new EU AI law does make specific references to scientific research – is not designed to take every eventuality into account. That can be off-putting in a risk-averse sector like healthcare. Projects with potential to deliver real benefit can sometimes be stopped through fear of the regulatory impact.

In fact, there are very few circumstances in which you are unable to have a sensible conversation with a regulator about something that will deliver a very clear health benefit. In our experience, regulators understand the complexities and are very willing to provide guidance to companies to overcome barriers.

How can healthcare companies make sure the right safeguards are in place – to ensure data security and, critically, to guard against data bias? 

Effective and responsible use of AI has been a focus of boards and executive teams for some time. We are seeing a lot of healthcare companies create internal ecosystems in which to trial AI projects – and where failures can occur safely – away from the “traditional” parts of the business.

The healthcare industry is characterized by its evidence-based approach, so it will always be more cautious than less heavily regulated sectors. There is more discussion about what acceptable risk looks like, and a deeper understanding of issues like data bias, transparency and explainability.

The risks are not limited to privacy or cyber breaches. There are also competition and IP risks, as well as consumer protection and ESG angles to consider. Take, for example, diagnostic software designed around a predominantly male patient experience; such products could pose a risk to women. Similarly, we do not have a lot of data on how an LGBTQ+ population may be impacted by certain types of AI – there is a danger of making things worse for minority populations.

The flip side of this is that AI also gives us opportunities to address bias – to program it out. The question is how you do that in the right way, because we don’t all have the same sets of values. AI is imperfect because we are imperfect.

We could be doing more to share knowledge across sectors. If lessons can be learned in a “less risky” sector and cross-pollinated, it could be beneficial for all. AI legislation asks companies to do just this: to risk-assess, using available evidence, whether their plans are sensible. This need to gather real-world evidence is something the healthcare sector understands very well, and it puts the sector in a good position to meet the challenge.

What are you most excited about for the future of AI-enabled healthcare?

It is still early days and, of course, there is never going to be a time when it is completely risk-free. We cannot hold up innovation waiting for the perfect moment. And the potential of AI to create better access to healthcare across the globe is huge – not just in countries that already have sophisticated healthcare systems in place, but also in remote parts of the world where technology can be deployed cheaply compared with the cost of bringing in “human-only” solutions. AI can bring greater consistency to the care delivered and, ultimately, improve outcomes for patients.


To continue the conversation

Ayesha Bharmal
Partner, London
[email protected]

Ayesha is on Brunswick’s global Healthcare & Life Sciences team, working with clients from major pharmaceutical and medical device developers to biotech and health-tech companies.

James Paton
Director, London
[email protected]

James joined Brunswick Group after more than two decades as a journalist at Bloomberg News and other media organizations. He advises global companies, foundations and nonprofits across the healthcare and life sciences sector, as well as other fields.


This note is not intended to be taken as legal advice.