The Healthcare World Is Considering an AI Future

What businesses need to know while navigating this evolving environment

From personalized and targeted cancer treatments to more face time with your physician, aspirations for AI in medicine are nothing short of transformative. A widely circulated story about a mother who used ChatGPT to correctly diagnose her son after 17 doctors failed to do so over three years is inspiring hope that AI can advance diagnostics.

And while artificial intelligence is already hard at work across the healthcare landscape, the broader conversations about AI are still in their early stages. Some have equated AI in 2024 to the internet going mainstream in the 1990s. Others have said we are one minute into a 90-minute match. Analogies aside, most of us have heard of AI and realize that it will forever alter our future, but the precise implementations and implications are still largely unknown. As the Dean of the Stanford University School of Medicine, Dr. Lloyd Minor, said at Stanford Medicine’s inaugural AI event in May, “The internet brought us access to information; generative AI will bring us access to knowledge.”

Just as important as the algorithms and models are the ethical considerations of AI, particularly when it comes to healthcare. Privacy, security, data bias and health equity concerns, along with regulatory frameworks, are all critical components of the responsible use and adoption of AI in medicine.

These critical questions have been the foremost topic at healthcare conferences and convenings across the US. During the past year, Brunswick Group advisors have been part of these conversations, including with leading healthcare providers, academic institutions, technology companies and journalists. The subject of AI and healthcare raises unique concerns around topics like data and the role of human connection in an age of rapidly advancing technology, as organizations around the globe grapple with how to leverage the benefits of AI while mitigating the risks.

Below are four key takeaways for businesses thinking about the intersection of AI and health:

Think about the user first

Technologists and healthcare providers should think about the user as they build models and products to ensure usability and effectiveness. When done correctly, AI can empower providers and patients and improve access to healthcare, including in underserved communities. Those developing AI healthcare solutions should connect with underserved communities—some of which are already reluctant to engage with advanced technologies because of the potential for bias—to incorporate diverse perspectives and mitigate concerns.

User-centric thinking should also include communicating empathy and shared values. Patients facing health concerns will continue to seek human connection and trust, even as technology advances. Roughly 60% of Americans say they would feel uncomfortable if their healthcare provider relied on AI to diagnose diseases or recommend treatment options. But if AI note-taking tools are created with the user in mind and allow mental health providers to spend more time focused on therapy instead of charting, that’s a win for the patient, the provider and the system.

It will be important to ensure that the technology—in conjunction with routine patient/provider communication—conveys understanding and concern for the patient, not just technological competence.

Transparency is essential

Transparency is key to building trust with patients as we implement new technologies. Everyone involved in the process, especially patients and providers, should understand how AI tools are making decisions that affect health outcomes and what types of data underlie those decisions. Patients should also understand when and how their own data is collected and used in AI models.

Citing a new American Medical Association (AMA) report, Dr. Jesse Ehrenfeld, president of the AMA, said at the Stanford RAISE Health event that about 40% of US physician practices use some type of AI today, but it’s mostly for “back-end, administrative office things.”

Even when the AI isn’t “touching” patients, and especially when it is, its role should be communicated to patients and providers in language that is concise and easy to understand. The most common transparency gaps concern the data used to train AI models and the ethical considerations weighed in building them, which limits patient and provider understanding of safety and risks. Improving transparency can help build trust in the growing use of technology in healthcare and address corresponding concerns about patient privacy.

Data accuracy is critical, but so is speed

AI has the potential to revolutionize the healthcare system with faster and more accurate diagnoses, enhanced drug discovery and highly personalized treatment plans. But the training data that models need to, for example, understand biology and discover new drugs must be excellent to ensure results are accurate and lead to the best possible outcomes.

And there’s plenty of data: as Kimberly Powell, vice president of Healthcare at NVIDIA, noted at STAT Breakthrough Summit West, approximately 30% of the world’s data volume is being generated by the healthcare industry. How it will or will not be used is a key part of the equation.

While data quality is critical, healthcare systems do not have the luxury of time or abundant resources to collect data at the pace at which AI is advancing. Public-private partnerships can be particularly helpful in bridging this gap; industry should focus on reducing the costs of AI healthcare solutions, and healthcare providers should partner with industry to represent patient and provider challenges so AI models are appropriately designed and tailored.

Many also contend that to have a generative and learning health system, we need better approaches to sharing data and a stronger willingness to do so. Data sharing increases the size of data sets and potentially allows AI to correct biases that might not otherwise be recognized in smaller data sets.

Liability needs to be delineated

Using AI in healthcare raises many liability concerns, from misdiagnosis to the potential for bias and discrimination. Much of the existing liability work focuses on medical malpractice, since doctors are traditionally the ones responsible for care, but the AI development and use ecosystem introduces new stakeholders to consider. Technologists, healthcare professionals, patient advocates, and legal and policy experts should work together to consider which frameworks remain valid in the AI context and where new frameworks and laws should be developed. Solutions should focus on balancing patient safety and technological innovation.

To continue the conversation


Michael Fitzpatrick
Partner, Washington, DC
[email protected]
Michael is a Partner at Brunswick Group whose work encompasses public affairs and regulatory counseling across many sectors, with a focus on technology and digital policy, as well as crisis management and litigation communications.


Chelsea Magnant
Director, Washington, DC
[email protected]
Chelsea is a Director in Brunswick’s D.C. office. She comes to Brunswick with 15 years of strategic advisory experience, primarily at the intersection of technology policy and geopolitics. She advises clients on a range of policy, regulatory, and reputational issues.

Jennifer Sukawaty
Director, San Francisco
[email protected]
Jennifer is a Director in Brunswick’s San Francisco office, where she helps lead the firm’s West Coast healthcare practice. She advises clients on a range of crisis and reputational issues with a focus on the intersection of health and tech.

Kate Larsen
Associate, San Francisco
[email protected]
Kate is an Associate in Brunswick’s San Francisco office. She draws on her experience as a journalist to help clients manage their most pressing issues, solve problems and tell meaningful stories that make a long-term impact.