Think about the user first
Technologists and healthcare providers should keep the user in mind as they build models and products to ensure usability and effectiveness. Done well, AI can empower providers and patients and improve access to healthcare, including in underserved communities. Those developing AI healthcare solutions should connect with underserved communities, some of which are already reluctant to engage with advanced technologies because of the potential for bias, to incorporate diverse perspectives and address those concerns.
User-centric thinking should also include communicating empathy and shared values. Patients facing health concerns will continue to seek human connection and trust, even as the technology advances. Roughly 60% of Americans say they would feel uncomfortable if their healthcare provider relied on AI to diagnose diseases or recommend treatments. But if AI note-taking tools are created with the user in mind and allow mental health providers to spend more time on therapy instead of charting, that’s a win for the patient, the provider and the system.
It will be important to ensure that the technology, in conjunction with routine patient-provider communication, conveys understanding of and concern for the patient, not just technological competence.
Transparency is essential
Transparency is key to building trust with patients as we implement new technologies. Everyone involved in the process, especially patients and providers, should understand how AI tools are making decisions that affect health outcomes and what types of data underlie those decisions. Patients should also understand when and how their own data is collected and used in AI models.
Citing a new American Medical Association (AMA) report, AMA president Dr. Jesse Ehrenfeld said at the Stanford RAISE Health event that about 40% of US physician practices use some type of AI today, though mostly for “back-end, administrative office things.”
Whether or not the AI is “touching” patients directly, and especially when it is, its role should be communicated to patients and providers in concise, easy-to-understand language. The most common transparency gaps concern the models’ training data and the ethical considerations weighed in building them, which limits patient and provider understanding of safety and risks. Improving transparency can help build trust as technology’s role in healthcare grows and can help address the corresponding concerns about patient privacy.
Data accuracy is critical, but so is speed
AI has the potential to revolutionize the healthcare system with faster and more accurate diagnoses, enhanced drug discovery and highly personalized treatment plans. But the training data that models need to, for example, understand biology and discover new drugs must be of high quality to ensure accurate results and the best possible outcomes.
And there is plenty of data: as Kimberly Powell, vice president of Healthcare at NVIDIA, noted at the STAT Breakthrough Summit West, the healthcare industry generates approximately 30% of the world’s data volume. How that data will or will not be used is a key part of the equation.
While data quality is critical, healthcare systems do not have the luxury of time and abundant resources to collect data, especially at the rate at which AI is advancing. Public-private partnerships can be particularly helpful in bridging this gap; industry should focus on reducing costs for AI healthcare solutions, and healthcare providers should partner with industry to represent patient and provider challenges so AI models are appropriately designed and tailored.
Many also contend that to have a generative, learning health system, we need better approaches to data sharing and a stronger willingness to share. Data sharing increases the size of data sets and potentially allows AI to correct for biases that might otherwise go unrecognized in smaller data sets.
Liability needs to be delineated
Using AI in healthcare raises many liability concerns, from misdiagnosis to the potential for bias and discrimination. Much of the existing liability work focuses on medical malpractice, since doctors have traditionally been responsible for care, but the ecosystems around AI development and use introduce new stakeholders to consider. Technologists, healthcare professionals, patient advocates, and legal and policy experts should work together to determine which existing frameworks remain valid in the AI context and where new frameworks and laws are needed. Solutions should balance patient safety with technological innovation.