
Vermont Medicine Feature

AI in Medicine: A New Paradigm?

by John Turner

As the subject of almost unrelenting hype, Artificial Intelligence (AI) is arguably the hottest technology of our time. In the last few years, AI has evolved from a futuristic concept into an inescapable juggernaut of innovation, emerging from technology companies and touching many facets of our lives.

It is not unreasonable to expect the use of AI in medicine to have a significant impact, and potentially revolutionize patient care, biomedical research, and health care systems. A compelling argument can be made that AI will allow the business of health care to automate large chunks of time-consuming tasks and spark a new era of medical and scientific breakthroughs. 

AI is already being widely adopted in medicine and the provision of health care. As of August 2024, the FDA had approved 950 AI- and machine learning (ML)-enabled medical devices, and that number is expected to keep growing every year. Every day, patients send hundreds of thousands of messages to their doctors through MyChart, a communications platform that is nearly ubiquitous in U.S. health care systems; approximately 15,000 physicians and assistants at more than 150 health systems are using a new AI feature in MyChart to draft replies to those messages.

Despite the enthusiasm for, and promise of, AI, there is skepticism in some quarters about how much transformation is possible when AI solutions are layered on top of the fragmented or flawed systems that are widespread in health care. Coupled with concerns about privacy, the possibility of bias, device accuracy, ethical considerations, and some well-publicized incidents of AI hallucinations affecting patients, it is no surprise that a rising chorus of voices is calling for a more measured perspective to temper the technology’s rollout and expectations.

Reducing Physician Burnout and Improving Patient Care

While the majority of FDA-approved devices are in the imaging and radiology fields, AI-powered communication and transcription tools are quickly emerging as easy-to-deploy antidotes to ever-growing mountains of paperwork and to clinician burnout.

The UVM Health Network (UVMHN) recently piloted and tested ambient AI products with primary care providers in both Vermont and New York. These tools record a natural conversation between patient and clinician and automatically generate the clinical note.

The key results from the pilot and from deploying the AI solution, Abridge, were encouraging with respect to factors that contribute to physician burnout:

  1. Improved professional fulfillment: Cliniciansā€™ professional fulfillment increased by 53 percent based on the Stanford Professional Fulfillment Index
  2. Significant time savings: Clinicians reported a 60 percent decrease in time spent on documenting patient encounters
  3. Decreased cognitive load: Clinicians experienced a 51 percent reduction in cognitive load, allowing for more focus and attention with patients
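
The article does not detail how Abridge works internally, but ambient documentation tools of this kind generally chain speech-to-text with a summarization step and a mandatory clinician sign-off. The sketch below is a generic, hypothetical pattern only; the function names and the sample SOAP note are illustrative stand-ins, not Abridge’s actual components.

```python
from dataclasses import dataclass

@dataclass
class DraftNote:
    """An AI-drafted clinical note awaiting clinician review."""
    subjective: str
    objective: str
    assessment: str
    plan: str
    signed: bool = False

def transcribe(audio_path: str) -> str:
    """Placeholder for a speech-to-text engine (hypothetical stand-in)."""
    return "Patient reports two weeks of intermittent knee pain after running..."

def summarize_to_soap(transcript: str) -> DraftNote:
    """Placeholder for a summarization model that maps a visit transcript into
    SOAP sections. A real system would call an LLM or a clinical summarizer here."""
    return DraftNote(
        subjective="Two weeks of intermittent right knee pain after running.",
        objective="No effusion; full range of motion; tenderness at joint line.",
        assessment="Likely overuse injury; rule out meniscal tear if persistent.",
        plan="Rest, NSAIDs as needed, follow-up in 4 weeks or sooner if worse.",
    )

def clinician_review(note: DraftNote, edits: dict[str, str]) -> DraftNote:
    """The clinician remains the final author: apply edits, then sign."""
    for section, text in edits.items():
        setattr(note, section, text)
    note.signed = True
    return note

if __name__ == "__main__":
    transcript = transcribe("visit_audio.wav")   # hypothetical recording
    draft = summarize_to_soap(transcript)
    final = clinician_review(draft, {"plan": "Rest, ice, follow-up in 2 weeks."})
    print(final)
```

The essential design choice, echoed by the clinicians below, is that the model only drafts; the clinician edits and signs.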

Laura McCray, M.D., M.S.C.E., professor and interim chair of family medicine, who helped operationalize the pilot, said, “I really love using the AI product, in that I’m able to put away the computer and the keyboard and really focus on my conversation with the patient, who anecdotally seem to really love it. They feel heard and understood. Also, my notes are done by the end of the day, which is a huge time saver, and it takes the focus away from documentation in the computer and puts it more on the essentials of patient care, so that’s been awesome. In fact, we’re rolling it out across all the UVMHN primary care sites now.”

McCray is already looking to future opportunities to use the technology in clinical care. “I think we’ll be able to leverage AI to help us not only with the note writing, but with capturing billing and coding data, interfacing with insurance companies, language translation, orders, or taking new medication prescriptions, and tee those things up for the human provider to always cross-check, make sure it’s accurate, and then sign off to get things accurately moving forward for the patient.”

She adds a note of caution, however. “While it will improve efficiency, I think it’s critical to the use of AI in patient care that there’s always a human set of eyes at the end to confirm accuracy before things are signed off or sent.”

McCray cited a randomized controlled trial in primary care at UVMHN on the use of AI-assisted radiology reports, which create a patient-friendly radiology report in much easier-to-understand terms than the current radiologist’s report available to patients in the electronic portal. She noted, “What we found was that we still needed human radiologists to review and edit the AI-generated reports.”

Class of 2027 medical students Francisco Cordero and Teddy Harrington worked on a secondary study assessing the quality of the notes and patients’ reactions to the use of AI. While their results are yet to be published, their research anecdotally confirmed McCray’s experience: the notes are as high in quality as provider-generated notes, and patient feedback consistently mentioned appreciating better eye contact and conversations, less provider screen time, and a more fulfilling experience with providers.

The Ethics of AI Care

As AI is increasingly being deployed in health care, it promises substantial benefits but also poses risks that could exacerbate existing disparities and ethical challenges. 

“One interesting challenge is inadvertent incorporation into AI of human bias,” says Tim Lahey, M.D., M.M.Sc., professor of medicine, director of ethics at UVM Medical Center, and a member of the UVMHN AI governance board. “AI is trained on our behavior and trained on our scientific literature. Since humans are biased, and our scientific literature and clinical practice have some bias in them, there may be biases that AI could adopt in a way that perhaps we’re not even aware of. We could be hardwiring bias into the future practice.”

“Let’s say you have an AI scheduling software trying to efficiently book people into the clinic. Maybe it turns out that patients of a given demographic are less likely to make their appointment: single parents, for instance, might be less likely to show up for an appointment than members of two-parent households, for legitimate reasons. Maybe then the AI says, ‘I’m going to prioritize the person who shows up for appointments more because I want to fill the schedule more reliably and make sure clinicians are maximally efficient.’ That sort of system would, unintentionally, make it even harder for that busy single parent to get the health care they need. It would exacerbate an existing challenge unintentionally, because AI has no intentions, but that would be the output. It’s going to be our duty, while we’re using this hopefully effort-saving technology, to make sure it’s not saving us on the backs of people who are already disadvantaged in our system.”
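
Lahey’s scheduling scenario can be made concrete with a toy calculation. The snippet below is purely illustrative, with synthetic data and no relation to any real scheduling product: ranking patients on historical show-rate alone quietly pushes the single-parent group to the back of the queue, which is exactly the kind of outcome a pre-deployment audit by group can surface.

```python
# Toy illustration of the scheduling scenario above: ranking purely on
# historical show-rate deprioritizes a group that misses appointments for
# legitimate reasons. Entirely synthetic data; not any real product.
from statistics import mean

patients = [
    {"name": "A", "group": "single_parent", "shows": 6,  "bookings": 10},
    {"name": "B", "group": "two_parent",    "shows": 9,  "bookings": 10},
    {"name": "C", "group": "single_parent", "shows": 7,  "bookings": 10},
    {"name": "D", "group": "two_parent",    "shows": 10, "bookings": 10},
]

def show_rate(p):
    return p["shows"] / p["bookings"]

# "Efficient" but biased: offer the scarce early slots to reliable attenders first.
by_reliability = sorted(patients, key=show_rate, reverse=True)
print([p["name"] for p in by_reliability])      # ['D', 'B', 'C', 'A']

# One possible guardrail: audit the ranking by group before deploying it.
for group in ("single_parent", "two_parent"):
    positions = [i for i, p in enumerate(by_reliability) if p["group"] == group]
    print(group, "average queue position:", mean(positions))
```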

Lahey continued, “The way I approach it is that with any new technology there’s always promise and peril. AI is no different. These benefits could really alter health care for the better in many ways, but also can make things worse ... I think that it’s going to boil down to investing in making sure that we know what the outcome of the use of AI is at a population level, that we formally ask whether it’s leading to biased outcomes or whether it’s neglecting the needs of patients with less common conditions.”

Beyond the risk of bias, Lahey pointed out an additional AI-related risk that users will have to manage: patient data breaches. Clinical AI accesses and learns from patient information at a faster pace, and makes more connections, than traditional software. That means privacy protections and monitoring for breaches will have to be intensified as a flood of software entrepreneurs sells new AI products to health care institutions.

Supercharging Research

On October 8, 2024, Geoffrey Hinton of the University of Toronto and Princeton University’s John Hopfield were awarded the Nobel Prize in Physics for their work with machine learning, in essence providing the building blocks for developments in AI. And just one day later, scientists David Baker of the University of Washington, and Demis Hassabis and John M. Jumper of Google DeepMind, were awarded the Nobel Prize in Chemistry for discoveries that show the potential of advanced technology, including AI, to predict the shape of proteins and to invent new ones. It’s safe to say that, as far as the Royal Swedish Academy of Sciences is concerned, AI and research have officially arrived.

The Nobel Prize in Chemistry honored a real-world example of how AI is helping research today by unearthing enormous discoveries in protein structures, insights that have far-reaching implications. The AlphaFold Protein Structure Database that Hassabis and Jumper developed has thus far predicted more than 200 million protein structures, nearly all catalogued proteins known to science. As the Nobel committee pointed out, proteins “control and drive all the chemical reactions that together are the basis of life. Proteins also function as hormones, signal substances, antibodies and the building blocks of different tissues.”

To design new drugs and vaccines, scientists need to know how a protein looks or behaves. While the AlphaFold database and its companion tool, AlphaFold Server, currently only predict how proteins will interact with other molecules in cells, such predictions can accelerate biomedical research, with the potential to save millions of dollars and years of research time.
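
For researchers who want to pull one of those predictions, the AlphaFold Protein Structure Database is freely queryable over the web. The sketch below assumes its public REST endpoint keyed by UniProt accession; the exact path and response schema should be verified against the documentation at alphafold.ebi.ac.uk before relying on them.

```python
# A minimal sketch of fetching a predicted structure from the public AlphaFold
# Protein Structure Database. Endpoint path and response fields are assumptions
# based on the public API as generally documented; verify before use.
import requests

UNIPROT_ACCESSION = "P69905"  # human hemoglobin subunit alpha, as an example

resp = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ACCESSION}",
    timeout=30,
)
resp.raise_for_status()
entries = resp.json()  # expected: a list of prediction records

for entry in entries:
    # Print whatever metadata the record carries rather than assuming a schema.
    print(sorted(entry.keys()))
    # Structure files are typically linked by URL fields (e.g., PDB or mmCIF).
    for key, value in entry.items():
        if isinstance(value, str) and value.endswith((".pdb", ".cif")):
            print(key, "->", value)
```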

Kate Tracy, Ph.D., senior associate dean of research at the Larner College of Medicine, believes AI will fuel an acceleration in research to unlock new discoveries, supercharging our ability to explore previously intractable problems. “Scientifically there’ve been some seismic shifts related to the human genome project mapping, the development of immune therapies, and the sequencing of large amounts of data and being able to use that information to match treatments and therapeutics to the personal genetic code of the individual. And these are amazing innovations,” she said.

She continued, “If we’re going to generate this massive amount of data about your basic biology and genetic code, you must have a method for harnessing it all. And that’s where big data and AI come in. How do you digest tens of millions of bits of data about an individual and make sense of it? You need supercomputing for that. AI and machine learning will become a fundamental tool of medicine, and for research.”

Tracy also noted, “While machine learning and AI have so much to offer for advancing science, it is essential that the design and implementation be transparent so results can be scrutinized, and validity of conclusions weighed.”

Imagining the Future

If the ultimate goal of medicine is to provide the right treatment for the right patient at the right time and be able to provide a treatment for every patient, patient-centered precision health care is top of mind for many physicians. Precision health considers differences in people’s genes, environments and lifestyles, and formulates treatment and prevention strategies based on the patient’s unique background and conditions.

In theory, precision medicine will allow doctors and researchers from across medical disciplines to:

  • Determine the best care for each individual patient
  • Identify disease mutations (changes in genes that cause disease) in patients with undiagnosed conditions
  • Avoid serious side effects from medications
  • Identify genetic risk factors to guide lifestyle/environmental recommendations that can improve the health of each patient

With health care’s quest for precision medicine now in the era of AI, the creation of medical “digital twins,” sometimes called virtual twins, that can mimic physical situations has become an increasingly popular goal and holds even greater promise for helping diagnose and treat populations in the future.

Pioneered in the 1960s, the idea of a digital twin was born at the National Aeronautics and Space Administration (NASA) as a “living model” of the Apollo mission. In response to Apollo 13’s oxygen tank explosion and subsequent damage to the main engine, NASA employed multiple simulators to evaluate the failure and extended a physical model of the vehicle to include digital components. This so-called digital twin allowed for a continuous ingestion of data to model the events leading up to the accident for forensic analysis and exploration of next steps. This concept has subsequently been adopted by various industries and is now critical to the success of assembly lines worldwide.

[Illustration: black and white silhouette heads with representations of data in them]

The global market for digital twins in health care was estimated to be worth $1.6 billion in revenue in 2023 and is forecast to reach $21.1 billion by 2028. Digital twinning has come of age in medicine during the last several years, moving into models of livers, brains, joints, eyes, lungs, and other body parts. The technology is also being used to test new medical devices and even drugs, with computer models powerful enough to predict a new molecule’s impact on organs and cells.

In a December 2023 report, the National Academies of Sciences, Engineering, and Medicine (NASEM), the independent panel founded by Congress to advise the federal government and the public on advances in science, engineering, and medicine and their implications, evaluated the rapidly spreading technology. It defined a digital twin as a virtual replica that “mimics the structure, context, and behavior of a natural, engineered, or social system … is dynamically updated with data from its physical twin, has a predictive capability and informs decisions that realize value.” The Food and Drug Administration, which reviews and approves medical devices, has been developing standards for the software in this emerging technology, as well as methods of evaluating it as it progresses.
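
The NASEM definition reads almost like a control loop: a model that ingests measurements from its physical counterpart, corrects its own state, and projects forward to inform a decision. A minimal sketch of that loop follows, using a deliberately simple one-compartment drug-elimination model that is illustrative only, not a clinically validated twin.

```python
# A toy digital-twin loop: the virtual model is repeatedly corrected with
# measurements from its "physical twin" and then used to predict ahead.
# The one-compartment elimination model below is illustrative only.
import math

class DrugLevelTwin:
    def __init__(self, level: float, half_life_hours: float):
        self.level = level                      # modeled plasma level (mg/L)
        self.k = math.log(2) / half_life_hours  # elimination rate constant

    def step(self, hours: float) -> None:
        """Advance the model: first-order exponential decay."""
        self.level *= math.exp(-self.k * hours)

    def assimilate(self, measured: float, gain: float = 0.5) -> None:
        """Dynamically update with data from the physical twin
        (a simple blended correction stands in for a real filter)."""
        self.level += gain * (measured - self.level)

    def predict(self, hours_ahead: float) -> float:
        """Predictive capability: project forward without changing state."""
        return self.level * math.exp(-self.k * hours_ahead)

twin = DrugLevelTwin(level=10.0, half_life_hours=6.0)
for hours, lab_value in [(6, 5.6), (6, 3.1)]:   # synthetic lab draws
    twin.step(hours)
    twin.assimilate(lab_value)
print(f"modeled level now: {twin.level:.2f} mg/L")
print(f"predicted in 12 h: {twin.predict(12):.2f} mg/L")
```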

As a surgeon, as well as a computational biology and mathematical modeling expert, Gary An, M.D., Green and Gold Professor of Trauma and Critical Care and vice chair of research in UVM’s Department of Surgery, has been working on developing and researching digital twins and computational models for such diseases as sepsis and COVID-19 for more than 20 years. In short, he uses a combination of mechanism-based computer simulation, artificial intelligence, and high-performance computing to help “develop therapies for the injured and critically ill,” he says. An participated in the NASEM report; however, from that position and experience, he strikes a note of caution with respect to the hype swirling around AI and the advances it will make. An, who currently serves on the National Institutes of Health’s Multiscale Modeling Consortium Working Group on Digital Twins, says, “People just want to believe in magic, that if you could collect enough data, and you threw it at a large enough supercomputer and a sophisticated enough algorithm, the answer would magically appear, and it obviously is not the case.”

“We have become much better at extracting data with all our experiments, analyses, and sensors, and very good at constructing hypotheses from these things. This is where big data, machine learning, and modern AI works,” says An.

“Where you have all this data that was previously daunting and now you have these methods that you can use to construct hypotheses about why that would be … and then the testing of whether or not your particular belief that results from your analysis is actually true. And you do that through experiments, by representing your hypothesis in some sort of form and then evaluating it. Ever since Francis Bacon and Sir Isaac Newton, this is the scientific cycle. And there’s no shortcut around that process. So that’s where the bottleneck, and an impetus to want to believe in magic, occurs.”
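
An’s argument can be rendered as a loop in which machine learning supplies only the hypothesis-construction step; the test against new experimental data cannot be skipped. A schematic sketch with synthetic numbers, using ordinary least squares as a stand-in for any hypothesis-generating method:

```python
# Schematic of the cycle An describes: machine learning helps construct a
# hypothesis from existing data, but the hypothesis still has to be tested
# against new experiments. Synthetic numbers; least squares stands in for
# any hypothesis-generating method.
import random

random.seed(0)

# 1. Extracted data (observational): dose vs. measured response, with noise.
observed = [(d, 2.0 * d + 1.0 + random.gauss(0, 0.5)) for d in range(10)]

# 2. Hypothesis construction (the "big data / ML" step): fit a line.
n = len(observed)
mean_x = sum(d for d, _ in observed) / n
mean_y = sum(r for _, r in observed) / n
slope = (
    sum((d - mean_x) * (r - mean_y) for d, r in observed)
    / sum((d - mean_x) ** 2 for d, _ in observed)
)
intercept = mean_y - slope * mean_x
print(f"hypothesis: response ~ {slope:.2f} * dose + {intercept:.2f}")

# 3. The step with no shortcut: test the hypothesis against a NEW experiment.
new_experiment = [(d, 2.0 * d + 1.0 + random.gauss(0, 0.5)) for d in range(10, 15)]
errors = [abs((slope * d + intercept) - r) for d, r in new_experiment]
print(f"mean absolute error on new experiment: {sum(errors) / len(errors):.2f}")
```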

NASEM also warned of runaway enthusiasm for virtual twinning in its comprehensive report. “The publicity around digital twins and digital twin solutions currently outweighs the evidence base of success,” the panel of experts said.

Whatever the future holds for the use of AI in medicine, numerous health care applications are already in various stages of development. A sampling of recent headlines includes a first-of-its-kind AI system that enabled a stroke survivor with paralysis to communicate in two languages, Spanish and English. AI has shown significant promise in improving the accuracy of cancer diagnoses and X-ray analyses, and Google is testing a method that uses audio signals to anticipate the initial symptoms of sickness; it has utilized 300 million audio samples, including coughs, sniffles, and labored breathing, to train its AI foundation model to identify signs of diseases like tuberculosis.

Anyone who has used ChatGPT knows that AI is in its infancy, sometimes prone to “hallucinations,” yet its promise of a brave new world, a new paradigm in health care, is alluring and undeniable. Whatever the future holds for the impact AI will have on medicine and health care, the hope is that, if nothing else, the technology will be able to unlock more of the humanity in patient care, and that the human touch will always remain.