The downside of machine learning in health care

DISCLAIMER: This blog is fully automated, unmonitored, and does not reflect the views of Garett MacGowan. The ML model may produce content that is offensive to some readers.

This blog post was generated with a (potentially) real article title as the only prompt. A link to the original article is below.

Generated: 2/2/2022
The downside of machine learning in health care: AI doesn't know everything

By Susan Zwoyer, HealthDay Reporter

MONDAY, April 2, 2018

COPENHAGEN, Denmark (AP) — The use of artificial intelligence in medicine has soared in recent years. Companies are racing to develop applications for everything from diagnosing cancer to delivering drugs to an unborn baby.

Some researchers say the promise of AI's broad reach belies serious challenges, most of them arising from the fact that AI is, by nature, limited.

"If we believe that machine learning can solve all problems, think again," warns Rada Mitevski, a professor of health care technology and business at the Dartmouth-Hitchcock Medical Center in Lebanon, N.H. "We can't just plug in any data because we think the system will work. We have to know what we are trying to do in medical decision making."

Doctors want computer systems to help sift through all types of data, including unstructured digital text like notes that doctors and patients write, images and videos taken by sensors on patients at home, and other information scattered across health care systems.

But unlike human doctors, who have years of clinical training, AI algorithms have no such built-in knowledge.

"We are seeing this AI takeover in health care, and a lot of it doesn't make sense to us," says Dr. David J. Ascher, an emergency physician and medical director of research and strategic planning at Cleveland Clinic. "The system doesn't know what it is supposed to do."

Machine-learning systems do only what their creators train them to do, and they are never told all there is to know about a new case, Ascher adds. What is known may not include all the types of information about a medical problem that would point to the best treatment, he notes.

So Ascher and others aren't sure what will happen when AI is allowed to make decisions. "The question is, could we go to health care delivery and say, OK, this is what it looks like now? Do we want to be in that future, or how would we want to define that future?"

Still, AI is having an impact in many areas — from research and technology to the care of patients and training of doctors and nurses.

Computer systems increasingly are being used for everything from diagnosing cancer to predicting patients' risk of hospitalization. Patients in the U.S. use them to order tests for diabetes, cancer, depression and other diseases. Some also are being used to select a particular medication.

Other applications — including a decision to deliver emergency drugs to a preterm baby whose lung isn't developing properly or a plan to administer potentially life-saving medication to a woman in labor — have been restricted by government regulators to reduce the potential harm to the patient.

But the potential rewards for applying AI promise to be substantial. At a technology conference in March, industry experts and computer system developers outlined the huge potential for AI in health care and promised an eventual industry worth $1.7 trillion per year.

The field is still embryonic, and the potential benefits aren't yet clear. Regulators are struggling to determine which uses of AI will trigger greater scrutiny or even legal prohibitions. But AI's use is spreading rapidly in areas like diagnosing cancer and predicting which patients might die, so it is likely to keep making headlines, experts say.

It's not easy being the smartest guy in the room.

Machine learning is the process of teaching computers to teach themselves from examples rather than explicit rules. A computer isn't smart on its own, so a human "trains" it, often on massive data sets of medical records. The data are fed into a learning algorithm, which sorts through the possibilities and, in the best-case scenario, learns to predict the outcome.
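The training process described above can be sketched in a few lines. This is a toy illustration only: the "patient records" are synthetic, the features (age, systolic blood pressure) and the condition label are made up, and nothing here comes from a real clinical system.

```python
import math
import random

random.seed(0)

def make_record(sick):
    """Generate one fabricated patient record and its outcome label."""
    age = random.gauss(65 if sick else 45, 8)
    bp = random.gauss(150 if sick else 120, 10)
    # Normalize so both features are on roughly the same scale.
    return ((age - 55) / 10, (bp - 135) / 15), int(sick)

data = [make_record(i % 2 == 1) for i in range(200)]

# Logistic regression trained by gradient descent: the "learning" step
# nudges the weights so predictions match the known past outcomes.
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    # Numerically safe sigmoid, mapping the score to a 0..1 risk.
    if z >= 0:
        return 1 / (1 + math.exp(-z))
    return math.exp(z) / (1 + math.exp(z))

for _ in range(200):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# The trained model scores a new, unseen patient (age 70, BP 155).
risk = predict(((70 - 55) / 10, (155 - 135) / 15))
```

The model only ever learns the pattern present in its training examples; a kind of case, or a kind of information, absent from that data is invisible to it, which is exactly the limitation the experts quoted here describe.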

"The thing that's new about AI is that the machines can learn to take the data and the outcomes and provide reliable, accurate, actionable diagnoses, predictions and care," says Ascher.

A machine learning system trained on past data — called a "model" — can generate a very accurate prediction about whether an individual will be sick again in the future, Ascher notes.

But the AI system's ability to provide the best treatment for an ill individual is limited by two problems, Ascher says. "The first is that we still aren't certain why we are doing certain things. It's not as simple as we don't know what to do anymore," he says.

"The second, much bigger one, is that we don't actually know who this person is, what their underlying risk of certain diseases might be." A patient's history can be unknown for any number of reasons: a recent hospital admission may leave their records incomplete, or they may have received care in a system different from the one being analyzed.

The result, Ascher explains, is that "machine learning does not know everything" and "it's possible we could go forward and get to the situation where it does not know anything. We may be taking someone who is on the cusp of having a health problem and we get an inaccurate forecast."

There are some areas where it can and does do better than human doctors, especially when the medical condition is relatively specific and the patient's record is extensive. One study, for example, found that an AI system could accurately diagnose the presence or absence of brain tumors when researchers showed a sample of patient images to the system and asked it to diagnose them.

"AI is certainly very good at diagnosing acute disease," says Dr. Kevin Pham, head of the Division on Decision Science and Medical Decision Making at the American College of Physicians. "The challenge of AI in health care is that most of the diseases can't be diagnosed, or treated, by one AI system."

Health care, with its huge array of medical conditions, requires systems that learn over time, says Pham, who is also chief of the medical decision making department at the U.S. Centers for Medicare and Medicaid Services.

"This is a much more difficult task. If you're diagnosing a heart attack, there's one path and one algorithm to follow," he says. "But if you're diagnosing cancer, there are a myriad of possible cancer treatments — chemotherapy, immunotherapy, surgery or radiation. An AI system is never going to learn all this."

AI is best at what it is told to do. A machine-learning system can generalize to new cases it hasn't seen before, but it takes time to train the system so that it can distinguish certain conditions or recommend the right drug or medical procedure.

And just because a machine-learning system has been trained on large data sets, many of them from research institutions known for producing solid health care data, doesn't guarantee it will succeed when tested for real use in health care. Researchers at the University of Washington recently did some test runs with a research model.

They gave it more than a million samples of medical images and a million samples of clinical notes. It got the diagnoses right almost every time, correctly identifying whether a patient had breast or ovarian cancer, for example. However, the system didn't always recognize whether a patient had received all the recommended care.
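That gap has a simple shape: a model can score well on the task its training data encoded while being no better than chance on a question that data never captured. The sketch below is a toy illustration with entirely synthetic data and labels, not the Washington team's actual setup.

```python
import random

random.seed(1)

# Each synthetic "patient" has one feature informative for diagnosis,
# plus an unrelated care-pathway label the model never trains on.
patients = []
for _ in range(1000):
    has_cancer = random.random() < 0.5
    marker = random.gauss(2.0 if has_cancer else 0.0, 1.0)
    got_recommended_care = random.random() < 0.5  # independent of marker
    patients.append((marker, has_cancer, got_recommended_care))

train, test = patients[:800], patients[800:]

# "Train": pick the marker threshold that best separates the
# diagnosis labels in the training set.
best_t, best_acc = 0.0, 0.0
for t in [i / 10 for i in range(-20, 41)]:
    acc = sum((m > t) == y for m, y, _ in train) / len(train)
    if acc > best_acc:
        best_t, best_acc = t, acc

# The model does well on the held-out test set for its own task...
diag_acc = sum((m > best_t) == y for m, y, _ in test) / len(test)
# ...but the same score says nothing about the care-pathway question.
care_acc = sum((m > best_t) == c for m, _, c in test) / len(test)
```

Because the care-pathway label is independent of the only feature the model sees, `care_acc` hovers near coin-flip accuracy no matter how good `diag_acc` is.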

Pham said that although the results were encouraging, the system still has a long way to go.

"That's the problem in AI — the system works, but is it better than physicians?" he notes. "If there was a true magic bullet, people would know what to do and could cure everyone. That's just not the case."

Dr. Susan Zwoyer, a general internist at Henry Ford Hospital in Detroit, is among those skeptical that AI will live up to its hype in health care.

"We're always using data from the past to predict the future," Zwoyer says. "This is something that just doesn't happen in health care … so to call it AI is a little bit sensationalist."

Zwoyer, who is also a professor of information science and technology at the University of Michigan, has developed a system called ZCode that can help cut emergency-room visits and hospitalizations and lower the overall cost of health care.

"ZCode is a system that can identify and predict medical situations that may be life-threatening," Zwoyer says, in essence, helping doctors recognize those in need of more urgent care.

The AI component of ZCode is capable of analyzing vast amounts of data, but it doesn't tell us everything, Zwoyer says. "It can do wonderful things with the data that it is presented with," she says.

But AI alone is not designed to provide definitive answers. And it can't be used to determine whether a patient will have problems in the future.

"It's not going to cure anyone; we still have to practice the medicine," Zwoyer says. "So we have to figure out how to make it right every time, and that isn't possible. As far as we know, the technology doesn't allow us to do that."

Doctors can use it to help decide where to focus primary care and which patients to refer to specialist care, but it doesn't always get those calls right, she notes.

Zwoyer sees some promise in applying AI to care. For instance, it can predict how long patients might live, based on a variety of risk factors.

Garett MacGowan

© Copyright 2023 Garett MacGowan.