By Charles Binkley, co-author of Encoding Bioethics: AI in Clinical Decision-Making

Artificial Intelligence (AI) is being introduced into every sector of the human experience, and healthcare is no exception.

AI models were first used in radiology in the 1980s to aid radiologists in interpreting various imaging studies. AI was particularly well suited to radiology since AI models are trained to recognize patterns and make associations or predictions. For instance, in radiology a lung mass almost always has specific characteristics in terms of its similarity to surrounding tissue, its location, and its appearance. Radiologists make a diagnosis based on a constellation of imaging patterns that they themselves have been trained to recognize and associate with a specific diagnosis.

Not long after, AI clinical decision support systems (AI CDS) were introduced into other image-based specialties like pathology and dermatology. More recently, AI models have assisted gastroenterologists in identifying suspicious lesions during an endoscopy, surgeons in identifying critical structures during an operation, and cardiologists in predicting which patients will go on to develop atrial fibrillation. There are even AI models in development that can predict an individual patient’s likelihood of dying within the next six months.

As AI models become more common in influencing clinical decisions, patients should think about whether they want AI models to be used in their medical care. Do you want to know the prediction that the AI model has made about your current or future health? Do you want to control whether a prediction was made about you in the first place? Is “opting out” of AI in medical care possible or even feasible? Another related concern is whether you are willing to share your personal health data with the model, a requirement for receiving AI decision support. The decision to share data may also mean that data will be used in the future for the model’s ongoing learning, or to train and validate other AI models.

A recent survey by the Pew Research Center found that 60% of Americans would be uncomfortable with their provider relying on AI in their care. The survey also showed that although most people believe AI will lead to better health outcomes, they remain concerned about their personal relationship with their health care provider and the security of their personal health information. It seems that most Americans want to keep a “human in the loop,” meaning that they still want their clinicians to play an active part in the collaboration with an AI model rather than simply accept the system’s output. Patients may place a high value on the personal touch that a human clinician provides, or they may believe that human oversight of an AI system provides a level of safety and quality in care. They may also believe that having a human involved will ensure that their rights are protected, such as the confidentiality of their information, the prioritization of their health and well-being, and respect for their individual health choices.

However, imagine the very likely scenario in which keeping humans in the loop holds AI models back from achieving their full potential: a time when AI systems are better than clinicians at diagnosing and treating diseases, and even better at performing procedures. Patients may still want the connection that a human being provides. Yet even some of today’s AI models are excellent at conveying empathy, understanding, and openness in ways that clinicians sometimes are not. An AI model could have a consistently excellent bedside manner, while physicians can have a bad day and be grumpy, dismissive, or appear harried. In addition, having AI “clinicians” would almost certainly reduce health care costs significantly, decrease the time it takes to be seen by a clinician, and improve overall access to care: three of the chief complaints that patients have about the health care system today.

There are a few key considerations before we embrace AI models for clinical decision making, in their current or future form. One of the most important questions is whether a patient is willing to give the model access to their health care data. The model uses personal information to make a prediction, and that information may then be retained to support the model’s continuous learning, or used to train and validate other AI models in the future. While these training data sets may be “deidentified,” in that one’s name, medical record number, social security number, and other directly identifying information may be removed, the more unique pieces of information specific to an individual the model has, the more likely it is that the individual can be “reidentified,” if not by name, then by profile. The risk is that patients will be profiled across platforms and, though their names will be removed, their profiles will be attached to their data. It also can’t be assumed that one’s data will always be retained and protected by health care organizations that have an ethical obligation to maintain confidentiality. Data sets are commonly shared between major health systems and large technology companies for the purpose of training AI models. Data breaches could also result in patients’ personal health profiles being accessed in ways that could lead to discrimination in employment, health insurance, and elsewhere.
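To make the reidentification risk concrete, here is a minimal Python sketch. The records and field names are entirely made up for illustration; the point is only that combining a few ordinary attributes (so-called quasi-identifiers) can single out one person even after names and record numbers have been stripped from a data set.

```python
# Illustrative sketch with made-up records: names are removed, but
# combinations of ordinary attributes (ZIP code, birth year, sex)
# may still match exactly one person.

deidentified_records = [
    {"zip": "95113", "birth_year": 1974, "sex": "F", "diagnosis": "atrial fibrillation"},
    {"zip": "95113", "birth_year": 1974, "sex": "M", "diagnosis": "lung mass"},
    {"zip": "95112", "birth_year": 1989, "sex": "F", "diagnosis": "melanoma"},
]

def matches(record, zip_code, birth_year, sex):
    """Return True if the record shares all three quasi-identifiers."""
    return (record["zip"] == zip_code
            and record["birth_year"] == birth_year
            and record["sex"] == sex)

# Someone who already knows a neighbor's ZIP code, birth year, and sex
# can check how many "anonymous" records fit that profile.
candidates = [r for r in deidentified_records
              if matches(r, "95113", 1974, "F")]

if len(candidates) == 1:
    # A unique match effectively reidentifies the person and exposes
    # the sensitive field that was supposed to be unlinkable.
    print("Unique match found:", candidates[0]["diagnosis"])
```

The more such attributes a model retains about a person, the more often this kind of lookup returns exactly one record, which is why removing names alone does not guarantee anonymity.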

AI in healthcare carries both great potential and real risks. The overwhelming majority of people building these models have excellent intentions. But even well-intentioned actions can have unintended consequences that must be weighed against their potential benefits. Just as patients weigh the risks and benefits of every medical intervention, so too must they weigh the risks and benefits of AI in their health care.