Using Explainable AI (XAI) for Compliance and Trust in the Healthcare Industry

A fundamental barrier the healthcare industry faces in adopting machine learning is a lack of trust in, and compliance frameworks for, artificial intelligence (AI) solutions. Increasing calls for regulation, exemplified by the FDA's most recent guidance, pose a dilemma for healthcare providers trying to serve their patients. Explainable AI (XAI) can help build confidence in an industry that demands governance and compliance.

In September 2022, the FDA recommended that black-box models designed to replace physician decision-making be treated as medical devices. This brings such machine learning systems under the rigorous frameworks that regulate medical devices, drastically increasing the regulation and scrutiny they are subject to. Calls for further regulation of AI solutions are likely to intensify until companies earn the trust of physicians and their patients.

In this article we’ll dive into what explainability is, the current relationship between the healthcare industry and AI, and the challenges and opportunities XAI presents.

What is Explainable AI?

Explainable AI consists of the processes and tools that enable the reasoning behind a model’s outputs to be understood by a human. Explainability algorithms give insight into a trained model’s predictions and its robustness. ML predictions that are justified by XAI are far more likely to be trusted.

Explainable AI can also contribute to model accuracy, transparency, and compliance with current and future regulations. XAI is crucial for organizations that want to adopt a responsible approach to artificial intelligence development. Despite the potential it brings to the healthcare industry, Explainable AI is still at an early stage.

The need for Explainable AI quickly becomes apparent when, for example, an image classification model is used to evaluate CT scans for cancer, or a model is used to predict which patients are more likely to develop diabetes. Understanding how a model reached its conclusion lets practitioners judge whether it is accurate and explain its reasoning during an audit, where explainability algorithms would show which regions of an image were used to detect the cancer, or what was picked up in toxicology reports that made the model suggest diabetes.

If applied correctly, XAI allows physicians to understand the reasoning behind the model’s output by identifying the markers that led to its conclusion. Explainable AI can help reduce invasive surgery and speed up diagnosis.
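As a concrete illustration of how such image-level evidence can be produced, the sketch below uses the AnchorImage explainer from the open-source Alibi Explain library (discussed later in this article) to highlight the scan regions that were sufficient for a classifier's prediction. The ct_model and ct_scan names are placeholders for an arbitrary image classifier and scan, not a real clinical model or dataset.

```python
import numpy as np
from alibi.explainers import AnchorImage

# Placeholders: any image classifier exposing a batch predict function,
# and a single scan as a (H, W, C) numpy array.
def predict_fn(images: np.ndarray) -> np.ndarray:
    return ct_model.predict(images)          # class probabilities per image

explainer = AnchorImage(
    predict_fn,
    image_shape=(224, 224, 3),               # shape the classifier expects
    segmentation_fn="slic",                  # split the scan into superpixels
    segmentation_kwargs={"n_segments": 50, "compactness": 10},
)

explanation = explainer.explain(ct_scan, threshold=0.95)
# explanation.anchor is the masked image showing the regions that were
# sufficient for the model's prediction; explanation.segments shows the
# superpixel segmentation that was used.
```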

A Common XAI Use Case in Healthcare

The importance of XAI is easily demonstrated in one of the most common ML use cases in healthcare: feeding a model Electronic Medical Record (EMR) data to make predictions about a patient’s health outcomes or alert physicians to potential complications. If the model doesn’t provide context for why it thinks a patient is more likely to suffer complications, or what those complications might be, physicians can be sent down a rabbit hole of performing multiple procedures to establish a diagnosis.

Sometimes the model produces a false positive. With XAI, the model can show why it reached the wrong result. In such cases, an added explainability framework enables doctors to reduce diagnostic time and gives the health system a faster pathway to treatment. This is only one use case, but there is a wide range of other applications across the healthcare industry.
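To make the EMR example concrete, here is a minimal, self-contained sketch of a tabular explainer using Alibi Explain's AnchorTabular. The feature names, synthetic data, and "complication" label are purely illustrative rather than a real clinical dataset; in practice the model and records would come from the EMR system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

# Illustrative EMR-style features
feature_names = ["age", "bmi", "hba1c", "systolic_bp", "prior_admissions"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))                        # synthetic stand-in for EMR data
y_train = (X_train[:, 2] + X_train[:, 4] > 1).astype(int)   # synthetic "complication" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# AnchorTabular finds the feature conditions under which the prediction stays fixed.
explainer = AnchorTabular(model.predict_proba, feature_names)
explainer.fit(X_train)

patient = X_train[0]
explanation = explainer.explain(patient, threshold=0.95)
print("Predicted complication risk class:", model.predict(patient.reshape(1, -1))[0])
print("Prediction holds when:", " AND ".join(explanation.anchor))
```

The printed anchor (e.g. conditions on hba1c and prior_admissions in this synthetic setup) is exactly the kind of context a physician can review instead of chasing every possible complication.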

The Current Relationship between the Healthcare Marketplace and AI

Explanations might sound like something that should already be built into an ML model. However, most models offer no way to be interpreted. This leads to a lack of quality assurance, fails to evoke trust, and restricts dialogue between physicians and patients. It stems from the vast amounts of data and features that medical models draw upon, and from the fact that machine learning models typically don’t follow linear logic that can be traced case by case.

Models are trained to make accurate predictions based on inputs. They are not trained to justify their position, making them black boxes by default. Dr. Matt Turek of the Defense Advanced Research Projects Agency (DARPA) defines XAI as a framework to “enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.”

The medical industry itself presents unique ethical and legal challenges to giving data scientists access to the vast quantities of data necessary to create effective models. The data contains patients’ personal medical information, which is protected by laws such as HIPAA and GDPR.

Medical data comes with strict requirements that often prevent organizations from sharing it with each other, limiting the scope and diversity of training data. As a result, models tend to be less accurate outside their training environments. Trusting models in this setting requires understanding how they draw conclusions, so that biased or poorly trained models do not affect patients’ access to care.

The Challenges for XAI in Healthcare

The healthcare industry has seen how difficult it can be to train models to provide accurate predictions across various patient populations, medical facilities, and even among individual providers. In most industries, errors may result in a loss of productivity or a misallocation of inventory. In healthcare, lives are at stake.

Detractors have noted the difficulty of creating models that remain accurate across diverse patient populations and medical settings, the numerous incidents of AI bias in healthcare, and models’ tendency to take extraneous data into account. No human would ever think someone is more likely to have cancer because they come from an urban area, have more melanin, or are wearing a red tie in their profile picture.

But models can and will learn connections between these factors if they are not trained properly. By building explainability into models, errors can be found in real time before they affect a patient’s care plan, providing patients with the care they need and protecting providers, developers, and health systems from legal liability.

The Opportunities for XAI in Healthcare

Explainability tools such as Alibi Explain and Alibi Detect can help end users understand how the ML models they use came to their conclusions, and flag any anomalous data that is reducing the accuracy of model predictions. This allows physicians and their patients to use their judgment in conjunction with machines to determine the best course of action. Given the concerns about bias in ML models, providing the evidence that explains a model’s conclusions will go a long way toward increasing the adoption of AI technology.
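As a small illustration of the "anomalous data" side of this, the sketch below uses the KSDrift detector from the open-source Alibi Detect library to compare an incoming batch of features against reference data. The arrays are synthetic stand-ins rather than real patient records.

```python
import numpy as np
from alibi_detect.cd import KSDrift

rng = np.random.default_rng(0)
X_ref = rng.normal(size=(500, 5))             # features the model was validated on (synthetic)
X_new = rng.normal(loc=0.5, size=(100, 5))    # incoming batch with a shifted distribution

# KSDrift runs a Kolmogorov-Smirnov test per feature against the reference data.
detector = KSDrift(X_ref, p_val=0.05)
preds = detector.predict(X_new)

if preds["data"]["is_drift"]:
    print("Incoming patient data has drifted from the reference distribution")
```

Flagging drift like this before predictions are acted on is one practical way to stop a model that has quietly left its training distribution from influencing care decisions.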

Explainable AI also has a large role to play in the ethical discussions around using machine learning solutions in healthcare. Proponents point to the positives of enhanced diagnostic capabilities, more accurate decision-making, and increased operational efficiency.

Creating models that can be trusted to provide physicians with the information they need to take care of their patients is a key step in moving AI solutions beyond hype and hope in the medical field. Creating or finding explainability frameworks that clearly show how models process their data will be a key challenge for data scientists in healthcare, especially given the recent guidance issued by the FDA late last year regarding ML models.

There are quite a few medical products that can benefit from AI but will be subject to regulation in the future, including clinical guidelines, medication and test reconciliations, and discharge papers. It would be wise to proactively build explainability techniques into the machine learning pipelines behind these products, before regulatory enforcement begins.

Explainers that Seldon Can Offer Healthcare Organizations

Seldon Deploy Advanced drives deeper insights into model behaviour and bias with productised Explainable AI (XAI) workflows. Seldon empowers organizations with both local and global explainer methods across a range of data modalities to interpret predictions from black-box and white-box models. You can finally build trust through transparency of model decisions for compliance and governance purposes.

Speak with the Seldon team today and see why industry giants like Johnson & Johnson and Exscientia trust us to enable their MLOps programs.
