What is Interpretable Machine Learning?

BALTIMORE, MD, April 25, 2022 – New audio is available for media use featuring Cynthia Rudin, a professor of Computer Science, Electrical and Computer Engineering, Statistical Science, Mathematics, and Biostatistics & Bioinformatics at Duke University, where she is also a Principal Investigator at the Interpretable Machine Learning Lab. This content is made available by INFORMS, the largest association for the decision and data sciences. All sound should be attributed to Cynthia Rudin. What follows are four questions and responses, provided on April 18, 2022.


Question 1: You work in Interpretable Machine Learning. What is Interpretable Machine Learning?

Time Cue: 0:31, Soundbite Duration: 1:08

“An interpretable machine learning model is constrained so that a human can better understand its reasoning processes. An interpretable machine learning model is the opposite of a black box model, which is a formula that's either too complicated for any human to understand, or it's proprietary, so people are not allowed to understand it. Some of these interpretable machine learning models are actually formulas that are so simple you could print them out on an index card, and sometimes they’re simple enough that you can memorize them. For example, you get three points for this, one point for that, and two points for this, then you add them up, and there’s a little table that translates those numbers into a risk. So, that’s an example of a risk score. And you can have other very simple models that are a series of logical conditions, like, ‘If the patient has this condition, or if they have this other condition, then it’s probably that.’ It’s a bunch of logical conditions that make up the interpretable machine learning model. So, these are actually formulas that almost look like someone made them up, but they were actually derived from data by an algorithm.”
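To make the two kinds of models Rudin describes concrete, here is a minimal sketch in Python. Every condition, point value, and risk number below is invented for illustration; real risk scores and rule lists are derived from data by an algorithm, as she notes.

def risk_score(prior_event: bool, high_blood_pressure: bool, age_over_60: bool) -> float:
    """A hypothetical risk score: integer points for a few conditions,
    summed, then translated into a risk estimate by a small lookup table."""
    points = 0
    points += 3 if prior_event else 0          # "three points for this"
    points += 2 if high_blood_pressure else 0  # "two points for this"
    points += 1 if age_over_60 else 0          # "one point for that"
    # The little table that translates the total into a risk.
    risk_table = {0: 0.05, 1: 0.10, 2: 0.20, 3: 0.35, 4: 0.55, 5: 0.75, 6: 0.90}
    return risk_table[points]

def rule_list(has_condition_a: bool, has_condition_b: bool) -> str:
    """A hypothetical rule list: a series of logical conditions checked in order."""
    if has_condition_a or has_condition_b:
        return "probably diagnosis X"
    return "probably not diagnosis X"

print(risk_score(prior_event=True, high_blood_pressure=False, age_over_60=True))  # 0.55

Both functions fit on an index card, which is precisely what makes their reasoning process easy for a human to check.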


Question 2: When would you need interpretable machine learning?

Time Cue: 1:45, Soundbite Duration: 0:26

“You need interpretable machine learning anytime you want to use AI for a high-stakes decision, or when you want to be able to troubleshoot it. Interpretability is a key component of ethics, trustworthiness, and fairness. In many cases, I don't think you could even have fairness without interpretability and transparency.”


Question 3: What are the challenges that stand in the way of taking advantage of the full potential of interpretable machine learning?

Time Cue: 2:11, Soundbite Duration: 1:01

“Well, there's definitely a cultural myth that black box predictive models must be more accurate than interpretable predictive models. People think that the black box AI is picking up on things that are so subtle that there's no way a human could ever understand them. While that could be true, the things black box AI picks up on in those cases usually stem from flaws in the dataset that you really don't want your model to depend on. For medical images, for example, an algorithm might pick up on some words in an image rather than the medical content of the image. So, it’s better if we can understand the reasoning process of the algorithm and use interpretable machine learning. And generally, we can create interpretable models with the same level of accuracy as the best possible black box model for a given dataset. And some of these interpretable machine learning models are so simple you can print them on an index card or even memorize them.”
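As a rough illustration of that claim, one could compare a small interpretable model against a black box ensemble on the same data. The Python sketch below uses scikit-learn and its bundled breast-cancer dataset purely as an assumed stand-in; these are not the datasets or models from the interview, and the comparison is printed rather than presumed.

from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# A small public dataset, standing in for "a given dataset."
X, y = load_breast_cancer(return_X_y=True)

# Interpretable model: a depth-3 decision tree, small enough to print and read.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)

# Black box model: a 500-tree ensemble whose internal reasoning goes uninspected.
forest = RandomForestClassifier(n_estimators=500, random_state=0)

for name, model in [("depth-3 tree", tree), ("random forest", forest)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy {scores.mean():.3f}")

If the two accuracies come out close, that is the pattern Rudin describes; when they do not, it is often a sign that the black box is exploiting quirks of the data rather than genuine signal.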


Question 4: What should the government and companies be considering now to capitalize on the potential of interpretable machine learning?

Time Cue: 3:18, Soundbite Duration: 0:33

“Well, for decisions that deeply affect people's lives, like AI models that determine people's freedom through parole or bail decisions, I think we should always use interpretable models. Also, for high-impact medical decisions, the models should pretty much always be interpretable, because there's no loss in accuracy from using an interpretable model. Like I said, I think people often underestimate the power of models that look too good to be true. But if they try them out, they might be surprised.”


Media Contact

Ashley Smith
Public Affairs Coordinator
INFORMS
Catonsville, MD
[email protected]
443-757-3578
