“What Should I Do,” Really…

At this morning’s keynote, Bill Groves, CDAO at Honeywell, said that people (executives, users of software tools, etc.) don’t want to hear what might happen; they want to hear what they should do. That is, they don’t typically want to hear the output of a predictive model — it’s 70% likely to rain — they want to hear which action best covers their bases — bring an umbrella.

I agree with this, but want to add a cautionary note. Here’s the situation you don’t want to be in: I tell you to bring an umbrella a couple of days in a row, but it doesn’t rain. You start to question my advice. You say, “hey, why’d you tell me to bring an umbrella?” I shrug. You lose your trust in me and stop listening to my guidance.

To earn and retain trust, an analytics system that makes recommendations needs two things. First, it needs to be honest about its uncertainty. If I’d said “you should bring an umbrella, although you may or may not need it today,” you’d gain some understanding of how reliable my recommendations are. Especially if on subsequent days I say “you won’t need an umbrella,” or “you should bring your big golf umbrella — you’re gonna need it.”
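To make that concrete, here’s a minimal sketch of one way to attach an honest statement of uncertainty to a recommendation. The function name and the probability thresholds are hypothetical choices of mine, not anything from the talk; the phrasings come from the umbrella example above.

```python
# A minimal sketch: map a model's predicted rain probability to a
# recommendation that carries its own hedge. The thresholds below are
# illustrative, not calibrated to any real forecasting system.
def umbrella_advice(rain_probability: float) -> str:
    """Turn a predicted probability of rain into a hedged recommendation."""
    if rain_probability >= 0.9:
        return "Bring your big golf umbrella -- you're gonna need it."
    if rain_probability >= 0.5:
        return "Bring an umbrella, although you may or may not need it today."
    if rain_probability >= 0.2:
        return "You probably won't need an umbrella, but there's some chance of rain."
    return "You won't need an umbrella."

print(umbrella_advice(0.7))
# -> "Bring an umbrella, although you may or may not need it today."
```

The point of the structure isn’t the exact cutoffs; it’s that every recommendation the system emits carries its own signal of how confident it is.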

Second, your analytics system needs the ability to explain its predictions, not in the abstract, but in the specific. It’s not all that useful if my response to “why should I bring an umbrella today?” is “my recommendations are based on measurements of air pressure and humidity in the central plains, along with stochastic models of shifts in air masses.” Especially if I give exactly that response every time you ask the question! Instead, I need to say something specific to today’s forecast — “a cold front is coming in, and we should get some thunderstorms this afternoon.”

In the context of recommendations driven by a predictive model, that means your explanations are not what typically comes out of “variable importance” measures, such as the coefficients of a linear model. Instead, your system needs to identify what changes in the world would have led to a different recommendation. A relatively new model-explanation system, LIME (Local Interpretable Model-agnostic Explanations), does this. When you ask it why it thinks it’s going to rain, you get an answer back along the lines of: “if the air pressure had been higher, or if the humidity had been lower, I wouldn’t have predicted rain.”
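To show what this looks like in practice, here’s a minimal sketch using the `lime` Python package. The rain-prediction model, its two features, and the synthetic training data are all invented for this illustration; only the `LimeTabularExplainer` API is the real thing.

```python
# A sketch of LIME explaining a single rain prediction. The model and
# data are synthetic stand-ins for a real forecasting system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical training data: rain is likelier at low pressure and
# high humidity.
X = np.column_stack([
    rng.normal(1013, 8, 1000),   # air pressure (hPa)
    rng.uniform(20, 100, 1000),  # relative humidity (%)
])
y = ((X[:, 0] < 1010) & (X[:, 1] > 60)).astype(int)  # 1 = rain

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["pressure_hpa", "humidity_pct"],
    class_names=["dry", "rain"],
    mode="classification",
)

# Explain one day's prediction: LIME perturbs the inputs, fits a simple
# local model around this instance, and reports which feature values
# pushed the prediction toward "rain".
today = np.array([1005.0, 85.0])
explanation = explainer.explain_instance(today, model.predict_proba, num_features=2)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
# Output along the lines of "pressure_hpa <= 1007.50: +0.35", i.e. today's
# low pressure is what drove the rain call -- had it been higher, the
# prediction would have shifted toward "dry".
```

The weights aren’t literally counterfactuals, but reading them as “this feature value pushed the prediction this way” gets you the day-specific explanation the forecast needs.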

So, when building models that help people make great decisions, yes — listen to what they need, and build that. But don’t stop there. Think about what it’ll take to earn and maintain trust, and build in the honesty and explanations that will help in the inevitable situations when the recommendations are imperfect.


For more on this topic, see a blog post I wrote recently that digs into similar issues.