
What’s so important about interpretability in machine learning?
It’s a poorly kept secret that we lack insight into how complex machine learning models like neural networks make decisions. We see the data that goes in, the computation that runs, and the results that come out. But in the middle, where we would want a chain of reasoning like a human could give to explain a decision, there’s only a black box. Neither data scientists nor the models themselves can explain “why” a model chose output A rather than output B.
What does it matter whether we have an understandable explanation for why a machine learning model delivers a specific result? For example, when diagnosing whether a patient has cancer, isn’t it enough that the model is accurate according to rigorous testing? I’ll look deeper into the implications of interpretability in future blog posts.