Probabilistic Programming and Bayesian Inference for Time Series Analysis and Forecasting in Python
July 28, 2020
Professor Judea Pearl commented: “Speaking about ‘explainable AI’, this paper shows that, even in classification tasks, and even after agreeing on a Bayesian Network classifier, answering ‘why’ is not a trivial matter.”
Recent work has shown that some common machine learning classifiers can be compiled into Boolean circuits that have the same input-output behavior. We present a theory for unveiling the reasons behind the decisions made by Boolean classifiers and study some of its theoretical and practical implications. We define notions such as sufficient, necessary, and complete reasons behind decisions, in addition to classifier and decision bias. We show how these notions can be used to evaluate counterfactual statements such as “a decision will stick even if … because … .” We present efficient algorithms for computing these notions, based on recent advances in tractable Boolean circuits, and illustrate them using a case study.
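To make the notion of a sufficient reason concrete, here is a minimal brute-force Python sketch, not the paper's circuit-based algorithms: the toy classifier, function names, and the enumeration strategy are illustrative assumptions. A sufficient reason is a minimal subset of an instance's characteristics that fixes the decision no matter how the remaining features are set, and a characteristic shared by every sufficient reason is necessary for the decision.

```python
from itertools import combinations, product

N = 3  # number of features in the toy classifier

def classifier(x):
    """Toy Boolean classifier (hypothetical): (x0 AND x1) OR (NOT x0 AND x2)."""
    x0, x1, x2 = x
    return (x0 and x1) or (not x0 and x2)

def is_sufficient(partial, decision):
    """A partial assignment is sufficient if every completion yields `decision`."""
    free = [i for i in range(N) if i not in partial]
    for bits in product([False, True], repeat=len(free)):
        x = dict(partial)
        x.update(zip(free, bits))
        if classifier(tuple(x[i] for i in range(N))) != decision:
            return False
    return True

def sufficient_reasons(instance):
    """Enumerate minimal subsets of the instance's characteristics that
    fix the decision regardless of the remaining features."""
    decision = classifier(instance)
    reasons = []
    for size in range(N + 1):  # smallest subsets first, so hits are minimal
        for subset in combinations(range(N), size):
            if any(set(r) <= set(subset) for r in reasons):
                continue  # a smaller reason is contained in it: not minimal
            if is_sufficient({i: instance[i] for i in subset}, decision):
                reasons.append(subset)
    return decision, reasons

instance = (True, True, False)
decision, reasons = sufficient_reasons(instance)
print("decision:", decision)
for r in reasons:
    print("sufficient reason:", {f"x{i}": instance[i] for i in r})

# A characteristic appearing in every sufficient reason is necessary:
# without it, the decision is no longer guaranteed.
if reasons:
    necessary = set.intersection(*map(set, reasons))
    print("necessary characteristics:", {f"x{i}": instance[i] for i in necessary})
```

This exhaustive enumeration is exponential in the number of features; the paper's contribution is performing such computations efficiently on tractable Boolean circuit representations of the classifier.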