AIML Special Presentation: Explaining the Uncertain - Stochastic Shapley values for Gaussian process models

In the rapidly evolving field of machine learning, it is important to quantify model uncertainty and explain algorithm decisions, especially in safety-critical domains such as healthcare. In this talk, Dr Chau presented GP-SHAP, a novel approach to explaining Gaussian processes. The method is based on the popular solution concept of Shapley values, extended to stochastic cooperative games, resulting in explanations that are themselves random variables.
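
As a refresher, for a cooperative game with player set N and value function v, the classical Shapley value attributes to feature i the amount

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr).
```

In the stochastic cooperative games considered here, the value function v(S) is itself a random variable (induced, for instance, by the GP posterior over the prediction), so each attribution inherits that randomness rather than being a single number.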

GP-SHAP's explanations satisfy favourable axioms similar to those of standard Shapley values and possess a tractable covariance function across features and data observations. This covariance allows explanation uncertainties to be quantified and statistical dependencies between explanations to be studied. The framework is further extended to the problem of predictive explanation: a Shapley prior is placed over the explanation function to predict Shapley values for new data based on previously computed ones. This work was accepted at NeurIPS 2023 as a spotlight paper.
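
To make the idea of random explanations concrete, below is a minimal sketch, not the GP-SHAP algorithm itself: it approximates the distribution of Shapley values by Monte Carlo, sampling functions from a scikit-learn GP posterior and computing exact Shapley values for each sample under a simple marginal value function. GP-SHAP, by contrast, obtains the explanation's covariance in tractable closed form; the toy data, background set, and variable names below are assumptions made purely for illustration.

```python
# Illustrative sketch only: Monte Carlo stochastic Shapley values for a GP.
# Not the GP-SHAP implementation; all data and names here are made up.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy regression data with d = 3 features.
d = 3
X = rng.normal(size=(60, d))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=60)
gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2).fit(X, y)

x_star = X[0]                                    # point to be explained
background = X[rng.choice(len(X), size=20, replace=False)]
m = len(background)

# Every coalition S of features and the evaluation points it requires:
# background rows with the features in S overwritten by x_star (a marginal,
# i.e. interventional, value function).
subsets = [frozenset(S) for r in range(d + 1) for S in combinations(range(d), r)]
Z_blocks = []
for S in subsets:
    Z = background.copy()
    Z[:, list(S)] = x_star[list(S)]
    Z_blocks.append(Z)
Z_all = np.vstack(Z_blocks)                      # shape (2^d * m, d)

# Draw joint posterior function samples at every evaluation point at once,
# so each column of F is one coherent function sampled from the GP posterior.
n_draws = 200
F = gp.sample_y(Z_all, n_samples=n_draws, random_state=1)   # (2^d * m, n_draws)

# Value of each coalition, per posterior sample: average prediction over the
# background rows with the coalition's features fixed to x_star.
V = {S: F[k * m:(k + 1) * m].mean(axis=0) for k, S in enumerate(subsets)}

# Exact Shapley values for each posterior sample; the spread across samples
# illustrates explanations that are random variables.
phi = np.zeros((n_draws, d))
for i in range(d):
    others = [j for j in range(d) if j != i]
    for r in range(d):
        for S in combinations(others, r):
            S = frozenset(S)
            w = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
            phi[:, i] += w * (V[S | {i}] - V[S])

print("mean explanation:", phi.mean(axis=0))
print("covariance of explanations across features:\n", np.cov(phi.T))
```

The sample covariance printed at the end is the quantity that, in GP-SHAP, is available analytically rather than by sampling, and can be read across features (dependencies between attributions) or across data observations.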
