Interpretable Model Summaries Using the Wasserstein Distance.
Large-parameter statistical and machine learning models are widely used in many fields but often lack interpretability. This limits practitioners' ability to make informed decisions based on such models, and it can be especially challenging in multivariate analyses or Bayesian inference, where there is an entire distribution of predictions to summarize. In response to these challenges, we propose a new method that uses the Wasserstein distance to find low-dimensional linear models that approximate the predictions of complex multivariate models, effectively summarizing them in a way that prioritizes the preservation of their predictive distributions. These summaries can facilitate the communication and understanding of complex models by practitioners in various fields, and we provide diagnostic tools to assess their performance. We demonstrate our method on simulated data with different data generating processes, and we also apply it to a Bayesian additive regression tree model that predicts survival time for glioblastoma multiforme (GBM) patients. [ABSTRACT FROM AUTHOR]
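To make the core idea concrete, the following is a minimal Python sketch, not the authors' implementation: it fits a stand-in complex model, then searches for a linear summary on a small feature subset whose predictions minimize the 1-Wasserstein distance to the complex model's predictive distribution. The helper name `wasserstein_summary`, the use of a random forest as the complex model, and the Nelder-Mead optimizer are illustrative assumptions; the paper's actual optimization and diagnostic tools may differ.

```python
# Sketch: summarize a complex model's predictions with a low-dimensional
# linear model by minimizing the 1-Wasserstein distance between the two
# predictive distributions (assumed setup, not the published algorithm).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import wasserstein_distance
from sklearn.ensemble import RandomForestRegressor  # stand-in complex model

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

complex_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
f_hat = complex_model.predict(X)  # predictive distribution to be summarized


def wasserstein_summary(X, f_hat, features):
    """Fit a linear summary on a feature subset by minimizing the
    1-Wasserstein distance to the complex model's predictions."""
    Z = np.column_stack([np.ones(len(X)), X[:, features]])
    beta0 = np.linalg.lstsq(Z, f_hat, rcond=None)[0]  # least-squares warm start
    objective = lambda b: wasserstein_distance(f_hat, Z @ b)
    result = minimize(objective, beta0, method="Nelder-Mead")
    return result.x, result.fun


beta, w_dist = wasserstein_summary(X, f_hat, features=[0, 1])
print("summary coefficients:", beta.round(3))
print("Wasserstein distance between predictive distributions:", round(w_dist, 4))
```

Note that, unlike a squared-error projection, this objective compares the distributions of predictions rather than matching them point by point, which mirrors the abstract's emphasis on preserving the predictive distribution.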