Introduction to Responsible Machine Learning
Lecture 2 Additional Software Tools
Python:
allennlp
alibi
anchor
DiCE
interpret
lime
shap
PiML-Toolbox
tf-explain
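For orientation, here is a minimal sketch (not from the course materials) of one of the tools above: a local explanation from lime for a scikit-learn model, assuming lime and scikit-learn are installed and using synthetic data in place of a real dataset.

```python
# Minimal lime sketch: explain one prediction of a GBM locally.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# synthetic data stands in for a real dataset
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# lime fits a weighted local linear surrogate around one prediction
explainer = LimeTabularExplainer(X, mode="regression")
exp = explainer.explain_instance(X[0], model.predict, num_features=5)
print(exp.as_list())  # (feature rule, local coefficient) pairs
```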
R:
ALEPlot
DALEX
ICEbox
iml
Model Oriented
pdp
shapFlex
vip
Python, R, or other:
h2o-3
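As a rough sketch (not an official h2o-3 example), the library can be driven from Python to fit a monotonic GBM, assuming the h2o package is installed and a local H2O cluster (which needs Java) can start; the data and column names below are illustrative only.

```python
# Rough sketch: a monotonic GBM in h2o-3 from Python.
import h2o
import pandas as pd
from h2o.estimators import H2OGradientBoostingEstimator
from sklearn.datasets import make_regression

h2o.init()  # starts or connects to a local H2O cluster

# illustrative synthetic data loaded into an H2OFrame
X, y = make_regression(n_samples=500, n_features=3, random_state=0)
df = pd.DataFrame(X, columns=["x0", "x1", "x2"])
df["y"] = y
frame = h2o.H2OFrame(df)

# constrain the learned response to be non-decreasing in x0
gbm = H2OGradientBoostingEstimator(monotone_constraints={"x0": 1}, seed=0)
gbm.train(x=["x0", "x1", "x2"], y="y", training_frame=frame)
print(gbm.model_performance())
```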
Lecture 2 Additional Software Examples
Global and Local Explanations of a Constrained Model
Building from Penalized GLM to Monotonic GBM
Monotonic XGBoost models, partial dependence, individual conditional expectation plots, and Shapley explanations (see the first sketch after this list)
Decision tree surrogates, LOCO, and ensembles of explanations (see the second sketch after this list)
Machine Learning for High-Risk Applications: Use Cases (Chapter 6)
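The two sketches below are illustrative companions to the examples above, not the course notebooks themselves. First, a minimal monotonic XGBoost model explained with partial dependence, ICE curves, and Shapley values, assuming xgboost, shap, scikit-learn, and matplotlib are installed; the synthetic data and constraint signs are arbitrary.

```python
# Sketch: monotonic XGBoost + partial dependence/ICE + SHAP.
import matplotlib.pyplot as plt
import shap
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=500, n_features=3, random_state=0)

# predictions must be non-decreasing in feature 0 and non-increasing
# in feature 2; feature 1 is left unconstrained
model = xgb.XGBRegressor(monotone_constraints="(1,0,-1)", random_state=0)
model.fit(X, y)

# partial dependence and ICE curves for the constrained feature
PartialDependenceDisplay.from_estimator(model, X, features=[0], kind="both")
plt.show()

# local Shapley explanations for the same model
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values[0])  # per-feature attributions for the first row
```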
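Second, a minimal global decision tree surrogate, one simple reading of the surrogate technique named above: a shallow tree is fit to an opaque GBM's predictions so that its printed splits approximate the GBM's learned behavior, using only scikit-learn.

```python
# Sketch: global decision tree surrogate for an opaque GBM.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=500, n_features=3, random_state=0)

# the opaque model to be explained
gbm = GradientBoostingRegressor(random_state=0).fit(X, y)

# surrogate targets are the GBM's predictions, not the original labels
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, gbm.predict(X))

# human-readable rules that approximate the GBM
print(export_text(surrogate, feature_names=["x0", "x1", "x2"]))
```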
Lecture 2 Additional Reading
Introduction and Background:
On the Art and Science of Explainable Machine Learning
Proposed Guidelines for the Responsible Use of Explainable Machine Learning
Post-hoc Explanation Techniques:
A Unified Approach to Interpreting Model Predictions
Anchors: High-Precision Model-Agnostic Explanations
Elements of Statistical Learning, Section 10.13
Extracting Tree-Structured Representations of Trained Networks
Interpretability via Model Extraction
Interpretable Machine Learning, Chapters 6 and 7
Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation
Towards Better Understanding of Gradient-based Attribution Methods for Deep Neural Networks
Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models
“Why Should I Trust You?” Explaining the Predictions of Any Classifier
Problems with Post-hoc Explanation:
General Pitfalls of Model-Agnostic Interpretation Methods
Limitations of Interpretable Machine Learning Methods
When Not to Trust Your Explanations