Introduction to Responsible Machine Learning
Lecture 1 Additional Software Tools
Python:
- causalml
- interpret
- imodels
- PiML-Toolbox
- sklearn-expertsys
- skope-rules
- tensorflow/lattice

R:
- arules
- elasticnet
- gam
- glmnet
- quantreg
- rpart
- RuleFit

Python, R or other:
- h2o-3
- Rudin Group code
- xgboost
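
As a quick illustration of one of the Python tools listed above, here is a minimal sketch of fitting a glassbox explainable boosting machine with interpret. The synthetic data and settings are placeholders, not course materials.

```python
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import make_classification

# Synthetic stand-in data; swap in your own feature matrix and binary target.
X, y = make_classification(n_samples=1000, n_features=5, random_state=12345)

# Glassbox GAM-style model: one shape function per feature plus selected pairwise terms.
ebm = ExplainableBoostingClassifier(random_state=12345)
ebm.fit(X, y)

# Global explanation: plots the learned shape functions and term importances.
show(ebm.explain_global())
```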
Lecture 1 Additional Software Examples
- Building from Penalized GLM to Monotonic GBM (simple)
- Building from Penalized GLM to Monotonic GBM
- Simple Explainable Boosting Machine Example
- PiML Assignment 1 Example and simple requirements.txt
- Machine Learning for High-risk Applications: Use Cases (Chapter 6)
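
For orientation, the following is a minimal scikit-learn/XGBoost sketch of the idea behind the "Building from Penalized GLM to Monotonic GBM" examples above; the dataset, variable names, and hyperparameters are illustrative placeholders and the linked notebooks may use different libraries and data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
import xgboost as xgb

# Synthetic stand-in data; replace with your own modeling dataset.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           random_state=12345)

# Step 1: elastic net penalized logistic regression as an interpretable benchmark.
glm = LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5,
                         C=0.1, max_iter=5000)
glm.fit(X, y)

# Step 2: carry the sign of each GLM coefficient forward as a monotonicity
# constraint, so the boosted model cannot contradict the directionality of the
# simpler benchmark model.
constraints = tuple(int(np.sign(c)) for c in glm.coef_.ravel())

gbm = xgb.XGBClassifier(monotone_constraints=constraints, max_depth=3,
                        n_estimators=200, learning_rate=0.05)
gbm.fit(X, y)
```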
Lecture 1 Additional Reading
Introduction and Background:
- An Introduction to Machine Learning Interpretability
- Designing Inherently Interpretable Machine Learning Models
- Psychological Foundations of Explainability and Interpretability in Artificial Intelligence
- Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead

Explainable Machine Learning Techniques:
- Accurate Intelligible Models with Pairwise Interactions
- Elements of Statistical Learning, Chapters 3, 4, and 9
- Fast Interpretable Greedy-Tree Sums (FIGS)
- Interpretable Machine Learning, Chapter 5
- GAMI-Net: An Explainable Neural Network Based on Generalized Additive Models with Structured Interactions
- Neural Additive Models: Interpretable Machine Learning with Neural Nets
- A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing
- This Looks Like That: Deep Learning for Interpretable Image Recognition
- Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification