
Papers on Understanding and Explaining Neural Networks

This is an ongoing attempt to consolidate interesting efforts in the area of understanding, interpreting, explaining, and visualizing neural networks.


1. GUI tools

  • Deep Visualization

2. Feature Visualization / Activation Maximization

  • DGN-AM
  • PPGN
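The core idea behind activation-maximization methods such as DGN-AM and PPGN is gradient ascent on the input itself: iteratively update an input so that a chosen unit's activation grows. A minimal sketch on a toy linear unit (the weights `w`, learning rate, and L2 penalty are illustrative assumptions; the papers above backpropagate through a trained CNN and use much stronger image priors):

```python
import numpy as np

# Toy "unit" to visualize: activation = w . x.
# Activation maximization = gradient ascent on the input x,
# here with a small L2 penalty as a stand-in for an image prior.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # fixed weights of the unit (assumed, random toy values)
x = np.zeros(8)          # start from a blank "image"
lr, l2 = 0.1, 0.01

for _ in range(200):
    grad = w - l2 * x    # d/dx (w.x - (l2/2) * ||x||^2)
    x += lr * grad

activation = w @ x       # grows as x aligns with the unit's preferred direction
```

In a real network the analytic gradient is replaced by backprop through the model, and the prior (here plain L2) is what distinguishes the methods: DGN-AM and PPGN constrain the optimization with a learned generator instead.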

3. Heatmap / Attribution

  • Learning how to explain neural networks: PatternNet and PatternAttribution (pdf)

Layer-wise Relevance Propagation

  • Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation (pdf)
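Heatmap/attribution methods assign each input feature a share of the output score; the simplest baseline is vanilla gradient saliency, the absolute gradient of the score with respect to the input. A minimal sketch on a hand-written two-layer ReLU net (all weights and the input are made-up toy values; note that LRP and PatternAttribution use different propagation rules than the raw gradient):

```python
import numpy as np

# Tiny two-layer net: y = v . relu(W x), with toy hand-picked weights.
W = np.array([[1.0, -1.0,  0.5, 0.0, 2.0, -0.5],
              [0.2,  0.3, -1.0, 1.0, 0.0,  0.0]])
v = np.array([1.0, -2.0])
x = np.ones(6)

h = W @ x
relu_mask = (h > 0).astype(float)
y = v @ np.maximum(h, 0.0)

# Vanilla saliency: |dy/dx| = |W^T (v * relu'(h))|,
# normalized to [0, 1] so it can be rendered as a heatmap over the input.
saliency = np.abs(W.T @ (v * relu_mask))
heatmap = saliency / saliency.max()
```

The same backward pass is the starting point that LRP and PatternNet/PatternAttribution modify: instead of the plain gradient, they redistribute relevance layer by layer under their own conservation or signal-estimation rules.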

4. Bayesian

  • Yang, S. C. H., & Shafto, P. Explainable Artificial Intelligence via Bayesian Teaching. NIPS 2017 (pdf)

5. Distilling DNNs into more interpretable models