Interpreting Deep Neural Networks
However powerful today's neural networks have become, their opacity still limits their use in safety-critical domains like medicine and law: we cannot reliably explain why they make the predictions they do. How can we better interpret these deep learning models? And can we optimize them directly to be more understandable?