Explaining your model's predictions is crucial. Historically, this meant accepting a trade-off between accuracy and interpretability.
But now you can use LIME, an explanation technique proposed by Ribeiro et al. in 2016. It learns an interpretable model locally around each individual prediction, so you can explain any black-box classifier. More details are in the original paper: https://arxiv.org/abs/1602.04938.
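To make this concrete, here is a minimal sketch using the open-source `lime` package on tabular data; the dataset and classifier are just placeholders, since LIME only needs access to the model's `predict_proba`:

```python
# Minimal LIME sketch, assuming `lime` and scikit-learn are installed
# (pip install lime scikit-learn).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train any black-box classifier; LIME treats it as opaque.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Build the explainer from training-data statistics.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction: LIME perturbs the instance, queries the model,
# and fits a local interpretable (linear) model around that point.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # per-feature weights of the local model
```

The key idea is that the linear model is only faithful near the chosen instance, which is exactly what lets LIME stay interpretable without constraining the underlying model.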