How much do we see? On the explainability of partial dependence plots for credit risk scoring


  • Gero Szepannek Stralsund University of Applied Sciences
  • Karsten Lübke FOM University of Applied Sciences


credit scoring, interpretable machine learning (IML), partial dependence plot (PDP), explainability


Risk prediction models in credit scoring have to fulfil regulatory requirements, one of which is the interpretability of the model. Unfortunately, many popular modern machine learning algorithms result in models that do not satisfy this business need, although research activity in the field of explainable machine learning has increased strongly in recent years. Partial dependence plots are among the most popular methods for model-agnostic interpretation of a feature’s effect on the model outcome, yet in practice they are usually applied without asking how much can actually be seen in such plots. For this purpose, this paper presents a methodology to analyse to what extent arbitrary machine learning models are explainable by partial dependence plots. The proposed framework provides both a visualisation and a measure that quantifies the explainability of a model on an understandable scale. A corrected version of the German credit data, one of the most popular data sets in this application domain, is used to demonstrate the proposed methodology.
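To make the central object of the paper concrete, the following is a minimal sketch of the standard partial dependence computation: the model's predictions are averaged over the data while the feature of interest is fixed at each value of a grid. The two-feature toy model and all names here are illustrative assumptions, not the authors' setup or data.

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """Classic PD definition: for each grid value, fix the chosen
    feature at that value for every row and average the predictions."""
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value                 # fix the feature of interest
        pd_values.append(predict(X_mod).mean())   # average over all observations
    return np.array(pd_values)

# Hypothetical scoring model on two features (a logistic surface),
# standing in for an arbitrary fitted machine learning model.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
predict = lambda X: 1 / (1 + np.exp(-(2 * X[:, 0] - X[:, 1])))

grid = np.linspace(-2, 2, 5)
pd_curve = partial_dependence(predict, X, feature=0, grid=grid)
```

Plotting `grid` against `pd_curve` yields the partial dependence plot for feature 0; here the curve increases with the grid value, reflecting that feature's positive effect in the toy model.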