Recently, ensemble-based machine learning models have been widely adopted and have demonstrated their effectiveness in bankruptcy prediction. However, these algorithms often function as black boxes, making it difficult to understand how they generate forecasts. This lack of transparency has led to growing interest in interpretability methods within artificial intelligence research.
In this paper, we assess the predictive performance of Random Forest, LightGBM, XGBoost, and NGBoost (Natural Gradient Boosting for probabilistic prediction) on French firms across various industries, over forecasting horizons of one to five years. We then apply Shapley Additive Explanations (SHAP), a model-agnostic interpretability technique, to explain XGBoost, one of the best-performing models in our study. SHAP quantifies the contribution of each feature to the model’s predictions, enabling a clearer understanding of how financial and macroeconomic factors influence bankruptcy risk. Moreover, it explains individual predictions, making black-box models more usable in credit risk management.
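The principle behind SHAP can be illustrated with a minimal, self-contained sketch: exact Shapley values computed for a toy scoring function, where each feature's contribution is its average marginal effect over all feature coalitions, with absent features set to baseline values. This is not the paper's pipeline (which pairs the `shap` library with a trained XGBoost model); the feature names and coefficients below are purely hypothetical.

```python
from itertools import combinations
from math import factorial

# Toy black-box "model" over three hypothetical features
# (stand-ins for financial and macroeconomic indicators).
def model(leverage, liquidity, gdp_growth):
    return 2.0 * leverage - 1.5 * liquidity - 0.5 * gdp_growth

def shapley_values(f, x, baseline):
    """Exact Shapley values: each feature's weighted average marginal
    contribution over all coalitions, evaluating f with features
    outside the coalition fixed at their baseline values."""
    names = list(x)
    n = len(names)

    def value(subset):
        args = {k: (x[k] if k in subset else baseline[k]) for k in names}
        return f(**args)

    phi = {}
    for i in names:
        others = [k for k in names if k != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi[i] = total
    return phi

x = {"leverage": 0.8, "liquidity": 0.2, "gdp_growth": 0.01}
base = {"leverage": 0.5, "liquidity": 0.5, "gdp_growth": 0.02}
phi = shapley_values(model, x, base)

# Local accuracy: the contributions sum to f(x) - f(baseline).
assert abs(sum(phi.values()) - (model(**x) - model(**base))) < 1e-9
```

For this linear toy model the Shapley value of each feature reduces to its coefficient times its deviation from the baseline; for tree ensembles such as XGBoost, the `shap` library computes the same quantities efficiently with TreeSHAP instead of enumerating all coalitions.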