Banks rely on a vast number of mathematical models across the organization. Trading desks use sophisticated algorithms to value derivatives transactions, to compute the XVA charges on a new deal, and so on. The middle office uses mathematical models to estimate economic capital through an expected shortfall or value-at-risk computation, while ALM leverages models to analyze the prepayment behavior of a mortgage portfolio or the credit risk in a pool of consumer loans. On top of these more traditional use cases, banks are setting up transversal data analytics teams, which build data-driven models using machine learning to improve customer intelligence, to build credit models, and much more.
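To make the middle-office example concrete, a historical value-at-risk and expected shortfall computation can be sketched in a few lines. The function name `var_es` and the simulated P&L series below are illustrative assumptions, not any bank's actual implementation.

```python
import numpy as np

def var_es(pnl, alpha=0.99):
    """Historical VaR and expected shortfall at confidence level alpha.

    Losses are reported as positive numbers: VaR is the alpha-quantile
    of the loss distribution, ES is the average loss beyond VaR.
    """
    losses = np.sort(-np.asarray(pnl))       # flip P&L into losses, ascending
    var = np.quantile(losses, alpha)         # alpha-quantile of the losses
    es = losses[losses >= var].mean()        # mean loss in the tail beyond VaR
    return var, es

# Simulated daily P&L, purely for illustration
rng = np.random.default_rng(42)
pnl = rng.normal(loc=0.0, scale=1.0e6, size=5000)
var, es = var_es(pnl, alpha=0.99)
```

By construction the expected shortfall is at least as large as the VaR at the same level, since it averages only the losses in the tail beyond that quantile.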
These models, whether built by in-house quantitative teams or bought from an analytics vendor, are often black boxes for the decision-making layer of the corporation. Moreover, given the sheer volume of models, it is very hard to tell which algorithms carry the most risk. As a consequence, model development is often prioritized using ad hoc rules and heuristics, while validation cycles simply loop through all models sequentially.