The Heart of Bank Stress Testing

Stress testing has become an annual rite of passage in which banks prove to federal regulators that their models and data are sufficient for measuring and mitigating risk under economic stress scenarios ranging from standard to extreme. Early on, testing revealed some gaps in banks' reserves, as well as a few challenges with the tests themselves.

By now, however, banks and regulators have worked out most of the bugs, and few banks struggle to pass. Regulators have therefore decided to up the ante and increase the difficulty of the tests to ensure that banks do not become complacent. They have multiple tools for increasing the pressure on banks, including adjustments to the models and to the data requirements.

Prima facie, data does not seem terribly challenging, but in practice assembling it can be a huge undertaking. The data necessary for stress testing is vast, complicated, dispersed and, all too frequently, not in a usable electronic format. Gathering, verifying and loading bank data into a single usable system that is acceptable to the regulators and useful to the banks can be a Herculean task.

Banks are flooded with existing loan data spread across multiple legacy platforms, including commercial loan servicing, commercial loan origination, retail loan servicing, retail loan origination, commercial deposit servicing, retail deposit servicing and more. In addition, those groups are often broken down further by geography and legal entity, adding even more variables to the equation.
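To make the consolidation problem concrete, the sketch below shows one way such extracts might be normalized into a common schema. It is a minimal illustration in Python with pandas; the file names, column names and mappings are hypothetical, since every legacy platform exports its own layout.

```python
# A minimal sketch of consolidating loan extracts from separate servicing
# platforms into one schema. File names, column names, and mappings are
# hypothetical; real source systems vary widely.
import pandas as pd

# Each legacy platform exports its own column names for the same concepts.
COLUMN_MAPS = {
    "commercial_servicing.csv": {"LN_NO": "loan_id", "PRIN_BAL": "balance",
                                 "ORIG_DT": "origination_date"},
    "retail_servicing.csv":     {"AcctNum": "loan_id", "CurrBalance": "balance",
                                 "OpenDate": "origination_date"},
}

frames = []
for path, col_map in COLUMN_MAPS.items():
    df = pd.read_csv(path, dtype=str)
    df = df.rename(columns=col_map)[list(col_map.values())]
    df["source_system"] = path  # retain lineage for later audit and validation
    frames.append(df)

loans = pd.concat(frames, ignore_index=True)
loans["balance"] = pd.to_numeric(loans["balance"], errors="coerce")
loans["origination_date"] = pd.to_datetime(loans["origination_date"], errors="coerce")
```

Retaining a source-system column for every record is what later makes certification and validation tractable, because each data point can be traced back to the platform it came from.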

Compounding the situation is the challenge of dealing with data from acquired institutions. In 1980, there were approximately 18,000 banks in the U.S.; today there are 6,400, and that number is shrinking. An acquisition would seem like the ideal time to update data storage and retrieval, unifying systems in the process, but few banks ever actually did so. Those that did typically converted only the most current data, not the historical information that regulators are now requesting.

Once data is captured and identified, the elements must be validated, recalculated, updated and then integrated into a risk management system. Next, the data is reviewed and a multivariable loan stratification analysis is conducted to identify the risks inherent in the loan portfolios and to determine how each particular risk can be mitigated.

Another challenge arises when regulators request information that, in many cases, is not captured in any bank system, or that has been overwritten by subsequent updates. Loan origination systems, for instance, captured the borrower's credit score on the day of the loan application, but that score was rarely retained electronically. Regulators now require the original credit score to be maintained on the core banking system for the life of the loan, forcing banks to retrieve the original credit memo from the physical loan file to obtain the score.
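As a rough illustration of the validate-then-stratify workflow described above, the following Python sketch flags records that fail basic integrity checks (including a missing origination credit score) and then segments the surviving exposure by product, region and loan-to-value bucket. The field names, thresholds and bucket edges are assumptions for the example, not regulatory requirements.

```python
# A simplified illustration of the validate-then-stratify step. Field names,
# thresholds, and bucket edges are assumptions for the example only.
import pandas as pd

def validate(loans: pd.DataFrame) -> pd.DataFrame:
    """Flag records that fail basic integrity checks before stratification."""
    checks = pd.DataFrame(index=loans.index)
    checks["missing_credit_score"] = loans["orig_credit_score"].isna()
    checks["negative_balance"] = loans["balance"] < 0
    checks["ltv_out_of_range"] = ~loans["ltv"].between(0, 2.0)
    loans["valid"] = ~checks.any(axis=1)
    return loans

def stratify(loans: pd.DataFrame) -> pd.DataFrame:
    """Multivariable stratification: segment exposure by product, region,
    and loan-to-value bucket to surface concentrations of risk."""
    loans = loans[loans["valid"]].copy()
    loans["ltv_bucket"] = pd.cut(loans["ltv"], [0, 0.6, 0.8, 1.0, 2.0],
                                 labels=["<=60%", "60-80%", "80-100%", ">100%"])
    return (loans.groupby(["product", "region", "ltv_bucket"], observed=True)["balance"]
                 .agg(exposure="sum", loans="count"))
```

The point of separating validation from stratification is that any record failing a check can be sent back for remediation, such as retrieval of the physical credit memo, before it distorts the portfolio segmentation.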

Most loans possess between 120 and 250 core data elements that need to be collected, and numerous additional elements are derived from them. Once the core data elements are determined, they are collected historically (as defined at loan origination and for each of the previous nine quarters). Then protocols must be established to collect the required elements throughout the life of the loan. The process can become a massive data aggregation project, requiring hundreds of thousands of loan files to be prepared and scanned, and can involve multiple certifications and/or validations of the data.
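To illustrate the distinction between core and derived elements, and the nine-quarter look-back, here is a hedged Python sketch of how a loan record might be modeled. The element names and the completeness check are illustrative only; they are not drawn from any regulatory data dictionary.

```python
# A hedged sketch of "core" vs. "derived" elements, including the
# nine-quarter history window. Element names are illustrative, not a
# regulatory data dictionary.
from dataclasses import dataclass
from datetime import date

@dataclass
class QuarterObservation:
    as_of: date
    balance: float
    collateral_value: float
    days_past_due: int

@dataclass
class LoanRecord:
    loan_id: str
    origination_date: date
    orig_credit_score: int             # core element: kept for the life of the loan
    history: list[QuarterObservation]  # origination plus the previous nine quarters

    @property
    def current_ltv(self) -> float:
        """Derived element: computed from core elements rather than stored."""
        latest = max(self.history, key=lambda q: q.as_of)
        return latest.balance / latest.collateral_value

    def history_complete(self, required_quarters: int = 9) -> bool:
        """Protocol check: does the record meet the look-back requirement?"""
        return len(self.history) >= required_quarters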

Regulators want to make sure that banks monitor, evaluate and mitigate risk in order to avoid another financial crisis. Data is possibly the hardest thing to gather, but it is at the heart of stress testing and the key to reducing risk for financial institutions and investors.

Situs Financial Institutions Group has validated and populated 10 million data points in CCAR and DFAST projects. To learn more about our capabilities, click here.