When America’s major banks created executive positions to reduce exposure to financial risk, the intent was similar to an employer designating a fire warden to protect the workplace against smoke and flames.
But in reality, the result has been closer to handing the fire warden job over to a pyromaniac.
“The hiring of chief risk officers was expected to reduce risky behaviour and mitigate the likelihood of insolvency, or at the very least protect bank executives from going to jail,” said Assistant Professor Kim Pernell, Canada Research Chair in Economic Sociology at U of T and lead author of a new study published in American Sociological Review. “The move instead led to increases in the kind of risky behaviour that helped lead Wall Street into the 2008 financial crash, the biggest since the Great Depression.”
Pernell, along with researchers at the University of Illinois at Urbana-Champaign and Harvard University, set out to examine why America’s biggest banks became so heavily exposed to high-risk derivatives in the lead-up to the 2008 financial crisis. Much like a bet on the future performance of something else, these financial instruments are contracts between two parties whose value derives from underlying assets such as stocks, bonds, commodities and global currencies.
Certain types of derivatives expose banks to greater risk than others, including credit risk, where the other party to a contract fails to pay, and liquidity risk, where the bank is unable to unwind a contract at its expected value.
After examining the derivatives holdings of the 157 largest banks between 1995 and 2010, Pernell and her colleagues found that banks with a chief risk officer (CRO) were substantially more likely to use the riskiest kinds of financial derivatives – over-the-counter options, swaps or credit derivatives. But they were not more likely to use safer, more traditional derivatives, like futures and forwards, which have traded in U.S. financial markets for centuries.
Analysis showed JP Morgan, for example, held credit derivatives based on $366-million worth of underlying assets in 2002, the year it promoted a CRO. That ballooned to more than $1 billion the next year and was valued at more than $8 billion in 2008, the year of the crash. Bank of America and Wells Fargo also saw dramatic increases after taking on CROs.
“The trend wasn’t just driven by the banks’ enthusiasm for profits, however. The federal government’s efforts to dampen risk in the early 2000s backfired by encouraging banks to elevate these champions of risk into positions of power in the first place,” said Pernell of the Faculty of Arts & Science.
CROs became popular with American banks after the U.S. government introduced several new regulations and laws to dampen risk-taking in the early 2000s, a reaction to a series of scandals and risk management failures at the turn of the century.
“The punchline here is that many banks responded to these laws by moving their risk experts up to the C-suite, making them executives and giving them a lot more power,” said Pernell. “CEOs did this to show they were complying with the law and taking risk seriously. However, the new executives encouraged banks to increase their exposure to the riskiest kinds of derivatives in the lead-up to the crisis.”
According to Pernell, a number of factors could have been at play, not the least of which was a shift in the professional agenda of risk managers.
Risk management experts first emerged in the 1980s to help banks manage the many crises of that turbulent economic period. But they saw their power decline in the 1990s, when the economy was booming and fears of catastrophe waned.
“As a result, risk managers rebranded themselves to demonstrate their value to bank leaders,” Pernell said. “Instead of emphasizing their ability to minimize risk, they played up their ability to maximize profits. This new agenda was incompatible with reducing risk – and it led risk managers to start promoting riskier behavior in the name of ‘maximizing risk-adjusted returns.’”
By the time risk experts were elevated into new positions as CROs, they were already primed to see derivatives as the right tools for the job.
Pernell notes that appointing CROs also likely encouraged other bank officers below them to let down their guard when taking risks. She suggests that creating a new, high-level position to oversee risk management signaled to everyone else that risk was already being effectively handled elsewhere, which may have reduced the incentive for managers of other bank departments to police their own risky behavior.
The findings hold an important lesson for corporate leaders who want to avoid repeating the risk-management debacles of the past: even the most sophisticated risk-modeling techniques won’t keep banks out of trouble if CROs believe their duty to maximize returns trumps their duty to minimize catastrophe.
And CROs have only become more popular in American banking since the credit crisis.
“This trend is worrisome,” said Pernell. “While CROs have largely turned away from risky derivatives since the credit crisis, their broader agenda of maximizing risk-adjusted returns has not changed. If policy makers and corporate leaders continue to delegate oversight for risk management to actors who seek to optimize risk, they shouldn’t be surprised when financial disasters and scandals follow.”
Though the researchers examined only American banks and financial institutions, Pernell hopes to extend the analysis to Canada next.
“The Canadian case presents a puzzle,” she said. “Leading up to the crisis, many Canadian banks appointed CROs, yet banks in Canada didn’t get into nearly the same trouble with derivatives.
“The question is why not. Did Canadian CROs have a different professional agenda than American CROs, or did CEOs and institutional investors in Canada just do a better job of keeping them in line?”