What caused the global financial crisis? And how can the United States avoid a repeat? Those questions have sparked endless handwringing among economists, policymakers, financiers, and voters over the last decade. Little wonder: the crisis not only entailed the worst financial shock and recession in the United States since 1929; it also shook the country’s global reputation for financial competence.

Before the crisis, Wall Street seemed to epitomize the best of twenty-first-century finance. The United States had the most vibrant capital markets in the world. It was home to some of the most profitable banks; in 2006 and early 2007, Goldman Sachs’ return on equity topped an eye-popping 30 percent. American financiers were unleashing dazzling innovations that carried newfangled names such as “collateralized debt obligations,” or CDOs. The financiers insisted that these innovations could make finance not only more effective but safer, too. Indeed, Wall Street seemed so preeminent that in 2003, when I published a book about the Japanese banking crisis, Saving the Sun, I presumed that one of the ways to “fix” Japanese finance was to make it more American.

Within five years, this supposed success had been reduced to ashes. The brilliant innovations with strange abbreviations, it turned out, had contributed to a massive credit bubble. When it burst, investors around the world suffered steep losses, mortgage borrowers were tossed out of their homes, and the value of those once mighty U.S. banks shriveled as markets froze and asset prices tumbled. Instead of a beacon of the brilliance of modern finance, by 2008 the United States seemed to be a global scourge.

Why? Numerous explanations have been offered in the intervening years: the U.S. Federal Reserve kept interest rates too low; Asia’s savings glut drove up the U.S. housing market; the banks had captured regulators and politicians in Washington; mortgage lenders made foolish loans; the credit-rating agencies willfully downplayed risks.

All these explanations are true. But there is another, less common way of looking at the financial crisis that also offers insight: anthropologically. Just as psychologists believe that it is valuable to consider cognitive biases when trying to understand people, anthropologists study half-hidden cultural patterns to understand what makes humans tick. That often entails examining how people use rituals or symbols, but it can also involve looking at the meaning of the words they use. And although financiers themselves do not spend much time thinking about the words they toss around each day, those words can be distinctly revealing.

Consider “finance,” “credit,” and “bank.” Today, those terms are usually associated with abstract concepts involving markets and money, but their historical roots, or etymology, are rather different. “Finance” originates from the Old French word finer, meaning “to end,” in the sense of settling a dispute or debt, implying that finance is a means to an end. “Credit” comes from the Latin credere, meaning “to believe.” And “bank” hails from the Old Italian word banca, meaning “bench” or “table,” since moneylenders used to ply their trade at tables in the market, talking to customers or companies. “Company” also has an interesting history: it comes from the Latin companio, meaning “with bread,” since companies were, in essence, people who dined together.

All of this may sound like a historical curiosity, best suited to Trivial Pursuit. But the original senses should not be ignored, since they reveal historical echoes that continue to shape the culture of finance. Indeed, thinking about the original meanings of “finance,” “credit,” and “bank”—which together describe banking as a means to an end, carried out with trust, by social groups—helps explain what went wrong with American finance in the past and what might fix it in the future.

FINANCE

If you want to understand the word “finance,” a good place to start is not with words but with some extraordinary numbers compiled by the economists Thomas Philippon and Ariell Reshef on a topic dear to bankers’ hearts: their pay. After the crisis, Philippon and Reshef set out to calculate how bankers’ pay in the United States had fluctuated over the years relative to the pay of professionals outside finance, such as doctors and engineers. They found that in the early twentieth century—before the Roaring Twenties—financiers earned around 1.5 times as much as other educated professionals, a ratio the boom of the 1920s pushed up to almost 1.7. After the Great Depression hit, the ratio fell, and it stayed around 1.1—almost parity—during the postwar years. But it soared again after a wave of deregulation in the late 1970s, hitting another peak of 1.7 in 2006—just before the crash.

If you show these statistics to people outside finance, they sometimes blame the latest uptick in bankers’ pay on greed: pay rose when the markets surged, the argument goes, because financiers were skimming profits. If you show them to financiers (as I often have), they usually offer another explanation for the recent surge: skill. Wall Street luminaries tend to think they deserve higher pay because finance now requires greater technical competence.

In truth, both explanations are correct: as bankers’ pay has swelled, the financial sphere has exploded in size and complexity, enabling financiers to skim more profits but also requiring greater skill to manage it. In the United States in the immediate postwar decades, the financial sector accounted for between ten and 15 percent of all business profits and around 3.5 percent of GDP. Subject to tight government controls, the industry was more akin to a sleepy utility than a sphere of aggressive profit seeking. By the early years of this century, the economic footprint of finance had more than doubled: it accounted for almost 30 percent of all business profits and nearly eight percent of GDP. Deregulation had unleashed a frenzy of financial innovation.

One of these innovations was derivatives, financial instruments whose value derives from an underlying asset. Derivatives enabled investors to insure themselves against risks—and gamble on them. It was as if people were placing bets on a horse race (without the hassle of actually owning a horse) and then, instead of merely collecting their winnings, creating another market in which they could trade the betting slips themselves. Another new tool was securitization, or the art of slicing and dicing loans and bonds into small pieces and then reassembling them into new packages (such as CDOs) that could be traded by investors around the world. The best analogy here is culinary: think of a restaurant that lost interest in serving steaks and started offering up sausages and sausage stew.

There were (and are) many benefits to all this innovation. As finance grew, it became easier for consumers and companies to get loans. Derivatives and securitization allowed banks to protect themselves against the danger of concentrated defaults—borrowers all going bust in one region or industry—since the risks were shared by many investors, not just one group. These tools also enabled investors to put their money into a much wider range of assets, thus diversifying their portfolios. Indeed, financiers often presented derivatives and securitization as the magic wands that would conjure the Holy Grail of free-market economics: an entirely liquid world in which everything was tradable. Once that was achieved, the theory went, the price of every asset would accurately reflect its underlying risk. And since the risks would be shared, finance would be safer.

It was a compelling sales pitch, but a deeply flawed one. One problem was that derivatives and securitization were so complex that they introduced a brand-new risk into the system: ignorance. It was virtually impossible for investors to grasp the real risks of these products, and little actual trading took place in the most complex instruments. That made a mockery of the idea that financial innovation would create perfect free markets, with prices set by the wisdom of crowds. Worse still, as innovation became more frenzied, finance grew so complex and so fast that it fed on itself. History has shown that in most corners of the business world, when innovation occurs, the middlemen get cut out. In finance, however, the opposite occurred: the new instruments gave birth to increasingly complex financial chains and a new army of middlemen skimming off fees at every stage. To put it another way, as innovation took hold, finance stopped looking like a means to an end—as the word finer had once implied. Instead, Wall Street became a never-ending loop of financial flows and frantic activity in which financiers often acted as if their profession were an end in itself. This was the perfect breeding ground for an unsustainable credit bubble.

CREDIT

The concept of credit is also crucial in understanding how the system spun out of control. Back in 2009, Andy Haldane, a senior official at the Bank of England, tried to calculate how much information an investor would need if he or she wanted to assess the price and risk of a CDO. He calculated that for a simple CDO, the answer was 200 pages of documentation, but for a so-called CDO-squared (a CDO of CDOs), it was “in excess of 1 billion pages.” Worse still, since a CDO-squared was rarely traded on the open market, it was also impossible to value it by looking at public prices, as investors normally do with equities or bonds. That meant that when investors tried to work out the price or risk of a CDO-squared, they usually had to trust the judgment of banks and rating agencies.

Traders at the stock exchange in Frankfurt, Germany, December 2015 (Ralph Orlowski / Reuters)

In some senses, there is nothing unusual about that. Finance has always relied on trust. People have put their faith in central banks to protect the value of money, in regulators to ensure that financial institutions are safe, in financiers to behave honestly, in the wisdom of crowds to price assets, in precious metals to underpin the value of coins, and in governments to decide the value of assets by decree.

What was startling about the pattern before the 2008 crash, however, was that few investors ever discussed what kind of credit—or trust—underpinned the system. They presumed that shareholders would monitor the banks, even though this was impossible given the complexity of the banks and the products they were peddling. They assumed that regulators understood finance, even though they were actually little better informed than shareholders. Financiers trusted the accuracy of credit ratings and risk models, even though these had been created by people with a profit motive and had never been tested in a crisis. Modern finance might have been presented as a wildly sophisticated endeavor, full of cutting-edge computing power and analysis, but it ran on a pattern of trust that, in retrospect, looks as crazily blind as the faith that cult members place in their leaders. It should not have been surprising, then, that when trust in the underlying value of the innovative financial instruments started to crack, panic ensued.

BANK

Why did nobody see these dangers? To understand this, it pays to ponder that third word, “bank,” and what it (and the word “company”) says about the importance of social patterns. These patterns were not often discussed before the 2008 crisis, partly because it often seemed as if the business of money was leaping into disembodied cyberspace. In any case, the field of economics had fostered a belief that markets were almost akin to a branch of physics, in the sense that they were driven by rational actors who were as unemotional and consistent in their behavior as atoms. As a result, wise men such as Alan Greenspan (who was Federal Reserve chair in the period leading up to the crisis and was lauded as “the Maestro”) believed that finance was self-correcting, that any excesses would automatically take care of themselves.

Former Chairman of the U.S. Federal Reserve Alan Greenspan on Capitol Hill, July 2005 (Larry Downing / Reuters)

The theory sounded neat. But once again, as Greenspan later admitted, there was a gigantic flaw: humans are never as impersonal as most economists imagined them to be. On the contrary, social patterns matter as deeply for today’s bankers as they did for the Italian moneylenders who once plied their trade at market benches. Consider the major Wall Street banks on the eve of the crisis. In theory, they had risk-management systems in place, with flashy computers to measure all the dangers of their investments. But the Wall Street banks also had siloed departments that competed furiously against one another in a quasi-tribal way to grab revenues. Merrill Lynch was one case in point: between 2005 and 2007, it had one team earning hefty bonuses by amassing huge bets on CDOs that other departments barely knew about (and sometimes bet against). Traders kept information to themselves and took big risks, since they cared more about their own division’s short-term profits than about the long-term impact of their trades on the company as a whole—to say nothing of the impact on the wider financial system. Regulators, too, suffered from tribalism: the economists who tracked macroeconomic issues (such as inflation) did not communicate much with the officials who were looking at micro-level trends in the financial markets.

Then there was the matter of social status. By the early years of the twenty-first century, financiers seemed to be such an elite tribe, compared with the rest of society, that it was difficult for laypeople to challenge them (or for them to challenge themselves). Like priests in the medieval Catholic Church, they spoke a language that commoners did not understand (in this case, financial jargon, rather than Latin), and they dispensed blessings (cheap money) that had been sanctioned by quasi-sacred leaders (regulators). If an anthropologist had been let loose in a bank at that time, he or she might have pointed out the dangers of treating bankers as a class apart from wider society, along with the blind spots and lack of oversight that this bred. (A few anthropologists, such as Karen Ho, did study Wall Street and noted these patterns.) Sadly, however, these dangers went largely unnoticed. Few people ever pondered how the original, social meanings of “bank” and “company” might matter in the computing age, or how tribalism was undermining neat market theories.

IS PAST PROLOGUE?

A decade after the crisis, it may be tempting to see this story as mere history. In 2019, Wall Street is confident again. True, the market is not as complacent as it was before 2008; financiers are still (somewhat) chastened by the crash and hemmed in by tighter scrutiny and controls. Regulators forced banks to hold more capital and imposed new constraints on how they make loans or trade with their own money. Formerly gung-ho investment banks, such as Goldman Sachs, are moving into the retail banking sector, becoming ever so slightly more like a utility than a hedge fund. The return on equity of most major banks is less than half of precrisis levels: that of Goldman Sachs was just above ten percent in early 2019. Everyone insists that the lessons of the credit bubble have been learned—and that the mistakes will not be repeated.

Maybe so. But memories are short, and signs of renewed risk taking are widespread. For one thing, financiers are increasingly performing riskier activities through nonbank financial institutions, such as insurance companies and private equity firms, which face less scrutiny. Innovation and financial engineering have resurfaced: the once reviled “synthetic CDOs” (CDOs composed of derivatives) have returned. Asset prices are soaring, partly because central banks have flooded the system with free money. Wall Street has lobbied the Trump administration for a partial rollback of the postcrisis reforms. Profits have surged. And although pay in finance fell after 2008, it has since risen again, particularly in the less regulated parts of the business.

What’s more, American finance now looks resurgent on the global stage. In Europe, U.S. banks’ would-be rivals have been hobbled by bad government policy decisions and a weak economy in the eurozone. In Asia, the Chinese banking giants are saddled with bad loans, and Japan’s massive financial sector is still grappling with a stagnant economy. Ironically, a drama that was “made in America” has left American banks more, rather than less, dominant. Indeed, the biggest threat to Wall Street today comes not from overseas competitors but from domestic ones, as U.S. technology companies have set their sights on disrupting finance.

It would be foolish to imagine that the lessons of the crisis have been fully learned. Today, as before, investors tend to place too much faith in practices they do not understand. The only solution is to constantly question the trust that underpins credit markets. And just as in 2007, there is a temptation to assume that culture does not matter in the era of sophisticated, digitally enabled finance.

That is wrong. Banks and regulators today are trying to do a better job of joining up the dots when they look at finance. But tribalism has not disappeared. Wall Street banks still have trading desks that compete furiously with one another. Regulators remain fragmented. Moreover, as finance is being disrupted by digital innovation, a new challenge is arising: the officials and financiers who understand how money works tend to sit in different government agencies and bank departments from those who understand cyberspace. A new type of tribal fracture looms: between techies and financiers.

Policymakers need to ask what Wall Street’s mighty money machine exists for in the first place. Should the financial business exist primarily as an end in itself, or should it be, as in the original meaning of “finance,” a means to an end? Most people not working in finance would argue that the second vision is self-evidently the desirable one. Just think of the beloved film It’s a Wonderful Life, in which the banker played by Jimmy Stewart sees his mission not as becoming fabulously rich but as realizing the dreams of his community. When finance becomes an end in itself, the public is liable to get angry. That’s one reason for the wave of populism that has washed over the globe since the crisis.

But does the United States really know how to build a financial system that is the servant, not the master, of the economy? Sadly, the answer is probably no; at present, it is hard to imagine what this would even look like. No matter what, however, if American financiers—along with regulators, politicians, and shareholders—wish to reduce the odds of another crash and another populist backlash, they would do well to tape the original meanings of “finance,” “bank,” and “credit” to their computer screens.

GILLIAN TETT is U.S. Chair of the Editorial Board and American Editor-at-Large for the Financial Times.