New approaches to financial regulation: A complex systems approach to global monitoring and policy analysis

To gain inspiration about how we can better monitor the world’s economy and its financial markets, it is useful to make a comparison to meteorology.  Like financial markets, the weather is a complex system with fundamental limits on predictability. In the last 35 years we have made dramatic improvements in our ability to forecast the weather.  How did this come about, and what are the lessons for economics and financial regulation?

Traditional weather forecasting was based on the method of analogs.  Professional forecasters studied the current weather map and searched through a library of past weather maps for the best match.  They would then look forward in time from that best match, make a few subjective adjustments, e.g. for wind intensity and direction, and use the result as their prediction.  This method remained the state of the art until 1980.
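In modern language the method of analogs is simply nearest-neighbor prediction.  A minimal sketch, assuming each historical weather map has been flattened into a numerical feature vector (the function and variable names here are mine, for illustration only):

```python
import numpy as np

def analog_forecast(current_map, past_maps, past_next_maps):
    """Forecast by analogy: find the historical map most similar to today's
    map and return what the atmosphere did next in that case.
    past_maps[i] is a flattened historical map; past_next_maps[i] is the
    map observed one time step after it."""
    distances = np.linalg.norm(past_maps - current_map, axis=1)
    best = np.argmin(distances)      # the best-matching historical situation
    return past_next_maps[best]      # what followed it becomes the forecast
```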

In the meantime, while searching for peacetime uses of the digital computer, in 1950 John von Neumann teamed up with Carl-Gustaf Rossby and Jule Charney to make the first physically based weather forecasts.  The early forecasts were slower than real time and only provided a proof of principle.  But this set off an effort to improve meteorological science, gather better data, and develop better simulation methods.  When combined with vastly greater computer power, physical weather forecasting finally broke even with traditional forecasters in 1980, after thirty years of effort and hundreds of millions of dollars.  As a result, weather forecasts are now far more accurate.  Furthermore, this effort was leveraged to produce the climate models that are the key underpinning of forecasts of climate change.

Is it possible to do something similar for the stability of the financial system?  To be clear, I am not talking about predicting the direction of price movements, which is subject to strong limits due to market efficiency.  Rather I am talking about a model that would predict the volatility and instability of markets, for which such limits do not directly apply.  What might such a system look like, how would it be built, and what are the technical and societal barriers to accomplishing this?

A vision of a financial monitoring system

My vision is based on the empirical observation that financial trading is highly specialized and can be classified into broad groups.  The names of these groups are familiar to market practitioners, e.g. fundamentalists, trend followers, market makers, and statistical arbitrageurs.  Each trading strategy affects market prices differently: fundamentalists (value investors) are stabilizing, while trend followers are destabilizing.  If one understands how each strategy influences the market and knows how much is currently being invested with each type of strategy, it becomes possible to make a good assessment of the stability of financial markets.  This can be done using an agent-based model, in which one explicitly simulates the trading decisions of each type of market participant.  One can go beyond this to get deeper insight into the interactions between strategies and their evolution in time by using concepts and methods from ecology.  I believe this could yield a very accurate assessment of financial risk.
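To make the mechanism concrete, here is a deliberately stylized sketch of how the mix of strategies determines stability: fundamentalists buy when the price is below their estimate of value, pushing it back, while trend followers buy whatever has just gone up, amplifying moves.  All names and parameter values are illustrative assumptions, not quantities calibrated from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not estimates from real data)
n_steps = 1000
value = 100.0                 # fundamentalists' estimate of fair value
w_fund, w_trend = 0.5, 0.5    # fraction of capital in each strategy group
k_fund, k_trend = 0.05, 0.8   # aggressiveness of each strategy
liquidity = 10.0              # market depth: how much net demand moves the price

prices = [100.0, 100.0]
for t in range(n_steps):
    p, p_prev = prices[-1], prices[-2]
    demand_fund = k_fund * (value - p)      # value investors push toward value (stabilizing)
    demand_trend = k_trend * (p - p_prev)   # trend followers amplify the last move (destabilizing)
    net_demand = w_fund * demand_fund + w_trend * demand_trend
    prices.append(p + net_demand / liquidity + rng.normal(0, 0.2))

returns = np.diff(np.log(prices))
print("volatility of log returns:", returns.std())
# Shifting capital from w_fund to w_trend raises the volatility: the mix of
# strategies, not just the news flow, determines how unstable the market is.
```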

Doing this would require a serious data collection effort involving the cooperation of key government agencies.  Such data would be used in several ways, both to build the underlying model and to run it in real time.  So far very few researchers have had access to transaction data with counterparty identifiers that label who made each trade.  Without such data ecological analysis is simply impossible: it is as if an ecologist could only observe that one animal ate another, without knowing what kinds of animals they were.  With counterparty identifiers one can use machine learning methods and prior knowledge to sort the transaction streams of different agents into groups and reverse engineer the typical behavior of each group.  If one also knows the initial conditions, i.e. the capital invested in each group at any given point in time, one can then simulate the market to estimate volatility, and run systemic stress tests to assess financial stability.
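A sketch of the classification step, assuming behavioral features have already been computed for each counterparty account from the labeled transaction stream (the feature list, file name, and number of clusters are assumptions for illustration; any standard clustering method could play this role):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical feature matrix: one row per counterparty account, with columns such as
# correlation of order flow with recent returns (trend following), correlation with a
# price-value gap (fundamentalist), median holding period, and share of passive orders
# (market making).
features = np.loadtxt("account_features.csv", delimiter=",")   # placeholder file

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Each cluster is then inspected and, using prior knowledge, mapped onto a strategy
# class (fundamentalist, trend follower, market maker, statistical arbitrageur); the
# capital held by each cluster provides the initial conditions for the simulation.
for g in np.unique(labels):
    print(f"group {g}: {np.sum(labels == g)} accounts")
```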

Much of the data needed already exists, though not in easily usable and accessible forms.  Regulators in most countries collect such data, e.g. the British Financial Conduct Authority, the European Central Bank, and several US agencies.

Of course this brings up concerns about confidentiality, which are serious and need to be addressed carefully.  While this introduces complications, they are surmountable.  It typically means having trusted parties within the agencies perform the analysis and create the necessary summaries and diagnostics about each group's behavior.
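As one possible shape for that arrangement, the raw counterparty-level data would stay inside the agency and only group-level aggregates would leave the secure environment.  A minimal sketch, with hypothetical file names, columns, and a disclosure threshold chosen purely for illustration:

```python
import pandas as pd

# Hypothetical trade-level table held inside the agency; columns assumed to be
# date, group (the strategy class assigned above), account_id, volume, signed_flow.
trades = pd.read_csv("labeled_trades.csv")   # placeholder file, never leaves the agency

summary = (
    trades.groupby(["date", "group"])
          .agg(total_volume=("volume", "sum"),
               net_flow=("signed_flow", "sum"),
               n_accounts=("account_id", "nunique"))
          .reset_index()
)

# Suppress cells covering too few accounts before release, a standard
# disclosure-control rule (the threshold of 5 is an assumption).
summary = summary[summary["n_accounts"] >= 5]
summary.to_csv("group_level_summaries.csv", index=False)   # only aggregates are shared
```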

The impediments to the success of such a project are both technical and societal.  Though I do not mean to trivialize them, I think the technical problems are all solvable.  I would be happy to elaborate on this but there is not sufficient space to do that here.

The more serious problem is societal.  In the case of the weather there was broad agreement among leading physical scientists that this had to be done, and billions of dollars of government funding have gone into producing the accurate forecasts that we enjoy today.  In economics, by contrast, such an approach is opposed by the mainstream and funding is hard to come by.

The first step toward achieving this goal would be to create a proof of principle.  This would consist of working with a restricted set of assets, showing how data with counterparty identifiers can be used to classify market participants into groups, and developing a simple agent-based model that can reproduce the risk levels seen in historical prices.  This could be done by working with sympathetic people in the agencies who have access to the data (such people exist).  The gating step is funding to do the research and develop the software.  Proposals to government funding agencies fail because they are blocked by mainstream referees.  Philanthropies do not understand the problem and view this as the province of central banks.  Some of the central banks are interested, but they lack the expertise, they are too caught up in solving short-term problems, and they do not have the mandate to fund outside research.
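For the proof of principle, "reproducing risk levels" can be tested quite simply: compare the rolling volatility of the model's simulated returns with the realized volatility of the historical series.  The sketch below uses random placeholders for both series; in practice they would come from the restricted asset set and from the calibrated agent-based model.

```python
import numpy as np

def rolling_vol(returns, window=20):
    """Annualized rolling volatility: the 'risk level' the model should reproduce."""
    r = np.asarray(returns)
    return np.array([r[i - window:i].std() * np.sqrt(252)
                     for i in range(window, len(r))])

# Placeholders standing in for real data and model output.
historical_returns = np.random.default_rng(1).normal(0, 0.01, 1000)
simulated_returns = np.random.default_rng(2).normal(0, 0.01, 1000)

hist_vol = rolling_vol(historical_returns)
sim_vol = rolling_vol(simulated_returns)
print("correlation of risk levels:", np.corrcoef(hist_vol, sim_vol)[0, 1])
print("mean relative error:", np.mean(np.abs(sim_vol - hist_vol) / hist_vol))
```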

Despite these hurdles, it is essential that we make this happen.  As estimated by the Dallas Fed, the last crisis cost the U.S. between 6 and 14 trillion dollars, and cost the world far more.  Even if such an effort has only a one percent probability of success, and even if success only lops $1 trillion off the losses induced by subsequent crises, an investment of a million dollars would have an expected return for society of a million percent.  Put differently, for the investment not to be worthwhile, the odds of success would have to be less than one ten-thousandth of a percent.
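The arithmetic behind that claim, written out with the numbers used above (the one percent success probability, $1 trillion of avoided losses, and $1 million investment are the illustrative figures from the text):

```python
investment = 1e6        # $1 million research investment
loss_avoided = 1e12     # $1 trillion lopped off the losses from a future crisis
p_success = 0.01        # assumed one percent chance the effort works

expected_benefit = p_success * loss_avoided               # $10 billion
expected_return_pct = 100 * expected_benefit / investment
print(f"expected return for society: {expected_return_pct:,.0f}%")   # 1,000,000%

# Break-even: the success probability at which the expected benefit just covers the cost.
p_breakeven = investment / loss_avoided                   # 1e-6
print(f"break-even odds of success: {100 * p_breakeven:.4f}%")        # 0.0001%
```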