Ted Goranson - Personal Blog

The blog of Ted Goranson. This is both a personal blog and an ongoing update on his projects.

Black-Scholes as a Root Problem

Published: 20 Mar 2012

Here is my take on the recent financial disaster that wiped out over 10 trillion dollars in wealth and turned a recession into a worldwide tragedy.

The usual explanation revolves around unbridled greed in our financial institutions. But that doesn’t work; unbridled greed is what makes the system work; that’s the point.

Other explanations fault the Bush administration’s failure to act, but this also falls into the expected-greed category. W Bush was selected by the Republican establishment to be a compliant tool in dismantling regulation and passing engineered tax breaks. Clearly, if regulators had been allowed to do their job, some flags might have been raised, but the role of the Administration was an effect of simple market forces: the financial community bought a President (and Congress), because they could.

What we had — still have — is a system where people and institutions do what they can get away with. ‘Get away with’ includes evading regulatory action, but the decisions involved made no sense if you look at them closely, because the people hurt the most were the people at the top, making the decisions. Why would they put their livelihoods at risk? That is what this note is about.

On average, the key people in the financial community are not dumb; these are very smart men (nearly all men). All their wealth is tied to the economic framework in which they work, so it is safe to say that though they knew they were doing dangerous things, they had no idea they were testing the fragility of the entire world. I believe that their guts said that what they were doing was risky, but then again every day is a day of risk for them. My thesis is that their metrics told them that they were working within an acceptable (meaning usual) level of risk.

As an aside, we have a number of folks stepping forward and claiming that they saw this disaster coming and publicly said so. Because it happened as they said — and because at the time they had the same facts we all did — it seems like those few had the only common sense in the financial world.

But I have been studying this domain long enough to know that at any given time, there are countless Cassandras predicting dozens of disasters. These are disasters of all different kinds, often quite contradictory and using different logics and worldviews. Like a large pool of psychics, whenever one happens to be right, she steps forward and sometimes even gets the apparent causality right, so we forget the hundreds who were dead wrong.

But when things are going on — actually happening — they have no more credibility than the next observer. Why? Partly because there is no scientific way to choose: the most scientific methods available are the ones used inside the system itself. And we will see that they are broken.

The Black-Scholes Method

The one mathematical model to rule them all.

No one in this domain works on a scientific basis outside a very simple set of principles. What we had before the disaster was an industry and regulators who looked to the ratings agencies to tell them whether things were okay. There is a lot of complex mathematics involved, so it is something of a black box to all but a very few. There are only three ratings agencies, and they all use precisely the same methods. After all the after-the-fact investigations, we have no evidence that the analysts in these agencies objected to the ratings of the junk derivatives (outside of the usual level of dissent).

You have to appreciate that the finest mathematical minds from the previous decades were drawn into this elite group of ‘quants,’ away from their lives as the probable top engineers and scientists of the era. These surely were no dummies, and their calculations said we were okay.

What are those principles, the ones that gave the wrong answers? That is the question we should be asking. I think I can put my finger on this, both generally and specifically. It matters to me beyond thinking about how to prevent the nearly certain next disaster, because the same deficiency in our science is hurting the dynamics I study in introspective, self-organizing collaborative organizations.

The general problem is that economic systems have never fit the model of science we have for, say, physics. A great many dynamics in economic systems are based on the non-logical behavior of the public.

The Original Metric

In the ‘70s, the options marketplace was just about ready to become mainstream. A problem was that the established markets (like the stock market) had well-established fundamentals, metrics that could be applied to determine value. For the options market to be viable, it needed such a model of value. Two men, Myron Scholes and Fischer Black (whence the name), later joined by Bob Merton, developed such a formula.

Once published, it was adopted immediately because it had the two qualities that practitioners need: it is a differential equation of the type used by real engineers (Black trained in physics and took his doctorate in applied mathematics), and it takes only four parameters: duration of the option, prices, interest rates, and market volatility. An entire options industry grew around this one value metric. No one used anything else, and since every transaction was based on the same metric, no one seemed to care much. But only five years after the creation of that market, the world was thrown into a financial crisis, the largest between then and the recent W Bush Crisis.
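To make the ‘only four parameters’ point concrete, here is a minimal sketch in Python (my own illustration, not anyone’s production code) of the standard closed-form price the formula gives for a European call. The inputs are exactly the ones listed above: the duration of the option, the prices (of the underlying and of the strike), the interest rate and the volatility; the numbers at the end are made up.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot: float, strike: float, duration: float,
                       rate: float, volatility: float) -> float:
    """Black-Scholes price of a European call.

    spot, strike -- the two prices
    duration     -- time to expiry, in years
    rate         -- risk-free interest rate (annualised, continuously compounded)
    volatility   -- annualised volatility of the underlying
    """
    d1 = ((log(spot / strike) + (rate + 0.5 * volatility ** 2) * duration)
          / (volatility * sqrt(duration)))
    d2 = d1 - volatility * sqrt(duration)
    return spot * norm_cdf(d1) - strike * exp(-rate * duration) * norm_cdf(d2)

# Illustrative inputs only: a one-year option on a 100-dollar stock.
print(round(black_scholes_call(spot=100, strike=105, duration=1.0,
                               rate=0.03, volatility=0.2), 2))
```

The point is not the formula itself but how little it asks for: four observable-looking numbers in, one authoritative-looking value out.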

Analysts were quick to recognize the key role played by Black-Scholes, and Forbes magazine went so far as to blame the method as the primary cause. The reason was very simple: the metrics allowed markets to outpace true value. What happened?

  • Merton conceded that the model was indeed the primary cause of much of the crisis, but claimed the fault lay not in the model itself but in the way it was used by all the ‘non-experts.’
  • Merton improved the model in a minor, but interesting way.
  • The model became the underpinning of a new market in derivatives.
  • Merton and Scholes received the Nobel Prize in Economics in 1997. (Black had died.)

Merton is now a senior professor at both Harvard and MIT and advises a large investment fund.

We went through a delusional period where economists insisted that all economic decisions at all levels are based on a discrete, characterizable logic — logic of the ordinary type. In other words, we individually and as groups were supposed to deduce our way through different cost-benefit analyses, thousands or more per day.

If it seemed not so, then the decisions must use ‘hidden parameters.’ We have escaped some of that now, but we still have not outrun the fact that the mathematical tools we currently have in economics assume such a logic.

That ‘Nobel Prize in Economic Sciences’ is a strange beast. When Swedish arms merchant Alfred Nobel established his foundation, he set up three science prizes, in physics, chemistry and medicine, together with his prizes in peace and literature. His goal was to reward not the largest advance or the most admirable achievement, but ‘the greatest benefit on mankind.’ (I have a note on another destructive man buying his way to grace.)

Nobel was a chemist who had seen his talents used for destruction, and he hoped to subsidize advances in chemistry to redress the balance. In the late 1890s when the prize was established, the promise of pharmaceuticals was just beginning to appear. Physics was still the basis of all science, so it made eminent sense to invest in those three. Notably absent was any prize in mathematics. Urban legend has it that this was because mathematics at the time was dominated by Jews and Nobel was openly anti-Semitic. But it is more likely that he simply did not believe mathematics could directly benefit mankind like science could.

Today, we would probably call his notion of science technology, or ‘applied’ science, though the prizes are normally awarded for advances in fundamental science. Mathematics is a glaring omission in this modern reinvention of the prizes because so many of the breakthroughs in biology and medicine come either from new (mathematically founded) models or new (mathematically enabled) computational methods.

Meanwhile, the Swedish National Bank (the world’s oldest central bank) established its own prize to mark the beginning of its fourth century. Since the bank’s business is finance, it made sense to award a prize for the greatest benefit on mankind from the science of finance. When the prize was established in 1968, the press remarked that the bank may have been atoning for past sins the same way Nobel had.

Three notable qualities of this prize:

  • Though it is popularly considered a prize in mathematics, it is not; it is a prize for advances in the science of finance.
  • Though the Nobel in general is thought to be for the most profound work, nowhere is it more clear than with this prize that it is for the most influential. The important prize in mathematics is the Fields Medal and it is notable that there is zero overlap between it and the economics prize. The more recent (since 2003) Abel Prize in Mathematics is sponsored by the Norwegians to fill the obvious hole in the Nobels. The ACM A.M. Turing Award is essentially mathematical and more prestigious.
  • Though presented as a Nobel prize, and embraced by the Nobel Foundation, it is not. It is funded separately (by the bank) and awarded in a separate ceremony.

Why the Model is to Blame

The general case of what went wrong.

Think of markets generally as a domain where value metrics can be determined in one of three ways:

  • The first way is by direct evaluation. In this case, you value something based on how it enriches your life or directly advances your goals. Food is a good example of this. Its value can come from a direct evaluation, ideally based on a direct experience.
  • The second way is by predictive, arithmetic contract. You give someone some money for some contract, and they agree to give you some money back over time. This is predictively arithmetic, because the value metric is built into the instrument by definition. Bonds are this way: you invest a certain amount and the recipient promises a defined return. Stocks can also fit into this category, because there is an implicit and very narrow set of metrics that establish the value of the stock. At any given time, an observer can ‘run the math’ (see the sketch just after this list). Stocks have some speculative component, because you cannot be sure what tomorrow will bring. But the value is deterministic at any given instant.
  • The third way is more abstract. There is no direct way for you to engage with the item to evaluate it by non-abstract means. And there is no defining predictive arithmetic in the contract.
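Here is the sketch promised above, a minimal illustration (in Python, with made-up numbers) of what ‘running the math’ means for the simplest type 2 instrument, a fixed-coupon bond. The contract itself defines every cash flow, so anyone can reproduce the value.

```python
def bond_price(face: float, coupon_rate: float, years: int, discount_rate: float) -> float:
    """Present value of a plain fixed-coupon bond.

    The contract specifies every cash flow, so the value is just arithmetic:
    discounted coupons plus the discounted return of the face value.
    """
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + discount_rate) ** year for year in range(1, years + 1))
    pv_face = face / (1 + discount_rate) ** years
    return pv_coupons + pv_face

# Illustrative numbers only: a 1000-dollar, 10-year, 5%-coupon bond discounted at 4%.
print(round(bond_price(face=1000, coupon_rate=0.05, years=10, discount_rate=0.04), 2))
```

Nothing of the kind is available for the third category.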

This third category depends on mathematical models to determine its value. They have to be mathematical (more precisely, arithmetic), because that is the only trusted way we have to reason about (financial) value. As it happens, what Black, Scholes and Merton created forms the theoretical basis for valuation of every derivative — every single one on the planet. You cannot trust yourself, as you can with type 1. You cannot trust the simple, definitive arithmetic in the contract, as with type 2.

You have to trust the model.

And that is what happened in the W Bush crisis. We had a class of derivatives that were designed to be temporarily type 3, with no contractual arithmetic. These were traded based on the value that the ratings agencies assigned, which they could certify by ‘showing the math.’ There was no conspiracy to lie, no tricks. They looked at the way the things were defined, applied the math, and the math said they were high value.

We likely would have been okay if the type 3 instruments had stayed type 3 forever, as most derivatives do. But these were designed to transition to type 2, as the mortgage holders started to deliver on the contractual arithmetic. And when this supposed validation of value came due, the real type 2 value and the type 3 calculation did not line up. Just as Black-Scholes caused the global crisis of 1978-9, so too did it cause the crisis of 2007-8.
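A deliberately crude, entirely hypothetical sketch of that mismatch, in Python: value a mortgage pool on an assumed default rate (the type 3 calculation), then see what the contractual arithmetic actually delivers when defaults run higher (the type 2 reality). The one-period simplification and every number are mine, chosen only for illustration.

```python
def pool_cash(principal: float, interest_rate: float, default_rate: float) -> float:
    """One-period cash delivered by a mortgage pool, assuming (crudely)
    that defaulting borrowers pay nothing and the rest pay in full."""
    performing = 1.0 - default_rate
    return principal * performing * (1.0 + interest_rate)

modelled = pool_cash(principal=1_000_000, interest_rate=0.06, default_rate=0.02)
realised = pool_cash(principal=1_000_000, interest_rate=0.06, default_rate=0.15)
print(round(modelled), round(realised))
# The contractual (type 2) cash comes in far below the modelled (type 3) value,
# and that gap is exactly where trust evaporates.
```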

But this time, as much as 10% of all the world’s assets had been encapsulated in these instruments. Trust in the value of the instruments evaporated as the community (temporarily) mistrusted the Black-Scholes evaluation. The basic value was still there: properties and corporations in type 2 arrangements. But we had no way to get at that value.

This is important to me beyond my desire to understand my larger world and how it works. I help design infrastructure for fluid collaborative organizations. A central problem is how to evaluate the value any given partner, process or product feature can add to the prospective enterprise, or to the prospective future of a given enterprise. We have to have these evaluation methods, and we also have to introspectively evaluate the trust we have in the methods, metrics and infrastructure.

As it turns out, Black-Scholes has infected the enterprise and is vexing my attempts to do well for the world.

One reason is obvious: enterprises exist and function only as the financial community supports them. Any methods that community uses are inherited by the enterprises and used internally. A second reason is more pernicious: enterprise engineers are lazy. They are not used to examining methods all the way down to first principles, and readily accept methods that are apparently trusted elsewhere.

If I am going to contribute to the creation of better business models, this has to be fixed, at least at the level of enterprises.

This isn’t exactly true. Most derivatives were originally designed to play a specific role in an engineered portfolio, in a technique called ‘hedging.’ The idea is that the arithmetic of the instrument cannot be evaluated directly, as we note here. But if you tie the variables of the speculative component of the type 2 investments to those engineered into the type 3 instrument, they can cancel each other out.

A simple example is a fund manager who invests in stocks. The speculative value is in what will happen tomorrow, and if things go wrong, value will diminish. But if you separately bet on those same factors ‘going wrong’ in the arithmetic definition of a derivative, then when the value of the stock goes down, the value of the derivative should go up to ‘hedge’ the risk — supposing that the engineering is good.
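A minimal sketch of that payoff logic, with made-up numbers: a stock position plus a protective put. When the stock falls below the strike, the put’s payoff rises and offsets part of the loss, which is all ‘hedging’ means here.

```python
def hedged_value(share_price: float, shares: int, put_strike: float, puts: int) -> float:
    """Value at expiry of a stock holding plus protective puts.

    Each put pays max(strike - price, 0), so it gains exactly when the stock falls.
    """
    stock_value = shares * share_price
    put_value = puts * max(put_strike - share_price, 0.0)
    return stock_value + put_value

# Illustrative numbers: 100 shares hedged with 100 puts struck at 45.
for price_at_expiry in (60.0, 50.0, 40.0, 30.0):
    print(price_at_expiry,
          hedged_value(price_at_expiry, shares=100, put_strike=45.0, puts=100))
```

Below the strike, the combined value stops falling; the engineering question is whether the cancellation actually holds in practice.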

This essay addresses the case where the value of the derivatives is divorced from the fabric of the ‘hedge fund’ designed to round out a wise portfolio.

In such a case, the derivatives are traded as if they have an intrinsic value. This is what happened in the W Bush era, as the regulatory regime that had recognized the benefit of hedging was dismantled, creating a market for these ‘new’ products.

Derivatives that support hedge funds as a component of balanced portfolios turn out to be useful. This is a good example of how financial engineers are worth what they get paid when they create value. The value in this case comes indirectly, as all beneficial effects from the financial community do.

When hedge funds are used, the pressure on corporations (the primary value creators) is relaxed. They don’t have to play things safe and focus only on the next quarter’s results; they can instead invest in the long run, innovate and otherwise take chances.

If you read any of the popular accounts of the crisis, you will get two incorrect points. One we have already mentioned, that this was a vast conspiracy of people who knew they were destroying lives. (Sometimes there is that evil, haunting, explicit notion of Jewish bankers.) We’ve noted that this is wrong.

Equally wrong is the notion that the phenomenon was wholly due to, and contained within, high-risk home mortgages. This notion persists because the mortgage derivatives are easy to trace, because they were the trigger of the collapse in trust, and because the narrative is complicated enough with just them; the story becomes satisfying when it can be explained as two levels of irresponsibility and greed (the irresponsible borrowers and lenders).

In fact, all derivatives were subject to the collapse in trust in value. This is why all capital markets froze, not just home lending (a relatively small fraction of the holdings), and why tens of trillions in central bank loans were required. Every type 2 value was entangled.

What the Model Is

A brief overview of how it is structured and broken.

The specific model, as originally developed, modified in the ’80s, and applied to options pricing, is explained fairly well in its Wikipedia article. The way it has been adapted and extended to apply to the larger, growing world of derivatives is broad; you may want to browse through the Journal of Derivatives to get an idea of how wide the spectrum is.

Here, we worry only about the basic structure of the technique.

Continuity

The various methods all build on the basic structure of a differential equation. Differential equations are the basic stuff of Newtonian physics, and indeed, Newton developed the differential calculus for that very purpose. When applied to physics, the calculus assumes that the world is continuous, that there are no abrupt tears in the fabric of reality. Further, this implies that if things are connected, they are connected smoothly. There can be abrupt swings in effects, but if you zoom in on the transitions, they have no breaks or corners.
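For reference, the equation at the heart of the original model is exactly this kind of object. In its standard textbook form it ties the option value V(S, t) smoothly to the price S of the underlying and to time t, with the volatility σ and the risk-free rate r as parameters:

```latex
\frac{\partial V}{\partial t}
  + \frac{1}{2}\,\sigma^{2} S^{2}\,\frac{\partial^{2} V}{\partial S^{2}}
  + r S\,\frac{\partial V}{\partial S}
  - r V = 0
```

Every term presumes that V varies smoothly in S and t; there is no way to express a tear.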

This suited Newton well, because no observation of the human-scale world ever presented such a tear or corner. Everything is well-behaved; and it has to be because of the way mass and forces work in the model. A way of thinking about this is that the stuff of the world constrains itself to be well-behaved.

But financial instruments are inventions: human ideas, not real stuff. There is nothing intrinsic in the reality of these things (other than the model we create) to constrain them to be well-behaved and continuous. The behaviors that are captured include impressions by humans. Impressions and certain concepts tend to be non-continuous in the human mind. We have to struggle, for instance, with shades of grey in our justice system: someone is guilty or not. Something is true or false.

A clear example can be seen in how numbers are handled. In everyday life we manage numbers naturally, but in two fundamentally different ways.

One way is continuous: for instance, in how we measure time. We think of time as smooth, and if we wanted, we could measure every second, or tenth, or millionth. There are no jumps in time, nor in the numbers we use to measure it. This in fact is the notion of time and number that Newton used.

But at the same time we have a more fundamental notion of numbers as integers. You can have a certain number of children, or cars. You buy cans of food at the grocery store in discrete units. This notion of discreteness is also a foundation of Newtonian physics: the presumption that the matter in the world consists of objects. Planets are discrete things, as are leaves, atoms, even non-corporeal things like breaths and concepts. They are all discrete, with no meaningful notion of in-between.

In the real world, we balance these two fundamentally different notions of numbers without noticing, even though we use the same notation and much of the same arithmetic.

Black-Scholes deals well with the notion of number that is continuous, and ungracefully with the one that is discrete.

This matters not just because of numbers themselves; they were just the example. The numbers stand for something, often complex dynamics. When those dynamics are based on human reasoning, we find that they do have jumps, tears, discontinuous breaks.
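A back-of-the-envelope sketch, with made-up numbers, of why that bites. A continuous, diffusion-style model with ordinary volatility treats a large single-day gap, the kind a collapse of trust actually produces, as essentially impossible:

```python
import math

# Under a continuous (diffusion) model with 20% annualised volatility,
# a typical one-day move is about sigma_annual / sqrt(252 trading days).
sigma_annual = 0.20
sigma_daily = sigma_annual / math.sqrt(252)

# A hypothetical discrete event: trust evaporates and the price gaps down 25%
# in one day. The number is made up, chosen only to illustrate a 'tear'.
jump = -0.25

print(f"typical daily move: {sigma_daily:.4f}")
print(f"jump measured in daily sigmas: {jump / sigma_daily:.1f}")
# A roughly 20-sigma day is, under the continuous model, a practical impossibility;
# that is exactly the kind of discontinuity the model handles ungracefully.
```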

First Class Independence

Another problem would be characterized by a formalist as the flattening of the type system. All concepts that are factored into the model are considered to be equal in their existence and independent except as described in the model. The base case first addressed was option trading where the four factors were duration of the option, prices, interest rates, and market volatility. These four factors are related by the model, but are assumed to exist in a world outside the model as real things. The existence of each of these things is equal. This is a very big deal.

The analogy in physics would be to say, for instance, that quarks, magnetic fields, light and color can be considered qualities of the universe that are equal in their being, and that, as far as their interaction is concerned, they are independent except for the equations we state to capture the model of that interaction.

But the world as we deal with it has some things that are more fundamental than others. There is a hierarchy of types, in other words. Though different models and theories propose different hierarchies, no one would dispute that such a hierarchy exists. That is what science is all about.

How does the financial community deal with this? By fudging. Human judgement is supposed to be used in coloring these inputs. Every application of the model, across all the derivative applications, hides the fact that the supposedly impressive formal model depends on being provided the ‘correctly colored’ inputs. (This is the problem that Merton claimed was behind the failure of the very simple application of the approach to options in the late ’70s.)
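A small sketch of that circularity, using nothing beyond the closed-form price from the earlier example: in practice the volatility input is not observed independently at all; it is routinely backed out of market prices through the model itself (‘implied volatility’), so the supposedly independent, first-class input already carries the model inside it. All numbers are illustrative.

```python
from math import erf, exp, log, sqrt

def call_price(spot: float, strike: float, t: float, r: float, sigma: float) -> float:
    """Black-Scholes European call price (same closed form as the earlier sketch)."""
    n = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    d1 = (log(spot / strike) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    return spot * n(d1) - strike * exp(-r * t) * n(d1 - sigma * sqrt(t))

def implied_volatility(market_price: float, spot: float, strike: float,
                       t: float, r: float) -> float:
    """Back out the volatility the model needs in order to reproduce an
    observed market price, by simple bisection (price is increasing in sigma)."""
    lo, hi = 1e-6, 5.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if call_price(spot, strike, t, r, mid) < market_price:
            lo = mid
        else:
            hi = mid
    return mid

# Round trip: price an option at 20% volatility, then 'observe' that price
# and recover the volatility input from it through the model itself.
observed = call_price(spot=100, strike=105, t=1.0, r=0.03, sigma=0.2)
print(round(implied_volatility(observed, spot=100, strike=105, t=1.0, r=0.03), 3))
```

The input that is supposed to describe the world outside the model is, in daily practice, defined by the model.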

One can say that the system collapsed because the model was good but the inputs were scurrilously or ignorantly miscolored. But in any application it is impossible to disentangle the relationships the inputs have outside the model from those they have inside it.

Ordinary Causal Logic

The final fatal problem with the model is in the concept of causality used. The reader should be careful to distinguish this from a complaint that the model gets what causes what wrong. No, it gets the fundamental notion of cause wrong. This is not so much a problem with what Merton and company bring to the table with the model as with the larger issue of how science is applied in the domain of economics.

In physics, something happens, it affects something else in the neighborhood, and some change occurs. The change is said to have been caused by the initial events through the interaction. This is intuitive. Even in physics, the notion has some issues, but let’s say here that it just works.

Economics does involve arithmetic. But regardless of how complex that arithmetic appears, the underlying phenomena are human actions. And human actions are not based on logic, not always or even usually. This is not a novel observation: the laws associated with physics have always sat uncomfortably in the domain of the ‘soft sciences.’ The conversation about this is long, but traditionally not very deep beyond the observation that no ‘other science’ is applicable.

Greater insight comes from the biology community, which has a similar problem. Physics-centric science likes to have a notion of cause associated with objects and effects, as we just noted. But microbiology in the medical domain likes to think in terms of complex systems and some degree of self-organization. Insights from people looking at this problem inform some new notions of cause: cause coming simultaneously from all the individual elements, as physics would have it, but also from some sort of system awareness, perhaps at different levels of the system.

Some tools associated with expanded logics are emerging, and should be considered where they serve the model well. Black-Scholes has no room for this, though some researchers in these new geometric logics propose some useful approaches.

That means we allow for a number of ‘quantum effects’ that are normally associated with the areas where Newton fails, even in the domain of physics.

One is that because economic systems are human systems, they are by their nature introspective: they see themselves. Whenever we measure ourselves, the observation process perturbs the measurement. This is well understood even in the simplest of public polls.

A second, intriguing phenomenon is that not all the causal linkages are knowable at any given time. So inexplicable linkages will be affecting the system, analogous to spooky action at a distance.

Even more fascinating is that the governing frameworks are narrative-based. We seem to be hardwired to organize our understanding of systems in narrative structures. These structures build tentative, dynamic causal structures. They are dynamic in the sense that things that happen late in the narrative retroactively change the meaning of salient relationships early in the narrative. An example in fiction is a murder mystery: at the end you are given some causal connections that force you to go back through your stored memory of the story and reinvent some of its causal connections.

We do this in real life as well. For instance, the science shows that eye-witnesses are notoriously unreliable when they have gone over the events several times and made sense of them by changing key facts and causal connections to create an order in their memory. We see this in action in political campaigns, where the winner is the fellow who can reinvent the past of the opponent to make a sensible but unattractive narrative. Once this narrative settles, it is all but unshakable as truth.

We’ll give the example of the financial crisis that started this note. When the events were happening, there were no indications that things were wrong beyond the normal level of analysts predicting disaster. The events certainly unfolded as a result of bad models and metrics. Yet after the fact, certain narratives have gone back and ‘changed history’ so that we have a simple explanation based on bad guys and unsophisticated dynamics.

This final effect is much like the effect in quantum physics where some phenomenon can be understood as changing the past.

The way Black-Scholes deals with these logical softnesses is by introducing probability at the level of the logic. This confounds the causal model.

We don’t propose — as many do — that we use quantum physics as a basis for economic modeling. We think a deeper, more comprehensive and basic foundation is needed, one that addresses the three big problems with Black-Scholes-inspired models:

  • A model that federates ontologies of at least sequence, discreteness and ordinary continuity.
  • A structured type system that is sensitive to introspective, self-organizing systems.
  • A notion of causal logic that takes advantage of new mathematical foundations in this area.

With this, we may do better in understanding and reasoning about how we coexist.

The Bottom Line

The bottom line of this lengthy note:

  • We can reasonably pin the recent financial disasters — and coming ones — on the inadequacy of the way we measure and assign cause.
  • These same inadequacies face us in engineering advanced collaborative enterprises, and must be addressed if we want a more fulfilling world.
  • In our efforts, we roll these into a project to define a science base for advanced future enterprise engineering. This may be associated with the European cluster of projects called Future Internet Enterprise Systems. A note with some details on our approach will appear soon.

I speculate that the recent politicizing of homophobia in the US has its origin not in a moral problem — how could it? — but in the challenge to the simple, discontinuous categories of male and female.

The duality seems palatable, and when something in between is presented, it calls into question any system that excludes the middle; this characterizes fundamentalist religion, whether Christian or Muslim. So the reality proof that homosexuals present must be denied because of what it implies for the supposedly clean — and easy to communicate — world.

A third way of thinking about numbers should be mentioned. It matters in our advanced enterprise domain, though it is not a factor in the flaws of Black-Scholes that this essay addresses. This is the matter of sequence. Enterprises are often naturally thought of as a network of processes and process dependencies. You have to drill the holes in a car’s engine block before you assemble the block with the pistons, for example. This has nothing to do with the time of day of the assembly, nor with the number of engines.

This notion of sequence is also built into the concepts we use with numbers and has no essential connection with the other two. We naturally label the fourth step in a series of tasks. As it happens, these three separate notions of numbers have to be described separately to a computer that you expect to reason about the real world the way we do.
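A tiny sketch of that third notion in code: sequence expressed as a pure ordering constraint, with no clock and no counting involved. The step names are made up; the point is that the dependency structure alone determines a valid order.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Process dependencies as a partial order: each step maps to the steps
# that must come before it. Nothing here mentions time of day or quantity.
steps = {
    "drill holes in block": set(),
    "fit pistons": {"drill holes in block"},
    "attach cylinder head": {"fit pistons"},
}
print(list(TopologicalSorter(steps).static_order()))
```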

A clear example of problems is found in the literature on this. Suppose you have a hammer hitting the glass face of a watch. If the glass breaks, you would say that the hammer hitting the glass caused it to break. But what if the hammer is part of a test for manufacturing defects in the glass, and is hitting each watch to find those defects? All good watches pass unbroken. Bad watches break.

In this case, it is more intuitive to say that cold air in the annealing process (when the glass face cooled) caused the glass to (later) break.

Suppose a manned spacecraft explodes and we find out what happened. Do we say a badly designed gasket caused the explosion? Or cold air on that day? Or a bad engineering decision? Or a broken supervisory chain? Or all of them? Or some nuanced chain in a complex model? At the time, there were some who claimed that since the probability of a fatal accident was real, the very existence of a manned space program caused the accident.

The point is that the notion of cause is further down the food chain in the type system, even in physics.

Yes, I know about the public outrage at the bonuses some of the top guys were paid after the bailouts. We all love the narrative of a successful conspiracy. But the fact is that the bonuses were far smaller than they were used to, were paid to far fewer people, and a good many high-income folks lost their jobs.

It is the same with the stock market. No one seems to be able to reliably scope out what will happen next, but the day after — the day after! — everyone has an explanation and many claim to have foreseen it!

© copyright Ted Goranson, 2012