Monday, November 29, 2010

Behavioral Economics and Being Rational (or Not)

Yet another "reposted" post from the old blog.  It basically riffs on how the term "utility" has become iconic, particularly in the political economy profession.

Behavioral economics is a field that has gained a lot of attention recently through easy reads like Predictably Irrational and Animal Spirits that feature the discipline, as well as through other books that discuss the findings of studies either conducted by behavioral economists or relied upon by them to support their conclusions.

A central question of behavioral economics is whether human beings are rational. To answer that question, behavioral economists draw heavily upon experiments conducted by psychologists and cognitive scientists.

Behavioral economists start with the assumption that neoclassical economic theory is fundamentally correct but can be improved through understanding how human beings actually make economic decisions. Staring at the experimental data through this neoclassical lens has led them to conclusions about the rationality of people instead of conclusions about the rationality of neoclassical economic theory. To borrow one of their own observations and apply it to them, framing the problem as "how to improve" neoclassical economics has affected how they approach solving it. Very convenient. And very self-unaware.

The crippling flaw of neoclassical economics that limits the promise of behavioral economics is the belief that human calculations of utility are defined solely by economic (i.e., pecuniary) value. It turns out that this may actually be a "feature" of neoclassical economics instead of a bug because this view actually refutes a fundamental belief of institutional economics, a school of economic thought that was dominant when neoclassical economics first arose. Institutional economists believed that economic decisions are necessarily affected by social and political considerations. That is, individuals calculate utility by considering social value and political value, not just pecuniary value. Adam Smith was of the same mind and said as much in The Wealth of Nations.

This unfortunate feature of neoclassical economics has led to the conclusion that human beings are not rational, at least with respect to economic theory. That conclusion is, in fact, wrong and points out a level of irrationality on the part of behavioral economists that is not present in the underlying experiments.

Let's start with a reasonable definition of "rationality":

Rationality can be a difficult word to define-it has a long and convoluted intellectual history-but it's generally used to describe a particular style of thinking. Plato associated rationality with the use of logic, which he believed made humans think like the gods. Modern economics has refined this ancient idea into rational-choice theory, which assumes that people make decisions by multiplying the probability of getting what they want by the amount of pleasure (utility) that getting what they want will bring. This reasonable rubric allows us all to maximize our happiness, which is what rational agents are always supposed to do.
How We Decide, Jonah Lehrer

Now, let's take a look at a couple of economists who looked at the seminal work of Kahneman and Tversky, which is part of the foundation of behavioral economics, and concluded that human beings are irrational:

“[S]uppose you offer somebody a choice: They can flip a coin to win $200 for heads and nothing for tails, or they can skip the toss and collect $100 immediately. Most people, researchers have found, will take the sure thing. Now alter the game: They can flip a coin to lose $200 for heads and nothing for tails, or they can skip the toss and pay $100 immediately. Most people will take the gamble. To the imagined rational man, the two games are mirror images; the choice to gamble or not should be the same in both. But to a real, irrational man, who feels differently about loss than gain, the two games are very different. The outcomes are different, and sublimely irrational.”
The (Mis)Behavior of Markets: A Fractal View of Risk, Ruin, and Reward, Benoit Mandelbrot and Richard L. Hudson
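The claim that the two games are "mirror images" is just expected-value arithmetic. A quick sketch, using only the dollar amounts from the quote above:

```python
# Expected value of each choice in Mandelbrot and Hudson's coin-flip example.
# A fair coin gives each outcome probability 0.5.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Game 1: flip for +$200 or $0, or take $100 for sure.
gamble_gain = expected_value([(0.5, 200), (0.5, 0)])   # 100.0
sure_gain = 100

# Game 2: flip for -$200 or $0, or pay $100 for sure.
gamble_loss = expected_value([(0.5, -200), (0.5, 0)])  # -100.0
sure_loss = -100

print(gamble_gain == sure_gain)   # True: identical expected value
print(gamble_loss == sure_loss)   # True: identical expected value
```

In each game the gamble and the sure thing have exactly the same expected value, which is why choosing one over the other tells us nothing about economic rationality by itself.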

“Imagine that a rare disease is breaking out in some community and is expected to kill 600 people. Two different programs are available to deal with the threat. If Program A is adopted, 200 people will be saved; if Program B is adopted, there is a 33% probability that everyone will be saved and a 67% probability that no one will be saved.

Which program would you choose? If most of us are risk-averse, rational people will prefer Plan A's certainty of saving 200 lives over Plan B's gamble, which has the same mathematical expectancy but involves taking the risk of a 67% chance that everyone will die. In the experiment, 72% of the subjects chose the risk-averse response represented by Program A.

Now consider the identical problem posed differently. If Program C is adopted, 400 of the 600 people will die, while Program D entails a 33% probability that nobody will die and a 67% probability that 600 people will die. Note that the first of the two choices is now expressed in terms of 400 deaths rather than 200 survivors, while the second program offers a 33% chance that no one will die. Kahneman and Tversky report that 78% of their subjects were risk-seekers and opted for the gamble: they could not tolerate the prospect of the sure loss of 400 lives.

This behavior, although understandable, is inconsistent with the assumptions of rational behavior. The answer to a question should be the same regardless of the setting in which it is posed.”
Against the Gods: The Remarkable Story of Risk, Peter L. Bernstein
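Here too, all four programs have the same mathematical expectancy. A sketch using the numbers from the quote, writing the quoted 33% exactly as 1/3 to avoid rounding:

```python
# Expected survivors (out of 600) under each program in the Kahneman and
# Tversky disease problem. Fraction keeps the arithmetic exact.
from fractions import Fraction

TOTAL = 600
p = Fraction(1, 3)  # the quoted "33% probability", written exactly

program_a = 200                      # 200 saved for certain
program_b = p * TOTAL + (1 - p) * 0  # everyone saved vs. no one saved
program_c = TOTAL - 400              # 400 die for certain -> 200 survive
program_d = p * TOTAL + (1 - p) * 0  # nobody dies vs. everyone dies

print(program_a == program_b == program_c == program_d)  # True: 200 expected survivors each
```

Every program yields 200 expected survivors, so the split between A and C, or B and D, is purely a matter of framing.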

There is no way that a rational person can conclude from this experimental data that human beings are irrational. The experimental data show merely that (1) human beings exhibit loss aversion, i.e., a strong bias against losing what they have, and (2) that bias can be manipulated by how a problem is framed. Unfortunately for Mandelbrot and Bernstein, the expected economic values, i.e., the utility, of the risk-taking choice and the risk-avoiding choice presented in each question were identical. Since both choices in each version of the problem resulted in maximum utility, the choice to embrace or avoid risk was of no consequence, at least not from an economic point of view. Clearly, loss aversion affects the amount of risk that somebody is willing to embrace, but doesn't that tell us that happiness (aka utility) is defined by something more than just economic value? And, just as clearly, how a problem is framed can affect the choice between embracing and avoiding risk, but doesn't that tell us that context matters, that individual economic decisions are indeed influenced by institutional factors?

Nevertheless, two very smart economists viewed the data and concluded that it proved human beings were irrational. How could that be? It seems like there was a bit of bait-and-switch going on. While economic theory describes rational behavior as acting to maximize one's happiness, Mandelbrot and Bernstein seemed to define rational behavior as solving equivalent problems identically. The logical fallacy here is that problems that are "equivalent" in terms of expected economic value are not necessarily identical in terms of utility, but to recognize that, you must first question the neoclassical assumption that utility is determined solely by economic value. Unfortunately, it is much easier to accept that human beings are irrational than it is to question the Useful Fictions through which you understand the world. This is just another aspect of human decision-making, which is characterized by positive feedback, hysteresis, and metastability.

NOTE: It is not so much that Mandelbrot and Bernstein are irrational; it is that they point to one form of bias (loss aversion) as establishing that human beings are irrational while ignoring the fact that they could not have arrived at that conclusion without a bias of their own (confirmation bias). The facts they relied upon simply provide no support for their conclusion.