
In memory of Amos Tversky


Contents

Introduction

Part I. Two Systems

1. The Characters of the Story

2. Attention and Effort

3. The Lazy Controller

4. The Associative Machine

5. Cognitive Ease

6. Norms, Surprises, and Causes

7. A Machine for Jumping to Conclusions

8. How Judgments Happen

9. Answering an Easier Question

Part II. Heuristics and Biases

10. The Law of Small Numbers


11. Anchors

12. The Science of Availability

13. Availability, Emotion, and Risk

14. Tom W’s Specialty


15. Linda: Less is More

16. Causes Trump Statistics

17. Regression to the Mean

18. Taming Intuitive Predictions

Part III. Overconfidence

19. The Illusion of Understanding

20. The Illusion of Validity

21. Intuitions vs. Formulas

22. Expert Intuition: When Can We Trust It?

23. The Outside View

24. The Engine of Capitalism

Part IV. Choices

25. Bernoulli’s Errors

26. Prospect Theory

27. The Endowment Effect

28. Bad Events

29. The Fourfold Pattern

30. Rare Events

31. Risk Policies


32. Keeping Score

33. Reversals

34. Frames and Reality

Part V. Two Selves

35. Two Selves

36. Life as a Story

37. Experienced Well-Being

38. Thinking About Life

Conclusions

Appendix A: Judgment Under Uncertainty

Appendix B: Choices, Values, and Frames

Acknowledgments

Notes

Index


Introduction

Every author, I suppose, has in mind a setting in which readers of his or her work could benefit from having read it. Mine is the proverbial office watercooler, where opinions are shared and gossip is exchanged. I hope

to enrich the vocabulary that people use when they talk about the

judgments and choices of others, the company’s new policies, or a

colleague’s investment decisions. Why be concerned with gossip?

Because it is much easier, as well as far more enjoyable, to identify and

label the mistakes of others than to recognize our own. Questioning what we believe and want is difficult at the best of times, and especially difficult when we most need to do it, but we can benefit from the informed opinions

of others. Many of us spontaneously anticipate how friends and colleagues will evaluate our choices; the quality and content of these anticipated

judgments therefore matters. The expectation of intelligent gossip is a

powerful motive for serious self-criticism, more powerful than New Year

resolutions to improve one’s decision making at work and at home.

To be a good diagnostician, a physician needs to acquire a large set of

labels for diseases, each of which binds an idea of the illness and its

symptoms, possible antecedents and causes, possible developments and

consequences, and possible interventions to cure or mitigate the illness.

Learning medicine consists in part of learning the language of medicine. A

deeper understanding of judgments and choices also requires a richer

vocabulary than is available in everyday language. The hope for informed

gossip is that there are distinctive patterns in the errors people make. Systematic errors are known as biases, and they recur predictably in

particular circumstances. When the handsome and confident speaker

bounds onto the stage, for example, you can anticipate that the audience will judge his comments more favorably than he deserves. The availability

of a diagnostic label for this bias—the halo effect—makes it easier to

anticipate, recognize, and understand. When you are asked what you are thinking about, you can normally

answer. You believe you know what goes on in your mind, which often

consists of one conscious thought leading in an orderly way to another. But

that is not the only way the mind works, nor indeed is that the typical way. Most impressions and thoughts arise in your conscious experience without

your knowing how they got there. You cannot trace how you came to

the belief that there is a lamp on the desk in front of you, or how you

detected a hint of irritation in your spouse’s voice on the telephone, or how


you managed to avoid a threat on the road before you became consciously

aware of it. The mental work that produces impressions, intuitions, and many decisions goes on in silence in our mind. Much of the discussion in this book is about biases of intuition. However,

the focus on error does not denigrate human intelligence, any more than

the attention to diseases in medical texts denies good health. Most of us

are healthy most of the time, and most of our judgments and actions are

appropriate most of the time. As we navigate our lives, we normally allow

ourselves to be guided by impressions and feelings, and the confidence we have in our intuitive beliefs and preferences is usually justified. But not

always. We are often confident even when we are wrong, and an objective

observer is more likely to detect our errors than we are. So this is my aim for watercooler conversations: improve the ability to

identify and understand errors of judgment and choice, in others and

eventually in ourselves, by providing a richer and more precise language to

discuss them. In at least some cases, an accurate diagnosis may suggest

an intervention to limit the damage that bad judgments and choices often

cause.

Origins

This book presents my current understanding of judgment and decision making, which has been shaped by psychological discoveries of recent

decades. However, I trace the central ideas to the lucky day in 1969 when I

asked a colleague to speak as a guest to a seminar I was teaching in the Department of Psychology at the Hebrew University of Jerusalem. Amos

Tversky was considered a rising star in the field of decision research—

indeed, in anything he did—so I knew we would have an interesting time. Many people who knew Amos thought he was the most intelligent person

they had ever met. He was brilliant, voluble, and charismatic. He was also

blessed with a perfect memory for jokes and an exceptional ability to use

them to make a point. There was never a dull moment when Amos was

around. He was then thirty-two; I was thirty-five. Amos told the class about an ongoing program of research at the University of Michigan that sought to answer this question: Are people

good intuitive statisticians? We already knew that people are good

intuitive grammarians: at age four a child effortlessly conforms to the rules

of grammar as she speaks, although she has no idea that such rules exist. Do people have a similar intuitive feel for the basic principles of statistics?

Amos reported that the answer was a qualified yes. We had a lively debate

in the seminar and ultimately concluded that a qualified no was a better


answer. Amos and I enjoyed the exchange and concluded that intuitive statistics was an interesting topic and that it would be fun to explore it together. That

Friday we met for lunch at Café Rimon, the favorite hangout of bohemians

and professors in Jerusalem, and planned a study of the statistical

intuitions of sophisticated researchers. We had concluded in the seminar

that our own intuitions were deficient. In spite of years of teaching and

using statistics, we had not developed an intuitive sense of the reliability of

statistical results observed in small samples. Our subjective judgments were biased: we were far too willing to believe research findings based on

inadequate evidence and prone to collect too few observations in our own

research. The goal of our study was to examine whether other researchers

suffered from the same affliction. We prepared a survey that included realistic scenarios of statistical

issues that arise in research. Amos collected the responses of a group of

expert participants in a meeting of the Society of Mathematical

Psychology, including the authors of two statistical textbooks. As expected, we found that our expert colleagues, like us, greatly exaggerated the

likelihood that the original result of an experiment would be successfully

replicated even with a small sample. They also gave very poor advice to a

fictitious graduate student about the number of observations she needed

to collect. Even statisticians were not good intuitive statisticians. While writing the article that reported these findings, Amos and I

discovered that we enjoyed working together. Amos was always very

funny, and in his presence I became funny as well, so we spent hours of

solid work in continuous amusement. The pleasure we found in working

together made us exceptionally patient; it is much easier to strive for

perfection when you are never bored. Perhaps most important, we

checked our critical weapons at the door. Both Amos and I were critical

and argumentative, he even more than I, but during the years of our

collaboration neither of us ever rejected out of hand anything the other

said. Indeed, one of the great joys I found in the collaboration was that

Amos frequently saw the point of my vague ideas much more clearly than I

did. Amos was the more logical thinker, with an orientation to theory and

an unfailing sense of direction. I was more intuitive and rooted in the

psychology of perception, from which we borrowed many ideas. We were

sufficiently similar to understand each other easily, and sufficiently different

to surprise each other. We developed a routine in which we spent much of

our working days together, often on long walks. For the next fourteen years

our collaboration was the focus of our lives, and the work we did together

during those years was the best either of us ever did. We quickly adopted a practice that we maintained for many years. Our


research was a conversation, in which we invented questions and jointly

examined our intuitive answers. Each question was a small experiment,

and we carried out many experiments in a single day. We were not

seriously looking for the correct answer to the statistical questions we

posed. Our aim was to identify and analyze the intuitive answer, the first

one that came to mind, the one we were tempted to make even when we

knew it to be wrong. We believed—correctly, as it happened—that any

intuition that the two of us shared would be shared by many other people

as well, and that it would be easy to demonstrate its effects on judgments. We once discovered with great delight that we had identical silly ideas

about the future professions of several toddlers we both knew. We could

identify the argumentative three-year-old lawyer, the nerdy professor, the

empathetic and mildly intrusive psychotherapist. Of course these

predictions were absurd, but we still found them appealing. It was also

clear that our intuitions were governed by the resemblance of each child to

the cultural stereotype of a profession. The amusing exercise helped us

develop a theory that was emerging in our minds at the time, about the role

of resemblance in predictions. We went on to test and elaborate that

theory in dozens of experiments, as in the following example. As you consider the next question, please assume that Steve was

selected at random from a representative sample:

An individual has been described by a neighbor as follows:

“Steve is very shy and withdrawn, invariably helpful but with little

interest in people or in the world of reality. A meek and tidy soul,

he has a need for order and structure, and a passion for detail.”

Is Steve more likely to be a librarian or a farmer?

The resemblance of Steve’s personality to that of a stereotypical librarian

strikes everyone immediately, but equally relevant statistical

considerations are almost always ignored. Did it occur to you that there

are more than 20 male farmers for each male librarian in the United

States? Because there are so many more farmers, it is almost certain that more “meek and tidy” souls will be found on tractors than at library

information desks. However, we found that participants in our experiments

ignored the relevant statistical facts and relied exclusively on resemblance. We proposed that they used resemblance as a simplifying heuristic

(roughly, a rule of thumb) to make a difficult judgment. The reliance on the

heuristic caused predictable biases (systematic errors) in their

predictions. On another occasion, Amos and I wondered about the rate of divorce

among professors in our university. We noticed that the question triggered


a search of memory for divorced professors we knew or knew about, and

that we judged the size of categories by the ease with which instances

came to mind. We called this reliance on the ease of memory search the

availability heuristic. In one of our studies, we asked participants to answer

a simple question about words in a typical English text:

Consider the letter K.

Is K more likely to appear as the first letter in a word OR as the

third letter?

As any Scrabble player knows, it is much easier to come up with words

that begin with a particular letter than to find words that have the same

letter in the third position. This is true for every letter of the alphabet. We

therefore expected respondents to exaggerate the frequency of letters

appearing in the first position—even those letters (such as K, L, N, R, V) which in fact occur more frequently in the third position. Here again, the

reliance on a heuristic produces a predictable bias in judgments. For

example, I recently came to doubt my long-held impression that adultery is more common among politicians than among physicians or lawyers. I had

even come up with explanations for that “fact,” including the aphrodisiac

effect of power and the temptations of life away from home. I eventually

realized that the transgressions of politicians are much more likely to be

reported than the transgressions of lawyers and doctors. My intuitive

impression could be due entirely to journalists’ choices of topics and to my

reliance on the availability heuristic. Amos and I spent several years studying and documenting biases of

intuitive thinking in various tasks—assigning probabilities to events,

forecasting the future, assessing hypotheses, and estimating frequencies.

In the fifth year of our collaboration, we presented our main findings in Science magazine, a publication read by scholars in many disciplines. The

article (which is reproduced in full at the end of this book) was titled

“Judgment Under Uncertainty: Heuristics and Biases.” It described the

simplifying shortcuts of intuitive thinking and explained some 20 biases as manifestations of these heuristics—and also as demonstrations of the role

of heuristics in judgment. Historians of science have often noted that at any given time scholars in

a particular field tend to share basic assumptions about their

subject. Social scientists are no exception; they rely on a view of human

nature that provides the background of most discussions of specific

behaviors but is rarely questioned. Social scientists in the 1970s broadly

accepted two ideas about human nature. First, people are generally


rational, and their thinking is normally sound. Second, emotions such as

fear, affection, and hatred explain most of the occasions on which people

depart from rationality. Our article challenged both assumptions without

discussing them directly. We documented systematic errors in the thinking

of normal people, and we traced these errors to the design of the machinery of cognition rather than to the corruption of thought by emotion. Our article attracted much more attention than we had expected, and it

remains one of the most highly cited works in social science (more than

three hundred scholarly articles referred to it in 2010). Scholars in other

disciplines found it useful, and the ideas of heuristics and biases have

been used productively in many fields, including medical diagnosis, legal

judgment, intelligence analysis, philosophy, finance, statistics, and military

strategy.

For example, students of policy have noted that the availability heuristic

helps explain why some issues are highly salient in the public’s mind while

others are neglected. People tend to assess the relative importance of

issues by the ease with which they are retrieved from memory—and this is

largely determined by the extent of coverage in the media. Frequently mentioned topics populate the mind even as others slip away from

awareness. In turn, what the media choose to report corresponds to their

view of what is currently on the public’s mind. It is no accident that

authoritarian regimes exert substantial pressure on independent media. Because public interest is most easily aroused by dramatic events and by

celebrities, media feeding frenzies are common. For several weeks after Michael Jackson’s death, for example, it was virtually impossible to find a

television channel reporting on another topic. In contrast, there is little

coverage of critical but unexciting issues that provide less drama, such as

declining educational standards or overinvestment of medical resources in

the last year of life. (As I write this, I notice that my choice of “little-covered”

examples was guided by availability. The topics I chose as examples are mentioned often; equally important issues that are less available did not

come to my mind.)

We did not fully realize it at the time, but a key reason for the broad

appeal of “heuristics and biases” outside psychology was an incidental

feature of our work: we almost always included in our articles the full text of

the questions we had asked ourselves and our respondents. These

questions served as demonstrations for the reader, allowing him to

recognize how his own thinking was tripped up by cognitive biases. I hope

you had such an experience as you read the question about Steve the

librarian, which was intended to help you appreciate the power of

resemblance as a cue to probability and to see how easy it is to ignore

relevant statistical facts.


The use of demonstrations provided scholars from diverse disciplines—

notably philosophers and economists—an unusual opportunity to observe

possible flaws in their own thinking. Having seen themselves fail, they

became more likely to question the dogmatic assumption, prevalent at the

time, that the human mind is rational and logical. The choice of method was crucial: if we had reported results of only conventional experiments,

the article would have been less noteworthy and less memorable.

Furthermore, skeptical readers would have distanced themselves from the

results by attributing judgment errors to the familiar fecklessness

of undergraduates, the typical participants in psychological studies. Of

course, we did not choose demonstrations over standard experiments

because we wanted to influence philosophers and economists. We

preferred demonstrations because they were more fun, and we were lucky

in our choice of method as well as in many other ways. A recurrent theme

of this book is that luck plays a large role in every story of success; it is

almost always easy to identify a small change in the story that would have

turned a remarkable achievement into a mediocre outcome. Our story was

no exception.

The reaction to our work was not uniformly positive. In particular, our

focus on biases was criticized as suggesting an unfairly negative view of

the mind. As expected in normal science, some investigators refined our

ideas and others offered plausible alternatives. By and large, though, the

idea that our minds are susceptible to systematic errors is now generally

accepted. Our research on judgment had far more effect on social science

than we thought possible when we were working on it.

Immediately after completing our review of judgment, we switched our

attention to decision making under uncertainty. Our goal was to develop a

psychological theory of how people make decisions about simple

gambles. For example: Would you accept a bet on the toss of a coin where

you win $130 if the coin shows heads and lose $100 if it shows tails?

These elementary choices had long been used to examine broad

questions about decision making, such as the relative weight that people

assign to sure things and to uncertain outcomes. Our method did not

change: we spent many days making up choice problems and examining whether our intuitive preferences conformed to the logic of choice. Here

again, as in judgment, we observed systematic biases in our own

decisions, intuitive preferences that consistently violated the rules of

rational choice. Five years after the Science article, we published

“Prospect Theory: An Analysis of Decision Under Risk,” a theory of choice

that is by some counts more influential than our work on judgment, and is

one of the foundations of behavioral economics.


Until geographical separation made it too difficult to go on, Amos and I

enjoyed the extraordinary good fortune of a shared mind that was superior

to our individual minds and of a relationship that made our work fun as well

as productive. Our collaboration on judgment and decision making was the

reason for the Nobel Prize that I received in 2002, which Amos would have

shared had he not died, aged fifty-nine, in 1996.

Where we are now

This book is not intended as an exposition of the early research that Amos

and I conducted together, a task that has been ably carried out by many

authors over the years. My main aim here is to present a view of how the mind works that draws on recent developments in cognitive and social

psychology. One of the more important developments is that we now

understand the marvels as well as the flaws of intuitive thought. Amos and I did not address accurate intuitions beyond the casual

statement that judgment heuristics “are quite useful, but sometimes lead to

severe and systematic errors.” We focused on biases, both because we

found them interesting in their own right and because they provided

evidence for the heuristics of judgment. We did not ask ourselves whether

all intuitive judgments under uncertainty are produced by the heuristics we

studied; it is now clear that they are not. In particular, the accurate intuitions

of experts are better explained by the effects of prolonged practice than by

heuristics. We can now draw a richer and more balanced

picture, in which skill and heuristics are alternative sources of intuitive

judgments and choices.

The psychologist Gary Klein tells the story of a team of firefighters that

entered a house in which the kitchen was on fire. Soon after they started

hosing down the kitchen, the commander heard himself shout, “Let’s get

out of here!” without realizing why. The floor collapsed almost immediately

after the firefighters escaped. Only after the fact did the commander realize

that the fire had been unusually quiet and that his ears had been unusually

hot. Together, these impressions prompted what he called a “sixth sense

of danger.” He had no idea what was wrong, but he knew something was wrong. It turned out that the heart of the fire had not been in the kitchen but

in the basement beneath where the men had stood. We have all heard such stories of expert intuition: the chess master who walks past a street game and announces “White mates in three” without

stopping, or the physician who makes a complex diagnosis after a single

glance at a patient. Expert intuition strikes us as magical, but it is not.

Indeed, each of us performs feats of intuitive expertise many times each


day. Most of us are pitch-perfect in detecting anger in the first word of a

telephone call, recognize as we enter a room that we were the subject of

the conversation, and quickly react to subtle signs that the driver of the car

in the next lane is dangerous. Our everyday intuitive abilities are no less marvelous than the striking insights of an experienced firefighter or

physician—only more common.

The psychology of accurate intuition involves no magic. Perhaps the

best short statement of it is by the great Herbert Simon, who studied chess masters and showed that after thousands of hours of practice they come to

see the pieces on the board differently from the rest of us. You can feel

Simon’s impatience with the mythologizing of expert intuition when he writes: “The situation has provided a cue; this cue has given the expert

access to information stored in memory, and the information provides the

answer. Intuition is nothing more and nothing less than recognition.” We are not surprised when a two-year-old looks at a dog and says

“doggie!” because we are used to the miracle of children learning to

recognize and name things. Simon’s point is that the miracles of expert

intuition have the same character. Valid intuitions develop when experts

have learned to recognize familiar elements in a new situation and to act in

a manner that is appropriate to it. Good intuitive judgments come to mind with the same immediacy as “doggie!” Unfortunately, professionals’ intuitions do not all arise from true

expertise. Many years ago I visited the chief investment officer of a large

financial firm, who told me that he had just invested some tens of millions of

dollars in the stock of Ford Motor Company. When I asked how he had made that decision, he replied that he had recently attended an automobile

show and had been impressed. “Boy, do they know how to make a car!” was his explanation. He made it very clear that he trusted his gut feeling

and was satisfied with himself and with his decision. I found it remarkable

that he had apparently not considered the one question that an economist would call relevant: Is Ford stock currently underpriced? Instead, he had

listened to his intuition; he liked the cars, he liked the company, and he

liked the idea of owning its stock. From what we know about the accuracy

of stock picking, it is reasonable to believe that he did not know what he was doing.

The specific heuristics that Amos and I studied provide little

help in understanding how the executive came to invest in Ford stock, but a

broader conception of heuristics now exists, which offers a good account. An important advance is that emotion now looms much larger in our

understanding of intuitive judgments and choices than it did in the past.

The executive’s decision would today be described as an example of the

affect heuristic, where judgments and decisions are guided directly by


feelings of liking and disliking, with little deliberation or reasoning. When confronted with a problem—choosing a chess move or deciding whether to invest in a stock—the machinery of intuitive thought does the

best it can. If the individual has relevant expertise, she will recognize the

situation, and the intuitive solution that comes to her mind is likely to be

correct. This is what happens when a chess master looks at a complex

position: the few moves that immediately occur to him are all strong. When

the question is difficult and a skilled solution is not available, intuition still

has a shot: an answer may come to mind quickly—but it is not an answer

to the original question. The question that the executive faced (should I

invest in Ford stock?) was difficult, but the answer to an easier and related

question (do I like Ford cars?) came readily to his mind and determined

his choice. This is the essence of intuitive heuristics: when faced with a

difficult question, we often answer an easier one instead, usually without

noticing the substitution.

The spontaneous search for an intuitive solution sometimes fails—

neither an expert solution nor a heuristic answer comes to mind. In such

cases we often find ourselves switching to a slower, more deliberate and

effortful form of thinking. This is the slow thinking of the title. Fast thinking

includes both variants of intuitive thought—the expert and the heuristic—as well as the entirely automatic mental activities of perception and memory,

the operations that enable you to know there is a lamp on your desk or

retrieve the name of the capital of Russia.

The distinction between fast and slow thinking has been explored by many psychologists over the last twenty-five years. For reasons that I

explain more fully in the next chapter, I describe mental life by the metaphor

of two agents, called System 1 and System 2, which respectively produce

fast and slow thinking. I speak of the features of intuitive and deliberate

thought as if they were traits and dispositions of two characters in your mind. In the picture that emerges from recent research, the intuitive System

1 is more influential than your experience tells you, and it is the secret

author of many of the choices and judgments you make. Most of this book

is about the workings of System 1 and the mutual influences between it

and System 2.

What Comes Next

The book is divided into five parts. Part 1 presents the basic elements of a

two-systems approach to judgment and choice. It elaborates the distinction

between the automatic operations of System 1 and the controlled

operations of System 2, and shows how associative memory, the core of


System 1, continually constructs a coherent interpretation of what is going

on in our world at any instant. I attempt to give a sense of the complexity

and richness of the automatic and often unconscious processes that

underlie intuitive thinking, and of how these automatic processes explain

the heuristics of judgment. A goal is to introduce a language for thinking

and talking about the mind. Part 2 updates the study of judgment heuristics and explores a major

puzzle: Why is it so difficult for us to think statistically? We easily think

associatively, we think metaphorically, we think causally, but

statistics requires thinking about many things at once, which is something

that System 1 is not designed to do.

The difficulties of statistical thinking contribute to the main theme of Part

3, which describes a puzzling limitation of our mind: our excessive

confidence in what we believe we know, and our apparent inability to

acknowledge the full extent of our ignorance and the uncertainty of the world we live in. We are prone to overestimate how much we understand

about the world and to underestimate the role of chance in events. Overconfidence is fed by the illusory certainty of hindsight. My views on this

topic have been influenced by Nassim Taleb, the author of The Black

Swan. I hope for watercooler conversations that intelligently explore the

lessons that can be learned from the past while resisting the lure of

hindsight and the illusion of certainty.

The focus of part 4 is a conversation with the discipline of economics on

the nature of decision making and on the assumption that economic

agents are rational. This section of the book provides a current view,

informed by the two-system model, of the key concepts of prospect theory,

the model of choice that Amos and I published in 1979. Subsequent

chapters address several ways human choices deviate from the rules of

rationality. I deal with the unfortunate tendency to treat problems in

isolation, and with framing effects, where decisions are shaped by

inconsequential features of choice problems. These observations, which

are readily explained by the features of System 1, present a deep

challenge to the rationality assumption favored in standard economics. Part 5 describes recent research that has introduced a distinction

between two selves, the experiencing self and the remembering self, which

do not have the same interests. For example, we can expose people to

two painful experiences. One of these experiences is strictly worse than

the other, because it is longer. But the automatic formation of memories—

a feature of System 1—has its rules, which we can exploit so that the worse episode leaves a better memory. When people later choose which

episode to repeat, they are, naturally, guided by their remembering self


and expose themselves (their experiencing self) to unnecessary pain. The

distinction between two selves is applied to the measurement of well-being, where we find again that what makes the experiencing self happy is

not quite the same as what satisfies the remembering self. How two selves within a single body can pursue happiness raises some difficult questions,

both for individuals and for societies that view the well-being of the

population as a policy objective. A concluding chapter explores, in reverse order, the implications of three

distinctions drawn in the book: between the experiencing and the

remembering selves, between the conception of agents in classical

economics and in behavioral economics (which borrows from psychology),

and between the automatic System 1 and the effortful System 2. I return to

the virtues of educating gossip and to what organizations might do to

improve the quality of judgments and decisions that are made on their

behalf.

Two articles I wrote with Amos are reproduced as appendixes to the

book. The first is the review of judgment under uncertainty that I described

earlier. The second, published in 1984, summarizes prospect theory as well as our studies of framing effects. The articles present the contributions

that were cited by the Nobel committee—and you may be surprised by

how simple they are. Reading them will give you a sense of how much we

knew a long time ago, and also of how much we have learned in recent

decades.

Part 1

Two Systems

The Characters of the Story

To observe your mind in automatic mode, glance at the image below.

Figure 1

Your experience as you look at the woman’s face seamlessly combines what we normally call seeing and intuitive thinking. As surely and quickly as

you saw that the young woman’s hair is dark, you knew she is angry.

Furthermore, what you saw extended into the future. You sensed that this woman is about to say some very unkind words, probably in a loud and

strident voice. A premonition of what she was going to do next came to mind automatically and effortlessly. You did not intend to assess her mood

or to anticipate what she might do, and your reaction to the picture did not

have the feel of something you did. It just happened to you. It was an

instance of fast thinking. Now look at the following problem:

17 × 24

You knew immediately that this is a multiplication problem, and probably

knew that you could solve it, with paper and pencil, if not without. You also

had some vague intuitive knowledge of the range of possible results. You would be quick to recognize that both 12,609 and 123 are implausible. Without spending some time on the problem, however, you would not be


certain that the answer is not 568. A precise solution did not come to mind,

and you felt that you could choose whether or not to engage in the

computation. If you have not done so yet, you should attempt the multiplication problem now, completing at least part of it. You experienced slow thinking as you proceeded through a sequence of

steps. You first retrieved from memory the cognitive program for multiplication that you learned in school, then you implemented it. Carrying

out the computation was a strain. You felt the burden of holding much material in memory, as you needed to keep track of where you were and of where you were going, while holding on to the intermediate result. The

process was mental work: deliberate, effortful, and orderly—a prototype of

slow thinking. The computation was not only an event in your mind; your

body was also involved. Your muscles tensed up, your blood pressure

rose, and your heart rate increased. Someone looking closely at your eyes while you tackled this problem would have seen your pupils dilate. Your

pupils contracted back to normal size as soon as you ended your work—

when you found the answer (which is 408, by the way) or when you gave

up.

Two Systems

Psychologists have been intensely interested for several decades in the

two modes of thinking evoked by the picture of the angry woman and by the multiplication problem, and have offered many labels for

them. I adopt terms originally proposed by the psychologists Keith

Stanovich and Richard West, and will refer to two systems in the mind, System 1 and System 2.

System 1 operates automatically and quickly, with little or no effort

and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that

demand it, including complex computations. The operations of

System 2 are often associated with the subjective experience of

agency, choice, and concentration.

The labels of System 1 and System 2 are widely used in psychology, but I

go further than most in this book, which you can read as a psychodrama with two characters. When we think of ourselves, we identify with System 2, the conscious,


reasoning self that has beliefs, makes choices, and decides what to think

about and what to do. Although System 2 believes itself to be where the

action is, the automatic System 1 is the hero of the book. I describe System 1 as effortlessly originating impressions and feelings that are the main sources of the explicit beliefs and deliberate choices of System 2.

The automatic operations of System 1 generate surprisingly complex

patterns of ideas, but only the slower System 2 can construct thoughts in an

orderly series of steps. I also describe circumstances in which System 2

takes over, overruling the freewheeling impulses and associations of

System 1. You will be invited to think of the two systems as agents with

their individual abilities, limitations, and functions.

In rough order of complexity, here are some examples of the automatic

activities that are attributed to System 1:

Detect that one object is more distant than another.
Orient to the source of a sudden sound.
Complete the phrase “bread and...”
Make a “disgust face” when shown a horrible picture.
Detect hostility in a voice.
Answer to 2 + 2 = ?
Read words on large billboards.
Drive a car on an empty road.
Find a strong move in chess (if you are a chess master).
Understand simple sentences.
Recognize that a “meek and tidy soul with a passion for detail” resembles an occupational stereotype.

All these mental events belong with the angry woman—they occur

automatically and require little or no effort. The capabilities of System 1

include innate skills that we share with other animals. We are born

prepared to perceive the world around us, recognize objects, orient

attention, avoid losses, and fear spiders. Other mental activities become

fast and automatic through prolonged practice. System 1 has learned

associations between ideas (the capital of France?); it has also learned

skills such as reading and understanding nuances of social situations. Some skills, such as finding strong chess moves, are acquired only by

specialized experts. Others are widely shared. Detecting the similarity of a

personality sketch to an occupational stereotype requires

broad knowledge of the language and the culture, which most of us


possess. The knowledge is stored in memory and accessed without

intention and without effort. Several of the mental actions in the list are completely involuntary. You

cannot refrain from understanding simple sentences in your own language

or from orienting to a loud unexpected sound, nor can you prevent yourself

from knowing that 2 + 2 = 4 or from thinking of Paris when the capital of

France is mentioned. Other activities, such as chewing, are susceptible to

voluntary control but normally run on automatic pilot. The control of attention

is shared by the two systems. Orienting to a loud sound is normally an

involuntary operation of System 1, which immediately mobilizes the

voluntary attention of System 2. You may be able to resist turning toward

the source of a loud and offensive comment at a crowded party, but even if

your head does not move, your attention is initially directed to it, at least for

a while. However, attention can be moved away from an unwanted focus,

primarily by focusing intently on another target.

The highly diverse operations of System 2 have one feature in common:

they require attention and are disrupted when attention is drawn away. Here are some examples:

Brace for the starter gun in a race.

Focus attention on the clowns in the circus.

Focus on the voice of a particular person in a crowded and noisy room.
Look for a woman with white hair.
Search memory to identify a surprising sound.
Maintain a faster walking speed than is natural for you.
Monitor the appropriateness of your behavior in a social situation.
Count the occurrences of the letter a in a page of text.
Tell someone your phone number.
Park in a narrow space (for most people except garage attendants).
Compare two washing machines for overall value.
Fill out a tax form.
Check the validity of a complex logical argument.

In all these situations you must pay attention, and you will perform less well,

or not at all, if you are not ready or if your attention is directed

inappropriately. System 2 has some ability to change the way System 1 works, by programming the normally automatic functions of attention and memory. When waiting for a relative at a busy train station, for example,


you can set yourself at will to look for a white-haired woman or a bearded man, and thereby increase the likelihood of detecting your relative from a

distance. You can set your memory to search for capital cities that start with N or for French existentialist novels. And when you rent a car at

London’s Heathrow Airport, the attendant will probably remind you that “we

drive on the left side of the road over here.” In all these cases, you are

asked to do something that does not come naturally, and you will find that

the consistent maintenance of a set requires continuous exertion of at least

some effort.

The often-used phrase “pay attention” is apt: you dispose of a limited

budget of attention that you can allocate to activities, and if you try to

go beyond your budget, you will fail. It is the mark of effortful

activities that they interfere with each other, which is why it is difficult or

impossible to conduct several at once. You could not compute the product

of 17 × 24 while making a left turn into dense traffic, and you certainly

should not try. You can do several things at once, but only if they are easy

and undemanding. You are probably safe carrying on a conversation with a

passenger while driving on an empty highway, and many parents have

discovered, perhaps with some guilt, that they can read a story to a child while thinking of something else. Everyone has some awareness of the limited capacity of attention, and

our social behavior makes allowances for these limitations. When the

driver of a car is overtaking a truck on a narrow road, for example, adult

passengers quite sensibly stop talking. They know that distracting the

driver is not a good idea, and they also suspect that he is temporarily deaf

and will not hear what they say.

Intense focusing on a task can make people effectively blind, even to

stimuli that normally attract attention. The most dramatic demonstration was offered by Christopher Chabris and Daniel Simons in their book The

Invisible Gorilla. They constructed a short film of two teams passing

basketballs, one team wearing white shirts, the other wearing black. The

viewers of the film are instructed to count the number of passes made by

the white team, ignoring the black players. This task is difficult and

completely absorbing. Halfway through the video, a woman wearing a

gorilla suit appears, crosses the court, thumps her chest, and moves on.

The gorilla is in view for 9 seconds. Many thousands of people have seen

the video, and about half of them do not notice anything unusual. It is the

counting task—and especially the instruction to ignore one of the teams—

that causes the blindness. No one who watches the video without that task would miss the gorilla. Seeing and orienting are automatic functions of

System 1, but they depend on the allocation of some attention to the


relevant stimulus. The authors note that the most remarkable observation

of their study is that people find its results very surprising. Indeed, the

viewers who fail to see the gorilla are initially sure that it was not there—

they cannot imagine missing such a striking event. The gorilla study

illustrates two important facts about our minds: we can be blind to the

obvious, and we are also blind to our blindness.

Plot Synopsis

The interaction of the two systems is a recurrent theme of the book, and a

brief synopsis of the plot is in order. In the story I will tell, Systems 1 and 2

are both active whenever we are awake. System 1 runs automatically and

System 2 is normally in a comfortable low-effort mode, in which only a

fraction of its capacity is engaged. System 1 continuously generates

suggestions for System 2: impressions, intuitions, intentions, and feelings.

If endorsed by System 2, impressions and intuitions turn into beliefs, and

impulses turn into voluntary actions. When all goes smoothly, which is most

of the time, System 2 adopts the suggestions of System 1 with little or no modification. You generally believe your impressions and act on your

desires, and that is fine—usually. When System 1 runs into difficulty, it calls on System 2 to support more

detailed and specific processing that may solve the problem of the moment. System 2 is mobilized when a question arises for which System 1

does not offer an answer, as probably happened to you when you

encountered the multiplication problem 17 × 24. You can also feel a surge

of conscious attention whenever you are surprised. System 2 is activated when an event is detected that violates the model of the world

that System 1 maintains. In that world, lamps do not jump, cats do not bark,

and gorillas do not cross basketball courts. The gorilla experiment

demonstrates that some attention is needed for the surprising stimulus to

be detected. Surprise then activates and orients your attention: you will

stare, and you will search your memory for a story that makes sense of the

surprising event. System 2 is also credited with the continuous monitoring

of your own behavior—the control that keeps you polite when you are

angry, and alert when you are driving at night. System 2 is mobilized to

increased effort when it detects an error about to be made. Remember a

time when you almost blurted out an offensive remark and note how hard

you worked to restore control. In summary, most of what you (your System

2) think and do originates in your System 1, but System 2 takes over when

things get difficult, and it normally has the last word.

The division of labor between System 1 and System 2 is highly efficient:


it minimizes effort and optimizes performance. The arrangement works well most of the time because System 1 is generally very good at what it

does: its models of familiar situations are accurate, its short-term

predictions are usually accurate as well, and its initial reactions to

challenges are swift and generally appropriate. System 1 has biases,

however, systematic errors that it is prone to make in specified

circumstances. As we shall see, it sometimes answers easier questions

than the one it was asked, and it has little understanding of logic and

statistics. One further limitation of System 1 is that it cannot be turned off. If

you are shown a word on the screen in a language you know, you will read

it—unless your attention is totally focused elsewhere.

Conflict

Figure 2 is a variant of a classic experiment that produces a conflict

between the two systems. You should try the exercise before reading on.

Figure 2

You were almost certainly successful in saying the correct words in both

tasks, and you surely discovered that some parts of each task were much

easier than others. When you identified upper- and lowercase, the left-hand column was easy and the right-hand column caused you to slow down

and perhaps to stammer or stumble. When you named the position of words, the left-hand column was difficult and the right-hand column was much easier.

These tasks engage System 2, because saying “upper/lower” or

“right/left” is not what you routinely do when looking down a column of words. One of the things you did to set yourself for the task was to program

your memory so that the relevant words (upper and lower for the first task) were “on the tip of your tongue.” The prioritizing of the chosen words is

effective and the mild temptation to read other words was fairly easy to

resist when you went through the first column. But the second column was

different, because it contained words for which you were set, and you could

not ignore them. You were mostly able to respond correctly, but

overcoming the competing response was a strain, and it slowed you down. You experienced a conflict between a task that you intended to carry out

and an automatic response that interfered with it. Conflict between an automatic reaction and an intention to control it is common in our lives. We are all familiar with the experience of

trying not to stare at the oddly dressed couple at the neighboring table in a

restaurant. We also know what it is like to force our attention on a boring

book, when we constantly find ourselves returning to the point at which the

reading lost its meaning. Where winters are hard, many drivers have memories of their car skidding out of control on the ice and of the struggle

to follow well-rehearsed instructions that negate what they would naturally

do: “Steer into the skid, and whatever you do, do not touch the brakes!”

And every human being has had the experience of not telling someone to

go to hell. One of the tasks of System 2 is to overcome the impulses of

System 1. In other words, System 2 is in charge of self-control.

Illusions

To appreciate the autonomy of System 1, as well as the distinction

between impressions and beliefs, take a good look at figure 3.

This picture is unremarkable: two horizontal lines of different lengths, with fins appended, pointing in different directions. The bottom line is

obviously longer than the one above it. That is what we all see, and we

naturally believe what we see. If you have already encountered this image,

however, you recognize it as the famous Müller-Lyer illusion. As you can

easily confirm by measuring them with a ruler, the horizontal lines are in

fact identical in length.


Figure 3

Now that you have measured the lines, you—your System 2, the

conscious being you call “I”—have a new belief: you know that the lines are

equally long. If asked about their length, you will say what you know. But you

still see the bottom line as longer. You have chosen to believe the measurement, but you cannot prevent System 1 from doing its thing; you

cannot decide to see the lines as equal, although you know they are. To

resist the illusion, there is only one thing you can do: you must learn to mistrust your impressions of the length of lines when fins are attached to

them. To implement that rule, you must be able to recognize the illusory

pattern and recall what you know about it. If you can do this, you will never

again be fooled by the Müller-Lyer illusion. But you will still see one line as

longer than the other. Not all illusions are visual. There are illusions of thought, which we call

cognitive illusions. As a graduate student, I attended some courses on the

art and science of psychotherapy. During one of these lectures, our

teacher imparted a morsel of clinical wisdom. This is what he told us: “You will from time to time meet a patient who shares a disturbing tale of multiple mistakes in his previous treatment. He has been seen by several

clinicians, and all failed him. The patient can lucidly describe how his

therapists misunderstood him, but he has quickly perceived that you are

different. You share the same feeling, are convinced that you understand

him, and will be able to help.” At this point my teacher raised his voice as

he said, “Do not even think of taking on this patient! Throw him out of the

office! He is most likely a psychopath and you will not be able to help him.” Many years later I learned that the teacher had warned us against

psychopathic charm, and the leading authority in the study of


psychopathy confirmed that the teacher’s advice was sound. The analogy

to the Müller-Lyer illusion is close. What we were being taught was not how

to feel about that patient. Our teacher took it for granted that the sympathy we would feel for the patient would not be under our control; it would arise

from System 1. Furthermore, we were not being taught to be generally

suspicious of our feelings about patients. We were told that a strong

attraction to a patient with a repeated history of failed treatment is a

danger sign—like the fins on the parallel lines. It is an illusion—a cognitive

illusion—and I (System 2) was taught how to recognize it and advised not

to believe it or act on it.

The question that is most often asked about cognitive illusions is whether they can be overcome. The message of these examples is not

encouraging. Because System 1 operates automatically and cannot be

turned off at will, errors of intuitive thought are often difficult to prevent. Biases cannot always be avoided, because System 2 may have no clue to

the error. Even when cues to likely errors are available, errors can be

prevented only by the enhanced monitoring and effortful activity of System

2. As a way to live your life, however, continuous vigilance is not

necessarily good, and it is certainly impractical. Constantly questioning our

own thinking would be impossibly tedious, and System 2 is much too slow

and inefficient to serve as a substitute for System 1 in making routine

decisions. The best we can do is a compromise: learn to recognize

situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high. The premise of this book is that it is

easier to recognize other people’s mistakes than our own.

Useful Fictions

You have been invited to think of the two systems as agents within the mind, with their individual personalities, abilities, and limitations. I will often

use sentences in which the systems are the subjects, such as, “System 2

calculates products.”

The use of such language is considered a sin in the professional circles

in which I travel, because it seems to explain the thoughts and actions of a

person by the thoughts and actions of little people inside the person’s

head. Grammatically the sentence about System 2 is similar to “The butler

steals the petty cash.” My colleagues would point out that the butler’s action

actually explains the disappearance of the cash, and they rightly question whether the sentence about System 2 explains how products are

calculated. My answer is that the brief active sentence that attributes

calculation to System 2 is intended as a description, not an explanation. It


is meaningful only because of what you already know about System 2. It is

shorthand for the following: “Mental arithmetic is a voluntary activity that

requires effort, should not be performed while making a left turn, and is

associated with dilated pupils and an accelerated heart rate.” Similarly, the statement that “highway driving under routine conditions is

left to System 1” means that steering the car around a bend is automatic

and almost effortless. It also implies that an experienced driver can drive

on an empty highway while conducting a conversation. Finally, “System 2

prevented James from reacting foolishly to the insult” means that James would have been more aggressive in his response if his capacity for

effortful control had been disrupted (for example, if he had been drunk). System 1 and System 2 are so central to the story I tell in this book that I must make it absolutely clear that they are217at they a fictitious

characters. Systems 1 and 2 are not systems in the standard sense of

entities with interacting aspects or parts. And there is no one part of the

brain that either of the systems would call home. You may well ask: What is

the point of introducing fictitious characters with ugly names into a serious

book? The answer is that the characters are useful because of some

quirks of our minds, yours and mine. A sentence is understood more easily

if it describes what an agent (System 2) does than if it describes what

something is, what properties it has. In other words, “System 2” is a better

subject for a sentence than “mental arithmetic.” The mind—especially

System 1—appears to have a special aptitude for the construction and

interpretation of stories about active agents, who have personalities,

habits, and abilities. You quickly formed a bad opinion of the thieving

butler, you expect more bad behavior from him, and you will remember him

for a while. This is also my hope for the language of systems.

Why call them System 1 and System 2 rather than the more descriptive

“automatic system” and “effortful system”? The reason is simple:

“Automatic system” takes longer to say than “System 1” and therefore

takes more space in your working memory. This matters, because

anything that occupies your working memory reduces your ability to think. You should treat “System 1” and “System 2” as nicknames, like Bob and

Joe, identifying characters that you will get to know over the course of this

book. The fictitious systems make it easier for me to think about judgment

and choice, and will make it easier for you to understand what I say.

Speaking of System 1 and System 2


“He had an impression, but some of his impressions are

illusions.”

“This was a pure System 1 response. She reacted to the threat

before she recognized it.”

“This is your System 1 talking. Slow down and let your System 2

take control.”


Attention and Effort

In the unlikely event of this book being made into a film, System 2 would be

a supporting character who believes herself to be the hero. The defining

feature of System 2, in this story, is that its operations are effortful, and one

of its main characteristics is laziness, a reluctance to invest more effort

than is strictly necessary. As a consequence, the thoughts and actions that

System 2 believes it has chosen are often guided by the figure at the

center of the story, System 1. However, there are vital tasks that only

System 2 can perform because they require effort and acts of self-control

in which the intuitions and impulses of System 1 are overcome.

Mental Effort

If you wish to experience your System 2 working at full tilt, the following

exercise will do; it should bring you to the limits of your cognitive

abilities within 5 seconds. To start, make up several strings of 4 digits, all

different, and write each string on an index card. Place a blank card on top

of the deck. The task that you will perform is called Add-1. Here is how it

goes:

Start beating a steady rhythm (or better yet, set a metronome at

1/sec). Remove the blank card and read the four digits aloud. Wait for two beats, then report a string in which each of the

original digits is incremented by 1. If the digits on the card are

5294, the correct response is 6305. Keeping the rhythm is

important.

Few people can cope with more than four digits in the Add-1 task, but if

you want a harder challenge, please try Add-3.
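For readers who want the rule spelled out, here is a minimal sketch in Python (my illustration, not part of the book’s exercise), assuming that a 9 wraps around to 0, as the example 5294 → 6305 implies:

    # Add-1 (or Add-3): increment every digit, wrapping 9 around to 0.
    def add_n(digits: str, n: int = 1) -> str:
        return "".join(str((int(d) + n) % 10) for d in digits)

    print(add_n("5294"))        # 6305, the example above
    print(add_n("5294", n=3))   # 8527, the harder Add-3 variant

The program is trivial; the point of the exercise is that performing the same transformation in your head, on a metronome beat, is anything but.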

If you would like to know what your body is doing while your mind is hard

at work, set up two piles of books on a sturdy table, place a video camera

on one and lean your chin on the other, get the video going, and stare at

the camera lens while you work on Add-1 or Add-3 exercises. Later, you will find in the changing size of your pupils a faithful record of how hard you worked.

I have a long personal history with the Add-1 task. Early in my career I

spent a year at the University of Michigan, as a visitor in a laboratory that

studied hypnosis. Casting about for a useful topic of research, I found an

article in Scientific American in which the psychologist Eckhard Hess

described the pupil of the eye as a window to the soul. I reread it recently


and again found it inspiring. It begins with Hess reporting that his wife had

noticed his pupils widening as he watched beautiful nature pictures, and it

ends with two striking pictures of the same good-looking woman, who

somehow appears much more attractive in one than in the other. There is

only one difference: the pupils of the eyes appear dilated in the attractive

picture and constricted in the other. Hess also wrote of belladonna, a pupil- dilating substance that was used as a cosmetic, and of bazaar shoppers who wear dark glasses in order to hide their level of interest from

merchants. One of Hess’s findings especially captured my attention. He had noticed

that the pupils are sensitive indicators of mental effort—they dilate

substantially when people multiply two-digit numbers, and they dilate more

if the problems are hard than if they are easy. His observations indicated

that the response to mental effort is distinct from emotional arousal. Hess’s work did not have much to do with hypnosis, but I concluded that the idea

of a visible indication of mental effort had promise as a research topic. A

graduate student in the lab, Jackson Beatty, shared my enthusiasm and we

got to work. Beatty and I developed a setup similar to an optician’s examination

room, in which the experimental participant leaned her head on a chin-and-forehead rest and stared at a camera while listening to prerecorded

information and answering questions on the recorded beats of a metronome. The beats triggered an infrared flash every second, causing a

picture to be taken. At the end of each experimental session, we would

rush to have the film developed, project the images of the pupil on a

screen, and go to work with a ruler. The method was a perfect fit for young

and impatient researchers: we knew our results almost immediately, and

they always told a clear story. Beatty and I focused on paced tasks, such as Add-1, in which we knew

precisely what was on the subject’s mind at any time. We recorded strings

of digits on beats of the metronome and instructed the subject to repeat or

transform the digits one by one, maintaining the same rhythm. We soon discovered that the size of the pupil varied second by second,

reflecting the changing demands of the task. The shape of the response was an inverted V. As you experienced it if you tried Add-1 or Add-3, effort

builds up with every added digit that you hear, reaches an almost

intolerable peak as you rush to produce a transformed string during and

immediately after the pause, and relaxes gradually as you “unload” your

short-term memory. The pupil data corresponded precisely to subjective

experience: longer strings reliably caused larger dilations, the

transformation task compounded the effort, and the peak of pupil size

coincided with maximum effort. Add-1 with four digits caused a larger


dilation than the task of holding seven digits for immediate recall. Add-3, which is much more difficult, is the most demanding task that I ever observed. In

the first 5 seconds, the pupil dilates by about 50% of its original area and

heart rate increases by about 7 beats per minute. This is as hard as

people can work—they give up if more is asked of them. When we

exposed our subjects to more digits than they could remember, their pupils

stopped dilating or actually shrank. We worked for some months in a spacious basement suite in which we

had set up a closed-circuit system that projected an image of the subject’s

pupil on a screen in the corridor; we also could hear what was happening

in the laboratory. The diameter of the projected pupil was about a foot; watching it dilate and contract when the participant was at work was a

fascinating sight, quite an attraction for visitors in our lab. We amused

ourselves and impressed our guests by our ability to divine when the

participant gave up on a task. During a mental multiplication, the pupil

normally dilated to a large size within a few seconds and stayed large as

long as the individual kept working on the problem; it contracted

immediately when she found a solution or gave up. As we watched from

the corridor, we would sometimes surprise both the owner of the pupil and

our guests by asking, “Why did you stop working just now?” The answer

from inside the lab was often, “How did you know?” to which we would

reply, “We have a window to your soul.”

The casual observations we made from the corridor were sometimes as

informative as the formal experiments. I made a significant discovery as I was idly watching a woman’s pupil during a break between two tasks. She

had kept her position on the chin rest, so I could see the image of her eye while she engaged in routine conversation with the experimenter. I was

surprised to see that the pupil remained small and did not noticeably dilate

as she talked and listened. Unlike the tasks that we were studying, the mundane conversation apparently demanded little or no effort—no more

than retaining two or three digits. This was a eureka moment: I realized that

the tasks we had chosen for study were exceptionally effortful. An image

came to mind: mental life—today I would speak of the life of System 2—is

normally conducted at the pace of a comfortable walk, sometimes

interrupted by episodes of jogging and on rare occasions by a frantic

sprint. The Add-1 and Add-3 exercises are sprints, and casual chatting is

a stroll. We found that people, when engaged in a mental sprint, may become

effectively blind. The authors of The Invisible Gorilla had made the gorilla

“invisible” by keeping the observers intensely busy counting passes. We

reported a rather less dramatic example of blindness during Add-1. Our


subjects were exposed to a series of rapidly flashing letters while they worked. They were told to give the task complete priority, but they were

also asked to report, at the end of the digit task, whether the letter K had

appeared at any time during the trial. The main finding was that

the ability to detect and report the target letter changed in the course of the

10 seconds of the exercise. The observers almost never missed a K that was shown at the beginning or near the end of the Add-1 task but they missed the target almost half the time when mental effort was at its peak,

although we had pictures of their wide-open eye staring straight at it.

Failures of detection followed the same inverted-V pattern as the dilating

pupil. The similarity was reassuring: the pupil was a good measure of the

physical arousal that accompanies mental effort, and we could go ahead

and use it to understand how the mind works. Much like the electricity meter outside your house or apartment, the

pupils offer an index of the current rate at which mental energy is used. The

analogy goes deep. Your use of electricity depends on what you choose to

do, whether to light a room or toast a piece of bread. When you turn on a

bulb or a toaster, it draws the energy it needs but no more. Similarly, we

decide what to do, but we have limited control over the effort of doing it. Suppose you are shown four digits, say, 9462, and told that your life

depends on holding them in memory for 10 seconds. However much you want to live, you cannot exert as much effort in this task as you would be

forced to invest to complete an Add-3 transformation on the same digits. System 2 and the electrical circuits in your home both have limited

capacity, but they respond differently to threatened overload. A breaker

trips when the demand for current is excessive, causing all devices on that

circuit to lose power at once. In contrast, the response to mental overload

is selective and precise: System 2 protects the most important activity, so

it receives the attention it needs; “spare capacity” is allocated second by

second to other tasks. In our version of the gorilla experiment, we

instructed the participants to assign priority to the digit task. We know that

they followed that instruction, because the timing of the visual target had no

effect on the main task. If the critical letter was presented at a time of high

demand, the subjects simply did not see it. When the transformation task was less demanding, detection performance was better.

The sophisticated allocation of attention has been honed by a long

evolutionary history. Orienting and responding quickly to the gravest threats

or most promising opportunities improved the chance of survival, and this

capability is certainly not restricted to humans. Even in modern humans, System 1 takes over in emergencies and assigns total priority to self- protective actions. Imagine yourself at the wheel of a car that unexpectedly


skids on a large oil slick. You will find that you have responded to the threat

before you became fully conscious of it. Beatty and I worked together for only a year, but our collaboration had a

large effect on our subsequent careers. He eventually became the leading

authority on “cognitive pupillometry,” and I wrote a book titled Attention and

Effort, which was based in large part on what we learned together and on

follow-up research I did at Harvard the following year. We learned a great

deal about the working mind—which I now think of as System 2—from

measuring pupils in a wide variety of tasks. As you become skilled in a task, its demand for energy diminishes. Studies of the brain have shown that the pattern of activity associated with

an action changes as skill increases, with fewer brain regions involved.

Talent has similar effects. Highly intelligent individuals need less effort to

solve the same problems, as indicated by both pupil size and brain activity. A general “law of least effort” applies to cognitive as well as physical

exertion. The law asserts that if there are several ways of achieving the

same goal, people will eventually gravitate to the least demanding course

of action. In the economy of action, effort is a cost, and the acquisition of

skill is driven by the balance of benefits and costs. Laziness is built deep

into our nature.

The tasks that we studied varied considerably in their effects on the

pupil. At baseline, our subjects were awake, aware, and ready to engage

in a task—probably at a higher level of arousal and cognitive readiness

than usual. Holding one or two digits in memory or learning to associate a word with a digit (3 = door) produced reliable effects on momentary

arousal above that baseline, but the effects were minuscule, only 5% of the

increase in pupil diameter associated with Add-3. A task that required

discriminating between the pitch of two tones yielded significantly larger

dilations. Recent research has shown that inhibiting the tendency to read

distracting words (as in figure 2 of the preceding chapter) also induces moderate effort. Tests of short-term memory for six or seven digits were more effortful. As you can experience, the request to retrieve and say aloud

your phone number or your spouse’s birthday also requires a brief but

significant effort, because the entire string must be held in memory as a

response is organized. Mental multiplication of two-digit numbers and the Add-3 task are near the limit of what most people can do. What makes some cognitive operations more demanding and effortful

than others? What outcomes must we purchase in the currency of

attention? What can System 2 do that System 1 cannot? We now have

tentative answers to these questions. Effort is required to maintain simultaneously in memory several ideas


that require separate actions, or that need to be combined according to a

rule—rehearsing your shopping list as you enter the supermarket,

choosing between the fish and the veal at a restaurant, or combining a

surprising result from a survey with the information that the sample was

small, for example. System 2 is the only one that can follow rules, compare

objects on several attributes, and make deliberate choices between

options. The automatic System 1 does not have these capabilities. System

1 detects simple relations (“they are all alike,” “the son is much taller than

the father”) and excels at integrating information about one thing, but it

does not deal with multiple distinct topics at once, nor is it adept at using

purely statistical information. System 1 will detect that a person described

as “a meek and tidy soul, with a need for order and structure, and a

passion for detail” resembles a caricature librarian, but combining this

intuition with knowledge about the small number of librarians is a task that

only System 2 can perform—if System 2 knows how to do so, which is true

of few people. A crucial capability of System 2 is the adoption of “task sets”: it can

program memory to obey an instruction that overrides habitual responses. Consider the following: Count all occurrences of the letter f in this page.

This is not a task you have ever performed before and it will not come

naturally to you, but your System 2 can take it on. It will be effortful to set

yourself up for this exercise, and effortful to carry it out, though you will

surely improve with practice. Psychologists speak of “executive control” to

describe the adoption and termination of task sets, and neuroscientists

have identified the main regions of the brain that serve the executive

function. One of these regions is involved whenever a conflict must be

resolved. Another is the prefrontal area of the brain, a region that is

substantially more developed in humans than in other

primates, and is involved in operations that we associate with intelligence. Now suppose that at the end of the page you get another instruction:

count all the commas in the next page. This will be harder, because you will

have to overcome the newly acquired tendency to focus attention on the

letter f. One of the significant discoveries of cognitive psychologists in

recent decades is that switching from one task to another is effortful,

especially under time pressure. The need for rapid switching is one of the

reasons that Add-3 and mental multiplication are so difficult. To perform

the Add-3 task, you must hold several digits in your working memory at the

same time, associating each with a particular operation: some digits are in

the queue to be transformed, one is in the process of transformation, and

others, already transformed, are retained for reporting. Modern tests of working memory require the individual to switch repeatedly between two


demanding tasks, retaining the results of one operation while performing

the other. People who do well on these tests tend to do well on tests of

general intelligence. However, the ability to control attention is not simply a measure of intelligence; measures of efficiency in the control of attention

predict performance of air traffic controllers and of Israeli Air Force pilots

beyond the effects of intelligence.

Time pressure is another driver of effort. As you carried out the Add-3

exercise, the rush was imposed in part by the metronome and in part by

the load on memory. Like a juggler with several balls in the air, you cannot

afford to slow down; the rate at which material decays in memory forces

the pace, driving you to refresh and rehearse information before it is lost. Any task that requires you to keep several ideas in mind at the same time

has the same hurried character. Unless you have the good fortune of a

capacious working memory, you may be forced to work uncomfortably

hard. The most effortful forms of slow thinking are those that require you to

think fast. You surely observed as you performed Add-3 how unusual it is for your mind to work so hard. Even if you think for a living, few of the mental tasks

in which you engage in the course of a working day are as demanding as Add-3, or even as demanding as storing six digits for immediate recall. We normally avoid mental overload by dividing our tasks into multiple easy

steps, committing intermediate results to long-term memory or to paper

rather than to an easily overloaded working memory. We cover long

distances by taking our time and conduct our mental lives by the law of

least effort.

Speaking of Attention and Effort

“I won’t try to solve this while driving. This is a pupil-dilating task. It

requires mental effort!”

“The law of least effort is operating here. He will think as little as

possible.”

“She did not forget about the meeting. She was completely

focused on something else when the meeting was set and she

just didn’t hear you.”


“What came quickly to my mind was an intuition from System 1. I’ll

have to start over and search my memory deliberately.”


The Lazy Controller

I spend a few months each year in Berkeley, and one of my great

pleasures there is a daily four-mile walk on a marked path in the hills, with

a fine view of San Francisco Bay. I usually keep track of my time and have

learned a fair amount about effort from doing so. I have found a speed,

about 17 minutes for a mile, which I experience as a stroll. I certainly exert

physical effort and burn more calories at that speed than if I sat in a

recliner, but I experience no strain, no conflict, and no need to push myself.

I am also able to think and work while walking at that rate. Indeed, I suspect

that the mild physical arousal of the walk may spill over into greater mental

alertness. System 2 also has a natural speed. You expend some mental energy in

random thoughts and in monitoring what goes on around you even when

your mind does nothing in particular, but there is little strain. Unless you are

in a situation that makes you unusually wary or self-conscious, monitoring what happens in the environment or inside your head demands little effort. You make many small decisions as you drive your car, absorb some

information as you read the newspaper, and conduct routine exchanges of

pleasantries with a spouse or a colleague, all with little effort and no strain.

Just like a stroll.

It is normally easy and actually quite pleasant to walk and think at the

same time, but at the extremes these activities appear to compete for the

limited resources of System 2. You can confirm this claim by a simple

experiment. While walking comfortably with a friend, ask him to compute

23 × 78 in his head, and to do so immediately. He will almost certainly stop

in his tracks. My experience is that I can think while strolling but cannot

engage in mental work that imposes a heavy load on short-term memory. If

I must construct an intricate argument under time pressure, I would rather

be still, and I would prefer sitting to standing. Of course, not all slow

thinking requires that form of intense concentration and effortful

computation—I did the best thinking of my life on leisurely walks with

Amos. Accelerating beyond my strolling speed completely changes the

experience of walking, because the transition to a faster walk brings about

a sharp deterioration in my ability to think coherently. As I speed up, my

attention is drawn with increasing frequency to the experience of walking

and to the deliberate maintenance of the faster pace. My ability to bring a

train of thought to a conclusion is impaired accordingly. At the highest

speed I can sustain on the hills, about 14 minutes for a mile, I do not even

try to think of anything else. In addition to the physical effort of moving my


body rapidly along the path, a mental effort of self-control is needed to

resist the urge to slow down. Self-control and deliberate thought apparently

draw on the same limited budget of effort.

For most of us, most of the time, the maintenance of a coherent train of

thought and the occasional engagement in effortful thinking also require

self-control. Although I have not conducted a systematic survey, I suspect

that frequent switching of tasks and speeded-up mental work are not

intrinsically pleasurable, and that people avoid them when possible. This is

how the law of least effort comes to be a law. Even in the absence of time

pressure, maintaining a coherent train of thought requires discipline. An

observer of the number of times I look at e-mail or investigate the

refrigerator during an hour of writing could reasonably infer an

urge to escape and conclude that keeping at it requires more self-control

than I can readily muster.

Fortunately, cognitive work is not always aversive, and people

sometimes expend considerable effort for long periods of time without

having to exert willpower. The psychologist Mihaly Csikszentmihalyi

(pronounced six-cent-mihaly) has done more than anyone else to study this

state of effortless attending, and the name he proposed for it, flow, has

become part of the language. People who experience flow describe it as

“a state of effortless concentration so deep that they lose their sense of

time, of themselves, of their problems,” and their descriptions of the joy of

that state are so compelling that Csikszentmihalyi has called it an “optimal

experience.” Many activities can induce a sense of flow, from painting to

racing motorcycles—and for some fortunate authors I know, even writing a

book is often an optimal experience. Flow neatly separates the two forms

of effort: concentration on the task and the deliberate control of attention. Riding a motorcycle at 150 miles an hour and playing a competitive game

of chess are certainly very effortful. In a state of flow, however, maintaining

focused attention on these absorbing activities requires no exertion of self-control, thereby freeing resources to be directed to the task at hand.

The Busy and Depleted System 2

It is now a well-established proposition that both self-control and cognitive

effort are forms of mental work. Several psychological studies have shown

that people who are simultaneously challenged by a demanding cognitive

task and by a temptation are more likely to yield to the temptation. Imagine

that you are asked to retain a list of seven digits for a minute or two. You

are told that remembering the digits is your top priority. While your

attention is focused on the digits, you are offered a choice between two


desserts: a sinful chocolate cake and a virtuous fruit salad. The evidence

suggests that you would be more likely to select the tempting chocolate

cake when your mind is loaded with digits. System 1 has more influence

on behavior when System 2 is busy, and it has a sweet tooth.

People who are cognitively busy are also more likely to make selfish

choices, use sexist language, and make superficial judgments in social

situations. Memorizing and repeating digits loosens the hold of System 2

on behavior, but of course cognitive load is not the only cause of weakened self-control. A few drinks have the same effect, as does a

sleepless night. The self-control of morning people is impaired at night; the

reverse is true of night people. Too much concern about how well one is

doing in a task sometimes disrupts performance by loading short-term

memory with pointless anxious thoughts. The conclusion is straightforward:

self-control requires attention and effort. Another way of saying this is that

controlling thoughts and behaviors is one of the tasks that System 2

performs. A series of surprising experiments by the psychologist Roy Baumeister

and his colleagues has shown conclusively that all variants of voluntary

effort—cognitive, emotional, or physical—draw at least partly on a shared

pool of mental energy. Their experiments involve successive rather than

simultaneous tasks.

Baumeister’s group has repeatedly found that an effort of will or self-control is tiring; if you have had to force yourself to do something, you are

less willing or less able to exert self-control when the next challenge comes

around. The phenomenon has been named ego depletion. In a typical

demonstration, participants who are instructed to stifle their

emotional reaction to an emotionally charged film will later perform poorly

on a test of physical stamina—how long they can maintain a strong grip on

a dynamometer in spite of increasing discomfort. The emotional effort in

the first phase of the experiment reduces the ability to withstand the pain of

sustained muscle contraction, and ego-depleted people therefore

succumb more quickly to the urge to quit. In another experiment, people

are first depleted by a task in which they eat virtuous foods such as

radishes and celery while resisting the temptation to indulge in chocolate

and rich cookies. Later, these people will give up earlier than normal when

faced with a difficult cognitive task.

The list of situations and tasks that are now known to deplete self-control

is long and varied. All involve conflict and the need to suppress a natural

tendency. They include:

avoiding the thought of white bears

inhibiting the emotional response to a stirring film


making a series of choices that involve conflict

trying to impress others

responding kindly to a partner’s bad behavior

interacting with a person of a different race (for prejudiced

individuals)

The list of indications of depletion is also highly diverse:

deviating from one’s diet

overspending on impulsive purchases

reacting aggressively to provocation

persisting less time in a handgrip task

performing poorly in cognitive tasks and logical decision making

The evidence is persuasive: activities that impose high demands on System 2 require self-control, and the exertion of self-control is depleting

and unpleasant. Unlike cognitive load, ego depletion is at least in part a

loss of motivation. After exerting self-control in one task, you do not feel

like making an effort in another, although you could do it if you really had to.

In several experiments, people were able to resist the effects of ego

depletion when given a strong incentive to do so. In contrast, increasing

effort is not an option when you must keep six digits in short-term memory while performing a task. Ego depletion is not the same mental state as

cognitive busyness.

The most surprising discovery made by Baumeister’s group shows, as

he puts it, that the idea of mental energy is more than a mere metaphor.

The nervous system consumes more glucose than most other parts of the

body, and effortful mental activity appears to be especially expensive in the

currency of glucose. When you are actively involved in difficult cognitive

reasoning or engaged in a task that requires self-control, your blood

glucose level drops. The effect is analogous to a runner who draws down

glucose stored in her muscles during a sprint. The bold implication of this

idea is that the effects of ego depletion could be undone by ingesting

glucose, and Baumeister and his colleagues have confirmed this

hypothesis in several experiments.

Volunteers in one of their studies watched a short silent film of a woman

being interviewed and were asked to interpret her body language. While

they were performing the task, a series of words crossed the screen in

slow succession. The participants were specifically instructed to ignore the words, and if they found their attention drawn away they had to refocus their

concentration on the woman’s behavior. This act of self-control was known

to cause ego depletion. All the volunteers drank some lemonade before


participating in a second task. The lemonade was sweetened with glucose

for half of them and with Splenda for the others. Then all participants were

given a task in which they needed to overcome an intuitive response to get

the correct answer. Intuitive errors are normally much more frequent among

ego-depleted people, and the drinkers of Splenda showed the expected

depletion effect. On the other hand, the glucose drinkers were not

depleted. Restoring the level of available sugar in the brain had prevented

the deterioration of performance. It will take some time and much further

research to establish whether the tasks that cause glucose-depletion also

cause the momentary arousal that is reflected in increases of pupil size

and heart rate. A disturbing demonstration of depletion effects in judgment was recently

reported in the Proceedings of the National Academy of Sciences. The

unwitting participants in the study were eight parole judges in Israel. They

spend entire days reviewing applications for parole. The cases are

presented in random order, and the judges spend little time on each one,

an average of 6 minutes. (The default decision is denial of parole; only

35% of requests are approved. The exact time of each decision is

recorded, and the times of the judges’ three food breaks—morning break,

lunch, and afternoon break—during the day are recorded as well.) The

authors of the study plotted the proportion of approved requests against

the time since the last food break. The proportion spikes after each meal, when about 65% of requests are granted. During the two hours or so until

the judges’ next feeding, the approval rate drops steadily, to about zero just

before the meal. As you might expect, this is an unwelcome result and the

authors carefully checked many alternative explanations. The best possible

account of the data provides bad news: tired and hungry judges tend to fall

back on the easier default position of denying requests for parole. Both

fatigue and hunger probably play a role.

The Lazy System 2

One of the main functions of System 2 is to monitor and control thoughts

and actions “suggested” by System 1, allowing some to be expressed

directly in behavior and suppressing or modifying others.

For an example, here is a simple puzzle. Do not try to solve it but listen

to your intuition:

A bat and ball cost $1.10.

The bat costs one dollar more than the ball.

How much does the ball cost?


A number came to your mind. The number, of course, is 10: 10¢. The

distinctive mark of this easy puzzle is that it evokes an answer that is

intuitive, appealing, and wrong. Do the math, and you will see. If the ball

costs 10¢, then the total cost will be $1.20 (10¢ for the ball and $1.10 for

the bat), not $1.10. The correct answer is 5¢. It is safe to assume

that the intuitive answer also came to the mind of those who ended up with

the correct number—they somehow managed to resist the intuition.
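Here, for the record, is the check that System 2 could have performed, written as a minimal Python sketch (my illustration; the puzzle itself asks only for mental arithmetic). Working in cents keeps the numbers exact:

    # Search for the ball price (in cents) that satisfies both conditions.
    for ball in range(0, 111):
        bat = ball + 100                  # the bat costs one dollar more
        if ball + bat == 110:             # together they cost $1.10
            print(ball, bat)              # prints: 5 105

Only 5¢ survives the check: a 10¢ ball would force a $1.10 bat and a $1.20 total.

Shane Frederick and I worked together on a theory of judgment based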

on two systems, and he used the bat-and-ball puzzle to study a central

question: How closely does System 2 monitor the suggestions of System

1? His reasoning was that we know a significant fact about anyone who

says that the ball costs 10¢: that person did not actively check whether the

answer was correct, and her System 2 endorsed an intuitive answer that it

could have rejected with a small investment of effort. Furthermore, we also

know that the people who give the intuitive answer have missed an obvious

social cue; they should have wondered why anyone would include in a

questionnaire a puzzle with such an obvious answer. A failure to check is

remarkable because the cost of checking is so low: a few seconds of mental work (the problem is moderately difficult), with slightly tensed muscles and dilated pupils, could avoid an embarrassing mistake. People who say 10¢ appear to be ardent followers of the law of least effort. People who avoid that answer appear to have more active minds.

Many thousands of university students have answered the bat-and-ball

puzzle, and the results are shocking. More than 50% of students at

Harvard, MIT, and Princeton gave the intuitive—incorrect—answer. At

less selective universities, the rate of demonstrable failure to check was in

excess of 80%. The bat-and-ball problem is our first encounter with an

observation that will be a recurrent theme of this book: many people are

overconfident, prone to place too much faith in their intuitions. They

apparently find cognitive effort at least mildly unpleasant and avoid it as much as possible.

Now I will show you a logical argument—two premises and a conclusion.

Try to determine, as quickly as you can, if the argument is logically valid. Does the conclusion follow from the premises?

All roses are flowers.

Some flowers fade quickly.

Therefore some roses fade quickly.

A large majority of college students endorse this syllogism as valid. In fact

the argument is flawed, because it is possible that there are no roses

among the flowers that fade quickly. Just as in the bat-and-ball problem, a


plausible answer comes to mind immediately. Overriding it requires hard work—the insistent idea that “it’s true, it’s true!” makes it difficult to check

the logic, and most people do not take the trouble to think through the

problem.
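One way to make the flaw concrete is to construct the counterexample explicitly. The sketch below (mine, with made-up category members) models the categories as Python sets; both premises come out true while the conclusion comes out false, which is exactly what it means for the argument to be invalid:

    # Counterexample: suppose the only quick-fading flowers are tulips.
    roses = {"rose"}
    quick_faders = {"tulip"}              # some flowers fade quickly...
    flowers = roses | quick_faders        # ...and all roses are flowers

    premise_1 = roses <= flowers                  # all roses are flowers: True
    premise_2 = bool(flowers & quick_faders)      # some flowers fade quickly: True
    conclusion = bool(roses & quick_faders)       # some roses fade quickly: False
    print(premise_1, premise_2, conclusion)       # True True False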

This experiment has discouraging implications for reasoning in everyday

life. It suggests that when people believe a conclusion is true, they are also

very likely to believe arguments that appear to support it, even when these

arguments are unsound. If System 1 is involved, the conclusion comes first

and the arguments follow. Next, consider the following question and answer it quickly before

reading on:

How many murders occur in the state of Michigan in one year?

The question, which was also devised by Shane Frederick, is again a

challenge to System 2. The “trick” is whether the respondent will remember

that Detroit, a high-crime city, is in Michigan. College students

in the United States know this fact and will correctly identify Detroit as the

largest city in Michigan. But knowledge of a fact is not all-or-none. Facts

that we know do not always come to mind when we need them. People who remember that Detroit is in Michigan give higher estimates of the murder rate in the state than people who do not, but a majority of

Frederick’s respondents did not think of the city when questioned about

the state. Indeed, the average guess by people who were asked about

Michigan is lower than the guesses of a similar group who were asked

about the murder rate in Detroit. Blame for a failure to think of Detroit can be laid on both System 1 and

System 2. Whether the city comes to mind when the state is mentioned

depends in part on the automatic function of memory. People differ in this

respect. The representation of the state of Michigan is very detailed in

some people’s minds: residents of the state are more likely to retrieve many facts about it than people who live elsewhere; geography buffs will

retrieve more than others who specialize in baseball statistics; more

intelligent individuals are more likely than others to have rich

representations of most things. Intelligence is not only the ability to reason;

it is also the ability to find relevant material in memory and to deploy

attention when needed. Memory function is an attribute of System 1. However, everyone has the option of slowing down to conduct an active

search of memory for all possibly relevant facts—just as they could slow

down to check the intuitive answer in the bat-and-ball problem. The extent

of deliberate checking and search is a characteristic of System 2, which

varies among individuals.


The bat-and-ball problem, the flowers syllogism, and the Michigan/Detroit problem have something in common. Failing these minitests appears to be, at least to some extent, a matter of insufficient motivation, not trying hard enough. Anyone who can be admitted to a good

university is certainly able to reason through the first two questions and to

reflect about Michigan long enough to remember the major city in that state

and its crime problem. These students can solve much more difficult

problems when they are not tempted to accept a superficially plausible

answer that comes readily to mind. The ease with which they are satisfied

enough to stop thinking is rather troubling. “Lazy” is a harsh judgment about

the self-monitoring of these young people and their System 2, but it does

not seem to be unfair. Those who avoid the sin of intellectual sloth could be

called “engaged.” They are more alert, more intellectually active, less willing to be satisfied with superficially attractive answers, more skeptical

about their intuitions. The psychologist Keith Stanovich would call them

more rational.

Intelligence, Control, Rationality

Researchers have applied diverse methods to examine the connection

between thinking and self-control. Some have addressed it by asking the

correlation question: If people were ranked by their self-control and by their

cognitive aptitude, would individuals have similar positions in the two

rankings?

In one of the most famous experiments in the history of psychology, Walter Mischel and his students exposed four-year-old children to a cruel

dilemma. They were given a choice between a small reward (one Oreo), which they could have at any time, or a larger reward (two cookies) for which they had to wait 15 minutes under difficult conditions. They were to

remain alone in a room, facing a desk with two objects: a single cookie

and a bell that the child could ring at any time to call in the experimenter

and receive the one cookie. As the experiment was

described: “There were no toys, books, pictures, or other potentially

distracting items in the room. The experimenter left the room and did not

return until 15 min had passed or the child had rung the bell, eaten the

rewards, stood up, or shown any signs of distress.”

The children were watched through a one-way mirror, and the film that

shows their behavior during the waiting time always has the audience

roaring in laughter. About half the children managed the feat of waiting for

15 minutes, mainly by keeping their attention away from the tempting

reward. Ten or fifteen years later, a large gap had opened between those


who had resisted temptation and those who had not. The resisters had

higher measures of executive control in cognitive tasks, and especially the

ability to reallocate their attention effectively. As young adults, they were

less likely to take drugs. A significant difference in intellectual aptitude

emerged: the children who had shown more self-control as four-year-olds

had substantially higher scores on tests of intelligence. A team of researchers at the University of Oregon explored the link

between cognitive control and intelligence in several ways, including an

attempt to raise intelligence by improving the control of attention. During

five 40-minute sessions, they exposed children aged four to six to various

computer games especially designed to demand attention and control. In

one of the exercises, the children used a joystick to track a cartoon cat and move it to a grassy area while avoiding a muddy area. The grassy areas

gradually shrank and the muddy area expanded, requiring progressively more precise control. The testers found that training attention not only

improved executive control; scores on nonverbal tests of intelligence also

improved and the improvement was maintained for several months. Other

research by the same group identified specific genes that are involved in

the control of attention, showed that parenting techniques also affected this

ability, and demonstrated a close connection between the children’s ability

to control their attention and their ability to control their emotions. Shane Frederick constructed a Cognitive Reflection Test, which

consists of the bat-and-ball problem and two other questions, chosen

because they also invite an intuitive answer that is both compelling and wrong (the questions are shown here). He went on to study the

characteristics of students who score very low on this test—the supervisory

function of System 2 is weak in these people—and found that they are

prone to answer questions with the first idea that comes to mind and

unwilling to invest the effort needed to check their intuitions. Individuals who

uncritically follow their intuitions about puzzles are also prone to accept

other suggestions from System 1. In particular, they are impulsive,

impatient, and keen to receive immediate gratification. For example, 63%

of the intuitive respondents say they would prefer to get $3,400 this month

rather than $3,800 next month. Only 37% of those who solve all three

puzzles correctly have the same shortsighted preference for receiving a

smaller amount immediately. When asked how much they will pay to get

overnight delivery of a book they have ordered, the low scorers on the Cognitive Reflection Test are willing to pay twice as much as the high

scorers. Frederick’s findings suggest that the characters of our

psychodrama have different “personalities.” System 1 is impulsive and

intuitive; System 2 is capable of reasoning, and it is cautious, but at least

for some people it is also lazy. We recognize related differences among


individuals: some people are more like their System 2; others are closer to

their System 1. This simple test has emerged as one of the better

predictors of lazy thinking.

Keith Stanovich and his longtime collaborator Richard West originally

introduced the terms System 1 and System 2 (they now prefer to speak of

Type 1 and Type 2 processes). Stanovich and his colleagues have spent

decades studying differences among individuals in the kinds of problems with which this book is concerned. They have asked one basic question in many different ways: What makes some people more susceptible than

others to biases of judgment? Stanovich published his conclusions in a

book titled Rationality and the Reflective Mind, which offers a bold and

distinctive approach to the topic of this chapter. He draws a sharp

distinction between two parts of System 2—indeed, the distinction is so

sharp that he calls them separate “minds.” One of these minds (he calls it

algorithmic) deals with slow thinking and demanding computation. Some

people are better than others in these tasks of brain power—they are the

individuals who excel in intelligence tests and are able to switch from one

task to another quickly and efficiently. However, Stanovich argues that high

intelligence does not make people immune to biases. Another ability is

involved, which he labels rationality. Stanovich’s concept of a rational

person is similar to what I earlier labeled “engaged.” The core of his

argument is that rationality should be distinguished from intelligence. In

his view, superficial or “lazy” thinking is a flaw in the reflective mind, a

failure of rationality. This is an attractive and thought-provoking idea. In

support of it, Stanovich and his colleagues have found that the bat-and-ball

question and others like it are somewhat better indicators of our

susceptibility to cognitive errors than are conventional measures of

intelligence, such as IQ tests. Time will tell whether the distinction between

intelligence and rationality can lead to new discoveries.

Speaking of Control

“She did not have to struggle to stay on task for hours. She was in

a state of flow.”

“His ego was depleted after a long day of meetings. So he just

turned to standard operating procedures instead of thinking

through the problem.”


“He didn’t bother to check whether what he said made sense. Does he usually have a lazy System 2 or was he unusually tired?”

“Unfortunately, she tends to say the first thing that comes into her mind. She probably also has trouble delaying gratification. Weak

System 2.”


The Associative Machine

To begin your exploration of the surprising workings of System 1, look at

the following words:

Bananas Vomit

A lot happened to you during the last second or two. You experienced

some unpleasant images and memories. Your face twisted slightly in an

expression of disgust, and you may have pushed this book imperceptibly

farther away. Your heart rate increased, the hair on your arms rose a little,

and your sweat glands were activated. In short, you responded to the

disgusting word with an attenuated version of how you would react to the

actual event. All of this was completely automatic, beyond your control.

There was no particular reason to do so, but your mind automatically

assumed a temporal sequence and a causal connection between the words bananas and vomit, forming a sketchy scenario in which bananas

caused the sickness. As a result, you are experiencing a temporary

aversion to bananas (don’t worry, it will pass). The state of your memory

has changed in other ways: you are now unusually ready to recognize and

respond to objects and concepts associated with “vomit,” such as sick,

stink, or nausea, and words associated with “bananas,” such as yellow and

fruit, and perhaps apple and berries. Vomiting normally occurs in specific contexts, such as hangovers and

indigestion. You would also be unusually ready to recognize words

associated with other causes of the same unfortunate outcome.

Furthermore, your System 1 noticed the fact that the juxtaposition of the

two words is uncommon; you probably never encountered it before. You

experienced mild surprise.

This complex constellation of responses occurred quickly, automatically,

and effortlessly. You did not will it and you could not stop it. It was an

operation of System 1. The events that took place as a result of your

seeing the words happened by a process called associative activation:

ideas that have been evoked trigger many other ideas, in a spreading

cascade of activity in your brain. The essential feature of this complex set

of mental events is its coherence. Each element is connected, and each

supports and strengthens the others. The word evokes memories, which

evoke emotions, which in turn evoke facial expressions and other

reactions, such as a general tensing up and an avoidance tendency. The


facial expression and the avoidance motion intensify the feelings to which

they are linked, and the feelings in turn reinforce compatible ideas. All this

happens quickly and all at once, yielding a self-reinforcing pattern of

cognitive, emotional, and physical responses that is both diverse and

integrated—it has been called associatively coherent.

In a second or so you accomplished, automatically and unconsciously, a

remarkable feat. Starting from a completely unexpected event, your System 1 made as much sense as possible of the situation—two simple words, oddly juxtaposed—by linking the words in a causal story; it

evaluated the possible threat (mild to moderate) and created a context for

future developments by preparing you for events that had just become more likely; it also created a context for the current event by evaluating how

surprising it was. You ended up as informed about the past and as

prepared for the future as you could be. An odd feature of what happened is that your System 1 treated the mere

conjunction of two words as representations of reality. Your body reacted in

an attenuated replica of a reaction to the real thing, and the emotional

response and physical recoil were part of the interpretation of the event. As

cognitive scientists have emphasized in recent years, cognition is

embodied; you think with your body, not only with your brain.

The mechanism that causes these mental events has been known for a

long time: it is the association of ideas. We all understand from

experience that ideas follow each other in our conscious mind in a fairly

orderly way. The British philosophers of the seventeenth and eighteenth

centuries searched for the rules that explain such sequences. In An Enquiry Concerning Human Understanding, published in 1748, the Scottish philosopher David Hume reduced the principles of association to

three: resemblance, contiguity in time and place, and causality. Our

concept of association has changed radically since Hume’s days, but his

three principles still provide a good start.

I will adopt an expansive view of what an idea is. It can be concrete or

abstract, and it can be expressed in many ways: as a verb, as a noun, as

an adjective, or as a clenched fist. Psychologists think of ideas as nodes in

a vast network, called associative memory, in which each idea is linked to many others. There are different types of links: causes are linked to their

effects (virus → cold); things to their properties (lime → green); things to

the categories to which they belong (banana → fruit). One way we have

advanced beyond Hume is that we no longer think of the mind as going

through a sequence of conscious ideas, one at a time. In the current view

of how associative memory works, a great deal happens at once. An idea

that has been activated does not merely evoke one other idea. It activates


many ideas, which in turn activate others. Furthermore, only a few of the

activated ideas will register in consciousness; most of the work of

associative thinking is silent, hidden from our conscious selves. The notion

that we have limited access to the workings of our minds is difficult to

accept because, naturally, it is alien to our experience, but it is true: you

know far less about yourself than you feel you do.

The Marvels of Priming

As is common in science, the first big breakthrough in our understanding of

the mechanism of association was an improvement in a method of measurement. Until a few decades ago, the only way to study associations was to ask many people questions such as, “What is the first word that

comes to your mind when you hear the word DAY?” The researchers tallied

the frequency of responses, such as “night,” “sunny,” or “long.” In the 1980s,

psychologists discovered that exposure to a word causes immediate and measurable changes in the ease with which many related words can be

evoked. If you have recently seen or heard the word EAT, you are

temporarily more likely to complete the word fragment SO_P as SOUP

than as SOAP. The opposite would happen, of course, if you had just seen WASH. We call this a priming effect and say that the idea of EAT primes

the idea of SOUP, and that WASH primes SOAP. Priming effects take many forms. If the idea of EAT is currently on your mind (whether or not you are conscious of it), you will be quicker than usual

to recognize the word SOUP when it is spoken in a whisper or presented

in a blurry font. And of course you are primed not only for the idea of soup

but also for a multitude of food-related ideas, including fork, hungry, fat,

diet, and cookie. If for your most recent meal you sat at a wobbly restaurant

table, you will be primed for wobbly as well. Furthermore, the primed ideas

have some ability to prime other ideas, although more weakly. Like ripples

on a pond, activation spreads through a small part of the vast network of

associated ideas. The mapping of these ripples is now one of the most

exciting pursuits in psychological research. Another major advance in our understanding of memory was the

discovery that priming is not restricted to concepts and words. You cannot

know this from conscious experience, of course, but you must accept the

alien idea that your actions and your emotions can be primed by events of which you are not even aware. In an experiment that became an instant

classic, the psychologist John Bargh and his collaborators asked students

at New York University—most aged eighteen to twenty-two—to assemble

four-word sentences from a set of five words (for example, “finds he it


yellow instantly”). For one group of students, half the scrambled sentences

contained words associated with the elderly, such as Florida, forgetful,

bald, gray, or wrinkle. When they had completed that task, the young

participants were sent out to do another experiment in an office down the

hall. That short walk was what the experiment was about. The researchers

unobtrusively measured the time it took people to get from one end of the

corridor to the other. As Bargh had predicted, the young people who had

fashioned a sentence from words with an elderly theme walked down the

hallway significantly more slowly than the others.

The “Florida effect” involves two stages of priming. First, the set of words primes thoughts of old age, though the word old is never mentioned;

second, these thoughts prime a behavior, walking slowly, which is

associated with old age. All this happens without any awareness. When

they were questioned afterward, none of the students reported noticing that

the words had had a common theme, and they all insisted that nothing they

did after the first experiment could have been influenced by the words they

had encountered. The idea of old age had not come to their conscious

awareness, but their actions had changed nevertheless. This remarkable

priming phenomenon—the influencing of an action by the idea—is known

as the ideomotor effect. Although you surely were not aware of it, reading

this paragraph primed you as well. If you had needed to stand up to get a

glass of water, you would have been slightly slower than usual to rise from

your chair—unless you happen to dislike the elderly, in which case

research suggests that you might have been slightly faster than usual!

The ideomotor link also works in reverse. A study conducted in a German university was the mirror image of the early experiment that Bargh

and his colleagues had carried out in New York. Students were asked to walk around a room for 5 minutes at a rate of 30 steps per minute, which was about one-third their normal pace. After this brief experience, the

participants were much quicker to recognize words related to old age,

such as forgetful, old, and lonely. Reciprocal priming effects tend to

produce a coherent reaction: if you were primed to think of old age, you would tend to act old, and acting old would reinforce the thought of old age. Reciprocal links are common in the associative network. For example,

being amused tends to make you smile, and smiling tends to make you

feel amused. Go ahead and take a pencil, and hold it between your teeth

for a few seconds with the eraser pointing to your right and the point to your

left. Now hold the pencil so the point is aimed straight in front of you, by

pursing your lips around the eraser end. You were probably unaware that

one of these actions forced your face into a frown and the other into a

smile. College students were asked to rate the humor of cartoons from


Gary Larson’s The Far Side while holding a pencil in their mouth. Those who were “smiling” (without any awareness of doing so) found the cartoons

funnier than did those who were “frowning.” In another

experiment, people whose face was shaped into a frown (by squeezing

their eyebrows together) reported an enhanced emotional response to

upsetting pictures—starving children, people arguing, maimed accident

victims. Simple, common gestures can also unconsciously influence our thoughts

and feelings. In one demonstration, people were asked to listen to messages through new headphones. They were told that the purpose of

the experiment was to test the quality of the audio equipment and were

instructed to move their heads repeatedly to check for any distortions of

sound. Half the participants were told to nod their head up and down while

others were told to shake it side to side. The messages they heard were

radio editorials. Those who nodded (a yes gesture) tended to accept the message they heard, but those who shook their head tended to reject it. Again, there was no awareness, just a habitual connection between an

attitude of rejection or acceptance and its common physical expression. You can see why the common admonition to “act calm and kind regardless

of how you feel” is very good advice: you are likely to be rewarded by

actually feeling calm and kind.

Primes That Guide Us

Studies of priming effects have yielded discoveries that threaten our self-image as conscious and autonomous authors of our judgments and our

choices. For instance, most of us think of voting as a deliberate act that

reflects our values and our assessments of policies and is not influenced

by irrelevancies. Our vote should not be affected by the location of the

polling station, for example, but it is. A study of voting patterns in precincts

of Arizona in 2000 showed that the support for propositions to increase the

funding of schools was significantly greater when the polling station was in

a school than when it was in a nearby location. A separate experiment

showed that exposing people to images of classrooms and school lockers

also increased the tendency of participants to support a school initiative.

The effect of the images was larger than the difference between parents

and other voters! The study of priming has come some way from the initial

demonstrations that reminding people of old age makes them walk more

slowly. We now know that the effects of priming can reach into every corner

of our lives. Reminders of money produce some troubling effects. Participants in one


experiment were shown a list of five words from which they were required

to construct a four-word phrase that had a money theme (“high a salary

desk paying” became “a high-paying salary”). Other primes were much more subtle, including the presence of an irrelevant money-related object

in the background, such as a stack of Monopoly money on a table, or a

computer with a screen saver of dollar bills floating in water. Money-primed people become more independent than they would be without the associative trigger. They persevered almost twice as long in

trying to solve a very difficult problem before they asked the experimenter

for help, a crisp demonstration of increased self-reliance. Money-primed

people are also more selfish: they were much less willing to spend time

helping another student who pretended to be confused about an

experimental task. When an experimenter clumsily dropped a bunch of

pencils on the floor, the participants with money (unconsciously) on their mind picked up fewer pencils. In another experiment in the series,

participants were told that they would shortly have a get-acquainted

conversation with another person and were asked to set up two chairs while the experimenter left to retrieve that person. Participants primed by money chose to stay much farther apart than their nonprimed

peers (118 vs. 80 centimeters). Money-primed undergraduates also

showed a greater preference for being alone.

The general theme of these findings is that the idea of money primes

individualism: a reluctance to be involved with others, to depend on others,

or to accept demands from others. The psychologist who has done this

remarkable research, Kathleen Vohs, has been laudably restrained in

discussing the implications of her findings, leaving the task to her readers. Her experiments are profound—her findings suggest that living in a culture

that surrounds us with reminders of money may shape our behavior and

our attitudes in ways that we do not know about and of which we may not

be proud. Some cultures provide frequent reminders of respect, others

constantly remind their members of God, and some societies prime

obedience by large images of the Dear Leader. Can there be any doubt

that the ubiquitous portraits of the national leader in dictatorial societies

not only convey the feeling that “Big Brother Is Watching” but also lead to

an actual reduction in spontaneous thought and independent action?

The evidence of priming studies suggests that reminding people of their mortality increases the appeal of authoritarian ideas, which may become

reassuring in the context of the terror of death. Other experiments have

confirmed Freudian insights about the role of symbols and metaphors in

unconscious associations. For example, consider the ambiguous word

fragments W_ _ H and S_ _ P. People who were recently asked to think of

an action of which they are ashamed are more likely to complete those


fragments as WASH and SOAP and less likely to see WISH and SOUP.

Furthermore, merely thinking about stabbing a coworker in the back leaves

people more inclined to buy soap, disinfectant, or detergent than batteries,

juice, or candy bars. Feeling that one’s soul is stained appears to trigger a

desire to cleanse one’s body, an impulse that has been dubbed the “Lady

Macbeth effect.”

The cleansing is highly specific to the body parts involved in a sin. Participants in an experiment were induced to “lie” to an imaginary person,

either on the phone or in e-mail. In a subsequent test of the desirability of

various products, people who had lied on the phone preferred mouthwash

over soap, and those who had lied in e-mail preferred soap to mouthwash. When I describe priming studies to audiences, the reaction is often

disbelief. This is not a surprise: System 2 believes that it is in charge and

that it knows the reasons for its choices. Questions are probably cropping

up in your mind as well: How is it possible for such trivial manipulations of

the context to have such large effects? Do these experiments demonstrate

that we are completely at the mercy of whatever primes the environment

provides at any moment? Of course not. The effects of the primes are

robust but not necessarily large. Among a hundred voters, only a few

whose initial preferences were uncertain will vote differently about a school

issue if their precinct is located in a school rather than in a church—but a

few percent could tip an election.

The idea you should focus on, however, is that disbelief is not an option.

The results are not made up, nor are they statistical flukes. You have no

choice but to accept that the major conclusions of these studies are true. More important, you must accept that they are true about you. If you had

been exposed to a screen saver of floating dollar bills, you too would likely

have picked up fewer pencils to help a clumsy stranger. You do not believe

that these results apply to you because they correspond to nothing in your

subjective experience. But your subjective experience consists

largely of the story that your System 2 tells itself about what is going on. Priming phenomena arise in System 1, and you have no conscious access

to them.

I conclude with a perfect demonstration of a priming effect, which was

conducted in an office kitchen at a British university. For many years members of that office had paid for the tea or coffee to which they helped

themselves during the day by dropping money into an “honesty box.” A list

of suggested prices was posted. One day a banner poster was displayed

just above the price list, with no warning or explanation. For a period of ten weeks a new image was presented each week, either flowers or eyes that

appeared to be looking directly at the observer. No one commented on the


new decorations, but the contributions to the honesty box changed

significantly. The posters and the amounts that people put into the cash

box (relative to the amount they consumed) are shown in figure 4. They

deserve a close look.

Figure 4

On the first week of the experiment (which you can see at the bottom of the

figure), two wide-open eyes stare at the coffee or tea drinkers, whose

average contribution was 70 pence per liter of milk. On week 2, the poster

shows flowers and average contributions drop to about 15 pence. The

trend continues. On average, the users of the kitchen contributed almost

three times as much in “eye weeks” as they did in “flower weeks.” Evidently, a purely symbolic reminder of being watched prodded people

into improved behavior. As we expect at this point, the effect occurs without any awareness. Do you now believe that you would also fall into the

same pattern?

Some years ago, the psychologist Timothy Wilson wrote a book with the

evocative title Strangers to Ourselves. You have now been introduced to

that stranger in you, which may be in control of much of what you do,

although you rarely have a glimpse of it. System 1 provides the

impressions that often turn into your beliefs, and is the source of the

impulses that often become your choices and your actions. It offers a tacit

interpretation of what happens to you and around you, linking the present


with the recent past and with expectations about the near future. It contains

the model of the world that instantly evaluates events as normal or

surprising. It is the source of your rapid and often precise intuitive

judgments. And it does most of this without your conscious awareness of

its activities. System 1 is also, as we will see in the following chapters, the

origin of many of the systematic errors in your intuitions.

Speaking of Priming

“The sight of all these people in uniforms does not prime

creativity.”

“The world makes much less sense than you think. The

coherence comes mostly from the way your mind works.”

“They were primed to find flaws, and this is exactly what they

found.”

“His System 1 constructed a story, and his System 2 believed it. It

happens to all of us.”

“I made myself smile and I’m actually feeling better!”



Cognitive Ease

Whenever you are conscious, and perhaps even when you are not, multiple

computations are going on in your brain, which maintain and update

current answers to some key questions: Is anything new going on? Is there

a threat? Are things going well? Should my attention be redirected? Is more effort needed for this task? You can think of a cockpit, with a set of

dials that indicate the current values of each of these essential variables.

The assessments are carried out automatically by System 1, and one of

their functions is to determine whether extra effort is required from System

2.

One of the dials measures cognitive ease, and its range is between “Easy” and “Strained.” Easy is a sign that things are going well—no threats, no major news, no need to redirect attention or mobilize effort. Strained indicates that a problem exists, which will require increased mobilization of System 2. Cognitive strain is affected by both the current level of effort and the

presence of unmet demands. The surprise is that a single dial of cognitive

ease is connected to a large network of diverse inputs and outputs. Figure

5 tells the story.

The figure suggests that a sentence that is printed in a clear font, or has

been repeated, or has been primed, will be fluently processed with

cognitive ease. Hearing a speaker when you are in a good mood, or even when you have a pencil stuck crosswise in your mouth to make you “smile,”

also induces cognitive ease. Conversely, you experience cognitive strain when you read instructions in a poor font, or in faint colors, or worded in

complicated language, or when you are in a bad mood, and even when you

frown.

Figure 5. Causes and Consequences of

Cognitive Ease


The various causes of ease or strain have interchangeable effects. When you are in a state of cognitive ease, you are probably in a good mood, like what you see, believe what you hear, trust your intuitions, and

feel that the current situation is comfortably familiar. You are also likely to

be relatively casual and superficial in your thinking. When you feel strained,

you are more likely to be vigilant and suspicious, invest more effort in what

you are doing, feel less comfortable, and make fewer errors, but you also

are less intuitive and less creative than usual.

Illusions of Remembering

The word illusion brings visual illusions to mind, because we are all

familiar with pictures that mislead. But vision is not the only domain of

illusions; memory is also susceptible to them, as is thinking more

generally. David Stenbill, Monica Bigoutski, Shana Tirana. I just made up these names. If you encounter any of them within the next few minutes you are likely to remember where you saw them. You know, and will know for a while, that these are not the names of minor celebrities. But

suppose that a few days from now you are shown a long list of names,

including some minor celebrities and “new” names of people that you have

never heard of; your task will be to check every name of a celebrity in the

list. There is a substantial probability that you will identify David Stenbill as

a well-known person, although you will not (of course) know whether you

encountered his name in the context of movies, sports, or politics. Larry

Jacoby, the psychologist who first demonstrated this memory illusion in the

laboratory, titled his article “Becoming Famous Overnight.” How does this

happen? Start by asking yourself how you know whether or not someone is

famous. In some cases of truly famous people (or of celebrities in an area

you follow), you have a mental file with rich information about a person—

think Albert Einstein, Bono, Hillary Clinton. But you will have no file of

information about David Stenbill if you encounter his name in a few days. All you will have is a sense of familiarity—you have seen this name

somewhere.

Jacoby nicely stated the problem: “The experience of familiarity has a

simple but powerful quality of ‘pastness’ that seems to indicate that it is a

direct reflection of prior experience.” This quality of pastness is an illusion.

The truth is, as Jacoby and many followers have shown, that the name David Stenbill will look familiar when you see it because you will see it more clearly. Words that you have seen before become easier to see


again—you can identify them better than other words when they are shown

very briefly or masked by noise, and you will be quicker (by a few

hundredths of a second) to read them than to read other words. In short,

you experience greater cognitive ease in perceiving a word you have seen

earlier, and it is this sense of ease that gives you the impression of

familiarity.

Figure 5 suggests a way to test this. Choose a completely new word, make it easier to see, and it will be more likely to have the quality of

pastness. Indeed, a new word is more likely to be recognized as familiar if

it is unconsciously primed by showing it for a few milliseconds just before

the test, or if it is shown in sharper contrast than some other words in the

list. The link also operates in the other direction. Imagine you are shown a

list of words that are more or less out of focus. Some of the words are

severely blurred, others less so, and your task is to identify the words that

are shown more clearly. A word that you have seen recently will appear to

be clearer than unfamiliar words. As figure 5 indicates, the various ways of

inducing cognitive ease or strain are interchangeable; you may not know

precisely what it is that makes things cognitively easy or strained. This is

how the illusion of familiarity comes about.

Illusions of Truth

“New York is a large city in the United States.” “The moon revolves around

Earth.” “A chicken has four legs.” In all these cases, you quickly retrieved a

great deal of related information, almost all pointing one way or another. You knew soon after reading them that the first two statements are true and

the last one is false. Note, however, that the statement “A chicken has

three legs” is more obviously false than “A chicken has four legs.” Your

associative machinery slows the judgment of the latter sentence by

delivering the fact that many animals have four legs, and perhaps also that

supermarkets often sell chicken legs in packages of four. System 2 was involved in sifting that information, perhaps raising the issue

of whether the question about New York was too easy, or checking the meaning of revolves.

Think of the last time you took a driving test. Is it true that you need a

special license to drive a vehicle that weighs more than three tons?

Perhaps you studied seriously and can remember the side of the page on which the answer appeared, as well as the logic behind it. This is certainly

not how I passed driving tests when I moved to a new state. My practice was to read the booklet of rules quickly once and hope for the best. I knew

some of the answers from the experience of driving for a long time. But


there were questions where no good answer came to mind, where all I had

to go by was cognitive ease. If the answer felt familiar, I assumed that it was probably true. If it looked new (or improbably extreme), I rejected it.

The impression of familiarity is produced by System 1, and System 2

relies on that impression for a true/false judgment.

The lesson of figure 5 is that predictable illusions inevitably occur if a

judgment is based on an impression of cognitive ease or strain. Anything

that makes it easier for the associative machine to run smoothly will also

bias beliefs. A reliable way to make people believe in falsehoods is

frequent repetition, because familiarity is not easily distinguished from

truth. Authoritarian institutions and marketers have always known this fact. But it was psychologists who discovered that you do not have to repeat the

entire statement of a fact or idea to make it appear true. People who were

repeatedly exposed to the phrase “the body temperature of a chicken” were more likely to accept as true the statement that “the body temperature

of a chicken is 144°” (or any other arbitrary number). The familiarity of one

phrase in the statement sufficed to make the whole statement feel familiar,

and therefore true. If you cannot remember the source of a statement, and

have no way to relate it to other things you know, you have no option but to

go with the sense of cognitive ease.

How to Write a Persuasive Message

Suppose you must write a message that you want the recipients to believe. Of course, your message will be true, but that is not necessarily enough for

people to believe that it is true. It is entirely legitimate for you to enlist

cognitive ease to work in your favor, and studies of truth illusions provide

specific suggestions that may help you achieve this goal.

The general principle is that anything you can do to reduce cognitive

strain will help, so you should first maximize legibility. Compare these two

statements:

Adolf Hitler was born in 1892.

Adolf Hitler was born in 1887.

Both are false (Hitler was born in 1889), but experiments have shown that

the first is more likely to be believed. More advice: if your message is to be

printed, use high-quality paper to maximize the contrast between

characters and their background. If you use color, you are more likely to be

believed if your text is printed in bright blue or red than in middling shades

of green, yellow, or pale blue.


If you care about being thought credible and intelligent, do not use

complex language where simpler language will do. My Princeton colleague Danny Oppenheimer refuted a myth prevalent among

undergraduates about the vocabulary that professors find most impressive.

In an article titled “Consequences of Erudite Vernacular Utilized

Irrespective of Necessity: Problems with Using Long Words Needlessly,”

he showed that couching familiar ideas in pretentious language is taken as

a sign of poor intelligence and low credibility.

In addition to making your message simple, try to make it memorable. Put your ideas in verse if you can; they will be more likely to be taken as

truth. Participants in a much cited experiment read dozens of unfamiliar

aphorisms, such as:

Woes unite foes.

Little strokes will tumble great oaks.
A fault confessed is half redressed.

Other students read some of the same proverbs transformed into

nonrhyming versions:

Woes unite enemies.

Little strokes will tumble great trees.
A fault admitted is half redressed.

The aphorisms were judged more insightful when they rhymed than when

they did not.

Finally, if you quote a source, choose one with a name that is easy to

pronounce. Participants in an experiment were asked to evaluate the

prospects of fictitious Turkish companies on the basis of reports from two

brokerage firms. For each stock, one of the reports came from an easily

pronounced name (e.g., Artan) and the other report came from a firm with

an unfortunate name (e.g., Taahhut). The reports sometimes disagreed.

The best procedure for the observers would have been to average the two

reports, but this is not what they did. They gave much more weight to the

report from Artan than to the report from Taahhut. Remember that System

2 is lazy and that mental effort is aversive. If possible, the recipients of your message want to stay away from anything that reminds them of effort,

including a source with a complicated name.

All this is very good advice, but we should not get carried away. High-quality paper, bright colors, and rhyming or simple language will not be much help if your message is obviously nonsensical, or if it contradicts

facts that your audience knows to be true. The psychologists who do these


If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

24 days OR 47 days

The correct answer is 47 days: because the patch doubles in size every day, it covers half the lake exactly one day before it covers all of it.

The experimenters recruited 40 Princeton students to take the CRT. Half of them saw the puzzles in a small font in washed-out gray print. The

puzzles were legible, but the font induced cognitive strain. The results tell a

clear story: 90% of the students who saw the CRT in normal font made at

least one mistake in the test, but the proportion dropped to 35% when the

font was barely legible. You read this correctly: performance was better with the bad font. Cognitive strain, whatever its source, mobilizes System

2, which is more likely to reject the intuitive answer suggested by System

1.

The Pleasure of Cognitive Ease

An article titled “Mind at Ease Puts a Smile on the Face” describes an

experiment in which participants were briefly shown pictures of objects. Some of these pictures were made easier to recognize by showing the

outline of the object just before the complete image was shown, so briefly

that the contours were never noticed. Emotional reactions were measured

by recording electrical impulses from facial muscles, registering changes

of expression that are too slight and too brief to be detectable by

observers. As expected, people showed a faint smile and relaxed brows when the pictures were easier to see. It appears to be a feature of System

1 that cognitive ease is associated with good feelings. As expected, easily pronounced words evoke a favorable attitude. Companies with pronounceable names do better than others for

the first week after the stock is issued, though the effect disappears over

time. Stocks with pronounceable trading symbols (like KAR or LUNMOO)

outperform those with tongue-twisting tickers like PXG or RDO—and they

appear to retain a small advantage over time. A study conducted in Switzerland found that investors believe that stocks with fluent names like Emmi, Swissfirst, and Comet will earn higher returns than those with clunky

labels like Geberit and Ypsomed. As we saw in figure 5, repetition induces cognitive ease and a

comforting feeling of familiarity. The famed psychologist Robert Zajonc

dedicated much of his career to the study of the link between the repetition

of an arbitrary stimulus and the mild affection that people eventually have

for it. Zajonc called it the mere exposure effect. A demonstration


conducted in the student newspapers of the University of Michigan and of

Michigan State University is one of my favorite experiments. For a period

of some weeks, an ad-like box appeared on the front page of the paper, which contained one of the following Turkish (or Turkish-sounding) words:

kadirga, saricik, biwonjni, nansoma, and iktitaf. The frequency with which

the words were repeated varied: one of the words was shown only once,

the others appeared on two, five, ten, or twenty-five separate occasions.

(The words that were presented most often in one of the university papers were the least frequent in the other.) No explanation was offered, and

readers’ queries were answered by the statement that “the purchaser of

the display wished for anonymity.” When the mysterious series of ads ended, the investigators sent

questionnaires to the university communities, asking for impressions of whether each of the words “means something ‘good’ or something ‘bad.’”

The results were spectacular: the words that were presented more

frequently were rated much more favorably than the words that had been

shown only once or twice. The finding has been confirmed in many

experiments, using Chinese ideographs, faces, and randomly shaped

polygons.

The mere exposure effect does not depend on the conscious

experience of familiarity. In fact, the effect does not depend on

consciousness at all: it occurs even when the repeated words or pictures

are shown so quickly that the observers never become aware of having

seen them. They still end up liking the words or pictures that were

presented more frequently. As should be clear by now, System 1 can

respond to impressions of events of which System 2 is unaware. Indeed,

the mere exposure effect is actually stronger for stimuli that the individual

never consciously sees.

Zajonc argued that the effect of repetition on liking is a profoundly

important biological fact, and that it extends to all animals. To survive in a

frequently dangerous world, an organism should react cautiously to a novel

stimulus, with withdrawal and fear. Survival prospects are poor for an

animal that is not suspicious of novelty. However, it is also adaptive for the

initial caution to fade if the stimulus is actually safe. The mere exposure

effect occurs, Zajonc claimed, because the repeated exposure of a

stimulus is followed by nothing bad. Such a stimulus will eventually become

a safety signal, and safety is good. Obviously, this argument is not

restricted to humans. To make that point, one of Zajonc’s associates

exposed two sets of fertile chicken eggs to different tones. After they

hatched, the chicks consistently emitted fewer distress calls when exposed

to the tone they had heard while inhabiting the shell.

Zajonc offered an eloquent summary of his program of research:

The consequences of repeated exposures benefit the organism

in its relations to the immediate animate and inanimate

environment. They allow the organism to distinguish objects and

habitats that are safe from those that are not, and they are the most primitive basis of social attachments. Therefore, they form

the basis for social organization and cohesion—the basic

sources of psychological and social stability.

The link between positive emotion and cognitive ease in System 1 has a

long evolutionary history.

Ease, Mood, and Intuition

Around 1960, a young psychologist named Sarnoff Mednick thought he

had identified the essence of creativity. His idea was as simple as it was

powerful: creativity is associative memory that works exceptionally well. He made up a test, called the Remote Association Test (RAT), which is still

often used in studies of creativity.

For an easy example, consider the following three words:

cottage Swiss cake

Can you think of a word that is associated with all three? You probably worked out that the answer is cheese. Now try this:

dive light rocket

This problem is much harder, but it has a unique correct answer, which

every speaker of English recognizes, although less than 20% of a sample

of students found it within 15 seconds. The answer is sky. Of course, not

every triad of words has a solution. For example, the words dream, ball,

book do not have a shared association that everyone will recognize as

valid. Several teams of German psychologists that have studied the RAT in

recent years have come up with remarkable discoveries about cognitive

ease. One of the teams raised two questions: Can people feel that a triad

of words has a solution before they know what the solution is? How does mood influence performance in this task? To find out, they first made some

of their subjects happy and others sad, by asking them to think for several minutes about happy or sad episodes in their lives. Then they presented

these subjects with a series of triads, half of them linked (such as dive,

light, rocket) and half unlinked (such as dream, ball, book), and instructed

them to press one of two keys very quickly to indicate their guess about whether the triad was linked. The time allowed for this guess, 2 seconds,


was much too short for the actual solution to come to anyone’s mind.

The first surprise is that people’s guesses are much more accurate than

they would be by chance. I find this astonishing. A sense of cognitive ease

is apparently generated by a very faint signal from the associative machine, which “knows” that the three words are coherent (share an

association) long before the association is retrieved. The role of cognitive

ease in the judgment was confirmed experimentally by another German

team: manipulations that increase cognitive ease (priming, a clear font,

pre-exposing words) all increase the tendency to see the words as linked. Another remarkable discovery is the powerful effect of mood on this

intuitive performance. The experimenters computed an

“intuition index” to measure accuracy. They found that putting the

participants in a good mood before the test by having them think happy

thoughts more than doubled accuracy. An even more striking result is that

unhappy subjects were completely incapable of performing the intuitive

task accurately; their guesses were no better than random. Mood evidently

affects the operation of System 1: when we are uncomfortable and

unhappy, we lose touch with our intuition.

These findings add to the growing evidence that good mood, intuition,

creativity, gullibility, and increased reliance on System 1 form a cluster. At

the other pole, sadness, vigilance, suspicion, an analytic approach, and

increased effort also go together. A happy mood loosens the control of

System 2 over performance: when in a good mood, people become more

intuitive and more creative but also less vigilant and more prone to logical

errors. Here again, as in the mere exposure effect, the connection makes

biological sense. A good mood is a signal that things are generally going well, the environment is safe, and it is all right to let one’s guard down. A

bad mood indicates that things are not going very well, there may be a

threat, and vigilance is required. Cognitive ease is both a cause and a

consequence of a pleasant feeling.

The Remote Association Test has more to tell us about the link between

cognitive ease and positive affect. Briefly consider two triads of words:

sleep mail switch

salt deep foam

You could not know it, of course, but measurements of electrical activity in

the muscles of your face would probably have shown a slight smile when

you read the second triad, which is coherent (sea is the solution). This

smiling reaction to coherence appears in subjects who are told nothing

about common associates; they are merely shown a vertically arranged

triad of words and instructed to press the space bar after they have read it.

The impression of cognitive ease that comes with the presentation of a

coherent triad appears to be mildly pleasurable in itself.


The evidence that we have about good feelings, cognitive ease, and the

intuition of coherence is, as scientists say, correlational but not necessarily

causal. Cognitive ease and smiling occur together, but do the good

feelings actually lead to intuitions of coherence? Yes, they do. The proof

comes from a clever experimental approach that has become increasingly

popular. Some participants were given a cover story that provided an

alternative interpretation for their good feeling: they were told about music

played in their earphones that “previous research showed that this music

influences the emotional reactions of individuals.” This story completely

eliminates the intuition of coherence. The finding shows that the brief

emotional response that follows the presentation of a triad of words

(pleasant if the triad is coherent, unpleasant otherwise) is actually the basis

of judgments of coherence. There is nothing here that System 1 cannot do. Emotional changes are now expected, and because they are unsurprising

they are not linked causally to the words.

This is as good as psychological research ever gets, in its combination

of experimental techniques and in its results, which are both robust and

extremely surprising. We have learned a great deal about the automatic workings of System 1 in the last decades. Much of what we now know

would have sounded like science fiction thirty or forty years ago. It was

beyond imagining that bad font influences judgments of truth and improves

cognitive performance, or that an emotional response to the cognitive

ease of a triad of words mediates impressions of coherence. Psychology has come a long way.

Speaking of Cognitive Ease

“Let’s not dismiss their business plan just because the font makes it hard to read.”

“We must be inclined to believe it because it has been repeated

so often, but let’s think it through again.”

“Familiarity breeds liking. This is a mere exposure effect.”

“I’m in a very good mood today, and my System 2 is weaker than

usual. I should be extra careful.”


Norms, Surprises, and Causes

The central characteristics and functions of System 1 and System 2 have

now been introduced, with a more detailed treatment of System 1. Freely mixing metaphors, we have in our head a remarkably powerful computer,

not fast by conventional hardware standards, but able to represent the

structure of our world by various types of associative links in a vast network

of various types of ideas. The spreading of activation in the associative machine is automatic, but we (System 2) have some ability to control the

search of memory, and also to program it so that the detection of an event

in the environment can attract attention. We next go into more detail about the wonders and limitations of what System 1 can do.

Assessing Normality

The main function of System 1 is to maintain and update a model of your

personal world, which represents what is normal in it. The model is

constructed by associations that link ideas of circumstances, events,

actions, and outcomes that co-occur with some regularity, either at the

same time or within a relatively short interval. As these links are formed

and strengthened, the pattern of associated ideas comes to represent the

structure of events in your life, and it determines your interpretation of the

present as well as your expectations of the future. A capacity for surprise is an essential aspect of our mental life, and

surprise itself is the most sensitive indication of how we understand our world and what we expect from it. There are two main varieties of surprise. Some expectations are active and conscious—you know you are waiting

for a particular event to happen. When the hour is near, you may be

expecting the sound of the door as your child returns from school; when the

door opens you expect the sound of a familiar voice. You will be surprised

if an actively expected event does not occur. But there is a much larger

category of events that you expect passively; you don’t wait for them, but

you are not surprised when they happen. These are events that are normal

in a situation, though not sufficiently probable to be actively expected. A single incident may make a recurrence less surprising. Some years

ago, my wife and I were vacationing in a small island

resort on the Great Barrier Reef. There are only forty guest rooms on the

island. When we came to dinner, we were surprised to meet an

acquaintance, a psychologist named Jon. We greeted each other warmly

and commented on the coincidence. Jon left the resort the next day. About

two weeks later, we were in a theater in London. A latecomer sat next to

Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.
Couldn't preview file
There was a problem loading this page.


The Illusion of Validity

System 1 is designed to jump to conclusions from little evidence—and it is

not designed to know the size of its jumps. Because of WYSIATI, only the

evidence at hand counts. Because of confidence by coherence, the

subjective confidence we have in our opinions reflects the coherence of the

story that System 1 and System 2 have constructed. The amount of

evidence and its quality do not count for much, because poor evidence can make a very good story. For some of our most important beliefs we have

no evidence at all, except that people we love and trust hold these beliefs. Considering how little we know, the confidence we have in our beliefs is

preposterous—and it is also essential.

The Illusion of Validity

Many decades ago I spent what seemed like a great deal of time under a

scorching sun, watching groups of sweaty soldiers as they solved a

problem. I was doing my national service in the Israeli Army at the time. I

had completed an undergraduate degree in psychology, and after a year

as an infantry officer was assigned to the army’s Psychology Branch, where one of my occasional duties was to help evaluate candidates for

officer training. We used methods that had been developed by the British

Army in World War II. One test, called the “leaderless group challenge,” was conducted on an

obstacle field. Eight candidates, strangers to each other, with all insignia of

rank removed and only numbered tags to identify them, were instructed to

lift a long log from the ground and haul it to a wall about six feet high. The

entire group had to get to the other side of the wall without the log touching

either the ground or the wall, and without anyone touching the wall. If any of

these things happened, they had to declare it and start again.

There was more than one way to solve the problem. A common solution was for the team to send several men to the other side by crawling over the

pole as it was held at an angle, like a giant fishing rod, by other members

of the group. Or else some soldiers would climb onto someone’s shoulders

and jump across. The last man would then have to jump up at the pole, held

up at an angle by the rest of the group, shinny his way along its length as

the others kept him and the pole suspended in the air, and leap safely to

the other side. Failure was common at this point, which required them to

start all over again. As a colleague and I monitored the exercise, we made note of who took

charge, who tried to lead but was rebuffed, how cooperative each soldier



actual prices is above .90. Why are experts inferior to algorithms? One reason, which Meehl

suspected, is that experts try to be clever, think outside the box, and

consider complex combinations of features in making their predictions. Complexity may work in the odd case, but more often than not it reduces

validity. Simple combinations of features are better. Several studies have

shown that human decision makers are inferior to a prediction formula

even when they are given the score suggested by the formula! They feel

that they can overrule the formula because they have additional information

about the case, but they are wrong more often than not. According to Meehl, there are few circumstances under which it is a good idea to

substitute judgment for a formula. In a famous thought experiment, he

described a formula that predicts whether a particular person will go to the movies tonight and noted that it is proper to disregard the formula if

information is received that the individual broke a leg today. The name

“broken-leg rule” has stuck. The point, of course, is that broken legs are

very rare—as well as decisive. Another reason for the inferiority of expert judgment is that humans are

incorrigibly inconsistent in making summary judgments of complex

information. When asked to evaluate the same information twice, they

frequently give different answers. The extent of the inconsistency is often a matter of real concern. Experienced radiologists who evaluate chest X-rays as “normal” or “abnormal” contradict themselves 20% of the time when they see the same picture on separate occasions. A study of 101

independent auditors who were asked to evaluate the reliability of internal

corporate audits revealed a similar degree of inconsistency. A review of

41 separate studies of the reliability of judgments made by auditors,

pathologists, psychologists, organizational managers, and other

professionals suggests that this level of inconsistency is typical, even when

a case is reevaluated within a few minutes. Unreliable judgments cannot

be valid predictors of anything.

The widespread inconsistency is probably due to the extreme context

dependency of System 1. We know from studies of priming that unnoticed

stimuli in our environment have a substantial influence on our thoughts and

actions. These influences fluctuate from moment to moment. The brief

pleasure of a cool breeze on a hot day may make you slightly more

positive and optimistic about whatever you are evaluating at the time. The

prospects of a convict being granted parole may change significantly

during the time that elapses between successive food breaks in the parole

judges’ schedule. Because you have little direct knowledge of what goes

on in your mind, you will never know that you might have made a different



intuitive skills. If an anesthesiologist says, “I have a feeling something is wrong,” everyone in the operating room should be prepared for an

emergency. Here again, as in the case of subjective confidence, the experts may not

know the limits of their expertise. An experienced psychotherapist knows

that she is skilled in working out what is going on in her patient’s mind and

that she has good intuitions about what the patient will say next. It is

tempting for her to conclude that she can also anticipate how well the

patient will do next year, but this conclusion is not equally justified. Short-term anticipation and long-term forecasting are different tasks, and the

therapist has had adequate opportunity to learn one but not the other. Similarly, a financial expert may have skills in many aspects of his trade

but not in picking stocks, and an expert in the Middle East knows many

things but not the future. The clinical psychologist, the stock picker, and the

pundit do have intuitive skills in some of their tasks, but they have not

learned to identify the situations and the tasks in which intuition will betray

them. The unrecognized limits of professional skill help explain why experts

are often overconfident.

Evaluating Validity

At the end of our journey, Gary Klein and I agreed on a general answer to

our initial question: When can you trust an experienced professional who

claims to have an intuition? Our conclusion was that for the most part it is

possible to distinguish intuitions that are likely to be valid from those that

are likely to be bogus. As in the judgment of whether a work of art is

genuine or a fake, you will usually do better by focusing on its provenance

than by looking at the piece itself. If the environment is sufficiently regular

and if the judge has had a chance to learn its regularities, the associative machinery will recognize situations and generate quick and accurate

predictions and decisions. You can trust someone’s intuitions if these

conditions are met. Unfortunately, associative memory also generates subjectively

compelling intuitions that are false. Anyone who has watched the chess

progress of a talented youngster knows well that skill does not become

perfect all at once, and that on the way to near perfection some mistakes

are made with great confidence. When evaluating expert intuition you

should always consider whether there was an adequate opportunity to

learn the cues, even in a regular environment.

In a less regular, or low-validity, environment, the heuristics of judgment

are invoked. System 1 is often able to produce quick answers to difficult

The Outside View

A few years after my collaboration with Amos began, I convinced some

officials in the Israeli Ministry of Education of the need for a curriculum to

teach judgment and decision making in high schools. The team that I

assembled to design the curriculum and write a textbook for it included

several experienced teachers, some of my psychology students, and

Seymour Fox, then dean of the Hebrew University’s School of Education, who was an expert in curriculum development. After meeting every Friday afternoon for about a year, we had

constructed a detailed outline of the syllabus, had written a couple of

chapters, and had run a few sample lessons in the classroom. We all felt

that we had made good progress. One day, as we were discussing

procedures for estimating uncertain quantities, the idea of conducting an

exercise occurred to me. I asked everyone to write down an estimate of

how long it would take us to submit a finished draft of the textbook to the Ministry of Education. I was following a procedure that we already planned

to incorporate into our curriculum: the proper way to elicit information from

a group is not by starting with a public discussion but by confidentially

collecting each person’s judgment. This procedure makes better use of the

knowledge available to members of the group than the common practice of

open discussion. I collected the estimates and jotted the results on the

blackboard. They were narrowly centered around two years; the low end was one and a half, the high end two and a half years.
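The elicitation rule itself is simple enough to sketch in a few lines of code. The estimates below are hypothetical, standing in for judgments collected privately before any discussion:

from statistics import median

def pool_private_estimates(estimates):
    # Summarize independently collected judgments before discussion,
    # so that no confident voice can anchor the rest of the group.
    return median(estimates), min(estimates), max(estimates)

# Hypothetical completion-time estimates, in years, from seven team members.
mid, low, high = pool_private_estimates([1.5, 1.8, 2.0, 2.0, 2.0, 2.2, 2.5])
print(f"median {mid} years, range {low} to {high}")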

Then I had another idea. I turned to Seymour, our curriculum expert, and

asked whether he could think of other teams similar to ours that had

developed a curriculum from scratch. This was a time when several

pedagogical innovations like “new math” had been introduced, and

Seymour said he could think of quite a few. I then asked whether he knew

the history of these teams in some detail, and it turned out that he was

familiar with several. I asked him to think of these teams when they had made as much progress as we had. How long, from that point, did it take

them to finish their textbook projects?

He fell silent. When he finally spoke, it seemed to me that he was

blushing, embarrassed by his own answer: “You know, I never realized this

before, but in fact not all the teams at a stage comparable to ours ever did

complete their task. A substantial fraction of the teams ended up failing to

finish the job.”

This was worrisome; we had never considered the possibility that we might fail. My anxiety rising, I asked how large he estimated that fraction was. “About 40%,” he answered. By now, a pall of gloom was falling over the room. The next question was obvious: “Those who

finished,” I asked. “How long did it take them?” “I cannot think of any group

that finished in less than seven years,” he replied, “nor any that took more

than ten.”

I grasped at a straw: “When you compare our skills and resources to

those of the other groups, how good are we? How would you rank us in

comparison with these teams?” Seymour did not hesitate long this time.

“We’re below average,” he said, “but not by much.” This came as a

complete surprise to all of us—including Seymour, whose prior estimate

had been well within the optimistic consensus of the group. Until I

prompted him, there was no connection in his mind between his

knowledge of the history of other teams and his forecast of our future. Our state of mind when we heard Seymour is not well described by

stating what we “knew.” Surely all of us “knew” that a minimum of seven

years and a 40% chance of failure was a more plausible forecast of the

fate of our project than the numbers we had written on our slips of paper a

few minutes earlier. But we did not acknowledge what we knew. The new

forecast still seemed unreal, because we could not imagine how it could

take so long to finish a project that looked so manageable. No crystal ball was available to tell us the strange sequence of unlikely events that were in

our future. All we could see was a reasonable plan that should produce a

book in about two years, conflicting with statistics indicating that other

teams had failed or had taken an absurdly long time to complete their mission. What we had heard was base-rate information, from which we

should have inferred a causal story: if so many teams failed, and if those

that succeeded took so long, writing a curriculum was surely much harder

than we had thought. But such an inference would have conflicted with our

direct experience of the good progress we had been making. The

statistics that Seymour provided were treated as base rates normally are—noted and promptly set aside. We should have quit that day. None of us was willing to invest six more

years of work in a project with a 40% chance of failure. Although we must

have sensed that persevering was not reasonable, the warning did not

provide an immediately compelling reason to quit. After a few minutes of

desultory debate, we gathered ourselves together and carried on as if

nothing had happened. The book was eventually completed eight(!) years

later. By that time I was no longer living in Israel and had long since ceased

to be part of the team, which completed the task after many unpredictable

vicissitudes. The initial enthusiasm for the idea in the Ministry of Education

had waned by the time the text was delivered and it was never used.

This embarrassing episode remains one of the most instructive

experiences of my professional life. I eventually learned three lessons from

it. The first was immediately apparent: I had stumbled onto a distinction

between two profoundly different approaches to forecasting, which Amos

and I later labeled the inside view and the outside view. The second lesson was that our initial forecasts of about two years for the completion of the

project exhibited a planning fallacy. Our estimates were closer to a best-case scenario than to a realistic assessment. I was slower to accept the

third lesson, which I call irrational perseverance: the folly we displayed that

day in failing to abandon the project. Facing a choice, we gave up

rationality rather than give up the enterprise.

Drawn to the Inside View

On that long-ago Friday, our curriculum expert made two judgments about

the same problem and arrived at very different answers. The inside view is

the one that all of us, including Seymour, spontaneously adopted to assess

the future of our project. We focused on our specific circumstances and

searched for evidence in our own experiences. We had a sketchy plan: we

knew how many chapters we were going to write, and we had an idea of

how long it had taken us to write the two that we had already done. The more cautious among us probably added a few months to their estimate

as a margin of error. Extrapolating was a mistake. We were forecasting based on the

information in front of us—WYSIATI—but the chapters we wrote first were

probably easier than others, and our commitment to the project was

probably then at its peak. But the main problem was that we failed to allow

for what Donald Rumsfeld famously called the “unknown unknowns.” There was no way for us to foresee, that day, the succession of events that would

cause the project to drag out for so long. The divorces, the illnesses, the

crises of coordination with bureaucracies that delayed the work could not

be anticipated. Such events not only cause the writing of chapters to slow

down, they also produce long periods during which little or no progress is made at all. The same must have been true, of course, for the other teams

that Seymour knew about. The members of those teams were also unable

to imagine the events that would cause them to spend seven years to

finish, or ultimately fail to finish, a project that they evidently had thought was very feasible. Like us, they did not know the odds they were facing.

There are many ways for any plan to fail, and although most of them are too

improbable to be anticipated, the likelihood that something will go wrong

in a big project is high.

The second question I asked Seymour directed his attention away from

us and toward a class of similar cases. Seymour estimated the base rate

of success in that reference class: 40% failure and seven to ten years for

penalize them for failing to anticipate difficulties, and for failing to allow for

difficulties that they could not have anticipated—the unknown unknowns.

Decisions and Errors

That Friday afternoon occurred more than thirty years ago. I often thought

about it and mentioned it in lectures several times each year. Some of my

friends got bored with the story, but I kept drawing new lessons from it. Almost fifteen years after I first reported on the planning fallacy with Amos, I

returned to the topic with Dan Lovallo. Together we sketched a theory of

decision making in which the optimistic bias is a significant source of risk

taking. In the standard rational model of economics, people take risks

because the odds are favorable—they accept some probability of a costly

failure because the probability of success is sufficient. We proposed an

alternative idea. When forecasting the outcomes of risky projects, executives too easily

fall victim to the planning fallacy. In its grip, they make decisions based on

delusional optimism rather than on a rational weighting of gains, losses,

and probabilities. They overestimate benefits and underestimate costs.

They spin scenarios of success while overlooking the potential for mistakes and miscalculations. As a result, they pursue initiatives that are

unlikely to come in on budget or on time or to deliver the expected returns—or even to be completed.

In this view, people often (but not always) take on risky projects because

they are overly optimistic about the odds they face. I will return to this idea

several times in this book—it probably contributes to an explanation of why

people litigate, why they start wars, and why they open small businesses.

Failing a Test

For many years, I thought that the main point of the curriculum story was what I had learned about my friend Seymour: that his best guess about the

future of our project was not informed by what he knew about similar

projects. I came off quite well in my telling of the story, in which I had the

role of clever questioner and astute psychologist. I only recently realized

that I had actually played the roles of chief dunce and inept leader.

The project was my initiative, and it was therefore my responsibility to

ensure that it made sense and that major problems were properly

discussed by the team, but I failed that test. My problem was no longer the

planning fallacy. I was cured of that fallacy as soon as I heard Seymour’s

statistical summary. If pressed, I would have said that our earlier estimates

an equally satisfying answer.

Competition Neglect

It is tempting to explain entrepreneurial optimism by wishful thinking, but

emotion is only part of the story. Cognitive biases play an important role,

notably the System 1 feature WYSIATI.

We focus on our goal, anchor on our plan, and neglect relevant base rates, exposing ourselves to the planning fallacy.

We focus on what we want to do and can do, neglecting the plans and skills of others.

Both in explaining the past and in predicting the future, we focus on the causal role of skill and neglect the role of luck. We are therefore prone to an illusion of control.

We focus on what we know and neglect what we do not know, which makes us overly confident in our beliefs.

The observation that “90% of drivers believe they are better than

average” is a well-established psychological finding that has become part

of the culture, and it often comes up as a prime example of a more general

above-average effect. However, the interpretation of the finding has

changed in recent years, from self-aggrandizement to a cognitive bias. Consider these two questions:

Are you a good driver?

Are you better than average as a driver?

The first question is easy and the answer comes quickly: most drivers say

yes. The second question is much harder and for most respondents almost

impossible to answer seriously and correctly, because it requires an

assessment of the average quality of drivers. At this point in the book it

comes as no surprise that people respond to a difficult question by

answering an easier one. They compare themselves to the average without ever thinking about the average. The evidence for the cognitive

interpretation of the above-average effect is that when people are asked

about a task they find difficult (for many of us this could be “Are you better

than average in starting conversations with strangers?”), they readily rate

themselves as below average. The upshot is that people tend to be overly

optimistic about their relative standing on any activity in which they do moderately well.

I have had several occasions to ask founders and participants in

innovative start-ups a question: To what extent will the outcome of your

effort depend on what you do in your firm? This is evidently an easy

question; the answer comes quickly and in my small sample it has never

been less than 80%. Even when they are not sure they will succeed, these

bold people think their fate is almost entirely in their own hands. They are

surely wrong: the outcome of a start-up depends as much on the

achievements of its competitors and on changes in the market as on its

own efforts. However, WYSIATI plays its part, and entrepreneurs naturally

focus on what they know best—their plans and actions and the most

immediate threats and opportunities, such as the availability of funding.

They know less about their competitors and therefore find it natural to

imagine a future in which the competition plays little part. Colin Camerer and Dan Lovallo, who coined the concept of competition

neglect, illustrated it with a quote from the then chairman of Disney

Studios. Asked why so many expensive big-budget movies are released

on the same days (such as Memorial Day and Independence Day), he

replied:

Hubris. Hubris. If you only think about your own business, you

think, “I’ve got a good story department, I’ve got a good marketing department, we’re going to go out and do this.” And

you don’t think that everybody else is thinking the same way. In a

given weekend in a year you’ll have five movies open, and there’s

certainly not enough people to go around.

The candid answer refers to hubris, but it displays no arrogance, no

conceit of superiority to competing studios. The competition is simply not

part of the decision, in which a difficult question has again been replaced

by an easier one. The question that needs an answer is this: Considering what others will do, how many people will see our film? The question the

studio executives considered is simpler and refers to knowledge that is most easily available to them: Do we have a good film and a good

organization to market it? The familiar System 1 processes of WYSIATI

and substitution produce both competition neglect and the above-average

effect. The consequence of competition neglect is excess entry: more

competitors enter the market than the market can profitably sustain, so

their average outcome is a loss. The outcome is disappointing for the

typical entrant in the market, but the effect on the economy as a whole

could well be positive. In fact, Giovanni Dosi and Dan Lovallo call

When they come together, the emotional, cognitive, and social factors

that support exaggerated optimism are a heady brew, which sometimes

leads people to take risks that they would avoid if they knew the odds.

There is no evidence that risk takers in the economic domain have an

unusual appetite for gambles on high stakes; they are merely less aware of

risks than more timid people are. Dan Lovallo and I coined the phrase

“bold forecasts and timid decisions” to describe the background of risk

taking.

The effects of high optimism on decision making are, at best, a mixed

blessing, but the contribution of optimism to good implementation is

certainly positive. The main benefit of optimism is resilience in the face of

setbacks. According to Martin Seligman, the founder of positive

psychology, an “optimistic explanation style” contributes to resilience by

defending one’s self-image. In essence, the optimistic style involves taking

credit for successes but little blame for failures. This style can be taught, at

least to some extent, and Seligman has documented the effects of training

on various occupations that are characterized by a high rate of failures,

such as cold-call sales of insurance (a common pursuit in pre-Internet

days). When one has just had a door slammed in one’s face by an angry

homemaker, the thought that “she was an awful woman” is clearly superior

to “I am an inept salesperson.” I have always believed that scientific

research is another domain where a form of optimism is essential to

success: I have yet to meet a successful scientist who lacks the ability to

exaggerate the importance of what he or she is doing, and I believe that

someone who lacks a delusional sense of significance will wilt in the face

of repeated experiences of multiple small failures and rare successes, the

fate of most researchers.

The Premortem: A Partial Remedy

Can overconfident optimism be overcome by training? I am not optimistic.

There have been numerous attempts to train people to state confidence

intervals that reflect the imprecision of their judgments, with only a few

reports of modest success. An often cited example is that geologists at

Royal Dutch Shell became less overconfident in their assessments of

possible drilling sites after training with multiple past cases for which the

outcome was known. In other situations, overconfidence was mitigated (but

not eliminated) when judges were encouraged to consider competing

hypotheses. However, overconfidence is a direct consequence of features

“This is a case of overconfidence. They seem to believe they

know more than they actually do know.”

“We should conduct a premortem session. Someone may come

up with a threat we have neglected.”

Part 4

Choices

Bernoulli’s Errors

One day in the early 1970s, Amos handed me a mimeographed essay by

a Swiss economist named Bruno Frey, which discussed the psychological

assumptions of economic theory. I vividly remember the color of the cover:

dark red. Bruno Frey barely recalls writing the piece, but I can still recite its

first sentence: “The agent of economic theory is rational, selfish, and his

tastes do not change.”

I was astonished. My economist colleagues worked in the building next

door, but I had not appreciated the profound difference between our

intellectual worlds. To a psychologist, it is self-evident that people are

neither fully rational nor completely selfish, and that their tastes are

anything but stable. Our two disciplines seemed to be studying different

species, which the behavioral economist Richard Thaler later dubbed

Econs and Humans. Unlike Econs, the Humans that psychologists know have a System 1.

Their view of the world is limited by the information that is available at a

given moment (WYSIATI), and therefore they cannot be as consistent and

logical as Econs. They are sometimes generous and often willing to

contribute to the group to which they are attached. And they often have little

idea of what they will like next year or even tomorrow. Here was an

opportunity for an interesting conversation across the boundaries of the

disciplines. I did not anticipate that my career would be defined by that

conversation. Soon after he showed me Frey’s article, Amos suggested that we make

the study of decision making our next project. I knew next to nothing about

the topic, but Amos was an expert and a star of the field. As a graduate student he had coauthored the textbook Mathematical Psychology, and he directed me to a few chapters that he

thought would be a good introduction.

I soon learned that our subject matter would be people’s attitudes to

risky options and that we would seek to answer a specific question: What

rules govern people’s choices between different simple gambles and

between gambles and sure things?

Simple gambles (such as “40% chance to win $300”) are to students of

decision making what the fruit fly is to geneticists. Choices between such

gambles provide a simple model that shares important features with the more complex decisions that researchers actually aim to understand. Gambles represent the fact that the consequences of choices are never

certain. Even ostensibly sure outcomes are uncertain: when you sign the

contract to buy an apartment, you do not know the price at which you later may have to sell it, nor do you know that your neighbor’s son will soon take

option. In this example, both of us would have picked the sure thing, and

you probably would do the same. When we confidently agreed on a choice, we believed—almost always correctly, as it turned out—that most people would share our preference, and we moved on as if we had solid evidence. We knew, of course, that we would need to verify our hunches later, but by

playing the roles of both experimenters and subjects we were able to move

quickly.

Five years after we began our study of gambles, we finally completed an

essay that we titled “Prospect Theory: An Analysis of Decision under Risk.” Our theory was closely modeled on utility theory but departed from it in

fundamental ways. Most important, our model was purely descriptive, and

its goal was to document and explain systematic violations of the axioms

of rationality in choices between gambles. We submitted our essay to Econometrica, a journal that publishes significant theoretical articles in

economics and in decision theory. The choice of venue turned out to be

important; if we had published the identical paper in a psychological

journal, it would likely have had little impact on economics. However, our

decision was not guided by a wish to influence economics; Econometrica

just happened to be where the best papers on decision making had been

published in the past, and we were aspiring to be in that company. In this

choice as in many others, we were lucky. Prospect theory turned out to be

the most significant work we ever did, and our article is among the most

often cited in the social sciences. Two years later, we published in Science an account of framing effects: the large changes of preferences

that are sometimes caused by inconsequential variations in the wording of

a choice problem. During the first five years we spent looking at how people make

decisions, we established a dozen facts about choices between risky

options. Several of these facts were in flat contradiction to expected utility

theory. Some had been observed before, a few were new. Then we

constructed a theory that modified expected utility theory just enough to

explain our collection of observations. That was prospect theory. Our approach to the problem was in the spirit of a field of psychology

called psychophysics, which was founded and named by the German

psychologist and mystic Gustav Fechner (1801–1887). Fechner was

obsessed with the relation of mind and matter. On one side there is a

physical quantity that can vary, such as the energy of a light, the frequency

of a tone, or an amount of money. On the other side there is a subjective

experience of brightness, pitch, or value. Mysteriously, variations of the

physical quantity cause variations in the intensity or quality of the subjective

experience. Fechner’s project was to find the psychophysical laws that

concept of expected utility (which he called “moral expectation”) to

compute how much a merchant in St. Petersburg would be willing to pay to

insure a shipment of spice from Amsterdam if “he is well aware of the fact

that at this time of year of one hundred ships which sail from Amsterdam to Petersburg, five are usually lost.” His utility function explained why poor

people buy insurance and why richer people sell it to them. As you can see

in the table, the loss of 1 million causes a loss of 4 points of utility (from

100 to 96) to someone who has 10 million and a much larger loss of 18

points (from 48 to 30) to someone who starts off with 3 million. The poorer man will happily pay a premium to transfer the risk to the richer one, which

is what insurance is about. Bernoulli also offered a solution to the famous

“St. Petersburg paradox,” in which people who are offered a gamble that

has infinite expected value (in ducats) are willing to spend only a few

ducats for it. Most impressive, his analysis of risk attitudes in terms of

preferences for wealth has stood the test of time: it is still current in

economic analysis almost 300 years later.
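A short computation shows how the logic works. The sketch below is illustrative, with hypothetical figures; it uses the logarithmic utility function that Bernoulli favored and searches for the largest premium the merchant would pay:

import math

def eu_uninsured(wealth, shipment, p_loss):
    # Bernoulli's moral expectation: expected log-utility without insurance.
    return (1 - p_loss) * math.log(wealth + shipment) + p_loss * math.log(wealth)

def max_premium(wealth, shipment, p_loss, tol=1e-6):
    # Largest premium at which insuring is still at least as good as not.
    target = eu_uninsured(wealth, shipment, p_loss)
    lo, hi = 0.0, shipment
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if math.log(wealth + shipment - mid) > target:  # insured for certain
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical merchant: 3,000 ducats in hand, a 1,000-ducat cargo at sea,
# and Bernoulli's five-in-one-hundred chance of losing the ship.
print(max_premium(3000, 1000, 0.05))  # about 57, above the fair price of 50
# A poorer merchant (wealth 300) would pay about 92: the poor buy insurance.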

The longevity of the theory is all the more remarkable because it is

seriously flawed. The errors of a theory are rarely found in what it asserts

explicitly; they hide in what it ignores or tacitly assumes. For an example,

take the following scenarios:

Today Jack and Jill each have a wealth of 5 million.
Yesterday, Jack had 1 million and Jill had 9 million.
Are they equally happy? (Do they have the same utility?)

Bernoulli’s theory assumes that the utility of their wealth is what makes

people more or less happy. Jack and Jill have the same wealth, and the

theory therefore asserts that they should be equally happy, but you do not

need a degree in psychology to know that today Jack is elated and Jill

despondent. Indeed, we know that Jack would be a great deal happier

than Jill even if he had only 2 million today while she has 5. So Bernoulli’s

theory must be wrong.

The happiness that Jack and Jill experience is determined by the recent

change in their wealth, relative to the different states of wealth that define

their reference points (1 million for Jack, 9 million for Jill). This reference

dependence is ubiquitous in sensation and perception. The same sound will be experienced as very loud or quite faint, depending on whether it was

preceded by a whisper or by a roar. To predict the subjective experience

of loudness, it is not enough to know its absolute energy; you also need to know the reference sound to which it is automatically

compared. Similarly, you need to know about the background before you

can predict whether a gray patch on a page will appear dark or light. And

but they won’t be equally satisfied because their reference points

are different. She currently has a much higher salary.”

“She’s suing him for alimony. She would actually like to settle, but

he prefers to go to court. That’s not surprising—she can only

gain, so she’s risk averse. He, on the other hand, faces options

that are all bad, so he’d rather take the risk.”

$1,000,500 and the utility of $1 million. And if you own the larger amount,

the disutility of losing $500 is again the difference between the utilities of

the two states of wealth. In this theory, the utilities of gains and losses are

allowed to differ only in their sign (+ or –). There is no way to represent the

fact that the disutility of losing $500 could be greater than the utility of winning the same amount—though of course it is. As might be expected in

a situation of theory-induced blindness, possible differences between

gains and losses were neither expected nor studied. The distinction

between gains and losses was assumed not to matter, so there was no

point in examining it. Amos and I did not see immediately that our focus on changes of wealth

opened the way to an exploration of a new topic. We were mainly

concerned with differences between gambles with high or low probability

of winning. One day, Amos made the casual suggestion, “How about

losses?” and we quickly found that our familiar risk aversion was replaced

by risk seeking when we switched our focus. Consider these two

problems:

Problem 1: Which do you choose?

Get $900 for sure OR 90% chance to get $1,000

Problem 2: Which do you choose?

Lose $900 for sure OR 90% chance to lose $1,000

You were probably risk averse in problem 1, as is the great majority of

people. The subjective value of a gain of $900 is certainly more than 90%

of the value of a gain of $1,000. The risk-averse choice in

this problem would not have surprised Bernoulli. Now examine your preference in problem 2. If you are like most other

people, you chose the gamble in this question. The explanation for this

risk-seeking choice is the mirror image of the explanation of risk aversion

in problem 1: the (negative) value of losing $900 is much more than 90% of

the (negative) value of losing $1,000. The sure loss is very aversive, and

this drives you to take the risk. Later, we will see that the evaluation of the

probabilities (90% versus 100%) also contributes to both risk aversion in

problem 1 and the preference for the gamble in problem 2. We were not the first to notice that people become risk seeking when all

their options are bad, but theory-induced blindness had prevailed. Because the dominant theory did not provide a plausible way to

accommodate different attitudes to risk for gains and losses, the fact that

the attitudes differed had to be ignored. In contrast, our decision to view

Anthony and Betty had a similar structure. How much attention did you pay to the gift of $1,000 or $2,000 that

you were “given” prior to making your choice? If you are like most people,

you barely noticed it. Indeed, there was no reason for you to attend to it,

because the gift is included in the reference point, and reference points

are generally ignored. You know something about your preferences that

utility theorists do not—that your attitudes to risk would not be different if

your net worth were higher or lower by a few thousand dollars (unless you

are abjectly poor). And you also know that your attitudes to gains and

losses are not derived from your evaluation of your wealth. The reason you

like the idea of gaining $100 and dislike the idea of losing $100 is not that

these amounts change your wealth. You just like winning and dislike losing—and you almost certainly dislike losing more than you like winning.

The four problems highlight the weakness of Bernoulli’s model. His

theory is too simple and lacks a moving part. The missing variable is the

reference point, the earlier state relative to which gains and losses are

evaluated. In Bernoulli’s theory you need to know only the state of wealth to

determine its utility, but in prospect theory you also need to know the

reference state. Prospect theory is therefore more complex than utility

theory. In science complexity is considered a cost, which must be justified

by a sufficiently rich set of new and (preferably) interesting predictions of

facts that the existing theory cannot explain. This was the challenge we had

to meet. Although Amos and I were not working with the two-systems model of

the mind, it’s clear now that there are three cognitive features at the heart

of prospect theory. They play an essential role in the evaluation of financial

outcomes and are common to many automatic processes of perception,

judgment, and emotion. They should be seen as operating characteristics

of System 1.

Evaluation is relative to a neutral reference point, which is

sometimes referred to as an “adaptation level.” You can easily set up

a compelling demonstration of this principle. Place three bowls of water in front of you. Put ice water into the left-hand bowl and warm

water into the right-hand bowl. The water in the middle bowl should

be at room temperature. Immerse your hands in the cold and warm

water for about a minute, then dip both in the middle bowl. You will

experience the same temperature as heat in one hand and cold in

the other. For financial outcomes, the usual reference point is the

status quo, but it can also be the outcome that you expect, or

because you stand to gain more than you can lose, you probably dislike it—most people do. The rejection of this gamble is an act of System 2, but

the critical inputs are emotional responses that are generated by System

1. For most people, the fear of losing $100 is more intense than the hope

of gaining $150. We concluded from many such observations that “losses

loom larger than gains” and that people are loss averse. You can measure the extent of your aversion to losses by asking yourself

a question: What is the smallest gain that I need to balance an equal

chance to lose $100? For many people the answer is about $200, twice as much as the loss. The “loss aversion ratio” has been estimated in several

experiments and is usually in the range of 1.5 to 2.5. This is an average, of

course; some people are much more loss averse than others. Professional

risk takers in the financial markets are more tolerant of losses, probably

because they do not respond emotionally to every fluctuation. When

participants in an experiment were instructed to “think like a trader,” they

became less loss averse and their emotional reaction to losses (measured

by a physiological index of emotional arousal) was sharply reduced.
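The ratio itself is simple arithmetic. A minimal sketch, generalizing the numbers above:

def implied_loss_aversion(loss, balancing_gain):
    # If a 50-50 chance to lose `loss` feels exactly offset by
    # `balancing_gain`, the implied ratio is gain over loss.
    return balancing_gain / loss

print(implied_loss_aversion(100, 200))  # 2.0, the common answer for a $100 risk
# The experimental range of 1.5 to 2.5 corresponds to balancing gains
# of $150 to $250 for the same possible $100 loss.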

In order to examine your loss aversion ratio for different stakes, consider

the following questions. Ignore any social considerations, do not try to

appear either bold or cautious, and focus only on the subjective impact of the possible loss and the offsetting gain.

Consider a 50–50 gamble in which you can lose $10. What is the smallest gain that makes the gamble attractive? If you say $10, then you are indifferent to risk. If you give a number less than $10, you seek risk. If your answer is above $10, you are loss averse.

What about a possible loss of $500 on a coin toss? What possible gain do you require to offset it?

What about a loss of $2,000?

As you carried out this exercise, you probably found that your loss aversion

coefficient tends to increase when the stakes rise, but not dramatically. All

bets are off, of course, if the possible loss is potentially ruinous, or if your

lifestyle is threatened. The loss aversion coefficient is very large in such

cases and may even be infinite—there are risks that you will not accept,

regardless of how many millions you might stand to win if you are lucky. Another look at figure 10 may help prevent a common confusion. In this

chapter I have made two claims, which some readers may view as

contradictory:

In mixed gambles, where both a gain and a loss are possible, loss

aversion causes extremely risk-averse choices.

In bad choices, where a sure loss is compared to a larger loss that is merely probable, diminishing sensitivity causes risk seeking.

There is no contradiction. In the mixed case, the possible loss looms twice

as large as the possible gain, as you can see by comparing the slopes of

the value function for losses and gains. In the bad case, the bending of the

value curve (diminishing sensitivity) causes risk seeking. The pain of losing

$900 is more than 90% of the pain of losing $1,000. These two insights

are the essence of prospect theory.
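Both claims can be checked numerically. The sketch below uses a value function of the form v(x) = x^0.88 for gains and v(x) = -2.25(-x)^0.88 for losses; the exponent and the loss-aversion coefficient of 2.25 are commonly cited later estimates, used here only for illustration:

def v(x, alpha=0.88, lam=2.25):
    # Prospect-theory value function: concave for gains,
    # convex and steeper for losses.
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Bad choice: a sure loss of $900 is valued below a 90% chance
# to lose $1,000, so the gamble is preferred (risk seeking).
print(v(-900), 0.9 * v(-1000))  # about -895 versus about -884

# Mixed gamble: losses loom larger, so a 50-50 chance to lose $100
# or win $150 has negative value and is rejected (risk aversion).
print(0.5 * v(-100) + 0.5 * v(150))  # about -24, so the gamble is declined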

Figure 10 shows an abrupt change in the slope of the value function where

gains turn into losses, because there is considerable loss aversion even when the amount at risk is minuscule relative to your wealth. Is it plausible

that attitudes to states of wealth could explain the extreme aversion to

small risks? It is a striking example of theory-induced blindness that this

obvious flaw in Bernoulli’s theory failed to attract scholarly notice for more

than 250 years. In 2000, the behavioral economist Matthew Rabin finally

proved mathematically that attempts to explain loss aversion by the utility of wealth are absurd and doomed to fail, and his proof attracted attention. Rabin’s theorem shows that anyone who rejects a favorable gamble with

small stakes is mathematically committed to a foolish level of risk aversion

for some larger gamble. For example, he notes that most Humans reject

the following gamble:

50% chance to lose $100 and 50% chance to win $200

He then shows that according to utility theory, an individual who rejects that

gamble will also turn down the following gamble:

50% chance to lose $200 and 50% chance to win $20,000
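The implication can be verified with a small computation. The sketch below is an illustration of the logic rather than Rabin's proof: it assumes a constant-absolute-risk-aversion utility, u(w) = -exp(-a*w), for which accepting or rejecting a gamble is independent of wealth, matching the theorem's premise of rejection at every wealth level:

import math

def accepts(a, loss, gain):
    # Does an expected-utility agent with u(w) = -exp(-a*w) accept a
    # 50-50 gamble to lose `loss` or win `gain`? The condition
    # E[u(w + outcome)] > u(w) reduces to this wealth-free test.
    return 0.5 * math.exp(a * loss) + 0.5 * math.exp(-a * gain) < 1

a = 0.005  # risk aversion just strong enough to reject the small gamble
print(accepts(a, 100, 200))    # False: rejects lose $100 / win $200
print(accepts(a, 200, 20000))  # False: also rejects lose $200 / win $20,000
# Worse still, 0.5 * exp(200 * a) is already about 1.36, so at this
# calibration no finite gain redeems a 50-50 risk of losing $200.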

But of course no one in his or her right mind will reject this gamble! In an

exuberant article they wrote about this proof, Matthew Rabin and Richard Thaler spelled out its implications. Perhaps carried away by their enthusiasm, they concluded their article

by recalling the famous Monty Python sketch in which a frustrated customer

to these flaws has contributed to its acceptance as the main alternative to

utility theory. Consider the assumption of prospect theory, that the reference point,

usually the status quo, has a value of zero. This assumption seems

reasonable, but it leads to some absurd consequences. Have a good look

at the following prospects. What would it be like to own them?

A. one chance in a million to win $1 million
B. 90% chance to win $12 and 10% chance to win nothing

C. 90% chance to win $1 million and 10% chance to win nothing

Winning nothing is a possible outcome in all three gambles, and prospect

theory assigns the same value to that outcome in the three cases. Winning

nothing is the reference point and its value is zero. Do these statements

correspond to your experience? Of course not. Winning nothing is a

nonevent in the first two cases, and assigning it a value of zero makes

good sense. In contrast, failing to win in the third scenario is intensely

disappointing. Like a salary increase that has been promised informally,

the high probability of winning the large sum sets up a tentative new

reference point. Relative to your expectations, winning nothing will be

experienced as a large loss. Prospect theory cannot cope with this fact,

because it does not allow the value of an outcome (in this case, winning

nothing) to change when it is highly unlikely, or when the alternative is very

valuable. In simple words, prospect theory cannot deal with

disappointment. Disappointment and the anticipation of disappointment

are real, however, and the failure to acknowledge them is as obvious a

flaw as the counterexamples that I invoked to criticize Bernoulli’s theory. Prospect theory and utility theory also fail to allow for regret. The two

theories share the assumption that available options in a choice are

evaluated separately and independently, and that the option with the

highest value is selected. This assumption is certainly wrong, as the

following example shows.

Problem 6: Choose between 90% chance to win $1 million OR

$50 with certainty.

Problem 7: Choose between 90% chance to win $1 million OR

$150,000 with certainty.

Compare the anticipated pain of choosing the gamble and not winning in

the two cases. Failing to win is a disappointment in both, but the potential

indifference curve. So if A and B are on the same indifference curve for

you, you are indifferent between them and will need no incentive to move

from one to the other, or back. Some version of this figure has appeared in

every economics textbook written in the last hundred years, and many millions of students have stared at it. Few have noticed what is missing. Here again, the power and elegance of a theoretical model have blinded

students and scholars to a serious deficiency. What is missing from the figure is an indication of the individual’s current

income and leisure. If you are a salaried employee, the terms of your

employment specify a salary and a number of vacation days, which is a

point on the map. This is your reference point, your status quo, but the

figure does not show it. By failing to display it, the theorists who draw this

figure invite you to believe that the reference point does not matter, but by

now you know that of course it does. This is Bernoulli’s error all over again.

The representation of indifference curves implicitly assumes that your utility

at any given moment is determined entirely by your present situation, that

the past is irrelevant, and that your evaluation of a possible job does not

depend on the terms of your current job. These assumptions are

completely unrealistic in this case and in many others.

The omission of the reference point from the indifference map is a

surprising case of theory-induced blindness, because we so often

encounter cases in which the reference point obviously matters. In labor

negotiations, it is well understood by both sides that the reference point is

the existing contract and that the negotiations will focus on mutual

demands for concessions relative to that reference point. The role of loss

aversion in bargaining is also well understood: making concessions hurts. You have much personal experience of the role of the reference point. If you

changed jobs or locations, or even considered such a change, you surely

remember that the features of the new place were coded as pluses or minuses relative to where you were. You may also have noticed that

disadvantages loomed larger than advantages in this evaluation—loss

aversion was at work. It is difficult to accept changes for the worse. For

example, the minimal wage that unemployed workers would accept for new

employment averages 90% of their previous wage, and it drops by less

than 10% over a period of one year.

To appreciate the power that the reference point exerts on choices,

consider Albert and Ben, “hedonic twins” who have identical tastes and

currently hold identical starting jobs, with little income and little leisure time.

Their current circumstances correspond to the point marked 1 in figure 11.

The firm offers them two improved positions, A and B, and lets them

decide who will get a raise of $10,000 (position A) and who will get an

extra day of paid vacation each month (position B). As they are both

standard economic theory would be puzzled by it. Thaler was looking for an

account that could explain puzzles of this kind. Chance intervened when Thaler met one of our former students at a

conference and obtained an early draft of prospect theory. He reports that

he read the manuscript with considerable excitement,

because he quickly realized that the loss-averse value function of prospect

theory could explain the endowment effect and some other puzzles in his

collection. The solution was to abandon the standard idea that Professor R

had a unique utility for the state of having a particular bottle. Prospect

theory suggested that the willingness to buy or sell the bottle depends on

the reference point—whether or not the professor owns the bottle now. If he

owns it, he considers the pain of giving up the bottle. If he does not own it,

he considers the pleasure of getting the bottle. The values were unequal

because of loss aversion: giving up a bottle of nice wine is more painful

than getting an equally good bottle is pleasurable. Remember the graph of

losses and gains in the previous chapter. The slope of the function is

steeper in the negative domain; the response to a loss is stronger than the

response to a corresponding gain. This was the explanation of the

endowment effect that Thaler had been searching for. And the first

application of prospect theory to an economic puzzle now appears to have

been a significant milestone in the development of behavioral economics.

Thaler arranged to spend a year at Stanford when he knew that Amos

and I would be there. During this productive period, we learned much from

each other and became friends. Seven years later, he and I had another

opportunity to spend a year together and to continue the conversation

between psychology and economics. The Russell Sage Foundation, which was for a long time the main sponsor of behavioral economics, gave one

of its first grants to Thaler for the purpose of spending a year with me in Vancouver. During that year, we worked closely with a local economist,

Jack Knetsch, with whom we shared intense interest in the endowment

effect, the rules of economic fairness, and spicy Chinese food.

The starting point for our investigation was that the endowment effect is

not universal. If someone asks you to change a $5 bill for five singles, you

hand over the five ones without any sense of loss. Nor is there much loss

aversion when you shop for shoes. The merchant who gives up the shoes

in exchange for money certainly feels no loss. Indeed, the shoes that he

hands over have always been, from his point of view, a cumbersome proxy

for money that he was hoping to collect from some consumer. Furthermore,

you probably do not experience paying the merchant as a loss, because

you were effectively holding money as a proxy for the shoes you intended

to buy. These cases of routine trading are not essentially different from the

prices are perceived as too high—when you feel that a seller is taking money that exceeds the exchange value. Brain recordings also indicate

that buying at especially low prices is a pleasurable event.

The cash value that the Sellers set on the mug is a bit more than twice

as high as the value set by Choosers and Buyers. The ratio is very close to

the loss aversion coefficient in risky choice, as we might expect if the

same value function for gains and losses of money is applied to both

riskless and risky decisions. A ratio of about 2:1 has appeared in studies

of diverse economic domains, including the response of households to

price changes. As economists would predict, customers tend to increase

their purchases of eggs, orange juice, or fish when prices drop and to

reduce their purchases when prices rise; however, in contrast to the

predictions of economic theory, the effect of price increases (losses

relative to the reference price) is about twice as large as the effect of

gains.

The mugs experiment has remained the standard demonstration of the

endowment effect, along with an even simpler experiment that Jack

Knetsch reported at about the same time. Knetsch asked two classes to fill

out a questionnaire and rewarded them with a gift that remained in front of

them for the duration of the experiment. In one session, the prize was an

expensive pen; in another, a bar of Swiss chocolate. At the end of the

class, the experimenter showed the alternative gift and allowed everyone

to trade his or her gift for another. Only about 10% of the participants opted

to exchange their gift. Most of those who had received the pen stayed with

the pen, and those who had received the chocolate did not budge either.

Thinking Like a Trader

The fundamental ideas of prospect theory are that reference points exist,

and that losses loom larger than corresponding gains. Observations in real markets collected over the years illustrate the power of these concepts. A

study of the market for condo apartments in Boston during a downturn

yielded particularly clear results. The authors of that study compared the

behavior of owners of similar units who had bought their dwellings at

different prices. For a rational agent, the buying price is irrelevant history—

the current market value is all that matters. Not so for Humans in a down market for housing. Owners who have a high reference point and thus face

higher losses set a higher price on their dwelling, spend a longer time

trying to sell their home, and eventually receive more money.

The original demonstration of an asymmetry between selling prices and

buying prices (or, more convincingly, between selling and choosing) was

Bad Events

The concept of loss aversion is certainly the most significant contribution of

psychology to behavioral economics. This is odd, because the idea that

people evaluate many outcomes as gains and losses, and that losses

loom larger than gains, surprises no one. Amos and I often joked that we were engaged in studying a subject about which our grandmothers knew a

great deal. In fact, however, we know more than our grandmothers did and

can now embed loss aversion in the context of a broader two-systems model of the mind, and specifically a biological and psychological view in which negativity and escape dominate positivity and approach. We can

also trace the consequences of loss aversion in surprisingly diverse

observations: only out-of-pocket losses are compensated when goods are

lost in transport; attempts at large-scale reforms very often fail; and

professional golfers putt more accurately for par than for a birdie. Clever

as she was, my grandmother would have been surprised by the specific

predictions from a general idea she considered obvious.

Negativity Dominance

Figure 12

Your heartbeat accelerated when you looked at the left-hand figure. It

accelerated even before you could label what is so eerie about that

picture. After some time you may have recognized the eyes of a terrified

person. The eyes on the right, narrowed by the raised cheeks of a

smile, express happiness—and they are not nearly as exciting. The two

pictures were presented to people lying in a brain scanner. Each picture

was shown for less than 2/100 of a second and immediately masked by

“visual noise,” a random display of dark and bright squares. None of the

observers ever consciously knew that he had seen pictures of eyes, but

one part of their brain evidently knew: the amygdala, which has a primary

role as the “threat center” of the brain, although it is also activated in other

emotional states. Images of the brain showed an intense response of the

amygdala to a threatening picture that the viewer did not recognize. The