Thinking, Fast & Slow Summary

After this book was recommended to me by too many friends and coworkers, I decided it was time to read it.

The structure of the book is such that concepts are introduced with examples, backed by scientific experiments. At the end of each chapter, summary statements are included.

It’s an okay book, though I cannot say that I have learned a ton from it – most of the stuff I had already experienced or read somewhere else. I liked the “puzzles” bit: the author starts talking about something and then walks you through the thought process you just went through while reading it. Felt pretty similar to GEB, in a way.

The book could’ve used a bit of structure to make things more explicit. I think a proper categorization of “Definitions”, “Experiment”, “Results”, “Takeaway” would make for a much easier read. Every page is just filled with letters – there’s a lot of text! 🙂

Here I will give a short summary of every part.

Part I: Two systems

Two systems are introduced, called “System 1” (the creative thinker) and “System 2” (the logical thinker). System 1 thinks in automatic mode (passive), while System 2 thinks in manual mode (active). Note that these are what the author calls “useful fictions” – the distinction is made just so that the concepts can be explained more easily. Illusions fool System 1, but not System 2 (do they really not?).

System 1 uses fewer resources/less energy and System 2 uses more. Most of our actions are done on autopilot using System 1, as we are generally lazy. However, as you get more “expert” at a specific task, System 1 learns how to do it without too much effort (e.g. expert chess player vs beginner chess player). But when System 2 has worked for too long, it gets to a state of “ego depletion” and our decisions fall back to System 1.

The effect of priming: Consider the string SO_P. The word EAT primes the word SOUP, while the word WASH primes the word SOAP. Talks about how the systems associate stuff.

I’m in a very good mood today, and my System 2 is weaker than usual. I should be extra careful.

System 1 is pretty good at jumping to conclusions, as it can work with partial information, ignoring what has not yet been retrieved. Consider Alan, whose traits are retrieved in the order intelligent -> impulsive -> envious, and Ben, whose traits are retrieved in the order envious -> impulsive -> intelligent. We get a different picture of each of them over time as more information arrives. Per Wikipedia, the halo effect is “the name given to the phenomenon whereby evaluators tend to be influenced by their previous judgments of performance or personality.”

Do we still remember the question we are trying to answer? Or have we substituted an easier one?

Part II: Heuristics and biases

Anchoring effect: any number we are asked to consider when solving a problem that involves estimation acts as an anchor. For example: is the height of the tallest redwood more or less than 1200ft? Since we likely have no idea how tall redwoods get, this hint number affects our answer and we tend to estimate something close to it. Both System 1 and System 2 are affected by anchoring biases.

She has been watching too many spy movies recently, so she’s seeing conspiracies everywhere.

Availability is the process of judging frequency by the ease with which instances come to mind. For example, in a marriage it’s easier to think of one’s own achievements than the partner’s (or both combined) – the availability bias. Simply being aware of this bias can contribute to a more peaceful marriage.

The CEO has had several successes in a row, so failure doesn’t come easily to her mind. The availability bias is making her overconfident.

Availability and emotion: “How do I feel about it” vs “What do I think about it”. Another interesting statement in the same chapter was “When experts and the public disagree on their priorities, each side must respect the insights and intelligence of the other” since both sides might have insights.

This is an availability cascade: a nonevent that is inflated by the media and the public until it fills our TV screens and becomes all anyone is talking about.

The lawn is well trimmed, the receptionist looks competent, and the furniture is attractive, but this doesn’t mean it is a well-managed company. I hope the board does not go by representativeness.

They keep making the same mistake: predicting rare events from weak evidence. When the evidence is weak, one should stick with the base rates.

They constructed a very complicated scenario and insisted on calling it highly probable. It is not – it is only a plausible story.

They added a cheap gift to the expensive product, and made the whole deal less attractive. Less is more in this case.

Some of the chapters talk about Bayesian inference; one of the examples is fully solved here.
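
For a flavor of the arithmetic, here is a minimal Python sketch of Bayes’ rule applied to the cab problem from the book (15% of the city’s cabs are Blue, and a witness who is right 80% of the time says the cab involved was Blue):

```python
# Bayes' rule for the book's cab problem: how likely is the cab actually Blue,
# given that a witness (correct 80% of the time) says it was Blue?
p_blue = 0.15                   # base rate of Blue cabs (the prior)
p_green = 1 - p_blue            # base rate of Green cabs
p_say_blue_given_blue = 0.80    # witness correctly calls a Blue cab "Blue"
p_say_blue_given_green = 0.20   # witness mistakenly calls a Green cab "Blue"

# P(Blue | witness says "Blue")
posterior = (p_say_blue_given_blue * p_blue) / (
    p_say_blue_given_blue * p_blue + p_say_blue_given_green * p_green
)
print(f"{posterior:.2f}")  # ~0.41, far lower than the witness's 80% accuracy
```

The part System 1 tends to skip is the base rate: the witness is fairly reliable, but Blue cabs are rare, so the posterior ends up well below 80%.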

I think this was an interesting observation about stereotypes: “stereotypes, both correct and false, are how we think of categories”.

There was an experiment similar to the bystander effect: even though people knew they weren’t doing the “right” thing, they kept doing it anyway. “Changing one’s mind about human nature is hard work”.

I like how the author went a little meta in chapter 16:

People who are taught surprising statistical facts about human behaviour may be impressed to the point of telling their friends about what they have heard, but this does not mean that their understanding of the world has really changed. The test of learning psychology is whether your understanding of situations you encounter has changed, not whether you have learned a new fact.

There was one situation where the author taught about positive reinforcement and some of the students said they had experienced exactly the opposite. “She says experience has taught her that criticism is more effective than praise. What she doesn’t understand is that it’s all due to regression to the mean.”

Our screening procedure is good but not perfect, so we should anticipate regression. We shouldn’t be surprised that the very best candidates often fail to meet our expectations.
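
To make the “anticipate regression” point concrete, here is a tiny simulation of my own (not from the book), where each observed score is true skill plus luck; the candidates who top the first screening look noticeably worse on a second, equally noisy measurement:

```python
# Regression to the mean: selecting on an extreme noisy score guarantees that
# a second, equally noisy score will be less extreme on average.
import random

random.seed(0)
n = 10_000
skill = [random.gauss(0, 1) for _ in range(n)]
score1 = [s + random.gauss(0, 1) for s in skill]  # first screening: skill + luck
score2 = [s + random.gauss(0, 1) for s in skill]  # second look: same skill, fresh luck

cutoff = sorted(score1)[-500]                     # keep the top ~5% of the first screening
top = [i for i in range(n) if score1[i] >= cutoff]

avg1 = sum(score1[i] for i in top) / len(top)
avg2 = sum(score2[i] for i in top) / len(top)
print(f"top candidates: {avg1:.2f} on screening, {avg2:.2f} on the second look")
```

Nothing about the candidates changes between the two measurements; the drop happens purely because the first score was partly luck, and luck doesn’t repeat.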

Part III: Overconfidence

Begins with a chapter that talks about the illusions of understanding and how much of what happens comes down to luck, even though we might not be aware of it.

It proceeds to talk about illusions of validity and gives an example where they assessed candidates against some criteria and reached the point where “The evidence that we could not forecast success accurately was overwhelming”, and that “We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each of our specific predictions was valid” – the illusion of validity. “Confidence is a feeling”. Talks about forecasting and how imperfect it is.

She is a hedgehog. She has a theory that explains everything, and it gives her the illusion that she understands the world.

The question is not whether these experts are well trained. It is whether their world is predictable.

First he talks about research on the benefits of algorithms:

Why are experts inferior to algorithms? One reason, […] is that experts try to be clever, think outside the box […]. Complexity may work in the odd case […]. They [experts] feel that they can overrule the formula because they have additional information about the case, but they are wrong more often than not.

He then proceeds to talk about the benefits of having standardized procedures.

The statistical method is criticized for being “mechanical, atomistic, artificial, incomplete, forced, static, rigid, sterile, academic”, while the clinical method is praised for being “dynamic, global, meaningful, holistic, subtle, sympathetic, patterned, rich, deep, sensitive, real, living, natural, true to life, understanding”.

When a human competes with a machine, […] our sympathies lie with our fellow human. The aversion to algorithms making decisions that affect humans is rooted in the strong preference that many people have for the natural over the synthetic or artificial.

Whenever we can replace human judgment by a formula, we should at least consider it.

Talks about the well-known 10,000 hours it takes to become an expert in a subject.

Did he really have an opportunity to learn? How quick and how clear was the feedback he received on his judgments?

People make “greedy” decisions (assuming the best case), but they should also consider other outcomes to maximize success.

This is a case of overconfidence. They seem to believe they know more than they actually do know.

Part IV: Choices

Skimmed.

Considering her vast wealth, her emotional response to trivial gains and losses makes no sense.

When they raised their prices, demand dried up.

When you see cases in isolation, you are likely to be guided by an emotional reaction of System 1.

Part V: Two selves

Skimmed.

The easiest way to increase happiness is to control your use of time. Can you find more time to do the things you enjoy doing?

Learned about a new concept, the focusing illusion:

Nothing in life is as important as you think it is when you are thinking about it.
