The psychologist Paul Slovic has proposed an affect heuristic in which people let their likes and dislikes determine their beliefs about the world.
Self-criticism is one of the functions of System 2. In the context of attitudes, however, System 2 is more of an apologist for the emotions of System 1 than a critic of those emotions – an endorser rather than an enforcer.
Most impressions and thoughts arise in your conscious experience without your knowing how they got there.
Constantly questioning our own thinking would be impossibly tedious, and System 2 is much too slow and inefficient to serve as a substitute for System 1 in making routine decisions. The best we can do is a compromise: learn to recognize situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high. The premise of this book is that it is easier to recognize other people’s mistakes than our own.
The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is very difficult to notice its flaws.
Amos and I stumbled on the central flaw in Bernoulli’s theory by a lucky combination of skill and ignorance.
More generally, the financial benefits of self-employment are mediocre: given the same qualifications, people achieve higher average returns by selling their skills to employers than by setting out on their own. The evidence suggests that optimism is widespread, stubborn, and costly.
Regression to the mean was discovered and named late in the nineteenth century by Sir Francis Galton, a half cousin of Charles Darwin and a renowned polymath.
We soon knew that we had overcome a serious case of theory-induced blindness, because the idea we had rejected now seemed not only false but absurd.
Consistent overweighting of improbable outcomes – a feature of intuitive decision making – eventually leads to inferior outcomes.
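A minimal sketch of why this is costly, with illustrative numbers that are not from the text: a gamble pays $100 with a true probability of 1% and is offered at $5, and the intuitive decider is assumed to act on an inflated decision weight of 5.5%.

```python
import random

# Hypothetical gamble (numbers are illustrative assumptions):
PRIZE = 100.0   # payout if the improbable outcome occurs
TRUE_P = 0.01   # true probability of winning
PRICE = 5.0     # cost of playing once

# Overweighting: the decider treats the 1% chance as if it were 5.5%.
WEIGHTED_P = 0.055

perceived_value = WEIGHTED_P * PRIZE  # $5.50 > $5.00: the gamble looks attractive
true_value = TRUE_P * PRIZE           # $1.00 < $5.00: it is actually a losing bet

random.seed(0)
trials = 100_000
# The overweighter plays every time; track the realized net outcome.
total = sum((PRIZE if random.random() < TRUE_P else 0.0) - PRICE
            for _ in range(trials))

print(f"perceived value per play: ${perceived_value:.2f}")
print(f"true expected value per play: ${true_value:.2f}")
print(f"realized average per play:   ${total / trials:.2f}")
# The realized average converges to about -$4 per play, the gap between
# the price and the true expected value: a decision rule that consistently
# overweights improbable outcomes is consistently costly in the long run.
```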
We were not the first to notice that people become risk seeking when all their options are bad, but theory-induced blindness had prevailed. Because the dominant theory did not provide a plausible way to accommodate different attitudes to risk for gains and losses, the fact that the attitudes differed had to be ignored.
You may believe that you are subtler, more insightful, and more nuanced than the linear caricature of your thinking. But in fact, you are mostly noisier.
The core of his argument is that rationality should be distinguished from intelligence.
His ego was depleted after a long day of meetings, so he turned to standard operating procedures instead of thinking through the problem.
Modern tests of working memory require the individual to switch repeatedly between two demanding tasks, retaining the results of one operation while performing the other.
The essential keys to disciplined Bayesian reasoning can be simply summarized: Anchor your judgment of the probability of an outcome on a plausible base rate. Question the diagnosticity of your evidence.
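A short worked example of these two keys, with hypothetical numbers not taken from the text: the base rate of the outcome is assumed to be 10%, and the evidence is assumed to be three times as likely when the outcome holds as when it does not (a likelihood ratio of 3, one way to quantify diagnosticity).

```python
def bayes_update(base_rate: float, likelihood_ratio: float) -> float:
    """Convert the base rate to odds, multiply by the likelihood
    ratio of the evidence, and convert back to a probability."""
    prior_odds = base_rate / (1.0 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Key 1: anchor on a plausible base rate (assumed 10% here).
base_rate = 0.10

# Key 2: question the diagnosticity of the evidence.
strong_evidence = 3.0  # genuinely diagnostic: three times likelier given the outcome
weak_evidence = 1.0    # not diagnostic at all: equally likely either way

print(bayes_update(base_rate, strong_evidence))  # ~0.25: a moderate move from 10%
print(bayes_update(base_rate, weak_evidence))    # 0.10: the base rate is unchanged
```

The odds form makes the discipline explicit: the common intuitive error is to jump far past the base rate on vivid but weakly diagnostic evidence, whereas here evidence with a likelihood ratio of 1 leaves the judgment exactly at the base rate.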
In short, doctors are significantly more likely to order cancer screenings early in the morning than late in the afternoon.
Klein and I eventually agreed on an important principle: the confidence that people have in their intuitions is not a reliable guide to their validity.
The number of studies reporting comparisons of clinical and statistical predictions has increased to roughly two hundred, but the score in the contest between algorithms and humans has not changed. About 60% of the studies have shown significantly better accuracy for the algorithms. The other comparisons scored a draw in accuracy, but a tie is tantamount to a win for the statistical rules, which are normally much less expensive to use than expert judgment.
It follows that patients with appointment times later in the day were less likely to receive guideline-recommended cancer screening.