
Heuristics and Biases 2A- Two Systems of Reasoning

01/25/2014

22. Two Systems of Reasoning

The main line of evidence that there are two systems of reasoning is that humans often simultaneously come to “believe” two contradictory conclusions. Optical illusions are one example. One “believes” that one line is shorter than another, but after measuring the lines, comes to also believe that they are the same length. And yet the original belief stays stuck in the mind; one’s perception is not altered. The same goes for logical illusions, like the Linda problem, or prejudices as revealed by the Implicit Association Test.

The case for two systems of thought can be made on several grounds (see Sloman, 1996). For the purposes of this chapter, I focus on one – the existence of simultaneous, contradictory beliefs (8728).

In all the demonstrations of simultaneous contradictory belief, associative responses were shown to be automatic in that they persisted in the face of participants’ attempts to ignore them. Despite recognition of the decisiveness of the rule-based argument, associative responses remained compelling (8889).

The rule-based system can suppress the response of the associative system in the sense that it can overrule it. However, the associative system always has its opinion heard and, because of its speed and efficiency, often precedes and thus neutralizes the rule-based response (8892).
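To make the two-systems picture concrete, here is a toy sketch (my own illustration, not anything from the chapter) of a fast associative answer followed by a slower rule-based override, using the conjunction rule from the Linda problem:

```python
# Toy model of two systems: the associative system answers from surface
# similarity; the rule-based system is slower but can overrule it by
# enforcing the conjunction rule P(A and B) <= P(A).

def associative_judgment(similarity_a: float, similarity_a_and_b: float) -> str:
    """Fast: pick whichever description feels more representative."""
    return "A and B" if similarity_a_and_b > similarity_a else "A"

def rule_based_judgment(fast_answer: str) -> str:
    """Slow: a conjunction can never be more probable than one of its
    conjuncts, so an 'A and B' answer is always overruled."""
    return "A" if fast_answer == "A and B" else fast_answer

# Linda "feels" far more like a feminist bank teller (A and B) than a
# bank teller (A), so the fast answer violates the conjunction rule:
fast = associative_judgment(similarity_a=0.2, similarity_a_and_b=0.9)
print(fast)                       # -> A and B  (the compelling illusion)
print(rule_based_judgment(fast))  # -> A        (the corrected answer)
```

Note that the fast answer is still produced and printed first; overruling it does not make it go away, which mirrors the point about the associative system always having its opinion heard.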

23. The Affect Heuristic

Contrary to the traditional view of judgment and decision making, emotions/intuition/the associative mind play an extremely important part. Antonio Damasio seems to be a big name in this area, and his research offers some of the best examples of emotions facilitating rationality, as in the deck-choosing task. Most of the examples, though, seem to be of biases in the affect heuristic (using subjective affect to decide between choices instead of the information specific to the judgment). I would wager that the associative mind is almost always leading us to generally good decisions. It’s just the exceptions that are more interesting, more noticeable, and more investigated by the science.

There’s some good information here for making the case that emotions are necessary for being rational, but that they can get in the way as well. Take the fact that people more strongly support a measure that saves 98% of 150 lives at risk than one that simply saves 150 lives (when the two are not contrasted directly). One might say this is why emotions are bad for rationality, since they make people value saving 98% of 150 lives over saving all 150. True, but it is only because of emotions that we care about people at all. Without emotions, there is no reason to prefer one choice to another.

This chapter introduces a theoretical framework that describes the importance of affect in guiding judgments and decisions (9023).

Damasio observes: The instruments usually considered necessary and sufficient for rational behavior were intact in him. He had the requisite knowledge, attention, and memory; his language was flawless; he could perform calculations; he could tackle the logic of an abstract problem. There was only one significant accompaniment to his decision-making failure: a marked alteration of the ability to experience feelings (9075).

Specifically, it is proposed that people use an affect heuristic to make judgments; that is, representations of objects and events in people’s minds are tagged to varying degrees with affect. In the process of making a judgment or decision, people consult or refer to an “affect pool” containing all the positive and negative tags consciously or unconsciously associated with the representations (9099).
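As a rough sketch of what “consulting the affect pool” might look like (my own toy, with invented tags and values):

```python
# "Affect pool": mental representations tagged with positive or negative
# affect. The entries below are invented purely for illustration.
affect_pool = {
    "nuclear power": -0.6,
    "solar power": +0.8,
    "pesticides": -0.5,
}

def affect_choice(options):
    """Decide by consulting affect tags rather than the information
    specific to the judgment at hand."""
    return max(options, key=lambda option: affect_pool.get(option, 0.0))

print(affect_choice(["nuclear power", "solar power"]))  # -> solar power
```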

Fetherstonhaugh et al. found that people’s willingness to intervene to save a stated number of lives was determined more by the proportion of lives saved than by the actual number of lives that would be saved. However, when two or more interventions were directly compared, number of lives saved became more important than proportion saved (9265).

People, in a between-groups design, would more strongly support an airport-safety measure expected to save 98% of 150 lives at risk than a measure expected to save 150 lives. Saving 150 lives is diffusely good, and therefore only weakly evaluable, whereas saving 98% of something is clearly very good because it is so close to the upper bound on the percentage scale, and hence is readily evaluable and highly weighted in the support judgment. Subsequent reduction of the percentage of 150 lives that would be saved to 95%, 90%, and 85% led to reduced support for the safety measure but each of these percentage conditions still garnered a higher mean level of support than did the Save 150 Lives Condition (9271).
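The arithmetic makes the oddity plain: every percentage condition saves fewer lives in expectation than plainly saving all 150, yet each drew stronger support. A quick check:

```python
# Expected lives saved in each percentage condition of the study.
at_risk = 150
for pct in (0.98, 0.95, 0.90, 0.85):
    print(f"{pct:.0%} of {at_risk} lives = {pct * at_risk:.1f} expected lives saved")
# 98% -> 147.0, 95% -> 142.5, 90% -> 135.0, 85% -> 127.5: all strictly fewer
# than 150, but a percentage near the top of its scale is easy to evaluate,
# while a bare "150 lives" is only diffusely good.
```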

A study by Alhakami and Slovic (1994) found that the inverse relationship between perceived risk and perceived benefit of an activity (e.g., using pesticides) was linked to the strength of positive or negative affect associated with that activity. This result implies that people base their judgments of an activity or a technology not only on what they think about it but also on what they feel about it. If they like an activity, they are moved to judge the risks as low and the benefits as high; if they dislike it, they tend to judge the opposite – high risk and low benefit (9326).
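One way to see how a single affect tag could produce that inverse relationship is a quick toy simulation (my own sketch, not Alhakami and Slovic’s method): if judged risk and judged benefit both lean on the same underlying liking score, they come out negatively correlated across activities.

```python
import random

# Toy model: one latent "affect" score per activity drives both judgments.
random.seed(0)
risks, benefits = [], []
for _ in range(1000):
    affect = random.uniform(-1, 1)                  # how much the activity is liked
    benefits.append(affect + random.gauss(0, 0.2))  # liking inflates benefit...
    risks.append(-affect + random.gauss(0, 0.2))    # ...and deflates risk

# Pearson correlation between simulated risk and benefit judgments.
n = len(risks)
mean_r, mean_b = sum(risks) / n, sum(benefits) / n
cov = sum((r - mean_r) * (b - mean_b) for r, b in zip(risks, benefits)) / n
sd_r = (sum((r - mean_r) ** 2 for r in risks) / n) ** 0.5
sd_b = (sum((b - mean_b) ** 2 for b in benefits) / n) ** 0.5
print(f"risk-benefit correlation: {cov / (sd_r * sd_b):+.2f}")  # strongly negative
```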

24. Individual Differences in Reasoning: Implications for the Rationality Debate?

Stanovich looks to rule out certain interpretations of the heuristics and biases research. Some alternatives include: the norms against which performance is measured are not the correct norms; people interpret the tasks differently than the experimenters do; the problems are due to computational limits rather than reasoning errors. Many experiments have been done to narrow down the interpretations, and they come to the basic conclusion that humans do often perform in suboptimal ways.

We have found moderate correlations between measures of cognitive ability and several tasks well-known in the heuristics and biases literature (e.g., informal argument evaluation tasks, belief bias in syllogistic reasoning, covariation detection, causal base-rate use, selection task performance). In addition, much smaller but still significant correlations were observed between cognitive ability and a host of other tasks in this literature (e.g., assessing the likelihood ratio, sunk cost effects, outcome bias, “if only” thinking, searching for unconfounded variables). Finally, there are some tasks in the heuristics and biases literature which lack any association at all with cognitive ability – the so-called false consensus effect in the opinion prediction paradigm (Krueger & Clement, 1994; Krueger & Zeiger, 1993) and the overconfidence effect in the knowledge calibration paradigm (e.g., Lichtenstein, Fischhoff, & Phillips, 1982) (9576).
