
Heuristics and Biases 1B: Anchoring, Contamination, and Compatibility


6. Incorporating the Irrelevant: Anchors in Judgments of Belief and Value

When asked to estimate uncertain values, people often take irrelevant information into account, which biases their judgment toward it. For example, if you are asked to state the last two digits of your phone number, then asked whether the percentage of African countries in the U.N. is higher or lower than that number, your estimate of the true percentage will be biased toward the number you provided.

Interestingly, this effect occurs even when people are instructed to ignore the numbers they provide. Anchors are also more powerful when anchor and target are on the same scale (a weight anchor biases a weight estimate more than a height anchor does). Reminds me of Buonomano’s Brain Bugs.

In this chapter, we review what is currently known about the causes and effects of anchoring. We start by offering some definitions, and then identify some stylized facts about this heuristic. We next examine two families of causes of anchoring. We close by reviewing other phenomena related to anchoring and potential applications (2974).

We define anchoring as an experimental result or outcome: the influence of an anchor that renders the final judgment too close to the anchor. Thus, anchoring is defined as assimilation rather than contrast (Sherif, Sherif, & Nebergall, 1965; Sherif & Hovland, 1961). The vast majority of decision-making studies on anchoring have found a positive relation between anchors and judgments (3010).

7. Putting Adjustment Back in the Anchoring and Adjustment Heuristic

The authors argue that part of the difficulty in finding an adjustment effect in anchoring studies is that the studies do not use realistic environments in which anchoring naturally occurs. Self-anchoring provides that natural environment: beginning with a known value (my age) and adjusting from it to estimate an unknown value (my cousin’s age). In such cases, adjustment can be demonstrated.

8. Self-Anchoring in Conversation: Why Language Users Do Not Do What They “Should”

This is anchoring and adjustment applied to language use. The ambiguities of our own language are lost on us, since we have an inside view of what it “feels like” to know what something means. When there is ambiguity in meaning (sarcasm, hidden meanings, idioms, etc.), we are typically overconfident about how likely others are to understand which side of the ambiguity is intended.

In this chapter, we propose that speakers, addressees, and overhearers reduce the uncertainty of linguistic utterances by using an anchoring and adjustment heuristic. We review evidence that language users tend to anchor on their own perspective and attempt to adjust to the perspective of others. These adjustments are typically insufficient, and can occasionally cause miscommunication (3579).

Our studies converge with Gilovich, Savitsky, and Medvec’s (1998) and Gilovich and Savitsky’s (1999) studies that documented a related illusion of transparency – the illusion that one’s own internal states, such as private preferences or emotions, are accessible to uninformed others. They showed that the illusion results from anchoring on one’s own internal states and not adjusting sufficiently to the perspective of others. Unlike the illusion of transparency we demonstrated for overhearers (i.e., June), in which people thought others’ intentions shone through, these studies show that people perceive their own internal states as relatively transparent (3820).

9. Inferential Correction

It’s nice to see that Gilbert is a legit scientist. This chapter covers attribution theory/the fundamental attribution error, and weaves it in with human belief formation and distraction/cognitive load. People attribute the actions of others to something innate to them, even when those actions are already explained by the environment/circumstances. This is the automatic reaction, but people can override it with system 2 thought. As predicted and demonstrated in multiple studies, the attribution error occurs more frequently when people are distracted, or carrying a cognitive load.

Similarly, people have a belief bias, tending to believe statements first, with system 2 stepping in afterward in case an override is necessary. Statements are more readily accepted under distraction or cognitive load.

Attribution theory’s fundamental distinction leads quite naturally to its fundamental rule: When a behavior occurs in the presence of a sufficiently strong, facilitative force, an observer should not infer that the actor is predisposed to perform that behavior. Just as one should not confidently conclude that a balloon that rises on a windy day is filled with helium, one cannot make confident inferences about the abilities of an athlete, the convictions of a politician, or the mental health of a student when poor lighting, a roomful of opinionated voters, or sudden bad news may have induced their behaviors. In other words, we should not explain with dispositions that which has already been explained by the situation (3986).

The memory test revealed that busy participants were, as predicted, more likely than non-busy participants to misremember the false statements as true. These data suggest that when people are prevented from unbelieving the assertions they comprehend, they act as though they believe them. Follow-up studies ruled out a variety of alternative interpretations (4292).

What do these studies tell us about belief? Spinoza knew that people do have the power to assent, reject, and suspend their judgment – but he thought they could do this only after they had briefly believed any information to which they were exposed. We performed six experiments to examine this notion, and each provided support for Spinoza’s account (4295).

10. Mental Contamination and the Debiasing Problem

This chapter looks at ways our evaluations are “contaminated” by biasing information, and different strategies to debias. In sum, it’s not pretty. We’re pretty bad at correcting for contamination on almost every level. Of note is the breakdown of biases and each step in which potential corrections may occur (summarized from location 4355).

When biasing information is encountered:

  1. Aware of the biasing effect? Yes = continue. No = BIAS!
  2. Motivated to correct bias? Yes = continue. No = BIAS!
  3. Aware of size and direction of bias? Yes = continue. No = BIAS!
  4. Able to adjust (exhibit mental control)? Yes = NO BIAS! No = BIAS!
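The checklist above amounts to a conjunction: a judgment escapes contamination only if every step succeeds. A minimal sketch (the function and parameter names are mine, not Wilson and Brekke’s):

```python
def correction_survives(aware_of_bias: bool,
                        motivated_to_correct: bool,
                        knows_size_and_direction: bool,
                        able_to_adjust: bool) -> bool:
    """A judgment avoids bias only if all four correction steps succeed;
    failing any single step leaves the judgment contaminated."""
    return all([aware_of_bias, motivated_to_correct,
                knows_size_and_direction, able_to_adjust])

# All four steps pass -> no bias
print(correction_survives(True, True, True, True))   # True
# Unaware of the bias's size/direction (step 3 fails) -> bias
print(correction_survives(True, True, False, True))  # False
```

The conjunction is what makes debiasing so fragile: each step is a separate point of failure, and the probability of clearing all four is the product of clearing each one.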

Concerning areas of politics and religion, I’d say the vast majority of people fail at step one. A painfully large number of people suck at step two. Steps three and four are hard for anyone, and I believe I frequently fail them. An awful lot must go right to avoid bias successfully.

People do not have access to or control over the recipe of their impressions, feelings, and attitudes. There exists a vast, adaptive unconscious that does much of the work of sizing up and evaluating the world (Nisbett & Wilson, 1977b; Wilson, 2001); people often know the final product with no awareness of exactly how it was formed (4339).

In this chapter, we discuss the consequences of unwanted judgments and ways they might be avoided, with an emphasis on issues that have arisen since Wilson and Brekke’s (1994) article was published (4385).

Despite evidence for the extremely low validity of the interview (e.g., Hunter & Hunter, 1984), the members of most departments of psychology are as confident in their impressions of a candidate after meeting him or her for half an hour as anyone else would be, and are as willing to let this impression override information known to be more valid (e.g., letters of recommendation, publication record) (4454).

The lesson from our discussion so far should be clear: The best way to avoid biased judgments and emotions is exposure control, whereby we avoid stimuli that might influence our responses in unwanted ways (4546).

11. Sympathetic Magical Thinking: The Contagion and Similarity “Heuristics”

People show an automatic, heuristic aversion to apparent contamination, even when they know the contamination is in appearance only, or that the aversion is not rational. A sterilized cockroach, for example, or a sterilized spoon used by an AIDS patient would both make a person very uncomfortable, whether touching the object itself or eating food it has touched. This is the “contagion” heuristic. Some people act as though it is based on germs, whereas others have a more magical aversion, in which physical cleansing does not reduce their discomfort at all.

The similarity heuristic comes up when a person is asked to eat chocolate realistically shaped like shit. They may know it’s not what it appears to be, but the aversion remains.

Educated Westerners, after careful consideration of the situation, overrule their feelings to the extent that they hold that the feelings of disgust do not require a negative moral judgment (because no organism was hurt in this process), whereas less-educated folk are more inclined to move from the emotion or feeling to a moral judgment (Haidt, Koller, & Dias, 1993) (4722).

12. Compatibility Effects in Judgment and Choice

When making judgments, people overweight data that matches the required response in type, i.e., dollars to dollars or rank to rank.

H(33/36, $50) vs. L(18/36, $X): what dollar amount would X need to be for the two bets (H and L) to be equally attractive?

H(33/36, $50) vs. L(Y, $125): what probability would Y have to be for the bets to be equally attractive?

The preferences inferred from these matching values should agree, but the scale of the required response affects which attribute gets more weight. Subjects give either overly high dollar amounts for X or overly high probabilities for Y, so their inferred preferences reverse depending on how the question is framed.
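As a sanity check on the matching arithmetic, here is what a consistent responder would give under a simple expected-value benchmark (my illustration, not the chapter’s analysis, and EV is only one of several normative standards a matcher could satisfy):

```python
from fractions import Fraction

# H bet: win $50 with probability 33/36
p_h, win_h = Fraction(33, 36), 50
ev_h = p_h * win_h              # expected value of H = $275/6 ~= $45.83

# X: the payoff making L(18/36, $X) match H in expected value
p_l = Fraction(18, 36)
x = ev_h / p_l                  # 275/3 ~= $91.67

# Y: the probability making L(Y, $125) match H in expected value
y = ev_h / 125                  # 11/30 ~= 0.367

print(float(ev_h), float(x), float(y))
```

A subject whose matched X exceeds $91.67 (or whose matched Y exceeds 11/30) is weighting the attribute that shares a scale with the response more heavily than consistency permits, which is exactly the compatibility effect.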

One of the main ideas that has emerged from behavioral decision research is a constructive conception of judgment and choice. According to this view, preferences and beliefs are actually constructed – not merely revealed – in the elicitation process. This conception is entailed by findings that normatively equivalent methods of elicitation often give rise to systematically different responses (e.g., Slovic, Fischhoff, & Lichtenstein, 1982; Tversky, Sattath, & Slovic, 1988) (5042).
