Heuristics and Biases 3B: Expert Judgment

02/10/2014

37. Assessing Uncertainty in Physical Constants

Even something as seemingly concrete as a physical constant (the speed of light, the gravitational constant) gets refined over time, and experts’ confidence that a specific value is correct is susceptible to overconfidence bias: later measurements land outside earlier stated uncertainty ranges more often than those ranges imply.

38. Do Analysts Overreact?

Yup. They treat both negative and positive information as more predictive of future performance than it actually turns out to be.

39. The Calibration of Expert Judgement: Heuristics and Biases Beyond the Laboratory

This chapter asks how well the heuristics and biases research generalizes outside the laboratory to professional domains, where people have more training, more is at stake, and they aren’t just dumb psych 101 students. It turns out the biases don’t just exist in the lab (which also happens to be a good example of why psych can still reach reasonable conclusions about humans as a whole from psych 101 student data: it’s not perfect, but it’s evidence).

Some experts are awesome though, like bridge players and meteorologists (predicting rain in Chicago): they’re very well calibrated. A common theme throughout the many cases is base-rate neglect, which helps explain both the successes and the failures. When base rates are near 50/50, predictions are better calibrated; when base rates are very high or very low, experts tend to underpredict the common outcome and overpredict the rare one.
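To see why base rates matter, here’s a quick illustrative sketch (my own numbers, not from the book): the same piece of evidence that warrants high confidence at a 50/50 base rate warrants much less when the outcome is rare, which is exactly what a forecaster who reads confidence straight off the evidence will miss.

```python
# Hypothetical numbers, for illustration only (not from the book).
def posterior(base_rate, hit_rate, false_alarm_rate):
    """P(outcome | positive evidence) via Bayes' rule."""
    true_pos = base_rate * hit_rate
    false_pos = (1 - base_rate) * false_alarm_rate
    return true_pos / (true_pos + false_pos)

# The same diagnostic evidence (80% hit rate, 20% false-alarm rate)...
for base_rate in (0.5, 0.05):
    p = posterior(base_rate, hit_rate=0.8, false_alarm_rate=0.2)
    print(f"base rate {base_rate:.2f} -> P(outcome | evidence) = {p:.2f}")

# base rate 0.50 -> P(outcome | evidence) = 0.80
# base rate 0.05 -> P(outcome | evidence) = 0.17
# A judge who ignores the base rate reports ~0.80 in both cases,
# overpredicting the rare outcome.
```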

In this chapter, we examine the calibration of expert probabilistic predictions “in the wild” and assess how well the heuristics and biases perspective on judgment under uncertainty can account for the findings. We then review alternate theories of calibration in light of the expert data (15309).

We review research on calibration from five applied domains: medicine, meteorology, law, business, and sports (15493).

In all domains of expert judgment surveyed, systematic miscalibration was observed. In each case, the observed patterns matched the qualitative predictions of the heuristics and biases perspective, as embodied by the direct support account. Nonetheless, there were notable differences among the domains in the magnitude of miscalibration, such that the judgments of experts with the greatest training and technical assistance in statistical modeling (meteorologists and economists) were less biased than the direct support account predicted (15784).

In the expert data sets we examined, there is little or no indication of a general bias in favor of the focal hypothesis, as implied by the confirmatory bias model. In particular, there was little evidence of optimistic bias in these data sets. Note, however, that most of the judgments were generally not self-relevant. When the issues were extremely self-relevant, such as the patients’ predictions of their own survival, there was considerable optimistic bias shown (15872).

We find the general framework of support theory on which RST [random support theory] is based, however, to provide a useful and psychologically plausible interpretation of the patterns that we found: Assessments of probability typically reflect a direct translation of the support provided by the evidence for the target hypotheses, with little regard to the reliability of the evidence or the base rate of the outcome (15878).

40. Clinical versus Actuarial Judgement

Reminds me of Epistemology and the Psychology of Human Judgment. Actuarial methods (statistical prediction rules) nearly always outperform clinical judgment, even when the clinicians are given the results of the actuarial methods. If they just followed the math rather than their subjective professional opinion, the outcomes would be better.

In the clinical method the decision-maker combines or processes information in his or her head. In the actuarial or statistical method the human judge is eliminated and conclusions rest solely on empirically established relations between data and the condition or event of interest (15908).

even after repeated sessions with these training protocols culminating in 4,000 practice judgments, none of the judges equaled the Goldberg Rule’s 70% accuracy rate with these test cases. Rorer and Goldberg finally tried giving a subset of judges, including all of the experts, the outcome of the Goldberg Rule for each MMPI. The judges were free to use the rule when they wished and knew its overall effectiveness. Judges generally made modest gains in performance but none could match the rule’s accuracy; every judge would have done better by always following the rule (15955).

In virtually every one of these studies, the actuarial method has equaled or surpassed the clinical method, sometimes slightly and sometimes substantially (15984).

The research reviewed in this article indicates that a properly developed and applied actuarial method is likely to help in diagnosing and predicting human behavior as well or better than the clinical method even when the clinical judge has access to equal or greater amounts of information (16172).
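For a sense of how simple an actuarial method can be: something in the spirit of the Goldberg Rule is just a fixed linear combination of test-scale scores compared against a cutoff. The sketch below is illustrative only; the scale names, weights, and cutoff are stand-ins, not a faithful reproduction of Goldberg’s rule.

```python
# Sketch of an actuarial (statistical) prediction rule: an empirically derived
# linear combination of cue scores plus a fixed cutoff, applied mechanically.
# Scale names and the cutoff below are illustrative placeholders.
def actuarial_rule(scores: dict[str, float], cutoff: float = 45.0) -> str:
    """Classify a test profile by a fixed weighted sum of scale scores."""
    composite = (scores["L"] + scores["Pa"] + scores["Sc"]
                 - scores["Hy"] - scores["Pt"])
    return "psychotic" if composite >= cutoff else "neurotic"

profile = {"L": 50, "Pa": 65, "Sc": 70, "Hy": 60, "Pt": 62}
print(actuarial_rule(profile))  # 50 + 65 + 70 - 60 - 62 = 63 -> "psychotic"
```

The point of the chapter is that mechanically applying a rule like this beat trained judges, even when the judges could see the rule’s output and were free to use it.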

41. Heuristics and Biases in Application

Just a brief glimpse at the application potential of the heuristics and biases research. Each of the three areas below has its own caveats. While it’s probably a net positive for people to acknowledge the general truth that humans, even experts, are biased, actually applying that knowledge to predict or change behavior gets pretty complicated. This is just the tip of the iceberg in applied rationality.

This chapter critically examines the application of heuristics-and-biases research from these dual perspectives, asking what theory and practice can learn from each other. It focuses, in turn, on applications to (1) explaining past judgments, (2) predicting future judgments, and (3) improving future judgments (16232).

42. Theory-Driven Reasoning about Plausible Pasts and Probable Futures in World Politics

Experts are susceptible to rationalizations that protect their explanatory theories from falsification. As a general rule, when predictions are falsified, experts do not adjust their confidence downward enough. The more theory-based their thinking was, the more confident they were in their predictions.

This chapter explores the applicability of the error-and-bias literature to world politics by examining experts’ expectations for the future as well as their explanations of the past. One set of studies tracks the reactions of experts to the apparent confirmation or disconfirmation of conditional forecasts of real-world events in real time (16619).

Although there is a general tendency among our experts to rely on theory-driven modes of reasoning and to fall prey to theory-driven biases such as overconfidence and belief perseverance, these tendencies are systematically more pronounced among experts with strong preferences for parsimony and explanatory closure (16628).

although experts only sporadically exceeded chance predictive accuracy, they regularly assigned subjective probabilities that exceeded the scaling anchors for “just guessing.” Most experts, especially those who valued parsimony and explanatory closure, thought they knew more than they did. Moreover, these margins of error were larger than those customarily observed in laboratory research on the calibration of confidence. Across all predictions elicited across domains, experts who assigned confidence estimates of 80% or higher were correct only 45% of the time, a hit rate not appreciably higher than that for experts who endorsed the “just-guessing” subjective probabilities of 0.50 and 0.33 (for 2 and 3 outcome scenarios, respectively). Expertise thus may not translate into predictive accuracy but it does translate into the ability to generate explanations for predictions that experts themselves find so compelling that the result is massive over-confidence (16703).
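As an aside on how calibration is actually scored (a minimal sketch with made-up data, not Tetlock’s): bin predictions by stated confidence and compare each bin’s hit rate to that confidence; well-calibrated forecasters have the two match.

```python
from collections import defaultdict

# Made-up (stated_confidence, came_true) pairs, for illustration only.
forecasts = [(0.9, True), (0.8, False), (0.8, False), (0.8, True),
             (0.6, True), (0.5, True), (0.5, False), (0.5, False)]

bins = defaultdict(list)
for confidence, came_true in forecasts:
    bins[confidence].append(came_true)

for level in sorted(bins):
    outcomes = bins[level]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated confidence {level:.2f}: hit rate {hit_rate:.2f} (n={len(outcomes)})")

# Calibration means hit rates track stated confidence; the experts above who
# said 80%+ were right only ~45% of the time.
```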

Converging evidence suggests that experts not only rely, but over-rely, on their preconceptions in generating expectations about the future (producing overconfidence) and in revising preconceptions in response to unexpected events (producing belief underadjustment by Bayesian standards) (16786).

The more value experts place on parsimony and explanatory closure, the more prone they are to overconfidence (ex ante) and Bayesian under-adjustment (ex post) (16791).
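And a quick illustration of what “Bayesian under-adjustment” means here (my own numbers, not from the chapter): after an event the expert’s theory said was unlikely, Bayes’ rule can demand a much larger drop in confidence than experts actually report.

```python
def bayes_update(prior, p_event_if_true, p_event_if_false):
    """Posterior probability of the theory after observing the event."""
    numerator = prior * p_event_if_true
    return numerator / (numerator + (1 - prior) * p_event_if_false)

# Expert starts 80% confident; the observed event had a 20% chance if the
# theory is right and an 80% chance if it is wrong.
posterior = bayes_update(prior=0.8, p_event_if_true=0.2, p_event_if_false=0.8)
print(f"Bayesian posterior: {posterior:.2f}")  # 0.50
# Under-adjustment: staying well above 0.50 (say, ~0.7) after the disconfirmation.
```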
