
Thinking and Deciding 7: Hypothesis Testing


Hypotheses in science

Baron uses the classic example of Semmelweis's hypotheses about the causes of puerperal fever in new mothers. It's pretty bizarre to think that at some point (well, 200 years is a long time ago, I suppose) doctors didn't think to wash their hands before operating on patients. But I suppose that's just hindsight bias. To think that one would actually need to test multiple hypotheses to discover this fact is mind-boggling.

The psychology of hypothesis testing

When people test their own hypotheses, they tend to use the congruence heuristic: they seek out observations that, if their hypothesis is true, will be consistent with it. They are much less likely to look in places where, if their hypothesis is false, the evidence would contradict it. In other words, they look for confirmation rather than hunting for potential disconfirmation. As Baron puts it: "To test a hypothesis, think of a result that would be found if the hypothesis were true and then look for that result" (173).

The lesson: don’t just look for confirming evidence. Look for what would falsify your hypothesis. If you can’t think of what would falsify your hypothesis (I’m looking at you, Atheist Experience (episode 828 or 829?)), then it’s likely you’re falling for this less-than-perfect heuristic.

Baron offers two counter-heuristics. #1: Ask “How likely is a yes answer if I assume that my hypothesis is false?” (174). #2: “Try to think of alternative hypotheses, then choose a test most likely to distinguish them” (175). That’s a good exercise for anyone. Prayer testing? Specific economic predictions? Expected advances in understanding consciousness or morality?
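Counter-heuristic #1 is really just Bayes' rule in disguise: a "yes" result only tells you something if it is more likely when the hypothesis is true than when it is false. Here is a minimal sketch of that idea; all the probabilities are made-up illustrative numbers, not anything from Baron.

```python
# Baron's first counter-heuristic, sketched with Bayes' rule: before running a
# test, ask how likely a "yes" result is if the hypothesis is FALSE, not just
# if it is true. A test is only diagnostic if those two probabilities differ.
# All probabilities below are made-up illustrative numbers.

def posterior(prior, p_yes_if_true, p_yes_if_false):
    """Posterior probability of the hypothesis after observing a 'yes'."""
    numerator = p_yes_if_true * prior
    denominator = numerator + p_yes_if_false * (1 - prior)
    return numerator / denominator

prior = 0.5

# Congruence-heuristic test: 'yes' is likely whether or not the hypothesis
# holds, so observing 'yes' barely moves the posterior.
weak = posterior(prior, p_yes_if_true=0.9, p_yes_if_false=0.8)

# Diagnostic test: 'yes' is unlikely if the hypothesis is false.
strong = posterior(prior, p_yes_if_true=0.9, p_yes_if_false=0.1)

print(round(weak, 3))    # 0.529 -- almost no update from a 50/50 prior
print(round(strong, 3))  # 0.9   -- a real update
```

The punchline: the congruent-looking test returns "yes" almost regardless of the truth, so it barely shifts your beliefs, while the test chosen with the hypothesis-is-false question in mind actually moves the needle.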

There’s also information bias, where people seek more information even when that information will not affect their decision one way or the other. The example used is that of physicians ordering tests for which neither a positive nor a negative result would change the chosen treatment.

Baron introduces utility theory as a way of thinking about when to seek more information. If gathering the information has positive expected utility (net of its cost), do it; if not, the information is not worth getting. This sounds a lot like strategic reliabilism. I wonder if those authors are familiar with this work, because it seems like pretty much the same thing: Bishop and Trout’s book was basically combining probability theory with utility theory.
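The physician example can be made concrete with a toy calculation. This is my own illustrative sketch, not Baron's numbers: a test is worth its cost only when learning the result could change the decision, and the expected utility gain exceeds the price.

```python
# Toy sketch of the utility-theory rule for seeking information, applied to
# the physician example. All utilities and probabilities are made-up
# illustrative numbers.

def expected_gain(p_sick, utilities):
    """Expected utility gain from a (perfectly accurate) test, before cost.

    utilities[(action, state)] is the utility of taking `action`
    ('treat' or 'wait') when the patient is in `state` ('sick' or 'healthy').
    """
    p = {'sick': p_sick, 'healthy': 1 - p_sick}

    # Without the test: commit to the single action with the best expected utility.
    eu_without = max(
        sum(p[s] * utilities[(a, s)] for s in p)
        for a in ('treat', 'wait')
    )

    # With the test: learn the state, then pick the best action for that state.
    eu_with = sum(
        p[s] * max(utilities[(a, s)] for a in ('treat', 'wait'))
        for s in p
    )
    return eu_with - eu_without

utilities = {
    ('treat', 'sick'): 90, ('treat', 'healthy'): 70,
    ('wait', 'sick'): 20, ('wait', 'healthy'): 100,
}
test_cost = 5  # cost of ordering the test, in the same utility units

# Diagnosis nearly certain: the gain (3.75) is below the cost, so ordering
# the test anyway would be information bias.
print(expected_gain(0.875, utilities))  # 3.75

# Genuinely uncertain case: the result can swing the treatment choice, and
# the gain (15) comfortably exceeds the cost.
print(expected_gain(0.5, utilities))    # 15.0
```

The design choice worth noticing: the gain is computed by comparing the best you can do while ignorant against the best you can do once informed. When one action dominates regardless of the result, those two quantities converge, which is exactly Baron's point about tests that cannot affect the treatment.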
