
Worldviews: Part 2- Aristotelian to Newtonian Worldview

Chapter 9: The Structure of the Universe on the Aristotelian Worldview

Those holding the Aristotelian worldview (people before the 1600s) had a certain set of beliefs. Earth is at the center of the universe; the moon comes next, then the sun and planets, then the stars. Teleology and essentialism were also big parts of the worldview. Teleology is the idea that scientific explanation is properly given in terms of goals, purposes, or functions fulfilled. These teleological explanations stand in contrast to mechanistic explanations.

Essentialism is the idea that all objects have essential natures that explain why they act as they do. The essential nature of most objects was thought to be teleological.

Chapter 10: The Preface to Ptolemy’s Almagest- the Earth as Spherical, Stationary, and the Center of the Universe

Ptolemy in his Almagest (c. 150 AD) defended the idea that the Earth is spherical (people had believed this since the time of Plato, around 400 BC), stationary, and the center of the universe. According to DeWitt, although these beliefs may appear hopelessly naive now, Ptolemy had strong reasons to believe as he did.

To support the Earth being a sphere, Ptolemy noted the differences in sunrise time in different parts of the Earth, and that the change is uniform, in line with a sphere. Also, ships and mountains appear to emerge out of the horizon as one approaches them.

The Earth is stationary because otherwise, if you threw a ball into the air, it would fall away from you (commonsense physics). Also, we do not notice the expected signs of high speed, like winds or vibration. Also, the Earth would eventually slow down, just as rocks and other things do when not constantly acted upon. Lastly, no stellar parallax was observed (I think this is almost decisive).

Earth being the center of the universe is supported mostly by gravity. No matter where you are on the sphere, falling things move perpendicular to the surface; they seek the center of the Earth. Since objects naturally seek the center of the universe, and they converge on the center of the Earth, the Earth must be the center of the universe.

Chapter 11: Astronomical Data- The Empirical Facts

Theories must be able to explain and predict the existing facts. So what is the existing empirical data?

  • The stars move in a repeating 24-hour pattern, and they stay in the same positions relative to each other.
  • The sun moves in an arc from east to west, and the point where the sun rises moves north and south. The sun also changes position relative to the stars.
  • The moon goes through phases, and also changes position relative to the stars.
  • The planets are “wanderers” and also move relative to the stars in various ways.

Chapter 12: Astronomical Data- The Philosophical/Conceptual Facts

Some philosophical beliefs came into play in preserving the Aristotelian view. Heavenly bodies were believed to move in perfect circles and to move uniformly, neither speeding up nor slowing down. There was also the principle of motion: objects naturally come to rest unless acted upon. Only agents can move continuously or spontaneously, which is where the teleology comes from.

Chapter 13: The Ptolemaic System

Ptolemy's system from the Almagest was an Earth-centered system that preserved the perfect circle and uniform motion "laws." Ptolemy had most celestial bodies going in perfect circles around the Earth, although the Earth was not always at the exact center. Planets were a special case: each moved on an epicycle, a small circle whose center itself traveled along a larger circle around the Earth, with the motion uniform not about the Earth but about a separate equant point. These epicycles added flexibility to the system to account for complex motions. There also had to be minor epicycles, cycles upon the epicycles, to account for further discrepancies.
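A minimal sketch of the circle-on-circle geometry (leaving out the off-center Earth and the equant; the symbols here are mine, not Ptolemy's): the planet's position is the sum of two uniform circular motions,

$$\vec{p}(t) \;=\; R\,(\cos\omega_1 t,\ \sin\omega_1 t) \;+\; r\,(\cos\omega_2 t,\ \sin\omega_2 t)$$

where R and ω1 are the radius and angular speed of the large carrying circle and r and ω2 those of the epicycle riding on it. By tuning the radii and speeds, and stacking further epicycles, the apparent motion of a planet, including its retrograde loops, can be matched quite closely.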

Taking all these things into account led to a system that made predictions very close to observations.

Chapter 14: The Copernican System

1,400 years after Ptolemy, Copernicus came up with an alternative heliocentric system that still preserved the perfect circle and uniform motion "laws." Copernicus could account for Mars' retrograde motion in terms of the Earth passing Mars. Also, the fact that Venus always appears near the sun was more easily explained by Venus' orbit lying close to the sun, inside Earth's.

The Copernican system was still basically the same as Ptolemy's in terms of complexity and predictive power.

Chapter 15: The Tychonic System

Tycho Brahe was convinced, for reasons outlined above, that the Earth is stationary, but he took many of Copernicus' ideas into account. He had the sun and moon go around the Earth, but the planets revolve around the sun. This solved at least some inconsistencies.

Chapter 16: Kepler’s System

Kepler took over Brahe's data and got rid of some basic assumptions, like the uniform motion and perfect circle "laws." He came up with the ellipse to explain and predict the motion of the planets. He also formulated laws that were basically accurate, the first being that the planets move in elliptical orbits with the sun at one focus. His second law of planetary motion is that planets sweep out equal orbital areas in equal times. He did have a few more erroneous "laws" involving geometrical shapes in predicting the planetary orbits, but he got a lot right.
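Stated in modern notation (my symbols, not Kepler's), the two laws mentioned above are:

$$r(\theta) \;=\; \frac{a\,(1 - e^2)}{1 + e\cos\theta}, \qquad \frac{dA}{dt} \;=\; \text{constant}$$

The first equation is an ellipse with the sun at one focus, where a is the semi-major axis, e the eccentricity, and θ the angle measured from perihelion; the second says the line from the sun to a planet sweeps out area A at a constant rate, so planets move faster when they are closer to the sun.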

Chapter 17: Galileo and the Evidence from the Telescope

Galileo's views were largely informed by observations through the telescope. Importantly, none of the observations directly proved that the Earth is not stationary. Instead, they challenged some basic assumptions and led to an abandonment of the Aristotelian worldview, opening the door for Newton.

Mountains on the moon and sunspots disproved the idea that celestial bodies are perfect. Saturn's rings disproved the idea that planets, made of ether, must be perfect spheres. The moons of Jupiter answered the objection that the Earth orbiting the sun while the moon orbits the Earth would be an awkward configuration, since Jupiter clearly carries moons along as it moves. The full cycle of Venus' phases could only be explained if Venus orbits the sun, because the observed phases align exactly with that arrangement. The discovery of many more stars was evidence that the universe is more vast than previously thought.

Also worth noting is that the Catholic church, by and large, had a history of being tolerant of new scientific views. For example, for the most part the church was not opposed to the Copernican system. Of course, up until the new evidence from the telescope, the Copernican system was generally taken with an instrumentalist attitude, and as such was not contrary to scripture. But the point is that the church did not generally oppose new scientific views, and was generally willing to reinterpret scripture when required by new discoveries (3566).

Chapter 18: A Summary of Problems Facing the Aristotelian Worldview

If the issues from the above chapter show that the Earth is not the center of the universe, many long-held ideas from the Aristotelian worldview are challenged. The tendency of objects to move toward the center of the universe no longer explains why things fall. There has to be some explanation for the Earth's movement, and for the fact that we don't fly off the planet, or that things thrown upward fall back into our hands. Also challenged are the perfect sphere idea and the relatively small universe.

All these issues and more opened the door for new explanations.

Chapter 19: Philosophical and Conceptual Connections in the Development of the New Science

In order to bridge the gap from the Aristotelian to the Newtonian worldview, a few conceptual band-aids may have been necessary. An infinite or very large universe was wholly foreign. Thinkers like Giordano Bruno argued that an infinite universe was compatible with, or even necessitated by, an infinite God. Such thinkers also revived the idea of atomism, which was more compatible with the new ideas about inertia than Aristotelianism was. This may have helped more people accept views that previously would have been unpalatable.

Chapter 20: Overview of the New Science and the Newtonian Worldview

Newton's Principia marks the beginning of the New Science. The three laws of motion, as outlined in the Principia, mark a significant departure from the Aristotelian worldview. First is the law of inertia: an object in motion tends to stay in motion, and an object at rest tends to stay at rest, unless acted upon by a force. Second, F = ma. Third, for every action there is an equal and opposite reaction.

Another “New Science” idea is universal gravitation. The force keeping the moon in orbit around the Earth is the same one that keeps the Earth in orbit around the sun and Jupiter’s moons in orbit around it.
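In modern notation (Newton's own presentation in the Principia is geometric, and the constant G came later), the law says any two masses attract each other as

$$F \;=\; G\,\frac{m_1 m_2}{r^2}$$

where m1 and m2 are the two masses, r the distance between their centers, and G the gravitational constant. The same inverse-square attraction covers the falling apple, the moon around the Earth, the Earth around the sun, and Jupiter's moons, which is what makes the gravitation "universal."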

Newton also marked a change from an essentialist and teleological view of the universe to a mechanistic view- the universe as a machine, or a clock.

Chapter 21: Philosophical Interlude- What is a Scientific Law?

Scientific laws are commonly seen as approximations of natural laws- fundamental features of the universe that make it work as it does.

Laws are commonly seen as describing how things must behave without exception, not just how they happen to behave. They are also seen as objective, that is, in DeWitt's view, independent of humans (I don't really prefer this definition, but DeWitt acknowledges that there are different conceptions of the word). So according to him, the appeal of chocolate mousse is subjective, but the orbit of Jupiter is objective.

Each of these aspects of scientific laws has some bizarre nuances.

Chapter 22: The Development of the Newtonian Worldview- 1700-1900

The more mechanistic Newtonian worldview worked its way into chemistry, biology, and electromagnetic theory in different ways over 1700-1900. Chemistry became less qualitative and more quantitative, in line with atomic theory. Biology abandoned vitalism and embraced the idea that life, broken into its elements, is not different from non-life. Electricity and magnetism were unified. The Michelson-Morley experiments, black-body radiation, and X-rays all provided early challenges to the so-far hugely successful Newtonian worldview.

Worldviews: Part 1- Fundamental Issues

Chapter 1: Worldviews

DeWitt introduces the concept of a worldview, which is more an interlocking set of beliefs than a list. He also stresses that even though each person has a different worldview, he is using the word more broadly to characterize similar sets of beliefs, like the Aristotelian or Newtonian worldview.

Also of note is the distinction between core beliefs, which would lead to a great deal of alteration in the overall worldview if changed, and border (I forget what word he uses) beliefs, that can change without affecting much.

Also, we often lack direct evidence for our beliefs (why do you believe the Earth goes around the sun?), and our worldviews often run counter to common sense (objects in motion stay in motion).

a worldview is not merely a collection of separate, independent, unrelated beliefs, but is instead an intertwined, interrelated, interconnected system of beliefs (435).

It seems to be a fairly widespread belief that the accumulation of facts is a relatively straightforward process, and that science is, in large part at least, geared toward generating true theories that account for such facts. Both of these are largely misconceptions about facts, truth, and their relations to science (670).

Chapter 2: Truth

What is truth? What makes something true or a fact?

The philosophical answer can be broken up into two major groups- correspondence theories, and coherence theories.

Correspondence theories state that correspondence with reality is what makes true statements true.

according to correspondence theories of truth, what makes a true belief true is that the belief corresponds to reality. What makes a false belief false is that the belief fails to correspond to reality (732)

“reality” refers to “real” reality: a reality that is completely objective, generally independent of us, and generally speaking in no way depends on what people believe that reality to be like (744).

Coherence theories state that beliefs are made true by their coherence with other beliefs. Coherence theories differ in whose "other beliefs" need to be cohered with to make a belief true. Examples include one's own beliefs, or the beliefs of Western scientists.

According to coherence theories of truth, what makes a belief true is that the belief coheres, or ties in, with other beliefs (753).

I think correspondence theories sound the best, but DeWitt states some problems. The objection is that our beliefs come from our representations of reality. In order to know for certain that our beliefs are true, we would need to compare our representations to reality itself and see if they match. But we can't do this. We can only compare some representations to others, so according to the representational theory of reality, we can never know for certain that our representations accurately reflect reality.

The bottom line is that, although we all believe our experiences are caused by a “normal” reality, we have no way of knowing for sure that they are not caused by the sort of reality envisioned in the Total Recall scenario. In short, we have no way of knowing for sure what reality is really like (896).

My response to this is to say, "so what?" It's an epistemological objection. It doesn't point to any incoherence or contradiction. I already accept that I can't know with absolute certainty what is true or not. To claim otherwise, to me, seems overconfident.

Coherence theories suffer from the flaw of severe relativism. I was expecting DeWitt to use something of a self-defeating example (correspondence theory could be true for you, coherence theory true for me). If we make group beliefs the standard by which we judge coherence, then defining and delineating the group becomes a problem as well.

In summary, individualistic versions of coherence theories seem to degenerate into an unacceptable sort of relativism. Group versions of coherence theories, on the other hand, seem to avoid the relativism problem, but in doing so they introduce new and substantial problems (958).

Chapter 3: Empirical Facts and Philosophical/Conceptual Facts

DeWitt distinguishes between empirical facts, which can be directly experienced (this pencil is on the table), and philosophical/conceptual facts, which involve assumptions (the same pencil that was on the table is now hidden in the drawer). There is no sharp distinction. Even empirical facts need some sort of philosophical assumptions (my senses are generally reliable). But the point is that many of our beliefs are built from inferences from our worldviews. They fit into, and come from, our entire web of beliefs, not just our direct experiences.

Also, when he says fact, he means not something necessarily true, but something generally believed, and reasonably justified at the time (e.g. all things in motion tend to come to a stop).

Chapter 4: Confirming and Disconfirming Evidence and Reasoning

Confirming reasoning goes:

If T then O
O
Therefore probably T

This can be problematic because an enormous number of theories (T) can entail O. So in comes disconfirming reasoning.

If T then O
Not O
Therefore not T

Confirming evidence is inductive (true premises make the conclusion more likely, not certain), and disconfirming evidence is deductive (true premises logically guarantee the conclusion).

The main problem with disconfirming evidence is that one can add auxiliary hypotheses to prevent the theory from being disconfirmed. One can discard any part of a web of beliefs instead of the hypothesis being tested (my instruments were off, I made a mistake, there were skeptic waves in the air, you did not pray with sincerity, cold fusion messes with neutrons differently, etc.).

Chapter 5: The Quine-Duhem Thesis and Implications for Scientific Method

There are three parts to this, from the note:

we will look at three of the key ideas associated with the Quine – Duhem thesis, namely, the idea that (to borrow a phrase from Quine) our beliefs face the “tribunal of experience” not singly, but in a body; the claim that there can typically be no “crucial experiments” to decide which of two competing theories is correct; and the notion of underdetermination, that is, the idea that the available data typically does not pick out a unique theory as being correct (1362).

Aristotle's view of science was to begin with axioms and then draw out all the implications through deductive reasoning. Science was meant to achieve certainty. Descartes tried to achieve knowledge in the same way, but couldn't really come to any good, agreed-upon first principles.

On the hypothetico-deductive model of science:

The basic idea behind the hypothetico-deductive method is that from a hypothesis or set of hypotheses (or theories, broadly speaking) one deduces observational consequences, and then tests to see if those consequences are observed. If so, then for the reasons discussed earlier in relation to confirmation reasoning, this is taken as support for the hypothesis. If the consequences are not observed, then again for the reasons discussed earlier in the context of disconfirmation reasoning, this is taken as evidence against the hypothesis (1603).

it is safe to say that this method plays an important role in science. However, consider again the issues we have discussed -the inductive nature of confirmation reasoning, the possibility of rejecting auxiliary hypotheses in the face of disconfirming evidence, the underdetermination of theories, the difficulty if not impossibility of designing crucial experiments, the notion that hypotheses are tested in groups rather than singly, and so on. The view that science proceeds by a relatively simple process of generating predictions from hypotheses and then accepting or rejecting hypotheses depending on whether the prediction is observed seems, given what we have discussed, to be at best an overly simplistic account of science (1616).

Chapter 6: Problems and Puzzles of Induction

Hume's problem of induction points out that using inductive reasoning is circular. Inductive reasoning rests on the assumption that the future will resemble the past. We can't justify this assumption, except with more inductive reasoning: since in the past the future has turned out to resemble the past, it will keep doing so. But this assumes exactly what we're trying to prove, that the future will resemble the past.

Hempel's Raven paradox points out that if we're trying to prove that all X's are Y, then each X that is Y is evidence for the proposition, but so is each non-X, non-Y thing. E.g. if we want to prove that all quasars are a great distance from Earth, then each quasar we see that is a great distance from Earth is evidence for this. But each non-quasar thing that is not a great distance from Earth is also evidence. My computer next to me is evidence that all quasars are a great distance from Earth. This is supposed to be a paradox, but extremely weak evidence is still evidence, and my computer is extremely weak evidence for the proposition.
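The paradox trades on a logical equivalence (standard notation, not DeWitt's):

$$\forall x\,\big(Q(x) \rightarrow F(x)\big) \;\equiv\; \forall x\,\big(\neg F(x) \rightarrow \neg Q(x)\big)$$

where Q(x) means "x is a quasar" and F(x) means "x is a great distance from Earth." Anything that confirms the right-hand generalization, such as a nearby thing that is not a quasar, thereby confirms the logically equivalent left-hand one.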

Lastly is Goodman’s Grue problem. Each green emerald we find supports the statement, “All emeralds are green.” But it also supports the statement, “All emeralds are grue,” where grue means: “green until 2050, then blue.” It’s hard to figure out why one conclusion is better. I’d say something about complexity, assumptions, etc. Not sure what a good answer is though.

Chapter 7: Falsifiability

We call an idea falsifiable when evidence may potentially lead one to discard it. The concept is more complicated than it seems because some people may not accept the same considerations as evidence. This makes it appear that their ideas are unfalsifiable when it is actually the case that they have a different idea of what counts as evidence. For example, one may only accept textual information from the Bible as evidence, so when they resist empirical data, it looks like their ideas are unfalsifiable. In fact, their ideas are falsifiable, just by different evidence.

So it may be important to get on the same page as to what counts as evidence instead of presenting facts that will not affect the other person’s ideas.

DeWitt's view is that it is people's attitudes toward ideas, more than the ideas themselves, that are unfalsifiable. One can hold an unfalsifiable attitude toward pretty much any view.

Chapter 8: Instrumentalism and Realism

one says that a theory explains a piece of existing data or observation if one could have predicted the data from the theory (1961).

whether we require theories to reflect the way things really are – is a controversial issue. It is the issue that distinguishes instrumentalists and realists. For an instrumentalist, an adequate theory is one that predicts and explains, and whether that theory reflects or models reality is not an important consideration. For a realist, on the other hand, an adequate theory must not only predict and explain, but in addition it must reflect the way things really are (1991).

Contemporary Metaethics: An Introduction

Chapter 1: Introduction

Metaethics examines what people are doing when engaging in normative ethics. How are they using the words? How does one judge between one ethical answer and another? What data counts?

The main point of contention in metaethics is between cognitivists, who think that ethics is a cognitive pursuit with truth values, and non-cognitivists, who think that ethics is made of judgments like emotions or desires, which do not have truth values (my desire for a beer is not true or false).

Strong cognitivists think that moral judgments can be true or false and that they can result from accessing the facts that make them true. Naturalist strong cognitivists think that the facts that go into the moral truth of the matter are facts about the natural world, the world studied by the natural sciences and psychology. Non-naturalists think moral facts are not reducible, or are sui generis (of their own kind).

Moral realism is basically the idea that moral claims are true or false independently of human opinion (and that at least some are true). Mackie's error theory says that moral judgments are always false, and is an example of an anti-realist theory.

Weak cognitivists think there is a truth or falsity of moral judgments, but that they cannot be judged through cognitive access to moral facts.

Among the non-cognitivists, emotivism says that moral judgments are expressions of approval or disapproval, while quasi-realism says moral judgments express our dispositions to form sentiments of approval or disapproval.

Finally, internalists see a necessary connection between moral judgments and motivations to act. Externalists do not see this necessary connection.

Chapter 2: Moore’s Attack on Ethical Naturalism

Moore's basic point is that since we can, without conceptual confusion, ask "Is X good?" where X is any natural or other property, X cannot be identical to, or the definition of, good.

My first reaction is that this is just begging the question. Some people might say that yes there is in fact conceptual confusion if you have to ask whether some X is good.

My second reaction is to throw up my hands since this is yet another philosophical issue in which tabooing our words would cut through 95% of the confusion and debate.

People have multiple contradictory ideas about what counts as “good.” So replace it with a word that gets to what you actually are striving for, and don’t pretend your definition actually fits everyone else’s use. I don’t care about what a God commands us, because there is no such thing. I do care about suffering and joy, because those things do exist. Let’s admit some words are amorphous, and move on to more productive debates.

it makes sense to ask ‘Is a pleasurable action good?’ or ‘Is something which we desire to desire good?’ Someone asking these questions betrays no conceptual confusion (420).

So: (6) The property of being good cannot as a matter of conceptual necessity be identical to the property of being N. This argument is often referred to as ‘the Open-Question Argument’ (424).

Note: Just taboo the word!

(13) If ‘good’ and ‘N’ are analytically equivalent, then ceteris paribus competent speakers should – after conceptual reflection – come to find it natural to guide their evaluative judgements by the analysis (528).

Note: Optical illusions!

Chapter 3: Emotivism

This chapter explores the idea that moral judgments are basically just evaluations (boo murder! yay charity!). Ayer believes this by ruling out all other alternatives. He rules out non-naturalism based on logical positivism, the idea that something is only literally significant if it is empirically verifiable, or analytic. This ruling out comes into question right away because logical positivism has some problems.

Ayer rules out naturalism with objections like Moore's. Miller proceeds on the assumption that Moore's objections work, and goes from there.

Miller's biggest objections are that Moore's argument would rule out Ayer's own view as well (3.6), and the Moral Attitude Problem, which is that emotivists cannot plausibly say what sort of emotion or feeling moral judgments express. It's simply a "special" sentiment.

As far as I see, I just don’t use moral judgments in the way Ayer seems to think they are used. It is not descriptively correct about my usage. So, as usual, let’s taboo our words about our “moral” judgments and make some actual progress.

Philosophers often want to say that properties of one sort supervene on properties of a different sort. What does it mean to say that the moral properties of an object supervene on its natural properties? It means if two things have exactly the same natural properties, then they also have exactly the same moral properties. If you find that two things have different moral properties, you must also find that they differ in some way in respect of their natural properties (745).
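Put formally (my formalization of the quoted definition, not Miller's wording):

$$\forall x\,\forall y\ \big(N(x) = N(y) \;\rightarrow\; M(x) = M(y)\big)$$

where N(x) is the totality of x's natural properties and M(x) its moral properties; contraposing, any difference in moral properties requires some difference in natural properties.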

Chapter 4 and Chapter 5 outline two more non-cognitivist theories. I won’t go over them here because there’s way too much, but I swear I wouldn’t mind never hearing the Frege-Geach problem described again in my life. These theories get into trouble when accounting for if-then statements that seem to use moral judgments as facts that can be true or false (this is the Frege-Geach problem).  They also seem to suffer from the moral attitude problem- they have trouble accounting for what kind of emotion or sentiment moral judgments are reflecting.

Blackburn’s quasi-realism is also a version of projectivism, explicitly designed to meet the problems raised for emotivism (1156).

Projectivism is the philosophy of evaluation which says that evaluative properties are projections of our own sentiments (emotions, reactions, attitudes, commendations). Quasi-realism is the enterprise of explaining why our discourse has the shape it does, in particular by way of treating evaluative predicates like others, if projectivism is true. It thus seeks to explain, and justify, the realistic-seeming nature of our talk of evaluations – the way we think we can be wrong about them, that there is a truth to be found, and so on (1159).

Thus, if what I have argued in this chapter is correct, the quasi-realist has at least the beginnings of replies to the four problems which plagued emotivism: the problem of implied error, the Frege-Geach problem, the problem of mind-dependence and (via the construction of moral truth) the problem of the schizoid attitude. But, until the moral attitude problem is solved, the quasi-realist position will not fully satisfy those who seek a metaethical theory with a plausible psychology of morals (1986).

Gibbard’s theory is non-cognitivist in this sense: according to Gibbard, a moral judgement expresses an agent’s acceptance of norms (2008).

Gibbard is first and foremost a non-cognitivist about rationality. To say that X is rational is not to ascribe a property to X, to utter a truth-conditional statement about X; rather, it is to express acceptance of a system of norms which permits X (2019).

Chapter 6: Mackie’s Error Theory, the Argument from Queerness, and Moral Fictionalism

Finally, a cognitivist account! Eat it, Frege and Geach! Mackie thinks that moral judgments can have a true/false value, but he thinks that they happen to always be false. This is because moral terms like "good" and "evil" entail things about the world that are not true, namely that categorical, objective prescriptions exist, when in fact they do not. There is nothing that always and everywhere obligates us to commit or refrain from an act. That means that judging X as wrong is false, because nothing is wrong, i.e. we are never categorically, objectively obligated to avoid anything.

Mackie’s argument from queerness is the main reason to reject moral realism. True moral judgments would reflect something absolutely bizarre about the world which has no other precedent or likeness.

Wright says that we should just be able to alter our use of language. So there is no absolute obligation. But there are still things that help us and things that don’t, things that make people happy, etc. Why not adjust our use of good and evil in that direction? The answer is that the universality and categorical bindingness of the obligations are inherent in our use of language. Better to abandon the words than radically redefine them.

The way I see it, there can be professional views of things that do in fact contrast with the regular use and intuitions of everyday people. Even if everyone thought that water was an element, I don’t think it would be necessary for scientists to proclaim that there is no water, and then use another word. Same for things like moral, or matter, or sunrise, etc. Sometimes the expert use of words differs, is more precise, and more accurate. I don’t judge my definitions necessarily by how Joe Bag-o-doughnuts uses the term.

An error-theory about a particular region of discourse is the claim that the positive, atomic sentences of that discourse are systematically and uniformly false (2344).

Mackie’s conceptual claim is that our concept of a moral requirement is the concept of an objectively, categorically prescriptive requirement. What does this mean? To say that moral requirements are prescriptive is to say that they tell us how we ought to act, to say that they give us reasons for acting (2417).

Chapter 7 is an attempt to answer the argument from queerness by giving an account of moral judgments on analogy with color judgments. Miller judges this to be unsuccessful.

Unlike our colour concepts, our concept of a moral fact is the concept of a categorical reason for action. So moral facts, if such there be, must be capable of providing a reason for agents to act in a particular way independently of facts about their desires or affective make-up (3150).

Chapter 8 involves some side issues of metaethics. Just gonna skip that for now.

Chapters 9 and 10 finally get to different forms of naturalism, but I must say they're not anything near what I was expecting. Cornell realism tries to portray morality as something irreducible: what is right and wrong doesn't get reduced to other natural properties the way water might be reduced to H2O. Chapter 10 is something nearer to what seems realistic. Railton attempts to reduce moral value to naturalistic properties. The account of non-moral value seemed worth reading. Railton sees something as valuable insofar as it would be desired by a sort of idealized, fully informed version of oneself, with a bunch of philosophical adjustments to make it more consistent. I suppose that's an interesting idea, but I'm still a bit confused as to how this fully informed person would form these desires. How do certain beliefs actually affect the desires, and why? It seems this account glosses over much of what makes the question difficult. What is desirable, and why should it be?

Moral value is similar to non-moral value, but from the level of society. What I believe this account lacks is normative pull for the individuals. I don’t see a reason for action that exists for a person if something only has moral value, but not non-moral value. I don’t mind still calling such things morally good or bad, but it would violate some of what Miller thinks is essential for moral value.

Chapter 11 covers some non-naturalism, which Miller ultimately abandons as defective, for reasons I don’t feel like going into.

Conclusion:

Overall I got a little fed up by what seemed to me to be inconsequential refinements of relatively useless ideas. If we tabooed our words, and appealed more to moral psychology, as well as the psychology of choice and reasoning and action, then I think we could skip over a lot of this philosophical work. I think we already have a good account of how people make moral judgments in a descriptive sense. I'm thinking of Haidt's Righteous Mind, Hauser's Moral Minds, and Wright's Moral Animal.

Speaking more normatively, I think Railton's naturalistic morality is worth developing a little further. I'd read a whole book on that subject. Alonzo Fyfe's desirism still seems to capture "oughtness" better than any account I've come into contact with so far, and it seems to share much with Railton's view. Given that desires are the only reasons for action that exist, figuring out what desires are, and what people actually desire, as well as how to pursue such things, seems to be the best way to pursue that idea, something similar to Harris' science of morality program.

A Contemporary Introduction to Free Will

Chapter 1 (Introduction) introduces the main issues of contention regarding free will: defining types of freedom (from surface to ultimate), how freedom relates to responsibility, how determinism and necessity threaten (or don't) free will, choices and could-have-done-otherwiseness and their relation to free will, and lastly how modern science changes our conception of free will.

Chapter 2 (Compatibilism) describes the views of compatibilists (Dennett, Hume, Mill). Compatibilists say that determinism and free will are compatible. They argue this by saying that what we normally mean by freedom, or the freedom worth having, is made up of 1) the power or ability to make some decision, and 2) an absence of constraints like physical restraints, coercion, or compulsion. The ability to do otherwise is accounted for not by indeterminism, but by the ability to do otherwise if one wanted to. I can eat or not eat a pie, if I want to, therefore I am free to eat it and do otherwise, and free from constraints. Therefore I have freedom of will regarding eating my pie.

Some think of freedom as the ultimate power to choose, free even from desires, moods, character, etc. Compatibilists say that such freedom is incoherent, because it would entail that to be free, one may act totally contrary to her desires, beliefs, and everything that would appear to determine a choice. This is the opposite of true freedom.

Lastly, the chapter answers common misunderstandings that lead to objections to compatibilist free will. People mistakenly think that determinism entails constraint, control, fatalism, or mechanism. It doesn't. Kane agrees that such objections result from misunderstandings, and goes on to say that a correct objection would have to show that determinism in itself is a challenge to the possibility of free will, not that it entails these other things.

Chapter 3 (Incompatibilism) outlines the major argument against compatibilism, the consequence argument. Shortened:

1) There is nothing we can now do to change the past and the laws of nature.
2) Our present actions are the necessary consequences of the past and the laws of nature.
3) There is nothing we can now do to change the laws of nature and nothing we can now do to change the fact that our present actions are the necessary consequences of the past and the laws of nature.
4) There is nothing we can do to change the fact that our present actions occur (i.e. we cannot ever do otherwise).
5) If we cannot ever do otherwise, then there is no free will.
6) Therefore there is no free will.

The real work goes into defining the "can" (and "cannot") in the premises above. If we take that "can" to mean what compatibilists take it to mean, then there's no problem. They define "can" as "would have done otherwise if we had wanted to." So we could change our present actions, if we had the desire to do so, refuting premise four.

Indeterminists respond by saying that this definition of "can" doesn't work. If you took away someone's power to desire something (I can never desire chocolate ice cream because I got a lobotomy), then of course I "can't" choose chocolate. But by the compatibilist definition, I still "can" choose chocolate ice cream, because I would if I wanted to.

This seems to be a clear case of where tabooing the word “can” would be appropriate. Sure, by some definitions I “can” choose the chocolate ice cream. By others, I “can’t.” There is no “true” meaning of can or can’t that would resolve this debate. Reality is the same either way one argues.

What matters is what type of freedom is worth having. I think the compatibilist one is the winner in this case.

Chapter 4 (Libertarianism, Indeterminism, and Chance) outlines the libertarian position. Kane defines libertarians as incompatibilists who believe in free will. So determinism and free will are not compatible, and since free will exists, determinism is false.

Libertarians need to argue both that free will is in fact incompatible with determinism, and that indeterminist free will is coherent. Kane lists eight problems with indeterminist free will that must be surmounted. They all look pretty much like the arbitrariness/randomness problem that Thomas Pink described.

The chapter ends with the common escape plan. To get away from this, people tend to posit something "outside" of the random/determined dichotomy: something that is neither determined by past causes nor merely random. The different extra factors are the subject of the next chapter.

Chapter 5 (Minds, Selves, and Agent Causes) introduces the attempts to escape the arbitrariness problem. One way is to be a dualist, and say that something non-physical, outside the laws of nature, is the source of the free choice. This doesn't escape the problem, though, because one may say that the choice is "determined" by the non-physical thing and its non-physical laws; otherwise it would appear to be random as well. The same problems exist for a non-physical source of a choice as for a physical one.

Kant's noumenal self and agent causation are both other attempts, but they don't solve the problem. Instead they hide it in a black mystery box, saying that whatever the source of the non-random, non-determined choice is, it is impossible for science to see, or is totally different and unknown and mysterious. Mostly this is starting with the conclusion and reasoning backwards as far as possible, before stopping at a black box filled with the rest of the unanswered questions.

Chapter 6 (Actions, Reasons, Causes) explores alternate ways to be libertarian. They all end up mysterian, or simply claiming their truth by stipulation, and doing no actual explanatory work.

Chapter 7 (Is Free Will Possible? Hard Determinists and Other Skeptics) looks at the third main party of the debate, the hard determinists, who think free will and determinism are incompatible and side with the truth of determinism, ruling out free will. Some of these hard determinists actually allow the possibility of indeterminism due to modern physics, but still rule out free will on conceptual grounds.

One basic argument against free will is that it would require humans to be an ultimate cause of themselves, which is severely implausible. To be ultimately responsible, one would need to cause everything that led to one’s actions. Then, one would need to cause everything about oneself that led to that cause, and so on to infinity.

Kane looks at different views of crime and punishment given hard determinism. Some in this camp say that the retributive idea of justice must be given up, but that reforming/deterring criminals and protecting non-criminals are still good reasons for the justice system. Some say love is suddenly not as wonderful since it is not "freely" given. But I don't see how such "freely" (randomly?) given love would be worth more than love that is determined by something. Lastly, some toolbag named Smilansky thinks people must maintain the illusion of free will or chaos will reign. I just scoff at that. If Smilansky can somehow refrain from killing people for fun, then I don't see why others can't do the same. I also think he's flat out wrong that love and justice lose their meaning given determinism.

Chapter 8 (Moral Responsibility and Alternative Possibilities) introduces the principle of alternative possibilities (PAP) which states that “Persons are morally responsible for what they have done only if they could have done otherwise.” It then goes through a bunch of examples (Frankfurt examples) and examines when the person intuitively seems morally responsible. It almost seems like a waste of time. Why should my base intuitions be the ultimate guide to whether someone is morally responsible? It seems like many of these philosophers have conflicting ones. Why not find the reasons that exist to hold someone responsible, and see if those reasons apply? I suppose it’s outside of the scope of this book to solve morality. It seems like this chapter helps show why some people think philosophy is a huge waste of time.

Chapters 9 and 10 cover some alternative compatibilist theories of free will. It all gets rather frustrating because they seem to rely nearly totally on looking at thought experiments (Frankfurt style) and asking "does this seem intuitively like freedom?" Why would our intuitions be a good judge in these unbelievably contrived examples? And if we don't want to call some scenarios "free," what difference does it make in the end? When we're looking at moral issues, or responsibility, or blame, etc., none of these theories of free will do a good job of actually connecting the concepts of freedom with the outcomes. Instead, it's all about "does Black intuitively seem blameworthy?" That seems to be a pretty bad way of judging moral matters.

Chapter 11 (Ultimate Responsibility) explores some further strategies that libertarians use to maintain their lame view. It looks at alternative possibilities and ultimate responsibility, and explores the claim that both are necessary (but not sufficient) for free will. Ultimate responsibility is not compatible with determinism, because at some point the agent needs to be the ultimate determinant of the outcome, not past events.

There is also the regress problem: one must be the cause of the choice. One must also be the cause of being the cause of the choice. And so on. If at any point one is not the cause of the cause, then one is no longer ultimately responsible. Kane says that indeterminism can somehow break the regress. I'm not buying it.

Austin-style examples show that indeterminism and alternative possibilities are not jointly sufficient for free will.

Chapter 12 (Free Will and Modern Science) looks at how sciences like quantum physics and neuroscience affect our view of free will. Quantum theory is supposed to introduce indeterminism, but Kane points out that this alone does not lead to free will. Also, at the level of neurons, the indeterminacy is almost negligible. So Kane adds chaos theory, the idea that tiny changes can cascade into large changes, to leave some chance for quantum changes to actually matter.

It's maddening how many extra assumptions are necessary to make this scenario even possible. And even if a quantum fluctuation led to a different choice, that still does not lead to free will. It is still the opposite of the "free will worth having."

Another frustration comes in the form of the “parallel processing” example. A person has two conflicting desires, and it is not determined which is to be chosen. Kane is trying to argue that it is not random or arbitrary when one occurs, but he doesn’t actually solve the arbitrariness problem. The decision is “willed” either way and has reasons either way, but there was nothing determining which of the two occurred. It is still arbitrary, even if there were reasons for both actions, and the person ends up willing whichever is chosen. Why did she will A over B? No reason. It’s arbitrary. Kane seems to be hiding the same bad arguments under more philosophical and scientific baggage. It also seems to be pure philosophical speculation.

Chapter 13 (Predestination, Divine Knowledge, and Free Will) relates free will to God's attributes. If God knows everything that will happen, doesn't that mean that people are not free to do otherwise? If their decisions were indeterminate, then it would seem there would be no fact of the matter about whether we would do A or not-A. But that messes up God's foreknowledge.

There’s a lame response by Augustine, saying that foreknowledge doesn’t cause the future events. I remember someone who made this point over and over as if it was relevant. So what? That doesn’t mean that events are indeterminate. If free will needs the libertarian ability to do otherwise, knowing the fact of whether A or not-A is freely chosen beforehand is a contradiction.

Boethius and Aquinas try to escape by claiming that God is eternal, meaning outside of time. That means he doesn't foreknow what will happen; he sees time as one co-occurring block. Again, so what? The block seems just as determinate. There are the same problems. If God eternally/atemporally knows the events of the universe, there is still no room for indeterminate freedom.

Molinists came up with "middle knowledge," which is basically a way of labeling the problem away. God knows what is necessarily true, and what might happen. He also knows which contingent things are true. Molinists say that between these two is "middle knowledge," which is knowledge of what people would freely do in different situations.

But saying God has middle knowledge just begs the question. It’s middle knowledge that is presumably incoherent. Molinists just label it middle knowledge, and say God has it. Not a solution as far as I can see. By their own words they are basically saying it is “magically and mysteriously” true. “The third type is middle knowledge, by which in virtue of the most profound and inscrutable comprehensions of each free will, God saw in His own essence what each such will would do with its innate freedom. . .” It seems to be a case of arguing from assumed “facts” of prophecy, and God’s omniscience.

And then open theists strip away some of God’s knowledge, making him arguably not all-knowing. This seems to be the most honest approach of the lot.

Chapter 14 (Conclusion: Five Freedoms) is a nice summary of the entire book. Three compatibilist definitions of freedom and two incompatibilist ones are examined. Freedom of self-realization and reflective self-control seem to be the most plausible and satisfying freedoms. These are the classical and new compatibilist views of freedom. The third compatibilist freedom, that of self-perfection, seems to take away too much responsibility for bad acts. People are only responsible for the good ones. Not too convincing.

Freedom of self-determination and self-formation are the libertarian freedoms, and they both seem to suffer from the flaws of arbitrariness and incoherence.

Final Thoughts: It seems like a great deal of this debate could be passed over if we just tabooed our words, and acknowledged that in some senses we are free, and in others we are not. Of course there is a more factual debate about what kind of freedom is worth having, and where things like moral condemnation and responsibility come from. On these matters I side much more with the compatibilists. They seem to provide reasons for punishment and rewards and praise and blame that are simply assumed by libertarians, but never really grounded in anything.

Understanding Naturalism

Introduction

Naturalism is the predominant view of modern philosophers, but there is much confusion over what that means. There is no necessary-and-sufficient definition of naturalism that successfully encompasses how people use the term. Ritchie examines the term "natural" in contrast to the supernatural, the artificial, and the normative.

1. First philosophy

Naturalists commonly say "there is no first philosophy." This is in part a response to Descartes' attempt to start from the very basics (first philosophy) and build an entire worldview from there in order to solve the problem of scepticism. Similarly, grounding induction is another first-philosophy problem. Attempts to create a first philosophy have failed for thousands of years, all the way up to the more recent philosopher Carnap, who came up with some creative ways of using language and pragmatic concerns to create a first philosophy. As always, these highly sophisticated attempts fail when you take a closer look.

2. Quine and naturalized epistemology

Quine acknowledged the failure of first philosophy and more or less gave up on the attempt. Instead, his naturalized epistemology begins with the findings of science and common sense, and keeps everything open to revision, like Neurath's boat. We start with the boat, knowing that some, if not all, of the planks are in need of replacing. But we must replace them little by little so as not to sink the boat. And we need to use the boat we start with, lest we immediately sink to the bottom.

Ritchie makes a few criticisms of Quine's epistemology. He acknowledges that Quine leveled some decent criticisms against first philosophy, but Quine's epistemology seems to suffer from the same dangers of skepticism as first philosophy. The most important objection is that it does not successfully link up our sensory data with the real world. There is always a gap, and no reason to suppose that gap is crossed successfully.

3. Reliabilism

Quine’s (allegedly) failed epistemology leads us to consider reliabilism. Instead of seeing knowledge as (consciously and explicitly) justified true belief, reliabilists see knowledge as true belief that comes from generally reliable mechanisms for obtaining truth. These mechanisms need not be understood, and conscious reasons need not be proposed to claim knowledge, but the method of obtaining the solution must be good at obtaining truth (take the example of those who know chicken sexes without knowing how they know- their method is reliable, and leads to true beliefs).

Ritchie claims two major problems for reliabilism. First, the “type” of mechanism is hard to judge. A judgment may fall into multiple different “types” that are both reliable and unreliable depending on how you look at it. When I look outside, do we judge reliability by how good sight is generally, or how good sight is when looking out a window during the day while it is snowing, etc.? This may lead to contradictory outcomes.

Second, Ritchie says reliabilism doesn't account for science and its success. This is because it covers our immediate, unreflective judgments, not the more elaborate, built-up judgments that science is all about.

To me, it seems to suffer from circularity as well. How do we judge what is reliable in the first place? We’d have to first know what is true and false to know how well our judgments come to that standard, and therefore what kinds of methods are reliable. But the only way we can do that is by using the methods we already have, for which we have no reliability data yet.

4. Naturalized philosophy of science

Naturalism is largely characterized by its stance regarding science. Respect for science is an essential aspect of naturalism, and yet multiple different views account for the success of science. Naturalists can be realists or anti-realists, but Ritchie prefers the stance of the natural ontological attitude, which is allegedly neither. This stance is a basic trust of what we perceive, and a basic trust of the findings of science. This somehow falls outside of both realism and anti-realism because it does not assume the correspondence theory of truth, or any theory of truth. This sounds like gobbledygook to me.

Finally, Ritchie compares the views of naturalists on what is real, and the view of scientists. Since science is an essential part of naturalism, it is pretty important that there be an agreement, but Quine’s and others’ views don’t end up actually matching the view of scientists, so Maddy offers a correction to that. There’s really no need to assume a lot of philosophical ontology regarding mathematics, because it doesn’t always fit observations perfectly, and because scientists use many kinds of constructs to get things done.

5. Naturalizing metaphysics

This chapter looks at naturalism as entailing physicalism, the view that reality can be described basically as physics says it is. The first argument for this is the argument from reductive success. Different phenomena have been reduced to more basic levels, eventually reaching physics; therefore all things are likely to reduce as well. Biology -> chemistry -> physics. Ritchie says this argument is very bad because it only takes the hits, ignoring the misses, and applies its conclusion too broadly to everything.

Ritchie moves on to supervenience. Instead of reducing properties of things directly to physical events, properties may instead supervene on them. This means that no changes can occur in the supervening state without a change in the subvening one. So consciousness may supervene on the brain if changes in consciousness necessitate a change in the brain. Morality may supervene on the physical world if changes in whether something is "good" or "bad" necessitate a change in the physical make-up of the situation.

6. Naturalism without physicalism?

This chapter begins with a challenge to physicalism by introducing three arguments: the argument from consciousness (qualia), the Mary/bat argument, and the zombie argument. In addition, trying to define what physics is becomes very difficult. If physics is what describes subatomic things, then there is no specific content that it has. If physics is what current science says is likely to be true, then it is almost certainly false. What counts as physics is always changing, and a fairly amorphous picture, so to link oneself to it is to link oneself to an amorphous blob of changing ideas.

7. Meaning and truth

What are meaning and truth on naturalism? A good place to start is a causal account: a representation must have a causal connection to the thing it represents. This can't be quite correct, because, for one, we couldn't account for representations of false things, or of things never present. There is also no room for error, no norm, simply a cause-and-effect relationship.

Teleosemantics attempts to solve this problem. Something's proper function is basically the function it evolved to perform, the reason it continued as a trait. If the proper function of brains is to create correct representations of the world, then errors occur when they fail to do so.

The swampman objection takes away the evolutionary history, but leaves what intuitively looks like a thing that can create representations.

Ritchie says that meaning and truth are two unresolved problems in naturalism, with no satisfying solution just yet. I’m not sure if that means there are any satisfying solutions elsewhere, or what such an answer would even look like.

Heuristics and Biases 3B- Expert Judgment

37. Assessing Uncertainty in Physical Constants

Even quantities as seemingly concrete as physical constants (the speed of light, the gravitational constant) undergo refinement, and experts' confidence that specific values are correct has proven susceptible to overconfidence bias.

38. Do Analysts Overreact?

Yup. They take negative and positive information as more predictive of future performance than it actually turns out to be.

39. The Calibration of Expert Judgement: Heuristics and Biases Beyond the Laboratory

This chapter explores the question of how well the heuristics and biases research may generalize outside the laboratory to professional domains, where people have more training, more is at stake, and they aren’t just dumb psych 101 students. Turns out that biases don’t just exist in the lab (this happens to be a good example of why psych can still come to reasonable conclusions about humans as a whole based on psych 101 student data- it’s not perfect, but it’s evidence).

Some experts are awesome though, like bridge players and meteorologists (predicting rain in Chicago). They're very well calibrated. A common theme throughout the many cases is base-rate neglect, which helps explain both the successes and the failures. When base rates are nearer 50/50, predictions are better. When base rates are very high or very low, experts under- or over-predict.
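A toy Bayes calculation (my numbers, not the chapter's) shows why ignoring the base rate hurts most at the extremes. Suppose the outcome has a 5% base rate and the evidence is four times as likely when the outcome will occur as when it won't:

$$P(H\mid E) \;=\; \frac{P(E\mid H)\,P(H)}{P(E\mid H)\,P(H) + P(E\mid \neg H)\,P(\neg H)} \;=\; \frac{0.8 \times 0.05}{0.8 \times 0.05 + 0.2 \times 0.95} \;\approx\; 0.17$$

A judge who reports something like 80% because the evidence "strongly supports" the outcome will be badly overconfident here, whereas the same evidence-driven habit does far less damage when the base rate is near 50/50.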

In this chapter, we examine the calibration of expert probabilistic predictions “in the wild” and assess how well the heuristics and biases perspective on judgment under uncertainty can account for the findings. We then review alternate theories of calibration in light of the expert data (15309).

We review research on calibration from five applied domains: medicine, meteorology, law, business, and sports (15493).

In all domains of expert judgment surveyed, systematic miscalibration was observed. In each case, the observed patterns matched the qualitative predictions of the heuristics and biases perspective, as embodied by the direct support account. Nonetheless, there were notable differences among the domains in the magnitude of miscalibration, such that the judgments of experts with the greatest training and technical assistance in statistical modeling (meteorologists and economists) were less biased than the direct support account predicted (15784).

In the expert data sets we examined, there is little or no indication of a general bias in favor of the focal hypothesis, as implied by the confirmatory bias model. In particular, there was little evidence of optimistic bias in these data sets. Note, however, that most of the judgments were generally not self-relevant. When the issues were extremely self-relevant, such as the patients’ predictions of their own survival, there was considerable optimistic bias shown (15872).

We find the general framework of support theory in which RST is based, however, to provide a useful and psychologically plausible interpretation of the patterns that we found: Assessments of probability typically reflect a direct translation of the support provided by the evidence for the target hypotheses, with little regard to the reliability of the evidence or the base rate of the outcome (15878).
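
To make “well calibrated” concrete, here is a minimal sketch (mine, not the book’s) of how calibration is typically checked: bin the stated probabilities and compare each bin’s stated confidence with the observed frequency of the predicted outcome. The function and example data below are hypothetical.

```python
# A minimal calibration check: group probability judgments into bins and
# compare stated confidence with the observed hit rate in each bin.
from collections import defaultdict

def calibration_table(forecasts, bin_width=0.1):
    """forecasts: iterable of (stated_probability, outcome) pairs,
    where outcome is 1 if the predicted event happened, else 0."""
    bins = defaultdict(list)
    for p, outcome in forecasts:
        bins[round(p / bin_width) * bin_width].append(outcome)
    return [(stated, sum(bins[stated]) / len(bins[stated]), len(bins[stated]))
            for stated in sorted(bins)]  # (stated confidence, observed hit rate, n)

# Hypothetical data: a forecaster who says "80%" but is right only half the time
# is overconfident; for a well-calibrated forecaster the two columns would match.
example = [(0.8, 1), (0.8, 0), (0.8, 1), (0.8, 0), (0.5, 1), (0.5, 0)]
for stated, observed, n in calibration_table(example):
    print(f"stated {stated:.1f} -> observed {observed:.2f} (n={n})")
```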

40. Clinical versus Actuarial Judgement

Reminds me of Epistemology and the Psychology of Human Judgment. Actuarial methods (statistical prediction rules) nearly always outperform clinical judgment, even when the clinicians are given the results of the actuarial methods. If they just followed the math, and not their subjective professional opinion, the outcomes would have been better.

In the clinical method the decision-maker combines or processes information in his or her head. In the actuarial or statistical method the human judge is eliminated and conclusions rest solely on empirically established relations between data and the condition or event of interest (15908).

even after repeated sessions with these training protocols culminating in 4,000 practice judgments, none of the judges equaled the Goldberg Rule’s 70% accuracy rate with these test cases. Rorer and Goldberg finally tried giving a subset of judges, including all of the experts, the outcome of the Goldberg Rule for each MMPI. The judges were free to use the rule when they wished and knew its overall effectiveness. Judges generally made modest gains in performance but none could match the rule’s accuracy; every judge would have done better by always following the rule (15955).

In virtually every one of these studies, the actuarial method has equaled or surpassed the clinical method, sometimes slightly and sometimes substantially (15984).

The research reviewed in this article indicates that a properly developed and applied actuarial method is likely to help in diagnosing and predicting human behavior as well or better than the clinical method even when the clinical judge has access to equal or greater amounts of information (16172).
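
For a sense of what a “statistical prediction rule” actually looks like, here is a toy sketch: a fixed formula over a few test-scale scores, applied mechanically, with no room for a clinician’s override. The scale names, weights, and cutoff are purely illustrative; they are not the actual Goldberg Rule.

```python
# A toy statistical prediction rule: add and subtract a few scale scores and
# compare the result to a cutoff. All names and numbers here are illustrative,
# not the real Goldberg Rule coefficients.
def actuarial_rule(scores, cutoff=45):
    """scores: dict of hypothetical test-scale scores -> a diagnosis label."""
    index = scores["a"] + scores["b"] + scores["c"] - scores["d"] - scores["e"]
    return "diagnosis X" if index >= cutoff else "diagnosis Y"

profile = {"a": 50, "b": 60, "c": 55, "d": 58, "e": 52}
print(actuarial_rule(profile))  # the rule's verdict, applied with no subjective override
```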

41. Heuristics and Biases in Application

Just a brief glimpse at the application potential of the heuristics and biases research. Each area (of the three below) has its own caveats. While it may be a net positive for people to acknowledge the general truth that humans, even experts, are biased, actually applying that to predict or change behavior gets pretty complicated. This is just the tip of the iceberg in applied rationality.

This chapter critically examines the application of heuristics-and-biases research from these dual perspectives, asking what theory and practice can learn from each other. It focuses, in turn, on applications to (1) explaining past judgments, (2) predicting future judgments, and (3) improving future judgments (16232).

42. Theory-Driven Reasoning about Plausible Pasts and Probable Futures in World Politics

Experts are susceptible to rationalizations that work to protect their explanatory theories from falsification. As a general rule, when predictions are falsified, experts do not adjust their confidence downward enough. The more theory-based their thinking was, the more confident they were in their predictions.

This chapter explores the applicability of the error-and-bias literature to world politics by examining experts’ expectations for the future as well as their explanations of the past. One set of studies tracks the reactions of experts to the apparent confirmation or disconfirmation of conditional forecasts of real-world events in real time (16619).

Although there is a general tendency among our experts to rely on theory-driven modes of reasoning and to fall prey to theory-driven biases such as overconfidence and belief perseverance, these tendencies are systematically more pronounced among experts with strong preferences for parsimony and explanatory closure (16628).

although experts only sporadically exceeded chance predictive accuracy, they regularly assigned subjective probabilities that exceeded the scaling anchors for “just guessing.” Most experts, especially those who valued parsimony and explanatory closure, thought they knew more than they did. Moreover, these margins of error were larger than those customarily observed in laboratory research on the calibration of confidence. Across all predictions elicited across domains, experts who assigned confidence estimates of 80% or higher were correct only 45% of the time, a hit rate not appreciably higher than that for experts who endorsed the “just-guessing” subjective probabilities of 0.50 and 0.33 (for 2 and 3 outcome scenarios, respectively). Expertise thus may not translate into predictive accuracy but it does translate into the ability to generate explanations for predictions that experts themselves find so compelling that the result is massive over-confidence (16703).

Converging evidence suggests that experts not only rely, but over-rely, on their preconceptions in generating expectations about the future (producing overconfidence) and in revising preconceptions in response to unexpected events (producing belief underadjustment by Bayesian standards) (16786).

The more value experts place on parsimony and explanatory closure, the more prone they are to overconfidence (ex ante) and Bayesian under-adjustment (ex post) (16791).

Heuristics and Biases 3A- Everyday Judgment and Behavior

33. The Hot Hand in Basketball: On the Misprediction of Random Sequences

People, at least when judging streaks in basketball, are bad at knowing what a random sequence would look like. They read more meaning into streaks of consecutive hits than those streaks deserve. They tend to think that genuinely random sequences of heads and tails aren’t really random because of the streaks, and to judge sequences that alternate more often than chance as the “more random” ones. More poor probabilistic thinking.
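
To see how easily a genuinely random sequence produces “streaks,” here is a quick simulation (mine, not the chapter’s): a 50% shooter taking 20 shots per game will, in a large fraction of games, run off four or more makes in a row by chance alone.

```python
# Simulate a 50% shooter taking 20 shots per game and count how often a
# streak of 4+ consecutive makes appears purely by chance.
import random

def longest_streak(shots):
    best = run = 0
    for made in shots:
        run = run + 1 if made else 0
        best = max(best, run)
    return best

random.seed(0)
games = 10_000
streaks = [longest_streak([random.random() < 0.5 for _ in range(20)])
           for _ in range(games)]
print(sum(s >= 4 for s in streaks) / games)  # fraction of games with a 4+ streak
```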

34. Like Goes With Like: The Role of Representativeness in Erroneous and Pseudo-Scientific Beliefs

Instead of looking at probability judgments like most representativeness research does, this chapter links the heuristic with causal judgments, in particular pseudoscientific ones like homeopathy and traditional medicine (rhino horn cures . . .), and perhaps the resistance to causes that don’t resemble their effects, like mosquitoes causing malaria (or a mindless process leading to minds?).

In ancient Chinese medicine, for example, people with vision problems were fed ground bat in the (typically) mistaken belief that bats have particularly keen vision and that some of this ability might be transferred to the recipient (Deutsch, 1977). Evans-Pritchard (1937) noted many examples of the influence of representativeness among the African Azande (13891).

35. When Less is More: Counterfactual thinking and Satisfaction among Olympic Medalists

Specifically among Olympic medalists, and probably among everyone else, those who are objectively better off may be subjectively less happy/satisfied. Silver medalists compare themselves to the Gold (soooo close!), and bronze medalists compare themselves to squat diddly (phew! barely made it), so the bronze are happier. It took three studies to confidently triangulate such a conclusion.

36. Understanding Misunderstandings: Social Psychological Perspectives

This chapter applies the many heuristics and biases outlined in this book to human social interactions. The fundamental attribution error looms large here. So does the above-average effect. People feel that they learn more from interactions with others than others learn about them, and that they are less biased than average (even when the above-average effect in this area is explained to them). Every time I heard that I cringed, seeing as how I continue to feel above average. But wouldn’t pretty much anyone reading this book be better than average at reducing bias?

The present thesis is that blindness about the role that such biases play in shaping our own political views, and a penchant for seeing self-serving or ideologically determined biases in others’ views, exacerbates group conflict (14296).

We argue that people readily recognize biases in others that they do not recognize in themselves, and as a result, they make overly negative attributions about others whose views and self-interested motives seem “conveniently” congruent (14314).

Assumptions about top-down processing may also lead partisans to overestimate the ideological consistency and extremity of those on their own side of the conflict. The result is an overestimation of the relevant construal gap between the modal views of the two sides and an underestimation of the amount of common ground that could serve as a basis for conciliation and constructive action (14593).

Partisans in the express-own-position condition in these studies showed the expected false polarization effect, markedly overestimating the gap between the positions of the two sides. By contrast, participants in the express-other-position condition (and, in one study, those in a third condition in which they expressed both positions) hardly overestimated this gap at all (14636).

Implicit in our discussion of biases contributing to conflict and misunderstanding is the assumption that people recognize or presume the influence of such biases more readily when they are evaluating other actors’ responses than when they are evaluating their own (14642).