Which Comes First?

I would like to think that I and others around me have the ability to distinguish moral right from wrong in at least some very straightforward cases. But am I justified in thinking that? After all, there seems to be considerable disagreement on key moral issues between the various cultures of the world today, between our society today and societies of the past, and between different members of the same society. If we know that certain things are objectively right or wrong, and it feels as though we do, why do so many people disagree with us?

We don’t see this much disagreement when it comes to simple arithmetic, simple matters of fact such as whether there is a body of water between Nanaimo and the mainland, whether there is literally a full-sized, live elephant in the room, and so on. If our moral thinking is so reliable, how can such extensive moral disagreement be explained? There seem to be three main options: a hopelessly bigoted and conceited explanation, a pessimistic explanation, and an optimistic explanation. Let’s consider these in order.

The hopelessly bigoted and conceited explanation: according to this view, we are able to perceive the moral truth because there is something special about us. If any group or individual disagrees with us on any moral matter, that must be taken as evidence of the stupidity of that group or individual. The more strongly people feel that we are mistaken, the more stupid they must be.

The problem with this view should not be difficult to fathom. A dogged insistence on one’s own view and a bigoted dismissal of the other side are just too easy to come by. For all we know, it could just as easily be that we’re the stupid ones and that the other people have morality right. We’ve all seen this story play itself out in religion (‘I know in my heart that my religion is right, so all others must be wrong’), in nationality (‘I know in my heart that my country is the greatest that ever was, so…’), and so on. For a reflective person, this explanation has nothing going for it.

The pessimistic explanation: a more plausible view is that we disagree with one another because we humans tend to be very easily misled in our moral views by morally irrelevant factors. Our moral views are shaped to a significant extent by some combination of: powerful people who exert influence over society; our physical environment; our contingent cultural traditions; our limited circles of friends; our genes; our self-interest; and so on. Our moral views are largely or solely a product of these factors, and yet we live in constant denial of this fact. We kid ourselves that we are reasoning and perceiving our way toward moral knowledge, whereas in fact the ‘reasoning’ we engage in is generally speaking a sham: we come to our moral conclusions on the basis of irrelevant and quite silly factors, and then rationalize those conclusions with spurious reasoning.

This view actually has a good deal going for it: if true, it would certainly explain the extent of moral disagreement. However, it had better not be true if we want to keep believing in moral knowledge! Since there is nothing special about our own moral beliefs (we’re avoiding the hopelessly bigoted and conceited explanation, after all), on this pessimistic view we have to admit that we’re just as bad at distinguishing right from wrong as anyone else!

The optimistic view: there is, however, an apparently plausible explanation that avoids both the silly arrogance of the first view and the pessimism of the second. On this third, optimistic view, we admit that our fundamental moral principles are no better or worse than anyone else’s: they’re the same! (This is the non-arrogant part.) However, the people who disagree with us are demonstrably at some disadvantage: they have some false non-moral beliefs about biology, religion, economics, etc. Our moral thinking, on this view, is a matter of reasoning our way from our shared moral principles to our moral conclusions. If only the other cultures could learn the truth about these non-moral facts and reflect carefully on the moral issues we disagree about, they’d find themselves agreeing with us. (This is the optimistic part.)

Throwing away the bigoted explanation, we are left with 1) a pessimistic explanation, on which morally irrelevant factors cause our moral beliefs, which we then rationalize by making up some ad hoc story about how we arrived at them; and 2) an optimistic one, on which our moral reasoning comes first and our moral views tend to be conclusions we derive from that reasoning. But which explanation is correct? Which comes first: your moral reasoning, or your moral conclusion?

This is a difficult question. If, in the heat of a moral dispute, someone were to ask you whether you actually reasoned your way to your conclusion on the basis of sound moral principles or whether you simply picked your conclusion as a result of social, environmental, and psychological pressures, you would almost certainly say, and believe, that your reasoning led you to your view. But if your opponent were asked instead, they would almost certainly say that your emotions and biases led you to stubbornly maintain an unreasonable view in the face of all counter-arguments, and that your defenses of it were flimsy and after-the-fact.

But perhaps there’s a way to sort this out scientifically! The famous psychologist Jonathan Haidt (pronounced ‘height’) thought so. He recently conducted a number of rather brilliant experiments on this matter, and all the results seem to point to the pessimistic view, depressingly enough. I’ll explain two of his experiments now.

Experiment 1: Moral Dumbfounding

In one experiment, Haidt sought to ‘morally dumbfound’ his subjects, as he put it: that is, to make them feel strongly that something is morally wrong while realizing that, despite initial appearances, they can’t think of any good reason for their strong moral judgments.

To morally dumbfound his subjects, Haidt and his assistants had them listen to various vignettes he had composed. In one vignette, an adult brother and sister decide to try having sex together as an experiment. They use two reliable methods of birth control and enjoy the experience: no pregnancy results, no diseases are transmitted, both siblings are happy with the experiment, it enhances their relationship and has no bad effects, and they decide not to try it again and to keep it a secret forever. In a second vignette, someone accidentally hits the family dog with the car while driving home from work; the family decide to eat the dog for dinner (after cooking it carefully), which they enjoy. Again, nobody else ever finds out and they never regret it. In a third, a man buys a chicken at the butcher shop each week and has sex with it before putting it in the freezer.

In these and many other cases, Haidt asked the subjects whether the characters in the story did anything wrong. More or less everyone said yes; but then Haidt asked them to explain the reasoning that led to their verdict. At first, the subjects had no difficulty coming up with reasons: incest is illegal, the family could have got sick from eating the dog, and so on. But the vignettes were set up in such a way that it was easy to show that none of the obvious reasons for condemning the characters applied in these cases. Having had all their lines of reasoning refuted, the subjects were at last morally dumbfounded.

This is where the critical observations took place. On what I’ve called the ‘optimistic view’ above, the subjects arrive at their moral condemnations through a quick process of moral reasoning, and the reasons they present when pressed are the very reasons that led to their verdicts. On this view, when they find that none of their lines of reasoning hold water, the subjects will strongly tend to abandon their view and conclude that the characters were not wrong after all.

If the pessimistic view is correct, on the other hand, the subjects’ moral judgments were not made on the basis of reasoning from moral principles at all: the emotional ‘snap judgment’ came first and everything else is a rationalization. This view predicts the opposite response: the subjects will maintain their moral judgments even when they can come up with no good reason for doing so.

In practice, the pessimistic prediction was overwhelmingly correct. When Haidt visited our Human Diversity and Human Nature course, he actually showed us some videos of the experiments (with the faces of the subjects blurred out). When morally dumbfounded, the subjects became visibly uncomfortable: jumpy, twitchy, even apparently on the edge of a violent outburst. But at length, they would say: ‘It’s funny: I can’t think of why it’s wrong; I just know that it is, somehow.’

Experiment 2: The Post-Hypnotic Suggestion

What would happen if, in the middle of deliberating on some moral matter, you suddenly felt a pang of discomfort for a completely irrelevant reason? On the optimistic view, this would be unlikely to affect your moral judgment of the matter: you might at first wonder what made you feel the pang, but you’d double-check your reasoning and find that nothing morally interesting is ‘off’. On the pessimistic view, however, it would affect your moral judgment by tampering with your emotions, and if pressed later on you would automatically make up some rationalization for your decision: a rationalization you yourself would believe. Which view is correct?

To test this, Haidt brought in a hypnotist to cause half his subjects to feel a pang of nausea whenever they encountered an innocuous word like ‘often’. Some of these subjects then read vignettes in which the word ‘often’ appeared, while others read ones that told exactly the same story without using that word. These vignettes dealt with a familiar range of moral issues (trolley problems, etc.). Sure enough, the hypnotized subjects who read the word ‘often’ in a vignette were far more likely to condemn whatever action was described. But when asked why, none of them expressed confusion or doubt: they immediately made up some ad hoc rationalization.

But this experiment involved something even more remarkable. Almost as an afterthought, Haidt and his colleagues included a final vignette in which it was obvious that absolutely nothing immoral was taking place. In this vignette, subjects were asked to morally evaluate the actions of a student whose job it is to co-ordinate meetings of students and faculty on department matters. All the subjects were told was that the student generally (or, in half the vignettes, ‘often’) prepared for these meetings by asking students and faculty members what they would like to discuss.

The story in this vignette is so straightforwardly not immoral that many of the subjects caught themselves before making a preposterous rationalization of a condemnatory verdict. Amazingly, though, a third of Haidt’s subjects actually condemned the student in the vignette! Needless to say, absolutely nobody in the control group did so. When asked why, the subjects who condemned the student rationalized their verdict, just as the pessimistic view predicts: the student sounded as though he was being sneaky, or was clearly a social climber, and so on.

There are many more experiments along these lines, but I’ll leave it at that for now!


18 thoughts on “Which Comes First?”

    • Nagel’s review is worth reading. E.g.: “One of the main culprits of Haidt’s book, for insisting that morality must be based on reason, not on feelings, is Immanuel Kant. Kant’s influence has been very great, and Haidt believes it has sidetracked philosophers and others from the true path of understanding indicated by his predecessor David Hume, who maintained that reason was always subordinate to feeling or ‘sentiment’ in the control of action, and therefore in morality. But it should be said that Hume’s influence has been equally great, and that contemporary ethical theory continues to be dominated by the disagreement between these two giants.”
      The Kant vs. Hume battle goes on!!


  1. On Sundays I am a pessimistic curmudgeon. On Mondays my usual optimism returns. On Tuesdays the world news kicks in and depression takes me. Wednesdays often announce a new war somewhere in the world. Thursdays I read history. A deeper depression creeps over me. Fridays I think about my family. We are all arguing about religion and politics. On Saturdays I drink some wine and think of my childhood on the farm and the one room school that was so small and so safe.


  2. “We don’t see this much disagreement when it comes to simple arithmetic” – true, jfc. But we do see “much disagreement” about the nature and scope of mathematics. Some things are simple and some not so simple.


    • Fantastic point, ucsbalum. I’m really interested in this. Perhaps the moral questions we disagree on are like the _difficult_ questions in quantum physics, the nature and scope of mathematics, who killed JFK, and so on. These questions have objective answers, but our ability to get those answers is limited by the complexity of the issue. We shouldn’t worry about the moral case any more than we worry about the rest of the cases.

      Still, an enduring worry: doesn’t it seem that the answers to _some_ of the moral questions we disagree on are just mind-numbingly obvious? Do we really need to engage in arcane, theoretical reasoning to know whether slavery or Chinese foot-binding is wrong? If not, then those cases seem dissimilar from the complicated cases in the other domains.


    • Good point. Moral questions are certainly complex at times because human beings are involved. And that means we have to attempt to negotiate a vocabulary that allows us to talk about those problems without going to war. Hence, we reach for a viewpoint beyond all viewpoints and call it divine rule, or the United Nations Declaration of Human Rights.


      • But my point was actually something different! Many moral questions are very _simple_, so the disagreement cannot be explained by their complexity; it still presents an uncomfortable problem.


        • Yes, I was responding to UCSB, jfc. There is doing and then there is justifying! Do you have a suggestion for those complex cases, a foundation upon which to build?


