Look: “Two paradoxes” – republished from 6 years ago

I’ve got two new paradoxes (at least, I think they’re new!) that I hope to write an article about this coming week. Here’s a preview. Remember, folks, you saw ’em first here on Episyllogism!

A note of clarification: ‘paradox’ can mean several different things. Logicians generally use it to mean a statement that cannot consistently be true or false, and English lit. people often use it to mean something surprising. I mean something weaker than what logicians mean and stronger than what literary critics mean. In calling these ‘paradoxes’, I’m saying that they are problems whose obvious solutions seem counterintuitive in one way or another.

The first paradox has to do with morality. I accept all the following claims, and take it that others do as well:

1. More or less nobody entirely follows the moral principles he or she maintains as true.

2. Moreover, it is extremely unlikely that any of us will ever live in accordance with his or her own moral principles.

3. Any intellectually honest and reasonably observant person should be able to see the truth of 1 and 2.

4. To maintain a moral principle that one does not follow and knows that one will almost certainly never follow is hypocritical.

5. It is wrong to be a hypocrite.

6. Therefore, we must resolve the tension either by living up to our moral principles or else by abandoning or watering down those principles.

7. But experience shows that we cannot successfully live up to our moral principles even if we recognize that it would be hypocritical of us not to.

8. Therefore, we must abandon or water down our moral principles in order to avoid hypocrisy.

9. But it is wrong to abandon or water down a moral principle in order to avoid hypocrisy.

It is difficult to see how 1, 2, 3, 4, 5, 7 or 9 could plausibly be incorrect, and 6 and 8 follow logically from the rest. But clearly, 8 and 9 are in tension.
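To make the structure fully explicit, here is a minimal propositional sketch (the letters and the regimentation are my own shorthand, not part of the original argument):

F: we (ever fully) follow our moral principles
M: we keep maintaining those principles unrevised
H: we are hypocrites

From 1, 2 and 7: not-F (and, by 3, we are in a position to see this)
From 4: (M and not-F) implies H
From 5: we must avoid H, so we must secure either F or not-M [this is 6]
Since not-F: we must secure not-M [this is 8]
From 9: securing not-M is itself wrong

On this regimentation, 8 says we are required to abandon or water down our principles, while 9 says that doing so is wrong; hence the tension.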

So what should we do? Maybe we just have to give up on 4 or 5: maybe, that is, maintaining a moral principle that one has excellent reason to believe one will never follow is OK.

Well, suppose you go that way. Then it seems that you should not feel so bad when you fail to live up to your moral principles. But doesn’t this seem worrisome? If you don’t feel bad about violating your own moral principles, then (a) it’s not entirely clear in what sense they are your moral principles, and (b) regardless, your commitment to those principles will plainly be weakened by your no longer caring as much about failing to live by them, and that certainly would be morally irresponsible on your part.

The second paradox is very similar, but this time the principles in question are epistemic ones. Epistemic principles are the principles that determine whether it’s right or wrong to believe something. If you see that the sun is up, people are awake and walking or driving to work, and your clock says 8:30am, etc., then it would be a violation of basic epistemological principles or norms to think that it’s 11pm. If you know that Nanaimo is part of BC and that BC is part of Canada, it would be irrational for you to believe that BC is not a part of Canada. Those are examples of epistemic norms or principles. But sometimes, we have a difficult time following our epistemic principles. We tend to believe we’re much smarter, more ethical, better at sex, funnier, more reasonable, etc. than most or all of our peers. We tend to remember confirmations of our religious, political, etc. beliefs but forget the problems with them. Psychologists have shown this in study after study. And while we can improve on our epistemic conduct, these problems seem extremely difficult (and probably impossible) to eliminate entirely. So:

1. More or less nobody entirely follows the epistemic principles he or she maintains as true.

2. Moreover, it is extremely unlikely that any of us will ever live in accordance with his or her own epistemic principles.

3. Any intellectually honest and reasonably observant person should be able to see the truth of 1 and 2.

4. To maintain an epistemic principle that one does not follow and knows that one will almost certainly never follow is hypocritical.

5. It is wrong to be a hypocrite.

6. Therefore, we must resolve the tension either by living up to our epistemic principles or else by abandoning or watering down those principles.

7. But experience shows that we cannot successfully live up to our epistemic principles even if we recognize that it would be hypocritical of us not to.

8. Therefore, we must abandon or water down our epistemic principles in order to avoid hypocrisy.

9. But it is wrong to abandon or water down an epistemic principle in order to avoid hypocrisy.

 

And here we are again!

What’s going wrong in these paradoxes? How can the matter be resolved?

Approaching the moral perils of the day, Part I

It’s been a few years since I’ve been here. It’s my loss: I have a very busy schedule now. But Bob has been asking me to come back and contribute something, and a couple of  days ago he gently requested, for what was not the first time, that I come by and talk a little about some hot button topics I’ve briefly discussed with him, including some questions touching on the morality of the #MeToo movement.

Calling these topics ‘hot button’ is quite an understatement, actually. Present the right view and you are roundly ‘liked’ and feted, and more so if you can distinguish yourself from the crowd by going even further than anyone else in your comments and sentiments. Present the wrong view and you can plant the seeds of your own ruin, and that ruin may be swift. Those who have read Jon Ronson’s important book, So You’ve Been Publicly Shamed, will need no clarification or convincing on this point. Those who have not read it should pick it up with all speed and read it through (it’s quite a page-turner). Combine these two facts with the presence of a number of radicals patrolling (anti-)social media on a moral crusade — ‘If you’re not joining us in beating up the evil ones, then you’re clearly a defender of evil (and we’re coming after you next)’ — and you have a very, very dangerous situation in which to stand up to the status quo. And yet, it is just in the face of such proud and confident moral crusades, which are quite possibly the most flagrantly immoral enterprises of all, that speaking up against what is going on, or at least planting some seeds of doubt, is objectively the most important thing to do.

I’ve decided to limit myself here, as much as possible, to just raising questions rather than providing answers to them. The sorts of issues I think Bob wants me to talk about here — the right response to sexual harassment allegations, the best ways to accommodate diversity, and so on — are complex and multifaceted and require careful attention to many considerations in thinking them through, and I don’t feel I have an adequate solution to the problems. But, as Socrates said, I at least know that I don’t know, whereas others give evidence of not having thought these issues through very well but still insist on moral claims that they just can’t be justified in asserting. The most I’d like to do in the things I’ll write about here is to draw attention to some of the considerations that seem very important but are being entirely or almost entirely neglected in all the public discussions I’ve seen. I’d rather leave it to others to figure out how to resolve the tensions. I hope they provide stimulating bases for conversations here.

When I raised a question about due process and fair dealing on Facebook recently, I was angrily attacked by someone who sneered at my comment, reading the worst motivations into it, and then sarcastically concluded “but it was never really about due process, was it?”

What I’m concerned with is very much about due process. Due process, fair dealing and self-critical investigations are the cornerstones not only of reasoned inquiry but of a civilized society. They can be attacked by mobs from the political left or the political right, by religious mobs or atheistic mobs, and by any demographic you care to name. The details of this particular instance of the attack on the pillars of civilization are of much less concern to me than the fact that they are under attack yet again in familiar ways. So I’d like to approach these issues by way of many broader, less ‘hot button’, difficulties that I think underlie the current instantiation of the war on fairness, reason and due process.

To set the mood, here’s a personal anecdote and a couple of online essays I think will be helpful.

The personal anecdote: I grew up in a Jewish family in Vancouver, and attended a Jewish day school. I had a brilliant 3rd grade teacher — Gita Kron was her name — who had survived the Nazi holocaust. More than any other teacher, she instilled in me the philosophical spirit. She insisted that we think for ourselves and not blindly trust her authority or anyone else’s. I came to think later on that she may well have come to adopt this approach because of the horrors she had witnessed, and in particular the easy ways in which ordinary and even highly educated people, at the same time pulled along by and  pulling along the spirit of their times, can become a pack of vicious animals with horrible moral blind spots and empathy gaps that nobody in the pack notices until the dust settles years later. And the deeply nuanced story of the Nazi holocaust, like the story behind any other moral panic (and yes, I see the Nazi holocaust as a moral panic on the part of the Nazis — Peter Hayes’ masterful Why? Explaining the Holocaust makes that case well) is a wonderful object lesson in the perils of such moral crusades, and how these particularly nasty forms of evil can arise so quickly from seemingly noble and innocent beginnings. This is a lesson so infrequently learned: how often, in the very act of rallying the mob to punch and kill Nazis, one creates and becomes the next round of Nazis. The Nazis, after all, saw themselves as doing what they could to stand up for the poor, neglected, unfairly wronged, underprivileged demographic to which they belonged, and to do what they could to stand tall in the face of a privileged elite that had been running the world and keeping them down for too long, and had betrayed everyone else. To achieve that end, quaint quibbles about due process and fair treatment, and about the need to spare all the innocent and treat even the members of the powerful subgroup as worthy of dignified treatment, had to be waved aside, and those who refused to wave them aside were merely declaring themselves as enemies. Sound familiar? Oh, of course the Nazis were factually mistaken in their Jewish conspiracy beliefs, so we can blame them for that today, because we would never make such mistakes, right? After all, unlike the Nazis, we get our sociological information from reliable processes that depend on free, fair, and open debate, in which all views, however contrary to the prevailing ethos, are given equal weight. Don’t we?

But I digress. I had thought that this was clearly the lesson Mrs. Kron wanted us to take away from her classes: that the future of civilization, and our humanity even today, are fragile things that depend on our ability to question ourselves, to see ourselves objectively as best we can, and to always try to do better; to engage thoughtfully and sympathetically with our interlocutors as a check on overconfidence and misrepresentation of the other side, and to play devil’s advocate as well as it can be played if nobody else is taking on the sacred role of critical opposition; and to realize that moral crusades are fraught with corrosive danger.

Armed with this disposition, I became perhaps the first in my extended family to have meaningful conversations with Arab Muslims, including Palestinians. I was very curious to hear their views on the Israel/Palestine conflict, even though it made me uncomfortable to do so (but how can one ever make progress on topical moral and political issues if one is too squeamish about being uncomfortable or making others uncomfortable?). I debated, I investigated, and I came to feel things were not as simple as I had been led to believe. When a distant relative sent along a report on the Israel/Palestine conflict that I found to be seriously lacking in nuance, I argued against it, naively thinking that I was simply applying the principle we  had all agreed upon. As a result, I came to be regarded by some in my extended family as brainwashed at best and morally compromised at worst. I tried to argue my way out of this black sheep status with reasoning and evidence, only to find that nobody was interested in playing by those rules. And later, I met some other students of Mrs. Kron from a different year, and discovered that the great lessons they had learned from her were tribal rather than universal: they saw her great teaching as  being a conviction that the Jews had to do what they could to look out for themselves in a hostile world, whatever it takes.

I’ve now come to feel that that difference — between those who interpret the lessons of history in a universal way and those who think that the greatest lesson is to stick with one’s tribe and smash the outgroup when the tribe feels threatened, which to a tribalist is pretty well always — is perhaps the most important difference of all.

With that as background, I present two essays. The first is a more abstract one about the growing turn toward the subjective, and indeed a growing inability and unwillingness to see the world in a non-tribal manner (ah, for the days in the early part of the century when this awful subjectivist tendency seemed to be dying…).

http://quillette.com/2018/03/19/the-tyranny-of-the-subjective/

The second essay is a personal account of a very disturbing trend in academia (and increasingly elsewhere) today: the turning away from the altruistic morality of fairness and universal consideration of others toward the narrow and self-absorbed, to the point where many in academia can (amazingly enough) no longer even understand the principle of equal treatment that makes morality morality:

http://quillette.com/2017/04/20/crucible-application-process/

More when I have a moment for it! Best to all.

Scientism and the Is/Ought Gap


Justinthecanuck has generously provided us here at Episyllogism with the following essay, which he wrote for a soon-to-be-published book, as part of the discussion on morality, naturalism, constructionism, the is/ought distinction, and a number of important themes in moral philosophy. Thank you!

Readers may want to review our recent discussion:

  1. Moral philosophy and empirical psychology;
  2. Point/Counterpoint – Moral Psychology: An Exchange;
  3. Philosophy, psychology, anthropology: morality

 

The following essay will appear, with minor edits, as a chapter in a forthcoming volume on scientism edited by Maarten Boudry and Massimo Pigliucci. I include it here as a sneak preview in case any episyllogists are interested: please cite only with permission.

 

Scientism is the view that the only facts are those that could in principle be learned exclusively from the natural and social sciences, empirical observations, mathematics, and logic, and that any beliefs that can only be justified in some other way are merely sham knowledge. And yet, adherents of scientism often speak and act as though we ought to do things (for instance, that we ought to accept scientism). Clearly, if it is a fallacy to derive an ought from an is, then these devotees of scientism are in trouble.

The putative fallacy of deriving an ought from an is, sometimes called the naturalistic fallacy, arises from the simple observation that an argument whose conclusion non-trivially contains a concept that does not appear in any of its explicit or implicit premises cannot be valid. An apparently straightforward application of this general principle is that an argument whose conclusion non-trivially contains the concept ‘ought’ (or ‘morally right’, ‘morally good’, etc.), but whose premises do not contain that concept, cannot be valid.

Is it possible that the naturalistic fallacy is not in fact a fallacy? While the reasoning behind the fallacy diagnosis seems airtight, there is one possibility that could, in theory at least, make room for a scientistic approach: the possibility that the move from an is to an ought may be legitimate in some cases. As Charles Pigden remarks in ‘The Is-Ought Gap’[1], certain deontic logics may employ logical ‘ought’ operators that appear in the premises without being mentioned in the conceptual content of the statements, but that become more salient in the conclusion. Since it is a matter of considerable controversy which modal logical system, if any, is to be preferred in such cases, and which system of deontic logic, if any, is the correct correlate of that general modal system, it remains a live possibility that an ought can legitimately be derived from an is[2]. I explore this interesting possibility a little later.
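To give a rough illustration of the kind of loophole at issue (a minimal sketch of my own, using standard deontic logic purely for convenience; Pigden does not commit himself to this particular system), consider a logic whose vocabulary includes an obligation operator O, where Op reads ‘it ought to be that p’:

(K) O(p → q) → (Op → Oq)
(D) Op → ¬O¬p
(Nec) if ⊢ p, then ⊢ Op

Because O belongs to the logical vocabulary here, the necessitation rule (Nec) already yields ⊢ O(p ∨ ¬p): a conclusion containing an ‘ought’ derived without any ‘ought’-laden premises, though only a trivial one. Whether any non-trivial ought can be reached along these lines is precisely what remains controversial.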

Much of the discussion of scientism takes place in popular books and lectures, far away from technical controversies over operators and metatheorems in deontic logics. In these popular discussions, it is common for moral philosophers to be portrayed in caricature as benighted longhairs stumbling around in the dark for want of empirical enlightenment – enlightenment that scientism alone can provide. Sam Harris, perhaps the most notorious among the ‘scient-ists’ in the popular press, has written a book and given several talks in support of the supposedly revolutionary view that science can inform human values and steer us away from moral relativism.[4] The fact that prominent philosophers have been articulating clear moral views, and opposing relativism, for millennia appears not to impress Harris: in his self-presentation, the attempt to use observations (and, curiously enough, non-empirical thought experiments such as one involving a fictional society that systematically blinds every third child in obedience to some religious scriptures[5]) represents a new and hitherto neglected direction in moral thinking.

It is hard not to wonder why Harris thinks his approach of considering the overall levels of happiness in objective or hypothetical cases is some sort of novelty, or why he would brazenly present this view with great fanfare as a significant achievement if he recognized that it is what most philosophers, and perhaps most people, already employ and have perhaps always employed in working through moral issues. It is also tempting to wonder, at times, whether Harris is entirely capable of grasping the nature of the is/ought problem, as when he says:

Moore felt that his “open question argument” was decisive here: it would seem, for instance, that we can always coherently ask of any state of happiness, “Is this form of happiness itself good?” The fact that the question still makes sense suggests that happiness and goodness cannot be the same. I would argue, however, that what we are really asking in such a case is “Is this form of happiness conducive to (or obstructive of) some higher happiness?” This question is also coherent, and keeps our notion of what is good linked to the experience of sentient beings.[6]

Unfortunately, Harris does not tell us what his argument for that conclusion would be in the unspecified circumstances under which he “would argue” for it. But to be as fair as possible to Harris, I focus on what I take to be his clearest articulation of how he thinks it possible to cross the is/ought divide. His plan is to make the crossing via an articulation of the synonymy of certain moral and non-moral terms:

To say we “should” follow some of these paths and avoid others is just a way of saying that some lead to happiness and others to misery. “You shouldn’t lie” (prescriptive) is synonymous with “Lying needlessly complicates people’s lives, destroys reputations, and undermines trust” (descriptive). “We should defend democracy from totalitarianism” (prescriptive) is another way of saying “Democracy is far more conducive to human flourishing than the alternatives are” (descriptive) … Imagine that you could push a button that would make every person on earth a little more creative, compassionate, intelligent, and fulfilled — in such a way as to produce no negative effects, now or in the future. This would be “good” in the only moral sense of the word that I understand. However, to make this claim, one needs to posit a larger space of possible experiences (e.g., a moral landscape). What does it mean to say that a person should push this button? It means that making this choice would do a lot of good in the world without doing any harm. And a disposition to not push the button would say something very unflattering about him. After all, what possible motive could a person have for declining to increase everyone’s well-being (including his own) at no cost? I think our notions of “should” and “ought” can be derived from these facts and others like them. Pushing the button is better for everyone involved. What more do we need to motivate prescriptive judgments like “should” and “ought”? [7]

Harris’s partner-in-scientism Michael Shermer attempts to cross the is/ought boundary in a similar way:

Morality involves how we think and act toward other moral agents in terms of whether our thoughts and actions are right or wrong with regard to their survival and flourishing. By survival I mean the instinct to live, and by flourishing I mean having adequate sustenance, safety, shelter, bonding, and social relations for physical and mental health. Any organism subject to natural selection – which includes all organisms on this planet and most likely on any other planet as well – will by necessity have this drive to survive and flourish, for if they didn’t they would not live long enough to reproduce and would therefore no longer be subject to natural selection … Given these reasons and this evidence, the survival and flourishing of sentient beings is my starting point, and the fundamental principle of this system of morality. It is a system based on science and reason, and is grounded in principles that are themselves based on nature’s laws and on human nature – principles that can be tested in both the laboratory and in the real world.[8]

Continue reading

The Great But Silent Man on the Stage


At Vancouver Island University’s convocation ceremony on June 3rd this summer, something called the ‘Recognition of Academic Emeritus Designation Award’ was bestowed upon the very deserving Bob Lane. Bob, who was referred to in the program only as “Robert Lane, Professor, English (Retired)”, did not speak, nor was anything really said about him when the award was conferred. There was nothing else to indicate who this man, who sat in his paradoxically imposing but gentle-looking way, is. He smiled and nodded benevolently as his name was mentioned and a witty comment was made, betraying to nobody that he was stoically sitting through the event with a sore and aching back – a ‘mala spina’, as he joked later on. It occurred to me then how odd it was that hardly any of the audience members, or indeed those on the stage behind him, had any idea who this enigmatic man was or what he had been to the former college that was now granting the degrees being conferred on stage. Indeed, it is difficult to imagine what, if anything, Vancouver Island University would be if Bob had not been a part of it. And as a friend and fan of Bob’s and a former member of the VIU Philosophy department, I haven’t been able to shake the feeling that it falls in part to me to fill in that on-stage silence with a few words after the fact.

Continue reading

Is there a basis for universal morality among humans?

[Image: Mikhail’s moral map]

While some have their doubts, John Mikhail thinks the evidence points to yes. Today’s post is an interview with Mikhail in which he summarizes his case that, beneath all the surface differences we see on moral issues, a common moral sense is as much a part of the human makeup as is Chomsky’s universal grammar.

Enjoy!

http://philosophybites.com/2011/06/john-mikhail-on-universal-moral-grammar.html

 

The moral sense in young infants

Good morning, everyone. My apologies for not posting anything yesterday: I have a new Thursday class to prepare for each week and I spend the rest of the day in other activities. I’ve decided to switch to a Friday/Monday sequence for these posts.

So far in the series, I’ve posted on the ways our moral views seem to be shaped by environmental factors. Haidt’s experiments show that we tend to retain our moral views despite losing our reasons for holding them, and suggest that we often tend to rationalize the moral views we have (and believe the rationalizations) rather than arrive at our moral views on the basis of reasons. So we are open to many morally irrelevant influences in our moral views. I’ve also presented evidence that seems to imply that our views on the moral acceptability of violent responses to insults can be conditioned by whether our culture is descended from herders or farmers; that our views on slavery can be conditioned by whether we’re living in a climate that’s good for agriculture; and that our views on cannibalism and slavery may have their roots in details about the caloric intake of various food sources. It’s been shown that many of these factors exert influence over us whether or not we are educated, intelligent, non-religious, and so on. I’ve also explored the possibility that those who disagree with us over fundamental moral issues really have the same moral views as we do, but are just understandably ignorant of some non-moral facts. I argued there that it’s unclear what non-moral facts others could be mistaken about in the cases of Hopi animal torture, Roman gladiatorial practices, and Chinese foot-binding (it can’t be the non-moral fact that not binding one’s daughter’s feet will worsen her marriage prospects, since that is a non-moral fact that people would be correct in believing).

As Frank pointed out last time, this may give the impression that morality is just a matter of training: our culture, partly in response to environmental factors, develops a moral view and then inculcates it in the ‘blank slates’ its young are born with. In fact, though, there is a growing body of research that shows otherwise: while some of our moral dispositions are supplied by society, others are innate.

I’d like to turn to that research now. Here’s a good way into the material: http://www.cbsnews.com/videos/born-good-babies-help-unlock-the-origins-of-morality-50135408/

If that link doesn’t work where you’re reading this, you can read a transcript of the broadcast here:

http://www.centertao.org/media/Why-of-it-all.pdf

Culture of Honour

[Image: graph]

I’ll get to the above graph in a moment, but first I want to tell a little story.

A man sits in a bar with a bunch of his friends one evening. The group is having a pleasant night out until a stranger walks up to the table and speaks insultingly to the man. He sneeringly claims to have been having sex with the man’s fiancée on an ongoing basis, and states that he and the man’s fiancée have had many jokes about the man’s diminished sexual attributes and abilities. He lays it on pretty thick for another minute and then says, “I’m heading out to the parking lot now, and if you’re any man at all — which we all know you’re not already — you’ll follow me out and prove it.” What should the man do? When asked about cases like this, men and women from the southern US were much more likely than northerners to think that the man should go and punch it out, however bad the fight may be. While northerners, on reflection, tended to think that the moral course of action would be to ignore the provocation and laugh off the stranger’s insults, southerners tended to think that one wouldn’t be ‘much of a man’ if he didn’t respond with violence.

This and many similar cases are discussed by Richard Nisbett and Dov Cohen in Culture of Honor: The Psychology of Violence in the South. To further explore the different ways northerners and southerners think of fighting to defend one’s honour, they sent off job applications to businesses in Michigan and Tennessee with bogus resumes and cover letters. These letters were all the same except for one crucial difference. Half the letters sent to employers in each state ended by admitting that the writer had served time for a felony — beating another man to death outside of a bar in a fight that got out of hand after the other man had insulted the first man’s wife. The other half also confessed to a felony, but this time it was stealing expensive cars from a car lot when the writer had no other way to support his family. The results were curious. The applications that confessed to stealing the cars were rejected by all employers, and the northern employers similarly wanted nothing to do with the man who had beaten someone to death outside the bar. But the southern employers tended to feel quite differently about the man who stood up for his wife’s honour. One wrote to applaud him for his honesty in admitting his criminal history up front, and said that it sounded like one of those understandable but unfortunate things “that could happen to anyone”. Another southern employer expressed regret that he didn’t have a job opening at present. And so on. Continue reading