Justinthecanuck has generously provided the following essay, which he wrote for a soon-to-be-published book, for us here at Episyllogism, as a part of the discussion on morality, naturalism, constructionism, the is/ought distinction, and a number of important themes in moral philosophy. Thank you!
Readers may want to review our recent discussion:
- Moral philosophy and empirical psychology;
- Point/Counterpoint – Moral Psychology: An Exchange;
- Philosophy, psychology, anthropology: morality
The following essay will appear, with minor edits, as a chapter in a forthcoming volume on scientism edited by Maarten Boudry and Massimo Pigliucci. I include it here as a sneak preview in case any episyllogists are interested: please cite only with permission.
Scientism is the view that the only facts are those that could in principle be learned exclusively from the natural and social sciences, empirical observations, mathematics, and logic, and that any beliefs that can only be justified in some other way are merely sham knowledge. And yet, adherents of scientism often speak and act as though we ought to do things (for instance, that we ought to accept scientism). Clearly, if it is a fallacy to derive an ought from an is, then these devotees of scientism are in trouble.
The putative fallacy of deriving an ought from an is, sometimes called the naturalistic fallacy, arises from the simple observation that an argument whose conclusion non-trivially contains a concept that does not appear in any of its explicit or implicit premises cannot be valid. An apparently straightforward application of this general principle is that an argument whose conclusion non-trivially contains the concept ‘ought’ (or ‘morally right’, ‘morally good’, etc.), but whose premises do not contain that concept, cannot be valid.
Is it possible that the naturalistic fallacy is not in fact a fallacy? While the reasoning behind the fallacy diagnosis seems airtight, there is one possibility that could, in theory at least, make room for a scientistic approach: the possibility that the move from an is to an ought may be legitimate in some cases. As Charles Pigden remarks in ‘The Is-Ought Gap’, certain deontic logics may employ logical ‘ought’ operators that appear in the premises without being mentioned in the conceptual content of the statements, but that become more salient in the conclusion. Since it is a matter of considerable controversy which modal logical system, if any, is to be preferred in such cases, and which system of deontic logic, if any, is the correct correlate of that general modal system, it remains a live possibility that an ought can legitimately be derived from an is. I explore this interesting possibility a little later.
Much of the discussion of scientism takes place in popular books and lectures, far away from technical controversies over operators and metatheorems in deontic logics. In these popular discussions, it is common for moral philosophers to be portrayed in caricature as benighted longhairs stumbling around in the dark for want of empirical enlightenment – enlightenment that scientism alone can provide. Sam Harris, perhaps the most notorious among the ‘scient-ists’ in the popular press, has written a book and given several talks in support of the supposedly revolutionary view that science can inform human values and steer us away from moral relativism. The fact that prominent philosophers have been articulating clear moral views, and opposing relativism, for millennia appears not to impress Harris: in his self-presentation, the attempt to use observations (and, curiously enough, non-empirical thought experiments such as one involving a fictional society that systematically blinds every third child in obedience to some religious scriptures) represents a new and hitherto neglected direction in moral thinking.
It is hard not to wonder why Harris thinks his approach of considering the overall levels of happiness in actual or hypothetical cases is some sort of novelty, or why he would brazenly present this view with great fanfare as a significant achievement if he recognized that it is what most philosophers, and perhaps most people, already employ and have perhaps always employed in working through moral issues. It is also tempting to wonder, at times, whether Harris is entirely capable of grasping the nature of the is/ought problem, as when he says:
Moore felt that his “open question argument” was decisive here: it would seem, for instance, that we can always coherently ask of any state of happiness, “Is this form of happiness itself good?” The fact that the question still makes sense suggests that happiness and goodness cannot be the same. I would argue, however, that what we are really asking in such a case is “Is this form of happiness conducive to (or obstructive of) some higher happiness?” This question is also coherent, and keeps our notion of what is good linked to the experience of sentient beings.
Unfortunately, Harris does not tell us what his argument for that conclusion would be in the unspecified circumstances under which he “would argue” for it. But to be as fair as possible to Harris, I focus on what I take to be his clearest articulation of how he thinks it possible to cross the is/ought divide. His plan is to make the crossing via an articulation of the synonymy of certain moral and non-moral terms:
To say we “should” follow some of these paths and avoid others is just a way of saying that some lead to happiness and others to misery. “You shouldn’t lie” (prescriptive) is synonymous with “Lying needlessly complicates people’s lives, destroys reputations, and undermines trust” (descriptive). “We should defend democracy from totalitarianism” (prescriptive) is another way of saying “Democracy is far more conducive to human flourishing than the alternatives are” (descriptive) … Imagine that you could push a button that would make every person on earth a little more creative, compassionate, intelligent, and fulfilled — in such a way as to produce no negative effects, now or in the future. This would be “good” in the only moral sense of the word that I understand. However, to make this claim, one needs to posit a larger space of possible experiences (e.g., a moral landscape). What does it mean to say that a person should push this button? It means that making this choice would do a lot of good in the world without doing any harm. And a disposition to not push the button would say something very unflattering about him. After all, what possible motive could a person have for declining to increase everyone’s well-being (including his own) at no cost? I think our notions of “should” and “ought” can be derived from these facts and others like them. Pushing the button is better for everyone involved. What more do we need to motivate prescriptive judgments like “should” and “ought”? 
Harris’s partner-in-scientism Michael Shermer attempts to cross the is/ought boundary in a similar way:
Morality involves how we think and act toward other moral agents in terms of whether our thoughts and actions are right or wrong with regard to their survival and flourishing. By survival I mean the instinct to live, and by flourishing I mean having adequate sustenance, safety, shelter, bonding, and social relations for physical and mental health. Any organism subject to natural selection – which includes all organisms on this planet and most likely on any other planet as well – will by necessity have this drive to survive and flourish, for if they didn’t they would not live long enough to reproduce and would therefore no longer be subject to natural selection … Given these reasons and this evidence, the survival and flourishing of sentient beings is my starting point, and the fundamental principle of this system of morality. It is a system based on science and reason, and is grounded in principles that are themselves based on nature’s laws and on human nature – principles that can be tested in both the laboratory and in the real world.
Many others have pointed out the numerous moral and evaluative assumptions Harris and Shermer must rely on in order for their approach to work. My aim here is not to repeat these important criticisms, but rather to point out a separate but very general difficulty that faces any proponent of scientism who attempts to go this route. For what Shermer, Harris and others have in common is that they attempt to cross the is/ought gap by tucking the real work into semantics, in the sense that they can only do their work if one assumes evaluative definitions of quasi-naturalistic terms like ‘survival’ and ‘flourishing’ that are meant to do the heavy lifting. Is this a legitimate solution to the problem? While it would be difficult to refute all such attempts categorically, the prospects seem extremely bleak for scientism. For – and this is the crucial point – scientism is not only committed to ethical naturalism. It is committed, much more importantly, to the broader and more extreme view that all facts, including not only ethical facts but also philosophical facts more generally (including semantic facts), must ultimately be observational, logical, mathematical or scientific facts in a non-trivial sense of ‘scientific’.
Once this has been understood, the prospects for scientism in ethics appear far dimmer than the prospects of merely bridging the is/ought divide by appropriately defining one’s logical operators or converting normative theory problems into semantics problems. Perhaps Shermer and Harris hold that questions about the proper semantics of logical operators or ordinary language can be resolved by appeals to common usage, hence making semantic facts a subset of social science facts. But how would this work in the case of logical operators? And if the claim that morality is whatever best promotes survival and flourishing of sentient beings is meant to be another way of making a statistical prediction about what English speakers would assent to, then the difficulty is not only one of squaring this with what we tend to be interested in when we try to determine what is morally right or wrong. For there is then also the further difficulty that the empirical predictions that seem to be implied about survey results seem unlikely to turn out in Harris and Shermer’s favor.
To see this clearly, it is useful to consider how Shermer and Harris seem to have arrived at their very odd definitions of key terms. If they were to say that acting morally is by definition identical with whatever leads to the maximum survival of members of our species or the majority of its members (using ‘survival’ as most competent English speakers understand it), and if one were also to use the linguistic intuitions of English speakers as the criterion that determines the correct definition of a term, then the moral claims they would be making are empirical but very likely false. Most English speakers, one suspects, would not agree with statements like “In the event of a nuclear holocaust that leaves the human race irreparably genetically altered in such a way that the current generation and all future generations will consist entirely of individuals in constant and excruciating agony, seldom capable of thought (and then only at the lowest level), and uninterested in reproduction, it is morally necessary to force humans to keep reproducing for the longest possible time.” If acting morally were instead defined as whatever leads to the maximum survival of conscious or self-conscious beings regardless of species, then the same counterexample applies if we add the condition that there are no other conscious beings in the universe. This is why, in order to make their views even slightly plausible, Shermer and Harris are both obliged to add the evaluative (and conveniently slippery) term ‘flourishing’ to the non-evaluative ‘survival’. It is also, presumably, part of why Shermer parts ways with dictionaries and defines ‘survival’ not in terms of actually continuing to exist, but rather in terms of the instinct that impels people to wish to continue existing.
Shermer must do much more than simply define his key terms plausibly and naturalistically, though: for his project to work, he must also build moral principles out of them without adding in any moral assumptions. It is here that his difficulties become particularly revealing. What he offers as a moral ‘principle’ is clearly nothing of the sort. Again, what he tells us is that “the survival and flourishing of sentient beings is my starting point, and the fundamental principle of this system of morality. It is a system based on science and reason, and is grounded in principles that are themselves based on nature’s laws and on human nature – principles that can be tested in both the laboratory and in the real world.” But “the survival and flourishing of sentient beings” cannot be a ‘principle’ in the moral or even the scientific sense of the term: it doesn’t tell us what to do or what to expect. Whatever real work Shermer does to get himself from an is to an ought is hidden elsewhere, and the reference to ‘laboratory testing’ is merely a convenient scientific-sounding cover, devoid of content, whose purpose seems to be that of giving credibility to the apparently impossible enterprise without giving his audience a glimpse of how things are meant to work under that cover. It is surprising that such a committed opponent of pseudoscience in its other forms would fail to recognize its hallmark features here.
Harris does slightly better than Shermer in making clear how he intends to bridge the gap, as we have seen. He says, for instance, that “’You shouldn’t lie’ (prescriptive) is synonymous with ‘Lying needlessly complicates people’s lives, destroys reputations, and undermines trust’ (descriptive).” In saying this, Harris owes us a plausible account of synonymy according to which this and other such claims come out true – but then it is not clear how that account could be arrived at in a genuinely scientific manner. Also, surely things cannot be as simple as Harris claims even in this limited case, since one could by the same token say that the implausible prescriptive statement ‘You shouldn’t expose the diabolical plot of this corrupt public figure’ is synonymous with the plausible descriptive statement ‘Exposing the diabolical plot of this corrupt public figure would needlessly complicate people’s lives, destroy reputations, and undermine trust.’
Harris does of course provide himself with the weasel word ‘needlessly’ here, but now the question becomes, what counts as needless, and how does one determine this in a purely scientific manner? The problem is now not only that Harris needs to derive an ought-statement from is-statements, but that he also has to balance the resulting, defeasible ought-statement against many other defeasible ought-statements, and (for his project to succeed in an interesting way) he must make these comparative judgments fully within the confines of empirical science. Later in the same quoted passage, Harris adverts to the disposition and motive of a potential button-pusher in support of a hypothetical moral verdict. Since Harris is arguing that morality follows logically from some empirical facts plus the correct definitions of moral and related terms, surely he owes us an account of what those correct definitions are and how he knows that they are the correct definitions. So he has two formidable challenges here, not one. He first has to come up with a clear, non-evasive standard by which to bridge the is/ought gap (“An act is moral if and only if it has property X”). This is the part of the problem that involves the threat of the so-called naturalistic fallacy, since one could (as Moore says) respond by saying, “I understand that it has property X; but it’s not yet clear to me that it must be moral.” But Harris also faces another challenge, which is to articulate and defend accounts of synonymy and correct definitions that are both plausible and wholly a matter of empirical science, logic, mathematics, and observation. And about this, we have so far heard nothing.
To summarize the discussion so far: there may not be a general basis for thinking that the is/ought gap is unbridgeable in principle, for the reasons Pigden discusses. But scientism has a more serious, and more general, problem to deal with in the naturalization of ethics: the problem not of deriving an ought from an is, but rather of deriving the philosophical from the non-philosophical. If the is/ought gap can be crossed legitimately, the crossing will only be possible (and its legitimacy can only be defended) by means of some careful semantic or other maneuvering. But each of these maneuvers incurs a further cost which can only be paid off in the currency of philosophy; and to naturalize that part of philosophy also, one must take on yet further costs. While this need not be a problem at all for Pigden and other philosophers who merely wish to derive an ought from an is, it is a significant problem for Shermer, Harris and other devotees of scientism who are perpetually stuck borrowing large sums at high interest rates from one area of philosophy to pay off smaller debts incurred in other areas of philosophy, all in the name of freeing themselves of philosophical debt entirely. If there are serious prospects for scientism in ethics, any case for those prospects had better address this problem, which is more general and pressing than the limited problem of the naturalistic fallacy in ethics.
Still, while Harris, Shermer and other advocates of scientism are far from successful in their attempts to discharge their argumentative burdens, it seems a step too far to dismiss outright some other scientific challenges to substantive moral views. Jonathan Haidt and Joshua Greene are two prominent examples of recent thinkers who have brought the tools of empirical investigation into the moral sphere with somewhat more philosophical sophistication than Harris or Shermer have shown. Greene, indeed, is a former student of the renowned utilitarian Peter Singer. Both Haidt and Greene present their readers with unsettling evidence that our moral judgments are affected much more by emotional influences than most of us find it natural to assume.
One example of a non-rational influence on our moral judgments that Haidt discusses is the phenomenon of moral dumbfounding. In a series of experiments, Haidt and others present various subjects with disturbing moral scenarios: a family member accidentally runs over the household pet in a car, and the family then cook and eat the pet; an adult brother and sister both willingly decide to try having sexual intercourse just once, using several birth control methods, and then tell no one; someone regularly has sex with animal carcasses purchased as meat; another character tears up a flag and uses the strips to clean the toilet in her private home; and so on. These vignettes are carefully constructed to allow for no easy explanation as to why exactly the situations they depict are immoral, but most subjects in Haidt’s experiments tend to find them morally troubling and condemn the characters’ actions unequivocally. It is at this point that the interesting part of the experiment begins. The subjects are pressed to justify their negative moral judgments, and find it difficult to do so. The experimenters are trained to respond to all the most likely moral justifications with counterarguments showing that the subject would be inconsistent if he or she relied on the same moral reasoning elsewhere. This produces the so-called ‘moral dumbfounding’ Haidt is so interested in: the strong sense that something is morally wrong even though one seems unable to explain why. Faced with the strong replies to their attempts to ground their moral intuitions in plausible moral principles, Haidt’s subjects must choose between abandoning their judgments and maintaining them in the absence of any adequate support. They tend strongly to choose the latter course, rather unlike the process we seem to follow in cases where we learn of information that undermines key premises we have used in support of some view.
This, Haidt argues, provides evidence against the view that we arrive at moral conclusions on the basis of abstract reasoning from moral principles.
Greene, like Haidt, sees our capacity for moral judgment as bearing clear marks of evolutionary adaptation. On Greene’s view, many of our most common emotional tendencies – moral disgust, the desire for even self-destructive vengeance when we have been seriously betrayed, awe, loyalty, and gossip – are best explained as arising from a successful set of evolutionary ‘strategies’ for getting us to cooperate rather than defect in prisoner’s dilemma cases. To take one example of Greene’s reasoning here: a member of a species of perfectly rational and purely self-interested beings would be likely not only to defect in a prisoner’s dilemma case but also to make dishonest promises in advance about cooperating in such cases, so long as the being recognized that the person he or she betrayed would find it personally disadvantageous to take revenge for the betrayal later on. But since vengeance is typically costly, and members of a perfectly rational species would recognize that other members of the same species would recognize both this and the fact that everyone else would recognize it, the empty threat of revenge would not be very effective at motivating them to make and keep promises to cooperate. But the threat of being avenged would be very compelling if the beings were aware of universally powerful urges that lead members of the species to avenge themselves for betrayals even at the cost of a significant personal sacrifice. Greene then seeks to verify these game-theoretical explanations of morality and examine how they play out neurologically by presenting subjects in fMRIs with familiar moral dilemmas like the trolley problem and comparing their spoken solutions with their brain activity. In the end, Greene presses for agnosticism on what he calls the ‘deeper’ question of what makes certain actions objectively moral, but makes a case for adopting utilitarianism in cases of entrenched moral conflict as a pragmatic means of conflict resolution.
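The game-theoretic mechanism Greene appeals to can be illustrated with a toy payoff model (a minimal sketch of my own, not Greene's actual model; the payoff and penalty numbers are arbitrary assumptions chosen only to satisfy the standard prisoner's dilemma ordering): against a partner who never retaliates, a purely self-interested agent does best by defecting, but once the partner is disposed to avenge betrayal even at a personal cost, cooperation becomes the better choice.

```python
# Toy illustration of costly revenge sustaining cooperation in a
# one-shot prisoner's dilemma with an optional revenge stage.
# All numbers are arbitrary assumptions for illustration only.

# Standard PD payoffs, keyed by (my_move, partner_move): (mine, partner's)
PAYOFFS = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # I am betrayed
    ("D", "C"): (5, 0),   # I betray
    ("D", "D"): (1, 1),   # mutual defection
}

REVENGE_COST = 2      # what the avenger sacrifices to retaliate
REVENGE_PENALTY = 6   # what the betrayer loses when punished

def play(my_move, partner_move, partner_avenges):
    """Return (my_payoff, partner_payoff) after the revenge stage."""
    mine, theirs = PAYOFFS[(my_move, partner_move)]
    # A betrayed avenger retaliates even though it costs them:
    # this is the 'irrational' urge doing the work in Greene's story.
    if my_move == "D" and partner_move == "C" and partner_avenges:
        mine -= REVENGE_PENALTY
        theirs -= REVENGE_COST
    return mine, theirs

# Against a non-avenger, defection pays: 5 > 3.
assert play("D", "C", partner_avenges=False)[0] > play("C", "C", partner_avenges=False)[0]

# Against a known avenger, cooperation pays: 3 > 5 - 6 = -1.
assert play("C", "C", partner_avenges=True)[0] > play("D", "C", partner_avenges=True)[0]
```

The point of the sketch is that the avenger's disposition is costly to act on (retaliating yields -2 rather than 0), which is exactly why a perfectly rational agent could not credibly threaten it; only an agent with a genuine, non-calculating urge to retaliate makes the threat believable, and thereby makes cooperation the rational response.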
Not all empirical investigators of our ethical tendencies seek to explain our moral judgments in evolutionary terms. Marvin Harris, Jesse Prinz, Richard Nisbett and Dov Cohen offer plausible cultural and environmental explanations of moral attitudes and practices ranging from dietary restrictions to the localized tendency to react violently against insults to the taboo against cannibalism.
Read in what I see as a plausibly charitable light, Haidt, Greene, Marvin Harris, Prinz, Nisbett, Cohen and others – perhaps even Shermer and Sam Harris, at their better moments – are engaged in a less radical project than out-and-out scientism. Rather than suggesting that scientific research into moral judgments and practices eliminates the need for philosophical reflection in normative reasoning, they present empirical evidence that challenges our commonsense moral picture – evidence that can undermine our confidence in some of our deepest philosophical intuitions, but that seems to sharpen, rather than eliminate, the need to reflect philosophically on what follows.
One way of replying to those who employ scientistic thinking in this weaker sense is to contest some presented finding or their interpretation of it. For instance, Kahane et al. have recently argued that experiments used by Greene and others to support various substantive conclusions about morality actually misidentify other, perhaps psychopathic, tendencies as utilitarian ones. While such responses can cast serious doubt on certain weakly scientistic arguments about morality by beating them at their own game (Kahane et al. argue against the original findings by conducting a study of their own), they at best respond to particular moves in the game of empirical moral research. They cannot stand as categorical objections against empirical moral research as a whole, since they are a part of that very research.
To rule out a priori the relevance of all empirical research and evolutionary reasoning to substantive morality, one needs a general philosophical argument. Massimo Pigliucci provides such an argument in a recent response to Greene:
Let me interject here with my favorite example of why exactly Greene’s reasoning doesn’t hold up: mathematics. Imagine we subjected a number of individuals to fMRI scanning of their brain activity while they are in the process of tackling mathematical problems. I am positive that we would find the following (indeed, for all I know, someone might have done this already):
There are certain areas, and not others, of the brain that light up when someone is engaged with a mathematical problem.
There is probably variation in the human population for the level of activity, and possibly even the size or micro-anatomy, of these areas.
There is some sort of evolutionary psychological story that can be told for why the ability to carry out simple mathematical or abstract reasoning may have been adaptive in the Pleistocene (though it would be much harder to come up with a similar story that justifies the ability of some people to understand advanced math, or to solve Fermat’s Last Theorem).
But none of the above will tell us anything at all about whether the people in the experiment got the math right. Only a mathematician — not a neuroscientist, not an evolutionary psychologist — can tell us that.
Pigliucci is surely right to point out that the justification of a moral judgment is very different from its causal explanation, and that a moral theory cannot be straightforwardly vindicated by a discovery that that theory is common, innate, or even game-theoretically optimal unless one has an accompanying evaluative or normative premise. And in an important sense, mathematical reasoning may well stand in the same relation to mathematics that moral thinking stands in to morality. But the success of Pigliucci’s interesting analogy in ruling out Greene’s project or anything along similar lines depends on how the mathematical analogy accommodates what Greene, Haidt and others would presumably see as significant differences between the two. Among those apparent differences are these: there is simply far greater inter-cultural and intra-cultural disagreement about morality than there is in mathematics; it is at least much more difficult to verify simple moral claims objectively than it is to find objective support for the claims of basic arithmetic; and mathematical judgments tend to survive emotional manipulation unchanged in ways that moral judgments do not. It seems that Pigliucci’s argument by analogy could be defended in one of two ways.
The first strategy would be to argue that the empirical differences between the psychology of mathematical judgment and the psychology of moral judgment are not really so important. Perhaps there is a good case to be made that there is as much evidence of disagreement on mathematical issues as there is on moral issues, or that, where moral and mathematical judgments of the same level of sophistication are made, the evidence of emotional influence turns out to be equally strong or weak, and so on. The problem with this approach is that, even if it is successful, it gives the larger game away by engaging the debate on empirical terms, thus implicitly conceding the relevance of empirical issues to moral ones.
The second strategy is the one taken by Pigliucci himself in a brief, private conversation with me on the subject. This involves seeing the study of morality as a completely abstract enterprise on a par with pure mathematics or geometry. There is no purely mathematical reason to opt for non-Euclidean over Euclidean geometry, and the theorems one proves in Euclidean geometry are equally well-supported whether or not Euclidean geometry turns out to hold in the real world. Similarly, one could take morality to be a domain of inquiry that is, by definition, completely closed to empirical support or refutation. One could then fashion any number of moral systems, each with its own set of axioms, and it would be trivially true that any attempt to evaluate those moral systems on the basis of experimentation or evolutionary theorizing would commit a category mistake. The difficulty for this strategy is that we generally want to know, in the actual world, which actions are morally permissible and which are not. If the moralist can only point to an unlimited range of moral systems and say of some action that it is permitted in these but not those, and that we have no way of knowing which moral system (if any) applies to our world, then it is hard to see that the project of ethical inquiry is very helpful. Conversely, if there are signs in our world that indicate which a priori moral theory applies, akin to the scientific evidence that shows Euclidean geometry not to apply to our world, then the door is open again to the relevance of empirical findings in practical moral thinking.
It is also possible to take neither an empirical nor an anti-empirical strategy in defending Pigliucci’s analogy with mathematics, and to insist instead that the lines of evidence brought by Haidt, Greene and others against the reliability of our moral intuitions simply do not matter to the content of morality, since the project of moral inquiry has to do with what is moral, not with the accuracy or trustworthiness of our judgments about what is moral. But this approach has its own drawbacks. To continue the analogy with mathematics: if we were to discover that people in one psychological state could reliably be predicted to make mathematical judgments that were noticeably and importantly different from those who are not in that psychological state, and if there were no clear, objective way to compare the judgments of the two groups with some mind-independent facts to see whether the psychological state enhanced or hindered our mathematical judgments, then it would surely be unwise to remain as confident as before in our ability to do mathematics objectively. Similarly, if the empirical evidence that Haidt, Greene and others present for the unreliability of our moral judgments came to be supplemented with further empirical evidence that is a thousand times stronger, and to affect all our moral judgments, then it would be unreasonable for us to maintain confidence in our moral theories. Perhaps a general agnosticism about morality would be the most reasonable course, then; and this epistemological position could in turn be used to support a metaphysical theory of subjectivism, error theory, etc. and thereby to support some currently implausible views in normative theory and applied ethics.
But the view that empirical arguments along the lines of Haidt’s and Greene’s should be discounted immediately on the grounds that they attempt to bring empirical issues into a non-empirical discussion implies that such empirical findings would continue to be irrelevant even if they were a thousand, or even a million, times stronger.
To sum up: the crude attempts of Shermer and Sam Harris to completely replace philosophical reasoning with scientific investigations in morality seem hopeless, since that project implicitly relies on at least as many philosophical assumptions as it seeks to replace; and moreover, the philosophical assumptions the project relies on are highly dubious for familiar reasons. But there is room for many more moderate empirical projects in ethics that do not originate from a naïve misunderstanding of the philosophical enterprise, and it is possible for those projects to legitimately call into question not only the application of normative ethical theories to particular cases, but also our basis for believing in some substantive normative and metaethical theories. Rather than dismiss all such projects out of hand, philosophers ought to take them seriously and accept or reject them on a case by case basis.
Gray, John, 2014: ‘Moral Tribes: Emotion, Reason, and the Gap Between Us and Them by Joshua Greene – review: Is this call for rational thinking to resolve major conflicts crude reductionism?’ The Guardian, January 17th, 2014. http://www.theguardian.com/books/2014/jan/17/moral-tribes-joshua-greene-review
Greene, Joshua D., 2013: Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. New York: Penguin Press.
Haidt, Jonathan, 2012: The Righteous Mind: Why Good People Are Divided by Politics and Religion. Pantheon.
Harris, Marvin, 1975: Cows, Pigs, Wars and Witches: The Riddles of Culture. London: Hutchinson & Co.
Ibid., 1998: Good to Eat: Riddles of Food and Culture. Illinois: Waveland Press.
Harris, Sam, 2005: The End of Faith. New York: W. W. Norton; p.283, note 24.
Ibid., 2010a: The Moral Landscape. Free Press.
Ibid., 2010b: ‘Moral Confusion in the Name of Science’. Online at http://www.samharris.org/blog/item/moral-confusion-in-the-name-of-science/
Ibid., 2014: ‘Clarifying the Landscape’. Online at https://www.samharris.org/blog/item/clarifying-the-landscape
Kahane, Guy; Everett, Jim A. C.; Earp, Brian D.; Farias, Miguel; and Savulescu, Julian, 2014: ‘“Utilitarian” Judgments in Sacrificial Moral Dilemmas Do Not Reflect Impartial Concern for the Greater Good’. Cognition, Vol. 134, Jan. 2015, pp.193-209.
Nisbett, R. E., & Cohen, D., 1996: Culture of Honor: The Psychology of Violence in the South. Boulder, CO: Westview.
Pigden, Charles, 2013: ‘The Is-Ought Gap’. In LaFollette et al. (eds.), The International Encyclopedia of Ethics. Oxford: Wiley-Blackwell. https://www.academia.edu/5664257/The_Is-Ought_Gap , p.8.
Pigliucci, Massimo, 2016: ‘The Problem with Cognitive and Moral Psychology’. The Philosophers’ Magazine Online, Feb. 10th 2016.
Prinz, Jesse, 2007: The Emotional Construction of Morals. Oxford: Oxford University Press.
Schurz, Gerhard, 1997: The Is-Ought Problem: A Study in Philosophical Logic. Dordrecht: Kluwer.
Shermer, Michael, 2015: The Moral Arc. Henry Holt and Co.; pp.11-12.
 Pigden is referring here to Schurz 1997.
 Harris, Sam 2010a
 Harris, Sam 2005
 Harris, Sam 2014
 Greene 2013
 Harris, 1975 and Harris, 1998
 Prinz, 2007
 Nisbett and Cohen, 1996
 An example of what I mean by reading in a charitable light here: John Gray (Gray, 2014) makes much of the following quote by Greene: “Morality is a set of psychological adaptations that allow otherwise selfish individuals to reap the benefits of social co-operation.” Does Greene really believe that morality itself is a set of psychological adaptations? (It is difficult to know what that could mean, since any faculty we use for thinking about, reacting to, learning or even creating facts of some sort cannot, presumably, be identical with those facts themselves.) Or is this a brief, albeit misleading, way of saying that our capacity for moral judgment is a set of psychological adaptations? (That would be true, but almost trivially so.) I read Greene as saying here something parallel to “The human capacity for mathematical thinking is a set of psychological adaptations that allows individuals to reap the benefits of precise calculations”, neither explicitly rejecting nor accepting the existence of a mind-independent component to the domain. If I am mistaken in my interpretation of Greene, but my attempt to read him charitably at least points to a consistent alternative position on the issue, then I propose that it will be useful for his critics to read him as if he were adopting that position. The important issue, after all, is not whether Haidt and Greene are out of their element in discussing morality: it is whether a range of ethical and metaethical views many of us hold are undermined by certain empirical considerations, and more broadly, whether such empirical undermining of core ethical or metaethical views is possible.
 Kahane et al., 2014
 Pigliucci, 2016
 I should state here that I am not at all convinced that Greene is in fact making any of these errors. But I leave that issue aside, since my purpose here is to examine whether it is legitimate to rule out any possible project along the lines of Greene’s, not to assess the merits of Greene’s project itself.
 This was in the early summer of 2014.