
This is the sixth post in my moral anti-realism sequence, though I wrote it to work as a standalone piece.

This post explains why I think confident belief in moral realism and moral uncertainty don’t go together. Since moral uncertainty often comes up in a moral realist context, I think this causes some problems for the concept.

In this sequence’s final post, titled The “Moral Uncertainty” Rabbit Hole, Fully Excavated, I will present three related but more cleanly defined concepts with which we could potentially replace “moral uncertainty”:

  • Deferring to moral reflection (and uncertainty over one’s “idealized values”)
  • Having under-defined values (deliberately or by accident)
  • Metaethical uncertainty (and wagering on moral realism)

My goal is for these concepts (alongside further distinctions and caveats) to capture as much of the original meaning of “moral uncertainty” as we can salvage.

What is moral uncertainty?

In a moral realist context, “moral uncertainty” means uncertainty about what we all-things-considered morally ought to do (MacAskill, Bykvist and Ord, 2020).

Moral anti-realists may use the same phrase, but to my knowledge, there’s been little discussion on how to conceptualize moral uncertainty under anti-realism. That will be the subject of my sequence’s final post, where I’ll also discuss how to allow for metaethical uncertainty (in case moral realism is correct after all).

Why I consider the concept unsatisfying

Moral realism implies the existence of a speaker-independent moral reality. Being morally uncertain means having a vague or unclear understanding of that reality. So there’s a hidden tension: Without clearly apprehending the alleged moral reality, how can we be confident it exists?

Moral realists may give various replies to this challenge. However, as I will argue in the following section, I think the only path to moral realism worthy of the name involves gaining clarity on the true object-level morality.

Note that philosophers use “moral realism” to mean different things. In this sequence’s first post (What Is Moral Realism?), I explained how I’m reserving the term for views that have action-relevant implications for effective altruists.

Also, note that my claim isn’t that moral uncertainty is altogether unworkable. Instead, I argue that moral uncertainty almost by necessity[1] implies either metaethical uncertainty[2] (uncertainty between moral realism and moral anti-realism) or confident anti-realism.

Three inadequate replies

Below, I’ll describe three responses people might give in reply to my challenge and discuss why I find them inadequate.

(1) We know at least some moral facts, but certain aspects of morality could be forever unknowable

One way to disagree with me and claim that confident belief in moral realism and moral uncertainty do go together relies on the concept “irreducible normativity.” (See this post for a detailed discussion of the concept.) Proponents of irreducible normativity face a challenge. They have to explain how their concepts even operate, how normative concepts can be “irreducible” yet still meaningful. They must also explain how irreducible normative statements can successfully refer to a “normative reality.” I don’t think that a successful explanation exists, but I’ll now describe how someone could attempt to provide one and how it would relate to moral uncertainty.

A proponent of irreducible normativity might argue that we can ground moral realism in the introspective analysis of moral concepts, specifically of morally self-evident statements like “Torturing innocent children is wrong.” Their claim, then, is twofold:

(1) Moral statements are statements about a speaker-independent normative reality.
(2) Some such statements are self-evident (and therefore true).

Thus, so the argument goes, we now have an existence proof for at least some moral facts. (They would further seek to establish that those are irreducibly normative moral facts.) From there, we are free to remain uncertain about other moral facts. We don’t even need a recipe for discovering other moral facts because, for all we know, aspects of the true moral reality may – conceivably? – remain forever inaccessible to us.

No matter how I interpret it, I find the position sketched above, the idea that there are well-defined moral facts that could remain forever inaccessible to us, unintelligible. Even if there were an interpretation that made it intelligible, the stance would still be pointless. By “pointless,” I mean that, ex hypothesi, there would be no way, not even in theory, to learn anything about specific aspects of the “moral reality” beyond the self-evident statements. Therefore, those parts of the moral reality would be irrelevant for all practical purposes.

Still, I think the notion of “forever inaccessible moral facts” is incomprehensible, not just pointless. Perhaps(?) we can meaningfully talk about “unreachable facts of unknown nature,” but it seems strange to speak of unreachable facts of some known nature (such as “moral” nature). By claiming that a fact is of some known nature, aren’t we (implicitly) saying that we know of a way to tell why that fact belongs to the category? If so, this means that the fact is knowable, at least in theory, since it belongs to a category of facts whose truth-making properties we understand. If some fact were truly “forever unknowable,” it seems like it would have to be a fact of a nature we don’t understand. Whatever those forever unknowable facts may be, they couldn’t have anything to do with concepts we already understand, such as our “moral concepts” of the form (e.g.,) “Torturing innocent children is wrong.”

To conclude, “irreducible normativity” seems like a confused concept.[3] At the very least, a moral reality we could never access would be pointless. For these reasons, irreducible normativity cannot stand up to the challenge posed at the outset of this post.

(2) Uncertainty between a minimalist version of moral realism and more far-reaching versions

I will now discuss another attempt at replying to my challenge. This attempt is compatible with moral naturalism, so it doesn’t rely on irreducible normativity. It goes as follows:

Someone might say that moral realism is true because we can see that some moral truths are self-evident. (So far, this argument resembles the one in the previous section.) Further, they may say we can be morally uncertain because there might be moral truths in addition to the ones we’ve already recognized. Either way, the person might say, moral realism is true – what we’re uncertain about is simply the extent of the moral reality. In one scenario, that moral reality consists of only the moral statements we already recognize (a “minimalist” moral reality). In the other scenario, it also contains further-reaching moral truths.

I will call the above an argument for “minimalist moral realism.” My reply is that, sure, if we want to use terminology this way, we can say, “minimalist moral realism is true.” However, I wouldn’t consider “minimalist moral realism is true” exciting news for effective altruists. After all, it just implies that aspects of the moral reality are clear/unambiguous, so that some moral statements are (near-) universally compelling and agreed-upon. That’s a sound basis for rejecting sentiments of nihilism or moral relativism – but as I’ve argued in previous posts, moral anti-realism differs from those two views! Minimalist moral realism doesn’t say anything about how to address open normative-ethical questions within effective altruism (e.g., it doesn’t promise answers to questions of population ethics or theories of value).

That said, arguably “minimalist moral realism” has been the life-changing insight that got many people into effective altruism in the first place.[4]

In any case, when I look at how effective altruists speak of moral realism’s practical implications, “minimalist moral realism” seems too weak to qualify. Therefore, in my preferred terminology, “minimalist moral realism” is not moral realism.[5]

To summarize, the issue with self-evident moral statements like “Torturing innocent children is wrong” is that they don’t provide any evidence for a moral reality that covers disagreements in population ethics or accounts of well-being. To be confident moral realists, we’d need other ways of attaining moral knowledge and ascertaining the parts of the moral reality beyond self-evident statements. In other words, we can’t be confident moral realists about a far-reaching, non-trivial, not-immediately-self-evident moral reality unless we already have a clear sense of what it looks like.

(3) We may be seeing enough of a blurry moral reality (analogies to chemistry and mathematics)

Lastly, someone might argue that we don’t have to see the moral reality clearly to ascertain that it’s there. Instead, maybe we only have to discern enough of its blurred contours.

Imagine you see a big grey animal behind the bushes, but you’re uncertain whether it’s an elephant, a hippo, or a rhino. Either way, you are a “realist” about the big grey animal: there is an animal out there, and there is a fact of the matter about its species.

Analogously, in the early days of mathematics or chemistry, investigators must have “seen” enough to understand that there was something there, some structure for them to excavate. Accordingly, one could argue that people’s earliest concepts of mathematics or chemistry (or “alchemy,” as it was then called) were already pointing at the relevant “realities.”

Someone might now argue that the above domains are analogous to moral realism. Maybe moral concepts are pointing at a well-defined reality whose governing principles we don’t yet (fully) understand, but which could become clear to us eventually.[6]

Ultimately, these analogies fail. Depending on what we mean by “early days,” the pre-scientific concept of “alchemy” was very much not the same as Lavoisier’s scientific concept of chemistry. Similarly, cavepeople who counted woolly mammoths on the grassland probably didn’t understand the idea of a formal system. Without that notion, one can’t understand why mathematics produces a rich “reality” the way it does. Even if some caveperson formed the thought “counting and things related to it,” that thought would have remained under-defined – it wouldn’t have been identical to our modern concept of mathematics. For a caveperson thinking “counting and things related to it,” it seems natural to include negative numbers. But what about fractions, irrational numbers, imaginary numbers? What about geometry? Set theory? These concepts have some similarities to “counting,” but how much similarity is enough to qualify? Of course, the point here is that, since the caveperson didn’t have any further specifying thoughts, there isn’t a correct answer to our question about reference. (Besides, even modern mathematics has branches grounded in different axiomatizations, so, in the absence of further clarifications, “mathematics” isn’t wholly specified, either.)
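To make that last parenthetical concrete, here is a standard independence result from set theory (a textbook fact, offered only as illustration): assuming the usual ZFC axioms are consistent, the continuum hypothesis (CH) can neither be proved nor refuted from them:

    ZFC ⊬ CH and ZFC ⊬ ¬CH

So even the question “is there a size of infinity strictly between the integers and the reals?” has no answer until we adopt further axioms; “mathematics,” unqualified, leaves it open.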

What’s unique about mathematics and chemistry is precisely what’s absent in the domain of morality. Mathematics and chemistry have commonly accepted, rigorous methodologies for determining what counts as “domain knowledge.” Moral philosophy doesn’t have that: no agreed-upon methods, not even well-defined building blocks. Moreover, even within the utilitarian tradition, where we find the sort of moral reasoning that’s most analogous to mathematical or scientific thought, philosophically sophisticated reasoners hold long-standing, foundational disagreements. (For instance, they disagree on how to define morally relevant well-being/welfare and on how to approach population ethics.) For these reasons, without already favoring a particular approach or object-level moral theory, we can’t expect the study of morality to function analogously to chemistry or mathematics. Moral realists may think they’re seeing the blurred contours of a crisp and far-reaching moral reality, but maybe “blurred contours” are all that’s there.[7]

What justifiably confident moral realism could look like

Convergence arguments seek to establish that, under ideal reasoning conditions, sophisticated reasoners agree about first-order (“object-level”) moral questions. If successful, these arguments could convince me that there is a moral reality which, if only we looked at it the right way, would present itself to us like Mount Fuji on a clear summer day.[8] However, I’m an anti-realist because I think convergence arguments are ultimately unsuccessful. (I will argue for this in future posts.)

Summary and takeaways

Philosophers have identified and argued for different versions of moral realism. Some of them turn out to be unintelligible on close inspection (“irreducible normativity”), while others appear trivially true but inconsequential for effective altruism (“minimalist moral realism”). What I am interested in are versions of moral realism that are intelligible and action-relevant if true. The path to confident belief in such versions has to go through convergence arguments. Therefore, moral uncertainty implies metaethical uncertainty or confident anti-realism (“moral uncertainty and confident moral realism cannot go together”).

Under metaethical uncertainty, we must first specify the object of our uncertainty. For instance, are we explicitly hoping that moral realism worthy of the name is true, staking all our caring capacity on that possibility? Alternatively, are we uncertain about what to value in a sense that's also compatible with anti-realism? How would we update our values – do we have in mind a targeted reflection strategy, or are we looking to reflect open-mindedly? Finally, how do we factor in the possibility of ending up with under-defined values?

I will explain all these concerns in a future post. The upshot is that we need a more refined set of concepts to do justice to metaethical uncertainty.

Upcoming posts

  • Dismantling Hedonism-inspired Moral Realism applies this post’s theme (that moral realism and moral uncertainty don’t go together) to common arguments for hedonism-based moral realism
  • The Life-Goals Framework: How I Reason About Morality as an Anti-Realist introduces my framework for ethical reasoning and argues that “life goals” differ between people
  • The “Moral Uncertainty” Rabbit Hole, Fully Excavated explains how to approach various kinds of “morality-related uncertainties” (and related matters) within the life-goals framework

Acknowledgments

Many thanks to Adriano Mannino and Lydia Ward for helpful comments on the draft.

References

Harris, S. (2010). The Moral Landscape. New York: Free Press.

MacAskill, W., Bykvist, K., and Ord, T. (2020). Moral Uncertainty. Oxford: Oxford University Press.

Parfit, D. (1984). Reasons and Persons. Oxford: Oxford University Press.

Parfit, D. (2011). On What Matters, Volume I. Oxford: Oxford University Press.


  1. There’s a trivial exception to this. Imagine you trust that a certain reasoner is likely to be right about matters of philosophy. If that person tells you that moral realism is true, then you may – depending on how justified you were in regarding them as an expert – assign high confidence to moral realism without more direct reasons for the belief. ↩︎

  2. “Metaethics” has two components:
    (1) The linguistic level: What do competent speakers mean when they use moral terminology?
    (2) The substantive level: To the degree that moral terminology refers to a speaker-independent moral reality, what kind of moral reality are we talking about? And is it true that such a moral reality exists?
    When I say moral uncertainty implies metaethical uncertainty (or confident anti-realism), I mean specifically uncertainty at the substantive level of metaethics. That is, I mean uncertainty about the existence of a far-reaching, speaker-independent moral reality. ↩︎

  3. Given the brevity of my discussion of irreducible normativity, someone might object that I haven't considered all the ways to make the concept work. Note that I discussed and rejected some more options in this previous post. ↩︎

  4. Surprisingly many things in normative ethics follow from simple, uncontroversial premises. For instance, even though many professional ethicists would scoff at this, one could argue that a vague version of utilitarianism (one that doesn’t commit to any claims about a specific theory of well-being or population ethics) is self-evident in a purely axiological sense. By this, I mean that it’s self-evident as a function that ranks world states in terms of “desirability from an altruistic perspective.” I don’t consider utilitarianism self-evident as an answer to “What are people’s moral obligations?” or “What should be everyone’s life goals?” ↩︎

  5. This also gives my answer to Sam Harris’s challenge for moral anti-realists. Harris talks about a “moral reality” that science can help elucidate (Harris, 2010). I am broadly on board with that idea. The way I describe it, morality, indeed, has “structure.” However, that structure doesn’t come with labels – it has to be interpreted by us based on subjectively chosen evaluation criteria. As long as Harris allows for the possibility that the “moral reality” remains under-defined, I may agree with him substance-wise, but I don’t call the position “moral realism.” ↩︎

  6. Derek Parfit expressed a related sentiment at the end of his book Reasons and Persons (Parfit, 1984):
    “Non-Religious Ethics is at a very early stage. We cannot yet predict whether, as in Mathematics, we will all reach agreement. Since we cannot know how Ethics will develop, it is not irrational to have high hopes.”
    Two things are noteworthy about this quote. Firstly, the phrasing “it is not irrational to have high hopes” makes me suspect that Parfit also thought that confident moral realism (the way I’d define it) and moral uncertainty couldn’t go together. Secondly, Parfit wrote the above passage in 1984 – arguably, his high hopes have not materialized in the meantime. Parfit placed a lot of weight on convergence arguments. He argued that the three moral theories, Utilitarianism, Kantianism, and Contractualism, are three ways of “climbing the same mountain” (Parfit, 2011). (For more thoughts on Parfit’s conception of moral realism along the lines I just described, I recommend the Future of Life podcast with Peter Railton from 00:53:57 onwards.) I strongly agree that convergence arguments are the way to establish moral realism. I even think there’s a grain of truth in the “climbing the same mountain” analogy. However, I believe the convergence doesn’t go far enough. As I will argue in upcoming posts, I self-identify as a moral anti-realist because I don’t believe the convergence arguments get us much further than “minimalist moral realism.” ↩︎

  7. In the LessWrong post Arguments for moral indefinability, Richard Ngo has also argued for this option. I think that Ngo and I are pointing at the very same thing (morality being “under-defined” or “indefinable”), though we may(?) draw different conclusions when it comes to individual moral-reasoning approaches. ↩︎

  8. Someone could object that convergence arguments are never strong enough to establish moral realism with high confidence: (1) What counts as “philosophically sophisticated reasoners” or “idealized reasoning conditions” is under-defined; arguably, subtle differences in these stipulations could influence whether convergence arguments work out. (2) Even conditional on expert convergence, we couldn’t be sure whether it reflects the existence of a speaker-independent moral reality; instead, it could mean that our philosophically sophisticated reasoners happen to have the same subjective values. (3) What reasoners consider self-evident may change over time. Wouldn’t sophisticated reasoners born in (e.g.) the 17th century disagree with what we consider self-evident today?
    Those are forceful objections. If we applied only the most stringent criteria for what counts as “moral realism,” we’d arguably be left with moral non-naturalism (“irreducible normativity”). After all, the only reason some philosophers consider non-naturalism (with its strange metaphysical postulates) palatable is that they find moral naturalism too watered down as an alternative. Still, I would consider convergence among a pre-selected set of expert reasoners both relevant and surprising. Therefore, I’m inclined to consider naturalist moral realism an intelligible hypothesis. I think it’s false, but I could imagine situations where I’d change my mind.
    Here are some quick answers to the objections above: (1) We can imagine circumstances where the convergence isn’t sensitive to the specifics; naturalist moral realism is meant to apply at least under those circumstances. (2) Without the concept of “irreducible normativity,” any answers in philosophy will be subjective in some sense of the word (they have to somehow appeal to our reasoning styles). Still, convergence arguments would establish that there are for-us relevant insights at the end of moral reflection, and that the destination is the same for everyone! (3) When I talk about “morality,” I already have in mind some implicit connotations that the concept has to fulfill. Specifically, I consider it an essential ingredient of morality to take an “impartial stance” of some sort. To the degree that past reasoners didn’t do this, I’d argue that they were answering a different question. (When I investigate whether moral realism is true, I’m not interested in whether everyone who ever used the word “morality” was talking about the exact same thing!)
    Among past philosophers who saw morality as impartial altruism, we actually find a surprising degree of moral foresight. Jeremy Bentham’s Wikipedia article reads as follows: “He advocated individual and economic freedoms, the separation of church and state, freedom of expression, equal rights for women, the right to divorce, and (in an unpublished essay) the decriminalising of homosexual acts. He called for the abolition of slavery, capital punishment and physical punishment, including that of children. He has also become known as an early advocate of animal rights.” To get a sense of the clarity and moral thrust of Bentham’s reasoning, see also this now-famous quote: “The day may come when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny. The French have already discovered that the blackness of the skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may one day come to be recognised that the number of the legs, the villosity of the skin, or the termination of the os sacrum, are reasons equally insufficient for abandoning a sensitive being to the same fate. What else is it that should trace the insuperable line? Is it the faculty of reason, or perhaps the faculty of discourse? But a fullgrown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old. But suppose they were otherwise, what would it avail? The question is not, Can they reason? nor Can they talk? but, Can they suffer?” ↩︎

Comments

Thanks for this post; it seems really well researched.

As I understand it, you're saying moral uncertainty implies or requires moral realism to make sense, but since moral uncertainty means "having a vague or unclear understanding of that reality", it's not clear you can justify moral realism from a position of moral uncertainty. And you're saying this tension is problematic for moral realism because it's hard to resolve.

But I'm not sure what makes you say that moral uncertainty implies or requires moral realism? I do think that moral uncertainty strongly favours cognitivism about ethics (the view that moral statements express truth-evaluable beliefs). And it's true that cognitivism naturally suggests realism, because it's somewhat strange to be both a cognitivist and an antirealist. But it seems coherent to me to entertain a cognitivist kind of antirealism/nihilism/error theory as one of the theories you're uncertain about. If that's right, it's not clear to me that this kind of problematic tension exists for most kinds of moral uncertainty.

I say a bit more about this here, for what it's worth. Also note that I have not read the other posts in your sequence, so I may be lacking context. Likely I've missed something here — curious to hear your thoughts.

I don't say that moral uncertainty implies or requires moral realism to make sense. Primarily, my post is about how the only pathway to confident moral realism requires moral certainty. (So the post is primarily against confident moral realism, not against moral uncertainty.)

I do say that moral uncertainty often comes up in a moral realist context. Related to that, perhaps the part you’re replying to is this part:

"Since moral uncertainty often comes up in a moral realist context, I think this causes some problems for the concept.”

By “problems" (I think that phrasing was potentially misleading), I don’t mean that moral uncertainty is altogether unworkable or not useful. I mean only that, if we make explicit that moral uncertainty also includes uncertainty between moral realism vs. moral anti-realism, it potentially changes the way we'd want to deal with our uncertainty (because it changes what we're uncertain about).

A further premise here is that anti-realism doesn’t deserve the connotations of the term “nihilism.” (I argue for that in previous posts.)

If someone thought anti-realism is the same as nihilism, in the sense of "nothing matters under nihilism and we may as well ignore the possibility, for all practical purposes," then my point wouldn't have any interesting implications.

However, if the way things can matter under anti-realism is still relevant for effective altruists, then it makes a difference how much of our "moral uncertainty" expects moral realism vs. how much of it expects anti-realism.

To summarize, the "problem" with moral uncertainty is just that it's not precise enough; it doesn't quite carve reality at its joints. Ideally, we'd want more precise concepts that then tell us more about how to operate under various subtypes of uncertainty.

Ok, thanks for the reply Lukas. I think this clarifies some things, although I expect I should read some of your other posts to get fully clear.

It's not clear that your claim that "[mathematics has] commonly accepted, rigorous methodologies for determining what counts as 'domain knowledge' [while morality] does not" is true. See this paper for relevant counterarguments: http://www.pgrim.org/philosophersannual/34articles/clarkedoanemoral.pdf

In brief: the methodology used by mathematicians (postulate axioms and derive theorems from those axioms, in the long run engaging in a process of reflective equilibrium to narrow down to the right set of axioms and theorems) can also be applied in moral philosophy (and it appears to be exactly what modern moral philosophers do). Moreover, it's not at all clear that commonly-used mathematical axioms are less controversial than commonly-used moral axioms.
