A notion popular among some people interested in effective altruism is moral uncertainty[1].

There is much disagreement among intelligent people about what it is right to do, and while we each have our own beliefs about the matter, it would be foolish to be 100% certain that we are correct while all these others are not. Until further evidence is in, or until much wider consensus is reached, the only rational approach is to spread your degrees of belief across different ethical theories.

I find this notion unconvincing because I view morality as subjective. For example, if I believe bribery is wrong and someone else believes it's morally acceptable, I think neither of us is (in)correct. Instead, we simply differ in our moral preferences, similarly to how I might think apples are tasty while someone else thinks they're gross. This view is sufficient to reject the notion of moral uncertainty described above, but there are at least two ways we can plausibly attempt to preserve the conclusion that we should accommodate alternative moral frameworks in our decision-making.

First, although it seems to me very unlikely, and even incomprehensible, that there could be a correct[2] morality, one that is objectively true independent of whose perspective we take, the existence of a correct morality (belief in which is often called moral realism) is an empirical question, so I do accept a modesty argument[1] for its possibility.

Start with the premise of

Nonrealist Nihilism: If moral realism is false, then nothing matters.

Now, suppose you think the probability of moral realism is P. Then when you consider taking some action, the expected value of the action is

P * (value if moral realism is true) + (1-P) * (value if moral realism is false)

= P * (value if moral realism is true) + (1-P) * 0

where the substitution of 0 follows from the Nonrealist Nihilism premise. Since the anti-realist branch contributes nothing, only the value under moral realism can affect which action looks best, so we can prudentially act as if moral realism is true.
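Here is a minimal sketch of the wager as I read it (the function name and the numbers are purely illustrative, not from any of the linked write-ups):

```python
# A toy version of the wager above. The premise "value if moral realism is
# false = 0" comes from Nonrealist Nihilism; everything else is made up
# for illustration.
def expected_value(p_realism, value_if_true):
    value_if_false = 0  # Nonrealist Nihilism: nothing matters if realism is false
    return p_realism * value_if_true + (1 - p_realism) * value_if_false

# However small your credence in realism, only the realist branch can
# distinguish one action from another:
print(expected_value(0.05, 100))   # 5.0
print(expected_value(0.05, -100))  # -5.0
```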

In the piece linked above, Brian Tomasik points out some good reasons for rejecting this argument. But even granting that the argument holds, moral realism seems so weird and incomprehensible to me that I see no reason to prefer any possible moral realism (e.g. utilitarianism, libertarianism) over any other. Each possible moral realism is therefore canceled out by an equally likely opposite realism (utilitarianism says happiness is good, but anti-utilitarianism says happiness is bad, so these cancel out), and the argument fails.
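To make the cancellation concrete, here is a toy calculation under my assumption of a perfectly symmetric prior over mirrored realisms (the credences and values are invented for illustration):

```python
# Toy illustration of the cancellation claim: if every candidate realism has
# an equally likely "mirror" that values the same action oppositely, the
# expected value of any action nets out to zero. All numbers are invented.
credences = {"utilitarianism": 0.5, "anti-utilitarianism": 0.5}
values = {
    "utilitarianism": 1.0,        # increasing happiness is good
    "anti-utilitarianism": -1.0,  # increasing happiness is bad
}

expected = sum(credences[view] * values[view] for view in credences)
print(expected)  # 0.0 -- the prudential wager gives no guidance
```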

The second rescue of moral uncertainty is more tenable and, as far as I know, was first formalized by Ruairí Donnelly. We can call it anti-realist moral uncertainty. This is how I'd describe this idea:

I accept some uncertainty in what my future self will morally value. If I am a utilitarian today, I accept I might not be a utilitarian tomorrow. Therefore, I choose to give some weight to these future values.

I, personally, choose not to account for the values of my future self in my moral decisions. If I did, I might expect my future values to be more similar to the average human morality. So it seems to be simply a matter of personal preference. I think this reframing of moral uncertainty makes sense, even if I don't see a reason why I should adopt it.

Lastly, it is important to note that even without accounting for moral uncertainty, there are still good reasons to cooperate with people whose value systems differ from our own.

 

[1] I am not sure who first came up with these two terms (moral uncertainty and the modesty argument), so I just linked the most frequently cited write-ups I know of for each term.

[2] By correct, I mean correct in the way it is correct that the sky is blue.


Comments

Even if being a subjectivist means you don't need to account for uncertainty as to which normative view is correct, shouldn't you still account for meta-ethical uncertainty, i.e. that you could be wrong about subjectivism? That would then suggest you should in turn account for moral uncertainty over normative views.

I think you're kind of trying to address this in what you wrote about moral realism, but it doesn't seem clear or convincing to me. There are a lot of premises here (there's no reason to prefer one moral realism over another, we can just cancel each possible moral realism out by an equally likely opposite realism) that seem far from obvious to me and that you don't give any justification for.

In general, it seems overconfident to me to write off moral uncertainty in a few relatively short paragraphs, given how much time others have spent thinking about this in a lot of depth. Will wrote his entire thesis on this, and there are also whole books in moral philosophy on the topic. Maybe you're just trying to give a brief explanation of your view and not go into a tonne of depth here, though, which is obviously reasonable. But I think it's worth you saying more about how your view fits with and responds to these conflicting views, because otherwise it sounds a bit like you are dismissing them quite offhand.

[anonymous]

Ah, I definitely could have gone into more detail. This was just meant to prompt discussion on an important topic.

I'll avoid posting (things like this) in the future. I'm sorry :(

I'm sorry you had a bad experience with this post.

We definitely want to make sure everyone feels comfortable and welcome contributing.

So for reference (and for anyone reading): if you notice a downvote and realise you could've justified your post better, or framed it more sensitively, you can easily put it into your drafts to work it over and submit it (or a similar new post) again later.

You shouldn't feel sorry about this. Why did you delete your account?? There is absolutely no reason to feel bad.

As a guy who has written a lot of stuff people hated in his life, I sympathise!

But I don't think this should discourage you from continuing to post. I disagreed with this post, but the only way to be right every time is to say nothing. And as people said it's an important and difficult topic to take on.

If you found the counterarguments convincing, just say so and adjust your views. People admire that kind of thing here. If you didn't, let us know why! :)

Yeah, I think it was a really good thing to prompt discussion of; the post just could have been framed a little better to make it clear that prompting discussion was the aim. Please don't take this as a reason to stop posting though! I'd just take it as a reason to think a little more about your tone and whether it might appear overconfident, and to try to hedge or explain your claims a bit more. It's a difficult thing to get exactly right, though, and I think it's something all of us can work on.

Meta: it may be surprising this post received so many downvotes. It is making a contribution to an important topic. I'm not sure how useful the contribution is (other comments raise several issues), but we usually don't want to put people off offering ideas that may have flaws.

I guess that what led to the downvotes is tone: there seems to be a high level of confidence in the idea, which is not adequately justified while also running contrary to default opinion.

Good point to raise Owen! I strongly agree that we don't want to put people off contributing ideas that might run against default opinion or have flaws - these kinds of ideas are definitely really useful. And I think there were points in this post that did contribute something useful - I hadn't thought before about whether a subjectivist should take into account moral uncertainty, and that strikes me as an interesting question. I didn't downvote the post for this reason - it's certainly relevant and it prompted me to think about some useful things - although I was initially very tempted to, because it did strike me as unreasonably overconfident.

[anonymous]

I didn't mean to appear overconfident. I just meant to state my own views on the topic.

I'll avoid posting (things like this) in the future. I'm sorry :(

This kind of thing is hard. I wholly approve of you stating your own views, and wouldn't want to discourage posting things like this.

I'd guess that just changing the framing slightly (e.g. saying "These are my current thoughts:" at the start and "What do you think?" at the end) or adding in a couple more caveats would have been enough to avoid the negative reaction.

I hope you end up taking this response as useful feedback, and not a negative experience!

[anonymous]

You don't account for the values of your future self, but do you account for the values of a version of yourself that is idealized in some appropriate way, e.g. more rational, having thought about morality for longer, smarter, etc.? Whether this would have a significant impact on your values is an open question, which also depends on how you'd 'idealize' yourself. I'd be very interested in thoughts on how much we should expect our moral views to change upon further deliberation, by the way.

On moral realism, I assume you mean that we have absolutely no evidence about the truth of either utilitarianism or anti-utilitarianism, so we should apply a principle of indifference as to which one is more likely? I think I agree with that idea, but there still remains a slightly higher chance that utilitarianism is true - simply because more people think it is, even if we find their evidence for that questionable. Then of course there's still the question of why one should care about such an objective morality anyway - my approach would be to evaluate whether I'm an agent whose goal it is to do what's objectively moral or whose goal it is to do some other thing that I find moral.

This post raises a bunch of questions for me:

  • If you were in a simulation or a dream, would you hold uncertainty about its behaviour, within a framework of subjectivity?

  • Do you believe in changing the rules that you use to make moral decisions as you learn things?

In the piece linked above, Brian Tomasik points out some good reasons for rejecting this argument. But even granting that the argument holds, moral realism seems so weird and incomprehensible to me that I see no reason to prefer any possible moral realism (e.g. utilitarianism, libertarianism) over any other. Each possible moral realism is therefore canceled out by an equally likely opposite realism

  • Do you think that these probabilities are nonzero and that they cancel each other out?

  • How do you respond to this passage from Will's thesis on the topic?

However, the meta-ethical view that is required is realist only in a minimal sense: as long as one can make sense of a notion of moral proposition’s being true or false, and of one having better or worse evidence with respect to those propositions, then one can make sense of it being important to gain new moral information. And very many metaethical views can make sense of that. Sophisticated subjectivist moral views certainly can: it’s certainly non-obvious, for example, what one would desire oneself to desire if one were fully rational; and one can certainly improve one’s evidence on the question of what such desires would look like. And the sorts of non-cognitivist views that are defended in the contemporary literature want to capture the idea that one’s moral views can be correct or incorrect, and that one can have greater or lesser credence in different moral views.

It’s true that the likelihood that one places on changing one’s view might vary depending on the meta-ethical view one endorses. If one is robustly realist, then the idea that common sense has got things radically wrong generally becomes more plausible than if one is some flavour of anti-realist. But it seems to me that anti-realist views actually support my argument rather than detract from it. If one is a subjectivist, one should be optimistic about the likelihood of finding the moral truth — as finding the moral truth is ultimately just about working out what one values. The subjectivist should therefore think it more likely that she will change her view in light of further study and reflection than the robust realist, and that makes the value of information higher.

Moreover, even if one endorsed a meta-ethical view that is inconsistent with the idea that there’s value in gaining more moral information, one should not be certain in that meta-ethical view. And it’s high-stakes whether that view is true — if there are moral facts out there but one thinks there aren’t, that’s a big deal! Even for this sort of antirealist, then, there’s therefore value in moral information, because there’s value in finding out for certain whether that meta-ethical view is correct.

Many moral questions are empirical questions in disguise. For example, you might value reducing suffering in conscious beings. You might believe animals are conscious, so you focus on reducing their suffering, since there seems to be the most low-hanging fruit there. However, it's wrong to have 100% certainty on empirical questions. Some people believe that animals aren't conscious (e.g. they believe that language is required for consciousness). If you focused on animal suffering and it turned out animals weren't conscious, you'd be wasting resources that could have been used to reduce human suffering.

I think that's approximately true, but I also think it goes the other way around as well. In fact, just a few hours before reading your comment, I made a post using basically the same example, but in reverse (well, in both directions):

For example, I might wonder “Are fish conscious?”, which seems on the face of it an empirical question. However, I might not yet know precisely what I mean by “conscious”, and only really want to know whether fish are “conscious in a sense I would morally care about”. In this case, the seemingly empirical question becomes hard to disentangle from the (seemingly moral) question “What forms of consciousness are morally important?”

(Furthermore, my answers to that question may in turn be influenced by empirical discoveries. For example, I may initially believe avoidance of painful stimuli demonstrates consciousness in a morally relevant sense, but then change that belief after learning that this behaviour can be displayed in a stimulus-response way by certain extremely simple organisms.)

One idea informing why I put it that way around as well is that "consciousness" (like almost all terms) is not a fundamental element of nature, with clear and unambiguous borders. Instead, humanity has come up with the term, and can (to some extent) decide what it means. And I think one of the "criteria" a lot of people want that term to meet is "moral significance".

(From memory, and in my opinion, this sequence did a good job discussing how to think about words/concepts, their fuzzy borders, and the extent to which we are vs aren't free to use them however we want.)

(Also, I know some theories would propose consciousness is fundamental, but I don't fully understand them and believe they're not very mainstream, so I set them aside for now.)

This page is also relevant, e.g.:

A given worldview represents a combination of views, sometimes very difficult to disentangle, such that uncertainty between worldviews is constituted by a mix of empirical uncertainty (uncertainty about facts), normative uncertainty (uncertainty about morality), and methodological uncertainty (e.g. uncertainty about how to handle uncertainty, as laid out in the third bullet point above).

I too reject moral realism!

It occurs to me that this has big consequences. For example, some guys talk about being obligated under utilitarianism to give away almost all their income, or to devote themselves to far future folks who don't yet exist. Maybe the only barrier deflecting this crushing weight, they say, is that if you push yourself too hard you might burn out. This never seemed satisfactory to me. But if morality is in our minds, then these obligations don't exist. There is no need to push ourselves to just shy of burning out. I am free.

One aspect of my moral uncertainty has to do with my impact on other people.

If other people have different moral systems/priorities, then isn't 'helping' them a projection of your own moral preferences?

On the one hand, I'm pretty sure nobody wants malaria - so it seems simple to label malaria prevention as a good thing. On the other hand, the people you are helping probably have very different moral tastes, which means they could think that your altruism is useless or even negative. Does that matter?

I think this is a pretty noob-level question, so maybe you can point me to where I can read more about this.