This is a linkpost for https://ja3k.com/blog/zeroutils

I did a lot of my thinking for this post over the summer for the EA criticism contest but didn't get around to writing it up. I don't think it amounts to very substantive criticism, but I think it may be interesting to utilitarians. To those who have "done the reading" (Harsanyi and Von Neumann-Morgenstern) it may be a little basic.

TLDR:

  1. Utility must be a real number if agents have consistent preferences across randomized choices
  2. A utility function is only determined up to scale and shift if all it does is tell us how an agent acts in different situations
  3. BUT Harsanyi proved a theorem saying that if the collective utility function is rational in the same way and is determined by the utility functions of individuals, then it must be a weighted sum of the individuals' utilities
  4. I conclude with some half-baked thoughts about what these results may or may not have to do with morality and EA.

The linked blog's text is reproduced below. (Sorry, copy-pasting broke the footnotes, but the LaTeX and formatting look the same.)


If you give a man two numbers he will try to add them. One of the first things you're taught as a human is numbers, and once you've got a handle on them you're taught to add them. Then you spend the next 20 years of your life being given numbers for your performance and having those numbers added together. But consider for a moment: what are the units of your GPA? If there is such a thing as utility, what are its units?

It's not true that any two numbers can be added and a meaningful result obtained. Sometimes their types don't agree: what's $5 + 3cm? Other quantities simply cannot be meaningfully added: what is 3 stars + 4 stars with regard to movie ratings?

But this is all very trite and obvious and of course the utilcels have thought of it.

Why Should Utility be a Real Number?

God made preferences. All else is the work of man.

- Leopold Kronecker, basically

Let's begin by considering the utility of an individual before moving on to aggregation. For concreteness let it be my utility.

In the beginning there are preferences. I prefer green tea to black. I prefer my eggs fried to scrambled. At first glance, if all I have to work with is preferences, you might think all I could give you is an ordering of scenarios. I could say green tea gives me 5 utility and black tea 1, or green tea 3.89 utility and black tea -300, and it'd make no difference to what happens when you offer me my choice of beverage.
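To see this concretely, here is a minimal sketch (my own illustration; the "no drink" numbers are assumptions, since the post only gives values for the teas). Any two utility assignments with the same ordering produce identical choices when all you can do is pick your most-preferred option:

```python
def choose(utilities):
    """Pick the available outcome with the highest utility."""
    return max(utilities, key=utilities.get)

# Two numerically very different assignments with the same ordering
# (the "no drink" values are assumed purely for illustration).
u1 = {"green tea": 5.0, "black tea": 1.0, "no drink": 0.0}
u2 = {"green tea": 3.89, "black tea": -300.0, "no drink": -400.0}

assert choose(u1) == choose(u2) == "green tea"
```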

But there is a tool that makes different utility functions meaningful beyond mere ordering: randomized mixing. Say there are three outcomes:

  • Green tea
  • Black tea
  • No drink

And I have the choice between Black tea, or an $x$% chance of Green tea and otherwise nothing.

If I prefer Black tea to no drink, then there should be some $x$ where I would choose the guaranteed Black tea over the gamble. Once I've chosen arbitrary numbers for my utilities for no drink and for Black tea, say 0 and 1, my utility for Green tea is determined: it is $1/x$, where $x$ is the smallest probability for which I'd still choose the gamble.
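Spelled out (my arithmetic, using the normalization $u(\text{no drink}) = 0$ and $u(\text{Black}) = 1$): at the indifference probability $x$,

$$x \cdot u(\text{Green}) + (1 - x) \cdot u(\text{no drink}) = u(\text{Black}) \;\Longrightarrow\; x \cdot u(\text{Green}) = 1 \;\Longrightarrow\; u(\text{Green}) = \frac{1}{x}.$$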

Of course none of this is original and if you work out the details you get the Von Neumann-Morgenstern utility theorem.

Already there are a number of possible objections:

1. What if there is no such $x$?

This could mean I actually prefer no drink to black tea. That's fine, and we can mix up the gamble to determine exactly what negative utility I give to black tea. But what if I prefer black tea to no drink, and it's just that my preference for green tea is so strong I'd always pick the gamble? With tea that maybe seems implausible, but what if I'm choosing between tea and getting eaten by a shark? For someone who takes this very seriously, see Erik Hoel. I've never thought this was a particularly serious objection, because I risk a small probability of violent death for a little bit of entertainment all the time by getting in my car and driving. Maybe when you strip away the details and make it a pure trolley problem it's easy to have the intuition that you'd never choose a guaranteed beverage plus a small probability of death over staring at a wall, but in practice everyone takes that gamble all the time.

2. Are we putting the cart before the horse?

If our goal is to build a moral theory, should we be asking how I should prefer green and black tea, not how I do prefer them? In the case of tea this seems unimportant, because how could the choice be morally relevant? But maybe I'm choosing between beef or tofu? Maybe I don't intrinsically have a preference for the ethical treatment of the livestock and produce I eat, but maybe I should?

3. But 0 and 1 are arbitrary. This doesn't uniquely determine a utility function.

This isn't a problem if we're just looking at my own values and decisions. But it becomes a problem when we wish to aggregate many people's preferences: it doesn't make sense to add together a bunch of functions which were determined in such a way as to be insensitive to shifts and scales!
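Here is a toy numerical illustration of the problem (all numbers are my own, purely for illustration): rescaling one person's utility function, which changes nothing about their behavior, flips which world the naive sum prefers.

```python
alice = {"world A": 1.0, "world B": 0.0}
bob = {"world A": 0.0, "world B": 0.6}

def naive_total(world):
    return alice[world] + bob[world]

assert naive_total("world A") > naive_total("world B")  # world A "wins"

# Rescale Bob's utilities by 2x -- an equally valid representation of the
# exact same preferences -- and the verdict flips.
bob = {k: 2 * v for k, v in bob.items()}
assert naive_total("world A") < naive_total("world B")  # now world B "wins"
```

This brings us to Harsanyi's Aggregation Theorem.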

Harsanyi's Argument

Now, it doesn't make sense to add a bunch of functions determined only up to scale and shift, but it sure would be convenient, since all we know how to do is add. One piece of evidence that we should add is Harsanyi's 1955 argument, which I will reproduce almost in its entirety here. We just require three assumptions (plus one unmentioned by Harsanyi):

1. Individuals have (or can be given/assigned) a utility function $u_i$ consistent with expected value, as discussed in the previous section. [1]

2. Society, or the collective, can be given such a function $w$ as well.

3. Society's utility function is a function of the utility of the individuals. E.g. for two different worlds where everyone has the same utility, society's utility should be the same. I'll write $w = w(u_1, \ldots, u_n)$ when using this functional relationship.

3': There is some event on which all $u_i = 0$, and on this event $w = 0$ as well. Call this event $O$. This isn't so much an assumption as fixing a zero point, because remember, utility in the VNM sense is only determined up to shift.

Bonus assumption: Harsanyi assumes this in his proof, but as far as I can tell it doesn't follow from the previous three and isn't at all obvious: for each $i$ there exists an event on which $u_i = 1$ and the rest are $0$. To me this is a big independence assumption. We don't assume the $u_i$ are selfish or egoistic. They're just the utility functions people happen to have, which could be selfish but could also be altruistic. In practice two individuals who are married or business partners could have extremely correlated utility functions. If they're identical there's no issue, but the nightmare scenario is something like one business partner being risk-loving with a utility function equal to company profits and the other having log profits, so their utility functions are monotonically related. [2] [3]

From this we can deduce a result which at first blush may seem surprisingly strong, but it will follow from considering what the expected value of just a few mixed scenarios must be. One takeaway from the theorem is that consistency over all randomized scenarios is actually an extremely strong assumption.

Theorem: $w(u_1, \ldots, u_n) = \sum_i a_i u_i$, where $a_i$ is the total utility when individual $i$ has utility $1$ and all others have $0$. I.e., societal utility is a weighted sum of individual utilities.

Proof. First we prove $w$ is homogeneous in the $u_i$, that is, $w(\lambda u_1, \ldots, \lambda u_n) = \lambda \, w(u_1, \ldots, u_n)$.

Let $O$ denote an event on which all the $u_i$ are $0$ and $w$ is $0$. Let $E$ be some other event on which the utility functions take the values $u_1, \ldots, u_n$ respectively. Now consider a mixed event which is $E$ with probability $\lambda$ and otherwise $O$. By linearity of expectation, each individual assigns this mixture utility $\lambda u_i$ and the collective assigns it $\lambda \, w(u_1, \ldots, u_n)$, which is exactly the homogeneity claim. Two notes:

1. I've only shown the homogeneity claim for $0 \le \lambda \le 1$. Harsanyi spends four times as much text dividing, but I'll leave you to fill in the details or read the original paper.

2. It's not necessary in this step to assume the $u_i$ could take on any value, or even that they're nonzero.

Now let $E_i$ denote a prospect on which individual $i$ gets utility $u_i$ and all other individuals get utility $0$. As I said above, that such a prospect exists is a big assumption, but it slips in in the original paper. By our homogeneity result we know $w = a_i u_i$ on prospect $E_i$, where $a_i$ denotes society's utility when individual $i$ has utility $1$ and everyone else has $0$.

Now take the mixed prospect that is equally likely to be each $E_i$. By the linearity of expectation, for each individual $i$ this prospect is worth $u_i / n$, and for the collective it is worth $\frac{1}{n} \sum_i a_i u_i$.

Using homogeneity once more we get $w(u_1, \ldots, u_n) = \sum_i a_i u_i$ for a prospect on which each individual's utility is $u_i$ (as opposed to $u_i / n$, as it was in the previous paragraph).

Like I said, not much of a proof. Somehow just from the linearity of expected value we've derived a whole moral philosophy [4].
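As a sanity check on what the theorem is claiming, here is a small numerical sketch (my own construction, not part of Harsanyi's argument): consistency with expected value under mixing is exactly linearity, which a weighted sum satisfies and which, say, a sum of squares does not.

```python
import random

def mixing_consistent(w, n_people=3, trials=1000):
    """Check w(E[u]) == E[w(u)] on random two-outcome lotteries."""
    for _ in range(trials):
        u_a = [random.uniform(-1, 1) for _ in range(n_people)]
        u_b = [random.uniform(-1, 1) for _ in range(n_people)]
        p = random.random()
        mixed = [p * a + (1 - p) * b for a, b in zip(u_a, u_b)]
        if abs(w(mixed) - (p * w(u_a) + (1 - p) * w(u_b))) > 1e-9:
            return False
    return True

weighted_sum = lambda u: sum(2.0 * x for x in u)   # any fixed weights work
sum_of_squares = lambda u: sum(x * x for x in u)   # a nonlinear alternative

assert mixing_consistent(weighted_sum)
assert not mixing_consistent(sum_of_squares)
```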

Aside on p-norms

I have a math friend who likes to joke that the problem of the repugnant conclusion is just a matter of choosing the right p-norm. At $p = 1$ we have Harsanyi's addition; at $p = -\infty$ we have Rawls's (insane) position. By choosing the proper $p$ in between we can get an ethical theory to our tastes. But the choice of $p = 1$ is not arbitrary: it's the value for which both the social utility function and the individual utility functions can be rational in the Von Neumann-Morgenstern sense.
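A sketch of the joke in code (my own construction; the utilities need to be positive for general $p$ to make sense):

```python
def p_welfare(utilities, p):
    """(sum of u_i^p)^(1/p) over positive utilities; p -> -inf gives min."""
    if p == float("-inf"):
        return min(utilities)  # Rawls: only the worst-off matters
    return sum(u ** p for u in utilities) ** (1 / p)

us = [1.0, 4.0, 9.0]
print(p_welfare(us, 1))              # 14.0: Harsanyi's addition
print(p_welfare(us, float("-inf")))  # 1.0: Rawls's maximin
print(p_welfare(us, -2))             # ~0.96: somewhere in between
```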

Why I am not convinced

I had planned to write a blog post making the point in the first section in May 2022, before I even knew about the Von Neumann-Morgenstern theorem. When the Effective Altruism (EA) criticism contest was announced I decided to do a little more research and make my post a little better [5]. Having read Harsanyi's theorem, I think there's better theoretical justification for adding utilities than I'd realized, but I still have a number of qualms.

I am basically totally convinced, though, that an organization founded to be altruistic has to be either fundamentally utilitarian or irrational. So in that sense this isn't a critique of EA, but possibly a critique of someone deciding to be EA.

  • What the theorem of course can't tell you is how to choose the weights $a_i$. In practice maybe this is a weak critique, though. In altruistic practice it seems people focus their giving on people plausibly maxing out the utility scale in the negative direction. Maybe you can't prove a nice theorem in this context like Harsanyi was able to do, but it seems reasonable to say dying of cancer is about as bad as dying of malaria, and both are much worse than not getting your favorite flavor of ice cream.
  • I was turned on to Harsanyi by this interview where MacAskill gives the aggregation theorem as tied for his second favorite argument for utilitarianism, along with rejection-of-personhood arguments, behind track record. I think there's something contradictory about taking the aggregation theorem and personhood rejection as your top two reasons. Why do our utility functions have to respect expected value in this way? Because otherwise we're exploitable as agents: we can be Dutch booked. But doesn't concern for this scenario imply a strong sense of self? Why would I care that, as I wandered in circles over my choices, I ended up worse off, if I didn't have a strong sense of self-identification?
  • Similarly, it seems like preferentism is out of fashion. See this excellent critique. And listening to other 80k interviews, it seems like hedonism is more mainstream than preferentism in the EA community [6]. But again, the theory is built out of the primacy of preference.

[1] These utility functions need not be selfish. They shouldn't depend on each other, or we may run into computability issues, but they may depend on each other's inputs. E.g., it's fine for someone's utility to be lower if they have much more money than their friends.

[2] Though assuming $u_i = 1$ in this scenario, as opposed to any other nonzero value, is no issue, as the $u_i$ are only determined up to scale.

[3] Linearity of Expected Value is so powerful I wouldn't be surprised if a more careful argument could remove this assumption. With this assumption though the proof is very easy.

[4] For some deep ja3k/EV lore see this 2016 tumblr post.

[5] Missed that boat unfortunately. Criticism is its own reward though so I'm posting anyway.

[6] Sorry if this is a mischaracterization or there are existing surveys. I looked at this survey of philosophers but it doesn't seem to get at quite this question.


Comments

Could you provide a tl;dr here (or there on the article, I suppose)?

Sorry, it's my first time making a link post. I just pasted the whole article in like I should have in the first place. For some reason I was hoping the EA forum would do something like that automatically but I guess there's no way to do it safely, or even in general determine what the "main content" is. I also wrote a bulleted TLDR at the top.

Most of these problems only occur when you are a foundationalist about preferences. If you consider degrees of desire (from negative to positive infinity) as basic, with "utilities" representing those desires, preferences are just an emergent phenomenon of an agent desiring some outcome more than another.

The interpersonal comparison problem is then mostly one of calibrating a "utility scale". Such a scale needs two points: one for $0$ and one for some other fixed value (e.g. $1$).

The zero point is already very elegantly handled in Richard Jeffrey's utility theory. If we axiomatically assume an algebra of propositions/events, excluding the contradictory proposition, with a probability and a utility function defined over it, and assume that tautologies (or indeed all probability-one events) have utility zero, as they are "satisfied desires", it is provable that indifference between a proposition and its negation (i.e. $u(A) = u(\neg A)$) implies that $A$ and $\neg A$ both have utility zero. At which point we have defined an interpersonally comparable zero point: if people are measurably indifferent between getting and not getting something, they assign utility zero to it.
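Concretely, a one-line version of that argument (assuming Jeffrey's desirability axiom in its standard form, applied to the tautology $A \vee \neg A$):

$$0 = u(A \vee \neg A) = P(A)\,u(A) + P(\neg A)\,u(\neg A) = u(A)\,\big(P(A) + P(\neg A)\big) = u(A),$$

where the last-but-one step uses the assumed indifference $u(A) = u(\neg A)$.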

We could then go on to define as utility $1$ something which all humans, with psychological or neurophysiological plausibility, enjoy approximately equally. For example, drinking a glass of water after not having drunk anything for 12 hours. If someone then says they desire something three times as much as said glass of water, we know they desire it more than someone else who desires something only two times as much as the glass of water.
