
djbinder

Research Analyst @ Open Philanthropy
327 karma · Joined · Working (0-5 years)

Bio

I work at Open Philanthropy, doing research for the biosecurity and pandemic preparedness team. Before that I was a research scholar at FHI, and before that did a PhD in physics.

Comments (22)

I think this is too bearish on the economic modeling. If you want to argue that climate change could pose some risk of civilizational collapse, you have to argue that some pathway exists from climate change to a direct impact on society severe enough to prevent society from functioning. When discussing collapse scenarios from climate, most people (I think) are envisaging food, water, or energy production becoming so difficult that this causes further societal failures. But the economic models strongly suggest that the perturbations on these fronts are only "small", so we shouldn't expect them to lead to a collapse. I think in this regime we should trust the economic modeling. If instead the economic models were finding really large effects (say, a 50% reduction in food production), then I would agree that the economic models were no longer reliable. At that point society would be functioning in a very different regime from the present, so we wouldn't expect the economic modeling to be very useful.

You could argue that the economic models are missing some other effect that could cause collapse, but I think it is difficult to tell such a story. The story that climate change will increase the number of wars is fairly speculative, and you would then have to argue that war could cause collapse, which is implausible except in the case of nuclear war. I think there is something to this story, but I would be surprised if climate change were the predominant factor in whether we have a nuclear war in the next century.

Famine-induced mass migration also seems very unlikely to cause civilizational collapse. It would be very easy with modern technology for a wealthy country to defend itself against arbitrarily large groups of desperate, starving refugees. Indeed, to my knowledge there is no analogue of a famine -> mass migration -> collapse-of-neighbouring-society chain of events in the historical record, despite many horrific famines. I haven't investigated this question in detail, however, and would be very interested if such events have in fact occurred.

We have decided to extend the deadline to June 5th; if you'd still be willing to advertise this in your forecasting newsletter, that would be helpful!

Thanks for pointing this out, but unfortunately we cannot shift the submission deadline.

[This comment is no longer endorsed by its author]

On your first question, I agree: the utilitarian needs a measure (they don't need a separate utility function from their measure, but there may be other natural measures to consider, in which case you do need a utility function).

With respect to your second question, I think you can either give up on the infinite cases (because you think they are "metaphysically" impossible, perhaps) or demand that a regularization must exist (because without one the problem is "metaphysically" underspecified). I'm not sure what the correct approach is here, and I think it is an interesting question to try to understand this in more detail. In the latter case you have to give up impartiality, but only in a fairly benign way, and I think our intuitions about impartiality are probably wrong here (analogous situations occur in physics with charge conservation, as I noted in another comment).

With respect to your third question, I think it is likely that problems with no regularization are nonsensical. This is not to say that all problems involving infinities are themselves nonsense, nor that the correct choice of regularization is obvious.

As an intuition pump, maybe we can consider cases that don't involve infinities. Say we are in a (rather contrived) world in which utility is literally a function of space-time, and we integrate it to get the total utility. How should I assign utility to a function which has support on a non-measurable set? Should I even think such a thing is possible? After all, the existence of non-measurable sets does not follow from ZF alone, but requires the axiom of choice as well. As another example, maybe my utility function depends on whether the continuum hypothesis is true or false. How should I act in this case?
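To spell out the setup I have in mind (my own notation, not from the original post): total utility is the integral of a utility density over spacetime, and that integral is only defined when the density is a measurable function:

    % Total utility as the integral of a utility density u over spacetime (M, mu).
    % The right-hand side only exists if u is mu-measurable; a density supported
    % on a non-measurable set has no well-defined total utility.
    U_{\mathrm{total}} \;=\; \int_{M} u(x)\, \mathrm{d}\mu(x)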

My own guess is that such questions likely have no meaningful answer, and I think the same is true for questions involving infinities without specified ways to operationalize the infinities. I think it would be odd to give up on the utilitarian dream due to unmeasurable sets, and that the same is true for ill-defined infinities.

I think you are right about infinite sets (most of the mathematicians I've talked to have had distinctly negative views about set theory, in part due to the infinities, but my guess is that such views are more common amongst those working on physics-adjacent areas of research). I was thinking about infinities in analysis (such as continuous functions, summing infinite series, integration, differentiation, and so on), which bottom out in some sort of limiting process.

On the spatially unbounded universe example, this seems rather analogous to me to the question of how to integrate functions over the same space, ℝ³. There are a number of different sets of functions which are integrable over ℝ³, and even for some functions which are not integrable over ℝ³ there are natural regularization schemes which allow the integral to be defined. In some cases these regularizations may even allow a notion of comparing different "infinities": in cases where the integrals diverge as the regularizer is taken to zero, one integral may strictly dominate the other. When dealing with situations in ethics, perhaps we should always be restricting to these cases? There are a lot of different choices here, and it isn't clear to me what the correct restriction is, but it seems plausible to me that some form of restriction is needed. Note that such restrictions include ultrafinitism as an extreme case, but in general allow a much richer set of possibilities.
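As a toy numerical illustration of that last point (my own construction, in one dimension rather than ℝ³, with made-up densities): two utility densities whose naive integrals both diverge can still be compared by introducing an exponential regulator exp(-eps * r) and watching what happens as eps → 0.

    import numpy as np
    from scipy.integrate import quad

    # Two hypothetical radial utility densities; both integrate to infinity
    # over [0, infinity), so their "total utilities" are naively incomparable.
    u1 = lambda r: 2.0   # constant density 2
    u2 = lambda r: 1.0   # constant density 1

    def regularized_total(u, eps):
        """Integral of u(r) * exp(-eps * r) over [0, inf); finite for eps > 0."""
        val, _ = quad(lambda r: u(r) * np.exp(-eps * r), 0, np.inf)
        return val

    for eps in [1.0, 0.1, 0.01, 0.001]:
        t1, t2 = regularized_total(u1, eps), regularized_total(u2, eps)
        print(f"eps={eps}: u1 -> {t1:.1f}, u2 -> {t2:.1f}, difference -> {t1 - t2:.1f}")

    # Both regularized totals diverge as eps -> 0, but u1's total strictly
    # dominates u2's for every choice of regulator, which suggests a natural
    # way to rank the two worlds despite both being "infinite".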

Expansionism is necessarily incomplete: it assumes that the world has a specific causal structure (i.e., one that is locally that of special relativity), which is an empirical observation about our universe rather than a logically necessary fact. I think it is plausible that, given the right causal assumptions, expansionism follows (at least for individual observers making decisions that respect causality).

As an aside, while neutrality-violations are a necessary consequence of regularization, a weaker form of neutrality is preserved. If we regularize with some discounting factor e^(-εt) so that everything remains finite, it is easy to see that "small rearrangements" (where the amount that a person can move in time is finite) do not change the answer, because the difference goes to zero as ε → 0. But "big rearrangements" can cause differences that grow with 1/ε. Such situations do arise in various physical situations, and are interpreted as changes to boundary conditions, whereas the "small rearrangements" manifestly preserve boundary conditions and manifestly do not cause problems with the limit. (The boundary is most easily seen by mapping the infinite time interval onto a compact interval, so that "infinity" is mapped to a finite point. "Small rearrangements" leave infinity unchanged, whereas "large" ones cause a flow of utility across infinity, which is how the two situations are able to give different answers.)
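To make this concrete, here is a small numerical sketch (my own toy example, reading "small" as a rearrangement involving only a finite total amount of movement): a world with one +1 person and one −1 person at each time step, a small rearrangement that moves a single −1 person from t = 5 to t = 7, and a big rearrangement that moves every −1 person from time t to time 2t. Under the discounting e^(-εt), the small rearrangement's effect vanishes as ε → 0, while the big one's grows like 1/(2ε).

    import numpy as np

    T = 200_000           # truncation; exp(-eps * T) is negligible for the eps values below
    t = np.arange(T)

    def discounted_total(times_pos, times_neg, eps):
        """Sum of +1 utilities at times_pos minus -1 utilities at times_neg, discounted by exp(-eps * time)."""
        return np.exp(-eps * times_pos).sum() - np.exp(-eps * times_neg).sum()

    for eps in [0.1, 0.01, 0.001]:
        base  = discounted_total(t, t, eps)                       # +1 and -1 at each time: exactly 0
        small = discounted_total(t, np.where(t == 5, 7, t), eps)  # move one -1 person from t=5 to t=7
        large = discounted_total(t, 2 * t, eps)                   # move every -1 person from t to 2t
        print(f"eps={eps}: base={base:.4f}, small rearrangement={small:.4f}, "
              f"large rearrangement={large:.4f} (compare 1/(2*eps)={1/(2*eps):.1f})")

    # The small rearrangement's difference from the base world shrinks toward 0
    # as eps -> 0, while the large rearrangement's difference grows like 1/(2*eps).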

I think what is true is probably something like "neverending processes don't exist, but arbitrarily long ones do", but I'm not confident. My more general claim is that there can be intermediate positions between ultrafinitism ("there is a biggest number") and a laissez-faire "anything goes" attitude, where infinities appear without care or scrutiny. I would furthermore claim (though on less solid ground) that the views of practicing mathematicians and physicists fall somewhere in between.

As to the infinite series examples you give, they are mathematically ill-defined without a regularization. There is a large literature in mathematics and physics on the question of regularizing infinite series. Regularization and renormalization are used throughout physics (particularly in QFT), and while poorly written textbooks (particularly older ones) can make this appear like voodoo magic, the correct answers can always be obtained rigorously by making everything finite.

For the situation you are considering, a natural regularization would be to replace your sum with a regularized sum in which you discount each time step by some factor e^(-ε). Physically speaking, this is what would happen if we thought the universe had some chance of being destroyed at each timestep; that is, it can be arbitrarily long-lived, yet with probability 1 its lifetime is finite. You can sum the series and then take ε → 0, and thus derive a finite answer.

There may be many other ways to regulate the series, and it often turns out that how you regulate it doesn't matter. In this way, it might make sense to talk about this infinite universe without reference to a specific limiting process, but rather with only some weaker specification of the limiting process. This is what happens, for instance, in QFT: the regularization doesn't matter, all we care about are the things that are independent of it, and so we tend to think of the theories as existing without any need for regularization. However, when doing calculations it is often wise to use a specific (if arbitrary) regularization, because it guarantees that you will get the right answer. Without a regularization it is very easy to make mistakes.
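Since I can't reproduce your exact series here, take the alternating series 1 − 1 + 1 − 1 + ... as a stand-in. A quick numerical sketch (my own, with the two regulators chosen purely for illustration) shows a geometric discount and a Gaussian one assigning it the same value, 1/2, as the regulator is removed:

    import numpy as np

    T = 100_000
    t = np.arange(T)
    signs = (-1.0) ** t   # the alternating series 1 - 1 + 1 - 1 + ...

    def geometric(eps):
        """Discount each time step by exp(-eps * t) (a per-step survival probability)."""
        return np.sum(signs * np.exp(-eps * t))

    def gaussian(eps):
        """A different regulator: discount each time step by exp(-eps * t**2)."""
        return np.sum(signs * np.exp(-eps * t**2))

    for eps in [0.1, 0.01, 0.001]:
        print(f"eps={eps}: geometric regulator -> {geometric(eps):.4f}, "
              f"gaussian regulator -> {gaussian(eps):.4f}")

    # Both regularized sums approach 1/2 as eps -> 0, illustrating that the
    # answer here does not depend on which regulator is chosen.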

This is all a very long-winded way to say that there are at least two intermediate views one could have about these infinite sequence examples, between the "ultrafinitist" and the "anything goes":

  1. The real world (or your priors) demands some definitive regularization, which determines the right answer. This would be the case if the real world had some probability of being destroyed, even if it is arbitrarily small.

  2. Maybe infinite situations like the one you described are allowed, but require some "equivalence class of regularizations" to be specified in order to be completely specified. Otherwise the answer is as indeterminate as if you'd given me the situation without specifying the numbers. I think this view is a little weirder, but also the one that seems to be adopted in practice by physicists.

I think Section XIII is too dismissive of the view that infinities are not "real", conflating it with ultrafinitism. But the sophisticated version of this view is that infinities should only be treated as "idealized limits" of finite processes. This is, as far as I understand, the default view amongst practicing mathematicians and physicists. If you stray from it and use infinities without specifying the limiting process, it is very easy to produce paradoxes, or at least indeterminacy in the problem. The sophisticated view, then, is not that infinities don't exist, but that, since they only exist as limiting cases of finite processes, one must always specify the limiting process; in doing so any paradoxes or indeterminacies will disappear.

As Jaynes summarizes in Chapter 15 of Probability Theory: The Logic of Science:

[P]aradoxes caused by careless dealing with infinite sets or limits can be mass-produced by the following simple procedure:

(1) Start from a mathematically well-defined situation, such as a finite set, a normalized probability distribution, or a convergent integral, where everything is well-behaved and there is no question about what is the correct solution.

(2) Pass to a limit – infinite magnitude, infinite set, zero measure, improper pdf, or some other kind – without specifying how the limit is approached.

(3) Ask a question whose answer depends on how the limit was approached.

In principle I agree, although in practice there are other mitigating factors which mean it doesn't seem to be that relevant.

This is partly because the 10^52 number is not very robust. In particular, once you start postulating such large numbers of future people I think you have to take the simulation hypothesis much more seriously, so that the large size of the far future may in fact be illusory. But even on a more mundane level we should probably worry that achieving 10^52 happy lives might be much harder than it looks.

It is partly also because at a practical level the interventions long-termists consider don't rely on the possibility of 10^52 future lives, but are good even over just the next few hundred years. I am not aware of many things that have smaller impacts and yet still remain robustly positive, such that we would only pursue them due to the 10^52 future lives. This is essentially for the reasons that asolomonr gives in their comment.

Attempts to reject fanaticism necessarily lead to major theoretical problems, as described for instance here and here.

However, questions about fanaticism are not that relevant for most questions about x-risk. The x-risks of greatest concern to most long-termists (AI risk, bioweapons, nuclear weapons, climate change) all have reasonable odds of occurring within the next century or so, and even if we cared only about humans living in that period we would find these risks valuable to prevent. This is mostly a consequence of the huge number of people alive today.
