Yes, agreed — thanks for pointing this out!

Hi Jim, thanks for this post — I enjoyed reading it!

I agree that the Upside-Focused Colonist Curse (UCC) is an important selection effect to keep in mind. Like you, I'm uncertain just how large an effect it will turn out to be, so I'm especially excited to see empirical work that tries to estimate its magnitude. Regardless, though, I'm glad that people are working on this!

I wanted to push back somewhat, though, on the first potential implication that you draw out: that the UCC diminishes the importance of x-risk reduction, because it implies that whether humans control their environment (planet, solar system, lightcone, etc.) is unlikely to matter much morally. As I understand it, the argument goes like this (I restate it a bit more formally below the list):

  1. If humans fail to take control of their environment (due, presumably, to an existential catastrophe), another "grabby" civilization will likely take control of it instead.
  2. If humans do take control of their environment, they'll have become grabby.
  3. If humans become grabby, their values are unlikely to differ significantly from the values of the civilization that would have controlled that environment instead.
  4. So, whether humans take control of their environment or not is unlikely to make much of a difference to how things go in that environment, morally.
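
To make the structure explicit (the formalization and notation here are mine, not from the post): let $p$ be the probability that, absent human control, another grabby civilization takes control of the environment, and let $V_{\text{human}}$, $V_{\text{other}}$, and $V_{\text{empty}}$ be the moral value of the environment under human control, under the other civilization's control, and under no one's control, respectively. The moral difference that human control makes is then

$$\Delta V = V_{\text{human}} - \bigl[\,p\,V_{\text{other}} + (1-p)\,V_{\text{empty}}\,\bigr] \approx V_{\text{human}} - V_{\text{other}},$$

where the approximation uses premise (1), i.e. $p \approx 1$. Premises (2) and (3) then say that $V_{\text{human}} \approx V_{\text{other}}$, which yields conclusion (4): $\Delta V \approx 0$.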

While I agree that the existence of selection effects on our future values diminishes the importance of x-risk reduction somewhat, I think (4) is far too strong a conclusion to draw. This is because, while I'm happy to grant (1) and (2), (3) seems unsupported.

In particular, for (3) to go through, it seems two things would need to be true: (a) selection pressures in favor of grabby values are very strong, strong enough to be "major players" in determining the long-term trajectory of a civilization's values; and (b) under the influence of those pressures, civilizations beginning with very different value systems converge on a relatively small region of "values space," such that there are no morally significant differences between the values they converge on. I find both claims relatively implausible:

  1. Naively, I don't expect the relevant selection pressures to be that strong. Absent empirical evidence, my prior credence that any single factor dominates the shape of our long-term future is low: predicting the long-term future is really hard, and without evidence-based estimates of an effect's size (for instance, a showing that it successfully postdicts many important facts about our historical trajectory), I'm reluctant to believe it's extremely large. So, as of now, I expect other factors to play large roles in shaping our, and other civilizations', future trajectories, such that the mere existence of selection pressures in favor of grabby values isn't enough to underwrite strong claims about how our descendants' values will compare to other civilizations'. (That said, as I say above, I'm excited to see work that aims to estimate their size, and I'm open to changing my mind on this point.)
  2. Moreover, even in the face of strong selection pressure, systems don't generally converge on similar equilibria. Both biological and cultural evolution, for instance, seem to yield surprisingly diverse results even under relatively uniform, strong selection pressures, and we seem to see the same thing in cases of artificial optimization (see McCoy (2020), for instance). So, even if selection pressures in favor of grabby values were extremely strong, I wouldn't naively expect them to eliminate morally relevant differences between humans and other civilizations; the toy simulation below sketches why.
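
To illustrate the intuition behind point (2), here's a minimal toy simulation (entirely my own construction, with made-up parameters, not a model from the post): civilizations carry a "grabbiness" trait plus several value dimensions that the model simply assumes are orthogonal to grabbiness, and selection acts on grabbiness alone. Selection drives grabbiness up sharply while leaving the diversity of the other value dimensions intact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each row is a "civilization": one grabbiness trait, plus value
# dimensions assumed (for illustration only) to be orthogonal to it.
n_civs, n_value_dims, n_generations = 1000, 5, 50
grabbiness = rng.normal(0.0, 1.0, n_civs)
values = rng.normal(0.0, 1.0, (n_civs, n_value_dims))

for _ in range(n_generations):
    # Strong truncation selection: only the grabbier half persists.
    survives = grabbiness >= np.median(grabbiness)
    grabbiness, values = grabbiness[survives], values[survives]
    # Each survivor spawns two successors, with small mutations.
    grabbiness = np.repeat(grabbiness, 2) + rng.normal(0.0, 0.1, 2 * len(grabbiness))
    values = np.repeat(values, 2, axis=0) + rng.normal(0.0, 0.1, (2 * len(values), n_value_dims))

# Grabbiness is driven far above its initial mean of 0, but the spread
# of the orthogonal value dimensions does not shrink.
print("mean grabbiness:", round(grabbiness.mean(), 2))
print("value-dimension std devs:", values.std(axis=0).round(2))
```

Of course, this toy model assumes away exactly what's at issue: if grabby values were strongly correlated with the morally relevant value dimensions, selection on grabbiness would compress those too. But that correlation is precisely the kind of claim I think needs empirical support.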

These points aside, though, I want to reiterate that I think the post's main point is very interesting and potentially an important consideration. Thanks again for writing it!