VG

Vasco Grilo

5607 karma · Joined · Working (0-5 years) · Lisbon, Portugal
sites.google.com/view/vascogrilo?usp=sharing

Bio


How others can help me

You can give me feedback here (anonymous or not). You are welcome to answer any of the following:

  • Do you have any thoughts on the value (or lack thereof) of my posts?
  • Do you have any ideas for posts you think I would like to write?
  • Are there any opportunities you think would be a good fit for me, either ones not listed on 80,000 Hours' job board, or ones listed there that you guess I might be underrating?

How I can help others

Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering, and to part-time or full-time paid work. For paid work, I typically ask for 20 $/h, which is roughly 2 times the global real GDP per capita.

Comments
1297

Topic contributions
25

Hi titotal,

I think it makes sense to assess the annual risk of simulation shutdown based on the mean annual probability of simulation shutdown. However, I also believe the risk of simulation shutdown is much lower than the one guessed by the humble cosmologist.

The mean of a loguniform distribution ranging from a to 1 is (1 - a)/ln(1/a), which is roughly -1/ln(a) for small a. If a = 10^-100, the mean risk is 0.434 % (= -1/ln(10^-100)). However, I assume there is no reason to set the minimum risk to 10^-100, so the cosmologist may actually have been overconfident. Since there is no obvious natural lower bound for the risk, because more or less by definition we do not have evidence about the simulators, I guess the lower bound can be arbitrarily close to 0. In this case, the mean of the loguniform distribution goes to 0 (since -1/ln(a) tends to 0 as a tends to 0), so it looks like the humblest view corresponds to a risk of simulation shutdown of 0.
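
Here is a quick sketch of that calculation in Python (just an illustration; loguniform_mean is a helper I am defining here):

```python
import math

# Mean of a loguniform distribution on [a, 1]: exactly (1 - a)/ln(1/a),
# which is roughly -1/ln(a) for small a, and tends to 0 as a tends to 0.
def loguniform_mean(a):
    return (1 - a) / math.log(1 / a)

for a in [1e-10, 1e-100, 1e-300]:
    print(f"a = {a:.0e}: mean = {loguniform_mean(a):.3%}")
# a = 1e-100 gives a mean of about 0.434 %.
```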

In addition, the probability of surviving an annual risk of simulation shutdown of 0.434 % over the estimated age of the universe of 13.8 billion years is only about 10^-26,000,000 (= (1 - 0.00434)^(13.8*10^9)), which is basically 0. So the universe would need to be super super lucky in order to have survived for so long with such a high risk. One can try to counter this argument by saying there are selection effects. However, it would be super strange to have an annual risk of simulation shutdown of 0.434 % without any partial shutdowns, given that tail risk usually follows something like a power law[1] without severe jumps in severity.
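
For reference, here is the same calculation in Python, done in log10 space to avoid underflow (again just an illustrative sketch):

```python
import math

# Probability of no shutdown over the age of the universe, in log10 space.
annual_risk = 1 / (100 * math.log(10))   # about 0.434 %, the mean for a = 10^-100
years = 13.8e9                           # estimated age of the universe
log10_survival = years * math.log10(1 - annual_risk)
print(f"P(no shutdown) = 10^{log10_survival:,.0f}")
# Roughly 10^-26,000,000, i.e. basically 0.
```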

  1. ^

    Although I think tail risk often decays faster than suggested by a power law.

Nice post, titotal!

"This could be a whole post in itself, and in fact I’ve already explored it a bit here."

The link is private.

"I would expect improvements on these types of tasks to be highly correlated in general-purpose AIs."

Higher IQ in humans is correlated with better performance in all sorts of tasks too, but the probability of finding a single human performing better than 99.9 % of (human or AI) workers in each of the areas you mentioned is still astronomically low. So I do not expect a single AI system to become better than 99.9 % of (human or AI) workers in each of the areas you mentioned. It can still be the case that the AI systems share a baseline common architecture, in the same way that humans share the same underlying biology, but I predict the top performers in each area will still be specialised systems.
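
To illustrate, here is a toy simulation in Python (my own sketch, with made-up parameters, assuming abilities in each area are positively correlated via a shared general factor): even with a fairly strong correlation across areas, the fraction of workers above the 99.9th percentile in every area at once comes out tiny.

```python
import numpy as np

# Toy model: abilities in k areas share a common factor (correlation rho),
# analogous to IQ correlating with performance across tasks. Count how many
# workers are above the 99.9th percentile in all k areas at once.
rng = np.random.default_rng(0)
n, k, rho = 2_000_000, 5, 0.5                    # workers, areas, cross-area correlation
g = rng.standard_normal((n, 1))                  # shared "general ability" factor
e = rng.standard_normal((n, k))                  # area-specific ability
ability = np.sqrt(rho) * g + np.sqrt(1 - rho) * e
threshold = np.quantile(ability, 0.999, axis=0)  # 99.9th percentile in each area
fraction_top_in_all = np.all(ability > threshold, axis=1).mean()
print(f"Fraction above the 99.9th percentile in all {k} areas: {fraction_top_in_all:.1e}")
```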

"I think we've seen that with GPT-3 to GPT-4, for example: GPT-4 got better pretty much across the board (excluding the tasks that neither of them can do, and the tasks that GPT-3 could already do perfectly). That is not the case for a human who will typically improve in just one domain or a few domains from one year to the next, depending on where they focus their effort."

Going from GPT-3 to GPT-4 seems more analogous to a human going from 10 to 20 years old. There are improvements across the board during this phase, but specialisation still matters among adults. Likewise, I assume specialisation will matter among frontier AI systems (although I am quite open to a single future AI system being better than all humans at any task). GPT-4 is still far from being better than 99.9 % of (human or AI) workers in a given area.

For an agent to conquer the world, I think it would have to be close to the best across all those areas, but I think this is super unlikely, given how unlikely it is for a single human to be close to the best across all those areas.

Hi Erich,

Note that humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.

Hi JP,

It would be nice to be able to filter a user's posts by a given tag. As of now, this is not possible.

For reference, here is a seemingly nice summary by David Patel of Fearon's "Rationalist Explanations for War".

Nice point, Robi! That being said, it seems to me that having many value handshakes correlated with what humans want is not too different from historical generational changes within the human species.

Great post, titotal! I am surprised by how few upvotes it has received (25 karma from 18 votes before my strong upvote).
