Arepo

Comments (628)


What other methods do you have in mind for it?

I'd be interested to reread that, but on my version p41 has the beginning of the 'civilisational virtues' section and end of 'looking to our past', and I can't see anything relevant. 

I may have forgotten something you said, but as I recall, the claim is largely that there'll be leftover knowledge and technology which will speed up the process. If so, I think it's highly optimistic to say it would be faster:

1) The blueprints left behind by the previous civilisation will at best get us as far as they did, but to succeed we'll necessarily need to develop substantially more advanced technology than they had.

2) In practice they won't get us that far - a lot of modern technology is highly contingent on the exigencies of currently available resources. E.g. computers would presumably need a very different design in a world without access to cheap plastics.

3) The second time around isn't the end of the story - we might need to do this multiple times, creating a multiplicative drain on resources (e.g. if development is slowed by the absence of fossil fuels, we'll spend that much longer using up rock phosphorus), whereas the lessons available from previous civilisations will be at best additive, and likely not even that - we'll probably lose most of an earlier civilisation's technology when dissecting it to build the current one. So even if the second time around were faster, it would move us one civilisation closer to a state where rebuilding is impossibly slow.

Thanks Toby, that's good to know. As I recall, your discussion (much of which was in footnotes) focussed very strongly on effects that might lead to extinction, though, so I would be inclined to put more weight on your estimates of the probability of extinction than on your estimates of indirect effects.

E.g. a scenario you didn't discuss that seems plausible to me is approximately "reduced resource availability slows future civilisations' technical development enough that they have to spend a much greater period in the time of perils, and in practice become much less likely to ever successfully navigate through it" - even if we survive as a semitechnological species for hundreds of millions of years.

Very strong agree. The 'cons' in the above list are not clearly negatives from an overall 'make sure we actually do the most good, and don't fall into epistemic echo chambers' perspective.

I don't know if they're making a mistake - my question wasn't meant to be rhetorical.

I take your point about capacity constraints, but if no-one else is stepping up, it seems like it might be worth OP expanding their capacity.

I continue to think the EA movement systematically underestimates the x-riskiness of non-extinction events in general, and nuclear risk in particular, by ignoring much of the increased difficulty of becoming interstellar after the destruction or exploitation of key resources. I gave some example scenarios of this here (see also David's results) - not intended to be taken too seriously, but nonetheless incorporating what I think are significant factors that other longtermist work omits (e.g. in The Precipice, Ord defines x-risk very broadly, but when he comes to estimate the x-riskiness of 'conventional' GCRs, he discusses them almost entirely in terms of their probability of making humans immediately go extinct, which I suspect constitutes a tiny fraction of their EV loss).

You might be right, but that might also just be a failure of imagination. 20 years ago, I suspect many people would have assumed that by the time we got AI at the level of ChatGPT, it would basically be agentic - as I understand it, the Turing test was basically predicated on that idea, and ChatGPT has pretty much nailed it while having very few characteristics we might recognise in an agent. I'm less sure, but I also have the sense that people would have believed something similar about calculators before they appeared.

I'm not asserting that this is obviously the most likely outcome, just that I don't see convincing reasons for thinking it's extremely unlikely.

It doesn't seem too conceptually murky. You could imagine a super-advanced GPT which, when you ask it questions like 'how do I become world leader?', gives in-depth practical advice, but which never itself outputs anything other than token predictions.

nuclear security is getting almost no funding from the community, and perhaps only ~$30m of philanthropic funding in total.

Do we know why OP aren't doing more here? They could double that amount and it would barely register on their recent annual expenditures.

I'm curious which direction the disagree voters are disagreeing in - are they expressing the view that quantifying people like this at all is bad, or disputing that, if you're going to do it, this is a more effective way?
