
poppingtonic

19 karma · Joined

Comments (6)

I like both of them, but I'm wondering: why wait so long? Couldn't some group (maybe us) build 10% of the kind of prediction market that gets us 90% of what we actually need? I need to think about this more, but waiting for Gnosis and Augur to mature seems risky, unless de-risking that bet means joining both projects to accelerate their progress.
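To make the "10% that gets us 90%" idea concrete, here is a minimal sketch of a two-outcome market maker using Hanson's logarithmic market scoring rule (LMSR). This is only an illustration of how little machinery the core mechanism needs; the liquidity parameter `b`, the two-outcome setup, and the class name are my own choices, not anyone's actual design.

```python
import math

class LMSRMarket:
    """Minimal two-outcome prediction market using Hanson's
    logarithmic market scoring rule (LMSR). Illustrative only."""

    def __init__(self, b=100.0):
        self.b = b              # liquidity parameter: higher = prices move more slowly
        self.q = [0.0, 0.0]     # outstanding shares for outcomes [YES, NO]

    def _cost(self, q):
        # LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome):
        # Instantaneous price is a softmax of share quantities; prices sum to 1.
        exps = [math.exp(qi / self.b) for qi in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, shares):
        # The trader pays the difference in the cost function before/after the purchase.
        new_q = list(self.q)
        new_q[outcome] += shares
        fee = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return fee

# Example: a trader who thinks YES is underpriced buys 50 shares.
m = LMSRMarket(b=100.0)
print(round(m.price(0), 3))                  # 0.5 to start
cost = m.buy(0, 50)
print(round(cost, 2), round(m.price(0), 3))  # cost of the trade, new YES price (~0.62)
```

Everything beyond this (real money or tokens, question resolution, identity, incentives for honest resolution) is the hard 90% of the engineering, but the pricing core itself is small.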

Your link [2] points to a .docx file in a local folder on your computer, so it isn't a usable download link. Was that intentional?

'file:///C:/Users/futureuser/Downloads/Careers%20in%20AI%20strategy%20and%20policy%201.4.docx#_ftnref2'

I think the Good Judgment Project (co-founded by Philip Tetlock, co-author of Superforecasting) is trying to build this with its experiments.
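For context on what those experiments involve: the GJP tournaments scored probabilistic forecasts with Brier scores and found that pooling many forecasters, then pushing the pooled probability toward the extremes, did well. A toy sketch of that pipeline in Python, with made-up forecasts and an extremizing exponent I chose arbitrarily rather than GJP's published parameter:

```python
def brier(p, outcome):
    """Brier score for a binary forecast: lower is better (0 = perfect)."""
    return (p - outcome) ** 2

def aggregate(probs, extremize=2.5):
    """Mean of individual forecasts, pushed toward 0 or 1.

    Extremizing maps the pooled odds p/(1-p) to (p/(1-p))**a;
    the exponent here is illustrative, not a published value.
    """
    mean = sum(probs) / len(probs)
    odds = (mean / (1 - mean)) ** extremize
    return odds / (1 + odds)

# Five hypothetical forecasters on a question that resolved YES (1).
forecasts = [0.6, 0.7, 0.65, 0.8, 0.7]
pooled = aggregate(forecasts)
print(round(pooled, 3))                                                 # extremized pooled probability
print(round(brier(pooled, 1), 3))                                       # its Brier score
print(round(sum(brier(p, 1) for p in forecasts) / len(forecasts), 3))   # average individual score
```

On this toy question the extremized pool scores better than the average individual forecaster, which is roughly the pattern the GJP research reported.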

A complication: whole-brain emulation seeks to instantiate human minds, which are conscious by default, in virtual worlds. Going by what Robin Hanson wrote in Age of Em, any suffering involved in that can presumably be edited away. Hanson also thinks emulation may be the more likely first route to HLAI, which suggests it may be the "lazy solution" compared with mathematically grounded AGI. However, in the s-risks talk at EAG Boston, a scenario very much like this was given as an example of an s-risk.

Analogizing like this isn't my idea of a first-principles argument, so what I'm saying isn't airtight either, given the levels of uncertainty about paths to AGI.

Quoting Nate's supplement to Open Phil's review of "Proof-producing reflection for HOL" (PPRHOL):

there are basic gaps in our models of what it means to do good reasoning (especially when it comes to things like long-running computations, and doubly so when those computations are the reasoner’s source code)

How far along are you toward narrowing these gaps, now that "Logical Induction" is something people can talk about? Are there variants of it that narrow these gaps, or planned follow-ups to PPRHOL that might improve our models? What kinds of experiments seem valuable for this subgoal?
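(For readers who haven't seen the paper: below is my rough, informal rendering of the criterion it proposes; the notation is mine, not MIRI's exact formalization.)

```latex
% Informal statement of the logical induction criterion (notation mine).
% A sequence of belief states P_1, P_2, ... over logical sentences is a
% logical inductor if no efficiently computable trader can exploit it:
% betting against the prices P_n(phi) never yields unbounded gains while
% keeping losses bounded.
\[
  \forall\, T \in \mathrm{EfficientTraders}:\quad
  \sup_n \mathrm{Value}\big(T, \mathbb{P}_{1..n}\big) < \infty
  \;\;\text{or}\;\;
  \inf_n \mathrm{Value}\big(T, \mathbb{P}_{1..n}\big) = -\infty
\]
```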

Thanks for doing this AMA! Which points in your strategy have you needed to update, given the unexpected progress of publishing the "Logical Induction" paper (which I'm currently reading)?