On August 1, I'll be moderating a panel at EA Global on the relationship between effective altruism, astronomical stakes, and artificial intelligence. The panelists will be Stuart Russell (UC Berkeley), Nick Bostrom (Future of Humanity Institute), Nate Soares (Machine Intelligence Research Institute), and Elon Musk (SpaceX, Tesla). I'm very excited to have this conversation with some of the leading figures in AI safety!

As part of the panel, I'd love to ask our panelists some questions from the broader EA community. To that end, please submit questions below that you'd like to be considered for the event. I'll be selecting a set of these questions and integrating them into our discussion. I can't guarantee that every question will fit into the time allotted, but I'm confident that you can come up with some great questions to facilitate high-quality discussion among our panelists.

Thanks in advance for your questions, and looking forward to seeing some of you at the event!

Here are some questions of mine. I haven't followed discussions of AI safety very closely, which means my questions will either be embarrassingly naive or offer a critical outside perspective. Please don't use any that fit the former case :)

  • It seems like there's a decent chance that whole brain emulations will come before de novo AI. Is there any "friendly WBE" work it makes sense to do to prepare for this case, analogous to "friendly AI" work?

  • Around the time AGI comes into existence, the cost and speed of the hardware available to run it on matters a great deal. If hardware is relatively expensive and slow, we can anticipate a slower (and presumably more graceful) transition. Is there anything we can do to nudge the hardware industry away from developing ever-faster chips, so that hardware will be relatively expensive and slow at the time of the transition? For example, Musk could try to hire away researchers at semiconductor firms to work on batteries or rocket ships, but this could only be a temporary solution: the wages for such researchers would rise in response to the shortage, likely leading to more students going into semiconductor research. (Hiring away professors who teach semiconductor research might be a better idea, assuming American companies are bad at training employees.)

  • In this essay, I wrote: "At some point our AGI will be just as smart as the world's AI researchers, but we can hardly expect to start seeing super-fast AI progress at that point, because the world's AI researchers haven't produced super-fast AI progress." I still haven't seen a persuasive refutation of this position (though I haven't looked very hard). So: given that human AI researchers haven't produced a FOOM, is there any reason to expect that an AI at the level of the human AI research community would produce one? (EDIT: A better framing might be whether we can expect chunky, important AI insights to be discovered in the future, or whether AGI will come out of many small cumulative developments. I suppose the idea that the brain implements a single cortical algorithm should push us toward believing there is at least one chunky undiscovered insight?)

1) What careers that contribute directly to AI alignment should someone consider if they are probably not suited to research in fields like math or decision theory? 2) What first steps would you recommend for ending up in such a career?

Mr. Musk has personally donated $10 million via the Future of Life Institute towards a variety of AI safety projects. Additionally, MIRI is currently engaged in its annual fundraising drive with ambitious stretch goals, which include the hiring of several (and potentially many) additional researchers.

With this in mind, is the bottleneck to progress in AI safety research the availability of funding or of researchers? Stated differently, if a technically competent person assesses AI safety to be the most effective cause, which approach is more effective: earning-to-give to MIRI or FLI, or becoming an AI safety researcher?

One answer: apply for a job at a group like MIRI and tell them how much you plan to donate from your current job if they don't hire you. This gives them a broader researcher pool to draw from and lets them adapt to talent/funding bottlenecks dynamically.

Related: What is your estimate of the field's room-for-funding for the next few years?

Would it be valuable to develop a university-level course on AI safety engineering, to be offered at the hundreds of universities worldwide that use Russell's textbook, in order to attract more talented minds to the field? What steps would make this happen?

GiveWell's Holden Karnofsky assessed the Singularity Institute in 2012 and provided a thoughtful, extensive critique of its mission and approach, which remains tied for the top post on LessWrong. It seems the EA meta-charity evaluators are still hesitant to name AI safety (and, more broadly, existential risk reduction) as a potentially effective target for donations. What are you doing to change that?

Assuming human-level AGI is expensive and of potential military value, it seems likely that the governments of the USA and probably other powers like China will be strongly involved in its development.

Is it now time to create an official process of international government-level coordination about AI safety? Is it realistic and desirable?

Do you think that the human race is more likely to be wiped out in a world with AGI or in a world without AGI? Why?

It seems to me that even the most optimistic versions of friendly super-AI are discordant with current values across society. Why isn't there more discussion about how AI development itself could be regulated, delayed, or stopped? What's going on in this space? What might work?

What about the harm Natural Intelligence is already doing? Global warming, economic collapse, wars, and so forth.

1) Are there lessons we can learn from how Natural Intelligence already poorly serves the needs of humanity?

2) How can we apply those lessons to shape the Natural Intelligence already in control towards the good of humanity?

I would expand this to all sentient life, not just humanity. When you do that, natural intelligence looks far worse.