Short post conveying a single but fundamental and perhaps controversial idea that I would like to see discussed more. I don't think the idea is novel, but it gets new traction from the progress of unsupervised language learning that culminated in the current excitement about GPT-3. It is also not particularly fleshed out, and I would be interested in the current opinion of people more involved in AI alignment.

I see GPT-3 and the work leading up to it as a strong indication that 'paperclip maximizer' scenarios of AI misalignment are not particularly difficult to avoid. By 'paperclip maximizer' scenarios I mean scenarios in which a powerful AI system is set to pursue a goal, pursues that goal without a good model of human psychology, intent and ethics, and produces disastrous unintended consequences. 'Paperclip maximizer' scenarios motivate significant branches of AI alignment and EA discourse. Among other things, they imply that we need to create an explicit, optimized model of ethics before we venture into creating strong AI.

GPT-3 shows us that unsupervised models have become astoundingly good at simulating humans in generating all kinds of text, including comedy that makes Eliezer Yudkowsky have oddly superimposed emotions.

I see it as quite conceivable that human common sense, intent disambiguation and ethical decision making can be simulated in much the same way as the language humans produce. This means it would seem feasible to build AI models that either integrate simulated humans as part of their action selection mechanism, or at least automatically poll a simulated human (or an ensemble of simulated humans) about their judgement of specific actions under consideration ('Would Jean-Luc Picard approve of turning everything into paperclips? No.').
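To make the polling idea concrete, here is a minimal sketch. It assumes only a generic `complete(prompt)` interface to some language model; the function names, prompt format, example judges and the unanimity threshold are all illustrative assumptions of mine, not a concrete proposal from the post.

```python
from typing import List

def complete(prompt: str) -> str:
    """Stand-in for a text-completion call to a large language model (e.g. GPT-3).
    Replace with whatever model or API is actually available."""
    raise NotImplementedError

def simulated_judge_approves(judge: str, action: str) -> bool:
    """Ask a simulated human whether they would approve of a candidate action."""
    prompt = (
        f"Question: Would {judge} approve of {action}?\n"
        "Answer (Yes or No):"
    )
    answer = complete(prompt).strip().lower()
    return answer.startswith("yes")

def ensemble_approves(judges: List[str], action: str, threshold: float = 1.0) -> bool:
    """Approve only if at least `threshold` of the simulated judges say yes
    (threshold=1.0 requires unanimity)."""
    votes = [simulated_judge_approves(judge, action) for judge in judges]
    return sum(votes) / len(votes) >= threshold

# Example use inside an agent's action selection loop:
# if not ensemble_approves(["Jean-Luc Picard", "a thoughtful ordinary person"],
#                          "turning everything into paperclips"):
#     ...veto the action...
```

The point of the sketch is only that the simulated human sits as a veto gate in front of whatever objective the system is otherwise optimizing.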

While such a mechanism might not ensure optimality of decisions in a utilitarian sense, it is very conceivable that it would be effective in preventing significantly misaligned and unintended decisions that are at the core of 'paperclip maximizer' type scenarios. It would also remove the necessity of formalizing a solid, widely agreed upon model of ethical decision making, which might very well be an unachievable goal.

Comments
> By 'paperclip maximizer' scenarios I mean scenarios in which a powerful AI system is set to pursue a goal, pursues that goal without a good model of human psychology, intent and ethics, and produces disastrous unintended consequences.

Thanks for stating your assumptions clearly! Maybe I am confused here, but this seems like a very different definition of "paperclip maximizer" than the ones I have seen other people use. I am under the impression that the main problem with alignment is not an agent's inability to model human preferences, psychology, intent, etc., but the lack of a precise way to encode a willingness to care about those preferences. The classic phrase is "The Genie knows, but doesn't care."

> I see it as quite conceivable that human common sense, intent disambiguation and ethical decision making can be simulated in much the same way as the language humans produce. This means it would seem feasible to build AI models that either integrate simulated humans as part of their action selection mechanism, or at least automatically poll a simulated human (or an ensemble of simulated humans) about their judgement of specific actions under consideration ('Would Jean-Luc Picard approve of turning everything into paperclips? No.').

I moderately agree with this! All else being equal, if language modeling turns out to be differentially easier than other AI tasks, then I would expect Iterated Distillation and Amplification, or something similar to it, to be comparatively more viable than other AI Safety proposals. That said, some people think human modeling is not a free win.

> imply that we need to create an explicit, optimized model of ethics before we venture into creating strong AI

I think most (all?) AI Safety groups are much more humble about what is possible (or at least realistic).
