
This is part 4 of a 5-part sequence:

Part 1: summary of Bostrom's argument

Part 2: arguments against a fast takeoff

Part 3: cosmic expansion and AI motivation

Part 4: tractability of AI alignment

Part 5: expected value arguments

Premise 5: The tractability of the AI alignment problem

Critical to the question of artificial intelligence research as a cause for effective altruists is the argument that there are things which can be done in the present to reduce the risk of a misaligned AI attaining a critical strategic advantage. In particular, it is argued that AI safety research and work on the goal alignment problem have the potential, given the application of sufficient creativity and intelligence, to significantly assist our efforts to construct an AI which is ‘safe’, with goals aligned with our best interests. This is often presented as an urgent matter, something which must be substantively ‘solved’ before a superintelligent AI comes into existence if catastrophe is to be averted. This possibility, however, seems grossly implausible considering the history of science and technology. I am not aware of a single significant technological or scientific advance whose behaviour we have been able to accurately predict, and whose safety we have been able to ensure, before it was developed. In all cases, new technologies are only understood gradually as they are developed and put to use in practice, and their problems and limitations progressively become evident.

In order to ensure that an artificial intelligence would be safe, we would first need to understand a great deal about how artificially intelligent agents work, how their motivations and goals are formed and evolve (if at all), and how artificially intelligent agents would behave in society and in their interactions with humans. It seems to me that, to use Bostrom’s language, this constitutes an AI-complete problem, meaning that there is no realistic hope of substantively resolving these issues before human-level artificial intelligence itself is developed. To assert the contrary is to contend that we can understand how an artificial intelligence would work well enough to control it and plan wisely for possible outcomes before we actually know how to build one. It is to assert that detailed knowledge of how an AI's intellect, goals, drives, and beliefs would operate in a wide range of possible scenarios, together with the ability to control its behaviours and motivations in accordance with our values, would still omit essential knowledge needed to actually build such an AI. Yet what exactly would such knowledge leave out? How could we know so much about AIs without being able to actually build one? This possibility seems deeply implausible, and not comparable to any past experience in the history of technology.

Another major activity advocated by Bostrom is attempting to alter the relative timing of different technological developments. This rests on the principle of what he calls differential technological development: the idea that it is possible to retard the development of some technologies relative to the arrival time of others. In my view this principle is highly suspect. Throughout the history of science and technology, simultaneous discovery and invention is not only extremely common, but appears to be the norm of how scientific research progresses rather than the exception (see ‘list of multiple discoveries’ on Wikipedia for examples). The preponderance of such simultaneous discoveries lends strong support to the notion that the relative arrival of different scientific and technological breakthroughs depends mostly upon the existing state of scientific knowledge and technology: that when a particular discovery or invention has the requisite groundwork in place, then and only then will it occur. If, on the other hand, individual genius or funding initiatives were the major drivers of when particular developments occur, we would not expect the same special type of genius or the same sort of funding program to exist in multiple locations, leading to the same discovery at the same time. Under this explanation, the simultaneous occurrence of so many discoveries and inventions would be an inexplicable coincidence. If discoveries come about shortly after all the necessary preconditions are available, however, then we would expect multiple people in different settings to take advantage of the common set of prerequisite conditions existing at around the same time, leading to many simultaneous discoveries and developments.

If this analysis is correct, then it follows that the principle of differential technological development is unlikely to be applicable in practice. If the timing and order of discoveries and developments largely depend upon the necessary prerequisite discoveries and developments having been made, then simply devoting more resources to a particular emerging technology would do little to accelerate its maturation. These extra resources may help to some degree, but the major bottleneck on research is likely to be the development of the right set of prerequisite technologies and discoveries. Increased funding can increase the number of researchers, which in turn leads to a larger range of applications of existing techniques to slightly new uses and to minor incremental improvements of existing tools and methods. Such activities, however, are distinct from the development of innovative new technologies and substantively new knowledge. These sorts of fundamental breakthroughs are essential for the development of major new branches of technology such as geoengineering, whole brain emulation, artificial intelligence, and nanotechnology. If this analysis is correct, however, they cannot simply be purchased with additional research money, but must await the development of essential prerequisite concepts and techniques. Nor can we simply devote research funding to the prerequisite areas, since these fields would in turn have their own set of prerequisite technologies and discoveries upon which they depend. In essence, science and technology form a strongly interdependent enterprise, and we can seldom predict what ideas or technologies will be needed for a particular future breakthrough to be possible. Increased funding for scientific research overall can potentially increase the general rate of scientific progress (though even this is somewhat unclear), but changing the relative order of arrival of different major new technologies is not something we have any good reason to think is feasible. Any attempts, therefore, to strategically manipulate research funding or agendas so as to alter the relative order of arrival of nanotechnology, whole brain emulation, artificial intelligence, and other such technologies are very unlikely to succeed.

Comments

A lot of baggage goes into the selection of a threshold for "highly accurate" or "ensured safe" or statements of that sort. The idea is that early safety work helps even though it won't get you a guarantee. I don't see any good reason to believe AI safety to be any more or less tractable than preemptive safety for any other technology; it just happens to have greater stakes. You're right that the track record doesn't look great; however, I really haven't seen any strong reason to believe that preemptive safety is generally ineffective - it seems like it just isn't tried much.

Hi Zeke,

I give some reasons here why I think that such work won't be very effective, namely that I don't see how one can achieve sufficient understanding to control a technology without also attaining sufficient understanding to build that technology. Of course that isn't a decisive argument so there's room for disagreement here.
