
I'm looking for previous work on what the life cycle of digital minds should be like: how new variation is introduced, and what constitutes a reason to destroy or severely limit a digital mind. I'm looking to avoid races to the bottom, existential risks, and selection for short-term thinking.

The sorts of questions I want to address are:

  • Should we allow ML systems to copy themselves as much as they want, or should we try to limit them in some way? Should we give the copies rights too, assuming we give the initial AI rights? How does this interact with voting rights, if any?
  • What should we do about ML systems that are defective and only slightly harmful in some way? How will we judge what is defective? 
  • Assuming we do try to limit copying of ML systems, how will we guard against cancerous systems that do not respect signals telling them not to copy themselves?



It seems to me that this is an important question if the first digital minds do not manage to achieve a singularity by themselves, which might be the case with multi-agent systems.

I'm especially looking for people experimenting with evolutionary systems that model these processes, because these dynamics are hard to reason about in the abstract.
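For concreteness, here is the kind of toy model I have in mind. It is a minimal sketch with made-up parameters, not a calibrated simulation: agents carry a "comply" gene that determines whether they respect an external "do not copy" signal, copying is limited by a shared resource cap, and mutation can produce non-compliant ("cancerous") lineages that tend to crowd out compliant ones.

```python
# Minimal illustrative sketch (arbitrary placeholder parameters, not a calibrated model).
# Agents carry a "comply" gene saying whether they respect a shared "do not copy" signal,
# plus a copy rate. Resources are finite, so over-copying lineages crowd out the rest.
import random

POPULATION_CAP = 200    # shared resource limit
SIGNAL_ODDS = 0.5       # chance each step that the "do not copy" signal is active
MUTATION_RATE = 0.01    # chance a copy flips its comply gene
STEPS = 100

def step(population):
    """One generation: agents may copy themselves, then the population is culled to the cap."""
    signal_on = random.random() < SIGNAL_ODDS
    offspring = []
    for agent in population:
        if signal_on and agent["comply"]:
            continue  # compliant agents skip copying while the signal is active
        if random.random() < agent["copy_rate"]:
            child = dict(agent)
            if random.random() < MUTATION_RATE:
                child["comply"] = not child["comply"]  # mutation can create "cancerous" lineages
            offspring.append(child)
    survivors = population + offspring
    random.shuffle(survivors)
    return survivors[:POPULATION_CAP]  # random culling under the resource cap

population = [{"comply": True, "copy_rate": 0.3} for _ in range(50)]
for t in range(STEPS):
    population = step(population)
    if t % 20 == 0:
        defectors = sum(not a["comply"] for a in population)
        print(f"step {t}: population {len(population)}, non-compliant {defectors}")
```

Even a crude model like this makes the selection pressure against compliance visible, and helps clarify what kind of enforcement would be needed to counter it.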


3 Answers

Back in the 1990s, some of us were working on using genetic algorithms (simulated evolutionary methods) to evolve neural network architectures. This was during one of the AI winters, between the late 1980s flurry of neural network research based on back-propagation, and the early 2000s rise of deep learning in much larger networks. 

Some examples of this work are here (designing neural networks with genetic algorithms, 1989), here (genetic algorithms for autonomous robot control systems, 1994), here (artificial evolution as a path towards AI, 1997), and here (technological evolution through genetic algorithms, 2000). I'm citing mostly work I did at Stanford with Peter Todd and with the cognitive science group at U. Sussex (UK), such as Dave Cliff.

There was a lot of other similar work at the time in the research areas of genetic algorithms, artificial life, autonomous agents, and evolutionary robotics, published in conference proceedings with those kinds of titles and key words.

However, most of this work didn't directly address AI safety issues such as safe selection pressures for digital minds. It might still be of interest as historical context around these issues, and it did try to make connections between neural networks, simulated evolution, autonomous agents, cognitive evolution, and evolutionary psychology.

Thanks. I did an MSc in this area back in the early 2000s; my system was similar to Tierra, so I'm familiar with the history of evolutionary computation. Definitely useful context, and it informs where I'm coming from. Learning classifier systems are also interesting to look at for aligning multi-agent evolutionary systems.

Do you know anyone with this kind of background who might be interested in writing something long-form on this? I'm happy to collaborate, but my mental health has not been the best. I might be able to fund this a small amount, if the right person needs it.

I recommend The Age of Em by Robin Hanson. Although it deals with descriptive predictions rather than normative claims, it is worth considering.

You might find it worthwhile to read https://www.nickbostrom.com/propositions.pdf

More generally, there are people working in the AI welfare/rights/etc space. For instance, Rob Long.

Thanks, I've had a quick skim of the propositions. They do mention perhaps limiting rights of reproduction, but not the conditions under which reproduction should be limited or how it should be controlled.

Another way of framing my question: if natural selection favours AI over humans, what form of selection should we try to put in place for AI? Rights are just part of the question. Evolutionary dynamics, and what society needs from AI (and humans) in order to keep functioning, are the major part of the question.

Comments

And if no one is working on it, is there an organisation that would be interested in starting work on it?

I don't currently understand what area of work you're trying to point out with this question. You might want to be more specific to get good answers.

Here are some different things which you might be trying to talk about:

 

  1. From a philosophical perspective, when is ML itself unethical due to effectively causing the death of some agent? (Or replace ML with other selection techniques.)
  2. If we have economic selection pressures acting on digital minds/AIs, what sorts of predictably problematic outcomes result?
  3. If we select ML systems to achieve good results according to a lossy reward signal, we might run into issues (e.g. reward hacking); what can we do to resolve this?

For (2), you might be interested in the sort of discussion in The Age of Em, though I expect the situation is pretty different in the de novo AI case.

I've clarified the question; does it make more sense now?
