Hello everyone,

I've finished Superintelligence (amazing!) and Our Final Invention (also great), and I was wondering: what else can I read on AI safety? Also, what should I be doing to increase my chances of being able to work on this as a career? I'm in college now, majoring in math. I'm guessing I should be learning computer science/programming as well.

Well, you might be getting toward the frontier of what published AI-safety-focused books can teach you. From here, you might want to look to AI safety research agendas, specific papers, and AI textbooks.

MIRI has a couple of technical agendas covering its more foundational and its more machine-learning-based research on AI safety. Dario Amodei of OpenAI and some other researchers have also put out a machine-learning-focused agenda (Concrete Problems in AI Safety). These agendas cite, and are cited by, a bunch of useful work. There's also great unpublished work by Paul Christiano on AI control.

In order to understand and contribute to current research, you will also want to do some background reading. Jan Leike (now at DeepMind) has put together a syllabus of relevant reading material through 80,000 Hours that includes some good suggestions. Personally, for a math student like yourself wanting to start with theoretical computer science, George Boolos's book Computability and Logic might be useful. Learning Python and TensorFlow is also great in general.
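
If you want a taste of what that looks like in practice, here's a minimal sketch (my illustration, not from any of the reading above), assuming TensorFlow 2.x with its bundled Keras API: training a small classifier on MNIST, the usual first exercise in deep learning.

```python
# Minimal TensorFlow/Keras sketch; assumes TensorFlow 2.x is installed
# (pip install tensorflow). Trains a small MNIST digit classifier.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),      # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation="relu"),      # one small hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),    # one output per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```

Once something like this runs, varying the architecture and watching what happens to test accuracy is a good way to build intuition before moving to the research papers.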

To increase your chances of making this your career, you might want to look at the entry requirements of some specific grad schools. You might also want to go for internships at these groups (or at other groups doing similar work).

In academia, some labs analyzing safety problems are:

  • UC Berkeley (especially Russell)
  • Cambridge (Adrian Weller)
  • ANU (Hutter)
  • Montreal Institute for Learning Algorithms
  • Oxford
  • Louisville (Yampolskiy)

In industry, DeepMind and OpenAI both have safety-focused teams.

Doing grad school or an internship at any of these places (though you won't necessarily end up on a safety-focused team) would be a sweet step toward working on AI safety as a career.

Feel free to reach out by email at (my first name) at intelligence.org with further questions, or for more personalized suggestions. (The same offer goes to similarly interested readers.)

I like Ryan's suggestions. (I also work at MIRI.) As it happens, we also released a good intro talk by Eliezer last night that goes into more detail on 'what does alignment research look like?': link.

[anonymous]

Any details on safety work in Montreal and Oxford (other than FHI, I assume)? I might use that for an application there.

At Montreal, all I know is that PhD student David Krueger is currently in discussions about what work could be done. At Oxford, I have in mind the work of folks at FHI like Owain Evans and Stuart Armstrong.

If you are interested in applying to Oxford, but not FHI, then Michael Osborne is very sympathetic to AI safety, though he doesn't currently work on it; he might be worth chatting to. Also, Shimon Whiteson does a lot of relevant-seeming work in the area of deep RL, but I don't know whether he is at all sympathetic.

EDIT: I forgot to link to the Google group: https://groups.google.com/forum/#!forum/david-kruegers-80k-people

Hi! David Krueger (from Montreal and 80k) here. The advice others have given so far is pretty good.

My #1 piece of advice is: start doing research ASAP!
Start acting like a grad student while you are still an undergrad. This is almost a requirement to get into a top program afterwards. Find a supervisor and ideally try to publish a paper at a good venue before you graduate.

Stats is probably a bit more relevant than CS, but some of both is good. I definitely recommend learning (some) programming. In particular, focus on machine learning (esp. Deep Learning and Reinforcement Learning). Do projects, build a portfolio, and solicit feedback.
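
To make "do projects" concrete, here's a toy, self-contained sketch of tabular Q-learning, one of the first reinforcement-learning algorithms you'd implement. Everything in it (the one-dimensional corridor environment, the rewards, the hyperparameters) is invented for illustration; a real portfolio project would use a richer environment and compare against baselines.

```python
# Toy tabular Q-learning on a 1-D corridor: the agent starts at state 0
# and earns a reward of 1 for reaching the goal at the right end.
import random

N_STATES = 6        # states 0..5; state 5 is the goal
ACTIONS = [-1, +1]  # step left or step right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

# Q-table: estimated return for taking each action in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)   # clamp to the corridor
        r = 1.0 if s_next == N_STATES - 1 else 0.0  # reward only at the goal
        # Q-learning update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a').
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy should step right (+1) in every non-goal state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})
```

Reimplementing something like this from scratch, then scaling up to deep RL with a library, is exactly the kind of portfolio work that shows you can do research.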

If you haven't already, please check out these groups I created for people wanting to get into AI Safety. There are a lot of resources to get you started in the Google Group, and I will be adding more in the near future. You can also contact me directly (see https://mila.umontreal.ca/en/person/david-scott-krueger/ for contact info) and we can chat.

"Also, what should I be doing to increase my chances of being able to work on this as a career?"

A PhD in computer science would probably be ideal, though lengthy. Research-oriented master's degrees, and programs in neuroscience and math, can be very good as well.

There's another reading list here: http://humancompatible.ai/bibliography. Like the 80K one, though, I think its philosophy section should be revised: the most important readings are Reasons and Persons (Parfit), either Ethical Theory: An Anthology or Conduct and Character: Readings in Moral Theory, and articles on plato.stanford.edu.

Further study is the best option; you can also look online for suggestions.
