This is written primarily for a non-EA audience. Posting here mostly for reference/visibility.

In 2018, the ACM Turing Award was given to three pioneers of the deep learning revolution: Yoshua Bengio, Geoffrey Hinton, and Yann LeCun.

Last month, Yoshua Bengio endorsed a pause on advanced AI capabilities research, saying “Our ability to understand what could go wrong with very powerful A.I. systems is very weak.”

Three days ago, Geoffrey Hinton left Google so that he could speak openly about the dangers of advanced AI, agreeing that “it could figure out how to kill humans” and saying “it's not clear to me that we can solve this problem.”

Yann LeCun continues to refer to anyone suggesting that we're facing severe and imminent risk as “professional scaremongers” and says it's a “simple fact” that “the people who are terrified of AGI are rarely the people who actually build AI models.”

The beliefs LeCun has are the beliefs LeCun has, but at this point it's fair to say that he's misrepresenting the field. There is no consensus among professional researchers that AI research is safe. Rather, there is considerable and growing concern that advanced AI could pose extreme risk, and this concern is shared not only by both of LeCun's award co-recipients but also by the leaders of all three leading AI labs (OpenAI, Anthropic, and Google DeepMind):

 

When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful. Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.

- Demis Hassabis, CEO of DeepMind, in an interview with Time magazine, Jan 2023

 

One particularly important dimension of uncertainty is how difficult it will be to develop advanced AI systems that are broadly safe and pose little risk to humans. Developing such systems could lie anywhere on the spectrum from very easy to impossible.

- Anthropic, Core Views on AI Safety, Mar 2023

 

"Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential."

- OpenAI, Planning for AGI and Beyond, Feb 2023

 

There are objections one could raise to the idea that advanced AI poses significant risk to humanity, but "it's a fringe idea that actual AI experts don't take seriously" is no longer among them. To a first approximation, "we have no idea how dangerous this is and we think there's a decent chance it's actually extremely dangerous" appears to be the dominant perspective among experts.

Comments

Thanks, this post is pretty relevant to me. I'm currently very interested in trying to understand Yann LeCun better. It's a sub-project in my attempt to form an opinion on AI and AI risk in general. Yann's Twitter persona really gets under my skin, so I decided to look at him more broadly and try to see what I think when not perceiving him through the lens of the most toxic communication environment ever devised by mankind ;)

I'm trying to understand how he can come to conclusions that seem so different from those of nearly everyone else in the field. Am I suffering from selection bias? (EA was one of the first things/perspectives I found when looking at the topic; I'm mostly branching out from here and feel somewhat bubbled in.)

Any recommendation on what to read to get the clearest, strongest version of Yann's thinking?

 

P.S.: Just 4h ago he tweeted this: “No one is ‘unconcerned’. But almost everyone thinks making superhuman AI safe is eminently doable. And almost no one believes that doing so imperfectly will spell doom on humanity. It will cause a risk that is worth the benefits, like driving, flying, or paying online.”

It really feels like he is tweeting from a different reality than mine.
