Geoffrey Miller

Psychology Professor @ University of New Mexico
8364 karma · Joined Jan 2017 · Working (15+ years) · Albuquerque, NM, USA
www.primalpoly.com/

Bio

Evolutionary psychology professor, author of 'The Mating Mind', 'Spent', 'Mate', & 'Virtue Signaling'. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.

How others can help me

Looking to collaborate on (1) empirical psychology research related to EA issues, especially attitudes towards longtermism, X risks and GCRs, and sentience; (2) insights for AI alignment & AI safety from evolutionary psychology, evolutionary game theory, and evolutionary reinforcement learning; (3) mate choice, relationships, families, pronatalism, and population ethics as cause areas.

How I can help others

I have 30+ years of experience in behavioral sciences research and have mentored 10+ PhD students and dozens of undergrad research assistants. I'm also experienced with popular science outreach, book publishing, public speaking, social media, market research, and consulting.

Comments (677)

As a tangent, I think EAs should avoid using partisan political examples as intuition pumps for situations like this. 

Liberals might think that 'engagement with criticism by Trump' would be worthless. But conservative crypto investors might think 'engagement with criticism by Elizabeth Warren' would be equally worthless.

Let's try to set aside the reflexive Trump-bashing.

This argument seems extremely naive.

Imitation learning could easily become an extinction risk if the individuals or groups being imitated actively desire human extinction, or even just death to a high proportion of humans. Many do. 

Radical eco-activists (e.g. Earth First) have often called for voluntary human extinction, or at least massive population reduction.

Religious extremists (e.g. Jihadist terrorists) have often called for death to all non-believers (e.g. the roughly 6 billion people who aren't Muslim).

Antinatalists and negative utilitarians are usually careful not to call for extinction or genocide as a solution to 'suffering', but calls for human extinction seem like a logical outgrowth of their worldview.

Many kinds of racists actively want the elimination, or at least reduction, of other races.

I fear that any approach to AI safety that assumes the whole world shares the same values as Bay Area liberals will utterly fail when advanced AI systems become available to a much wider range of people with much more misanthropic agendas.

Yarrow - I'm curious which bits of what I wrote you found 'psychologically implausible'?

Beautiful and inspiring. Thanks for sharing this.

I hope more EAs think about turning abstract longtermist ideas into more emotionally compelling media!

mikbp: good question. 

Finding meaningful roles for ordinary folks ('mediocrities') is a big challenge for almost every human organization, movement, and subculture. It's not unique to EA -- although EA does tend to be quite elitist (which is reasonable, given that many of its core insights and values require a very high level of intelligence and openness to understand).

The usual strategy for finding roles for ordinary people in organizations is to create hierarchical structures in which the ordinary people are bossed around/influenced/deployed by more capable leaders. This requires a willingness to accept hierarchies as ethically and pragmatically legitimate -- which tends to be more of a politically conservative thing, and might conflict with EA's tendency to attract anti-hierarchical liberals.

Of course, such hierarchies don't need to involve full-time paid employment. Every social club, parent-teacher association, neighborhood association, amateur sports team, activist group, etc. involves hierarchies of part-time volunteers. They don't expect full-time commitments. So they're often pretty good at including people who are average both in terms of their traits and abilities, and in terms of the time they have available for doing stuff beyond their paid jobs, child care, and other duties.

Counterpoints:

  1. Humans are about as good and virtuous as we could reasonably expect from a social primate that has evolved through natural selection, sexual selection, and social selection (I've written extensively on this in my 5 books).
  2. Human life has been getting better, consistently, for hundreds of years. See, e.g. Steven Pinker (2018) 'Enlightenment Now'.
  3. Factory farming would be ludicrously inefficient for the first several decades, at least, of any Moon or Mars colonies, so would simply not happen.

My more general worry about this kind of narrative -- 'humans are horrible, so we mustn't colonize space and spread our horribleness elsewhere' -- is that it feeds the 'effective accelerationist' (e/acc) cult that thinks we'd be better off replaced by AIs.

A brief meta-comment on critics of EAs, and how to react to them:

We're so used to interacting with each other in good faith, rationally and empirically, constructively and sympathetically, according to high ethical and epistemic standards, that we EAs have real trouble remembering some crucial facts of life:

  • Some people, including many prominent academics, are bad actors, vicious ideologues, and/or Machiavellian activists who do not share our worldview, and never will
  • Many people engaged in the public sphere are playing games of persuasion, influence, and manipulation, rather than trying to understand or improve the world
  • EA is emotionally and ideologically threatening to many people and institutions, because insofar as they understand our logic of focusing on tractable, neglected, big-scope problems, they realize that they've wasted large chunks of their lives on intractable, overly popular, smaller-scope problems; and this makes them sad and embarrassed, which they resent
  • Most critics of EA will never be persuaded that EA is good and righteous. When we argue with such critics, we must remember that we are trying to attract and influence onlookers, not trying to change the critics' minds (which are typically unchangeable).

I think there's a huge difference in potential reach between a major TV series and a LessWrong post.

According to this summary from the Financial Times, as of March 27, '3 Body Problem' had received about 82 million view-hours, equivalent to about 10 million people worldwide watching the whole 8-part series. It was a top-10 Netflix series in over 90 countries.
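(Back-of-the-envelope check, assuming roughly 8 hours of total runtime across the 8 episodes: 82 million view-hours / ~8 hours ≈ 10 million complete-series viewings.)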

Whereas a good LessWrong post might get 100 likes. 

We should be more scope-sensitive about public impact!

PS: Fun fact: after my coauthor Peter Todd (Indiana U.) and I read the '3 Body Problem' novel in 2015, we were invited to a conference on 'Active Messaging to Extraterrestrial Intelligence' ('active METI') at the Arecibo radio telescope in Puerto Rico. Inspired by Liu Cixin's book, we gave a talk about the extreme risks of active METI, which we then wrote up as this journal paper, published in 2017:

PDF here

Journal link here

Title: The Evolutionary Psychology of Extraterrestrial Intelligence: Are There Universal Adaptations in Search, Aversion, and Signaling?

Abstract
To understand the possible forms of extraterrestrial intelligence (ETI), we need not only astrobiology theories about how life evolves given habitable planets, but also evolutionary psychology theories about how intelligence emerges given life. Wherever intelligent organisms evolve, they are likely to face similar behavioral challenges in their physical and social worlds. The cognitive mechanisms that arise to meet these challenges may then be copied, repurposed, and shaped by further evolutionary selection to deal with more abstract, higher-level cognitive tasks such as conceptual reasoning, symbolic communication, and technological innovation, while retaining traces of the earlier adaptations for solving physical and social problems. These traces of evolutionary pathways may be leveraged to gain insight into the likely cognitive processes of ETIs. We demonstrate such analysis in the domain of search strategies and show its application in the domains of emotional aversions and social/sexual signaling. Knowing the likely evolutionary pathways to intelligence will help us to better search for and process any alien signals from the search for ETIs (SETI) and to assess the likely benefits, costs, and risks of humans actively messaging ETIs (METI).

'3 Body Problem' is a new 8-episode Netflix TV series that's extremely popular, highly rated (7.8/10 on IMDb), and based on the bestselling 2008 science fiction book by Chinese author Liu Cixin.

It raises a lot of EA themes, e.g. extinction risk (for both humans & the San-Ti aliens), longtermism (planning 400 years ahead against alien invasion), utilitarianism (e.g. sacrificing a few innocents to save many), cross-species empathy (e.g. between humans & aliens), global governance to coordinate against threats (e.g. Thomas Wade, the UN, the Wallfacers), etc.

Curious what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students?
