Jacob Davis, a writer for the socialist political magazine Jacobin, raises an interesting concern: in his assessment, current longtermist initiatives in AI Safety are escalating tensions between the US and China. This highlights a conundrum for the Effective Altruism movement, which seeks both to advance AI Safety and to avoid a great power conflict between the US and China.

This is not the first time this conundrum has been raised; Stephen Clare has explored it previously on the Forum.

The key points Davis asserts are:

  • Longtermists were key players in President Biden’s decision last October to place heavy controls on semiconductor exports.
  • Key longtermist figures advancing export controls and hawkish policies against China include former Google CEO Eric Schmidt (through Schmidt Futures and the longtermist political fund Future Forward PAC), former congressional candidate and FHI researcher Carrick Flynn, and other longtermists in key positions at the Georgetown Center for Security and Emerging Technology and the RAND Corporation.
  • Export controls have failed to limit China's AI research, but have wrought havoc on global supply chains and are seen as protectionist in some circles.

I hope this linkpost opens up a debate about the merits and weaknesses of current strategies and views in longtermist circles.

Comments

I think there's something to this, but:

  • My impression of Eric Schmidt is that he is not a longtermist, and if anything has done lots to accelerate AI progress.
  • The October 7 controls have not "devastated critical supply chains". The linked article gives no evidence for this claim. China has something like 10% or less of the chip market share, and the export controls don't affect other countries' abilities to produce chips (though they do prevent some chips from being sold to China). Most fabs right now have utilization rates well below 100%, meaning they produce fewer chips than they could due to weak demand.
  • The October 7 controls also have not "upset markets" globally, or at least the linked article gives no evidence for this claim. Memory chip-makers like Samsung have seen profits fall, but this seems to be a normal business cycle: semiconductors, and especially memory chips, are a cyclical industry, sensitive to consumer demand, and the current downturn is almost certainly tied to the broader economic downturn and the associated drop in consumer demand.
    • I think the October 7 controls have affected and will affect markets, but mostly by reducing profits of companies selling chips and equipment to China, and reducing the supply of some chips and equipment within China (their intended purpose). There'll probably be other, indirect effects down the line, but it's hard to say what those will be now.
  • I also note a tension between those two points -- the first blames the October 7 controls for a chip supply shortage, and the second blames them for a chip oversupply. Neither is true.
  • I disagree with the claims that the October 7 controls have "failed spectacularly at achieving their stated ambitions" and that despite them "China’s AI research has managed to continue apace".
    • I basically disagree with the linked article.
      • It states that Nvidia is releasing export-control-adapted versions of its chips with reduced interconnect bandwidth (to fall below the export control thresholds) for the Chinese market. This is true, but the gap between the state of the art and what can be sold to China will grow.
      • It seems to suggest that compute will be less important in future. I think that's unlikely, at least for developing frontier models.
      • Another purpose of the October 7 controls was to limit Chinese chip-makers' access to equipment, materials and software, and it seems tentatively pretty successful at that (though time will tell).
  • I think the "increased West-China tensions" point is right though and fairly concerning.
  • I also think the "CSET was a major contributor to the October 7 controls" point is right, but whether this was ex ante good or bad probably depends on one's views on AI x-risk.

My impression of Eric Schmidt is that he is not a longtermist, and if anything has done lots to accelerate AI progress.


This seems no-true-Scotsmany. It seems to have become almost commonplace for organisations that started from a longtermist seed to have become competitors in the AI arms race, so if many people who are influenced by longtermist philosophy end up doing stuff that seems harmful, we should update towards 'longtermism tends to be harmful in practice' much more than towards 'those people are not longtermists'.

It seems to have become almost commonplace for organisations that started from a longtermist seed to have become competitors in the AI arms race, so if many people who are influenced by longtermist philosophy end up doing stuff that seems harmful, we should update towards 'longtermism tends to be harmful in practice' much more than towards 'those people are not longtermists'.

I agree with this, but "longtermists may do harmful stuff" doesn't mean "this person doing harmful stuff is a longtermist". My understanding is that Schmidt (1) has never espoused views along the lines of "positively influencing the long-term future is a key moral priority of our time", and (2) seems to see AI/AGI kind of like the nuclear bomb -- a strategically important and potentially dangerous technology that the US should develop before its competitors.

I think it's fair for Davis to characterise Schmidt as a longtermist.

He's recently been vocal about AI X-Risk. He funded Carrick Flynn's campaign, which was openly longtermist, via the Future Forward PAC, alongside Moskovitz & SBF. His philanthropic organisation Schmidt Futures has a future-focused outlook and funds various EA orgs.

And there are longtermists who are pro AI like Sam Altman, who want to use AI to capture the lightcone of future value.

https://www.cnbc.com/amp/2023/05/24/ai-poses-existential-risk-former-google-ceo-eric-schmidt-says.html

He's recently been vocal about AI X-Risk.

Yeah, but so have lots of people; it doesn't mean they're all longtermists. Same thing with Sam Altman -- I haven't seen any indication that he's longtermist, but would definitely be interested if you have any sources. This tweet seems to suggest that he does not consider himself a longtermist.

He funded Carrick Flynn's campaign which was openly longtermist, via the Future Forward PAC alongside Moskovitz & SBF.

Do you have a source on Schmidt funding Carrick Flynn's campaign? Jacobin links this Vox article, which says he contributed to Future Forward, but it seems implied that this was to defeat Donald Trump. Though I actually don't think this is a strong signal, as Carrick Flynn was mostly campaigning on pandemic prevention, and that seems to make sense on neartermist views too.

His philanthropic organisation Schmidt Futures has a future-focused outlook and funds various EA orgs.

I know Schmidt Futures has "future" in its name, but as far as I can tell they're not especially focused on the long-term future. They seem to just want to boost innovation through scientific research and talent growth, but so does, like, nearly every government. For example, their Our Mission page does not mention the word "future".

His philanthropic organisation Schmidt Futures...funds various EA orgs

Can you give some examples? My impression was that the funding has been minimal at best; I'd be surprised if EA orgs receive more than, say, 10% of their funding from Schmidt Futures, and likely it's under 1%.

Also, I don't want to overstate this point, but I don't think I've yet met a longtermist researcher who claims to have had an extended (or any) conversation with Schmidt. Given that there aren't many longtermist researchers to begin with (<500 worldwide, defined rather broadly?), it'd be quite surprising for someone to claim to be a longtermist (or for others to claim that they are) if they've never even talked to someone doing research in the space.

To be fair, I think a few Schmidt Futures people were looking around EA Global for things to fund in 2022. I can imagine why someone would think they're longtermists.

I agree there are probably a few longtermist and/or EA-affiliated people at Schmidt Futures, just as there are probably such people at Google, Meta, the World Bank, etc. This is a different claim from whether Schmidt Futures institutionally is longtermist, which is again a different claim from whether Eric Schmidt himself is.

My understanding is that Schmidt (1) has never espoused views along the lines of "positively influencing the long-term future is a key moral priority of our time"

I don't think that's so important a distinction. Prominent longtermists have declared the view that longtermism basically boils down to x-risk, which (again in their view) overwhelmingly boils down to AI risk. If, following their messaging, we get highly influential people doing harmful stuff in the name of AI risk, I think we should still update towards 'longtermism tends to be harmful in practice'. 

Not as much as if they were explicitly waving a longtermist banner, but the more we believe the longtermist movement has had any impact on society at all, the stronger this update should be.

The posts linked in support of "prominent longtermists have declared the view that longtermism basically boils down to x-risk" do not actually advocate this view. In fact, they argue that longtermism is unnecessary in order to justify worrying about x-risk, which is evidence for the proposition you're arguing against, i.e. you cannot conclude someone is a longtermist because they're worried about x-risk.

Are you claiming that if (they think and we agree that) longtermism is 80+% concerned with AI safety work and AI safety work turns out to be bad, we shouldn't update that longtermism is bad? The first claim seems to be exactly what they think. 

Scott:

Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?

I think yes, but pretty rarely, in ways that rarely affect real practice... Most long-termists I see are trying to shape the progress and values landscape up until that singularity, in the hopes of affecting which way the singularity goes

You could argue that he means 'socially promote good norms on the assumption that the singularity will lock in much of society's then-standard morality', but 'shape them by trying to make AI human-compatible' seems a much more plausible reading of the last sentence to me, given the broader context of longtermism.

Neel:

If you believe the key claims of "there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime" this is enough to justify the core action relevant points of EA

He identifies as a not-longtermist (mea culpa), but presumably considers longtermism the source of these 'core action relevant points of EA', since they certainly didn't come from the global poverty or animal welfare wings.

Also, at EAG London, Toby Ord estimated there were 'less than 10' people in the world working full time on general longtermism (as opposed to AI or biotech) - whereas the number of people who'd consider themselves longtermist is surely in the thousands.

I don't know how we got to whether we should update about longtermism being "bad." As far as I'm concerned, this is a conversation about whether Eric Schmidt counts as a longtermist by virtue of being focused on existential risk from AI.

It seems to me like you're saying: "the vast majority of longtermists are focused on existential risks from AI; therefore, people like Eric Schmidt who are focused on existential risks from AI are accurately described as longtermists."

When stated that simply, this is an obvious logical error (in the form of "most squares are rectangles, so this rectangle named Eric Schmidt must be a square"). I'm curious if I'm missing something about your argument.

This is a true claim in general, but it seems quite implausible for Schmidt specifically, who has been in tech and at Google for much longer than our community has been around.

Mind if I re-frame this discussion? The relevant question here shouldn't be a matter of beliefs ('is he a longtermist?') but a matter of identity and identity strength. This isn't to say beliefs aren't important and knowing his wouldn't be informative, but identity (at least to some considerable degree) precedes and predicts beliefs and behavior.

But I also don't want to overemphasize particular labels: there are enough discernible positions out there that labels alone aren't very helpful, especially for individuals with some expertise, in positions of authority, who may be reluctant to carelessly endorse particular groups.

Accepting this, here's some of what we could look into:

  • Amount of positive socialization with EAs and affiliates (Jason Matheny's FLI history is notable; how long and involved was this position?)
  • Amount of out-group derogation - if he's positioned against our out-group, this may indicate or induce sympathy. Mentioning x-risk seriously once signaled this, and may still to a degree.
  • Effect of role identities (Matheny apparently did malaria work before EA. Not sure what the tech industry or Google CEO roles entail: defensiveness, or maybe self-importance(?), "yeah, me quoting the Bhagavad Gita would sound good!")
  • Identities are correlated; what are his political, religious and cultural identities?

I agree that identity and identity strength are important variables for collective guilt assignment.

That said, I think the case for JM is substantially stronger than the case for Schmidt, which we were previously talking about upthread.
