Bio

I currently lead EA Funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.

Comments

Answer by calebp

Hi Markus,

For context, I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel, not me). We are still paying out grants to our grantees — though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).

We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:

  • the name of the application (from the subject lines of previous EA Funds emails),
  • the reason the request is urgent, and
  • the latest decision and payout dates that would work for you, such that if we can’t meet those dates there is little reason to make the grant.

You can also apply to one of Open Phil’s programs; in particular, Open Philanthropy’s program for grantees affected by the collapse of the FTX Future Fund may be particularly relevant to people applying to EA Funds as a result of the FTX crash.

That's fair, though I personally would be happy to just factor in neartermist interventions via their marginal changes to economic growth (which in most cases I expect to be negligible), in the absence of some causal mechanism by which I should expect a neartermist intervention to have an outsized influence on the long-run future.

I don’t think this is norm-breaking for the EA forum or general discourse (though I might still prefer people act differently).

I think this should probably be in our next post; we are currently spending a large fraction of our time reflecting on what work has gone well or poorly in the past and trying to develop a more coherent strategy.

I was quite involved in the initial design and running of the most recent Open Phil GCR survey, which is a retrospective survey and informs a lot of my views on what works well in EA community building more broadly (though it’s not a drop-in replacement for publishing evaluations of our own grants).

I think this situation is pretty different. In my email, I said we would not be able to provide feedback, but I decided to provide feedback anyway. The grants were reviewed by other fund managers internally, who agreed that your application was not a good fit for the fund.

I will disclose the email I sent to Caleb below. In his defence: he did reply with feedback after this email, for which I'm thankful. Unfortunately, the feedback contained factual errors about our application and company, and made it clear that our application was not carefully reviewed (or reviewed at all). We recently got another application rejected by Caleb, even though I specifically asked for someone else to review it too, because I believe he has something against me (no clue what that would be, since he has always ignored me and we have never met).

I also don't think I made factual errors when evaluating your application. I don't want to publicly share details of your grants, but it's probably at least somewhat helpful to have it on the record that I disagree. Other fund managers and I have actually reviewed your applications. I didn't evaluate all of them due to your request, but I do send the rejection emails.

Bottlenecks. AI progress relies on improvements in search, computation, storage and so on (each of these areas breaks down into many subcomponents). Progress could be slowed down by any of these subcomponents: if any of these are difficult to speed up, then AI progress will be much slower than we would naively expect. The classic metaphor here concerns the speed a liquid can exit a bottle, which is rate-limited by the narrow space near the opening. AI systems may run into bottlenecks if any essential components cannot be improved quickly (see Aghion et al., 2019).

 

This seems false to me.

  • I think it's helpful to consider the interaction of compute, algorithms, data, and financial/human capital. 
    • I'm not sure that many people think that "search" or "storage" are important levers for AI progress
      • I guess RAM isn't implausible as a bottleneck, but I probably wouldn't decouple that from chip progress more broadly.
  • Compute
    • progress seems driven by many small improvements rather than one large change; there are many ideas that might work when designing chips, manufacturing equipment, etc., and progress in general seems fairly steady, regular, and distributed
  • Algorithms
    • again, the story I tend to hear from people inside the labs, as well as from various podcasts and Twitter, is that many ideas might work, and it's mostly a case of testing them empirically in a compute-efficient manner
    • and performance gains can come from multiple places, e.g., performance engineering, better implementations of components, architectural improvements, hyperparameter search ... which are approximately independent of each other
  • Data - I think data is a more plausible bottleneck; it seems to me that this largely hinges on whether or not synthetic data generation works.

That said, my main issue is that you shouldn't consider any of these factors as "independent bottlenecks." If there isn't enough data, you can try to develop more data-efficient algorithms or dedicate more compute to producing more data. If you're struggling to make progress on algorithms, you can just keep scaling up, throwing more data and compute at the problem, etc.

I do think bottlenecks may exist, and identifying them is an important step in determining how to forecast, regulate, and manage AI progress. But I don't think interactions between AI progress inputs should be used as an argument against a fast take-off or against approximately monotonically increasing rates of AI progress up to extremely powerful AI.
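To make this concrete, here is a minimal toy sketch (my own illustrative functional forms and made-up growth rates, not numbers from Aghion et al. or the post above): if the inputs to AI progress were perfect complements, the slowest-growing input would cap overall progress, whereas with even modest substitutability the faster-growing inputs partly compensate for a stagnant one.

```python
# Toy sketch: "AI progress" under two assumed production functions.
# All growth rates and functional forms are illustrative assumptions.

def complements(inputs):
    # Perfect complements (Leontief): output is capped by the scarcest
    # input, i.e. the hard-bottleneck story.
    return min(inputs)

def substitutable(inputs):
    # Equal-weighted Cobb-Douglas: strong inputs partly compensate for a
    # stagnant one, so no single input is a hard rate limit.
    prod = 1.0
    for x in inputs:
        prod *= x ** (1 / len(inputs))
    return prod

# Inputs indexed as [compute, algorithms, data]; data stagnates (assumed).
growth = [1.5, 1.3, 1.0]   # per-period multipliers
levels = [1.0, 1.0, 1.0]

for t in range(6):
    print(f"t={t}: bottlenecked={complements(levels):.2f}, "
          f"substitutable={substitutable(levels):.2f}")
    levels = [x * g for x, g in zip(levels, growth)]
```

Under the complements assumption, output stays pinned to the stagnant "data" input; under the substitutable assumption, it keeps growing, which is roughly the intuition behind not treating these factors as independent bottlenecks.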

I really liked the post! Thanks for writing it.

I could see people upvoting this post because they think it should be more like -10 than -21. I personally don't see it as concerning that it's "only" on -21.

Sorry, it wasn't clear. The reference class I had in mind was cause-prio-focussed resources on the EA forum.

I think people/orgs do some amount of this, but it's kind of a pain to share them publicly. I prefer to share this kind of stuff with specific people in Google Docs, in in-person conversations, or on Slack.

I also worry somewhat about people deferring to random cause prio posts, and I'd guess that on the current margin, more cause prio posts that are around the current median in quality make the situation worse rather than better (though I could see it going either way).
