
This post is about GWWC's research plans for next year; for our giving recommendations this giving season, please see this post, and for our other activities, see this post.

The public effective giving ecosystem now consists of over 40 organisations and projects. These are initiatives that either try to identify publicly accessible philanthropic funding opportunities using an effective-altruism-inspired methodology (evaluators), or to fundraise for the funding opportunities that have already been identified (fundraisers), or both.

Over 25 of these organisations and projects are purely fundraisers and do not have any research capacity of their own: they have to rely on evaluators for their giving recommendations, and in practice currently mainly rely on three of those: GiveWell, Animal Charity Evaluators and Founders Pledge. 

At the moment, fundraisers and individual donors have very little to go on when selecting which evaluators to rely on and curating the exact recommendations and donations they make. These decisions seem to be made based on the public reputation of evaluators, personal impressions and trust, and perhaps in some cases a lack of information about existing alternatives or simply historical legacy. Furthermore, many fundraisers currently maintain separate relationships with the evaluators whose recommendations they use and with the charities they end up recommending, causing extra overhead for all involved parties.

Considering this situation, and based on conversations with a subset of fundraising organisations, it seems there is a pressing need for (1) a quality check on new and existing evaluators (“evaluating the evaluators”) and (2) an accessible overview of all recommendations made by evaluators whose methodology meets a certain quality standard. This need is becoming more pressing as the ecosystem grows on both the supply (evaluator) and demand (fundraiser) side.

The new GWWC research team is looking to start filling this gap: to help connect evaluators and donors/fundraisers in the effective giving ecosystem in a more effective (higher-quality recommendations) and efficient (lower transaction costs) way.

Starting in 2023, the GWWC research team plans to evaluate funding opportunity evaluators on their methodology, to share our findings with other effective giving organisations and projects, and to promote the recommendations of those evaluators we find to meet a certain quality standard. In all of this, we aim to take an inclusive approach in terms of worldviews and values: we are open to evaluating any evaluator that could be seen to maximise positive impact according to some reasonably common worldview or value system, even though we appreciate the challenge here and admit we can never be perfectly “neutral”.

We also appreciate this is an ambitious project for a small team (currently only 2!) to take on, and expect it to take us time to build our capacity to evaluate all suitable evaluators at the quality level at which we'd like to evaluate them. Especially in this first year, we may be limited in the number of evaluators we can evaluate and in the time we can spend on evaluating each, and we may not yet be able to provide the full "quality check" we aim to ultimately provide. We'll try to prioritise our time to address the most pressing needs first, and aim to communicate transparently about the confidence of our conclusions, the limitations of our processes, and the mistakes we are inevitably going to make.

We very much welcome any questions or feedback on our plans, and look forward to working with others on further improving the state of the effective giving ecosystem, getting more money to where it is needed most, and ultimately on making giving effectively and significantly a cultural norm.

Comments

Very cool! I actually recently asked, in a closely related post: "Has there been meta-evaluator work to establish which of the evaluators/advisors qualifies as an effective charity?" So I'm stoked to see this get some expert attention 😃

We also appreciate this is an ambitious project for a small team (currently only 2!) to take on...  we may not yet be able to provide the full "quality check" we aim to ultimately provide.

Are you soliciting volunteers? I'd be happy to help. I know that running a volunteer network is itself a serious undertaking, but on paper the EA community is well-suited for distributed research tasks.

If OP is interested in volunteers, I can volunteer as well.

I'm not an American but I'm a trained economist and have limited experience in research.

Thank you both for offering to help! I'm not yet clear on whether it'll make sense to work with volunteers on this, but it is certainly something we'll consider. Could you please indicate your interest by filling out this form? (select "skilled volunteering" → "impact analysis and evaluation")

Conditional on successfully fundraising for GWWC's 2023 budget, we'll very likely hire an extra researcher to work on this early next year. If this is something you'd be interested in as well, please do feel free to reach out at sjir@givingwhatwecan.org and I'll let you know once the position opens up for applications.

This is something that has been on my mind, and my organization Ge Effektivt has sometimes received questions about it, so I am very happy that you are doing this. Looking forward to your work, and hope it can improve the effective giving landscape in more than one way!

Given the current state of evaluators, it seems like a good initiative!

A related thought I had:

I wonder how it could be set up so that we do not end up in a "turtles all the way down" situation, where we have an infinite number of evaluators, evaluating other evaluators, evaluating other evaluators... ad infinitum.

At the end of the day, the public will need to TRUST one evaluator.

Thanks for your comment, Hendrik!

To address this, I think it's important to look at the value each additional layer of evaluation provides. It seems (with the multitude of evaluators and fundraisers) we are now at a point where at least some work in the second layer is necessary/useful, but I don't think a third layer would currently be justified (with 0-1 organisations active in the second layer).

Another way to see this: the "turtles all the way down" concern already applies to the first layer of evaluators (why do we need one if charities are already evaluating themselves and reporting on their impact? who is evaluating these evaluators?). The relevant question is whether a layer adds enough value. The first layer clearly does (given how many charities and donors there are, and the lack of public, independent information on how they compare), and I argue above that the second does as well.

FWIW, I don't think this second layer should be fully or forever centralised in GWWC, and I see some value in more fundraising organisations having at least some research capacity to determine their recommendations, but we need to start somewhere and there are diminishing returns to adding more. Relatedly, I should say that I don't expect fundraising organisations to just "listen to whatever GWWC says": we provide recommendations and guidance, and these organisations may use that to inform their choices (which is a significant improvement over having no guidance at all to choose among evaluators).

I like the initiative!

I think one current major weakness of GiveWell's evaluations is not accounting for near-term effects on animals and long-term effects, which may well be a crucial consideration (see here).
