
Summary

Rethink Priorities’ Global Health and Development team is a multidisciplinary ten-person team conducting research on a range of global health, international development, and climate change topics. So far, we have mostly produced “shallow” reports for Open Philanthropy, though we have also worked for other organizations and have conducted some self-driven research. This post shares our current research process, with the aim of making our research as transparent as possible.

About the team

The Global Health and Development (GHD) team is one of the newer departments at Rethink Priorities (RP). It officially formed in Q3 2021, and throughout 2022 the team grew from the initial four hires to its current 10 members. Our team consists of two senior research managers (Tom Hird and Melanie Basnak) overseeing eight researchers of different seniority (Greer Gosnell, Aisling Leow, Jenny Kudymowa, Ruby Dickson, Bruce Tsai, Carmen van Schoubroeck, James Hu, and Erin Braid). GHD team members have expertise in economics, health, science, and policy, and bring experience from academia, consultancy, medicine, and nonprofit work.

Our past research reports

Rethink Priorities is a research organization that strives to generate impact by providing relevant stakeholders with tools to make more informed decisions. The GHD team’s work to date has mainly been commissioned by donors looking to have a positive impact. Since its inception, the team has completed 23 reports for five different organizations/individuals, as well as two self-driven reports. We have publicly published four of these reports: 

  1. How effective are prizes at spurring innovation?
  2. Livelihood interventions: overview, evaluation, and cost-effectiveness
  3. The REDD+ framework for reducing deforestation and mitigating climate change: overview, evaluation, and cost-effectiveness
  4. Exposure to Lead Paint in Low- and Middle-Income Countries

Whenever possible, we want to disseminate our findings to maximize our impact. We intend to publish 13 of the remaining 19 reports that we have completed but not yet shared publicly.[1] Going forward, we hope to publish reports within three months of their completion.[2]

Most of our past reports (78%) have been commissioned by Open Philanthropy (OP). The projects we typically do for OP are “shallow” investigations looking into specific cause areas (e.g., hypertension, substandard and falsified drugs). These reports usually contain the following:

  • A basic introduction, especially for complex topics
  • An estimate of the burden of the specific problem area (and potentially the impact one could have by focusing on that area)
    • This process usually involves critically engaging with existing estimates, as well as making our own
    • The burden estimation is often the most important part of these reports (a toy sketch of this step follows this list)
  • Information about existing funding going into the area
  • An analysis of potential interventions to tackle the issue, which often includes:
    • Identifying potential interventions across different areas (e.g., policy, advocacy, market shaping, direct provision)
    • Evaluating potential interventions with a view to tractability and cost-effectiveness
  • A discussion of the main uncertainties about the area and/or the existing interventions
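
To make the burden-estimation step concrete, here is a toy back-of-the-envelope sketch in Python. Everything in it (the condition, the numbers, and the variable names) is a hypothetical placeholder rather than a figure from any of our reports; in a real report, each input would be sourced and cross-checked against existing estimates, per the “critically engaging” step above.

```python
# Toy burden estimate in DALYs (all inputs are hypothetical placeholders).
# DALYs = YLDs (years lived with disability) + YLLs (years of life lost).

population = 50_000_000     # people in the region of interest (assumed)
prevalence = 0.02           # fraction of the population affected (assumed)
disability_weight = 0.10    # severity of the condition on a 0-1 scale (assumed)
deaths_per_year = 10_000    # annual deaths attributable to the condition (assumed)
avg_yll_per_death = 30      # average years of life lost per death (assumed)

ylds = population * prevalence * disability_weight  # annual years lived with disability
ylls = deaths_per_year * avg_yll_per_death          # annual years of life lost
annual_burden = ylds + ylls

print(f"YLDs: {ylds:,.0f}/yr, YLLs: {ylls:,.0f}/yr, total: {annual_burden:,.0f} DALYs/yr")
```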

We have also done other types of work (for OP and others), including red-teaming (providing an outside skeptical challenge to existing work and ideas), investigating specific uncertainties around a topic following a previous report on it, and writing exploratory/strategy reports on relevant research within the effective altruism (EA) space.

Our research process

Our workflow

Most of our projects involve collaboration among two to three researchers of different seniority. We typically ensure that there is one senior researcher per project who acts as “project lead,” handling most of the coordination and ensuring, along with the manager, that the project stays on track.[3]

Our commissioned projects usually kick off with a brief from the client that contains research questions that guide and structure our research. For internal projects (and some commissioned projects), the managers put together the briefs.

Most of our research projects, regardless of their nature or topic, involve the following components:

  • Desk research. We generally begin by searching for appropriate literature to answer the questions at hand. We assess the evidence (e.g., the number of quality studies in support of each idea, and their generalizability to the context of interest) and identify our uncertainties based on gaps in the literature.
  • Expert interviews. We interview experts on the topics we research. We are a team of generalists, and as such, we remain humble about our limited expertise in a lot of the areas we research and seek to synthesize expert opinions. Experts include, but are not limited to, academics, CEOs, practitioners, and government officials. When possible, particularly when a topic is polarizing, we interview experts with (very) different perspectives (e.g., for our lead paint report). We find these experts through a combination of recommendations from clients, connections from our own networks, and cold messaging relevant people identified through desk research.
  • Quantitative analyses. We often do quantitative analyses to estimate how cost-effective an intervention might be, how many lives it may have saved to date or could save in the future, and the like. These vary in complexity, from very rough back-of-the-envelope calculations based mostly on assumptions to more complex cost-effectiveness analyses (CEAs) drawing from a mix of data and assumptions (e.g., see the Spark CEA in our livelihoods report). We often use Excel or Google Sheets, but will on occasion use Causal or Guesstimate, depending on the client’s preferences and the project’s needs. A minimal sketch of this kind of calculation follows this list.
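
As a concrete illustration of the rougher end of this spectrum, below is a minimal Monte Carlo cost-effectiveness sketch, similar in spirit to what tools like Guesstimate do. All input distributions are hypothetical placeholders, not figures from any of our analyses:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000  # number of Monte Carlo samples

# Hypothetical inputs, each modeled as a distribution to reflect uncertainty (all assumed).
cost_per_person = rng.lognormal(mean=np.log(20), sigma=0.4, size=n)     # USD per person reached
dalys_per_person = rng.lognormal(mean=np.log(0.01), sigma=0.7, size=n)  # DALYs averted per adopter
adoption_rate = rng.uniform(0.3, 0.9, size=n)                           # fraction who adopt

cost_per_daly = cost_per_person / (dalys_per_person * adoption_rate)

lo, med, hi = np.percentile(cost_per_daly, [5, 50, 95])
print(f"Cost per DALY averted: median ${med:,.0f} (90% interval ${lo:,.0f} to ${hi:,.0f})")
```

Modeling inputs as distributions rather than point estimates lets us report an uncertainty interval alongside the headline number, which feeds directly into the reasoning-transparency principle described below.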

The amount of time spent on a given project depends on features like its scope and the number of researchers involved. The average project has involved about 60% of two full-time researchers’ time over the course of five weeks, though some projects have taken just one to two weeks.

Our reports undergo several rounds of internal review. During these periods (often in the middle and at the end of each project), the manager overseeing that project will thoroughly review drafts. Often, the other manager (and sometimes a researcher not involved in that project) will also act as a reviewer. Reviews have usually taken place ~two days before the draft or final report was due, allowing some time for the researchers to address outstanding comments, doubts, or concerns. In the context of commissioned research, we send this version of the report to the client.

We then spend some extra time finalizing and polishing the report for publication. This step involves checking for consistent formatting, reaching out to experts to ensure their views are represented accurately and securing permission to quote them publicly, adding an editorial note and an acknowledgments section, and conducting a final (and particularly thorough) round of internal review.

The timeline of a typical project

Below is an example timeline for a typical project to date:

  • Week 1:
    • Engage with the project brief, identifying potential “cruxes” in the research, and trying to define the scope as thoroughly as possible
    • Kickoff meeting with the client, where we raise questions that arose from engaging with the brief and discuss logistics
    • “Premortem”: a process in which we try to identify the main difficulties of completing this project and define action items to ensure we can overcome them
    • Team meeting to divide and coordinate the work
    • Initial research, getting familiar with the topic
    • Identifying and reaching out to experts (sometimes it takes a while for experts to get back to us, so we try to do this task as soon as possible; over the course of the project we might reach out to additional experts)
    • Rough “initial takes” shared with client
  • Weeks 2-3:
    • Desk research
    • Expert interviews
    • Sometimes generate quantitative models, though this often takes place later in the project
    • First draft: internal review, send to client, debrief meeting with client to get feedback and discuss next steps
  • Weeks 4-5:
    • Desk research
    • Sometimes more expert interviews
    • Generate quantitative models
    • Write a section on remaining uncertainties, and sometimes a section on “what we would do with more time”
    • Write executive summary
    • Final draft: internal review, send to client, debrief meeting with client to receive and give feedback
  • Week 6+:
    • “Retrospective”: a process in which we discuss what worked and what didn’t when conducting this project, and distill learnings for future projects
    • Sometimes we are asked to do a few more hours of work to answer a key question that arose from our research; we usually follow up on those requests right after the project is completed
    • Polish the report for publication

Throughout the course of the project, we have recurring team meetings to discuss progress, and we may reach out to the client via email or have weekly check-in calls with them to ensure short feedback loops.

Some general principles

Across topics and project types, there are some underlying principles that remain constant:

  • Reasoning transparency. We try to make our reasoning as transparent as possible, specifying how all sources of information included in the report contribute to our conclusions, stating our certainty levels around different claims, and pointing out major sources of uncertainty in our analyses.[4]
  • Intellectual honesty/humility. Our team brings together diverse experience (academia, consulting, nonprofits) and areas of expertise (medicine, biology, climate change, economics, quantitative sciences). That being said, we view ourselves as generalists and are not usually experts in the specific topics we research. Additionally, most of our reports are carried out in a limited time frame. Thus, while we strive for rigor in our research, we recognize that our findings may not be the final word, and we are always willing to revise our conclusions in light of new information.
  • Collaboration. We think there is strength in collaboration, both within RP and across value-aligned organizations. We have started conversations with other researchers in the GHD and climate spaces and are always keen to share our unpublished reports (and any other resources that could be useful) with them. We strive to be kind and respectful in all of our interactions with external researchers and stakeholders.

Future developments

Our research process has been evolving and will continue to do so. To ensure our research continually improves in rigor and thoroughness, we periodically revisit our processes. As our emphasis shifts toward internally driven research, the features and format of our reports and methodological approaches could also change.

We aim to incorporate relevant aspects (e.g., assumptions, moral weights) of research outputs from other organizations if we think they are well supported and will improve the conclusions of our reports.

We have begun to assemble guides related to some of our primary research components. For example, we are currently working on a cost-effectiveness analysis guide to converge on a more unified and replicable framework. In the spirit of transparency and collaboration, we hope to eventually make our internal guide publicly available.

We mentioned above that our reports go through several rounds of internal review. We would like to encourage and participate in external review processes in the future, for instance with researchers at other global health, development, and climate organizations and/or academics with relevant expertise. We imagine this being a collaborative endeavor, where other researchers review some of our work and we review some of theirs.

Contributions and acknowledgments

This post was written by Melanie Basnak with feedback from the full GHD team. We would like to thank Adam Papineau for copyediting and Rachel Norman for reviewing the post and providing useful suggestions. If you are interested in Rethink Priorities’ work, you can sign up for our newsletter. We use it to keep our readers updated about new research posts and other resources.


 

  1. ^

     Some of our reports cannot be published because we have not secured our clients’ permission to do so, and there are good reasons to withhold some of them. Other reports are quite niche, and we do not think publishing them would provide enough value to readers to justify the time required to prepare them for publication.

  2. ^

     Our publication process has been delayed in the past due to the limited size of our team, with researchers spending most of their time tackling new projects as soon as previous ones were completed. With more staff, we are now making progress in shortening the window between project completion and publication.

  3. ^

     This is not always the case. Three projects to date have been carried out by a single researcher, and four were completed without a senior researcher on board.

  4. ^

     For more on reasoning transparency, see this research report by Luke Muehlhauser of OP.

Comments

Thanks for sharing. I'm not a professional researcher, but I spend a fair bit of time researching personal projects, areas of interest, etc., and enjoy learning about different exploration frameworks and processes. As a generalist myself, I find it can sometimes be difficult to know if you're adding signal or noise to a picture you've yet to fully envisage -- particularly where a high level of outside domain or technical knowledge is necessary.

In my experience, beneficial answers are often the result of pinging the right sources with the right queries. This alone can be a difficult chain to establish, but there's a deeper layer that strikes me as paradoxical: in most cases, the person/team/org seeking knowledge is also the arbiter of information. So...

  • How do you determine if you're asking the right questions?
  • What is your process for judging information quality?
  • Do you employ any audits or tools to identify/correct biases (e.g. what studies you select, whom you decide to interview, etc.)? 

Thanks for engaging! I'll speak for myself here, though others might chime in or have different thoughts.

  • How do you determine if you're asking the right questions?
    • Generally we ask our clients at the start something along the lines of "what question is this report trying to help answer for you?" Often this is fairly straightforward, like "is this worth funding?" or "is this worth more researcher hours to explore?" And we will often push back or add things to the brief to make sure we include what is most decision-relevant within the timeframe we are allocated. An example of this is when we were asked to look into the landscape of philanthropic spending for cause area X, but it turned out that non-philanthropic spending might also be pretty decision-relevant, so we suggested incorporating that into the report.
    • We have multiple check-ins with our client to make sure the information we get is the kind of information they want, and to have opportunities to pivot if new questions come up as a result of what we find that might be more decision-relevant.
  • What is your process for judging information quality?
    • I don't think we have a formalised organisational-level process around this; it's fairly general research appraisal that we do independently. There's a tradeoff between following a thorough process and speed; it might be clear on skimming that a study should update us much less because of its recruitment or allocation methods, but if we needed to, e.g., run every study we read through the MMAT (Mixed Methods Appraisal Tool), this would be pretty time-consuming. In general we try to transparently communicate what we've done in check-ins with each other, with our client, and in our reports, so they're aware of limitations in the search and our conclusions.
  • Do you employ any audits or tools to identify/correct biases (e.g. what studies you select, whom you decide to interview, etc.)? 
    • Can you give me an example of a tool to identify biases in the above? I assume you aren't referring to tools that we can use to appraise individual studies/reviews but one level above that?
    • RE: interviews, one approach we frequently take is to look for key papers or reports in the field that are most likely to be decision-relevant and reach out to their authors. Sometimes we will intentionally aim to find views that push us toward opposing sides of the potential decision. Other times we just need technical expertise in an area that our team doesn't have. Generally we will send the client the list to make sure they're happy with the choices we've made, which is intended to reduce doubling up on the same expert, but also serves as a checkpoint, I guess.
    • We don't have audits, but we do have internal reviews, though admittedly I think our current process is unlikely to pick up issues around interviewee selection unless the reviewer is well connected in this space, and it will similarly only pick up issues in study selection if the reviewer knows specific papers or has strong priors around the existence of stronger evidence on the topic. My guess is that the likelihood of audits making meaningful changes to our reports is sufficiently low that, if an audit took more than a few days, it just wouldn't be worth the time for most of the reports we are doing. That being said, it might be a reasonable thing to consider as part of a separate retrospective review of previous reports! Do you have any suggestions here, or are there good approaches you know about / have seen?

Thanks for your explanations!

Re: Questions

Apologies… I meant the questions your team decides on during your research and interview processes (not the initial prompt/project question). As generalists, do you ever work with domain experts to help frame the questions (not just get answers)?

Re:  Audit tools

I realize that "tools" might have sounded like software or something, but I'm thinking more of frameworks that can help weed out potential biases in data sets (e.g., algorithmic bias, the clustering illusion), studies (e.g., publication bias, parachute science), and individuals (e.g., cognitive biases, appeals to authority). I'm not suggesting you encounter these specific biases in your research, but I imagine there are known (and unknown) biases you have to check for and assess.

Re: Possible approach for less bias

Again, I'm not a professional researcher, so I don't want to assume I have anything novel to add here. That said, when I read about research and/or macro analysis, I see a lot of emphasis on things like selection and study design, but not as much on the curation or review teams, i.e., who decides?

My intuition tells me that, along with study design, curation and review are particularly important to weeding out bias. (The merry-go-round water pump story in Doing Good Better comes to mind.) You mentioned sometimes interviewing differing or opposing views, but I imagine these interviews happen inside the research itself and are usually with other academics or recognized domain experts (please correct me if I'm wrong).

So, in the case of, say, a project by an org from the Global North that would lead to action/policy/capital allocation in/for the Global South, it would seem that local experts should also have a "seat at the table," not just in providing data but in curating, reviewing, and drawing conclusions as well.

Great! I am curious why publishing has been so slow - I would have assumed it is easiest to put a report up roughly immediately, while the project is fresh in your mind and before the research is out of date. Also, I was pleased to see that the time estimates stack up pretty well in my ballpark calculation:

research supply = 1.5 years × 48 work weeks/year × 7 researchers = 504 researcher-weeks
research use = 6 weeks/report × 3 researchers × 23 reports = 414 researcher-weeks

Which is pretty close for a calculation like this, I reckon :)

Thanks for this! Yeah, the research going out of date is definitely a relevant concern in some faster-moving areas. RE: easiest to put it up ~immediately - I think this would be true if our reports for clients could just be copy-pasted into a public-facing version for a general audience, but in practice this is often not the case, e.g., because the client has background knowledge that it would be unreasonable to expect the public to have, and because we need to run quotes by interviewees to see if they're happy with being quoted publicly.

There's a direct tradeoff here between spending time turning a client-facing report into a public-facing version and just starting the next client-facing report. In most cases we've just prioritised the next client-facing report, but it is definitely something we want to think more about going forward, and I think our most recent round of hires has definitely helped with this.

In an ideal world, the global health team would have a lot of unrestricted funding so we could push these things out in parallel: it is one way (among many others we'd like to explore) of increasing the impact of research we've already done, and it would also provide extra feedback loops that can improve our own process and work.

Thanks, makes sense re funding and tradeoffs. I think it would be understandable if you decided for some fraction of your research projects that it would be too much work to write them up for a public audience; my guess is that there is something of a bimodal distribution where writing a report up immediately or never are the best options, and writing it up later is dominated by doing so immediately.

Also, there may already be this somewhere that I have missed, but (except of course for any secret/extra-sensitive projects) it seems low cost and potentially quite valuable to put up a title and perhaps just a one-para abstract of all the projects you have done/are doing, so that anyone else researching a similar topic can reach out, or even deprioritise researching that topic if they know you already have and are just yet to publish.

it seems low cost and potentially quite valuable to put up a title and perhaps just a one-para abstract of all the projects you have done/are doing

This is a great suggestion, thanks!

Thanks for sharing your process!

We aim to incorporate relevant aspects (e.g., assumptions, moral weights) of research outputs from other organizations if we think they are well supported and will improve the conclusions of our reports.

Since you mention moral weights, are you considering addressing the effects on animals? I think it would be quite important. I estimated:

  • Here the negative utility of farmed animals is 4.64 times the positive utility of humans.
  • Here the effects of GiveWell's top charities on wild arthropods are 1.50k times their effects on humans (based on deforestation rates and Rethink Priorities' median welfare range for silkworms). I do not know the sign of the effects on arthropods, but it seems important to figure out whether they are indeed the major driver (for better or worse).

Hi Vasco, I apologize for the delayed response. Because of capacity constraints, we can’t always address all comments, so we prioritize them based on relevance/importance and upvotes. To answer your question, we don’t currently address the effects of the interventions on animals. As we mention in the post, most of our work to date has been commissioned. Because of this, the questions we seek to answer and the scope associated with those questions are often decided by the client (though we only work with value-aligned clients, on topics we think are relevant and could be impactful). So far, our clients haven’t wanted us to assess the effect of potential interventions on animals, and we haven’t done so. If we encountered prospective clients interested in this topic, or if RP was interested in conducting internal research on it, it would likely fall under the new Worldview Investigations Team, since, given their mission and expertise, they are better positioned to tackle this question. I encourage you to stay tuned to their future research and invite you to join our newsletter in case WIT publishes topics of interest to you! Thank you for your interest and for sharing your estimates.

Thanks for the update, Melanie!

This looks great!

How do you select projects and how are you funded? 
Do you do commissioned work, pro-bono work, or both?
Are you a business or an NGO?


What would be the (ballpark) cost of a 6-week project? 

Hi Carl, thank you!

How do you select projects and how are you funded? 

We work a lot with Open Philanthropy. We believe in their mission and see a clear path to impact through them. We are value-aligned, and are usually also aligned in terms of topics of interest/topics we think could be impactful to look into. We have a long-term arrangement with them and they commission projects from us. We also work with other clients, usually on a per-project basis. For these other clients, projects have arisen from them asking for a specific project, from us pitching one, or from a combination (e.g., each party shares a list of projects). We decide to move forward with a commissioned project if we think it could be impactful (either because we are aligned with the funder and see a path to impact through their decision-making, because we think the topic is important and publishing research on it could be impactful, or, usually, both).

Do you do commissioned work, pro-bono work, or both?

Our team mostly does commissioned work, though we have started doing some internal research, which is self-driven but which we hope will be helpful for the community. We would like to do more of it, but we need more unrestricted funds to do so.

Are you a business or an NGO? 

We are an NGO.


What would be the (ballpark) cost of a 6-week project? 

This depends on the size of the organization commissioning the project and whether it's a standalone project or we have a longer-term contract with them.

 
