I've been awarded a small Lightspeed Grant to replicate empirical social science research. What research should I look at?

I'm a PhD economist with an interest in reanalyzing published research using different methods or data (e.g., checking whether the results are robust to different regression models, rather than rerunning a lab experiment). I've looked at whether mayors in China are promoted based on GDP growth, the effect of racial violence on patenting, and the effect of medical marijuana legalization on crime. I've also done work on air pollution and mortality, the long-run impacts of the measles vaccine, and how tech clusters drive innovation.
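To make "robust to different regression models" concrete, here's a minimal sketch of the kind of specification check I have in mind. Everything in it is hypothetical: the file, the variable names, and the specifications are placeholders for whatever a given paper actually estimates.

```python
# A minimal robustness-check sketch: re-estimate a published result under
# alternative specifications and compare the coefficient of interest.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("replication_data.csv")  # hypothetical replication file

specs = {
    "baseline":      "outcome ~ treatment + C(year)",
    "unit_fe":       "outcome ~ treatment + C(year) + C(unit)",
    "with_controls": "outcome ~ treatment + C(year) + C(unit) + pop + income",
}

for name, formula in specs.items():
    # Cluster standard errors at the unit level in every specification.
    fit = smf.ols(formula, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["unit"]}
    )
    print(f"{name:>13}: beta = {fit.params['treatment']:.3f} "
          f"(se = {fit.bse['treatment']:.3f})")
```

If the coefficient on the treatment variable moves around a lot across columns like these, that's usually where the interesting reanalysis is.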

Comments

  1. Paper on general equilibrium effects of cash transfers
  2. Estimates of the effect of water chlorination on mortality in the 20th century
  3. Clemens/Montenegro/Pritchett estimates of the price equivalent of migration barriers
  4. Kleven on the earned income tax credit and whether lower marginal rates raise employment rates

Can we hear why you want to take another look at Egger et al. (2021)? It's a really important paper, and it matters to get this stuff right; OTOH, its data and programs are publicly accessible (download link here), and the journal has a pretty robust replication policy... I guess I'm thinking that if something is wrong in this paper, it's going to be off in the text rather than in the code, i.e., any mistakes are going to be conceptual. WDYT?

I'd expect this article to be pretty solid, but errors in top journals do happen.

Yep, I recall this case from Bryan Caplan as well: https://betonit.substack.com/p/a-correction-on-housing-regulation

I happen to think Johannes is unusually careful about this stuff; per the original UCT evaluation:

Second, we also follow common practice by making public the data and code that produce the results we report in this paper. However, it has recently been shown that data and code used in economics papers frequently contains errors, making it difficult for readers to confirm the findings (Chang and Li 2015). We therefore hired two graduate students to audit the data and code for this paper. They were compensated on an hourly basis, and paid a bonus for any errors they identified. We report the errors they identified and changes they suggested in Online Appendix Section 20. The errors were minor and did not materially change the results and interpretation. We also report which suggested changes we rejected, and why.

so I assume a similar level of care in Egger et al., on which he is a coauthor.

It's worth noting that the second of those papers has actually been reanalyzed recently, and Cutler and Miller have now published a response to the reanalysis as well. I think there is probably more work one could do here (e.g., updating the difference-in-differences estimators in the original paper to reflect the current methodological state of the art; see the sketch below), but I also think it's fair to say that the result has already been subjected to thorough and meaningful scrutiny.
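To sketch what "updating the estimators" could look like: with staggered adoption (cities chlorinating in different years), the classic two-way fixed effects regression can put negative weights on some comparisons, and newer estimators instead compare each adopting cohort to not-yet-treated units. Here's a rough manual version in that spirit (Callaway and Sant'Anna 2021), with hypothetical column names; this is an illustration, not the original paper's code.

```python
# Sketch of a group-time DiD in the spirit of Callaway & Sant'Anna (2021).
# Hypothetical columns: 'city', 'year', 'mortality', and 'adopt_year'
# (the year a city chlorinated its water; NaN if it never did).
import pandas as pd

df = pd.read_csv("chlorination_panel.csv")  # hypothetical panel

def att_gt(df: pd.DataFrame, g: int, t: int) -> float:
    """ATT(g, t): effect in year t for the cohort first treated in year g,
    using never- and not-yet-treated cities as controls and g-1 as the
    pre-treatment base year."""
    treated = df["adopt_year"] == g
    control = df["adopt_year"].isna() | (df["adopt_year"] > t)

    def mean_change(mask):
        # Average change in mortality from the base year g-1 to year t.
        wide = df[mask].pivot(index="city", columns="year", values="mortality")
        return (wide[t] - wide[g - 1]).mean()

    return mean_change(treated) - mean_change(control)

# Effect two years after adoption for a (hypothetical) 1915 cohort:
print(att_gt(df, g=1915, t=1917))
```

Aggregating these ATT(g, t) terms across cohorts and horizons is what the modern estimators do for you, along with proper inference.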

The effect of health insurance on health: the old RAND study, the Oregon Medicaid expansion, the India study from a couple of years ago, or whatever else is out there.

Robin Hanson likes to cite these studies as showing that more medicine doesn't improve health, but I'm skeptical of the inference from "not statistically significant" to "no effect" (I'm in the comments there as "Unnamed"). I would like to see them reanalyzed in terms of effect size (e.g., a probability distribution or confidence interval for DALYs per dollar).
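For example, here's a minimal sketch of the move from a significance test to an interval on the scale that matters. Every number in it is made up, and the DALY conversion factor is a placeholder assumption, not anything from these studies.

```python
# Sketch: turn a 'not statistically significant' estimate into an interval
# on the policy-relevant scale. Every number below is made up.
from scipy import stats

beta, se = 0.012, 0.010      # hypothetical effect of coverage on a health index
cost_per_person = 500.0      # hypothetical program cost in dollars
dalys_per_index_point = 0.3  # hypothetical conversion to DALYs averted

# 95% CI for the effect, then rescale to DALYs per dollar.
lo, hi = stats.norm.interval(0.95, loc=beta, scale=se)
scale = dalys_per_index_point / cost_per_person
print(f"effect CI:   [{lo:.4f}, {hi:.4f}]")
print(f"DALYs per $: [{lo * scale:.2e}, {hi * scale:.2e}]")
# The interval includes zero, but it also includes effects large enough to
# matter: 'not significant' is not the same as 'no effect'.
```

The point is that a wide interval straddling zero is consistent both with "medicine does nothing" and with cost-effective benefits, and only the effect-size framing makes that visible.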
