
ClimateDoc

456 karma · Joined

Bio

Mid-career climate science researcher in academia

Previously used display name "Pagw"

Comments (58)

Thanks. OK, so currently the situation is one of arguing for legislation to be proposed rather than there being anything to vote on yet?

Are there particular "key legislative changes" that this could help achieve, or are they hypothetical at present?

"At a certain point, we just have to trust the peer-review process"

Coming here late - I found it an interesting comment overall, but I thought I'd say something about interpreting the peer-reviewed literature as an academic, as I think people often misunderstand what peer review does. It's a pretty weak filter, and you don't just trust what comes out! Instead, look for consistent results being produced by at least a few independent groups, without contradictory research (researchers will rarely publish replications of results, but if a set of results doesn't corroborate a single plausible theoretical picture, then something is iffy). Note that it can happen for whole communities of researchers to go down the wrong path, though - it's just less likely than for an individual study. Also, talk to people in the field about it! So there are fairly low-cost ways to make better judgements than believing what one researcher tells you.

The scientific fraud cases that I know of involved results from just one researcher or group, and sensible people would have had a fair degree of scepticism without further corroboration. I mention this just in case anyone reading is ever in the position of deciding whether to allocate significant funding based on published research.

"Science relies on trust, so it's relatively vulnerable to intentionally bad, deceptive actors"

I don't think science relies on trust to an especially high degree, as research groups can corroborate or cast doubt on each other's results. "Relatively" compared to what? I don't see why it would be more vulnerable to bad actors than most other things humans do.

A very interesting summary, thanks.

However, I'd like to echo Richard Chappell's unease at the report's praise for the use of short-term contracts. These likely cause a lot of mental health problems, and will dissuade people who might have a lot to contribute but can't cope with worrying about whether they'll need to find a new job, or even a new career, in a couple of years' time. It could be read as a way of avoiding dealing with university processes for firing people - but then the lesson for future organisations may be to set up outside a university structure, and to offer a sensible degree of job security.

Thanks, it's good to know it's had input from multiple knowledgeable people. I agree that this looks like a good thing even if it's implemented imperfectly!

Thanks for putting together the doc.

For the suggested responses, are they informed by expertise, or are they a personal view? It would be useful to know, as I'm not sure about some of them. E.g. for the question on including images, I wondered whether images could be misleading if they show animals (as disease and other health problems aren't very visible, perhaps leading people to erroneously think "those animals look OK to me" or similar).

I also wonder if there's a risk that products get labelled as "high" welfare when the animals still suffer overall, reducing the impetus for further reform. I think the scheme would still be good, but I wonder if there's scope to argue that labels like "high" should be reserved for cases where welfare has been independently assessed to indeed be probably positive and high.

"The second most upvoted comment (27 karma right now) takes me to task for saying that 'most experts are deeply skeptical of Ord’s claim' (1/30 existential biorisk in the next 100 years). I take that to be uncontroversial. Would you be willing to say so?"

I asked because I'm interested - what makes you think most experts don't think biorisk is such a big threat, beyond a couple of papers?

I guess it depends on what the "correct direction" is thought to be. From the reasoning quoted in my first post, it could be the case that, as the study result becomes larger, the posterior expectation should actually decrease. It's not inconceivable that, as we saw the estimate go to infinity, we should start reasoning that the study is so ridiculous as to be uninformative, and so the posterior update should shrink towards nothing. But I don't know. What you say seems to suggest that Bayesian reasoning could only do that for rather specific choices of likelihood function, which is interesting.
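For what it's worth, here's a toy numerical sketch of that possibility (all numbers invented, and not taken from any real analysis): with a sceptical normal prior on the true effect and a heavy-tailed Student-t error model for the study estimate, the posterior mean first rises and then falls back towards the prior as the reported estimate grows, whereas a normal error model keeps pulling it upwards.

```python
import numpy as np
from scipy import stats

# Toy illustration with invented numbers: a sceptical prior on the true
# effect, and a single study whose reported estimate we sweep upwards.
grid = np.linspace(-10, 60, 20001)                 # grid over the true effect
prior = stats.norm.pdf(grid, loc=1.0, scale=2.0)   # prior centred on a modest effect

def posterior_mean(estimate, error_model):
    """Posterior mean of the true effect after observing one study estimate."""
    if error_model == "normal":    # thin tails: the study can't be wildly off
        like = stats.norm.pdf(estimate, loc=grid, scale=3.0)
    else:                          # heavy tails: the study may be wildly off
        like = stats.t.pdf(estimate, df=1, loc=grid, scale=3.0)
    post = prior * like
    post /= np.trapz(post, grid)   # normalise on the grid
    return np.trapz(grid * post, grid)

for est in [2, 5, 10, 20, 50]:
    print(f"estimate={est:>2}  normal: {posterior_mean(est, 'normal'):5.2f}"
          f"  heavy-tailed: {posterior_mean(est, 't'):5.2f}")
```

Under the normal error model the posterior mean keeps climbing with the estimate, while under the heavy-tailed model it peaks and then drifts back towards the prior - the study gets discounted as an outlier. So the non-monotonic behaviour does seem to need fairly specific (heavy-tailed) likelihood choices.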

It's a potential solution, but I think it requires the prior to decrease quickly enough with increasing cost-effectiveness, and this isn't guaranteed. So I'm wondering whether there's any analysis showing that the methods being used are actually robust to this problem - e.g. exploring how the answers would look if the deworming RCT results had been higher or lower, and checking that they change sensibly?

A document that seems to give more information on the method used for deworming is here, so perhaps that can be built on - though from a quick look it doesn't say exactly what shape is used for the priors in all cases; they look quite Gaussian from the plots.
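To illustrate why the prior's shape matters, here's a toy sensitivity check of the kind I have in mind (all numbers invented, and nothing here is taken from the actual analyses): sweep a hypothetical RCT estimate upwards and compare the posterior expectation under a Gaussian prior versus a fatter-tailed Student-t prior with the same location and scale.

```python
import numpy as np
from scipy import stats

# Toy sensitivity check with invented numbers: how does the posterior
# expectation respond to ever-larger study estimates under priors with
# thin (Gaussian) versus fat (Student-t) tails?
grid = np.linspace(-20, 200, 40001)
priors = {
    "gaussian":  stats.norm.pdf(grid, loc=1.0, scale=2.0),
    "student-t": stats.t.pdf(grid, df=3, loc=1.0, scale=2.0),
}

def posterior_mean(prior, estimate, se=3.0):
    """Posterior mean of cost-effectiveness, assuming normal measurement error."""
    like = stats.norm.pdf(estimate, loc=grid, scale=se)
    post = prior * like
    post /= np.trapz(post, grid)
    return np.trapz(grid * post, grid)

for est in [5, 10, 20, 50, 100]:
    means = "  ".join(f"{name}: {posterior_mean(p, est):6.2f}"
                      for name, p in priors.items())
    print(f"estimate={est:>3}  {means}")
```

With the Gaussian prior, even an estimate of 100 gets shrunk to around a third of its size; with the Student-t prior, the posterior largely follows the estimate, so the expectation isn't kept bounded. That's the sense in which the approach only works if the prior decreases quickly enough.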

Hmm, it's not very clear to me that it would be effective at addressing the problem - it seems a bit abstract as described. And addressing Pascal's mugging issues seems like it potentially requires modifying how cost-effectiveness estimates are done, i.e. modifying one component of the "cluster", rather than it just being a matter of cluster vs sequence thinking. It would be good to hear more about how this kind of thinking is influencing decisions about giving grants in actual cases like deworming, if it is being used.
