Dunja

182 karma

Comments (92)
Thanks for writing this. The suggested criticism of debate is as old as debate itself, and in addition to the reasons you list here, I'd add the *epistemic* benefits of debating.

Competitive debating allows for the exploration of the argumentative landscape of a given topic in all its breadth (from the preparation to the debating itself). That means it allows for the formulation of the best arguments for either side, which (given all the cognitive biases we may have) may be hard to come by in a non-competitive context. As a result, debate is a learning experience, not only because one has to prepare for it, but because the consequences of what we have learned can be examined with the highest rigor possible. The latter is because debate allows for critical interaction with 'experts' whose views conflict with one's own, which has been considered essential for the justification of our beliefs at least since Mill, all the way to contemporary social epistemology.

Update: this is all the more important in view of the common ways one may accidentally cause harm by trying to do good, which I've just learned about through DavidNash's post. As the article points out, having informed expert opinion and a dense network with experts can decrease the chances of harmful impacts, such as reputational harm or locking in on suboptimal choices.

Thanks for the explanation, Lewis. In order to make the team as robust to criticism and as reliable as possible, wouldn't it be better to have a diverse team that also includes critics of ACE? That would send the right message to donors as well as to anyone taking a closer look at EA organizations. I think it would also benefit ACE, since their researchers would have an opportunity to work directly with their critics.

That should always depend on the project at hand: if the project is primarily in a specific domain of AI research, then you need reviewers working precisely in that domain of AI; if it's in ethics, then you need experts working in ethics; if it's interdisciplinary, then you try to get reviewers from the respective fields. This also shows that it will be rather difficult (if not impossible) to have an expert team competent to evaluate each candidate project. Instead, the team should be competent in selecting adequate expert reviewers (similarly to journal editors, who invite expert reviewers for individual papers submitted to the journal). Of course, the team can do the pre-selection of projects, determining which are worth sending for expert review, but for that it's usually useful to have at least some experience with research in one of the relevant domains, as well as with research proposals.

Hi Matt, thanks a lot for the reply! I appreciate your approach, but I do have worries, which Jonas, for instance, is very well aware of (I have been a strong critic of EAF's policy on and implementation of research grants, including those directed at MIRI and FRI).

My main worry is that evaluating grants aimed at research cannot be done without having them assessed by expert researchers in the given domain, that is, people with a proven track record in the given field of research. I think the best way to see why this matters is to take any other scientific domain: medicine, physics, etc. If we wanted to evaluate whether a certain research grant in medicine should be funded (e.g. the development of an important vaccine), it wouldn't be enough to just like the objective of the grant. We would have to assess:

  • Methodological feasibility of the project: are the proposed methods conducive to the given goals? How will the project react to possible obstacles, and which alternative methods would be employed in such cases?

  • Fit of the project within the state of the art: how well is the proposal informed by the relevant research in the given domain (e.g. are some important methods and insights overlooked, is another research team already working on a related topic where combining insights would increase the efficiency of the current project, etc.)?

  • etc.

Clearly, these questions cannot be answered by anyone who is not an expert in medicine. My point is that the same goes for research in any other scientific domain, from philosophy to AI. Hence, if your team consists of people who are enthusiastic about the topic, who have experience reading about it, or who have experience managing EA grants and non-profit organizations, that is not adequate expertise for evaluating research grants. The same goes for your advisers: Nick has a PhD in philosophy, but that's not enough to make him an expert in, e.g., AI (it's not enough to make him an expert in many domains of philosophy either, unless he has a track record of continuous research in the given domain). Jonas has a background in medicine, economics, and charity evaluation, but that is not the same as active engagement in research.

Inviting expert researchers to evaluate each of the submitted projects is the only way to award research grants responsibly. That's precisely what both academic and non-academic funding institutions do. Otherwise, how can we possibly argue that the funded research is promising and that we have done the best we can to estimate its effectiveness? This is important not only to assure the quality of the given research, but also to handle donors' contributions responsibly, in accordance with the values of EA in general.

My impression is that so far the main criteria employed when assessing the feasibility of grants have been how trustworthy the team proposing the grant is, how enthusiastic they are about the topic, and how much effort they are willing to put into it. But we wouldn't take those criteria to be enough when it comes to vaccine development: we'd also want to see the track record of the given researchers in the field, and we'd want to hear what their peers think of the methods they wish to employ. The very same holds for research on the far future. While some may reply that the academic world is insufficiently engaged in some of these topics, or biased against them, that still doesn't mean there are no expert researchers competent to evaluate the given grants (moreover, requests for expert evaluations can be formulated so as to target specific methodological questions and minimize the effect of bias). At the end of the day, if the research is to have an impact, it will have to gain the attention of that same academic world, in which case it is important to engage with its opinions and inform projects of possible objections early on. I could say more about these dangers of bias in reviews and how to mitigate the given risks, so we can come back to this topic if anyone's interested.

Finally, I hope we can continue this conversation without prematurely closing it. I have tried to do the same with EAF and their research-related policy, but unfortunately, they have never provided any explanation for why expert reviewers are not asked to evaluate the research projects they fund (I plan to write a separate, longer post on that as soon as I catch some free time, but I'd be happy to provide further background in the meantime if anyone is interested).

I'd be curious to hear an explanation of how the team for the Long-Term Future Fund was selected. If they are expected to evaluate grants, including research grants, how do they plan to do that, what qualifies them for this job, and, in case they are not qualified, which experts do they plan to invite on such occasions?

From their bio page I don't see which of them should count as an expert in the relevant field of research (and on the basis of which track record), which is why I am asking. Thanks!

These are good points, and unless the area is well enough established that initial publications come from bigger names (who would thereby help to establish the journal), it'll be hard to realize the idea.

What could be done at this point, though, is to have an online page that collects and reports on all the publications relevant to cause prioritization, which may help with the growth of the field.

I agree that journal publications certainly allow for a rise in quality due to the peer-review system. In principle, there could even be a mixed platform with an (online) journal plus a blog which (re)posts material relevant to the topic (e.g. posts made on this forum that are relevant to cause prioritization).

My main question is: is there anyone on here who is actively doing research on this topic and who could comment on the absence of an adequate journal, as argued by kbog? I don't have any experience with this domain, but if more people could support this thesis, then it makes sense to actually go for it.

If others agree, I suppose that as a further step you'd need an academic with expertise in the area, who'd get in touch with one of the publishing houses that could host the journal, bringing a concrete proposal (including the editorial board, the condition that articles be open access, etc.).

Thanks, Benito, that sums it up nicely!

It's really about the transparency of the criteria, and that's all I'm arguing for. I am also open to changing my views on the standard criteria etc. - I just care that we start the discussion with some rigor concerning how best to assess effective research.

As for my papers - crap, it's embarrassing that I've linked paywalled versions. I have them on my Academia page too, but I guess those can also only be accessed within that website... I'll have to think of a proper free solution here. But in any case: please don't feel obliged to read my papers, there's for sure lots of other more interesting stuff out there! If you are interested in the topic, it's enough to scan them to check the criteria I use in these assessments :) I'll email them in any case.
