New & upvoted


Posts tagged community

Quick takes

Excerpt from the most recent update from the ALERT team:

"Highly pathogenic avian influenza (HPAI) H5N1: What a week! The news, data, and analyses are coming in fast and furious. Overall, ALERT team members feel that the risk of an H5N1 pandemic emerging over the coming decade is increasing. Team members estimate that the chance that the WHO will declare a Public Health Emergency of International Concern (PHEIC) within 1 year from now because of an H5N1 virus, in whole or in part, is 0.9% (range 0.5%-1.3%). The team sees the chance going up substantially over the next decade, with the 5-year chance at 13% (range 10%-15%) and the 10-year chance increasing to 25% (range 20%-30%)."

Their estimated 10-year risk is a lot higher than I would have anticipated.
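A quick back-of-the-envelope on those numbers (my own arithmetic, not the ALERT team's): if the 10-year chance is 25% and the annual hazard $p$ were constant, then

$$1 - (1 - p)^{10} = 0.25 \implies p = 1 - 0.75^{1/10} \approx 2.8\%\ \text{per year},$$

about three times their 1-year estimate of 0.9%. So their forecast implies a hazard that rises substantially over the decade, not a constant one.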
I can't find a better place to ask this, but I was wondering whether (and where) there is a good explanation of the scepticism of leading rationalists about animal consciousness/moral patienthood. I am thinking in particular of Zvi and Yudkowsky. The question came up a bit in the recent 80,000 Hours podcast with Zvi Mowshowitz, and I know he is also very sceptical of interventions for non-human animals on his blog, but I had a hard time finding a clear explanation of where this belief comes from. I really like Zvi's work, and he has been right about a lot of things I was initially on the other side of, so I would be curious to read more of his or similar people's thoughts on this. This seems like a place where there may be a motivation gap: people who don't work on animal welfare have little incentive to explain why they think the things I work on are not that useful.
My recommended readings/resources for community builders/organisers

* CEA's groups resource centre, naturally
* This handbook on community organising
* High Output Management by Andrew Grove
* How to Launch a High-Impact Nonprofit
* LifeLabs's coaching questions (great for 1-1s with organisers you're supporting/career coachees)
* The 2-Hour Cocktail Party
* Centola's work on social change, e.g., the book Change: How to Make Big Things Happen
* Han's work on organising, e.g., How Organizations Develop Activists (I wrote up some notes here)
* This 80k article on community coordination
* @Michael Noetel's forum post - 'We all teach: here's how to do it better'
* Theory of change in ten steps
* Rumelt's Good Strategy Bad Strategy
* IDinsight's Impact Measurement Guide
American Philosophical Association (APA) announces two $10,000 AI2050 Prizes for philosophical work related to AI, with a June 23, 2024 deadline:

https://dailynous.com/2024/04/25/apa-creates-new-prizes-for-philosophical-research-on-ai/
https://www.apaonline.org/page/ai2050
https://ai2050.schmidtsciences.org/hard-problems/
In case you're interested in supporting my EA-aligned YouTube channel A Happier World: I've lowered the minimum funding goal from $10,000 to $2,500 to give donors confidence that their money will directly support the project. If the minimum funding goal isn't reached, your money won't go to the project; instead it will be returned to your Manifund balance for you to spend on a different project. I understand this may have been a barrier for some, which is why I lowered the minimum funding goal.

Manifund fundraising page
EA Forum post announcement

Recent discussion


This post is easily the weirdest thing I've ever written. I also consider it the best I've ever written - I hope you give it a chance. If you're not sold by the first section, you can safely skip the rest.

I

Imagine an alternate version of the Effective Altruism movement,...

Continue reading

Interesting post. I've always wondered how sensitive the views and efforts of the EA community are to the arbitrary historical process that led to its creation and development. Are there any in-depth explorations that try to answer this question? 

Or, since thinking about alternative history can only get us so far, are there any examples of EA-adjacent philosophies or movements throughout history? E.g. Mohism, a Chinese philosophy from 400 BC, sounds like a surprisingly close match in some ways.

The White House has published a framework requiring nucleic acid synthesis providers that serve federally funded research projects to implement screening techniques. This was mandated by the previous executive order on AI, which stipulated the creation of this framework to reduce...

Continue reading

Wow, this seems like really great news!

Given how bird flu is progressing (spread in many cows, virologists giving credence to rumors that humans are getting infected, but no human-to-human spread yet), this would be a good time to start a protest movement for biosafety/against factory farming in the US.

Continue reading

Btw, I don't think the virus has a high mortality rate in its current form, based on these reported rumors.

More links:

April 22, Science:

But Russo and many other vets have heard anecdotes about workers who have pink eye and other symptoms—including fever, cough, and lethargy—and do not want to be tested or seen by doctors. James Lowe, a researcher who specializes in pig influenza viruses, says policies for monitoring exposed people vary greatly between states. “I believe there are probably lots of human cases,” he says, noting that most likely are asymptomatic. Russo says she is heartened that the Centers for Disease Control and Prevention has “really started to mobilize and do the right thing,” including linking with state and local health departments, as well as vets, to monitor the health of workers on affected farms.

https://www.science.org/content/article/u-s-government-hot-seat-response-growing-cow-flu-outbreak

April 29, Daily Mail:

Experts have warned that human transmission of bird flu may be far more widespread than thought, as farmers in Texas and Wisconsin are reported to have symptoms of the virus but are avoiding testing.

Dr Barb Petersen, a dairy veterinarian in Amarillo, Texas, explained that workers at a local farm where cattle have tested positive for the virus are suffering tell-tale symptoms.

[...] Meanwhile, veterinary researchers in Wisconsin — where the virus has infected cows — have reported multiple cases of local farmers suffering bird flu-like symptoms.

But farmers are notoriously reluctant to seek medical help, meaning 'a lot of cases are not documented', according to Dr Keith Poulsen, director of the Wisconsin Veterinary Diagnostic Laboratory.

https://www.dailymail.co.uk/health/article-13363325/bird-flu-outbreak-humans-texas-farm-worker-sick.html


The Giving What We Can research team is excited to share the results of our first round of evaluations of charity evaluators and grantmakers! After announcing our plans for a new research direction last year, we have now completed five[1] evaluations that will...

Continue reading

Hi,

I appreciate that THL has room for more funding. You say in the report on animal charity evaluators that:

A direct referral from Open Philanthropy’s Farm Animal Welfare team — the largest funder in the impact-focused animal welfare space — on THL indeed currently being funding-constrained, i.e. that it has ample room to cost-effectively use marginal funds on corporate campaigns and that there aren’t strong diminishing returns to providing THL with extra funding.

However, Open Philanthropy (OP), which granted $8.3M to THL in 2023, presumably wants to fund... (read more)

TL;DR

  • Don’t spend too long thinking about the pros and cons of applying to an opportunity (e.g., a job, grant, degree program, or internship). Assuming the initial application wouldn’t take you long, if it seems worth thinking hard about, you should probably just apply instead
...
Continue reading

I was facing a decision and someone sent me this. Thanks for writing it.

ABishop commented on The Highly Sensitive EA 5h ago

An absurdly long and overwritten post about a little-known nervous system trait that will have nonetheless been worth the effort if it helps a handful of people

 

For some of us, experiencing the world has always been intense. From a young age we’ve known that there’...

Continue reading

I'm not very confident on this topic. I was also assessed as a very sensitive, soft-hearted person, so it's not for me to say whether HSPs exist or not. But it's complicated, because the HSP label is a shield for many people. I have observed something close to "covert narcissism": such people tend to describe themselves as both competent and in need of protection, and they want special treatment.

dirk · 12h
If you qualify as a Highly Sensitive Person, IMO it's also worth considering whether you're autistic; as far as I can tell, the two are synonyms.
Spencer Ericson · 12h
Thank you Philippe. A family member has always described me as an HSP, but I hadn't thought about it in relation to EA before. Your post helped me realize that I hold back from writing as much as I can/bringing maximum value to the Forum because I'm worried that my work being recognized would be overwhelming in the HSP way I'm familiar with. It leads to a catch-22 in that I thrive on meaningful, helpful work, as you mentioned. I love writing anything new and useful, from research to user manuals. But I can hardly think of something as frightening as "prolific output, eventually changing the course of ... a discipline." I shudder to think of being influential as an individual. I'd much rather contribute to the influence of an anonymous mass. Not yet sure how to tackle this. Let me know if this is a familiar feeling.

A crucial consideration in assessing the risks of advanced AI is the moral value we place on "unaligned" AIs—systems that do not share human preferences—which could emerge if we fail to make enough progress on technical alignment.

In this post I'll consider three potential...

Continue reading
Matthew_Barnett · 13h
To clarify, I think it's a reasonable heuristic that, if you want to preserve the values of the present generation, you should try to minimize changes to the world and enforce some sort of stasis. This could include not building AI. However, I believe you may be glossing over the distinction between: (1) the values currently held by existing humans, and (2) a more cosmopolitan, utilitarian ethical value system. We can imagine a wide variety of changes to the world that would result in vast changes to (1) without necessarily being bad according to (2). For example:

* We could start doing genetic engineering of humans.
* We could upload humans onto computers.
* A human-level, but conscious, alien species could immigrate to Earth via a portal.

In each scenario, I agree with your intuition that "the correlation between my values and future humans is higher than the correlation between my values and X-values, because I share much more background with future humans than with X", where X represents the forces at play in each scenario. However, I don't think it's clear that the resulting change to the world would be net negative from the perspective of an impartial, non-speciesist utilitarian framework. In other words, while you're introducing something less similar to us than future human generations in each scenario, it's far from obvious whether the outcome will be relatively worse according to utilitarianism.

Based on your toy model, my guess is that your underlying intuition is something like: "The fact that a tiny fraction of humans are utilitarian is contingent. If we re-rolled the dice, and sampled from the space of all possible human values again (i.e., the set of values consistent with high-level human moral concepts), it's very likely that <<1% of the world would be utilitarian, rather than the current (say) 1%." If this captures your view, my main response is that it seems to assume a much narrower and more fragile conception of "cosmopolitan utilitar...
Rohin Shah · 6h
No, this was purely to show why, from the perspective of someone with values, re-rolling those values would seem bad, as opposed to keeping the values the same, all else equal. In any specific scenario, (a) all else won't be equal, and (b) the actual amount of worry depends on the correlation between current values and re-rolled values.

The main reason I made utilitarianism a contingent aspect of human values in the toy model is because I thought that's what you were arguing (e.g. when you say things like "humans are largely not utilitarians themselves"). I don't have a strong view on this and I don't think it really matters for the positions I take.

The first two scenarios seem broadly fine, because I still expect high correlation between values. (Partly because I think that cosmopolitan utilitarian-ish values aren't fragile.) The last one seems more worrying than human-level unaligned AI (mostly because we have less control over the aliens) but less worrying than unaligned AI in general (since the aliens aren't superintelligent). Note I've barely thought about these scenarios, so I could easily imagine changing my mind significantly on these takes. (Though I'd be surprised if it got to the point where I thought it was comparable to unaligned AI, in how much the values could stop correlating with mine.)

It seems way better to simply try to spread your values? It'd be pretty wild if the EA field-builders said "the best way to build EA, taking into account the long-term future, is to prevent the current generation of humans from dying, because their preferences are most similar to ours".
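(For readers following the thread: the "re-rolling" point can be made concrete with a small simulation. The sketch below is my own illustration, not either commenter's actual model; it represents an agent's values as a random unit vector and scores a future shaped by values w, from the perspective of an agent with values v, by their cosine similarity.)

```python
import numpy as np

# A minimal sketch of the "re-rolling values" intuition, under the (strong,
# assumed) model that values are directions in a vector space and that a
# future is scored by cosine similarity with the judge's values.

rng = np.random.default_rng(0)

def random_values(dim: int = 50) -> np.ndarray:
    """Sample a random direction in 'value space'."""
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def descendant(v: np.ndarray, drift: float = 0.2) -> np.ndarray:
    """Values mostly inherited from v, plus a small random drift."""
    d = (1 - drift) * v + drift * random_values(v.size)
    return d / np.linalg.norm(d)

v = random_values()  # the current generation's values

keep = v @ v                                                       # exactly 1.0
rerolled = np.mean([v @ random_values() for _ in range(10_000)])   # ~0.0
inherited = np.mean([v @ descendant(v) for _ in range(10_000)])    # ~0.97

print(f"keep current values:  {keep:.2f}")
print(f"full re-roll (mean):  {rerolled:+.3f}")
print(f"noisy descendants:    {inherited:.2f}")
```

Under these assumptions, keeping your values scores 1, a noisy descendant scores close to 1, and a full re-roll scores about 0 on average — which is the asymmetry the toy model turns on. The actual disagreement above is about how correlated the values in each scenario really are.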

The main reason I made utilitarianism a contingent aspect of human values in the toy model is because I thought that's what you were arguing (e.g. when you say things like "humans are largely not utilitarians themselves").

I think there may have been a misunderstanding regarding the main point I was trying to convey. In my post, I fairly explicitly argued that the rough level of utilitarian values exhibited by humans is likely not very contingent, in the sense of being unusually high compared to other possibilities—and this was a crucial element of my thesi... (read more)

Epistemic status: Confident I learned a lesson; unsure if this is worth sharing but hopeful it might be


 

What this post is:  

This is a brief, perhaps mundane story about how I was reminded of the importance of doing my research before diving deeply into a project that felt like "the most important thing ever" at the moment.

Summary

This weekend I learned a valuable lesson, or rather I had some cached wisdom solidified for me. What was this wisdom? 

Look before you leap

  • Do the research
  • Check if similar work already exists
  • Reevaluate your motivations

Note: This approach is aligned with the scout mindset, focusing not on being first to claim and conquer but on carefully exploring the terrain, considering others' ideas, and understanding your own motivations.

Sometimes, the desire to produce something brilliant can push us towards an ego-driven "conqueror" mindset.[1]
 

Quick Story:

...
Continue reading

The Data on the EA community tag collects posts that provide, analyze, or discuss data related to the EA community itself, including membership metrics, funding statistics, and the results of community surveys.