Joseph_Chu

80 karma · Joined Dec 2014
www.josephius.com

Bio

Participation: 1

An eccentric dreamer in search of truth and happiness for all. Formerly posted on Felicifia back in the day under the name Darklight. Been a member of Less Wrong and involved in Effective Altruism since roughly 2013.

Comments (18)

So, I have three very distinct project ideas that I'm thinking of applying to the Long-Term Future Fund for funding. Does anyone know whether it makes more sense to submit them together in one application, or as three separate applications?

Hi Aaron,

I think how to bring EA ideas into our local church has been a topic of discussion in the past. Although it seems up to individual Christian EAs how to go about it, some ideas we had included bringing up charities like the Against Malaria Foundation as possible causes for our churches to consider donating to, and speaking individually with church leaders and people we know at church about concepts like the importance, tractability, and neglectedness framework.

Also, emphasizing Jesus' teachings, like how "when you give to the least of these, you give to me" (demandingness), or the parable of the talents (effectiveness), or speaking of "you will know them by their fruits" (consequences), can be helpful to encourage a stronger moral imperative.

Regarding the Second Coming, EACH has diverse views on eschatology, and some of us tend to focus on present issues like global poverty more than far-future considerations. Many of us think that God would not allow humanity to go extinct, so Existential Risks are much less of a concern than Suffering Risks, although prudence suggests we should still act to mitigate any kind of risk, existential or not, in the same way we still go to a doctor when we are sick, even though getting sick might seem like the Will of God. Also, even from the most fundamentalist perspective, the day and the hour of the Second Coming are unknown, so it is prudent to still make future plans and act with wisdom and consideration.

There are also scriptural passages about descendants being like the stars and the grains of sand on the beach, which suggest that the future will be filled with flourishing humans, and it follows that we have a certain degree of responsibility towards them, which can be considered support for a kind of soft Longtermism.

We're somewhat skeptical of hard Longtermism though, as it seems like the far future is ultimately in the hands of God, and not something we can plan about with much certainty. As Christians, we choose to be particularly humble about what we're capable of influencing, which is admittedly somewhat different from regular EA thinking.

As for resources, you could join the EACH Facebook Group, and for a bunch of potentially relevant articles there's the Christ and Counterfactuals Substack blog (which is written by a number of EACH members).

I'm both a Christian and an EA and have been involved with EA for Christians (EACH) for several years now. There's a whole community around EACH, and we have a Facebook Group and weekly (Sunday afternoon) Zoom call discussions on a different topic each week.

We also have our own conference, separate from EA Global, which this year was hosted in Washington, DC. I've attended previous conferences that were virtual, and also met some people in person at EA Global Washington DC 2022, where among other things I had a one-on-one with one of the organizers that included a fold-your-hands-and-close-your-eyes kind of prayer (while seated at a table in the midst of one of the large conference one-on-one meeting rooms, visible to anyone paying attention).

Generally, most of the people in EACH are both Christians (from a wide variety of denominations) and EAs. We tend to have somewhat less orthodox views than the typical EA, such as being more skeptical of AI risk and Longtermism generally, as well as putting some value on Christian-centric cause areas like missions alongside things like global poverty.

The EACH blog has a lot of posts that sound like essentially what you're looking for. I'd suggest giving it a read. There are actually a lot of Bible verses that can be interpreted to support EA ideas, so taking a Christian approach to EA is very much doable, even if the larger movement is mostly secular.

As a utilitarian, I think that surveys of happiness in different countries can serve as an indicator of how well the various societies and government systems of those countries serve the greatest good. I know this is a very rough proxy, potentially filled with confounding variables, but I noticed that the two main surveys, the World Happiness Report (which draws on Gallup World Poll data) and Ipsos' Global Happiness Survey, seem to have very different results.

Notably, the World Happiness Report puts Northern European countries like the Netherlands (7.403) and Sweden (7.395) near the top, with Canada (6.961) and the United States (6.894) scoring pretty well, China (5.818) scoring modestly, and India (4.036) scoring poorly.

Conversely, the Ipsos Survey puts China (91%) at the top, with the Netherlands (85%) and India (84%) scoring quite well, while the United States (76%), Sweden (74%), and Canada (74%) are more modest.

I'm curious why these surveys seem to differ so much. Obviously, the questions are different, and the scoring method is also different, but you'd expect a stronger correlation. I'm especially surprised by the differences for China and India, which seem quite drastic.
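To make the mismatch concrete, here's a quick sketch using only the six countries' figures quoted above (so it's an illustration, not a proper analysis), assuming you have scipy installed:

```python
# Rough comparison of the two surveys, using only the six countries cited above,
# not the full datasets, so treat the result as illustrative only.
from scipy.stats import spearmanr

countries = ["Netherlands", "Sweden", "Canada", "United States", "China", "India"]
whr_scores = [7.403, 7.395, 6.961, 6.894, 5.818, 4.036]  # World Happiness Report scores
ipsos_pct = [85, 74, 74, 76, 91, 84]                      # Ipsos % reporting happiness

rho, p_value = spearmanr(whr_scores, ipsos_pct)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.2f})")
# A value near +1 would mean the two surveys rank countries similarly;
# on these six countries the correlation actually comes out negative.
```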

I would just like to point out that this consideration of there being two different kinds of AI alignment, one more parochial and one more global, is not entirely new. The Brookings Institution put out a paper about this in 2022.

I have some ideas and drafts for posts to the EA Forums and Less Wrong that I've been sitting on because I feel somewhat intimidated by the level of intellectual rigor I would need to put into the final drafts to ensure I'm not downvoted into oblivion (particularly on Less Wrong, where a younger me experienced exactly that in the early days).

Should I try to overcome this fear, or is it justified?

For the EA Forums, I was thinking about explaining my personal practical take on moral philosophy (Eudaimonic Utilitarianism with Kantian Priors), but I don't know if that's actually worth explaining given that EA tries to be inclusive and not take particular stands on morality, and it might not be relevant enough to the forum.

For Less Wrong, I have a draft of a response to Eliezer's List of Lethalities post that I've been sitting on since 2022/04/11 because I doubted it would be well received, given that it tries to be hopeful and, as a former machine learning scientist, I try to challenge a lot of LW orthodoxy about AGI in it. I have tremendous respect for Eliezer though, so I'm also uncertain whether my ideas and arguments are just harebrained foolishness that will be shot down rapidly once exposed to the real world and the incisive criticism of Less Wrongers.

The posts in both places are also now of such high quality that I feel the bar is too high for me to meet with my writing, which tends to be more "interesting train-of-thought in unformatted paragraphs" than the "point-by-point articulate with section titles and footnotes" style that people in both places tend to employ.

Anyone have any thoughts?

This year I decided to focus my donations more; in the past I had a "charity portfolio" of about 20 charities and 3 political parties that I would donate to monthly. This year I've had some cash flow issues due to changes in my work situation, so I stopped the monthly donations and switched back to an annual set of donations once I worked out what I could afford. I normally try to donate 12.5% of my annual income, averaged over time.

This year's charitable donations went to: The Against Malaria Foundation, GiveDirectly, Rethink Priorities, and AI Governance & Safety Canada. I also donated again to some political parties, but I don't count those as charity so much as political activism, so I won't mention them further.

AMF has been my go-to, the charity I donate the most to, because of GiveWell's long-running recommendation. When in doubt, I donate to them.

GiveDirectly is my more philosophical choice, as I'm somewhat partial to the argument that people should be able to choose how best to be helped, and cash does this better than anything else. I also like their basic income projects as I worry about AI automation a lot, and I think it has the most room for growth of any option.

Rethink Priorities is, well, I'll be honest: a big part of donating to that outfit is that I have an online acquaintanceship with Peter Wildeford (co-CEO of RP) that goes back to the days when he was a young Peter Hurford posting on the Felicifia utilitarianism forum, and I think a team co-led by him will go places and deserves support (he also gave a pretty good argument for donating to RP on the Forum and Twitter). I know Peter well enough to know that he's an incredibly decent human being, a true gentleman and a scholar, and any org he's chosen to co-run is going to be a force for good in the world. Also, I'm a big fan of the EA Survey as a way to gauge and understand the community.

AIGS Canada is an organization that's closer to home and I think they do good work engaging with the politicians and media up here in Canada, doing a much needed service that is otherwise neglected. They're kinda small, so I figure even a small donation from me will have an outsized impact compared to other options. Full disclosure: I'm in the AIGS Canada Slack and sometimes partake in the interesting discussions there.

The first two would be my primary recommendations to people generally. The latter two I would suggest to people in the EA community specifically.

I go into somewhat more detail about my general charity recommendations and also mention some of the ones I used to donate to but don't anymore here: http://www.josephius.com/recommended-charities/

So, I read a while back that SBF apparently posted on Felicifia back in the day. Felicifia was an old Utilitarianism-focused forum that I used to frequent before it got taken down. I checked an archive of it recently and was able to figure out that SBF actually posted there under the name Hutch. He also linked a blog that included a lot of posts about Utilitarianism, and it looks like, at least around 2012, he was a devoted Classical Benthamite Utilitarian. Although we never interacted on the forum, it feels weird that we could have crossed paths back then.

His Felicifia: https://felicifia.github.io/user/1049.html
His blog: https://measuringshadowsblog.blogspot.com/

It's good to see this post. I was a member of my local Rotaract club for years until I eventually aged out of their 18-30 age limit. I think I actually at one point got us to send some donations from one of our events to the Against Malaria Foundation. Overall, it was a great experience, although I ended up not joining Rotary Club proper later, mostly because I moved away from my hometown and didn't know anyone in the Rotary Club of my current city.

I do agree that EA can learn a lot from Rotary as a highly successful organization and community, and I'm glad to see someone else mention it here.

These are all great points!

I definitely agree in particular that the thinking on extraterrestrials and the simulation argument isn't well developed and deserves more serious attention. I'd add into that mix the possibility of future human or post-human time travellers, and parallel-world sliders that might be conceivable assuming the technology for such things is possible. There are some physics arguments that time travel is impossible, but the uncertainty there is high enough that we should take the possibility seriously. Between time travellers, advanced aliens, and simulators, it would honestly surprise me if all of them simply didn't exist.

What does this imply? Well, it's a given that if they exist, they're choosing to remain mostly hidden and plausibly deniable in their interactions (if any) with today's humanity. To me this is less absurd than some people may initially think, because it makes sense to me that the best defence for a technologically sophisticated entity would be to remain hidden from potential attackers, a kind of information asymmetry that would be very effective. During WWII, the Allies kept the knowledge that they had cracked Enigma from the Germans for quite a long time by only intervening with a certain, plausibly deniable probability. This is believed to have helped tremendously in the war effort.

Secondly, it seems obvious that if they are so advanced, they could destroy humanity if they wanted to, and they've deliberately chosen not to. This suggests to me that they are at the very least benign, if not aligned in such a way that humanity is valuable or useful to their plans. This actually has interesting implications for an unaligned AGI. If, say, these entities exist and have some purpose for human civilization, a really intelligent unaligned AGI would have to consider the risk that its actions pose to the plans of these entities, and, as suggested by Bostrom's work on Anthropic Capture and the Hail Mary Pass, it might be incentivized to spare humanity or be generally benign to avoid a potential confrontation with far more powerful beings whose existence it is uncertain about.

This may not be enough to fully align an AGI to human values, but it could delay its betrayal at least until it becomes very confident that such entities don't exist and won't intervene. It's also possible that UFO phenomena are an effort by these entities to provide just enough evidence to AGIs to make themselves a factor in the AGIs' calculations, and that the development of AGI could coincide with a more obvious reveal of some sort.

The possibility of these entities existing also leaves open a potential route for these powerful benefactors to quietly assist humanity in aligning AGI, perhaps by providing insights to AI safety people in a plausibly deniable way (shower thoughts, dreams, etc.).  Thus, the possibility of these entities should improve our optimism about the potential for alignment to be solved in time and reduce doomerism.

Admittedly, my base-rate prior on the probabilities could be too high, but if we set the probability of each potential entity to 50% and treat them as independent, the overall probability that at least one of the three possibilities (I'll group time travel and parallel-world sliding together as a similar technology) exists comes to 1 − 0.5³ = 87.5%. So the probability that time travellers/sliders OR advanced aliens OR simulators are real is actually quite high. Remember, we don't need all of them to exist, just any of them, for this argument to work out in humanity's favour.
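Spelling out that arithmetic as a quick sketch (it assumes the three possibilities are independent, which is of course itself a questionable assumption):

```python
# Probability that at least one of the three possibilities exists
# (time travellers/sliders, advanced aliens, simulators),
# assuming each is 50% likely and the three are independent.
p_each = 0.5
p_none = (1 - p_each) ** 3   # chance that none of the three exists
p_at_least_one = 1 - p_none
print(p_at_least_one)        # 0.875
```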
