Joseph_Chu

93 karma · Joined
www.josephius.com

Bio

An eccentric dreamer in search of truth and happiness for all. Formerly posted on Felicifia back in the day under the name Darklight. Been a member of Less Wrong and involved in Effective Altruism since roughly 2013.

Comments (20)

Relevant XKCD comic.

To comment further, this seems like it might be an intractable task, as the term "dependency hell" kind of implies. You'd likely have to scrape all of GitHub and calculate which libraries are used most frequently across all projects to get an accurate assessment. Then it's not clear to me how you'd identify their level of resourcing. Number of contributors? Frequency of commits?
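To make the resourcing question a bit more concrete, here's a rough sketch (in Python, against the public GitHub REST API) of the kind of metrics I have in mind: contributor count and recent commit frequency for a single repo. The repo name is just a placeholder and it only looks at the first page of results, so treat it as an illustration rather than a real assessment tool.

```python
# Rough sketch: how "resourced" does a repo look, judging by contributor count
# and recent commit frequency? Uses the public GitHub REST API.
# "some-org/some-lib" is a placeholder; pass a real "owner/repo" string.
import datetime
import requests


def repo_resourcing(owner_repo, days=90, token=None):
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"

    # Contributor count (first page only; a real tool would paginate)
    contributors = requests.get(
        f"https://api.github.com/repos/{owner_repo}/contributors",
        headers=headers,
        params={"per_page": 100},
    ).json()

    # Commits over the last `days` days (again, first page only)
    since = (
        datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=days)
    ).strftime("%Y-%m-%dT%H:%M:%SZ")
    commits = requests.get(
        f"https://api.github.com/repos/{owner_repo}/commits",
        headers=headers,
        params={"since": since, "per_page": 100},
    ).json()

    return {
        "contributors (first page)": len(contributors),
        f"commits in last {days} days (first page)": len(commits),
    }


if __name__ == "__main__":
    print(repo_resourcing("some-org/some-lib"))
```

Even with numbers like these in hand, you'd still have to decide what threshold counts as "under-resourced", which is where I suspect the real difficulty lies.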

Also, with your example of the XZ attack, it's not even clear who was behind it. If you suspect it was, say, the NSA, would you want to thwart them if their purpose was to protect American interests? (I'm assuming you're pro-American.) Things like zero-days are frequently used by various state actors, and it's a morally grey question whether or not those uses are justified.

As a computer scientist and programmer, I also have doubts that you'd ever be able to 100% prevent the risk of zero-days or something like the XZ attack happening in open source code. Given how common zero-days seem to be, I suspect there are many in existing open source projects that still haven't been discovered, and that XZ was just a rare exception where someone was caught.

Yes, hardening these systems might somewhat mitigate the risk, but I wouldn't know how to evaluate how effective such an intervention would be, or even, how you'd harden them exactly. Even if you identify the at-risk projects, you'd need to do something about them. Would you hire software engineers to shore up the weaker projects? Given the cost of competent SWEs these days, that seems potentially expensive, and could compete for funding with actual AI safety work.

As an EA who has been in the movement since 2013 and a self-proclaimed liberal democratic socialist, I'd say that there is definitely a tension between EA and socialism that stems at least in part from the history of both movements.

One of the basic foundations of EA thought is Utilitarianism, and historically, Marx criticized Bentham and Mill for what he considered "bourgeois morality" that merely justified the rule of the ruling class. Utilitarianism's influence on EA can be traced to Peter Singer and also the Oxford moral philosophy students turned professors like Will MacAskill and Toby Ord. That's what I'd consider the academic foundation of EA, and one of four major power centres in EA.

The other three power centres are, respectively:

  • The Bay Area Rationalist community, led more or less by Eliezer Yudkowsky (who early on was funded by FHI at Oxford to start his blogging), and who are known for having something of a techno-libertarian bias.
  • The billionaire Dustin Moskovitz through Open Philanthropy, who funds a massive percentage of EA related projects (nothing against him personally, but the optics are clearly challenging).
  • The DC Area American establishment, including think tanks like RAND (whose current leader is a known EA supporter). It's hard to say to what extent EA is trying to influence the establishment vs. the other way around, but there's probably significant cross-pollination, especially more recently with the AI governance push.

All of these would be considered suspect by most card-carrying socialists, particularly the more radical ones who are prone to disliking an Anglo-centric movement beholden to both political and entrepreneurial elites.

A more radical socialist (i.e., a tankie) might even extend the known history of CIA and U.S. government PSYOPs into a conspiracy theory that EA is a PSYOP designed to funnel would-be left-leaning radical students into a relatively tame ideology that conforms to American imperialism and doesn't aspire to upset the status quo in any meaningful way. While I doubt this is anything more than a silly conspiracy theory, socialists who have dealt with a long history of red scare tactics and government surveillance are likely to fall prey to this kind of paranoia.

All that, before I even get to the ideological clashes between EA and socialism.

Ideologically, EA, and particularly the leadership of EA, is very much biased towards western liberalism, both the intellectual tradition and the political movement. Bentham and Mill were both considered liberals in their day (notwithstanding Bentham's connections to Robert Owen, or Mill's later turn towards cooperatives). Oxford's elites today seem generally more aligned with liberal thinking than socialist thinking, the Bay Area folks lean libertarian (i.e., classical liberalism), and of course the American establishment is very much a defender of the liberalism of Fukuyama's End of History.

The idea, for instance, of private charitable donations to GiveWell-approved charities that administer bednets as a health intervention in Africa is something that makes the most sense if you are a liberal with individualist sensibilities. Socialists would almost certainly ask: why isn't this health intervention being done by the government? Shouldn't it be the responsibility of the state or society to provide for the basic welfare of its citizens?

This is not to say that EA and Socialism have no common ground. Your post shows clearly that there are places of overlap, particularly in the ideal of some form of altruism being desirable. The rank and file EA, according to surveys, is most likely to lean centre-left to left on the political spectrum, and likely at least somewhat sympathetic to the idealism of socialism, if not necessarily its practice.

I don't think the difficulties are insurmountable, but it would probably require a substantial push for engagement from the rank and file left-leaning EAs that would somehow be listened to by the EA leadership rather than being briefly considered and then ultimately ignored. If you haven't guessed, I'm somewhat cynical about the EA leadership, and doubtful that they'd do this, given that the power centres I've mentioned hold considerable sway.

Good luck though!

So, I have three very distinct ideas for projects that I'm thinking of applying for funding from the Long Term Future Fund for. I was wondering if anyone knows whether it makes more sense to submit them together in one application, or as three separate individual applications?

Hi Aaron,

I think how to bring EA ideas into our local church has been a topic of discussion in the past. Although it seems to be up to individual Christian EAs how to go about it, some ideas we had included bringing up charities like the Against Malaria Foundation as possible causes for our churches to consider donating to, and speaking individually with church leaders and people we know at the church about concepts like the importance, neglectedness, and tractability framework.

Also, emphasizing Jesus' teachings, like how "when you give to the least of these, you give to me" (demandingness), or the parable of the talents (effectiveness), or speaking of "you will know them by their fruits" (consequences), can be helpful to encourage a stronger moral imperative.

Regarding the Second Coming, EACH has diverse views on eschatology, and some of us tend to focus on present issues like global poverty more than far distant future considerations. Many of us think that God would not allow humanity to go extinct, so Existential Risks are much less of a concern than Suffering Risks, although prudence suggests we should still act to mitigate any kind of risk, existential or not, in the same way we still go to a doctor when we are sick, even though getting sick seems like the Will of God. Also, even from the most fundamentalist perspective, the day and the hour of the Second Coming are unknown, and so it is prudent to still make future plans and act with wisdom and consideration.

There are also scriptural passages about descendants being like the stars and grains of sand on the beach, which suggest that the future will be filled with flourishing humans, and it follows that we have a certain degree of responsibility towards them, which can be considered support for a kind of soft Longtermism.

We're somewhat skeptical of hard Longtermism though, as it seems like the far future is ultimately in the hands of God, and not something we can plan about with much certainty. As Christians, we choose to be particularly humble about what we're capable of influencing, which is admittedly somewhat different from regular EA thinking.

As for resources, you could join the EACH Facebook Group, and for a bunch of potentially relevant articles there's the Christ and Counterfactuals substack blog (which is written by a number of EACH members).

I'm both a Christian and an EA and have been involved with EA for Christians (EACH) for several years now. There's a whole community around EACH, and we have a Facebook Group and weekly (Sunday afternoon) Zoom call discussions on a different topic each week.

We also have our own conference, separate from EA Global, which this year was hosted in Washington DC. I've attended previous conferences that were virtual, and I also met some people in person at EA Global Washington DC 2022, where among other things I had a one-on-one with one of the organizers that included a fold-your-hands-and-close-your-eyes kind of prayer (while seated at a table in the midst of one of the large conference one-on-one meeting rooms, visible to anyone paying attention).

Generally, most of the people in EACH are both Christians (from a wide variety of denominations) and also EAs. We tend to have somewhat less orthodox views than the typical EA, such as being more skeptical of AI risk and Longtermism generally, as well as putting some value on Christian-centric cause areas like missions alongside things like global poverty.

The EACH blog has a lot of posts that sound like essentially what you're looking for. I'd suggest giving it a read. There are actually a lot of Bible verses that can be interpreted to support EA ideas, so taking a Christian approach to EA is very much doable, even if the larger movement is mostly secular.

As a utilitarian, I think that surveys of happiness in different countries can serve as an indicator of how well the various societies and government systems of these countries serve the greatest good. I know this is a very rough proxy and potentially filled with confounding variables, but I noticed that the two main surveys, Gallup's World Happiness Report, and Ipsos' Global Happiness Survey seem to have very different results.

Notably, Gallup's Report puts the Nordic model countries like the Netherlands (7.403) and Sweden (7.395) near the top, with Canada (6.961) and the United States (6.894) scoring pretty well, and countries like China (5.818) scoring modestly, and India (4.036) scoring poorly.

Conversely, the Ipsos Survey puts China (91%) at the top, with the Netherlands (85%) and India (84%) scoring quite well, while the United States (76%), Sweden (74%), and Canada (74%) are more modest.

I'm curious why these surveys seem to differ so much. Obviously, the questions are different, and the scoring method is also different, but you'd expect a stronger correlation. I'm especially surprised by the differences for China and India, which seem quite drastic.
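Just to make the "you'd expect a stronger correlation" point concrete, here's a quick back-of-the-envelope calculation in Python, using only the six countries and the exact figures I quoted above (obviously a tiny sample, so this is purely illustrative):

```python
# Back-of-the-envelope check: how correlated are the two surveys across
# just the six countries quoted above? (Tiny sample; purely illustrative.)
gallup = {"Netherlands": 7.403, "Sweden": 7.395, "Canada": 6.961,
          "United States": 6.894, "China": 5.818, "India": 4.036}
ipsos = {"Netherlands": 85, "Sweden": 74, "Canada": 74,
         "United States": 76, "China": 91, "India": 84}

countries = list(gallup)
x = [gallup[c] for c in countries]
y = [ipsos[c] for c in countries]


def pearson(a, b):
    """Plain Pearson correlation, computed by hand to avoid dependencies."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((ai - mean_a) * (bi - mean_b) for ai, bi in zip(a, b))
    ss_a = sum((ai - mean_a) ** 2 for ai in a) ** 0.5
    ss_b = sum((bi - mean_b) ** 2 for bi in b) ** 0.5
    return cov / (ss_a * ss_b)


print(f"Pearson r over these six countries: {pearson(x, y):.2f}")
```

By this quick calculation, the correlation over these six countries actually comes out negative (around -0.5), which mostly reflects how differently the two surveys treat China and India.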

I would just like to point out that this consideration of there being two different kinds of AI alignment, one more parochial and one more global, is not entirely new. The Brookings Institution put out a paper about this in 2022.

I have some ideas and drafts for posts to the EA Forums and Less Wrong that I've been sitting on, because I feel somewhat intimidated by the level of intellectual rigor I would need to put into the final drafts to ensure I'm not downvoted into oblivion (particularly on Less Wrong, where a younger me experienced exactly that in the early days).

Should I try to overcome this fear, or is it justified?

For the EA Forums, I was thinking about explaining my personal practical take on moral philosophy (Eudaimonic Utilitarianism with Kantian Priors), but I don't know if that's actually worth explaining given that EA tries to be inclusive and not take particular stands on morality, and it might not be relevant enough to the forum.

For Less Wrong, I have a draft of a response to Eliezer's List of Lethalities post that I've been sitting on since 2022/04/11, because I doubted it would be well received given that it tries to be hopeful and, as a former machine learning scientist, I challenge a lot of LW orthodoxy about AGI in it. I have tremendous respect for Eliezer though, so I'm also uncertain whether my ideas and arguments are just harebrained foolishness that will be shot down rapidly once exposed to the real world and the incisive criticism of Less Wrongers.

The posts in both places are also now of such high quality that I feel the bar is too high for me to meet with my writing, which tends to be more "interesting train-of-thought in unformatted paragraphs" than the "point-by-point articulate with section titles and footnotes" style that people in both places tend to employ.

Anyone have any thoughts?

This year I decided to focus my donations more, as in the past I used to have a "charity portfolio" of about 20 charities and 3 political parties that I would donate to monthly. This year I've had some cash flow issues due to changes with my work situation, so I stopped the monthly donations and switched back to an annual set of donations once I worked out what I could afford. I normally try to donate 12.5% of my income annually, averaged over time.

This year's charitable donations went to: The Against Malaria Foundation, GiveDirectly, Rethink Priorities, and AI Governance & Safety Canada. I also donated again to some political parties, but I don't count those as charity so much as political activism, so I won't mention them further.

AMF has been my go-to, the charity I donate the most to, because of GiveWell's long-running recommendation. When in doubt, I donate to them.

GiveDirectly is my more philosophical choice, as I'm somewhat partial to the argument that people should be able to choose how best to be helped, and cash does this better than anything else. I also like their basic income projects as I worry about AI automation a lot, and I think it has the most room for growth of any option.

Rethink Priorities is, well, I'll be honest: a big part of donating to that outfit is that I have an online acquaintanceship with Peter Wildeford (co-CEO of RP) that goes back to the days when he was a young Peter Hurford posting on the Felicifia utilitarianism forum, and I think a team co-led by him will go places and deserves support (he also gave a pretty good argument for donating to RP on the forum and on Twitter). I know Peter well enough to know that he's an incredibly decent human being, a true gentleman and a scholar, and any org he's chosen to co-run is going to be a force for good in the world. Also, I'm a big fan of the EA Survey as a way to gauge and understand the community.

AIGS Canada is an organization that's closer to home and I think they do good work engaging with the politicians and media up here in Canada, doing a much needed service that is otherwise neglected. They're kinda small, so I figure even a small donation from me will have an outsized impact compared to other options. Full disclosure: I'm in the AIGS Canada Slack and sometimes partake in the interesting discussions there.

The first two would be my primary recommendations to people generally. The latter two I would suggest to people in the EA community specifically.

I go into somewhat more detail about my general charity recommendations and also mention some of the ones I used to donate to but don't anymore here: http://www.josephius.com/recommended-charities/
 

So, I read a while back that SBF apparently posted on Felicifia back in the day. Felicifia was an old Utilitarianism-focused forum that I used to frequent before it got taken down. I checked an archive of it recently and was able to figure out that SBF actually posted there under the name Hutch. He also linked a blog that included a lot of posts about Utilitarianism, and it looks like, at least around 2012, he was a devoted Classical Benthamite Utilitarian. Although we never interacted on the forum, it feels weird that we could have crossed paths back then.

His Felicifia: https://felicifia.github.io/user/1049.html
His blog: https://measuringshadowsblog.blogspot.com/
