Richard Y Chappell

Associate Professor of Philosophy @ University of Miami
4755 karma

Bio

Academic philosopher, co-editor of utilitarianism.net, blogs at https://rychappell.substack.com/

Comments (293)

The sadist example can be "traced back" -- it casts doubt on a particular (hedonistic) axiology, i.e. a hedonistic interpretation of the beneficentric goals.

I mean, lots of fallacious reasoning is "normal and understandable", but I'm still confused when philosophers do it - I expect better from them!

But negligence / lack of concern for obvious risks to others is a classic form of vice? (In this case, the connection to toilet waste may amplify disgust reactions, for obvious evolutionary reasons.)

If you specify that the staff are from a distant tribe that never learned about basic hygiene facts, I think people would cease to be "appalled" in the same way, and instead just feel that the situation was very lamentable. (Maybe they'd instead blame the restaurant owner for not taking care to educate their staff, depending on whether the owner plausibly "should have known better".)

I don't think there's any such thing as non-inherent appallingness. To judge X as warranting moral disgust, revulsion, etc. seems a form of intrinsic evaluation (attributing a form of extreme vice rather than mere instrumental badness).

Hence the paradigmatic examples being things like racist attitudes, not things like... optimism about the prospects were one to implement communism.

Seems compatible if you simply refrain from hostility altogether? The constraint identifies a necessary condition for hostility, not a sufficient one.

Are you disagreeing with my constraint on warranted hostility? As I say in the linked post on that, I think it's warranted to be hostile towards naive instrumentalism, since it's actually unreasonable for limited human agents to use that as their decision procedure. But that's no part of utilitarianism per se.

You say: it could turn out badly to recommend X, if too many people would irrationally combine it with Y, and X+Y has bad effects. I agree. That's a good reason for being cautious about communicating X without simultaneously communicating not-Y. But that doesn't warrant hostility towards X, e.g. philosophical judgments that X is "deeply appalling" (something many philosophers claim about utilitarianism, which I think is plainly unwarranted).

There's a difference between thinking "there are risks to communicating X to people with severe misunderstandings" and "X is inherently appalling". What baffles me is philosophers who claim the latter, when X = utilitarianism (and even more strongly, for X = EA).

A tangential request: I'd welcome feedback about whether I should cross-post more or fewer of my (potentially EA-relevant) philosophy posts to this forum.

My default has generally been to err on the side of "fewer", since I figure anyone who generally likes my writing can freely subscribe to my substack. And others might dislike my posts and find them annoying (at least in too high a quantity, and I do tend to blog quite a lot). OTOH, someone did recently point out that they appreciate the free audio transcription offered on the forum. And maybe others like my writing but would rather find it here than in their email inbox or on another website.

So: a poll. Agree-vote this comment if you'd like me to cross-post marginally more of my posts (like this one, which seemed kind of "on the margin" by my current standards). Disagree-vote if you'd prefer fewer or about the same.

(I plan to leave most of my pure ethical theory stuff -- on utilitarianism, population ethics, etc. -- exclusively on the substack in either case.)

the evolution towards something complex [after human extinction] is not a certainty but since so much of longtermism rests on tiny probabilities, shouldn't we be factoring in the probability associated with this too

Note that this is one of the "exogenous nullifying events" that Tarsney's model incorporates. The mere possibility that human survival isn't needed to secure a far-future flourishing civilization does not by itself undermine the claim that human survival improves the odds of better longterm outcomes. (But it's generally true that the higher the chances you place on such positive 'nullification', the less expected value you should assign to existential risk mitigation.)
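To make that parenthetical point concrete, here's a deliberately simplified sketch (my own illustrative framing, not Tarsney's actual model): let $q$ be the probability you assign to a comparably valuable civilization emerging even without human survival, and $V$ the long-term value at stake. Then the expected value of preventing extinction scales roughly as

$$EV \approx (1 - q) \cdot V,$$

so the more probable you take such positive "nullification" to be, the smaller the expected gain from existential risk mitigation.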

It's a neat video - thanks for sharing it! Maybe consider updating the OP with a brief summary? E.g. "it briefly introduces the importance of economic growth and innovation, and how greater global wealth will help spur greater innovation that ultimately helps all of us".

I'm posting it now because it's a pity that it wasn't uploaded...

fyi, I found this remark a bit cryptic -- it wasn't immediately obvious what you're referring to. (It made me wonder, e.g., whether someone obstructed a previous attempt to share the video.) Having clicked through, I now see that the video is 6 years old. So I guess you just mean something like, "The video is several years old, but I haven't seen it shared on this forum before. Sharing now since it still seems relevant and I personally found it highly motivating for supporting global development."
