Wei Dai

Comments
> #5 seems off to me. I don’t know whether OpenAI uses nondisparagement agreements;

Details about OpenAI's nondisparagement agreements have come out.

Unlike FTX, OpenAI has now had a second wave of resignations in protest of insufficient safety focus.

> Personally, I think fascism should be more upsetting than woke debate!

I'm not very familiar with Reactionary philosophy myself, but I was suspicious of your use of "fascism" here, so I asked Copilot (based on GPT-4), which answered:

> As an AI, I don’t form personal opinions. However, I can share that Reactionary philosophy and Fascism are distinct ideologies, even though they might share some common elements such as a critique of modernity and a preference for traditional social structures.
>
> Fascism is typically characterized by dictatorial power, forcible suppression of opposition, and strong regimentation of society and of the economy, which is not necessarily present in Reactionary philosophy. Reactionaries might advocate for a return to older forms of governance, but this does not inherently involve the authoritarian aspects seen in Fascism.

(Normally I wouldn't chime in on a topic I know this little about, but I suspect others who are more informed might fear speaking up and getting associated with fascism in other people's minds as a result.)

Also, I'm not Scott, but I can share that I'm personally upset with wokeness, not because of how it changed debate, but because of more significant harms to my family and the community we live in (which I described in general terms in this post), to the extent that we're moving halfway across the country to a more politically balanced area, where it will hopefully have less influence. (Not to mention the damage to other institutions I care about, such as academia and journalism.)

> (Yes, that is melodramatic phrasing, but I am trying to shock people out of what I think is complacency on this topic.)

I'm not entirely sure what you're referring to by "melodramatic phrasing", but if this is an excuse for using "fascism" to describe "Reactionary philosophy" in order to manipulate people's reactions to it and/or prevent dissent (I've often seen "racism" used this way in other places), I think I have to stand against that. If everyone excused themselves from following good discussion norms whenever they felt others were complacent about something, that would be a recipe for disaster.

> That said, I very much agree about the “weirdness” of turning to philosophical uncertainty as a solution. Surely philosophical progress (done right) is a good thing, not a moral threat.

I of course also think that philosophical progress, done right, is a good thing. However, I also think genuine philosophical progress is much harder than it looks (see Some Thoughts on Metaphilosophy for some relevant background views), and I am therefore perhaps more worried than most about philosophical "progress", done wrong, being a bad thing.

> The salient thing to notice is that this person wants to burn your house down.

In your example, after I notice this, I would call the police to report this person. What do you think I should do (or what does David want me to do) after noticing the political agenda of the people he mentioned? My own natural inclination is to ignore them and keep doing what I was doing before, because it seems incredibly unlikely that their agenda would succeed, given the massive array of political enemies that such an agenda has.

I was concerned that after the comment was initially downvoted to -12, it would be hidden from the front page and not enough people would see it to vote it back into positive territory. It didn't work out that way, but it perhaps could have?

I want to note that within a few minutes of posting the parent comment, it received 3 downvotes totaling -14 (I think they were something like -4, -5, -5, i.e., probably all strong downvotes) with no agreement or disagreement votes, and subsequently received 5 upvotes spread over 20 hours (with no further downvotes AFAIK) that brought the net karma up to 16 as of this writing. Agreement/disagreement is currently 3/1.

This pattern of voting seems suspicious (e.g., why were all the downvotes clustered so closely in time?). I reported the initial cluster of downvotes to the mods in case they want to look into it, but have not heard back from them yet. I thought I'd note this publicly in case a similar thing happened or happens to anyone else.

I think too much moral certainty doesn't necessarily cause someone to be dangerous by itself; there have to be other elements to their personality or beliefs. For example, lots of people are or were unreasonably certain about divine command theory[1], but only a minority of them caused much harm (e.g., by being involved in crusades and inquisitions). I'm not sure it has much to do with realism vs. non-realism, though. I can definitely imagine some anti-realist (e.g., one with strong negative utilitarian beliefs) causing a lot of damage if they were put in certain positions.

> Uncertainty can transition to increased certainty later on, as people do more thinking. So, it doesn’t feel like a stable solution.

This seems like a fair point. I can think of some responses. Under realism (or if humans specifically tend to converge under reflection), people would tend to converge on similar values as they think more, so increased certainty should be less problematic. Under other metaethical alternatives, one might hope that as we mature overall in our philosophies and social systems, we'd be able to better handle divergent values through compromise and cooperation.

> (Not to mention that, as EAs tell themselves it’s virtuous to remain uncertain, this impedes philosophical progress at the level of individuals.)

Yeah, there is perhaps a background disagreement between us: I tend to think there's little opportunity to make large amounts of genuine philosophical progress without doing much more cognitive work (i.e., thoroughly exploring the huge space of possible ideas/arguments/counterarguments), which makes your concern insignificant for me in the near term.

  1. ^

    Self-nitpick: divine command theory is actually a meta-ethical theory. I should have said "various religious moralities".

It's entirely possible that I misinterpreted David. I asked him for clarification in the original comment in case that was so, but he hasn't responded so far. If you want to offer your own interpretation, I'd be happy to hear it out.

I'm saying that you can't determine the truth about an aspect of reality (in this case, what causes group differences in IQ) by looking at which political agenda is better, when both sides of a debate over it are pushing political agendas. (I also think one side of it is not as benign as you think, but that's beside the point.)

I actually don't think this IQ debate is one that EAs should get involved in, and said as much to Ives Parr. But if people practice or advocate for what seem to me like bad epistemic norms, I feel an obligation to push back on that.
