prisonpent

Comments

Now that you point it out, I agree that's the more plausible reading, but it genuinely wasn't the one that occurred to me first.

I didn't downvote you, but I would guess those who did were objecting to this:

"Center left" (so quite similar)

Self-identified leftists, myself included, generally see modern liberalism as a qualitatively different ideology. Imagine someone at Charity Navigator[1] offhandedly describing EA as "basically the same as us". Now imagine that the longtermism discourse had gotten so bad that basically every successful EA organization could expect to experience periodic coup attempts, and "they're basically Charity Navigator" was the canonical way to insult people on the other side. That's what "left = very liberal" looks like from here. 

  1. ^

    before they started doing impact ratings

For a freely moving mass, yes, though "early enough" can be arbitrarily early depending on how much impulse you have to work with. But stars in a galaxy aren't freely moving. They're on highly chaotic trajectories, with characteristic timescales on the order of ~1 Myr.

> but if we had the energy of our sun available, surely we could redirect the path of such an object if detected early enough?

Momentum is the limiting factor. Even redirecting all of the sun's light in the same direction only gets you about $L_\odot / c \approx 10^{18}\,\mathrm{N}$ of force. That's enough to accelerate a solar mass by a whopping $\sim 6 \times 10^{-13}\,\mathrm{m/s^2}$, so you're going to need to see it coming millions of years out. Doing better than that would require somehow ejecting a significant amount of reaction mass from the solar system.
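To make the arithmetic concrete, here's a minimal back-of-the-envelope sketch in Python, using round textbook constants; the one-light-year deflection target is my own illustrative choice, not anything fixed by the physics:

```python
import math

# Round textbook values
L_SUN = 3.8e26        # solar luminosity, W
C = 3.0e8             # speed of light, m/s
M_SUN = 2.0e30        # solar mass, kg
LIGHT_YEAR = 9.46e15  # m
YEAR = 3.15e7         # s

# Thrust if every photon the sun emits leaves in the same direction: F = L/c
force = L_SUN / C      # ~1.3e18 N
accel = force / M_SUN  # ~6e-13 m/s^2

# Time for constant thrust to build up a 1-light-year deflection: d = a*t^2/2
t = math.sqrt(2 * LIGHT_YEAR / accel)

print(f"thrust       ~ {force:.1e} N")
print(f"acceleration ~ {accel:.1e} m/s^2")
print(f"1 ly deflection takes ~ {t / YEAR:.1e} yr")  # ~5e6 yr
```

A light-year-scale deflection takes on the order of five million years of continuous, perfectly collimated thrust, which is where the "millions of years out" figure comes from.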

Yes and no: "evolution gave us reason" is the same sort of coarse approximation as "evolution gave us the ability and desire to compete in status games". What we really have is a sui generis thing which can, in the right environment, approximate ideal reasoning or Machiavellian status-seeking or coalition-building or utility maximization or whatever social theory of everything you want to posit, but which most of the time is trying to split the difference. 

People support impartial benevolence because they think they have good pragmatic reasons to do so and they think it's correct and it has an acceptable level of status in their cultural environment and it makes them feel good and it serves as a signal of their willingness to cooperate and and and and. Of course the exact weights vary, and it's pretty rare that every relevant reason for belief is pointing exactly the same way simultaneously, but we're all responding to a complex mix of reasons. Trying to figure out exactly what that mix is for one person in one situation is difficult. Trying to do the same thing for everyone all at once in general is impossible. 

I don't think it's possible to give an evolutionary account of impartiality in isolation, any more than you can give one for algebraic geometry or Christology or writing or common-practice tonality. The underlying capabilities (e.g. intelligence, behavioral plasticity, language) are biological, but the particular way in which they end up expressed is not. We might find a thermodynamic explanation of the origin of self-replicating molecules, but a thermodynamic explanation of the reproductive cycle of ferns isn't going to fit in a human brain. You have to move to a higher level of organization to say anything intelligible. Reason, similarly, is likely the sort of thing that admits a good evolutionary explanation, but individual instances of reasoning can only really be explained in psychological terms.

> From an evolution / selfish gene's perspective, the reason I or any human has morality is so we can win (or at least not lose) our local virtue/status game

If you're talking about status games at all, then not only have you mostly rounded the full selective landscape off to the organism level, you've also taken a fairly low-resolution model of human sociality and held it fixed (when it's properly another part of the phenotype). Approximations like this, if not necessarily these ones in particular, are of course necessary to get anywhere in biology - but that doesn't make them any less approximate.

If you want to talk about the evolution of some complex psychological trait, you need to provide a very clear account of how you're operationalizing it and explain why your model's errors (which definitely exist) aren't large enough to matter in its domain of applicability (which is definitely not everything). I don't think rationalist-folk-evopsych has done this anywhere near thoroughly enough to justify strong claims about "the" reason moral beliefs exist.

> If you accept the causal closure of the physical

I think the causal closure of the physical is very, very likely, given the evidence. I do not accept it as axiomatic. But if it turns out that it implies illusionism, i.e. that it implies the evidence does not exist, then it is self-defeating and should be rejected.

> Or, do you mean that knowing itself is not entirely physical?

I am referring to my phenomenology, not (what I believe to be) the corresponding behavioral dispositions. E.g. so far as I know my visual field can be simultaneously all blue and all dark, but never all blue and all red. We have a clear path towards explaining why that would be true, and vague hints that it might be possible to explain why, given that it's true, I can think the corresponding thoughts and say the corresponding words.  But explaining how I can make that judgement is not an explanation of why I have visual qualia to begin with. 

Whether these are also physical in some broader sense of the word, I can't say.

> Is phenomenality itself necessary/on the causal path here?

I have no idea what the causal path is, or even whether causation is the right conceptual framework here. But it has no bearing on whether phenomenal experiences exist: they're particular things out there in the world (so to speak), not causal roles in a model. 

> Note also that the information in or states of a computer (including robots and AIs) also play a similar role for the computer.

It plays a similar role, for very generous values of "similar", in the computer qua physical system, sure. And I am perfectly happy to grant that "I" qua human organism am almost certainly a causally closed physical system like any other. (Or rather, the joint me-environment system is.) But that's not what I'm talking about.

For "us", everything goes through our access consciousness.

I'm not talking about access consciousness either! That's just one particular sort of mental state in a vast landscape. The existence of the landscape - as a really existing thing with really existing contents, not a model - is the heart of the mystery.

> what's the extra explanatory work phenomenality is doing here?

My whole point is that it doesn't do explanatory work, and expecting it to is a conceptual confusion. The sun's luminosity does not explain its composition, the fact that looking at it causes retinal damage does not explain its luminosity, the firing of sensory nerves does not explain the damage, and the qualia that constitute "hurting to look at" do not explain the brain states which cause them.

Phenomenality is raw data: the thing to be explained. Not what I do, not what I say, not the exact microstate of my brain, not even the structural features of my mind - but the stuff being structured, and the fact that there is any.

> If you define phenomenality just by certain physical states, effects or responses, or functionalist or causal abstractions thereof

I don't define phenomenality! I point at it. It's that thing, right there, all the time. The stuff in virtue of which all my inferential knowledge is inferential knowledge about something, and not just empty formal structure. The relata which introspective thought relates[1]. The stuff at the bottom of the logical positivists' glass. You know, the thing.

  1. ^

    And again, I am only pointing at particular examples, not defining or characterizing or even trying to offer a conceptual prototype: qualia need not have anything to do with introspection, linguistic thought, inference, or any other sort of higher cognition. In particular, "seeing my computer screen" and "being aware of seeing my computer screen" are not the same quale.

> but every theory other than strong illusionism needs to solve [the hard problem].

I agree in the sense that other theories can't simply dissolve it, but that's almost tautological. If you mean that other theories need to solve it in order to justify belief in them - in other words, that if we were all certain the hard problem would never be adequately resolved, we would be forced to accept illusionism - then I don't think that's correct at all.

Consider what we might call "the hard problem of physics": why this? Why anything? What puts the fire in the equations? Short of some galaxybrained PSR (principle of sufficient reason) maneuver, which seems more and more dubious by the century, I doubt we're ever going to get an answer. It is completely inexplicable that anything should exist.

And yet it does. It's there, it's obviously there, everything you've ever seen or felt or thought bears witness to it, and someone who claims otherwise on the grounds that it doesn't make any sense has entirely misunderstood the nature of their situation. 


This is also how I feel about illusionism. Phenomenal experience is the only thing we have direct access to: all arguments, all inferences, all sense data ultimately cash out in some regularity in the phenomenal content of consciousness. Whatever its ontological status, it's the epistemic ground of everything else. You can't justify the claim that phenomenal consciousness doesn't exist by pointing to patterns of phenomena, any more than you can demonstrate the nonexistence of language in an essay or offer a formal disproof of modus ponens.

So these illusionist explanations are, well, not really explanations of consciousness. They're explanations of a coarse world model in terms of a finer one, but the coarse world model wasn't the thing I wanted explained. On the contrary, it was a tiny provisional step towards an explanation: there are many lawlike regularities in the structure of my experiences, so I hypothesize a common cause and call it "my brain". It's a very successful hypothesis, and I'd like to know why - given that the world is more than just its shadow on the mind[1], why should the structure of one reflect the other?

The illusionist response of "actually your hypothesis is the evidence and your data are just hypotheses" misses the point entirely. 

  1. ^

    the dumbest possible solution, but I can't rule it out
