Stathis Papaioannou wrote:
> Brent meeker writes:
>>>Let's not try to define consciousness at all, but agree that we know what it 
>>>is from personal experience. Computationalism is the theory that consciousness 
>>>arises as a result of computer activity: that our brains are just complex 
>>>computers, and
>>>in the manner of computers, could be emulated by another computer, so that
>>>computer would experience consciousness in the same way we do. (This theory 
>>>may be
>>>completely wrong, and perhaps consciousness is due to a substance secreted 
>>>by a
>>>special group of neurons or some other such non-computational process, but 
>>>leave that possibility aside for now). What we mean by one computer emulating
>>>another is that there is an isomorphism between the activity of two physical
>>>computers, so that there is a mapping function definable from the states of
>>>computer A to the states of computer B. If this mapping function is fully
>>>specified we can use it practically, for example to run Windows on an x86
>>>processor emulated on a Power PC processor running Mac OS. If you look at the
>>>Power PC processor and the x86 processor running side by side it would be
>>>extremely difficult to see them doing the "same" computation, but according 
>>>to the
>>>mapping function inherent in the emulation program, they are, and they still 
>>>would be a thousand years from now even if the human race is extinct.
>>>In a similar fashion, there is an isomorphism between a computer and any 
>>>physical system, even if the mapping function is unknown and extremely complicated.
>>I don't see how there can be an isomorphism between any two systems.  Without a 
>>structural constraint that seems to throw away the "iso" part and simply 
>>leave a mapping.
> The definition of the structural constraint is part of the isomorphism. Some 
> isomorphisms are 
> more economical than others, but there are no God-given isomorphisms or 
> structural constraints. 
> The limiting case is simply a lookup table mapping any arbitrary system to 
> another arbitrary 
> system. That this is inelegant does not make it invalid.
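That limiting case can be sketched concretely (a hypothetical illustration; the systems and state names are invented for the sketch):

```python
# A minimal sketch of the limiting case: an "isomorphism" defined by
# nothing more than a lookup table pairing the states of one arbitrary
# system with the states of another.  All state names are invented.

# Observed state history of system A (say, a swinging pendulum)
states_a = ["left", "right", "left", "right"]

# Observed state history of system B (say, a flip-flop circuit)
states_b = ["0", "1", "0", "1"]

# The "structural constraint" is whatever the table says it is: each
# state of A is simply paired with the co-occurring state of B.
mapping = dict(zip(states_a, states_b))

# The table reproduces B's history from A's exactly, so the mapping is
# perfectly usable, however inelegant.
assert [mapping[s] for s in states_a] == states_b
```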
>>>That's not very interesting for non-conscious computations, because
>>>they are only useful or meaningful if they can be observed or interact with 
>>>the environment. However, a conscious computation is interesting all on its own. 
>>>It might have a fuller life if it can interact with other minds, but its 
>>>meaning is
>>>not contingent on other minds the way a non-conscious computation's is. 
>>Empirically, all of the meaning seems to be referred to things outside the
>>computation.  So if the conscious computation thinks of the word "chair" it 
>>cannot provide any meaning unless there is a chair - outside the computation.  So it 
>>is not
>>clear to me that meaning can be supplied "from the inside" in this way.  I 
>>think this
>>is where Bruno talks about "the required level of substitution" and allows 
>>that the
>>level may be the brain at a neural level PLUS all the outside world.  So that in 
>>this simulation the simulated brain is conscious *relative* to the rest of the
>>simulated world.
> I don't think it is right to say that the brain is *conscious* relative to 
> the environment. It is 
> intelligent relative to the environment, whether that means able to 
> communicate with another 
> conscious being or otherwise interacting with the environment in a meaningful 
> way. Although 
> we deduce that a being is conscious from its behaviour, and you can only have 
> behaviour 
> relative to an environment, only the being itself directly experiences its 
> consciousness. This is 
> the 3rd person/1st person distinction. 
>>>I know 
>>>this because I am conscious, however difficult it may be to actually define it. 
>>But do you know you would be conscious if you could not interact with the 
>>environment?  That seems doubtful to me.  Of course you can close your eyes, stop your 
>>ears, etc
>>and still experience consciousness - for a while - but perhaps not 
>>indefinitely and
>>maybe not even very long.
> Maybe there is something about my brain that would render me unconscious if 
> all outside 
> input stopped, but that seems to me a contingent fact about brains, like the 
> fact that I 
> would be rendered unconscious if my oxygen supply were cut off. A 
> hallucination is defined 
> as a perception without a stimulus. 

Not really; it may just be a perception that doesn't match the stimulus, e.g. a 
perception of Christ brought about by hearing a certain piece of music.

>and there are millions of people in the world who have 
> hallucinations all the time. Sometimes people are so overwhelmed by 
> hallucinatory experiences 
> that you could saw their leg off and they wouldn't notice, which is in part how 
> dissociative 
> anaesthetics like ketamine work. If you like, you can say that consciousness 
> is maintained by 
> one part of the brain interacting with another part of the brain: one part is 
> program, the other 
> part data, or one part is computer, the other part environment. The point is, 
> whatever you 
> choose to call it, an isolated physical system can experience consciousness.

I won't insist, because you might be right, but I don't think that is proven.  
It may 
be that interaction with the environment is essential to continued consciousness.

>>>The conclusion I therefore draw from computationalism is that every 
>>>conscious computation is implemented necessarily if any physical process exists.
>>That would seem to require mappings that are not isomorphisms.
> How do you define the non-isomorphic mappings?

Consider the physical process "tick tock tick tock..."  There are only two 
states so 
it can be isomorphic to "1010101..." or "abababa...". But it cannot be 
isomorphic to 
a process "rock scissors paper rock scissors paper..." with three states.  
There can 
be a mapping between them: there can be a mapping between "1" and the content 
of the 
Oxford English Dictionary, but there's no "iso" about the morphism unless there is 
some structure that is preserved by the mapping.
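That point can be sketched concretely, taking an isomorphism between processes to be a bijection on states that commutes with the transition (successor) function; the toy systems below are the ones from the example:

```python
from itertools import permutations

# Two-state process: tick -> tock -> tick -> ...
succ2 = {"tick": "tock", "tock": "tick"}

# Three-state process: rock -> scissors -> paper -> rock -> ...
succ3 = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def is_isomorphism(f, succ_a, succ_b):
    # f preserves structure iff f(succ_a(x)) == succ_b(f(x)) for all x
    return all(f[succ_a[x]] == succ_b[f[x]] for x in succ_a)

# A two-element state set cannot be put in bijection with a
# three-element one, so the best we can try is every injective map from
# the two-state system into the three-state system -- and none of these
# candidates preserves the transition structure.
candidates = [dict(zip(succ2, pair)) for pair in permutations(succ3, 2)]
assert not any(is_isomorphism(f, succ2, succ3) for f in candidates)
```

By contrast, the map tick→"1", tock→"0" onto the two-state process "1" → "0" → "1" → ... does commute with the transitions, which is exactly the preserved structure that makes that morphism an "iso".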

>>>This seems to me very close to saying that every conscious computation is
>>>implemented necessarily in Platonia, as the physical reality seems hardly necessary.
>>It seems to me to be very close to a reductio ad absurdum.
> Like Bruno, I am not claiming that this is definitely the case, just that it 
> is the case if 
> computationalism is true. Several philosophers (eg. Searle) have used the 
> self-evident 
> absurdity of the idea as an argument demonstrating that computationalism is 
> false - 
> that there is something non-computational about brains and consciousness. I 
> have not 
> yet heard an argument that rejects this idea and saves computationalism. 
> Personally, 
> I would bet in favour of computationalism being true, but I cannot say that I 
> am sure.
> Stathis Papaioannou

I would bet on computationalism too.  But I still think the conclusion that any 
physical process, even the null one, necessarily implements all possible 
consciousness is absurd.

Brent Meeker

You received this message because you are subscribed to the Google Groups 
"Everything List" group.