> From: stath...@gmail.com
> Date: Wed, 29 Apr 2009 23:24:35 +1000
> Subject: Re: Consciousness is information?
> To: everything-list@googlegroups.com
> 2009/4/29 Jesse Mazer <laserma...@hotmail.com>:
>> Kelly wrote:
>>> Not if information exists platonically. So the question is, what does
>>> it mean for a physical system to "represent" a certain piece of
>>> information? With the correct "one-time pad", any desired information
>>> can be extracted from any random block of data obtained by making any
>>> desired measurement of any physical system.
>>> If I take a randomly generated one-time pad and XOR it with some real
>>> block of data, the result will still be random. But somehow the
>>> original information is there. You have the same problem with
>>> computational processes, as pointed out by Putnam and Searle. The
>>> molecular/atomic vibrations of the particles in my chair could be
>>> interpreted, with the right mapping, as implementing any conceivable
>>> computation.
>>> So unambiguously connecting information to the "physical" is not so
>>> easy, I think.
>> This is essentially the problem discussed by Chalmers in "Does a Rock
>> Implement Every Finite-State Automaton?"
>> at http://consc.net/papers/rock.html , and I think it's also the idea
>> behind Maudlin's Olympia thought experiment. But for anyone who wants to
>> imagine some set of "psychophysical laws" connecting physical states to the
>> measure of OMs, I think there may be ways around it. For example, instead of
>> associating an OM with the passive idea of "information", can't you
>> associate it with the causal structure instantiated by a computer program
>> that's actually running, as opposed to something like a mere static printout
>> of its states? Of course you'd need a precise mathematical definition of the
>> "causal structure" of a set of causally-related physical events, but I don't
>> see any reason why it should be impossible to come up with a good
>> definition. I think Chalmers attempts one based on counterfactuals in that
>> paper, though I'm not sure if I like that approach.
> The atoms vibrating in a rock have a causal structure, insofar as an
> atom moves when it is jiggled by its neighbours in perfect accordance
> with the laws of physics.
They do have *a* causal structure, but I don't see why we should expect to find 
a set of events in the rock whose causal structure is isomorphic to the causal 
structure of a computer running a detailed simulation of a human brain for some 
extended period of time.
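To make the worry (and the counterfactual response) concrete, here's a minimal sketch of the Putnam/Searle-style trivial mapping; the function names are mine, purely for illustration:

```python
# Illustrative sketch (hypothetical names): given ANY sequence of distinct
# "rock" microstates and ANY computation trace of the same length, we can
# always build a lookup table mapping one onto the other.
def build_mapping(rock_states, computation_states):
    """Map each distinct physical state to a computational state.

    This works whenever the rock never revisits a microstate (generically
    true of thermal noise), no matter what the computation is -- the
    Putnam/Searle triviality worry.
    """
    assert len(rock_states) == len(computation_states)
    assert len(set(rock_states)) == len(rock_states)  # states must be distinct
    return dict(zip(rock_states, computation_states))

# Under SOME mapping, an arbitrary rock trace "implements" an AND gate:
rock_trace = ["s17", "s4", "s99"]        # arbitrary microstate labels
and_gate_trace = [(1, 1), "AND", 1]      # arbitrary computation trace
mapping = build_mapping(rock_trace, and_gate_trace)
assert [mapping[s] for s in rock_trace] == and_gate_trace

# But the mapping has no counterfactual robustness: a microstate the rock
# never actually entered has no image at all, whereas a genuine
# implementation would have to handle the states that *would* have occurred.
assert "s5" not in mapping
```

The last assertion is the point of a counterfactual-based definition of causal structure: the after-the-fact lookup table only covers the one trajectory that happened, so it fails exactly where a real implementation would have to succeed.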

>And in the possibility space of weird alien
> computers it seems to me that there will always be a computer
> isomorphic with the vibration of atoms in a given rock.
What do you mean by "weird alien computers"? If we had a way of defining the 
notion of "causal structure", I'm sure it would be true that in the space of 
all computer programs (running on any sort of computer) there would be programs 
whose causal structure was isomorphic to the causal structure of vibrations in 
a rock, but this might be quite distinct from the causal structure associated 
with the brains of sentient observers. If you take a panpsychist approach like 
Chalmers's, it might be that all causal structures have *some* type of qualia 
associated with them, even the ones in a rock (just as we might suppose that 
even an insect or an amoeba is not totally devoid of inner experience), but the 
sort of self-aware conceptual thought that humans have would probably be 
limited to a small subset of all possible causal structures.

> requirement becomes even easier to satisfy if we allow a computation
> to be broken up into short intervals on separate computers of
> different design, with the final stream of consciousness requiring
> nothing to bind it together other than the content of the individual
> OM's.

As long as the separate computers are each passing the results of their 
computation on to the next computer in the series, we can talk about the 
causal structure instantiated by the whole series. And if they aren't, then 
according to the idea of associating OMs with causal structures, we might have 
to conclude that these computers are not really instantiating an OM of a 
complex humanlike observer even if by some outrageous coincidence the output of 
all these separate computers *looked* just like the output of a single computer 
running a simulation of the brain of a humanlike observer.
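
Incidentally, Kelly's one-time-pad point quoted at the top can be made concrete with a short sketch (illustrative only, using Python's standard `secrets` module):

```python
import secrets

# XORing data with a uniformly random one-time pad yields a uniformly
# random-looking block, yet the "information" is recoverable given the pad --
# and conversely, a suitable "pad" extracts any desired message from any
# random block of the same length.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

message = b"the quick brown fox"
pad = secrets.token_bytes(len(message))   # random one-time pad
ciphertext = xor_bytes(message, pad)      # statistically indistinguishable from noise

# Decryption with the right pad recovers the message exactly:
assert xor_bytes(ciphertext, pad) == message

# And for ANY random block, there exists a pad "decoding" it to ANY desired
# message of the same length -- which is why the pad, not the block, seems to
# carry the information:
random_block = secrets.token_bytes(len(message))
contrived_pad = xor_bytes(random_block, message)
assert xor_bytes(random_block, contrived_pad) == message
```

This is just the standard XOR involution (x ^ k ^ k == x); the philosophical question is whether that symmetry between "data" and "key" undermines attempts to locate information in the physical block alone.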

You received this message because you are subscribed to the Google Groups 
"Everything List" group.