Thank you both for replying!
On 7/22/2010 8:47 PM, Brent Meeker wrote:
Sure. Consider a Mars Rover. It has a camera with many pixels. The
voltage of the photodetector of each pixel is digitized and sent to a
computer. The computer processes the data and recognizes there is a
rock in its path. The computer actuates some controller and steers
the Rover around the rock. So information has been integrated and
used. Note that if the information had not been used (i.e. resulted
in action in the environment) it would be difficult to say whether it
had been integrated or merely transformed and stored.
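The Rover pipeline Brent describes can be sketched in a few lines (a toy illustration; the function names, threshold, and actions are all hypothetical, not actual Rover software):

```python
# Toy sketch of the pipeline: digitized pixel voltages -> processing -> action.

def detect_obstacle(pixels, threshold=200):
    """Flag an obstacle if any digitized pixel value exceeds a brightness threshold."""
    return any(p > threshold for p in pixels)

def steer(pixels):
    """Combine the pixel data into a single action in the environment."""
    return "turn_left" if detect_obstacle(pixels) else "continue_straight"

frame = [12, 35, 240, 18]  # one pixel crosses the threshold: the "rock"
print(steer(frame))        # prints "turn_left"
```

The point of the example is Brent's closing remark: only because the pixel data issues in an action (`steer`) can we say the information was used, rather than merely stored.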
Isn't this the same as the digital camera sensor chip? Aren't the
functions you're describing built on this foundation of independent,
minimal repertoires, all working independently of each other? I can see
how, from our external point of view, it seems like one entity, but when
we look at the hardware, isn't it functionally the same as the sensor
chip in the quote from Tononi? That is, even the CPU that is fed the
information from the camera works in a similar way. Tononi, in
/"Qualia: The Geometry of Integrated Information"/, says:
"Integrated information is measured by comparing the actual
repertoire generated by the system as a whole with the combined
actual repertoires generated independently by the parts."
So, what I mean is: the parts account for all the information in the
system; there is no additional information generated as integrated
information (which Tononi refers to as "phi", Φ).
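The "whole versus parts" comparison in the Tononi quote can be illustrated with a toy calculation. Below I use total correlation (sum of part entropies minus whole entropy), a standard information-theoretic quantity, only as a stand-in; Tononi's phi is defined differently, over cause-effect repertoires and partitions:

```python
# Toy "whole vs. parts" comparison for a two-bit system (NOT Tononi's phi).
from math import log2

def entropy(dist):
    """Shannon entropy in bits of a {outcome: probability} distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, idx):
    """Distribution of one part, obtained by summing out the other."""
    m = {}
    for state, p in joint.items():
        m[state[idx]] = m.get(state[idx], 0.0) + p
    return m

def total_correlation(joint):
    """Sum of part entropies minus whole entropy; zero iff the parts are independent."""
    parts = sum(entropy(marginal(joint, i)) for i in (0, 1))
    return parts - entropy(joint)

# Independent parts (like a camera's photodiodes): joint = product of marginals.
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
# Correlated parts: the two bits always agree.
correlated = {(0, 0): 0.5, (1, 1): 0.5}

print(total_correlation(independent))  # 0.0 bits: the parts account for everything
print(total_correlation(correlated))   # 1.0 bit: the whole carries extra structure
```

The camera-sensor case corresponds to the first distribution: whatever measure one uses, the whole adds nothing beyond the independent parts.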
On 7/23/2010 12:15 AM, Jason Resch wrote:
A Turing machine can essentially do anything with information that can
be done with information. They are universal machines in the same
sense that a pair of headphones is a universal instrument. Practical
implementations have limits (a Turing machine has limited available
memory; a pair of headphones has a limited frequency and amplitude
range), but theoretically each has an infinite repertoire.
I hope no one will be offended if I borrow a quote:
"At any moment there is one symbol in the machine; it is called the
scanned symbol. The machine can alter the scanned symbol and its
behavior is in part determined by that symbol, but the symbols on
the tape elsewhere do not affect the behavior of the machine."
(Turing 1948, p. 61)
I'm sure none of you needed the reminder, it's only so that I may
point directly to what I mean. Now, doesn't this - the nature of a
Turing machine - fundamentally exclude the ability to integrate
information? The computers we have today do not integrate information
to any significant extent, as Tononi explained with his digital camera
example. Is this a fundamental limit of the Turing machine, or just a
limit of our current implementations?
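Turing's locality point can be made concrete with a minimal simulator (a hypothetical toy machine; the names, blank symbol, and transition table are my own illustration): each step consults only the current state and the single scanned symbol, never the rest of the tape.

```python
# Minimal Turing-machine loop: the next action depends only on
# (state, scanned symbol); the rest of the tape plays no role in the choice.
from collections import defaultdict

def run(transitions, tape, state="start", steps=100):
    tape = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank symbol
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        scanned = tape[head]                          # the only symbol consulted
        state, write, move = transitions[(state, scanned)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# A machine that flips every bit until it reaches a blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run(flip, "0110"))  # prints "1001_"
```

Whether this strictly local mechanism can or cannot "integrate" information in Tononi's sense is exactly the question at issue; the sketch only shows what the mechanism is.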
There is no conceivable instrument whose sound could not be
reproduced by an ideal pair of headphones, just as there is no
conceivable physical machine whose behavior could not be reproduced by
an ideal Turing machine. This implies that, given enough memory and
the right programming, a Turing machine can perfectly reproduce the
behavior of a person's brain.
If an ideal Turing machine cannot integrate information, then the
brain is a physical machine whose behavior can't be reproduced by an
ideal Turing machine. No matter how much memory the Turing machine has,
its mechanism prevents it from integrating that information, and
without integration, there is no subjective experience.
Does this make the Turing machine conscious? If not, it implies that
someone you know could have their brain replaced by a Turing machine,
and that person would in every way act as the original person, yet it
wouldn't be conscious. It would still claim to be conscious, still
claim to feel pain, still be capable of writing a philosophy paper
about the mysteriousness of consciousness. If a non-conscious entity
could in every way act as a conscious entity does, then what is the
point of consciousness? There would be no reason for it to evolve if
it served no purpose. Also, what sense would it make for
non-conscious entities to contemplate and write e-mails about
something they presumably don't have access to? (As Turing machines
running brain software necessarily would).
I wonder if this is what the vast majority of AI work done so far
is working towards: philosophical zombies. We can very likely, and in
the not-too-distant future, build artifacts that are so life-like they
can trick some of us into believing they are conscious, but until
hardware has been constructed that can function in the same manner as
the neurons in the corticothalamic area of the brain, or surpass them,
we won't have significantly conscious artifacts. No amount of
computational modeling will make up for the physical inability to
integrate information.
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to everything-l...@googlegroups.com.