On Fri, Jul 23, 2010 at 10:17 AM, Allen <allenkallenb...@yahoo.ca> wrote:

>       Thank you both for replying!
>
>
> On 7/22/2010 8:47 PM, Brent Meeker wrote:
>
> Sure.  Consider a Mars Rover.  It has a camera with many pixels.  The
> voltage of the photodetector of each pixel is digitized and sent to a
> computer.  The computer processes the data and recognizes there is a rock in
> its path.  The computer actuates some controller and steers the Rover around
> the rock.  So information has been integrated and used.  Note that if the
> information had not been used (i.e. resulted in action in the environment)
> it would be difficult to say whether it had been integrated or merely
> transformed and stored.
>
> Brent
>
>
>      Isn't this the same as the digital camera sensor chip?  Aren't the
> functions you're describing built on this foundation of independent, minimal
> repertoires, all working independently of each other?  I can see how, from
> our external point of view, it seems like one entity, but when we look at
> the hardware, isn't it functionally the same as the sensor chip in the quote
> from Tononi?  That is, even the CPU that is fed the information from the
> camera works in a similar way.  Tononi, in *"Qualia: The Geometry of
> Integrated Information"*, says:
>
> "Integrated information is measured by comparing the actual repertoire
> generated by the system as a whole with the combined actual repertoires
> generated independently by the parts."
>
>       So, what I mean is, the parts account for all the information in the
> system; there is no additional information generated as integrated
> information (which Tononi refers to as "phi", Φ).
>
>
In the case of a digital camera, you could say the photodetectors each map
directly to memory locations, and so they can be completely separated and
their behavior remains the same.  That isn't true of the Mars rover,
whose software must evaluate the pattern across the memory locations to
identify and avoid objects.  You cannot separate the Mars rover into
components that behave identically in isolation.
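
A toy sketch of the distinction in Python (the digitization scheme, the
run-of-bright-pixels rule, and the threshold are all invented for
illustration, not anything from the actual rover or from Tononi):

# Camera: each memory cell depends only on its own photodetector, so the
# pixels can be processed completely independently of one another.
def digitize(voltage):
    return int(voltage * 255)  # toy 8-bit quantization

def camera_store(pixel_voltages):
    return [digitize(v) for v in pixel_voltages]

# Rover: the decision depends on the joint pattern across memory
# locations; no single pixel, taken alone, determines the outcome.
def obstacle_ahead(image_row, threshold=100):
    run = 0
    for value in image_row:
        run = run + 1 if value > threshold else 0
        if run >= 5:  # five bright pixels in a row => treat it as a rock
            return True
    return False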



>
>
> On 7/23/2010 12:15 AM, Jason Resch wrote:
>
>  A Turing machine can essentially do anything with information that can be
> done with information.  They are universal machines in the same sense that a
> pair of headphones is a universal instrument.  Though practical
> implementations have limits (a Turing machine has limited available memory;
> a pair of headphones has a limited frequency and amplitude range),
> theoretically each has an infinite repertoire.
>
>
>      I hope no one will be offended if I borrow a quote I found on
> Wikipedia:
>
> "At any moment there is one symbol in the machine; it is called the scanned
> symbol. The machine can alter the scanned symbol and its behavior is in part
> determined by that symbol, but the symbols on the tape elsewhere do not
> affect the behavior of the machine." (Turing 1948, p. 61)
>
>      I'm sure none of you needed the reminder, it's only so that I may
> point directly to what I mean.  Now, doesn't this - the nature of a Turing
> machine - fundamentally exclude the ability to integrate information?  The
> computers we have today do not integrate information to any significant
> extent, as Tononi explained with his digital camera example.  Is this a
> fundamental limit of the Turing machine, or just our current technology?
>
>
>
That quote reminds me of the Chinese Room thought experiment, in which a
person is used as the machine to do the sequential processing by blindly
following a large set of rules.  I think a certain pattern of thinking about
computers leads to this confusion.  It is common to regard the CPU, reading
and acting upon one symbol at a time, as the brains of the machine.  At any
one time we see the CPU acting upon only one symbol, so it seems as though it
alone is performing operations on the data; but in reality the past data has
in no small part led to the machine's current state and position, and in this
sense the data is defining and controlling the operations performed upon
itself.
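
Here is a minimal sketch of that point in Python (the transition table is a
toy I made up, not any particular published machine): at each step the head
reads only the scanned symbol, yet symbols written at earlier steps determine
which rules fire later, so the tape ends up steering the machine's own
behavior.

# Minimal Turing-machine stepper with an invented rule table.
# (state, scanned_symbol) -> (symbol_to_write, head_move, next_state)
rules = {
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", "0"): ("X", +1, "mark"),  # a 0 written earlier changes what happens now
    ("mark", "1"): ("0", +1, "scan"),
    ("mark", "0"): ("0",  0, "halt"),
}

tape = list("110101")
state, head = "scan", 0
while state != "halt" and 0 <= head < len(tape):
    symbol = tape[head]                  # only the scanned symbol is read...
    write, move, state = rules[(state, symbol)]
    tape[head] = write                   # ...but every write shapes later steps
    head += move
print("".join(tape))                     # final tape reflects the whole history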

For example, create a chain of cells in a spreadsheet.  Define B1 = A1*A1,
C1 = B1 - A1, and D1 = B1 + 2*C1.  Now when you put data in cell A1, the
computation is performed and carried through a range of different memory
locations (positions on the tape).  The CPU at no one time performs the
computation to get from the input (A1) to the output (D1); instead it
performs a chain of intermediate computations and goes through a chain of
states, with intermediate states determining the final state.  To determine
the future evolution of the system (the machine and the tape), the
entire system has to be considered.  Just as in the Chinese Room thought
experiment, it is not the human following the rulebook which creates the
consciousness, but the system as a whole (all the rules of processing
together with the one who follows the rules).
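
The same chain in Python, as a direct transcription of those cell formulas;
note that no single step maps the input straight to the output:

def spreadsheet_chain(a1):
    b1 = a1 * a1       # B1 = A1*A1
    c1 = b1 - a1       # C1 = B1 - A1
    d1 = b1 + 2 * c1   # D1 = B1 + 2*C1
    return d1

print(spreadsheet_chain(3))  # B1=9, C1=6, D1=21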


>    There is no conceivable instrument whose sound could not be reproduced
> by an ideal pair of headphones, just as there is no conceivable physical
> machine whose behavior could not be reproduced by an ideal Turing machine.
>  This implies that, given enough memory and the right programming, a Turing
> machine can perfectly reproduce the behavior of a person's brain.
>
>
>      If an ideal Turing machine cannot integrate information, then the
> brain is a physical machine whose behavior can't be reproduced by an ideal
> Turing machine.  No matter how much memory the Turing machine has, its
> mechanism prevents it from integrating that information, and without
> integration, there is no subjective experience.
>
>
Do you consider a neuron receiving input from several other neurons to be
integrating information?  Computers have an analogous behavior, where the
result of a memory location is determined by multiple other (input) memory
locations.  I am not very familiar with Tononi's definition of information
integration, but if it is something that neurons do, it is certainly
something computers can do as well.
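
As a toy illustration (the weights, inputs, and threshold are invented), here
is a memory location whose value is determined jointly by several input
locations, in the style of a simple weighted-sum neuron:

# Three upstream "neurons" feeding one downstream unit.
inputs  = [0.9, 0.1, 0.7]         # upstream activity (made-up values)
weights = [0.5, -0.3, 0.8]        # synaptic strengths (made-up values)

total = sum(w * x for w, x in zip(weights, inputs))
output = 1 if total > 0.5 else 0  # fire only if the combined input crosses threshold
print(total, output)              # 0.98 1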



>
>  Does this make the Turing machine conscious?  If not, it implies that
> someone you know could have their brain replaced by a Turing machine, and
> that person would in every way act as the original person, yet it wouldn't
> be conscious.  It would still claim to be conscious, still claim to feel
> pain, and still be capable of writing a philosophy paper about the
> mysteriousness of consciousness.  If a non-conscious entity could in every
> way act as a conscious entity does, then what is the point of
> consciousness?  There would be no reason for it to evolve if it served no
> purpose.  Also, what sense would it make for non-conscious entities to
> contemplate and write e-mails about something they presumably don't have
> access to?  (As Turing machines running brain software necessarily would.)
>
>
>      I wonder if this is what the vast majority of AI work done so far is
> working towards: philosophical zombies.  We can very likely, and in the
> not-too-distant future, build artifacts that are so life-like they can trick
> some of us into believing they are conscious, but until hardware has been
> constructed that can function in the same manner as the neurons in the
> corticothalamic area of the brain, or surpass them, we won't have
> significantly conscious artifacts.  No amount of computational modeling will
> make up for the physical inability to integrate information.
>


Forgive me for my lack of knowledge in the subject, but what is it that
neurons in the corticothalamic area of the brain do that is different from
what other neurons do or can do?


Jason
