On Sat, Jul 24, 2010 at 3:17 PM, Allen <allenkallenb...@yahoo.ca> wrote:

>  On 7/24/2010 12:55 AM, Jason Resch wrote:
>
>  In the case of a digital camera, you could say the photodetectors each map
> directly to memory locations, and so they can be completely separated and
> their behavior remains the same.  That isn't true with the Mars rover,
> whose software must evaluate the pattern across the memory locations to
> identify and avoid objects.  You cannot separate the Mars rover into
> components that behave identically in isolation.
>
>
>      Thank you for replying.
>
>      Doesn't the rover's software run on hardware that is functionally
> similar to the photodetectors, in that the memory locations could be
> separated yet still behave the same?
>
>
>
I agree with Quentin's answer below.  When information is processed
recursively, iteratively, or hierarchically, with each result building upon
earlier ones, it can no longer be viewed as conveying the same meaning.  An
analogy is the meaning of a book, which is built of chapters, which are built
of paragraphs, sentences, words, and letters.  There is little to no meaning
in individual letters, but when they are organized appropriately and combined
in certain ways the meaning appears.  Looking at individual operations
performed by a machine is like focusing on individual letters in a book.


>  That quote reminds me of the Chinese Room thought experiment, in which a
> person is used as the machine to do the sequential processing by blindly
> following a large set of rules.  I think a certain pattern of thinking about
> computers leads to this confusion.  It is common to think of the CPU,
> reading and acting upon one symbol at a time, as the brains of the machine.
> At any one time we only see that CPU acting upon one symbol, so it seems
> like it is performing operations on the data, but in reality the past data
> has in no small part led to this current state and position; in this sense
> the data is defining and controlling the operations performed upon itself.
>
>  For example, create a chain of cells in a spreadsheet.  Define B1 =
> A1*A1, C1 = B1 - A1, and D1 = B1+2*C1.  Now when you put data in cell A1,
> the computation is performed and carried through a range of different memory
> locations (positions on the tape).  The CPU at no one time performs the
> computation to get from the input (A1) to the output (D1); instead it
> performs a chain of intermediate computations and goes through a chain of
> states, with intermediate states determining the final state.  To determine
> the future evolution of the system (the machine and the tape), the entire
> system has to be considered.  Just as in the Chinese Room thought
> experiment, it is not the human following the rulebook which creates the
> consciousness, but the system as a whole (all the rules of processing
> together with the one who follows the rules).
>
>
>      I'm sure I have confused patterns of thinking where computers are
> concerned.  I haven't spent very much time with the Chinese Room thought
> experiment, either.  I followed your instructions with the spreadsheet.
> Still, I don't understand how this can explain consciousness.
>
>
I was trying to show how multiple memory locations can be processed to
generate a result.  Extending this, multiple results can then be taken
together and processed to make a more meaningful result, and so on.  The
highest levels of these layers of processing are where consciousness as we
know it would appear.
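
To make the spreadsheet example above concrete, here is a minimal Python
sketch of the same chain (the variable names simply mirror the cells, and the
input value 5 is arbitrary):

# No single step maps the input A1 to the output D1; the result emerges
# from a chain of intermediate values, each held in its own "memory
# location", with each intermediate determining the next.
def evaluate_chain(a1):
    b1 = a1 * a1        # B1 = A1*A1
    c1 = b1 - a1        # C1 = B1 - A1
    d1 = b1 + 2 * c1    # D1 = B1 + 2*C1
    return b1, c1, d1

print(evaluate_chain(5))    # (25, 20, 65): the output depends on every step

The point is that the result at each layer becomes the input to the next,
which is what I mean by layers of processing.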


>
>     Forgive me for my lack of knowledge in the subject, but what is it
> that neurons in the corticothalamic area of the brain do that is different
> from what other neurons do or can do?
>
>
>      I apologize, I really should have explained this in the post you've
> quoted from.  Reading it back to myself now, it seems out of context.  The
> mention of it again comes from my understanding of Tononi's work.  I have a
> very brief overview of the thalamocortical region, which I believe applies
> just as well (for illustrative purposes) to the corticothalamic system.  (I
> think the term "thalamo-cortico-thalamic system" refers to both as a single
> entity.)
>
> "There are hundreds of functionally specialized thalamocortical areas, each
> containing tens of thousands of neuronal groups, some dealing with responses
> to stimuli and others with planning and execution of action, some dealing
> with visual and others with acoustic stimuli, some dealing with details of
> the input and others with its invariant or abstract properties.  These
> millions of neuronal groups are linked by a huge set of convergent or
> divergent, reciprocally organized connections that make them all hang
> together in a single, tight meshwork while they still maintain their local
> functional specificity.  The result is a three-dimensional tangle that
> appears to warrant at least the following statement: Any perturbation in one
> part of the meshwork may be felt rapidly everywhere else.  Altogether, the
> organization of the thalamocortical meshwork seems remarkably suited to
> integrating a large number of specialists into a unified response." (From
> the book "A Universe of Consciousness: How Matter Becomes Imagination"
> written by Gerald M. Edelman and Giulio Tononi.)
>
>
Thanks, that is helpful.  Perhaps there is a relation between these neurons
and the "highest layers of processing".  Take vision, for example.  At the
lowest level are single neurons connected to the retina which carry
information regarding the reception of light by individual rods or cones.
First these impulses have to be mapped to a field, where the next layer
might determine the perceived color for each position in the field; then,
taking this intermediate result, the next layer may apply object
recognition to these patches of light; and finally meaning (taken from
memories and information known about those objects) is applied to each
object.  In a normally functioning brain, we're only consciously aware of
this highest level, but some forms of brain damage can change this.  (See:
https://secure.wikimedia.org/wikipedia/en/wiki/Visual_agnosia ).
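
As a toy illustration only (none of these stages, thresholds, or labels are
meant to model real neurons), the layering might look something like this in
Python:

# Invented receptor activations -> field positions -> color labels ->
# object labels -> meaning.  Only the top layer resembles what we are
# consciously aware of.
raw = [0.9, 0.8, 0.1, 0.2, 0.85, 0.9]                  # "rod/cone" activations
field = [raw[i:i + 2] for i in range(0, len(raw), 2)]  # map impulses to positions
colors = ["bright" if sum(p) / len(p) > 0.5 else "dark" for p in field]
objects = ["lamp" if c == "bright" else "shadow" for c in colors]
meaning = ["light source" if o == "lamp" else "no light" for o in objects]

print(field)     # [[0.9, 0.8], [0.1, 0.2], [0.85, 0.9]]
print(colors)    # ['bright', 'dark', 'bright']
print(objects)   # ['lamp', 'shadow', 'lamp']
print(meaning)   # ['light source', 'no light', 'light source']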


>       The important aspect of this, from my perspective with IIT in mind,
> is that it produces a great deal of what Tononi calls effective information,
> measured with the Kullback-Leibler divergence.  If you haven't read Tononi's
> work, I think this sums up that part I'm referring to very well:
>
> "Informally speaking, the integrated information owned by a system in a
> given state can be described as the information (in the
> Theory of Information sense) generated by a system in the transition from
> one given state to the next one as a consequence of the causal interaction
> of its parts above and beyond the sum of information generated independently
> by each of its parts." (Alessandro Epasto, Enrico Nardelli 2010 - "On a
> Model for Integrated Information")
>
>       An example (a very good example, I think) can be found in both of
> the articles which I linked at the bottom of my original post, in the
> sections "A Mathematical Analysis" --> "Integration" or "Model" -->
> "Integration", respectively.
>
>
>
The Nardelli quote sounds very similar to what is done in most computer
programs.  I would therefore say that Turing machines do integrate
information, in the sense Nardelli uses the term.  You must, however,
consider not just the Turing machine, but the tape as well, since they are
causally connected/related to each other.
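
To make the "above and beyond the sum of its parts" idea a little more
concrete, here is a rough Python sketch of the kind of calculation the
Epasto/Nardelli passage describes.  It is only an informal illustration of a
Kullback-Leibler divergence between two made-up distributions, not Tononi's
actual measure, and all the numbers are invented:

from math import log2

def kl_divergence(p, q):
    """D_KL(p || q) in bits, assuming q[i] > 0 wherever p[i] > 0."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-state distribution of a two-part system whose parts
# causally interact (4 joint states).
p_whole = [0.4, 0.1, 0.1, 0.4]

# What the two parts would generate independently: each part 50/50, so the
# product over the 4 joint states is uniform.
p_parts = [0.25, 0.25, 0.25, 0.25]

print(kl_divergence(p_whole, p_parts))  # about 0.28 bits beyond the parts alone

A Turing machine stepping over its tape can produce exactly this kind of
dependence between locations, which is why I say the machine and the tape
have to be considered together.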


>
>  Do you consider a neuron receiving input from several other neurons as
> integrating information?  Computers have an analogous behavior where the
> result of a memory location is determined by multiple other (input) memory
> locations.  I am not very familiar with Tononi's definition of information
> integration, but if it is something that neurons do it is certainly
> something computers can do as well.
>
>
>      I've taken this question out of the order you posted it, because I
> believe the quote above (From Epasto and Nardelli) is relevant.  I do not
> believe your neuron example would be considered integrated information,
> because a single neuron receiving the input of several neurons doesn't
> necessarily generate information beyond the input itself.  What I mean is,
> it doesn't generate information from "causal interaction of its parts".  I'm
> really just thinking about Tononi's photodiode example, and I don't think
> the neuron example is an analogous process.
>


Would you consider the firing or non-firing of a neuron to count as
information?
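
If it does, then a sketch like the following (with arbitrary weights and
threshold, purely for illustration) seems analogous to a memory location
whose value is determined by several input locations:

def neuron_fires(inputs, weights, threshold=1.0):
    """Return 1 (fire) or 0 (silent): a single bit of information."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

print(neuron_fires([1, 0, 1], [0.6, 0.9, 0.5]))  # 1: the weighted sum 1.1 crosses the threshold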


Jason
