Re: Implications of Tononi's IIT?

2010-08-26 Thread Allen Kallenbach


--- On Sun, 7/25/10, Brent Meeker meeke...@dslextreme.com wrote:

From: Brent Meeker meeke...@dslextreme.com
Subject: Re: Implications of Tononi's IIT?
To: everything-list@googlegroups.com
Received: Sunday, July 25, 2010, 7:10 PM





  
  
On 7/24/2010 1:32 PM, Allen wrote:


  
  
On 7/23/2010 3:03 PM, Brent Meeker wrote:

  
I'd say the information comes from the surface of Mars - it is
integrated (which means summed into a whole) by the Rover and acted
upon.  Tononi seems to be abusing language, using "integrated" when
he actually means "generated".  Whether there is information generated
would depend on how you define it and where you draw the boundaries of
the system.  Shannon information is a measure of the reduction in
uncertainty - so if you were uncertain about what the Mars Rover would
do, then you could say its action generated information.  But if you
knew every detail of its programming and memory and the surface scene
it viewed, you might say it didn't generate any information.



Brent

  
  

 Thanks for replying.

  

 I hope my comments to Jason explain my difference in perspective
here.  I don't think the information is integrated in the way Tononi
uses the term.  I don't view this system as being connected in such a
way that information is generated by causal interactions /among/
rather than /within/ its parts (Balduzzi D, Tononi G 2009).  I
think the physical structures of the computers involved in this example
exclude the generation of additional information via re-entrant
feedback between any of the components (I don't know the proper terms
to use here).  There's no component saying to its neighbour, "I see
you're not 'firing', which means possibilities p & q must be
excluded"; everyone just goes about their business independently.
Isn't that how it works at the fine scale, where everything is binary?
Nobody checks which of their neighbours are 0's and which are 1's?




I think you're confused about Tononi's theory.  He talks about
generating "effective information", which he measures by the
Kullback-Leibler difference between the potential information, what
Shannon would call the bandwidth, and that which the mechanism actually
realizes.  So the effective information is greatest when the potential
states are many and the actual ones are few.  So the Mars Rover is
generating a lot of effective information when it picks out a single
action based on a whole range of potential inputs.  For example, it
chose to go around the rock - but it would have made the same choice
if dozens of pixels in its camera switched digits.  It would have
chosen to go around a hole as well as a rock.  It would have chosen to
go around the rock whether it were night or day - even though the camera
image would have been quite different.



Brent

 Brent,

 For some reason this message didn't make its way to my inbox until today 
(or yesterday).  I had been trying a new email client until yesterday.  It was 
not a success.

     I was confused about Tononi's theory.  When I read the specific portion of 
the text regarding effective information, I made an unfounded mental leap, 
putting something there that didn't belong.  Now that you've cleared it 
up, I can't even remember fully what that phantom was; I just know it wasn't 
what you've stated here.

 I have a few textbooks on information theory; most are beyond my ability 
and I've put them aside to read at a later date.  I never believed I knew a lot 
about it, but now I see I know even less about information than I thought.

 Sorry to have taken so long to reply, but I do appreciate your 
clarification.



 I hope some of this is sensible.  I've only ever read about these
things; this is the first time I've tried to explain any of them, and the
holes in my understanding have never been so blatantly obvious.

  

 -Allen

-- 

You received this message because you are subscribed to the Google
Groups Everything List group.

To post to this group, send email to everything-l...@googlegroups.com.

To unsubscribe from this group, send email to
everything-list+unsubscr...@googlegroups.com.

For more options, visit this group at
http://groups.google.com/group/everything-list?hl=en.




 





Re: Implications of Tononi's IIT?

2010-08-22 Thread Brent Meeker

On 7/24/2010 1:32 PM, Allen wrote:

On 7/23/2010 3:03 PM, Brent Meeker wrote:
I'd say the information comes from the surface of Mars - it is 
integrated (which means summed into a whole) by the Rover and acted 
upon.  Tononi seems to be abusing language, using "integrated" 
when he actually means "generated".  Whether there is information 
generated would depend on how you define it and where you draw the 
boundaries of the system.  Shannon information is a measure of the 
reduction in uncertainty - so if you were uncertain about what the 
Mars Rover would do, then you could say its action generated 
information.  But if you knew every detail of its programming and 
memory and the surface scene it viewed, you might say it didn't 
generate any information.


Brent


 Thanks for replying.

 I hope my comments to Jason explain my difference in perspective 
here.  I don't think the information is integrated in the way Tononi 
uses the term.  I don't view this system as being connected in such a 
way that information is generated by causal interactions /among/ 
rather than /within/ its parts (Balduzzi D, Tononi G 2009).  I think 
the physical structures of the computers involved in this example 
exclude the generation of additional information via re-entrant 
feedback between any of the components (I don't know the proper terms 
to use here).  There's no component saying to its neighbour, "I see 
you're not 'firing', which means possibilities p & q must be 
excluded"; everyone just goes about their business independently.  
Isn't that how it works at the fine scale, where everything is 
binary?  Nobody checks which of their neighbours are 0's and which are 
1's?


I think you're confused about Tononi's theory.  He talks about 
generating "effective information", which he measures by the 
Kullback-Leibler difference between the potential information, what 
Shannon would call the bandwidth, and that which the mechanism actually 
realizes.  So the effective information is greatest when the potential 
states are many and the actual ones are few.  So the Mars Rover is 
generating a lot of effective information when it picks out a single 
action based on a whole range of potential inputs.  For example, it 
chose to go around the rock - but it would have made the same choice if 
dozens of pixels in its camera switched digits.  It would have chosen 
to go around a hole as well as a rock.  It would have chosen to go around 
the rock whether it were night or day - even though the camera image would 
have been quite different.


Brent
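Brent's reading of "effective information" - the gap between the many potential states and the few the mechanism actually realizes - can be sketched numerically. This is only a toy illustration of the idea, not Tononi's formal measure; the eight-state prior and one-action posterior below are invented for the example:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Eight equally likely potential input states (the "bandwidth").
prior = [1 / 8] * 8

# The mechanism picks out a single action from that whole range,
# so the realized distribution is sharply peaked.
posterior = [1.0, 0, 0, 0, 0, 0, 0, 0]

# Effective information, read informally as the divergence of what the
# mechanism actually did from what it potentially could have done:
ei = kl_divergence(posterior, prior)
print(ei)  # 3.0 bits - maximal when many potential states collapse to one outcome
```

With eight equiprobable potential inputs collapsed onto one action the measure peaks at log2(8) = 3 bits, while a mechanism that left the distribution uniform would score 0.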



 I hope some of this is sensible.  I've only ever read about these 
things; this is the first time I've tried to explain any of them, and the 
holes in my understanding have never been so blatantly obvious.


 -Allen



Re: Implications of Tononi's IIT?

2010-07-27 Thread Allen

 On 7/25/2010 5:19 AM, Quentin Anciaux wrote:
As Jason pointed out below, you must take the software + hardware as a 
whole, like in the Chinese Room argument. If the Chinese Room were 
running a conscious program, consciousness wouldn't be in the man 
acting on the symbols nor in the rule book... Like Bruno says, 
consciousness supervenes on all functionally equivalent computations 
(there is an infinity of them, i.e., you have an infinity of possible 
implementations of the same computation). If consciousness were 
supervening on hardware your argument would stand... but it would work 
the same for a human brain.


 I'm sorry, I misunderstood Jason's message.  Thank you for making 
me see that.




Consciousness is a high-level phenomenon; it does not exist in parts 
taken separately. You won't find consciousness in a neuron, nor will 
you find it in an atom of a human being.


Information integration like what you're talking about only exists at 
a high level of information processing... So likewise, if you have a 
conscious program, you won't find consciousness as a subroutine.


 Thank you for your reply.  I see I've been confused about multiple 
things.  Your explanations were helpful!



 -Allen




Re: Implications of Tononi's IIT?

2010-07-27 Thread Allen

 On 7/25/2010 7:18 PM, Jason Resch wrote:
I agree with Quentin's answer below.  When information is processed 
recursively, iteratively, or hierarchically - used to build upon results - 
it can no longer be viewed as conveying the same meaning.  An analogy 
is the meaning of a book, which is built of chapters, which are built 
of paragraphs, sentences, words and letters.  There is little to no 
meaning in individual letters, but when organized appropriately and 
combined in certain ways the meaning appears.  Looking at individual 
operations performed by a machine is like focusing on individual 
letters in a book.


 Your analogy was very helpful.  I misunderstood your previous 
post, sorry.


Would you consider the firing or non-firing of a neuron to count as 
information?


 Yes, I would consider it to count as information, but I think it 
only counts as integrated information if the neuron is connected to 
other neurons in a way that 'tells them' whether it is firing or not, 
because either state excludes certain possibilities, and so it is 
informative.  I think of it as the neurons being so connected as to 
be able to 'watch' each other.


 Thank you for all of the other explanations you gave.  I don't 
have a response to them, but I do appreciate them.  You've cleared up 
some of my confusion.


 -Allen
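Allen's criterion - that a neuron's state counts as integrated information only when it excludes possibilities for connected neighbours - can be illustrated with a toy mutual-information calculation. The two-unit systems below are invented for the example:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Mutual information in bits between two variables, from joint samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

# Coupled units: the second 'watches' the first and takes the opposite state,
# so observing one excludes possibilities about the other.
coupled = [(0, 1), (1, 0), (0, 1), (1, 0)]

# Independent units: everyone goes about their business; no state is excluded.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(mutual_information(coupled))      # 1.0 - one full bit is shared
print(mutual_information(independent))  # 0.0 - nothing is shared
```

In the coupled pair, observing one unit rules out half the possibilities about the other (1 bit); in the independent pair, nothing is ruled out.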




Re: Implications of Tononi's IIT?

2010-07-27 Thread Allen

 On 7/25/2010 12:14 PM, Bruno Marchal wrote:


On 24 Jul 2010, at 23:02, Allen wrote:


On 7/23/2010 1:55 PM, Bruno Marchal wrote:
I think this has nothing to do with technology.  It is just that 
consciousness is not related to the activity of the physical 
machine, but to the logic which makes the person supported by the 
computation integrating that information.


 Thank you for your reply.


You are welcome.





 I mention technology because, as of now, we don't seem to have 
any conscious artifacts.  Do you agree?



No, I disagree. But this may not be important, especially at the 
beginning of the argument. I can argue that all universal-purpose 
computers are conscious artifacts. But I agree that they have no means 
to manifest their consciousness relative to us. So it is only 
through theoretical computer science that you could eventually 
understand what I mean by that.

So I don't intend to insist on this point at all.


 I will have to begin learning theoretical computer science, then.  
I don't mean to trivialize it by stating it so lightly, but my interest 
has been excited.



If so, what do we need to construct conscious artifacts?


We need only to open our own mind. It is difficult because we are 
ourselves not really programmed to do that. In this respect I think 
that the Greek theologians, and many mystics (from East and West), and 
perhaps Salvia divinorum smokers, can get some glimpse of what it could 
mean to be conscious yet completely disconnected from our five 
senses, and from time and space. Some yogic introspection techniques can 
also lead to such an understanding. Some sleep experiences too. But I 
don't expect most people to get the point without much more theorizing.


 I can appreciate this, but I have not yet really grasped it.

If you don't agree, what has man constructed that is or may be 
conscious?


Any concrete or abstract Löbian machine or theory. Such entities can 
reflect themselves entirely. Those are universal machines (thus 
conscious, but once Löbian, I would say that they are as conscious as 
ourselves).






Or is my question nonsensical?


It is just a very difficult question, where we are deluded by years 
of evolution together with 1500 years of Aristotelian brainwashing. 
The subject is really taboo. You have to be able to doubt 
physicalism or materialism. It is better if your doubts are based on 
logic and observation, so that you can share them with others.


 I'm coming to realize I've been having problems here.






In a sense it is just false to relate consciousness to any third 
person describable activity, and in the end, if we are machines, our 
consciousness, which is a first person notion, is related to (not even 
defined by) all the possible computations going through the logical 
state of the machine. This entails that any machine looking at 
itself below its substitution level (the level at which it feels it 
survives an artificial digital substitution) will discover that the 
apparent material reality is multiple: matter relies on an 
infinity of computations. This is retrospectively confirmed by 
quantum mechanics.


In the end, matter is a construction of the mind, in the case that we are 
digital machines. The brain does not make consciousness; it filters 
it from infinities of first person histories. Tononi is a bit naïve, 
like many, on the mind-body (consciousness-reality) relationship. 
The integration does not rely on what a machine does, but on what an 
infinity of possible machines can do, and how a consistent environment 
reacts to what the machine (person) decides.
 I don't want to ignore this portion; it's just more advanced 
than I am. I don't have a comfortable grasp on the concepts, so I 
can't make even an attempt at a response.


Your honesty honors you. Note that I was summing up many years of 
solitary work in a highly counterintuitive field, so it is not 
astonishing that you have difficulties. My fault. Sorry.


 Thank you.  I appreciate the explanations even if I do not 
understand them well right now.  They're guidance, good for finding 
directions as to where I need to go.





  I want to ask a question about The Origin of Physical Laws and 
Sensations.  I don't understand it yet, I'll need to re-read the 
seventh step multiple times more before I figure it out comfortably.  
The fault is certainly my own ignorance, not your explanations. I'll 
be returning to it, and taking your advice on reading the List's 
archives.


 As to my question:  At the third step, you wrote "Given that 
Moscow and Washington are permutable without any noticeable changes 
for the experiencer, it is reasonable to ascribe a probability of ½ 
to the event 'I will be in Moscow (resp. Washington).'"  I don't 
understand the probability here.  If I am duplicated, won't there 
just be two Allens, AllenM (for Moscow) and AllenW (Washington)?



To understand this it is useful to grasp the difference between third 
person 

Re: Implications of Tononi's IIT?

2010-07-25 Thread Bruno Marchal


On 24 Jul 2010, at 23:02, Allen wrote:


On 7/23/2010 1:55 PM, Bruno Marchal wrote:
I think this has nothing to do with technology.  It is just that  
consciousness is not related to the activity of the physical  
machine, but to the logic which makes the person supported by the  
computation integrating that information.


 Thank you for your reply.


You are welcome.





 I mention technology because, as of now, we don't seem to have  
any conscious artifacts.  Do you agree?



No, I disagree. But this may not be important, especially at the  
beginning of the argument. I can argue that all universal-purpose  
computers are conscious artifacts. But I agree that they have no means  
to manifest their consciousness relative to us. So it is only  
through theoretical computer science that you could eventually  
understand what I mean by that.

So I don't intend to insist on this point at all.




If so, what do we need to construct conscious artifacts?


We need only to open our own mind. It is difficult because we are  
ourselves not really programmed to do that. In this respect I think  
that the Greek theologians, and many mystics (from East and West), and  
perhaps Salvia divinorum smokers, can get some glimpse of what it could  
mean to be conscious yet completely disconnected from our five  
senses, and from time and space. Some yogic introspection techniques can  
also lead to such an understanding. Some sleep experiences too. But I  
don't expect most people to get the point without much more theorizing.




If you don't agree, what has man constructed that is or may be  
conscious?


Any concrete or abstract Löbian machine or theory. Such entities can  
reflect themselves entirely. Those are universal machines (thus  
conscious, but once Löbian, I would say that they are as conscious as  
ourselves).






Or is my question nonsensical?


It is just a very difficult question, where we are deluded by years  
of evolution together with 1500 years of Aristotelian brainwashing.  
The subject is really taboo. You have to be able to doubt  
physicalism or materialism. It is better if your doubts are based on  
logic and observation, so that you can share them with others.






In a sense it is just false to relate consciousness to any third  
person describable activity, and in the end, if we are machines, our  
consciousness, which is a first person notion, is related to (not even  
defined by) all the possible computations going through the logical  
state of the machine. This entails that any machine looking at  
itself below its substitution level (the level at which it feels it  
survives an artificial digital substitution) will discover that  
the apparent material reality is multiple: matter relies on an  
infinity of computations. This is retrospectively confirmed by  
quantum mechanics.


In the end, matter is a construction of the mind, in the case that we are  
digital machines. The brain does not make consciousness; it filters  
it from infinities of first person histories. Tononi is a bit  
naïve, like many, on the mind-body (consciousness-reality)  
relationship. The integration does not rely on what a machine does,  
but on what an infinity of possible machines can do, and how a  
consistent environment reacts to what the machine (person) decides.
 I don't want to ignore this portion; it's just more advanced  
than I am. I don't have a comfortable grasp on the concepts, so I  
can't make even an attempt at a response.


Your honesty honors you. Note that I was summing up many years of  
solitary work in a highly counterintuitive field, so it is not  
astonishing that you have difficulties. My fault. Sorry.



  I want to ask a question about The Origin of Physical Laws and  
Sensations.  I don't understand it yet, I'll need to re-read the  
seventh step multiple times more before I figure it out  
comfortably.  The fault is certainly my own ignorance, not your  
explanations. I'll be returning to it, and taking your advice on  
reading the List's archives.


 As to my question:  At the third step, you wrote "Given that  
Moscow and Washington are permutable without any noticeable changes  
for the experiencer, it is reasonable to ascribe a probability of ½  
to the event 'I will be in Moscow (resp. Washington).'"  I don't  
understand the probability here.  If I am duplicated, won't there  
just be two Allens, AllenM (for Moscow) and AllenW (Washington)?



To understand this it is useful to grasp the difference between third  
person description and first person description.


In these thought experiments some simple definitions can be used  
(without preventing a more thorough treatment later).


Consider the self-duplication experiment. I suppose you have a diary,  
in which you write the result of some personal testing, like "Where do  
I feel myself to be?" So if you feel yourself to be in Brussels, you write  
"I am in Brussels" in your diary. An external observer can agree with  
you here. Now 

Re: Implications of Tononi's IIT?

2010-07-25 Thread Jason Resch
On Sat, Jul 24, 2010 at 3:17 PM, Allen allenkallenb...@yahoo.ca wrote:

  On 7/24/2010 12:55 AM, Jason Resch wrote:

  In the case of a digital camera, you could say the photodetectors each map
 directly to memory locations and so they can be completely separated and
 their behavior remains the same.  That isn't true with the Mars rover,
 whose software must evaluate the pattern across the memory locations to
 identify and avoid objects.  You cannot separate the Mars rover into
 components that behave identically in isolation.


  Thank you for replying.

  Doesn't the rover's software run on hardware that is functionally
 similar to the photodetectors, in that the memory locations could be
 separated yet still behave the same?



I agree with Quentin's answer below.  When information is processed
recursively, iteratively, or hierarchically - used to build upon results -
it can no longer be viewed as conveying the same meaning.  An analogy is the
meaning of a book, which is built of chapters, which are built of paragraphs,
sentences, words and letters.  There is little to no meaning in individual
letters, but when organized appropriately and combined in certain ways the
meaning appears.  Looking at individual operations performed by a machine is
like focusing on individual letters in a book.


  That quote reminds me of the Chinese Room thought experiment, in which a
 person is used as the machine to do the sequential processing by blindly
 following a large set of rules.  I think a certain pattern of thinking about
 computers leads to this confusion.  It is common to think of the CPU, reading
 and acting upon one symbol at a time, as the brains of the machine; at any
 one time we only see that CPU acting upon one symbol, so it seems like
 it is performing operations on the data, but in reality the past data has in
 no small part led to this current state and position; in this sense the data
 is defining and controlling the operations performed upon itself.

  For example, create a chain of cells in a spreadsheet.  Define B1 =
 A1*A1, C1 = B1 - A1, and D1 = B1+2*C1.  Now when you put data in cell A1,
 the computation is performed and carried through a range of different memory
 locations (positions on the tape); the CPU at no one time performs the
 computation to get from the input (A1) to the output (D1); instead it
 performs a chain of intermediate computations and goes through a chain of
 states, with intermediate states determining the final state.  To determine
 the future evolution of the system (the Machine and the Tape) the
 entire system has to be considered.  Just as in the Chinese Room thought
 experiment, it is not the human following the rulebook which creates the
 consciousness, but the system as a whole (all the rules of processing together
 with the one who follows the rules).
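Jason's spreadsheet chain can be written out directly (a minimal sketch mirroring the cell definitions he gives):

```python
# The spreadsheet chain: B1 = A1*A1, C1 = B1 - A1, D1 = B1 + 2*C1.
# No single step maps the input A1 to the output D1; the result emerges
# from a chain of intermediate states, each fixed by the ones before it.
def evaluate(a1):
    b1 = a1 * a1        # first intermediate state
    c1 = b1 - a1        # depends on an intermediate, not just the input
    d1 = b1 + 2 * c1    # final state, determined by the whole chain
    return d1

print(evaluate(3))  # 21  (B1=9, C1=6, D1=9+12)
```

No single operation relates the input to the output; only the system as a whole - the chain of intermediate states - determines the final value.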


  I'm sure I have confused patterns of thinking where computers are
 concerned.  I haven't spent very much time with the Chinese Room thought
 experiment, either.  I followed your instructions, with the spread sheet.
 Still, I don't understand how this can explain consciousness.


I was trying to show how multiple memory locations can be processed to
generate a result.  Extending this, multiple results can then be taken
together and processed to make a more meaningful result, and so on.  The
highest levels of these layers of processing are where consciousness as we
know it would appear.



  Forgive me for my lack of knowledge in the subject, but what is it
 that neurons in the corticothalamic area of the brain do that is different
 from what other neurons do or can do?


  I apologize, I really should have explained this in the post you've quoted
 from.  Reading it back to myself now, it seems out of context.  The mention
 of it again comes from my understanding of Tononi's work.  I have a very
 brief overview of the thalamocortical region, which I believe applies just
 as well (for illustrative purposes) to the corticothalamic system.  (I think
 the term thalamo-cortico-thalamic system refers to both as a single
 entity.)

 There are hundreds of functionally specialized thalamocortical areas, each
 containing tens of thousands of neuronal groups, some dealing with responses
 to stimuli and others with planning and execution of action, some dealing
 with visual and others with acoustic stimuli, some dealing with details of
 the input and others with its invariant or abstract properties.  These
 millions of neuronal groups are linked by a huge set of convergent or
 divergent, reciprocally organized connections that make them all hang
 together in a single, tight meshwork while they still maintain their local
 functional specificity.  The result is a three-dimensional tangle that
 appears to warrant at least the following statement: Any perturbation in one
 part of the meshwork may be felt rapidly everywhere else.  Altogether, the
 organization of the thalamocortical meshwork seems remarkably 

Re: Implications of Tononi's IIT?

2010-07-24 Thread Allen

 On 7/24/2010 12:55 AM, Jason Resch wrote:
In the case of a digital camera, you could say the photodetectors each 
map directly to memory locations and so they can be completely 
separated and their behavior remains the same.  That isn't true with 
the Mars rover, whose software must evaluate the pattern across the 
memory locations to identify and avoid objects.  You cannot separate 
the Mars rover into components that behave identically in 
isolation.


 Thank you for replying.

 Doesn't the rover's software run on hardware that is functionally 
similar to the photodetectors, in that the memory locations could be 
separated yet still behave the same?


That quote reminds me of the Chinese Room thought experiment, in which 
a person is used as the machine to do the sequential processing by 
blindly following a large set of rules.  I think a certain pattern of 
thinking about computers leads to this confusion.  It is common to 
think of the CPU, reading and acting upon one symbol at a time, as the 
brains of the machine; at any one time we only see that CPU acting 
upon one symbol, so it seems like it is performing operations on 
the data, but in reality the past data has in no small part led to 
this current state and position; in this sense the data is defining 
and controlling the operations performed upon itself.


For example, create a chain of cells in a spreadsheet.  Define B1 = 
A1*A1, C1 = B1 - A1, and D1 = B1+2*C1.  Now when you put data in cell 
A1, the computation is performed and carried through a range of 
different memory locations (positions on the tape); the CPU at no one 
time performs the computation to get from the input (A1) to the output 
(D1); instead it performs a chain of intermediate computations and 
goes through a chain of states, with intermediate states determining 
the final state.  To determine the future evolution of the 
system (the Machine and the Tape) the entire system has to be 
considered.  Just as in the Chinese Room thought experiment, it is not 
the human following the rulebook which creates the consciousness, but the 
system as a whole (all the rules of processing together with the one 
who follows the rules).


 I'm sure I have confused patterns of thinking where computers are 
concerned.  I haven't spent very much time with the Chinese Room thought 
experiment, either.  I followed your instructions, with the spread 
sheet.  Still, I don't understand how this can explain consciousness.


Forgive me for my lack of knowledge in the subject, but what is it 
that neurons in the corticothalamic area of the brain do that is 
different from what other neurons do or can do?


 I apologize, I really should have explained this in the post you've 
quoted from.  Reading it back to myself now, it seems out of context.  
The mention of it again comes from my understanding of Tononi's work.  I 
have a very brief overview of the thalamocortical region, which I 
believe applies just as well (for illustrative purposes) to the 
corticothalamic system.  (I think the term thalamo-cortico-thalamic 
system refers to both as a single entity.)


   There are hundreds of functionally specialized thalamocortical
   areas, each containing tens of thousands of neuronal groups, some
   dealing with responses to stimuli and others with planning and
   execution of action, some dealing with visual and others with
   acoustic stimuli, some dealing with details of the input and others
   with its invariant or abstract properties.  These millions of
   neuronal groups are linked by a huge set of convergent or divergent,
   reciprocally organized connections that make them all hang together
   in a single, tight meshwork while they still maintain their local
   functional specificity.  The result is a three-dimensional tangle
   that appears to warrant at least the following statement: Any
   perturbation in one part of the meshwork may be felt rapidly
   everywhere else.  Altogether, the organization of the
   thalamocortical meshwork seems remarkably suited to integrating a
   large number of specialists into a unified response. (From the book
   A Universe of Consciousness: How Matter Becomes Imagination
   written by Gerald M. Edelman and Giulio Tononi.)

 The important aspect of this, from my perspective with IIT in 
mind, is that it produces a great deal of what Tononi calls effective 
information, measured with the Kullback-Leibler divergence.  If you 
haven't read Tononi's work, I think this sums up that part I'm referring 
to very well:


   Informally speaking, the integrated information owned by a system
   in a given state can be described as the information (in the
   Theory of Information sense) generated by a system in the transition
   from one given state to the next one as a consequence of the causal
   interaction of its parts above and beyond the sum of information
   generated independently by each of its parts. (Alessandro Epasto,
   Enrico Nardelli 
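For readers who haven't seen it written out, the Kullback-Leibler divergence used above to measure effective information can be sketched in a few lines. This is a minimal illustration with made-up toy distributions, not a computation over any real system's repertoires:

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) = sum_i p_i * log2(p_i / q_i), in bits.
    Terms with p_i == 0 contribute nothing, by convention."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy example: q is the maximum-entropy "potential repertoire" over four
# states; p is the "actual repertoire" after the system's mechanisms
# have ruled half of those states out.
p = [0.5, 0.5, 0.0, 0.0]
q = [0.25, 0.25, 0.25, 0.25]

print(kl_divergence(p, q))  # 1.0 bit of effective information
```

Roughly, the more prior states a transition rules out, the larger the divergence between the actual and potential repertoires.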

Re: Implications of Tononi's IIT?

2010-07-24 Thread Allen

 On 7/23/2010 3:03 PM, Brent Meeker wrote:
I'd say the information comes from the surface of Mars - it is 
integrated (which means summed into a whole) by the Rover and acted 
upon.  Tononi seems to be abusing language and using integrated when 
he actually means generated.  Whether there is information generated 
would depend on how you defined it and where you draw the boundaries 
of the system.  Shannon information is a measure of the reduction in 
uncertainty - so if you were uncertain about what the Mars Rover would 
do, then you could say its action generated information.  But if you 
knew every detail of its programming and memory and the surface scene 
it viewed you might say it didn't generate any information.


Brent


 Thanks for replying.

 I hope my comments to Jason explain my difference in perspective 
here.  I don't think the information is integrated in the way Tononi 
uses the term.  I don't view this system as being connected in such a 
way that information is generated by causal interactions /among/ rather 
than /within/ its parts. (Balduzzi D, Tononi G 2009)  I think the 
physical structures of the computers involved in this example exclude 
the generation of additional information via re-entrant feedback between 
any of the components (I don't know the proper terms to use here).  
There's no component saying to its neighbour, I see you're not 'firing', 
which means possibilities p & q must be excluded; everyone just goes 
about their business independently.  Isn't that how it works at the fine 
scale, where everything is binary?  Nobody checks which of their 
neighbours are 0's and which are 1's?


 I hope some of this is sensible.  I've only ever read about these 
things, this is the first time trying to explain any of them, and the 
holes in my understanding have never been so blatantly obvious.


 -Allen

--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-l...@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Implications of Tononi's IIT?

2010-07-24 Thread Allen

 On 7/23/2010 1:55 PM, Bruno Marchal wrote:
I think this has nothing to do with technology.  It is just that 
consciousness is not related to the activity of the physical machine, 
but to the logic which makes the person supported by the computation 
integrating that information.


 Thank you for your reply.

 I mention technology because, as of now, we don't seem to have any 
conscious artifacts.  Do you agree?  If so, what do we need to construct 
conscious artifacts?  If you don't agree, what has man constructed that 
is or may be conscious?  Or is my question nonsensical?


In a sense it is just false to relate consciousness to any third 
person describable activity, and in fine, if we are machines, our 
consciousness, which is a first person notion, is related to (not even 
defined by) all the possible computations going through the logical 
state of the machine. This entails that any machine looking at itself 
below its substitution level (the level at which it feels surviving an 
artificial digital substitution) will discover that the apparent 
material reality is multiple: matter relies on an infinity of 
computations. This is retrospectively confirmed by quantum mechanics.


In fine, matter is a construction of the mind, in the case we are 
digital machines. The brain does not make consciousness; it filters it 
from infinities of first person histories. Tononi is a bit naïve, like 
many, on the mind-body (consciousness-reality) relationship. The 
integration does not rely on what a machine does, but on what an 
infinity of possible machines can do, and how a consistent environment 
reacts to what the machine (person) decides.
 I don't want to ignore this portion; it's just more advanced than 
I am, and I don't have a comfortable grasp of the concepts, so I can't 
even attempt a response.


It is a subtle matter, which necessitates revising the fundamental 
status of physics. No amount of third person description will ever 
define what consciousness is, and this for reasons related to Mechanism 
and discoveries in computer science/mathematical logic. You may look 
at my url for more if interested. Or you can find summaries and 
explanations in the archive of the list.


Best,

- Bruno Marchal


http://iridia.ulb.ac.be/~marchal/


 I want to ask a question about The Origin of Physical Laws and 
Sensations.  I don't understand it yet, I'll need to re-read the 
seventh step multiple times more before I figure it out comfortably.  
The fault is certainly my own ignorance, not your explanations. I'll be 
returning to it, and taking your advice on reading the List's archives.


 As to my question:  At the third step, you wrote Given that 
Moscow and Washington are permutable without any noticeable changes for 
the experiencer, it is reasonable to ascribe a probability of ½ to the 
event 'I will be in Moscow (resp. Washington).'  I don't understand the 
probability here.  If I am duplicated, won't there just be two Allens, 
AllenM (for Moscow) and AllenW (Washington)?


 When a probability becomes involved, doesn't it seem like you're 
saying that there is an entity I who is the real Allen, and that I 
may be AllenM or AllenW, but I will not be the other one.  The other 
one has some other I.  Am I misunderstanding, and - since it's very 
likely - to what extent?  I don't believe in I's; I think, for lack of a 
better phrase, that consciousness is all one.  How do you feel about this?


 P.S.  For anyone to answer:  Is it acceptable to reply to three 
separate posts with three separate posts of my own, all within such a 
short time?  I figured one post would be quite lengthy, and maybe more 
confusing, so I split them into replies according to who I was replying to.


 -Allen




Re: Implications of Tononi's IIT?

2010-07-23 Thread Allen

  Thank you both for replying!

On 7/22/2010 8:47 PM, Brent Meeker wrote:
Sure.  Consider a Mars Rover.  It has a camera with many pixels.  The 
voltage of the photodetector of each pixel is digitized and sent to a 
computer.  The computer processes the data and recognizes there is a 
rock in its path.  The computer actuates some controller and steers 
the Rover around the rock.  So information has been integrated and 
used.  Note that if the information had not been used (i.e. resulted 
in action in the environment) it would be difficult to say whether it 
had been integrated or merely transformed and stored.


Brent


 Isn't this the same as the digital camera sensor chip?  Aren't the 
functions you're describing built on this foundation of independent, 
minimal repertoires, all working independently of each other?  I can see 
how, from our external point of view, it seems like one entity, but when 
we look at the hardware, isn't it functionally the same as the sensor 
chip in the quote from Tononi?  That is, even the CPU that is fed the 
information from the camera works in a similar way.  Tononi, in 
/Qualia: The Geometry of Integrated Information/, says:


   Integrated information is measured by comparing the actual
   repertoire generated by the system as a whole with the combined
   actual repertoires generated independently by the parts.

  So, what I mean is, the parts account for all the information in 
the system; there is no additional information generated as integrated 
information (which Tononi refers to as phi, Φ).
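Allen's whole-versus-parts point can be made concrete with a toy calculation. The sketch below uses plain mutual information between two binary units, which is far simpler than Tononi's actual Φ (Φ involves perturbations and a minimum-information partition), but it shows the comparison the quote describes: independent parts fully account for the whole, while coupled parts do not:

```python
import math
from itertools import product

def entropy(dist):
    """Shannon entropy (bits) of a distribution given as {state: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def whole_minus_parts(joint, marg_a, marg_b):
    """Information in the joint distribution beyond the sum of its two parts.
    (This is just mutual information - much simpler than Tononi's phi, but
    the same whole-versus-parts comparison.)"""
    return entropy(marg_a) + entropy(marg_b) - entropy(joint)

fair = {0: 0.5, 1: 0.5}

# Two independent photodiodes: the joint distribution is the product of
# the marginals, so the whole carries nothing beyond its parts.
indep = {(a, b): fair[a] * fair[b] for a, b in product((0, 1), repeat=2)}
print(whole_minus_parts(indep, fair, fair))   # 0.0 bits

# Two coupled units whose states always agree: the whole says more
# than the parts taken independently.
coupled = {(0, 0): 0.5, (1, 1): 0.5}
print(whole_minus_parts(coupled, fair, fair))  # 1.0 bit
```

In this toy picture, the camera chip is all independent parts (the first case), whereas the re-entrant connections Allen describes would put a system in the second case.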



On 7/23/2010 12:15 AM, Jason Resch wrote:
A Turing machine can essentially do anything with information that can 
be done with information.  They are universal machines in the same 
sense that a pair of headphones is a universal instrument, though 
practical implementations have limits (a Turing machine has limited 
available memory, a pair of headphones will have a limited frequency 
and amplitude range), theoretically, each has an infinite repertoire.


 I hope no one will be offended if I borrow a quote I found on 
Wikipedia:


   At any moment there is one symbol in the machine; it is called the
   scanned symbol. The machine can alter the scanned symbol and its
   behavior is in part determined by that symbol, but the symbols on
   the tape elsewhere do not affect the behavior of the machine.
   (Turing 1948, p. 61)

 I'm sure none of you needed the reminder, it's only so that I may 
point directly to what I mean.  Now, doesn't this - the nature of a 
Turing machine - fundamentally exclude the ability to integrate 
information?  The computers we have today do not integrate information 
to any significant extent, as Tononi explained with his digital camera 
example.  Is this a fundamental limit of the Turing machine, or just our 
current technology?
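The quoted property - that each step depends only on the machine's internal state and the scanned symbol - is easy to see in a minimal sketch. The machine below is a made-up example (it inverts a binary string), not anything from Turing's paper:

```python
# A minimal Turing-machine loop illustrating the quoted point: each step
# reads only the scanned symbol and the internal state; the rest of the
# tape has no influence on that step.
def run(tape, rules, state="start", head=0):
    tape = dict(enumerate(tape))          # sparse tape; blank = " "
    while state != "halt":
        scanned = tape.get(head, " ")     # only the scanned symbol matters
        write, move, state = rules[(state, scanned)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip()

rules = {
    ("start", "0"): ("1", +1, "start"),   # flip 0 -> 1, move right
    ("start", "1"): ("0", +1, "start"),   # flip 1 -> 0, move right
    ("start", " "): (" ", 0, "halt"),     # blank: stop
}
print(run("1011", rules))  # -> "0100"
```

Note that the whole-tape behaviour (inverting the string) still emerges from purely local steps, which is Jason's point below about the system as a whole.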


 There is no conceivable instrument whose sound could not be 
reproduced by an ideal pair of headphones, just as there is no 
conceivable physical machine whose behavior could not be reproduced by 
an ideal Turing machine.  This implies that given enough memory, and 
the right programming a Turing machine can perfectly reproduce the 
behavior of a person's Brain.


 If an ideal Turing machine cannot integrate information, then the 
brain is a physical machine whose behavior can't be reproduced by an 
ideal Turing machine.  No matter how much memory the Turing machine has, 
its mechanism prevents it from integrating that information, and 
without integration, there is no subjective experience.


Does this make the Turing machine conscious?  If not it implies that 
someone you know could have their brain replaced by Turing machine, 
and that person would in every way act as the original person, yet it 
wouldn't be conscious.  It would still claim to be conscious, still 
claim to feel pain, still be capable of writing a philosophy paper 
about the mysteriousness of consciousness.  If a non-conscious entity 
could in every way act as a conscious entity does, then what is the 
point of consciousness?  There would be no reason for it to evolve if 
it served no purpose.  Also, what sense would it make for 
non-conscious entities to contemplate and write e-mails about 
something they presumably don't have access to?  (As Turing machines 
running brain software necessarily would).


 I wonder if this is what the vast majority of AI work done so far 
is working towards: philosophical zombies.  We can very likely, and in 
the not-too-distant future, build artifacts that are so life-like they 
can trick some of us into believing they are conscious, but until 
hardware has been constructed that can function in the same manner as 
the neurons in the corticothalamic area of the brain, or surpass them, 
we won't have significantly conscious artifacts.  No amount of 
computational modeling will make up for the physical inability to 
integrate information.


 - Allen


Re: Implications of Tononi's IIT?

2010-07-23 Thread Bruno Marchal


On 23 Jul 2010, at 17:17, Allen wrote:

 I'm sure none of you needed the reminder, it's only so that I  
may point directly to what I mean.  Now, doesn't this - the nature  
of a Turing machine - fundamentally exclude the ability to integrate  
information?  The computers we have today do not integrate  
information to any significant extent, as Tononi explained with his  
digital camera example.  Is this a fundamental limit of the Turing  
machine, or just our current technology?



I think this has nothing to do with technology.  It is just that  
consciousness is not related to the activity of the physical machine,  
but to the logic which makes the person supported by the computation  
integrating that information.
In a sense it is just false to relate consciousness to any third  
person describable activity, and in fine, if we are machines, our  
consciousness, which is a first person notion, is related to (not even  
defined by) all the possible computations going through the logical  
state of the machine. This entails that any machine looking at itself  
below its substitution level (the level at which it feels surviving an  
artificial digital substitution) will discover that the apparent  
material reality is multiple: matter relies on an infinity of  
computations. This is retrospectively confirmed by quantum mechanics.


In fine, matter is a construction of the mind, in the case we are  
digital machines. The brain does not make consciousness; it filters it  
from infinities of first person histories. Tononi is a bit naïve, like  
many, on the mind-body (consciousness-reality) relationship. The  
integration does not rely on what a machine does, but on what an  
infinity of possible machines can do, and how a consistent environment  
reacts to what the machine (person) decides.


It is a subtle matter, which necessitates revising the fundamental  
status of physics. No amount of third person description will ever  
define what consciousness is, and this for reasons related to Mechanism  
and discoveries in computer science/mathematical logic. You may look  
at my url for more if interested. Or you can find summaries and  
explanations in the archive of the list.


Best,

- Bruno Marchal


http://iridia.ulb.ac.be/~marchal/






Re: Implications of Tononi's IIT?

2010-07-23 Thread Brent Meeker

On 7/23/2010 8:17 AM, Allen wrote:

 Thank you both for replying!

On 7/22/2010 8:47 PM, Brent Meeker wrote:
Sure.  Consider a Mars Rover.  It has a camera with many pixels.  The 
voltage of the photodetector of each pixel is digitized and sent to a 
computer.  The computer processes the data and recognizes there is a 
rock in its path.  The computer actuates some controller and steers 
the Rover around the rock.  So information has been integrated and 
used.  Note that if the information had not been used (i.e. resulted 
in action in the environment) it would be difficult to say whether it 
had been integrated or merely transformed and stored.


Brent


 Isn't this the same as the digital camera sensor chip?  Aren't 
the functions you're describing built on this foundation of 
independent, minimal repertoires, all working independently of each 
other?  I can see how, from our external point of view, it seems like 
one entity, but when we look at the hardware, isn't it functionally 
the same as the sensor chip in the quote from Tononi?  That is, even 
the CPU that is fed the information from the camera works in a similar 
way.  Tononi, in /Qualia: The Geometry of Integrated Information/, says:


Integrated information is measured by comparing the actual
repertoire generated by the system as a whole with the combined
actual repertoires generated independently by the parts.

  So, what I mean is, the parts account for all the information in 
the system; there is no additional information generated as integrated 
information (which Tononi refers to as phi, Φ).


I'd say the information comes from the surface of Mars - it is 
integrated (which means summed into a whole) by the Rover and acted 
upon.  Tononi seems to be abusing language and using integrated when 
he actually means generated.  Whether there is information generated 
would depend on how you defined it and where you draw the boundaries of 
the system.  Shannon information is a measure of the reduction in 
uncertainty - so if you were uncertain about what the Mars Rover would 
do, then you could say its action generated information.  But if you 
knew every detail of its programming and memory and the surface scene 
it viewed you might say it didn't generate any information.


Brent




On 7/23/2010 12:15 AM, Jason Resch wrote:
A Turing machine can essentially do anything with information that 
can be done with information.  They are universal machines in the 
same sense that a pair of headphones is a universal instrument, 
though practical implementations have limits (a Turing machine has 
limited available memory, a pair of headphones will have a limited 
frequency and amplitude range), theoretically, each has an infinite 
repertoire.


 I hope no one will be offended if I borrow a quote I found on 
Wikipedia:


At any moment there is one symbol in the machine; it is called
the scanned symbol. The machine can alter the scanned symbol and
its behavior is in part determined by that symbol, but the symbols
on the tape elsewhere do not affect the behavior of the machine.
(Turing 1948, p. 61)

 I'm sure none of you needed the reminder, it's only so that I may 
point directly to what I mean.  Now, doesn't this - the nature of a 
Turing machine - fundamentally exclude the ability to integrate 
information?  The computers we have today do not integrate information 
to any significant extent, as Tononi explained with his digital camera 
example.  Is this a fundamental limit of the Turing machine, or just 
our current technology?


 There is no conceivable instrument whose sound could not be 
reproduced by an ideal pair of headphones, just as there is no 
conceivable physical machine whose behavior could not be reproduced 
by an ideal Turing machine.  This implies that given enough memory, 
and the right programming a Turing machine can perfectly reproduce 
the behavior of a person's Brain.


 If an ideal Turing machine cannot integrate information, then the 
brain is a physical machine whose behavior can't be reproduced by an 
ideal Turing machine.  No matter how much memory the Turing machine 
has, its mechanism prevents it from integrating that information, and 
without integration, there is no subjective experience.


Does this make the Turing machine conscious?  If not it implies that 
someone you know could have their brain replaced by Turing machine, 
and that person would in every way act as the original person, yet it 
wouldn't be conscious.  It would still claim to be conscious, still 
claim to feel pain, still be capable of writing a philosophy paper 
about the mysteriousness of consciousness.  If a non-conscious entity 
could in every way act as a conscious entity does, then what is the 
point of consciousness?  There would be no reason for it to evolve if 
it served no purpose.  Also, what sense would it make for 
non-conscious entities to contemplate and write e-mails about 

Re: Implications of Tononi's IIT?

2010-07-23 Thread Jason Resch
On Fri, Jul 23, 2010 at 10:17 AM, Allen allenkallenb...@yahoo.ca wrote:

   Thank you both for replying!


 On 7/22/2010 8:47 PM, Brent Meeker wrote:

 Sure.  Consider a Mars Rover.  It has a camera with many pixels.  The
 voltage of the photodetector of each pixel is digitized and sent to a
 computer.  The computer processes the data and recognizes there is a rock in
 its path.  The computer actuates some controller and steers the Rover around
 the rock.  So information has been integrated and used.  Note that if the
 information had not been used (i.e. resulted in action in the environment)
 it would be difficult to say whether it had been integrated or merely
 transformed and stored.

 Brent


  Isn't this the same as the digital camera sensor chip?  Aren't the
 functions you're describing built on this foundation of independent, minimal
 repertoires, all working independently of each other?  I can see how, from
 our external point of view, it seems like one entity, but when we look at
 the hardware, isn't it functionally the same as the sensor chip in the quote
 from Tononi?  That is, even the CPU that is fed the information from the
 camera works in a similar way.  Tononi, in *Qualia: The Geometry of
 Integrated Information*, says:

 Integrated information is measured by comparing the actual repertoire
 generated by the system as a whole with the combined actual repertoires
 generated independently by the parts.

   So, what I mean is, the parts account for all the information in the
 system, there is no additional information generated as integrated
 information (Which Tononi refers to as phi Φ.)


In the case of a digital camera, you could say the photodetectors each map
directly to memory locations and so they can be completely separated and
their behavior remains the same.  That isn't true with the Mars rover,
whose software must evaluate the pattern across the memory locations to
identify and avoid objects.  You cannot separate the Mars rover into
components that behave identically in isolation.





 On 7/23/2010 12:15 AM, Jason Resch wrote:

  A Turing machine can essentially do anything with information that can be
 done with information.  They are universal machines in the same sense that a
 pair of headphones is a universal instrument, though practical
 implementations have limits (a Turing machine has limited available memory,
 a pair of headphones will have a limited frequency and amplitude range),
 theoretically, each has an infinite repertoire.


  I hope no one will be offended if I borrow a quote I found on
 Wikipedia:

 At any moment there is one symbol in the machine; it is called the scanned
 symbol. The machine can alter the scanned symbol and its behavior is in part
 determined by that symbol, but the symbols on the tape elsewhere do not
 affect the behavior of the machine. (Turing 1948, p. 61)

  I'm sure none of you needed the reminder, it's only so that I may
 point directly to what I mean.  Now, doesn't this - the nature of a Turing
 machine - fundamentally exclude the ability to integrate information?  The
 computers we have today do not integrate information to any significant
 extent, as Tononi explained with his digital camera example.  Is this a
 fundamental limit of the Turing machine, or just our current technology?



That quote reminds me of the Chinese Room thought experiment, in which a
person is used as the machine to do the sequential processing by blindly
following a large set of rules.  I think a certain pattern of thinking about
computers leads to this confusion.  It is common to think of the CPU,
reading and acting upon one symbol at a time, as the brains of the machine.
At any one time we only see the CPU acting upon one symbol, so it seems like
it is performing operations on the data; but in reality the past data has in
no small part led to the current state and head position.  In this sense the
data is defining and controlling the operations performed upon itself.

For example, create a chain of cells in a spread sheet.  Define B1 = A1*A1,
C1 = B1 - A1, and D1 = B1+2*C1.  Now when you put data in cell A1, the
computation is performed and carried through a range of different memory
locations (positions on the tape), the CPU at no one time performs the
computation to get from the input (A1) to the output (D1), instead it
performs a chain of intermediate computations and goes through a chain of
states, with intermediate states determining the final state.  To determine
the future evolution of the system (the machine and the tape) the
entire system has to be considered.  Just as in the Chinese room thought
experiment, it is not the human following the rulebook which creates the
consciousness, but the system as a whole (all the rules of processing together
with the one who follows the rules).
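Jason's spreadsheet chain can be written out as the sequence of intermediate computations the machine actually steps through (cell names kept as comments):

```python
# Jason's spreadsheet chain: no single operation maps the input (A1)
# directly to the output (D1); the result emerges from a chain of
# intermediate states.
def chain(a1):
    b1 = a1 * a1        # B1 = A1*A1
    c1 = b1 - a1        # C1 = B1 - A1
    return b1 + 2 * c1  # D1 = B1 + 2*C1

print(chain(3))  # A1 = 3 -> B1 = 9, C1 = 6, D1 = 21
```

Each intermediate value depends on earlier ones, so determining D1 means following the whole chain, which is exactly the "system as a whole" point above.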


There is no conceivable instrument whose sound could not be reproduced
 by an ideal pair of headphones, just as there is no conceivable 

Re: Implications of Tononi's IIT?

2010-07-23 Thread Jason Resch
2010/7/23 Jason Resch jasonre...@gmail.com

  I am very familiar with Tononi's definition of information integration,
 but if it is something that neurons do it is certainly something computers
 can do as well.




Sorry, I meant to say that I am *not* very familiar...

Jason




Implications of Tononi's IIT?

2010-07-22 Thread Allen Kallenbach
  Buongiorno, Everything List!

 I have been lurking here since mid-2009, and had hoped to have a better 
intellectual foundation to support me before I posted anything of my own, but I 
would really like to ask this question.

 Giulio Tononi's Integrated Information Theory (IIT) states that 
consciousness is integrated information.  In Consciousness as Integrated 
Information: a Provisional Manifesto he writes, referring to the sensor chip 
in 
a digital camera:

 In reality, however, the chip is not an integrated entity: since its 1 
million photodiodes have no way to interact, each photodiode performs its own 
local discrimination between a low and a high current completely independent of 
what every other photodiode might be doing. In reality, the chip is just a 
collection of 1 million independent photodiodes, each with a repertoire of two 
states. In other words, there is no intrinsic point of view associated with the 
camera chip as a whole. This is easy to see: if the sensor chip were cut into 1 
million pieces each holding its individual photodiode, the performance of the 
camera would not change at all.

 Considering this, can consciousness be Turing emulable?  That is, can a 
Turing machine integrate information?  I want to expand my question here, but I 
don't have the knowledge to do so without distracting from the main question 
I'm 
asking.  So, all I can say is, details greatly appreciated!

 - Allen

Consciousness as Integrated Information: a Provisional Manifesto (Tononi G 
2008):
 http://www.ncbi.nlm.nih.gov/pubmed/19098144

Qualia: The Geometry of Integrated Information (Balduzzi D, Tononi G 2009):
 http://www.ncbi.nlm.nih.gov/pubmed/19680424





Re: Implications of Tononi's IIT?

2010-07-22 Thread Brent Meeker

On 7/22/2010 3:33 PM, Allen Kallenbach wrote:

  Buongiorno, Everything List!

 I have been lurking here since mid-2009, and had hoped to have a 
better intellectual foundation to support me before I posted anything 
of my own, but I would really like to ask this question.


 Giulio Tononi's Integrated Information Theory (IIT) states that 
consciousness is integrated information.  In Consciousness as 
Integrated Information: a Provisional Manifesto he writes, referring 
to the sensor chip in a digital camera:


 In reality, however, the chip is not an integrated entity: 
since its 1 million photodiodes have no way to interact, each 
photodiode performs its own local discrimination between a low and a 
high current completely independent of what every other photodiode 
might be doing. In reality, the chip is just a collection of 1 
million independent photodiodes, each with a repertoire of two 
states. In other words, there is no intrinsic point of view 
associated with the camera chip as a whole. This is easy to see: if 
the sensor chip were cut into 1 million pieces each holding its 
individual photodiode, the performance of the camera would not change 
at all.


 Considering this, can consciousness be Turing emulable?  That is, 
can a Turing machine integrate information?  I want to expand my 
question here, but I don't have the knowledge to do so without 
distracting from the main question I'm asking.  So, all I can say is, 
details greatly appreciated!


 - Allen

Consciousness as Integrated Information: a Provisional Manifesto 
(/Tononi/ G 2008):

http://www.ncbi.nlm.nih.gov/pubmed/19098144

Qualia: The Geometry of Integrated Information (/Balduzzi/ D, /Tononi/ 
G 2009):

http://www.ncbi.nlm.nih.gov/pubmed/19680424


Sure.  Consider a Mars Rover.  It has a camera with many pixels.  The 
voltage of the photodetector of each pixel is digitized and sent to a 
computer.  The computer processes the data and recognizes there is a 
rock in its path.  The computer actuates some controller and steers the 
Rover around the rock.  So information has been integrated and used.  
Note that if the information had not been used (i.e. resulted in action 
in the environment) it would be difficult to say whether it had been 
integrated or merely transformed and stored.


Brent




Re: Implications of Tononi's IIT?

2010-07-22 Thread Jason Resch
On Thu, Jul 22, 2010 at 5:33 PM, Allen Kallenbach
allenkallenb...@yahoo.ca wrote:


  Considering this, can consciousness be Turing emulable?  That is, can
 a Turing machine integrate information?  I want to expand my question here,
 but I don't have the knowledge to do so without distracting from the main
 question I'm asking.  So, all I can say is, details greatly appreciated!

  - Allen


A Turing machine can essentially do anything with information that can be
done with information.  They are universal machines in the same sense that a
pair of headphones is a universal instrument, though practical
implementations have limits (a Turing machine has limited available memory,
a pair of headphones will have a limited frequency and amplitude range),
theoretically, each has an infinite repertoire.  There is no conceivable
instrument whose sound could not be reproduced by an ideal pair of
headphones, just as there is no conceivable physical machine whose behavior
could not be reproduced by an ideal Turing machine.  This implies that given
enough memory, and the right programming a Turing machine can perfectly
reproduce the behavior of a person's Brain.

Does this make the Turing machine conscious?  If not it implies that someone
you know could have their brain replaced by Turing machine, and that person
would in every way act as the original person, yet it wouldn't be conscious.
 It would still claim to be conscious, still claim to feel pain, still be
capable of writing a philosophy paper about the mysteriousness of
consciousness.  If a non-conscious entity could in every way act as a
conscious entity does, then what is the point of consciousness?  There would
be no reason for it to evolve if it served no purpose.  Also, what sense
would it make for non-conscious entities to contemplate and write e-mails
about something they presumably don't have access to?  (As Turing machines
running brain software necessarily would).

There is a concept in which any Turing machine can emulate any other.  This
is what allows for such technology as virtual machines, and game system
emulators.  An old Atari game running on an emulator has no way to tell
whether it is running on a physical Atari game console or within an emulator
program running on a modern desktop computer.  In fact there is no way any
program can determine the ultimate, or actual physical substrate on which it
is running.  Extending this principle, if a brain's behavior can be
reproduced by software, such software will have no way of knowing whether it
is running on a real brain or on a bunch of computer chips.  If a person did
feel different, for example by not experiencing anything, or experiencing
consciousness differently, this would violate the idea that software can
never know for certain what its hardware is.

Jason
