Re: Jack's partial brain paper

2010-03-18 Thread Stathis Papaioannou
On 18 March 2010 16:36, Brent Meeker meeke...@dslextreme.com wrote:

 Is it coherent to say a black box accidentally reproduces the I/O?  It is
 over some relatively small number of I/Os, but over a large enough number
 and range to sustain human behavior - that seems very doubtful.  One would
 be tempted to say the black box was obeying a natural law.  It would be
 the same as the problem of induction.  How do we know natural laws are
 consistent - because we define them to be so.

Jack considers the case where the black box is empty and the remaining
neurological tissue just happens to continue responding as if it were
receiving normal input. That, of course, would be extremely unlikely
to happen, to the point where it could be called magic if it did
happen. But if there were such a magical black box, it would
contribute to consciousness.


-- 
Stathis Papaioannou




Re: Jack's partial brain paper

2010-03-18 Thread Bruno Marchal


On 18 Mar 2010, at 07:01, Stathis Papaioannou wrote:


On 18 March 2010 16:36, Brent Meeker meeke...@dslextreme.com wrote:

Is it coherent to say a black box accidentally reproduces the I/O?  It is over some relatively small number of I/Os, but over a large enough number and range to sustain human behavior - that seems very doubtful.  One would be tempted to say the black box was obeying a natural law.  It would be the same as the problem of induction.  How do we know natural laws are consistent - because we define them to be so.


Jack considers the case where the black box is empty and the remaining
neurological tissue just happens to continue responding as if it were
receiving normal input. That, of course, would be extremely unlikely
to happen, to the point where it could be called magic if it did
happen. But if there were such a magical black box, it would
contribute to consciousness.


It is here that we may differ, but perhaps not essentially. Because the movie of the boolean graph is like that: you can suppress part of the movie, and it will not disrupt or have any causal effect on the other part of the graph.
I prefer to say that consciousness does not supervene on the movie, given that the movie does not even execute a computation; but then consciousness no longer supervenes on the physical activity related to the particular implementation of a computation (relative to our most probable computations). We get the comp supervenience thesis, and this makes physics secondary to number theory (extensional and intensional), or number/computer science.
Jack seems to want to put the counterfactuals in the physical, but then, keeping comp, the physical becomes computational, and this eliminates the body problem in an ad hoc way. It makes physics depend on the choice of base, whereas comp derives physics from its invariance with respect to the comp-base (even if the base "elementary arithmetic" may play some capital role for other, more pedagogical or psychological, reasons).


Bruno









http://iridia.ulb.ac.be/~marchal/






Re: Jack's partial brain paper

2010-03-18 Thread Bruno Marchal


On 17 Mar 2010, at 18:34, Brent Meeker wrote:


On 3/17/2010 3:34 AM, Stathis Papaioannou wrote:


On 17 March 2010 05:29, Brent Meeker meeke...@dslextreme.com wrote:


I think this is a dubious argument based on our lack of understanding of qualia.  Presumably one has many thoughts that do not result in any overt action.  So if I lost a few neurons (which I do continuously) it might mean that there are some thoughts I don't have or some associations I don't make, so eventually I may fade to the level of consciousness of my dog.  Is my dog a partial zombie?


It's certainly possible that qualia can fade without the subject noticing, either because the change is slow and gradual or because the change fortuitously causes a cognitive deficit as well. But this is not what the fading qualia argument is about. The argument requires consideration of a brain change which would cause an unequivocal change in consciousness, such as removal of the subject's occipital lobes. If this happened, the subject would go completely blind: he would be unable to describe anything placed in front of his eyes, and he would report that he could not see anything at all. That's what it means to go blind. But now consider the case where the occipital lobes are replaced with a black box that reproduces the I/O behaviour of the occipital lobes, but which is postulated to lack visual qualia. The rest of the subject's brain is intact and is forced to behave exactly as it would if the change had not been made, since it is receiving normal inputs from the black box. So the subject will correctly describe anything placed in front of him, and he will report that everything looks perfectly normal. More than that, he will have an appropriate emotional response to what he sees, be able to paint it or write poetry about it, make a working model of it from an image he retains in his mind: whatever he would normally do if he saw something. And yet, he would be a partial zombie: he would behave exactly as if he had normal visual qualia while completely lacking visual qualia. Now it is part of the definition of a full zombie that it doesn't understand that it is blind, since a requirement for zombiehood is that it doesn't understand anything at all, it just behaves as if it does. But if the idea of qualia is meaningful at all, you would think that a sudden drastic change like going blind should produce some realisation in a cognitively intact subject; otherwise how do we know that we aren't blind now, and what reason would we have to prefer normal vision to zombie vision? The conclusion is that it isn't possible to make a device that replicates brain function but lacks qualia: either it is not possible to make such a device at all because the brain is not computable, or if such a device could be made (even a magical one) then it would necessarily reproduce the qualia as well.



I generally agree with the above.  Maybe I misunderstood the question; but I was considering the possibility of having a continuum of lesser qualia AND correspondingly lesser behavior.


However, I think there is something in the above that creates the "just a recording" problem.  It's the hypothesis that the black box reproduces the I/O behavior.  This implies the black box realizes a function, not a recording.  But then the argument slips over to replacing the black box with a recording which just happens to produce the same I/O, and we're led to the absurd conclusion that a recording is conscious.  But what step of the argument should we reject?  The plausible candidate is the different response to counterfactuals that the functional box and the recording realize.  That would seem like magic - a different response depending on all the things that don't happen - except that in the MWI of QM all those counterfactuals are available to make a difference.



This is confirmed by the material hypostases (Bp & Dt), which give the counterfactual bisimulation of G.
And empirically by the fact that quantum logic can be seen as a logic of counterfactuals/conditionals (Hardegree).
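
For readers new to the notation, here is a minimal LaTeX sketch of the modal machinery invoked above; the axiom and the list of variants follow my reading of Bruno's published summaries, so treat the details as a reconstruction rather than a quotation. B is the provability box of the Gödel-Löb logic G, D is its dual, and t is any tautology, so Dt asserts consistency:

\[
\begin{aligned}
\text{G (L\"ob's axiom):}\quad & \Box(\Box p \rightarrow p) \rightarrow \Box p \\
\text{truth:}\ p \qquad \text{belief:}\quad & \Box p \qquad \text{knowledge:}\ \Box p \wedge p \\
\text{observation ("matter"):}\quad & \Box p \wedge \Diamond t \qquad \text{sensation:}\ \Box p \wedge \Diamond t \wedge p
\end{aligned}
\]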


Choosing QM is treachery, if you see the UDA point. Neither consciousness nor the appearance of primitive matter can depend on the choice of an initial universal computational base (universal system).


Bruno




Brent



I think the question of whether there could be a philosophical zombie is ill posed because we don't know what is responsible for qualia.  I speculate that they are tags of importance or value that get attached to perceptions so that they are stored in short term memory.  Then, because evolution cannot redesign things, the same tags are used for internal thoughts that seem important enough to put in memory.  If this is the case then it might be possible to design a robot which used a different method of evaluating experience for storage and it would not have qualia like humans - but would it have some other kind of 

Re: Jack's partial brain paper

2010-03-18 Thread Bruno Marchal


On 17 Mar 2010, at 18:50, Brent Meeker wrote:


On 3/17/2010 5:47 AM, HZ wrote:


I'm quite confused about the state of zombieness. If the requirement for zombiehood is that it doesn't understand anything at all but it behaves as if it does, what makes us not zombies? How do we know we are not? But more importantly, are there known cases of zombies? Perhaps a silly question, because it might be just a thought experiment, but if so, I wonder on what evidence one is so freely speaking about, especially when connected to cognition, for which we now (should) know more. The questions seem related because either we don't know whether we are zombies or one can solve the problem of zombie identification. I guess I'm new in the zombieness business.



For me the question of zombieness seems meaningful if I put it in the form of creating an artificially intelligent being, as opposed to replacing the components of a brain by functionally identical elements.  Julian Jaynes has a theory of the evolutionary development of consciousness as an internalization of hearing speech.  He supposes that early humans did not hear an inner narrative as we do but only heard external sounds and the speech of others, and due to some biogenetic changes this became internalized, so that we heard the instructions of parents "in our heads" even when they weren't present.  Then we came to hear ourselves "in our head" too, i.e. became conscious.


I don't know if this is true - it sounded like nonsense when I first heard of it - but after reading Jaynes I was impressed by the arguments he could muster for it.  But if it's true it would mean that I could create an artificially intelligent being who, for example, did not process verbal thoughts through the same module used for hearing, and then this being would not have the same qualia corresponding to hearing yourself "in your head".  It might very well have some different qualia.  But since we don't know what qualia are in a third person sense, it's impossible to make sense of having qualia, but different from those we know.


As I understand Bruno's theory, he identifies qualia with certain  
kinds of computation; a third person characterization.


I define the qualia of the machine by the true and consistent (Sigma_1) propositions that are incommunicable as such (unprovable). Qualia depend on computation (via the Sigma_1 restriction), but a quale is an assertive state of the talking machine. It is not a computational state, nor a computation. It is more an arithmetico-geometrico-logical state.


Qualia live in Z1* minus Z1, or X1* minus X1. They are (arithmetically) true OF the machine, but not communicable as such, nor specifiable, by the machine itself. And the machine is able to explain why there is a necessarily remaining gap in this definition. This is of course the case for any 3-person definition of a 1-notion.
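
A compact rendering of this, hedged as my own paraphrase of the notation rather than Bruno's formula (Z1 is the "Bp & Dt" logic restricted to Sigma_1 sentences; the star marks the true-but-unjustifiable extension, on the pattern of G and G*):

\[
\text{qualia} \;\subseteq\; Z_1^{*} \setminus Z_1 \;=\; \{\varphi : \varphi \text{ is true of the machine, but the machine cannot justify } \varphi\}
\]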




But I'm not sure what kind or whether I could say that my  
artificially intelligent being had them.


But you can evaluate its degree of self-referential correctness with respect to *your universe*.
There will be fuzzy regions of behavior, but then it is the same problem as with lower animals, etc.






Brent




Re: Jack's partial brain paper

2010-03-18 Thread Bruno Marchal


On 17 Mar 2010, at 19:12, Brent Meeker wrote:


On 3/17/2010 10:01 AM, Bruno Marchal wrote:



On 17 Mar 2010, at 13:47, HZ wrote:


I'm quite confused about the state of zombieness. If the requirement for zombiehood is that it doesn't understand anything at all but it behaves as if it does, what makes us not zombies? How do we know we are not? But more importantly, are there known cases of zombies? Perhaps a silly question, because it might be just a thought experiment, but if so, I wonder on what evidence one is so freely speaking about, especially when connected to cognition, for which we now (should) know more. The questions seem related because either we don't know whether we are zombies or one can solve the problem of zombie identification.

I guess I'm new in the zombieness business.




I know I am conscious, and I can doubt all content of my consciousness, except this one: that I am conscious.

I cannot prove that I am conscious, neither to myself nor to others.

Dolls and sculptures are, with respect to what they represent, if human in appearance, a sort of zombie.
Tomorrow we may be able to put in a museum an artificial machine imitating a human who is sleeping, in such a way that we may be confused and believe it is a dreaming human being ...


The notion of zombie makes sense (logical sense). Its existence may depend on the choice of theory.
With the axiom of comp, a counterfactually correct relation between numbers defines the channel through which consciousness flows (selects the consistent extensions). So with comp we could argue that as far as we are bodies, we are zombies, but from our first person perspective we never are.




But leaving the zombie definition and identification apart, I think current science would/should see no difference between consciousness and cognition: the former is an emergent property of the latter,



I would have said the contrary:

consciousness -> sensibility -> emotion -> cognition -> language -> recognition -> self-consciousness -> ...


(and: number -> universal number -> consciousness -> ...)

Something like that follows, I argue, from the assumption that we are Turing emulable at some (necessarily unknown) level of description.



and just as there are levels of cognition there are levels of consciousness. Between the human being and other animals there is a wide gradation of levels; it is not that any other animal lacks 'qualia'. Perhaps there is an upper level defined by computational limits, and as such once that limit is reached one just remains there, but consciousness seems to depend on the complexity of the brain (size, convolutions or whatever provides the full power) while not being disconnected from cognition. In this view only damaging the cognitive capacities of a person would damage its 'qualia', while its 'qualia' could not get damaged but by damaging the brain, which will likewise damage the cognitive capabilities. In other words, there seems to be no cognition/consciousness duality as long as there is no brain/mind one. The use of the term 'qualia' here looks like a remake of the mind/body problem.



Qualia are the part of the mind consisting in the directly apprehensible subjective experience. Typical examples are pain, seeing red, smelling, feeling something, ... They are roughly the non-transitive part of cognition.


The question here is not the question of the existence of degrees of consciousness, but of the existence of a link between a possible variation of consciousness and the presence of a non-causal perturbation during a particular run of a brain or a machine.


If big blue wins a chess tournament without having used register 344, no doubt big blue would still have won in case register 344 had been broken.


Not with probability 1.0, because given QM the game might have (and  
in other worlds did) gone differently and required register 344.


Correct but irrelevant. We don't assume QM at the start, and if you use QM, you have to reason on the QM normal worlds to make the point relevant. Or you assume QM-comp, and not comp. That is physicalism. And you beg the point, which is that comp -> QM-comp (assuming QM is correct on the physical world).






Some people seem to believe that if big blue was conscious in the first case, it could lose consciousness in the second case. I don't think this is tenable when we assume that we are Turing emulable.


But the world is only Turing emulable if it is deterministic and  
it's only deterministic if everything happens as in MWI QM.


Newtonian mechanics is a counter-example. You lost me; I don't know in which theory you reason.


Also, arithmetical truth is deterministic although only a tiny part of it is computable. Consciousness and matter are higher order notions, some nameable (by numbers), some not. Most, by comp, are not computable. Computable things can have non-computable qualities. By incompleteness, this is a very general phenomenon.


The full 

Re: Free will: Wrong entry.

2010-03-18 Thread m.a.
Bruno,
   Can you clarify the origins of the Lobian Machine? Does it arise out of the theorem of Martin Hugo Löb? Is it shorthand for the lobes of the human brain? What is the difference between a lobian machine and a universal lobian machine? And how do they relate to the question of free will? Many thanks,


   marty a.





  - Original Message - 
  From: Bruno Marchal 
  To: everything-list@googlegroups.com 
  Sent: Wednesday, March 17, 2010 1:30 PM
  Subject: Re: Free will: Wrong entry.




  On 17 Mar 2010, at 14:06, m.a. wrote:


But is there a deliberate feedback (of any kind) between first person and 
UD? 


  No. The UD can be seen as a set of elementary arithmetical truths, realizing, through their many proofs, the many computations. It is the least block-universe for the mindscape. (Assuming comp.)
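
For concreteness, here is a toy sketch in Python of a universal dovetailer; the encoding is illustrative only (Python generators stand in for programs), not Bruno's actual arithmetical construction:

    import itertools

    def program(i):
        """Toy program number i: runs forever, yielding (program, step)."""
        step = 0
        while True:
            yield (i, step)
            step += 1

    def universal_dovetailer():
        """Start program 0, 1, 2, ... and interleave their executions so
        that every program receives unboundedly many computation steps."""
        running = []
        for i in itertools.count():
            running.append(program(i))   # stage i: start program i ...
            for prog in running:         # ... then advance every started
                yield next(prog)         # program by one step

    # The first few events of UD*:
    for event in itertools.islice(universal_dovetailer(), 10):
        print(event)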






How does the UD identify and favor our normal histories? 


  Excellent question. This is the reason why we are hunting white rabbits and white noise. This is why we have to extract the structure of matter and time from a sum over an infinity of computations (those below, or even beside, our level and sphere of definition). If we show that such a sum does not normalize, then we refute comp.






How do the lobian numbers affect the UD? (I think you've answered these questions before but not in ways that are clear to me. Please give it one last try.)    m.a.





  Löbian machines survive only in their consistent extensions. It is the couple lobian-machine/its-realities which emerges from inside UD* (the execution of the UD, or that part of arithmetic).


  The free-will of a lobian number is defined with respect to its most probable realities. It can affect such realities, and be affected by them. But no lobian number/machine/entity/soul (if you think of its first person view) can affect the UD, for the same reason we cannot affect elementary arithmetic (or the physical laws, for a physicalist).


  Look at UD* (the infinite run of the UD), or arithmetic, as the block universe of the mindscape. Matter is a projective view of arithmetic, when viewed by universal numbers from inside it. Normality is ensured by relative self-multiplication, making us both very rare in the absolute, and very numerous in the relative. Like with Everett, except we start from the numbers, and show how to derive the wave, not just the collapse.


  I just explain that if we take comp seriously, the mind-body problem leads to a mathematical body problem.


  Bruno











  http://iridia.ulb.ac.be/~marchal/











Re: Jack's partial brain paper

2010-03-18 Thread Brent Meeker

On 3/17/2010 11:01 PM, Stathis Papaioannou wrote:

On 18 March 2010 16:36, Brent Meeker meeke...@dslextreme.com wrote:

   

Is it coherent to say a black box accidentally reproduces the I/O?  It is over some relatively small number of I/Os, but over a large enough number and range to sustain human behavior - that seems very doubtful.  One would be tempted to say the black box was obeying a natural law.  It would be the same as the problem of induction.  How do we know natural laws are consistent - because we define them to be so.

Jack considers the case where the black box is empty and the remaining
neurological tissue just happens to continue responding as if it were
receiving normal input. That, of course, would be extremely unlikely
to happen, to the point where it could be called magic if it did
happen. But if there were such a magical black box, it would
contribute to consciousness.


   
Suppose there were a man with no brain at all but who just happened to act exactly like a normal person.  Suppose there are no people and your whole idea that you have a body and you are reading an email is an illusion.


But I don't believe in magic.

Brent




Re: Jack's partial brain paper

2010-03-18 Thread L.W. Sterritt

Bruno and others,

Perhaps more progress can be made by avoiding self-referential problems and viewing this issue mechanistically.  Where I start:  Haim Sompolinsky, Statistical Mechanics of Neural Networks, Physics Today (December 1988). He discussed emergent computational properties of large, highly connected networks of simple neuron-like processors.  HP has recently succeeded in making titanium dioxide memristors which behave very like the synapses in our brains, i.e. the memristor's resistance at any time depends upon the last signal passed through it.  Work is underway to make brain-like computers with these devices; see Wei Lu, Nano Letters, DOI:10.1021/nl904092h.  It seems that there is a growing consensus that conscious machines will be built, and perhaps with the new Turing test proposed by Koch and Tononi, their consciousness may be verified. Then we can measure properties that are now speculative.  I guess I'm in the QM camp that believes that what you can measure is what you can know.
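
For readers who want the flavor of the device physics, here is a minimal Python sketch of the linear ion-drift memristor model usually associated with the HP TiO2 device (Strukov et al., Nature 2008); the parameter values below are illustrative placeholders, not measured ones:

    R_ON, R_OFF = 100.0, 16000.0   # ohms: fully doped / fully undoped limits
    MU_V, D = 1e-14, 1e-8          # ion mobility (m^2 s^-1 V^-1), film thickness (m)

    def step(w, i, dt):
        """Drift the doped-region width w under current i (amps) for dt seconds."""
        w = w + MU_V * (R_ON / D) * i * dt
        return min(max(w, 0.0), D)     # the boundary stays inside the film

    def resistance(w):
        """Resistance is a series mix of the doped and undoped regions, so it
        remembers the charge that has passed through the device."""
        x = w / D
        return R_ON * x + R_OFF * (1.0 - x)

    # A positive current pulse lowers the resistance; reversing it raises it.
    w = 0.5 * D
    w = step(w, i=1e-4, dt=1e-3)
    print(resistance(w))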


William




Re: Jack's partial brain paper

2010-03-18 Thread Brent Meeker

On 3/18/2010 10:06 AM, L.W. Sterritt wrote:

Bruno and others,

Perhaps more progress can be made by avoiding self-referential problems and viewing this issue mechanistically.  Where I start:  Haim Sompolinsky, Statistical Mechanics of Neural Networks, Physics Today (December 1988). He discussed emergent computational properties of large, highly connected networks of simple neuron-like processors.  HP has recently succeeded in making titanium dioxide memristors which behave very like the synapses in our brains, i.e. the memristor's resistance at any time depends upon the last signal passed through it.  Work is underway to make brain-like computers with these devices; see Wei Lu, Nano Letters, DOI:10.1021/nl904092h.  It seems that there is a growing consensus that conscious machines will be built, and perhaps with the new Turing test proposed by Koch and Tononi, their consciousness may be verified.


But the question is, "How does a Turing test verify consciousness?"  Is it possible for something to act in a way that seems conscious to us (as my dog does) yet not have the inner experiences that I have?  It seems highly implausible that a being whose structure and internal function is very similar to mine (another person) could act conscious but not be conscious.  But it's not at all clear that would be true of an artificially intelligent being whose internal structure and function was quite different.


Incidentally, it is often forgotten that Turing proposed that the test 
be a contest between a man and a computer to see which one could better 
emulate a woman.


Brent

Then we can measure properties that are now speculative.  I guess I'm 
in the QM camp that believes that  what you can measure is what you 
can know.


William



Re: Jack's partial brain paper

2010-03-18 Thread David Nyman
On 18 March 2010 17:06, L.W. Sterritt lannysterr...@comcast.net wrote:

 Perhaps more progress can be made by avoiding self referential problems and
 viewing this issue mechanistically.

Undoubtedly.

 I guess I'm in the QM camp
 that believes that  what you can measure is what you can know.

But if all that you could know was indeed limited to what you could
measure, there would have to be an infinite regress of measurement.
Before you can know anything in the sense of measuring it, it must
already have appeared in your consciousness.  This is just one of the
ways that the hardness of the problem of consciousness can be
discerned, if it isn't waved away linguistically (or indeed
mathematically).

Whether a TM can ever come to know anything, as opposed to measuring
aspects of its environment, is an open question.  An adequately
ingenious Turing test should indeed be capable of assessing whether a
machine is capable of measuring relevant aspects of its environment
well enough to deal appropriately with a given problem space.  Whether
this counts as evidence that human beings navigate the same problem
space with the same resources and methods is moot.  ISTM that, in
addition to the test, a comprehensive theory of mind for both machine
and human intelligences would be a prerequisite.  But this, of course,
is somewhat more problematic.

David


Re: Jack's partial brain paper

2010-03-18 Thread L.W. Sterritt

David,

I think that I have to agree with your comments.  I do think that we  
will learn something from the quest for conscious machines, perhaps  
not what we had in mind.


Lanny


On Mar 18, 2010, at 10:45 AM, David Nyman wrote:

On 18 March 2010 17:06, L.W. Sterritt lannysterr...@comcast.net wrote:

Perhaps more progress can be made by avoiding self referential problems and viewing this issue mechanistically.

Undoubtedly.

I guess I'm in the QM camp that believes that what you can measure is what you can know.


But if all that you could know was indeed limited to what you could
measure, there would have to be an infinite regress of measurement.
Before you can know anything in the sense of measuring it, it must
already have appeared in your consciousness.  This is just one of the
ways that the hardness of the problem of consciousness can be
discerned, if it isn't waved away linguistically (or indeed
mathematically).

Whether a TM can ever come to know anything, as opposed to measuring
aspects of its environment, is an open question.  An adequately
ingenious Turing test should indeed be capable of assessing whether a
machine is capable of measuring relevant aspects of its environment
well enough to deal appropriately with a given problem space.  Whether
this counts as evidence that human beings navigate the same problem
space with the same resources and methods is moot.  ISTM that, in
addition to the test, a comprehensive theory of mind for both machine
and human intelligences would be a prerequisite.  But this, of course,
is somewhat more problematic.

David



Re: Jack's partial brain paper

2010-03-18 Thread L.W. Sterritt

Brent,

There are some quite interesting observations in the paper by Koch and Tononi, e.g.


Remarkably, consciousness does not seem to require many of the things we associate most deeply with being human: emotions, memory, self-reflection, language, sensing the world and acting in it...


 When we dream, for instance, we are virtually disconnected from the environment - we acknowledge almost nothing of what happens around us, and our muscles are largely paralyzed.  Nevertheless, we are conscious, sometimes vividly and grippingly so.  This mental activity is reflected in electrical recordings of the dreaming brain showing that the corticothalamic system, intimately involved with sensory perception, continues to function more or less as it does in wakefulness...


The output of a neural network computer is not entirely predictable, since it is not running on an instruction set like this computer.  So then, if we succeed in building conscious machines, and they happen to be mostly dreaming, is it easier or harder to test them for consciousness?


William


On Mar 18, 2010, at 10:29 AM, Brent Meeker wrote:


On 3/18/2010 10:06 AM, L.W. Sterritt wrote:


Bruno and others,

Perhaps more progress can be made by avoiding self-referential problems and viewing this issue mechanistically.  Where I start:  Haim Sompolinsky, Statistical Mechanics of Neural Networks, Physics Today (December 1988). He discussed emergent computational properties of large, highly connected networks of simple neuron-like processors.  HP has recently succeeded in making titanium dioxide memristors which behave very like the synapses in our brains, i.e. the memristor's resistance at any time depends upon the last signal passed through it.  Work is underway to make brain-like computers with these devices; see Wei Lu, Nano Letters, DOI:10.1021/nl904092h.  It seems that there is a growing consensus that conscious machines will be built, and perhaps with the new Turing test proposed by Koch and Tononi, their consciousness may be verified.


But the question is, "How does a Turing test verify consciousness?"  Is it possible for something to act in a way that seems conscious to us (as my dog does) yet not have the inner experiences that I have?  It seems highly implausible that a being whose structure and internal function is very similar to mine (another person) could act conscious but not be conscious.  But it's not at all clear that would be true of an artificially intelligent being whose internal structure and function was quite different.


Incidentally, it is often forgotten that Turing proposed that the  
test be a contest between a man and a computer to see which one  
could better emulate a woman.


Brent

Then we can measure properties that are now speculative.  I guess I'm in the QM camp that believes that what you can measure is what you can know.


William




Re: Jack's partial brain paper

2010-03-18 Thread Brent Meeker

On 3/18/2010 12:03 PM, L.W. Sterritt wrote:

Brent,

There are some quite interesting observations in the paper by Koch and Tononi, e.g.


Remarkably, consciousness does not seem to require many of the things 
we associate most deeply with being human: emotions, memory, 
self-reflection, language, sensing the world and acting in it...


I couldn't find their paper (do you have a link or an electronic copy?) but the above sounds doubtful to me.  Could you dream without having any memory?  I never dream I'm an animal or a machine.  I never dream I'm on Jupiter.  My dreams may include things I've never experienced, but they are made up out of pieces that I have experienced.  And of course, if I didn't remember them, how would I know I'd dreamed?


 When we dream, for instance, we are virtually disconnected from the environment - we acknowledge almost nothing of what happens around us, and our muscles are largely paralyzed.  Nevertheless, we are conscious, sometimes vividly and grippingly so.  This mental activity is reflected in electrical recordings of the dreaming brain showing that the corticothalamic system, intimately involved with sensory perception, continues to function more or less as it does in wakefulness...


The output of a neural network computer is not entirely predictable, since it is not running on an instruction set like this computer.  So then, if we succeed in building conscious machines, and they happen to be mostly dreaming, is it easier or harder to test them for consciousness?


I don't think dreaming can so easily be disconnected from perception.  As I recall, experiments with sensory deprivation tanks, which were a fad in the 60's, found that after an hour or so of sensory deprivation the brain tended to enter a loop.  When you're sleeping, and dreaming, you are not sensorially deprived.


Brent




Re: Jack's partial brain paper

2010-03-18 Thread L.W. Sterritt

Brent,

This link should work.  IEEE sometimes makes their articles available  
to non-members and  non-subscribers:


http://spectrum.ieee.org/biomedical/imaging/can-machines-be-conscious/3

If this does not work, please let me know and I'll find another path  
to the article.  I could also go back to the original publication  
which I have somewhere.


William


On Mar 18, 2010, at 1:00 PM, Brent Meeker wrote:


On 3/18/2010 12:03 PM, L.W. Sterritt wrote:


Brent,

There are some quite interesting observations in the paper by Koch and Tononi, e.g.


Remarkably, consciousness does not seem to require many of the  
things we associate most deeply with being human: emotions, memory,  
self-reflection, language, sensing the world and acting in it...


I couldn't find their paper (do you have a link or an electronic copy?) but the above sounds doubtful to me.  Could you dream without having any memory?  I never dream I'm an animal or a machine.  I never dream I'm on Jupiter.  My dreams may include things I've never experienced, but they are made up out of pieces that I have experienced.  And of course, if I didn't remember them, how would I know I'd dreamed?


 When we dream, for instance, we are virtually disconnected from the environment - we acknowledge almost nothing of what happens around us, and our muscles are largely paralyzed.  Nevertheless, we are conscious, sometimes vividly and grippingly so.  This mental activity is reflected in electrical recordings of the dreaming brain showing that the corticothalamic system, intimately involved with sensory perception, continues to function more or less as it does in wakefulness...


The output of a neural network computer is not entirely predictable, since it is not running on an instruction set like this computer.  So then, if we succeed in building conscious machines, and they happen to be mostly dreaming, is it easier or harder to test them for consciousness?


I don't think dreaming can so easily be disconnected from
perception.  As I recall, experiments with sensory deprivation tanks,
which were a fad in the '60s, found that after an hour or so of
sensory deprivation the brain tended to enter a loop.  When you're
sleeping, and dreaming, you are not sensorially deprived.


Brent




William


On Mar 18, 2010, at 10:29 AM, Brent Meeker wrote:


On 3/18/2010 10:06 AM, L.W. Sterritt wrote:


Bruno and others,

Perhaps more progress can be made by avoiding self-referential
problems and viewing this issue mechanistically.  Where I start:
Haim Sompolinsky, Statistical Mechanics of Neural Networks,
Physics Today (December 1988).  He discussed emergent
computational properties of large, highly connected networks of
simple neuron-like processors.  HP has recently succeeded in
making titanium-dioxide memristors which behave very like the
synapses in our brains, i.e. the memristor's resistance at any
time depends upon the last signals passed through it.  Work is
underway to make brain-like computers with these devices; see Wei
Lu, Nano Letters, DOI:10.1021/nl904092h.  It seems that there is
a growing consensus that conscious machines will be built, and
perhaps with the new Turing test proposed by Koch and Tononi,
their consciousness may be verified.
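
To make the memristor-as-synapse idea concrete, here is a minimal toy
sketch in Python (an editorial illustration: the class, parameter names,
and update rule are invented for exposition, not taken from the
Sompolinsky or Lu papers).  The point is only that the device's
conductance depends on the history of signals passed through it, so the
component itself stores a weight.

class MemristiveSynapse:
    """Toy memristor: conductance (1/resistance) drifts with the
    history of signals passed through it.  The linear update rule and
    bounds are illustrative assumptions, not a model of the HP TiO2
    device."""

    def __init__(self, g=0.5, g_min=0.01, g_max=1.0, rate=0.05):
        self.g = g              # conductance: the stored "synaptic weight"
        self.g_min, self.g_max = g_min, g_max
        self.rate = rate        # how strongly each pulse moves the state

    def pulse(self, v):
        # Each voltage pulse shifts the conductance within bounds, so
        # the device "remembers" what has passed through it.
        self.g = min(self.g_max, max(self.g_min, self.g + self.rate * v))
        return self.g * v       # current = conductance * voltage

syn = MemristiveSynapse()
for v in (1.0, 1.0, -0.5, 1.0):
    syn.pulse(v)
print(round(syn.g, 3))          # 0.625: the weight has drifted upward

A large network of such weight-storing elements is, broadly, the kind of
system whose collective behavior Sompolinsky analyzed with the tools of
statistical mechanics.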


But the question is, "How does a Turing test verify
consciousness?"  Is it possible for something to act in a way that
seems conscious to us (as my dog does) yet not have the inner
experiences that I have?  It seems highly implausible that a
being whose structure and internal function is very similar to
mine (another person) could act conscious but not be conscious.
But it's not at all clear that would be true of an artificially
intelligent being whose internal structure and function were quite
different.


Incidentally, it is often forgotten that Turing proposed that the  
test be a contest between a man and a computer to see which one  
could better emulate a woman.


Brent

Then we can measure properties that are now speculative.  I guess
I'm in the QM camp that believes that what you can measure is
what you can know.


William



On Mar 18, 2010, at 1:44 AM, Bruno Marchal wrote:



On 17 Mar 2010, at 19:12, Brent Meeker wrote:


On 3/17/2010 10:01 AM, Bruno Marchal wrote:



On 17 Mar 2010, at 13:47, HZ wrote:

I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but it
behaves as if it does, what makes us not zombies? How do we know we are
not? But more importantly, are there known cases of zombies? Perhaps a
silly question because it might be just a thought experiment, but if
so, I wonder on what evidence one is so freely speaking about,
especially when connected to cognition, for which we now (should) know
more. The questions seem related because either we don't know whether
we are zombies or one can solve the problem of zombie identification.

I guess I'm new in the zombieness business.

Re: Jack's partial brain paper

2010-03-18 Thread L.W. Sterritt

Brent,

I notice that the link that I forwarded opens on the 3rd page; just
select "view all", toward the upper right of the page.


This brief article on consciousness as integrated information may also  
be interesting:


http://spectrum.ieee.org/computing/hardware/a-bit-of-theory-consciousness-as-integrated-information

William




Re: Jack's partial brain paper

2010-03-18 Thread Brent Meeker

Thanks.  I got it.

Some assertions seem dubious:

Primal emotions like anger, fear, surprise, and joy are useful and 
perhaps even essential for the survival of a conscious organism. 
Likewise, a conscious machine might rely on emotions to make choices and 
deal with the complexities of the world. But it could be just a cold, 
calculating engine--and yet still be conscious.


I would say that merely attending to this or that is a form of emotion,
an attachment of value or interest to this or that, and as such is an
essential aspect of consciousness.  You don't have to have those big
primal emotions to be conscious, but I think you need the emotion of
attention.  When they write:


And here's a surprise: the converse is also true. People can attend to
events or objects--that is, their brains can preferentially process
them--without consciously perceiving them. This fact suggests that
/being conscious does not require attention/.


It seems to me they are switching definitions around.  The first
occurrence, "consciously," refers to the inner experience we ordinarily
call consciousness, but the second, "being conscious," refers to
reacting appropriately.


And they don't give any evidence to support:

 The same holds true for the sort of working memory you need to perform 
any number of daily activities--to dial a phone number you just looked 
up or measure out the correct amount of crushed thyme given in the 
cookbook you just consulted. This memory is called dynamic because it 
lasts only as long as neuronal circuits remain active. But as with 
long-term memory, you don't need it to be conscious.


If you've known someone with severe Alzheimer's you may find that
dubious.  It may depend on the duration of short-term memory.  Certainly
forgetting things a few seconds in your past may leave you conscious -
but what about a half-second?
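
The article's notion of "dynamic" memory - activity that persists only
while a circuit stays active - is easy to make concrete.  A minimal toy
sketch in Python (an editorial illustration with assumed parameters, not
a model from the article): a leaky unit holds a value only while a
recurrent loop feeds it back, and the trace fades as soon as the loop is
cut.

# Toy "dynamic" working memory: the remembered value is just ongoing
# activity in a recurrent loop (parameter values are assumptions).
def run(recurrent_gain, steps=50, dt=0.1, tau=1.0):
    x = 1.0                      # activity encoding the remembered item
    for _ in range(steps):
        # leaky decay toward zero, partly offset by self-excitation
        x += dt * (-x + recurrent_gain * x) / tau
    return x

print(round(run(1.0), 3))        # 1.0   - loop intact: the item is held
print(round(run(0.0), 3))        # 0.005 - loop cut: the trace fades

On this picture the half-second question has bite: it asks how brief the
persistence can be made before "conscious of the item" no longer applies.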


They seem to have surreptitiously equated consciousness with intelligence:

But that software will still be far from conscious. Unless the program 
is explicitly written to conclude that the combination of man, gun, 
building, and terrified customer implies robbery, the program won't 
realize that something dangerous is going on.


I'm reminded of the old aphorism, "Intelligence is whatever the
computer can't do yet."  Note that the person without emotion won't
realize that something dangerous is going on either, nor would a
five-year-old, or a Neanderthal.  A person without memory won't know
what a gun or a liquor store is.  A person without language won't be
able to describe what's going on.  I think it's a flaw in their
eliminative arguments: consciousness doesn't require x because here's
an example of someone without x who is conscious.  But you can't apply
that to x1, x2, x3, ... and then conclude someone can be conscious while
lacking all of them.
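
The flaw is a quantifier-scope error, which can be put in symbols (a
sketch in LaTeX notation, with C for "is conscious" and X_i for the
i-th capacity - emotion, memory, language, and so on):

\forall i\,\exists p\,\bigl(C(p)\land\lnot X_i(p)\bigr)
\quad\not\Rightarrow\quad
\exists p\,\bigl(C(p)\land\forall i\,\lnot X_i(p)\bigr)

Each counterexample may be a different person; nothing licenses
collecting all the negations onto a single subject.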


Brent


Re: Jack's partial brain paper

2010-03-18 Thread Stathis Papaioannou
On 19 March 2010 04:01, Brent Meeker meeke...@dslextreme.com wrote:
 On 3/17/2010 11:01 PM, Stathis Papaioannou wrote:

 On 18 March 2010 16:36, Brent Meeker meeke...@dslextreme.com wrote:



 Is it coherent to say a black box accidentally reproduces the I/O?  It is
 over some relatively small number of I/Os, but over a large enough number
 and range to sustain human behavior - that seems very doubtful.  One would
 be tempted to say the black box was obeying a natural law.  It would be
 the same as the problem of induction.  How do we know natural laws are
 consistent - because we define them to be so.


 Jack considers the case where the black box is empty and the remaining
 neurological tissue just happens to continue responding as if it were
 receiving normal input. That, of course, would be extremely unlikely
 to happen, to the point where it could be called magic if it did
 happen. But if there were such a magical black box, it would
 contribute to consciousness.




 Suppose there were a man with no brain at all but who just happened to act
 exactly like a normal person.  Suppose there are no people and your whole
 idea that you have a body and you are reading an email is an illusion.

 But I don't believe in magic.

I don't believe it is possible, but in the spirit of functionalism, the
empty-headed man would still be conscious, just as a car would still
function normally if it had no engine but the wheels turned magically
as if driven by an engine. Jack's point was that fading or absent
qualia in a functionally normal brain are logically possible, because
obviously some qualia would be absent if a part of the brain were
missing and the rest of the brain carried on normally. But I don't see
that that is obvious.
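
The replay reading of the black-box scenario can be sketched in a few
lines of Python (an editorial toy with invented names, assuming the
replaced part's behavior can be captured as an input-to-output record):
a box that merely replays a recorded run is behaviorally identical on
that run, yet supports none of the counterfactuals a computing module
does.

# A module that computes vs. a "black box" that only replays a tape.
def real_module(x):
    # stands in for the replaced tissue: it lawfully computes its output
    return 2 * x + 1

class ReplayBox:
    """Plays back a recorded run; it computes nothing at all."""
    def __init__(self, recorded_outputs):
        self.tape = iter(recorded_outputs)

    def __call__(self, _input_is_ignored):
        return next(self.tape)

history = [1, 2, 3]
box = ReplayBox([real_module(x) for x in history])

# On the recorded history the two are behaviorally indistinguishable:
assert [box(x) for x in history] == [real_module(x) for x in history]

# On a novel (counterfactual) input the module still answers lawfully,
# while the box's tape simply runs out:
print(real_module(10))           # 21

The difference between the two shows up only counterfactually, which is
what Brent's "obeying a natural law" worry points at.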


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-l...@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.