Re: Intelligence-office-whistleblowers-craft-non-human-origin

2023-06-16 Thread Alan Grayson
Scientists Extremely Intrigued to Find Ingredients for Life on Enceladus 
(msn.com) 


On Monday, June 12, 2023 at 1:13:44 PM UTC-6 spudb...@aol.com wrote:

> Interesting intelligences from some nearby solar system appear too good to 
> be true for me. 
>
> On Friday, June 9, 2023 at 08:54:05 AM EDT, Alan Grayson <
> agrays...@gmail.com> wrote: 
>
>
> Stunning UFO crash retrieval allegations deemed ‘credible,’ ‘urgent’ 
> (msn.com) 
> 
>
> On Monday, June 5, 2023 at 11:07:51 PM UTC-6 Alan Grayson wrote:
>
> Whistleblower claims US has bodies of alien species | Banfield | Watch 
> (msn.com) 
> 
>
> On Monday, June 5, 2023 at 8:41:39 PM UTC-6 Alan Grayson wrote:
>
> Whistleblower Claims U.S. Has UFOs Of Non-Human Origin (brobible.com) 
> 
>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-li...@googlegroups.com.
> To view this discussion on the web visit 
>
> https://groups.google.com/d/msgid/everything-list/2858a580-82ae-402f-9880-9b719ca6f584n%40googlegroups.com
>  
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/b56fcf32-efb5-40a1-b2d0-a086a710b23bn%40googlegroups.com.


Re: Intelligence-office-whistleblowers-craft-non-human-origin

2023-06-12 Thread 'spudboy...@aol.com' via Everything List
 Interesting intelligences from some nearby solar system appear too good to be 
true for me. 
On Friday, June 9, 2023 at 08:54:05 AM EDT, Alan Grayson 
 wrote:  
 
 Stunning UFO crash retrieval allegations deemed ‘credible,’ ‘urgent’ (msn.com)

On Monday, June 5, 2023 at 11:07:51 PM UTC-6 Alan Grayson wrote:

Whistleblower claims US has bodies of alien species | Banfield | Watch (msn.com)

On Monday, June 5, 2023 at 8:41:39 PM UTC-6 Alan Grayson wrote:

Whistleblower Claims U.S. Has UFOs Of Non-Human Origin (brobible.com)







Re: Intelligence-office-whistleblowers-craft-non-human-origin

2023-06-09 Thread Alan Grayson
Stunning UFO crash retrieval allegations deemed ‘credible,’ ‘urgent’ 
(msn.com) 


On Monday, June 5, 2023 at 11:07:51 PM UTC-6 Alan Grayson wrote:

> Whistleblower claims US has bodies of alien species | Banfield | Watch 
> (msn.com) 
> 
>
> On Monday, June 5, 2023 at 8:41:39 PM UTC-6 Alan Grayson wrote:
>
>> Whistleblower Claims U.S. Has UFOs Of Non-Human Origin (brobible.com) 
>> 
>>
>



Re: Intelligence-office-whistleblowers-craft-non-human-origin

2023-06-08 Thread Alan Grayson
House Oversight plans UFO hearing after unconfirmed claims of crashed alien 
craft (msn.com) 


On Thursday, June 8, 2023 at 9:55:27 PM UTC-6 Alan Grayson wrote:

> UFO whistleblower has 'impeccable credentials': Reporter Leslie Kean | 
> Watch (msn.com) 
> 
>
> On Monday, June 5, 2023 at 11:07:51 PM UTC-6 Alan Grayson wrote:
>
>> Whistleblower claims US has bodies of alien species | Banfield | Watch 
>> (msn.com) 
>> 
>>
>> On Monday, June 5, 2023 at 8:41:39 PM UTC-6 Alan Grayson wrote:
>>
>>> Whistleblower Claims U.S. Has UFOs Of Non-Human Origin (brobible.com) 
>>> 
>>>
>>



Re: Intelligence-office-whistleblowers-craft-non-human-origin

2023-06-08 Thread Alan Grayson
UFO whistleblower has 'impeccable credentials': Reporter Leslie Kean | 
Watch (msn.com) 


On Monday, June 5, 2023 at 11:07:51 PM UTC-6 Alan Grayson wrote:

> Whistleblower claims US has bodies of alien species | Banfield | Watch 
> (msn.com) 
> 
>
> On Monday, June 5, 2023 at 8:41:39 PM UTC-6 Alan Grayson wrote:
>
>> Whistleblower Claims U.S. Has UFOs Of Non-Human Origin (brobible.com) 
>> 
>>
>



Re: Intelligence-office-whistleblowers-craft-non-human-origin

2023-06-05 Thread Alan Grayson
Whistleblower claims US has bodies of alien species | Banfield | Watch 
(msn.com) 


On Monday, June 5, 2023 at 8:41:39 PM UTC-6 Alan Grayson wrote:

> Whistleblower Claims U.S. Has UFOs Of Non-Human Origin (brobible.com) 
> 
>



Re: Intelligence Consciousness

2015-01-04 Thread John Clark
On Sun, Dec 28, 2014 at 2:33 AM, zibblequib...@gmail.com wrote:


  I've been over this many times on this list, a rock may be conscious


  But there's no reason to entertain a rock is conscious to begin with.


If the rock behaved intelligently then I would think it's conscious, but it
doesn't, so I don't. But you don't think my reasoning is valid, so I want to
know why you believe a rock is not conscious.

In all cases, natural selection sits with the universal principle... the
 laws of symmetry, the conservation laws... all of which are variations on
 the concept Energy. The universal principles are always about energy.
 Natural selection... is just like 'conservation laws', 'symmetry laws',


What the hell??


  The more efficient energetic structure out-endures the lesser.


The organism that gets more of its genes into the next generation
out-competes the competition; "energetic structure" is just unnecessary
bafflegab.


  So all this hocus pocus about consciousness being special and somehow
 immune from natural selection.


If consciousness affects behavior then it is NOT immune from natural
selection, and the Turing test can detect both intelligence and
consciousness. If consciousness has nothing to do with behavior then the
evidence that a rock is conscious is just as good as the evidence that one
of your fellow human beings is. I think a rock is not conscious, my fellow
human beings are, and intelligent behavior is a marker of consciousness.


  Consciousness is the product of millions of small or large efficiency
 differences,


Differences in the efficiency OF DOING SOMETHING. Behavior.

  We draw on common human understandings for the knowledge of being under
 anesthesia or whatever knocks out consciousness.


What's with this "we" business? What makes you think that anybody except
you understands anything?


  in the technological civilization, despite blatantly following a
 completely different sequence than biological evolution... and has
 access to energy sources and material biology never has.


So in effect you're saying that whatever biology came up with (including
consciousness) technology can come up with it too, and do it better. I
agree.


  So if nature came up with feeling first and high level intelligence
 only much much later I don't see why the opposite would be true for our
 computers. It's a hell of a lot easier to make something that feels but
 doesn't think than something that thinks but doesn't feel.


  Yeah?


Yeah.


  Historical biology was driven by NATURAL SELECTION.


And random mutation plus natural selection is a ridiculously slow and
inefficient process; it is also incredibly cruel, but until it got around
to inventing brains (after about 3 billion years of screwing around) that
was the only way complex things could get made. However, now we have brains,
and brains begat technology, and it will very soon far outstrip anything in
biology.


  Conscious intelligent technological beings choose their own preferred
 sequence.


So there was a reason that being X went left rather than right: preference
Y caused him to go in that direction. And cog X in the cuckoo clock turned
left rather than right because cog Y caused it to go in that direction.

 There are elements of truism in what you say here. The Turing test
 may SEE insights popping up about intelligence and consciousness. Why not.
 But the point is: the test does not DEPEND on any useful measurements of
 such quantities taking place. More critically, the test does not DEPEND on
 non-vague definitions of intelligence or consciousness. There is NO
 DEPENDENCE on progress being made defining and understanding this pair of
 nebulous, vague conceptions.


Can't comment, I don't know what any of that means. Niels Bohr said "I
refuse to speak more clearly than I think"; perhaps you feel the same way.

   I repeat my question: if you don't use the same thing that the Turing
 Test uses, behavior, how in the world do you tell the difference between
 a genius and a moron?



 You don't understand the Turing test.


Fine, I don't understand the Turing Test, but I repeat my question for a
third time: if it isn't behavior, how do you tell the difference between a
genius and a moron?

  John K Clark



Re: Intelligence Consciousness

2015-01-03 Thread zibblequibble


(This message consisted entirely of a truncated verbatim quote of the
2014-12-27 zibblequibble post below.)

Re: Intelligence Consciousness

2014-12-27 Thread zibblequibble


On Tuesday, December 23, 2014 4:47:09 PM UTC, Bruno Marchal wrote:


 On 22 Dec 2014, at 20:14, LizR wrote:

 Sometimes allegedly conscious beings behave very unintelligently. However 
 using Bruno's distinction intelligent behaviour is conscious (goal directed 
 etc) but competent behaviour isn't.

 So we have 3 classes of being

 1 conscious
 2 intelligent
 3 competent

 2 implies 1 but 1 doesn't imply 2, so 1 is wider than 2. 3 is exclusive
 from 1 and 2.

 But how then to distinguish competence from intelligence?



 With a joke.

 Competence discerns and builds of itself. 

 Intelligence laughs of itself.

 Intelligence is needed to recognize our errors, which is needed to develop
 competence, but competence, when developed, can make intelligence sleepy,
 laughing at the others, feeling superior, saying a lot of stupidities, etc.


Competence is admirable, always. Never mocked or belittled. It takes
10 years to hit threshold competence in a major field of operation. It can't
be sped up by intelligence. It isn't driven by intelligence much either.

I couldn't endorse your rather histrionic turn of phrase... intelligence
should be cherished. Not many have a brain with the kind of reach necessary
for breakthrough physics, invention, and social and economic revolution. Our
geniuses are rarities... we must all try to find them, and be sure to keep
their way clear. We owe everything to the few genius minds. Feeling it's
all so unfair... I want to be a genius, who says I shall not?
Tough... thank lucky stars for that peachy tight bum you've got, and the
fact you've never had to earn a living.

Intelligence mostly gets pissed away... and where so, intelligence is
utterly worthless. I don't respect intelligence absent
accomplishment... it's totally fucking unearned... like a pretty face.
Breakthrough physics or bust, genius boy.

Competence... on the other hand... doesn't exist unless it passes muster.
Competence is respected... it's much more worthy than
intelligence... it's the stuff of the higher human things. The great
adventurer, mountaineer, discoverer... these are men with broad
competencies.

The only point of order is just that, it isn't intelligence. 



Re: Intelligence Consciousness

2014-12-27 Thread zibblequibble


On Monday, December 22, 2014 6:59:25 PM UTC, John Clark wrote:

 On Mon, Dec 22, 2014 at 6:20 AM, zibble...@gmail.com wrote:

   Something can be conscious but not intelligent, but if it's 
 intelligent then it's conscious. Consciousness is easy but intelligence  


  John - take the amount of new knowledge you assert in just the above
 sentence. From where or what do you acquire this position?


 I've been over this many times on this list, a rock may be conscious 


But there's no reason to entertain a rock is conscious to begin with.
Orchestrating consciousness is a mind-bogglingly complex accomplishment. As
conscious beings this is reasonable from observation.

We boldly conjecture vast complexity and stick our necks out in doing so.
If no such complexity is there... we are all washed up and falsified. So
it's a big risk... but it pays off... because we observe our brains are the
most complex objects in the universe... on some density measure of
complexity per cc.

You've absolutely no rational logical basis for starting with the assumption
consciousness is something generated any old how. So you spun yourself
dizzy... all a-fluster... intoxicating potions of special-case Darwinian
Natural Selection. You think natural selection has to *see* subjective inner
experience? Why... it doesn't have to *see* subjective tree-eye views of
ecological niches... or *see* the highly complex computational modes of
the Liver. Or Bladder. Never sees the pretty girl. Mirror's reflection. The
evil of the psychopathic sadist... NS knows evil all the same.

In all cases, natural selection sits with the universal principle... the
laws of symmetry, the conservation laws... all of which are variations on
the concept Energy. The universal principles are always about energy.
Natural selection... is just like 'conservation laws', 'symmetry laws',
non-creative laws... and so on... just contexts of expression for energy.
Natural Selection is simply one further context of energy.

The more efficient energetic structure out-endures the lesser. Because
they are one and the same thing... at different points in history. The more
efficient structure is the young low-entropy epoch. The lesser efficient
structure is the structure in its old age. Natural selection is a turn of
phrase... the more efficient energetic structure simply will out-endure.

So all this hocus pocus about consciousness being special and somehow
immune from natural selection... really is a big pile of steaming cock and
bull, John. Consciousness is the product of millions of small or large
efficiency differences, both in terms of itself and in terms of some
abstract problem space... a problem that came to be solved by the invention
of consciousness.



but because it doesn't behave intelligently I (and you too) assume it is
 not. And neither of us could function if we thought we were the only
 conscious being in the universe, so we assume that our fellow human beings
 are conscious too, but not all the time: not when they are sleeping or under
 anesthesia or dead, in other words when they are not behaving
 intelligently.


No... that's resting on a fallacy. We draw on common
human understandings for the knowledge of being under anesthesia or
whatever knocks out consciousness. You've no business adding your arbitrary
layer 'in other words not behaving intelligently'. Anyone can add as many
layers as they like, but it's just redundant.

 

  

 Some of our most powerful emotions like pleasure, pain, and lust come from 
 the oldest parts of our brain that evolved about 500 million years ago. 
 About 400 million years ago Evolution figured out how to make the spinal 
 cord, the medulla and the pons, and we still have these brain structures 
 today just like fish and amphibians do, and they deal in aggressive 
 behavior, territoriality and social hierarchies. The Limbic System is about 
 150 million years old and ours is similar to that found in other mammals. 
 Some think the Limbic system is the source of awe and exhilaration because 
 it is the active site of many psychotropic drugs, and there's little doubt 
 that the amygdala, a part of the Limbic system, has much to do with fear. 
 After some animals developed a Limbic system they started to spend much 
 more time taking care of their young, so it probably has something to do 
 with love too.


You're going to summarize that with a totally misshapen and confused notion
about why life would do things in this sequence. Yes there it is... I see
it. So you think: if nature did things in that order... The conscious
human intellect, in the technological civilization, despite blatantly
following a completely different sequence than biological
evolution... and has access to energy sources and material biology never
has. And a sequence became defined from goal-seeking hunter-feeler
patterns.

Despite all that an obviously profoundly different 

Re: Intelligence Consciousness

2014-12-23 Thread Bruno Marchal


On 22 Dec 2014, at 20:14, LizR wrote:

Sometimes allegedly conscious beings behave very unintelligently.  
However using Bruno's distinction intelligent behaviour is conscious  
(goal directed etc) but competent behaviour isn't.


So we have 3 classes of being

1 conscious
2 intelligent
3 competent

2 implies 1 but 1 doesn't imply 2, so 1 is wider than 2. 3 is exclusive
from 1 and 2.


But how then to distinguish competence from intelligence?



With a joke.

Competence discerns and builds of itself.

Intelligence laughs of itself.

Intelligence is needed to recognize our errors, which is needed to
develop competence, but competence when developed can make
intelligence sleepy, laughing at the others, feeling superior, saying
a lot of stupidities, etc.


Competence is the art of winning.
Intelligence is the art of losing.

Somehow.

Bruno










http://iridia.ulb.ac.be/~marchal/





Re: Intelligence Consciousness

2014-12-23 Thread Bruno Marchal


On 23 Dec 2014, at 01:37, Russell Standish wrote:


On Sat, Dec 20, 2014 at 09:39:57PM -0800, meekerdb wrote:

consciousness is favored.  Personally, I think "integrated" is a
vague concept and amounts to "and then a miracle happens" without
some further elucidation.

Brent



See arXiv:1405.0126 for a quite explicit quantification of what
integrated means.

I haven't yet had a chance to fully digest the paper, or take a
position on it, but it seems the charge of it being a vague concept is
not valid.


Interesting. The conclusions are close to what we can expect from
diagonalization, without diagonalization.
They ask for []p & p, and get only the []p, and grasp well the
insuperable problem of getting the t (without going to a meta-level
where we can take it temporarily for granted, as when we manage the
theology of a machine much simpler than us).


Maybe I assume here that []p & p integrates the []p, from the person
perspective. The []p can only be felt as some multiple disconnected
assembling, like a body.
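[For readers outside the thread: the bracket notation here comes from provability logic, where []p reads "p is provable"; the archive has stripped the conjunction symbol, which should read []p & p. A minimal sketch of the distinction being drawn, in standard Gödel–Löb (GL) terms rather than anything claimed by the paper under discussion:]

```latex
% The "knower" conjoins provability with truth:
\[
  K p \;:=\; \Box p \land p
\]
% K p \to p holds by definition of the conjunction, but reflection
% for provability, \Box p \to p, is not a theorem of GL.
% Loeb's theorem gives:
\[
  \vdash_{GL} \; \Box(\Box p \to p) \to \Box p
\]
% so a machine that proved its own reflection principle would thereby
% "prove" p itself; the truth conjunct can only be supplied at a
% meta-level, taken temporarily for granted.
```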


You are right: using "vague" is not valid here.

Bruno





Cheers
--


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

Latest project: The Amoeba's Secret
(http://www.hpcoders.com.au/AmoebasSecret.html)




http://iridia.ulb.ac.be/~marchal/





Re: Intelligence Consciousness

2014-12-22 Thread zibblequibble


On Sunday, December 21, 2014 11:05:08 PM UTC, John Clark wrote:

 On Sun, Dec 21, 2014 at 2:45 PM, zibble...@gmail.com wrote:

  Yesterday you said you had to conclude if the test detected 
 consciousness well it must also detect intelligence.


 That's not what I said, you've got it backward. Something can be conscious 
 but not intelligent, but if it's intelligent then it's conscious. 
 Consciousness is easy but intelligence  


John - take the amount of new knowledge you assert in just the above
sentence. From where or what do you acquire this position?


I mean, I would say that's fundamental... can you show your workings? What
due diligence did you perform? It's a must to do it... because it is very
easy to assemble some components... such as concepts... and feel like
something was discovered. Look again and that assembly was one of hundreds
of variants... meaning NO VALUE!

 The Turing Test does not detect either one


 I am certain you have met people in your life that you wouldn't hesitate 
 to call brilliant, and you've met people you'd call complete morons, but if 
 you don't examine the same thing that the Turing Test does, behavior, how 
 do you make that determination?

John, sure I would examine something empirically if necessary. It often
isn't. The way I can know what the Turing test was set up to do is
because it is straightforward to set up the test with all three measures.

*Intelligence*

*the human*

consciousness... Not Direct


   - We can eliminate consciousness as a direct detection because there is 
   no knowledge for that in the Test. 
   - Now we entertain Intelligence. 
   - So it comes down to the logic of the Test. If Turing proposed a test 
   that was based on intelligence testing, what is his reason for leaving out 
   intelligence testing, which was highly standardized in his time and 
   eminently automatable/systemisable?
   - There's no evidence Turing was a Luddite. He already had incredible 
   automation experience and ability.
   - If it was intelligence testing, I would argue that part of the Test, 
   or at least a large amount of it, would have been automated at that time. 
   No need to wait for A.I.; get humans to take the test. Of course all of 
   this is completely standard and automated today.
   - In summary, there's no way to make sense of his Test as intelligence 
   testing.
   - I mean... it's not efficient getting a human to decide the way Turing 
   says. It would work vastly better by double-blind testing lots of humans 
   and throwing in the A.I. at multiple places. There are thousands of tests 
   that could be taken. Personality, tricks, lying, deliberately lying, 
   coercion, pain...
   - The right way to do that would be double-scattering the A.I. 
   everywhere, thousands of tests. And then at the end, for the human 
   interaction component, one of our top boys thinks up, without peeping, 
   something that should be there in those tests for every human interaction, 
   and much less so, gone down an octave, for every interval it's the A.I.

He makes his prediction, and another team tests for it. The analysis is
normalized, and if he's right then the A.I. was detected. If he's wrong, his
theory and all the tests, and the maths, and checking, is all fed into a
copy of the A.I. That copy is going to be version 2, but it's a long way off.
The A.I. and its technicians spend a lot of time getting an algorithm to
absorb that failed theory and so make the A.I. better at this game next
time.
Then, staying with the version, there's a queue outside; a lot of people want
to take their go.
Make 30 duplicates of version one, and get 30 at a time defining theory
and getting it tested.
Open up new varieties of theory. Now we've got computer scientists and
psychologists coming up with environments involving a lot of time making the
setup look like it's for one thing, when actually it isn't. It could really
be for something immensely simple... anything by which a human - every human -
can be spun and kept off balance and suddenly, it happens: the little
voltage, or the white noise, or perhaps we use one of those tricks that
harness human visual perception.

On the other side, the A.I. would be amassing large new knowledge of the 
sort of things humans will try. The A.I. would be duplicated several more 
times, now to be scattered among all 'roles'. Everything would be double 
blind. The A.I. might get ahead and reverse everything out, and maybe bring 
about a ridiculous, disturbing and definitely very entertaining mirroring, 
in which, say, a particular theory test is manipulated such that the A.I. is 
identified! But it turns out to be the guy's sister-in-law. 

Anyway, one could go on. But the point of the illustration is that testing 
for intelligence and psychology and so on...it could be harnessed if the 
process was more than just a test but also an A.I. evolution density point. 

That's the way I'd do it for intelligence. 
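A minimal sketch, in Python, of the double-blind detection loop described above. Everything here (the marker scores, the amount of separation, the names) is hypothetical illustration of the protocol, not a real test:

```python
import random

def run_double_blind_trial(n_humans=29, seed=0):
    """One round of the protocol sketched above: scatter one A.I. among
    anonymized human test slots, score a predicted 'marker' for each slot,
    and have a blinded judge flag the outlier slot."""
    rng = random.Random(seed)

    # Hypothetical marker: a quantity the theorist predicts will show up
    # in every human response, and "down an octave" for the A.I.
    slots = [("human", rng.gauss(1.0, 0.1)) for _ in range(n_humans)]
    slots.append(("ai", rng.gauss(0.4, 0.1)))
    rng.shuffle(slots)                    # double blind: order hides labels

    # The judge sees only the anonymized scores, never the labels.
    scores = [score for _, score in slots]
    flagged = min(range(len(scores)), key=scores.__getitem__)

    # A separate team unblinds and checks whether the prediction held.
    return slots[flagged][0] == "ai"

# A failed prediction would be fed back to improve the next version;
# here we just measure how often this particular marker finds the A.I.
hits = sum(run_double_blind_trial(seed=s) for s in range(50))
print(f"detected in {hits}/50 trials")
```

The point of the sketch is only the structure: prediction made blind, tested by a separate team, and the outcome fed back into the next round.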

Re: Intelligence Consciousness

2014-12-22 Thread zibblequibble


On Monday, December 22, 2014 12:38:35 PM UTC, Bruno Marchal wrote:


 On 22 Dec 2014, at 00:05, John Clark wrote:

 On Sun, Dec 21, 2014 at 2:45 PM,  zibble...@gmail.com 
 wrote:

  Yesterday you said you had to conclude if the test detected 
 consciousness well it must also detect intelligence.


 That's not what I said, you've got it backward. Something can be conscious 
 but not intelligent, 


 We agree on this.


The concurrence is entirely innocent. A virtuous snippet of altruistic 
better nature...it's individuals pushing that, and not just humans but way 
back. Not natural selection. Individuals. Critters with the Trait. I give 
thanks for them. For 500 million years they knocked at the door with their 
single message. Those special critters, their message simply: there is 
another way to be. 

Look what a beautiful thing for a lizard to think. But at the time it was 
inflammatory, threatening...it's not that we were stupid...we knew what 
the words meant. But we loved eating each other. And those words were a 
threat to our way of life. And they still are, and that's why I'm going to 
eat you. 

[The above is what I decide to think was true. Because why should you two 
have all the fun. Oh look, there's a squirrel.]

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Intelligence Consciousness

2014-12-22 Thread Bruno Marchal


On 22 Dec 2014, at 18:12, zibblequib...@gmail.com wrote:




On Monday, December 22, 2014 12:38:35 PM UTC, Bruno Marchal wrote:

On 22 Dec 2014, at 00:05, John Clark wrote:


On Sun, Dec 21, 2014 at 2:45 PM,  zibble...@gmail.com wrote:

 Yesterday you said you had to conclude if the test detected  
consciousness well it must also detect intelligence.


That's not what I said, you've got it backward. Something can be  
conscious but not intelligent,


We agree on this.

The concurrence is entirely innocent. A virtuous snippet of  
altruistic better nature...it's individuals pushing that, and not  
just humans but way back. Not natural selection. Individuals.  
Critters with the Trait. I give thanks for them. For 500 million years  
they knocked at the door with their single message. Those special  
critters, their message simply: there is another way to be.


Look what a beautiful thing for a lizard to think. But at the time  
it was inflammatory, threatening...it's not that we were  
stupid...we knew what the words meant. But we loved eating each  
other. And those words were a threat to our way of life. And they  
still are, and that's why I'm going to eat you.


[The above is what I decide to think was true. Because why should  
you two have all the fun. Oh look, there's a squirrel.]



So we agree? It is not entirely clear to me.

Bruno








http://iridia.ulb.ac.be/~marchal/





Re: Intelligence Consciousness

2014-12-22 Thread John Clark
On Mon, Dec 22, 2014 at 6:20 AM, zibblequib...@gmail.com wrote:

  Something can be conscious but not intelligent, but if it's intelligent
 then it's conscious. Consciousness is easy but intelligence


  John - take the amount of new knowledge you assert in just the above
 sentence. From where or what do you acquire this position?


I've been over this many times on this list, a rock may be conscious but
because it doesn't behave intelligently I (and you too) assume it is not.
And neither of us could function if we thought we were the only conscious
being in the universe so we assume that our fellow human beings are
conscious too, but not all the time, not when they are sleeping or under
anesthesia or dead, in other words when they are not behaving
intelligently.

Some of our most powerful emotions like pleasure, pain, and lust come from
the oldest parts of our brain that evolved about 500 million years ago.
About 400 million years ago Evolution figured out how to make the spinal
cord, the medulla and the pons, and we still have these brain structures
today just like fish and amphibians do, and they deal in aggressive
behavior, territoriality and social hierarchies. The Limbic System is about
150 million years old and ours is similar to that found in other mammals.
Some think the Limbic system is the source of awe and exhilaration because
it is the active site of many psychotropic drugs, and there's little doubt
that the amygdala, a part of the Limbic system, has much to do with fear.
After some animals developed a Limbic system they started to spend much
more time taking care of their young, so it probably has something to do
with love too.

It is our grossly enlarged neocortex that makes the human brain so unusual
and so recent, it only started to get large about 3 million years ago and
only started to get ridiculously large less than one million years ago. It
deals in deliberation, spatial perception, speaking, reading, writing and
mathematics; in other words everything that makes humans so very different
from other animals. The only new emotion we got out of it was worry,
probably because the neocortex is also the place where we plan for the
future.

So if nature came up with feeling first and high level intelligence only
much much later I don't see why the opposite would be true for our
computers. It's a hell of a lot easier to make something that feels but
doesn't think than something that thinks but doesn't feel.

 I am certain you have met people in your life that you wouldn't hesitate
 to call brilliant, and you've met people you'd call complete morons, but if
 you don't examine the same thing that the Turing Test does, behavior, how
 do you make that determination?


  John, sure I would examine something empirically


And that is exactly what the Turing Test does.

 if necessary. It often isn't.


It often isn't?!! Then I repeat my question, if you don't use the same
thing that the Turing Test uses, behavior,  how in the world do you tell
the difference between a genius and a moron?

 So it comes down to the logic of the Test. If Turing proposed a test that
 was based on intelligence testing what is his reason for leaving
 intelligence testing, which was highly standardized in his time,


Turing didn't need to prove what the best way to test for intelligence is,
he didn't even need to explain what it means; all he was saying is that
whatever intelligence is and whatever method you use to test for it you
should use the same method for both humans and machines.  And the Turing
Test is not perfect even when applied to humans, sometimes a human can
appear to be smarter or dumber than he really is, nevertheless it remains
valuable because despite its flaws it's all we've got.

 John K Clark



Re: Intelligence Consciousness

2014-12-22 Thread LizR
Sometimes allegedly conscious beings behave very unintelligently. However,
using Bruno's distinction, intelligent behaviour is conscious (goal-directed
etc.) but competent behaviour isn't.

So we have 3 classes of being

1 conscious
2 intelligent
3 competent

2 implies 1 but 1 doesn't imply 2, so 1 is wider than 2. 3 is exclusive from
1 and 2.

But how then to distinguish competence from intelligence?



Re: Intelligence Consciousness

2014-12-22 Thread John Clark
On Mon, Dec 22, 2014  LizR lizj...@gmail.com wrote:


  So we have 3 classes of being
  1 conscious
  2 intelligent
  3 competent


And what is the difference between intelligent and competent? I will tell
you. If a computer does it then it's just competent, but if a human does
the exact same thing then it's intelligent.

 John K Clark



Re: Intelligence Consciousness

2014-12-22 Thread meekerdb

On 12/22/2014 11:14 AM, LizR wrote:
Sometimes allegedly conscious beings behave very unintelligently. However using Bruno's 
distinction intelligent behaviour is conscious (goal directed etc) but competent 
behaviour isn't.


So we have 3 classes of being

1 conscious
2 intelligent
3 competent

2 implies 1 but 1 doesn't imply 2, so 1 is wider than 2. 3 is exclusive from 1 
and 2.

But how then to distinguish competence from intelligence?


One way is to equate intelligence with an ability to learn, to expand the field of 
competence.  So Deep Blue is competent at chess but not very intelligent.


Brent
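Brent's way of drawing the line above, equating intelligence with the ability to learn, i.e. to expand the field of competence, can be sketched as follows (the agents and the toy task are purely illustrative):

```python
class CompetentAgent:
    """Competent in Brent's sense: one fixed skill, never improves.
    Like Deep Blue, it does its one task well and nothing else."""
    def act(self, x):
        return x * 2                      # hard-wired rule for one task

class LearningAgent:
    """Intelligent in Brent's sense: expands its competence by
    adjusting an internal model from feedback."""
    def __init__(self):
        self.w = 0.0
    def act(self, x):
        return self.w * x
    def learn(self, x, target, lr=0.1):
        error = target - self.act(x)
        self.w += lr * error * x          # simple gradient step

fixed, learner = CompetentAgent(), LearningAgent()
for _ in range(200):                      # teach a NEW task: tripling
    learner.learn(1.0, 3.0)

print(fixed.act(1.0), round(learner.act(1.0), 3))  # learner converges to 3.0
```

The fixed agent stays at its built-in competence forever; only the learner acquires the new one, which is exactly the distinction being proposed.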



Re: Intelligence Consciousness

2014-12-22 Thread Russell Standish
On Sat, Dec 20, 2014 at 09:39:57PM -0800, meekerdb wrote:
 consciousness is favored.  Personally, I think integrated is a
 vague concept and amounts to and then a miracle happens without
 some further elucidation.
 
 Brent
 

See arXiv:1405.0126 for a quite explicit quantification of what
integrated means.

I haven't yet had a chance to fully digest the paper, or take a
position on it, but it seems the charge of it being a vague concept is
not valid.

Cheers
-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

 Latest project: The Amoeba's Secret 
 (http://www.hpcoders.com.au/AmoebasSecret.html)




Re: Intelligence Consciousness

2014-12-22 Thread meekerdb

On 12/22/2014 4:37 PM, Russell Standish wrote:

On Sat, Dec 20, 2014 at 09:39:57PM -0800, meekerdb wrote:

consciousness is favored.  Personally, I think integrated is a
vague concept and amounts to and then a miracle happens without
some further elucidation.

Brent


See arXiv:1405.0126 for a quite explicit quantification of what
integrated means.

I haven't yet had a chance to fully digest the paper, or take a
position on it, but it seems the charge of it being a vague concept is
not valid.


It's interesting that they don't want lossy information integration:

"While it seems intuitive for the brain to discard irrelevant details from 
sensory input, it seems undesirable for it to also hemorrhage meaningful 
content. In particular, memory functions must be vastly non-lossy, otherwise 
retrieving them repeatedly would cause them to gradually decay."


Yet that is exactly what is observed.  Each time you recall something you modify it, so it 
tends to become a confabulation.


Their concept of integration is simply "scrambled together", so that what we 
would intuitively call a piece of information cannot be physically localized or 
disentangled.  They quantify this by edit distance.  But if I understand it 
correctly, the same scrambling that makes editing impossible would also make 
integrating new information impossible.  And while information may be difficult 
to localize, it does get disentangled, so that when you ask someone a question 
they can answer that specific question.


Brent
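For reference, the "edit distance" mentioned above can be made concrete with the standard Levenshtein computation; this is a minimal sketch of the generic metric, not necessarily the exact variant the paper uses:

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

# A heavily "scrambled" encoding sits far, in edit distance, from any
# simple, locally editable encoding of the same information.
print(levenshtein("kitten", "sitting"))  # → 3
```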



Re: Intelligence Consciousness

2014-12-21 Thread zibblequibble


On Sunday, December 21, 2014 6:46:38 AM UTC, Jason wrote:

 As with split brain patients and other humans, if some part of the brain 
 isn't connected to the part of the brain that talks, we really can't 
 conclude those other parts doing other processing are unconscious. It's 
 like me concluding you're not conscious because I don't know what you're 
 thinking. 
 *It's a dangerous line of reasoning. *
 Jason 


“Mankind is facing a crossroad - one road leads to despair and utter 
hopelessness and the other to total extinction.”

Woody Allen.



Re: Intelligence Consciousness

2014-12-21 Thread zibblequibble


On Sunday, December 21, 2014 2:20:01 AM UTC, John Clark wrote:

 On Sat, Dec 20, 2014 at 3:07 PM, zibble...@gmail.com 
 wrote:

  Part of how you sum up your core insight: that consciousness has no 
 detectable objective reality


 No, I'm saying that consciousness DOES have a detectable objective reality 
 if and only if it's a brute fact that consciousness is the way data feels 
 like when it is processed intelligently. And I'm saying that human beings 
 can detect intelligent behavior and so can the process that produced them, 
 Evolution. 
  

  evolution cannot detect consciousness.


 If I accept that Darwin was correct and if I also accept that John K Clark 
 is conscious then I am forced by logic to conclude that consciousness is 
 indeed the way data feels like when it is processed intelligently. 

 As a corollary I  MUST also conclude that to whatever degree the Turing 
 Test is successful at detecting intelligence then it must be equally 
 successful at detecting consciousness.   

   John K Clark


 With the other parts all cut away, it isn't obvious that what you have 
there amounts to a case. Do you have such a thing in US common law 
as the ruling, common enough here, of "no case to answer" - grounds for 
dismissal? 



Re: Intelligence Consciousness

2014-12-21 Thread John Clark
On Sun, Dec 21, 2014 at 12:39 AM, meekerdb meeke...@verizon.net wrote:

 A part of its information processing system that is highly integrated
 will indeed be conscious. However, IIT research has shown that for many
 integrated systems, one can design a functionally equivalent feed-forward
 system that will be unconscious. This means that so-called “p-zombies” can,
 in principle, exist: systems that behave like a human and pass the Turing
 test for machine intelligence, yet lack any conscious experience
 whatsoever. Many current “deep learning” AI systems are of this p-zombie
 type. Fortunately, integrated systems such as those in our brains typically
 require much fewer computational resources than their feed-forward “zombie”
 equivalents, which may explain why evolution has favored them and made us
 conscious.


If correct then it would be easier to make a super intelligent conscious
computer than to make a super intelligent non-conscious computer; as I've
said consciousness is easy but intelligence is hard. But the trouble is
even if the Integrated information theory is correct there is no way you
could ever prove it's correct. And I don't know why he said made us
conscious, he should have said made me conscious.

  John K Clark



Re: Intelligence Consciousness

2014-12-21 Thread zibblequibble


On Sunday, December 21, 2014 5:53:39 PM UTC, John Clark wrote:

 On Sun, Dec 21, 2014 at 12:39 AM, meekerdb meek...@verizon.net 
 wrote:

  A part of its information processing system that is highly integrated 
 will indeed be conscious. However, IIT research has shown that for many 
 integrated systems, one can design a functionally equivalent feed-forward 
 system that will be unconscious. This means that so-called “p-zombies” can, 
 in principle, exist: systems that behave like a human and pass the Turing 
 test for machine intelligence, yet lack any conscious experience 
 whatsoever. Many current “deep learning” AI systems are of this p-zombie 
 type. Fortunately, integrated systems such as those in our brains typically 
 require much fewer computational resources than their feed-forward “zombie” 
 equivalents, which may explain why evolution has favored them and made us 
 conscious.


 If correct then it would be easier to make a super intelligent conscious 
 computer than to make a super intelligent non-conscious computer; as I've 
 said consciousness is easy but intelligence is hard. But the trouble is 
 even if the Integrated information theory is correct there is no way you 
 could ever prove it's correct. And I don't know why he said made us 
 conscious, he should have said made me conscious.


What I really feel sure of is that you'd do your own theory no injury, and 
almost certainly develop it further and better...should you entertain 
taking a couple of concrete steps, such as, for instance, performing a 
personal reassessment of Turing's Test, and what that Test defined and what 
it simply did not, and could not have. One good reason is getting sight of 
the genius Alan Turing. There are a lot of things that get asserted or 
assumed round these parts that Alan Turing simply would not have asserted 
or assumed in any circumstance. 

Basically because he would not have attached positive value to speculative 
conjecture involving advances with no foreseeable day of reckoning their 
value. There's an exponentially larger prospect of leaving the overall 
situation greatly worsened by any action such as this, that involves, 
basically, large assumptions. 

Look at his test. I only thought about his Test properly a day or two back. 
Prior to that I had regarded it as totally worn out and redundant long 
since. I had reasons. One strong one being that Turing himself had he lived 
would have returned to this matter many times. Many times. Had he lived, 
there would be little or nothing in play now that was the same unvarnished 
thing Turing said 65 years before. Turing would have gone on to invent 
theories with profound and enlightening predictions. He would have finished 
what he barely started in the domain of the universal principle of 
computation. That wouldn't have been in play for 50 years. 

The exception would be the Turing Test. Which is not to say it would not 
have been superseded by much higher precision, much more knowledge-rich 
tests. That may well have been so. But the Turing Test would still remain an 
extremely well designed structure, by a man who, self-evidently by the Test 
itself, was and knew what it was to be a genius. He was so minimal in 
everything he sought to break ground on. 

Yesterday you said you had to conclude that if the test detected 
consciousness, well, it must also detect intelligence. The Turing Test does 
not detect either one to any definable standard. It does not. Nor does it 
depend in any sense on doing so. Suppose Turing had defined the test 
differently, such that the proposal was that a human would apply 
intelligence tests from psychometrics and I.Q., and perhaps one of the 
ultra-high-I.Q. untimed tests such as the Mega Test or the Titan or 
whatever. Had Turing designed a test to be that way, his test would have 
been exactly what I had been assuming it was: low or no noteworthy 
value-add. It'd be another half-assed boiled egg about A.I. 

It's almost as if you want his test to be like that. You surely see that 
the price would be a test that no longer embodied the property of 
self-evident truth, and the tautological value given to Darwin of "once 
heard, understood; backdated to birth; independently invented by moi": in 
short, we say of natural selection that it must be true. Same with the 
Turing Test. 

But that property is only there because Turing never steps on speculative 
turf like intelligence or consciousness. Had he, then we'd have a 
speculative theory. Instead he returned to the self-conception of human 
conscious intelligence as a human universal familiar to every human. Turing 
didn't say our best and brightest would need to be there. He didn't raise 
the A.I.'s massively higher I.Q. as a problem for the Test. Because it 
wasn't a problem, regardless of what the differences were. Because it wasn't 
about intelligence. Nor consciousness. It was about, and only about, the 
event that an A.I. emerged that identically replicated human conscious 
intelligence. Identically. In the event 

Re: Intelligence Consciousness

2014-12-21 Thread John Clark
On Sun, Dec 21, 2014 at 2:45 PM,  zibblequib...@gmail.com wrote:

 Yesterday you said you had to conclude if the test detected consciousness
 well it must also detect intelligence.


That's not what I said, you've got it backward. Something can be conscious
but not intelligent, but if it's intelligent then it's conscious.
Consciousness is easy but intelligence

 The Turing Test does not detect either one


I am certain you have met people in your life that you wouldn't hesitate to
call brilliant, and you've met people you'd call complete morons, but if
you don't examine the same thing that the Turing Test does, behavior, how
do you make that determination?

 John K Clark



Re: Intelligence Consciousness

2014-12-21 Thread LizR
On 22 December 2014 at 12:05, John Clark johnkcl...@gmail.com wrote:

 On Sun, Dec 21, 2014 at 2:45 PM,  zibblequib...@gmail.com wrote:

  Yesterday you said you had to conclude if the test detected
 consciousness well it must also detect intelligence.


 That's not what I said, you've got it backward. Something can be conscious
 but not intelligent, but if it's intelligent then it's conscious.
 Consciousness is easy but intelligence

  The Turing Test does not detect either one


The Turing test is a heuristic, one we apply all the time to other people
(and animals, robots, characters in films, etc.). It suggests whether some
physical object might have something it is like to be that object.






Re: Intelligence Consciousness

2014-12-20 Thread zibblequibble


On Friday, December 12, 2014 5:45:04 PM UTC, John Clark wrote:



 On Fri, Dec 12, 2014 at 2:33 AM, zibble...@gmail.com 
 wrote:

  I apologize for this unreadable drivel 


 But with a real flare for writing unreadable drivel you could go far in 
 the psychology or philosophy departments at any university.  

   John K Clark


 C.V. multiverse differentiated in accord. 

In the meantime, I've been thinking about your intelligence consciousness. 
May I seek your clarification on what connection you see between your 
thinking and the thinking of Alan Turing, reasonably inferred from his 
Turing Test (say)? 

e.g. 
Part of how you sum up your core insight: that consciousness has no 
detectable objective reality (evolution cannot detect consciousness). 

- You assert the position, and follow up your case for why this must be so, 
by demonstrating what you obviously regard as the absurdity that, should 
your position not be true, evolution detecting consciousness directly would 
have Turing's test detecting consciousness directly also, and us humans 
too. Ergo, your position is underwritten by the genius Alan Turing. On a 
straightforward reading, I should say. 


- Elsewhere you restate and refine a closely related position, via a 
progression of example scenarios: that artificial intelligences 
outperforming humans in tasks involving cognitive heavy lifting are 
intelligent if we are intelligent, and are more so for doing the task 
better. Like the Deep Blue chess A.I.  

You've used other real-world examples - which is a good approach IMHO. Most 
recently, Big Data algorithms involving translations with invariant 
dependence on human translation professionals. It was basically the same 
point in that different context. So there's the same point, which is 
good...the point should be invariant. But there's another invariant 
feature, the implication you include: that all you have said is only what 
can be found by extrapolating Turing's position on intelligence defined in 
the Turing Test.  

Summing up: 
I requested your clarification. In the above two examples, I make explicit 
what I think is front and centre in your argument. Therefore the 
clarification, if possible John, is: do you agree that your 
arguments embed the assertions as exampled? Or am I inaccurate? 

Assuming you concur with my reading, a sneak preview of why the links you 
make are illegitimate; the most obvious first items from the two lists are: 

e.g. 1 (above): No, the Turing Test would remain as it was originally 
defined, regardless of what evolution is found to be able to detect directly 
of consciousness. The Turing Test defines one special-case set of 
conditions, in which the detection of consciousness was viable, definable, 
and airtight. 

What it was not, John, was a definition of intelligence as being as good as 
or better than we are at specific TASKS in isolation. Turing absolutely 
never suggested anything of the sort. No such suggestion is implicit in the 
Test environment. The opposite is true, very much so. 

Turing tacitly acknowledged that neither intelligence nor 
consciousness was captured adequately enough for definitions and benchmarks 
in detail to be approached. He sidestepped intelligence just as he 
sidestepped consciousness. Instead he characterized the human self-insight 
of human intelligence and consciousness. 

In the event an A.I. reproduced identically a human conscious intelligence, 
Turing observed that another human would find a difference were it not so. 

That's very fucking holistic, John. It ain't playing chess. 


Re: Intelligence Consciousness

2014-12-20 Thread John Clark
On Sat, Dec 20, 2014 at 3:07 PM, zibblequib...@gmail.com wrote:

 Part of how you sum up your core insight: that consciousness has no
 detectable objective reality


No, I'm saying that consciousness DOES have a detectable objective reality
if and only if it's a brute fact that consciousness is the way data feels
like when it is processed intelligently. And I'm saying that human beings
can detect intelligent behavior and so can the process that produced them,
Evolution.


  evolution cannot detect consciousness.


If I accept that Darwin was correct and if I also accept that John K Clark
is conscious then I am forced by logic to conclude that consciousness is
indeed the way data feels like when it is processed intelligently.

As a corollary I  MUST also conclude that to whatever degree the Turing
Test is successful at detecting intelligence then it must be equally
successful at detecting consciousness.

  John K Clark

 - you assert the position, and follow up your case why this must be so, by
 demonstrating what you obviously regard as the absurdity of, should your
 position not be true,




 evolution detecting consciousness directly would have Turing's test
 detecting consciousness directly also, and us humans too.  Ergo, your
 position is underwritten by the genius Alan Turing. On a straightforward
 reading, I should say.


 - Elsewhere you restate and refine a closely related position, via a
 progression of example scenarios, that artificial Intelligence
 outperforming humans in tasks involving cognitive heavy lifting are
 intelligent if we are intelligent, and are more so for doing the task
 better. Like the Deep Blue chess A.I.

 You've used other real-world examples, which is a good approach IMHO.
 Most recently Big Data algorithms involving translations with an invariant
 dependence on human translation professionals. It was basically the same
 point in that different context. So there's the same point, which is
 good... the point should be invariant. But there's another invariant
 feature and the implication you include: all you have said is only what can
 be found by extrapolating Turing's position on Intelligence defined in the
 Turing Test.

 Summing up:
 I requested your clarification. In the above two examples, I make explicit
 what I think is front and centre in your argument. Therefore the
 clarification, if possible John, is: do you agree that your
 arguments embed the assertions as exampled? Or am I inaccurate?

 Assuming you concur with my reading, the sneak preview of why the links you
 make are illegitimate: the most obvious first items from the two lists are:

 e.g. 1 (above): No, the Turing Test would remain as it was originally
 defined, regardless of what evolution is found to be able to detect directly
 of consciousness.

 The Turing Test defines one special-case set of conditions in which the
 detection of consciousness was viable, definable, and airtight.

 What it was not, John, was a definition of intelligence as being as good
 or better than we are at specific TASKS in isolation. Turing absolutely
 never suggested anything of the sort. No such suggestion is implicit in the
 Test environment. The opposite is true, very much so.

 Turing tacitly acknowledged that neither intelligence nor consciousness had
 been captured by definitions and benchmarks adequate and detailed enough to
 be approached. He side-stepped intelligence just as he side-stepped
 consciousness. Instead he characterized the human self-insight into human
 intelligence and consciousness.

 In the event an A.I. reproduced a human conscious intelligence identically,
 Turing observed, another human would find a difference were it not so.

 That's very fucking holistic, John. It ain't playing chess.





Re: Intelligence Consciousness

2014-12-20 Thread meekerdb

On 12/20/2014 6:19 PM, John Clark wrote:
On Sat, Dec 20, 2014 at 3:07 PM, zibblequib...@gmail.com wrote:

 Part of how you sum up your core insight: that consciousness has no
 detectable objective reality


No, I'm saying that consciousness DOES have a detectable objective reality if and only 
if it's a brute fact that consciousness is the way data feels like when it is processed 
intelligently. And I'm saying that human beings can detect intelligent behavior and so 
can the process that produced them, Evolution.


 evolution cannot detect consciousness.


If I accept that Darwin was correct and if I also accept that John K Clark is conscious 
then I am forced by logic to conclude that consciousness is indeed the way data feels 
like when it is processed intelligently.


As a corollary I  MUST also conclude that to whatever degree the Turing Test is 
successful at detecting intelligence then it must be equally successful at detecting 
consciousness.


  John K Clark


In one of his recent papers, http://arxiv.org/pdf/1405.0493v1.pdf, Max Tegmark comments on 
this and reaches a somewhat different conclusion.  He starts from Tononi's Integrated 
Information Theory (IIT):

  IIT has generated significant interest in the neuroscience community, because it
  offers answers to many intriguing questions. For example, why do some information
  processing systems in our brains appear to be unconscious? Based on extensive research
  correlating brain measurements with subjectively reported experience, neuroscientist
  Christof Koch and others have concluded that the cerebellum, a brain area whose roles
  include motor control, is not conscious, but is an unconscious information processor
  that helps other parts of the brain with certain computational tasks.
  The IIT explanation for this is that the cerebellum is mainly a collection of
  "feed-forward" neural networks in which information flows like water down a river, and
  each neuron affects mostly those downstream. If there is no feedback, there is no
  integration and hence no consciousness. The same would apply to Google's recent
  feed-forward artificial neural network that processed millions of YouTube video frames
  to determine whether they contained cats. In contrast, the brain systems linked to
  consciousness are strongly integrated, with all parts able to affect one another.
  IIT thus offers an answer to the question of whether a superintelligent computer would
  be conscious: it depends. A part of its information processing system that is highly
  integrated will indeed be conscious. However, IIT research has shown that for many
  integrated systems, one can design a functionally equivalent feed-forward system that
  will be unconscious. This means that so-called "p-zombies" can, in principle, exist:
  systems that behave like a human and pass the Turing test for machine intelligence,
  yet lack any conscious experience whatsoever. Many current "deep learning" AI systems
  are of this p-zombie type. Fortunately, integrated systems such as those in our brains
  typically require much fewer computational resources than their feed-forward "zombie"
  equivalents, which may explain why evolution has favoured them and made us conscious.

So contrary to most on this list, he thinks a philosophical zombie is possible.  But 
maybe it's materially inefficient such that consciousness is favored.  Personally, I think 
"integrated" is a vague concept and amounts to "and then a miracle happens" without some 
further elucidation.
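Tegmark's feed-forward vs. integrated contrast can be made concrete with a toy check. This is only my illustrative sketch (the function name and graph encoding are my own inventions, and it is emphatically not Tononi's phi measure): a purely feed-forward network is a directed acyclic graph, while "integration" in the quoted sense requires feedback, i.e. at least one directed cycle in the wiring diagram.

```python
def has_feedback(n_nodes, edges):
    """Return True if the directed wiring diagram contains a cycle.

    A cycle means some node can (indirectly) affect its own input,
    i.e. the network is not purely feed-forward.
    """
    adj = {v: [] for v in range(n_nodes)}
    for src, dst in edges:
        adj[src].append(dst)

    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on the DFS stack / finished
    color = [WHITE] * n_nodes

    def dfs(v):
        color[v] = GRAY
        for w in adj[v]:
            if color[w] == GRAY:   # back edge: found a feedback loop
                return True
            if color[w] == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in range(n_nodes))
```

On this toy reading, the cerebellum picture above is the acyclic case: a diamond-shaped network like `[(0, 1), (0, 2), (1, 3), (2, 3)]` has no feedback, while adding a loop edge such as `(3, 0)` does.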


Brent



Re: Intelligence Consciousness

2014-12-20 Thread Jason Resch
As with split-brain patients and other humans, if some part of the brain
isn't connected to the part of the brain that talks, we really can't
conclude those other parts doing other processing are unconscious. It's
like me concluding you're not conscious because I don't know what you're
thinking. It's a dangerous line of reasoning.

Jason

On Sunday, December 21, 2014, meekerdb meeke...@verizon.net wrote:
 On 12/20/2014 6:19 PM, John Clark wrote:

 On Sat, Dec 20, 2014 at 3:07 PM, zibblequib...@gmail.com wrote:

  Part of how you sum up your core insight: that consciousness has no
detectable objective reality

 No, I'm saying that consciousness DOES have a detectable objective
reality if and only if it's a brute fact that consciousness is the way data
feels like when it is processed intelligently. And I'm saying that human
beings can detect intelligent behavior and so can the process that produced
them, Evolution.


  evolution cannot detect consciousness.

 If I accept that Darwin was correct and if I also accept that John K
Clark is conscious then I am forced by logic to conclude that consciousness
is indeed the way data feels like when it is processed intelligently.
 As a corollary I  MUST also conclude that to whatever degree the Turing
Test is successful at detecting intelligence then it must be equally
successful at detecting consciousness.
   John K Clark

 In one of his recent papers, http://arxiv.org/pdf/1405.0493v1.pdf, Max
Tegmark comments on this and reaches a somewhat different conclusion.  He
starts from Tononi's Integrated Information Theory (IIT):

 IIT has generated significant interest in the neuro-
 science community, because it offers answers to many
 intriguing questions. For example, why do some infor-
 mation processing systems in our brains appear to be
 unconscious? Based on extensive research correlating
 brain measurements with subjectively reported experi-
 ence, neuroscientist Christof Koch and others have con-
 cluded that the cerebellum a brain area whose roles in-
 clude motor control is not conscious, but is an uncon-
 scious information processor that helps other parts of the
 brain with certain computational tasks.
 The IIT explanation for this is that the cerebellum
 is mainly a collection of “feed-forward” neural networks
 in which information flows like water down a river, and
 each neuron affects mostly those downstream. If there
 is no feedback, there is no integration and hence no con-
 sciousness. The same would apply to Googles recent feed-
 forward artificial neural network that processed millions
 of YouTube video frames to determine whether they con-
 tained cats. In contrast, the brain systems linked to con-
 sciousness are strongly integrated, with all parts able to
 affect one another.
 IIT thus offers an answer to the question of whether
 a superintelligent computer would be conscious: it de-
 pends. A part of its information processing system that
 is highly integrated will indeed be conscious. However,
 IIT research has shown that for many integrated systems,
 one can design a functionally equivalent feed-forward sys-
 tem that will be unconscious. This means that so-called
 “p-zombies” can, in principle, exist: systems that behave
 like a human and pass the Turing test for machine intel-
 ligence, yet lack any conscious experience whatsoever.
 Many current “deep learning” AI systems are of this p-
 zombie type. Fortunately, integrated systems such as
 those in our brains typically require much fewer computa-
 tional resources than their feed-forward “zombie” equiv-
 alents, which may explain why evolution has favoured
 them and made us conscious.

 So contrary to most on this list, he thinks a philosophical zombie is
possible.  But maybe it's materially inefficient such that consciousness is
favored.  Personally, I think integrated is a vague concept and amounts
to and then a miracle happens without some further elucidation.

 Brent





Re: Intelligence Consciousness

2014-12-12 Thread John Clark
On Fri, Dec 12, 2014 at 2:33 AM, zibblequib...@gmail.com wrote:

 I apologize for this unreadable drivel


But with a real flair for writing unreadable drivel you could go far in the
psychology or philosophy departments at any university.

  John K Clark



Re: Intelligence Consciousness

2014-12-11 Thread zibblequibble
I apologize for this unreadable drivel 


On Friday, December 12, 2014 4:45:39 AM UTC, zibble...@gmail.com wrote:

 Hi John, Zibbsey here. We left things that I'd get back to you; if it's OK 
 with you, I have some points / questions 

 On Saturday, November 29, 2014 5:57:38 AM UTC, John Clark wrote:


 On Fri, Nov 28, 2014 at 6:01 PM, zib...@gmail.com wrote:

 I was talking about your root idea that Evolution cannot detect 
 consciousness 


 It can't and neither can we.

  (because we can't, I think you said) 


 The reason isn't because of us, it's just that neither we nor Evolution 
 nor anything else can detect consciousness other than our own, we can only 
 detect actions.


 You are trying to perform a generalization of two very different 
 conceptions above, adding a third below in the form of the Turing Test. 

 Generalizations of that kind are extremely hard to accomplish. 

 The *Human Context*: Humans are yet to make dramatic progress defining 
 consciousness, locating it in the brain, and so on. Humans are restricted 
 in what they have the potential to detect. Humans have abstract theory, 
 for example. A large component of what abstract theory detects are 
 INVARIANT features. Which aren't Actions. 

 The *Natural Selection* Context: NS is an Abstract, not due to some 
 preference but fundamentally. What was attracting selection, at what 
 MAGNITUDE, WITH WHAT LONG-TERM IMPACT; whether it reaches unity in the 
 population, or indeed whether the organism gets eaten and the selection 
 event simply did not happen. 

 All of this does not begin to manifest into trait characteristics until 
 generations, millennia even, after the first signs of that trait 
 attracting natural selection first appear. 

 NS is fundamentally Abstract. That's the Invariance that makes it possible 
 for everything else to be exchangeable. What you want to do with the 
 concept, your goal, drives natural selection from one structure to the next 
 very different structure. 

 e.g. if it's about fundamentals the structure might be 'replicators'; 

 - if it's about what trait was attracting selection at what magnitude, and 
 how far the trait went... whether it became unity in the population, then NS 
 for that context is defined by two temporal points; it cannot be tied to 
 just one alone. This is because the characteristics being queried do not 
 begin to manifest for generations, millennia even, after the first signs of 
 that trait attracting selection first appear. AND ONLY manifest at all if 
 there is sufficiently unbroken selection from that first appearance right 
 through to that manifestation. AND ONLY if that interval of selection 
 exhibits exponential effects. I could go on: only if the source of the 
 exponential effects corresponds to the laws of population genetics.

 The point is, John, NS is an Abstract. A term such as "NS detects" must 
 be restricted to the brittle metaphor the phrase gets coined to furnish.

 Put differently, "Evolution detects" is a slight variation of, say, 
 "Evolution prefers..." or "Evolution creates or kills...". 

 They are strictly metaphors, John. There is no real overlay, at all, that is 
 non-trivial with the ways humans detect. "Detect" has a range of contexts: 
 we detect in literal, if abstract, ways tied in closely with our temporal 
 and macroscopic constrained reality. Humans detect things via the physical. 
 Very often Actions, as you say (not only Actions).

 Evolution has essentially zero non-trivial evolutionary dependence on 
 Actions. For natural selection to 'detect' an 'action', the action would 
 have to become INVARIANT in an abstract space that was also time-INVARIANT, 
 in that instance caused by there being no t term in the equation. Instead it 
 becomes about generations: the Action consistently manifests given the same 
 conditions, to some resolution, and the same conditions giving rise to the 
 same Action consistently re-manifest in successive generations. 

 In reality, it would go to deeper resolutions: only the sub-region of the 
 Action that the most instances of recurrence fully overlay through all 
 Actions will begin to approach the closest context in which evolution 
 'detects' 'Actions'. 

 *The Turing Test Context*

 The Turing Test is something wholly different again. The reason you are 
 logically flawed here is that Alan Turing specified, explicitly, an 
 indirect detection of consciousness via everything another consciousness is 
 able to throw at a notional black box whose I/O exhibits traits of full 
 intelligence. 

 The formulation of the Turing Test makes no statement at all about the 
 relationship of intelligence to consciousness. 

 There is no commitment, implied or stated, to a fundamental hard linkage of 
 intelligence and consciousness. 


 This is because Turing tests for consciousness given a black-box I/O 
 hallmarking intelligence, NOT by measuring the intelligence. If that was 
 the case the measure becomes 

Re: Intelligence is the ability to make deliberate free choices.

2012-08-15 Thread Bruno Marchal

Hi Roger,

On 14 Aug 2012, at 17:30, Roger wrote:


Hi Bruno Marchal

IMHO Intelligence is the ability to  make deliberate free choices.
One could lie if one chose to.


I am OK with this. Löbian machines too. (Löbian machine = universal  
machine capable of knowing that they are universal.) They can prove  
that if they never communicate a lie, then it is consistent that they  
can communicate a lie. That is basically Gödel's second incompleteness  
theorem, and it is a well known fact, already seen by Gödel in 1931  
(but proved rigorously by Hilbert and Bernays later). To be sure,  
Gödel's statement was more general, but it can be used to build a counter- 
example to your statement.
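The fact Bruno appeals to can be written down compactly. As a sketch (my formalization: read "the machine never communicates a lie" as consistency of the machine's theory T, a consistent recursively axiomatizable extension of arithmetic):

```latex
% G"odel's second incompleteness theorem:
\mathrm{Con}(T) \;\Longrightarrow\; T \nvdash \mathrm{Con}(T)
% hence the theory asserting its own inconsistency is itself consistent:
\mathrm{Con}(T) \;\Longrightarrow\; \mathrm{Con}\bigl(T + \lnot\mathrm{Con}(T)\bigr)
```

That is, a consistent machine can never rule out, from the inside, that it proves a falsehood: it is consistent that it "can communicate a lie".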


Bruno






Roger , rclo...@verizon.net
8/14/2012
- Receiving the following content -
From: Bruno Marchal
Receiver: everything-list
Time: 2012-08-12, 05:24:48
Subject: Re: Positivism and intelligence


On 11 Aug 2012, at 14:56, Roger wrote:



Positivism seems to rule out native intelligence.
I can't see how knowledge could be created on a blank
slate without intelligence.


OK. But with comp, intelligence emerges from arithmetic, out of space  
and time.





Or for that matter, how the incredibly unnatural structure
of the carbon atom could have been created somehow
somewhere by mere chance.


Hmm... This can be explained by QM, which can be explained by comp  
and arithmetic.




Fred Hoyle as I recall said
that it was very unlikely that it was created by chance.

All very unlikely things in my opinion show evidence of
intelligence. In order to extract energy from disorder
as life does shows that, like Maxwell's Demon,
some intelligence is required to sort things out.


Not sure what you mean by intelligence here.

Bruno







Roger , rclo...@verizon.net
8/11/2012
- Receiving the following content -
From: meekerdb
Receiver: everything-list
Time: 2012-08-10, 14:05:31
Subject: Re: Libet's experimental result re-evaluated!

On 8/10/2012 7:23 AM, Alberto G. Corona wrote:
 The modern positivist conception of free will has no
 scientific meaning. But all modern rephasings of old philosophy are
 degraded.

Or appear so because they make clear the deficiencies of the old  
philosophy.


 Positivist philosophy passes everything down to the what-we-know-by-science
 of the physical level,

That's not correct. Positivist philosophy was that we only know what  
we directly
experience and scientific theories are just ways of predicting new  
experiences from old
experiences. Things not directly experienced, like atoms, were  
merely fictions used for

prediction.

 that is the only kind of substance that they admit. This 
 what-we-know-by-science makes positivism a moving ground, a kind of 
 dictatorial Cartesian blindness which states the kind of questions one is 
 permitted at a certain time to ask or not.

 Classical conceptions of free will were concerned with the option of 
 thinking and acting morally or not, that is, to have the capability to 
 deliberate about the good or bad that a certain act implies for oneself


One deliberates about consequences and means, but how does one  
deliberate about what one

wants? Do you deliberate about whether pleasure or pain is good?

 and for others, and to act for good or for bad with this knowledge. 
 Roughly speaking, men have such faculties unless in slavery. Animals do not.

My dog doesn't think about what's good or bad for himself? I doubt  
that.


 The interesting
 parts are in the details of these statements. An yes, they are
 questions that can be expressed in more scientific terms. This  
can

 be seen in the evolutionary study of moral and law under multilevel
 selection theory:

 
https://www.google.es/search?q=multilevel+selection&sugexp=chrome,mod=11&sourceid=chrome&ie=UTF-8

 which gives a positivistic support for morals, and a precise,
 materialistic notion of good and bad. And thus suddenly these three
 concepts must be sanctioned as legitimate objects of study by the
 positivistic dictators, without being burnt alive to social death, out
 of the peer-reviewed scientific magazines, where the sacred words of
 Modernity reside.
 Modernity resides.

 We are witnessing this devolution since slowly all the old
 philosophical and theological concepts will recover their  
legitimacy,

 and all their old problems will stand as problems here and now. For
 example, we will discover that what we call Mind is nothing but the
 old concepts of Soul and Spirit.

After stripping the soul of its immortality and acausal relation to  
physics.



 Concerning the degraded positivistic notion of free will, I said
 before that under an extended notion of evolution it is not possible
 to ascertain whether the matter evolved the mind or the mind
 selected the matter. So it could be said that the degraded question is
 meaningless and, of course, not interesting.

But the question of their relationship is still interesting.

Brent


Re: Intelligence and consciousness

2012-02-16 Thread John Clark
On Wed, Feb 15, 2012 at 7:20 PM, Craig Weinberg whatsons...@gmail.com wrote:

 Obviously Watson or Siri give you access to intelligence, but so does a
 book.


A book can contain information that can help you answer questions but cannot
do so directly; Watson and Siri can. And all three could help you
if you were doing research and writing a serious academic paper, but ELIZA
would be of no help whatsoever. ELIZA's vague evasive answers tell you
nothing you didn't already know, so when the Chinese Room in your thought
experiment behaves like ELIZA it's about as useful in enlightening us about
the nature of consciousness or intelligence as a Chinese fortune cookie.
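For concreteness, that style of evasion is trivial to mechanize. Here is a minimal ELIZA-like responder in the spirit of Weizenbaum's program (the rules below are my own toy examples, not his actual DOCTOR script):

```python
import re

# A few reflection rules: pattern -> response template. Anything the
# rules don't match falls through to a stock evasion.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def respond(text):
    """Answer with the first matching rule's template, else the default."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

So respond("I am sad") reflects the input back as "How long have you been sad?", and anything the rules don't cover gets the content-free "Please tell me more.", which is exactly why such a program tells you nothing you didn't already know.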

 You might consider that we don't need a test. That intelligence is
 fundamentally different than muscle strength or height.


You don't know if somebody is tall until you see he is tall, you don't know
if he's strong until he does something that requires a lot of strength and
you don't know he is intelligent until you see him do something smart.

 That's helpful but it is still the programmer's intelligence that is
 reflected in the program, not the computer's.


Fine, even the largest computer is just a big lump of semiconductors
without a program.

 Do you think that productivity for the sake of productivity will lead to
 anything meaningful?


Obviously yes.

the existence of other minds can only be inferred through behavior.


 That's an assumption. You rule out all other epistemology arbitrarily.


Expand on that, show me how you can deduce that other conscious minds exist
without speech writing or some other mode of behavior showing up somewhere
in the logical deductive chain. If you can do that not only will you have
won the argument you would be the greatest philosopher by far in the
history of this small planet.

 Sulfuric acid is H2SO4, remove the sulfur and 3 oxygen atoms and the
 result is H2O, and
 you can water corn with water just fine. In a similar way the only
 difference between a cadaver and a healthy person is the way the atoms are
 organized.


 By that reasoning, all matter is the same thing.


Yep, organize the electrons neutrons and protons in the right way (which
requires information of course) you can make anything including you and me.

 Which is true in some sense, but it is the opposite of cosmos,
 consciousness and realism.


Then logically we can only conclude that whatever you mean by cosmos
consciousness and realism cannot be correct because the opposite of the
truth is false.

  John K Clark




Re: Intelligence and consciousness

2012-02-15 Thread John Clark
On Mon, Feb 13, 2012  Craig Weinberg whatsons...@gmail.com wrote:

 TO HELL WITH ELIZA That prehistoric program is NOT intelligent!


  What makes you sure it isn't intelligent but that other programs are?


How the hell do you think?! ELIZA doesn't act intelligently but other
programs do. Nobody in their right mind would use ELIZA to help with
writing a scientific paper and doing serious research, but you might use
Watson or Siri.

 20mb of conversational Chinese might be enough to pass a Turing Test for
 a moderate amount of time.


Maybe, if a chimpanzee were performing the test.

 It's completely subjective.


Yes the Turing Test is subjective and it's flawed. Failing the Turing Test
proves nothing definitive, the subject may be smart as hell but simply not
want to answer your questions and prefer to remain silent. And
unsophisticated people might even be impressed by a program as brain dead
dumb as ELIZA. And people can fool us too, I think we've all met people who
for the first 10 minutes seem smart as hell but after 30 minutes you
realize they are pretentious dullards. So with all these flaws why do we
even bother with the Turing Test? Because despite its flaws it's the ONLY
tool we have, it's the only way of distinguishing intelligence from stupidity,
but if we are not very smart ourselves we will make lots of errors in
administering the test.

 If you haven't read it already, this link from Stephen may do a better
 job than I have of explaining my position:

 http://newempiricism.blogspot.com/2009/02/symbol-grounding-problem.html


And that fails the Turing Test because the author clearly thought that
Searle was a pretty smart man.

 You ask the room to produce a quantum theory of gravity and it does so,
 you ask it to output a new poem that a considerable fraction of the human
 race would consider to be very beautiful and it does so, you ask it to
 output a original fantasy children's novel that will be more popular than
 Harry Potter and it does so.


 No. The thought experiment is not about simulating omniscience. If you
 ask the room to produce anything outside of casual conversation, it would
 politely decline.


If that's all it could do, if it just produced streams of ELIZA-style
evasive blather, then it has not demonstrated any intelligence, so I would
have no reason to think it was intelligent, so I would not think it's
conscious, so WHAT'S THE POINT OF THE THOUGHT EXPERIMENT?

 First you say 'let's say that the impossible Chinese Room was possible'.
 Then you say 'it still doesn't work because the Chinese Room isn't
 possible'.


What I said was that real computers don't work anything like the Chinese
Room; they don't have a copy of Shakespeare's Hamlet in which the letters
"t" and "s" are reversed ("so be or nos so be shas it she quetsion") resting
in memory just in case somebody requested such a thing, but if it had a
copy of the play as Shakespeare (or Thaketpeare) wrote it, simple ways could
be found to produce it.


  The Chinese Room is just [...]


There you go again with the "is just".

 'Where were you on the night of October 15, 2011'?


Well, your honor my brain was inside the head which was on top of the body
knocking over that liquor store, my mind was in a lingerie model's bedroom,
and then on the moons of Jupiter. My sense organs are always very close to
my brain but that is just an evolutionary accident resulting from the fact
that nerve impulses travel much much slower than light and if they were far
from my brain the signal delay would have severely reduced the chances of
my ancestors surviving long enough to reproduce.


  There is a difference between organized matter and matter that wants to
 organize.


Carbon atoms want to organize into amino acids and amino acids want to
organize into proteins and proteins want to organize into cells and cells
want to organize into brains, but silicon atoms have no ambition and don't
want to organize into anything?? Do you really think that line of thought
will lead to anything productive?

 Why wouldn't he [Einstein] be aware of his own intelligence?


You tell me, you're the one who believes that intelligent things like smart
computers are unaware of their own intelligence.


  We don't have to imagine solipsism just because subjectivity isn't
 empirical.


But that only works for you, the existence of other minds can only be
inferred through behavior.

 You admit then that you are not interested in defining it [intelligence]
 as it actually is, but only what is convenient to investigate.


Convenient? If intelligence does not mean doing intelligent things then I
don't see why anyone would be interested in it and don't even see the need
for the word.

 You can't water corn with sulfuric acid


You can if you change the organization of the acid a little. Sulfuric acid
is H2SO4, remove the sulfur and 3 oxygen atoms and the result is H2O, and
you can water corn with water just fine. In a similar way the only
difference between a cadaver and 

Re: Intelligence and consciousness

2012-02-15 Thread Craig Weinberg
On Feb 15, 1:22 pm, John Clark johnkcl...@gmail.com wrote:
 On Mon, Feb 13, 2012  Craig Weinberg whatsons...@gmail.com wrote:

  TO HELL WITH ELIZA! That prehistoric program is NOT intelligent!

   What makes you sure it isn't intelligent but that other programs are?

 How the hell do you think?! ELIZA doesn't act intelligently but other
 programs do. Nobody in their right mind would use ELIZA to help with
 writing a scientific paper and doing serious research, but you might use
 Watson or Siri.

Obviously Watson or Siri give you access to intelligence, but so does
a book. Would you say that an almanac is more intelligent than a book
of poems? Does the IQ of a book change when you turn it upside down?
I'm trying to point out that what you associate with intelligence
figuratively does not correspond to literal capacity for intelligent
reasoning.


  20mb of conversational Chinese might be enough to pass a Turing Test for
  a moderate amount of time.

 Maybe, if a chimpanzee were performing the test.

Yes.


  It's completely subjective.

 Yes the Turing Test is subjective and it's flawed. Failing the Turing Test
 proves nothing definitive, the subject may be smart as hell but simply not
 want to answer your questions and prefer to remain silent. And
 unsophisticated people might even be impressed by a program as brain dead
 dumb as ELIZA. And people can fool us too, I think we've all met people who
 for the first 10 minutes seem smart as hell but after 30 minutes you
 realize they are pretentious dullards.

That's my point. But eventually we do realize they are dullards - or
machines.

So with all these flaws why do we
 even bother with the Turing Test? Because despite its flaws it's the ONLY
 tool we have, it's the only way of determining intelligence from stupidity,
 but if we are not very smart ourselves we will make lots of errors in
 administering the test.

You might consider that we don't need a test. That intelligence is
fundamentally different than muscle strength or height.


  If you haven't read it already, this link from Stephen may do a better
  job than I have of explaining my position:

 http://newempiricism.blogspot.com/2009/02/symbol-grounding-problem.html

 And that fails the Turing Test because the author clearly thought that
 Searle was a pretty smart man.

He doesn't have to be smart to be right about the Chinese room. Even
if possibly for the wrong reason.


  You ask the room to produce a quantum theory of gravity and it does so,
  you ask it to output a new poem that a considerable fraction of the human
  race would consider to be very beautiful and it does so, you ask it to
  output an original fantasy children's novel that will be more popular than
  Harry Potter and it does so.

  No. The thought experiment is not about simulating omniscience. If you
  ask the room to produce anything outside of casual conversation, it would
  politely decline.

 If that's all it could do, if it just produces streams of ELIZA style
 evasive blather then it has not demonstrated any intelligence, so I would
 have no reason to think it was intelligent, so I would not think it's
 conscious, so WHAT'S THE POINT OF THE THOUGHT EXPERIMENT?

It would demonstrate X intelligence for t duration to Z audience.
Which is all any intelligence could hope to accomplish.


  First you say 'let's say that the impossible Chinese Room was possible'.
  Then you say 'it still doesn't work because the Chinese Room isn't
  possible'.

 What I said was that real computers don't work anything like the Chinese
 Room, they don't have a copy of Shakespeare's Hamlet in which the letters
 t and s are reversed (so be or nos so be shas it she quetsion) resting
 in its memory just in case somebody requested such a thing, but if it had a
 copy of the play as Shakespeare (or Thaketpeare) wrote it simple ways could
 be found to produce it.

That's helpful but it is still the programmer's intelligence that is
reflected in the program, not the computer's. Which is the whole
point.


   The Chinese Room is just [...]

 There you go again with the "is just".

  'Where were you on the night of October 15, 2011'?

 Well, your honor, my brain was inside the head which was on top of the body
 knocking over that liquor store, my mind was in a lingerie model's bedroom,
 and then on the moons of Jupiter. My sense organs are always very close to
 my brain but that is just an evolutionary accident resulting from the fact
 that nerve impulses travel much much slower than light and if they were far
 from my brain the signal delay would have severely reduced the chances of
 my ancestors surviving long enough to reproduce.

Ah, then you shouldn't mind if we put your body in prison.


   There is a difference between organized matter and matter that wants to
  organize.

 Carbon atoms want to organize into amino acids and amino acids want to
 organize into proteins and proteins want to organize into cells and cells
 want to organize into brains, but silicon 

Re: Intelligence and consciousness

2012-02-13 Thread Craig Weinberg
On Feb 12, 12:34 am, John Clark johnkcl...@gmail.com wrote:
 On Fri, Feb 10, 2012  Craig Weinberg whatsons...@gmail.com wrote:

  I think you are radically overestimating the size of the book and the
  importance of the size to the experiment. ELIZA was about 20Kb.

 TO HELL WITH ELIZA! That prehistoric program is NOT intelligent!

What makes you sure it isn't intelligent but that other programs are?

 What is
 the point of a thought experiment that gives stupid useless answers to
 questions?

If you haven't read it already, this link from Stephen may do a better
job than I have of explaining my position:

http://newempiricism.blogspot.com/2009/02/symbol-grounding-problem.html


 If it's a thousand times better than ELIZA, then you've got a 20 Mb
 rule book.

 For heaven's sake, if a 20 Mb look-up table was sufficient we would have
 had AI decades ago.

Sufficient for what? 20mb of conversational Chinese might be enough to
pass a Turing Test for a moderate amount of time. It's completely
subjective.


 Since you can't do so let me make the best case for the Chinese Room from
 your point of view and the most difficult case to defend from mine. Let's
 say you're right and the size of the lookup table is not important so we
 won't worry that it's larger than the observable universe, and let's say
  time is not an issue either so we won't worry that it operates a billion
 trillion times slower than our mind, and let's say the Chinese Room doesn't
 do ELIZA style bullshit but can engage in a brilliant and interesting (if
 you are very very very patient) conversation with you in Chinese or any
  other language about anything. And let's have the little man not only be
 ignorant of Chinese but be retarded and thus not understand anything in any
 language, he can only look at input symbols and then look at the huge
 lookup table till he finds similar squiggles and the appropriate response
 to those squiggles which he then outputs. The man has no idea what's going
 on, he just looks at input squiggles and matches them up with output
 squiggles, but from outside the room it's very different.


Yes

 You ask the room to produce a quantum theory of gravity and it does so, you
 ask it to output a new poem that a considerable fraction of the human race
 would consider to be very beautiful and it does so, you ask it to output an
 original fantasy children's novel that will be more popular than Harry
 Potter and it does so.

No. The thought experiment is not about simulating omniscience. If you
ask the room to produce anything outside of casual conversation, it
would politely decline.

 The room certainly behaves intelligently but the man
 was not conscious of any of the answers produced, as I've said the man
 doesn't have a clue what's going on, so does this disprove my assertion
 that intelligent behavior implies consciousness?

Yes. Nothing in the room is conscious, nor is the room itself, or the
building, city or planet conscious of the conversation.


 No it does not, or at least it probably does not, this is why. That
 reference book that contains everything that can be said about anything
 that can be asked in a finite time would be large, astronomical would be
 far far too weak a word to describe it,

Where are you getting that from? I haven't read anything about the
Chinese Room being defined as having superhuman intelligence. All it
has to do is make convincing Chinese conversation for a while.

 but it would not be infinitely
 large so it remains a legitimate thought experiment. However that
 astounding lookup table came from somewhere, whoever or whatever made it
 had to be very intelligent indeed and also I believe conscious, and so the
 brilliance of the actions of the Chinese Room does indeed imply
 consciousness.

Of course. Programs indeed reflect the intelligence and consciousness
of their programmers to an intelligent and conscious audience, but not
to the program itself. If the programmer and audience are dead, there
is no intelligence or consciousness at all. I think you are trying to
sneak out of this now by strawmanning my position. You make it sound
as if I claimed that CDs could not be used to play music because CDs
are not musicians. My position has always been that people can use
inanimate media to access subjective content by sense, yours has been
that if inanimate machines behave intelligently then they themselves
must be conscious and intelligent. Now you are backing off of that and
saying that anything that ever had anything to do with consciousness
can be said to be conscious.


 You may say that even if I'm right about that then a computer doing smart
 things would just imply the consciousness of the people who made the
 computer. But here is where the analogy breaks down, real computers don't
 work like the Chinese Room does, they don't have anything remotely like
 that astounding lookup table; the godlike thing that made the Chinese Room
 knows exactly what that room will do in 

Re: Intelligence and consciousness

2012-02-12 Thread Bruno Marchal


On 12 Feb 2012, at 06:50, L.W. Sterritt wrote:

I don't really understand this thread - magical thinking?   The  
neural network between our ears is who / what we are,  and  
everything that we will experience.


If that was the case, we would not survive with an artificial brain.  
Comp would be false. With comp it is better to consider that we have a  
brain, instead that we are a brain.





 It is the source of consciousness - even if consciousness is  
regarded as an epiphenomenon.


UDA shows that it is the other way around. I know that this is very
counterintuitive. But the brain, as a material object, is a creation of
consciousness, which is itself a natural flux emerging on arithmetical
truth from the points of view of universal machines/numbers. But
locally you are right: the material brain is what makes your
platonic consciousness capable of manifesting itself relative to a
more probable computational history. Yet in the big (counterintuitive)
picture, the number relations are responsible for consciousness, which
selects relative computations among infinities, and matter is a
first person plural phenomenon emergent from a statistical competition
of infinities of (universal) numbers (assuming mechanism).


Most people naturally believe that mechanism is an ally to  
materialism, but they are epistemologically incompatible.


Bruno






Gandalph


On Feb 11, 2012, at 9:34 PM, John Clark wrote:


On Fri, Feb 10, 2012  Craig Weinberg whatsons...@gmail.com wrote:

 I think you are radically overestimating the size of the book  
and the importance of the size to the experiment. ELIZA was about  
20Kb.


TO HELL WITH ELIZA! That prehistoric program is NOT intelligent!
What is the point of a thought experiment that gives stupid useless
answers to questions?


If it's a thousand times better than ELIZA, then you've got a  
20 Mb rule book.


For heaven's sake, if a 20 Mb look-up table was sufficient we would
have had AI decades ago.


Since you can't do so let me make the best case for the Chinese  
Room from your point of view and the most difficult case to defend  
from mine. Let's say you're right and the size of the lookup table  
is not important so we won't worry that it's larger than the  
observable universe, and let's say time is not an issue either so we
won't worry that it operates a billion trillion times slower than  
our mind, and let's say the Chinese Room doesn't do ELIZA style  
bullshit but can engage in a brilliant and interesting (if you are  
very very very patient) conversation with you in Chinese or any  
other language about anything. And lets have the little man not  
only be ignorant of Chinese but be retarded and thus not understand  
anything in any language, he can only look at input symbols and  
then look at the huge lookup table till he finds similar squiggles  
and the appropriate response to those squiggles which he then  
outputs. The man has no idea what's going on, he just looks at  
input squiggles and matches them up with output squiggles, but from  
outside the room it's very different.


You ask the room to produce a quantum theory of gravity and it does  
so, you ask it to output a new poem that a considerable fraction of  
the human race would consider to be very beautiful and it does so,  
you ask it to output an original fantasy children's novel that will
be more popular than Harry Potter and it does so. The room  
certainly behaves intelligently but the man was not conscious of  
any of the answers produced, as I've said the man doesn't have a  
clue what's going on, so does this disprove my assertion that  
intelligent behavior implies consciousness?


No it does not, or at least it probably does not, this is why. That  
reference book that contains everything that can be said about  
anything that can be asked in a finite time would be large,  
astronomical would be far far too weak a word to describe it, but  
it would not be infinitely large so it remains a legitimate thought  
experiment. However that astounding lookup table came from  
somewhere, whoever or whatever made it had to be very intelligent  
indeed and also I believe conscious, and so the brilliance of the  
actions of the Chinese Room does indeed imply consciousness.


You may say that even if I'm right about that then a computer doing  
smart things would just imply the consciousness of the people who  
made the computer. But here is where the analogy breaks down, real  
computers don't work like the Chinese Room does, they don't have  
anything remotely like that astounding lookup table; the godlike  
thing that made the Chinese Room knows exactly what that room will  
do in every circumstance, but computer scientists don't know what  
their creation will do, all they can do is watch it and see.


But you may also say, I don't care how the room got made, I was  
talking about inside the room and I insist there was no  
consciousness inside that room. I 

Re: Intelligence and consciousness

2012-02-12 Thread John Clark
On Sun, Feb 12, 2012 at 2:13 AM, meekerdb meeke...@verizon.net wrote:

Not only that, a computer implementing AI would be able to learn from its
 discussion.  Even if it started with an astronomically large look-up table,
 the look-up table would grow.


 That is very true!


  John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-11 Thread John Clark
On Fri, Feb 10, 2012  Craig Weinberg whatsons...@gmail.com wrote:

 I think you are radically overestimating the size of the book and the
 importance of the size to the experiment. ELIZA was about 20Kb.


TO HELL WITH ELIZA! That prehistoric program is NOT intelligent! What is
the point of a thought experiment that gives stupid useless answers to
questions?

If it's a thousand times better than ELIZA, then you've got a 20 Mb
rule book.

For heaven's sake, if a 20 Mb look-up table was sufficient we would have
had AI decades ago.

Since you can't do so let me make the best case for the Chinese Room from
your point of view and the most difficult case to defend from mine. Let's
say you're right and the size of the lookup table is not important so we
won't worry that it's larger than the observable universe, and let's say
time is not an issue either so we won't worry that it operates a billion
trillion times slower than our mind, and let's say the Chinese Room doesn't
do ELIZA style bullshit but can engage in a brilliant and interesting (if
you are very very very patient) conversation with you in Chinese or any
other language about anything. And let's have the little man not only be
ignorant of Chinese but be retarded and thus not understand anything in any
language, he can only look at input symbols and then look at the huge
lookup table till he finds similar squiggles and the appropriate response
to those squiggles which he then outputs. The man has no idea what's going
on, he just looks at input squiggles and matches them up with output
squiggles, but from outside the room it's very different.
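The matching procedure described above can be written down as a few lines of code. This is a deliberately tiny sketch with invented entries (pinyin stands in for the squiggles), nothing like the astronomical table the thought experiment assumes:

```python
# Toy sketch of the room's procedure: pure pattern-to-response lookup.
# The operator consults no meanings, only the table; all entries here
# are invented for illustration.
RULE_BOOK = {
    "ni hao": "ni hao! hen gaoxing renshi ni.",
    "ni hui xia qi ma": "hui yidian, dan xia de bu hao.",
}
DEFAULT = "duibuqi, wo bu mingbai."  # polite evasion for unmatched squiggles

def chinese_room(squiggles: str) -> str:
    """Match input squiggles against the table and copy out the paired output."""
    return RULE_BOOK.get(squiggles, DEFAULT)
```

From outside, `chinese_room("ni hao")` looks like conversation; inside, nothing ever consulted a meaning, only a key.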

You ask the room to produce a quantum theory of gravity and it does so, you
ask it to output a new poem that a considerable fraction of the human race
would consider to be very beautiful and it does so, you ask it to output an
original fantasy children's novel that will be more popular than Harry
Potter and it does so. The room certainly behaves intelligently but the man
was not conscious of any of the answers produced, as I've said the man
doesn't have a clue what's going on, so does this disprove my assertion
that intelligent behavior implies consciousness?

No it does not, or at least it probably does not, this is why. That
reference book that contains everything that can be said about anything
that can be asked in a finite time would be large, astronomical would be
far far too weak a word to describe it, but it would not be infinitely
large so it remains a legitimate thought experiment. However that
astounding lookup table came from somewhere, whoever or whatever made it
had to be very intelligent indeed and also I believe conscious, and so the
brilliance of the actions of the Chinese Room does indeed imply
consciousness.

You may say that even if I'm right about that then a computer doing smart
things would just imply the consciousness of the people who made the
computer. But here is where the analogy breaks down, real computers don't
work like the Chinese Room does, they don't have anything remotely like
that astounding lookup table; the godlike thing that made the Chinese Room
knows exactly what that room will do in every circumstance, but computer
scientists don't know what their creation will do, all they can do is watch
it and see.

But you may also say, I don't care how the room got made, I was talking
about inside the room and I insist there was no consciousness inside that
room. I would say assigning a position to consciousness is a little like
assigning a position to fast or red or any other adjective, it doesn't
make a lot of sense. If your consciousness exists anywhere it's not inside a
vat made of bone balancing on your shoulders, it's where you're thinking
about. I am the way matter behaves when it is organized in a johnkclarkian
way and other things are the way matter behaves when it is organized in a
chineseroomian way.

And by the way, I don't intend to waste my time defending the assertion
that intelligent behavior implies intelligence, that would be like debating
if X implies X or not, I have better things to do with my time.

  The King James Bible can be downloaded here



No thanks, I'll pass on that.

 Only?! Einstein only seemed intelligent to scientifically literate
 speakers in the outside world.



No, he was aware of his own intelligence too.


How the hell do you know that? And you seem to be using the words
intelligent and conscious interchangeably, they are not synonyms.

  If you start out defining intelligence as an abstract function and
 category of behaviors


Which is the only operational definition of intelligence.

 rather than quality of consciousness


Which is a totally useless definition in investigating the intelligence of
a computer or a person or an animal or of ANYTHING.

 I use ELIZA as an example because you can clearly see that it is not
 intelligent


So can I, so when you use that idiot program to try to advance your
antediluvian ideas it proves 

Re: Intelligence and consciousness

2012-02-11 Thread L.W. Sterritt
I don't really understand this thread - magical thinking?   The neural network 
between our ears is who / what we are,  and everything that we will experience. 
 It is the source of consciousness - even if consciousness is regarded as an 
epiphenomenon.  

Gandalph

 
On Feb 11, 2012, at 9:34 PM, John Clark wrote:

 On Fri, Feb 10, 2012  Craig Weinberg whatsons...@gmail.com wrote:
 
  I think you are radically overestimating the size of the book and the 
 importance of the size to the experiment. ELIZA was about 20Kb.
 
 TO HELL WITH ELIZA! That prehistoric program is NOT intelligent! What is
 the point of a thought experiment that gives stupid useless answers to
 questions?
 
 If it's a thousand times better than ELIZA, then you've got a 20 Mb rule 
 book. 
 
 For heaven's sake, if a 20 Mb look-up table was sufficient we would have had
 AI decades ago.  
 
 Since you can't do so let me make the best case for the Chinese Room from 
 your point of view and the most difficult case to defend from mine. Let's say 
 you're right and the size of the lookup table is not important so we won't 
 worry that it's larger than the observable universe, and let's say time is 
 not an issue either so we won't worry that it operates a billion trillion
 times slower than our mind, and let's say the Chinese Room doesn't do ELIZA 
 style bullshit but can engage in a brilliant and interesting (if you are very 
 very very patient) conversation with you in Chinese or any other language 
 about anything. And let's have the little man not only be ignorant of Chinese
 but be retarded and thus not understand anything in any language, he can only 
 look at input symbols and then look at the huge lookup table till he finds 
 similar squiggles and the appropriate response to those squiggles which he 
 then outputs. The man has no idea what's going on, he just looks at input 
 squiggles and matches them up with output squiggles, but from outside the 
 room it's very different. 
 
 You ask the room to produce a quantum theory of gravity and it does so, you 
 ask it to output a new poem that a considerable fraction of the human race 
 would consider to be very beautiful and it does so, you ask it to output an
 original fantasy children's novel that will be more popular than Harry Potter
 and it does so. The room certainly behaves intelligently but the man was not 
 conscious of any of the answers produced, as I've said the man doesn't have a 
 clue what's going on, so does this disprove my assertion that intelligent 
 behavior implies consciousness?
 
 No it does not, or at least it probably does not, this is why. That reference 
 book that contains everything that can be said about anything that can be 
 asked in a finite time would be large, astronomical would be far far too 
 weak a word to describe it, but it would not be infinitely large so it 
 remains a legitimate thought experiment. However that astounding lookup table 
 came from somewhere, whoever or whatever made it had to be very intelligent 
 indeed and also I believe conscious, and so the brilliance of the actions of 
 the Chinese Room does indeed imply consciousness. 
 
 You may say that even if I'm right about that then a computer doing smart 
 things would just imply the consciousness of the people who made the 
 computer. But here is where the analogy breaks down, real computers don't 
 work like the Chinese Room does, they don't have anything remotely like that 
 astounding lookup table; the godlike thing that made the Chinese Room knows 
 exactly what that room will do in every circumstance, but computer scientists 
 don't know what their creation will do, all they can do is watch it and see.  
 
 But you may also say, I don't care how the room got made, I was talking about 
 inside the room and I insist there was no consciousness inside that room. I 
 would say assigning a position to consciousness is a little like assigning a 
 position to fast or red or any other adjective, it doesn't make a lot of 
 sense. If your consciousness exists anywhere it's not inside a vat made of bone
 balancing on your shoulders, it's where you're thinking about. I am the way 
 matter behaves when it is organized in a johnkclarkian way and other things 
 are the way matter behaves when it is organized in a chineseroomian way. 
 
 And by the way, I don't intend to waste my time defending the assertion that 
 intelligent behavior implies intelligence, that would be like debating if X 
 implies X or not, I have better things to do with my time.  
 
   The King James Bible can be downloaded here
 
 
 No thanks, I'll pass on that.
 
  Only?! Einstein only seemed intelligent to scientifically literate 
  speakers in the outside world.
  
No, he was aware of his own intelligence too. 
 
 How the hell do you know that? And you seem to be using the words 
 intelligent and conscious interchangeably, they are not synonyms.
 
   If you start out defining intelligence as an 

Re: Intelligence and consciousness

2012-02-11 Thread meekerdb

On 2/11/2012 9:34 PM, John Clark wrote:
You may say that even if I'm right about that then a computer doing smart things would 
just imply the consciousness of the people who made the computer. But here is where the 
analogy breaks down, real computers don't work like the Chinese Room does, they don't 
have anything remotely like that astounding lookup table; the godlike thing that made 
the Chinese Room knows exactly what that room will do in every circumstance, but 
computer scientists don't know what their creation will do, all they can do is watch it 
and see.


Not only that, a computer implementing AI would be able to learn from its discussion.
Even if it started with an astronomically large look-up table, the look-up table would grow.
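That growth can be sketched in a few lines, with invented names and a stand-in "teacher" callback for whatever feedback the program learns from:

```python
# Sketch: a lookup table that grows by caching each new exchange.
# The class, its names, and the teacher callback are invented for
# illustration of the point above, not any real AI design.
class LearningRoom:
    def __init__(self, seed_rules):
        self.table = dict(seed_rules)  # the initial rule book

    def respond(self, prompt, teacher=None):
        if prompt in self.table:
            return self.table[prompt]
        # Unknown input: consult the teacher (if any) and grow the table.
        answer = teacher(prompt) if teacher else "..."
        self.table[prompt] = answer
        return answer

room = LearningRoom({"hello": "hi"})
room.respond("how are you", teacher=lambda p: "fine, thanks")
# The table now holds two entries; the second was learned, not authored.
```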


Brent




Re: Intelligence and consciousness

2012-02-10 Thread John Clark
On Thu, Feb 9, 2012 Craig Weinberg whatsons...@gmail.com wrote:
 The rule book is the memory.

Yes but the rule book not only contains an astronomically large database, it
also contains a super ingenious artificial intelligence program; without
those things the little man is like a naked microprocessor sitting on a
storage shelf: it's not a brain and it's not a computer and it's not doing one
damn thing.


 The contents of memory is dumb too - as dumb as player piano rolls.


That's pretty dumb, but the synapses of the brain are just as dumb, and the
atoms they, and computers and everything else, are made of are even
dumber.

 The two together only seem intelligent to Chinese speakers outside the
 door


Only?! Einstein only seemed intelligent to scientifically literate speakers
in the outside world. It seems that, as you use the term, seeming
intelligent is as good as being intelligent. In fact it seems to me that
believing intelligent actions are not a sign of intelligence is not very
intelligent.

 A conversation that lasts a few hours could probably be generated from a
 standard Chinese phrase book, especially if equipped with some useful
 evasive answers (a la ELIZA).


You bring up that stupid 40-year-old program again? Yes ELIZA displayed
little if any intelligence but that program is 40 years old! Do try to keep
up. And if you are really confident in your ideas push the thought
experiment to the limit and let the Chinese Room produce brilliant answers
to complex questions, if it just churns out ELIZA style evasive crap that
proves nothing because we both agree that's not very intelligent.

 The size isn't the point though.


I rather think it is. A book larger than the observable universe and a
program more brilliant than any written, yet you insist that if understanding
is anywhere in that room it must be in the by far least remarkable part of
it, the silly little man.  And remember the consciousness that room
produces would not be like the consciousness you or I have, it would take
that room many billions of years to generate as much consciousness as you
do in one second.

 Speed is a red herring too.


No it is not and I will tell you exactly why as soon as the sun burns out
and collapses into a white dwarf. Speed isn't an issue so you have to
concede that I won that point.

  if it makes sense for a room to be conscious, then it makes sense that
 anything and everything can be conscious


Yes, providing the thing in question behaves intelligently.  We only think
our fellow humans are conscious when they behave intelligently and that's
the only reason we DON'T think they're conscious when they're sleeping or
dead; all I ask is that you play by the same rules when dealing with
computers or Chinese Rooms.

 However Searle does not expect us to think it odd that 3 pounds of grey
 goo in a bone vat can be conscious


 Because unlike you, he [Searle] is not presuming the neuron doctrine. I
 think his position is that consciousness cannot be solely due to the
 material functioning of the brain and must be something else.


And yet if you change the way the brain functions, through drugs or surgery
or electrical stimulation or a bullet to the head, the conscious experience
changes too.  And if the brain can make use of this free floating glowing
bullshit of yours what reason is there to believe that computers can't also
do so? I've asked this question before and the best you could come up with
is that computers aren't squishy and don't smell bad so they can't be
conscious. I don't find that argument compelling.

 We know the brain relates directly to consciousness, but we don't know
 for sure how.


If you don't know how the brain produces consciousness then how in the
world can you be so certain a computer can't do it too, especially if the
computer is as intelligent or even more intelligent than the brain?

 We can make a distinction between the temporary disposition of the brain
and its more permanent structure or organization.


A .44 Magnum bullet in the brain would cause a change in brain organization
and would seem to be rather permanent. I believe such a thing would also
cause a rather significant change in consciousness. Do you disagree?

  John K Clark




Re: Intelligence and consciousness

2012-02-10 Thread Craig Weinberg
On Feb 10, 3:52 pm, John Clark johnkcl...@gmail.com wrote:
 On Thu, Feb 9, 2012 Craig Weinberg whatsons...@gmail.com wrote:

  The rule book is the memory.

 Yes but the rule book not only contains an astronomically large database, it
 also contains a super ingenious artificial intelligence program; without
 those things the little man is like a naked microprocessor sitting on a
 storage shelf: it's not a brain, it's not a computer, and it's not doing one
 damn thing.

I think you are radically overestimating the size of the book and the
importance of the size to the experiment. ELIZA was about 20Kb.
http://www.jesperjuul.net/eliza/

If it's a thousand times better than ELIZA, then you've got a 20 Mb
rule book. The King James Bible can be downloaded here
http://www.biblepath.com/bible_download.html at 14.33Mb. There is no
time limit specified so we have no way of knowing how long it would
take for a book this size to fail the Turing Test.
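Craig's size estimate is easy to check in miniature: ELIZA-style chat is just ranked pattern substitution over a small rule table. A minimal sketch (illustrative only, not Weizenbaum's original script; the rules and phrasings are invented examples):

```python
import random
import re

# A minimal ELIZA-style responder: a handful of regex "rules" plus evasive
# fallbacks. The whole rule book here is a few hundred bytes, yet it can
# carry a short exchange -- the point about trivial pattern matching.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(line: str) -> str:
    """Return the first matching rule's reassembled output, else evade."""
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I need a holiday"))  # -> "Why do you need a holiday?"
```

Scaling the table from 20 Kb to 20 Mb buys a longer conversation before detection, not a different mechanism.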

It might be more useful to use more of a pharmaceutical model, like
LD50 or LD100: how long a conversation do you have to have before
50% of the native speakers flunk the system? Is the Turing Test an LD0
test with unbounded duration, where no native speaker can ever tell the
difference no matter how long they converse? This is clearly
impossible. It's context dependent and subjective. I only assume that
everyone here is human because I have no reason to doubt it, but in
a testing situation, I would not be confident that everyone here is
human judging only from responses.


 The contents of memory are dumb too - as dumb as player piano rolls.

 That's pretty dumb. but the synapses of the brain are just as dumb and the
 atoms they, and computers and everything else, are made of are even
 dumber.

Player piano rolls aren't living organisms that create and repair vast
organic communication networks. Computers don't do anything by
themselves: they have to be carefully programmed and maintained by
people, and they have to have human users to make sense of any of their
output. Neurons require no external physical agents to program or use
them.


  The two together only seem intelligent to Chinese speakers outside the
  door

 Only?! Einstein only seemed intelligent to scientifically literate speakers
 in the outside world.

No, he was aware of his own intelligence too. I think you're grasping
at straws.

 It seems that, as you use the term, seeming
 intelligent is as good as being intelligent.

So if I imitate Arnold Schwarzenegger on the phone, then that's as
good as me being Schwarzenegger.

 In fact it seems to me that
 believing intelligent actions are not a sign of intelligence is not very
 intelligent.

I understand that you think of it that way, and I think that is a
moronic belief, but I don't think that makes you a moron. It all comes
down to thinking in terms of an arbitrary formalism of language and
working backward to reality, rather than working from concrete
realism and using language to understand it. If you start out defining
intelligence as an abstract function and category of behaviors, rather
than as a quality of consciousness which entails the capacity for behaviors
and functions, then you end up proving your own assumptions with
circular reasoning.


  A conversation that lasts a few hours could probably be generated from a
  standard Chinese phrase book, especially if equipped with some useful
  evasive answers (a la ELIZA).

 You bring up that stupid 40 year old program again? Yes ELIZA displayed
 little if any intelligence but that program is 40 years old! Do try to keep
 up.

You keep up. ELIZA is still being updated as of 2007:
http://webscripts.softpedia.com/script/Programming-Methods-and-Algorithms-Python/Artificial-Intelligence-Chatterbot-Eliza-15909.html

I use ELIZA as an example because you can clearly see that it is not
intelligent and you can clearly see that it could superficially seem
intelligent. It becomes more difficult to be as sure what is going on
when the program is more sophisticated because it is a more convincing
fake. The ELIZA example is perfect because it exposes the fundamental
mechanism by which trivial intelligence can be mistaken for the
potential for understanding.

 And if you are really confident in your ideas push the thought
 experiment to the limit and let the Chinese Room produce brilliant answers
 to complex questions, if it just churns out ELIZA style evasive crap that
 proves nothing because we both agree that's not very intelligent.

Ok, make it a million times the size of ELIZA. A set of 1,000 books. I
think that would pass an LD50 Turing Test of a five hour conversation,
don't you?


  The size isn't the point though.

 I rather think it is. A book larger than the observable universe and a
 program more brilliant than any written,

Where are you getting that from?

 yet you insist that if understanding
 is anywhere in that room it must be in the by far least remarkable part of
 it, the silly little man.

That's the 

Re: Intelligence and consciousness

2012-02-09 Thread Bruno Marchal


On 08 Feb 2012, at 18:47, Stephen P. King wrote:


On 2/8/2012 11:46 AM, Bruno Marchal wrote:


On 07 Feb 2012, at 18:52, Craig Weinberg wrote:


On Feb 6, 11:30 am, Bruno Marchal marc...@ulb.ac.be wrote:

I think Quentin has a theory here, that you might be stupid.
Joseph Knight has another theory, which is that you are a troll.


Umm, could one's theory of another be such that it is a faithful  
subimage of the theory maker?


Maybe I have a theory that Bruno is a Tyrant and Craig is a  
Jester. ;-)




You do seem to avoid reasoning, to reassert in many ways a  
conviction that you have.
You seem to want to change the rules of the game, where, personally,  
I want them to be applied in any field, notably in theology,  
defined as the notion of truth about entities. Basically Plato's  
definition of theology: Truth. The truth we search, not the one we  
might find.




Could you imagine that your representation is not singular? There is  
more than one way of thinking of the idea that you are considering.


How? Either your consciousness changes in the Turing emulation at some  
level, or it does not (comp). The rest is logic, and can be explained  
in arithmetic, which can be formalized in contexts which eliminate the  
metaphysical baggage.
In theoretical science we can always be clear enough that  
colleagues, or nature, can find a mistake, so that we can progress.
In (continental-like) philosophy that's different, but that is the  
reason why I avoid that type of philosophy at the start.

but the trick is that
I emulate Einstein himself, and I provide the answer that Einstein
answers me (and I guess I will have to make some work to understand
them, or not).


It still doesn't make you Einstein, which is Searle's point.


And of course I am not Einstein, in that display, but Searle is the  
one who makes the confusion. Einstein is the relatively concrete  
immaterial person which has been temporarily able to manifest itself  
through the easy but tedious task of emulating its brain.
Searle confused an easy & low level of simulation (neurons, say)  
with the emulated person, which, if you deny the consciousness, is  
an actual zombie (corroborating Stathis' early debunking of your  
argument).


There is no problem with having convictions, Craig, but you have to  
keep them personal, and this holds when reasoning for comp or for  
non-comp, or on whatever. It is the very idea of *reasoning* (always  
from public assumptions).


If not, I am afraid you are just not playing the game most  
participants want to play on the list.


Both in science and in philosophy there are scientists and  
philosophers. Scientists are those who can recognize they might be  
wrong, or that they are wrong. You seem unable to conceive  
that comp *might* be true (in the weak sense of the existence of  
*some* level of substitution), and you seem unable to write down  
your assumptions and the reasoning which leads to your conviction.
Worse, you seem gifted in rhetorical tricks to avoid error  
recognition (abounding in Knight's idea that you might be a troll,  
of which I am not yet sure).


   But you cannot be wrong, Bruno, right? LOL


Of course I can be wrong. But you have to show the error if you think  
so. I worked hard to make the argument modularized in many simple  
steps to help you in that very task.


And of course comp can be wrong too, but if my argument is correct,  
the only way to know that is to find a physical fact contradicting  
the comp physical prediction. That should not be too difficult given  
that comp gives the whole of physics. In 1991, after the discovery of  
p -> BDp (the arithmetical quantization), I predicted that  
comp+Theaetetus would be refuted before 2000.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Intelligence and consciousness

2012-02-09 Thread Stephen P. King

On 2/9/2012 5:19 AM, Bruno Marchal wrote:


On 08 Feb 2012, at 18:47, Stephen P. King wrote:


On 2/8/2012 11:46 AM, Bruno Marchal wrote:


On 07 Feb 2012, at 18:52, Craig Weinberg wrote:


On Feb 6, 11:30 am, Bruno Marchal marc...@ulb.ac.be wrote:

I think Quentin has a theory here, that you might be stupid.
Joseph Knight has another theory, which is that you are a troll.


Umm, could one's theory of another be such that it is a faithful 
subimage of the theory maker?


Maybe I have a theory that Bruno is a Tyrant and Craig is a Jester. ;-)



You do seem to avoid reasoning, to reassert in many ways a 
conviction that you have.
You seem to want to change the rules of the game, where, personally, 
I want them to be applied in any field, notably in theology, defined 
as the notion of truth about entities. Basically Plato's definition 
of theology: Truth. The truth we search, not the one we might find.




Could you imagine that your representation is not singular? There is 
more than one way of thinking of the idea that you are considering.


How? Either your consciousness changes in the Turing emulation at some 
level, or it does not (comp). The rest is logic, and can be explained 
in arithmetic, which can be formalized in contexts which eliminate the 
metaphysical baggage.
In theoretical science we can always be clear enough that 
colleagues, or nature, can find a mistake, so that we can progress.
In (continental-like) philosophy that's different, but that is the 
reason why I avoid that type of philosophy at the start.

but the trick is that
I emulate Einstein himself, and I provide the answer that Einstein
answers me (and I guess I will have to make some work to understand
them, or not).


It still doesn't make you Einstein, which is Searle's point.


And of course I am not Einstein, in that display, but Searle is the 
one who makes the confusion. Einstein is the relatively concrete 
immaterial person which has been temporarily able to manifest itself 
through the easy but tedious task of emulating its brain.
Searle confused an easy & low level of simulation (neurons, say) 
with the emulated person, which, if you deny the consciousness, is 
an actual zombie (corroborating Stathis' early debunking of your 
argument).


There is no problem with having convictions, Craig, but you have to 
keep them personal, and this holds when reasoning for comp or for non-comp, 
or on whatever. It is the very idea of *reasoning* (always from 
public assumptions).


If not, I am afraid you are just not playing the game most 
participants want to play on the list.


Both in science and in philosophy there are scientists and 
philosophers. Scientists are those who can recognize they might be 
wrong, or that they are wrong. You seem unable to conceive 
that comp *might* be true (in the weak sense of the existence of 
*some* level of substitution), and you seem unable to write down 
your assumptions and the reasoning which leads to your conviction.
Worse, you seem gifted in rhetorical tricks to avoid error 
recognition (abounding in Knight's idea that you might be a troll, 
of which I am not yet sure).


   But you cannot be wrong, Bruno, right? LOL


Of course I can be wrong. But you have to show the error if you think 
so. I worked hard to make the argument modularized in many simple 
steps to help you in that very task.


And of course comp can be wrong too, but if my argument is correct, 
the only way to know that is to find a physical fact contradicting 
the comp physical prediction. That should not be too difficult given 
that comp gives the whole of physics. In 1991, after the discovery of 
p -> BDp (the arithmetical quantization), I predicted that 
comp+Theaetetus would be refuted before 2000.


Bruno

http://iridia.ulb.ac.be/~marchal/




Dear Bruno,

My best expression of my theory, although it does not quite rise 
to that level, is in my last response to ACW under the subject line 
Ontological problems of COMP. My claim is that your argument is 
self-refuting as it claims to prohibit the very means to communicate it. 
I point out that this problem can easily be resolved by putting the 
abstract aspect of COMP at the same ontological level as its 
interpersonal expressions, but this implies dualism which you resist. 
That is your choice, but you need to understand the consequences of 
Ideal monism. It has no explanation for interactions between minds.


Onward!

Stephen




Re: Intelligence and consciousness

2012-02-09 Thread Bruno Marchal


On 09 Feb 2012, at 13:20, Stephen P. King wrote:


On 2/9/2012 5:19 AM, Bruno Marchal wrote:


On 08 Feb 2012, at 18:47, Stephen P. King wrote:


On 2/8/2012 11:46 AM, Bruno Marchal wrote:


On 07 Feb 2012, at 18:52, Craig Weinberg wrote:


On Feb 6, 11:30 am, Bruno Marchal marc...@ulb.ac.be wrote:

I think Quentin has a theory here, that you might be stupid.
Joseph Knight has another theory, which is that you are a troll.


Umm, could one's theory of another be such that it is a faithful  
subimage of the theory maker?


Maybe I have a theory that Bruno is a Tyrant and Craig is a  
Jester. ;-)




You do seem to avoid reasoning, to reassert in many ways a  
conviction that you have.
You seem to want to change the rules of the game, where,  
personally, I want them to be applied in any field, notably in  
theology, defined as the notion of truth about entities.  
Basically Plato's definition of theology: Truth. The truth we  
search, not the one we might find.




Could you imagine that your representation is not singular? There  
is more than one way of thinking of the idea that you are  
considering.


How? Either your consciousness changes in the Turing emulation at  
some level, or it does not (comp). The rest is logic, and can be  
explained in arithmetic, which can be formalized in contexts which  
eliminate the metaphysical baggage.
In theoretical science we can always be clear enough that  
colleagues, or nature, can find a mistake, so that we can progress.
In (continental-like) philosophy that's different, but that is the  
reason why I avoid that type of philosophy at the start.

but the trick is that
I emulate Einstein himself, and I provide the answer that  
Einstein
answers me (and I guess I will have to make some work to  
understand

them, or not).


It still doesn't make you Einstein, which is Searle's point.


And of course I am not Einstein, in that display, but Searle is  
the one who makes the confusion. Einstein is the relatively  
concrete immaterial person which has been temporarily able to  
manifest itself through the easy but tedious task of emulating its  
brain.
Searle confused an easy & low level of simulation (neurons, say)  
with the emulated person, which, if you deny the consciousness,  
is an actual zombie (corroborating Stathis' early debunking of  
your argument).


There is no problem with having convictions, Craig, but you have  
to keep them personal, and this holds when reasoning for comp or for  
non-comp, or on whatever. It is the very idea of *reasoning* (always  
from public assumptions).


If not, I am afraid you are just not playing the game most  
participants want to play on the list.


Both in science and in philosophy there are scientists and  
philosophers. Scientists are those who can recognize they might  
be wrong, or that they are wrong. You seem unable to  
conceive that comp *might* be true (in the weak sense of the  
existence of *some* level of substitution), and you seem  
unable to write down your assumptions and the reasoning which leads to  
your conviction.
Worse, you seem gifted in rhetorical tricks to avoid error  
recognition (abounding in Knight's idea that you might be a troll,  
of which I am not yet sure).


  But you cannot be wrong, Bruno, right? LOL


Of course I can be wrong. But you have to show the error if you  
think so. I worked hard to make the argument modularized in many  
simple steps to help you in that very task.


And of course comp can be wrong too, but if my argument is correct,  
the only way to know that is to find a physical fact contradicting  
the comp physical prediction. That should not be too difficult  
given that comp gives the whole of physics. In 1991, after the  
discovery of p -> BDp (the arithmetical quantization), I  
predicted that comp+Theaetetus would be refuted before 2000.


Bruno

http://iridia.ulb.ac.be/~marchal/




Dear Bruno,

   My best expression of my theory, although it does not quite  
rise to that level, is in my last response to ACW under the subject  
line Ontological problems of COMP. My claim is that your argument  
is self-refuting as it claims to prohibit the very means to  
communicate it. I point out that this problem can easily be resolved  
by putting the abstract aspect of COMP at the same ontological level  
as its interpersonal expressions, but this implies dualism which you  
resist.


You keep telling me that you defend neutral monism, and now you  
pretend that I am wrong because I resist dualism?
I have explained that comp, and thus arithmetic alone, explains many  
forms of dualism, all embedded in a precise Plotinian-like 'octalism'.




That is your choice, but you need to understand the consequences of  
Ideal monism. It has no explanation for interactions between minds.


It is not a matter of choice, but of proof in a theoretical framework.

Then the neutral arithmetical monism explains very well the  
interactions between minds a priori. Only UDA shows 

Re: Intelligence and consciousness

2012-02-09 Thread Stephen P. King

On 2/9/2012 9:14 AM, Bruno Marchal wrote:


On 09 Feb 2012, at 13:20, Stephen P. King wrote:


Dear Bruno,

   My best expression of my theory, although it does not quite rise 
to that level, is in my last response to ACW under the subject line 
Ontological problems of COMP. My claim is that your argument is 
self-refuting as it claims to prohibit the very means to communicate 
it. I point out that this problem can easily be resolved by putting 
the abstract aspect of COMP at the same ontological level as its 
interpersonal expressions, but this implies dualism which you resist.


You keep telling me that you defend neutral monism, and now you 
pretend that I am wrong because I resist dualism?
I have explained that comp, and thus arithmetic alone, explains many 
forms of dualism, all embedded in a precise Plotinian-like 'octalism'.



[SPK]

Hi Bruno,

I don't see that you distinguish between the ontological nature of 
the representation of a theory in terms of mathematics and the 
mathematics itself. You seem to identify the representation with the 
object, but never explicitly. I am simply asking you: why not? Could you 
elaborate on this octalism and how it relates to a neutral monism? 
How is its neutrality defined?





That is your choice, but you need to understand the consequences of 
Ideal monism. It has no explanation for interactions between minds.


It is not a matter of choice, but of proof in a theoretical framework.

[SPK]

But I could, if I had the skill with words, construct a theory of 
pink unicorns that would have the same degree of structure and make the 
same claims. How would I test it against yours? That is the problem. If 
a theory claims that the physical world does not exist then it throws 
away the very means to test it. It becomes by definition unfalsifiable.





Then the neutral arithmetical monism explains very well the 
interactions between minds a priori. 

[SPK]

How is it neutral when it takes a certain set of properties as 
ontologically primitive?  Neutral monism cannot do that or it violates 
the very definition of neutrality. Can you not see this? Additionally, 
you have yet to show exactly how interactions between minds are defined. 
All I have seen is discussion of a plurality of minds, but nowhere is 
there anything like a detailed example that considers the interaction 
of one mind with another. I know that minds do interact, my proof is the 
fact that this email discussion is occurring, and that is evidence 
enough. But how does your result explain it?



Only UDA shows that we have to explain matter entirely through dream 
interferences (say). That is a success, because it explains 
conceptually the origin of the physical laws, and the explanation is 
constructive, once we agree on the classical axioms  for knowledge, 
making comp testable.


But that is a problem because we have to choose a set of axioms to 
agree upon, and there is potentially an infinite number of axioms. I am 
reminded of the full extent of Pascal's /Wager/ 
http://en.wikipedia.org/wiki/Pascal%27s_Wager. There is no a priori 
way of knowing which definition of god is the correct one. Pascal's 
situation and the situation with Bpp make truth a mere accident. Maybe 
this is OK for you, but not for me. Maybe I demand too much from 
explanations of our world, but I ask that they at least explain the 
necessity of the appearances without asking me to believe in the 
explanation by blind faith.


Onward!

Stephen




Re: Intelligence and consciousness

2012-02-09 Thread John Clark
On Tue, Feb 7, 2012 at 5:18 PM, Craig Weinberg whatsons...@gmail.com wrote:

 How in hell would putting a computer in the position of the man prove
 anything??


 Because that is the position a computer is in when it runs a program
 based on user inputs and outputs. The man in the room is a CPU.


A CPU without a memory is not a brain or a computer, it's just a CPU. The
man must be an infinitesimally small part of the entire Chinese room; think
about it: the man doesn't know a word of Chinese, and yet when he is in that
room it can take in questions in Chinese and output intelligent answers to
them, also in Chinese, so the non-man parts of that room must be very
remarkable and unlike any room you or I have ever been in.

 He and the rule book are the only parts that are relevant to strong AI.


A rule book larger than the observable universe and a room that thinks a
billion trillion times slower than you or I think. Searle expects us to
throw logic to the wind and to think that even if the consciousness is
slowed down by that much the room in general can't be conscious because...
because just because. However Searle does not expect us to think it odd
that 3 pounds of grey goo in a bone vat can be conscious, that's different
because... because just because. Because of this I expect that Searle
is an idiot.

  The organization of my kitchen sink does not change with the
 temperature of the water coming out of the faucet.


 Glad to hear it, now I know who to ask when I need plumbing advice.


 Couldn't think of a legitimate counterpoint?


I didn't even try because I didn't want to inflict needless wear and tear
on my brain. The problem is I don't give a damn if the organization of my
kitchen sink changes with the temperature of the water coming out of the
faucet or not. I'm more interested in how the brain works than kitchen
sinks.

 John K Clark




Re: Intelligence and consciousness

2012-02-09 Thread Craig Weinberg
On Feb 9, 1:26 pm, John Clark johnkcl...@gmail.com wrote:
 On Tue, Feb 7, 2012 at 5:18 PM, Craig Weinberg whatsons...@gmail.com wrote:

  How in hell would putting a computer in the position of the man prove
  anything??

  Because that is the position a computer is in when it runs a program
  based on user inputs and outputs. The man in the room is a CPU.

 A CPU without a memory is not a brain or a computer, it's just a CPU. The
 man must be an infinitesimally small part of the entire Chinese room; think
 about it: the man doesn't know a word of Chinese, and yet when he is in that
 room it can take in questions in Chinese and output intelligent answers to
 them, also in Chinese, so the non-man parts of that room must be very
 remarkable and unlike any room you or I have ever been in.

The rule book is the memory. A computer works the way the Chinese Room
illustrates - the dumb CPU retrieves recorded instructions of a fixed
range of procedures. The contents of memory are dumb too - as dumb as
player piano rolls. The two together only seem intelligent to the
Chinese speakers outside the door, because they are intelligent and
can project their own understanding on the contents of what the
computer mindlessly produces.
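The "dumb CPU + rule book" picture can be rendered as a few lines of lookup code (a toy illustration of the point, not Searle's own formulation; the table entries are arbitrary examples): the operator mechanically matches character strings he does not understand and copies out the paired answer.

```python
# The "rule book": a finite table mapping input slips to output slips.
RULE_BOOK = {
    "你好": "你好！",              # "Hello" -> "Hello!"
    "你会说中文吗？": "会一点。",  # "Do you speak Chinese?" -> "A little."
}

def operator(slip: str) -> str:
    # The operator knows no Chinese; he only compares character sequences
    # and falls back to an evasive stock answer on a miss.
    return RULE_BOOK.get(slip, "请再说一遍。")  # "Please say that again."

print(operator("你好"))  # prints 你好！
```

The `operator` function never interprets the strings; whatever understanding the Chinese speakers outside perceive lives entirely in how the table was authored.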


  He and the rule book are the only parts that are relevant to strong AI.

 A rule book larger than the observable universe

Where are you getting that? I have already addressed that the book
only needs to be as large as the level of sensitivity of the Chinese
speakers demands. A conversation that lasts a few hours could probably
be generated from a standard Chinese phrase book, especially if
equipped with some useful evasive answers (a la ELIZA). The size isn't
the point though. Make it a 5000 Tb database instead. What would be
the difference? A book is only there as a device to ground the
then-unfamiliar mechanics of data processing into familiar terms.

 and a room that thinks a
 billion trillion times slower than you or I think.

Speed is a red herring too.

 Searle expects us to
 throw logic to the wind and to think that even if the consciousness is
 slowed down by that much the room in general can't be conscious because...
 because just because.

Because if it makes sense for a room to be conscious, then it makes
sense that anything and everything can be conscious, which doesn't
make much more sense than anything.

 However Searle does not expect us to think it odd
 that 3 pounds of grey goo in a bone vat can be conscious,

Because unlike you, he is not presuming the neuron doctrine. I think
his position is that consciousness cannot arise solely from the
material functioning of the brain and must be something else. We
know the brain relates directly to consciousness, but we don't know
for sure how. What Searle is doing is ruling out the possibility that
computation alone is responsible for consciousness. I agree with this,
but go further to suggest that physics has a mechanistic side
expressed as matter across space/topology as well as a non-mechanistic
side expressed as sense experience through time/sequence.

 that's different
 because... because just because. Because of this I expect that Searle
 is an idiot.

He may be an idiot, I don't know, but I think in this case his
experiment is valid, even if a bit ungainly.


   The organization of my kitchen sink does not change with the
  temperature of the water coming out of the faucet.

  Glad to hear it, now I know who to ask when I need plumbing advice.

  Couldn't think of a legitimate counterpoint?

 I didn't even try because I didn't want to inflict needless wear and tear
 on my brain. The problem is I don't give a damn if the organization of my
 kitchen sink changes with the temperature of the water coming out of the
 faucet or not. I'm more interested in how the brain works than kitchen
 sinks.

The metaphor works though. We can make a distinction between the
temporary disposition of the brain and its more permanent structure
or organization.

Craig




Re: Intelligence and consciousness

2012-02-09 Thread Bruno Marchal


On 09 Feb 2012, at 17:43, Stephen P. King wrote:


On 2/9/2012 9:14 AM, Bruno Marchal wrote:



On 09 Feb 2012, at 13:20, Stephen P. King wrote:


Dear Bruno,

   My best expression of my theory, although it does not quite  
rise to that level, is in my last response to ACW under the  
subject line Ontological problems of COMP. My claim is that your  
argument is self-refuting as it claims to prohibit the very means  
to communicate it. I point out that this problem can easily be  
resolved by putting the abstract aspect of COMP at the same  
ontological level as its interpersonal expressions, but this  
implies dualism which you resist.


You keep telling me that you defend neutral monism, and now you  
pretend that I am wrong because I resist dualism?
I have explained that comp, and thus arithmetic alone, explains  
many forms of dualism, all embedded in a precise Plotinian-like  
'octalism'.



[SPK]

Hi Bruno,

I don't see that you distinguish between the ontological nature  
of the  representation of a theory in terms of mathematics and the  
mathematics itself.


?


You seem to identify the representation with the object but never  
explicitly.


On the contrary. When you say yes to the doctor, you identify  
yourself as a relative number, relative to some universal system you  
bet on. So, with comp, we make precise where and when, and how, we  
identify something and its local 3p incarnation. Elsewhere, we keep  
distinct the terms and their interpretations (by universal numbers),  
which are many.






I am simply asking you why not? Could you elaborate on this  
octalism and how does it relate to a neutral monism. How is its  
neutrality defined?


Neutrality is defined more or less as in Spinoza. It is something  
which makes sense, and is neither mind nor body. With comp, UDA  
proves, or is supposed to prove, to you, that arithmetic  
(or anything recursively equivalent) is both necessary and enough.

The octalism consists of the eight hypostases I have already described,  
built from five variants of self-reference:
p
Bp
Bp & p
Bp & Dt
Bp & Dt & p

where B is Gödel's beweisbar (formal provability), D is consistency  
(Dp = ~B~p), and p ranges over Sigma_1 arithmetical sentences.


The G/G* splitting, the difference between what a machine can prove and  
what is true about it, is inherited by three of those variants,  
leading to 8 hypostases. The first three above match well Plotinus'  
ONE, INTELLECT and SOUL, and the last two, which both split, match well  
Plotinus' two notions of MATTER, itself a simple Platonic  
correction of an idea of Aristotle. I found the matter hypostases  
by attempting to define the measure one on the computational histories  
as observed by self-duplicating machines. Physics is given by the  
material hypostases, and their G* (divine, true) parts give the logic  
of qualia. Quanta have to be part of it, to make quantum  
indeterminacy coherent with the comp indeterminacy; this saves  
comp from solipsism and allows interactions and a first-person-plural  
notion, although this indeed has not yet been proved. A good difficult  
exercise.
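In the standard provability-logic notation used in this thread, the count of eight can be laid out explicitly (this is only a summary of the paragraph above, not new material):

```latex
% Five self-reference variants, with B = beweisbar (provability),
% Dp \equiv \neg B \neg p (consistency), p a \Sigma_1 sentence:
%   p                      (truth)
%   Bp                     (provability)
%   Bp \land p             (knowability)
%   Bp \land Dt            (observability)
%   Bp \land Dt \land p    (sensibility)
% Three of these (Bp, Bp \land Dt, Bp \land Dt \land p) inherit the
% G/G* splitting between what the machine proves and what is true of
% it, giving 5 + 3 = 8 hypostases in total.
```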










That is your choice, but you need to understand the consequences  
of Ideal monism. It has no explanation for interactions between  
minds.


It is not a matter of choice, but of proof in a theoretical framework.

[SPK]

But I could, if I had the skill with words, construct a theory  
of pink unicorns that would have the same degree of structure and  
make the same claims.


Then do it, and let us compare it to comp.





How would I test it against yours? That is the problem.


I have no theory. Today probably 99% of scientists believe in comp  
(not always consciously), and in some primariness of physics. I just  
explain that this does not work.


As a logician, I just explain that comp and materialism are not  
compatible. The fundamental realm is computer science, and technically  
we can extract the many-dreams structure from number theory, including  
intensional theory (with relative roles for numbers, like codes).





If a theory claims that the physical world does not exist then it  
throws away the very means to test it.


I can't agree more.




It becomes by definition unfalsifiable.


Sure.

The whole point is that comp proves the physical reality to be  
observable. Physical reality exists, but is not primitive.










Then the neutral arithmetical monism explains very well the  
interactions between minds a priori.

[SPK]

How is it neutral when it takes a certain set of properties as  
ontologically primitive?  Neutral monism cannot do that or it  
violates the very definition of neutrality.


Something exists, right? Monism just takes it as being neutral on the  
body or mind side.

That should not prevent the ontology from being clear and intelligible.




Can you not see this? Additionally, you have yet to show exactly how  
interactions between minds are defined.


I have yet to prove the existence of a particle.

The result is that anyone 

Re: Intelligence and consciousness

2012-02-08 Thread 1Z


On Feb 7, 5:52 pm, Craig Weinberg whatsons...@gmail.com wrote:
 On Feb 6, 11:30 am, Bruno Marchal marc...@ulb.ac.be wrote:

  More seriously, in the Chinese Room experiment, Searle's error can
  also be seen as a confusion of level. If I can emulate Einstein's brain,
  I can answer all the questions you ask Einstein,

 You're assuming that a brain can be emulated in the first place. If
 that were true, there is no need to have the thought experiment.

You seem to be confusing the theoretical can and the practical can.

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-08 Thread Bruno Marchal


On 07 Feb 2012, at 18:52, Craig Weinberg wrote:


On Feb 6, 11:30 am, Bruno Marchal marc...@ulb.ac.be wrote:


More seriously, in the chinese room experience, Searle's error can be
seen also as a confusion of level. If I can emulate Einstein brain,
I can answer all question you ask to Einstein,


You're assuming that a brain can be emulated in the first place.


Together with Searle, for the purpose of following his thought  
experiment, and showing where it is invalid.
You might study the detailed answer to Searle, by Dennett and  
Hofstadter, in the book The Mind's I.
I appreciate Searle, Lucas and Penrose for presenting real arguments  
which can be shown precisely wrong.
I am not sure Searle understood his error, or recognized it. He might  
belong to the philosophers who start from a personal conviction (which  
is a symptom of not wanting to play the *science* game).
Penrose did recognize his error, but hid it, and seems unaware of the  
gigantic impact of precisely that error. Gödel's theorem does not  
show that we are not machines; it shows that we cannot  
consistently know which machine we are, and that is the start of the  
meta-formal explanation of the appearance of the subjective and  
objective indeterminacies, for any Löbian machine looking inside.
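The formal fact behind "we cannot consistently know which machine we are" is the second incompleteness theorem; in the notation of the provability logic GL it reads as follows (a standard result, added here only as a reference point):

```latex
% If the machine is consistent (\Diamond\top, i.e. \neg\Box\bot), it
% cannot prove its own consistency; in GL this is the theorem
%   \Diamond\top \rightarrow \neg\Box\Diamond\top
% which follows by contraposition from L\"ob's axiom instantiated at
% \bot:
%   \Box(\Box\bot \rightarrow \bot) \rightarrow \Box\bot .
```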





If
that were true, there is no need to have the thought experiment.


I think Quentin has a theory here, that you might be stupid.
Joseph Knight has another theory, which is that you are a troll.

You do seem to avoid reasoning, and to reassert in many ways a conviction  
that you have.
You seem to want to change the rules of the game, where, personally, I  
want them applied in any field, notably in theology, defined as  
the notion of truth about entities. Basically Plato's definition of  
theology. Truth. The truth we search for, not the one we might find.








but the trick is that
I emulate Einstein himself, and I provide the answer that Einstein
answers me (and I guess I will have to make some work to understand
them, or not).


It still doesn't make you Einstein, which is Searle's point.


And of course I am not Einstein in that display, but Searle is the  
one who makes the confusion. Einstein is the relatively concrete  
immaterial person who has been temporarily able to manifest itself  
through the easy but tedious task of emulating his brain.
Searle confused an easy low level of simulation (neurons, say) with  
the emulated person, who, if you deny the consciousness, is an  
actual zombie (corroborating Stathis' early debunking of your argument).


There is no problem with having convictions, Craig, but you have to  
keep them personal, and this holds when reasoning for comp or for non-comp,  
or on whatever. It is the very idea of *reasoning* (always from public  
assumptions).

If not, I am afraid you are just not playing the game most participants  
want to play on the list.


Both in science and in philosophy there are scientists and  
philosophers. Scientists are those who can recognize that they might be  
wrong, or that they are wrong. You seem unable to conceive that  
comp *might* be true (in the weak sense of the existence of *some*  
level of substitution), and you seem unable to put down your  
assumptions and a reasoning which leads to your conviction.
Worse, you seem gifted in rhetorical tricks to avoid error recognition  
(abounding in Knight's idea that you might be a troll, of which I am not  
yet sure).


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Intelligence and consciousness

2012-02-08 Thread Stephen P. King

On 2/8/2012 11:46 AM, Bruno Marchal wrote:


On 07 Feb 2012, at 18:52, Craig Weinberg wrote:


On Feb 6, 11:30 am, Bruno Marchal marc...@ulb.ac.be wrote:

I think Quentin has a theory here, that you might be stupid.
Joseph Knight has another theory, which is that you are a troll.


Umm, could one's theory of another be such that it is a faithful 
subimage of the theory maker?


Maybe I have a theory that Bruno is a Tyrant and Craig is a Jester. ;-)



You do seem avoiding reasoning, to reassert in many ways a conviction 
that you have.
You want to seem to change the rule of the game, where, personally, I 
want them to be applied in any field, notably in theology, defined as 
the notion of truth about entities. Basically Plato's definition of 
Theology. Truth. The truth we search, not the one we might find.




Could you imagine that your representation is not singular? There is 
more than one way of thinking of the idea that you are considering.









but the trick is that
I emulate Einstein himself, and I provide the answer that Einstein
answers me (and I guess I will have to make some work to understand
them, or not).


It still doesn't make you Einstein, which is Searle's point.


And of course I am not Einstein, in that display, but Searle is the 
one who makes the confusion. Einstein is the relatively concrete 
immaterial person which has been temporary able to manifest itself 
through the easy but tedious task to emulate its brain.
Searle confused an easy  low level of simulation (neurons, say) with 
the emulated person, which, if you deny the consciousness, is an 
actual zombie (corroborating Stathis' early debunking of your argument).


There is no problem with having conviction, Craig, but you have to 
keep them personal, and this for reasoning for comp or for non-comp, 
or on whatever. It is the very idea of *reasoning* (always from public 
assumptions).


If not I am afraid you are just not playing the game most participant 
want to play in the list.


Both in science and in philosophy there are scientists and 
philosophers. Scientists are those who can recognize they might be 
wrong, or that they are wrong. You seem to be unable to conceive that 
comp *might* be true, (in the weak sense of the existence of  *some* 
level of substitution), and you seem be unable to put down your 
assumption and a reasoning which leads to your conviction.
Worst, you seem gifted in rhetorical tricks to avoid error recognition 
(abunding in Knight's idea that you might be a troll, which I am not 
yet sure).


But you cannot be wrong, Bruno, right? LOL

Onward!

Trolling Stephen




Re: Intelligence and consciousness

2012-02-07 Thread Craig Weinberg
On Feb 6, 10:54 am, John Clark johnkcl...@gmail.com wrote:
 On Sun, Feb 5, 2012  Craig Weinberg whatsons...@gmail.com wrote:

   The only understanding of Chinese going on is by those Chinese speakers
  outside the room who are carrying on a one-sided conversation with a rule
  book.

 So you say, but Searle says his idiotic thought experiment has PROVEN it;
 and yet one key step in the proof is if there is understanding it can
 only be in the little man but the little man does not understand so there
 is no understanding involved.

If you are proving that a computer in the position of the man has no
understanding then this thought experiment proves it. If you are
trying to prove that there is no understanding in the universe then
the thought experiment does not prove that. The whole idea of there
being 'understanding involved' is a non-sequitur. It takes
consciousness for granted, like some free floating glow. If I
understand how to cook and then I walk into a building, does the
building, now that it includes me, now know how to cook?

 But if you start the thought experiment as
 that as one of the axioms then what the hell is the point of the thought
 experiment in the first place, how can you claim to have proven what you
 just assumed?  I stand by my remarks that Clark's Chinese Room, described
 previously, has just as much profundity (or lack thereof) as Searle's
 Chinese Room.

 OK fine, the man does not understand Chinese, so what? How does that
  prove that understanding was not involved in the room/outside-people
  conversation?

   Because there is nobody on the inside end of the conversation.

 So what?

So nothing is understanding you on the other end. It's not a live
performance, it's a canned recording.

 The point of the thought experiment was to determine if
 understanding was involved at the room end,

Huh? The point of the thought experiment was to show that AI doesn't
necessarily understand the data it is processing, which it does show. I
like my truck carrying a piano over a bumpy road better, but it still
reveals the important point that accounting is not understanding.

 not how many people were inside
 the room, you can write and scream that there was no understanding from now
 to the end of time but you have not proven it, and neither has Searle.

And you can do the same denying it. Searle is assuming the common
sense of the audience to show them that having a conversation in a
language you don't understand cannot constitute understanding, but he
underestimates the power of comp to obscure common sense.

 It's
 not uncommon for a mathematical proof to contain a hidden assumption of
 the very thing you're trying to prove, but usually this error is subtle and
 takes some close analysis and digging to find the mistake, but in the case
 of the Chinese Room the blunder is as obvious as a angry elephant in your
 living room and that is why I have no hesitation in saying that John Searle
 is a moron.

I don't think he's a moron, but he may not understand that comp
already denies any distinction between trivial or prosthetic
intelligence and subjective understanding, so it doesn't help to make
examples which highlight that distinction.


  I suspect the use of the man in the room is a device to force people to
  identify personally with (what would normally be) the computer.

 Yes that's exactly what he's doing, and that's what makes Searle a con
 artist, he's like a stage magician who waves his right hand around and
 makes you look at it so you don't notice what his left hand is doing,

No, I think it's an honest device to help people get around their
prejudices. If someone claims that a program is no different than a
person, this is a way that we can imagine what it is actually like to
do what a program does. The result is that rather than being forced
to accept that yes, AI must be sentient, we see clearly that no, AI
appears to be an automatic and unconscious mechanism.

 and
 the thing that makes him a idiot is that he believes his own bullshit. It's
 as if I forced you to identify with the neurotransmitter acetylcholine and
 then asked you to derive grand conclusions from the fact that acetylcholine
 doesn't understand much.

If the claim of strong AI was that it functioned exactly like
acetylcholine, then what's wrong with that?


  yes I only have first hand knowledge of consciousness. Because the nature
  of sense is to fill the
  gaps, connect the dots, solve the puzzle, etc, we are able to generalize
  figuratively. We are not limited to solipsism

 By fill in the gaps you mean we accept certain rules of thumb and axioms
 of existence to be true even though we can not prove them,

No. It means we make sense of patterns. We figure them out. The rules
and axioms are a posteriori.

 like induction
 and that intelligent behavior implies consciousness.

No, it requires sense. When we look at yellow and blue dots from far
away and see green, that pattern is not determined as an 

Re: Intelligence and consciousness

2012-02-07 Thread Craig Weinberg
On Feb 6, 11:30 am, Bruno Marchal marc...@ulb.ac.be wrote:

 More seriously, in the chinese room experience, Searle's error can be
 seen also as a confusion of level. If I can emulate Einstein brain,
 I can answer all question you ask to Einstein,

You're assuming that a brain can be emulated in the first place. If
that were true, there is no need to have the thought experiment.

 but the trick is that
 I emulate Einstein himself, and I provide the answer that Einstein
 answers me (and I guess I will have to make some work to understand
 them, or not).

It still doesn't make you Einstein, which is Searle's point.

Craig




Re: Intelligence and consciousness

2012-02-07 Thread John Clark
On Tue, Feb 7, 2012  Craig Weinberg whatsons...@gmail.com wrote:

 If you are proving that a computer in the position of the man has no
 understanding then this thought experiment proves it.


How in hell would putting a computer in the position of the man prove
anything?? The man is just a very very very small part of the Chinese Room,
all Searle proved is that a tiny part of a system does not have all the
properties the entire system has. Well duh, the neurotransmitter
acetylcholine is part of the human brain and would not work without it, but
acetylcholine does not have all the properties that the entire human brain
has either.

 It takes consciousness for granted, like some free floating glow.


Oh now I see the light, literally, consciousness is like some free floating
glow! Now I understand everything!

 If I understand how to cook and then I walk into a building, does the
 building, now that it includes me, now know how to cook?


If you didn't know how to cook, if you didn't even know how to boil water
but the building was now employed at 4 star restaurants preparing delicious
meals then certainly the building knows how to cook, and you must be a very
small cog in that operation.

 Searle is assuming the common sense of the audience to show them that
 having a conversation in a language you don't understand cannot constitute
 understanding


I am having a conversation right now, and acetylcholine is in your brain,
but acetylcholine does not understand English, so I am having a conversation
in English with somebody who does not understand English. Foolish reasoning,
is it not?

 The organization of my kitchen sink does not change with the temperature
 of the water coming out of the faucet.


Glad to hear it, now I know who to ask when I need plumbing advice.

 John K Clark




Re: Intelligence and consciousness

2012-02-07 Thread Quentin Anciaux
2012/2/7 Craig Weinberg whatsons...@gmail.com

 On Feb 6, 11:30 am, Bruno Marchal marc...@ulb.ac.be wrote:

  More seriously, in the chinese room experience, Searle's error can be
  seen also as a confusion of level. If I can emulate Einstein brain,
  I can answer all question you ask to Einstein,

 You're assuming that a brain can be emulated in the first place. If
 that were true, there is no need to have the thought experiment.


You're assuming that a brain can't be emulated in the first place. If that
were true, there is no need to have the thought experiment.

Stupid thought, stupid conclusion Craig Weinberg as usual.



  but the trick is that
  I emulate Einstein himself, and I provide the answer that Einstein
  answers me (and I guess I will have to make some work to understand
  them, or not).

 It still doesn't make you Einstein, which is Searle's point.

 Craig





-- 
All those moments will be lost in time, like tears in rain.




Re: Intelligence and consciousness

2012-02-07 Thread Craig Weinberg
On Feb 7, 1:41 pm, John Clark johnkcl...@gmail.com wrote:
 On Tue, Feb 7, 2012  Craig Weinberg whatsons...@gmail.com wrote:

  If you are proving that a computer in the position of the man has no
  understanding then this thought experiment proves it.

 How in hell would putting a computer in the position of the man prove
 anything??

Because that is the position a computer is in when it runs a program
based on user inputs and outputs. The man in the room is a CPU.

 The man is just a very very very small part of the Chinese Room,

He and the rule book are the only parts that are relevant to strong
AI.
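The man-as-CPU, rule-book-as-program reading can be sketched in a few lines (a toy illustration only; the table entries, written in pinyin, are hypothetical stand-ins for Searle's vast rule book):

```python
# Toy Chinese Room: the "man" is the lookup loop, the "rule book" a table.
# The mapping is purely syntactic: symbols in, symbols out, with no model
# of what any symbol means -- which is the distinction under discussion.
RULE_BOOK = {
    "ni hao": "ni hao!",                 # hypothetical entries; a real
    "ni hui shuo zhongwen ma?": "hui.",  # rule book would cover every input
}

def chinese_room(symbols: str) -> str:
    """Follow the rule book mechanically; unknown input gets a stock reply."""
    return RULE_BOOK.get(symbols, "qing zai shuo yi bian.")

print(chinese_room("ni hao"))  # the room answers; nothing inside understands
```

Whether such mechanical lookup could ever scale to open conversation, and whether the whole system would then understand, is exactly what the thread disputes.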

 all Searle proved is that a tiny part of a system does not have all the
 properties the entire system has. Well duh, the neurotransmitter
 acetylcholine is part of the human brain and would not work without it, but
 acetylcholine does not have all the properties that the entire human brain
 has either.

You are assuming a part/whole relationship rather than a form/content
relationship.


  It takes consciousness for granted, like some free floating glow.

 Oh now I see the light, literally, consciousness is like some free floating
 glow! Now I understand everything!

Yes, that would solve your problem completely. The rule book and the
man glow a little, and together they make the whole room glow much
more.


  If I understand how to cook and then I walk into a building, does the
  building, now that it includes me, now know how to cook?

 If you didn't know how to cook, if you didn't even know how to boil water
 but the building was now employed at 4 star restaurants preparing delicious
 meals then certainly the building knows how to cook, and you must be a very
 small cog in that operation.

I guess you are talking about a universe where buildings work at
restaurants or something.


  Searle is assuming the common sense of the audience to show them that
  having a conversation in a language you don't understand cannot constitute
  understanding

 I am having a conversation right now and acetylcholine is in your brain but
 acetylcholine does not understand English, so I am having a conversation in
 English with somebody who does not understand English. Foolish reasoning is
 it not.

Again, you assume a part/whole relation rather than a form/content
relation. A story is not made of pages in a book; it is an experience
which is made possible through the understanding of the symbolic
content of the pages. Anyone can copy the words from one book to
another, or give instructions about which sections of what book to excise
and reproduce, but that doesn't make them a storyteller.


  The organization of my kitchen sink does not change with the temperature
  of the water coming out of the faucet.

 Glad to hear it, now I know who to ask when I need plumbing advice.

Couldn't think of a legitimate counterpoint?

Craig




Re: Intelligence and consciousness

2012-02-07 Thread Craig Weinberg
On Feb 7, 3:08 pm, Quentin Anciaux allco...@gmail.com wrote:
 2012/2/7 Craig Weinberg whatsons...@gmail.com

  On Feb 6, 11:30 am, Bruno Marchal marc...@ulb.ac.be wrote:

   More seriously, in the chinese room experience, Searle's error can be
   seen also as a confusion of level. If I can emulate Einstein brain,
   I can answer all question you ask to Einstein,

  You're assuming that a brain can be emulated in the first place. If
  that were true, there is no need to have the thought experiment.

 You're assuming that a brain can't be emulated in the first place. If that
 were true, there is no need to have the thought experiment.

 Stupid thought, stupid conclusion Craig Weinberg as usual.

I'm not assuming that it can't be emulated, I am only assuming an
appropriately skeptical stance, especially since brain emulation has
never occurred in reality. I say that it is not proven that brains can
be emulated, that's all. If the refutation of the Chinese Room is
contingent upon the assumption that brains can be emulated, then it's a
religious faith.

My point is that the Chinese Room doesn't require a belief or
disbelief in brain emulation, it only demonstrates the difference
between trivial computation and personal understanding...something
which comp is in pathological denial of.

Craig




Re: Intelligence and consciousness

2012-02-07 Thread Quentin Anciaux
2012/2/7 Craig Weinberg whatsons...@gmail.com

 On Feb 7, 3:08 pm, Quentin Anciaux allco...@gmail.com wrote:
  2012/2/7 Craig Weinberg whatsons...@gmail.com
 
   On Feb 6, 11:30 am, Bruno Marchal marc...@ulb.ac.be wrote:
 
More seriously, in the chinese room experience, Searle's error can be
seen also as a confusion of level. If I can emulate Einstein brain,
I can answer all question you ask to Einstein,
 
   You're assuming that a brain can be emulated in the first place. If
   that were true, there is no need to have the thought experiment.
 
  You're assuming that a brain can't be emulated in the first place. If
 that
  were true, there is no need to have the thought experiment.
 
  Stupid thought, stupid conclusion Craig Weinberg as usual.

 I'm not assuming that it can't be emulated, I am only assuming an
 appropriately skeptical stance - especially since brain emulation has
 never occurred in reality. I say that it is not proven that brains can
 be emulated, that's all. If the refutation of the Chinese Room is
 contingent upon the assumption that brains can be emulated than it's a
 religious faith.

 My point is that the Chinese Room doesn't require a belief or
 disbelief in brain emulation, it only demonstrates the difference
 between trivial computation and personal understanding...something
 which comp is in pathological denial of.

No, the Chinese Room refutes the room's consciousness by arguing that the
only conscious thing is the human in the room, and obviously he does not
understand Chinese. In your answer to John you replaced the human by the
CPU, and that's correct... And no one has ever argued that the CPU is
conscious... It's the execution of the program which is conscious; i.e.,
the man only does the execution, he is a part of the system, not the whole
system. What is conscious is the program executed by the man following
the book's instructions. I see no problem with a conscious thing having
unconscious parts; in fact that's what happens in our brain: only as a whole
(us) can we talk about it being conscious.

Anyway, a thought experiment per se can't determine whether AI is possible or
not... so it's not correct to say You're assuming that a brain can be
emulated blabla, or the opposite; it's plain wrong.

Quentin


 Craig





-- 
All those moments will be lost in time, like tears in rain.




Re: Intelligence and consciousness

2012-02-06 Thread Bruno Marchal

Evgenii,


On 05 Feb 2012, at 14:41, Evgenii Rudnyi wrote:


I would agree that profit should be a tool. On the other hand, it is  
working this way. There are rules of a game that are adjusted by the  
government accordingly, and then what is not forbidden is  
allowed. In such a setup, if a new idea allows us to increase  
profit, then it might be a good idea.


Only if the good idea is based on real work, or reasonable (founded)  
speculation. But if the idea is a lie (like the idea that drugs are  
very dangerous and should be prohibited), then the profit will be  
equivalent to stealing, and everybody will get poorer, except locally  
some bandits.
A few years of alcohol prohibition created Al Capone; you can  
guess what 70 years of planetary marijuana prohibition has brought...
Once the bandits have the power, as in the USA since Nixon I would  
say, the notion of honest and dishonest profit blurs completely, the  
money becomes gray, and we all become hostages of the special interests  
of a minority. The economy becomes a big pyramidal game, where the top  
steals the money of the bottom, until the system crashes.






I would say that the statement Emotions are ineffable excludes  
them from scientific considerations.


I don't see why. Emotions are ineffable is a perfectly scientific  
statement about emotions.
What is true is that emotions cannot be used *in* a scientific  
statement, but you can make scientific statements *about* emotions.
Indeed, nothing escapes the domain on which we can develop a scientific  
attitude, from emotions to the big ineffable one.
What is forbidden (or invalid) is the use of emotion in scientific  
discourse. You cannot say the lemma is correct because I feel so, or  
1+1=2 because God told me.




Then we should not mention emotions at all. This however leads to a  
practical problem. For a mass product, for example electronics,  
emotions of customers are very important to get/keep/increase the  
market share. Hence if you do not consider emotions at all, you do  
not get paid.


I can agree. Emotions are part of the panorama, and we can deal with  
them, with or without emotions.
But emotions can also be misused, as in the fear-selling business of  
the bandits.


Bruno




On 04.02.2012 22:42 Bruno Marchal said the following:

Hi Evgenyi,

On 04 Feb 2012, at 18:09, Evgenii Rudnyi wrote:


Also, if your theory is that we (in the 3-sense) are not Turing
emulable, you have to explain us why, and what it adds to the
explanation.


Bruno,

I do not have a theory.


That's OK. Technically, me neither. I am a logician. All I
assert is that two (weak) theories, mechanism and materialism, are
incompatible. I don't hide that my heart invites my brain to listen
to what some rich universal machines can already prove, and guess by
themselves, about themselves.





As for comp, my only recent note was that if one looks at the
current state of the art of computer architectures and
algorithms, then it is clear that any practical implementation is
out of reach.


Well, OK. We disagree here. AUDA is the illustration that many simple
machines (basically any first-order specification of a universal
system, a machine or programming language, extended with the
corresponding induction axioms; those are the ones I call the Löbian
machines) are already as clever as you and me. By lacking our
layers of historical and prehistorical prejudices, they seem even
rather wiser, too (in my opinion). AUDA is the theology of the
self-introspecting LUM. It gives an octuple of hypostases (inside
views of arithmetic by locally arithmetical beings) which mirrors
rather well the discourse of the Platonists, neoplatonists and
mystics in all cultures (as well argued by Aldous Huxley, for
example).

Your laptop is one inch from Löbianity, but why would you want
humans to make Löbian machines when introspection is not even in
the human curriculum? I begin to think that each time a human becomes
Löbian, he gets banned, exiled, burned, imprisoned, ignored, sent to
an asylum, or perhaps becomes a big artist, musician or something.







Whether comp is true or false in principle, frankly speaking I have
no idea.


Me neither. Practically, mechanism is more a right (to say yes or no
to the doctor).



I guess that my subconsciousness still believes in primitive
materialism, as consciously I experience the question "Why is it bad
to say that math is mind-dependent?"


Human math is human mind dependent. This does not imply that math, or
a metaphysically clean part of math (like arithmetic or computer
science) might not be the cause/reason of the stable persistent
beliefs in a physical reality. The physical reality would be a
projective view of arithmetic from inside.




Yet, I should confess that after following discussions at this list
I see some problems with such a statement and pass doubts back to
my subconsciousness. Let us see what happens.

I still listen to the lectures 

Re: Intelligence and consciousness

2012-02-06 Thread Bruno Marchal


On 06 Feb 2012, at 16:54, John Clark wrote:


Well it had better be! If the outside world could be anything we  
wanted it to be then our senses would be of no value and Evolution  
would never have had a reason to develop them. In reality if we  
project our wishes on how we interpret the information from our  
senses too much our life expectancy will be very short; I don't like  
that saber toothed tiger over there so I'll think of him as a cute  
little bunny rabbit.




Hmm... If you succeed in thinking of the saber-toothed tiger as a cute  
little bunny rabbit, your body will not send the fear chemicals needed  
by the tiger to trigger an attack response. The tiger might be very  
impressed, and think twice before eating you, even if hungry.


I agree with your reply with respect to Craig's point, though. Logicians  
like to joke with not completely relevant counter-examples.


More seriously, in the Chinese room experience, Searle's error can be  
seen also as a confusion of levels. If I can emulate Einstein's brain,  
I can answer all the questions you ask Einstein, but the trick is that  
I emulate Einstein himself, and I pass on the answers that Einstein  
gives me (and I guess I will have to do some work to understand  
them, or not).


That is an interesting error, and I would not judge someone because he  
makes an error (although I am not sure Searle recognizes it or  
understands it).


The confusion between provability and computability is of that type.  
RA (arithmetic without induction) can already simulate PA (arithmetic  
with induction), yet, like me simulating Einstein, RA remains unable  
to prove many theorems that PA can prove. For RA, proving that PA  
proves some proposition P might be much easier than proving P. RA can  
easily prove that PA and ZF prove the consistency of RA, but RA can  
hardly prove that consistency itself.
RA can emulate PA, ZF, you, and me (in the comp theory). And this  
does not mean that RA will ever believe what PA, ZF, you, or I  
might assert or believe.
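The level distinction here (a system can *emulate* a proof search without itself *proving* what that search proves) can be made concrete with a toy sketch. Everything below is an invented miniature: a tiny string-rewriting "theory" with a breadth-first derivation search. It is not RA or PA; the point is only that the host which runs the search holds no axiom or belief about the derived strings themselves.

```python
# Toy illustration of emulation vs. proof: the host merely *runs* a
# derivation search for a miniature string-rewriting system; it does
# not itself assert anything about the derived strings.
from collections import deque

def derive(axiom, rules, target, max_steps=10000):
    """Breadth-first search: can `target` be derived from `axiom`?"""
    seen, queue, steps = {axiom}, deque([axiom]), 0
    while queue and steps < max_steps:
        s = queue.popleft()
        if s == target:
            return True
        for rule in rules:
            for t in rule(s):
                # Bound the search space so the sketch stays fast.
                if len(t) <= 64 and t not in seen:
                    seen.add(t)
                    queue.append(t)
        steps += 1
    return False

# Rules of the miniature "theory" (both invented for illustration).
def rule_double(s):          # x -> xx
    yield s + s

def rule_swap(s):            # rewrite one occurrence of "ab" to "ba"
    for i in range(len(s) - 1):
        if s[i:i + 2] == "ab":
            yield s[:i] + "ba" + s[i + 2:]

rules = [rule_double, rule_swap]

print(derive("ab", rules, "ba"))  # a derivation exists (one swap)
print(derive("ab", rules, "aa"))  # none exists: letter counts are preserved
```

The host Python interpreter plays the role of the weaker system: it can execute the search and report its outcome, but "believing" the derived string is something only the miniature theory, not the interpreter, does.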



Bruno





 John K Clark




--
You received this message because you are subscribed to the Google  
Groups Everything List group.

To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com 
.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en 
.


http://iridia.ulb.ac.be/~marchal/






Re: Intelligence and consciousness

2012-02-05 Thread Evgenii Rudnyi

On 04.02.2012 21:05 meekerdb said the following:
 On 2/4/2012 9:09 AM, Evgenii Rudnyi wrote:

...

 As for computers having emotions, I am a practitioner and I am
 working right now closely with engineers. I should say that the
 modern market would love electronics with emotions. Just imagine
 such a slogan

 Smartphone with Emotions* (*scientifically proved)

 It would be a killer application.

 So if you miss a turn your driving direction app will get mad and
 scold you? If you use your calculator to find the square root of 121
 it will mock you for forgetting your 8th-grade mathematics? Emotions
 imply having values and being able to act on them -- why would you
 want your computer to have its own values and act on them? Don't you
 want it to just have the values you have, i.e. answer the questions
 you ask?

 Brent


You should talk with marketing guys. A quick search reveals that this is 
already a reality


Hercules Dualpix Emotion Webcam

There is also a term, "emotional AI", that is discussed by game makers.

Yet, I do not get your point in general. For example recently there was 
an article in Die Zeit


Die Roboter kommen (Robots are coming)
http://www.zeit.de/2012/04/T-Roboter

Among other things, they discuss the issue of a potential collision 
between a moving robot and a human being. To this end, there are 
experimental studies to determine how much pain, in which part of the 
body, a human being can sustain. The resulting experimental database 
will be used by engineers developing robots.


Pain is not exactly an emotion, but I guess it is not that far away. This 
shows that pain can be experimentally researched in people. Could it 
be experimentally researched in computers and robots?


On the other hand, engineers design computers and robots. Can science 
give engineers guidelines to control emotions in computers or robots? Or 
are emotions some kind of emergent phenomenon that will appear in 
computers and robots independently of, and uncontrollably by, engineers?


Finally, if you have already discovered emotions in computers, why 
then is it impossible to research how this effect emerged 
independently of the engineers?


Evgenii




Re: Intelligence and consciousness

2012-02-05 Thread Evgenii Rudnyi

Bruno,

I would agree that profit should be a tool. On the other hand, that is 
how it works: there are rules of the game, adjusted by the government 
accordingly, and then whatever is not forbidden is allowed. In 
such a setup, if a new idea allows us to increase profit, then it might 
be a good idea.


I would say that the statement "Emotions are ineffable" excludes them 
from scientific considerations. Then we should not mention emotions at 
all. This, however, leads to a practical problem. For a mass product, for 
example electronics, the emotions of customers are very important to 
get/keep/increase market share. Hence if you do not consider 
emotions at all, you do not get paid.


Evgenii


On 04.02.2012 22:42 Bruno Marchal said the following:

Hi Evgenyi,

On 04 Feb 2012, at 18:09, Evgenii Rudnyi wrote:


Also, if your theory is that we (in the 3-sense) are not Turing
emulable, you have to explain to us why, and what it adds to the
explanation.


Bruno,

I do not have a theory.


That's OK. Technically, me neither. I am a logician. All I
assert is that two (weak) theories, mechanism and materialism, are
incompatible. I don't hide that my heart invites my brain to listen
to what some rich universal machines can already prove, and guess by
themselves, about themselves.





As for comp, my only recent note was that if one looks at the
current state of the art of computer architectures and
algorithms, then it is clear that any practical implementation is
out of reach.


Well, OK. We disagree here. AUDA is the illustration that many simple
machines (basically any first-order specification of a universal
system, a machine or programming language, extended with the
corresponding induction axioms; those are the ones I call the Löbian
machines) are already as clever as you and me. By lacking our
layers of historical and prehistorical prejudices, they seem even
rather wiser, too (in my opinion). AUDA is the theology of the
self-introspecting LUM. It gives an octuple of hypostases (inside
views of arithmetic by locally arithmetical beings) which mirrors
rather well the discourse of the Platonists, neoplatonists and
mystics in all cultures (as well argued by Aldous Huxley, for
example).

Your laptop is one inch from Löbianity, but why would you want
humans to make Löbian machines when introspection is not even in
the human curriculum? I begin to think that each time a human becomes
Löbian, he gets banned, exiled, burned, imprisoned, ignored, sent to
an asylum, or perhaps becomes a big artist, musician or something.







Whether comp is true or false in principle, frankly speaking I have
no idea.


Me neither. Practically, mechanism is more a right (to say yes or no
to the doctor).



I guess that my subconsciousness still believes in primitive
materialism, as consciously I experience the question "Why is it bad
to say that math is mind-dependent?"


Human math is human mind dependent. This does not imply that math, or
a metaphysically clean part of math (like arithmetic or computer
science) might not be the cause/reason of the stable persistent
beliefs in a physical reality. The physical reality would be a
projective view of arithmetic from inside.




Yet, I should confess that after following discussions at this list
I see some problems with such a statement and pass doubts back to
my subconsciousness. Let us see what happens.

I still listen to the lectures of Prof Hoenen. Recently I
finished Theorien der Wahrheit and right now I am at
Beweistheorien. When I am done with Prof Hoenen, as promised I will
go through your The Origin of Physical Laws and Sensations. Yet, I
do not know when that will happen, as it takes more time than I
thought originally.

As for computers having emotions, I am a practitioner and I am
working right now closely with engineers. I should say that the
modern market would love electronics with emotions. Just imagine
such a slogan

Smartphone with Emotions* (*scientifically proved)


This will never happen. Never. More exactly, if this happens, it
means you are in front of a con artist or a crackpot. Emotions are
ineffable, although a range of the corresponding behavior is easy to
simulate. To have genuine emotion, you need to be entangled with
genuinely complex, long computations. But their outputs are easy to
simulate. A friend of mine made a piece of theater with a little
robot-dog emulating emotions, and the public reacted correspondingly.
In comp there are no philosophical zombies, but plenty of local
zombies are possible, like the cartoon cops on the roads which make
their effect, or that emotive robot-dog. But an emotion, by its very
nature, cannot be scientifically proved. All that happens is that a
person succeeds in being recognized as such by other persons.
Computers might already be conscious. It might be our lack of
civility which prevents us from listening to them.

But you were probably joking with the "*scientifically proved".
Concerning reality, science never proves. It only 

Re: Intelligence and consciousness

2012-02-05 Thread Craig Weinberg
On Feb 5, 11:55 am, John Clark johnkcl...@gmail.com wrote:
 On Sat, Feb 4, 2012 at Craig Weinberg whatsons...@gmail.com wrote:

  You don't understand Searle's thought experiment.

 I understand it one hell of a lot better than Searle did, but that's not
 really much of a boast.

  The whole point is to reveal the absurdity of taking understanding for
  granted in data manipulation processes.

 And Searle takes it for granted that if the little man doing a trivial task
 does not understand Chinese then Chinese is not understood, and that
 assumption simply is not bright.

No, I can see clearly that Searle is correct. You are applying a
figurative sense of understanding when a literal sense is required.
The only understanding of Chinese going on is by those Chinese
speakers outside the room, who are carrying on a one-sided conversation
with a rule book. To say that Chinese is understood by the Chinese
Room system is to say that the entire universe understands Chinese.


  None of the descriptions of the argument I find online make any mention
  of infinite books, paper, or ink.

 Just how big do you think a book would need to be to contain every possible
 question and every possible answer to those questions?

It doesn't need to be able to answer every possible question, it only
needs to approximate a typical conversational capacity. It can ask
'what do you mean by that?'
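The rule book in this exchange can be sketched as a literal lookup table. The miniature phrasebook below is hypothetical, standing in for Searle's rules; it only illustrates how a pure lookup can appear conversational from outside (including the "what do you mean?" fallback) while nothing inside models meaning:

```python
# A crude "Chinese room" lookup: input strings map to canned replies,
# with a stock clarification request for anything not in the book.
rulebook = {
    "你好": "你好！",            # "Hello" -> "Hello!"
    "你会说中文吗？": "当然会。",  # "Do you speak Chinese?" -> "Of course."
}

def room(message):
    # The operator follows the book mechanically; unknown input gets
    # the generic "What do you mean?" dodge.
    return rulebook.get(message, "你是什么意思？")

print(room("你好"))          # looks fluent from outside
print(room("天气怎么样？"))   # falls back to asking for clarification
```

Whether such a table, scaled up enormously, would ever amount to understanding is exactly what the thread is arguing about; the sketch only shows the mechanism being discussed.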


  All I find is a clear and simple experiment:

 Yes simple, as in stupid.

It seems like you take its contradiction of your position personally.
I assume that you don't mean that literally, though, right? You don't
think that the thought experiment has a low I.Q., right? Thinking that
would be entirely consistent with what you are saying, though.


  The fact that he can use the book to make the people outside think they
  are carrying on a conversation with them in Chinese reveals that it is only
  necessary for the man to be trained to use the book, not to understand
  Chinese or communication in general.

 OK fine, the man does not understand Chinese, so what? How does that prove
 that understanding was not involved in the room/outside-people
 conversation?

Because there is nobody on the inside end of the conversation.

 You maintain that only humans can have understanding while I
 maintain that other things can have it too.

No, I don't limit understanding to humans, I just limit human-quality
understanding to humans. Not that it's the highest-quality
understanding, but it is the only human understanding.

To determine which of us is
 correct Searle sets up a cosmically impractical and complex thought
 experiment in which a human is a trivial part. Searle says that if
 understanding exists anywhere it must be concentrated in the human and
 nowhere else, but the little man does not understand Chinese so Searle
 concludes that understanding is not involved. What makes Searle such an
 idiot is that determining if humans are the only thing that can have
 understanding or not is the entire point of the thought experiment; he's
 assuming the very thing he's trying to prove. If Siri or Watson had behaved
 as stupidly as Searle did, their programmers would hang their heads in shame!

I'm not sure if Searle maintains that understanding is forever limited
to humans, but I suspect the use of the man in the room is a device to
force people to identify personally with (what would normally be) the
computer. This way he makes you confront the reality that looking up
reactions in a rule book is not the same thing as reacting
authentically and generating responses personally.


  Makes sense to me.

 I know, that's the problem.

No, because I understand why the way you are looking at it misses the
point, and I understand that you aren't willing to entertain my way of
looking at it.


  We know for a fact that human consciousness is associated with human
  brains

 That should be with a human brain not with human brains; you only know
 for a fact that one human brain is conscious, your own.

Again, there are literal and figurative senses. In the most literal
sense of 'you only know for a fact', yes I only have first hand
knowledge of consciousness. Because the nature of sense is to fill the
gaps, connect the dots, solve the puzzle, etc, we are able to
generalize figuratively. We are not limited to solipsism or formal
proofs that other people are conscious, we have a multi-contextual
human commonality. We share many common senses and can create new
senses through the existing sense channels we share. Knowing whether a
person is conscious or not therefore, is only an issue under very
unusual conditions.


  but we do not have much reason to suspect the rooms can become conscious

 Because up to now rooms have not behaved very intelligently, but the room
 containing the Watson supercomputer is getting close.

Close to letting us use it to fool ourselves is all. It's still only a
room with a large, fast rulebook.


  Organization of the brain does not make 

Re: Intelligence and consciousness

2012-02-04 Thread Evgenii Rudnyi

On 04.02.2012 01:10 meekerdb said the following:

On 2/3/2012 1:50 PM, Evgenii Rudnyi wrote:

On 03.02.2012 22:07 meekerdb said the following:

On 2/3/2012 12:23 PM, Evgenii Rudnyi wrote:

On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdbmeeke...@verizon.net
wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind
of you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that
within the context of chess the machine acts just like a
person who had those emotions. So it had at least the
functional equivalent of those emotions. Whereas your
opinion is simple prejudice.

I agree my opinion would be simple prejudice had we not
already been over this issue a dozen times. My view is that
the whole idea that there can be a 'functional equivalent
of emotions' is completely unsupported. I give examples of
puppets, movies, trashcans that say THANK YOU,
voicemail...all of these things demonstrate that there need
not be any connection at all between function and interior
experience.


Except that in every case there is an emotion in your
examples...it's just the emotion of the puppeter, the
screenwriter, the trashcan painter. But in the case of the
chess playing computer, there is no person providing the
'emotion' because the 'emotion' depends on complex and
unforeseeable events. Hence it is appropriate to attribute
the 'emotion' to the computer/program.

Brent


Craig's position that computers in the present form do not
have emotions is not unique, as emotions belong to
consciousness. A quote from my favorite book

Jeffrey A. Gray, Consciousness: Creeping up on the Hard
Problem.

The last sentence from the chapter 10.2 Conscious computers?

p. 128 Our further discussion here, however, will take it as
established that this can never happen.

Now the last paragraph from the chapter 10.3 Conscious
robots?

p. 130. So, while we may grant robots the power to form
meaningful categorical representations at a level reached by
the unconscious brain and by the behaviour controlled by the
unconscious brain, we should remain doubtful whether they are
likely to experience conscious percepts. This conclusion should
not, however, be over-interpreted. It does not necessarily
imply that human beings will never be able to build artefacts
with conscious experience. That will depend on how the trick of
consciousness is done. If and when we know the trick, it may be
possible to duplicate it. But the mere provision of behavioural
dispositions is unlikely to be up to the mark.

If we say that computers right now have emotions, then we must
be able to exactly define the difference between unconscious and
conscious experience in the computer (for example in the computer
that won against Kasparov). Can you do it?


Can you do it for people? For yourself? No. Experiments show
that people confuse the source of their own emotions. So your
requirement that we be able to exactly define is just something
you've invented.

Brent


I believe that there is at least a small difference. Presumably we
know everything about the computer that has played chess. Then it
seems that a hypothesis about emotions in that computer could be
verified without a problem - hence my notion of "exactly define".
On the other hand, consciousness remains a hard problem, and
here "exactly define" does not work.

However, the latter does not mean that consciousness does not exist
as a phenomenon. Let us take, for example, life. I would say that
there is no good definition of what life is ("exactly define" does
not work), yet this does not prevent science from researching it. The
same should hold for conscious experience.

Evgenii


So you've reversed your theory? If computers have emotions like
people we must *not* be able to exactly define them. And if we can
exactly define them, that must prove they are not like people?


No, I do not. My point was that we can check the statement "a computer 
has emotions" exactly. Then it would be possible to check whether such a 
definition applies to people. I have nothing against such a path: 
make a hypothesis about what emotion in a computer is, research it, and 
then try to apply this concept to people.


Yet, if we go in the other direction, from people to computers, then 
first we should research what emotion in a human being is. Here is the 
difference with the computer: for people we cannot right now make a 
strict definition. We can, though, still research emotions in people.



Actually, if we've made an intelligent chess playing computer, one
that learns from experience, we probably don't know everything about
it. We might be able to find out - but only in the sense that in
principle we could find out all the neural connections and functions
in a human brain. It's probably easier and more certain 

Re: Intelligence and consciousness

2012-02-04 Thread Bruno Marchal


On 03 Feb 2012, at 21:23, Evgenii Rudnyi wrote:


On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of you to
inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within the
context of chess the machine acts just like a person who had
those emotions. So it had at least the functional equivalent of
those emotions. Whereas your opinion is simple prejudice.

I agree my opinion would be simple prejudice had we not already
been over this issue a dozen times. My view is that the whole idea
that there can be a 'functional equivalent of emotions' is
completely unsupported. I give examples of puppets, movies,
trashcans that say THANK YOU, voicemail...all of these things
demonstrate that there need not be any connection at all between
function and interior experience.


Except that in every case there is an emotion in your examples...it's
just the emotion of the puppeter, the screenwriter, the trashcan
painter. But in the case of the chess playing computer, there is no
person providing the 'emotion' because the 'emotion' depends on
complex and unforeseeable events. Hence it is appropriate to
attribute the 'emotion' to the computer/program.

Brent


Craig's position that computers in the present form do not have  
emotions is not unique, as emotions belong to consciousness. A quote  
from my favorite book


Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter 10.2 Conscious computers?

p. 128 Our further discussion here, however, will take it as  
established that this can never happen.


Now the last paragraph from the chapter 10.3 Conscious robots?

p. 130. So, while we may grant robots the power to form meaningful  
categorical representations at a level reached by the unconscious  
brain and by the behaviour controlled by the unconscious brain, we  
should remain doubtful whether they are likely to experience  
conscious percepts. This conclusion should not, however, be over- 
interpreted. It does not necessarily imply that human beings will  
never be able to build artefacts with conscious experience. That  
will depend on how the trick of consciousness is done. If and when  
we know the trick, it may be possible to duplicate it. But the mere  
provision of behavioural dispositions is unlikely to be up to the  
mark.


If we say that computers right now have emotions, then we must be  
able to exactly define the difference between unconscious and conscious  
experience in the computer (for example in the computer that won  
against Kasparov). Can you do it?


Yes. That is the point of AUDA. We can do it in the theoretical  
framework, once we accept some (axiomatic) theory of knowledge.
Also, if your theory is that we (in the 3-sense) are not Turing  
emulable, you have to explain to us why, and what it adds to the  
explanation.
With comp, the trick of both consciousness and matter is not entirely  
computable. You have to resist a reductionist conception of numbers  
and machines.


No computer has emotions "right now"; they have, *always*, "right  
now" emotions. With comp, the mind-body link is a bit tricky. Real  
consciousness is better seen as associated with an infinity of  
computations instead of one, as we are programmed to do by years of  
local evolution.





Hence I personally find this particular position of Craig's to be supported.


You might be missing the discovery of the universal machine and its  
self-reference logic.


Clark is right on this: emotions are easy, despite being able to run  
very deep, and to govern us. Easy, but not so easy: you need the  
sensible-matter, non-communicable hypostases.


The emotions of your laptop are unknown, and unmanifested, because  
your laptop has no deep persistent self-reference ability to share  
with you. We want a slave, and would be anxious in front of a machine  
taking too much independence.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Intelligence and consciousness

2012-02-04 Thread Bruno Marchal


On 03 Feb 2012, at 23:58, Craig Weinberg wrote:


 Consciousness and mechanism are
mutually exclusive by definition and always will be.


I think you confuse mechanism before and after Gödel, and you miss the  
1-indeterminacy. You confuse Turing emulable and Turing recoverable  
(by 1-indeterminacy on a non-computable domain).


Bruno



http://iridia.ulb.ac.be/~marchal/






Re: Intelligence and consciousness

2012-02-04 Thread Craig Weinberg
On Feb 4, 2:23 am, John Clark johnkcl...@gmail.com wrote:
 On Fri, Feb 3, 2012  Craig Weinberg whatsons...@gmail.com wrote:

  Huge abacuses are a really good way to look at this, although it's pretty
  much the same as the China Brain.

 I hope you're not talking about Searle's Chinese room, the stupidest
 thought experiment in history.

I don't see what is stupid about that thought experiment. Please
explain exactly what you mean.


   Your position is [...] The abacus could literally be made to think

 Yes.

  Do you see why I am incredulous about this?

 No.

    I am crystal clear in my own understanding that no matter how good the
  program seems, Siri 5000 will feel exactly the same thing as Siri. Nothing.

 I accept that you are absolutely positively 100% certain of the above, but
 I do NOT accept that you are correct. I'm not a religious man so I don't
 believe in divine revelation and that's the only way you could know what it
 feels like to be Siri, hell you don't even know what it feels like to be me
 and we are of the same species (I presume); all you can do is observe how
 Siri and I behave and try to make conclusions about our inner life or like
 of same from that.

That isn't 'all I can do'. I don't need to do anything special to
understand exactly what Siri is and exactly why it feels nothing any
more than I understand why an audioanimatronic pirate at Disneyland
feels nothing.


  As I continue to try to explain, awareness is not a function of objects,
  it is the symmetrically anomalous counterpart of objects.

 Bafflegab.

Translation: 'I don't understand and I don't care, but it makes me
feel superior to dismiss your position arbitrarily.'


  Experiences accumulate semantic charge

 Semantic charge? Poetic crapola of that sort may impress some but not me.
 If you can't express your ideas clearer than that they are not worth
 expressing at all.

See above.


   The beads will never learn anything. They are only beads.

 Computers can and do learn things

No. Computers have never learned anything. We learn things using
computers. Computers store, retrieve, and process data. Nothing more.
The human mind does much more. It feels, sees, knows, believes,
understands, wants, tries, opposes, speculates, creates, imagines,
etc.

 and a Turing Machine can simulate any
 computer and you can make a Turing Machine from beads. I won't insult your
 intelligence by spelling out the obvious conclusion from that fact.

Why not? You insult my intelligence in most of your other replies to
me.
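The claim that a Turing machine can be built from beads rests only on the machine being a finite transition table over discrete states, and a minimal simulator makes that explicit. The binary-increment machine below is an illustrative choice of mine, not an example from the thread:

```python
# A minimal Turing machine simulator: nothing but discrete cells
# (bead-like) and a lookup table of transitions.
def run_tm(table, tape, state="start", head=0, max_steps=1000):
    """Run a Turing machine given as a transition table.

    table maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right); "_" is the blank."""
    cells = dict(enumerate(tape))         # sparse tape
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = cells.get(head, "_")
        new_sym, move, state = table[(state, sym)]
        cells[head] = new_sym
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Increment a binary number: walk right to the end, then carry 1s to 0s
# while moving left, and finally walk back and halt.
inc = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", -1, "done"),
    ("carry", "_"): ("1", -1, "done"),
    ("done",  "0"): ("0", -1, "done"),
    ("done",  "1"): ("1", -1, "done"),
    ("done",  "_"): ("_", +1, "halt"),
}

print(run_tm(inc, "1011"))   # 1011 + 1 = 1100
```

Whether running such a table on beads would ever *feel* anything is exactly the point in dispute here; the sketch only pins down what "a Turing machine made from beads" mechanically means.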


  Machines are made of unconsciousness.

  Machines are made of atoms just like you and me.

And atoms are unconscious, are they not?


  All machines are unconscious. That is how we can control them.

 A argument that is already very weak and will become DRAMATICALLY weaker in
 the future.

Promissory mechanism is religious faith to me.

 In the long run there is no way we can control computers, they
 are our slave right now but that circumstance will not continue.

So you say. I say we are already the slaves of computers now. It's
called economics.


  That would not be necessary if the machine had any capacity to learn.

 I don't know what you're talking about, machines have been able to learn
 for decades.

If I fill a file cabinet with files, do you say that the cabinet has
learned something?


  at a fundamental level no human being could write a computer program
  like Siri and nobody knows how it works.

 I wouldn't say we don't know how it works. Binary logic is pretty

  straightforward.

 One binary logic operation is pretty straightforward but 20,000 trillion
 of them every second is not, and that's what today's supercomputers can
 do, and they are doubling in power every 18 months.

It's more numerous but no less straightforward. You could stop the
program at any given point and understand every thread of every
process. It's big and it's fast, sure, but it's still not mysterious.


  That's the theory. Meanwhile, in reality, we are using the same basic
  interface for computers since 1995.

 What are you talking about? Siri is a computer interface as is Google and
 even supercomputers didn't have them or anything close to it in 1995.

There were web search engines before Google. They weren't quite as
good but there has been no improvement in searches since then. Siri is
a new branding and improved implementation of voice recognition that
we have had in other devices for a while. It's progress, but hardly
Einstein, Edison, Tesla, or Wright brothers progress.


   people in a vegetative state do sometimes have an inner life despite
  their behavior.

  In the course of our conversations you have made declarative statements
  like the above dozens if not hundreds of times but you never seriously ask
  yourself HOW DO I KNOW THIS?

  There is a lot of anecdotal evidence. People come out of comas.



 So people come out of comas and you observe that they make certain sounds
 with their mouth then you 

Re: Intelligence and consciousness

2012-02-04 Thread Craig Weinberg
On Feb 4, 7:29 am, Bruno Marchal marc...@ulb.ac.be wrote:
 On 03 Feb 2012, at 23:58, Craig Weinberg wrote:

   Consciousness and mechanism are
  mutually exclusive by definition and always will be.

 I think you confuse mechanism before and after Gödel, and you miss the
 1-indeterminacy.

I don't think that I do. I only miss the link between the validity of
the mathematical form of 1-indeterminacy and the logical necessity of
the content: the qualitative experience associated with it.

You confuse Turing emulable, and Turing recoverable
 (by 1-indeterminacy on non computable domain).

That's probably true since I can't find any definition for Turing
recoverable online. Are you saying that consciousness is not Turing
emulable but merely Turing recoverable (which I am imagining is about
addressing non-comp records to play or record)?

Craig

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-04 Thread meekerdb

On 2/4/2012 1:17 AM, Bruno Marchal wrote:
The emotion of your laptop is unknown, and unmanifested, because your laptop has no deep 
persistent self-reference ability to share with you.  We want a slave, and would be 
anxious in front of a machine taking too much independence.


Bruno 


Yes, that's exactly why John McCarthy wrote that we should not provide AI programs with 
self-reflection and emotions, because it would create ethical problems in using them.


Brent




Re: Intelligence and consciousness

2012-02-04 Thread Evgenii Rudnyi

 Also,
 if your theory is that we (in the 3-sense) are not Turing emulable,
 you have to explain us why, and what it adds to the explanation.

Bruno,

I do not have a theory.

As for comp, my only recent note was that, looking at the current 
state of the art of computer architectures and algorithms, it is clear 
that any practical implementation is out of reach.


Whether comp is true or false in principle, frankly speaking I have no 
idea. I guess that my subconsciousness still believes in primitive 
materialism, as consciously I experience the question of why it is bad to say 
that math is mind dependent. Yet, I should confess that after following 
discussions at this list I see some problems with such a statement and 
pass doubts back to my subconsciousness. Let us see what happens.


I still listen to the lectures of Prof Hoenen. Recently I have finished 
Theorien der Wahrheit and right now I am at Beweistheorien. When I am 
done with Prof Hoenen, as promised I will go through your The Origin of 
Physical Laws and Sensations. Yet, I do not know when that will happen, 
as it takes more time than I thought originally.


As for computers having emotions, I am a practitioner and I am working 
right now closely with engineers. I should say that the modern market 
would love electronics with emotions. Just imagine such a slogan


Smartphone with Emotions* (*scientifically proved)

It would be a killer application. Hence I do not understand why people 
here who state that a computer already has emotions do not explore such a 
wonderful opportunity. After all, whether it is comp, physicalism, 
monism, dualism or whatever does not matter. What is really important is 
to make profit.


Evgenii

On 04.02.2012 10:17 Bruno Marchal said the following:


On 03 Feb 2012, at 21:23, Evgenii Rudnyi wrote:


On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of
you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within
the context of chess the machine acts just like a person who
had those emotions. So it had at least the functional
equivalent of those emotions. Whereas your opinion is simple
prejudice.

I agree my opinion would be simple prejudice had we not
already been over this issue a dozen times. My view is that the
whole idea that there can be a 'functional equivalent of
emotions' is completely unsupported. I give examples of
puppets, movies, trashcans that say THANK YOU, voicemail...all
of these things demonstrate that there need not be any
connection at all between function and interior experience.


Except that in every case there is an emotion in your
examples...it's just the emotion of the puppeteer, the
screenwriter, the trashcan painter. But in the case of the chess
playing computer, there is no person providing the 'emotion'
because the 'emotion' depends on complex and unforeseeable
events. Hence it is appropriate to attribute the 'emotion' to the
computer/program.

Brent


Craig's position that computers in the present form do not have
emotions is not unique, as emotions belong to consciousness. A
quote from my favorite book

Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter 10.2 Conscious computers?

p. 128 Our further discussion here, however, will take it as
established that this can never happen.

Now the last paragraph from the chapter 10.3 Conscious robots?

p. 130. So, while we may grant robots the power to form meaningful
 categorical representations at a level reached by the unconscious
 brain and by the behaviour controlled by the unconscious brain, we
 should remain doubtful whether they are likely to experience
conscious percepts. This conclusion should not, however, be
over-interpreted. It does not necessarily imply that human beings
will never be able to build artefacts with conscious experience.
That will depend on how the trick of consciousness is done. If and
when we know the trick, it may be possible to duplicate it. But the
mere provision of behavioural dispositions is unlikely to be up to
the mark.

If we say that computers right now have emotions, then we must be
able to exactly define the difference between unconscious and
conscious experience in the computer (for example in that computer
that has won Kasparov). Can you do it?


Yes. It is the point of AUDA. We can do it in the theoretical
framework, once we accept some theory (axiomatic) of knowledge. Also,
if your theory is that we (in the 3-sense) are not Turing emulable,
you have to explain to us why, and what it adds to the explanation. With
comp, the trick of both consciousness and matter is not entirely
computable. You have to resist 

Re: Intelligence and consciousness

2012-02-04 Thread John Clark
On Sat, Feb 4, 2012  Craig Weinberg whatsons...@gmail.com wrote:

 I hope you're not talking about Searle's Chinese room, the stupidest
 thought experiment in history.


  I don't see what is stupid about that thought experiment.


And that tells us a great deal about you.


  Please explain exactly what you mean.


You already know about Searle's room, now I want to tell you about Clark's
Chinese Room. You are a professor of Chinese Literature and are in a room
with me and the great Chinese Philosopher and Poet Laotse. Laotse writes
something in his native language on a paper and hands it to me. I walk 10
feet and give it to you. You read the paper and are impressed with the
wisdom of the message and the beauty of its language. Now I tell you that I
don't know a word of Chinese, can you find any deep implications from that
fact? I believe Clark's Chinese Room is just as profound as Searle's
Chinese Room. Not very.

All Searle did was come up with a wildly impractical model (the Chinese
Room) of an intelligence in which a human being happens to play a trivial
part. Consider what's in Searle's model:

1) An incredible book, larger than the observable universe even if the
writing was microfilm sized.

2) An equally large or larger book of blank paper.

3) A pen, several trillion galaxies of ink, and oh yes I almost forgot,
your little man.

Searle claims to have proven something profound when he shows that a
trivial part does not have all the properties that the whole system does.
In his example the man could be replaced with a simple machine made with a
few vacuum tubes or even mechanical relays, and it would do a better job.
It's like saying the synaptic transmitter dopamine does not understand how
to solve differential equations, dopamine is a small part of the human
brain thus the human brain does not understand how to solve differential
equations.

Yes, it does seem strange that consciousness is somehow hanging around the
room as a whole, even if slowed down by a factor of a billion trillion or
so, but no stranger than the fact that consciousness is hanging around 3
pounds of gray goo in our head, and yet we know that it does. It's time to
just face the fact that consciousness is what matter does when it is
organized in certain complex ways.


  I understand why an audioanimatronic pirate at Disneyland feels nothing.


That is incorrect, you don't understand. I agree it probably feels nothing
but unlike you I can logically explain exactly why I have that opinion and
I don't need semantic batteries or flux capacitors or dilithium crystals
or any other new age bilge to do so.


  No. Computers have never learned anything.


I could give examples dating back to the 1950's that computers can indeed
learn but there would be no point in me doing so, you would say it didn't
really learn, it just behaved like it learned; and Einstein wasn't
really smart, he just behaved like he was smart; and the guy who filed a
complaint with the police that somebody stole his cocaine was not really
stupid, he just behaved stupidly.

   Machines are made of atoms just like you and me.


  And atoms are unconscious, are they not?


Atoms don't behave intelligently so my very very strong hunch is that they
are not conscious, but there is no way I can know for certain. On the other
hand you are even more positive than I about this matter and as always
happens whenever somebody is absolutely positively 100% certain about
anything they can almost never produce any logical reason for their belief.
There seems to be an inverse relationship: the stronger the belief the
weaker the evidence.

  One binary logic operation is pretty straightforward but 20,000
 trillion of them every second is not, and that's what today's
 supercomputers can do, and they are doubling in power every 18 months.


  You could stop the program at any given point and understand every
 thread of every process.


Yes, you can understand ANY thread but you cannot understand EVERY thread.
And that 20,000 trillion a second figure that I used was really a big
understatement, it's the number of floating point operations (FLOPS) not
the far simpler binary operations. A typical man on the street might take
the better part of one minute to do one flop with pencil and paper, and
today's  supercomputers can do 20,000 million million a second and they
double in power every 18 months.
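
Those figures can be checked with a quick back-of-envelope calculation (the numbers are the thread's own rough estimates, not measurements; the helper function and variable names are mine):

```python
# Back-of-envelope check: 20,000 million million FLOPS for a
# supercomputer vs. one pencil-and-paper flop per minute for a person,
# with machine capacity assumed to double every 18 months.
machine_flops = 20_000e12   # 2e16 floating-point operations per second
human_flops = 1 / 60        # one flop per minute by hand

ratio = machine_flops / human_flops
print(f"machine/human speed ratio: {ratio:.1e}")  # 1.2e+18

def flops_after(years, base=machine_flops, doubling_years=1.5):
    """Projected FLOPS after `years` of doubling every 18 months."""
    return base * 2 ** (years / doubling_years)

# Ten doublings in 15 years: a 1024-fold increase.
print(f"after 15 years: {flops_after(15):.1e}")  # 2.0e+19
```

So on these assumptions the machine is roughly a billion billion times faster than a person with pencil and paper, and a thousand times faster again within fifteen years.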

 They come out of comas and communicate with other human beings


You think the noises coming out of ex-coma patient's mouths give us
profound insight into their inner life, but noises produced by Siri tell us
absolutely nothing, why the difference? Because Siri is not squishy and
does not smell bad. I don't think your philosophy is one bit more
sophisticated than that.

  John K Clark


Re: Intelligence and consciousness

2012-02-04 Thread meekerdb

On 2/4/2012 9:09 AM, Evgenii Rudnyi wrote:

 Also,
 if your theory is that we (in the 3-sense) are not Turing emulable,
 you have to explain us why, and what it adds to the explanation.

Bruno,

I do not have a theory.

As for comp, my only recent note was that, looking at the current 
state of the art of computer architectures and algorithms, it is clear that any 
practical implementation is out of reach.


Whether comp is true or false in principle, frankly speaking I have no idea. I guess 
that my subconsciousness still believes in primitive materialism, as consciously I 
experience the question of why it is bad to say that math is mind dependent. Yet, I should 
confess that after following discussions at this list I see some problems with such a 
statement and pass doubts back to my subconsciousness. Let us see what happens.


I still listen to the lectures of Prof Hoenen. Recently I have finished Theorien der 
Wahrheit and right now I am at Beweistheorien. When I am done with Prof Hoenen, as 
promised I will go through your The Origin of Physical Laws and Sensations. Yet, I do 
not know when that will happen, as it takes more time than I thought originally.


As for computers having emotions, I am a practitioner and I am working right now closely 
with engineers. I should say that the modern market would love electronics with 
emotions. Just imagine such a slogan


Smartphone with Emotions* (*scientifically proved)

It would be a killer application. 


So if you miss a turn your driving direction app will get mad and scold you?  If you use 
your calculator to find the square root of 121 it will mock you for forgetting your 8th 
grade mathematics? Emotions imply having values and being able to act on them -- why would 
you want your computer to have its own values and act on them?  Don't you want it to just 
have the values you have, i.e. answer the questions you ask?


Brent


Hence I do not understand why people here who state that a computer already has emotions 
do not explore such a wonderful opportunity. After all, whether it is comp, physicalism, 
monism, dualism or whatever does not matter. What is really important is to make profit.


Evgenii

On 04.02.2012 10:17 Bruno Marchal said the following:


On 03 Feb 2012, at 21:23, Evgenii Rudnyi wrote:


On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of
you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within
the context of chess the machine acts just like a person who
had those emotions. So it had at least the functional
equivalent of those emotions. Whereas your opinion is simple
prejudice.

I agree my opinion would be simple prejudice had we not
already been over this issue a dozen times. My view is that the
whole idea that there can be a 'functional equivalent of
emotions' is completely unsupported. I give examples of
puppets, movies, trashcans that say THANK YOU, voicemail...all
of these things demonstrate that there need not be any
connection at all between function and interior experience.


Except that in every case there is an emotion in your
examples...it's just the emotion of the puppeteer, the
screenwriter, the trashcan painter. But in the case of the chess
playing computer, there is no person providing the 'emotion'
because the 'emotion' depends on complex and unforeseeable
events. Hence it is appropriate to attribute the 'emotion' to the
computer/program.

Brent


Craig's position that computers in the present form do not have
emotions is not unique, as emotions belong to consciousness. A
quote from my favorite book

Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter 10.2 Conscious computers?

p. 128 Our further discussion here, however, will take it as
established that this can never happen.

Now the last paragraph from the chapter 10.3 Conscious robots?

p. 130. So, while we may grant robots the power to form meaningful
 categorical representations at a level reached by the unconscious
 brain and by the behaviour controlled by the unconscious brain, we
 should remain doubtful whether they are likely to experience
conscious percepts. This conclusion should not, however, be
over-interpreted. It does not necessarily imply that human beings
will never be able to build artefacts with conscious experience.
That will depend on how the trick of consciousness is done. If and
when we know the trick, it may be possible to duplicate it. But the
mere provision of behavioural dispositions is unlikely to be up to
the mark.

If we say that computers right now have emotions, then we must be
able to exactly define the difference between unconscious 

Re: Intelligence and consciousness

2012-02-04 Thread Bruno Marchal

Hi Evgenii,

On 04 Feb 2012, at 18:09, Evgenii Rudnyi wrote:


 Also,
 if your theory is that we (in the 3-sense) are not Turing emulable,
 you have to explain us why, and what it adds to the explanation.

Bruno,

I do not have a theory.


That's OK. Technically, me neither. I am a logician. All I assert
is that two (weak) theories, mechanism and materialism, are incompatible.
I don't hide that my heart invites my brain to listen to what some
rich universal machines can already prove, and guess by themselves,
about themselves.

As for comp, my only note that I have made recently was that if to  
look at the current state-of-art of computer architectures and  
algorithms, then it is clear that any practical implementation is  
out of reach.


Well, OK. We disagree here. AUDA is the illustration that many simple
machines, basically any first-order specification of a universal system
(machine, programming language) extended with the corresponding
induction axioms, are already as clever as you and me. Those are the
ones I call the Löbian machines. By lacking our layers of
historical and prehistorical prejudices, they seem even rather wiser,
too. (In my opinion.)
AUDA is the theology of the self-introspecting LUM. It gives an
octuple of hypostases (inside views of arithmetic by locally
arithmetical beings) which mirrors rather well the discourse of the
Platonists, neoplatonists and mystics in all cultures (as well argued
by Aldous Huxley, for example).


Your laptop is one inch from Löbianity, but why would you want
humans to make Löbian machines when introspection is not even in the
human curriculum? I begin to think that each time a human becomes
Löbian, he gets banned, exiled, burned, imprisoned, ignored, sent to an
asylum, or perhaps becomes a big artist, musician or something.

Whether comp is true or false in principle, frankly speaking I have
no idea.


Me neither. Practically, mechanism is more a right (to say yes or no  
to the doctor).



I guess that my subconsciousness still believes in primitive
materialism, as consciously I experience the question of why it is bad to
say that math is mind dependent.


Human math is human mind dependent. This does not imply that math, or  
a metaphysically clean part of math (like arithmetic or computer  
science) might not be the cause/reason of the stable persistent  
beliefs in a physical reality.
The physical reality would be a projective view of arithmetic from  
inside.

Yet, I should confess that after following discussions at this list  
I see some problems with such a statement and pass doubts back to my  
subconsciousness. Let us see what happens.


I still listen to the lectures of Prof Hoenen. Recently I have  
finished Theorien der Wahrheit and right now I am at Beweistheorien.  
When I am done with Prof Hoenen, as promised I will go through your  
The Origin of Physical Laws and Sensations. Yet, I do not know when
that will happen, as it takes more time than I thought originally.


As for computers having emotions, I am a practitioner and I am  
working right now closely with engineers. I should say that the  
modern market would love electronics with emotions. Just imagine  
such a slogan


Smartphone with Emotions* (*scientifically proved)


This will never happen. Never.
More exactly, if this happens, it means you are in front of a con or a
crackpot. Emotions are ineffable, although a range of the corresponding
behavior is easy to simulate. To have genuine emotion, you need to be
entangled with genuinely complex long computations. But their outputs are
easy to simulate.
A friend of mine made a piece of theater with a little robot-dog
emulating emotions, and the public reacted correspondingly. In comp
there are no philosophical zombies, but there are plenty of local
zombies possible, like cartoon cops on the roads which make their
effect, or that emotive robot-dog.
But an emotion, by its very nature, cannot be scientifically proved.
All that happens is that a person succeeds in being recognized as such
by other persons.
Computers might already be conscious. It might be our lack of civility
which prevents us from listening to them.


But you were probably joking with the scientifically proved.

Concerning reality, science never proves. It only suggests
interrogatively. If not, it is pseudo science or pseudo religion.

It would be a killer application. Hence I do not understand why  
people here that state a computer has already emotions do not  
explore such a wonderful opportunity. After all, whether it is comp,  
physicalism, monism, dualism or whatever does not matter. What is  
really important is to make profit.



Hmm... I am not sure. What is important is to be able to eat when
hungry, to drink when thirsty, and to have some amount of heat. What is
hoped for is the larger freedom spectrum for the exploratory
opportunities.


Profit might be a tool, hardly a goal by itself.

If 

Re: Intelligence and consciousness

2012-02-04 Thread Bruno Marchal


On 04 Feb 2012, at 17:38, meekerdb wrote:


On 2/4/2012 1:17 AM, Bruno Marchal wrote:


The emotion of your laptot is unknown, and unmanifested, because  
your laptop has no deep persistant self-reference ability to share  
with you.  We want a slave, and would be anxious in front of a  
machine taking too much independence.


Bruno


Yes, that's exactly why John McCarthy wrote that we should not  
provide AI programs with self-reflection and emotions, because it  
would create ethical problems in using them.


He is right. But making babies already does the same, and for
economical reasons we will often get some rewards from allowing some
degree of autonomy in machines. By the time hand-made machines'
descendants are as free as us and as entangled as us in the local
computational histories, we will already be machines ourselves. Things
will be different. The shadow of the measure problem solution
indicates that we might, in some sense, be already there. I'm not sure.



Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Intelligence and consciousness

2012-02-04 Thread Craig Weinberg
On Feb 4, 1:13 pm, John Clark johnkcl...@gmail.com wrote:
 On Sat, Feb 4, 2012  Craig Weinberg whatsons...@gmail.com wrote:

  I hope you're not talking about Searle's Chinese room, the stupidest
  thought experiment in history.

   I don't see what is stupid about that thought experiment.

 And that tells us a great deal about you.

   Please explain exactly what you mean.

 You already know about Searle's room, now I want to tell you about Clark's
 Chinese Room. You are a professor of Chinese Literature and are in a room
 with me and the great Chinese Philosopher and Poet Laotse. Laotse writes
 something in his native language on a paper and hands it to me. I walk 10
 feet and give it to you. You read the paper and are impressed with the
 wisdom of the message and the beauty of its language. Now I tell you that I
 don't know a word of Chinese, can you find any deep implications from that
 fact? I believe Clark's Chinese Room is just as profound as Searle's
 Chinese Room. Not very.

You don't understand Searle's thought experiment. The whole point is
to reveal the absurdity of taking understanding for granted in data
manipulation processes. Since you take it for granted from the
beginning, it seems stupid to you.


 All Searle did was come up with a wildly impractical model (the Chinese
 Room) of an intelligence in which a human being happens to play a trivial
 part. Consider what's in Searle's model:

 1) An incredible book, larger than the observable universe even if the
 writing was microfilm sized.

 2) An equally large or larger book of blank paper.

 3) A pen, several trillion galaxies of ink, and oh yes I almost forgot,
 your little man.

Is there an original document you are getting this from? None of the
descriptions of the argument I find online make any mention of
infinite books, paper, or ink. All I find is a clear and simple
experiment: A man sits in a locked room and receives notes in Chinese
through a slot. He has a rule book (size is irrelevant and not
mentioned) which contains instructions for what to do in response
to receiving these characters. The fact that he can use the book to
make the people outside think they are carrying on a conversation with
him in Chinese reveals that it is only necessary for the man to be
trained to use the book, not to understand Chinese or communication in
general.

Makes sense to me. The refutations I've read aren't persuasive. They
have to do with claiming that the room as a whole is intelligent or
that neurons cannot be intelligent either, etc.


 Searle claims to have proven something profound when he shows that a
 trivial part does not have all the properties that the whole system does.

The whole point is to show that in an AI system, the machine is the
trivial part of a system which invariably includes a human user to
understand anything.

 In his example the man could be replaced with a simple machine made with a
 few vacuum tubes or even mechanical relays, and it would do a better job.

No because he's trying to bring it to a human level so there is no
silly speculation about whether vacuum tubes can understand anything
or not.

 It's like saying the synaptic transmitter dopamine does not understand how
 to solve differential equations, dopamine is a small part of the human
 brain thus the human brain does not understand how to solve differential
 equations.

The human brain doesn't understand, any more than the baseball diamond
plays baseball. The diamond and the experiences shown through the
playing of the game are two aspects of a single whole.


 Yes, it does seem strange that consciousness is somehow hanging around the
 room as a whole, even if slowed down by a factor of a billion trillion or
 so, but no stranger than the fact that consciousness is hanging around 3
 pounds of gray goo in our head,

We know for a fact that human consciousness is associated with human
brains, but we do not have much reason to suspect the rooms can become
conscious (Amityville notwithstanding).

 and yet we know that it does. It's time to
 just face the fact that consciousness is what matter does when it is
 organized in certain complex ways.

Not at all. Organization of the brain does not make the difference
between being awake and being unconscious. Organization is certainly
important, but only if it arises organically. Organization imposed
from the outside doesn't cause that organization to become
internalized as awareness.


   I understand why an audioanimatronic pirate at Disneyland feels nothing.

 That is incorrect, you don't understand. I agree it probably feels nothing
 but unlike you I can logically explain exactly why I have that opinion and
 I don't need semantic batteries or flux capacitors or dilithium crystals
 or any other new age bilge to do so.

Yes, I've heard your logical explanation...because it doesn't behave
intelligently. It's circular reasoning. My understanding is not
predicated on a schema of literal rules. Yours isn't either 

Re: Intelligence and consciousness

2012-02-03 Thread Craig Weinberg
On Feb 2, 4:05 pm, John Clark johnkcl...@gmail.com wrote:
 On Thu, Feb 2, 2012 Craig Weinberg whatsons...@gmail.com wrote:

  My view is that the whole idea that there can be a 'functional equivalent
  of emotions' is completely unsupported. I give examples of puppets

 A puppet needs a puppeteer, a computer does not.

Yes it does. It needs a user at some point to make any sense at all of
what the computer is doing. An abacus is a computer. Left to its own
devices it's just a rectangle of wood and bamboo or whatever. Attach a
motor to it that does computations with it, and you still have a
rectangle of wood with a metal motor attached to it clicking out
meaningless non-patterns in the abacus with nothing to recognize the
patterns but the occasional ant getting injured by a rapidly sliding
bead.


  movies, trashcans that say THANK YOU, voicemail...all of these things
  demonstrate that there need not be any connection at all between function
  and interior experience.

 For all these examples to be effective you would need to know that they do
 not have an inner life, and how do you know they don't have an inner life?
 You know because they don't behave as if they have an inner life. Behavior
 is the only tool we have for detecting such things in others so your
 examples are useless.

No, because people in a vegetative state do sometimes have an inner
life despite their behavior. It is our similarity to and familiarity
with other humans that encourages us to give them the benefit of the
doubt. We go the extra mile to see if we can figure out if they are
still alive. Most of us don't care as much whether a steer is alive
when we are executing it for hamburger patties or a carrot feels
something when we rip it out of the ground. With that in mind, we
certainly don't owe a trashcan lid any such benefit of the doubt. Like
a computer, it is manufactured out of materials selected specifically
for their stable, uniform, inanimate properties. I understand what you
mean though, and yes, our perception of something's behavior is a
primary tool to how we think of it, but not the only one. More
important is the influence of conventional wisdom in a given society
or group. We like to eat beef so most of us rationalize it without
much thought despite the sentient behavior of steer. We like the idea
of AI so we project the possibility of feeling and understanding on it
- we go out of our way to prove that it is possible despite the
automatic and mechanical behavior of the instruments.

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-03 Thread Craig Weinberg
On Feb 2, 4:33 pm, John Clark johnkcl...@gmail.com wrote:
 On Thu, Feb 2, 2012  Craig Weinberg whatsons...@gmail.com wrote:

  Do you have any examples of an intelligent organism which evolved without
  emotion?

 Intelligence is not possible without emotion,

That's what I'm saying. Are you suddenly switching to my argument?

 but emotion is possible
 without intelligence.

Of course.

 And I was surprised you asked me for an example of an
 emotional organism with or without intelligence; how in the world could I
 do that?  You refuse to accept behavior as evidence for emotion or even for
 intelligence, so I can't  tell if anyone or anything is emotional or
 intelligent in the entire universe except for me.

But you can tell if something seems like it is.


   The whole idea of evolution 'figuring out' anything is not consistent
  with our understanding of natural selection.

 It's a figure of speech.

   Natural selection is not teleological.

 A keen grasp of the obvious.

Sorry, I can't tell here whether you are using it as a
figure of speech or ascribing agency to evolution. Even using it as a
figure of speech suggests an exaggeration of the role of evolution in
the universe.


  The subject was things that influenced my theory. Light, electricity, and
  electromagnetism are significant influences.

 Electromagnetism significantly influences everything as do the other 3
 fundamental physical forces. Tell me something new.

If I yell out 'Tesla' and then get hit by lightning, is that not
associated strongly enough for you to notice?


   What do you think understanding is actually supposed to lead to?

 Your ideas lead to navel gazing not understanding.

And your ideas lead to...?




Re: Intelligence and consciousness

2012-02-03 Thread John Clark
On Fri, Feb 3, 2012 Craig Weinberg whatsons...@gmail.com wrote:


  An abacus is a computer. Left to its own devices it's just a rectangle
 of wood and bamboo or whatever.


That is true, so although it certainly needs to be huge we can't just make
our very very big abacuses even bigger and expect them to be intelligent;
it's not just a hardware problem of wiring together more microchips, we must
teach (program) the abacus to learn on its own. That is a very difficult
task, but enormous progress has been made in the last few years; as I said
before, in 1998 nobody knew how to program even the largest 30 million
dollar super abacus in the world to perform acts of intelligence that today
can be done by that $399 iPhone abacus in your pocket.

I admit that it could turn out that humans just aren't smart enough to know
how to teach a computer to be as smart or smarter than they are, but that
doesn't mean it won't happen because humans have help, computers
themselves. In a sense that's already true, a computer program needs to be
in zeros and ones but nobody could write the Siri program that way, but we
have computer assemblers and compilers to do that so we can write in a much
higher level language than zeros and ones. So at a fundamental level no
human being could write a computer program like Siri and nobody knows how
it works. But programs like that get written nevertheless. And as computers
get better the tools for writing programs get better and intelligent
programs even more complex than Siri will get written with even less human
understanding of their operation. The process builds on itself and thus
accelerates.

   people in a vegetative state do sometimes have an inner life despite
 their behavior.


In the course of our conversations you have made declarative statements
like the above dozens if not hundreds of times, but you never seriously ask
yourself: HOW DO I KNOW THIS?


  we certainly don't owe a trashcan lid any such benefit of the doubt.


Why certainly, why are you so certain? I know why I am but I can't figure
out why you are. Like you I also think the idea that a plastic trashcan can
have an inner life is ridiculous, but unlike you I can give a clear logical
reason WHY I think it's ridiculous: a trash can does not behave
intelligently.

 Like a computer, it is manufactured out of materials selected
 specifically for their stable, uniform, inanimate properties.


Just exactly like human beings that are manufactured out of stable,
uniform, inanimate materials like amino acids.

 I understand what you mean though, and yes, our perception of something's
 behavior is a primary tool to how we think of it, but not the only one.
 More important is the influence of conventional wisdom in a given society
 or group.


At one time the conventional wisdom in society was that black people didn't
have much of an inner life, certainly nothing like that of white people, so
people could own them and do whatever they wanted to those of a darker hue
without guilt. Do you really expect that Mr. Joe Blow and his conventional
wisdom can teach us anything about the future of computers?

 John K Clark




Re: Intelligence and consciousness

2012-02-03 Thread Evgenii Rudnyi

On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of you to
inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within the
context of chess the machine acts just like a person who had
those emotions. So it had at least the functional equivalent of
those emotions. Whereas your opinion is simple prejudice.

I agree my opinion would be simple prejudice had we not already
been over this issue a dozen times. My view is that the whole idea
that there can be a 'functional equivalent of emotions' is
completely unsupported. I give examples of puppets, movies,
trashcans that say THANK YOU, voicemail...all of these things
demonstrate that there need not be any connection at all between
function and interior experience.


Except that in every case there is an emotion in your examples...it's
 just the emotion of the puppeteer, the screenwriter, the trashcan
painter. But in the case of the chess playing computer, there is no
person providing the 'emotion' because the 'emotion' depends on
complex and unforeseeable events. Hence it is appropriate to
attribute the 'emotion' to the computer/program.

Brent


Craig's position that computers in the present form do not have emotions 
is not unique, as emotions belong to consciousness. A quote from my 
favorite book


Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter 10.2 Conscious computers?

p. 128 Our further discussion here, however, will take it as
established that this can never happen.


Now the last paragraph from the chapter 10.3 Conscious robots?

p. 130. So, while we may grant robots the power to form meaningful 
categorical representations at a level reached by the unconscious brain 
and by the behaviour controlled by the unconscious brain, we should 
remain doubtful whether they are likely to experience conscious 
percepts. This conclusion should not, however, be over-interpreted. It 
does not necessarily imply that human beings will never be able to build 
artefacts with conscious experience. That will depend on how the trick 
of consciousness is done. If and when we know the trick, it may be 
possible to duplicate it. But the mere provision of behavioural 
dispositions is unlikely to be up to the mark.


If we say that computers right now have emotions, then we must be able
to exactly define the difference between unconscious and conscious
experience in the computer (for example in the computer that beat
Kasparov). Can you do it?


Hence I personally find this particular position of Craig's to be supported.

Evgenii




Re: Intelligence and consciousness

2012-02-03 Thread meekerdb

On 2/3/2012 12:23 PM, Evgenii Rudnyi wrote:

On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of you to
inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within the
context of chess the machine acts just like a person who had
those emotions. So it had at least the functional equivalent of
those emotions. Whereas your opinion is simple prejudice.

I agree my opinion would be simple prejudice had we not already
been over this issue a dozen times. My view is that the whole idea
that there can be a 'functional equivalent of emotions' is
completely unsupported. I give examples of puppets, movies,
trashcans that say THANK YOU, voicemail...all of these things
demonstrate that there need not be any connection at all between
function and interior experience.


Except that in every case there is an emotion in your examples...it's
 just the emotion of the puppeteer, the screenwriter, the trashcan
painter. But in the case of the chess playing computer, there is no
person providing the 'emotion' because the 'emotion' depends on
complex and unforeseeable events. Hence it is appropriate to
attribute the 'emotion' to the computer/program.

Brent


Craig's position that computers in the present form do not have emotions is not unique, 
as emotions belong to consciousness. A quote from my favorite book


Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter 10.2 Conscious computers?

p. 128 Our further discussion here, however, will take it as established that this can 
never happen.


Now the last paragraph from the chapter 10.3 Conscious robots?

p. 130. So, while we may grant robots the power to form meaningful categorical 
representations at a level reached by the unconscious brain and by the behaviour 
controlled by the unconscious brain, we should remain doubtful whether they are likely 
to experience conscious percepts. This conclusion should not, however, be 
over-interpreted. It does not necessarily imply that human beings will never be able to 
build artefacts with conscious experience. That will depend on how the trick of 
consciousness is done. If and when we know the trick, it may be possible to duplicate 
it. But the mere provision of behavioural dispositions is unlikely to be up to the mark.


If we say that computers right now have emotions, then we must be able to exactly define 
the difference between unconscious and conscious experience in the computer (for example 
in the computer that beat Kasparov). Can you do it?


Can you do it for people? For yourself?  No.  Experiments show that people confuse the 
source of their own emotions.  So your requirement that we be able to exactly define is 
just something you've invented.


Brent





Hence I personally find this particular position of Craig's to be supported.

Evgenii






Re: Intelligence and consciousness

2012-02-03 Thread Evgenii Rudnyi

On 03.02.2012 22:07 meekerdb said the following:

On 2/3/2012 12:23 PM, Evgenii Rudnyi wrote:

On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of
you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within
the context of chess the machine acts just like a person who
had those emotions. So it had at least the functional
equivalent of those emotions. Whereas your opinion is simple
prejudice.

I agree my opinion would be simple prejudice had we not
already been over this issue a dozen times. My view is that the
whole idea that there can be a 'functional equivalent of
emotions' is completely unsupported. I give examples of
puppets, movies, trashcans that say THANK YOU, voicemail...all
of these things demonstrate that there need not be any
connection at all between function and interior experience.


Except that in every case there is an emotion in your
examples...it's just the emotion of the puppeteer, the
screenwriter, the trashcan painter. But in the case of the chess
playing computer, there is no person providing the 'emotion'
because the 'emotion' depends on complex and unforeseeable
events. Hence it is appropriate to attribute the 'emotion' to the
computer/program.

Brent


Craig's position that computers in the present form do not have
emotions is not unique, as emotions belong to consciousness. A
quote from my favorite book

Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter 10.2 Conscious computers?

p. 128 Our further discussion here, however, will take it as
established that this can never happen.

Now the last paragraph from the chapter 10.3 Conscious robots?

p. 130. So, while we may grant robots the power to form meaningful
 categorical representations at a level reached by the unconscious
 brain and by the behaviour controlled by the unconscious brain, we
 should remain doubtful whether they are likely to experience
conscious percepts. This conclusion should not, however, be
over-interpreted. It does not necessarily imply that human beings
will never be able to build artefacts with conscious experience.
That will depend on how the trick of consciousness is done. If and
when we know the trick, it may be possible to duplicate it. But the
mere provision of behavioural dispositions is unlikely to be up to
the mark.

If we say that computers right now have emotions, then we must be
able to exactly define the difference between unconscious and
conscious experience in the computer (for example in the computer
that beat Kasparov). Can you do it?


Can you do it for people? For yourself? No. Experiments show that
people confuse the source of their own emotions. So your requirement
that we be able to exactly define is just something you've
invented.

Brent


I believe that there is at least a small difference. Presumably we know 
everything about the computer that has played chess. Then it seems that 
a hypothesis about emotions in that computer could be verified without a 
problem - hence my notion of exactly define. On the other hand, 
consciousness remains a hard problem, and here exactly define 
does not work.


However, the latter does not mean that consciousness does not exist as a 
phenomenon. Take life, for example. I would say that there is no good 
definition of what life is (exactly define does not work), yet this 
does not prevent science from researching it. The same should hold for 
conscious experience.


Evgenii






Hence I personally find this particular position of Craig's to be
supported.

Evgenii








Re: Intelligence and consciousness

2012-02-03 Thread Craig Weinberg
On Feb 3, 11:22 am, John Clark johnkcl...@gmail.com wrote:
 On Fri, Feb 3, 2012 Craig Weinberg whatsons...@gmail.com wrote:



   An abacus is a computer. Left to it's own devices it's just a rectangle
  of wood and bamboo or whatever.

 That is true so although it certainly needs to be huge we can't just make
 our very very big abacuses even bigger and expect them to be intelligent,
 its not just a hardware problem of wiring together more microchips, we must
 teach (program) the abacus to learn on its own.

Huge abacuses are a really good way to look at this, although it's
pretty much the same as the China Brain. Your position is that if we
made a gigantic abacus the size of our solar system, and had a person
manning each sliding bead, that there would be a possibility that even
though each person could only slide their bead to the left or right
and tell others or be told by others to slide their beads
left or right, with a well-crafted enough sequence of
instructions the abacus itself would begin to have a conscious
experience. The abacus could literally be made to think that it had
been born and come to life as a human being, a turtle, a fictional
character...anything we choose to convert into a sequence we feel
reflects the functionality of these beings will actually cause the
abacus to experience being that thing.

Do you see why I am incredulous about this? I understand that if you
assume comp this seems feasible, after all, we can make CleverBot and
Siri, etc. The problem is that CleverBot and Siri live in our very
human minds. They do not have lives and dream of being Siri II. I
understand that you and others here are convinced that logic would
dictate that we can't prove that Siri 5000 won't feel our feelings and
understand what we understand, but I am crystal clear in my own
understanding that no matter how good the program seems, Siri 5000
will feel exactly the same thing as Siri. Nothing. No more than the
instructions being called out on the scaffolds of the Mega Abacus will
begin to feel something some day. It doesn't work that way. Why? As I
continue to try to explain, awareness is not a function of objects, it
is the symmetrically anomalous counterpart of objects. Experiences
accumulate semantic charge - significance - over time, which, in a
natural circumstance, is reflected in the objective shadow, as
external agendas are reflected in the subjective experience. The
abacus and computer's instructions will never be native to them. The
beads will never learn anything. They are only beads.

That is a very difficult
 task but enormous progress has been made in the last few years; as I said
 before, in 1998 nobody knew how to program even the largest 30 million
 dollar super abacus in the world to perform acts of intelligence that today
 can be done by that $399 iPhone abacus in your pocket.

I know. I've heard. As I say, no version is any closer to awareness of
any kind than the first version.


 I admit that it could turn out that humans just aren't smart enough to know
 how to teach a computer to be as smart or smarter than they are,

It's not a matter of being smart enough. You can't turn up into down.
Machines are made of unconsciousness. All machines are unconscious.
That is how we can control them. Consciousness and mechanism are
mutually exclusive by definition and always will be.

 but that
 doesn't mean it won't happen because humans have help, computers
 themselves. In a sense that's already true, a computer program needs to be
 in zeros and ones but nobody could write the Siri program that way, but we
 have computer assemblers and compilers to do that so we can write in a much
 higher level language than zeros and ones.

That would not be necessary if the machine had any capacity to learn.
Like the neurons of our brain, the microprocessors would adapt and
begin to understand natural human language.

 So at a fundamental level no
 human being could write a computer program like Siri and nobody knows how
 it works.

I wouldn't say we don't know how it works. Binary logic is pretty
straightforward.

 But programs like that get written nevertheless. And as computers
 get better the tools for writing programs get better and intelligent
 programs even more complex than Siri will get written with even less human
 understanding of their operation. The process builds on itself and thus
 accelerates.

That's the theory. Meanwhile, in reality, we have been using the same basic
interface for computers since 1995.


    people in a vegetative state do sometimes have an inner life despite
  their behavior.

 In the course of our conversations you have made declarative statements
 like the above dozens if not hundreds of times but you never seriously ask
 yourself: HOW DO I KNOW THIS?

There is a lot of anecdotal evidence. People come out of comas.
Recently a study proved it with MRI scans where the comatose patient
was able to stimulate areas of their brain associated with coordinated
physical activity in response to the scientists' request to imagine
playing tennis.
Re: Intelligence and consciousness

2012-02-03 Thread meekerdb

On 2/3/2012 1:50 PM, Evgenii Rudnyi wrote:

On 03.02.2012 22:07 meekerdb said the following:

On 2/3/2012 12:23 PM, Evgenii Rudnyi wrote:

On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdbmeeke...@verizon.net wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of
you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within
the context of chess the machine acts just like a person who
had those emotions. So it had at least the functional
equivalent of those emotions. Whereas your opinion is simple
prejudice.

I agree my opinion would be simple prejudice had we not
already been over this issue a dozen times. My view is that the
whole idea that there can be a 'functional equivalent of
emotions' is completely unsupported. I give examples of
puppets, movies, trashcans that say THANK YOU, voicemail...all
of these things demonstrate that there need not be any
connection at all between function and interior experience.


Except that in every case there is an emotion in your
examples...it's just the emotion of the puppeteer, the
screenwriter, the trashcan painter. But in the case of the chess
playing computer, there is no person providing the 'emotion'
because the 'emotion' depends on complex and unforeseeable
events. Hence it is appropriate to attribute the 'emotion' to the
computer/program.

Brent


Craig's position that computers in the present form do not have
emotions is not unique, as emotions belong to consciousness. A
quote from my favorite book

Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter 10.2 Conscious computers?

p. 128 Our further discussion here, however, will take it as
established that this can never happen.

Now the last paragraph from the chapter 10.3 Conscious robots?

p. 130. So, while we may grant robots the power to form meaningful
 categorical representations at a level reached by the unconscious
 brain and by the behaviour controlled by the unconscious brain, we
 should remain doubtful whether they are likely to experience
conscious percepts. This conclusion should not, however, be
over-interpreted. It does not necessarily imply that human beings
will never be able to build artefacts with conscious experience.
That will depend on how the trick of consciousness is done. If and
when we know the trick, it may be possible to duplicate it. But the
mere provision of behavioural dispositions is unlikely to be up to
the mark.

If we say that computers right now have emotions, then we must be
able to exactly define the difference between unconscious and
conscious experience in the computer (for example in the computer
that beat Kasparov). Can you do it?


Can you do it for people? For yourself? No. Experiments show that
people confuse the source of their own emotions. So your requirement
that we be able to exactly define is just something you've
invented.

Brent


I believe that there is at least a small difference. Presumably we know everything about 
the computer that has played chess. Then it seems that a hypothesis about emotions in 
that computer could be verified without a problem - hence my notion of exactly define. 
On the other hand, consciousness remains a hard problem, and here exactly define 
does not work.


However, the latter does not mean that consciousness does not exist as a phenomenon. 
Take life, for example. I would say that there is no good definition of what life is 
(exactly define does not work), yet this does not prevent science from researching it. 
The same should hold for conscious experience.


Evgenii


So you've reversed your theory?  If computers have emotions like people we must *not* be 
able to exactly define them.  And if we can exactly define them, that must prove they are 
not like people?


Actually, if we've made an intelligent chess playing computer, one that learns from 
experience, we probably don't know everything about it.  We might be able to find out - 
but only in the sense that in principle we could find out all the neural connections and 
functions in a human brain.  It's probably easier and more certain to just watch behavior.


Brent




Re: Intelligence and consciousness

2012-02-03 Thread John Clark
On Fri, Feb 3, 2012  Craig Weinberg whatsons...@gmail.com wrote:

 Huge abacuses are a really good way to look at this, although it's pretty
 much the same as the China Brain.


I hope you're not talking about Searle's Chinese room, the stupidest
thought experiment in history.


  Your position is [...] The abacus could literally be made to think


Yes.

 Do you see why I am incredulous about this?


No.



   I am crystal clear in my own understanding that no matter how good the
 program seems, Siri 5000 will feel exactly the same thing as Siri. Nothing.


I accept that you are absolutely positively 100% certain of the above, but
I do NOT accept that you are correct. I'm not a religious man, so I don't
believe in divine revelation, and that's the only way you could know what it
feels like to be Siri; hell, you don't even know what it feels like to be me
and we are of the same species (I presume); all you can do is observe how
Siri and I behave and try to draw conclusions about our inner life, or lack
of same, from that.

 As I continue to try to explain, awareness is not a function of objects,
 it is the symmetrically anomalous counterpart of objects.


Bafflegab.

 Experiences accumulate semantic charge


Semantic charge? Poetic crapola of that sort may impress some but not me.
If you can't express your ideas more clearly than that, they are not worth
expressing at all.


  The beads will never learn anything. They are only beads.


Computers can and do learn things and a Turing Machine can simulate any
computer and you can make a Turing Machine from beads. I won't insult your
intelligence by spelling out the obvious conclusion from that fact.
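The point about beads can be made concrete. A Turing machine is just a tape of symbols plus a finite rule table, and nothing in the rules cares whether the cells are transistors or beads. Below is a minimal sketch (the simulator and rule table are illustrative inventions, not anything from this thread):

```python
# A minimal Turing machine simulator. Each tape cell ("bead") only ever
# holds one symbol and is rewritten by purely local rules, yet the rule
# table below computes something: it flips every bit of its input.
# Hypothetical example for illustration only.

def run_turing_machine(tape, rules, state="start", pos=0, max_steps=1000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))            # tape as sparse dict
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")         # blank cells read as "_"
        new_symbol, move, state = rules[(state, symbol)]
        cells[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# Rule table that inverts a binary string, then halts on the blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", rules))  # -> 0100
```

The simulator never inspects anything but the current cell and state, which is exactly why the same table could in principle be executed by people sliding beads.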

 Machines are made of unconsciousness.


 Machines are made of atoms just like you and me.

 All machines are unconscious. That is how we can control them.


An argument that is already very weak and will become DRAMATICALLY weaker in
the future. In the long run there is no way we can control computers; they
are our slaves right now, but that circumstance will not continue.

 That would not be necessary if the machine had any capacity to learn.


I don't know what you're talking about, machines have been able to learn
for decades.

 at a fundamental level no human being could write a computer program
 like Siri and nobody knows how it works.


I wouldn't say we don't know how it works. Binary logic is pretty
 straightforward.


One binary logic operation is pretty straightforward but 20,000 trillion
of them every second is not, and that's what today's supercomputers can
do, and they are doubling in power every 18 months.

 That's the theory. Meanwhile, in reality, we are using the same basic
 interface for computers since 1995.


What are you talking about? Siri is a computer interface as is Google and
even supercomputers didn't have them or anything close to it in 1995.

  people in a vegetative state do sometimes have an inner life despite
 their behavior.


 In the course of our conversations you have made declarative statements
 like the above dozens if not hundreds of times but you never seriously ask
 yourself: HOW DO I KNOW THIS?


 There is a lot of anecdotal evidence. People come out of comas.


So people come out of comas and you observe that they make certain sounds
with their mouth then you make guesses about their inner life based on
those sounds. Siri can make sounds too.

 Recently a study proved it with MRI scans where the comatose patient was
 able to stimulate areas of their brain associated with coordinated physical
 activity in response to the scientists' request for them to imagine playing
 tennis.


And how do you know that stimulated brain areas have anything to do with
consciousness? By observing behavior when that happens and making guesses
that seem reasonable to you.

 Why doesn't it [a trash can] behave intelligently though?


We don't know exactly why people behave intelligently so we can't give a
definitive answer why a trash can doesn't, at least not yet, but in general
I can say that unlike a computer or a brain a trash can is not organized as
a Turing Machine.
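For concreteness, "organized as a Turing Machine" means a finite rule table acting on a read/write tape; here is a toy sketch (the machine below, which erases a run of 1s, is invented purely for illustration):

```python
from collections import defaultdict

def run_turing(rules, tape, state="start", halt="halt", max_steps=1000):
    """Run a rule table over a tape; '_' is the blank symbol."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = rules[(state, cells[pos])]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# rule table: (state, symbol read) -> (symbol written, head move, next state)
rules = {
    ("start", "1"): ("0", "R", "start"),  # overwrite each 1 with 0
    ("start", "_"): ("_", "R", "halt"),   # blank reached: stop
}
print(run_turing(rules, "111"))  # prints 000_
```

A trash can has no such rule table and no state it updates in response to input, which is the sense in which it is not organized this way.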


  Just exactly like human beings that are manufactured out of stable,
 uniform, inanimate materials like amino acids.

  I disagree. Organic chemistry is volatile. It reeks.


Oh for God's sake, now consciousness must stink! Well when selenium
rectifiers fail they reek to high heaven!

 Molecules may be too primitive to be described as part of us.


Primitive or not you are made of molecules and molecules are made of atoms
and if you've seen one atom you've seen them all.

 I like how you start out grandstanding against prejudice and superficial
 assumptions and end with completely blowing off Mr. Joe Blow.


Thank you, I thought it was rather good myself.

 John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email 

Re: Intelligence and consciousness

2012-02-02 Thread Craig Weinberg
On Jan 30, 6:54 pm, meekerdb meeke...@verizon.net wrote:
 On 1/30/2012 3:14 PM, Craig Weinberg wrote:

  On Jan 30, 6:08 pm, meekerdb meeke...@verizon.net  wrote:
  On 1/30/2012 2:52 PM, Craig Weinberg wrote:
  So kind of you to inform us of your unsupported opinion.
  I was commenting on your unsupported opinion.

 Except that my opinion is supported by the fact that within the context of 
 chess the
 machine acts just like a person who had those emotions.  So it had at least 
 the functional
 equivalent of those emotions. Whereas your opinion is simple prejudice.

I agree my opinion would be simple prejudice had we not already been
over this issue a dozen times. My view is that the whole idea that
there can be a 'functional equivalent of emotions' is completely
unsupported. I give examples of puppets, movies, trashcans that say
THANK YOU, voicemail...all of these things demonstrate that there need
not be any connection at all between function and interior experience.




Re: Intelligence and consciousness

2012-02-02 Thread meekerdb

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdb meeke...@verizon.net  wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdb meeke...@verizon.net wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote:
So kind of you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within the context of 
chess the
machine acts just like a person who had those emotions.  So it had at least the 
functional
equivalent of those emotions. Whereas your opinion is simple prejudice.

I agree my opinion would be simple prejudice had we not already been
over this issue a dozen times. My view is that the whole idea that
there can be a 'functional equivalent of emotions' is completely
unsupported. I give examples of puppets, movies, trashcans that say
THANK YOU, voicemail...all of these things demonstrate that there need
not be any connection at all between function and interior experience.

Except that in every case there is an emotion in your examples...it's just the emotion of 
the puppeteer, the screenwriter, the trashcan painter.  But in the case of the chess 
playing computer, there is no person providing the 'emotion' because the 'emotion' depends 
on complex and unforeseeable events.  Hence it is appropriate to attribute the 'emotion' 
to the computer/program.


Brent




Re: Intelligence and consciousness

2012-02-02 Thread Craig Weinberg
On Jan 31, 1:33 pm, John Clark johnkcl...@gmail.com wrote:



   The Limbic system predates the Neocortex evolutionarily.

 As I've said on this list many times.

   There is no reason to think that emotion emerged after intelligence.

 And as I've said emotion is about 500 million years old but Evolution found
 intelligence much harder to produce, it only figured out how to do it about
 a million years ago, perhaps less.

Do you have any examples of an intelligent organism which evolved
without emotion? The whole idea of evolution 'figuring out' anything
is not consistent with our understanding of natural selection. Natural
selection is not teleological. There is no figuring, only statistical
probabilities related to environmental conditions.

  Evolution doesn't see anything.

 Don't be ridiculous.

I'm not. Statistics don't see things.


  Which thoughtless fallacy should I choose? Oh right, I have no free will
  anyhow so some reason will choose for me.

 Cannot comment, don't know what ASCII string free will means.

   You asked what influenced my theory. You don't see how Tesla relates to
  lightning and electromagnetism?

 I made a Tesla Coil when I was 14, it was great fun looked great and really
 impressed the rubes, but I don't see the relevance to the subject at hand.

The subject was things that influenced my theory. Light, electricity,
and electromagnetism are significant influences.


  That is exactly what the cosmos is - things happening for a reason and
  not happening for a
  reason at the same time.

 And you expect this sort of new age crapola to actually lead to something,

What do you think understanding is actually supposed to lead to?

 like a basic understanding of how the world works? Dream on. But then again
 it might work if you're right about logic not existing.

Logic exists, but it's not the only thing that exists.


  Is there anyone noteworthy in the history of human progress who has not
  been called insane?

 Richard Feynman.

http://inspirescience.wordpress.com/2010/11/09/richard-p-feynman1/

Richard P. Feynman – Crazy as he is Genius

He didn’t do things through conventional or traditional means, but
rather was eccentric, crazy, and went against the norms of society. He
was an explorer of the deepest nature. He adopted at a young age the
philosophy that you should never care what other people think, because
everyone else is most likely wrong.

"When I see equations, I see the letters in colors – I don't know
why. As I'm talking, I see vague pictures of Bessel functions from
Jahnke and Emde's book, with light-tan j's, slightly violet-bluish
n's, and dark brown x's flying around. And I wonder what the hell it
must look like to the students." - Richard Feynman.

Craig




Re: Intelligence and consciousness

2012-02-02 Thread Craig Weinberg
On Jan 31, 3:25 pm, John Mikes jami...@gmail.com wrote:
 Craig and Brent:
 would you kindly disclose an opinion that can be
 deemed  SUPPORTED

 All our 'support' (evidence, verification whatever) comes from mostly
 uninformed information fragments we receive by observation(?) of the
 already accessible details and try to complete them by the available
 'knowledge' already established as conventional science stuff.
 We use instruments, constructed to work on the just described portion of
 observations and evaluate the 'received'(?) data by our flimsy
 (*human?*) mathematical logic ONLY.

Yes! This is a core assumption of multisense realism. I go a step
further to describe that not only do our observations arise entirely
from the qualities of our observational capacities (senses, sense
making), but that the nature of our senses is such that what we
observe as being within us 'seems to be' many different ways, while
more distant observations are understood in terms of facts that
'simply are'. This forms the basis for our human worldviews, with the
far-sighted approaches being overly anthropomorphic and the
mechanistic approaches being the near-sighted view.

 Just compare opinions (scientific that is) of different ages before (and
 after) different levels of accepted (and believed!) informational basis
 (like Flat Earth, BEFORE electricity, BEFORE Marie Curie, Watson, etc.)

 My worldview (and my narrative, of course) is also based on UNSUPPORTED
 OPINION:  mine.

Exactly. This is the native orientation of the universe. The impulse
to validate that opinion externally is valuable, but it also can
seduce us into a false certainty. This is not an illusion, it is
actually how the universe works. In my unsupported opinion.

Craig




Re: Intelligence and consciousness

2012-02-02 Thread John Clark
On Thu, Feb 2, 2012 Craig Weinberg whatsons...@gmail.com wrote:

 My view is that the whole idea that there can be a 'functional equivalent
 of emotions' is completely unsupported. I give examples of puppets


A puppet needs a puppeteer, a computer does not.

 movies, trashcans that say THANK YOU, voicemail...all of these things
 demonstrate that there need not be any connection at all between function
 and interior experience.


For all these examples to be effective you would need to know that they do
not have an inner life, and how do you know they don't have an inner life?
You know because they don't behave as if they have an inner life. Behavior
is the only tool we have for detecting such things in others, so your
examples are useless.

   John K Clark




Re: Intelligence and consciousness

2012-02-02 Thread John Clark
On Thu, Feb 2, 2012  Craig Weinberg whatsons...@gmail.com wrote:

 Do you have any examples of an intelligent organism which evolved without
 emotion?


Intelligence is not possible without emotion, but emotion is possible
without intelligence. And I was surprised you asked me for an example of an
emotional organism with or without intelligence; how in the world could I
do that? You refuse to accept behavior as evidence for emotion or even for
intelligence, so I can't tell if anyone or anything is emotional or
intelligent in the entire universe except for me.


  The whole idea of evolution 'figuring out' anything is not consistent
 with our understanding of natural selection.


It's a figure of speech.


  Natural selection is not teleological.


A keen grasp of the obvious.

 The subject was things that influenced my theory. Light, electricity, and
 electromagnetism are significant influences.


Electromagnetism significantly influences everything as do the other 3
fundamental physical forces. Tell me something new.


  What do you think understanding is actually supposed to lead to?


Your ideas lead to navel gazing not understanding.

  John K Clark




Re: Intelligence and consciousness

2012-01-31 Thread John Clark
On Mon, Jan 30, 2012  Craig Weinberg whatsons...@gmail.com wrote:


  The Limbic system predates the Neocortex evolutionarily.


As I've said on this list many times.


  There is no reason to think that emotion emerged after intelligence.


And as I've said emotion is about 500 million years old but Evolution found
intelligence much harder to produce, it only figured out how to do it about
a million years ago, perhaps less.

 Evolution doesn't see anything.


Don't be ridiculous.

 Which thoughtless fallacy should I choose? Oh right, I have no free will
 anyhow so some reason will choose for me.


Cannot comment, don't know what ASCII string free will means.


  You asked what influenced my theory. You don't see how Tesla relates to
 lightning and electromagnetism?


I made a Tesla Coil when I was 14, it was great fun looked great and really
impressed the rubes, but I don't see the relevance to the subject at hand.

 That is exactly what the cosmos is - things happening for a reason and
 not happening for a
 reason at the same time.


And you expect this sort of new age crapola to actually lead to something,
like a basic understanding of how the world works? Dream on. But then again
it might work if you're right about logic not existing.

 Is there anyone noteworthy in the history of human progress who has not
 been called insane?


Richard Feynman.

  John K Clark




Re: Intelligence and consciousness

2012-01-31 Thread John Mikes
Craig and Brent:
would you kindly disclose an opinion that can be
deemed  SUPPORTED

All our 'support' (evidence, verification whatever) comes from mostly
uninformed information fragments we receive by observation(?) of the
already accessible details and try to complete them by the available
'knowledge' already established as conventional science stuff.
We use instruments, constructed to work on the just described portion of
observations and evaluate the 'received'(?) data by our flimsy
(*human?*) mathematical logic ONLY.
Just compare opinions (scientific that is) of different ages before (and
after) different levels of accepted (and believed!) informational basis
(like Flat Earth, BEFORE electricity, BEFORE Marie Curie, Watson, etc.)

My worldview (and my narrative, of course) is also based on UNSUPPORTED
OPINION:  mine.

John Mikes

On Mon, Jan 30, 2012 at 6:54 PM, meekerdb meeke...@verizon.net wrote:

 On 1/30/2012 3:14 PM, Craig Weinberg wrote:

 On Jan 30, 6:08 pm, meekerdb meeke...@verizon.net wrote:

 On 1/30/2012 2:52 PM, Craig Weinberg wrote:

 So kind of you to inform us of your unsupported opinion.

 I was commenting on your unsupported opinion.



 Except that my opinion is supported by the fact that within the context of
 chess the machine acts just like a person who had those emotions.  So it
 had at least the functional equivalent of those emotions. Whereas your
 opinion is simple prejudice.

 Brent






Re: Intelligence and consciousness

2012-01-30 Thread John Clark
On Sun, Jan 29, 2012 at 8:53 PM, Craig Weinberg whatsons...@gmail.com wrote:

 I just understand that intelligence is an evolution of emotion,


There is simply no logical way that could be true. However important it may
be to us Evolution can not see emotion or consciousness, Evolution can only
see actions, so either emotion and consciousness are a byproduct of
intelligence or emotion and consciousness do not exist. Perhaps you will
insist that emotion and consciousness will join the very long list of
things that you say do not exist  (bits electrons information logic etc)
but I am of the opinion that consciousness and emotion do in fact exist.


  I also understand that electronic computers use semiconductors which I
 know have not evolved into organisms and do not seem to be capable of what
 I would call sensation.


My arguments are based on logic, your argument is that computers just don't
feel squishy enough for your taste.

 Logic plays a part but mainly it's [...]


Not logical.

 My house got struck by lightning right after I really figured out the
 photon theory. I had left my computer on with a website on the biography of
 Tesla on the screen while we saw a movie. True story.
 http://www.stationlink.com/lightning/IMG_1981.JPG


Interesting, but I don't see the relevance.


  You don't deny free will, you just deny that it's possible to even
 conceive of it in the first place. Ohh kayy...


Fortunately I cannot conceive of something happening for a reason and not
happening for a reason at the same time. I say fortunately because there
is a word to describe people who can conceive of such a contradiction,
insane.

 'Computers' that are in use now have not even improved meaningfully in
 the last 15 years. Is Windows 7, XP, 2000, really much better then Windows
 98?


I don't know a lot about Windows 7 but I do know that my Macintosh is one
hell of a lot better than my old steam-powered Windows 98 boat anchor. And
in 1998 the most elite AI researchers on the planet working with 30 million
dollar supercomputers the size of a small cathedral could not do anything
that came even close to what Siri can do on that $399 iPhone in your
pocket, those poor 1998 guys weren't even in the same universe.

And it took Evolution 4 billion years to make human level intelligence, but
humans have only been working on AI for about 50 years. So yes at that rate
I think computers will be smarter than humans at EVERYTHING in 15 to 65
years, and I'd be more surprised if it took longer than 65 years than if it
took less than 15; but even if it took 10 or 100 times that long it would
still be virtually instantaneous on the geological timescale that Evolution
deals with. Oh well, 99% of the species that have ever existed are extinct
and we are about to join their number, but at least we will leave behind a
new and more advanced descendant species.
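The timescale comparison is plain arithmetic; a hedged sketch using the message's own guessed figures (not measurements):

```python
# Even if human-level AI took 100x the pessimistic 65-year guess,
# it would be a vanishing fraction of evolution's ~4-billion-year run.
evolution_years = 4e9
ai_low, ai_high = 15, 65            # the email's guessed range

fraction = (100 * ai_high) / evolution_years
print(f"{fraction:.1e}")            # on the order of 1e-6
```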

  John K Clark




Re: Intelligence and consciousness

2012-01-30 Thread meekerdb

On 1/30/2012 9:47 AM, John Clark wrote:


 I just understand that intelligence is an evolution of emotion,


There is simply no logical way that could be true. However important it may be to us 
Evolution can not see emotion or consciousness, Evolution can only see actions, so 
either emotion and consciousness are a byproduct of intelligence or emotion and 
consciousness do not exist. Perhaps you will insist that emotion and consciousness will 
join the very long list of things that you say do not exist  (bits electrons information 
logic etc) but I am of the opinion that consciousness and emotion do in fact exist.


Of course evolution can't 'see' intelligence either.  As you say selection can only be 
based on action.  But action takes emotion, in the general sense of desiring one thing 
(food, sex,...) and disliking another (pain, being eaten,...). So I'd say emotion, knowing 
what you value, is as important as intelligence, knowing how to get it.


Brent




Re: Intelligence and consciousness

2012-01-30 Thread John Clark
On Mon, Jan 30, 2012  meekerdb meeke...@verizon.net wrote:

 Of course evolution can't 'see' intelligence either.  As you say
 selection can only be based on action.  But action takes emotion



OK I have no problem with that, but then Deep Blue had emotions way back in
1996 when it beat the best human chess player in the world, because in the
course of that game Deep Blue performed actions, very intelligent actions
in fact. As I've said many times, emotion is easy but intelligence is hard.

 John K Clark




Re: Intelligence and consciousness

2012-01-30 Thread Bruno Marchal


On 30 Jan 2012, at 19:13, meekerdb wrote:


On 1/30/2012 9:47 AM, John Clark wrote:



 I just understand that intelligence is an evolution of emotion,

There is simply no logical way that could be true. However  
important it may be to us Evolution can not see emotion or  
consciousness, Evolution can only see actions, so either emotion  
and consciousness are a byproduct of intelligence or emotion and  
consciousness do not exist. Perhaps you will insist that emotion  
and consciousness will join the very long list of things that you  
say do not exist  (bits electrons information logic etc) but I am  
of the opinion that consciousness and emotion do in fact exist.


Of course evolution can't 'see' intelligence either.  As you say  
selection can only be based on action.  But action takes emotion, in  
the general sense of desiring one thing (food, sex,...) and  
disliking another (pain, being eaten,...). So I'd say emotion,  
knowing what you value, is as important as intelligence, knowing how  
to get it.


I can't agree more :)

Reason is the servant of the heart.
I think. Arguably in machines' theology. With reasonable definitions.

We are more defined by our value, than by our flesh. I think (that's a  
bit obvious in comp).


Bruno


http://iridia.ulb.ac.be/~marchal/





