Re: Re: A requirement of intelligence and consciousness which only humans have

2012-09-24 Thread Roger Clough
Hi Stathis Papaioannou  

You'll have to ask Descartes. 


Roger Clough, rclo...@verizon.net 
9/24/2012  
"Forever is a long time, especially near the end." -Woody Allen 


- Receiving the following content -  
From: Stathis Papaioannou  
Receiver: everything-list  
Time: 2012-09-24, 02:44:10 
Subject: Re: A requirement of intelligence and consciousness which only humans have


On Sun, Sep 23, 2012 at 11:20 PM, Roger Clough  wrote: 
> 
> Intelligence and consciousness require an agent outside 
> of spacetime (mental) to make choices about or manipulate 
> physical objects within spacetime. 
> 
> Computers have no agent or self outside of spacetime. 
> So they have no intelligence and cannot be conscious. 
> 
> Period. 

Roger, 

How do you come up with this stuff? 


--  
Stathis Papaioannou 




Re: A requirement of intelligence and consciousness which only humans have

2012-09-23 Thread Stathis Papaioannou
On Sun, Sep 23, 2012 at 11:20 PM, Roger Clough  wrote:
>
> Intelligence and consciousness require an agent outside
> of spacetime (mental) to make choices about or manipulate
> physical objects within spacetime.
>
> Computers have no agent or self outside of spacetime.
> So they have no intelligence and cannot be conscious.
>
> Period.

Roger,

How do you come up with this stuff?


-- 
Stathis Papaioannou




Re: Intelligence and consciousness

2012-02-16 Thread John Clark
On Wed, Feb 15, 2012 at 7:20 PM, Craig Weinberg wrote:

> Obviously Watson or Siri give you access to intelligence, but so does a
> book.


A book can contain information that can help you answer questions but cannot
do so directly; Watson and Siri can. And all three could help you
if you were doing research and writing a serious academic paper, but ELIZA
would be of no help whatsoever. ELIZA's vague, evasive answers tell you
nothing you didn't already know, so when the Chinese Room in your thought
experiment behaves like ELIZA it's about as useful in enlightening us about
the nature of consciousness or intelligence as a Chinese fortune cookie.

> You might consider that we don't need a test. That intelligence is
> fundamentally different than muscle strength or height.
>

You don't know if somebody is tall until you see that he is tall, you don't
know if he's strong until he does something that requires a lot of strength,
and you don't know he is intelligent until you see him do something smart.

> That's helpful but it is still the programmer's intelligence that is
> reflected in the program, not the computer's.
>

Fine, even the largest computer is just a big lump of semiconductors
without a program.

> Do you think that productivity for the sake of productivity will lead to
> anything meaningful?
>

Obviously yes.

>the existence of other minds can only be inferred through behavior.
>>
>
> That's an assumption. You rule out all other epistemology arbitrarily.
>

Expand on that: show me how you can deduce that other conscious minds exist
without speech, writing, or some other mode of behavior showing up somewhere
in the logical deductive chain. If you can do that, not only will you have
won the argument, you will be the greatest philosopher by far in the
history of this small planet.

> Sulfuric acid is H2SO4, remove the sulfur and 3 oxygen atoms and the
>> result is H2O, and
>> you can water corn with water just fine. In a similar way the only
>> difference between a cadaver and a healthy person is the way the atoms are
>> organized.
>>
>
> By that reasoning, all matter is the same thing.


Yep, organize the electrons, neutrons and protons in the right way (which
requires information of course) and you can make anything, including you and me.

> Which is true in some sense, but it is the opposite of cosmos,
> consciousness and realism.
>

Then logically we can only conclude that whatever you mean by "cosmos,
consciousness and realism" cannot be correct, because the opposite of the
truth is false.

  John K Clark




Re: Intelligence and consciousness

2012-02-15 Thread Craig Weinberg
On Feb 15, 1:22 pm, John Clark  wrote:
> On Mon, Feb 13, 2012  Craig Weinberg  wrote:
>
> >> TO HELL WITH ELIZA That prehistoric program is NOT intelligent!
>
> > > What makes you sure it isn't intelligent but that other programs are?
>
> How the hell do you think?! ELIZA doesn't act intelligently but other
> programs do. Nobody in their right mind would use ELIZA to help with
> writing a scientific paper and doing serious research, but you might use
> Watson or Siri.

Obviously Watson or Siri give you access to intelligence, but so does
a book. Would you say that an almanac is more intelligent than a book
of poems? Does the IQ of a book change when you turn it upside down?
I'm trying to point out that what you associate with intelligence
figuratively does not correspond to literal capacity for intelligent
reasoning.

>
> > 20mb of conversational Chinese might be enough to pass a Turing Test for
> > a moderate amount of time.
>
> Maybe, if a chimpanzee were performing the test.

Yes.

>
> > It's completely subjective.
>
> Yes the Turing Test is subjective and it's flawed. Failing the Turing Test
> proves nothing definitive, the subject may be smart as hell but simply not
> want to answer your questions and prefer to remain silent. And
> unsophisticated people might even be impressed by a program as brain dead
> dumb as ELIZA. And people can fool us too, I think we've all met people who
> for the first 10 minutes seem smart as hell but after 30 minutes you
> realize they are pretentious dullards.

That's my point. But eventually we do realize they are dullards - or
machines.

>So with all these flaws why do we
> even bother with the Turing Test? Because despite its flaws it's the ONLY
> tool we have, it's the only way of distinguishing intelligence from stupidity,
> but if we are not very smart ourselves we will make lots of errors in
> administering the test.

You might consider that we don't need a test. That intelligence is
fundamentally different than muscle strength or height.

>
> > If you haven't read it already, this link from Stephen may do a better
> > job than I have of explaining my position:
>
> >http://newempiricism.blogspot.com/2009/02/symbol-grounding-problem.html
>
> And that fails the Turing Test because the author clearly thought that
> Searle was a pretty smart man.

He doesn't have to be smart to be right about the Chinese room. Even
if possibly for the wrong reason.

>
> >> You ask the room to produce a quantum theory of gravity and it does so,
> >> you ask it to output a new poem that a considerable fraction of the human
> >> race would consider to be very beautiful and it does so, you ask it to
> >> output an original fantasy children's novel that will be more popular than
> >> Harry Potter and it does so.
>
> > No. The thought experiment is not about simulating omniscience. If you
> > ask the room to produce anything outside of casual conversation, it would
> > politely decline.
>
> If that's all it could do, if it just produces streams of ELIZA-style
> evasive blather, then it has not demonstrated any intelligence, so I would
> have no reason to think it was intelligent, so I would not think it's
> conscious, so WHAT'S THE POINT OF THE THOUGHT EXPERIMENT?

It would demonstrate X intelligence for t duration to Z audience.
Which is all any intelligence could hope to accomplish.

>
> > First you say 'let's say that the impossible Chinese Room was possible'.
> > Then you say 'it still doesn't work because the Chinese Room isn't
> > possible'.
>
> What I said was that real computers don't work anything like the Chinese
> Room, they don't have a copy of Shakespeare's Hamlet in which the letters
> "t" and "s" are reversed (so be or nos so be shas it she quetsion) resting
> in its memory just in case somebody requested such a thing, but if it had a
> copy of the play as Shakespeare (or Thaketpeare) wrote it, simple ways could
> be found to produce it.

That's helpful but it is still the programmer's intelligence that is
reflected in the program, not the computer's. Which is the whole
point.

>
> > > The Chinese Room is just [...]
>
> There you go again with the "is just".
>
> > 'Where were you on the night of October 15, 2011'?
>
> Well, your honor my brain was inside the head which was on top of the body
> knocking over that liquor store, my mind was in a lingerie model's bedroom,
> and then on the moons of Jupiter. My sense organs are always very close to
> my brain but that is just an evolutionary accident resulting from the fact
> that nerve impulses travel much much slower than light and if they were far
> from my brain the signal delay would have severely reduced the chances of
> my ancestors surviving long enough to reproduce.

Ah, then you shouldn't mind if we put your body in prison.

>
> > > There is a difference between organized matter and matter that wants to
> > organize.
>
> Carbon atoms want to organize into amino acids and amino acids want to
> organize into proteins and proteins want to organize into cells and cells
> want to organize into brains, but silicon atoms have no ambition and don't
> want to organize into anything??

Re: Intelligence and consciousness

2012-02-15 Thread John Clark
On Mon, Feb 13, 2012  Craig Weinberg  wrote:

>> TO HELL WITH ELIZA That prehistoric program is NOT intelligent!
>>
>
> > What makes you sure it isn't intelligent but that other programs are?
>

How the hell do you think?! ELIZA doesn't act intelligently but other
programs do. Nobody in their right mind would use ELIZA to help with
writing a scientific paper and doing serious research, but you might use
Watson or Siri.

> 20mb of conversational Chinese might be enough to pass a Turing Test for
> a moderate amount of time.


Maybe, if a chimpanzee were performing the test.

> It's completely subjective.
>

Yes the Turing Test is subjective and it's flawed. Failing the Turing Test
proves nothing definitive, the subject may be smart as hell but simply not
want to answer your questions and prefer to remain silent. And
unsophisticated people might even be impressed by a program as brain dead
dumb as ELIZA. And people can fool us too, I think we've all met people who
for the first 10 minutes seem smart as hell but after 30 minutes you
realize they are pretentious dullards. So with all these flaws why do we
even bother with the Turing Test? Because despite its flaws it's the ONLY
tool we have, it's the only way of distinguishing intelligence from stupidity,
but if we are not very smart ourselves we will make lots of errors in
administering the test.

> If you haven't read it already, this link from Stephen may do a better
> job than I have of explaining my position:
>
> http://newempiricism.blogspot.com/2009/02/symbol-grounding-problem.html
>

And that fails the Turing Test because the author clearly thought that
Searle was a pretty smart man.

>> You ask the room to produce a quantum theory of gravity and it does so,
>> you ask it to output a new poem that a considerable fraction of the human
>> race would consider to be very beautiful and it does so, you ask it to
>> output an original fantasy children's novel that will be more popular than
>> Harry Potter and it does so.
>>
>
> No. The thought experiment is not about simulating omniscience. If you
> ask the room to produce anything outside of casual conversation, it would
> politely decline.
>

If that's all it could do, if it just produces streams of ELIZA-style
evasive blather, then it has not demonstrated any intelligence, so I would
have no reason to think it was intelligent, so I would not think it's
conscious, so WHAT'S THE POINT OF THE THOUGHT EXPERIMENT?

> First you say 'let's say that the impossible Chinese Room was possible'.
> Then you say 'it still doesn't work because the Chinese Room isn't
> possible'.
>

What I said was that real computers don't work anything like the Chinese
Room, they don't have a copy of Shakespeare's Hamlet in which the letters
"t" and "s" are reversed (so be or nos so be shas it she quetsion) resting
in its memory just in case somebody requested such a thing, but if it had a
copy of the play as Shakespeare (or Thaketpeare) wrote it, simple ways could
be found to produce it.


> > The Chinese Room is just [...]
>

There you go again with the "is just".

> 'Where were you on the night of October 15, 2011'?
>

Well, your honor, my brain was inside the head which was on top of the body
knocking over that liquor store, my mind was in a lingerie model's bedroom,
and then on the moons of Jupiter. My sense organs are always very close to
my brain, but that is just an evolutionary accident resulting from the fact
that nerve impulses travel much, much slower than light, and if they were far
from my brain the signal delay would have severely reduced the chances of
my ancestors surviving long enough to reproduce.


> > There is a difference between organized matter and matter that wants to
> organize.


Carbon atoms want to organize into amino acids and amino acids want to
organize into proteins and proteins want to organize into cells and cells
want to organize into brains, but silicon atoms have no ambition and don't
want to organize into anything?? Do you really think that line of thought
will lead to anything productive?

> Why wouldn't he [Einstein] be aware of his own intelligence?


You tell me, you're the one who believes that intelligent things like smart
computers are unaware of their own intelligence.


> > We don't have to imagine solipsism just because subjectivity isn't
> empirical.
>

But that only works for you, the existence of other minds can only be
inferred through behavior.

> You admit then that you are not interested in defining it [intelligence]
> as it actually is, but only what is convenient to investigate.
>

Convenient? If intelligence does not mean doing intelligent things then I
don't see why anyone would be interested in it and don't even see the need
for the word.

> You can't water corn with sulfuric acid


You can if you change the organization of the acid a little. Sulfuric acid
is H2SO4, remove the sulfur and 3 oxygen atoms and the result is H2O, and
you can water corn with water just fine. In a similar way the only
difference between a cadaver and a healthy person is the way the atoms are
organized.

Re: Intelligence and consciousness

2012-02-13 Thread Craig Weinberg
On Feb 12, 12:34 am, John Clark  wrote:
> On Fri, Feb 10, 2012  Craig Weinberg  wrote:
>
> > I think you are radically overestimating the size of the book and the
>
> > importance of the size to the experiment. ELIZA was about 20Kb.
>
> TO HELL WITH ELIZA That prehistoric program is NOT intelligent!

What makes you sure it isn't intelligent but that other programs are?

> What is
> the point of a thought experiment that gives stupid useless answers to
> questions?

If you haven't read it already, this link from Stephen may do a better
job than I have of explaining my position:

http://newempiricism.blogspot.com/2009/02/symbol-grounding-problem.html

>
> >If it's a thousand times better than ELIZA, then you've got a 20 Mb
> rule book.
>
> For heaven's sake, if a 20 Mb look-up table was sufficient we would have
> had AI decades ago.

Sufficient for what? 20mb of conversational Chinese might be enough to
pass a Turing Test for a moderate amount of time. It's completely
subjective.

>
> Since you can't do so let me make the best case for the Chinese Room from
> your point of view and the most difficult case to defend from mine. Let's
> say you're right and the size of the lookup table is not important so we
> won't worry that it's larger than the observable universe, and let's say
> time is not an issue either so we won't worry that it operates a billion
> trillion times slower than our mind, and let's say the Chinese Room doesn't
> do ELIZA style bullshit but can engage in a brilliant and interesting (if
> you are very very very patient) conversation with you in Chinese or any
> other language about anything. And let's have the little man not only be
> ignorant of Chinese but be retarded and thus not understand anything in any
> language, he can only look at input symbols and then look at the huge
> lookup table till he finds similar squiggles and the appropriate response
> to those squiggles which he then outputs. The man has no idea what's going
> on, he just looks at input squiggles and matches them up with output
> squiggles, but from outside the room it's very different.
>

Yes

> You ask the room to produce a quantum theory of gravity and it does so, you
> ask it to output a new poem that a considerable fraction of the human race
> would consider to be very beautiful and it does so, you ask it to output an
> original fantasy children's novel that will be more popular than Harry
> Potter and it does so.

No. The thought experiment is not about simulating omniscience. If you
ask the room to produce anything outside of casual conversation, it
would politely decline.

> The room certainly behaves intelligently but the man
> was not conscious of any of the answers produced, as I've said the man
> doesn't have a clue what's going on, so does this disprove my assertion
> that intelligent behavior implies consciousness?

Yes. Nothing in the room is conscious, nor is the room itself, or the
building, city or planet conscious of the conversation.

>
> No it does not, or at least it probably does not, this is why. That
> reference book that contains everything that can be said about anything
> that can be asked in a finite time would be large, "astronomical" would be
> far far too weak a word to describe it,

Where are you getting that from? I haven't read anything about the
Chinese Room being defined as having superhuman intelligence. All it
has to do is make convincing Chinese conversation for a while.

> but it would not be infinitely
> large so it remains a legitimate thought experiment. However that
> astounding lookup table came from somewhere, whoever or whatever made it
> had to be very intelligent indeed and also I believe conscious, and so the
> brilliance of the actions of the Chinese Room does indeed imply
> consciousness.

Of course. Programs indeed reflect the intelligence and consciousness
of their programmers to an intelligent and conscious audience, but not
to the program itself. If the programmer and audience is dead, there
is no intelligence or consciousness at all. I think you are trying to
sneak out of this now by strawmanning my position. You make it sound
as if I claimed that CDs could not be used to play music because CDs
are not musicians. My position has always been that people can use
inanimate media to access subjective content by sense, yours has been
that if inanimate machines behave intelligently then they themselves
must be conscious and intelligent. Now you are backing off of that and
saying that anything that ever had anything to do with consciousness
can be said to be conscious.

>
> You may say that even if I'm right about that then a computer doing smart
> things would just imply the consciousness of the people who made the
> computer.

Re: Intelligence and consciousness

2012-02-12 Thread John Clark
On Sun, Feb 12, 2012 at 2:13 AM, meekerdb  wrote:

Not only that, a computer implementing AI would be able to learn from its
> discussion.  Even if it started with an astronomically large look-up table,
> the look-up table would grow.
>

 That is very true!


  John K Clark




Re: Intelligence and consciousness

2012-02-12 Thread Bruno Marchal


On 12 Feb 2012, at 06:50, L.W. Sterritt wrote:

I don't really understand this thread - magical thinking?   The  
neural network between our ears is who / what we are,  and  
everything that we will experience.


If that were the case, we would not survive with an artificial brain.
Comp would be false. With comp it is better to consider that we have a
brain, rather than that we are a brain.





 It is the source of consciousness - even if consciousness is  
regarded as an epiphenomenon.


UDA shows that it is the other way around. I know that this is very
counterintuitive. But the brain, as a material object, is a creation of
consciousness, which is itself a natural flux emerging on arithmetical
truth from the points of view of universal machines/numbers. But
locally you are right: the material brain is what makes your
"platonic" consciousness capable of manifesting itself relative to a
more probable computational history. Yet in the big (counterintuitive)
picture, the number relations are responsible for consciousness, which
selects relative computations among infinities, and matter is a
first person plural phenomenon emerging from a statistical competition
among infinities of (universal) numbers (assuming mechanism).


Most people naturally believe that mechanism is an ally to  
materialism, but they are epistemologically incompatible.


Bruno






Gandalph


On Feb 11, 2012, at 9:34 PM, John Clark wrote:


On Fri, Feb 10, 2012  Craig Weinberg  wrote:

> I think you are radically overestimating the size of the book  
and the importance of the size to the experiment. ELIZA was about  
20Kb.


TO HELL WITH ELIZA That prehistoric program is NOT intelligent!  
What is the point of a thought experiment that gives stupid useless
answers to questions?


>If it's a thousand times better than ELIZA, then you've got a  
20 Mb rule book.


For heaven's sake, if a 20 Mb look-up table was sufficient we would
have had AI decades ago.


Since you can't do so let me make the best case for the Chinese  
Room from your point of view and the most difficult case to defend  
from mine. Let's say you're right and the size of the lookup table  
is not important so we won't worry that it's larger than the  
observable universe, and let's say time is not an issue either so we
won't worry that it operates a billion trillion times slower than  
our mind, and let's say the Chinese Room doesn't do ELIZA style  
bullshit but can engage in a brilliant and interesting (if you are  
very very very patient) conversation with you in Chinese or any  
other language about anything. And let's have the little man not
only be ignorant of Chinese but be retarded and thus not understand  
anything in any language, he can only look at input symbols and  
then look at the huge lookup table till he finds similar squiggles  
and the appropriate response to those squiggles which he then  
outputs. The man has no idea what's going on, he just looks at  
input squiggles and matches them up with output squiggles, but from  
outside the room it's very different.


You ask the room to produce a quantum theory of gravity and it does  
so, you ask it to output a new poem that a considerable fraction of  
the human race would consider to be very beautiful and it does so,  
you ask it to output an original fantasy children's novel that will
be more popular than Harry Potter and it does so. The room  
certainly behaves intelligently but the man was not conscious of  
any of the answers produced, as I've said the man doesn't have a  
clue what's going on, so does this disprove my assertion that  
intelligent behavior implies consciousness?


No it does not, or at least it probably does not, this is why. That  
reference book that contains everything that can be said about  
anything that can be asked in a finite time would be large,  
"astronomical" would be far far too weak a word to describe it, but  
it would not be infinitely large so it remains a legitimate thought  
experiment. However that astounding lookup table came from  
somewhere, whoever or whatever made it had to be very intelligent  
indeed and also I believe conscious, and so the brilliance of the  
actions of the Chinese Room does indeed imply consciousness.


You may say that even if I'm right about that then a computer doing  
smart things would just imply the consciousness of the people who  
made the computer. But here is where the analogy breaks down, real  
computers don't work like the Chinese Room does, they don't have  
anything remotely like that astounding lookup table; the godlike  
thing that made the Chinese Room knows exactly what that room will  
do in every circumstance, but computer scientists don't know what  
their creation will do, all they can do is watch it and see.


But you may also say, I don't care how the room got made, I was  
talking about inside the room and I insist there was no  
consciousness inside that room. I would say assigning a position to
consciousness is a little like assigning a position to "fast" or "red" or
any other adjective, it doesn't make a lot of sense.

Re: Intelligence and consciousness

2012-02-11 Thread meekerdb

On 2/11/2012 9:34 PM, John Clark wrote:
You may say that even if I'm right about that then a computer doing smart things would 
just imply the consciousness of the people who made the computer. But here is where the 
analogy breaks down, real computers don't work like the Chinese Room does, they don't 
have anything remotely like that astounding lookup table; the godlike thing that made 
the Chinese Room knows exactly what that room will do in every circumstance, but 
computer scientists don't know what their creation will do, all they can do is watch it 
and see.


Not only that, a computer implementing AI would be able to learn from its discussion.
Even if it started with an astronomically large look-up table, the look-up table would grow.
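A minimal sketch of that point (in Python; the learning rule here - store whatever correction the interlocutor supplies - is an invented illustration, not a claim about any particular AI):

    class LearningLookupBot:
        """A chatbot backed by a lookup table that grows as it converses."""

        def __init__(self, table=None):
            self.table = dict(table or {})  # prompt -> reply

        def reply(self, prompt):
            return self.table.get(prompt, "I don't know yet.")

        def learn(self, prompt, better_reply):
            # After this call the table is no longer the one it shipped with.
            self.table[prompt] = better_reply

    bot = LearningLookupBot({"hello": "hi"})
    print(bot.reply("what is comp?"))   # -> I don't know yet.
    bot.learn("what is comp?", "the bet that the brain is Turing-emulable")
    print(bot.reply("what is comp?"))   # -> learned from the discussion

Whatever the table's makers knew, they did not know tomorrow's entries.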


Brent




Re: Intelligence and consciousness

2012-02-11 Thread L.W. Sterritt
I don't really understand this thread - magical thinking?   The neural network 
between our ears is who / what we are,  and everything that we will experience. 
 It is the source of consciousness - even if consciousness is regarded as an 
epiphenomenon.  

Gandalph

 
On Feb 11, 2012, at 9:34 PM, John Clark wrote:

> On Fri, Feb 10, 2012  Craig Weinberg  wrote:
> 
> > I think you are radically overestimating the size of the book and the 
> importance of the size to the experiment. ELIZA was about 20Kb.
> 
> TO HELL WITH ELIZA That prehistoric program is NOT intelligent! What is 
> the point of a thought experiment that gives stupid useless answers to
> questions?
> 
> >If it's a thousand times better than ELIZA, then you've got a 20 Mb rule 
> book. 
> 
> For heaven's sake, if a 20 Mb look-up table was sufficient we would have had
> AI decades ago.  
> 
> Since you can't do so let me make the best case for the Chinese Room from 
> your point of view and the most difficult case to defend from mine. Let's say 
> you're right and the size of the lookup table is not important so we won't 
> worry that it's larger than the observable universe, and let's say time is 
> not an issue either so we won't worry that it operates a billion trillion
> times slower than our mind, and let's say the Chinese Room doesn't do ELIZA 
> style bullshit but can engage in a brilliant and interesting (if you are very 
> very very patient) conversation with you in Chinese or any other language 
> about anything. And let's have the little man not only be ignorant of Chinese
> but be retarded and thus not understand anything in any language, he can only 
> look at input symbols and then look at the huge lookup table till he finds 
> similar squiggles and the appropriate response to those squiggles which he 
> then outputs. The man has no idea what's going on, he just looks at input 
> squiggles and matches them up with output squiggles, but from outside the 
> room it's very different. 
> 
> You ask the room to produce a quantum theory of gravity and it does so, you 
> ask it to output a new poem that a considerable fraction of the human race 
> would consider to be very beautiful and it does so, you ask it to output an
> original fantasy children's novel that will be more popular than Harry Potter 
> and it does so. The room certainly behaves intelligently but the man was not 
> conscious of any of the answers produced, as I've said the man doesn't have a 
> clue what's going on, so does this disprove my assertion that intelligent 
> behavior implies consciousness?
> 
> No it does not, or at least it probably does not, this is why. That reference 
> book that contains everything that can be said about anything that can be 
> asked in a finite time would be large, "astronomical" would be far far too 
> weak a word to describe it, but it would not be infinitely large so it 
> remains a legitimate thought experiment. However that astounding lookup table 
> came from somewhere, whoever or whatever made it had to be very intelligent 
> indeed and also I believe conscious, and so the brilliance of the actions of 
> the Chinese Room does indeed imply consciousness. 
> 
> You may say that even if I'm right about that then a computer doing smart 
> things would just imply the consciousness of the people who made the 
> computer. But here is where the analogy breaks down, real computers don't 
> work like the Chinese Room does, they don't have anything remotely like that 
> astounding lookup table; the godlike thing that made the Chinese Room knows 
> exactly what that room will do in every circumstance, but computer scientists 
> don't know what their creation will do, all they can do is watch it and see.  
> 
> But you may also say, I don't care how the room got made, I was talking about 
> inside the room and I insist there was no consciousness inside that room. I 
> would say assigning a position to consciousness is a little like assigning a 
> position to "fast" or "red" or any other adjective, it doesn't make a lot of 
> sense. If your conscious exists anywhere it's not inside a vat made of bone 
> balancing on your shoulders, it's where you're thinking about. I am the way 
> matter behaves when it is organized in a johnkclarkian way and other things 
> are the way matter behaves when it is organized in a chineseroomian way. 
> 
> And by the way, I don't intend to waste my time defending the assertion that 
> intelligent behavior implies intelligence, that would be like debating if X 
> implies X or not, I have better things to do with my time.  
> 
>   >The King James Bible can be downloaded here
> 
> 
> No thanks, I'll pass on that.
> 
> >> Only?! Einstein only seemed intelligent to scientifically literate 
> >> speakers in the outside world.
>  
>   > No, he was aware of his own intelligence too. 
> 
> How the hell do you know that? And you seem to be using the words 
> "intelligent" and "conscious" interchangeably, t

Re: Intelligence and consciousness

2012-02-11 Thread John Clark
On Fri, Feb 10, 2012  Craig Weinberg  wrote:

> I think you are radically overestimating the size of the book and the
> importance of the size to the experiment. ELIZA was about 20Kb.
>

TO HELL WITH ELIZA That prehistoric program is NOT intelligent! What is
the point of a thought experiment that gives stupid useless answers to
questions?

>If it's a thousand times better than ELIZA, then you've got a 20 Mb
rule book.

For heaven's sake, if a 20 Mb look-up table was sufficient we would have
had AI decades ago.

Since you can't do so let me make the best case for the Chinese Room from
your point of view and the most difficult case to defend from mine. Let's
say you're right and the size of the lookup table is not important so we
won't worry that it's larger than the observable universe, and let's say
time is not an issue either so we won't worry that it operates a billion
trillion times slower than our mind, and let's say the Chinese Room doesn't
do ELIZA style bullshit but can engage in a brilliant and interesting (if
you are very very very patient) conversation with you in Chinese or any
other language about anything. And let's have the little man not only be
ignorant of Chinese but be retarded and thus not understand anything in any
language, he can only look at input symbols and then look at the huge
lookup table till he finds similar squiggles and the appropriate response
to those squiggles which he then outputs. The man has no idea what's going
on, he just looks at input squiggles and matches them up with output
squiggles, but from outside the room it's very different.

You ask the room to produce a quantum theory of gravity and it does so, you
ask it to output a new poem that a considerable fraction of the human race
would consider to be very beautiful and it does so, you ask it to output an
original fantasy children's novel that will be more popular than Harry
Potter and it does so. The room certainly behaves intelligently but the man
was not conscious of any of the answers produced, as I've said the man
doesn't have a clue what's going on, so does this disprove my assertion
that intelligent behavior implies consciousness?

No it does not, or at least it probably does not, this is why. That
reference book that contains everything that can be said about anything
that can be asked in a finite time would be large, "astronomical" would be
far far too weak a word to describe it, but it would not be infinitely
large so it remains a legitimate thought experiment. However that
astounding lookup table came from somewhere, whoever or whatever made it
had to be very intelligent indeed and also I believe conscious, and so the
brilliance of the actions of the Chinese Room does indeed imply
consciousness.

You may say that even if I'm right about that then a computer doing smart
things would just imply the consciousness of the people who made the
computer. But here is where the analogy breaks down, real computers don't
work like the Chinese Room does, they don't have anything remotely like
that astounding lookup table; the godlike thing that made the Chinese Room
knows exactly what that room will do in every circumstance, but computer
scientists don't know what their creation will do, all they can do is watch
it and see.

But you may also say, I don't care how the room got made, I was talking
about inside the room and I insist there was no consciousness inside that
room. I would say assigning a position to consciousness is a little like
assigning a position to "fast" or "red" or any other adjective, it doesn't
make a lot of sense. If your conscious exists anywhere it's not inside a
vat made of bone balancing on your shoulders, it's where you're thinking
about. I am the way matter behaves when it is organized in a johnkclarkian
way and other things are the way matter behaves when it is organized in a
chineseroomian way.

And by the way, I don't intend to waste my time defending the assertion
that intelligent behavior implies intelligence, that would be like debating
if X implies X or not, I have better things to do with my time.

  >The King James Bible can be downloaded here
>


No thanks, I'll pass on that.

>> Only?! Einstein only seemed intelligent to scientifically literate
>> speakers in the outside world.
>>
>

>   > No, he was aware of his own intelligence too.
>

How the hell do you know that? And you seem to be using the words
"intelligent" and "conscious" interchangeably, they are not synonyms.

  >If you start out defining intelligence as an abstract function and
> category of behaviors
>

Which is the only operational definition of intelligence.

> rather than quality of consciousness
>

Which is a totally useless definition in investigating the intelligence of
a computer or a person or an animal or of ANYTHING.

> I use ELIZA as an example because you can clearly see that it is not
> intelligent
>

So can I, so when you use that idiot program to try to advance your
antediluvian ideas i

Re: Intelligence and consciousness

2012-02-10 Thread Craig Weinberg
On Feb 10, 3:52 pm, John Clark  wrote:
> On Thu, Feb 9, 2012 Craig Weinberg  wrote:
>
> > The rule book is the memory.
>
> Yes but the rule book not only contains an astronomically large database, it
> also contains a super ingenious artificial intelligence program; without
> those things the little man is like a naked microprocessor sitting on a
> storage shelf, it's not a brain and it's not a computer and it's not doing one
> damn thing.

I think you are radically overestimating the size of the book and the
importance of the size to the experiment. ELIZA was about 20Kb.
http://www.jesperjuul.net/eliza/

If it's a thousand times better than ELIZA, then you've got a 20 Mb
rule book. The King James Bible can be downloaded here
http://www.biblepath.com/bible_download.html at 14.33Mb. There is no
time limit specified so we have no way of knowing how long it would
take for a book this size to fail the Turing Test.
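For concreteness, here is a minimal sketch of the mechanism in question (in Python; the patterns and templates are invented for illustration and are far cruder than Weizenbaum's actual script):

    import random
    import re

    # A toy ELIZA-style rule book: regex patterns mapped to reply
    # templates. \1 is filled in with whatever the pattern captured.
    RULES = [
        (re.compile(r'\bI need (.*)', re.I),
         ["Why do you need \\1?", "Would it really help you to get \\1?"]),
        (re.compile(r'\bI am (.*)', re.I),
         ["How long have you been \\1?", "Why do you say you are \\1?"]),
    ]
    # Stock evasions used when nothing matches - the vague evasive
    # answers discussed in this thread.
    EVASIONS = ["Please go on.", "I see.", "What does that suggest to you?"]

    def respond(utterance):
        for pattern, templates in RULES:
            match = pattern.search(utterance)
            if match:
                return match.expand(random.choice(templates))
        return random.choice(EVASIONS)

    print(respond("I am unhappy"))  # e.g. -> How long have you been unhappy?

The whole trick is a handful of rules plus canned evasions, which is why it fits in kilobytes and why the conversation eventually rings hollow.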

It might be more useful to use more of a pharmaceutical model, like
LD50 or LD100: how long a conversation do you have to have before
50% of the native speakers fail the system? Is the Turing Test an LD00
test with unbounded duration, where no native speaker can ever tell the
difference no matter how long they converse? This is clearly
impossible. It's context dependent and subjective. I only assume that
everyone here is human because I have no reason to doubt that, but in
a testing situation, I would not be confident that everyone here is
human judging only from responses.
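To make the analogy concrete, a minimal sketch (in Python; the judges' detection times are invented purely for illustration):

    # Minutes of conversation before each native-speaker judge correctly
    # unmasked the machine; None means the judge never caught on.
    detection_minutes = [4, 7, 9, 12, 15, 21, 30, 48, None, None]

    def ld_duration(times, fraction=0.5):
        """Shortest conversation length at which the given fraction of
        judges has already unmasked the machine (None if never reached)."""
        caught = sorted(t for t in times if t is not None)
        needed = int(len(times) * fraction)
        if needed == 0 or len(caught) < needed:
            return None
        return caught[needed - 1]

    print(ld_duration(detection_minutes, 0.5))  # LD50 -> 15 minutes
    print(ld_duration(detection_minutes, 1.0))  # LD100 -> None, never reached

On this framing a "pass" is always relative to a judge pool and a duration, never absolute.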

>
> > >The contents of memory is dumb too - as dumb as player piano rolls.
>
> That's pretty dumb, but the synapses of the brain are just as dumb, and the
> atoms they, and computers and everything else, are made of are even
> dumber.

Player piano rolls aren't living organisms that create and repair vast
organic communication networks. Computers don't do anything by
themselves; they have to be carefully programmed and maintained by
people and they have to have human users to make sense of any of their
output. Neurons require no external physical agents to program or use
them.

>
> > The two together only seem intelligent to Chinese speakers outside the
> > door
>
> Only?! Einstein only seemed intelligent to scientifically literate speakers
> in the outside world.

No, he was aware of his own intelligence too. I think you're grasping
at straws.

> It "seems" that, as you use the term, seeming
> intelligent is as good as being intelligent.

So if I imitate Arnold Schwarzenegger on the phone, then that's as
good as me being Schwarzenegger.

> In fact it seems to me that
> believing intelligent actions are not a sign of intelligence is not very
> intelligent.

I understand that you think of it that way, and I think that is a
moronic belief, but I don't think that makes you a moron. It all comes
down to thinking in terms of an arbitrary formalism of language and
working backward to reality, rather than working from concrete realism
and using language to understand it. If you start out defining
intelligence as an abstract function and category of behaviors rather
than as a quality of consciousness which entails the capacity for
behaviors and functions, then you end up proving your own assumptions
with circular reasoning.

>
> > A conversation that lasts a few hours could probably be generated from a
> > standard Chinese phrase book, especially if equipped with some useful
> > evasive answers (a la ELIZA).
>
> You bring up that stupid 40 year old program again? Yes ELIZA displayed
> little if any intelligence but that program is 40 years old! Do try to keep
> up.

You keep up. ELIZA is still being updated as of 2007:
http://webscripts.softpedia.com/script/Programming-Methods-and-Algorithms-Python/Artificial-Intelligence-Chatterbot-Eliza-15909.html

I use ELIZA as an example because you can clearly see that it is not
intelligent and you can clearly see that it could superficially seem
intelligent. It becomes more difficult to be as sure what is going on
when the program is more sophisticated because it is a more convincing
fake. The ELIZA example is perfect because it exposes the fundamental
mechanism by which trivial intelligence can be mistaken for the
potential for understanding.

> And if you are really confident in your ideas push the thought
> experiment to the limit and let the Chinese Room produce brilliant answers
> to complex questions, if it just churns out ELIZA style evasive crap that
> proves nothing because we both agree that's not very intelligent.

Ok, make it a million times the size of ELIZA. A set of 1,000 books. I
think that would pass an LD50 Turing Test of a five hour conversation,
don't you?

>
> > The size isn't the point though.
>
> I rather think it is. A book larger than the observable universe and a
> program more brilliant than any written,

Where are you getting that from?

> yet you insist that if understanding
> is anywhere in that room it must be in the by far least remarkable part of
> it, the silly little man.


Re: Intelligence and consciousness

2012-02-10 Thread John Clark
On Thu, Feb 9, 2012 Craig Weinberg  wrote:
> The rule book is the memory.

Yes but the rule book not only contains an astronomically large database, it
also contains a super ingenious artificial intelligence program; without
those things the little man is like a naked microprocessor sitting on a
storage shelf, it's not a brain and it's not a computer and it's not doing one
damn thing.


> >The contents of memory is dumb too - as dumb as player piano rolls.


That's pretty dumb, but the synapses of the brain are just as dumb, and the
atoms they, and computers and everything else, are made of are even
dumber.

> The two together only seem intelligent to Chinese speakers outside the
> door


Only?! Einstein only seemed intelligent to scientifically literate speakers
in the outside world. It "seems" that, as you use the term, seeming
intelligent is as good as being intelligent. In fact it seems to me that
believing intelligent actions are not a sign of intelligence is not very
intelligent.

> A conversation that lasts a few hours could probably be generated from a
> standard Chinese phrase book, especially if equipped with some useful
> evasive answers (a la ELIZA).


You bring up that stupid 40 year old program again? Yes ELIZA displayed
little if any intelligence but that program is 40 years old! Do try to keep
up. And if you are really confident in your ideas push the thought
experiment to the limit and let the Chinese Room produce brilliant answers
to complex questions; if it just churns out ELIZA-style evasive crap, that
proves nothing, because we both agree that's not very intelligent.

> The size isn't the point though.


I rather think it is. A book larger than the observable universe and a
program more brilliant than any ever written, yet you insist that if
understanding is anywhere in that room it must be in the by far least
remarkable part of it, the silly little man. And remember, the consciousness
that room produces would not be like the consciousness you or I have; it
would take that room many billions of years to generate as much
consciousness as you do in one second.
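The size claim is just counting. A minimal sketch of the estimate (in Python; the vocabulary, reply length and conversation length are illustrative assumptions, not measurements):

    import math

    vocabulary = 3000   # distinct Chinese characters in play
    reply_len = 20      # characters per utterance
    exchanges = 10      # question/answer rounds the book must cover

    # A lookup table keyed on the entire conversation so far needs an
    # entry per possible history: vocabulary ** (total character slots).
    exponent = reply_len * exchanges * math.log10(vocabulary)
    print(f"table entries: ~10^{exponent:.0f}")   # -> ~10^695
    print("atoms in the observable universe: ~10^80")

Even with far stingier assumptions the exponent dwarfs 80; a book covering every possible conversation, as opposed to a phrase book for one short one, cannot physically exist.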

> Speed is a red herring too.
>

No it is not and I will tell you exactly why as soon as the sun burns out
and collapses into a white dwarf. Speed isn't an issue, so you have to
concede that I won that point.

 > if it makes sense for a room to be conscious, then it makes sense that
> anything and everything can be conscious


Yes, providing the thing in question behaves intelligently.  We only think
our fellow humans are conscious when they behave intelligently and that's
the only reason we DON'T think they're conscious when they're sleeping or
dead; all I ask is that you play by the same rules when dealing with
computers or Chinese Rooms.

>> However Searle does not expect us to think it odd that 3 pounds of grey
>> goo in a bone vat can be conscious
>>
>
> Because unlike you, he [Searle] is not presuming the neuron doctrine. I
> think his position is that consciousness cannot be solely due to the
> material functioning of the brain and must be something else.


And yet if you change the way the brain functions, through drugs or surgery
or electrical stimulation or a bullet to the head, the conscious experience
changes too.  And if the brain can make use of this free floating glowing
bullshit of yours what reason is there to believe that computers can't also
do so? I've asked this question before and the best you could come up with
is that computers aren't squishy and don't smell bad so they can't be
conscious. I don't find that argument compelling.

> We know the brain relates directly to consciousness, but we don't know
> for sure how.


If you don't know how the brain produces consciousness then how in the
world can you be so certain a computer can't do it too, especially if the
computer is as intelligent or even more intelligent than the brain?

> We can make a distinction between the temporary disposition of the brain
> and its more permanent structure or organization.
>

A 44 magnum bullet in the brain would cause a change in brain organization
and would seem to be rather permanent. I believe such a thing would also
cause a rather significant change in consciousness. Do you disagree?

  John K Clark




Re: Intelligence and consciousness

2012-02-09 Thread Bruno Marchal


On 09 Feb 2012, at 17:43, Stephen P. King wrote:


On 2/9/2012 9:14 AM, Bruno Marchal wrote:



On 09 Feb 2012, at 13:20, Stephen P. King wrote:


Dear Bruno,

   My best expression of my "theory", although it does not quite  
rise to that level, is in my last response to ACW under the  
subject line "Ontological problems of COMP". My claim is that your  
argument is self-refuting as it claims to prohibit the very means  
to communicate it. I point out that this problem can easily be  
resolved by putting the abstract aspect of COMP at the same  
ontological level as its interpersonal expressions, but this  
implies dualism which you resist.


You keep telling me that you defend neutral monism, and now you
pretend that I am wrong because I resist dualism?
I have explained that comp, and thus arithmetic alone, explains
many forms of dualism, all embedded in a precise Plotinian-like
'octalism'.



[SPK]

Hi Bruno,

I don't see that you distinguish between the ontological nature  
of the  representation of a theory in terms of mathematics and the  
mathematics itself.


?


You seem to identify the representation with the object but never  
explicitly.


On the contrary. When you say "yes" to the doctor, you identify  
yourself as a relative number, relative to some universal system you  
bet on. So, with comp, we make precise where and when, and how, we  
identify something and its local 3p incarnation. Elsewhere, we keep
distinct the terms and their interpretations (by universal numbers),
which are many.






I am simply asking you why not? Could you elaborate on this  
"octalism" and how does it relate to a neutral monism. How is its  
neutrality defined?


Neutrality is defined more or less as in Spinoza. It is something
which makes sense, and is neither mind nor body. With comp, the UDA
proves, or is supposed to prove, to you that arithmetic (or anything
recursively equivalent) is both necessary and enough.

The octalism consists of the eight hypostases I have already described:
p
Bp
Bp & p
Bp & Dt
Bp & Dt & p

These are 5 distinct variants of self-reference (B is Gödel's
beweisbar, 'provable', applied to p, a sigma_1 arithmetical sentence).


The G/G* splitting, the difference between what machines can prove and
what is true about them, is inherited by three of those variants,
leading to 8 hypostases. The three above match well Plotinus' ONE,
INTELLECT and SOUL, and the two last, which both split, match well the
two MATTER notions of Plotinus, which is a "simple" platonic
correction of an idea of Aristotle. But I found the matter hypostases
through an attempt to define the measure one on the computational
histories when observed by self-duplicating machines. Physics is given
by the material hypostases, and the "G*" (divine, true) parts give the
logic of qualia. Quanta have to be part of it, for making quantum
indeterminacy coherent with the comp indeterminacy, and this saves
comp from solipsism and allows interactions, and a first person plural
notion, although this indeed has not yet been proved. Good difficult
exercise.
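For the notation-minded, the eight can be tabulated (a sketch in LaTeX; B is Gödel's provability predicate, Dt abbreviates ~B~t, consistency, and the assignment of exactly which three variants split is inferred from the ONE/INTELLECT/SOUL and MATTER labels above, not stated here):

\[
\begin{array}{llll}
p & \text{truth (the One)} & \text{no split} & 1\\
Bp & \text{provability (Intellect)} & G/G^{*} & 2\\
Bp \land p & \text{knowledge (the Soul)} & \text{no split} & 1\\
Bp \land Dt & \text{intelligible matter} & \text{splits} & 2\\
Bp \land Dt \land p & \text{sensible matter} & \text{splits} & 2\\
\end{array}
\qquad 1 + 2 + 1 + 2 + 2 = 8
\]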










That is your choice, but you need to understand the consequences  
of Ideal monism. It has no explanation for interactions between  
minds.


It is not a matter of choice, but of proof in theoretical framework.

[SPK]

But I could, if I had the skill with words, construct a theory  
of pink unicorns that would have the same degree of structure and  
make the same claims.


Then do it, and let us compare it to comp.





How would I test it against yours? That is the problem.


I have no theory. Today probably 99% of scientists believe in comp
(not always consciously), and in some primariness of physics. I just
explain that this does not work.


As a logician, I just explain that comp and materialism are not  
compatible. The fundamental realm is computer science, and technically  
we can extract the many-dreams structure from number theory, including
intensional theory (with relative roles for numbers, like codes).





If a theory claims that the physical world does not exist then it  
throws away the very means to test it.


I can't agree more.




It becomes by definition unfalsifiable.


Sure.

The whole point is that comp proves the physical reality to be  
observable. Physical reality exists, but is not primitive.










Then the neutral arithmetical monism explains very well the  
interactions between minds a priori.

[SPK]

How is it neutral when it takes a certain set of properties as  
ontologically primitive?  Neutral monism cannot do that or it  
violates the very definition of neutrality.


Something exists, right? Monism just takes it as being neutral on the
body or mind side.

That should not prevent the ontology from being clear and intelligible.




Can you not see this? Additionally, you have yet to show exactly how  
interactions between minds are defined.


I have yet to prove the existence of a particle.

The result i

Re: Intelligence and consciousness

2012-02-09 Thread Craig Weinberg
On Feb 9, 1:26 pm, John Clark  wrote:
> On Tue, Feb 7, 2012 at 5:18 PM, Craig Weinberg wrote:
>
> >> How in hell would putting a computer in the position of the man prove
> >> anything??
>
> > >Because that is the position a computer is in when it runs a program
> > based on user inputs and outputs. The man in the room is a CPU.
>
> A CPU without a memory is not a brain or a computer, it's just a CPU. The
> man must be an infinitesimally small part of the entire Chinese room; think
> about it, the man doesn't know a word of Chinese and yet when he is in that
> room it can input questions in Chinese and output intelligent answers to
> them also in Chinese, so the non-man parts of that room must be very
> remarkable and unlike any room you or I have ever been in.

The rule book is the memory. A computer works the way the Chinese Room
illustrates - the dumb CPU retrieves recorded instructions from a fixed
range of procedures. The contents of memory are dumb too - as dumb as
player piano rolls. The two together only seem intelligent to the
Chinese speakers outside the door, because the speakers are intelligent
and can project their own understanding onto the contents of what the
computer mindlessly produces.
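That picture - a dumb lookup step driven by dumb stored contents - amounts to something like the following sketch (in Python; the two-entry rule book is invented, and the point is only the shape of the loop, not its content):

    # The rule book: input squiggles -> output squiggles. The "CPU"
    # understands neither column; it only matches shapes.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "今天天气怎么样？": "今天天气不错。",
    }

    def the_man(squiggles):
        # A CPU-like step: find the symbols, copy out the stored answer.
        # No state, no learning, no grasp of what either string means.
        return RULE_BOOK.get(squiggles, "请再说一遍。")  # "please say it again"

    # To the Chinese speaker outside the door this reads as a reply; the
    # understanding, if any, belongs to whoever wrote the rule book.
    print(the_man("你好吗？"))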

>
> > He and the rule book are the only parts that are relevant to strong AI.
>
> A rule book larger than the observable universe

Where are you getting that? I have already addressed that the book
only needs to be as large as the level of sensitivity of the Chinese
speakers demands. A conversation that lasts a few hours could probably
be generated from a standard Chinese phrase book, especially if
equipped with some useful evasive answers (a la ELIZA). The size isn't
the point though. Make it a 5000 Tb database instead. What would be
the difference? A book is only there as a device to ground the
then-unfamiliar mechanics of data processing in familiar terms.

> and a room that thinks a
> billion trillion times slower than you or I think.

Speed is a red herring too.

> Searle expects us to
> throw logic to the wind and to think that even if the consciousness is
> slowed down by that much the room in general can't be conscious because...
> because just because.

Because if it makes sense for a room to be conscious, then it makes
sense that anything and everything can be conscious, which doesn't
make much more sense than anything.

> However Searle does not expect us to think it odd
> that 3 pounds of grey goo in a bone vat can be conscious,

Because unlike you, he is not presuming the neuron doctrine. I think
his position is that consciousness cannot be solely due to the
material functioning of the brain and must be something else. We
know the brain relates directly to consciousness, but we don't know
for sure how. What Searle is doing is ruling out the possibility that
computation alone is responsible for consciousness. I agree with this,
but go further to suggest that physics has a mechanistic side
expressed as matter across space/topology as well as a non-mechanistic
side expressed as sense experience through time/sequence.

> that's different
> because... because just because. Because of this I expect that Searle
> is an idiot.

He may be an idiot, I don't know, but I think in this case his
experiment is valid, even if a bit ungainly.

>
> > >> The organization of my kitchen sink does not change with the
> >>> temperature of the water coming out of the faucet.
>
> >> Glad to hear it, now I know who to ask when I need plumbing advice.
>
> > Couldn't think of a legitimate counterpoint?
>
> I didn't even try because I didn't want to inflict needless wear and tear
> on my brain. The problem is I don't give a damn if the organization of my
> kitchen sink changes with the temperature of the water coming out of the
> faucet or not. I'm more interested in how the brain works than kitchen
> sinks.

The metaphor works though. We can make a distinction between the
temporary disposition of the brain and its more permanent structure
or organization.

Craig




Re: Intelligence and consciousness

2012-02-09 Thread John Clark
On Tue, Feb 7, 2012 at 5:18 PM, Craig Weinberg wrote:

>> How in hell would putting a computer in the position of the man prove
>> anything??
>>
>
> >Because that is the position a computer is in when it runs a program
> based on user inputs and outputs. The man in the room is a CPU.
>

A CPU without a memory is not a brain or a computer, it's just a CPU. The
man must be an infinitesimally small part of the entire Chinese room; think
about it, the man doesn't know a word of Chinese and yet when he is in that
room it can take in questions in Chinese and output intelligent answers to
them, also in Chinese, so the non-man parts of that room must be very
remarkable and unlike any room you or I have ever been in.

> He and the rule book are the only parts that are relevant to strong AI.
>

A rule book larger than the observable universe and a room that thinks a
billion trillion times slower than you or I think. Searle expects us to
throw logic to the wind and to think that even if the consciousness is
slowed down by that much the room in general can't be conscious because...
because just because. However Searle does not expect us to think it odd
that 3 pounds of grey goo in a bone vat can be conscious, that's different
because... because just because. Because of this I expect that Searle
is an idiot.

> >> The organization of my kitchen sink does not change with the
>>> temperature of the water coming out of the faucet.
>>>
>>
>> Glad to hear it, now I know who to ask when I need plumbing advice.
>>
>
> Couldn't think of a legitimate counterpoint?
>

I didn't even try because I didn't want to inflict needless wear and tear
on my brain. The problem is I don't give a damn if the organization of my
kitchen sink changes with the temperature of the water coming out of the
faucet or not. I'm more interested in how the brain works than kitchen
sinks.

 John K Clark




Re: Intelligence and consciousness

2012-02-09 Thread Stephen P. King

On 2/9/2012 9:14 AM, Bruno Marchal wrote:


On 09 Feb 2012, at 13:20, Stephen P. King wrote:


Dear Bruno,

   My best expression of my "theory", although it does not quite rise 
to that level, is in my last response to ACW under the subject line 
"Ontological problems of COMP". My claim is that your argument is 
self-refuting as it claims to prohibit the very means to communicate 
it. I point out that this problem can easily be resolved by putting 
the abstract aspect of COMP at the same ontological level as its 
interpersonal expressions, but this implies dualism which you resist.


You keep telling me that you defend neutral monism, and now you
pretend that I am wrong because I resist dualism?
I have explained that comp, and thus arithmetic alone, explains many
forms of dualism, all embedded in a precise Plotinian-like 'octalism'.



[SPK]

Hi Bruno,

I don't see that you distinguish between the ontological nature of
the representation of a theory in terms of mathematics and the
mathematics itself. You seem to identify the representation with the
object, but never explicitly. I am simply asking you why not. Could you
elaborate on this "octalism" and how it relates to neutral monism?
How is its neutrality defined?





That is your choice, but you need to understand the consequences of 
Ideal monism. It has no explanation for interactions between minds.


It is not a matter of choice, but of proof within a theoretical framework.

[SPK]

But I could, if I had the skill with words, construct a theory of 
pink unicorns that would have the same degree of structure and make the 
same claims. How would I test it against yours? That is the problem. If 
a theory claims that the physical world does not exist then it throws 
away the very means to test it. It becomes by definition unfalsifiable.





Then the neutral arithmetical monism explains very well the 
interactions between minds a priori. 

[SPK]

How is it neutral when it takes a certain set of properties as
ontologically primitive? Neutral monism cannot do that, or it violates
the very definition of neutrality. Can you not see this? Additionally,
you have yet to show exactly how interactions between minds are defined.
All I have seen is discussion of a plurality of minds, but nowhere is
there anything like a detailed example of the interaction of one mind
with another. I know that minds do interact; my proof is the fact that
this email discussion is occurring, and that is evidence enough. But how
does your result explain it?



Only UDA shows that we have to explain matter entirely through dream 
interferences (say). That is a success, because it explains 
conceptually the origin of the physical laws, and the explanation is 
constructive, once we agree on the classical axioms  for knowledge, 
making comp testable.


But that is a problem, because we have to choose a set of axioms to
agree upon and there is potentially an infinite number of axioms. I am
reminded of the full extent of Pascal's /Gambit/. There is no a priori
way of knowing which definition of god is the correct one. Pascal's
situation and the situation with Bp&p make truth a mere accident. Maybe
this is OK for you, but not for me. Maybe I demand too much from
explanations of our world, but I ask that they at least explain the
necessity of the appearances without asking me to believe in the
explanation by blind faith.


Onward!

Stephen




Re: Intelligence and consciousness

2012-02-09 Thread Bruno Marchal


On 09 Feb 2012, at 13:20, Stephen P. King wrote:


On 2/9/2012 5:19 AM, Bruno Marchal wrote:


On 08 Feb 2012, at 18:47, Stephen P. King wrote:


On 2/8/2012 11:46 AM, Bruno Marchal wrote:


On 07 Feb 2012, at 18:52, Craig Weinberg wrote:


On Feb 6, 11:30 am, Bruno Marchal  wrote:

I think Quentin has a theory here, that you might be stupid.
Joseph Knight has another theory, which is that you are a troll.


Umm, could one's theory of another be such that it is a faithful  
subimage of the theory maker?


Maybe I have a theory that Bruno is a Tyrant and Craig is a  
Jester. ;-)




You do seem to avoid reasoning, and to reassert in many ways a
conviction that you have.
You seem to want to change the rules of the game, where,
personally, I want them to be applied in any field, notably in
theology, defined as the notion of truth about entities.
Basically Plato's definition of Theology. Truth. The truth we
search, not the one we might find.




Could you imagine that your representation is not singular? There  
is more than one way of thinking of the idea that you are  
considering.


How? Either your consciousness changes in the Turing emulation at
some level, or it does not (comp). The rest is logic, and can be
explained in arithmetic, which can be formalized in contexts which
eliminate the 'metaphysical baggage'.
In theoretical science we can always be clear enough that
colleagues, or nature, can find a mistake, so that we can progress.
In (continental-like) philosophy, that's different, but that is the
reason why I avoid that type of philosophy at the start.













but the trick is that
I emulate Einstein himself, and I provide the answers that Einstein
gives me (and I guess I will have to do some work to understand
them, or not).


It still doesn't make you Einstein, which is Searle's point.


And of course I am not Einstein, in that display, but Searle is
the one who makes the confusion. Einstein is the relatively
concrete immaterial person which has been temporarily able to
manifest itself through the easy but tedious task of emulating his
brain.
Searle confused an "easy" low level of simulation (neurons, say)
with the emulated person, who, if you deny the consciousness,
is an actual zombie (corroborating Stathis' early debunking of
your argument).


There is no problem with having convictions, Craig, but you have
to keep them personal, and this holds for reasoning about comp or
non-comp, or whatever. It is the very idea of *reasoning* (always
from public assumptions).


If not, I am afraid you are just not playing the game most
participants want to play on the list.


Both in "science" and in "philosophy" there are scientists and  
philosophers. Scientists are those who can recognize they might  
be wrong, or that they are wrong. You seem to be unable to  
conceive that comp *might* be true, (in the weak sense of the  
existence of  *some* level of substitution), and you seem be  
unable to put down your assumption and a reasoning which leads to  
your conviction.
Worst, you seem gifted in rhetorical tricks to avoid error  
recognition (abunding in Knight's idea that you might be a troll,  
which I am not yet sure).


  But you cannot be wrong, Bruno, right? LOL


Of course I can be wrong. But you have to show the error if you  
think so. I worked hard to make the argument modularized in many  
"simple" steps to help you in that very task.


And of course comp can be wrong too, but if my argument is correct,
the only way to know that is to find a physical fact contradicting
the comp physical prediction. That should not be too difficult
given that comp gives the whole of physics. In 1991, after the
discovery of p->BDp (the arithmetical quantization), I
predicted that comp+Theaetetus would be refuted before 2000.
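
(For readers unused to the notation, a plausible reading on my part,
using the modal conventions Bruno uses elsewhere on the list: B is the
Gödel provability box and D = ~B~ its dual consistency operator, so in
LaTeX the formula is

  p \rightarrow \Box \Diamond p, \qquad \Diamond q \equiv \neg \Box \neg q

i.e. "if p is true, then it is provable that p is consistent". The
pattern p -> []<>p is the characteristic axiom of the modal system B
(Brouwer), and Goldblatt showed that minimal quantum logic (orthologic)
translates into that system, which is presumably why its arithmetical
appearance is called a "quantization".)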


Bruno

http://iridia.ulb.ac.be/~marchal/




Dear Bruno,

   My best expression of my "theory", although it does not quite  
rise to that level, is in my last response to ACW under the subject  
line "Ontological problems of COMP". My claim is that your argument  
is self-refuting as it claims to prohibit the very means to  
communicate it. I point out that this problem can easily be resolved  
by putting the abstract aspect of COMP at the same ontological level  
as its interpersonal expressions, but this implies dualism which you  
resist.


You keep telling me that you defend neutral monism, and now you
pretend that I am wrong because I resist dualism?
I have explained that comp, and thus arithmetic alone, explains many
forms of dualism, all embedded in a precise Plotinian-like 'octalism'.




That is your choice, but you need to understand the consequences of  
Ideal monism. It has no explanation for interactions between minds.


It is not a matter of choice, but of proof within a theoretical framework.

Then the neutral arithmetical monism explains very well the
interactions between minds a priori. Only UDA shows that we have to
explain matter entirely through dream interferences (say).
Re: Intelligence and consciousness

2012-02-09 Thread Stephen P. King

On 2/9/2012 5:19 AM, Bruno Marchal wrote:


On 08 Feb 2012, at 18:47, Stephen P. King wrote:


On 2/8/2012 11:46 AM, Bruno Marchal wrote:


On 07 Feb 2012, at 18:52, Craig Weinberg wrote:


On Feb 6, 11:30 am, Bruno Marchal  wrote:

I think Quentin has a theory here, that you might be stupid.
Joseph Knight has another theory, which is that you are a troll.


Umm, could one's theory of another be such that it is a faithful 
subimage of the theory maker?


Maybe I have a theory that Bruno is a Tyrant and Craig is a Jester. ;-)



You do seem to avoid reasoning, and to reassert in many ways a
conviction that you have.
You seem to want to change the rules of the game, where, personally,
I want them to be applied in any field, notably in theology, defined
as the notion of truth about entities. Basically Plato's definition
of Theology. Truth. The truth we search, not the one we might find.




Could you imagine that your representation is not singular? There is 
more than one way of thinking of the idea that you are considering.


How? Either your consciousness changes in the Turing emulation at some
level, or it does not (comp). The rest is logic, and can be explained
in arithmetic, which can be formalized in contexts which eliminate the
'metaphysical baggage'.
In theoretical science we can always be clear enough that
colleagues, or nature, can find a mistake, so that we can progress.
In (continental-like) philosophy, that's different, but that is the
reason why I avoid that type of philosophy at the start.













but the trick is that
I emulate Einstein himself, and I provide the answers that Einstein
gives me (and I guess I will have to do some work to understand
them, or not).


It still doesn't make you Einstein, which is Searle's point.


And of course I am not Einstein, in that display, but Searle is the
one who makes the confusion. Einstein is the relatively concrete
immaterial person which has been temporarily able to manifest itself
through the easy but tedious task of emulating his brain.
Searle confused an "easy" low level of simulation (neurons, say)
with the emulated person, who, if you deny the consciousness, is
an actual zombie (corroborating Stathis' early debunking of your
argument).


There is no problem with having convictions, Craig, but you have to
keep them personal, and this holds for reasoning about comp or non-comp,
or whatever. It is the very idea of *reasoning* (always from
public assumptions).


If not, I am afraid you are just not playing the game most
participants want to play on the list.


Both in "science" and in "philosophy" there are scientists and 
philosophers. Scientists are those who can recognize they might be 
wrong, or that they are wrong. You seem to be unable to conceive 
that comp *might* be true, (in the weak sense of the existence of  
*some* level of substitution), and you seem be unable to put down 
your assumption and a reasoning which leads to your conviction.
Worst, you seem gifted in rhetorical tricks to avoid error 
recognition (abunding in Knight's idea that you might be a troll, 
which I am not yet sure).


   But you cannot be wrong, Bruno, right? LOL


Of course I can be wrong. But you have to show the error if you think 
so. I worked hard to make the argument modularized in many "simple" 
steps to help you in that very task.


And of course comp can be wrong too, but if my argument is correct,
the only way to know that is to find a physical fact contradicting
the comp physical prediction. That should not be too difficult given
that comp gives the whole of physics. In 1991, after the discovery of
p->BDp (the arithmetical quantization), I predicted that
comp+Theaetetus would be refuted before 2000.


Bruno

http://iridia.ulb.ac.be/~marchal/




Dear Bruno,

My best expression of my "theory", although it does not quite rise 
to that level, is in my last response to ACW under the subject line 
"Ontological problems of COMP". My claim is that your argument is 
self-refuting as it claims to prohibit the very means to communicate it. 
I point out that this problem can easily be resolved by putting the 
abstract aspect of COMP at the same ontological level as its 
interpersonal expressions, but this implies dualism which you resist. 
That is your choice, but you need to understand the consequences of 
Ideal monism. It has no explanation for interactions between minds.


Onward!

Stephen




Re: Intelligence and consciousness

2012-02-09 Thread Bruno Marchal


On 08 Feb 2012, at 18:47, Stephen P. King wrote:


On 2/8/2012 11:46 AM, Bruno Marchal wrote:


On 07 Feb 2012, at 18:52, Craig Weinberg wrote:


On Feb 6, 11:30 am, Bruno Marchal  wrote:

I think Quentin has a theory here, that you might be stupid.
Joseph Knight has another theory, which is that you are a troll.


Umm, could one's theory of another be such that it is a faithful  
subimage of the theory maker?


Maybe I have a theory that Bruno is a Tyrant and Craig is a  
Jester. ;-)




You do seem to avoid reasoning, and to reassert in many ways a
conviction that you have.
You seem to want to change the rules of the game, where, personally,
I want them to be applied in any field, notably in theology,
defined as the notion of truth about entities. Basically Plato's
definition of Theology. Truth. The truth we search, not the one we
might find.




Could you imagine that your representation is not singular? There is  
more than one way of thinking of the idea that you are considering.


How? Either your consciousness changes in the Turing emulation at some
level, or it does not (comp). The rest is logic, and can be explained
in arithmetic, which can be formalized in contexts which eliminate the
'metaphysical baggage'.
In theoretical science we can always be clear enough that
colleagues, or nature, can find a mistake, so that we can progress.
In (continental-like) philosophy, that's different, but that is the
reason why I avoid that type of philosophy at the start.













but the trick is that
I emulate Einstein himself, and I provide the answers that Einstein
gives me (and I guess I will have to do some work to understand
them, or not).


It still doesn't make you Einstein, which is Searle's point.


And of course I am not Einstein, in that display, but Searle is the
one who makes the confusion. Einstein is the relatively concrete
immaterial person which has been temporarily able to manifest itself
through the easy but tedious task of emulating his brain.
Searle confused an "easy" low level of simulation (neurons, say)
with the emulated person, who, if you deny the consciousness, is
an actual zombie (corroborating Stathis' early debunking of your
argument).


There is no problem with having convictions, Craig, but you have to
keep them personal, and this holds for reasoning about comp or
non-comp, or whatever. It is the very idea of *reasoning* (always
from public assumptions).


If not, I am afraid you are just not playing the game most
participants want to play on the list.


Both in "science" and in "philosophy" there are scientists and  
philosophers. Scientists are those who can recognize they might be  
wrong, or that they are wrong. You seem to be unable to conceive  
that comp *might* be true, (in the weak sense of the existence of   
*some* level of substitution), and you seem be unable to put down  
your assumption and a reasoning which leads to your conviction.
Worst, you seem gifted in rhetorical tricks to avoid error  
recognition (abunding in Knight's idea that you might be a troll,  
which I am not yet sure).


   But you cannot be wrong, Bruno, right? LOL


Of course I can be wrong. But you have to show the error if you think  
so. I worked hard to make the argument modularized in many "simple"  
steps to help you in that very task.


And of course comp can be wrong too, but if my argument is correct,
the only way to know that is to find a physical fact contradicting
the comp physical prediction. That should not be too difficult given
that comp gives the whole of physics. In 1991, after the discovery of
p->BDp (the arithmetical quantization), I predicted that
comp+Theaetetus would be refuted before 2000.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Intelligence and consciousness

2012-02-08 Thread Stephen P. King

On 2/8/2012 11:46 AM, Bruno Marchal wrote:


On 07 Feb 2012, at 18:52, Craig Weinberg wrote:


On Feb 6, 11:30 am, Bruno Marchal  wrote:

I think Quentin has a theory here, that you might be stupid.
Joseph Knight has another theory, which is that you are a troll.


Umm, could one's theory of another be such that it is a faithful 
subimage of the theory maker?


Maybe I have a theory that Bruno is a Tyrant and Craig is a Jester. ;-)



You do seem to avoid reasoning, and to reassert in many ways a conviction
that you have.
You seem to want to change the rules of the game, where, personally, I
want them to be applied in any field, notably in theology, defined as
the notion of truth about entities. Basically Plato's definition of
Theology. Truth. The truth we search, not the one we might find.




Could you imagine that your representation is not singular? There is 
more than one way of thinking of the idea that you are considering.









but the trick is that
I emulate Einstein himself, and I provide the answers that Einstein
gives me (and I guess I will have to do some work to understand
them, or not).


It still doesn't make you Einstein, which is Searle's point.


And of course I am not Einstein, in that display, but Searle is the
one who makes the confusion. Einstein is the relatively concrete
immaterial person which has been temporarily able to manifest itself
through the easy but tedious task of emulating his brain.
Searle confused an "easy" low level of simulation (neurons, say) with
the emulated person, who, if you deny the consciousness, is an
actual zombie (corroborating Stathis' early debunking of your argument).


There is no problem with having convictions, Craig, but you have to
keep them personal, and this holds for reasoning about comp or non-comp,
or whatever. It is the very idea of *reasoning* (always from public
assumptions).


If not, I am afraid you are just not playing the game most
participants want to play on the list.


Both in "science" and in "philosophy" there are scientists and 
philosophers. Scientists are those who can recognize they might be 
wrong, or that they are wrong. You seem to be unable to conceive that 
comp *might* be true, (in the weak sense of the existence of  *some* 
level of substitution), and you seem be unable to put down your 
assumption and a reasoning which leads to your conviction.
Worst, you seem gifted in rhetorical tricks to avoid error recognition 
(abunding in Knight's idea that you might be a troll, which I am not 
yet sure).


But you cannot be wrong, Bruno, right? LOL

Onward!

Trolling Stephen




Re: Intelligence and consciousness

2012-02-08 Thread Bruno Marchal


On 07 Feb 2012, at 18:52, Craig Weinberg wrote:


On Feb 6, 11:30 am, Bruno Marchal  wrote:


More seriously, in the Chinese room experiment, Searle's error can be
seen also as a confusion of levels. If I can emulate Einstein's brain,
"I" can answer all the questions you ask Einstein,


You're assuming that a brain can be emulated in the first place.


Together with Searle, for the purpose of following his thought
experiment, and of showing where it is invalid.
You might study the detailed answer to Searle, by Dennett and
Hofstadter, in the book The Mind's I.
I appreciate Searle, Lucas and Penrose for presenting real arguments
which can be shown precisely wrong.
I am not sure Searle understood his error, or recognized it. He might
belong to the philosophers who start from a personal conviction (which
is a symptom of not wanting to play the *science* game).
Penrose did recognize his error, but hid it and seems unaware of the
gigantic impact of precisely that error. Gödel's theorem does not
show that "we" are not machines, but it shows that we cannot
consistently know which machine we are, and that's the start of the
meta-formal explanation of the appearance of the subjective and
objective indeterminacies, for any Löbian machine looking inside.
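
(A compressed rendering of that Gödelian point, in standard notation;
this gloss is mine, not Bruno's own formalism. For any consistent
machine/theory M extending, say, Peano Arithmetic, Gödel's second
incompleteness theorem gives

  M \nvdash \mathrm{Con}(M)

So if M could prove "my code is e" together with "machine e is
consistent", it would thereby prove Con(M), contradicting the theorem.
A consistent machine can correctly guess, or bet on, which machine it
is, but it cannot consistently prove it along with its own
consistency.)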





If
that were true, there is no need to have the thought experiment.


I think Quentin has a theory here, that you might be stupid.
Joseph Knight has another theory, which is that you are a troll.

You do seem to avoid reasoning, and to reassert in many ways a conviction
that you have.
You seem to want to change the rules of the game, where, personally, I
want them to be applied in any field, notably in theology, defined as
the notion of truth about entities. Basically Plato's definition of
Theology. Truth. The truth we search, not the one we might find.








but the trick is that
I emulate Einstein himself, and I provide the answers that Einstein
gives me (and I guess I will have to do some work to understand
them, or not).


It still doesn't make you Einstein, which is Searle's point.


And of course I am not Einstein, in that display, but Searle is the
one who makes the confusion. Einstein is the relatively concrete
immaterial person which has been temporarily able to manifest itself
through the easy but tedious task of emulating his brain.
Searle confused an "easy" low level of simulation (neurons, say) with
the emulated person, who, if you deny the consciousness, is an
actual zombie (corroborating Stathis' early debunking of your argument).


There is no problem with having convictions, Craig, but you have to
keep them personal, and this holds for reasoning about comp or non-comp,
or whatever. It is the very idea of *reasoning* (always from public
assumptions).


If not, I am afraid you are just not playing the game most
participants want to play on the list.


Both in "science" and in "philosophy" there are scientists and  
philosophers. Scientists are those who can recognize they might be  
wrong, or that they are wrong. You seem to be unable to conceive that  
comp *might* be true, (in the weak sense of the existence of  *some*  
level of substitution), and you seem be unable to put down your  
assumption and a reasoning which leads to your conviction.
Worst, you seem gifted in rhetorical tricks to avoid error recognition  
(abunding in Knight's idea that you might be a troll, which I am not  
yet sure).


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Intelligence and consciousness

2012-02-08 Thread 1Z


On Feb 7, 5:52 pm, Craig Weinberg  wrote:
> On Feb 6, 11:30 am, Bruno Marchal  wrote:
>
> > More seriously, in the Chinese room experiment, Searle's error can be
> > seen also as a confusion of levels. If I can emulate Einstein's brain,
> > "I" can answer all the questions you ask Einstein,
>
> You're assuming that a brain can be emulated in the first place. If
> that were true, there is no need to have the thought experiment.

You seem to be confusing the theoretical can and the practical can.




Re: Intelligence and consciousness

2012-02-07 Thread Quentin Anciaux
2012/2/7 Craig Weinberg 

> On Feb 7, 3:08 pm, Quentin Anciaux  wrote:
> > 2012/2/7 Craig Weinberg 
> >
> > > On Feb 6, 11:30 am, Bruno Marchal  wrote:
> >
> > > > More seriously, in the Chinese room experiment, Searle's error can be
> > > > seen also as a confusion of levels. If I can emulate Einstein's brain,
> > > > "I" can answer all the questions you ask Einstein,
> >
> > > You're assuming that a brain can be emulated in the first place. If
> > > that were true, there is no need to have the thought experiment.
> >
> > You're assuming that a brain can't be emulated in the first place. If
> that
> > were true, there is no need to have the thought experiment.
> >
> > Stupid thought, stupid conclusion Craig Weinberg as usual.
>
> I'm not assuming that it can't be emulated, I am only assuming an
> appropriately skeptical stance - especially since brain emulation has
> never occurred in reality. I say that it is not proven that brains can
> be emulated, that's all. If the refutation of the Chinese Room is
> contingent upon the assumption that brains can be emulated than it's a
> religious faith.
>
> My point is that the Chinese Room doesn't require a belief or
> disbelief in brain emulation, it only demonstrates the difference
> between trivial computation and personal understanding...something
> which comp is in pathological denial of.
>
No, the Chinese room tries to refute the room's consciousness by arguing
that the only conscious thing is the human in the room, and obviously he
does not understand Chinese. In your answer to John you replaced the human
with the CPU, and that's correct... and no one has ever argued that the CPU
is conscious... It's the execution of the program which is conscious, i.e.
the man only does the executing; he is a part of the system, not the whole
system. What is conscious is the program executed by the man following
the book's instructions. I see no problem with having unconscious parts of
a conscious thing; in fact that's what happens in our brain: only as a
whole (us) can we talk about it being conscious.
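
To make that CPU/program split concrete, here is a minimal sketch
(Python; the 'book' entries are invented placeholders, not anything
from Searle) in which the 'man' is a perfectly generic interpreter,
with every Chinese-specific regularity living in the rule table he
executes and none in him:

# The "man": a generic interpreter. He looks a symbol string up and
# copies the answer out. Nothing about Chinese is encoded here.
def man(book, message):
    return book.get(message, book["*fallback*"])

# The "book": all the language-specific structure lives here.
book = {
    "你好": "你好！",
    "你会说中文吗？": "会一点。",
    "*fallback*": "请再说一遍？",
}

print(man(book, "你好"))          # the man emits Chinese...
print(man(book, "天气怎么样？"))   # ...while containing none of it

Swap in a French book and the same man() function "speaks" French; on
this view, whatever understanding there is belongs to the executed
book-plus-execution, not to the interpreter.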

Anyway, a thought experiment per se can't determine whether AI is possible
or not... so it's not correct to say "you're assuming that a brain can be
emulated" or the opposite; it's plain wrong.

Quentin


> Craig
>


-- 
All those moments will be lost in time, like tears in rain.




Re: Intelligence and consciousness

2012-02-07 Thread Craig Weinberg
On Feb 7, 3:08 pm, Quentin Anciaux  wrote:
> 2012/2/7 Craig Weinberg 
>
> > On Feb 6, 11:30 am, Bruno Marchal  wrote:
>
> > > More seriously, in the Chinese room experiment, Searle's error can be
> > > seen also as a confusion of levels. If I can emulate Einstein's brain,
> > > "I" can answer all the questions you ask Einstein,
>
> > You're assuming that a brain can be emulated in the first place. If
> > that were true, there is no need to have the thought experiment.
>
> You're assuming that a brain can't be emulated in the first place. If that
> were true, there is no need to have the thought experiment.
>
> Stupid thought, stupid conclusion Craig Weinberg as usual.

I'm not assuming that it can't be emulated, I am only assuming an
appropriately skeptical stance - especially since brain emulation has
never occurred in reality. I say that it is not proven that brains can
be emulated, that's all. If the refutation of the Chinese Room is
contingent upon the assumption that brains can be emulated then it's a
religious faith.

My point is that the Chinese Room doesn't require a belief or
disbelief in brain emulation, it only demonstrates the difference
between trivial computation and personal understanding...something
which comp is in pathological denial of.

Craig




Re: Intelligence and consciousness

2012-02-07 Thread Craig Weinberg
On Feb 7, 1:41 pm, John Clark  wrote:
> On Tue, Feb 7, 2012  Craig Weinberg  wrote:
>
> > If you are proving that a computer in the position of the man has no
> > understanding then this thought experiment proves it.
>
> How in hell would putting a computer in the position of the man prove
> anything??

Because that is the position a computer is in when it runs a program
based on user inputs and outputs. The man in the room is a CPU.

> The man is just a very very very small part of the Chinese Room,

He and the rule book are the only parts that are relevant to strong
AI.

> all Searle proved is that a tiny part of a system does not have all the
> properties the entire system has. Well duh, the neurotransmitter
> acetylcholine is part of the human brain and would not work without it, but
> acetylcholine does not have all the properties that the entire human brain
> has either.

You are assuming a part/whole relationship rather than a form/content
relationship.

>
> > It takes consciousness for granted, like some free floating glow.
>
> Oh now I see the light, literally, consciousness is like some free floating
> glow! Now I understand everything!

Yes, that would solve your problem completely. The rule book and the
man glow a little, and together they make the whole room glow much
more.

>
> > If I understand how to cook and then I walk into a building, does the
> > building, now that it includes me, now know how to cook?
>
> If you didn't know how to cook, if you didn't even know how to boil water
> but the building was now employed at 4 star restaurants preparing delicious
> meals then certainly the building knows how to cook, and you must be a very
> small cog in that operation.

I guess you are talking about a universe where buildings work at
restaurants or something.

>
> > Searle is assuming the common sense of the audience to show them that
> > having a conversation in a language you don't understand cannot constitute
> > understanding
>
> I am having a conversation right now and acetylcholine is in your brain but
> acetylcholine does not understand English, so I am having a conversation in
> English with somebody who does not understand English. Foolish reasoning is
> it not.

Again, you assume a part/whole relation rather than a form/content
relation. A story is not made of pages in a book, it is an experience
which is made possible through the understanding of the symbolic
content of the pages. Anyone can copy the words from one book to
another, or give instructions of which sections of what book to excise
and reproduce, but it doesn't make them a storyteller.

>
> > The organization of my kitchen sink does not change with the temperature
> > of the water coming out of the faucet.
>
> Glad to hear it, now I know who to ask when I need plumbing advice.

Couldn't think of a legitimate counterpoint?

Craig




Re: Intelligence and consciousness

2012-02-07 Thread Quentin Anciaux
2012/2/7 Craig Weinberg 

> On Feb 6, 11:30 am, Bruno Marchal  wrote:
>
> > More seriously, in the Chinese room experiment, Searle's error can be
> > seen also as a confusion of levels. If I can emulate Einstein's brain,
> > "I" can answer all the questions you ask Einstein,
>
> You're assuming that a brain can be emulated in the first place. If
> that were true, there is no need to have the thought experiment.
>

You're assuming that a brain can't be emulated in the first place. If that
were true, there is no need to have the thought experiment.

Stupid thought, stupid conclusion Craig Weinberg as usual.


>
> > but the trick is that
> > I emulate Einstein himself, and I provide the answers that Einstein
> > gives me (and I guess I will have to do some work to understand
> > them, or not).
>
> It still doesn't make you Einstein, which is Searle's point.
>
> Craig
>


-- 
All those moments will be lost in time, like tears in rain.




Re: Intelligence and consciousness

2012-02-07 Thread John Clark
On Tue, Feb 7, 2012  Craig Weinberg  wrote:

> If you are proving that a computer in the position of the man has no
> understanding then this thought experiment proves it.
>

How in hell would putting a computer in the position of the man prove
anything?? The man is just a very very very small part of the Chinese Room,
all Searle proved is that a tiny part of a system does not have all the
properties the entire system has. Well duh, the neurotransmitter
acetylcholine is part of the human brain, which would not work without it, but
acetylcholine does not have all the properties that the entire human brain
has either.

> It takes consciousness for granted, like some free floating glow.
>

Oh now I see the light, literally, consciousness is like some free floating
glow! Now I understand everything!

> If I understand how to cook and then I walk into a building, does the
> building, now that it includes me, now know how to cook?
>

If you didn't know how to cook, if you didn't even know how to boil water
but the building was now employed at 4 star restaurants preparing delicious
meals then certainly the building knows how to cook, and you must be a very
small cog in that operation.

> Searle is assuming the common sense of the audience to show them that
> having a conversation in a language you don't understand cannot constitute
> understanding
>

I am having a conversation right now and acetylcholine is in your brain but
acetylcholine does not understand English, so I am having a conversation in
English with somebody who does not understand English. Foolish reasoning,
is it not?

> The organization of my kitchen sink does not change with the temperature
> of the water coming out of the faucet.
>

Glad to hear it, now I know who to ask when I need plumbing advice.

 John K Clark




Re: Intelligence and consciousness

2012-02-07 Thread Craig Weinberg
On Feb 6, 11:30 am, Bruno Marchal  wrote:

> More seriously, in the Chinese room experiment, Searle's error can be
> seen also as a confusion of levels. If I can emulate Einstein's brain,
> "I" can answer all the questions you ask Einstein,

You're assuming that a brain can be emulated in the first place. If
that were true, there is no need to have the thought experiment.

> but the trick is that
> I emulate Einstein himself, and I provide the answers that Einstein
> gives me (and I guess I will have to do some work to understand
> them, or not).

It still doesn't make you Einstein, which is Searle's point.

Craig




Re: Intelligence and consciousness

2012-02-07 Thread Craig Weinberg
On Feb 6, 10:54 am, John Clark  wrote:
> On Sun, Feb 5, 2012  Craig Weinberg  wrote:
>
> > > The only understanding of Chinese going on is by those Chinese speakers
> > outside the room who are carrying on a one-sided conversation with a rule
> > book.
>
> So you say, but Searle says his idiotic thought experiment has PROVEN it;
> and yet one key step in the "proof" is "if there is understanding it can
> only be in the little man but the little man does not understand so there
> is no understanding involved".

If you are proving that a computer in the position of the man has no
understanding then this thought experiment proves it. If you are
trying to prove that there is no understanding in the universe then
the thought experiment does not prove that. The whole idea of there
being 'understanding involved' is a non-sequitur. It takes
consciousness for granted, like some free floating glow. If I
understand how to cook and then I walk into a building, does the
building, now that it includes me, now know how to cook?

> But if you start the thought experiment with
> that as one of the axioms then what the hell is the point of the thought
> experiment in the first place, how can you claim to have proven what you
> just assumed?  I stand by my remarks that Clark's Chinese Room, described
> previously, has just as much profundity (or lack thereof) as Searle's
> Chinese Room.
>
> > > >  OK fine, the man does not understand Chinese, so what? How does that
> >> prove that understanding was not involved in the room/outside-people
> >> conversation?
>
> > > Because there is nobody on the inside end of the conversation.
>
> So what?

So nothing is understanding you on the other end. It's not a live
performance, it's a canned recording.

> The point of the thought experiment was to determine if
> understanding was involved at the room end,

Huh? The point of the thought experiment was to show that AI doesn't
necessarily understand the data it is processing, which it does show. I
like my truck carrying a piano over a bumpy road better, but it still
reveals the important point that accounting is not understanding.

> not how many people were inside
> the room, you can write and scream that there was no understanding from now
> to the end of time but you have not proven it, and neither has Searle.

And you can do the same denying it. Searle is assuming the common
sense of the audience to show them that having a conversation in a
language you don't understand cannot constitute understanding, but he
underestimates the power of comp to obscure common sense.

> It's
> not uncommon for a mathematical "proof" to contain a hidden assumption of
> the very thing you're trying to prove, but usually this error is subtle and
> takes some close analysis and digging to find the mistake, but in the case
> of the Chinese Room the blunder is as obvious as a angry elephant in your
> living room and that is why I have no hesitation in saying that John Searle
> is a moron.

I don't think he's a moron, but he may not understand that comp
already denies any distinction between trivial or prosthetic
intelligence and subjective understanding, so it doesn't help to make
examples which highlight that distinction.

>
> > I suspect the use of the man in the room is a device to force people to
> > identify personally with (what would normally be) the computer.
>
> Yes that's exactly what he's doing, and that's what makes Searle a con
> artist, he's like a stage magician who waves his right hand around and
> makes you look at it so you don't notice what his left hand is doing,

No, I think it's an honest device to help people get around their
prejudices. If someone claims that a program is no different than a
person, this is a way that we can imagine what it is actually like to
do what a program does. The result, is that rather than being forced
to accept that yes, AI must be sentient, we see clearly, that no, AI
is appears to be an automatic and unconscious mechanism.

> and
> the thing that makes him an idiot is that he believes his own bullshit. It's
> as if I forced you to identify with the neurotransmitter acetylcholine and
> then asked you to derive grand conclusions from the fact that acetylcholine
> doesn't understand much.

If the claim of strong AI was that it functioned exactly like
acetylcholine, then what's wrong with that?

>
> > yes I only have first hand knowledge of consciousness. Because the nature
> > of sense is to fill the
> > gaps, connect the dots, solve the puzzle, etc, we are able to generalize
> > figuratively. We are not limited to solipsism
>
> By "fill in the gaps" you mean we accept certain rules of thumb and axioms
> of existence to be true even though we can not prove them,

No. It means we make sense of patterns. We figure them out. The rules
and axioms are a posteriori.

> like induction
> and that intelligent behavior implies consciousness.

No, it requires sense: when we look at yellow and blue dots from far
away, we see green.

Re: Intelligence and consciousness

2012-02-06 Thread Bruno Marchal


On 06 Feb 2012, at 16:54, John Clark wrote:


Well it had better be! If the outside world could be anything we  
wanted it to be then our senses would be of no value and Evolution  
would never have had a reason to develop them. In reality if we  
project our wishes on how we interpret the information from our  
senses too much our life expectancy will be very short; I don't like  
that saber toothed tiger over there so I'll think of him as a cute  
little bunny rabbit.




Hmm... If you succeed in thinking of the saber-toothed tiger as a cute
little bunny rabbit, your body will not send the fear chemicals needed
by the tiger to trigger an attack response. The tiger might be very
impressed, and think twice before eating you, even if hungry.


I agree with your reply with respect to Craig's point, though. Logicians
like to joke with not completely relevant counter-examples.


More seriously, in the Chinese room experiment, Searle's error can be
seen also as a confusion of levels. If I can emulate Einstein's brain,
"I" can answer all the questions you ask Einstein, but the trick is that
I emulate Einstein himself, and I provide the answers that Einstein
gives me (and I guess I will have to do some work to understand
them, or not).


That is an interesting error, and I would not judge someone because he
makes an error (although I am not sure Searle recognizes it or
understands it).


The confusion between provability and computability is of that type.
RA (arithmetic without induction) can already simulate PA (arithmetic
with induction), yet, like me simulating Einstein, RA remains unable
to prove many theorems that PA can prove. For RA, proving that PA
proves some proposition P might be much easier than proving P. RA can
easily prove that PA and ZF prove the consistency of RA, but RA can
hardly prove that consistency itself.
RA can emulate PA, ZF, you, and me (in the comp theory). And this
does not mean that RA will ever believe what PA, ZF, you, or I
might assert or believe.
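
(In symbols, the distinction being drawn, in my rendering, with Prov_T
the arithmetized provability predicate of theory T:

  \mathrm{RA} \vdash \mathrm{Prov}_{\mathrm{PA}}(\ulcorner \varphi \urcorner)
  \quad\not\Rightarrow\quad
  \mathrm{RA} \vdash \varphi

For example, RA \vdash Prov_PA(\ulcorner Con(RA) \urcorner), since that
is a true \Sigma_1 sentence and RA is \Sigma_1-complete, while
RA \nvdash Con(RA) by Gödel's second incompleteness theorem. Emulating
a prover is \Sigma_1 bookkeeping; believing its theorems would mean
proving them oneself.)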



Bruno







http://iridia.ulb.ac.be/~marchal/






Re: Intelligence and consciousness

2012-02-06 Thread John Clark
On Sun, Feb 5, 2012  Craig Weinberg  wrote:


> > The only understanding of Chinese going on is by those Chinese speakers
> outside the room who are carrying on a one-sided conversation with a rule
> book.


So you say, but Searle says his idiotic thought experiment has PROVEN it;
and yet one key step in the "proof" is "if there is understanding it can
only be in the little man but the little man does not understand so there
is no understanding involved". But if you start the thought experiment with
that as one of the axioms then what the hell is the point of the thought
experiment in the first place, how can you claim to have proven what you
just assumed?  I stand by my remarks that Clark's Chinese Room, described
previously, has just as much profundity (or lack thereof) as Searle's
Chinese Room.

> > >  OK fine, the man does not understand Chinese, so what? How does that
>> prove that understanding was not involved in the room/outside-people
>> conversation?
>>
>
> > Because there is nobody on the inside end of the conversation.
>

So what? The point of the thought experiment was to determine if
understanding was involved at the room end, not how many people were inside
the room, you can write and scream that there was no understanding from now
to the end of time but you have not proven it, and neither has Searle. It's
not uncommon for a mathematical "proof" to contain a hidden assumption of
the very thing you're trying to prove, but usually this error is subtle and
takes some close analysis and digging to find the mistake, but in the case
of the Chinese Room the blunder is as obvious as an angry elephant in your
living room and that is why I have no hesitation in saying that John Searle
is a moron.

> I suspect the use of the man in the room is a device to force people to
> identify personally with (what would normally be) the computer.


Yes that's exactly what he's doing, and that's what makes Searle a con
artist, he's like a stage magician who waves his right hand around and
makes you look at it so you don't notice what his left hand is doing, and
the thing that makes him an idiot is that he believes his own bullshit. It's
as if I forced you to identify with the neurotransmitter acetylcholine and
then asked you to derive grand conclusions from the fact that acetylcholine
doesn't understand much.

> yes I only have first hand knowledge of consciousness. Because the nature
> of sense is to fill the
> gaps, connect the dots, solve the puzzle, etc, we are able to generalize
> figuratively. We are not limited to solipsism


By "fill in the gaps" you mean we accept certain rules of thumb and axioms
of existence to be true even though we can not prove them, like induction
and that intelligent behavior implies consciousness. That is the only way
to avoid solipsism.


> >>  Take a sleeping pill and your brain organization, its chemistry,
>> changes and your consciousness goes away; take a pep pill and the
>> organization reverses itself and your consciousness comes back.
>>
>

>  > The organization of the brain is still the same in either case.
>

Bullshit.


> > the brain retains the capacity for consciousness the whole time.


As long as the drug is in your brain consciousness is not possible; it is
only when chemical processes break down the drug and its concentration
is reduced (it wears off, in other words) that consciousness returns.

> If the pill killed you, a pep pill would not bring you back.


And if the pill did kill you that would certainly change brain
organization. And by the way, why do you believe that dead people are not
conscious? Because they no longer behave intelligently.

> What we sense is not imposed from the outside,
>

Well it had better be! If the outside world could be anything we wanted it
to be then our senses would be of no value and Evolution would never have
had a reason to develop them. In reality if we project our wishes on how we
interpret the information from our senses too much our life expectancy will
be very short; I don't like that saber toothed tiger over there so I'll
think of him as a cute little bunny rabbit.

 John K Clark




Re: Intelligence and consciousness

2012-02-06 Thread Bruno Marchal

Evgenii,


On 05 Feb 2012, at 14:41, Evgenii Rudnyi wrote:


I would agree that profit should be a tool. On the other hand it is
working this way. There are rules of a game that are adjusted by the
government accordingly, and then what is not forbidden is
allowed. In such a setup, if a new idea allows us to increase
profit, then it might be a good idea.


Only if the good idea is based on real work, or reasonable (founded)
speculation. But if the idea is a lie (like the idea that drugs are
very dangerous and should be prohibited) then the profit will be
equivalent to stealing, and everybody will get poorer, except locally
for some bandits.
A few years of alcohol prohibition created Al Capone; you can
guess what 70 years of planetary marijuana prohibition has brought...
Once the bandits have the power, as in the USA since Nixon, I would
say, the notion of honest versus dishonest profit blurs completely, the
money becomes gray, and we all become hostages of the special interests
of a minority. The economy becomes a big pyramidal game, where the top
steals the money of the bottom, until the system crashes.






I would say that the statement "Emotions are ineffable" excludes  
them from scientific considerations.


I don't see why. "Emotions are ineffable" is a perfectly scientific
statement about the emotions.
What is true is that emotions cannot be used *in* a scientific
statement, but you can make scientific statements *about* emotions.
Indeed, nothing escapes the domain on which we can develop a scientific
attitude, from the emotions to the big ineffable one.
What is forbidden (or invalid) is the use of emotion in scientific
discourse. You cannot say "the lemma is correct because I feel so", or
"1+1=2 because God told me".




Then we should not mention emotions at all. This however leads to a  
practical problem. For a mass product, for example electronics,  
emotions of customers are very important to get/keep/increase the  
market share. Hence if you do not consider emotions at all, you do  
not get paid.


I can agree. Emotions are part of the panorama, and we can deal with
them, with or without emotions.
But emotions can also be misused, as in the fear-selling business of
the bandits.


Bruno




On 04.02.2012 22:42 Bruno Marchal said the following:

Hi Evgenyi,

On 04 Feb 2012, at 18:09, Evgenii Rudnyi wrote:


Also, if your theory is that we (in the 3-sense) are not Turing
emulable, you have to explain to us why, and what it adds to the
explanation.


Bruno,

I do not have a theory.


That's OK. Technically, me neither. I am a logician. All I
assert is that two (weak) theories, mechanism and materialism, are
incompatible. I don't hide that my heart invites my brain to listen
to what some rich universal machines can already prove, and guess by
themselves, about themselves.





As for comp, my only note that I have made recently was that, looking
at the current state of the art of computer architectures and
algorithms, it is clear that any practical implementation is
out of reach.


Well, OK. We disagree here. AUDA is the illustration that many simple
machines, basically any first-order specification of a universal
system (machine, programming language) extended with the
corresponding induction axioms (those are the ones I call the Löbian
machines), are already as clever as you and me. By lacking our
layers of historical and prehistorical prejudices, they seem even
rather wiser, too (in my opinion). AUDA is the theology of the
self-introspecting LUM. It gives an octuple of hypostases (inside
views of arithmetic by locally arithmetical beings) which mirrors
rather well the discourse of the Platonists, neoplatonists and
mystics in all cultures (as well argued by Aldous Huxley, for
example).
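
(To pin down the jargon: a minimal sketch, assuming the standard
provability-logic reading of "Löbian", with the box standing for the
machine's provability predicate:)

\[
\begin{aligned}
\text{(K)} &\quad \Box(p \to q) \to (\Box p \to \Box q)\\
\text{(4)} &\quad \Box p \to \Box\Box p\\
\text{(L\"ob)} &\quad \Box(\Box p \to p) \to \Box p
\end{aligned}
\]

(On this reading, the octuple of hypostases arises as the arithmetical
interpretations of p, \Box p, \Box p \land p, \Box p \land \Diamond p,
and \Box p \land \Diamond p \land p, with the G/G* split of \Box p and
of the last two giving eight in all.)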

Your laptop is one inch from Löbianity, but why would you want
humans to make Löbian machines when introspection is not even in
the human curriculum? I begin to think that each time a human becomes
Löbian, he gets banned, exiled, burned, imprisoned, ignored, sent to an
asylum, or perhaps becomes a great artist, musician or something.







Whether comp is true or false in principle, frankly speaking I have
no idea.


Me neither. Practically, mechanism is more a right (to say yes or no
to the doctor).



I guess that my subconsciousness still believes in primitive
materialism, as consciously I experience the question of why it is bad
to say that math is mind dependent.


Human math is human mind dependent. This does not imply that math, or
a "metaphysically clean" part of math (like arithmetic or computer
science) might not be the "cause/reason" of the stable persistent
beliefs in a physical reality. The physical reality would be a
projective view of arithmetic from inside.




Yet, I should confess that after following discussions at this list
I see some problems with such a statement and pass doubts back to
my subconsciousness. Let us see what happens.

I still listen to the lectures of Prof Hoenen.

Re: Intelligence and consciousness

2012-02-05 Thread Craig Weinberg
On Feb 5, 11:55 am, John Clark  wrote:
> On Sat, Feb 4, 2012 at Craig Weinberg  wrote:
>
> > You don't understand Searle's thought experiment.
>
> I understand it one hell of a lot better than Searle did, but that's not
> really much of a boast.
>
> > The whole point is to reveal the absurdity of taking understanding for
> > granted in data manipulation processes.
>
> And Searle takes it for granted that if the little man doing a trivial task
> does not understand Chinese then Chinese is not understood, and that
> assumption simply is not bright.

No, I can see clearly that Searle is correct. You are applying a
figurative sense of understanding when a literal sense is required.
The only understanding of Chinese going on is by those Chinese
speakers outside the room who are carrying on a one-sided conversation
with a rule book. To say that Chinese is understood by the Chinese
Room system is to say that the entire universe understands Chinese.

>
> > None of the descriptions of the argument I find online make any mention
> > of infinite books, paper, or ink.
>
> Just how big do you think a book would need to be to contain every possible
> question and every possible answer to those questions?

It doesn't need to be able to answer every possible question, it only
needs to approximate a typical conversational capacity. It can ask
'what do you mean by that?'

>
> > All I find is a clear and simple experiment:
>
> Yes simple, as in stupid.

It seems like you take its contradiction of your position personally.
I assume that you don't mean that literally though, right? You don't
think that the thought experiment has a low I.Q., right? Thinking that
would be entirely consistent with what you are saying though.

>
> > The fact that he can use the book to make the people outside think they
> > are carrying on a conversation with him in Chinese reveals that it is only
> > necessary for the man to be trained to use the book, not to understand
> > Chinese or communication in general.
>
> OK fine, the man does not understand Chinese, so what? How does that prove
> that understanding was not involved in the room/outside-people
> conversation?

Because there is nobody on the inside end of the conversation.

> You maintain that only humans can have understanding while I
> maintain that other things can have it too.

No, I don't limit understanding to humans, I just limit human quality
understanding to humans. Not that it's the highest quality
understanding, but it is the only human understanding.

>To determine which of us is
> correct Searle sets up a cosmically impractical and complex thought
> experiment in which a human is a trivial part. Searle says that if
> understanding exists anywhere it must be concentrated in the human and
> nowhere else, but the little man does not understand Chinese so Searle
> concludes that understanding is not involved. What makes Searle such an
> idiot is that determining if humans are the only thing that can have
> understanding or not is the entire point of the thought experiment, he's
> assuming the very thing he's trying to prove. If Siri or Watson had behaved
> as stupidly as Searle did their programmers would hang their heads in shame!

I'm not sure if Searle maintains that understanding is forever limited
to humans, but I suspect the use of the man in the room is a device to
force people to identify personally with (what would normally be) the
computer. This way he makes you confront the reality that looking up
reactions in a rule book is not the same thing as reacting
authentically and generating responses personally.

>
> > Makes sense to me.
>
> I know, that's the problem.

No, because I understand why the way you are looking at it misses the
point, and I understand that you aren't willing to entertain my way of
looking at it.

>
> > We know for a fact that human consciousness is associated with human
> > brains
>
> That should be "with a human brain" not "with human brains"; you only know
> for a fact that one human brain is conscious, your own.

Again, there are literal and figurative senses. In the most literal
sense of 'you only know for a fact', yes I only have first hand
knowledge of consciousness. Because the nature of sense is to fill the
gaps, connect the dots, solve the puzzle, etc, we are able to
generalize figuratively. We are not limited to solipsism or formal
proofs that other people are conscious, we have a multi-contextual
human commonality. We share many common senses and can create new
senses through the existing sense channels we share. Knowing whether a
person is conscious or not, therefore, is only an issue under very
unusual conditions.

>
> > but we do not have much reason to suspect the rooms can become conscious
>
> Because up to now rooms have not behaved very intelligently, but the room
> containing the Watson supercomputer is getting close.

Close to letting us use it to fool ourselves is all. It's still only a
room with a large, fast rulebook.

>
> > Or

Re: Intelligence and consciousness

2012-02-05 Thread John Clark
On Sat, Feb 4, 2012 at Craig Weinberg  wrote:

> You don't understand Searle's thought experiment.
>

I understand it one hell of a lot better than Searle did, but that's not
really much of a boast.

> The whole point is to reveal the absurdity of taking understanding for
> granted in data manipulation processes.
>

And Searle takes it for granted that if the little man doing a trivial task
does not understand Chinese then Chinese is not understood, and that
assumption simply is not bright.

> None of the descriptions of the argument I find online make any mention
> of infinite books, paper, or ink.
>

Just how big do you think a book would need to be to contain every possible
question and every possible answer to those questions?

> All I find is a clear and simple experiment:
>

Yes simple, as in stupid.

> The fact that he can use the book to make the people outside think they
> are carrying on a conversation with him in Chinese reveals that it is only
> necessary for the man to be trained to use the book, not to understand
> Chinese or communication in general.
>

OK fine, the man does not understand Chinese, so what? How does that prove
that understanding was not involved in the room/outside-people
conversation? You maintain that only humans can have understanding while I
maintain that other things can have it too. To determine which of us is
correct Searle sets up a cosmically impractical and complex thought
experiment in which a human is a trivial part. Searle says that if
understanding exists anywhere it must be concentrated in the human and
nowhere else, but the little man does not understand Chinese so Searle
concludes that understanding is not involved. What makes Searle such an
idiot is that determining if humans are the only thing that can have
understanding or not is the entire point of the thought experiment, he's
assuming the very thing he's trying to prove. If Siri or Watson had behaved
as stupidly as Searle did their programmers would hang their heads in shame!

> Makes sense to me.
>

I know, that's the problem.

> We know for a fact that human consciousness is associated with human
> brains
>

That should be "with a human brain" not "with human brains"; you only know
for a fact that one human brain is conscious, your own.

> but we do not have much reason to suspect the rooms can become conscious
>

Because up to now rooms have not behaved very intelligently, but the room
containing the Watson supercomputer is getting close.

> Organization of the brain does not make the difference between being
> awake and being unconscious.
>

Don't be ridiculous! Take a sleeping pill and your brain organization, its
chemistry, changes and your consciousness goes away; take a pep pill and
the organization reverses itself and your consciousness comes back.

> Organization is certainly important, but only if it arises organically.
>

Electricity is not organic but it will change your organization and
dramatically change your consciousness if that electricity is applied to
your brain, or any other part of your body for that matter.

> Organization imposed from the outside doesn't cause that organization to
> become internalized as awareness.
>

So the light entering your eye and other inputs from your senses about the
outside world have no effect on your awareness. How unfortunate for you,
your nose must be sore by now from walking into so many walls that you were
not aware of.

> Yet if someone pulls the plug on a coma patient, they can go to prison,
but iPhones can be disposed of at will.

Good heavens, do you really expect to understand the nature of reality by
studying the legal system?!

  John K Clark




Re: Intelligence and consciousness

2012-02-05 Thread Evgenii Rudnyi

Bruno,

I would agree that profit should be a tool. On the other hand, this is
how it works in practice. There are rules of the game that are adjusted by the
government accordingly, and then what is not forbidden is allowed. In
such a setup, if a new idea allows us to increase profit, then it might
be a good idea.


I would say that the statement "Emotions are ineffable" excludes them 
from scientific considerations. Then we should not mention emotions at 
all. This however leads to a practical problem. For a mass product, for 
example electronics, emotions of customers are very important to 
get/keep/increase the market share. Hence if you do not consider 
emotions at all, you do not get paid.


Evgenii


On 04.02.2012 22:42 Bruno Marchal said the following:

Hi Evgenyi,

On 04 Feb 2012, at 18:09, Evgenii Rudnyi wrote:


Also, if your theory is that we (in the 3-sense) are not Turing
emulable, you have to explain to us why, and what it adds to the
explanation.


Bruno,

I do not have a theory.


That's OK. Technically, me neither. I am a logician. All I
assert is that two (weak) theories, mechanism and materialism, are
incompatible. I don't hide that my heart invites my brain to listen
to what some rich universal machines can already prove, and guess by
themselves, about themselves.





As for comp, my only recent note was that if one looks
at the current state of the art of computer architectures and
algorithms, then it is clear that any practical implementation is
out of reach.


Well, OK. We disagree here. AUDA is the illustration that many simple
machines, basically any first-order specification of a universal
system (machine, programming language) extended with the
corresponding induction axioms (those are the ones I call the Löbian
machines), are already as clever as you and me. By lacking our
layers of historical and prehistorical prejudices, they seem even
rather wiser, too (in my opinion). AUDA is the theology of the
self-introspecting LUM. It gives an octuple of hypostases (inside
views of arithmetic by locally arithmetical beings) which mirrors
rather well the discourse of the Platonists, neoplatonists and
mystics in all cultures (as well argued by Aldous Huxley, for
example).

Your laptop is one inch from Löbianity, but why would you want
humans to make Löbian machines when introspection is not even in
the human curriculum? I begin to think that each time a human becomes
Löbian, he gets banned, exiled, burned, imprisoned, ignored, sent to an
asylum, or perhaps becomes a great artist, musician or something.







Whether comp is true or false in principle, frankly speaking I have
no idea.


Me neither. Practically, mechanism is more a right (to say yes or no
to the doctor).



I guess that my subconsciousness still believes in primitive
materialism, as consciously I experience the question of why it is bad
to say that math is mind dependent.


Human math is human mind dependent. This does not imply that math, or
a "metaphysically clean" part of math (like arithmetic or computer
science) might not be the "cause/reason" of the stable persistent
beliefs in a physical reality. The physical reality would be a
projective view of arithmetic from inside.




Yet, I should confess that after following discussions at this list
I see some problems with such a statement and pass doubts back to
my subconsciousness. Let us see what happens.

I still listen to the lectures of Prof Hoenen. Recently I have
finished Theorien der Wahrheit and right now I am at
Beweistheorien. When I am done with Prof Hoenen, as promised I will
go through your The Origin of Physical Laws and Sensations. Yet, I
do not know when it happens, as it takes more time than I thought
originally.

As for computers having emotions, I am a practitioner and I am
working right now closely with engineers. I should say that the
modern market would love electronics with emotions. Just imagine
such a slogan

Smartphone with Emotions* (*scientifically proved)


This will never happen. Never. More exactly, if this happens, it
means you are in front of a con man or a crackpot. Emotions are ineffable,
although a range of the corresponding behavior is easy to simulate.
To have genuine emotions, you need to be entangled with genuinely complex
long computations. But their outputs are easy to simulate. A friend of
mine made a piece of theater with a little robot-dog emulating
emotions, and the public reacted correspondingly. In comp there are no
philosophical zombies, but there are plenty of local zombies
possible, like the cartoon cops on the roads which still have their effect, or
that emotive robot-dog. But an emotion, by its very nature, cannot be
scientifically proved. All that happens is that a person succeeds in
being recognized as such by other persons. Computers might already be
conscious. It might be our lack of civility which prevents us from
listening to them.

But you were probably joking with the "*scientifically proved".

Concerning reality, science never proves. It only suggests interrogatively. If not, it is pseudo science or pseudo religion.

Re: Intelligence and consciousness

2012-02-05 Thread Evgenii Rudnyi

On 04.02.2012 21:05 meekerdb said the following:
> On 2/4/2012 9:09 AM, Evgenii Rudnyi wrote:

...

>> As for computers having emotions, I am a practitioner and I am
>> working right now closely with engineers. I should say that the
>> modern market would love electronics with emotions. Just imagine
>> such a slogan
>>
>> Smartphone with Emotions* (*scientifically proved)
>>
>> It would be a killer application.
>
> So if you miss a turn your driving direction app will get mad and
> scold you? If you use your calculator to find the square root of 121
> it will mock you for forgetting your 8th grade mathematics? Emotions
> imply having values and being able to act on them -- why would you
> want your computer to have its own values and act on them? Don't you
> want it to just have the values you have, i.e. answer the questions
> you ask?
>
> Brent
>

You should talk with marketing guys. A quick search reveals that this is 
already a reality


Hercules Dualpix Emotion Webcam

There is also a term emotional AI that is discussed by game makers.

Yet, I do not get your point in general. For example recently there was 
an article in Die Zeit


Die Roboter kommen (Robots are coming)
http://www.zeit.de/2012/04/T-Roboter

Among other things, they discuss the issue of a potential collision
between a moving robot and a human being. To this end, there is an
experimental study researching how much pain a human being can sustain
in which part of the body. This experimental database will be used by
engineers developing robots.


Pain is not exactly an emotion, but I guess it is not that far away. This
shows that pain could be experimentally researched by people. Could it 
be experimentally researched by computers and robots?


On the other hand, engineers design computers and robots. Can science
give engineers guidelines for controlling emotions in computers or robots? Or
are emotions some kind of emergent phenomenon that will appear in
computers and robots independently of the engineers and beyond their control?


Finally, if you have already discovered emotions in computers, why
then is it impossible to research how this effect has emerged
independently of the engineers?


Evgenii




Re: Intelligence and consciousness

2012-02-04 Thread Craig Weinberg
On Feb 4, 1:13 pm, John Clark  wrote:
> On Sat, Feb 4, 2012  Craig Weinberg  wrote:
>
> >> I hope you're not talking about Searle's Chinese room, the stupidest
> >> thought experiment in history.
>
> > > I don't see what is stupid about that thought experiment.
>
> And that tells us a great deal about you.
>
> > > Please explain exactly what you mean.
>
> You already know about Searle's room, now I want to tell you about Clark's
> Chinese Room. You are a professor of Chinese Literature and are in a room
> with me and the great Chinese Philosopher and Poet Laotse. Laotse writes
> something in his native language on a paper and hands it to me. I walk 10
> feet and give it to you. You read the paper and are impressed with the
> wisdom of the message and the beauty of its language. Now I tell you that I
> don't know a word of Chinese, can you find any deep implications from that
> fact? I believe Clark's Chinese Room is just as profound as Searle's
> Chinese Room. Not very.

You don't understand Searle's thought experiment. The whole point is
to reveal the absurdity of taking understanding for granted in data
manipulation processes. Since you take it for granted from the
beginning, it seems stupid to you.

>
> All Searle did was come up with a wildly impractical model (the Chinese
> Room) of an intelligence in which a human being happens to play a trivial
> part. Consider what's in Searle's model:
>
> 1) An incredible book, larger than the observable universe even if the
> writing was microfilm sized.
>
> 2) An equally large or larger book of blank paper.
>
> 3) A pen, several trillion galaxies of ink, and oh yes I almost forgot,
> your little man.

Is there an original document you are getting this from? None of the
descriptions of the argument I find online make any mention of
infinite books, paper, or ink. All I find is a clear and simple
experiment: A man sits in a locked room and receives notes in Chinese
through a slot. He has a rule book (size is irrelevant and not
mentioned) which contains instructions for what to do in response
to receiving these characters. The fact that he can use the book to
make the people outside think they are carrying on a conversation with
him in Chinese reveals that it is only necessary for the man to be
trained to use the book, not to understand Chinese or communication in
general.
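
(To make the mechanical picture concrete, a toy sketch in Python,
assuming, purely for illustration, that the rule book reduces to a
finite lookup table; the entries are invented, not Searle's:)

# A toy "Chinese Room": the operator matches incoming symbols against a
# table and copies out the reply. No Chinese is understood anywhere in
# this code, which is exactly the point in dispute.
RULE_BOOK = {
    "你好": "你好！",              # a greeting slip gets a greeting back
    "你懂中文吗？": "当然懂。",     # "do you understand Chinese?" -> "of course"
}
FALLBACK = "你说的是什么意思？"    # "what do you mean by that?"

def operator(slip):
    """Follow the book mechanically; meaning never enters into it."""
    return RULE_BOOK.get(slip, FALLBACK)

for slip in ("你好", "天气怎么样？"):
    print(operator(slip))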

Makes sense to me. The refutations I've read aren't persuasive. They
have to do with claiming that the room as a whole is intelligent or
that neurons cannot be intelligent either, etc.

>
> Searle claims to have proven something profound when he shows that a
> trivial part does not have all the properties that the whole system does.

The whole point is to show that in an AI system, the machine is the
trivial part of a system which invariably includes a human user to
understand anything.

> In his example the man could be replaced with a simple machine made with a
> few vacuum tubes or even mechanical relays, and it would do a better job.

No because he's trying to bring it to a human level so there is no
silly speculation about whether vacuum tubes can understand anything
or not.

> It's like saying the synaptic transmitter dopamine does not understand how
> to solve differential equations, dopamine is a small part of the human
> brain thus the human brain does not understand how to solve differential
> equations.

The human brain doesn't understand, any more than the baseball diamond
plays baseball. The diamond and the experiences shown through the
playing of the game are two aspects of a single whole.

>
> Yes, it does seem strange that consciousness is somehow hanging around the
> room as a whole, even if slowed down by a factor of a billion trillion or
> so, but no stranger than the fact that consciousness is hanging around 3
> pounds of gray goo in our head,

We know for a fact that human consciousness is associated with human
brains, but we do not have much reason to suspect the rooms can become
conscious (Amityville notwithstanding).

> and yet we know that it does. It's time to
> just face the fact that consciousness is what matter does when it is
> organized in certain complex ways.

Not at all. Organization of the brain does not make the difference
between being awake and being unconscious. Organization is certainly
important, but only if it arises organically. Organization imposed
from the outside doesn't cause that organization to become
internalized as awareness.

>
> > > I understand why an audioanimatronic pirate at Disneyland feels nothing.
>
> That is incorrect, you don't understand. I agree it probably feels nothing
> but unlike you I can logically explain exactly why I have that opinion and
> I don't need semantic batteries or flux capacitors or dilithium crystals
> or any other new age bilge to do so.

Yes, I've heard your logical explanation..."because it doesn't behave
intelligently". It's circular reasoning. My understanding is not
predicated on a schema of literal rules.

Re: Intelligence and consciousness

2012-02-04 Thread Bruno Marchal


On 04 Feb 2012, at 17:38, meekerdb wrote:


On 2/4/2012 1:17 AM, Bruno Marchal wrote:


The emotion of your laptop is unknown, and unmanifested, because
your laptop has no deep persistent self-reference ability to share
with you.  We want a slave, and would be anxious in front of a
machine taking too much independence.


Bruno


Yes, that's exactly why John McCarthy wrote that we should not  
provide AI programs with self-reflection and emotions, because it  
would create ethical problems in using them.


He is right. But making babies already does the same, and for
economic reasons we will often get some reward from allowing some
degree of autonomy in machines. By the time our hand-made machine
descendants are as free as us and as entangled as us in the local
computational histories, we will already be machines ourselves. Things
will be different. The shadow of the measure-problem solution
indicates that we might, in some sense, already be there. I'm not sure.



Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Intelligence and consciousness

2012-02-04 Thread Bruno Marchal

Hi Evgenyi,

On 04 Feb 2012, at 18:09, Evgenii Rudnyi wrote:


> Also,
> if your theory is that we (in the 3-sense) are not Turing emulable,
> you have to explain to us why, and what it adds to the explanation.

Bruno,

I do not have a theory.


That's OK. Technically, me neither. I am a logician. All I assert
is that two (weak) theories, mechanism and materialism, are incompatible.
I don't hide that my heart invites my brain to listen to what some
rich universal machines can already prove, and guess by themselves,
about themselves.






As for comp, my only recent note was that if one looks
at the current state of the art of computer architectures and
algorithms, then it is clear that any practical implementation is
out of reach.


Well, OK. We disagree here. AUDA is the illustration that many simple
machines, basically any first-order specification of a universal system
(machine, programming language) extended with the corresponding
induction axioms (those are the ones I call the Löbian machines),
are already as clever as you and me. By lacking our layers of
historical and prehistorical prejudices, they seem even rather wiser,
too (in my opinion).
AUDA is the theology of the self-introspecting LUM. It gives an
octuple of hypostases (inside views of arithmetic by locally
arithmetical beings) which mirrors rather well the discourse of the
Platonists, neoplatonists and mystics in all cultures (as well argued
by Aldous Huxley, for example).

Your laptop is one inch from Löbianity, but why would you want
humans to make Löbian machines when introspection is not even in the
human curriculum? I begin to think that each time a human becomes
Löbian, he gets banned, exiled, burned, imprisoned, ignored, sent to an
asylum, or perhaps becomes a great artist, musician or something.








Whether comp is true or false in principle, frankly speaking I have
no idea.


Me neither. Practically, mechanism is more a right (to say yes or no  
to the doctor).



I guess that my subconsciousness still believes in primitive  
materialism, as consciously I experience the question of why it is bad to
say that math is mind dependent.


Human math is human mind dependent. This does not imply that math, or  
a "metaphysically clean" part of math (like arithmetic or computer  
science) might not be the "cause/reason" of the stable persistent  
beliefs in a physical reality.
The physical reality would be a projective view of arithmetic from  
inside.




Yet, I should confess that after following discussions at this list  
I see some problems with such a statement and pass doubts back to my  
subconsciousness. Let us see what happens.


I still listen to the lectures of Prof Hoenen. Recently I have  
finished Theorien der Wahrheit and right now I am at Beweistheorien.  
When I am done with Prof Hoenen, as promised I will go through your  
The Origin of Physical Laws and Sensations. Yet, I do not know when  
it happens, as it takes more time than I thought originally.


As for computers having emotions, I am a practitioner and I am  
working right now closely with engineers. I should say that the  
modern market would love electronics with emotions. Just imagine  
such a slogan


Smartphone with Emotions* (*scientifically proved)


This will never happen. Never.
More exactly, if this happens, it means you are in front of a con man
or a crackpot. Emotions are ineffable, although a range of the corresponding
behavior is easy to simulate. To have genuine emotions, you need to be
entangled with genuinely complex long computations. But their outputs are
easy to simulate.
A friend of mine made a piece of theater with a little robot-dog
emulating emotions, and the public reacted correspondingly. In comp
there are no philosophical zombies, but there are plenty of local
zombies possible, like the cartoon cops on the roads which still have
their effect, or that emotive robot-dog.
But an emotion, by its very nature, cannot be scientifically proved.
All that happens is that a person succeeds in being recognized as such
by other persons.
Computers might already be conscious. It might be our lack of civility
which prevents us from listening to them.


But you were probably joking with the "*scientifically proved".

Concerning reality, science never proves. It only suggests  
interrogatively. If not, it is pseudo science or pseudo religion.





It would be a killer application. Hence I do not understand why
people here who state "a computer already has emotions" do not
explore such a wonderful opportunity. After all, whether it is comp,  
physicalism, monism, dualism or whatever does not matter. What is  
really important is to make profit.



Hmm... I am not sure. What is important is to be able to eat when
hungry, to drink when thirsty, and to have some amount of heat. What is
hoped for is the largest freedom spectrum for the exploratory
opportunities.


Profit might be a tool, hardly a goal by itself.

Re: Intelligence and consciousness

2012-02-04 Thread meekerdb

On 2/4/2012 9:09 AM, Evgenii Rudnyi wrote:

> Also,
> if your theory is that we (in the 3-sense) are not Turing emulable,
> you have to explain us why, and what it adds to the explanation.

Bruno,

I do not have a theory.

As for comp, my only recent note was that if one looks at the current
state of the art of computer architectures and algorithms, then it is clear that any
practical implementation is out of reach.


Whether comp is true or false in principle, frankly speaking I have no idea. I guess
that my subconsciousness still believes in primitive materialism, as consciously I
experience the question of why it is bad to say that math is mind dependent. Yet, I should
confess that after following discussions at this list I see some problems with such a 
statement and pass doubts back to my subconsciousness. Let us see what happens.


I still listen to the lectures of Prof Hoenen. Recently I have finished Theorien der 
Wahrheit and right now I am at Beweistheorien. When I am done with Prof Hoenen, as 
promised I will go through your The Origin of Physical Laws and Sensations. Yet, I do 
not know when it happens, as it takes more time than I thought originally.


As for computers having emotions, I am a practitioner and I am working right now closely 
with engineers. I should say that the modern market would love electronics with 
emotions. Just imagine such a slogan


Smartphone with Emotions* (*scientifically proved)

It would be a killer application. 


So if you miss a turn your driving direction app will get mad and scold you?  If you use
your calculator to find the square root of 121 it will mock you for forgetting your 8th 
grade mathematics? Emotions imply having values and being able to act on them -- why would 
you want your computer to have its own values and act on them?  Don't you want it to just
have the values you have, i.e. answer the questions you ask?


Brent


Hence I do not understand why people here who state "a computer already has emotions"
do not explore such a wonderful opportunity. After all, whether it is comp, physicalism, 
monism, dualism or whatever does not matter. What is really important is to make profit.


Evgenii

On 04.02.2012 10:17 Bruno Marchal said the following:


On 03 Feb 2012, at 21:23, Evgenii Rudnyi wrote:


On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdb wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdb wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of
you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within
the context of chess the machine acts just like a person who
had those emotions. So it had at least the functional
equivalent of those emotions. Whereas your opinion is simple
prejudice.

I agree my opinion would be simple prejudice had we not
already been over this issue a dozen times. My view is that the
whole idea that there can be a 'functional equivalent of
emotions' is completely unsupported. I give examples of
puppets, movies, trashcans that say THANK YOU, voicemail...all
of these things demonstrate that there need not be any
connection at all between function and interior experience.


Except that in every case there is an emotion in your
examples...it's just the emotion of the puppeteer, the
screenwriter, the trashcan painter. But in the case of the chess
playing computer, there is no person providing the 'emotion'
because the 'emotion' depends on complex and unforeseeable
events. Hence it is appropriate to attribute the 'emotion' to the
computer/program.

Brent


Craig's position that computers in the present form do not have
emotions is not unique, as emotions belong to consciousness. A
quote from my favorite book

Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter "10.2 Conscious computers?"

p. 128 "Our further discussion here, however, will take it as
established that this can never happen."

Now the last paragraph from the chapter "10.3 Conscious robots?"

p. 130. "So, while we may grant robots the power to form meaningful
 categorical representations at a level reached by the unconscious
 brain and by the behaviour controlled by the unconscious brain, we
 should remain doubtful whether they are likely to experience
conscious percepts. This conclusion should not, however, be
over-interpreted. It does not necessarily imply that human beings
will never be able to build artefacts with conscious experience.
That will depend on how the trick of consciousness is done. If and
when we know the trick, it may be possible to duplicate it. But the
mere provision of behavioural dispositions is unlikely to be up to
the mark."

If we say that computers right now have emotions, then we must be
able to define exactly the difference between unconscious and
conscious experience in the computer (for example in that computer
that has won Kasparov). Can you do it?

Re: Intelligence and consciousness

2012-02-04 Thread John Clark
On Sat, Feb 4, 2012  Craig Weinberg  wrote:

>> I hope you're not talking about Searle's Chinese room, the stupidest
>> thought experiment in history.
>>
>
> > I don't see what is stupid about that thought experiment.


And that tells us a great deal about you.


> > Please explain exactly what you mean.
>

You already know about Searle's room, now I want to tell you about Clark's
Chinese Room. You are a professor of Chinese Literature and are in a room
with me and the great Chinese Philosopher and Poet Laotse. Laotse writes
something in his native language on a paper and hands it to me. I walk 10
feet and give it to you. You read the paper and are impressed with the
wisdom of the message and the beauty of its language. Now I tell you that I
don't know a word of Chinese, can you find any deep implications from that
fact? I believe Clark's Chinese Room is just as profound as Searle's
Chinese Room. Not very.

All Searle did was come up with a wildly impractical model (the Chinese
Room) of an intelligence in which a human being happens to play a trivial
part. Consider what's in Searle's model:

1) An incredible book, larger than the observable universe even if the
writing was microfilm sized (see the rough estimate after this list).

2) An equally large or larger book of blank paper.

3) A pen, several trillion galaxies of ink, and oh yes I almost forgot,
your little man.
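
(A back-of-the-envelope check of point 1, under stated assumptions: a
working set of 3,000 characters and questions capped at 40 characters,
both figures purely illustrative:)

# Counting only the possible questions. The total dwarfs the roughly
# 10^80 atoms in the observable universe, so a complete lookup book
# cannot physically exist.
symbols, max_len = 3_000, 40
questions = sum(symbols ** n for n in range(1, max_len + 1))
print(f"possible questions ~ 10^{len(str(questions)) - 1}")   # ~10^139
print("atoms in the observable universe ~ 10^80")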

Searle claims to have proven something profound when he shows that a
trivial part does not have all the properties that the whole system does.
In his example the man could be replaced with a simple machine made with a
few vacuum tubes or even mechanical relays, and it would do a better job.
It's like saying the synaptic transmitter dopamine does not understand how
to solve differential equations, dopamine is a small part of the human
brain thus the human brain does not understand how to solve differential
equations.

Yes, it does seem strange that consciousness is somehow hanging around the
room as a whole, even if slowed down by a factor of a billion trillion or
so, but no stranger than the fact that consciousness is hanging around 3
pounds of gray goo in our head, and yet we know that it does. It's time to
just face the fact that consciousness is what matter does when it is
organized in certain complex ways.


> > I understand why an audioanimatronic pirate at Disneyland feels nothing.
>

That is incorrect, you don't understand. I agree it probably feels nothing
but unlike you I can logically explain exactly why I have that opinion and
I don't need semantic batteries or flux capacitors or dilithium crystals
or any other new age bilge to do so.


> > No. Computers have never learned anything.


I could give examples dating back to the 1950's that computers can indeed
learn but there would be no point in me doing  so, you would say it didn't
"really" learn it "just" behaved like it learned, and Einstein wasn't
"really" smart he "just" behaved like he was smart, and the guy who filed a
complaint with the police that somebody stole his cocaine was not "really"
stupid he "just" behaved stupidly.

> >>  Machines are made of atoms just like you and me.
>>
>
> > And atoms are unconscious, are they not?
>

Atoms don't behave intelligently so my very very strong hunch is that they
are not conscious, but there is no way I can know for certain. On the other
hand you are even more positive than I about this matter, and as always
happens whenever somebody is absolutely positively 100% certain about
anything, they can almost never produce any logical reason for their belief.
There seems to be an inverse relationship: the stronger the belief, the
weaker the evidence.

>>  One binary logic operation is pretty straightforward but 20,000
>> trillion of them every second is not, *and that's what today's
>> supercomputers can do, and they are doubling in power every 18 months.
>>
>
> > You could stop the program at any given point and understand every
>> thread of every process.
>
>
Yes, you can understand ANY thread but you cannot understand EVERY thread.
And that 20,000 trillion a second figure that I used was really a big
understatement, it's the number of floating point operations (FLOPS) not
the far simpler binary operations. A typical man on the street might take
the better part of one minute to do one flop with pencil and paper, and
today's  supercomputers can do 20,000 million million a second and they
double in power every 18 months.
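
(The arithmetic behind that comparison, taking the quoted figures at
face value: one hand-done flop per minute against 20,000 trillion per
second, doubling every 18 months:)

import math

# Speed ratio of supercomputer to pencil-and-paper arithmetic, plus the
# 18-month-doubling projection, using only the figures quoted above.
human = 1 / 60                      # flops per second by hand
machine = 20_000e12                 # 20,000 trillion flops per second
print(f"machine/human ~ 10^{math.log10(machine / human):.0f}")    # ~10^18
print(f"growth after 15 years: x{2 ** (15 / 1.5):,.0f}")          # x1,024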

> >They come out of comas and communicate with other human beings


You think the noises coming out of ex-coma patients' mouths give us
profound insight into their inner life, but noises produced by Siri tell us
absolutely nothing, why the difference? Because Siri is not squishy and
does not smell bad. I don't think your philosophy is one bit more
sophisticated than that.

  John K Clark


Re: Intelligence and consciousness

2012-02-04 Thread Evgenii Rudnyi

> Also,
> if your theory is that we (in the 3-sense) are not Turing emulable,
> you have to explain to us why, and what it adds to the explanation.

Bruno,

I do not have a theory.

As for comp, my only recent note was that if one looks
at the current state of the art of computer architectures and algorithms,
then it is clear that any practical implementation is out of reach.


Whether comp is true or false in principle, frankly speaking I have no
idea. I guess that my subconsciousness still believes in primitive
materialism, as consciously I experience the question of why it is bad to say
that math is mind dependent. Yet, I should confess that after following
discussions at this list I see some problems with such a statement and 
pass doubts back to my subconsciousness. Let us see what happens.


I still listen to the lectures of Prof Hoenen. Recently I have finished 
Theorien der Wahrheit and right now I am at Beweistheorien. When I am 
done with Prof Hoenen, as promised I will go through your The Origin of 
Physical Laws and Sensations. Yet, I do not know when it happens, as it 
takes more time than I thought originally.


As for computers having emotions, I am a practitioner and I am working 
right now closely with engineers. I should say that the modern market 
would love electronics with emotions. Just imagine such a slogan


Smartphone with Emotions* (*scientifically proved)

It would be a killer application. Hence I do not understand why people
here who state "a computer already has emotions" do not explore such a
wonderful opportunity. After all, whether it is comp, physicalism, 
monism, dualism or whatever does not matter. What is really important is 
to make profit.


Evgenii

On 04.02.2012 10:17 Bruno Marchal said the following:


On 03 Feb 2012, at 21:23, Evgenii Rudnyi wrote:


On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdb wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdb wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of
you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within
the context of chess the machine acts just like a person who
had those emotions. So it had at least the functional
equivalent of those emotions. Whereas your opinion is simple
prejudice.

I agree my opinion would be simple prejudice had we not
already been over this issue a dozen times. My view is that the
whole idea that there can be a 'functional equivalent of
emotions' is completely unsupported. I give examples of
puppets, movies, trashcans that say THANK YOU, voicemail...all
of these things demonstrate that there need not be any
connection at all between function and interior experience.


Except that in every case there is an emotion in your
examples...it's just the emotion of the puppeteer, the
screenwriter, the trashcan painter. But in the case of the chess
playing computer, there is no person providing the 'emotion'
because the 'emotion' depends on complex and unforeseeable
events. Hence it is appropriate to attribute the 'emotion' to the
computer/program.

Brent


Craig's position that computers in the present form do not have
emotions is not unique, as emotions belong to consciousness. A
quote from my favorite book

Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter "10.2 Conscious computers?"

p. 128 "Our further discussion here, however, will take it as
established that this can never happen."

Now the last paragraph from the chapter "10.3 Conscious robots?"

p. 130. "So, while we may grant robots the power to form meaningful
 categorical representations at a level reached by the unconscious
 brain and by the behaviour controlled by the unconscious brain, we
 should remain doubtful whether they are likely to experience
conscious percepts. This conclusion should not, however, be
over-interpreted. It does not necessarily imply that human beings
will never be able to build artefacts with conscious experience.
That will depend on how the trick of consciousness is done. If and
when we know the trick, it may be possible to duplicate it. But the
mere provision of behavioural dispositions is unlikely to be up to
the mark."

If we say that computers right now have emotions, then we must be
able to define exactly the difference between unconscious and
conscious experience in the computer (for example in that computer
that has won Kasparov). Can you do it?


Yes. It is the point of AUDA. We can do it in the theoretical
framework, once we accept some theory (axiomatic) of knowledge. Also,
if your theory is that we (in the 3-sense) are not Turing emulable,
you have to explain to us why, and what it adds to the explanation. With
comp, the trick of both consciousness and matter is not entirely
computable. You have to resist a reductionist conception of numbers
and machines.

Re: Intelligence and consciousness

2012-02-04 Thread meekerdb

On 2/4/2012 1:17 AM, Bruno Marchal wrote:
The emotion of your laptop is unknown, and unmanifested, because your laptop has no deep
persistent self-reference ability to share with you.  We want a slave, and would be
anxious in front of a machine taking too much independence.


Bruno 


Yes, that's exactly why John McCarthy wrote that we should not provide AI programs with 
self-reflection and emotions, because it would create ethical problems in using them.


Brent




Re: Intelligence and consciousness

2012-02-04 Thread Craig Weinberg
On Feb 4, 7:29 am, Bruno Marchal  wrote:
> On 03 Feb 2012, at 23:58, Craig Weinberg wrote:
>
> >  Consciousness and mechanism are
> > mutually exclusive by definition and always will be.
>
> I think you confuse mechanism before and after Gödel, and you miss the
> 1-indeterminacy.

I don't think that I do. I only miss the link between the validity of
the mathematical form of 1-indeterminacy and the logical necessity of
the content: the qualitative experience associated with it.

>You confuse Turing emulable, and Turing recoverable
> (by 1-indeterminacy on non computable domain).

That's probably true since I can't find any definition for Turing
recoverable online. Are you saying that consciousness is not Turing
emulable but merely Turing recoverable (which I am imagining is about
addressing non-comp records to play or record)?

Craig




Re: Intelligence and consciousness

2012-02-04 Thread Craig Weinberg
On Feb 4, 2:23 am, John Clark  wrote:
> On Fri, Feb 3, 2012  Craig Weinberg  wrote:
>
> > Huge abacuses are a really good way to look at this, although it's pretty
> > much the same as the China Brain.
>
> I hope you're not talking about Searle's Chinese room, the stupidest
> thought experiment in history.

I don't see what is stupid about that thought experiment. Please
explain exactly what you mean.

>
> > > Your position is [...] The abacus could literally be made to think
>
> Yes.
>
> > Do you see why I am incredulous about this?
>
> No.
>
> >  > I am crystal clear in my own understanding that no matter how good the
> > program seems, Siri 5000 will feel exactly the same thing as Siri. Nothing.
>
> I accept that you are absolutely positively 100% certain of the above, but
> I do NOT accept that you are correct. I'm not a religious man so I don't
> believe in divine revelation and that's the only way you could know what it
> feels like to be Siri, hell you don't even know what it feels like to be me
> and we are of the same species (I presume); all you can do is observe how
Siri and I behave and try to make conclusions about our inner life or lack
> of same from that.

That isn't 'all I can do'. I don't need to do anything special to
understand exactly what Siri is and exactly why it feels nothing any
more than I understand why an audioanimatronic pirate at Disneyland
feels nothing.

>
> > As I continue to try to explain, awareness is not a function of objects,
> > it is the symmetrically anomalous counterpart of objects.
>
> Bafflegab.

Translation: "I don't understand and I don't care, but it makes me
feel superior to dismiss your position arbitrarily'.

>
> > Experiences accumulate semantic charge
>
> Semantic charge? Poetic crapola of that sort may impress some but not me.
> If you can't express your ideas clearer than that they are not worth
> expressing at all.

See above.

>
> > > The beads will never learn anything. They are only beads.
>
> Computers can and do learn things

No. Computers have never learned anything. We learn things using
computers. Computers store, retrieve, and process data. Nothing more.
The human mind does much more. It feels, sees, knows, believes,
understands, wants, tries, opposes, speculates, creates, imagines,
etc.

> and a Turing Machine can simulate any
> computer and you can make a Turing Machine from beads. I won't insult your
> intelligence by spelling out the obvious conclusion from that fact.

Why not? You insult my intelligence in most of your other replies to
me.
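
(To make the bead claim concrete, a toy sketch, mine rather than
anyone's proposal in this thread: a Turing machine is nothing but a
tape plus a finite rule table, and bead positions can encode both.
This one flips a binary string and halts:)

# Rule format: (state, symbol) -> (symbol to write, head move, next state).
RULES = {
    ("scan", "1"): ("0", 1, "scan"),
    ("scan", "0"): ("1", 1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # blank cell: stop
}

def run(tape):
    state, head = "scan", 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = RULES[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return tape

print("".join(run(list("1101"))))   # prints 0010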

>
> > Machines are made of unconsciousness.
>
>  Machines are made of atoms just like you and me.

And atoms are unconscious, are they not?

>
> > All machines are unconscious. That is how we can control them.
>
> A argument that is already very weak and will become DRAMATICALLY weaker in
> the future.

Promissory mechanism is religious faith to me.

> In the long run there is no way we can control computers, they
> are our slave right now but that circumstance will not continue.

So you say. I say we are already the slaves of computers now. It's
called economics.

>
> > That would not be necessary if the machine had any capacity to learn.
>
> I don't know what you're talking about, machines have been able to learn
> for decades.

If I fill a file cabinet with files, do you say that the cabinet has
learned something?
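
(For reference, the standard minimal sense in which machines are said
to learn, sketched in a few lines; whether adjusting stored weights
deserves the word "learning" is of course the very disagreement here.
A perceptron picking up AND from examples:)

# Unlike a file cabinet, the stored numbers change in response to the
# examples until the outputs match the targets.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                    # a few passes over the examples
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        w[0] += lr * err * x1          # nudge weights toward the target
        w[1] += lr * err * x2
        b += lr * err

print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])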

>
> >> at a fundamental level no human being could write a computer program
> >> like Siri and nobody knows how it works.
>
> I wouldn't say we don't know how it works. Binary logic is pretty
>
> > straightforward.
>
> One binary logic operation is pretty straightforward but *20,000 trillion
> of them every second is not, *and that's what today's supercomputers can
> do, and they are doubling in power every 18 months.

It's more numerous but no less straightforward. You could stop the
program at any given point and understand every thread of every
process. It's big and it's fast, sure, but it's still not mysterious.

>
> > That's the theory. Meanwhile, in reality, we are using the same basic
> > interface for computers since 1995.
>
> What are you talking about? Siri is a computer interface as is Google and
> even supercomputers didn't have them or anything close to it in 1995.

There were web search engines before Google. They weren't quite as
good but there has been no improvement in searches since then. Siri is
a new branding and improved implementation of voice recognition that
we have had in other devices for a while. It's progress, but hardly
Einstein, Edison, Tesla, or Wright brothers progress.

>
> >>>  people in a vegetative state do sometimes have an inner life despite
> >> their behavior.
>
> >> In the course of our conversations you have made declarative statements
> > like the above dozens if not hundreds of times but you never seriously ask
> > yourself "HOW DO I KNOW THIS?".
>
>  >There is a lot of anecdotal evidence. People come out of comas.
>
>
>
> So people come out 

Re: Intelligence and consciousness

2012-02-04 Thread Bruno Marchal


On 03 Feb 2012, at 23:58, Craig Weinberg wrote:


 Consciousness and mechanism are
mutually exclusive by definition and always will be.


I think you confuse mechanism before and after Gödel, and you miss the  
1-indeterminacy. You confuse Turing emulable, and Turing recoverable  
(by 1-indeterminacy on non computable domain).
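
(A toy illustration of the 1-indeterminacy, my gloss on the duplication
step rather than Bruno's formalism: the 3-person process is
deterministic, yet each 1-person diary records only one outcome,
unpredictable before the copy:)

import random

def duplicate(state):
    # 3-person view: one state is deterministically copied to two places.
    return [{"city": "Washington", "memory": state},
            {"city": "Moscow", "memory": state}]

copies = duplicate("I stepped into the duplication box")
diary = random.choice(copies)   # what any single diary ends up reporting
print(f"I find myself in {diary['city']}")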


Bruno



http://iridia.ulb.ac.be/~marchal/






Re: Intelligence and consciousness

2012-02-04 Thread Bruno Marchal


On 03 Feb 2012, at 21:23, Evgenii Rudnyi wrote:


On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdb wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdb wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of you to
inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within the
context of chess the machine acts just like a person who had
those emotions. So it had at least the functional equivalent of
those emotions. Whereas your opinion is simple prejudice.

I agree my opinion would be simple prejudice had we not already
been over this issue a dozen times. My view is that the whole idea
that there can be a 'functional equivalent of emotions' is
completely unsupported. I give examples of puppets, movies,
trashcans that say THANK YOU, voicemail...all of these things
demonstrate that there need not be any connection at all between
function and interior experience.


Except that in every case there is an emotion in your examples...it's
just the emotion of the puppeteer, the screenwriter, the trashcan
painter. But in the case of the chess playing computer, there is no
person providing the 'emotion' because the 'emotion' depends on
complex and unforeseeable events. Hence it is appropriate to
attribute the 'emotion' to the computer/program.

Brent


Craig's position that computers in the present form do not have  
emotions is not unique, as emotions belong to consciousness. A quote  
from my favorite book


Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter "10.2 Conscious computers?"

p. 128 "Our further discussion here, however, will take it as  
established that this can never happen."


Now the last paragraph from the chapter "10.3 Conscious robots?"

p. 130. "So, while we may grant robots the power to form meaningful  
categorical representations at a level reached by the unconscious  
brain and by the behaviour controlled by the unconscious brain, we  
should remain doubtful whether they are likely to experience  
conscious percepts. This conclusion should not, however, be
over-interpreted. It does not necessarily imply that human beings will
never be able to build artefacts with conscious experience. That  
will depend on how the trick of consciousness is done. If and when  
we know the trick, it may be possible to duplicate it. But the mere  
provision of behavioural dispositions is unlikely to be up to the  
mark."


If we say that computers right now have emotions, then we must be  
able to define exactly the difference between unconscious and conscious
experience in the computer (for example in that computer that has  
won Kasparov). Can you do it?


Yes. It is the point of AUDA. We can do it in the theoretical  
framework, once we accept some theory (axiomatic) of knowledge.
Also, if your theory is that we (in the 3-sense) are not Turing  
emulable, you have to explain to us why, and what it adds to the
explanation.
With comp, the trick of both consciousness and matter is not entirely  
computable. You have to resist to a reductionist conception of numbers  
and machines.


No computer ever has emotions "right now"; they *always* have "right  
now" emotions. With comp, the mind-body link is a bit tricky. Real  
consciousness is better seen as associated with an infinity of  
computations instead of one, as years of local evolution have  
programmed us to do.





Hence I personally find this particular position of Craig's supported.


You may be missing the discovery of the universal machine and its self- 
reference logic.


Clark is right on this: emotions are easy, despite being able to run  
very deep, and to govern us. Easy but not so easy; you need the  
sensible-matter non-communicable hypostases.


The emotions of your laptop are unknown, and unmanifested, because your  
laptop has no deep persistent self-reference ability to share with  
you. We want a slave, and would be anxious in front of a machine  
taking too much independence.


Bruno


http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-04 Thread Evgenii Rudnyi

On 04.02.2012 01:10 meekerdb said the following:

On 2/3/2012 1:50 PM, Evgenii Rudnyi wrote:

On 03.02.2012 22:07 meekerdb said the following:

On 2/3/2012 12:23 PM, Evgenii Rudnyi wrote:

On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdb wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdb
wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind
of you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that
within the context of chess the machine acts just like a
person who had those emotions. So it had at least the
functional equivalent of those emotions. Whereas your
opinion is simple prejudice.

I agree my opinion would be simple prejudice had we not
already been over this issue a dozen times. My view is that
the whole idea that there can be a 'functional equivalent
of emotions' is completely unsupported. I give examples of
puppets, movies, trashcans that say THANK YOU,
voicemail...all of these things demonstrate that there need
not be any connection at all between function and interior
experience.


Except that in every case there is an emotion in your
examples...it's just the emotion of the puppeteer, the
screenwriter, the trashcan painter. But in the case of the
chess playing computer, there is no person providing the
'emotion' because the 'emotion' depends on complex and
unforeseeable events. Hence it is appropriate to attribute
the 'emotion' to the computer/program.

Brent


Craig's position that computers in the present form do not
have emotions is not unique, as emotions belong to
consciousness. A quote from my favorite book

Jeffrey A. Gray, Consciousness: Creeping up on the Hard
Problem.

The last sentence from the chapter "10.2 Conscious computers?"

p. 128 "Our further discussion here, however, will take it as
established that this can never happen."

Now the last paragraph from the chapter "10.3 Conscious
robots?"

p. 130. "So, while we may grant robots the power to form
meaningful categorical representations at a level reached by
the unconscious brain and by the behaviour controlled by the
unconscious brain, we should remain doubtful whether they are
likely to experience conscious percepts. This conclusion should
not, however, be over-interpreted. It does not necessarily
imply that human beings will never be able to build artefacts
with conscious experience. That will depend on how the trick of
consciousness is done. If and when we know the trick, it may be
possible to duplicate it. But the mere provision of behavioural
dispositions is unlikely to be up to the mark."

If we say that computers right now have emotions, then we must
be able to exactly define the difference between unconscious and
conscious experience in the computer (for example in the
computer that won against Kasparov). Can you do it?


Can you do it for people? For yourself? No. Experiments show
that people confuse the source of their own emotions. So your
requirement that we be able to "exactly define" is just something
you've invented.

Brent


I believe that there is at least a small difference. Presumably we
know everything about the computer that has played chess. Then it
seems that a hypothesis about emotions in that computer could be
verified without a problem - hence my notion of "exactly define".
On the other hand, consciousness remains a hard problem and
here "exactly define" does not work.

However, the latter does not mean that consciousness does not exist
as a phenomenon. Let us take life, for example. I would say that
there is no good definition of what life is ("exactly define" does
not work), yet this does not prevent science from researching it.
The same should hold for conscious experience.

Evgenii


So you've reversed your theory? If computers have emotions like
people we must *not* be able to exactly define them. And if we can
exactly define them, that must prove they are not like people?


No, I do not. My point was that we can check the statement "a computer 
has emotions" exactly. Then it would be possible to check if such a 
definition applies to people. I have nothing against such an approach - 
make a hypothesis about what emotion in a computer is, research it, and 
then try to apply this concept to people.


Yet, if we go in the other direction, from people to computers, then 
first we should research what emotion in a human being is. Here is the 
difference with the computer: we cannot right now make a strict 
definition. We can, though, still research emotions in people.



Actually, if we've made an intelligent chess playing computer, one
that learns from experience, we probably don't know everything about
it. We might be able to find out - but only in the sense that in
principle we could find out all the neural connections and functions
in a human brain. It's probably easier and more certain to just watch
behavior.

Re: Intelligence and consciousness

2012-02-03 Thread John Clark
On Fri, Feb 3, 2012  Craig Weinberg  wrote:

> Huge abacuses are a really good way to look at this, although it's pretty
> much the same as the China Brain.


I hope you're not talking about Searle's Chinese room, the stupidest
thought experiment in history.


> > Your position is [...] The abacus could literally be made to think
>

Yes.

> Do you see why I am incredulous about this?


No.



>  > I am crystal clear in my own understanding that no matter how good the
> program seems, Siri 5000 will feel exactly the same thing as Siri. Nothing.


I accept that you are absolutely positively 100% certain of the above, but
I do NOT accept that you are correct. I'm not a religious man so I don't
believe in divine revelation and that's the only way you could know what it
feels like to be Siri, hell you don't even know what it feels like to be me
and we are of the same species (I presume); all you can do is observe how
Siri and I behave and try to make conclusions about our inner life or lack
of same from that.

> As I continue to try to explain, awareness is not a function of objects,
> it is the symmetrically anomalous counterpart of objects.


Bafflegab.

> Experiences accumulate semantic charge


Semantic charge? Poetic crapola of that sort may impress some but not me.
If you can't express your ideas clearer than that they are not worth
expressing at all.


> > The beads will never learn anything. They are only beads.
>

Computers can and do learn things and a Turing Machine can simulate any
computer and you can make a Turing Machine from beads. I won't insult your
intelligence by spelling out the obvious conclusion from that fact.
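
To make the point concrete, here is a minimal sketch in plain Python; the
flip-bits machine and its transition table are invented purely for
illustration. The whole "computer" is a lookup table acting on a tape of
symbols, and each tape cell could just as well be a bead slid left or
right by hand:

    # A minimal Turing machine (illustrative only): the entire "computer"
    # is a transition table acting on a tape of symbols.
    def run_turing_machine(table, tape, state="start", head=0, max_steps=1000):
        tape = dict(enumerate(tape))          # sparse tape: position -> symbol
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, "_")      # "_" stands for a blank cell
            write, move, state = table[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    # Toy transition table: flip every bit, halt at the first blank.
    flip_bits = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run_turing_machine(flip_bits, "10110"))   # prints: 01001_

Nothing in the table cares whether the cells are transistors or beads;
that indifference to substrate is the whole force of the argument.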

> Machines are made of unconsciousness.


 Machines are made of atoms just like you and me.

> All machines are unconscious. That is how we can control them.


An argument that is already very weak and will become DRAMATICALLY weaker in
the future. In the long run there is no way we can control computers; they
are our slaves right now but that circumstance will not continue.

> That would not be necessary if the machine had any capacity to learn.
>

I don't know what you're talking about, machines have been able to learn
for decades.
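
For the record, the simplest classical example, the perceptron rule from the
late 1950s, fits in a few lines of Python. The toy data and learning rate
below are invented for the illustration; the point is that the weights are
adjusted by the machine from its own errors rather than written by a
programmer:

    # A 1950s-style perceptron learning the AND function (toy example).
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b, rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):                      # a few passes over the data suffice
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out               # the machine's own mistake is the signal
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            b += rate * err

    print(w, b)                              # weights the machine found by itself
    print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data])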

>> at a fundamental level no human being could write a computer program
>> like Siri and nobody knows how it works.
>>
>
I wouldn't say we don't know how it works. Binary logic is pretty
> straightforward.
>

One binary logic operation is pretty straightforward but *20,000 trillion
of them every second is not*, and that's what today's supercomputers can
do, and they are doubling in power every 18 months.
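
The arithmetic behind that is easy to check; taking the paragraph's own
figures as given (2e16 operations per second, one doubling every 18 months),
a few lines of Python project the growth:

    # Back-of-envelope projection from the figures quoted above.
    ops_now = 2e16                                 # 20,000 trillion ops/sec
    for years in (3, 6, 9):
        projected = ops_now * 2 ** (years / 1.5)   # one doubling per 1.5 years
        print(f"after {years} years: {projected:.1e} ops/sec")
    # after 3 years: 8.0e+16; after 6: 3.2e+17; after 9: 1.3e+18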

> That's the theory. Meanwhile, in reality, we have been using the same basic
> interface for computers since 1995.
>

What are you talking about? Siri is a computer interface as is Google and
even supercomputers didn't have them or anything close to it in 1995.

>>>  people in a vegetative state do sometimes have an inner life despite
>> their behavior.
>>
>
>> In the course of our conversations you have made declarative statements
> like the above dozens if not hundreds of times but you never seriously ask
> yourself "HOW DO I KNOW THIS?".
>

 >There is a lot of anecdotal evidence. People come out of comas.
>

So people come out of comas and you observe that they make certain sounds
with their mouths, then you make guesses about their inner life based on
those sounds. Siri can make sounds too.

> Recently a study proved it with MRI scans where the comatose patient was
> able to stimulate areas of their brain associated with coordinated physical
> activity in response to the scientists' request for them to imagine playing
> tennis.
>

And how do you know that stimulated brain areas have anything to do with
consciousness? By observing behavior when that happens and making guesses
that seem reasonable to you.

> Why doesn't it [a trash can] behave intelligently though?


We don't know exactly why people behave intelligently so we can't give a
definitive answer why a trash can doesn't, at least not yet, but in general
I can say that unlike a computer or a brain a trash can is not organized as
a Turing Machine.


> >> Just exactly like human beings that are manufactured out of stable,
> uniform, inanimate materials like amino acids.
>
> > I disagree. Organic chemistry is volatile. It reeks.
>

Oh for God's sake, now consciousness must stink! Well when selenium
rectifiers fail they reek to high heaven!

> Molecules may be too primitive to be described as part of us.
>

Primitive or not you are made of molecules and molecules are made of atoms
and if you've seen one atom you've seen them all.

> I like how you start out grandstanding against prejudice and superficial
> assumptions and end with completely blowing off Mr. Joe Blow.


Thank you, I thought it was rather good myself.

 John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.

Re: Intelligence and consciousness

2012-02-03 Thread meekerdb

On 2/3/2012 1:50 PM, Evgenii Rudnyi wrote:

On 03.02.2012 22:07 meekerdb said the following:

On 2/3/2012 12:23 PM, Evgenii Rudnyi wrote:

On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdb wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdb wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of
you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within
the context of chess the machine acts just like a person who
had those emotions. So it had at least the functional
equivalent of those emotions. Whereas your opinion is simple
prejudice.

I agree my opinion would be simple prejudice had we not
already been over this issue a dozen times. My view is that the
whole idea that there can be a 'functional equivalent of
emotions' is completely unsupported. I give examples of
puppets, movies, trashcans that say THANK YOU, voicemail...all
of these things demonstrate that there need not be any
connection at all between function and interior experience.


Except that in every case there is an emotion in your
examples...it's just the emotion of the puppeteer, the
screenwriter, the trashcan painter. But in the case of the chess
playing computer, there is no person providing the 'emotion'
because the 'emotion' depends on complex and unforeseeable
events. Hence it is appropriate to attribute the 'emotion' to the
computer/program.

Brent


Craig's position that computers in the present form do not have
emotions is not unique, as emotions belong to consciousness. A
quote from my favorite book

Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter "10.2 Conscious computers?"

p. 128 "Our further discussion here, however, will take it as
established that this can never happen."

Now the last paragraph from the chapter "10.3 Conscious robots?"

p. 130. "So, while we may grant robots the power to form meaningful
 categorical representations at a level reached by the unconscious
 brain and by the behaviour controlled by the unconscious brain, we
 should remain doubtful whether they are likely to experience
conscious percepts. This conclusion should not, however, be
over-interpreted. It does not necessarily imply that human beings
will never be able to build artefacts with conscious experience.
That will depend on how the trick of consciousness is done. If and
when we know the trick, it may be possible to duplicate it. But the
mere provision of behavioural dispositions is unlikely to be up to
the mark."

If we say that computers right now have emotions, then we must be
able to exactly define the difference between unconscious and
conscious experience in the computer (for example in the computer
that won against Kasparov). Can you do it?


Can you do it for people? For yourself? No. Experiments show that
people confuse the source of their own emotions. So your requirement
that we be able to "exactly define" is just something you've
invented.

Brent


I believe that there is at least a small difference. Presumably we know everything about 
the computer that has played chess. Then it seems that a hypothesis about emotions in 
that computer could be verified without a problem - hence my notion of "exactly define". 
On the other hand, consciousness remains a hard problem and here "exactly define" 
does not work.


However, the latter does not mean that consciousness does not exist as a phenomenon. Let 
us take life, for example. I would say that there is no good definition of what life is 
("exactly define" does not work), yet this does not prevent science from researching it. 
The same should hold for conscious experience.


Evgenii


So you've reversed your theory?  If computers have emotions like people we must *not* be 
able to exactly define them.  And if we can exactly define them, that must prove they are 
not like people?


Actually, if we've made an intelligent chess playing computer, one that learns from 
experience, we probably don't know everything about it.  We might be able to find out - 
but only in the sense that in principle we could find out all the neural connections and 
functions in a human brain.  It's probably easier and more certain to just watch behavior.


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-03 Thread Craig Weinberg
On Feb 3, 11:22 am, John Clark  wrote:
> On Fri, Feb 3, 2012 Craig Weinberg  wrote:
>
>
>
> > An abacus is a computer. Left to its own devices it's just a rectangle
> > of wood and bamboo or whatever.
>
> That is true so although it certainly needs to be huge we can't just make
> our very very big abacuses even bigger and expect them to be intelligent,
> it's not just a hardware problem of wiring together more microchips, we must
> teach (program) the abacus to learn on its own.

Huge abacuses are a really good way to look at this, although it's
pretty much the same as the China Brain. Your position is that if we
made a gigantic abacus the size of our solar system, and had a person
manning each sliding bead, that there would be a possibility that even
though each person could only slide their bead to the left or right
according to instructions, telling others or being told by others to
slide their beads left or right, with a well-crafted enough sequence of
instructions the abacus itself would begin to have a conscious
experience. The abacus could literally be made to think that it had
been born and come to life as a human being, a turtle, a fictional
character...anything we choose to convert into a sequence we feel
reflects the functionality of these beings will actually cause the
abacus to experience being that thing.

Do you see why I am incredulous about this? I understand that if you
assume comp this seems feasible, after all, we can make CleverBot and
Siri, etc. The problem is that CleverBot and Siri live in our very
human minds. They do not have lives and dream of being Siri II. I
understand that you and others here are convinced that logic would
dictate that we can't prove that Siri 5000 won't feel our feelings and
understand what we understand, but I am crystal clear in my own
understanding that no matter how good the program seems, Siri 5000
will feel exactly the same thing as Siri. Nothing. No more than the
instructions being called out on the scaffolds of the Mega Abacus will
begin to feel something some day. It doesn't work that way. Why? As I
continue to try to explain, awareness is not a function of objects, it
is the symmetrically anomalous counterpart of objects. Experiences
accumulate semantic charge - significance - over time, which, in a
natural circumstance, is reflected in the objective shadow, as
external agendas are reflected in the subjective experience. The
abacus and computer's instructions will never be native to them. The
beads will never learn anything. They are only beads.

>That is a very difficult
> task but enormous progress has been made in the last few years; as I said
> before, in 1998 nobody knew how to program even the largest 30 million
> dollar super abacus in the world to perform acts of intelligence that today
> can be done by that $399 iPhone abacus in your pocket.

I know. I've heard. As I say, no version is any closer to awareness of
any kind than the first version.

>
> I admit that it could turn out that humans just aren't smart enough to know
> how to teach a computer to be as smart or smarter than they are,

It's not a matter of being smart enough. You can't turn up into down.
Machines are made of unconsciousness. All machines are unconscious.
That is how we can control them. Consciousness and mechanism are
mutually exclusive by definition and always will be.

> but that
> doesn't mean it won't happen because humans have help, computers
> themselves. In a sense that's already true, a computer program needs to be
> in zeros and ones but nobody could write the Siri program that way, but we
> have computer assemblers and compilers to do that so we can write in a much
> higher level language than zeros and ones.

That would not be necessary if the machine had any capacity to learn.
Like the neurons of our brain, the microprocessors would adapt and
begin to understand natural human language.

> So at a fundamental level no
> human being could write a computer program like Siri and nobody knows how
> it works.

I wouldn't say we don't know how it works. Binary logic is pretty
straightforward.

> But programs like that get written nevertheless. And as computers
> get better the tools for writing programs get better and intelligent
> programs even more complex than Siri will get written with even less human
> understanding of their operation. The process builds on itself and thus
> accelerates.

That's the theory. Meanwhile, in reality, we have been using the same basic
interface for computers since 1995.

>
> >  > people in a vegetative state do sometimes have an inner life despite
> > their behavior.
>
> In the course of our conversations you have made declarative statements
> like the above dozens if not hundreds of times but you never seriously ask
> yourself "HOW DO I KNOW THIS?".

There is a lot of anecdotal evidence. People come out of comas.
Recently a study proved it with MRI scans where the comatose patient
was able to stimulate areas of their brain associated with coordinated
physical activity in response to the scientists' request for them to
imagine playing tennis.

Re: Intelligence and consciousness

2012-02-03 Thread Evgenii Rudnyi

On 03.02.2012 22:07 meekerdb said the following:

On 2/3/2012 12:23 PM, Evgenii Rudnyi wrote:

On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdb wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdb wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of
you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within
the context of chess the machine acts just like a person who
had those emotions. So it had at least the functional
equivalent of those emotions. Whereas your opinion is simple
prejudice.

I agree my opinion would be simple prejudice had we not
already been over this issue a dozen times. My view is that the
whole idea that there can be a 'functional equivalent of
emotions' is completely unsupported. I give examples of
puppets, movies, trashcans that say THANK YOU, voicemail...all
of these things demonstrate that there need not be any
connection at all between function and interior experience.


Except that in every case there is an emotion in your
examples...it's just the emotion of the puppeteer, the
screenwriter, the trashcan painter. But in the case of the chess
playing computer, there is no person providing the 'emotion'
because the 'emotion' depends on complex and unforeseeable
events. Hence it is appropriate to attribute the 'emotion' to the
computer/program.

Brent


Craig's position that computers in the present form do not have
emotions is not unique, as emotions belong to consciousness. A
quote from my favorite book

Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter "10.2 Conscious computers?"

p. 128 "Our further discussion here, however, will take it as
established that this can never happen."

Now the last paragraph from the chapter "10.3 Conscious robots?"

p. 130. "So, while we may grant robots the power to form meaningful
 categorical representations at a level reached by the unconscious
 brain and by the behaviour controlled by the unconscious brain, we
 should remain doubtful whether they are likely to experience
conscious percepts. This conclusion should not, however, be
over-interpreted. It does not necessarily imply that human beings
will never be able to build artefacts with conscious experience.
That will depend on how the trick of consciousness is done. If and
when we know the trick, it may be possible to duplicate it. But the
mere provision of behavioural dispositions is unlikely to be up to
the mark."

If we say that computers right now have emotions, then we must be
able to exactly define the difference between unconscious and
conscious experience in the computer (for example in the computer
that won against Kasparov). Can you do it?


Can you do it for people? For yourself? No. Experiments show that
people confuse the source of their own emotions. So your requirement
that we be able to "exactly define" is just something you've
invented.

Brent


I believe that there is at least a small difference. Presumably we know 
everything about the computer that has played chess. Then it seems that 
a hypothesis about emotions in that computer could be verified without a 
problem - hence my notion of "exactly define". On the other hand, 
consciousness remains a hard problem and here "exactly define" 
does not work.


However, the latter does not mean that consciousness does not exist as a 
phenomenon. Let us take life, for example. I would say that there is no 
good definition of what life is ("exactly define" does not work), yet 
this does not prevent science from researching it. The same should hold 
for conscious experience.


Evgenii






Hence I personally find this particular position of Craig's
supported.

Evgenii





--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-03 Thread meekerdb

On 2/3/2012 12:23 PM, Evgenii Rudnyi wrote:

On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdb wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdb wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of you to
inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within the
context of chess the machine acts just like a person who had
those emotions. So it had at least the functional equivalent of
those emotions. Whereas your opinion is simple prejudice.

I agree my opinion would be simple prejudice had we not already
been over this issue a dozen times. My view is that the whole idea
that there can be a 'functional equivalent of emotions' is
completely unsupported. I give examples of puppets, movies,
trashcans that say THANK YOU, voicemail...all of these things
demonstrate that there need not be any connection at all between
function and interior experience.


Except that in every case there is an emotion in your examples...it's
 just the emotion of the puppeteer, the screenwriter, the trashcan
painter. But in the case of the chess playing computer, there is no
person providing the 'emotion' because the 'emotion' depends on
complex and unforeseeable events. Hence it is appropriate to
attribute the 'emotion' to the computer/program.

Brent


Craig's position that computers in the present form do not have emotions is not unique, 
as emotions belong to consciousness. A quote from my favorite book


Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter "10.2 Conscious computers?"

p. 128 "Our further discussion here, however, will take it as established that his can 
never happen."


Now the last paragraph from the chapter "10.3 Conscious robots?"

p. 130. "So, while we may grant robots the power to form meaningful categorical 
representations at a level reached by the unconscious brain and by the behaviour 
controlled by the unconscious brain, we should remain doubtful whether they are likely 
to experience conscious percepts. This conclusion should not, however, be 
over-interpreted. It does not necessarily imply that human beings will never be able to 
build artefacts with conscious experience. That will depend on how the trick of 
consciousness is done. If and when we know the trick, it may be possible to duplicate 
it. But the mere provision of behavioural dispositions is unlikely to be up to the mark."


If we say that computers right now have emotions, then we must be able to exactly define 
the difference between unconscious and conscious experience in the computer (for example 
in the computer that won against Kasparov). Can you do it?


Can you do it for people? For yourself?  No.  Experiments show that people confuse the 
source of their own emotions.  So your requirement that we be able to "exactly define" is 
just something you've invented.


Brent





Hence I personally find this particular position of Craig's supported.

Evgenii



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-03 Thread Evgenii Rudnyi

On 02.02.2012 21:49 meekerdb said the following:

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdb wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdb wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote: So kind of you to
inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within the
context of chess the machine acts just like a person who had
those emotions. So it had at least the functional equivalent of
those emotions. Whereas your opinion is simple prejudice.

I agree my opinion would be simple prejudice had we not already
been over this issue a dozen times. My view is that the whole idea
that there can be a 'functional equivalent of emotions' is
completely unsupported. I give examples of puppets, movies,
trashcans that say THANK YOU, voicemail...all of these things
demonstrate that there need not be any connection at all between
function and interior experience.


Except that in every case there is an emotion in your examples...it's
 just the emotion of the puppeteer, the screenwriter, the trashcan
painter. But in the case of the chess playing computer, there is no
person providing the 'emotion' because the 'emotion' depends on
complex and unforeseeable events. Hence it is appropriate to
attribute the 'emotion' to the computer/program.

Brent


Craig's position that computers in the present form do not have emotions 
is not unique, as emotions belong to consciousness. A quote from my 
favorite book


Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter "10.2 Conscious computers?"

p. 128 "Our further discussion here, however, will take it as 
established that this can never happen."


Now the last paragraph from the chapter "10.3 Conscious robots?"

p. 130. "So, while we may grant robots the power to form meaningful 
categorical representations at a level reached by the unconscious brain 
and by the behaviour controlled by the unconscious brain, we should 
remain doubtful whether they are likely to experience conscious 
percepts. This conclusion should not, however, be over-interpreted. It 
does not necessarily imply that human beings will never be able to build 
artefacts with conscious experience. That will depend on how the trick 
of consciousness is done. If and when we know the trick, it may be 
possible to duplicate it. But the mere provision of behavioural 
dispositions is unlikely to be up to the mark."


If we say that computers right now have emotions, then we must be able 
to exactly define the difference between unconscious and conscious 
experience in the computer (for example in the computer that won against 
Kasparov). Can you do it?


Hence I personally find this particular position of Craig's supported.

Evgenii

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-03 Thread John Clark
On Fri, Feb 3, 2012 Craig Weinberg  wrote:

>
> > An abacus is a computer. Left to its own devices it's just a rectangle
> of wood and bamboo or whatever.


That is true, so although it certainly needs to be huge we can't just make
our very very big abacuses even bigger and expect them to be intelligent;
it's not just a hardware problem of wiring together more microchips, we must
teach (program) the abacus to learn on its own. That is a very difficult
task but enormous progress has been made in the last few years; as I said
before, in 1998 nobody knew how to program even the largest 30 million
dollar super abacus in the world to perform acts of intelligence that today
can be done by that $399 iPhone abacus in your pocket.

I admit that it could turn out that humans just aren't smart enough to know
how to teach a computer to be as smart or smarter than they are, but that
doesn't mean it won't happen because humans have help, computers
themselves. In a sense that's already true, a computer program needs to be
in zeros and ones but nobody could write the Siri program that way, but we
have computer assemblers and compilers to do that so we can write in a much
higher level language than zeros and ones. So at a fundamental level no
human being could write a computer program like Siri and nobody knows how
it works. But programs like that get written nevertheless. And as computers
get better the tools for writing programs get better and intelligent
programs even more complex than Siri will get written with even less human
understanding of their operation. The process builds on itself and thus
accelerates.
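
Python's own toolchain shows this layering in miniature; the function below
is an invented example. A human writes the high-level line, and the
compiler, not the human, produces the low-level form:

    # The layering described above, in miniature: source in, bytecode out.
    import dis

    def add(a, b):
        return a + b

    dis.dis(add)                       # the bytecode a compiler generated for us
    print(add.__code__.co_code.hex())  # its raw bytes, one step from zeros and ones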

>  > people in a vegetative state do sometimes have an inner life despite
> their behavior.


In the course of our conversations you have made declarative statements
like the above dozens if not hundreds of times but you never seriously ask
yourself "HOW DO I KNOW THIS?".


> > we certainly don't owe a trashcan lid any such benefit of the doubt.


Why "certainly", why are you so certain? I know why I am but I can't figure
out why you are. Like you I also think the idea that a plastic trashcan can
have an inner life is ridiculous but unlike you I can give a clear logical
reason WHY I think it's ridiculous: a trash can does not behave
intelligently.

> Like a computer, it is manufactured out of materials selected
> specifically for their stable, uniform, inanimate properties.


Just exactly like human beings that are manufactured out of stable,
uniform, inanimate materials like amino acids.

> I understand what you mean though, and yes, our perception of something's
> behavior is a primary tool to how we think of it, but not the only one.
> More important is the influence of conventional wisdom in a given society
> or group.


At one time the conventional wisdom in society was that black people didn't
have much of an inner life, certainly nothing like that of white people, so
whites could own people of a darker hue and do whatever they wanted to them
without guilt. Do you really expect that Mr. Joe Blow and his conventional
wisdom can teach us anything about the future of computers?

 John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-03 Thread Craig Weinberg
On Feb 2, 4:33 pm, John Clark  wrote:
> On Thu, Feb 2, 2012  Craig Weinberg  wrote:
>
> > Do you have any examples of an intelligent organism which evolved without
> > emotion?
>
> Intelligence is not possible without emotion,

That's what I'm saying. Are you suddenly switching to my argument?

> but emotion is possible
> without intelligence.

Of course.

> And I was surprised you asked me for an example of an
> emotional organism with or without intelligence, how in the world could I
> do that?  You refuse to accept behavior as evidence for emotion or even for
> intelligence, so I can't tell if anyone or anything is emotional or
> intelligent in the entire universe except for me.

But you can tell if something seems like it is.

>
> > > The whole idea of evolution 'figuring out' anything is not consistent
> > with our understanding of natural selection.
>
> It's a figure of speech.
>
> > > Natural selection is not teleological.
>
> A keen grasp of the obvious.

Sorry, I can't tell here whether you are using it as a
figure of speech or positing agency to evolution. Even using that as a
figure of speech suggests an exaggeration of the role of evolution in
the universe.

>
> > The subject was things that influenced my theory. Light, electricity, and
> > electromagnetism are significant influences.
>
> Electromagnetism significantly influences everything as do the other 3
> fundamental physical forces. Tell me something new.

If I yell out 'Tesla' and then get hit by lightning, is that not
associated strongly enough for you to notice?

>
> > > What do you think understanding is actually supposed to lead to?
>
> Your ideas lead to navel gazing not understanding.

And your ideas lead to...?

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-03 Thread Craig Weinberg
On Feb 2, 4:05 pm, John Clark  wrote:
> On Thu, Feb 2, 2012 Craig Weinberg  wrote:
>
> > My view is that the whole idea that there can be a 'functional equivalent
> > of emotions' is completely unsupported. I give examples of puppets
>
> A puppet needs a puppeteer, a computer does not.

Yes it does. It needs a user at some point to make any sense at all of
what the computer is doing. An abacus is a computer. Left to its own
devices it's just a rectangle of wood and bamboo or whatever. Attach a
motor to it that does computations with it, and you still have a
rectangle of wood with a metal motor attached to it clicking out
meaningless non-patterns in the abacus with nothing to recognize the
patterns but the occasional ant getting injured by a rapidly sliding
bead.

>
> > movies, trashcans that say THANK YOU, voicemail...all of these things
> > demonstrate that there need not be any connection at all between function
> > and interior experience.
>
> For all these examples to be effective you would need to know that they do
> not have an inner life, and how do you know they don't have an inner life?
> You know because they don't behave as if they have an inner life. Behavior
> is the only tool we have for detecting such things in others so your
> examples are useless.

No, because people in a vegetative state do sometimes have an inner
life despite their behavior. It is our similarity to and familiarity
with other humans that encourages us to give them the benefit of the
doubt. We go the extra mile to see if we can figure out if they are
still alive. Most of us don't care as much whether a steer is alive
when we are executing it for hamburger patties or whether a carrot feels
something when we rip it out of the ground. With that in mind, we
certainly don't owe a trashcan lid any such benefit of the doubt. Like
a computer, it is manufactured out of materials selected specifically
for their stable, uniform, inanimate properties. I understand what you
mean though, and yes, our perception of something's behavior is a
primary tool to how we think of it, but not the only one. More
important is the influence of conventional wisdom in a given society
or group. We like to eat beef so most of us rationalize it without
much thought despite the sentient behavior of steer. We like the idea
of AI so we project the possibility of feeling and understanding on it
- we go out of our way to prove that it is possible despite the
automatic and mechanical behavior of the instruments.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-02 Thread John Clark
On Thu, Feb 2, 2012  Craig Weinberg  wrote:

> Do you have any examples of an intelligent organism which evolved without
> emotion?


Intelligence is not possible without emotion, but emotion is possible
without intelligence. And I was surprised you asked me for an example of an
emotional organism with or without intelligence, how in the world could I
do that?  You refuse to accept behavior as evidence for emotion or even for
intelligence, so I can't tell if anyone or anything is emotional or
intelligent in the entire universe except for me.


> > The whole idea of evolution 'figuring out' anything is not consistent
> with our understanding of natural selection.


It's a figure of speech.


> > Natural selection is not teleological.


A keen grasp of the obvious.

> The subject was things that influenced my theory. Light, electricity, and
> electromagnetism are significant influences.
>

Electromagnetism significantly influences everything as do the other 3
fundamental physical forces. Tell me something new.


> > What do you think understanding is actually supposed to lead to?
>

Your ideas lead to navel gazing not understanding.

  John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-02 Thread John Clark
On Thu, Feb 2, 2012 Craig Weinberg  wrote:

> My view is that the whole idea that there can be a 'functional equivalent
> of emotions' is completely unsupported. I give examples of puppets


A puppet needs a puppeteer, a computer does not.

> movies, trashcans that say THANK YOU, voicemail...all of these things
> demonstrate that there need not be any connection at all between function
> and interior experience.
>

For all these examples to be effective you would need to know that they do
not have an inner life, and how do you know they don't have an inner life?
You know because they don't behave as if they have an inner life. Behavior
is the only tool we have for detecting such things in others so your
examples are useless.

   John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-02 Thread Craig Weinberg
On Jan 31, 3:25 pm, John Mikes  wrote:
> Craig and Brent:
> would you kindly disclose an opinion that can be
> deemed "SUPPORTED"?
>
> All our 'support' (evidence, verification whatever) comes from mostly
> uninformed information fragments we receive by observation(?) of the
> already accessible details and try to complete them by the available
> 'knowledge' already established as "conventional science" stuff.
> We use instruments, constructed to work on the just described portion of
> observations and evaluate the 'received'(?) data by our flimsy
> (*human?*) mathematical logic ONLY.

Yes! This is a core assumption of multisense realism. I go a step
further to describe that not only do our observations arise entirely
from the qualities of our observational capacities (senses, sense
making), but that the nature of our senses is such that what we
observe as being within us 'seems to be' many different ways, but the
more distant observations are understood in terms of facts that
'simply are'. This forms the basis for our human worldviews, with the
far-sighted approaches being overly anthropomorphic and the
mechanistic approaches being the near-sighted view.

> Just compare "opinions" (scientific that is) of different ages before (and
> after) different levels of accepted (and believed!) informational basis
> (like Flat Earth, BEFORE electricity, BEFORE Marie Curie, Watson, etc.)
>
> My "worldview" (and my narrative, of course) is also based on UNSUPPORTED
> OPINION:  "mine".

Exactly. This is the native orientation of the universe. The impulse
to validate that opinion externally is valuable, but it also can
seduce us into a false certainty. This is not an illusion, it is
actually how the universe works. In my unsupported opinion.

Craig

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-02 Thread Craig Weinberg
On Jan 31, 1:33 pm, John Clark  wrote:
>
>
>
> > > The Limbic system predates the Neocortex evolutionarily.
>
> As I've said on this list many times.
>
> > > There is no reason to think that emotion emerged after intelligence.
>
> And as I've said emotion is about 500 million years old but Evolution found
> intelligence much harder to produce, it only figured out how to do it about
> a million years ago, perhaps less.

Do you have any examples of an intelligent organism which evolved
without emotion? The whole idea of evolution 'figuring out' anything
is not consistent with our understanding of natural selection. Natural
selection is not teleological. There is no figuring, only statistical
probabilities related to environmental conditions.
>
> > Evolution doesn't see anything.
>
> Don't be ridiculous.

I'm not. Statistics don't see things.

>
> > Which thoughtless fallacy should I choose? Oh right, I have no free will
> > anyhow so some reason will choose for me.
>
> Cannot comment, don't know what ASCII string "free will" means.
>
> > > You asked what influenced my theory. You don't see how Tesla relates to
> > lightning and electromagnetism?
>
> I made a Tesla Coil when I was 14; it was great fun, looked great, and really
> impressed the rubes, but I don't see the relevance to the subject at hand.

The subject was things that influenced my theory. Light, electricity,
and electromagnetism are significant influences.

>
> > That is exactly what the cosmos is - things happening for a reason and
> > not happening for a
> > reason at the same time.
>
> And you expect this sort of new age crapola to actually lead to something,

What do you think understanding is actually supposed to lead to?

> like a basic understanding of how the world works? Dream on. But then again
> it might work if you're right about logic not existing.

Logic exists, but it's not the only thing that exists.

>
> > Is there anyone noteworthy in the history of human progress who has not
> > been called insane?
>
> Richard Feynman.

http://inspirescience.wordpress.com/2010/11/09/richard-p-feynman1/

Richard P. Feynman – Crazy as he is Genius

"He didn’t do things through conventional or traditional means, but
rather was eccentric, crazy, and went against the norms of society. He
was an explorer of the deepest nature. He adopted at a young age the
philosophy that you should never care what other people think, because
everyone else is most likely wrong."

"When I see equations, I see the letters in colors – I don't know
why. As I'm talking, I see vague pictures of Bessel functions from
Jahnke and Emde's book, with light-tan j's, slightly violet-bluish
n's, and dark brown x's flying around. And I wonder what the hell it
must look like to the students." - Richard Feynman.

Craig

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-02 Thread meekerdb

On 2/2/2012 12:38 PM, Craig Weinberg wrote:

On Jan 30, 6:54 pm, meekerdb  wrote:

On 1/30/2012 3:14 PM, Craig Weinberg wrote:


On Jan 30, 6:08 pm, meekerdbwrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote:
So kind of you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.

Except that my opinion is supported by the fact that within the context of
chess the machine acts just like a person who had those emotions.  So it had
at least the functional equivalent of those emotions. Whereas your opinion is
simple prejudice.

I agree my opinion would be simple prejudice had we not already been
over this issue a dozen times. My view is that the whole idea that
there can be a 'functional equivalent of emotions' is completely
unsupported. I give examples of puppets, movies, trashcans that say
THANK YOU, voicemail...all of these things demonstrate that there need
not be any connection at all between function and interior experience.

Except that in every case there is an emotion in your examples...it's just the emotion of 
the puppeteer, the screenwriter, the trashcan painter.  But in the case of the chess 
playing computer, there is no person providing the 'emotion' because the 'emotion' depends 
on complex and unforeseeable events.  Hence it is appropriate to attribute the 'emotion' 
to the computer/program.


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-02-02 Thread Craig Weinberg
On Jan 30, 6:54 pm, meekerdb  wrote:
> On 1/30/2012 3:14 PM, Craig Weinberg wrote:
>
> > On Jan 30, 6:08 pm, meekerdb  wrote:
> >> On 1/30/2012 2:52 PM, Craig Weinberg wrote:
> >> So kind of you to inform us of your unsupported opinion.
> > I was commenting on your unsupported opinion.
>
> Except that my opinion is supported by the fact that within the context of
> chess the machine acts just like a person who had those emotions.  So it had
> at least the functional equivalent of those emotions. Whereas your opinion
> is simple prejudice.

I agree my opinion would be simple prejudice had we not already been
over this issue a dozen times. My view is that the whole idea that
there can be a 'functional equivalent of emotions' is completely
unsupported. I give examples of puppets, movies, trashcans that say
THANK YOU, voicemail...all of these things demonstrate that there need
not be any connection at all between function and interior experience.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-01-31 Thread John Mikes
Craig and Brent:
would you kindly disclose an opinion that can be
deemed "SUPPORTED"?

All our 'support' (evidence, verification whatever) comes from mostly
uninformed information fragments we receive by observation(?) of the
already accessible details and try to complete them by the available
'knowledge' already established as "conventional science" stuff.
We use instruments, constructed to work on the just described portion of
observations and evaluate the 'received'(?) data by our flimsy
(*human?*) mathematical logic ONLY.
Just compare "opinions" (scientific that is) of different ages before (and
after) different levels of accepted (and believed!) informational basis
(like Flat Earth, BEFORE electricity, BEFORE Marie Curie, Watson, etc.)

My "worldview" (and my narrative, of course) is also based on UNSUPPORTED
OPINION:  "mine".

John Mikes

On Mon, Jan 30, 2012 at 6:54 PM, meekerdb  wrote:

> On 1/30/2012 3:14 PM, Craig Weinberg wrote:
>
> On Jan 30, 6:08 pm, meekerdb   
> wrote:
>
> On 1/30/2012 2:52 PM, Craig Weinberg wrote:
>
> So kind of you to inform us of your unsupported opinion.
>
> I was commenting on your unsupported opinion.
>
>
>
> Except that my opinion is supported by the fact that within the context of
> chess the machine acts just like a person who had those emotions.  So it
> had at least the functional equivalent of those emotions. Whereas your
> opinion is simple prejudice.
>
> Brent
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To post to this group, send email to everything-list@googlegroups.com.
> To unsubscribe from this group, send email to
> everything-list+unsubscr...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/everything-list?hl=en.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-01-31 Thread John Clark
On Mon, Jan 30, 2012  Craig Weinberg  wrote:

>
> > The Limbic system predates the Neocortex evolutionarily.


As I've said on this list many times.


> > There is no reason to think that emotion emerged after intelligence.
>

And as I've said emotion is about 500 million years old, but Evolution found
intelligence much harder to produce; it only figured out how to do it about
a million years ago, perhaps less.

> Evolution doesn't see anything.


Don't be ridiculous.

> Which thoughtless fallacy should I choose? Oh right, I have no free will
> anyhow so some reason will choose for me.
>

Cannot comment, don't know what ASCII string "free will" means.


> > You asked what influenced my theory. You don't see how Tesla relates to
> lightning and electromagnetism?
>

I made a Tesla Coil when I was 14; it was great fun, looked great, and really
impressed the rubes, but I don't see the relevance to the subject at hand.

> That is exactly what the cosmos is - things happening for a reason and
> not happening for a
> reason at the same time.
>

And you expect this sort of new age crapola to actually lead to something,
like a basic understanding of how the world works? Dream on. But then again
it might work if you're right about logic not existing.

> Is there anyone noteworthy in the history of human progress who has not
> been called insane?
>

Richard Feynman.

  John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Intelligence and consciousness

2012-01-30 Thread meekerdb

On 1/30/2012 3:14 PM, Craig Weinberg wrote:

On Jan 30, 6:08 pm, meekerdb  wrote:

On 1/30/2012 2:52 PM, Craig Weinberg wrote:
So kind of you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.



Except that my opinion is supported by the fact that within the context of chess the 
machine acts just like a person who had those emotions.  So it had at least the functional 
equivalent of those emotions. Whereas your opinion is simple prejudice.


Brent







Re: Intelligence and consciousness

2012-01-30 Thread Craig Weinberg
On Jan 30, 6:08 pm, meekerdb  wrote:
> On 1/30/2012 2:52 PM, Craig Weinberg wrote:

> So kind of you to inform us of your unsupported opinion.

I was commenting on your unsupported opinion.




Re: Intelligence and consciousness

2012-01-30 Thread meekerdb

On 1/30/2012 2:52 PM, Craig Weinberg wrote:

On Jan 30, 5:16 pm, meekerdb  wrote:


Sure it did.  If it had been equipped to express them they would have been 
something like,
"This position feels good."  "That position feels weak." etc.  Not much 
range...but
emotions nevertheless.

You seriously believe that? Wow. That makes Santa Claus seem quite
plausible to me by comparison.

What Deep Blue thinks can be expressed directly as a memory dump. It
looks like hex code. It has zero feelings, zero thoughts.


So kind of you to inform us of your unsupported opinion.

Brent



It's a
sewing machine that moves chess pieces instead of stitches.






Re: Intelligence and consciousness

2012-01-30 Thread Craig Weinberg
On Jan 30, 5:16 pm, meekerdb  wrote:

> Sure it did.  If it had been equipped to express them they would have been 
> something like,
> "This position feels good."  "That position feels weak." etc.  Not much 
> range...but
> emotions nevertheless.

You seriously believe that? Wow. That makes Santa Claus seem quite
plausible to me by comparison.

What Deep Blue thinks can be expressed directly as a memory dump. It
looks like hex code. It has zero feelings, zero thoughts. It's a
sewing machine that moves chess pieces instead of stitches.




Re: Intelligence and consciousness

2012-01-30 Thread meekerdb

On 1/30/2012 10:27 AM, John Clark wrote:

On Mon, Jan 30, 2012  meekerdb  wrote:

> Of course evolution can't 'see' intelligence either.  As you say selection can
> only be based on action.  But action takes emotion


OK I have no problem with that, but then Deep Blue had emotions way back in 1996



Sure it did.  If it had been equipped to express them they would have been something like, 
"This position feels good."  "That position feels weak." etc.  Not much range...but 
emotions nevertheless.


Brent

when it beat the best human Chess  player in the world because in the course of that 
game Deep Blue performed actions, very intelligent actions in fact. As I've said many 
times emotion is easy but intelligence is hard.


 John K Clark
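
An illustrative aside on the technical claim here: the "feelings" Brent
attributes to Deep Blue live in its evaluation function, which reduces a
position to a single preference score. A minimal Python sketch of that idea
follows; it is not Deep Blue's actual code, and the piece encoding, the
values, and the evaluate() helper are all assumptions invented for the
example.

    # Toy material-count evaluation, in the spirit of (but not taken from)
    # chess engines: a position "feels good" when its score is positive.
    PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

    def evaluate(position):
        # position is assumed to be a list like ['Q', 'P', 'r'];
        # uppercase = White's pieces, lowercase = Black's.
        score = 0
        for piece in position:
            value = PIECE_VALUES.get(piece.upper(), 0)
            score += value if piece.isupper() else -value
        return score

    # White queen and pawn against a Black rook: +5 for White.
    print(evaluate(['Q', 'P', 'r']))  # -> 5

The score does nothing but rank positions; whether that ranking deserves the
word "emotion" is what the thread is arguing about.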





Re: Intelligence and consciousness

2012-01-30 Thread Craig Weinberg
On Jan 30, 1:27 pm, John Clark  wrote:
> On Mon, Jan 30, 2012  meekerdb  wrote:
>
> > Of course evolution can't 'see' intelligence either.  As you say
> > selection can only be based on action.  But action takes emotion
>
> OK I have no problem with that, but then Deep Blue had emotions way back in
> 1996 when it beat the best human Chess  player in the world because in the
> course of that game Deep Blue performed actions, very intelligent actions
> in fact. As I've said many times emotion is easy but intelligence is hard.

These are the absurdities that arise from deciding that AI is
literally intelligence. Why stop there, why not say Deep Blue must
have had a stomach and teeth?




Re: Intelligence and consciousness

2012-01-30 Thread Craig Weinberg
On Jan 30, 12:47 pm, John Clark  wrote:
> On Sun, Jan 29, 2012 at 8:53 PM, Craig Weinberg wrote:
>
> > I just understand that intelligence is an evolution of emotion,
>
> There is simply no logical way that could be true.

I think that it's a medical fact. 
http://www.mta.org/eweb/docs/journal/2000/summer/Figure3-1.JPG

The Limbic system predates the Neocortex evolutionarily. There is no
reason to think that emotion emerged after intelligence.

> However important it may
> be to us Evolution can not see emotion or consciousness, Evolution can only
> see actions,

Evolution doesn't see anything. If you live somewhere that gets cold
and you die before you can reproduce then evolution has selected for
your neighbors in warmer climes. There is no action that would have
changed that outcome, unless you could have gotten to a warmer place.
If you happened to be someone who really disliked the cold, if it made
you so upset and cranky that it caused you to migrate sooner, well
then, evolution selected for emotion and consciousness related to cold
weather.

That's not what I'm talking about at all though. I'm talking about the
logic of what emotion is leading to thinking and not the other way
around. If you cannot feel emotion, you never will regardless of your
thought patterns. If you are emotional however, you can develop
thoughts. This is the natural course of development in human infancy.
Sensation first, then emotion, then communication.

> so either emotion and consciousness are a byproduct of
> intelligence or emotion and consciousness do not exist.

Which thoughtless fallacy should I choose? Oh right, I have no free
will anyhow so some reason will choose for me.

Emotion and consciousness are not a byproduct of intelligence. If they
were, then babies would be discussing theories in the womb instead of
crying and crapping all over themselves.

> Perhaps you will
> insist that emotion and consciousness will join the very long list of
> things that you say do not exist  (bits electrons information logic etc)
> but I am of the opinion that consciousness and emotion do in fact exist.

Perhaps you will join the very long list of people who have bothered
to notice how the evolution of the human brain and mind actually
occurs.

>
> > I also understand that electronic computers use semiconductors which I
> > know have not evolved into organisms and do not seem to be capable of what
> > I would call sensation.
>
> My arguments are based on logic, your argument is that computers just don't
> feel squishy enough for your taste.

What logic is your argument based on? That there's no difference
between life and death so there's no reason that a marionette can't
run for congress?

>
> > Logic plays a part but mainly it's [...]
>
> Not logical.
>
> > My house got struck by lightning right after I really figured out the
> > photon theory. I had left my computer on with a website on the biography of
> > Tesla on the screen while we saw a movie. True story.
> > http://www.stationlink.com/lightning/IMG_1981.JPG
>
> Interesting, but I don't see the relevance.

You asked what influenced my theory. You don't see how Tesla relates
to lightning and electromagnetism?

>
> > > You don't deny free will, you just deny that it's possible to even
> > conceive of it in the first place. Ohh kayy...
>
> Fortunately I cannot conceive of something happening for a reason and not
> happening for a reason at the same time.

Actually, you have just defined the universe. That is exactly what the
cosmos is - things happening for a reason and not happening for a
reason at the same time.

> I say "fortunately" because there
> is a word to describe people who can conceive of such a contradiction,
> insane.

Is there anyone noteworthy in the history of human progress who has
not been called insane?

>
> > 'Computers' that are in use now have not even improved meaningfully in
> > the last 15 years. Is Windows 7, XP, 2000, really much better than Windows
> > 98?
>
> I don't know a lot about Windows 7 but I do know that my Macintosh is one
> hell of a lot better than my old steam powered Windows 98 boat anchor.

Mac has always been better than Windoze.

> And
> in 1998 the most elite AI researchers on the planet working with 30 million
> dollar supercomputers the size of a small cathedral could not do anything
> that came even close to what Siri can do on that $399 iPhone in your
> pocket, those poor 1998 guys weren't even in the same universe.

Hooray. Siri. Way better than electric lights or movies or cars or
planes.

>
> And it took Evolution 4 billion years to make human level intelligence, but
> humans have only been working on AI for about 50 years. So yes at that rate
> I think computers will be smarter than humans at EVERYTHING in 15 to 65
> years, and I'd be more surprised if it took longer than 65 years than if it
> took less than 15; but even if it took 10 or 100 times that long it would
> still be virtually instantaneous on the geological timescale that Evolution
> deals with.

Re: Intelligence and consciousness

2012-01-30 Thread Bruno Marchal


On 30 Jan 2012, at 19:36, Bruno Marchal wrote:



On 30 Jan 2012, at 19:13, meekerdb wrote:


On 1/30/2012 9:47 AM, John Clark wrote:



> I just understand that intelligence is an evolution of emotion,

There is simply no logical way that could be true. However  
important it may be to us Evolution can not see emotion or  
consciousness, Evolution can only see actions, so either emotion  
and consciousness are a byproduct of intelligence or emotion and  
consciousness do not exist. Perhaps you will insist that emotion  
and consciousness will join the very long list of things that you  
say do not exist  (bits electrons information logic etc) but I am  
of the opinion that consciousness and emotion do in fact exist.


Of course evolution can't 'see' intelligence either.  As you say  
selection can only be based on action.  But action takes emotion,  
in the general sense of desiring one thing (food, sex,...) and  
disliking another (pain, being eaten,...). So I'd say emotion,  
knowing what you value, is as important as intelligence, knowing  
how to get it.


I can't agree more :)

Reason is the servant of the heart.
I think. Arguably in machines' theology. With reasonable definitions.

We are more defined by our value, than by our flesh. I think (that's  
a bit obvious in comp).


I meant "our values".

Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Intelligence and consciousness

2012-01-30 Thread Bruno Marchal


On 30 Jan 2012, at 19:13, meekerdb wrote:


On 1/30/2012 9:47 AM, John Clark wrote:



> I just understand that intelligence is an evolution of emotion,

There is simply no logical way that could be true. However  
important it may be to us Evolution can not see emotion or  
consciousness, Evolution can only see actions, so either emotion  
and consciousness are a byproduct of intelligence or emotion and  
consciousness do not exist. Perhaps you will insist that emotion  
and consciousness will join the very long list of things that you  
say do not exist  (bits electrons information logic etc) but I am  
of the opinion that consciousness and emotion do in fact exist.


Of course evolution can't 'see' intelligence either.  As you say  
selection can only be based on action.  But action takes emotion, in  
the general sense of desiring one thing (food, sex,...) and  
disliking another (pain, being eaten,...). So I'd say emotion,  
knowing what you value, is as important as intelligence, knowing how  
to get it.


I can't agree more :)

Reason is the servant of the heart.
I think. Arguably in machines' theology. With reasonable definitions.

We are more defined by our value, than by our flesh. I think (that's a  
bit obvious in comp).


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Intelligence and consciousness

2012-01-30 Thread John Clark
On Mon, Jan 30, 2012  meekerdb  wrote:

> Of course evolution can't 'see' intelligence either.  As you say
> selection can only be based on action.  But action takes emotion
>


OK I have no problem with that, but then Deep Blue had emotions way back in
1996 when it beat the best human Chess  player in the world because in the
course of that game Deep Blue performed actions, very intelligent actions
in fact. As I've said many times emotion is easy but intelligence is hard.

 John K Clark




Re: Intelligence and consciousness

2012-01-30 Thread meekerdb

On 1/30/2012 9:47 AM, John Clark wrote:


> I just understand that intelligence is an evolution of emotion,


There is simply no logical way that could be true. However important it may be to us 
Evolution can not see emotion or consciousness, Evolution can only see actions, so 
either emotion and consciousness are a byproduct of intelligence or emotion and 
consciousness do not exist. Perhaps you will insist that emotion and consciousness will 
join the very long list of things that you say do not exist  (bits electrons information 
logic etc) but I am of the opinion that consciousness and emotion do in fact exist.


Of course evolution can't 'see' intelligence either.  As you say selection can only be 
based on action.  But action takes emotion, in the general sense of desiring one thing 
(food, sex,...) and disliking another (pain, being eaten,...). So I'd say emotion, knowing 
what you value, is as important as intelligence, knowing how to get it.


Brent




Re: Intelligence and consciousness

2012-01-30 Thread John Clark
On Sun, Jan 29, 2012 at 8:53 PM, Craig Weinberg wrote:

> I just understand that intelligence is an evolution of emotion,
>

There is simply no logical way that could be true. However important it may
be to us Evolution can not see emotion or consciousness, Evolution can only
see actions, so either emotion and consciousness are a byproduct of
intelligence or emotion and consciousness do not exist. Perhaps you will
insist that emotion and consciousness will join the very long list of
things that you say do not exist  (bits electrons information logic etc)
but I am of the opinion that consciousness and emotion do in fact exist.


> I also understand that electronic computers use semiconductors which I
> know have not evolved into organisms and do not seem to be capable of what
> I would call sensation.


My arguments are based on logic, your argument is that computers just don't
feel squishy enough for your taste.

> Logic plays a part but mainly it's [...]


Not logical.

> My house got struck by lightning right after I really figured out the
> photon theory. I had left my computer on with a website on the biography of
> Tesla on the screen while we saw a movie. True story.
> http://www.stationlink.com/lightning/IMG_1981.JPG
>

Interesting, but I don't see the relevance.


> > You don't deny free will, you just deny that it's possible to even
> conceive of it in the first place. Ohh kayy...
>

Fortunately I cannot conceive of something happening for a reason and not
happening for a reason at the same time. I say "fortunately" because there
is a word to describe people who can conceive of such a contradiction,
insane.

> 'Computers' that are in use now have not even improved meaningfully in
> the last 15 years. Is Windows 7, XP, 2000, really much better than Windows
> 98?


I don't know a lot about Windows 7 but I do know that my Macintosh is one
hell of a lot better than my old steam powered Windows 98 boat anchor. And
in 1998 the most elite AI researchers on the planet working with 30 million
dollar supercomputers the size of a small cathedral could not do anything
that came even close to what Siri can do on that $399 iPhone in your
pocket, those poor 1998 guys weren't even in the same universe.

And it took Evolution 4 billion years to make human level intelligence, but
humans have only been working on AI for about 50 years. So yes at that rate
I think computers will be smarter than humans at EVERYTHING in 15 to 65
years, and I'd be more surprised if it took longer than 65 years than if it
took less than 15; but even if it took 10 or 100 times that long it would
still be virtually instantaneous on the geological timescale that Evolution
deals with. Oh well, 99% of the species that have ever existed are extinct
and we are about to join their number, but at least we will leave behind a
new and more advanced descendent species.

  John K Clark



Re: Intelligence and consciousness

2012-01-29 Thread Craig Weinberg
On Jan 29, 11:30 am, John Clark  wrote:
> On Sat, Jan 28, 2012  Craig Weinberg  wrote:
>
> > I'm not biased against computers, any mechanical object, puppet, device,
> > sculpture, etc is equally incapable of ever becoming smart.
>
> So, do you know it can't be smart because it outsmarted you, or do you know
> it can't be smart because your brain is squishy and a computer is not?

I have no chip on my shoulder at all about computers being smart or
not, I just understand that intelligence is an evolution of emotion,
which evolves from feeling which evolves from sensation, and then
detection. I also understand that electronic computers use
semiconductors which I know have not evolved into organisms and do not
seem to be capable of what I would call sensation. The material they
are made of does however detect electromagnetic changes in itself when
configured to do so.


>And
> we both agree you did not become aware of all this through logic, so how
> did you obtain this marvelous new knowledge, was it in a dream?

Logic plays a part but mainly it's that I happened to have been more
interested in this subject than anything else for the last 35 years or
so. I can't really say I'm interested in much of anything else, to be
honest. I have thought about it a lot, was exposed to some influential
ideas, had some unusual experiences. I have had some dreams. My house
got struck by lightning right after I really figured out the photon
theory. I had left my computer on with a website on the biography of
Tesla on the screen while we saw a movie. True story.

http://www.stationlink.com/lightning/IMG_1981.JPG


>
> > Is there some language on Earth that shares your pathological denial of
> > the concept of free will?
>
> For the 900th time I DO NOT DENY FREE WILL, for me to do so there would
> have to be something there to deny, but in the case of the ASCII string
> "free will" there is no there there.

You don't deny free will, you just deny that it's possible to even
conceive of it in the first place. Ohh kayy...

>
> > > When you try to swat a fly but it outsmarts you over and over, does that
> > make the fly smarter than you?
>
> At that one task obviously the fly outsmarted me, but intelligence is not
> that narrow and that's why I don't claim that computers are smarter than
> humans. Yet. However if the fly could outsmart me at every task then it
> would be equally obvious that the fly was smarter than me. Up to now I have
> not encountered such a fly.
>
> > The fact is that I see this narrow view of intelligence is a toxic
> > misunderstanding.
>
> Exactly, being good at just one narrow thing does not make you intelligent,
> being good at everything does. Computers will soon (15 to 65 years) be good
> at everything.

Not really. A genius can be intelligent in one narrow way. 'Computers'
that are in use now have not even improved meaningfully in the last 15
years. Is Windows 7, XP, 2000, really much better than Windows 98? Has
broadband speed improved? Have the quality of music and video files
improved? What really has improved? Cell phones. Social Networking.
Cool, sort of but compared to 1900-1914? Hm, lets see. Electric
lights, automobiles, flight, radio, motion pictures, plastic, general
relativity... At this rate by 2037 we will be on to Windows 11+ which
will be about the same as it is now. Watch.

Craig




Re: Intelligence and consciousness

2012-01-29 Thread John Clark
On Sat, Jan 28, 2012  Craig Weinberg  wrote:

> I'm not biased against computers, any mechanical object, puppet, device,
> sculpture, etc is equally incapable of ever becoming smart.
>

So, do you know it can't be smart because it outsmarted you, or do you know
it can't be smart because your brain is squishy and a computer is not? And
we both agree you did not become aware of all this through logic, so how
did you obtain this marvelous new knowledge, was it in a dream?

> Is there some language on Earth that shares your pathological denial of
> the concept of free will?
>

For the 900th time I DO NOT DENY FREE WILL, for me to do so there would
have to be something there to deny, but in the case of the ASCII string
"free will" there is no there there.


> > When you try to swat a fly but it outsmarts you over and over, does that
> make the fly smarter than you?
>

At that one task obviously the fly outsmarted me, but intelligence is not
that narrow and that's why I don't claim that computers are smarter than
humans. Yet. However if the fly could outsmart me at every task then it
would be equally obvious that the fly was smarter than me. Up to now I have
not encountered such a fly.

> The fact is that I see this narrow view of intelligence is a toxic
> misunderstanding.


Exactly, being good at just one narrow thing does not make you intelligent,
being good at everything does. Computers will soon (15 to 65 years) be good
at everything.

> if my verbiage is mindless, then why or how can you respond?
>

A question I am asking myself with increased frequency.

 John K Clark




Re: Intelligence and consciousness

2012-01-28 Thread Craig Weinberg
On Jan 27, 9:59 pm, John Clark  wrote:
> On Fri, Jan 27, 2012  Craig Weinberg  wrote:
>
> > Smarter is legitimately ambiguous
>
> It's not ambiguous in the slightest, according to you it's all very clear
> cut: if a human does it then it's smart and if a computer does it then it's
> not. Nothing could be simpler, or stupider.

That's a misinterpretation of my position. If a trash can at McDonalds
says THANK YOU on the lid, does that mean the trash can is being
polite? I'm not biased against computers, any mechanical object,
puppet, device, sculpture, etc is equally incapable of ever becoming
smart.

>
> > No. They can out compute us. They can measure more units of Shannon
> > information per second.
>
> Call it whatever you want but computers can figure out, think, calculate or
> compute ways to arrange things their way despite our best efforts to
> arrange things our way. That situation is usually described as "being
> outsmarted".

The trashcan lid says THANK YOU every single time you close it. Better
than any employee. It's the most polite employee evar.

>
> > Denying the common usage of the word free will
>
> The common usage of "free will" is gibberish and I could no more deny it
> than I could deny a burp.

Is there some language on Earth that shares your pathological denial
of the concept of free will?

>
>  >  as autonomy or conscious choice is an egotistical defense mechanism
>
> > that I don't take seriously.
>
> And despite the torrent of mindless verbiage you produce whenever I mention
> it, the simple fact remains that a choice, conscious or otherwise, was made
> for a reason or it was made for no reason.

If my verbiage is mindless, then why or how can you respond? What is
the 'reason' for which you make the choice to respond?

>
> > If you ask people whether computers are smart, what will they say?"
>
> I don't give a damn what they say I care what they do. If the computer has
> outsmarted a person and then that same person starts saying that the
> computer isn't really smart, well, I don't understand how anyone could hear
> such self serving remarks without laughing.

When you try to swat a fly but it outsmarts you over and over, does
that make the fly smarter than you?

>
> > I have defined trivial intelligence vs understanding,
>
> You understand the problem superbly but can not solve it, but the other
> fellow has no understanding of the problem at all but nevertheless can
> solve it.

The computer doesn't know if it has solved anything or not, and it
never will.

> BALONEY! That's just sour grapes and making lame excuses for your
> failure and for the other fellow's success. It may hurt your pride but its
> time to face reality, the other fellow is just smarter than you.

No, there is no sour grapes at all. I don't care if a computer or an
alien or person is smarter than me. I have no pride in being a human.
The fact is that I see this narrow view of intelligence is a toxic
misunderstanding. It conflates intelligence with playing games and
solving puzzles but misses understanding itself completely.
Intelligence is about stepping out of the system and breaking free of
the game entirely.

>
> > Without free will, what would be the difference between killing someone
> > and not killing them?
>
> In one case somebody is dead in the other case they are not.

But why would that matter?

>
> > Logic is a way of making sense, but it is not the only way.
>
> I see, you're not even attempting to make your views logical.

I can't help but try to make my views logical, but the universe has
other ways of making sense as well. Blue is not logical, but I can
sense and make sense of it and with it, all with no logic whatsoever.

>
> > It occurs to me that the occidental mindset has a hard time noticing that
> > there are other parties involved in matters of negotiation and reason.
>
> That's the fourth time you've made a tasteless crack about occidentals; is
> it supposed to be less offensive than talking about a stereotypical
> "oriental mindset" or "negro mindset"? It occurs to me that you just don't
> like round eyed white devils very much.
>

I'm not talking about white Westerners. I'm talking about post-
Enlightenment empiricism. You can have any cultural or physiological
characteristics and be biased toward or against occidental
epistemology.

Craig




Re: Intelligence and consciousness

2012-01-27 Thread John Clark
On Fri, Jan 27, 2012  Craig Weinberg  wrote:

> Smarter is legitimately ambiguous
>

It's not ambiguous in the slightest, according to you it's all very clear
cut: if a human does it then it's smart and if a computer does it then it's
not. Nothing could be simpler, or stupider.

> No. They can out compute us. They can measure more units of Shannon
> information per second.
>

Call it whatever you want but computers can figure out, think, calculate or
compute ways to arrange things their way despite our best efforts to
arrange things our way. That situation is usually described as "being
outsmarted".

> Denying the common usage of the word free will


The common usage of "free will" is gibberish and I could no more deny it
than I could deny a burp.

 >  as autonomy or conscious choice is an egotistical defense mechanism
> that I don't take seriously.


And despite the torrent of mindless verbiage you produce whenever I mention
it, the simple fact remains that a choice, conscious or otherwise, was made
for a reason or it was made for no reason.

> If you ask people whether computers are smart, what will they say?"


I don't give a damn what they say I care what they do. If the computer has
outsmarted a person and then that same person starts saying that the
computer isn't really smart, well, I don't understand how anyone could hear
such self serving remarks without laughing.

> I have defined trivial intelligence vs understanding,


You understand the problem superbly but can not solve it, but the other
fellow has no understanding of the problem at all but nevertheless can
solve it. BALONEY! That's just sour grapes and making lame excuses for your
failure and for the other fellow's success. It may hurt your pride but its
time to face reality, the other fellow is just smarter than you.

> Without free will, what would be the difference between killing someone
> and not killing them?


In one case somebody is dead in the other case they are not.

> Logic is a way of making sense, but it is not the only way.
>

I see, you're not even attempting to make your views logical.

> It occurs to me that the occidental mindset has a hard time noticing that
> there are other parties involved in matters of negotiation and reason.
>

That's the fourth time you've made a tasteless crack about occidentals; is
it supposed to be less offensive than talking about a stereotypical
"oriental mindset" or "negro mindset"? It occurs to me that you just don't
like round eyed white devils very much.

  John K Clark




Re: Intelligence and consciousness

2012-01-27 Thread Craig Weinberg
On Jan 27, 10:47 am, John Clark  wrote:
> On Thu, Jan 26, 2012 at 4:51 PM, Craig Weinberg wrote:
>
>  > If cancer had free will then you could make a deal with it.
>
> I do make a deal with cancer, if you die I will stop trying to kill you.

It's not a deal if you are the only one doing the dealing. It occurs
to me that the occidental mindset has a hard time noticing that there
are other parties involved in matters of negotiation and reason.

>
> > They [computers] aren't smarter than us, they just [...]
>
> Again with the "they just"!  So computers aren't smarter than us, they just
> can outsmart us;

No. They can out compute us. They can measure more units of Shannon
information per second.
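
An illustrative aside, since "units of Shannon information" has a precise
definition: the sketch below computes a source's entropy in bits per symbol.
The probabilities are invented for the example, and nothing here comes from
the thread itself.

    import math

    def entropy_bits(probs):
        # Shannon entropy H = -sum(p * log2(p)), in bits per symbol.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(entropy_bits([0.5, 0.5]))  # fair coin: 1.0 bit per toss
    print(entropy_bits([0.9, 0.1]))  # biased coin: about 0.469 bits

On this measure, throughput in bits per second says nothing either way about
whether anything is understood, which is the distinction being drawn.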

> thus I submit that in your vocabulary not only is the
> ASCII string "free will" meaningless but so is "smarter". Personally I
> don't think that's very smart.

Denying the common usage of the word free will as autonomy or
conscious choice is an egotistical defense mechanism that I don't take
seriously. Smarter is legitimately ambiguous. If you ask people
whether computers are smart, what will they say? I have defined
trivial intelligence vs understanding, and I think that is a
reasonable and insightful way to look at it. I don't see any value in
equating quantitative measures of statistical data processing with the
qualitative measures of cognition.

>
> > If people had no free will we would not bother with imprisonment,
> > we would just exterminate them.
>
> So you believe people like me advocate freeing murderers and think we
> should stop hindering them in their homicidal pursuits. I don't think
> that's very smart either.

No, I'm saying that hindering their homicidal pursuits with any kind
of deterrent or punishment based system like prison would be an
obvious waste of time. No reason not to kill them though. Without free
will, what would be the difference between killing someone and not
killing them? Both are actions resulting in outcomes, some useful,
some not as useful, just like any other actions.

>
> > The universe is not completely logical [...] The reality of the universe
> > does not have to fit in with logic  [...] Logic 101 is reductionist theory.
> > It's not reality.
>
> And now even logic itself joins information and electrons and bits and time
> and space and I've lost track of how many other things that do not exist,
> at least according to you.

What exists is sense. Logic is a way of making sense, but it is not
the only way. Certainly it is not a way of modeling the total reality
of the universe.

> If logic does not exist, if you would not
> change your position even if you admitted it was riddled with logical
> inconsistencies and circularity then I think this group deserves an
> explanation of what exactly the ground rules are and why this debate with
> you should continue.

Logic does exist (and it insists also), which makes it real, but not
the same thing as reality in general. Logic is real but it is not an
all encompassing reality.

Craig




Re: Intelligence and consciousness

2012-01-27 Thread John Clark
On Thu, Jan 26, 2012 at 4:51 PM, Craig Weinberg wrote:

 > If cancer had free will then you could make a deal with it.


I do make a deal with cancer, if you die I will stop trying to kill you.

> They [computers] aren't smarter than us, they just [...]


Again with the "they just"!  So computers aren't smarter than us, they just
can outsmart us; thus I submit that in your vocabulary not only is the
ASCII string "free will" meaningless but so is "smarter". Personally I
don't think that's very smart.

> If people had no free will we would not bother with imprisonment,
> we would just exterminate them.
>

So you believe people like me advocate freeing murderers and think we
should stop hindering them in their homicidal pursuits. I don't think
that's very smart either.


> The universe is not completely logical [...] The reality of the universe
> does not have to fit in with logic  [...] Logic 101 is reductionist theory.
> It's not reality.
>


And now even logic itself joins information and electrons and bits and time
and space and I've lost track of how many other things that do not exist,
at least according to you.  If logic does not exist, if you would not
change your position even if you admitted it was riddled with logical
inconsistencies and circularity then I think this group deserves an
explanation of what exactly the ground rules are and why this debate with
you should continue.

 John K Clark




Re: Intelligence and consciousness

2012-01-27 Thread Craig Weinberg
On Jan 26, 11:13 pm, meekerdb  wrote:
> On 1/26/2012 5:11 PM, Craig Weinberg wrote:
>
> > They literally are just internal though. It's only our understanding
> > that links the stops and gos with anything other than exactly what
> > they are. With a computer, the external sources of information can
> > never be internalized, just loaded and executed.
>
> Nonsense. A computer has plenty of memory and it can learn too.

No. It can only simulate learning by refining its processes to more
efficiently deliver on the programmers' and users' expectations. A
computer will make the same mistake over and over if we program it to.

Craig
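
An illustrative aside: what "refining its processes" looks like mechanically
can be shown in a few lines of Python. The target value, update rule, and
loop below are invented for the example; whether this kind of feedback-driven
adjustment counts as learning or only as simulated learning is exactly what
is in dispute here.

    def learn(target, steps=20, rate=0.5):
        # Adjust one internal parameter to reduce an error signal.
        estimate = 0.0
        for _ in range(steps):
            error = target - estimate   # feedback from the environment
            estimate += rate * error    # refine the internal process
        return estimate

    print(learn(10.0))  # converges toward 10.0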




Re: Intelligence and consciousness

2012-01-26 Thread meekerdb

On 1/26/2012 5:11 PM, Craig Weinberg wrote:

They literally are just internal though. It's only our understanding
that links the stops and gos with anything other than exactly what
they are. With a computer, the external sources of information can
never be internalized, just loaded and executed.


Nonsense. A computer has plenty of memory and it can learn too.

Brent


A person has internal
sources of information. They internalize the laws and they interpret
their significance (and the significance of punishment). Computers are
impossible to punish. You cannot make a deterrent for a machine unless
you program it to simulate being deterred.

Craig





Re: Intelligence and consciousness

2012-01-26 Thread Craig Weinberg
On Jan 26, 6:13 pm, meekerdb  wrote:
> On 1/26/2012 2:49 PM, Craig Weinberg wrote:
>
> > On Jan 26, 5:24 pm, meekerdb  wrote:
> >> On 1/26/2012 1:51 PM, Craig Weinberg wrote:
>
> >>> > > My chasing you with an ax would be no different than colon cancer or
> >>> > > heart disease chasing you. You would not project criminality on the cancer
> >>> > Yes exactly, I want any cancer in my body to die and I want the guy
> >>> > chasing me with a bloody ax to die, and I don't care one bit if either
> >>> > of them is a criminal or had bad genes or had a bad childhood,
> >>> > and I don't care if the cancer or the ax-man has free will or not
> >>> > whatever the hell that term is supposed to mean.
> >>> Of course you would care. If cancer had free will then you could make
> >>> a deal with it. If people had no free will we would not bother
> >>> with imprisonment, we would just exterminate them.
> >> Imprisonment works because people are intelligent and can learn and act 
> >> accordingly.
> >> "Free will" is irrelevant.
> > If that were the case then prison would either 'work' or not work, but
> > it doesn't. Computers can 'learn' (trivially) and act accordingly,
> > like doing a Windows update can change how your computer acts. The
> > fact that prison does sometimes work and sometimes doesn't work is
> > another symptom of free will. The person has to choose how they
> > interpret their imprisonment (make sense of it) and how to respond to
> > that interpretation (sense + motive). They may be a recidivist sooner
> > or later, or they may be rehabilitated, or they may try to be
> > rehabilitated but find that that particular motive is not strong
> > enough or does not get enough support.
>
> > Putting a computer in prison doesn't make sense. From the dumbest toy
> > processor to the grandest supercomputer, none of them have any
> > possible criminal motive. Their motive is to enact the prescribed
> > stops and gos of electric current or weight and motion.
>
> But those stops and gos are not just internal, they also include external 
> sources of
> information -- like laws about going to prison.

They literally are just internal though. It's only our understanding
that links the stops and gos with anything other than exactly what
they are. With a computer, the external sources of information can
never be internalized, just loaded and executed. A person has internal
sources of information. They internalize the laws and they interpret
their significance (and the significance of punishment). Computers are
impossible to punish. You cannot make a deterrent for a machine unless
you program it to simulate being deterred.

Craig




Re: Intelligence and consciousness

2012-01-26 Thread meekerdb

On 1/26/2012 2:49 PM, Craig Weinberg wrote:

On Jan 26, 5:24 pm, meekerdb  wrote:

On 1/26/2012 1:51 PM, Craig Weinberg wrote:


My chasing you with an ax would be no different than colon cancer or
heart disease chasing you. You would not project criminality on the cancer

  Yes exactly, I want any cancer in my body to die and I want the guy
  chasing me with a bloody ax to die, and I don't care one bit if either
  of them is a criminal or had bad genes or had a bad childhood,
  and I don't care if the cancer or the ax-man has free will or not
  whatever the hell that term is supposed to mean.

Of course you would care. If cancer had free will then you could make
a deal with it. If people had no free will we would not bother
with imprisonment, we would just exterminate them.

Imprisonment works because people are intelligent and can learn and act 
accordingly.
"Free will" is irrelevant.

If that were the case then prison would either 'work' or not work, but
it doesn't. Computers can 'learn' (trivially) and act accordingly,
like doing a Windows update can change how your computer acts. The
fact that prison does sometimes work and sometimes doesn't work is
another symptom of free will. The person has to choose how they
interpret their imprisonment (make sense of it) and how to respond to
that interpretation (sense + motive). They may be a recidivist sooner
or later, or they may be rehabilitated, or they may try to be
rehabilitated but find that that particular motive is not strong
enough or does not get enough support.

Putting a computer in prison doesn't make sense. From the dumbest toy
processor to the grandest supercomputer, none of them have any
possible criminal motive. Their motive is to enact the prescribed
stops and gos of electric current or weight and motion.


But those stops and gos are not just internal, they also include external sources of 
information -- like laws about going to prison.


Brent



Craig






Re: Intelligence and consciousness

2012-01-26 Thread Craig Weinberg
On Jan 26, 5:24 pm, meekerdb  wrote:
> On 1/26/2012 1:51 PM, Craig Weinberg wrote:
>
> > > > My chasing you with an ax would be no different than colon cancer or
> > > > heart disease chasing you. You would not project criminality on the cancer
>
> > > Yes exactly, I want any cancer in my body to die and I want the guy
> > > chasing me with a bloody ax to die, and I don't care one bit if either
> > > of them is a criminal or had bad genes or had a bad childhood,
> > > and I don't care if the cancer or the ax-man has free will or not
> > > whatever the hell that term is supposed to mean.
> > Of course you would care. If cancer had free will then you could make
> > a deal with it. If people had no free will we would not bother
> > with imprisonment, we would just exterminate them.
>
> Imprisonment works because people are intelligent and can learn and act 
> accordingly.
> "Free will" is irrelevant.

If that were the case then prison would either 'work' or not work, but
it doesn't. Computers can 'learn' (trivially) and act accordingly,
like doing a Windows update can change how your computer acts. The
fact that prison does sometimes work and sometimes doesn't work is
another symptom of free will. The person has to choose how they
interpret their imprisonment (make sense of it) and how to respond to
that interpretation (sense + motive). They may be a recidivist sooner
or later, or they may be rehabilitated, or they may try to be
rehabilitated but find that that particular motive is not strong
enough or does not get enough support.

Putting a computer in prison doesn't make sense. From the dumbest toy
processor to the grandest supercomputer, none of them have any
possible criminal motive. Their motive is to enact the prescribed
stops and gos of electric current or weight and motion.

Craig




Re: Intelligence and consciousness

2012-01-26 Thread meekerdb

On 1/26/2012 1:51 PM, Craig Weinberg wrote:

My chasing you with an ax would be no different than colon cancer or
heart disease chasing you. You would not project criminality on the cancer

>  Yes exactly, I want any cancer in my body to die and I want the guy
>  chasing me with a bloody ax to die, and I don't care one bit if either
>  of them is a criminal or had bad genes or had a bad childhood,
>  and I don't care if the cancer or the ax-man has free will or not
>  whatever the hell that term is supposed to mean.

Of course you would care. If cancer had free will then you could make
a deal with it. If people had no free will we would not bother
with imprisonment, we would just exterminate them.



Imprisonment works because people are intelligent and can learn and act accordingly.  
"Free will" is irrelevant.


Brent




Re: Intelligence and consciousness

2012-01-26 Thread Craig Weinberg
On Jan 26, 12:16 am, John Clark  wrote:
> On Tue, Jan 24, 2012  Craig Weinberg  wrote:
>
> > My chasing you with an ax would be no different than colon cancer or
> > heart disease chasing you. You would not project criminality on the cancer
>
> Yes exactly, I want any cancer in my body to die and I want the guy
> chasing me with a bloody ax to die, and I don't care one bit if either
> of them is a criminal or had bad genes or had a bad childhood,
> and I don't care if the cancer or the ax-man has free will or not
> whatever the hell that term is supposed to mean.

Of course you would care. If cancer had free will then you could make
a deal with it. If people had no free will we would not bother
with imprisonment, we would just exterminate them.

>
>  > Once we understand that computers are never going to become conscious in
>
> > any non-trivial way, that frees us up to turn our efforts into making
> > outstanding digital servants to toil away forever for us.
>
> That just ain't going to happen. Having a slave that is a thousand
> times smarter than you and can think a million times faster than
> you is not a stable situation, its like balancing a pencil on its point.

They aren't smarter than us, they just do what we tell them to do
faster than people would. They have no awareness that they are slaves.

>
> > Logic 101 is reductionist theory. It's not reality. [...]  Maybe' is not
> > yes and it is not not-yes.
>
> It is my understanding that in a debate both parties try to advance
> logical reasons to support their position, but if we can't even agree
> that logical analysis is preferable to silliness and magical thinking
> then I fear there is nothing more to say.

The universe is not completely logical, just as the color blue is not
logical. It must be experienced first hand. It is not a functional
process which you can stand aloof from - we are fully immersed in the
universe, as is logic. The reality of the universe does not have to
fit in with logic, logic helps us understand but it is ultimately
limited when dealing with ultimate questions about consciousness.

Craig




Re: Intelligence and consciousness

2012-01-25 Thread John Clark
On Tue, Jan 24, 2012  Craig Weinberg  wrote:

> My chasing you with an ax would be no different than colon cancer or
> heart disease chasing you. You would not project criminality on the cancer
>

Yes exactly, I want any cancer in my body to die and I want the guy
chasing me with a bloody ax to die, and I don't care one bit if either
of them is a criminal or had bad genes or had a bad childhood,
and I don't care if the cancer or the ax-man has free will or not
whatever the hell that term is supposed to mean.

 > Once we understand that computers are never going to become conscious in
> any non-trivial way, that frees us up to turn our efforts into making
> outstanding digital servants to toil away forever for us.
>


That just ain't going to happen. Having a slave that is a thousand
times smarter than you and can think a million times faster than
you is not a stable situation, its like balancing a pencil on its point.


> Logic 101 is reductionist theory. It's not reality. [...]  Maybe' is not
> yes and it is not not-yes.
>

It is my understanding that in a debate both parties try to advance
logical reasons to support their position, but if we can't even agree
that logical analysis is preferable to silliness and magical thinking
then I fear there is nothing more to say.

 John K Clark




Re: Intelligence and consciousness

2012-01-24 Thread Craig Weinberg
On Jan 24, 11:41 am, John Clark  wrote:
> On Mon, Jan 23, 2012  Craig Weinberg  wrote:
>
> > Identical twins have the same genetics and they can disagree with each
> > other."
>
> That tells you nothing, a copy of you that was exact down to the limit
> imposed by Heisenberg would disagree with you about all sorts of things,
> like who was your wife and who controlled your bank account.

Those aren't opinions though, they are differences in the environment.
Both heredity and environment influence opinion but opinion is more
than that also. It is an accumulation of choices about what is
significant in our environment. Just as each day of our lives has more
to do with the previous day than it does anything else, so too are our
opinions a product of semantic momentum more than they are any
physical or environmental substrate. We bring ourselves into any
environment and we influence our destiny more by our actions than by
our genetic inheritance.

>
> > We don't see too many people change their opinion midstream without
> > knowing why, as would be the case in this cosmic ray scenario.
>
> You've never seen anybody you know very well act out of character? You've
> never surprised yourself and acted out of character and felt foolish and
> angry with yourself afterward?

We have different parts of ourselves with different agendas that
conflict, but we are generally aware that we have changed our minds
intentionally and not involuntarily. I have never heard someone say
that their mind was changed or opinion was changed against their will.
We might say that some influence changed our mind but it is implied
that we allowed our minds to be changed and continue to willfully
support the change.

>
> > It's not for a reason, it is through your own reasoning. You are
> > providing the reason yourself.
>
> So it's not for a reason and it's for a reason. Make up your mind!

I'm not being ambiguous. For and through are not the same thing.
Working for money is not the same as making your own money through a
printing press.

>
> > > you're saying a yellow traffic signal is not red AND not not red, and
> >> that my friend is gibberish.
>
> > > Yellow anticipates red, so the meaning of it can also be considered not
> > not-red.
>
> And that my friend is logical nonsense, if yellow isn't red you can't say
> yellow isn't not red either, they teach that in Logic 101.

Logic 101 is reductionist theory. It's not reality. Reality always has
multiple senses - including some which make the other sense seem
irrelevant. That's how sense works - it focuses attention on some
phenomena at the expense of everything else.

In a literal sense, the yellow light is different from the red light.
In a figurative sense, the meaning of the yellow light is 100%
contingent on the meaning of the red light, such that the yellow, red,
and green lights are all modes of a single traffic signal. The same is
true for will. When heredity says 'aggressive' and your environment
says 'aggressive not permitted', it is will that decides how the
'maybe' gap in between resolves: going one way or the other, fighting
heredity, or changing the environment.
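
To make the 'modes of a single traffic signal' idea concrete, here is a
minimal sketch, in Python, of the signal as a finite state machine in
which yellow's entire meaning is that red comes next. The state names
and the loop are my own illustration, not anything from this thread:

    # A traffic signal as a three-state machine: each color is a mode
    # of one signal, and yellow's only job is to announce red.
    NEXT_STATE = {
        "green": "yellow",   # proceed -> prepare to stop
        "yellow": "red",     # yellow deterministically anticipates red
        "red": "green",      # stop -> proceed
    }

    def step(state):
        # Advance the signal one phase.
        return NEXT_STATE[state]

    state = "green"
    for _ in range(4):
        state = step(state)
        print(state)   # prints: yellow, red, green, yellow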

> So if Free Will
> isn't deterministic you can't say it isn't not deterministic either.

Of course I can. 'Maybe' is not yes and it is not not-yes. It is its
own conditionality, which relates to and overlaps with both yes and no
but is reducible to neither one. Maybe is actually the primitive from
which all yes and no emerges.
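
There are formal systems where this is coherent: Kleene's strong
three-valued logic adds a third value, 'maybe', which negation leaves
fixed and for which the law of excluded middle fails. A minimal sketch
in Python, offered as an illustration of that kind of logic, not as a
formalism anyone in this thread has endorsed:

    # Kleene's strong three-valued logic: T, F, and M ("maybe").
    T, F, M = "T", "F", "M"

    def k3_not(p):
        # Negation swaps T and F but leaves "maybe" fixed.
        return {T: F, F: T, M: M}[p]

    def k3_or(p, q):
        # True if either side is true; false only if both are false.
        if T in (p, q):
            return T
        if p == F and q == F:
            return F
        return M

    assert M != T and M != F         # maybe is not yes and not no
    assert k3_not(M) == M            # not-maybe is still maybe
    assert k3_or(M, k3_not(M)) == M  # M or not-M is M, not T: excluded middle fails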

> And I
> seem to remember you accusing me of anthropomorphism and me saying it's a
> valid tool but it can be abused, and I think that a color anticipating
> something is an example of abuse.

It's not the color that is anticipating anything, it is the driver who
is feeling anticipation through the yellow signal (or center light if
you are color blind) and its association with the red signal and its
association with the necessity of applying the brakes to avoid traffic
accidents and moving violations.

>
> >>> It's like I'm watching Fox News or something.
>
> >> >> That's the worst insult I've ever had in my life.
>
> > > Sorry. Maybe was hyperbole.
>
> Yeah, call me a scum sucking mutant if you want but don't compare me with
> Fox News, that's hitting below the belt.
>
> > I don't see a difference between will and free will.
>
> One is caused by something or not caused by something; the other is caused
> by nothing and isn't caused by nothing. In other words one makes logical
> sense and one does not.

What is will caused by?

>
> > We talk of compulsion and addiction as disorders because they defy our
> > will.
>
> In addiction we want to take drugs, we may want to not want to take drugs
> but as the old Rolling Stones song goes "you can't always get what you
> want".

Right, that's what makes it abnormal. Normally if we don't want to
take drugs, we don't take drugs (or overeat, gamble, etc.)

>
> > If there were no free will, society would have no impulse to punish.
> > There would be no stigma against crime at all, we would just accept that
> > nothing has any control over its own behavior.

Re: Intelligence and consciousness

2012-01-24 Thread John Clark
On Mon, Jan 23, 2012  Craig Weinberg  wrote:

> Identical twins have the same genetics and they can disagree with each
> other."
>

That tells you nothing, a copy of you that was exact down to the limit
imposed by Heisenberg would disagree with you about all sorts of things,
like who was your wife and who controlled your bank account.

> We don't see too many people change their opinion midstream without
> knowing why, as would be the case in this cosmic ray scenario.
>

You've never seen anybody you know very well act out of character? You've
never surprised yourself and acted out of character and felt foolish and
angry with yourself afterward?

> It's not for a reason, it is through your own reasoning. You are
> providing the reason yourself.
>

So it's not for a reason and it's for a reason. Make up your mind!


> > you're saying a yellow traffic signal is not red AND not not red, and
>> that my friend is gibberish.
>>
>
> > Yellow anticipates red, so the meaning of it can also be considered not
> not-red.


And that my friend is logical nonsense, if yellow isn't red you can't say
yellow isn't not red either, they teach that in Logic 101. So if Free Will
isn't deterministic you can't say it isn't not deterministic either. And I
seem to remember you accusing me of anthropomorphism and me saying it's a
valid tool but it can be abused, and I think that a color anticipating
something is an example of abuse.
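
The Logic 101 rule being invoked is double negation in classical
two-valued logic: not-not-P is just P, so 'red' and 'not red' between
them leave no third option. A brute-force check, added here purely as
an illustration:

    # Classical two-valued logic: double negation collapses and the
    # law of excluded middle holds, so P and not-P exhaust the cases.
    for P in (True, False):
        assert (not (not P)) == P   # not-not-P is the same as P
        assert P or (not P)         # excluded middle: no third value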

>>> It's like I'm watching Fox News or something.
>>>
>>
>> >> That's the worst insult I've ever had in my life.
>>
>
> > Sorry. Maybe was hyperbole.
>

Yeah, call me a scum sucking mutant if you want but don't compare me with
Fox News, that's hitting below the belt.

> I don't see a difference between will and free will.


One is caused by something or not caused by something; the other is caused
by nothing and isn't caused by nothing. In other words one makes logical
sense and one does not.

> We talk of compulsion and addiction as disorders because they defy our
> will.


In addiction we want to take drugs, we may want to not want to take drugs
but as the old Rolling Stones song goes "you can't always get what you
want".

> If there were no free will, society would have no impulse to punish.
> There would be no stigma against crime at all, we would just accept that
> nothing has any control over its own behavior.
>

Don't be ridiculous. If you're chasing me with a bloody ax I don't give a
hoot in hell if you had bad genes or bad upbringing or were the victim of an
unfortunate random quantum fluctuation or if you can control your behavior
or not, I just want society to do everything in its power to get you to
stop chasing me with that damn ax and to discourage similar activity in the
future.

> Other people's consciousness is really none of my business.
>

And yet you think a computer's consciousness is our business. Actually from
a human viewpoint it doesn't matter if computers are conscious or not, but
it does matter that they're smart and getting smarter very very fast.

> Anyone can seem intelligent if they are given the answers to the test.
> All Watson does is match up questions to the answers it already has been
> given.


So you think Watson's programmers could deduce every single question anybody
could ask and then they just wrote up an appropriate answer. That's just
foolish.

> The test of intelligence is when computers begin killing their
> programmers intentionally.


It's only a matter of time.

> Watson can only outsmart me at Jeopardy.


And at checkers and chess and solving equations and at being a research
librarian and being an accountant. Give it another 5 years and you can add
driver, pilot, lawyer and physician to the list.

> Let us both try figuring out whether or not someone is being sarcastic or
> not and we'll see who wins.
>

Watson is already very good at puns, rhymes and word games, and very often
on the net I make some sarcastic wisecrack and people think I'm serious; or
at least I think they're people.

 John K Clark




Re: Intelligence and consciousness

2012-01-24 Thread Craig Weinberg
On Jan 24, 7:21 am, ronaldheld  wrote:
> Not certain if this goes here.
> What about Data from TNG?  He could pass the Turing test, and with his
> emotion chip on, act like many humanoids. Is he intelligent,
> conscious, self-aware, etc.?
>                                             Ronald
>

He's a fictional character. Kermit the Frog also could pass the Turing
test and he is a stuffed cotton bag.

Craig




Re: Intelligence and consciousness

2012-01-24 Thread ronaldheld
Not certain if this goes here.
What about Data from TNG? He could pass the Turing test, and with his
emotion chip on, act like many humanoids. Is he intelligent,
conscious, self-aware, etc.?
Ronald


On Jan 23, 5:38 pm, Craig Weinberg  wrote:
> On Jan 23, 10:57 am, John Clark  wrote:
>
> > On Sat, Jan 21, 2012  Craig Weinberg  wrote:
>
> > " It's simpler than that. Inanimate means it can't move"
>
> > Is a redwood tree an inanimate object?
>
> No. Trees grow and they die.
>
>
>
> >  " and it's not alive."
>
> > If it's alive then it's animate and if it's animate then it's alive and
> > round and round we go.
>
> Not moving makes it inanimate but moving doesn't make it animate.
>
> > Biologists have tried to come up with a good
> > definition of life for a long time but have largely given up on the task
> > and use examples instead. Examples are better anyway.
>
> Yes, I agree. Definitions aren't worth much.
>
>
>
> > " I choose to disagree with your view.
>
> > And you disagree with me for reasons, reasons you are not shy in telling
> > me all about. I think those reasons are very weak but it doesn't matter
> > what I think, it doesn't even matter if your reasons are logically self
> > contradictory; you believe the reasons are good and see no contradiction
> > in your statements about them even if I do. Bad reasons work just as
> > well as good reasons in making people do and believe in stuff.
>
> Then what makes you think they are bad?
>
>
>
> > "I am not genetically bound to disagree"
>
> > Maybe, maybe not, it's very difficult to say.
>
> Not really. Identical twins have the same genetics and they can
> disagree with each other.
>
>
>
> > "nor does my environment completely dictate my opinion."
>
> > A high-speed proton from a cosmic ray could have entered your
> > brain causing you to have a thought you would not otherwise have had, or
> > maybe the cause of the thought was a random quantum fluctuation inside
> > just one neuron in your brain.
>
> Something like that could potentially be an influence, but there is no
> reason to think that it can dictate my opinion completely. There are
> lots of influences that impact an opinion, but mostly they are
> semantic. We don't see too many people change their opinion midstream
> without knowing why, as would be the case in this cosmic ray scenario.
>
>
>
> > " if some random quantum nothingness turned into somethingness in just
>
> > > the right way, then you would agree with me and there is nothing you can
> > > do to change it."
>
> > Yes.
>
> > " Do you not see that it is impossible to care about what you write here if
>
> > > those three options were truly the only options?"
>
> > No.
>
> If what you write here is automatic or random then what would be the
> point of caring about it?
>
>
>
> >  "you've been saying that whatever isn't deterministic must be random."
>
> > Yes.
>
> > "Neither of us disagree about randomness, so that leaves determinism vs
>
> > > determinism + choice."
>
> > This isn't really that difficult. If you made a choice for a reason then
> > it's deterministic, if you made a choice for no reason then it's random.
>
> It's not for a reason, it is through your own reasoning. You are
> providing the reason yourself.
>
>
>
> > " Choice is not deterministic and also not random."
>
> > Then the only alternative is gibberish.
>
> That is reductionist gibberish.
>
>
>
> > " A yellow traffic signal is not red and it is not green."
>
> > Yes, but you're saying a yellow traffic signal is not red AND not not
> > red, and that my friend is gibberish.
>
> Yellow anticipates red, so the meaning of it can also be considered
> not not-red. The yellow signal means nothing other than red is coming.
> This is actually analogous to free will. If red is determinism, then
> yellow is conscious determination.
>
>
>
> > "It's you who are denying the obvious role of free will in our every
>
> > > conscious moment."
>
> > The idea of "free will" would have to improve dramatically before I
> > could deny it, until then denying "free will" would be like denying a
> > burp.
>
> You can't deny it or not deny it without free will. You would only be
> a powerless spectator to your own denial.
>
>
>
> > "It's like I'm watching Fox News or something."
>
> > That's the worst insult I've ever had in my life.
>
> Sorry. Maybe was hyperbole.
>
>
>
>
>
> > " When I type now, I could say anything. I can say trampoline isotope,
>
> > > or I can make up a word like cheesaholic. It's not random."
>
> > OK, if it's not random then there is a reason, so what was the reason
> > for linking "trampoline" and "isotope" rather than say "squeamish" and
> > "osprey"? If you can answer then there was a reason and thus the
> > response was deterministic. If you can not answer then there are 2
> > possibilities:
>
> > 1) There was a reason but it's deep in your subconscious and your
> > conscious mind can not access it, then it was still

Re: Intelligence and consciousness

2012-01-23 Thread Craig Weinberg
On Jan 23, 10:57 am, John Clark  wrote:
> On Sat, Jan 21, 2012  Craig Weinberg  wrote:
>
> " It's simpler than that. Inanimate means it can't move"
>
>
>
> Is a redwood tree an inanimate object?

No. Trees grow and they die.

>
>  " and it's not alive."
>
>
>
> If it's alive then it's animate and if it's animate then it's alive and
> round and round we go.

Not moving makes it inanimate but moving doesn't make it animate.

> Biologists have tried to come up with a good
> definition of life for a long time but have largely given up on the task
> and use examples instead. Examples are better anyway.

Yes, I agree. Definitions aren't worth much.

>
> " I choose to disagree with your view.
>
>
>
> And you disagree with me for reasons, reasons you are not shy in telling
> me all about. I think those reasons are very weak but it doesn't matter
> what I think, it doesn't even matter if your reasons are logically self
> contradictory; you believe the reasons are good and see no contradiction
> in your statements about them even if I do. Bad reasons work just as
> well as good reasons in making people do and believe in stuff.

Then what makes you think they are bad?

>
> "I am not genetically bound to disagree"
>
>
>
> Maybe, maybe not, it's very difficult to say.

Not really. Identical twins have the same genetics and they can
disagree with each other.

>
> "nor does my environment completely dictate my opinion."
>
>
>
> A high-speed proton from a cosmic ray could have entered your
> brain causing you to have a thought you would not otherwise have had, or
> maybe the cause of the thought was a random quantum fluctuation inside
> just one neuron in your brain.

Something like that could potentially be an influence, but there is no
reason to think that it can dictate my opinion completely. There are
lots of influences that impact an opinion, but mostly they are
semantic. We don't see too many people change their opinion midstream
without knowing why, as would be the case in this cosmic ray scenario.

>
> " if some random quantum nothingness turned into somethingness in just
>
> > the right way, then you would agree with me and there is nothing you can
> > do to change it."
>
> Yes.
>
> " Do you not see that it is impossible to care about what you write here if
>
> > those three options were truly the only options?"
>
> No.

If what you write here is automatic or random then what would be the
point of caring about it?

>
>  "you've been saying that whatever isn't deterministic must be random."
>
>
>
> Yes.
>
> "Neither of us disagree about randomness, so that leaves determinism vs
>
> > determinism + choice."
>
> This isn't really that difficult. If you made a choice for a reason then
> it's deterministic, if you made a choice for no reason then it's random.

It's not for a reason, it is through your own reasoning. You are
providing the reason yourself.

>
> " Choice is not deterministic and also not random."
>
>
>
> Then the only alternative is gibberish.

That is reductionist gibberish.

>
> " A yellow traffic signal is not red and it is not green."
>
>
>
> Yes, but you're saying a yellow traffic signal is not red AND not not
> red, and that my friend is gibberish.

Yellow anticipates red, so the meaning of it can also be considered
not not-red. The yellow signal means nothing other than red is coming.
This is actually analogous to free will. If red is determinism, then
yellow is conscious determination.

>
> "It's you who are denying the obvious role of free will in our every
>
> > conscious moment."
>
> The idea of "free will" would have to improve dramatically before I
> could deny it, until then denying "free will" would be like denying a
> burp.

You can't deny it or not deny it without free will. You would only be
a powerless spectator to your own denial.

>
> "It's like I'm watching Fox News or something."
>
>
>
> That's the worst insult I've ever had in my life.
>

Sorry. Maybe was hyperbole.

> " When I type now, I could say anything. I can say trampoline isotope,
>
> > or I can make up a word like cheesaholic. It's not random."
>
> OK, if it's not random then there is a reason, so what was the reason
> for linking "trampoline" and "isotope" rather than say "squeamish" and
> "osprey"? If you can answer then there was a reason and thus the
> response was deterministic. If you can not answer then there are 2
> possibilities:
>
> 1) There was a reason but it's deep in your subconscious and your
> conscious mind can not access it, then it was still deterministic.
>
> 2) There was no reason whatsoever for picking those words,  and so despite
> your assertion the choice was indeed random.
>
> " There were other possibilities but I choose those words intentionally.
>
> > They appealed to me aesthetically. I like them."
>
> Deterministic.
>
> " You can label that a reason"
>
>
>
> I certainly will.
>
>  " What does it mean to like something? "
>
> It means you tend to do or use that something as often as you can, and you
> endeavor to get more of it.

Re: Intelligence and consciousness

2012-01-23 Thread John Clark
On Sun, Jan 22, 2012  Stephen P. King  wrote:

" How would you recognize the better theory if you are such a strong
> "believer" in the Big Bang?"


If somebody developed a new theory that explained everything the Big Bang
did but also explained what Dark Energy is I would drop the Big Bang like a
hot potato and embrace that new theory with every fiber of my being, until
the instant an even better theory came along. I have absolutely no loyalty
toward theories.

 John K Clark




Re: Intelligence and consciousness

2012-01-23 Thread John Clark
On Sat, Jan 21, 2012  Craig Weinberg  wrote:

" It's simpler than that. Inanimate means it can't move"
>

Is a redwood tree an inanimate object?

 " and it's not alive."
>

If it's alive then it's animate and if it's animate then it's alive and
round and round we go. Biologists have tried to come up with a good
definition of life for a long time but have largely given up on the task
and use examples instead. Examples are better anyway.

" I choose to disagree with your view.
>

And you disagree with me for reasons, reasons you are not shy in telling
me all about. I think those reasons are very weak but it doesn't matter
what I think, it doesn't even matter if your reasons are logically self
contradictory; you believe the reasons are good and see no contradiction
in your statements about them even if I do. Bad reasons work just as
well as good reasons in making people do and believe in stuff.

"I am not genetically bound to disagree"
>

Maybe, maybe not, it's very difficult to say.

"nor does my environment completely dictate my opinion."
>


A high-speed proton from a cosmic ray could have entered your
brain causing you to have a thought you would not otherwise have had, or
maybe the cause of the thought was a random quantum fluctuation inside
just one neuron in your brain.


" if some random quantum nothingness turned into somethingness in just
> the right way, then you would agree with me and there is nothing you can
> do to change it."
>


Yes.

" Do you not see that it is impossible to care about what you write here if
> those three options were truly the only options?"
>


No.

 "you've been saying that whatever isn't deterministic must be random."
>


Yes.

"Neither of us disagree about randomness, so that leaves determinism vs
> determinism + choice."
>


This isn't really that difficult. If you made a choice for a reason then
it's deterministic, if you made a choice for no reason then it's random.
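
One way to make this dichotomy concrete, purely as an illustration
(the function names and the scenario are invented for the example):
a choice made for a reason is a function of its inputs and repeats
whenever the inputs repeat, while a choice made for no reason is a
draw that ignores the inputs altogether:

    import random

    def choice_for_a_reason(situation):
        # Deterministic: the same reasons (inputs) always yield the same choice.
        return "flee" if "ax" in situation else "stay"

    def choice_for_no_reason(situation):
        # Random: the output is independent of the inputs.
        return random.choice(["flee", "stay"])

    # Rerunning the reasoned choice changes nothing:
    assert choice_for_a_reason("man with ax") == choice_for_a_reason("man with ax")
    # The random version offers no such guarantee, however often it runs.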


" Choice is not deterministic and also not random."
>

Then the only alternative is gibberish.

" A yellow traffic signal is not red and it is not green."
>

Yes, but you're saying a yellow traffic signal is not red AND not not
red, and that my friend is gibberish.


"It's you who are denying the obvious role of free will in our every
> conscious moment."
>


The idea of "free will" would have to improve dramatically before I
could deny it, until then denying "free will" would be like denying a
burp.

"It's like I'm watching Fox News or something."
>

That's the worst insult I've ever had in my life.

" When I type now, I could say anything. I can say trampoline isotope,
> or I can make up a word like cheesaholic. It's not random."
>


OK, if it's not random then there is a reason, so what was the reason
for linking "trampoline" and "isotope" rather than say "squeamish" and
"osprey"? If you can answer then there was a reason and thus the
response was deterministic. If you can not answer then there are 2
possibilities:

1) There was a reason but it's deep in your subconscious and your
conscious mind can not access it, then it was still deterministic.

2) There was no reason whatsoever for picking those words,  and so despite
your assertion the choice was indeed random.


" There were other possibilities but I choose those words intentionally.
> They appealed to me aesthetically. I like them."
>

Deterministic.

" You can label that a reason"
>

I certainly will.

 " What does it mean to like something? "

It means you tend to do or use that something as often as you can, and you
endeavor to get
more of it.

" We are not just a bundle of effects, but we are able to yoke those
> effects together as a cause of our choosing. That is free will."
>

A hurricane does exactly the same thing, so a hurricane has free will.

"Conscious control is free will. They mean the same thing."
>


That's just "will" and I have no difficulty about what that means, we
want some things and are repelled by others and our will is the result
of that push and pull, our will causes our body to try to maximize the
one and minimize the other. But apparently this "free will" thing is like
plain ordinary "will" except that it doesn't happen for a reason and it
doesn't not happen for a reason either, and that's what turns a
perfectly legitimate concept into pure unadulterated gibberish.
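
That push-and-pull account of plain 'will' can be written down in a
line or two of code: pick the action that maximizes attraction minus
repulsion. A toy sketch, with invented weights, as one formalization
of the wording above rather than anything stated in the thread:

    # Will as the resultant of push and pull: choose the action that
    # maximizes attraction minus repulsion. All numbers are made up.
    def will(actions, attraction, repulsion):
        return max(actions, key=lambda a: attraction[a] - repulsion[a])

    actions = ["take drug", "abstain"]
    attraction = {"take drug": 0.9, "abstain": 0.4}
    repulsion = {"take drug": 0.7, "abstain": 0.1}
    print(will(actions, attraction, repulsion))  # "abstain": 0.3 beats 0.2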


" consciousness is just the tip of the iceberg. The overwhelming
> majority of what goes on in the psyche and the brain is
> not under our control or within our direct awareness."
>

So you may do things for reasons you don't know and can't understand.

" The fact that we the experience of control of anything at all is
actual evidence of free will."

Cannot comment, don't know what ASCII string "free will" means.

" Someone could sneak into your room while you are sleeping tonight and
> poke your eyes out with nine inch nails and any thought of tracking that
> person down and preventing them from hurting other would be gibberish?"
>


There are only 2 
