Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-09 Thread Bruno Marchal


On 05 Apr 2013, at 11:17, Telmo Menezes wrote:


On Fri, Apr 5, 2013 at 1:09 AM, meekerdb meeke...@verizon.net wrote:

On 4/4/2013 3:50 PM, Telmo Menezes wrote:

On Wed, Apr 3, 2013 at 10:44 PM, Jason Resch jasonre...@gmail.com wrote:

On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes te...@telmomenezes.com wrote:

On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whatsons...@gmail.com wrote:

Then shouldn't a powerful computer be able to quickly deduce the winning
Arimaa mappings?



You're making the same mistake as John Clark, confusing the physical
computer with the algorithm. Powerful computers don't help us if we don't
have the right algorithm. The central mystery of AI, in my opinion, is why
on earth we haven't found a general learning algorithm yet. Either it's
too complex for our monkey brains, or you're right that computation is not
the whole story. I believe in the former, but I'm not sure, of course.
Notice that I'm talking about generic intelligence, not consciousness,
which I strongly believe to be two distinct phenomena.



Another point toward Telmo's suspicion that learning is complex:

If learning and thinking intelligently at a human level were
computationally easy, biology wouldn't have evolved to use trillions of
synapses. The brain is very expensive metabolically (using 20-25% of the
body's total energy budget of roughly 100 watts). If so many neurons were
not needed to do what we do, natural selection would have favored those
humans with fewer neurons and reduced food requirements.


Yes, but one can imagine a situation where there is a simple
(sufficiently-)general-purpose algorithm that needs somewhere to store
memories and everything it has learned. In this case, we could implement
such an algorithm on one of our puny laptops and get some results, and
then just ride what's left of Moore's law all the way to the singularity.
We don't know of any such algorithm.



But it doesn't follow from human brain complexity that no such algorithm
exists. Evolution doesn't necessarily do things efficiently. Because it
can't start over, it always depends on modification of what already works.
But I think there are other theoretical and evolutionary reasons that
would limit the scope of general intelligence. Just to take an example,
mathematics is very hard for a lot of people. Mathematical thinking is not
something that has been evolutionarily useful until recent times (and
maybe not even now).


Agreed.

What puzzles me the most is not that evolution hasn't found it (although
we're not sure; there's a lot we still don't know about the brain). It's
that the swarm of smart people who have been looking for it haven't found
it. I still have some hope that it's simple but highly counter-intuitive.


Every recursively enumerable class of total computable functions is
learnable. There is a simple algorithm: dovetail on that class, and
output the programs which match the data so far. This will converge (in
the computer-science sense: eventually output something correct, and keep
outputting it) to a program explaining the input-output pairs given.
That algorithm already fails if there are strictly partial functions in
the class, and it will not work on a non-recursively-enumerable class
(like the class of all total functions). Such an algorithm is of course
not really practical, but it can be accelerated, even more so if we allow
a weakening of the identification criteria. Prior knowledge and rules of
thumb can also accelerate it, with non-computable (= huge) gains.
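[Editor's illustration, not code from the thread: the learner Bruno describes, sketched under simplifying assumptions. The names (`identify_in_the_limit`, etc.) are invented, and because every hypothesis is assumed total, the dovetailing collapses to a plain search.]

```python
# Sketch of learning in the limit for a recursively enumerable class of
# total computable functions. The class is given as an enumeration of
# total programs; after each datum the learner outputs the first program
# consistent with everything seen so far. Once the data rule out all
# earlier programs, the guess stabilizes on a correct one forever.

def identify_in_the_limit(hypotheses, data_stream):
    """hypotheses: enumeration (here a list) of total functions.
    data_stream: iterable of (x, f(x)) pairs for some f in the class.
    Yields the index of the current guess after each datum."""
    seen = []
    for x, y in data_stream:
        seen.append((x, y))
        # Because every hypothesis is total, each consistency check
        # halts, so genuine dovetailing degenerates to simple search.
        for i, h in enumerate(hypotheses):
            if all(h(a) == b for a, b in seen):
                yield i
                break

# Toy class {n -> n, n -> n^2, n -> 2n}; the data come from n -> 2n.
hypotheses = [lambda n: n, lambda n: n * n, lambda n: 2 * n]
data = [(0, 0), (1, 2), (3, 6)]
guesses = list(identify_in_the_limit(hypotheses, data))
# (0, 0) is consistent with all three hypotheses, so the first guess is
# index 0; (1, 2) rules out n and n^2, and the guess converges to index 2.
```

As Bruno notes, this same loop breaks down if a hypothesis may diverge (the consistency check would hang without real dovetailing) or if the class is not recursively enumerable.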


The field of theoretical learning is very rich, and leads to the idea
that competence is never really universal: it is quite domain dependent,
and can be sped up by allowing the usual things that evolution exploited
all the time: the making of errors, randomness, teams or swarms of
machines, etc. It leads to a large variety of possible implementations of
competence, and exploits maximally the high general intelligence which
already exists in any universal machine. But as I said, such high
competence can also restrict that intelligence. Intelligence is needed to
develop competence, but competence tends to make that intelligence blind.
With language and culture, that blindness can pass from one generation to
another, so that babies can become stubborn more quickly than they
otherwise would, until the next paradigm shift.


Bruno






Telmo.



Brent


--
You received this message because you are subscribed to the Google Groups
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an
email to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-l...@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-09 Thread Bruno Marchal


On 05 Apr 2013, at 16:16, Craig Weinberg wrote:




On Friday, April 5, 2013 9:41:40 AM UTC-4, Bruno Marchal wrote:

On 05 Apr 2013, at 00:07, Craig Weinberg wrote (to Jason)

There are algorithms for implementing anything that does not involve
infinities.


Why do you think so? What algorithm implements purple or pain?


What makes you think that purple or pain don't involve infinities?

They might, but why does that make a difference?


(Also, many algorithms do involve infinities. Machines can provide names
for ordinals up to the Church-Kleene ordinal omega_1^CK, and they can
reason in ZF like any of us.
I don't see why computers cannot beat humans at the naming of
infinities, even if that task can be considered the least algorithmic
one ever conceived by humans.)


Why do you think that purple is a name?


Why do you think I would think that purple is a name?

Bruno





Craig


Bruno




http://iridia.ulb.ac.be/~marchal/








http://iridia.ulb.ac.be/~marchal/







Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-09 Thread Bruno Marchal


On 05 Apr 2013, at 16:30, Jason Resch wrote:





On Fri, Apr 5, 2013 at 8:41 AM, Bruno Marchal marc...@ulb.ac.be wrote:


On 05 Apr 2013, at 00:07, Craig Weinberg wrote (to Jason)

There are algorithms for implementing anything that does not involve
infinities.


Why do you think so? What algorithm implements purple or pain?


What makes you think that purple or pain don't involve infinities?

(Also, many algorithms do involve infinities. Machines can provide names
for ordinals up to the Church-Kleene ordinal omega_1^CK, and they can
reason in ZF like any of us.
I don't see why computers cannot beat humans at the naming of
infinities, even if that task can be considered the least algorithmic
one ever conceived by humans.)



I should clarify what I meant by infinities. I meant that there are
algorithms for computing anything that can be solved without requiring an
infinite number of steps or infinite precision. So unless infinite
precision or an infinite number of steps are required to emulate brain
behavior, a computer should be capable of expressing all outwardly
visible behaviors any human can. (Craig has disputed this point before.)
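[Editor's illustration of the finite-steps, finite-precision point, not from the thread; all names and constants are invented. A toy leaky integrate-and-fire neuron stepped in fixed-point integer arithmetic: such dynamics involve no infinities, so they are exactly computable.]

```python
# Toy leaky integrate-and-fire neuron in fixed-point arithmetic. Every
# quantity is a bounded integer and every update is one of finitely many
# steps, so the whole trajectory is computed exactly, with no rounding
# drift and no appeal to infinite precision.

SCALE = 1000           # fixed-point scale: 1.000 is stored as 1000
LEAK = 900             # the membrane keeps 0.9 of its value each step
THRESHOLD = 2 * SCALE  # fire when the potential reaches 2.0

def simulate(inputs):
    """inputs: list of fixed-point input currents, one per time step.
    Returns the list of time steps at which the neuron fires."""
    v, spikes = 0, []
    for t, current in enumerate(inputs):
        v = (v * LEAK) // SCALE + current  # integer-only leaky integration
        if v >= THRESHOLD:
            spikes.append(t)
            v = 0                          # reset after a spike
    return spikes

# A constant drive of 0.8 per step makes this neuron fire every third step.
spikes = simulate([800] * 10)
```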


OK, I guessed so.

Bruno



http://iridia.ulb.ac.be/~marchal/







Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-08 Thread Telmo Menezes
On Thu, Apr 4, 2013 at 11:50 PM, meekerdb meeke...@verizon.net wrote:
 On 4/4/2013 3:35 PM, Telmo Menezes wrote:

 On Wed, Apr 3, 2013 at 11:44 PM, meekerdb meeke...@verizon.net wrote:

 On 4/3/2013 2:44 PM, Jason Resch wrote:

 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we
 don't
 have the right algorithm. The central mystery of AI, in my opinion, is
 why
 on earth haven't we found a general learning algorithm yet. Either it's
 too
 complex for our monkey brains, or you're right that computation is not
 the
 whole story. I believe in the former, but I'm not sure, of course.
 Notice that I'm talking about generic intelligence, not consciousness,
 which
 I strongly believe to be two distinct phenomena.


 Then do you think there could be philosophical zombies?

 Yes.


 Could it be that some humans are zombies, or do you assume that to be a
 zombie would mean being physically different from a human being?

I don't know.

 How would you
 operationally test a robot to see whether it was (a) intelligent

 I don't see intelligence as a binary property, but relative to goals.
 The classical answer for human-like intelligence is something like the
 Turing test, but I don't like it. I don't think that a generic AI
 should be measured by its ability to fool us into making us think
 it's human. Instead I'd have to ask you first what do you want the
 robot for? Personally I would want robots to free Humanity from
 unwanted labor. This is a high-level goal that requires what I
 consider to be generic AI. Can it learn all sorts of tasks like
 driving a car, working in a factory, following fuzzy requirements, etc


 Yes, I agree with that.  I'd say intelligence is being able to learn to be
 competent at many tasks, but there is no completely general intelligence.

Agreed.

 I think for social beings it includes being able to explain, to give reasons,
 which implies some empathy.

No doubt. But could the ability to model other beings be sufficient
for empathy? A sort of dispassionate empathy? I think so.


 etc?

 (b)
 conscious?

 I don't believe that such a test can exist. I don't even think we can
 know if a glass of water is conscious.


 Have you ever been unconscious?

I don't know. All I know is that there are periods of my timeline that
I cannot remember. During college, there were a couple of incidents
that I cannot remember, but my friends would tell you I was conscious.

Telmo.


 Brent








Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-08 Thread Craig Weinberg


On Monday, April 8, 2013 7:42:22 AM UTC-4, telmo_menezes wrote:

 On Thu, Apr 4, 2013 at 11:50 PM, meekerdb meek...@verizon.net wrote:
  On 4/4/2013 3:35 PM, Telmo Menezes wrote:
 
  On Wed, Apr 3, 2013 at 11:44 PM, meekerdb meek...@verizon.net wrote:
  
  On 4/3/2013 2:44 PM, Jason Resch wrote:
 
  You're making the same mistake as John Clark, confusing the physical
  computer with the algorithm. Powerful computers don't help us if we
  don't have the right algorithm. The central mystery of AI, in my
  opinion, is why on earth we haven't found a general learning algorithm
  yet. Either it's too complex for our monkey brains, or you're right
  that computation is not the whole story. I believe in the former, but
  I'm not sure, of course. Notice that I'm talking about generic
  intelligence, not consciousness, which I strongly believe to be two
  distinct phenomena.
  
  
  Then do you think there could be philosophical zombies? 
  
  Yes. 
  
  
  Could it be that some humans are zombies, or do you assume that to be a 
  zombie would mean being physically different from a human being? 

 I don't know. 


Sociopaths are emotional zombies. Corruption or addiction can make people
into moral zombies. Driving for a long time can make someone a highway
zombie. Cults and brainwashing can make someone an intellectual zombie.
There are many ways in which aspects of personal human consciousness can
become subconscious, sub-personal, and sub-human.

Craig





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-06 Thread Jason Resch
If you see the image in the mirror and interact with it, then there has to
be something conscious somewhere.  Just like a human controlling a remote
control car.  The consciousness might exist somewhere else, but the car can
behave as intelligently as a human.

Jason


On Fri, Apr 5, 2013 at 10:55 AM, Craig Weinberg whatsons...@gmail.com wrote:



 On Friday, April 5, 2013 10:30:29 AM UTC-4, Jason wrote:




 On Fri, Apr 5, 2013 at 8:41 AM, Bruno Marchal mar...@ulb.ac.be wrote:


 On 05 Apr 2013, at 00:07, Craig Weinberg wrote (to Jason)

  There are algorithms for implementing anything that does not involve
 infinities.


 Why do you think so? What algorithm implements purple or pain?


  What makes you think that purple or pain don't involve infinities?

  (Also, many algorithms do involve infinities. Machines can provide names
  for ordinals up to the Church-Kleene ordinal omega_1^CK, and they can
  reason in ZF like any of us.
  I don't see why computers cannot beat humans at the naming of
  infinities, even if that task can be considered the least algorithmic
  one ever conceived by humans.)


 I should clarify what I meant by infinities.  I meant there are
 algorithms that for computing anything that can be solved which does not
 require an infinite number of steps or infinite precision to do so.  So
 unless infinite precision or infinite steps are required to emulate brain
 behavior, a computer should be capable of expressing all outwardly visible
 behaviors any human can.  (Craig has disputed this point before)


 A mirror can express all outwardly visible behaviors of a human already.
 Put a speaker at mouth level behind the mirror, a camera at eye level, a
 microphone at ear level, and voila, you have a mirror zombie. The only
 difference with an AI zombie is that the behaviors have been approximated
 statistically from correlations of analyzed recordings so that the
 mirroring is divided up into bits and controlled mathematically. Taking
 this to the level of brain behavior only makes the bits much more numerous.

 Craig




 Jason









Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-06 Thread Craig Weinberg


On Saturday, April 6, 2013 2:53:53 AM UTC-4, Jason wrote:

 If you see the image in the mirror and interact with it, then there has to 
 be something conscious somewhere.  Just like a human controlling a remote 
 control car.  The consciousness might exist somewhere else, but the car can 
 behave as intelligently as a human.


No, there only has *to have been* consciousness at some time. The 'mirror'
could be a recording that you made of yourself, projected behind the glass,
in which you can appear to interview yourself for an hour about your
childhood. Could an audience tell that you were not interviewing your
identical twin? What computer programming does is allow us to record
formalized functions which reflect our presence, and which are so
fragmented and numerous that they can be reconstituted from the bottom up
rather than the top down.

A spoken phrase is digitally synthesized not as a conscious thought or
feeling being stepped down into a phrase made of words, but as an assembly
of dumb phonemes associated quantitatively to a programmatic condition. It
is an a-signifying sequence which can be replayed as needed, without
intelligence, understanding, or responsibility: a logical-phonetic
machine... a kind of crossword puzzle for a powered filing cabinet to fill
out.

The intelligence of a computer program is evidence of an intelligent, 
conscious programmer's efforts, but that's all. Everything else is the 
pathetic fallacy.

Craig


 Jason


  On Fri, Apr 5, 2013 at 10:55 AM, Craig Weinberg whats...@gmail.com wrote:



 On Friday, April 5, 2013 10:30:29 AM UTC-4, Jason wrote:




 On Fri, Apr 5, 2013 at 8:41 AM, Bruno Marchal mar...@ulb.ac.be wrote:


 On 05 Apr 2013, at 00:07, Craig Weinberg wrote (to Jason)

  There are algorithms for implementing anything that does not involve 
 infinities.


 Why do you think so? What algorithm implements purple or pain?


  What makes you think that purple or pain don't involve infinities?
 
  (Also, many algorithms do involve infinities. Machines can provide names
  for ordinals up to the Church-Kleene ordinal omega_1^CK, and they can
  reason in ZF like any of us.
  I don't see why computers cannot beat humans at the naming of
  infinities, even if that task can be considered the least algorithmic
  one ever conceived by humans.)


 I should clarify what I meant by infinities.  I meant there are 
 algorithms that for computing anything that can be solved which does not 
 require an infinite number of steps or infinite precision to do so.  So 
 unless infinite precision or infinite steps are required to emulate brain 
  behavior, a computer should be capable of expressing all outwardly visible
 behaviors any human can.  (Craig has disputed this point before)


 A mirror can express all outwardly visible behaviors of a human already. 
 Put a speaker at mouth level behind the mirror, a camera at eye level, a 
 microphone at ear level, and voila, you have a mirror zombie. The only 
 difference with an AI zombie is that the behaviors have been approximated 
 statistically from correlations of analyzed recordings so that the 
 mirroring is divided up into bits and controlled mathematically. Taking 
 this to the level of brain behavior only makes the bits much more numerous.

 Craig

  


 Jason

  
  








Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-05 Thread Telmo Menezes
On Fri, Apr 5, 2013 at 1:09 AM, meekerdb meeke...@verizon.net wrote:
 On 4/4/2013 3:50 PM, Telmo Menezes wrote:

 On Wed, Apr 3, 2013 at 10:44 PM, Jason Resch jasonre...@gmail.com wrote:



 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes te...@telmomenezes.com
 wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whatsons...@gmail.com
 wrote:



 Then shouldn't a powerful computer be able to quickly deduce the
 winning
 Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we
 don't
 have the right algorithm. The central mystery of AI, in my opinion, is
 why
 on earth haven't we found a general learning algorithm yet. Either it's
 too
 complex for our monkey brains, or you're right that computation is not
 the
  whole story. I believe in the former, but I'm not sure, of course.
 Notice that I'm talking about generic intelligence, not consciousness,
 which
 I strongly believe to be two distinct phenomena.


 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were
 computationally
 easy, biology wouldn't have evolved to use trillions of synapses.  The
 brain
 is very expensive metabolically (using 20 - 25% of the total body's
 energy,
 about 100 Watts).  If so many neurons were not needed to do what we do,
 natural selection would have selected those humans with fewer neurons and
 reduced food requirements.

 Yes but one can imagine a situation where there is a simple
  (sufficiently-)general-purpose algorithm that needs somewhere
  to store memories and everything it has learned. In this case, we
 could implement such an algorithm in one of our puny laptops and get
 some results, and then just ride what's left of Moore's law all the
 way to the singularity. We don't know of any such algorithm.


 But it doesn't follow from human brain complexity that no such algorithm
 exists.  Evolution doesn't necessarily do things efficiently. Because it
  can't start over, it always depends on modification of what already works.
 But I think there are other theoretical and evolutionary reasons that would
 limit the scope of general intelligence.  Just to take an example,
 mathematics is very hard for a lot of people.  Mathematical thinking is not
 something that has been evolutionarily useful until recent times (and maybe
 not even now).

Agreed.

What puzzles me the most is not that evolution hasn't found it
(although we're not sure, there's a lot we don't know about the brain
still). It's that the swarm of smart people that have been looking for
it haven't found it. I still have some hope that it's simple but
highly counter-intuitive.

Telmo.


 Brent









Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-05 Thread Bruno Marchal


On 05 Apr 2013, at 00:07, Craig Weinberg wrote (to Jason)

There are algorithms for implementing anything that does not involve
infinities.


Why do you think so? What algorithm implements purple or pain?


What makes you think that purple or pain don't involve infinities?

(Also, many algorithms do involve infinities. Machines can provide names
for ordinals up to the Church-Kleene ordinal omega_1^CK, and they can
reason in ZF like any of us.
I don't see why computers cannot beat humans at the naming of
infinities, even if that task can be considered the least algorithmic
one ever conceived by humans.)
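[Editor's illustration of machines "naming infinities", not from the thread, and far more modest than omega_1^CK: ordinals below epsilon_0 in Cantor normal form, which a machine can name and compare mechanically. The representation and the helpers `name` and `less` are invented for this sketch.]

```python
# Ordinal notations below epsilon_0: an ordinal in Cantor normal form
# omega^a1 + ... + omega^ak is represented as the tuple (a1, ..., ak) of
# its exponents (in non-increasing order), each exponent again an
# ordinal. The empty tuple is 0.

def name(o):
    """Render an ordinal as a readable string, writing omega as 'w'."""
    if not o:
        return "0"
    parts = []
    for e in o:
        if not e:                 # omega^0 = 1
            parts.append("1")
        elif e == ((),):          # omega^1 = omega
            parts.append("w")
        else:
            parts.append("w^(" + name(e) + ")")
    return " + ".join(parts)

def less(a, b):
    """Strict order on Cantor normal forms: compare the exponent
    sequences lexicographically; a proper prefix is the smaller one."""
    for x, y in zip(a, b):
        if less(x, y):
            return True
        if less(y, x):
            return False
    return len(a) < len(b)

zero = ()
one = (zero,)           # omega^0
w = (one,)              # omega
w_plus_1 = (one, zero)  # omega + 1
w_to_w = (w,)           # omega^omega
```

Here name(w_plus_1) gives "w + 1", and less(w, w_plus_1) and less(w_plus_1, w_to_w) both hold; pushing such notation systems as far as they can computably go is what leads toward omega_1^CK.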


Bruno




http://iridia.ulb.ac.be/~marchal/







Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-05 Thread Craig Weinberg


On Friday, April 5, 2013 9:41:40 AM UTC-4, Bruno Marchal wrote:


 On 05 Apr 2013, at 00:07, Craig Weinberg wrote (to Jason)

 There are algorithms for implementing anything that does not involve 
 infinities.


 Why do you think so? What algorithm implements purple or pain?


  What makes you think that purple or pain don't involve infinities?


They might, but why does that make a difference?
 


  (Also, many algorithms do involve infinities. Machines can provide names
  for ordinals up to the Church-Kleene ordinal omega_1^CK, and they can
  reason in ZF like any of us.
  I don't see why computers cannot beat humans at the naming of
  infinities, even if that task can be considered the least algorithmic
  one ever conceived by humans.)


Why do you think that purple is a name?

Craig
 


 Bruno




 http://iridia.ulb.ac.be/~marchal/









Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-05 Thread Jason Resch
On Fri, Apr 5, 2013 at 8:41 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 05 Apr 2013, at 00:07, Craig Weinberg wrote (to Jason)

 There are algorithms for implementing anything that does not involve
 infinities.


 Why do you think so? What algorithm implements purple or pain?


 What makes you think that purple or pain don't involve infinities?

 (Also, many algorithms do involve infinities. Machines can provide names
 for ordinals up to the Church-Kleene ordinal omega_1^CK, and they can
 reason in ZF like any of us.
 I don't see why computers cannot beat humans at the naming of
 infinities, even if that task can be considered the least algorithmic
 one ever conceived by humans.)


I should clarify what I meant by infinities. I meant that there are
algorithms for computing anything that can be solved without requiring an
infinite number of steps or infinite precision. So unless infinite
precision or an infinite number of steps are required to emulate brain
behavior, a computer should be capable of expressing all outwardly visible
behaviors any human can. (Craig has disputed this point before.)

Jason





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-05 Thread Craig Weinberg


On Friday, April 5, 2013 10:30:29 AM UTC-4, Jason wrote:




 On Fri, Apr 5, 2013 at 8:41 AM, Bruno Marchal mar...@ulb.ac.be wrote:


 On 05 Apr 2013, at 00:07, Craig Weinberg wrote (to Jason)

  There are algorithms for implementing anything that does not involve 
 infinities.


 Why do you think so? What algorithm implements purple or pain?


  What makes you think that purple or pain don't involve infinities?
 
  (Also, many algorithms do involve infinities. Machines can provide names
  for ordinals up to the Church-Kleene ordinal omega_1^CK, and they can
  reason in ZF like any of us.
  I don't see why computers cannot beat humans at the naming of
  infinities, even if that task can be considered the least algorithmic
  one ever conceived by humans.)


 I should clarify what I meant by infinities.  I meant there are algorithms 
 that for computing anything that can be solved which does not require an 
 infinite number of steps or infinite precision to do so.  So unless 
 infinite precision or infinite steps are required to emulate brain 
 behavior, a computer should be capable of expressing all outwardly visible
 behaviors any human can.  (Craig has disputed this point before)


A mirror can express all outwardly visible behaviors of a human already. 
Put a speaker at mouth level behind the mirror, a camera at eye level, a 
microphone at ear level, and voila, you have a mirror zombie. The only 
difference with an AI zombie is that the behaviors have been approximated 
statistically from correlations of analyzed recordings so that the 
mirroring is divided up into bits and controlled mathematically. Taking 
this to the level of brain behavior only makes the bits much more numerous.

Craig

 


 Jason






Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-05 Thread John Clark
On Thu, Apr 4, 2013 at 6:43 PM, Telmo Menezes te...@telmomenezes.comwrote:

 There probably isn't a general learning algorithm there are just lots of
 lesser learning algorithms,


  Then you need a higher-level algorithm to decide which lesser algorithm
 to use for each situation.


Yes, or use a rule of thumb to decide on an algorithm, or use lots of
algorithms and check for one that gives a reasonable-sounding answer.

 and more important than any algorithm or even deduction in general are
 rules of thumb, like induction.


 I don't understand the distinction. Rules of thumb and induction seem
 algorithmic to me.


Within their domain of applicability algorithms can be proven to ALWAYS
work; rules of thumb USUALLY work, and even that cannot be proven but is
known through induction; and sometimes, not usually but sometimes, induction
can be wrong. So, just like us, even a computerized Jupiter brain would not
be immune from error.
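
That distinction can be made concrete with a small example (mine, not from the thread): an exact change-making algorithm is provably optimal within its domain, while the greedy rule of thumb usually works but can be wrong.

```python
# Hedged sketch: an exact algorithm vs a rule of thumb for making change.
# The greedy heuristic usually works, but can be wrong -- the provable
# vs usually-works distinction in miniature.

def greedy_change(coins, amount):
    """Rule of thumb: always take the largest coin that fits."""
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used if amount == 0 else None

def exact_change(coins, amount):
    """Provably optimal within its domain: dynamic programming."""
    best = {0: []}  # best[a] = shortest list of coins summing to a
    for a in range(1, amount + 1):
        candidates = [best[a - c] + [c] for c in coins
                      if a - c >= 0 and best.get(a - c) is not None]
        best[a] = min(candidates, key=len) if candidates else None
    return best[amount]

coins = [1, 3, 4]
print(greedy_change(coins, 6))  # [4, 1, 1] -- heuristic answer, 3 coins
print(exact_change(coins, 6))   # [3, 3]    -- optimal, 2 coins
```

With denominations {1, 3, 4} and amount 6, the rule of thumb gives three coins where two suffice, yet it is right for most everyday coin systems.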

  John K Clark





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-05 Thread Bruno Marchal


On 03 Apr 2013, at 16:48, Craig Weinberg wrote:





snip



Computationalism is the opposite of reductionism. It points to the  
failure of the reductionist conception of numbers and machines.


But it does so by swallowing whole all phenomena which are not  
numbers and machines.



Not at all. It does that by attributing consciousness to persons when  
supported by the relevant arithmetical relations.






Therefore it is also reductionist,


Not at all. You are the reductionist, as you reduce those persons into  
inexistence and see only their local bodies/arithmetical relations.





and *radically so* because all realism is reduced to the  
consequences of arithmetic.


That is just because those consequences are enough for the ontology,  
once we assume comp, and we prove in detail that the epistemology is  
already not boundable by any mathematics.





I think it's telling that you deny that it is reductionist and claim  
that it could be 'the opposite' of reductionism. It seems like you  
don't care that all of the universe including our experience is  
dehydrated into interchangeable granular sequences, as long as it  
elevates and liberates arithmetic.


That is comp, and it indeed liberates the robots and many arithmetical  
creatures from reductionism. We get a view from inside, mainly private  
and unknown from the outside, for those creatures. You reduce them into  
zombies.




What would a universe of numbers really want with personhood and  
sensation?


It is an open question if the universe of numbers wants something,  
but it is provable that it cannot prevent the universal numbers,  
relative to each other, from developing complex multileveled public and  
private relationships and personhood.


It should seem obvious that you are the reductionist here, as you  
attribute consciousness to fewer entities than a computationalist does.  
Indeed you invoke a pathetic fallacy where a non-reductionist, like a  
computationalist, will invoke a possibly genuine consciousness,  
sensation and personhood.


Bruno





http://iridia.ulb.ac.be/~marchal/







Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-04 Thread Craig Weinberg


On Thursday, April 4, 2013 12:08:25 AM UTC-4, Jason wrote:




 On Wed, Apr 3, 2013 at 9:54 PM, Craig Weinberg 
 whats...@gmail.comjavascript:
  wrote:



 On Wednesday, April 3, 2013 8:58:37 PM UTC-4, Jason wrote:




 On Wed, Apr 3, 2013 at 6:04 PM, Craig Weinberg whats...@gmail.comwrote:



 On Wednesday, April 3, 2013 5:44:24 PM UTC-4, Jason wrote:




 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes te...@telmomenezes.com
  wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg 
 whats...@gmail.comwrote:



 Then shouldn't a powerful computer be able to quickly deduce the 
 winning Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical 
 computer with the algorithm. Powerful computers don't help us if we 
 don't 
 have the right algorithm. The central mystery of AI, in my opinion, is 
 why 
 on earth haven't we found a general learning algorithm yet. Either it's 
 too 
 complex for our monkey brains, or you're right that computation is not 
 the 
 whole story. I believe in the former, but I'm not sure, of course. 
 Notice that I'm talking about generic intelligence, not consciousness, 
 which I strongly believe to be two distinct phenomena.
   


 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were 
 computationally easy, biology wouldn't have evolved to use trillions of 
 synapses.  The brain is very expensive metabolically (using 20 - 25% of 
 the 
 total body's energy, about 100 Watts).  If so many neurons were not 
 needed 
 to do what we do, natural selection would have selected those humans with 
 fewer neurons and reduced food requirements.


 There's no question that human intelligence reflects an improved 
 survival through learning, and that that is what makes the physiological 
 investment pay off.


 Right, so my point is that we should not expect things like human 
 intelligence or human learning to be trivial or easy to get in robots, when 
 the human brain is the most complex thing we know, and can perform more 
 computations than even the largest super computers of today.


 Absolutely, but neither should we expect that complexity alone 


 I don't think anyone has argued that complexity alone is sufficient.


What else are you suggesting makes the difference?
 


  

 can make an assembly of inorganic parts into a subjective experience 
 which compares to that of an animal. 


 Both are made of the same four fundamental forces interacting with each 
 other, why should the number of protons in the nucleus of some atoms in 
 those organic molecules make any difference to the subject? 


Why does arsenic have a different effect on the body than sugar? Different 
forms signify different possibilities and potentials on many different 
levels. The number of protons causes some things on some levels by virtue 
of its topological potentials, but that is not the cause of order in the 
cosmos, or awareness. Gold could be any number of protons in a nucleus; 
it just happens to be using that configuration, just like the IP address of 
a website does not determine its content.

 

 What led you to chose the chemical elements as the origin of sense and 
 feeling, as opposed to higher level structures (neurology, circuits, etc.) 
 or lower level structures (quarks, gluons, electrons)?


The chemical elements have nothing to do with the origin of sense and 
feeling at all, just like the letters of the alphabet have nothing to do 
with the origin of Shakespeare. Shakespeare used words, words are made of 
certain combinations of letters and not others, which is what makes them 
words.
 


  


  
  

 What I question is why that improvement would entail awareness.


 A human has to be aware to do the things it does, because zombies are 
 not possible.


 That's begging the question.


 Not quite, I provided an argument for my reasoning. 


If your argument is A = B because B = A then you are still begging the 
question.

 

 What is your objection, that zombies are possible, or that zombies are not 
 possible but that doesn't mean something that in all ways appears conscious 
 must be conscious?


My objection is that the premise of zombies is broken to begin with. It 
asks the wrong question and makes the wrong assumption about consciousness. 
There is no 'in all ways appears'...it is always 'in all ways appears to X 
under Y circumstance'.


  

 Anything that is not exactly what we might assume it is would be a 
 'zombie' to some extent. A human does not have to be aware to do the things 
 that it does, which is proved by blindsight, sleepwalking, brainwashing, 
 etc. A human may, in reality, have to be aware to perform all of the 
 functions that we do, but if comp were true, that would not be the case.
  

   Your examples of blind sight are not a disproof of the separability of 
 function and awareness,


 I understand why you think that, but ultimately it is proof of 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-04 Thread Jason Resch
On Thu, Apr 4, 2013 at 1:06 AM, Craig Weinberg whatsons...@gmail.comwrote:



 On Thursday, April 4, 2013 12:08:25 AM UTC-4, Jason wrote:




 On Wed, Apr 3, 2013 at 9:54 PM, Craig Weinberg whats...@gmail.comwrote:



 On Wednesday, April 3, 2013 8:58:37 PM UTC-4, Jason wrote:




 On Wed, Apr 3, 2013 at 6:04 PM, Craig Weinberg whats...@gmail.comwrote:



 On Wednesday, April 3, 2013 5:44:24 PM UTC-4, Jason wrote:




 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes 
 te...@telmomenezes.com wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg 
 whats...@gmail.comwrote:



 Then shouldn't a powerful computer be able to quickly deduce the
 winning Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we 
 don't
 have the right algorithm. The central mystery of AI, in my opinion, is 
 why
 on earth haven't we found a general learning algorithm yet. Either it's 
 too
 complex for our monkey brains, or you're right that computation is not 
 the
 whole story. I believe in the former, but I'm not sure, of course.
 Notice that I'm talking about generic intelligence, not consciousness,
 which I strongly believe to be two distinct phenomena.



 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were
 computationally easy, biology wouldn't have evolved to use trillions of
 synapses.  The brain is very expensive metabolically (using 20 - 25% of 
 the
 total body's energy, about 100 Watts).  If so many neurons were not 
 needed
 to do what we do, natural selection would have selected those humans with
 fewer neurons and reduced food requirements.


 There's no question that human intelligence reflects an improved
 survival through learning, and that that is what makes the physiological
 investment pay off.


 Right, so my point is that we should not expect things like human
 intelligence or human learning to be trivial or easy to get in robots, when
 the human brain is the most complex thing we know, and can perform more
 computations than even the largest super computers of today.


 Absolutely, but neither should we expect that complexity alone


 I don't think anyone has argued that complexity alone is sufficient.


 What else are you suggesting makes the difference?



To implement human learning and intelligence we need the right algorithm and
sufficient computational power to run it.






 can make an assembly of inorganic parts into a subjective experience
 which compares to that of an animal.


 Both are made of the same four fundamental forces interacting with each
 other, why should the number of protons in the nucleus of some atoms in
 those organic molecules make any difference to the subject?


 Why does arsenic have a different effect on the body than sugar? Different
 forms signify different possibilities and potentials on many different
 levels. The number of protons causes some things on some levels by virtue
 of its topological potentials, but that is not the cause of order in the
 cosmos, or awareness. Gold could be any number of protons in a nucleus;
 it just happens to be using that configuration, just like the IP address of
 a website does not determine its content.


If your theory is that sense is primitive then how do you justify your
belief that certain materials are associated with certain possibilities of
experience?






 What led you to chose the chemical elements as the origin of sense and
 feeling, as opposed to higher level structures (neurology, circuits, etc.)
 or lower level structures (quarks, gluons, electrons)?


 The chemical elements have nothing to do with the origin of sense and
 feeling at all, just like the letters of the alphabet have nothing to do
 with the origin of Shakespeare. Shakespeare used words, words are made of
 certain combinations of letters and not others, which is what makes them
 words.


Exactly, I just think the alphabet for spelling different conscious states
exists at a lower level than you think it does, e.g., in the logic of recursive
relationships, rather than in atomic elements.












 What I question is why that improvement would entail awareness.


 A human has to be aware to do the things it does, because zombies are
 not possible.


 That's begging the question.


 Not quite, I provided an argument for my reasoning.


 If your argument is A = B because B = A then you are still begging the
 question.


You are arguing it is possible to see without seeing, which I equate with
zombies, and which I think is logically inconsistent.  It is a proof by
contradiction that shows it is not possible to see without seeing.






 What is your objection, that zombies are possible, or that zombies are
 not possible but that doesn't mean something that in all ways appears
 conscious must be conscious?


 My objection is that the premise of zombies is broken to begin with.



Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-04 Thread Craig Weinberg


On Thursday, April 4, 2013 2:31:09 AM UTC-4, Jason wrote:




 On Thu, Apr 4, 2013 at 1:06 AM, Craig Weinberg 
 whats...@gmail.comjavascript:
  wrote:



 On Thursday, April 4, 2013 12:08:25 AM UTC-4, Jason wrote:




 On Wed, Apr 3, 2013 at 9:54 PM, Craig Weinberg whats...@gmail.comwrote:



 On Wednesday, April 3, 2013 8:58:37 PM UTC-4, Jason wrote:




 On Wed, Apr 3, 2013 at 6:04 PM, Craig Weinberg whats...@gmail.comwrote:



 On Wednesday, April 3, 2013 5:44:24 PM UTC-4, Jason wrote:




 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes 
 te...@telmomenezes.com wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whats...@gmail.com
  wrote:



 Then shouldn't a powerful computer be able to quickly deduce the 
 winning Arimaa mappings?


 You're making the same mistake as John Clark, confusing the 
 physical computer with the algorithm. Powerful computers don't help us 
 if 
 we don't have the right algorithm. The central mystery of AI, in my 
 opinion, is why on earth haven't we found a general learning algorithm 
 yet. 
 Either it's too complex for our monkey brains, or you're right that 
 computation is not the whole story. I believe in the former, but 
 I'm 
 not sure, of course. Notice that I'm talking about generic 
 intelligence, 
 not consciousness, which I strongly believe to be two distinct 
 phenomena.
   


 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were 
 computationally easy, biology wouldn't have evolved to use trillions of 
 synapses.  The brain is very expensive metabolically (using 20 - 25% of 
 the 
 total body's energy, about 100 Watts).  If so many neurons were not 
 needed 
 to do what we do, natural selection would have selected those humans 
 with 
 fewer neurons and reduced food requirements.


 There's no question that human intelligence reflects an improved 
 survival through learning, and that that is what makes the physiological 
 investment pay off.


 Right, so my point is that we should not expect things like human 
 intelligence or human learning to be trivial or easy to get in robots, 
 when 
 the human brain is the most complex thing we know, and can perform more 
 computations than even the largest super computers of today.


 Absolutely, but neither should we expect that complexity alone 


 I don't think anyone has argued that complexity alone is sufficient.


 What else are you suggesting makes the difference?
  


 To implement human learning and intelligence we need the right algorithm 
 and sufficient computational power to run it.



There is a curious dualism to computation. All of computation is based on 
representation - on the ability of the computer to set equivalence 
arbitrarily between any two generic tokens. This is the power of computers 
- anything can be scanned, downloaded, typed, spoken, etc., and be treated as 
collections of generic cardinality. At the same time, another important 
feature of computation is the ability to take those very same kinds of 
collections and treat them as operational commands - ordinal sequences. We 
have then this binary distinction between that which is treated with 
absolute indifference and that which is treated as unquestioned author. 

This lack of subtlety is what characterizes machines. What it means to be 
mechanical or robotic is not only the absence of nuanced gradation between 
literal obedience to each discrete order and anesthetic generalization of 
every input and output, but it also means the absence of the perpendicular 
range of feelings and emotions which non-machines use to modulate their 
sensitivity and motivation directly. The aesthetic dimension is missing 
entirely in any possible machine. It doesn't matter how clever or correct 
the algorithm, how plentiful the processing resources, no collection of 
representations can ever conjure an authentic presence which is 
proprietary, caring, feeling, understanding, motivated, inspired, etc. 
human or otherwise. 

These qualities do not fall within the categories of blind execution or 
blind equivalence, they require personal investment. An algorithm is by 
definition impersonal and automatic, so that it can only provide ever more 
sophisticated patterns of the same generic command-data structure. An 
algorithm can simulate learning because learning is a function of 
the organization of experiences rather than the experiences themselves. Learning is a problem 
of acquiring and retrieving X rather than inventing or appreciating X.
 


  

  
  

 can make an assembly of inorganic parts into a subjective experience 
 which compares to that of an animal. 


 Both are made of the same four fundamental forces interacting with each 
 other, why should the number of protons in the nucleus of some atoms in 
 those organic molecules make any difference to the subject? 


 Why does arsenic have a different effect on the body than sugar? 
 Different forms signify different 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-04 Thread Jason Resch
On Thu, Apr 4, 2013 at 10:39 AM, Craig Weinberg whatsons...@gmail.comwrote:



 On Thursday, April 4, 2013 2:31:09 AM UTC-4, Jason wrote:




 On Thu, Apr 4, 2013 at 1:06 AM, Craig Weinberg whats...@gmail.comwrote:



 On Thursday, April 4, 2013 12:08:25 AM UTC-4, Jason wrote:




 On Wed, Apr 3, 2013 at 9:54 PM, Craig Weinberg whats...@gmail.comwrote:



 On Wednesday, April 3, 2013 8:58:37 PM UTC-4, Jason wrote:




 On Wed, Apr 3, 2013 at 6:04 PM, Craig Weinberg whats...@gmail.comwrote:



 On Wednesday, April 3, 2013 5:44:24 PM UTC-4, Jason wrote:




 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes 
 te...@telmomenezes.com wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg 
 whats...@gmail.com wrote:



 Then shouldn't a powerful computer be able to quickly deduce the
 winning Arimaa mappings?


 You're making the same mistake as John Clark, confusing the
 physical computer with the algorithm. Powerful computers don't help 
 us if
 we don't have the right algorithm. The central mystery of AI, in my
 opinion, is why on earth haven't we found a general learning 
 algorithm yet.
 Either it's too complex for our monkey brains, or you're right that
 computation is not the whole story. I believe in the former, but 
 I'm
 not sure, of course. Notice that I'm talking about generic 
 intelligence,
 not consciousness, which I strongly believe to be two distinct 
 phenomena.



 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were
 computationally easy, biology wouldn't have evolved to use trillions of
 synapses.  The brain is very expensive metabolically (using 20 - 25% 
 of the
 total body's energy, about 100 Watts).  If so many neurons were not 
 needed
 to do what we do, natural selection would have selected those humans 
 with
 fewer neurons and reduced food requirements.


 There's no question that human intelligence reflects an improved
 survival through learning, and that that is what makes the physiological
 investment pay off.


 Right, so my point is that we should not expect things like human
 intelligence or human learning to be trivial or easy to get in robots, 
 when
 the human brain is the most complex thing we know, and can perform more
 computations than even the largest super computers of today.


 Absolutely, but neither should we expect that complexity alone


 I don't think anyone has argued that complexity alone is sufficient.


 What else are you suggesting makes the difference?



 To implement human learning and intelligence we need the right algorithm
 and sufficient computational power to run it.



 There is a curious dualism to computation. All of computation is based on
 representation - on the ability of the computer to set equivalence
 arbitrarily between any two generic tokens. This is the power of computers
 - anything can be scanned, downloaded, typed, spoken, etc and be treated as
 collections of generic cardinality. At the same time, another important
 feature of computation is the ability to take those very same kinds of
 collections and treat them as operational commands - ordinal sequences. We
 have then this binary distinction between that which is treated with
 absolute indifference and that which is treated as unquestioned author.



How data vs instruction memory is distinguished comes down to an
implementation detail, for example, in the Harvard architecture they are
physically distinct:
http://en.wikipedia.org/wiki/Harvard_architecture#Contrast_with_von_Neumann_architectures

As a security measure, most modern processors distinguish data memory
(which is writable but not executable) from program memory (which is
executable but not writable); in essence, this feature implements a virtual
Harvard architecture.
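
A minimal sketch of the point (the toy machine and its opcodes are invented for illustration, not taken from any real instruction set): in a von Neumann-style memory the same cells can be executed as instructions or read as data, and a Harvard-style split merely forbids the crossover.

```python
# Hedged sketch: a toy stored-program machine. The same list of numbers
# holds both the program and the data it operates on; which role a cell
# plays depends only on how it is reached.

LOAD, ADD, HALT = 0, 1, 2  # hypothetical opcodes

def run(memory):
    """Execute memory as code: [opcode, operand, opcode, operand, ...]."""
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc], memory[pc + 1]
        if op == LOAD:
            acc = memory[arg]   # the operand names a cell read as *data*
        elif op == ADD:
            acc += memory[arg]
        elif op == HALT:
            return acc
        pc += 2

# Cells 0-4 are executed as a program; cells 6-7 are only ever read as data.
mem = [LOAD, 6, ADD, 7, HALT, 0, 40, 2]
print(run(mem))  # 42
```

A Harvard machine would keep the first five cells and the last two in physically separate memories; the computation is the same either way.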

In any event though, I don't see how this is related to human learning or
intelligence.




 This lack of subtlety is what characterizes machines. What it means to be
 mechanical or robotic is not only the absence of nuanced gradation between
 literal obedience to each discrete order and anesthetic generalization of
 every input and output, but it also means the absence of the perpendicular
 range of feelings and emotions which non-machines use to modulate their
 sensitivity and motivation directly. The aesthetic dimension is missing
 entirely in any possible machine. It doesn't matter how clever or correct
 the algorithm, how plentiful the processing resources, no collection of
 representations can ever conjure an authentic presence which is
 proprietary, caring, feeling, understanding, motivated, inspired, etc.
 human or otherwise.


You are assuming humans are not machines, for which I have seen no
evidence.  If you have any, please share it.




 These qualities do not fall within the categories of blind execution or
 blind equivalence, they require personal investment. An algorithm is by
 definition impersonal


What about an algorithm that implements a person?


 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-04 Thread John Clark
Somebody wrote:

 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we don't
 have the right algorithm.


 True, and a powerful brain won't help you if you have no education or
memories.


  The central mystery of AI, in my opinion, is why on earth haven't we
 found a general learning algorithm yet.


There probably isn't a general learning algorithm there are just lots of
lesser learning algorithms, and more important than any algorithm or even
deduction in general are rules of thumb, like induction.

  John K Clark





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-04 Thread Telmo Menezes
On Wed, Apr 3, 2013 at 11:44 PM, meekerdb meeke...@verizon.net wrote:
 On 4/3/2013 2:44 PM, Jason Resch wrote:

 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we don't
 have the right algorithm. The central mystery of AI, in my opinion, is why
 on earth haven't we found a general learning algorithm yet. Either it's too
 complex for our monkey brains, or you're right that computation is not the
 whole story. I believe in the former, but I'm not sure, of course.
 Notice that I'm talking about generic intelligence, not consciousness, which
 I strongly believe to be two distinct phenomena.


 Then do you think there could be philosophical zombies?

Yes.

 How would you
 operationally test a robot to see whether it was (a) intelligent

I don't see intelligence as a binary property, but as relative to goals.
The classical answer for human-like intelligence is something like the
Turing test, but I don't like it. I don't think that a generic AI
should be measured by its ability to fool us into thinking
it's human. Instead I'd have to ask you first: what do you want the
robot for? Personally I would want robots to free Humanity from
unwanted labor. This is a high-level goal that requires what I
consider to be generic AI. Can it learn all sorts of tasks like
driving a car, working in a factory, following fuzzy requirements, etc.?

 (b)
 conscious?

I don't believe that such a test can exist. I don't even think we can
know if a glass of water is conscious.

 Brent









Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-04 Thread Telmo Menezes
On Thu, Apr 4, 2013 at 6:34 PM, John Clark johnkcl...@gmail.com wrote:
 Somebody wrote:

  You're making the same mistake as John Clark, confusing the physical
  computer with the algorithm. Powerful computers don't help us if we don't
  have the right algorithm.


  True, and a powerful brain won't help you if you have no education or
 memories.


  The central mystery of AI, in my opinion, is why on earth haven't we
  found a general learning algorithm yet.


 There probably isn't a general learning algorithm there are just lots of
 lesser learning algorithms,

Then you need a higher-level algorithm to decide which lesser
algorithm to use for each situation. It also has to be able to learn
the correct algorithm to use from its repertoire when new situations
are encountered. This ensemble of algorithms ends up being a general
learning algorithm.
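
A minimal sketch of such an ensemble, under the assumption that "deciding which lesser algorithm to use" can itself be driven by each algorithm's track record (the Selector class and the toy learners here are invented for illustration, not anyone's proposal in this thread):

```python
# Hedged sketch: a higher-level selector that learns which of several
# "lesser" learning algorithms to trust, by keeping a success rate per
# algorithm: a brief round-robin warm-up phase, then exploit the best.

class Selector:
    def __init__(self, learners, warmup_rounds=10):
        self.learners = learners                    # name -> predict fn
        self.score = {n: [0, 0] for n in learners}  # name -> [wins, tries]
        self.warmup = warmup_rounds * len(learners)

    def pick(self, t):
        names = list(self.learners)
        if t < self.warmup:                         # explore: round-robin
            return names[t % len(names)]
        return max(self.score, key=lambda n:        # exploit: best rate
                   self.score[n][0] / max(1, self.score[n][1]))

    def step(self, t, x, truth):
        name = self.pick(t)
        ok = self.learners[name](x) == truth
        self.score[name][0] += ok
        self.score[name][1] += 1

# Two toy "lesser algorithms"; only one matches the task y = 2x.
learners = {"double": lambda x: 2 * x, "square": lambda x: x * x}
sel = Selector(learners)
for t in range(200):
    sel.step(t, t, 2 * t)
print(sel.pick(200))  # -> double
```

After the warm-up, the ensemble routes every new situation to the lesser algorithm with the best record, which is one simple way the collection behaves as a single general learner.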

 and more important than any algorithm or even
 deduction in general are rules of thumb, like induction.

I don't understand the distinction. Rules of thumb and induction seem
algorithmic to me. Even instinct, assuming it comes from evolution
(another algorithm).

   John K Clark










Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-04 Thread meekerdb

On 4/4/2013 3:35 PM, Telmo Menezes wrote:

On Wed, Apr 3, 2013 at 11:44 PM, meekerdb meeke...@verizon.net wrote:

On 4/3/2013 2:44 PM, Jason Resch wrote:

You're making the same mistake as John Clark, confusing the physical
computer with the algorithm. Powerful computers don't help us if we don't
have the right algorithm. The central mystery of AI, in my opinion, is why
on earth haven't we found a general learning algorithm yet. Either it's too
complex for our monkey brains, or you're right that computation is not the
whole story. I believe in the former, but I'm not sure, of course.
Notice that I'm talking about generic intelligence, not consciousness, which
I strongly believe to be two distinct phenomena.


Then do you think there could be philosophical zombies?

Yes.


Could it be that some humans are zombies, or do you assume that to be a zombie would mean 
being physically different from a human being?







How would you
operationally test a robot to see whether it was (a) intelligent

I don't see intelligence as a binary property, but as relative to goals.
The classical answer for human-like intelligence is something like the
Turing test, but I don't like it. I don't think that a generic AI
should be measured by its ability to fool us into thinking
it's human. Instead I'd have to ask you first what do you want the
robot for? Personally I would want robots to free Humanity from
unwanted labor. This is a high-level goal that requires what I
consider to be generic AI. Can it learn all sorts of tasks like
driving a car, working in a factory, following fuzzy requirements, etc


Yes, I agree with that.  I'd say intelligence is being able to learn to be competent at 
many tasks, but there is no completely general intelligence.  I think for social beings it 
includes being able to explain, to give reasons, which implies some empathy.



etc?


(b)
conscious?

I don't believe that such a test can exist. I don't even think we can
know if a glass of water is conscious.


Have you ever been unconscious?

Brent





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-04 Thread Telmo Menezes
On Wed, Apr 3, 2013 at 10:44 PM, Jason Resch jasonre...@gmail.com wrote:



 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes te...@telmomenezes.com
 wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whatsons...@gmail.com
 wrote:



 Then shouldn't a powerful computer be able to quickly deduce the winning
 Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we don't
 have the right algorithm. The central mystery of AI, in my opinion, is why
 on earth haven't we found a general learning algorithm yet. Either it's too
 complex for our monkey brains, or you're right that computation is not the
 whole story. I believe in the former, but I'm not sure, of course.
 Notice that I'm talking about generic intelligence, not consciousness, which
 I strongly believe to be two distinct phenomena.



 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were computationally
 easy, biology wouldn't have evolved to use trillions of synapses.  The brain
 is very expensive metabolically (using 20 - 25% of the total body's energy,
 about 100 Watts).  If so many neurons were not needed to do what we do,
 natural selection would have selected those humans with fewer neurons and
 reduced food requirements.

Yes but one can imagine a situation where there is a simple
(sufficiently-)general purpose algorithm that needs some place where
to store memories and everything it has learned. In this case, we
could implement such an algorithm in one of our puny laptops and get
some results, and then just ride what's left of Moore's law all the
way to the singularity. We don't know of any such algorithm.
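To make the "simple algorithm plus a memory store" idea concrete, here is a minimal sketch in Python of one such narrow learner, tabular Q-learning. It is emphatically not the missing general learning algorithm the thread is asking about; the toy chain environment, parameter values, and all names below are invented purely for illustration.

```python
import random

random.seed(0)

N_STATES = 5            # states 0..4; state 4 is the goal
ACTIONS = (-1, +1)      # step left / step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# The "place to store memories": a plain table of state-action values.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the chain; reward 1 only on reaching the goal state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def greedy(state):
    """Best known action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(300):                      # episodes
    s = 0
    for _ in range(100):                  # step cap per episode
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r = step(s, a)
        # Q-learning update: nudge the stored value toward the observed return
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if s == N_STATES - 1:
            break

# After learning, the greedy policy steps right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

The entire "memory" here is the dictionary `Q`; giving it a bigger store does not make the learner any more general, which is exactly the gap the paragraph above points at.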


 Jason





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-04 Thread meekerdb

On 4/4/2013 3:50 PM, Telmo Menezes wrote:

On Wed, Apr 3, 2013 at 10:44 PM, Jason Resch jasonre...@gmail.com wrote:



On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes te...@telmomenezes.com
wrote:




On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whatsons...@gmail.com
wrote:



Then shouldn't a powerful computer be able to quickly deduce the winning
Arimaa mappings?


You're making the same mistake as John Clark, confusing the physical
computer with the algorithm. Powerful computers don't help us if we don't
have the right algorithm. The central mystery of AI, in my opinion, is why
on earth haven't we found a general learning algorithm yet. Either it's too
complex for our monkey brains, or you're right that computation is not the
whole story. I believe in the former, but I'm not sure, of course.
Notice that I'm talking about generic intelligence, not consciousness, which
I strongly believe to be two distinct phenomena.



Another point toward Telmo's suspicion that learning is complex:

If learning and thinking intelligently at a human level were computationally
easy, biology wouldn't have evolved to use trillions of synapses.  The brain
is very expensive metabolically (using 20 - 25% of the total body's energy,
about 100 Watts).  If so many neurons were not needed to do what we do,
natural selection would have selected those humans with fewer neurons and
reduced food requirements.

Yes but one can imagine a situation where there is a simple
(sufficiently-)general purpose algorithm that needs some place where
to store memories and everything it has learned. In this case, we
could implement such an algorithm in one of our puny laptops and get
some results, and then just ride what's left of Moore's law all the
way to the singularity. We don't know of any such algorithm.


But it doesn't follow from human brain complexity that no such algorithm exists.  
Evolution doesn't necessarily do things efficiently. Because it can't start over, it 
always depends on modification of what already works.  But I think there are other 
theoretical and evolutionary reasons that would limit the scope of general intelligence.  
Just to take an example, mathematics is very hard for a lot of people.  Mathematical 
thinking is not something that has been evolutionarily useful until recent times (and 
maybe not even now).


Brent





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread Jason Resch
On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes te...@telmomenezes.com wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whatsons...@gmail.com wrote:



 Then shouldn't a powerful computer be able to quickly deduce the winning
 Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we don't
 have the right algorithm. The central mystery of AI, in my opinion, is why
 on earth haven't we found a general learning algorithm yet. Either it's too
 complex for our monkey brains, or you're right that computation is not the
 whole story. I believe in the former, but I'm not sure, of course.
 Notice that I'm talking about generic intelligence, not consciousness,
 which I strongly believe to be two distinct phenomena.



Another point toward Telmo's suspicion that learning is complex:

If learning and thinking intelligently at a human level were
computationally easy, biology wouldn't have evolved to use trillions of
synapses.  The brain is very expensive metabolically (using 20 - 25% of the
total body's energy, about 100 Watts).  If so many neurons were not needed
to do what we do, natural selection would have selected those humans with
fewer neurons and reduced food requirements.

Jason





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread meekerdb

On 4/3/2013 2:44 PM, Jason Resch wrote:
You're making the same mistake as John Clark, confusing the physical computer with the 
algorithm. Powerful computers don't help us if we don't have the right algorithm. The 
central mystery of AI, in my opinion, is why on earth haven't we found a general 
learning algorithm yet. Either it's too complex for our monkey brains, or you're right 
that computation is not the whole story. I believe in the former, but I'm not sure, 
of course. Notice that I'm talking about generic intelligence, not consciousness, which 
I strongly believe to be two distinct phenomena.


Then do you think there could be philosophical zombies?  How would you operationally test 
a robot to see whether it was (a) intelligent (b) conscious?


Brent





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread Craig Weinberg


On Wednesday, April 3, 2013 5:44:24 PM UTC-4, Jason wrote:




 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes te...@telmomenezes.com wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whats...@gmail.com wrote:



 Then shouldn't a powerful computer be able to quickly deduce the winning 
 Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical 
 computer with the algorithm. Powerful computers don't help us if we don't 
 have the right algorithm. The central mystery of AI, in my opinion, is why 
 on earth haven't we found a general learning algorithm yet. Either it's too 
 complex for our monkey brains, or you're right that computation is not the 
 whole story. I believe in the former, but I'm not sure, of course. 
 Notice that I'm talking about generic intelligence, not consciousness, 
 which I strongly believe to be two distinct phenomena.
   


 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were 
 computationally easy, biology wouldn't have evolved to use trillions of 
 synapses.  The brain is very expensive metabolically (using 20 - 25% of the 
 total body's energy, about 100 Watts).  If so many neurons were not needed 
 to do what we do, natural selection would have selected those humans with 
 fewer neurons and reduced food requirements.


There's no question that human intelligence reflects an improved survival 
through learning, and that that is what makes the physiological investment 
pay off. What I question is why that improvement would entail awareness. 
There are a lot of neurons in our gut as well, and assimilation of 
nutrients is undoubtedly complex and important to survival, yet we are not 
compelled to insist that there must be some conscious experience to manage 
that intelligence. Learning is complex, but awareness itself is simple.

Craig
 


 Jason






Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread Jason Resch
Brent,

Your mail client is malfunctioning again; you are quoting something Telmo
wrote as coming from me.

My opinion on the matter of philosophical zombies is that they are
logically inconsistent.

Jason


On Wed, Apr 3, 2013 at 5:44 PM, meekerdb meeke...@verizon.net wrote:

 On 4/3/2013 2:44 PM, Jason Resch wrote:

 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we don't
 have the right algorithm. The central mystery of AI, in my opinion, is why
 on earth haven't we found a general learning algorithm yet. Either it's too
 complex for our monkey brains, or you're right that computation is not the
 whole story. I believe in the former, but I'm not sure, of course.
 Notice that I'm talking about generic intelligence, not consciousness,
 which I strongly believe to be two distinct phenomena.


 Then do you think there could be philosophical zombies?  How would you
 operationally test a robot to see whether it was (a) intelligent (b)
 conscious?

 Brent






Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread Jason Resch
On Wed, Apr 3, 2013 at 6:04 PM, Craig Weinberg whatsons...@gmail.com wrote:



 On Wednesday, April 3, 2013 5:44:24 PM UTC-4, Jason wrote:




 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes te...@telmomenezes.com wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whats...@gmail.com wrote:



 Then shouldn't a powerful computer be able to quickly deduce the
 winning Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we don't
 have the right algorithm. The central mystery of AI, in my opinion, is why
 on earth haven't we found a general learning algorithm yet. Either it's too
 complex for our monkey brains, or you're right that computation is not the
 whole story. I believe in the former, but I'm not sure, of course.
 Notice that I'm talking about generic intelligence, not consciousness,
 which I strongly believe to be two distinct phenomena.



 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were
 computationally easy, biology wouldn't have evolved to use trillions of
 synapses.  The brain is very expensive metabolically (using 20 - 25% of the
 total body's energy, about 100 Watts).  If so many neurons were not needed
 to do what we do, natural selection would have selected those humans with
 fewer neurons and reduced food requirements.


 There's no question that human intelligence reflects an improved survival
 through learning, and that that is what makes the physiological investment
 pay off.


Right, so my point is that we should not expect things like human
intelligence or human learning to be trivial or easy to get in robots, when
the human brain is the most complex thing we know, and can perform more
computations than even the largest supercomputers of today.



 What I question is why that improvement would entail awareness.


A human has to be aware to do the things it does, because zombies are not
possible.  Your examples of blindsight are not a disproof of the
separability of function and awareness, only examples of broken links in
communication (quite similar to split brain patients).


 There are a lot of neurons in our gut as well, and assimilation of
 nutrients is undoubtedly complex and important to survival, yet we are not
 compelled to insist that there must be some conscious experience to manage
 that intelligence. Learning is complex, but awareness itself is simple.


I think the nerves in the gut can manifest as awareness, such as cravings
for certain foods when the body realizes it is deficient in some particular
nutrient.  After all, what is the point of all those nerves if they have no
impact on behavior?

Jason





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread meekerdb

Hmm. You're right, I was intending to ask Telmo that question.

Brent

On 4/3/2013 5:52 PM, Jason Resch wrote:

Brent,

Your mail client is malfunctioning again; you are quoting something Telmo wrote as 
coming from me.


My opinion on the matter of philosophical zombies is that they are logically 
inconsistent.

Jason


On Wed, Apr 3, 2013 at 5:44 PM, meekerdb meeke...@verizon.net wrote:


On 4/3/2013 2:44 PM, Jason Resch wrote:

You're making the same mistake as John Clark, confusing the physical 
computer
with the algorithm. Powerful computers don't help us if we don't have 
the right
algorithm. The central mystery of AI, in my opinion, is why on earth 
haven't we
found a general learning algorithm yet. Either it's too complex for our 
monkey
brains, or you're right that computation is not the whole story. I 
believe in
 the former, but I'm not sure, of course. Notice that I'm talking 
about
generic intelligence, not consciousness, which I strongly believe to be 
two
distinct phenomena.


Then do you think there could be philosophical zombies?  How would you 
operationally
test a robot to see whether it was (a) intelligent (b) conscious?

Brent






Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread Craig Weinberg


On Wednesday, April 3, 2013 8:58:37 PM UTC-4, Jason wrote:




 On Wed, Apr 3, 2013 at 6:04 PM, Craig Weinberg whats...@gmail.com wrote:



 On Wednesday, April 3, 2013 5:44:24 PM UTC-4, Jason wrote:




 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes 
 te...@telmomenezes.com wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whats...@gmail.com wrote:



 Then shouldn't a powerful computer be able to quickly deduce the 
 winning Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical 
 computer with the algorithm. Powerful computers don't help us if we don't 
 have the right algorithm. The central mystery of AI, in my opinion, is why 
 on earth haven't we found a general learning algorithm yet. Either it's 
 too 
 complex for our monkey brains, or you're right that computation is not the 
 whole story. I believe in the former, but I'm not sure, of course. 
 Notice that I'm talking about generic intelligence, not consciousness, 
 which I strongly believe to be two distinct phenomena.
   


 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were 
 computationally easy, biology wouldn't have evolved to use trillions of 
 synapses.  The brain is very expensive metabolically (using 20 - 25% of the 
 total body's energy, about 100 Watts).  If so many neurons were not needed 
 to do what we do, natural selection would have selected those humans with 
 fewer neurons and reduced food requirements.


 There's no question that human intelligence reflects an improved survival 
 through learning, and that that is what makes the physiological investment 
 pay off.


 Right, so my point is that we should not expect things like human 
 intelligence or human learning to be trivial or easy to get in robots, when 
 the human brain is the most complex thing we know, and can perform more 
 computations than even the largest supercomputers of today.


Absolutely, but neither should we expect that complexity alone can make an 
assembly of inorganic parts into a subjective experience which compares to 
that of an animal. 


  

 What I question is why that improvement would entail awareness.


 A human has to be aware to do the things it does, because zombies are not 
 possible.


That's begging the question. Anything that is not exactly what we might 
assume it is would be a 'zombie' to some extent. A human does not have to 
be aware to do the things that it does, which is proved by blindsight, 
sleepwalking, brainwashing, etc. A human may, in reality, have to be aware 
to perform all of the functions that we do, but if comp were true, that 
would not be the case.
 

   Your examples of blindsight are not a disproof of the separability of 
 function and awareness,


I understand why you think that, but ultimately it is proof of exactly that.
 

 only examples of broken links in communication (quite similar to split 
 brain patients).


A broken link in communication which prevents you from being aware of the 
experience which is informing you is the same thing as function being 
separate from awareness. The end result is that it is not necessary to 
experience any conscious qualia to receive optical information. There is no 
difference functionally between a broken link in communication and 
separability of function and awareness. The awareness is broken in the 
dead link, but the function is retained, thus they are in fact separate.

 

 There are a lot of neurons in our gut as well, and assimilation of 
 nutrients is undoubtedly complex and important to survival, yet we are not 
 compelled to insist that there must be some conscious experience to manage 
 that intelligence. Learning is complex, but awareness itself is simple.


 I think the nerves in the gut can manifest as awareness, such as cravings 
 for certain foods when the body realizes it is deficient in some particular 
 nutrient.  After all, what is the point of all those nerves if they have no 
 impact on behavior?


Oh I agree, because my view is panexperiential. The gut doesn't have the 
kind of awareness that a human being has as a whole, because the other 
organs of the body are not as significant as the brain is to the organism. 
If we are going by the comp assumption though, then there is an implication 
that nothing has any awareness unless it is running a very sophisticated 
program.


Craig
 


 Jason
  
  
  







Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread Jason Resch
On Wed, Apr 3, 2013 at 9:54 PM, Craig Weinberg whatsons...@gmail.com wrote:



 On Wednesday, April 3, 2013 8:58:37 PM UTC-4, Jason wrote:




 On Wed, Apr 3, 2013 at 6:04 PM, Craig Weinberg whats...@gmail.com wrote:



 On Wednesday, April 3, 2013 5:44:24 PM UTC-4, Jason wrote:




 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes 
 te...@telmomenezes.com wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whats...@gmail.com wrote:



 Then shouldn't a powerful computer be able to quickly deduce the
 winning Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we don't
 have the right algorithm. The central mystery of AI, in my opinion, is why
 on earth haven't we found a general learning algorithm yet. Either it's 
 too
 complex for our monkey brains, or you're right that computation is not the
 whole story. I believe in the former, but I'm not sure, of course.
 Notice that I'm talking about generic intelligence, not consciousness,
 which I strongly believe to be two distinct phenomena.



 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were
 computationally easy, biology wouldn't have evolved to use trillions of
 synapses.  The brain is very expensive metabolically (using 20 - 25% of the
 total body's energy, about 100 Watts).  If so many neurons were not needed
 to do what we do, natural selection would have selected those humans with
 fewer neurons and reduced food requirements.


 There's no question that human intelligence reflects an improved
 survival through learning, and that that is what makes the physiological
 investment pay off.


 Right, so my point is that we should not expect things like human
 intelligence or human learning to be trivial or easy to get in robots, when
 the human brain is the most complex thing we know, and can perform more
 computations than even the largest supercomputers of today.


 Absolutely, but neither should we expect that complexity alone


I don't think anyone has argued that complexity alone is sufficient.



 can make an assembly of inorganic parts into a subjective experience which
 compares to that of an animal.


Both are made of the same four fundamental forces interacting with each
other, why should the number of protons in the nucleus of some atoms in
those organic molecules make any difference to the subject?  What led you
to choose the chemical elements as the origin of sense and feeling, as
opposed to higher level structures (neurology, circuits, etc.) or lower
level structures (quarks, gluons, electrons)?







 What I question is why that improvement would entail awareness.


 A human has to be aware to do the things it does, because zombies are not
 possible.


 That's begging the question.


Not quite, I provided an argument for my reasoning.  What is your
objection, that zombies are possible, or that zombies are not possible but
that doesn't mean something that in all ways appears conscious must be
conscious?



 Anything that is not exactly what we might assume it is would be a
 'zombie' to some extent. A human does not have to be aware to do the things
 that it does, which is proved by blindsight, sleepwalking, brainwashing,
 etc. A human may, in reality, have to be aware to perform all of the
 functions that we do, but if comp were true, that would not be the case.


   Your examples of blind sight are not a disproof of the separability of
 function and awareness,


 I understand why you think that, but ultimately it is proof of exactly
 that.


 only examples of broken links in communication (quite similar to split
 brain patients).


 A broken link in communication which prevents you from being aware of the
 experience which is informing you is the same thing as function being
 separate from awareness. The end result is that it is not necessary to
 experience any conscious qualia to receive optical information. There is no
 difference functionally between a broken link in communication and
 separability of function and awareness. The awareness is broken in the
 dead link, but the function is retained, thus they are in fact separate.


So you take the split brain patient's word for it that he didn't see the
word PAN flashed on the screen?
http://www.youtube.com/watch?v=ZMLzP1VCANot=1m50s

Perhaps his left hemisphere didn't see it, but his right hemisphere
certainly did, as his right hemisphere is able to draw a picture of that
pan (something in his brain saw it).

I can't experience life through your eyes right now because our brains are
disconnected, should you take my word for my assertion that you must not be
experiencing anything because the I in Jason's skull doesn't experience any
visual stimulus from Craig's eyes?






 There are a lot of neurons in our gut as well, and assimilation of
 nutrients is undoubtedly complex and important to survival, yet 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-02 Thread Bruno Marchal


On 02 Apr 2013, at 15:38, Craig Weinberg wrote:




On Tuesday, April 2, 2013 4:44:45 AM UTC-4, Bruno Marchal wrote:

On 01 Apr 2013, at 17:23, Craig Weinberg wrote:



On Monday, April 1, 2013 6:12:48 AM UTC-4, Bruno Marchal wrote:

On 31 Mar 2013, at 21:54, Craig Weinberg wrote:




On Sunday, March 31, 2013 10:59:22 AM UTC-4, Bruno Marchal wrote:

On 30 Mar 2013, at 14:19, Craig Weinberg wrote:

If, instead of a video screen and joystick, I had an arcade game  
fitted with a speaker and microphone, I could have another  
computer programmed to play PacMan on the first machine using only  
modem-like screeching to satisfy the logic of the PacMan game.  
Instead of graphic ghosts and visible maze, there would be  
squealing sound representing what would have been the pixels on a  
screen. There would be no difference for this equipment at all. As  
long as the representation was isomorphic, it would make no  
difference to either computer that there was no visual experience  
of PacMan at all but instead just one dimensional noise streaming  
back and forth between two machines.
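The isomorphism point above can be made concrete with a toy sketch (my addition, not from the thread): route the same game logic through two different but bijective encodings and observe that the resulting state is identical, so neither machine can tell "pixels" from "screeches".

```python
# Hypothetical sketch: the same game logic driven through two encodings.
# If the mapping between encodings is a bijection, the game state evolves
# identically, whatever the channel "looks like".

def step(state, move):
    """Toy game logic: a position on a line, clamped to 0..9."""
    return max(0, min(9, state + move))

# Two isomorphic encodings of moves: 'graphics' tokens and 'audio' tokens.
to_audio = {+1: "hi", -1: "lo", 0: "mid"}
from_audio = {v: k for k, v in to_audio.items()}

def play(moves, encode=None, decode=None):
    state = 5
    for m in moves:
        if encode:                    # route the move through the channel
            m = decode(encode(m))     # encode -> transmit -> decode
        state = step(state, m)
    return state

moves = [+1, +1, -1, 0, +1]
direct = play(moves)
via_audio = play(moves, encode=to_audio.get, decode=from_audio.get)
assert direct == via_audio            # the equipment can't tell the difference
```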


If you want me to believe that a machine could support an  
experience, then you have to explain why and how that is even a  
remote possibility without begging the question by smuggling in  
our own experience. If I do not agree that we are only machines,  
then I do not agree that our experience is evidence of machine  
experience.



I have never said it is an evidence. It is just by definition of  
comp, which is my working hypothesis. You are the one saying that  
comp is false.


That means that from the start, the only way to suggest that  
machines can't be conscious is to suggest that people can't be  
conscious,



This does not follow from what I said. All I say is that you make a
strong statement: machines cannot support thinking, but you don't
provide any argument, at least none that I can understand. You refer
to your experience, and indeed claim the right to do that in some
posts.


The argument that I have provided (repeatedly) is that machines are  
necessarily assembled as a configuration of forms in space according  
to an agenda or motivation which is foreign to the assembly. A  
living organism is not assembled but rather its growth and  
reproduction are autopoietic and native to the circumstance of its  
initiation as an event in time. My position is that all natural  
bodies are a reflection of some set of sensory-motor experiences on  
some level of description (speed, scale) and that forms and  
functions are driven by sense and motive. Artificial systems are not
biological bodies, since they are assembled rather than reproduced
from a single cell; therefore the quality of their sense experiences
is limited to the inorganic range. They lack the experience of caring
about their own survival, and consequently have no capacity for
empathy, warmth, understanding, or emotion, which are, IMO, the roots
of intelligence, and cannot be substituted by abstract rules.



I tend to agree with you, insofar as it looks like what the machines
already tell me. You are good in phenomenology, but bad in logic, and
that is sad, because it prevents you from appreciating that the
machines might agree with you, somehow. But now you will have to
convince those machines that they are machines, and some of them,
like you, will have serious difficulty swallowing the pill.


Comp shares this with the Gödelian sentence: it asserts, at some
level, its own non-believability. The more you understand comp, the
less you can believe in it. That is normal.



which would be sophistry. I'm not playing that game though. My  
interest is in understanding the nature of consciousness and its  
relation to physics and information. What I have come up with  
explains exactly why machine functions cannot be conflated with  
experience, and why presentations cannot arise from representations  
alone.







If a machine works without an experience, why invent any such  
thing as experience?



If you accept the antic theory of knowledge, then machines, once  
above the Löbian complexity threshold, cannot not have experience.


What's the antic theory of knowledge?


That knowing p → p; that knowing p → knowing knowing p; that
knowing (p → q) → (knowing p → knowing q); the modus ponens rule;
and the necessitation rule: from A, infer knowing A. It is the
modal logic known as S4.
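As a concrete illustration (my addition, assuming the standard Kripke semantics for S4), the axiom schemas above can be checked by brute force on a small frame; they are valid precisely when the accessibility relation is reflexive and transitive:

```python
# Sketch: read "knowing p" as box-p, true at world w iff p holds at every
# world accessible from w. The S4 axioms
#   T: Kp -> p,   4: Kp -> KKp,   K: K(p -> q) -> (Kp -> Kq)
# are then valid on any reflexive, transitive frame, as checked below.

WORLDS = [0, 1, 2]
# A reflexive and transitive accessibility relation (a preorder).
R = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 2), (0, 2)}

def subsets(xs):
    out = [set()]
    for x in xs:
        out += [s | {x} for s in out]
    return out

def K(p):
    """Worlds where 'knowing p' holds: p is true at every accessible world."""
    return {w for w in WORLDS if all(v in p for (u, v) in R if u == w)}

def implies(a, b):
    return {w for w in WORLDS if w not in a or w in b}

def valid(p):
    return p == set(WORLDS)

ok = all(
    valid(implies(K(p), p))                                    # T
    and valid(implies(K(p), K(K(p))))                          # 4
    and valid(implies(K(implies(p, q)), implies(K(p), K(q))))  # K
    for p in subsets(WORLDS) for q in subsets(WORLDS)
)
print(ok)  # True
```

Dropping reflexivity from R makes axiom T fail; dropping transitivity makes axiom 4 fail.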


What relates this to a complexity threshold and the possibility of  
experience?



It is a bit technical, but in a nutshell it means that such machines
can refute Socrates' refutation of Theaetetus' proposal to define
knowledge as true opinion.
Church's thesis rehabilitates Pythagoras, and Gödel's incompleteness
rehabilitates the Theaetetus theory of knowledge, by giving the
classical theory of knowledge with one more axiom, to be precise.












I tried to look it up but nobody on the internet 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-01 Thread Bruno Marchal


On 31 Mar 2013, at 21:54, Craig Weinberg wrote:




On Sunday, March 31, 2013 10:59:22 AM UTC-4, Bruno Marchal wrote:

On 30 Mar 2013, at 14:19, Craig Weinberg wrote:

If, instead of a video screen and joystick, I had an arcade game  
fitted with a speaker and microphone, I could have another computer  
programmed to play PacMan on the first machine using only modem-like  
screeching to satisfy the logic of the PacMan game. Instead of  
graphic ghosts and visible maze, there would be squealing sound  
representing what would have been the pixels on a screen. There  
would be no difference for this equipment at all. As long as the  
representation was isomorphic, it would make no difference to either  
computer that there was no visual experience of PacMan at all but  
instead just one dimensional noise streaming back and forth between  
two machines.


If you want me to believe that a machine could support an  
experience, then you have to explain why and how that is even a  
remote possibility without begging the question by smuggling in our  
own experience. If I do not agree that we are only machines, then I  
do not agree that our experience is evidence of machine experience.



I have never said it is an evidence. It is just by definition of comp,  
which is my working hypothesis. You are the one saying that comp is  
false.






If a machine works without an experience, why invent any such thing  
as experience?



If you accept the antic theory of knowledge, then machines, once above  
the Löbian complexity threshold, cannot not have experience.






If Donkey Kong works just as well without anyone seeing him, then  
why have a modem sound either? Just connect the two machines directly.






The pathetic fallacy is not a logical fallacy.

No, it's more important than logic.


I think the pathetic fallacy is, as a fallacy, itself a pathetic
fallacy, from which I can't conclude anything.


I understand that is your position, but I think that is a radically  
theoretical view which doesn't apply to the universe in which we  
actually live. In this universe, not everything that can be  
programmed to smile on command has emotions.


We cannot program emotion. We can program 'help yourself', or
'multiply yourself'. Emotions have simple roots, but quickly get
highly entangled, in a non-predictable way, with the intensional
variants of self-reference when emerging in a long history.












You just say that you believe that comp is false, but machines
naturally have that belief, as comp is provably counter-intuitive.


That's just comp feeding back on its own confirmation bias. Comp is  
a machine which can only see itself. It's the inevitable inversion  
meme which arises from mistaking forms and functions for reality  
rather than the capacity to project and receive them.


Yes, comp feeds back in this way. You don't like that, apparently,
but that's not an argument. I am not defending comp; I am just
criticizing the reasons you provide for thinking that comp is false.



I have repeatedly provided a whole list of reasons, but your
response does not really offer any criticism other than that you
don't think my view has any merit.



On the contrary. I do see merit in some serious non-comp theories. I
am criticizing only your philosophy/opinion, and your non-valid
defense of it: that it is obvious that machines cannot support
persons.





You don't explain why though.



I am the one asking why. You are saying that a theory is wrong, and
I just show that your reasoning is not valid. It only shows that it
is hard for a person to believe she is locally supported by a
machine. But 'hard to believe' is not an argument.





There is no specific challenge to any of the things I mentioned. I
say pathetic fallacy, you say you don't respect it. I say the Map is
not the Territory and the Menu is not the Meal, but you don't seem
to accept that these are comprehensible ideas.


They just comfort opinion, without making a point.



It all seems to evaporate into a smoke screen and impatience. You
don't take the argument seriously and always fall back on my
ignorance of mathematical theory. My arguments question the
foundations of math itself, though.


That makes your point even weaker. It is up to you either to abandon
your strong assertion that comp is false, or to provide an argument.
You can study non-comp without that assertion. I respect and
encourage alternatives to comp. But you say that comp is false, and
only explain why you believe so, without showing a contradiction in
comp.

I have no tricks or invalid arguments that I know of, and I don't  
see that I am being careless at all.


Which probably means that you should learn a bit of argumentation,
to be frank. Or just assume your theory, and be cautious about the
theories of other people.


I'm only interested in uncovering the truth about consciousness.  
What other people think and do is none of my business.


You are asserting without argument that a 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-31 Thread Bruno Marchal


On 30 Mar 2013, at 13:58, Telmo Menezes wrote:





On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg  
whatsons...@gmail.com wrote:



On Thursday, March 28, 2013 5:52:04 AM UTC-4, telmo_menezes wrote:



On Wed, Mar 27, 2013 at 6:29 PM, Craig Weinberg whats...@gmail.com  
wrote:



On Wednesday, March 27, 2013 1:03:27 PM UTC-4, telmo_menezes wrote:
Hi Craig,


On Wed, Mar 27, 2013 at 4:03 PM, Craig Weinberg whats...@gmail.com  
wrote:

From the Quora 
http://www.quora.com/Board-Games/What-are-some-fun-games-to-play-on-an-8x8-Checkerboard-besides-chess-checkers

This is interesting because I think it shows the weakness of the
one-dimensional view of intelligence as computation. Whether a
program can be designed to win or not is beside the point,


That's not really fair, is it?

Why not?

How else can I counter your argument against intelligence as  
computation if I am not allowed to use computation? My example would  
not prove that it's what the brain does, but it would prove that it  
can be. You are arguing that it cannot be.


I'm arguing that a screw is not the same thing as a nail because  
when you hammer a screw it doesn't go in as easily as a nail and  
when you use a screwdriver on a nail it doesn't go in at all.


Ok.

Sometimes the hammer is a better tool and sometimes the driver is.  
As humans, we have a great hammer and a decent screwdriver. A  
computer can't hammer anything, but it has a power screwdriver with  
a potentially infinite set of tips.


Ok, but if I understand your ideas, you're claiming that the hammer  
is also the fundamental stuff that reality is made of. Sorry if I'm  
misrepresenting what you're saying. If I'm not, I don't understand  
why computers can't have the hammer.






as it is the difference between this game and chess which hints at  
the differences between bottom-up mechanism and top-down  
intentionality


I see what you're saying but I disagree. It just highlights the weak
points of tree-search approaches like min-max. What I gather from
what happens when one plays Arimaa (or Go): due to combinatorial
explosion, players (even humans) play quite far away from the
perfect game(s). The way we deal with combinatorial explosion is by
mapping the game into something more abstract.
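A back-of-the-envelope sketch (my addition; the branching factors are commonly cited approximations, not figures from the thread) shows why combinatorial explosion bites so much harder in Arimaa than in chess:

```python
# Assumptions: a uniform game tree and rough per-turn branching factors
# (~35 for chess, ~17,000 for Arimaa). Node counts grow as b**d, so the
# same search budget buys far fewer plies of lookahead in Arimaa.

def nodes(branching, depth):
    """Number of leaves of a uniform game tree searched to `depth` plies."""
    return branching ** depth

BUDGET = 10 ** 9  # say, a billion node evaluations

depths = {}
for game, b in [("chess", 35), ("arimaa", 17000)]:
    depth = 0
    while nodes(b, depth + 1) <= BUDGET:  # deepest full search within budget
        depth += 1
    depths[game] = depth

print(depths)  # {'chess': 5, 'arimaa': 2}
```

Under these assumptions, the same billion-node budget reaches five plies in chess but only two in Arimaa, which is why exhaustive min-max must give way to more abstract evaluation.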


How do you know that any such mapping is going on? It seems like  
begging the question.


I don't know. I have a strong intuition in its favor, for a few
reasons, scientific and otherwise.


Have you tried thinking about it another way? Where does 'mapping'  
come from? Can you begin mapping without already having a map?


Yes, I think I begin with a map based on previous experiences and
then improve it as I discover its weaknesses. I think the original
map came from brute-force experimentation while my brain was
developing in my early months of life. But this is just wild
guessing, of course.



The non-scientific one is introspection. I try to observe my own  
thought process and I think I use such mappings.


Maybe you do. Maybe a lot of people do. I don't think that I do  
though. I think that a game can be played directly without  
abstracting it into another game.


Ok, I believe you, but I don't have the same experience. My wife
does. She works in a creative field and she is very intuitive, with
the typical aversion to math. She can beat me at chess quite easily,
without appearing to resort to conscious strategic thinking. She
describes it as doing what feels right.



The scientific reason is that this type of approach has been used  
successfully to tackle AI problems that could not be solved with  
classical search algorithms.


I don't doubt that this game is likely to be solved eventually,
maybe even soon, but the fact remains that it exposes some
fundamentally different aesthetics between computation and
intelligence. This is impressive to me because any game is already
hugely biased in favor of computation. A game is ideal for reduction
to a set of logical rules; its turn play is already a recursive
enumeration. A game is already a computer program. Even so, we can
see that it is possible to use a game to bypass computational values
- of generic, unconscious repetition - and hint at something
completely different and opposite.
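The phrase "a game is already a computer program" can be taken literally. Here is a minimal sketch (my own made-up toy game, not from the thread) of a game as a set of logical rules whose turn play is a recursive enumeration:

```python
# Sketch under assumptions: a game reduced to rules plus a recursive
# enumeration of turn play. The toy game: a counter starts at 0, players
# 0 and 1 alternately add 1 or 2, and whoever brings it to exactly 4 wins.

def moves(n):
    """Legal additions from counter value n."""
    return [m for m in (1, 2) if n + m <= 4]

def winner(n, player):
    """Recursively enumerate the game tree; return the index of the
    winner under perfect play, with `player` to move at counter n."""
    if n == 4:
        return 1 - player          # the previous mover reached 4 and won
    outcomes = [winner(n + m, 1 - player) for m in moves(n)]
    return player if player in outcomes else 1 - player

print(winner(0, 0))  # 0: the first player wins the race to 4
```

The rules and the enumeration are the whole game; nothing in this description requires a board, pieces, or any particular presentation.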



Put another way, if there were top-down non-computational effort  
going into the game play, why would it look any different than what  
we see?


Our brain seems to be quite good at generating such mappings. We do
it with chess too, I'm sure. Notice that, when two humans play
Arimaa, both can count on each other's inability to play close to
the perfect game. As with games of incomplete information, like
Poker, part of it is modelling the opponent. Perhaps not
surprisingly, artificial neural networks are quite good at producing
useful mappings of this sort, and at predicting behaviours with
incomplete information. Great progress has been achieved lately with
deep 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-31 Thread Bruno Marchal


On 30 Mar 2013, at 14:19, Craig Weinberg wrote:




On Saturday, March 30, 2013 7:08:25 AM UTC-4, Bruno Marchal wrote:

On 29 Mar 2013, at 13:31, Craig Weinberg wrote:




On Friday, March 29, 2013 6:28:02 AM UTC-4, Bruno Marchal wrote:

On 28 Mar 2013, at 20:36, Craig Weinberg wrote:




On Thursday, March 28, 2013 1:29:19 PM UTC-4, Bruno Marchal wrote:

On 28 Mar 2013, at 13:23, Craig Weinberg wrote:


Strong AI may not really want to understand consciousness


This is a rhetorical trick. You put intention in the mind of  
others. You can't do that.


You can say something like: 'I read some strong AI proponents and
they dismiss consciousness', and cite them, but you can't make
affirmative statements about a large class of people.


That's interesting because it seems like you make statements about  
large classes of UMs frequently. You say that they have no answers  
on the deep questions, or that they don't see themselves as  
machines. What if Strong AI is a program...a meme or spandrel?


What if the soul is in the air, and that each time you cut your  
hair you become a zombie?


Then people would avoid cutting their hair I would imagine. Unless  
they were suffering. But seriously, what makes you think that  
Strong AI is not itself a rogue machine, implanted in minds to  
satisfy some purely quantitative inevitability?









You are coherent because you search for a physical theory of
consciousness, and that is indeed incompatible with comp.


I don't seek a physical theory of consciousness exactly, I more  
seek a sensory-motive theory of physics.


I will wait for serious progresses.







But your arguments against comp are invalid, beg the question, and
contain numerous tricks like the above. Be more careful, please.


That sounds like another 'magician's dismissal' to me. I beg no  
more question than comp does.


You miss the key point. There is no begging when you make clear what
you assume. You can assume comp, as you can assume non-comp. But you
do something quite different; you pretend that comp is false. So we
ask for an argument, and there you beg the question, by assuming
throughout your argument that comp must be false, and that is
begging the question.


Comp is false not because I want it to be or assume it is, but
because I understand that experience through time can be the only
fundamental principle, while bodies across space are derived. I have
laid out these reasons for this many times - how easy it is to  
succumb to the pathetic fallacy, how unlikely it is for experience  
to have any possible utility for arithmetic, how absent any sign of  
personality is in machines, how we can easily demonstrate  
information processing without particular qualia arising, etc.  
These are just off the top of my head. Anywhere you look in reality  
you can find huge gaping holes in Comp's assumptions if you choose  
to look, but you aren't going to see them if you are only listening  
to the echo chamber of Comp itself. Indeed, if we limit ourselves  
to only mathematical logic to look at mathematical logic, we are  
not going to notice that the entire universe of presentation is  
missing. Comp has a presentation problem, and it is not going to go  
away.




Well if you *understand* that time is fundamental, then comp is  
false for you.


I understand that *experience* (through 'time') is fundamental, only  
because no other option ultimately makes as much sense.


OK, but you never explain why. Of course experiences are very
important, but why could a machine not support one, when it can be
shown that machines will develop talk about their experience and, if
introspective enough, be confronted with the same feeling that it
has to be fundamental? And they are correct, from the first-person
view.





The pathetic fallacy is not a logical fallacy.

No, it's more important than logic.


I think the pathetic fallacy is, as a fallacy, itself a pathetic
fallacy, from which I can't conclude anything.






You just say that you believe that comp is false, but machines
naturally have that belief, as comp is provably counter-intuitive.


That's just comp feeding back on its own confirmation bias. Comp is  
a machine which can only see itself. It's the inevitable inversion  
meme which arises from mistaking forms and functions for reality  
rather than the capacity to project and receive them.


Yes, comp feeds back in this way. You don't like that, apparently,
but that's not an argument. I am not defending comp; I am just
criticizing the reasons you provide for thinking that comp is false.

I have no tricks or invalid arguments that I know of, and I don't  
see that I am being careless at all.


Which probably means that you should learn a bit of argumentation,
to be frank. Or just assume your theory, and be cautious about the
theories of other people.


I'm only interested in uncovering the truth about consciousness.  
What other people think and do is none of my 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-31 Thread Telmo Menezes
On Sun, Mar 31, 2013 at 4:32 PM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 30 Mar 2013, at 13:58, Telmo Menezes wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whatsons...@gmail.comwrote:



 On Thursday, March 28, 2013 5:52:04 AM UTC-4, telmo_menezes wrote:




 On Wed, Mar 27, 2013 at 6:29 PM, Craig Weinberg whats...@gmail.comwrote:



 On Wednesday, March 27, 2013 1:03:27 PM UTC-4, telmo_menezes wrote:

 Hi Craig,


 On Wed, Mar 27, 2013 at 4:03 PM, Craig Weinberg whats...@gmail.comwrote:

 From the Quora http://www.quora.com/Board-Games/What-are-some-fun-games-to-play-on-an-8x8-Checkerboard-besides-chess-checkers

 This is interesting because I think it shows the weakness of the
 one-dimensional view of intelligence as computation. Whether a program 
 can
 be designed to win or not is beside the point,


 That's not really fair, is it?


 Why not?


 How else can I counter your argument against intelligence as computation
 if I am not allowed to use computation? My example would not prove that
 it's what the brain does, but it would prove that it can be. You are
 arguing that it cannot be.


 I'm arguing that a screw is not the same thing as a nail because when you
 hammer a screw it doesn't go in as easily as a nail and when you use a
 screwdriver on a nail it doesn't go in at all.


 Ok.


 Sometimes the hammer is a better tool and sometimes the driver is. As
 humans, we have a great hammer and a decent screwdriver. A computer can't
 hammer anything, but it has a power screwdriver with a potentially infinite
 set of tips.


 Ok, but if I understand your ideas, you're claiming that the hammer is
 also the fundamental stuff that reality is made of. Sorry if I'm
 misrepresenting what you're saying. If I'm not, I don't understand why
 computers can't have the hammer.










 as it is the difference between this game and chess which hints at
 the differences between bottom-up mechanism and top-down intentionality


 I see what you're saying but I disagree. It just highlights the weak
 points of tree-search approaches like min-max. What I gather from what
 happens when one plays Arimaa (or Go): due to combinatorial explosion,
 players (even human) play quite far away from the perfect game(s). The way
 we deal with combinatorial explosion is by mapping the game into something
 more abstract.


 How do you know that any such mapping is going on? It seems like
 begging the question.


 I don't know. I have a strong intuition in its favor, for a few reasons,
 scientific and otherwise.


 Have you tried thinking about it another way? Where does 'mapping' come
 from? Can you begin mapping without already having a map?


 Yes, I think I begin with a map based on previous experiences and then
 improve it as I discover its weaknesses. I think the original map came
 from brute-force experimentation while my brain was developing in my early
 months of life. But this is just wild guessing, of course.




  The non-scientific one is introspection. I try to observe my own
 thought process and I think I use such mappings.


 Maybe you do. Maybe a lot of people do. I don't think that I do though. I
 think that a game can be played directly without abstracting it into
 another game.


 Ok, I believe you but I don't have the same experience. My wife does. She
 works in a creative field and she is very intuitive, with the typical
 aversion for math. She can beat me at chess quite easily, without appearing
 to resort to conscious strategic thinking. She describes it as doing what
 feels right.



 The scientific reason is that this type of approach has been
 used successfully to tackle AI problems that could not be solved with
 classical search algorithms.


 I don't doubt that this game is likely to be solved eventually, maybe
 even soon, but the fact remains that it exposes some fundamentally
 different aesthetics between computation and intelligence. This is
 impressive to me because any game is already hugely biased in favor of
 computation. A game is ideal for reduction to a set of logical rules; its
 turn play is already a recursive enumeration. A game is already a computer
 program. Even so, we can see that it is possible to use a game to bypass
 computational values - of generic, unconscious repetition, and hint at
 something completely different and opposite.




 Put another way, if there were top-down non-computational effort going
 into the game play, why would it look any different than what we see?


 Our brain seems to be quite good at generating such mappings. We do it
 with chess too, I'm sure. Notice that, when two humans play Arimaa, both
 can count on each other's inabilities to play close to the perfect game. 
 As
 with games with incomplete information, like Poker, part of it is 
 modelling
 the opponent. Perhaps not surprisingly, artificial neural networks 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-31 Thread Craig Weinberg


On Sunday, March 31, 2013 10:59:22 AM UTC-4, Bruno Marchal wrote:


 On 30 Mar 2013, at 14:19, Craig Weinberg wrote:



 On Saturday, March 30, 2013 7:08:25 AM UTC-4, Bruno Marchal wrote:


 On 29 Mar 2013, at 13:31, Craig Weinberg wrote:



 On Friday, March 29, 2013 6:28:02 AM UTC-4, Bruno Marchal wrote:


 On 28 Mar 2013, at 20:36, Craig Weinberg wrote:



 On Thursday, March 28, 2013 1:29:19 PM UTC-4, Bruno Marchal wrote:


 On 28 Mar 2013, at 13:23, Craig Weinberg wrote:

 Strong AI may not really want to understand consciousness


 This is a rhetorical trick. You put intention in the mind of others. 
 You can't do that. 

 You can say something like,: I read some strong AI proponents and they 
 dismiss consciousness, ..., and cite them, but you can't make affirmative 
 statement on a large class of people.


 That's interesting because it seems like you make statements about large 
 classes of UMs frequently. You say that they have no answers on the deep 
 questions, or that they don't see themselves as machines. What if Strong AI 
 is a program...a meme or spandrel?


 What if the soul is in the air, and that each time you cut your hair you 
 become a zombie? 


 Then people would avoid cutting their hair I would imagine. Unless they 
 were suffering. But seriously, what makes you think that Strong AI is not 
 itself a rogue machine, implanted in minds to satisfy some purely 
 quantitative inevitability?
  







 You are coherent because you search for a physical theory of consciousness, 
 and that is indeed incompatible with comp.


 I don't seek a physical theory of consciousness exactly, I more seek a 
 sensory-motive theory of physics.


 I will wait for serious progresses.




  


 But your arguments against comp are invalid, beg the question, and 
 contain numerous tricks like the above. Be more careful, please.


 That sounds like another 'magician's dismissal' to me. I beg no more 
 question than comp does.


 You miss the key point. There is no begging when making clear what you 
 assume. You can assume comp, as you can assume non-comp. But you do 
 something quite different; you pretend that comp is false. So we ask for an 
 argument, and there you beg the question, by using all the time that comp 
 must be false in your argument, and that is begging the question.


 Comp is false not because I want it to be or assume it is, but because I 
 understand that experience through time can be the only fundamental 
 principle, and bodies across space is derived. I have laid out these 
 reasons for this many times - how easy it is to succumb to the pathetic 
 fallacy, how unlikely it is for experience to have any possible utility for 
 arithmetic, how absent any sign of personality is in machines, how we can 
 easily demonstrate information processing without particular qualia 
 arising, etc. These are just off the top of my head. Anywhere you look in 
 reality you can find huge gaping holes in Comp's assumptions if you choose 
 to look, but you aren't going to see them if you are only listening to the 
 echo chamber of Comp itself. Indeed, if we limit ourselves to only 
 mathematical logic to look at mathematical logic, we are not going to 
 notice that the entire universe of presentation is missing. Comp has a 
 presentation problem, and it is not going to go away.


 Well if you *understand* that time is fundamental, then comp is false for 
 you. 


 I understand that *experience* (through 'time') is fundamental, only 
 because no other option ultimately makes as much sense.


 OK, but you never explain why. Of course experiences are very important, 
 but why could a machine not support one, when it can be shown that machines 
 will develop talk about their experience and, if introspective enough, be 
 confronted with the same feeling that it has to be fundamental? And they 
 are correct, from the first-person view.


If, instead of a video screen and joystick, I had an arcade game fitted 
with a speaker and microphone, I could have another computer programmed to 
play PacMan on the first machine using only modem-like screeching to 
satisfy the logic of the PacMan game. Instead of graphic ghosts and visible 
maze, there would be squealing sound representing what would have been the 
pixels on a screen. There would be no difference for this equipment at all. 
As long as the representation was isomorphic, it would make no difference 
to either computer that there was no visual experience of PacMan at all but 
instead just one dimensional noise streaming back and forth between two 
machines.

If you want me to believe that a machine could support an experience, then 
you have to explain why and how that is even a remote possibility without 
begging the question by smuggling in our own experience. If I do not agree 
that we are only machines, then I do not agree that our experience is 
evidence of machine experience.

If a machine works without an experience, why invent any such thing as 
experience? If Donkey 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-30 Thread Bruno Marchal


On 29 Mar 2013, at 13:31, Craig Weinberg wrote:




On Friday, March 29, 2013 6:28:02 AM UTC-4, Bruno Marchal wrote:

On 28 Mar 2013, at 20:36, Craig Weinberg wrote:




On Thursday, March 28, 2013 1:29:19 PM UTC-4, Bruno Marchal wrote:

On 28 Mar 2013, at 13:23, Craig Weinberg wrote:


Strong AI may not really want to understand consciousness


This is a rhetorical trick. You put intention in the mind of  
others. You can't do that.


You can say something like: 'I read some strong AI proponents and
they dismiss consciousness', and cite them, but you can't make
affirmative statements about a large class of people.


That's interesting because it seems like you make statements about  
large classes of UMs frequently. You say that they have no answers  
on the deep questions, or that they don't see themselves as  
machines. What if Strong AI is a program...a meme or spandrel?


What if the soul is in the air, and that each time you cut your hair  
you become a zombie?


Then people would avoid cutting their hair I would imagine. Unless  
they were suffering. But seriously, what makes you think that Strong  
AI is not itself a rogue machine, implanted in minds to satisfy some  
purely quantitative inevitability?









You are coherent because you search for a physical theory of
consciousness, and that is indeed incompatible with comp.


I don't seek a physical theory of consciousness exactly, I more  
seek a sensory-motive theory of physics.


I will wait for serious progresses.







But your arguments against comp are invalid, beg the question, and
contain numerous tricks like the above. Be more careful, please.


That sounds like another 'magician's dismissal' to me. I beg no  
more question than comp does.


You miss the key point. There is no begging when you make clear what
you assume. You can assume comp, as you can assume non-comp. But you
do something quite different; you pretend that comp is false. So we
ask for an argument, and there you beg the question, by assuming
throughout your argument that comp must be false, and that is
begging the question.


Comp is false not because I want it to be or assume it is, but  
because I understand that experience through time can be the only  
fundamental principle, and bodies across space are derived. I have  
laid out these reasons for this many times - how easy it is to  
succumb to the pathetic fallacy, how unlikely it is for experience  
to have any possible utility for arithmetic, how absent any sign of  
personality is in machines, how we can easily demonstrate  
information processing without particular qualia arising, etc. These  
are just off the top of my head. Anywhere you look in reality you  
can find huge gaping holes in Comp's assumptions if you choose to  
look, but you aren't going to see them if you are only listening to  
the echo chamber of Comp itself. Indeed, if we limit ourselves to  
only mathematical logic to look at mathematical logic, we are not  
going to notice that the entire universe of presentation is missing.  
Comp has a presentation problem, and it is not going to go away.




Well if you *understand* that time is fundamental, then comp is false  
for you.

The pathetic fallacy is not a logical fallacy.
You just say that you believe that comp is false, but machines  
naturally have that belief, as comp is provably counter-intuitive.












I have no tricks or invalid arguments that I know of, and I don't  
see that I am being careless at all.


Which probably means that you should learn a bit of argumentation,  
to be frank. Or just assume your theory and be cautious about the  
theories of other people.


I'm only interested in uncovering the truth about consciousness.  
What other people think and do is none of my business.


You are asserting without argument that a theory is incorrect, and you  
do this by assuming that it cannot do this or that, with no argument  
but your personal feeling. I just explain to you that machines might  
already have that feeling, as it seems when we listen to them.


Bruno







Craig


Bruno







Craig


Bruno



http://iridia.ulb.ac.be/~marchal/




--
You received this message because you are subscribed to the Google  
Groups Everything List group.
To unsubscribe from this group and stop receiving emails from it,  
send an email to everything-li...@googlegroups.com.

To post to this group, send email to everyth...@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list?hl=en 
.

For more options, visit https://groups.google.com/groups/opt_out.




http://iridia.ulb.ac.be/~marchal/





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-30 Thread Telmo Menezes
On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whatsons...@gmail.comwrote:



 On Thursday, March 28, 2013 5:52:04 AM UTC-4, telmo_menezes wrote:




 On Wed, Mar 27, 2013 at 6:29 PM, Craig Weinberg whats...@gmail.comwrote:



 On Wednesday, March 27, 2013 1:03:27 PM UTC-4, telmo_menezes wrote:

 Hi Craig,


 On Wed, Mar 27, 2013 at 4:03 PM, Craig Weinberg whats...@gmail.comwrote:

 From the Quora http://www.quora.com/Board-**Gam**
 es/What-are-some-fun-games-**to-**play-on-an-8x8-**Checkerboard-**
 besides-chess-**checkershttp://www.quora.com/Board-Games/What-are-some-fun-games-to-play-on-an-8x8-Checkerboard-besides-chess-checkers

 This is interesting because I think it shows the weakness of the
 one-dimensional view of intelligence as computation. Whether a program can
 be designed to win or not is beside the point,


 That's not really fair, is it?


 Why not?


 How else can I counter your argument against intelligence as computation
 if I am not allowed to use computation? My example would not prove that
 it's what the brain does, but it would prove that it can be. You are
 arguing that it cannot be.


 I'm arguing that a screw is not the same thing as a nail because when you
 hammer a screw it doesn't go in as easily as a nail and when you use a
 screwdriver on a nail it doesn't go in at all.


Ok.


 Sometimes the hammer is a better tool and sometimes the driver is. As
 humans, we have a great hammer and a decent screwdriver. A computer can't
 hammer anything, but it has a power screwdriver with a potentially infinite
 set of tips.


Ok, but if I understand your ideas, you're claiming that the hammer is also
the fundamental stuff that reality is made of. Sorry if I'm misrepresenting
what you're saying. If I'm not, I don't understand why computers can't have
the hammer.










 as it is the difference between this game and chess which hints at the
 differences between bottom-up mechanism and top-down intentionality


 I see what you're saying but I disagree. It just highlights the weak
 points of tree-search approaches like min-max. What I gather from what
 happens when one plays Arimaa (or Go): due to combinatorial explosion,
 players (even human) play quite far away from the perfect game(s). The way
 we deal with combinatorial explosion is by mapping the game into something
 more abstract.
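[Editorial aside, not part of the thread: Telmo's point about tree search and combinatorial explosion can be sketched concretely. Below is a plain minimax over an abstract game plus a node count showing why full-width search collapses under Arimaa-scale branching. The branching factors (~35 for chess, ~17,000 for Arimaa) are rough estimates, and the helper functions are illustrative assumptions.]

```python
# Illustrative sketch: plain minimax (no pruning) over an abstract game,
# plus node counts showing why full-width tree search collapses when the
# branching factor is Arimaa-sized. All figures are rough assumptions.

def minimax(state, depth, maximizing, children, value):
    # children(state) -> list of successor states
    # value(state)    -> static evaluation of a leaf
    kids = children(state)
    if depth == 0 or not kids:
        return value(state)
    scores = [minimax(k, depth - 1, not maximizing, children, value)
              for k in kids]
    return max(scores) if maximizing else min(scores)

def nodes(branching, depth):
    # Size of a uniform game tree: b**0 + b**1 + ... + b**depth
    return sum(branching ** i for i in range(depth + 1))

# Chess-like branching (~35) vs Arimaa-like (~17,000), four plies deep:
print(nodes(35, 4))      # 1,544,761 -- tractable
print(nodes(17000, 4))   # ~8.4e16 -- hopeless for brute-force search
```

[The asymmetry is the whole point: the algorithm is identical in both cases; only the branching factor changes, and it alone decides whether exhaustive search is feasible.]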


 How do you know that any such mapping is going on? It seems like begging
 the question.


 I don't know. I have a strong intuition in its favor for a few reasons,
 scientific and otherwise.


 Have you tried thinking about it another way? Where does 'mapping' come
 from? Can you begin mapping without already having a map?


Yes, I think I begin with a map based on previous experiences and then
improve it as I discover its weaknesses. I think the original map came
from brute-force experimentation while my brain was developing in my early
months of life. But this is just wild guessing, of course.




 The non-scientific one is introspection. I try to observe my own thought
 process and I think I use such mappings.


 Maybe you do. Maybe a lot of people do. I don't think that I do though. I
 think that a game can be played directly without abstracting it into
 another game.


Ok, I believe you but I don't have the same experience. My wife does. She
works in a creative field and she is very intuitive, with the typical
aversion for math. She can beat me at chess quite easily, without appearing
to resort to conscious strategic thinking. She describes it as doing what
feels right.



 The scientific reason is that this type of approach has been
 used successfully to tackle AI problems that could not be solved with
 classical search algorithms.


 I don't doubt that this game is likely to be solved eventually, maybe even
 soon, but the fact remains that it exposes some fundamentally different
 aesthetics between computation and intelligence. This is impressive to me
 because any game is already hugely biased in favor of computation. A game
 is ideal to be reduced to a set of logical rules, it's turn play is already
 a recursive enumeration. A game is already a computer program. Even so, we
 can see that it is possible to use a game to bypass computational values -
 of generic, unconscious repetition, and hint at something completely
 different and opposite.




 Put another way, if there were top-down non-computational effort going
 into the game play, why would it look any different than what we see?


 Our brain seems to be quite good at generating such mappings. We do it
 with chess too, I'm sure. Notice that, when two humans play Arimaa, both
 can count on each other's inabilities to play close to the perfect game. As
 with games with incomplete information, like Poker, part of it is modelling
 the opponent. Perhaps not surprisingly, artificial neural networks are
 quite good at producing useful mappings of this sort, and on predicting
 behaviours with incomplete information. Great progress has been 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-30 Thread Craig Weinberg


On Saturday, March 30, 2013 7:08:25 AM UTC-4, Bruno Marchal wrote:


 On 29 Mar 2013, at 13:31, Craig Weinberg wrote:



 On Friday, March 29, 2013 6:28:02 AM UTC-4, Bruno Marchal wrote:


 On 28 Mar 2013, at 20:36, Craig Weinberg wrote:



 On Thursday, March 28, 2013 1:29:19 PM UTC-4, Bruno Marchal wrote:


 On 28 Mar 2013, at 13:23, Craig Weinberg wrote:

 Strong AI may not really want to understand consciousness


 This is a rhetorical trick. You put intention in the mind of others. You 
 can't do that. 

 You can say something like: I read some strong AI proponents and they 
 dismiss consciousness, ..., and cite them, but you can't make affirmative 
 statements about a large class of people.


 That's interesting because it seems like you make statements about large 
 classes of UMs frequently. You say that they have no answers on the deep 
 questions, or that they don't see themselves as machines. What if Strong AI 
 is a program...a meme or spandrel?


 What if the soul is in the air, and each time you cut your hair you 
 become a zombie? 


 Then people would avoid cutting their hair I would imagine. Unless they 
 were suffering. But seriously, what makes you think that Strong AI is not 
 itself a rogue machine, implanted in minds to satisfy some purely 
 quantitative inevitability?
  







 You are coherent because you search a physical theory of consciousness, 
 and that is indeed incompatible with comp.


 I don't seek a physical theory of consciousness exactly, I more seek a 
 sensory-motive theory of physics.


 I will wait for serious progress.




  


 But your arguments against comp are invalid, beg the question, and 
 contain numerous tricks like the one above. Be more careful please.


 That sounds like another 'magician's dismissal' to me. I beg no more 
 question than comp does.


 You miss the key point. There is no begging when you make clear what you 
 assume. You can assume comp, as you can assume non-comp. But you do 
 something quite different; you pretend that comp is false. So we ask for an 
 argument, and there you beg the question by assuming throughout your 
 argument that comp must be false.


 Comp is false not because I want it to be or assume it is, but because I 
 understand that experience through time can be the only fundamental 
 principle, and bodies across space are derived. I have laid out these 
 reasons for this many times - how easy it is to succumb to the pathetic 
 fallacy, how unlikely it is for experience to have any possible utility for 
 arithmetic, how absent any sign of personality is in machines, how we can 
 easily demonstrate information processing without particular qualia 
 arising, etc. These are just off the top of my head. Anywhere you look in 
 reality you can find huge gaping holes in Comp's assumptions if you choose 
 to look, but you aren't going to see them if you are only listening to the 
 echo chamber of Comp itself. Indeed, if we limit ourselves to only 
 mathematical logic to look at mathematical logic, we are not going to 
 notice that the entire universe of presentation is missing. Comp has a 
 presentation problem, and it is not going to go away.


 Well if you *understand* that time is fundamental, then comp is false for 
 you. 


I understand that *experience* (through 'time') is fundamental, only 
because no other option ultimately makes as much sense.
 

 The pathetic fallacy is not a logical fallacy.


No, it's more important than logic.
 

 You just say that you believe that comp is false, but machines 
 naturally have that belief, as comp is provably counter-intuitive. 


That's just comp feeding back on its own confirmation bias. Comp is a 
machine which can only see itself. It's the inevitable inversion meme which 
arises from mistaking forms and functions for reality rather than the 
capacity to project and receive them.
 











 I have no tricks or invalid arguments that I know of, and I don't see 
 that I am being careless at all.


 Which probably means that you should learn a bit of argumentation, to be 
 frank. Or just assume your theory and be cautious about the theories of 
 other people. 


 I'm only interested in uncovering the truth about consciousness. What 
 other people think and do is none of my business.


 You are asserting without argument that a theory is incorrect, 


I have been asserting my arguments in writing for thousands of hours. Why 
do you say that it is without argument unless it is simply too awful to 
accept that there is no valid counter-argument?
 

 and you do this by assuming that it cannot do this or that, with no 
 argument but your personal feeling.


Why are common sense observations shared by all people since the beginning 
of humanity reduced to 'my personal feeling', while esoteric works of 
mathematics from the last couple of centuries are infallible?
 

 I just explain to you that machines might already have that feeling, as it 
 looks 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-30 Thread Craig Weinberg


On Saturday, March 30, 2013 8:58:41 AM UTC-4, telmo_menezes wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg 
 whats...@gmail.comjavascript:
  wrote:



 On Thursday, March 28, 2013 5:52:04 AM UTC-4, telmo_menezes wrote:




 On Wed, Mar 27, 2013 at 6:29 PM, Craig Weinberg whats...@gmail.comwrote:



 On Wednesday, March 27, 2013 1:03:27 PM UTC-4, telmo_menezes wrote:

 Hi Craig,


 On Wed, Mar 27, 2013 at 4:03 PM, Craig Weinberg whats...@gmail.comwrote:

 From the Quora http://www.quora.com/Board-**Gam**
 es/What-are-some-fun-games-**to-**play-on-an-8x8-**Checkerboard-**
 besides-chess-**checkershttp://www.quora.com/Board-Games/What-are-some-fun-games-to-play-on-an-8x8-Checkerboard-besides-chess-checkers

 This is interesting because I think it shows the weakness of the 
 one-dimensional view of intelligence as computation. Whether a program 
 can 
 be designed to win or not is beside the point,


 That's not really fair, is it?


 Why not?


 How else can I counter your argument against intelligence as computation 
 if I am not allowed to use computation? My example would not prove that 
 it's what the brain does, but it would prove that it can be. You are 
 arguing that it cannot be.


 I'm arguing that a screw is not the same thing as a nail because when you 
 hammer a screw it doesn't go in as easily as a nail and when you use a 
 screwdriver on a nail it doesn't go in at all.


 Ok.
  

 Sometimes the hammer is a better tool and sometimes the driver is. As 
 humans, we have a great hammer and a decent screwdriver. A computer can't 
 hammer anything, but it has a power screwdriver with a potentially infinite 
 set of tips.


 Ok, but if I understand your ideas, you're claiming that the hammer is 
 also the fundamental stuff that reality is made of. Sorry if I'm 
 misrepresenting what you're saying. If I'm not, I don't understand why 
 computers can't have the hammer.


Yes exactly. The hammer is the fundamental stuff of reality, but computers 
are not real in the same sense as the parts they are made of are real. To 
overextend the metaphor, the fundamental stuff of reality would be tiny wax 
hammers which grow denser through time and hammering. Our sense and motive 
is like the 20lb sledge hammer, and we have hammered the wax hammers into a 
fragile, but very precise screwdriver. We can shape that screwdriver into a 
hammer instead, but no matter what we do, it is still going to be as soft 
as wax, because the density hasn't been aged into it. In reality, it isn't 
age, but experiences which accumulate as 'density' of significance. It's 
like leveling up. The materials of the computer have not graduated from 
physics to biological physics, so they can't participate in the biological 
levels of interaction.

 

  

   

  

  

 as it is the difference between this game and chess which hints at 
 the differences between bottom-up mechanism and top-down intentionality


 I see what you're saying but I disagree. It just highlights the weak 
 points of tree-search approaches like min-max. What I gather from what 
 happens when one plays Arimaa (or Go): due to combinatorial explosion, 
 players (even human) play quite far away from the perfect game(s). The 
 way 
 we deal with combinatorial explosion is by mapping the game into 
 something 
 more abstract. 


 How do you know that any such mapping is going on? It seems like 
 begging the question.


 I don't know. I have a strong intuition in its favor for a few reasons, 
 scientific and otherwise.


 Have you tried thinking about it another way? Where does 'mapping' come 
 from? Can you begin mapping without already having a map?


 Yes, I think I begin with a map based on previous experiences and then 
 improve it as I discover its weaknesses. I think the original map came 
 from brute-force experimentation while my brain was developing in my early 
 months of life. But this is just wild guessing, of course.


Is it really a map though, or do you just have access to condensed 
experiences of the territory?
 

  

  

  The non-scientific one is introspection. I try to observe my own 
 thought process and I think I use such mappings. 


 Maybe you do. Maybe a lot of people do. I don't think that I do though. I 
 think that a game can be played directly without abstracting it into 
 another game.


 Ok, I believe you but I don't have the same experience. My wife does. She 
 works in a creative field and she is very intuitive, with the typical 
 aversion for math. She can beat me at chess quite easily, without appearing 
 to resort to conscious strategic thinking. She describes it as doing what 
 feels right.


That may be the key to understanding a lot of what happens on this list - 
we just have different psychological characteristics. It's unfortunate 
though, because split-brain experiments have shown us how left-brain 
dominance insists on reductionist views, even when they are 
objectively delusional. Consciousness, in the 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-30 Thread John Clark
On 29 Mar 2013, at 13:31, Craig Weinberg wrote

 I'm only interested in uncovering the truth about consciousness


On Sat, Mar 30, 2013  Bruno Marchal marc...@ulb.ac.be wrote:

 You are asserting without argument that a theory is incorrect, and you do
 this by assuming that it cannot do this or that, but with no argument


Bruno, did you really expect anything better? You are arguing with somebody
who has publicly admitted that they believe astrology and numerology are
valid methods of learning about the world.  Given that Craig is in the
habit of saying in nearly every post that consciousness is definitely X but
consciousness is most certainly not Y, I wondered what other methods he
used to become so knowledgeable in the ways of philosophy; so I asked if he
also believed that examining the entrails of a chicken can foretell the
future, but he did not answer. I think it's a perfectly reasonable question
given that all 3 were very popular during the medieval dark ages and
prediction by means of chicken entrails is not one bit more imbecilic than
astrology or numerology, so I just wondered if he had gone 2/3 of the way
toward being a perfectly respectable member of the dark ages or if he had
gone all the way.

  John K Clark





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-30 Thread Craig Weinberg


On Saturday, March 30, 2013 12:52:21 PM UTC-4, John Clark wrote:

 On 29 Mar 2013, at 13:31, Craig Weinberg wrote

  I'm only interested in uncovering the truth about consciousness


 On Sat, Mar 30, 2013  Bruno Marchal mar...@ulb.ac.be javascript:wrote:

  You are asserting without argument that a theory is incorrect, and you 
 do this by assuming that it cannot do this or that, but with no argument


 Bruno, did you really expect anything better? You are arguing with 
 somebody who has publicly admitted that they believe astrology and 
 numerology are valid methods of learning about the world.  


I never said that I believe anything. I said that astrology and numerology 
are the most interesting and useful topics which I have come across. That 
has nothing to do with your biased view of astrology as superstition, but 
with understanding the roots of discernment, and the depths of psychology.
 

 Given that Craig is in the habit of saying in nearly every post that 
 consciousness is definitely X but consciousness is most certainly not Y, I 
 wondered what other methods he used to become so knowledgeable in the ways 
 of philosophy; so I asked if he also believed that examining the entrails 
 of a chicken can foretell the future but he did not answer.


I'll pretend that you are really asking a question here, rather than just 
spreading your bigotry - which is insulting to Bruno btw, as you aren't 
even talking to him but just using his participation to leverage your 
tantrum.

If you were genuinely asking about prediction and divination, then I would 
answer that anything can be used as an oracle as far as being a prop to 
access super-personal insights - changing channels on the TV, flipping 
coins for the I Ching, pulling cards, rolling dice, etc. The result however 
is not, in my experience, a prediction, but an impersonal commentary on the 
moment/circumstance in which you are participating. Using entrails or any 
other spooky aesthetics for this kind of thing might change your results if 
it helped put you into a more sensitive mood, but I wouldn't know. 

 

 I think it's a perfectly reasonable question given that all 3 were very 
 popular during the medieval dark ages and prediction by means of chicken 
 entrails is not one bit more imbecilic than astrology or numerology, so I 
 just wondered if he had gone 2/3 of the way toward being a perfectly 
 respectable member of the dark ages or if he had gone all the way.  


It would be reasonable coming from an honest person. When it comes from 
someone who values winning arguments over the truth, then it is just so 
much sniggering and spit.

Craig
 


   John K Clark  

  







Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-29 Thread Bruno Marchal


On 28 Mar 2013, at 20:36, Craig Weinberg wrote:




On Thursday, March 28, 2013 1:29:19 PM UTC-4, Bruno Marchal wrote:

On 28 Mar 2013, at 13:23, Craig Weinberg wrote:


Strong AI may not really want to understand consciousness


This is a rhetorical trick. You put intention in the mind of others.  
You can't do that.


You can say something like: I read some strong AI proponents and  
they dismiss consciousness, ..., and cite them, but you can't make  
affirmative statements about a large class of people.


That's interesting because it seems like you make statements about  
large classes of UMs frequently. You say that they have no answers  
on the deep questions, or that they don't see themselves as  
machines. What if Strong AI is a program...a meme or spandrel?


What if the soul is in the air, and each time you cut your hair  
you become a zombie?








You are coherent because you search a physical theory of  
consciousness, and that is indeed incompatible with comp.


I don't seek a physical theory of consciousness exactly, I more seek  
a sensory-motive theory of physics.


I will wait for serious progress.







But your arguments against comp are invalid, beg the question, and  
contain numerous tricks like the one above. Be more careful please.


That sounds like another 'magician's dismissal' to me. I beg no more  
question than comp does.


You miss the key point. There is no begging when you make clear what you  
assume. You can assume comp, as you can assume non-comp. But you do  
something quite different; you pretend that comp is false. So we ask  
for an argument, and there you beg the question by assuming throughout  
your argument that comp must be false.






I have no tricks or invalid arguments that I know of, and I don't  
see that I am being careless at all.


Which probably means that you should learn a bit of argumentation, to  
be frank. Or just assume your theory and be cautious about the theories  
of other people.


Bruno







Craig


Bruno



http://iridia.ulb.ac.be/~marchal/








http://iridia.ulb.ac.be/~marchal/







Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-29 Thread Craig Weinberg


On Friday, March 29, 2013 6:28:02 AM UTC-4, Bruno Marchal wrote:


 On 28 Mar 2013, at 20:36, Craig Weinberg wrote:



 On Thursday, March 28, 2013 1:29:19 PM UTC-4, Bruno Marchal wrote:


 On 28 Mar 2013, at 13:23, Craig Weinberg wrote:

 Strong AI may not really want to understand consciousness


 This is a rhetorical trick. You put intention in the mind of others. You 
 can't do that. 

 You can say something like: I read some strong AI proponents and they 
 dismiss consciousness, ..., and cite them, but you can't make affirmative 
 statements about a large class of people.


 That's interesting because it seems like you make statements about large 
 classes of UMs frequently. You say that they have no answers on the deep 
 questions, or that they don't see themselves as machines. What if Strong AI 
 is a program...a meme or spandrel?


 What if the soul is in the air, and each time you cut your hair you 
 become a zombie? 


Then people would avoid cutting their hair I would imagine. Unless they 
were suffering. But seriously, what makes you think that Strong AI is not 
itself a rogue machine, implanted in minds to satisfy some purely 
quantitative inevitability?
 







 You are coherent because you search a physical theory of consciousness, 
 and that is indeed incompatible with comp.


 I don't seek a physical theory of consciousness exactly, I more seek a 
 sensory-motive theory of physics.


 I will wait for serious progress.




  


 But your arguments against comp are invalid, beg the question, and 
 contain numerous tricks like the one above. Be more careful please.


 That sounds like another 'magician's dismissal' to me. I beg no more 
 question than comp does.


 You miss the key point. There is no begging when you make clear what you 
 assume. You can assume comp, as you can assume non-comp. But you do 
 something quite different; you pretend that comp is false. So we ask for an 
 argument, and there you beg the question by assuming throughout your 
 argument that comp must be false.


Comp is false not because I want it to be or assume it is, but because I 
understand that experience through time can be the only fundamental 
principle, and bodies across space are derived. I have laid out these 
reasons for this many times - how easy it is to succumb to the pathetic 
fallacy, how unlikely it is for experience to have any possible utility for 
arithmetic, how absent any sign of personality is in machines, how we can 
easily demonstrate information processing without particular qualia 
arising, etc. These are just off the top of my head. Anywhere you look in 
reality you can find huge gaping holes in Comp's assumptions if you choose 
to look, but you aren't going to see them if you are only listening to the 
echo chamber of Comp itself. Indeed, if we limit ourselves to only 
mathematical logic to look at mathematical logic, we are not going to 
notice that the entire universe of presentation is missing. Comp has a 
presentation problem, and it is not going to go away.






 I have no tricks or invalid arguments that I know of, and I don't see that 
 I am being careless at all.


 Which probably means that you should learn a bit of argumentation, to be 
 frank. Or just assume your theory and be cautious about the theories of 
 other people. 


I'm only interested in uncovering the truth about consciousness. What other 
people think and do is none of my business.

Craig
 


 Bruno






 Craig 


 Bruno



 http://iridia.ulb.ac.be/~marchal/






 http://iridia.ulb.ac.be/~marchal/









Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-29 Thread John Clark
The game Arimaa was designed by Omar Syed to be difficult for computers to
solve; he invented it to spur the improvement of artificial intelligence
software, and so offered a $10,000 prize to the inventor of a software
program that could defeat any human player; however, there were some
restrictions on the offer. Syed believes that even now a supercomputer
might be able to defeat any human, so he insists that the program be run on
inexpensive off-the-shelf components. Also, the $10,000 prize offer is only
good until 2020, because Syed figures that after that even a cheap home
computer will have supercomputer ability, and so writing a champion
Arimaa program wouldn't be much of a challenge.

None of this indicates an inherent weakness of computers to me; in fact, just
the opposite.

  John K Clark





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-29 Thread Craig Weinberg


On Friday, March 29, 2013 1:10:16 PM UTC-4, John Clark wrote:

 The game Arimaa was designed by Omar Syed to be difficult for computers 
 to solve. He invented it to spur the improvement of artificial intelligence 
 software, and so offered a $10,000 prize to the inventor of a software 
 program that could defeat any human player; however, there were some 
 restrictions on the offer. Syed believes that even now a supercomputer 
 might be able to defeat any human, so he insists that the program be run on 
 inexpensive off-the-shelf components. Also, the $10,000 prize offer is only 
 good until 2020, because Syed figures that after that even a cheap home 
 computer will have supercomputer ability, and so writing a champion 
 Arimaa program wouldn't be much of a challenge.  

 None of this indicates an inherent weakness of computers to me; in fact, 
 just the opposite.


It's not about computers being 'weak', just that computation is different 
from consciousness, or more to the point, it is the opposite of 
consciousness. Since any game is inherently pre-defined from quantitative 
axioms, it is not surprising to me that there would be no game at which a 
computer could not eventually outperform a human being. So what, though? 
Computers can play by the rules, but people can cheat. People can make new 
rules or ignore them. They can pull the plug on computers, or drop junkyard 
magnets on top of them if they want to.

Craig


   John K Clark 












Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-29 Thread John Clark
On Fri, Mar 29, 2013 at 3:26 PM, Craig Weinberg whatsons...@gmail.com wrote:

 computation is different from consciousness, or more to the point, it is
 the opposite of consciousness.


Did you learn that from astrology or numerology or by examining the
entrails of a chicken?

 John K Clark





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-29 Thread Craig Weinberg


On Friday, March 29, 2013 8:46:34 PM UTC-4, John Clark wrote:

 On Fri, Mar 29, 2013 at 3:26 PM, Craig Weinberg whats...@gmail.com wrote:

   computation is different from consciousness, or more to the point, it 
 is the opposite of consciousness.


 Did you learn that from astrology or numerology or by examining the 
 entrails of a chicken?


No, I learned it from listening to intellectual cowards parrot the 
prejudices of their betters.

Craig
 


  John K Clark

  
  





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-28 Thread Telmo Menezes
On Wed, Mar 27, 2013 at 6:29 PM, Craig Weinberg whatsons...@gmail.com wrote:



 On Wednesday, March 27, 2013 1:03:27 PM UTC-4, telmo_menezes wrote:

 Hi Craig,


 On Wed, Mar 27, 2013 at 4:03 PM, Craig Weinberg whats...@gmail.com wrote:

 From the Quora
 http://www.quora.com/Board-Games/What-are-some-fun-games-to-play-on-an-8x8-Checkerboard-besides-chess-checkers

 This is interesting because I think it shows the weakness of the
 one-dimensional view of intelligence as computation. Whether a program can
 be designed to win or not is beside the point,


 That's not really fair, is it?


 Why not?


How else can I counter your argument against intelligence as computation if
I am not allowed to use computation? My example would not prove that it's
what the brain does, but it would prove that it can be. You are arguing
that it cannot be.






 as it is the difference between this game and chess which hints at the
 differences between bottom-up mechanism and top-down intentionality


 I see what you're saying but I disagree. It just highlights the weak
 points of tree-search approaches like min-max. What I gather from what
 happens when one plays Arimaa (or Go): due to combinatorial explosion,
 players (even human) play quite far away from the perfect game(s). The way
 we deal with combinatorial explosion is by mapping the game into something
 more abstract.


 How do you know that any such mapping is going on? It seems like begging
 the question.


I don't know. I have a strong intuition in its favor for a few reasons,
scientific and otherwise. The non-scientific one is introspection. I try to
observe my own thought process and I think I use such mappings. The
scientific reason is that this type of approach has been
used successfully to tackle AI problems that could not be solved with
classical search algorithms.


 Put another way, if there were top-down non-computational effort going
 into the game play, why would it look any different than what we see?


 Our brain seems to be quite good at generating such mappings. We do it
 with chess too, I'm sure. Notice that, when two humans play Arimaa, both
 can count on each other's inabilities to play close to the perfect game. As
 with games with incomplete information, like Poker, part of it is modelling
 the opponent. Perhaps not surprisingly, artificial neural networks are
 quite good at producing useful mappings of this sort, and on predicting
 behaviours with incomplete information. Great progress has been achieved
 lately with deep learning. All this fits bottom-up mechanism and
 intelligence as computation. It doesn't prove anything because I can't
 attach the code for an excellent Arimaa player but, on the other hand, if I
 did I'm sure you'd come up with something else. :)


 Except that playing Arimaa is not particularly taxing on the human player.
 There is no suggestion of any complex algorithms and mappings, rather it
 seems to me, there is simplicity.


The mappings don't have to be complex at all (in terms of leading to heavy
computations). That's precisely their point.


 The human finds no fundamental difference in difficulty between Arimaa 
 and Chess, yet there is a clear difference for the computer. 


Yes, the classical chess algorithms are clearly not how we do it. I agree
with you there.


 Again, if this does not indicate that the model of intelligence as purely 
 an assembly of logical parts is inadequate, what actually would? In what 
 way is the Strong AI position falsifiable?


I agree, I don't think it's falsifiable and thus not a scientific
hypothesis in the Popperian sense. I see it more as an ambitious goal that
nobody even knows is achievable. You might be right, even if we manage
to create an AI that is indistinguishable from human intelligence. I prefer
to believe in Strong AI because I'm interested in its consequences and in
the intellectual challenge of achieving it. That's all, to be honest.

On the other hand, your hypothesis is also not falsifiable.





 A lot of progress has been made in Poker, both in mapping the game to
 something more abstract and modelling opponents:
 http://poker.cs.ualberta.ca/

 Cheers,
 Telmo.

 PS: The expression "brute force" annoys me a bit. It implies that
 traditional chess algorithms blindly search the entire space. That's just
 not true, they do clever tree-pruning and use heuristics. Still, they are
 indeed defeated by combinatorial explosion.


 It was a generalization, but I understood what they meant. The important
 thing is that the approach of computation is fundamentally passive and
 eliminative. Games which do not hinge on human intolerance for tedious
 recursive processes are going to be easier for computers because machines
 have no capacity for intolerance. The more tedious the better. Games which 
 de-emphasize this as a criterion for success are less vulnerable to any 
 recursive elimination. 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-28 Thread Craig Weinberg


On Thursday, March 28, 2013 5:52:04 AM UTC-4, telmo_menezes wrote:




 On Wed, Mar 27, 2013 at 6:29 PM, Craig Weinberg whats...@gmail.com wrote:



 On Wednesday, March 27, 2013 1:03:27 PM UTC-4, telmo_menezes wrote:

 Hi Craig,


 On Wed, Mar 27, 2013 at 4:03 PM, Craig Weinberg whats...@gmail.com wrote:

 From the Quora 
 http://www.quora.com/Board-Games/What-are-some-fun-games-to-play-on-an-8x8-Checkerboard-besides-chess-checkers

 This is interesting because I think it shows the weakness of the 
 one-dimensional view of intelligence as computation. Whether a program can 
 be designed to win or not is beside the point,


 That's not really fair, is it?


 Why not?


 How else can I counter your argument against intelligence as computation 
 if I am not allowed to use computation? My example would not prove that 
 it's what the brain does, but it would prove that it can be. You are 
 arguing that it cannot be.


I'm arguing that a screw is not the same thing as a nail because when you 
hammer a screw it doesn't go in as easily as a nail and when you use a 
screwdriver on a nail it doesn't go in at all. Sometimes the hammer is a 
better tool and sometimes the driver is. As humans, we have a great hammer 
and a decent screwdriver. A computer can't hammer anything, but it has a 
power screwdriver with a potentially infinite set of tips.
 

  

  

  

 as it is the difference between this game and chess which hints at the 
 differences between bottom-up mechanism and top-down intentionality


 I see what you're saying but I disagree. It just highlights the weak 
 points of tree-search approaches like min-max. What I gather from what 
 happens when one plays Arimaa (or Go): due to combinatorial explosion, 
 players (even human) play quite far away from the perfect game(s). The way 
 we deal with combinatorial explosion is by mapping the game into something 
 more abstract. 


 How do you know that any such mapping is going on? It seems like begging 
 the question.


 I don't know. I have a strong intuition in its favor for a few reasons, 
 scientific and otherwise.


Have you tried thinking about it another way? Where does 'mapping' come 
from? Can you begin mapping without already having a map?
 

 The non-scientific one is introspection. I try to observe my own thought 
 process and I think I use such mappings. 


Maybe you do. Maybe a lot of people do. I don't think that I do though. I 
think that a game can be played directly without abstracting it into 
another game.

The scientific reason is that this type of approach has been 
 used successfully to tackle AI problems that could not be solved with 
 classical search algorithms.


I don't doubt that this game is likely to be solved eventually, maybe even 
soon, but the fact remains that it exposes some fundamentally different 
aesthetics between computation and intelligence. This is impressive to me 
because any game is already hugely biased in favor of computation. A game 
is ideally suited to reduction to a set of logical rules; its turn play is 
already a recursive enumeration. A game is already a computer program. Even 
so, we can see that it is possible to use a game to bypass computational 
values - generic, unconscious repetition - and hint at something completely 
different and opposite.
 

  

 Put another way, if there were top-down non-computational effort going 
 into the game play, why would it look any different than what we see?
  

 Our brain seems to be quite good at generating such mappings. We do it 
 with chess too, I'm sure. Notice that, when two humans play Arimaa, both 
 can count on each other's inabilities to play close to the perfect game. As 
 with games with incomplete information, like Poker, part of it is modelling 
 the opponent. Perhaps not surprisingly, artificial neural networks are 
 quite good at producing useful mappings of this sort, and on predicting 
 behaviours with incomplete information. Great progress has been achieved 
 lately with deep learning. All this fits bottom-up mechanism and 
 intelligence as computation. It doesn't prove anything because I can't 
 attach the code for an excellent Arimaa player but, on the other hand, if I 
 did I'm sure you'd come up with something else. :)


 Except that playing Arimaa is not particularly taxing on the human 
 player. There is no suggestion of any complex algorithms and mappings, 
 rather it seems to me, there is simplicity. 


 The mappings don't have to be complex at all (in terms of leading to heavy 
 computations). That's precisely their point.


Then shouldn't a powerful computer be able to quickly deduce the winning 
Arimaa mappings?
 

  

 The human finds no fundamental difference in difficulty between Arimaa 
 and Chess, yet there is a clear difference for the computer. 


 Yes, the classical chess algorithms are clearly not how we do it. I agree 
 with you there.

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-28 Thread Craig Weinberg


On Wednesday, March 27, 2013 9:32:46 PM UTC-4, stathisp wrote:



 On Thu, Mar 28, 2013 at 2:03 AM, Craig Weinberg whats...@gmail.com wrote:

 From the Quora 
 http://www.quora.com/Board-Games/What-are-some-fun-games-to-play-on-an-8x8-Checkerboard-besides-chess-checkers

 This is interesting because I think it shows the weakness of the 
 one-dimensional view of intelligence as computation. Whether a program can 
 be designed to win or not is beside the point, as it is the difference 
 between this game and chess which hints at the differences between 
 bottom-up mechanism and top-down intentionality.

 In Arimaa, the rules invite personal preference as a spontaneous 
 initiative from the start - thus it does not make the reductionist 
 assumption of intelligence as a statistical extraction or 'best choice'. 
 Game play here begins intuitively and strategy is more proprietary-private 
 than generic-public. In addition the interaction of the pieces and 
 inclusion of the four trap squares suggests a game geography which is 
 rooted more in space-time sensibilities than in pure arithmetic like chess. 
 I'm not sure which aspects are the most relevant in the difference between 
 how a computer performs, but it seems likely to me that the difference is 
 specifically *not* related to computing power. To wit:

 There are tens of thousands of possibilities in each turn in Arimaa. 
 The 'brute force approach' to programming Arimaa fails miserably. Any human 
 who has played a bit of Arimaa can beat a computer hands down.

 This to me suggests that Arimaa does a good job of sniffing out the 
 general area where top-down consciousness differs fundamentally from bottom 
 up simulated intelligence.


 If this game shows where top-down consciousness differs fundamentally 
 from bottom up simulated intelligence would you accept a computer beating 
 a human at Arimaa as evidence that computers had the top-down 
 consciousness? 


No, that's why I wrote "Whether a program can be designed to win or not is 
beside the point." You may be able to build a screwdriver that is big 
enough to use as a hammer in some situations, but that doesn't mean that it 
is an actual claw hammer.

Would you accept an AI matching a human in any task whatsoever as evidence 
 of the computer having consciousness? If not, why bother pointing out 
 computers' failings if you believe they are a priori incapable of 
 consciousness or even intelligence?


I point out computers' failings to help discern the difference between 
consciousness and simulated intelligence. I'm interested in that because I 
have a hypothesis about what awareness actually is, and that hypothesis 
indicates that awareness cannot necessarily be assembled from the outside. 
I think computers are great, I use them all day every day by choice and by 
profession, but that doesn't make them the same thing as a person, or a 
proto-person. Not only are they not that, they are, in my hypothesis, the 
precise opposite of that. Machines are impersonal. Trying to build a person 
from impersonal parts is like trying to find some combination of walking 
north and south which will eventually take you east.

Thanks,
Craig
 



 -- 
 Stathis Papaioannou 





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-27 Thread Telmo Menezes
Hi Craig,


On Wed, Mar 27, 2013 at 4:03 PM, Craig Weinberg whatsons...@gmail.com wrote:

 From the Quora
 http://www.quora.com/Board-Games/What-are-some-fun-games-to-play-on-an-8x8-Checkerboard-besides-chess-checkers

 This is interesting because I think it shows the weakness of the
 one-dimensional view of intelligence as computation. Whether a program can
 be designed to win or not is beside the point,


That's not really fair, is it?


 as it is the difference between this game and chess which hints at the
 differences between bottom-up mechanism and top-down intentionality


I see what you're saying but I disagree. It just highlights the weak points
of tree-search approaches like min-max. What I gather from what happens
when one plays Arimaa (or Go): due to combinatorial explosion, players
(even human) play quite far away from the perfect game(s). The way we deal
with combinatorial explosion is by mapping the game into something more
abstract. Our brain seems to be quite good at generating such mappings. We
do it with chess too, I'm sure. Notice that, when two humans play Arimaa,
both can count on each other's inabilities to play close to the perfect
game. As with games with incomplete information, like Poker, part of it is
modelling the opponent. Perhaps not surprisingly, artificial neural
networks are quite good at producing useful mappings of this sort, and on
predicting behaviours with incomplete information. Great progress has been
achieved lately with deep learning. All this fits bottom-up mechanism and
intelligence as computation. It doesn't prove anything because I can't
attach the code for an excellent Arimaa player but, on the other hand, if I
did I'm sure you'd come up with something else. :)
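
The kind of mapping I have in mind can be sketched in a few lines. To be
clear, everything below is invented for illustration (the State fields, the
two features, and the weights are made up, not an actual Arimaa evaluation):
the point is only that many distinct concrete positions collapse into one
small abstract description, and it is the abstraction that gets evaluated.

```python
# A toy illustration of "mapping the game into something more abstract":
# instead of evaluating each of the astronomically many concrete states,
# project states onto a few coarse features and evaluate the abstraction.
# The features (material count, rabbit advancement) are made-up stand-ins.

from collections import namedtuple

State = namedtuple("State", ["my_pieces", "opp_pieces", "rabbit_rank"])

def abstract(state):
    """Collapse a concrete state into a small feature tuple."""
    material = state.my_pieces - state.opp_pieces  # piece advantage
    progress = min(state.rabbit_rank, 7)           # furthest rabbit rank
    return (material, progress)

def evaluate(state, weights=(1.0, 0.5)):
    """Linear evaluation over the abstract features."""
    return sum(w * f for w, f in zip(weights, abstract(state)))

# Distinct concrete states can land in the same abstract bucket:
a = State(my_pieces=14, opp_pieces=12, rabbit_rank=5)
b = State(my_pieces=10, opp_pieces=8,  rabbit_rank=5)
print(abstract(a) == abstract(b))   # True: both map to (2, 5)
print(evaluate(a))                  # 4.5
```

A learned mapping (e.g. a neural network) plays the role of abstract() and
evaluate() here, but with features discovered from data rather than chosen
by hand.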

A lot of progress has been made in Poker, both in mapping the game to
something more abstract and modelling opponents:
http://poker.cs.ualberta.ca/

Cheers,
Telmo.

PS: The expression "brute force" annoys me a bit. It implies that
traditional chess algorithms blindly search the entire space. That's just
not true, they do clever tree-pruning and use heuristics. Still, they are
indeed defeated by combinatorial explosion.
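
To be concrete about what the tree-pruning looks like, here is a minimal
minimax search with alpha-beta pruning on a toy game tree. The tree and its
leaf evaluations are invented for the example; real chess programs add move
ordering, transposition tables, iterative deepening and so on, but the
cutoff logic is the same.

```python
# Minimal minimax with alpha-beta pruning on a toy tree: leaves are ints
# (static evaluations), inner nodes are lists of children. Branches are
# skipped (pruned) as soon as they provably cannot affect the result.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, int):              # leaf: return static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:              # alpha cutoff
                break
        return value

# A small textbook-style tree whose minimax value is 6.
tree = [[[5, 6], [7, 4, 5]], [[3]], [[6], [6, 9]]]
print(alphabeta(tree))                     # 6
```

Pruning never changes the value found; it only avoids visiting subtrees that
cannot matter, which is exactly why it still loses to Arimaa-scale branching.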


 .

 In Arimaa, the rules invite personal preference as a spontaneous
 initiative from the start - thus it does not make the reductionist
 assumption of intelligence as a statistical extraction or 'best choice'.
 Game play here begins intuitively and strategy is more proprietary-private
 than generic-public. In addition the interaction of the pieces and
 inclusion of the four trap squares suggests a game geography which is
 rooted more in space-time sensibilities than in pure arithmetic like chess.
 I'm not sure which aspects are the most relevant in the difference between
 how a computer performs, but it seems likely to me that the difference is
 specifically *not* related to computing power. To wit:

 There are tens of thousands of possibilities in each turn in Arimaa. The
 'brute force approach' to programming Arimaa fails miserably. Any human who
 has played a bit of Arimaa can beat a computer hands down.

 This to me suggests that Arimaa does a good job of sniffing out the
 general area where top-down consciousness differs fundamentally from bottom
 up simulated intelligence.


 --

 *Arimaa, the strategy game that confounds computers! *
 It can be played, not only on an 8x8 chess board, but with the same chess
 pieces as well!
 The pieces are :

1. 8 Rabbits (Pawns)
2. 1 Elephant (King)
3. 1 Camel (Queen)
4. 2 Horses (Rooks)
5. 2 Dogs (Bishops)
6. 2 Cats (Knights)


 It doesn't matter in what way you want the 2 horses/dogs/cats to be
 designated by the 2 bishops/knights/rooks.

 *What sets apart Arimaa from Chess?*

- There is no draw in Arimaa. Good news for elimination tournaments.
- In Arimaa, a player has 64,864,400 choices for the first turn. Thus
unlike chess, memorizing openings is not gonna help you.
- There are tens of thousands of possibilities in each turn in Arimaa.
The 'brute force approach' to programming Arimaa fails miserably. Any human
who has played a bit of Arimaa can beat a computer hands down.
- It places less emphasis on tactics.
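
A quick back-of-the-envelope shows why those numbers bite. The branching
factors below are ballpark assumptions (roughly 35 per position is a
commonly cited figure for chess; 20,000 for Arimaa simply takes the "tens of
thousands" above at face value), so the output is an order-of-magnitude
sketch, not a measurement.

```python
# Rough game-tree arithmetic behind the "brute force fails" claim.
# Branching factors are illustrative assumptions, not measured values.

def tree_size(branching, depth):
    """Number of leaf positions in a full search to the given depth."""
    return branching ** depth

chess  = tree_size(35, 4)       # four plies of chess: ~1.5 million leaves
arimaa = tree_size(20_000, 4)   # four turns of Arimaa: 1.6e17 leaves

print(f"chess  4-ply : {chess:.1e}")
print(f"arimaa 4-turn: {arimaa:.1e}")
print(f"ratio        : {arimaa / chess:.1e}")
```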


 I believe Arimaa is *way* better than chess in terms of abstract
 strategical thinking. It needs a higher level of intuition and
 understanding, discourages memorization and is simple to learn and play. It
 took me some time to play good chess, but it took me a small fraction of
 that time to learn and play good Arimaa.

 The Arimaa community is offering $10,000 for anyone who can come up with a
 program able to beat a top-level human Arimaa player, by 2020 : The
 Arimaa Challenge http://arimaa.com/arimaa/challenge/
 This will help us to attain the next pinnacle in Artificial Intelligence
 Programming.

 *Rules :*
 In the starting, both players 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-27 Thread Craig Weinberg


On Wednesday, March 27, 2013 1:03:27 PM UTC-4, telmo_menezes wrote:

 Hi Craig,


 On Wed, Mar 27, 2013 at 4:03 PM, Craig Weinberg whats...@gmail.com wrote:

 From the Quora 
 http://www.quora.com/Board-Games/What-are-some-fun-games-to-play-on-an-8x8-Checkerboard-besides-chess-checkers

 This is interesting because I think it shows the weakness of the 
 one-dimensional view of intelligence as computation. Whether a program can 
 be designed to win or not is beside the point,


 That's not really fair, is it?


Why not?
 

  

 as it is the difference between this game and chess which hints at the 
 differences between bottom-up mechanism and top-down intentionality


 I see what you're saying but I disagree. It just highlights the weak 
 points of tree-search approaches like min-max. What I gather from what 
 happens when one plays Arimaa (or Go): due to combinatorial explosion, 
 players (even human) play quite far away from the perfect game(s). The way 
 we deal with combinatorial explosion is by mapping the game into something 
 more abstract. 


How do you know that any such mapping is going on? It seems like begging 
the question. Put another way, if there were top-down non-computational 
effort going into the game play, why would it look any different than what 
we see?
 

 Our brain seems to be quite good at generating such mappings. We do it 
 with chess too, I'm sure. Notice that, when two humans play Arimaa, both 
 can count on each other's inabilities to play close to the perfect game. As 
 with games with incomplete information, like Poker, part of it is modelling 
 the opponent. Perhaps not surprisingly, artificial neural networks are 
 quite good at producing useful mappings of this sort, and on predicting 
 behaviours with incomplete information. Great progress has been achieved 
 lately with deep learning. All this fits bottom-up mechanism and 
 intelligence as computation. It doesn't prove anything because I can't 
 attach the code for an excellent Arimaa player but, on the other hand, if I 
 did I'm sure you'd come up with something else. :)


Except that playing Arimaa is not particularly taxing on the human player. 
There is no suggestion of any complex algorithms and mappings, rather it 
seems to me, there is simplicity. The human finds no fundamental difference 
in difficulty between Arimaa and Chess, yet there is a clear difference for 
the computer. Again, if this does not indicate that the model of intelligence 
as purely an assembly of logical parts is inadequate, what actually would? In 
what way is the Strong AI position falsifiable?
 


 A lot of progress has been made in Poker, both in mapping the game to 
 something more abstract and modelling opponents:
 http://poker.cs.ualberta.ca/

 Cheers,
 Telmo.

 PS: The expression "brute force" annoys me a bit. It implies that 
 traditional chess algorithms blindly search the entire space. That's just 
 not true, they do clever tree-pruning and use heuristics. Still, they are 
 indeed defeated by combinatorial explosion.


It was a generalization, but I understood what they meant. The important 
thing is that the approach of computation is fundamentally passive and 
eliminative. Games which do not hinge on human intolerance for tedious 
recursive processes are going to be easier for computers because machines 
have no capacity for intolerance. The more tedious the better. Games which 
de-emphasize this as a criterion for success are less vulnerable to any 
recursive elimination. The more a game can reward spontaneous creativity, 
versatility, style, grace, broadminded eclectic interpretations, the more a 
computer will fail to duplicate a person's success.

Craig

 

 .

 In Arimaa, the rules invite personal preference as a spontaneous 
 initiative from the start - thus it does not make the reductionist 
 assumption of intelligence as a statistical extraction or 'best choice'. 
 Game play here begins intuitively and strategy is more proprietary-private 
 than generic-public. In addition the interaction of the pieces and 
 inclusion of the four trap squares suggests a game geography which is 
 rooted more in space-time sensibilities than in pure arithmetic like chess. 
 I'm not sure which aspects are the most relevant in the difference between 
 how a computer performs, but it seems likely to me that the difference is 
 specifically *not* related to computing power. To wit:

 There are tens of thousands of possibilities in each turn in Arimaa. 
 The 'brute force approach' to programming Arimaa fails miserably. Any human 
 who has played a bit of Arimaa can beat a computer hands down.

 This to me suggests that Arimaa does a good job of sniffing out the 
 general area where top-down consciousness differs fundamentally from bottom 
 up simulated intelligence.


 --

 *Arimaa, the strategy 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-27 Thread Stathis Papaioannou
On Thu, Mar 28, 2013 at 2:03 AM, Craig Weinberg whatsons...@gmail.com wrote:

 From the Quora
 http://www.quora.com/Board-Games/What-are-some-fun-games-to-play-on-an-8x8-Checkerboard-besides-chess-checkers

 This is interesting because I think it shows the weakness of the
 one-dimensional view of intelligence as computation. Whether a program can
 be designed to win or not is beside the point, as it is the difference
 between this game and chess which hints at the differences between
 bottom-up mechanism and top-down intentionality.

 In Arimaa, the rules invite personal preference as a spontaneous
 initiative from the start - thus it does not make the reductionist
 assumption of intelligence as a statistical extraction or 'best choice'.
 Game play here begins intuitively and strategy is more proprietary-private
 than generic-public. In addition the interaction of the pieces and
 inclusion of the four trap squares suggests a game geography which is
 rooted more in space-time sensibilities than in pure arithmetic like chess.
 I'm not sure which aspects are the most relevant in the difference between
 how a computer performs, but it seems likely to me that the difference is
 specifically *not* related to computing power. To wit:

 There are tens of thousands of possibilities in each turn in Arimaa. The
 'brute force approach' to programming Arimaa fails miserably. Any human who
 has played a bit of Arimaa can beat a computer hands down.

 This to me suggests that Arimaa does a good job of sniffing out the
 general area where top-down consciousness differs fundamentally from bottom
 up simulated intelligence.


If this game shows where top-down consciousness differs fundamentally from
bottom up simulated intelligence would you accept a computer beating a
human at Arimaa as evidence that computers had the top-down
consciousness? Would you accept an AI matching a human in any
task whatsoever as evidence of the computer having consciousness? If not,
why bother pointing out computers' failings if you believe they are a
priori incapable of consciousness or even intelligence?


-- 
Stathis Papaioannou
