Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread deepakjnath
Yes, we could do a 4x4 tic-tac-toe game like this on a PC. The training sets
can be generated simply by playing the agents against each other using
random moves and letting each agent know whether it passed or failed as a
feedback mechanism.
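
For concreteness, here is a minimal sketch of how such a training set might be
generated (Python; the board representation, the uniformly random move policy,
and the win/draw labelling are illustrative assumptions, not a fixed design):

import random

SIZE = 4  # 4x4 board

def winner(board):
    """Return 'X' or 'O' if a player owns a full row, column or diagonal, else None."""
    lines = [[(r, c) for c in range(SIZE)] for r in range(SIZE)]   # rows
    lines += [[(r, c) for r in range(SIZE)] for c in range(SIZE)]  # columns
    lines += [[(i, i) for i in range(SIZE)],
              [(i, SIZE - 1 - i) for i in range(SIZE)]]            # diagonals
    for line in lines:
        marks = {board[r][c] for (r, c) in line}
        if len(marks) == 1 and None not in marks:
            return marks.pop()
    return None

def random_game():
    """Play one game with uniformly random legal moves; return (move history, winner)."""
    board = [[None] * SIZE for _ in range(SIZE)]
    history, player = [], 'X'
    cells = [(r, c) for r in range(SIZE) for c in range(SIZE)]
    random.shuffle(cells)  # playing shuffled cells in turn = random legal moves
    for (r, c) in cells:
        board[r][c] = player
        history.append((player, r, c))
        if winner(board) is not None:
            return history, player  # pass for this agent, fail for the other
        player = 'O' if player == 'X' else 'X'
    return history, None  # draw

# Each labelled game is one training example: the history plus a pass/fail signal.
training_set = [random_game() for _ in range(10000)]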

Cheers,
Deepak

On Wed, Jul 21, 2010 at 9:02 AM, Matt Mahoney matmaho...@yahoo.com wrote:

 Mike, I think we all agree that we should not have to tell an AGI the steps
 to solving problems. It should learn and figure it out, like the way that
 people figure it out.

 The question is how to do that. We know that it is possible. For example, I
 could write a chess program that I could not win against. I could write the
 program in such a way that it learns to improve its game by playing against
 itself or other opponents. I could write it in such a way that it initially
 does not know the rules for chess, but instead learns the rules by being
 given examples of legal and illegal moves.

 What we have not yet been able to do is scale this type of learning and
 problem solving up to general, human level intelligence. I believe it is
 possible, but it will require lots of training data and lots of computing
 power. It is not something you could do on a PC, and it won't be cheap.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* Mike Tintner tint...@blueyonder.co.uk
 *To:* agi agi@v2.listbox.com
 *Sent:* Mon, July 19, 2010 9:07:53 PM

 *Subject:* Re: [agi] Of definitions and tests of AGI

 The issue isn't what a computer can do. The issue is how you structure the
 computer's or any agent's thinking about a problem. Programs/Turing machines
 are only one way of structuring thinking/problemsolving - by, among other
 things, giving the computer a method/process of solution. There is an
 alternative way of structuring a computer's thinking, which incl., among
 other things, not giving it a method/ process of solution, but making it
 rather than a human programmer do the real problemsolving.  More of that
 another time.

  *From:* Matt Mahoney matmaho...@yahoo.com
 *Sent:* Tuesday, July 20, 2010 1:38 AM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

  Creativity is the good feeling you get when you discover a clever
 solution to a hard problem without knowing the process you used to discover
 it.

 I think a computer could do that.


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Mike Tintner tint...@blueyonder.co.uk
 *To:* agi agi@v2.listbox.com
 *Sent:* Mon, July 19, 2010 2:08:28 PM
 *Subject:* Re: [agi] Of definitions and tests of AGI

 Yes that's what people do, but it's not what programmed computers do.

 The useful formulation that emerges here is:

 narrow AI (and in fact all rational) problems have *a method of solution*
 (to be equated with general method) - and are programmable (a program is
 a method of solution)

 AGI (and in fact all creative) problems do NOT have *a method of solution*
 (in the general sense) - rather a one-off *way of solving the problem* has
 to be improvised each time.

 AGI/creative problems do not in fact have a method of solution, period.
 There is no (general) method of solving either the toy box or the
 build-a-rock-wall problem - one essential feature which makes them AGI.

 You can learn, as you indicate, from *parts* of any given AGI/creative
 solution, and apply the lessons to future problems - and indeed with
 practice, should improve at solving any given kind of AGI/creative problem.
 But you can never apply a *whole* solution/way to further problems.

 P.S. One should add that in terms of computers, we are talking here of
 *complete, step-by-step* methods of solution.


  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 5:09 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI



  And are you happy with:

 AGI is about devising *one-off* methods of problemsolving (that only apply
 to the individual problem, and cannot be re-used - at least not in their
 totality)



 Yes exactly, isn't that what people do?  Also, I think that being able to
 recognize where past solutions can be generalized and where past solutions
 can be varied and reused is a detail of how intelligence works that is
 likely to be universal.



  vs

 narrow AI is about applying pre-existing *general* methods of
 problemsolving  (applicable to whole classes of problems)?



  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 4:45 PM
  *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

Well, "solving ANY problem" is a little too strong.  This is AGI, not AGH
(artificial godhead), though AGH could be an unintended consequence ;).  So
I would rephrase "solving any problem" as "being able to come up with
reasonable approaches and strategies to any problem" (just as humans are able
to do).

 On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner 
 tint

Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread David Jones
Training data is not available in many real problems. I don't think training
data should be used as the main learning mechanism. It likely won't solve
any of the problems.

On Jul 21, 2010 2:52 AM, deepakjnath deepakjn...@gmail.com wrote:

Yes, we could do a 4x4 tic-tac-toe game like this on a PC. The training sets
can be generated simply by playing the agents against each other using
random moves and letting each agent know whether it passed or failed as a
feedback mechanism.

Cheers,
Deepak



On Wed, Jul 21, 2010 at 9:02 AM, Matt Mahoney matmaho...@yahoo.com wrote:

 Mike, I think we a...
-- 
cheers,
Deepak





Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread Mike Tintner
Matt,

How did you learn to play chess? Or write programs? How do you teach people 
to write programs?

Compare and contrast - esp. the nature and number/extent of instructions - 
with how you propose to force a computer to learn below.

Why is it that if you tell a child [real AGI] what to do, it will never learn?

Why can and does a human learner get to ask questions and a computer doesn't?

How come you [a real AGI] can get to choose your instructors and textbooks, 
and/or whether you choose to pay attention to them, and a computer can't?

Why do computers stop learning once they've done what they're told, and humans 
and animals never stop and keep going on to learn ever new activities?

What and how many are the fundamental differences between how real AGI's and 
computers learn?




Mike, I think we all agree that we should not have to tell an AGI the steps to 
solving problems. It should learn and figure it out, like the way that people 
figure it out.


The question is how to do that. We know that it is possible. For example, I 
could write a chess program that I could not win against. I could write the 
program in such a way that it learns to improve its game by playing against 
itself or other opponents. I could write it in such a way that it initially does 
not know the rules for chess, but instead learns the rules by being given 
examples of legal and illegal moves.


What we have not yet been able to do is scale this type of learning and problem 
solving up to general, human level intelligence. I believe it is possible, but 
it will require lots of training data and lots of computing power. It is not 
something you could do on a PC, and it won't be cheap.

 
-- Matt Mahoney, matmaho...@yahoo.com 






From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Mon, July 19, 2010 9:07:53 PM
Subject: Re: [agi] Of definitions and tests of AGI


The issue isn't what a computer can do. The issue is how you structure the 
computer's or any agent's thinking about a problem. Programs/Turing machines 
are only one way of structuring thinking/problemsolving - by, among other 
things, giving the computer a method/process of solution. There is an 
alternative way of structuring a computer's thinking, which incl., among other 
things, not giving it a method/process of solution, but making it rather than 
a human programmer do the real problemsolving.  More of that another time.


From: Matt Mahoney 
Sent: Tuesday, July 20, 2010 1:38 AM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Creativity is the good feeling you get when you discover a clever solution to a 
hard problem without knowing the process you used to discover it.


I think a computer could do that.

 
-- Matt Mahoney, matmaho...@yahoo.com 






From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Mon, July 19, 2010 2:08:28 PM
Subject: Re: [agi] Of definitions and tests of AGI


Yes that's what people do, but it's not what programmed computers do.

The useful formulation that emerges here is:

narrow AI (and in fact all rational) problems have *a method of solution* (to 
be equated with general method) - and are programmable (a program is a 
method of solution)

AGI (and in fact all creative) problems do NOT have *a method of solution* (in 
the general sense) - rather a one-off *way of solving the problem* has to be 
improvised each time.

AGI/creative problems do not in fact have a method of solution, period. There 
is no (general) method of solving either the toy box or the build-a-rock-wall 
problem - one essential feature which makes them AGI.

You can learn, as you indicate, from *parts* of any given AGI/creative 
solution, and apply the lessons to future problems - and indeed with practice, 
should improve at solving any given kind of AGI/creative problem. But you can 
never apply a *whole* solution/way to further problems.

P.S. One should add that in terms of computers, we are talking here of 
*complete, step-by-step* methods of solution.



From: rob levy 
Sent: Monday, July 19, 2010 5:09 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


  
  And are you happy with:

  AGI is about devising *one-off* methods of problemsolving (that only apply to 
the individual problem, and cannot be re-used - at least not in their totality)


Yes exactly, isn't that what people do?  Also, I think that being able to 
recognize where past solutions can be generalized and where past solutions can 
be varied and reused is a detail of how intelligence works that is likely to be 
universal.

 
  vs

  narrow AI is about applying pre-existing *general* methods of problemsolving  
(applicable to whole classes of problems)?




  From: rob levy 
  Sent: Monday, July 19, 2010 4:45 PM
  To: agi 
  Subject: Re: [agi

Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread rob levy
A child AGI should be expected to need help learning how to solve many
problems, and even be told what the steps are.  But at some point it needs
to have developed general problem-solving skills.  But I feel like this is
all stating the obvious.

On Tue, Jul 20, 2010 at 11:32 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Mike, I think we all agree that we should not have to tell an AGI the steps
 to solving problems. It should learn and figure it out, like the way that
 people figure it out.







Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread Mike Tintner
Infants *start* with general learning skills - they have to extensively 
discover for themselves how to do most things - control head, reach out, turn 
over, sit up, crawl, walk - and also have to work out perceptually what the 
objects they see are, and what they do... and what sounds are, and how they 
form words, and how those words relate to objects - and how language works

it is this capacity to keep discovering ways of doing things, that is a major 
motivation in their continually learning new activities - continually seeking 
novelty, and getting bored with too repetitive activities

obviously an AGI needs some help.. but at the mo. all projects get *full* help/ 
*complete* instructions - IOW are merely dressed up versions of narrow AI

no one AFAIK is dealing with the issue of how do you produce a true 
goalseeking agent who *can* discover things for itself? - an agent that, 
like humans and animals, can *find* its way to its goals generally, as well as 
to learning new activities, on its own initiative - rather than by following 
instructions.  (The full instruction method only works in artificial, 
controlled environments and can't possibly work in the real, uncontrollable 
world - where future conditions are highly unpredictable, even by the sagest 
instructor). [Ben BTW strikes me as merely gesturing at all this].

There really can't be any serious argument about this - humans and animals 
clearly learn all their activities with v. limited and largely general rather 
than step-by-step instructions.

You may want to argue there is an underlying general program that effectively 
specifies every step they must take (good luck) - but with respect to all their 
specialist/particular activities - think having a conversation, sex, writing a 
post, an essay, fantasying, shopping, browsing the net, reading a newspaper - 
etc etc. - you got and get v. little step-by-step instruction about these and 
all your other activities

So AGI's require a fundamentally and massively different paradigm of 
instruction from the programmed, comprehensive, step-by-step paradigm of narrow AI.

[The rock wall/toybox tests BTW are AGI activities, where it is *impossible* to 
give full instructions, or produce a formula, whatever you may want to do].


From: rob levy 
Sent: Wednesday, July 21, 2010 3:56 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


A child AGI should be expected to need help learning how to solve many 
problems, and even be told what the steps are.  But at some point it needs to 
have developed general problem-solving skills.  But I feel like this is all 
stating the obvious.


On Tue, Jul 20, 2010 at 11:32 PM, Matt Mahoney matmaho...@yahoo.com wrote:

  Mike, I think we all agree that we should not have to tell an AGI the steps 
to solving problems. It should learn and figure it out, like the way that 
people figure it out.









Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread rob levy
I completely agree with this characterization; I was just pointing out the
importance of already-existing generally intelligent entities in providing
scaffolding for the system's learning and meta-learning processes.

On Wed, Jul 21, 2010 at 12:25 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Infants *start* with general learning skills - they have to extensively
 discover for themselves how to do most things - control head, reach out,
 turn over, sit up, crawl, walk - and also have to work out perceptually what
 the objects they see are, and what they do... and what sounds are, and how
 they form words, and how those words relate to objects - and how language
 works

 it is this capacity to keep discovering ways of doing things, that is a
 major motivation in their continually learning new activities - continually
 seeking novelty, and getting bored with too repetitive activities

 obviously an AGI needs some help.. but at the mo. all projects get *full*
 help/ *complete* instructions - IOW are merely dressed up versions of narrow
 AI

 no one AFAIK is dealing with the issue of how do you produce a true
 goalseeking agent who *can* discover things for itself? - an agent that,
 like humans and animals, can *find* its way to its goals generally, as well
 as to learning new activities, on its own initiative - rather than by
 following instructions.  (The full instruction method only works in
 artificial, controlled environments and can't possibly work in the real,
 uncontrollable world - where future conditions are highly unpredictable,
 even by the sagest instructor). [Ben BTW strikes me as merely gesturing at
 all this].

 There really can't be any serious argument about this - humans and animals
 clearly learn all their activities with v. limited and largely general
 rather than step-by-step instructions.

 You may want to argue there is an underlying general program that
 effectively specifies every step they must take (good luck) - but with
 respect to all their specialist/particular activities - think having a
 conversation, sex, writing a post, an essay, fantasying, shopping, browsing
 the net, reading a newspaper - etc etc. - you got and get v. little
 step-by-step instruction about these and all your other activities

 So AGI's require a fundamentally and massively different paradigm of
 instruction from the programmed, comprehensive, step-by-step paradigm of narrow
 AI.

 [The rock wall/toybox tests BTW are AGI activities, where it is
 *impossible* to give full instructions, or produce a formula, whatever you
 may want to do].

  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Wednesday, July 21, 2010 3:56 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

 A child AGI should be expected to need help learning how to solve many
 problems, and even be told what the steps are.  But at some point it needs
 to have developed general problem-solving skills.  But I feel like this is
 all stating the obvious.

 On Tue, Jul 20, 2010 at 11:32 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Mike, I think we all agree that we should not have to tell an AGI the
 steps to solving problems. It should learn and figure it out, like the way
 that people figure it out.








Re: [agi] Of definitions and tests of AGI

2010-07-20 Thread Matt Mahoney
Mike, I think we all agree that we should not have to tell an AGI the steps to 
solving problems. It should learn and figure it out, like the way that people 
figure it out.

The question is how to do that. We know that it is possible. For example, I 
could write a chess program that I could not win against. I could write the 
program in such a way that it learns to improve its game by playing against 
itself or other opponents. I could write it in such a way that it initially does 
not know the rules for chess, but instead learns the rules by being given 
examples of legal and illegal moves.
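
A toy sketch of that last idea, for concreteness (Python; the piece labels and
the displacement-only model of legality are simplifying assumptions - real
chess legality also depends on board state, blocking, captures, castling and
so on):

from collections import defaultdict

legal_deltas = defaultdict(set)    # piece type -> displacements labelled legal
illegal_deltas = defaultdict(set)  # piece type -> displacements labelled illegal

def observe(piece, frm, to, is_legal):
    """Record one labelled example of a move."""
    delta = (to[0] - frm[0], to[1] - frm[1])
    (legal_deltas if is_legal else illegal_deltas)[piece].add(delta)

def predict(piece, frm, to):
    """Guess the legality of an unseen move from the examples seen so far."""
    delta = (to[0] - frm[0], to[1] - frm[1])
    if delta in legal_deltas[piece]:
        return True
    if delta in illegal_deltas[piece]:
        return False
    return None  # not enough evidence yet

observe('N', (0, 1), (2, 2), True)   # a knight jump, labelled legal
observe('N', (0, 1), (3, 3), False)  # labelled illegal
print(predict('N', (4, 4), (6, 5)))  # True - same displacement as the legal example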

What we have not yet been able to do is scale this type of learning and problem 
solving up to general, human level intelligence. I believe it is possible, but 
it will require lots of training data and lots of computing power. It is not 
something you could do on a PC, and it won't be cheap.

 -- Matt Mahoney, matmaho...@yahoo.com





From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Mon, July 19, 2010 9:07:53 PM
Subject: Re: [agi] Of definitions and tests of AGI


The issue isn't what a computer can do. The issue is how you structure the 
computer's or any agent's thinking about a problem. Programs/Turing machines 
are only one way of structuring thinking/problemsolving - by, among other 
things, giving the computer a method/process of solution. There is an 
alternative way of structuring a computer's thinking, which incl., among other 
things, not giving it a method/process of solution, but making it rather than 
a human programmer do the real problemsolving.  More of that another time.


From: Matt Mahoney 
Sent: Tuesday, July 20, 2010 1:38 AM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI

Creativity is the good feeling you get when you discover a clever solution to a 
hard problem without knowing the process you used to discover it.

I think a computer could do that.

 -- Matt Mahoney, matmaho...@yahoo.com 





 From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Mon, July 19, 2010 2:08:28  PM
Subject: Re: [agi] Of  definitions and tests of AGI


Yes that's what people do, but it's not what programmed computers do.

The useful formulation that emerges here is:

narrow AI (and in fact all rational) problems have *a method of solution* (to 
be equated with general method) - and are programmable (a program is a 
method of solution)

AGI (and in fact all creative) problems do NOT have *a method of solution* (in 
the general sense) - rather a one-off *way of solving the problem* has to be 
improvised each time.

AGI/creative problems do not in fact have a method of solution, period. There 
is no (general) method of solving either the toy box or the build-a-rock-wall 
problem - one essential feature which makes them AGI.

You can learn, as you indicate, from *parts* of any given AGI/creative 
solution, and apply the lessons to future problems - and indeed with practice, 
should improve at solving any given kind of AGI/creative problem. But you can 
never apply a *whole* solution/way to further problems.

P.S. One should add that in terms of computers, we are talking here of 
*complete, step-by-step* methods of solution.
 


From: rob levy 
Sent: Monday, July 19, 2010 5:09 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI
  
And are you happy with:
 
AGI is about devising *one-off* methods of problemsolving (that only apply to 
the individual problem, and cannot be re-used - at least not in their totality)
 

Yes exactly, isn't that what people do?  Also, I think that being able to 
recognize where past solutions can be generalized and where past solutions can 
be varied and reused is a detail of how intelligence works that is likely to 
be universal.

 
vs
 
narrow AI is about applying pre-existing *general* methods of 
problemsolving (applicable to whole classes of problems)?
 
 


From: rob levy 
Sent: Monday, July 19, 2010 4:45 PM
To: agi 
Subject: Re: [agi] Of definitions and tests ofAGI

Well, "solving ANY problem" is a little too strong.  This is AGI, not AGH 
(artificial godhead), though AGH could be an unintended consequence ;).  So I 
would rephrase "solving any problem" as "being able to come up with reasonable 
approaches and strategies to any problem" (just as humans are able to do).


On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

Whaddya mean by "solve the problem of how to solve problems"? Develop a 
universal approach to solving any problem? Or find a method of solving a 
class of problems? Or what?


From: rob levy 
Sent: Monday, July 19, 2010 1:26 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of  AGI


 
However, I see that there are no valid definitions of AGI that 
explain what AGI is generally, and why

Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread deepakjnath

 So if I have a system that is close to AGI, I have no way of really
 knowing it, right?

 Even if I believe that my system is a true AGI, there is no way of
 convincing others irrefutably that this system is indeed an AGI, not just
 an advanced AI system.

 I have read the toy box problem and the rock wall problem, but I am sure
 not many people will still be convinced.

 I wanted to know if there is any consensus on a general problem
 which can be solved, and only solved, by a true AGI. Without such a test bench,
 how will we know if we are moving closer or further away in our quest? There is
 no map.

 Deepak



 On Sun, Jul 18, 2010 at 11:50 PM, Mike Tintner tint...@blueyonder.co.uk
  wrote:

  I realised that what is needed is a *joint* definition *and* range of
 tests of AGI.

 Benjamin Johnston has submitted one valid test - the toy box problem.
 (See archives).

 I have submitted another still simpler valid test - build a rock wall
 from the rocks given (or fill an earth hole with rocks).

 However, I see that there are no valid definitions of AGI that explain
 what AGI is generally, and why these tests are indeed AGI. Google - there
 are v. few defs. of AGI or Strong AI, period.

 The most common - "AGI is human-level intelligence" - is an
 embarrassing non-starter: what distinguishes human intelligence? No
 explanation offered.

 The other two are also inadequate, if not as bad. Ben's - "solves a variety
 of complex problems in a variety of complex environments". Nope, so does a
 multitasking narrow AI. Complexity does not distinguish AGI. Ditto Pei's -
 something to do with "insufficient knowledge and resources"...
 "Insufficient" is open to narrow AI interpretations and reducible to
 mathematically calculable probabilities or uncertainties. That doesn't
 distinguish AGI from narrow AI.

 The one thing we should all be able to agree on (but who can be sure?)
 is that:

 ** an AGI is a general intelligence system, capable of independent
 learning**

 i.e. capable of independently learning new activities/skills with
 minimal guidance or even, ideally, with zero guidance (as humans and
 animals are) - and thus acquiring a general, all-round range of intelligence.

 This is an essential AGI goal - the capacity to keep entering and
 mastering new domains of both mental and physical skills WITHOUT being
 specially programmed each time - that crucially distinguishes it from narrow
 AI's, which have to be individually programmed anew for each new task. Ben's
 AGI dog exemplified this in a v. simple way - the dog is supposed to be able
 to learn to fetch a ball, with only minimal instructions, as real dogs do -
 they can learn a whole variety of new skills with minimal instruction.  But
 I am confident Ben's dog can't actually do this.

 However, the independent learning def., while focussing on the
 distinctive AGI goal, still is not detailed enough by itself.

 It requires further identification of the **cognitive operations** which
 distinguish AGI, and wh. are exemplified by the above tests.

 [I'll stop there for interruptions/comments & continue another time].

  P.S. Deepakjnath,

 It is vital to realise that the overwhelming majority of AGI-ers do not
 *want* an AGI test - Ben has never gone near one, and is merely typical in
 this respect. I'd put almost all AGI-ers here in the same league as the US
 banks, who only want mark-to-fantasy rather than mark-to-market tests of
 their assets.




 --
 cheers,
 Deepak




 --
 cheers,
 Deepak




-- 
cheers,
Deepak





Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
No, Dave & I vaguely agree here that you have to start simple. To think of 
movies is massively confused - rather like saying: when we have created an 
entire new electric supply system for cars, we will have solved the problem of 
replacing gasoline - first you have to focus just on inventing a radically 
cheaper battery, before you consider the possibly hundreds to thousands of 
associated inventions and innovations involved in creating a major new supply 
system.

Here it would be much simpler to focus on understanding a single photographic 
scene - or real, directly-viewed scene - of objects, rather than the many 
thousands involved in a movie.

In terms of language, it would be simpler to focus on understanding just two 
consecutive sentences of a text or section of dialogue  - or even as I've 
already suggested, just the flexible combinations of two words - rather than 
the hundreds of lines and many thousands of words involved in a movie or play 
script.

And even this is probably all too evolved, for humans only came to use formal 
representations of the world v. recently in evolution.

The general point -  a massively important one - is that AGI-ers cannot 
continue to think of AGI in terms of massively complex and evolved intelligent 
systems, as you are doing. You have to start with the simplest possible systems 
and gradually evolve them.  Anything else is a defiance of all the laws of 
technology - and will see AGI continuing to go absolutely nowhere.

From: deepakjnath 
Sent: Monday, July 19, 2010 5:19 AM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Exactly my point. So if I show a demo of an AGI system that can see two movies 
and understand that the plots of the movies are the same even though they are 2 
entirely different movies, you would agree that we have created a true AGI.

Yes, there are always a lot of things we need to do before we reach that level. 
It's just good to know the destination so that we will know it when it arrives.





On Mon, Jul 19, 2010 at 2:18 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Jeez, no AI program can understand *two* consecutive *sentences* in a text - 
can understand any text, period - can understand language, period. And you want 
an AGI that can understand a *story*. You don't seem to understand that 
requires cognitively a fabulous, massively evolved, highly educated, hugely 
complex set of powers.

  No AI can understand a photograph of a scene, period - a crowd scene, a house 
by the river. Programs are hard put to recognize any objects other than those 
in v. standard positions. And you want an AGI that can understand a *movie*. 

  You don't seem to realise that we can't take the smallest AGI *step* yet - 
and you're fantasying about a superevolved AGI globetrotter.

  That's why Benjamin & I tried to focus on v. v. simple tests - they're 
still way too complex & they (or comparable tests) will have to be refined down 
considerably for anyone who is interested in practical vs sci-fi fantasy AGI.

  I recommend looking at Packbots and other military robots and hospital robots 
and the like, and asking how we can free them from their human masters and give 
them the very simplest of capacities to rove and handle the world independently 
- like handling and travelling on rocks. 

  Anyone dreaming of computers or robots that can follow "Gone with The Wind" 
or become a child (real) scientist in the foreseeable future, pace Ben, has no 
realistic understanding of what is involved.

  From: deepakjnath 
  Sent: Sunday, July 18, 2010 9:04 PM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI


  Let me clarify. As you all know, there are some things computers are good at 
doing and some things that humans can do but a computer cannot.

  One of the tests that I was thinking about recently is to have two movies shown 
to the AGI. Both movies will have the same story, but each would be a totally 
different remake of the film, probably in different languages and settings. If 
the AGI is able to understand the sub-plot and say that the story line is 
similar in the two movies, then it could be a good test for AGI structure. 

  The ability of a system to understand its environment and underlying 
sub-plots is an important requirement of AGI.

  Deepak


  On Mon, Jul 19, 2010 at 1:14 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

Please explain/expound freely why you're not convinced - and indicate 
what you expect - and I'll reply - but it may not be till tomorrow.

Re your last point, there def. is no consensus on a general problem/test OR 
a def. of AGI.

One flaw in your expectations seems to be a desire for a single test - 
almost by definition, there is no such thing as

a) a single test - i.e. there should be at least a dual or serial test - 
having passed any given test, like the rock/toy test, the AGI must be presented 
with a new adjacent test for wh. it has had no preparation, like say 
building

Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread deepakjnath
‘The intuitive mind is a sacred gift and the rational  mind is a faithful
servant. We have created a society that honours the servant and has
forgotten the gift.’

‘The intellect has little to do on the road to discovery. There comes a leap
in consciousness, call it intuition or what you will, and the solution comes
to you and you don’t know how or why.’

— Albert Einstein

We are here talking like programmers who need to build a new system: just
divide the problem, solve it one by one, arrange the pieces, and voila. We
are missing something fundamental here. That, I believe, has to come as a
stroke of genius to someone.

thanks,
Deepak




On Mon, Jul 19, 2010 at 4:10 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  No, Dave & I vaguely agree here that you have to start simple. To think
 of movies is massively confused - rather like saying: when we have created
 an entire new electric supply system for cars, we will have solved the
 problem of replacing gasoline - first you have to focus just on inventing a
 radically cheaper battery, before you consider the possibly hundreds to
 thousands of associated inventions and innovations involved in creating a
 major new supply system.

 Here it would be much simpler to focus on understanding a single
 photographic scene - or real, directly-viewed scene - of objects, rather
 than the many thousands involved in a movie.

 In terms of language, it would be simpler to focus on understanding just
 two consecutive sentences of a text or section of dialogue  - or even as
 I've already suggested, just the flexible combinations of two words - rather
 than the hundreds of lines and many thousands of words involved in a movie
 or play script.

 And even this is probably all too evolved, for humans only came to use
 formal representations of the world v. recently in evolution.

 The general point -  a massively important one - is that AGI-ers cannot
 continue to think of AGI in terms of massively complex and evolved
 intelligent systems, as you are doing. You have to start with the simplest
 possible systems and gradually evolve them.  Anything else is a defiance of
 all the laws of technology - and will see AGI continuing to go absolutely
 nowhere.

  *From:* deepakjnath deepakjn...@gmail.com
 *Sent:* Monday, July 19, 2010 5:19 AM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

  Exactly my point. So if I show a demo of an AGI system that can see two
 movies and understand that the plots of the movies are the same even though
 they are 2 entirely different movies, you would agree that we have created a
 true AGI.

 Yes, there are always a lot of things we need to do before we reach that
 level. It's just good to know the destination so that we will know it when it
 arrives.




  On Mon, Jul 19, 2010 at 2:18 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

   Jeez, no AI program can understand *two* consecutive *sentences* in a
 text - can understand any text, period - can understand language, period. And
 you want an AGI that can understand a *story*. You don't seem to understand
 that requires cognitively a fabulous, massively evolved, highly educated,
 hugely complex set of powers.

 No AI can understand a photograph of a scene, period - a crowd scene, a
 house by the river. Programs are hard put to recognize any objects other
 than those in v. standard positions. And you want an AGI that can understand
 a *movie*.

  You don't seem to realise that we can't take the smallest AGI *step* yet
 - and you're fantasying about a superevolved AGI globetrotter.

 That's why Benjamin & I tried to focus on v. v. simple tests - they're
 still way too complex & they (or comparable tests) will have to be refined
 down considerably for anyone who is interested in practical vs sci-fi
 fantasy AGI.

 I recommend looking at Packbots and other military robots and hospital
 robots and the like, and asking how we can free them from their human
 masters and give them the very simplest of capacities to rove and handle the
 world independently - like handling and travelling on rocks.

 Anyone dreaming of computers or robots that can follow "Gone with The
 Wind" or become a child (real) scientist in the foreseeable future, pace Ben,
 has no realistic understanding of what is involved.
  *From:* deepakjnath deepakjn...@gmail.com
 *Sent:* Sunday, July 18, 2010 9:04 PM
   *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

  Let me clarify. As you all know, there are some things computers are good at
 doing and some things that humans can do but a computer cannot.

 One of the tests that I was thinking about recently is to have two movies
 shown to the AGI. Both movies will have the same story, but each would be a
 totally different remake of the film, probably in different languages and
 settings. If the AGI is able to understand the sub-plot and say that the
 story line is similar in the two movies, then it could be a good test for AGI
 structure

Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
Non-reply.

Name one industry/section of technology that began with, say, the invention of 
the car, skipping all the many thousands of stages from the invention of the 
wheel. What you and others are proposing is far, far more outrageous.

It won't require one but a million strokes of genius in one - a stroke of 
divinity. More fantasy AGI.


From: deepakjnath 
Sent: Monday, July 19, 2010 12:00 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


‘The intuitive mind is a sacred gift and the rational  mind is a faithful 
servant. We have created a society that honours the servant and has forgotten 
the gift.’

‘The intellect has little to do on the road to discovery. There comes a leap in 
consciousness, call it intuition or what you will, and the solution comes to 
you and you don’t know how or why.’

— Albert Einstein

We are here talking like programmers who need to build a new system: just 
divide the problem, solve it one by one, arrange the pieces, and voila. We are 
missing something fundamental here. That, I believe, has to come as a stroke of 
genius to someone.

thanks,
Deepak





On Mon, Jul 19, 2010 at 4:10 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  No, Dave & I vaguely agree here that you have to start simple. To think of 
movies is massively confused - rather like saying: when we have created an 
entire new electric supply system for cars, we will have solved the problem of 
replacing gasoline - first you have to focus just on inventing a radically 
cheaper battery, before you consider the possibly hundreds to thousands of 
associated inventions and innovations involved in creating a major new supply 
system.

  Here it would be much simpler to focus on understanding a single photographic 
scene - or real, directly-viewed scene - of objects, rather than the many 
thousands involved in a movie.

  In terms of language, it would be simpler to focus on understanding just two 
consecutive sentences of a text or section of dialogue  - or even as I've 
already suggested, just the flexible combinations of two words - rather than 
the hundreds of lines and many thousands of words involved in a movie or play 
script.

  And even this is probably all too evolved, for humans only came to use formal 
representations of the world v. recently in evolution.

  The general point -  a massively important one - is that AGI-ers cannot 
continue to think of AGI in terms of massively complex and evolved intelligent 
systems, as you are doing. You have to start with the simplest possible systems 
and gradually evolve them.  Anything else is a defiance of all the laws of 
technology - and will see AGI continuing to go absolutely nowhere.

  From: deepakjnath 
  Sent: Monday, July 19, 2010 5:19 AM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI


  Exactly my point. So if I show a demo of an AGI system that can see two 
movies and understand that the plots of the movies are the same even though they 
are 2 entirely different movies, you would agree that we have created a true AGI.

  Yes, there are always a lot of things we need to do before we reach that level. 
It's just good to know the destination so that we will know it when it arrives.





  On Mon, Jul 19, 2010 at 2:18 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

Jeez, no AI program can understand *two* consecutive *sentences* in a text 
- can understand any text, period - can understand language, period. And you 
want an AGI that can understand a *story*. You don't seem to understand that 
requires cognitively a fabulous, massively evolved, highly educated, hugely 
complex set of powers.

No AI can understand a photograph of a scene, period - a crowd scene, a 
house by the river. Programs are hard put to recognize any objects other than 
those in v. standard positions. And you want an AGI that can understand a 
*movie*. 

You don't seem to realise that we can't take the smallest AGI *step* yet - 
and you're fantasying about a superevolved AGI globetrotter.

That's why Benjamin & I tried to focus on v. v. simple tests - they're 
still way too complex & they (or comparable tests) will have to be refined down 
considerably for anyone who is interested in practical vs sci-fi fantasy AGI.

I recommend looking at Packbots and other military robots and hospital 
robots and the like, and asking how we can free them from their human masters 
and give them the very simplest of capacities to rove and handle the world 
independently - like handling and travelling on rocks. 

Anyone dreaming of computers or robots that can follow "Gone with The Wind" 
or become a child (real) scientist in the foreseeable future, pace Ben, has no 
realistic understanding of what is involved.

From: deepakjnath 
Sent: Sunday, July 18, 2010 9:04 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Let me clarify. As you all know, there are some things computers are good at 
doing and some things

Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread rob levy


 However, I see that there are no valid definitions of AGI that explain what
AGI is generally, and why these tests are indeed AGI. Google - there are v.
 few defs. of AGI or Strong AI, period.



I like Fogel's idea that intelligence is the ability to solve the problem
of how to solve problems in new and changing environments.  I don't think
Fogel's method accomplishes this, but the goal he expresses seems to be the
goal of AGI as I understand it.

Rob





Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
Whaddya mean by "solve the problem of how to solve problems"? Develop a 
universal approach to solving any problem? Or find a method of solving a class 
of problems? Or what?


From: rob levy 
Sent: Monday, July 19, 2010 1:26 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI



  However, I see that there are no valid definitions of AGI that explain what 
AGI is generally, and why these tests are indeed AGI. Google - there are v. 
few defs. of AGI or Strong AI, period.




I like Fogel's idea that intelligence is the ability to solve the problem of 
how to solve problems in new and changing environments.  I don't think Fogel's 
method accomplishes this, but the goal he expresses seems to be the goal of AGI 
as I understand it. 


Rob





Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread rob levy
Well, "solving ANY problem" is a little too strong.  This is AGI, not AGH
(artificial godhead), though AGH could be an unintended consequence ;).  So
I would rephrase "solving any problem" as "being able to come up with
reasonable approaches and strategies to any problem" (just as humans are able
to do).

On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Whaddya mean by "solve the problem of how to solve problems"? Develop a
 universal approach to solving any problem? Or find a method of solving a
 class of problems? Or what?

  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 1:26 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI


 However, I see that there are no valid definitions of AGI that explain
 what AGI is generally, and why these tests are indeed AGI. Google - there
 are v. few defs. of AGI or Strong AI, period.



 I like Fogel's idea that intelligence is the ability to solve the problem
 of how to solve problems in new and changing environments.  I don't think
 Fogel's method accomplishes this, but the goal he expresses seems to be the
 goal of AGI as I understand it.

 Rob






Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread rob levy
Fogel originally used the phrase to argue that evolutionary computation
makes sense as a cognitive architecture for a general-purpose AI problem
solver.
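
For concreteness, the generic loop behind that framing looks something like
this (a bare-bones sketch; the callbacks, parameter values and the
one-dimensional toy example are illustrative, not Fogel's own formulation):

import random

def evolve(fitness, random_solution, mutate, generations=100, pop_size=50):
    """Generic evolutionary search: reusable on any problem given the three callbacks."""
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)  # rank by fitness
        parents = population[:pop_size // 2]        # truncation selection
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

# Toy usage: maximise f(x) = -(x - 3)^2; the search converges near x = 3.
best = evolve(fitness=lambda x: -(x - 3) ** 2,
              random_solution=lambda: random.uniform(-10, 10),
              mutate=lambda x: x + random.gauss(0, 0.5))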

On Mon, Jul 19, 2010 at 11:45 AM, rob levy r.p.l...@gmail.com wrote:

 Well, "solving ANY problem" is a little too strong.  This is AGI, not AGH
 (artificial godhead), though AGH could be an unintended consequence ;).  So
 I would rephrase "solving any problem" as "being able to come up with
 reasonable approaches and strategies to any problem" (just as humans are able
 to do).


 On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner 
  tint...@blueyonder.co.uk wrote:

  Whaddya mean by "solve the problem of how to solve problems"? Develop a
 universal approach to solving any problem? Or find a method of solving a
 class of problems? Or what?

  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 1:26 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI


 However, I see that there are no valid definitions of AGI that explain
 what AGI is generally, and why these tests are indeed AGI. Google - there
 are v. few defs. of AGI or Strong AI, period.



 I like Fogel's idea that intelligence is the ability to solve the problem
 of how to solve problems in new and changing environments.  I don't think
 Fogel's method accomplishes this, but the goal he expresses seems to be the
 goal of AGI as I understand it.

 Rob








Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
OK, so you're saying: AGI is solving problems where you have to *devise* a 
method of solution/solving the problem - and is that devising in effect or 
actually/formally?

vs

narrow AI, wh. is where you *apply* a pre-existing method of solution/solving 
the problem?

And are you happy with:

AGI is about devising *one-off* methods of problemsolving (that only apply to 
the individual problem, and cannot be re-used - at least not in their totality)

vs

narrow AI is about applying pre-existing *general* methods of problemsolving  
(applicable to whole classes of problems)?




From: rob levy 
Sent: Monday, July 19, 2010 4:45 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Well, "solving ANY problem" is a little too strong.  This is AGI, not AGH 
(artificial godhead), though AGH could be an unintended consequence ;).  So I 
would rephrase "solving any problem" as "being able to come up with reasonable 
approaches and strategies to any problem" (just as humans are able to do).


On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Whaddya mean by "solve the problem of how to solve problems"? Develop a 
universal approach to solving any problem? Or find a method of solving a class 
of problems? Or what?


  From: rob levy 
  Sent: Monday, July 19, 2010 1:26 PM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI



However, I see that there are no valid definitions of AGI that explain what 
AGI is generally, and why these tests are indeed AGI. Google - there are v. 
few defs. of AGI or Strong AI, period.




  I like Fogel's idea that intelligence is the ability to solve the problem of 
how to solve problems in new and changing environments.  I don't think Fogel's 
method accomplishes this, but the goal he expresses seems to be the goal of AGI 
as I understand it. 


  Rob








Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread rob levy
 And are you happy with:

 AGI is about devising *one-off* methods of problemsolving (that only apply
 to the individual problem, and cannot be re-used - at least not in their
 totality)



Yes exactly, isn't that what people do?  Also, I think that being able to
recognize where past solutions can be generalized and where past solutions
can be varied and reused is a detail of how intelligence works that is
likely to be universal.



 vs

 narrow AI is about applying pre-existing *general* methods of
 problemsolving  (applicable to whole classes of problems)?



  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 4:45 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

 Well, "solving ANY problem" is a little too strong.  This is AGI, not AGH
 (artificial godhead), though AGH could be an unintended consequence ;).  So
 I would rephrase "solving any problem" as "being able to come up with
 reasonable approaches and strategies to any problem" (just as humans are able
 to do).

 On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner 
  tint...@blueyonder.co.uk wrote:

  Whaddya mean by "solve the problem of how to solve problems"? Develop a
 universal approach to solving any problem? Or find a method of solving a
 class of problems? Or what?

  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 1:26 PM
  *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI


  However, I see that there are no valid definitions of AGI that explain
 what AGI is generally, and why these tests are indeed AGI. Google - there
 are v. few defs. of AGI or Strong AI, period.



 I like Fogel's idea that intelligence is the ability to solve the problem
 of how to solve problems in new and changing environments.  I don't think
 Fogel's method accomplishes this, but the goal he expresses seems to be the
 goal of AGI as I understand it.

 Rob






Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
Yes that's what people do, but it's not what programmed computers do.

The useful formulation that emerges here is:

narrow AI (and in fact all rational) problems have *a method of solution* (to 
be equated with general method) - and are programmable (a program is a 
method of solution)

AGI (and in fact all creative) problems do NOT have *a method of solution* (in 
the general sense) - rather a one-off *way of solving the problem* has to be 
improvised each time.

AGI/creative problems do not in fact have a method of solution, period. There 
is no (general) method of solving either the toy box or the build-a-rock-wall 
problem - one essential feature which makes them AGI.

You can learn, as you indicate, from *parts* of any given AGI/creative 
solution, and apply the lessons to future problems - and indeed with practice, 
should improve at solving any given kind of AGI/creative problem. But you can 
never apply a *whole* solution/way to further problems.

P.S. One should add that in terms of computers, we are talking here of 
*complete, step-by-step* methods of solution.



From: rob levy 
Sent: Monday, July 19, 2010 5:09 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


  
  And are you happy with:

  AGI is about devising *one-off* methods of problemsolving (that only apply to 
the individual problem, and cannot be re-used - at 

  least not in their totality)


Yes exactly, isn't that what people do?  Also, I think that being able to 
recognize where past solutions can be generalized and where past solutions can 
be varied and reused is a detail of how intelligence works that is likely to be 
universal.

 
  vs

  narrow AI is about applying pre-existing *general* methods of problemsolving
(applicable to whole classes of problems)?
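
One possible reading of "varied and reused" as code: a toy case-based loop, an
illustration rather than anything proposed in this thread. It retrieves the
nearest past solution and adapts it, falling back to improvising from scratch,
so parts of old solutions carry over but never a whole one. The dist, adapt
and solve_fresh functions are left as parameters because they are exactly the
hard, domain-dependent part.

def nearest(case_library, problem, dist):
    # Retrieve the stored (problem, solution) pair closest to the new problem.
    return min(case_library, key=lambda case: dist(case[0], problem))

def solve_with_reuse(problem, case_library, dist, adapt, solve_fresh,
                     threshold=2.0):
    if case_library:
        old_problem, old_solution = nearest(case_library, problem, dist)
        if dist(old_problem, problem) <= threshold:
            # Reuse *parts* of a past solution by varying it to fit.
            return adapt(old_solution, old_problem, problem)
    solution = solve_fresh(problem)           # improvise a one-off solution
    case_library.append((problem, solution))  # remember it for next time
    return solution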






Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Matt Mahoney
Creativity is the good feeling you get when you discover a clever solution to a 
hard problem without knowing the process you used to discover it.

I think a computer could do that.

 -- Matt Mahoney, matmaho...@yahoo.com







Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread Mike Tintner
The issue isn't what a computer can do. The issue is how you structure the 
computer's or any agent's thinking about a problem. Programs/Turing machines 
are only one way of structuring thinking/problemsolving - by, among other 
things, giving the computer a method/process of solution. There is an 
alternative way of structuring a computer's thinking, which incl., among other
things, not giving it a method/process of solution, but making it - rather than
a human programmer - do the real problemsolving. More of that another time.




Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread deepakjnath
So if I have a system that is close to AGI, I have no way of really knowing
it, right?

Even if I believe that my system is a true AGI, there is no way of convincing
the others irrefutably that this system is indeed an AGI and not just an
advanced AI system.

I have read the toy box problem and the rock wall problem, but I am sure not
many people will be convinced.

I wanted to know if there is any consensus on a general problem which can be
solved, and only solved, by a true AGI. Without such a test bench, how will
we know if we are moving closer to or further from our quest? There is no
map.

Deepak



On Sun, Jul 18, 2010 at 11:50 PM, Mike Tintner tint...@blueyonder.co.ukwrote:

  I realised that what is needed is a *joint* definition *and* range of
 tests of AGI.

 Benjamin Johnston has submitted one valid test - the toy box problem. (See
 archives).

 I have submitted another still simpler valid test - build a rock wall from
 the rocks given (or fill an earth hole with rocks).

 However, I see that there are no valid definitions of AGI that explain what
 AGI is generally, and why these tests are indeed AGI. Google - there are v.
 few defs. of AGI or Strong AI, period.

 The most common - AGI is human-level intelligence - is an embarrassing
 non-starter: what distinguishes human intelligence? No explanation offered.

 The other two are also inadequate if not as bad: Ben's solves a variety of
 complex problems in a variety of complex environments. Nope, so does a
 multitasking narrow AI. Complexity does not distinguish AGI. Ditto Pei's -
 something to do with insufficient knowledge and resources...
 Insufficient is open to narrow AI interpretations and reducible to
 mathematically calculable probabilities or uncertainties. That doesn't
 distinguish AGI from narrow AI.

 The one thing we should all be able to agree on (but who can be sure?) is
 that:

 ** an AGI is a general intelligence system, capable of independent
 learning**

 i.e. capable of independently learning new activities/skills with minimal
 guidance or even, ideally, with zero guidance (as humans and animals are) -
 and thus acquiring a general, all-round range of intelligence.

 This is an essential AGI goal - the capacity to keep entering and
 mastering new domains of both mental and physical skills WITHOUT being
 specially programmed each time - that crucially distinguishes it from narrow
 AI's, which have to be individually programmed anew for each new task. Ben's
 AGI dog exemplified this in a v simple way - the dog is supposed to be able
 to learn to fetch a ball, with only minimal instructions, as real dogs do -
 they can learn a whole variety of new skills with minimal instruction. But
 I am confident Ben's dog can't actually do this.

 However, the independent learning def., while focussing on the distinctive
 AGI goal, still is not detailed enough by itself.

 It requires further identification of the **cognitive operations** which
 distinguish AGI, and which are exemplified by the above tests.

 [I'll stop there for interruptions/comments and continue another time].
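
As a floor for what independent learning might mean operationally, here is a
minimal sketch - an illustration only, far short of even the dog example: one
learner with zero task-specific code, where the task enters solely through
reward feedback. Tabular Q-learning on a trivial corridor world; both pieces
are invented for the example, and any environment with the same reset/step
interface can be swapped in without touching the learner.

import random
from collections import defaultdict

def q_learn(env, episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    # The task is specified only by the rewards env.step() returns.
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = (random.choice(env.actions) if random.random() < eps
                 else max(env.actions, key=lambda act: Q[(s, act)]))
            s2, r, done = env.step(a)
            best_next = max(Q[(s2, act)] for act in env.actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

class Corridor:
    # Trivial task: walk from cell 0 to cell 4.
    actions = (-1, +1)
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, a):
        self.pos = min(4, max(0, self.pos + a))
        return self.pos, (1.0 if self.pos == 4 else -0.01), self.pos == 4

Q = q_learn(Corridor())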

  P.S. Deepakjnath,

 It is vital to realise that the overwhelming majority of AGI-ers do not
 *want* an AGI test - Ben has never gone near one, and is merely typical in
 this respect. I'd put almost all AGI-ers here in the same league as the US
 banks, who only want mark-to-fantasy rather than mark-to-market tests of
 their assets.




-- 
cheers,
Deepak





Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread David Jones
If you can't convince someone, clearly something is wrong with it. I don't
think a test is the right way to do this, which is why I haven't commented
much. When you understand how to create AGI, it will be obvious that it is
AGI, or that it is what you intend it to be. You'll then understand how what
you have built fits into the bigger scheme of things. There is no sharp point
at which you can say something is AGI and something else is not. Intelligence
is a very subjective thing that really depends on your goals. Someone will
always say it is not good enough. But if it really works, people will quickly
realize it based on results.

What you want is to develop a system that can learn about the world or its
environment in a general way, so that it can solve arbitrary problems, plan
in general ways, act in general ways, and achieve the types of goals you
want it to achieve.

Dave


Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread Matt Mahoney
http://www.loebner.net/Prizef/loebner-prize.html

 -- Matt Mahoney, matmaho...@yahoo.com






Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread Mike Tintner
Please explain/expound freely why you're not convinced - and indicate what
you expect - and I'll reply, but it may not be till tomorrow.

Re your last point, there def. is no consensus on a general problem/test OR a
def. of AGI.

One flaw in your expectations seems to be a desire for a single test - almost
by definition, there is no such thing as

a) a single test - i.e. there should be at least a dual or serial test: having
passed any given test, like the rock/toy test, the AGI must be presented with a
new adjacent test for which it has had no preparation - like, say, building
with cushions or sand bags, or packing with fruit (and neither the rock nor the
toy test states that clearly)

b) one kind of test - this is an AGI, so it should be clear that if it can pass
one kind of test, it has the basic potential to go on to many different kinds,
and it doesn't really matter which kind of test you start with - that is partly
the function of having a good definition of AGI.



Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread deepakjnath
Let me clarify. As you all know, there are some things computers are good at
doing and some things that humans can do but a computer cannot.

One of the tests I was thinking about recently is to show two movies to the
AGI. Both movies would have the same story, but one would be a totally
different remake of the other, probably in a different language and setting.
If the AGI is able to understand the sub-plots and say that the storyline is
similar in the two movies, then it could be a good test of AGI structure.

The ability of a system to understand its environment and underlying sub-plots
is an important requirement of AGI.
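
For concreteness, a sketch of only the very last step of that test, under an
enormous assumption: that each film has somehow already been reduced to a
sequence of abstract plot events, which is the genuinely hard, unsolved part.
Shared storyline is then measured as a longest-common-subsequence ratio over
the two event sequences; the event labels are invented for the example.

def lcs_len(a, b):
    # Classic dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[-1][-1]

def plot_similarity(events_a, events_b):
    # 1.0 = identical event order; 0.0 = nothing shared.
    return 2 * lcs_len(events_a, events_b) / (len(events_a) + len(events_b))

film1 = ["meet", "separate", "quest", "betrayal", "reunion"]
film2 = ["meet", "quest", "betrayal", "storm", "reunion"]
print(plot_similarity(film1, film2))  # 0.8: same storyline, different dressing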

Deepak


Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread David Jones
Deepak,

I think you would be much better off focusing on something more practical.
Understanding a movie and all the myriad things going on, their
significance, etc. - that's AI-complete. There is no way you are going to
get there without a hell of a lot of steps in between. So, you might as well
focus on the steps required to get there. Such a test is so complicated
that you cannot even start, except to look for simpler test cases and goals.


My approach to testing AGI has been to define what AGI must accomplish,
which I have in the following steps:
1) understand the environment
2) understand one's own actions and how they affect the environment
3) understand language
4) learn goals from other people through language
5) perform planning and attempt to achieve goals
6) other miscellaneous requirements

Each step must be accomplished in a general way. By general, I mean that it
can solve many, many problems with the same programming.

Each step must be done in order because each step requires previous steps to
proceed. So, to me, the most important place to start is general environment
understanding.

Then, now that you know where to start, you pick more specific goals and
test cases. How do you develop and test general environment understanding?
What is a simple test case you can develop on? What are the fundamental
problems and principles involved? What is required to solve these problems?

Those are the sorts of tests you should be considering. But that only comes
after you decide what AGI requires and steps required. Maybe you'll agree
with me, maybe you won't. So, that's how I would recommend going about it.
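
As a hedged sketch of the kind of generality step 5 above asks for (an
illustration, not Dave's design): a single breadth-first planner that works
for any problem expressed as a start state, a goal test and a successor
function - the same programming across domains - here exercised on a
water-jug puzzle.

from collections import deque

def plan(start, is_goal, successors):
    # Domain-independent breadth-first search over the state graph.
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

# Example domain: measure 4 litres using a 3-litre and a 5-litre jug.
def succ(s):
    a, b = s
    return [("fill A", (3, b)), ("fill B", (a, 5)),
            ("empty A", (0, b)), ("empty B", (a, 0)),
            ("A->B", (a - min(a, 5 - b), b + min(a, 5 - b))),
            ("B->A", (a + min(b, 3 - a), b - min(b, 3 - a)))]

print(plan((0, 0), lambda s: s[1] == 4, succ))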

Dave


Re: [agi] Of definitions and tests of AGI

2010-07-18 Thread Mike Tintner
Jeez, no AI program can understand *two* consecutive *sentences* in a text -
can understand any text, period - can understand language, period. And you want
an AGI that can understand a *story*. You don't seem to understand that this
requires, cognitively, a fabulous, massively evolved, highly educated, hugely
complex set of powers.

No AI can understand a photograph of a scene, period - a crowd scene, a house 
by the river. Programs are hard put to recognize any objects other than those 
in v. standard positions. And you want an AGI that can understand a *movie*. 

You don't seem to realise that we can't take the smallest AGI *step* yet - and
you're fantasising about a superevolved AGI globetrotter.

That's why Benjamin and I tried to focus on v. v. simple tests - they're still
way too complex, and they (or comparable tests) will have to be refined down
considerably for anyone who is interested in practical vs sci-fi fantasy AGI.

I recommend looking at Packbots and other military robots and hospital robots 
and the like, and asking how we can free them from their human masters and give 
them the very simplest of capacities to rove and handle the world independently 
- like handling and travelling on rocks. 

Anyone dreaming of computers or robots that can follow Gone with the Wind or
become a child (real) scientist in the foreseeable future, pace Ben, has no
realistic understanding of what is involved.
