Matt,

How did you learn to play chess? Or to write programs? How do you teach people 
to write programs?

Compare and contrast - especially the nature, number, and extent of the 
instructions - with how you propose, below, to force a computer to learn.

Why is it that if you "tell a child [real AGI] what to do, it will never learn"?

Why does a human learner get to ask questions when a computer doesn't?

How come you [a real AGI] get to choose your instructors and textbooks, 
and whether you pay attention to them, and a computer can't?

Why do computers stop learning once they've done what they're told, while 
humans and animals never stop, going on to learn ever-new activities?

What are the fundamental differences, and how many are there, between how 
real AGIs and computers learn?

Mike, I think we all agree that we should not have to tell an AGI the steps to 
solving problems. It should figure them out on its own, the way people do.


The question is how to do that. We know that it is possible. For example, I 
could write a chess program that I could not win against. I could write the 
program in such a way that it learns to improve its game by playing against 
itself or other opponents. I could write it in such a way that it initially 
does not know the rules of chess, but instead learns them by being given 
examples of legal and illegal moves.
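
To make the self-play idea concrete, here is a minimal sketch in Python - 
tabular value learning on tic-tac-toe rather than chess, since a plain lookup 
table could never hold chess's state space. The learning rate, exploration 
rate, and the crude end-of-game update below are all arbitrary illustrative 
choices, not a recipe.

import random
from collections import defaultdict

# All eight three-in-a-row lines on a 3x3 board, as index triples.
WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    # Return 'X' or 'O' if that player has three in a row, else None.
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, s in enumerate(board) if s == ' ']

Q = defaultdict(float)       # (board, move) -> learned value estimate
ALPHA, EPSILON = 0.3, 0.1    # learning rate and exploration rate (arbitrary)

def choose(board):
    moves = legal_moves(board)
    if random.random() < EPSILON:    # explore an untried line occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])

def self_play(games=20000):
    for _ in range(games):
        board, player, history = ' ' * 9, 'X', []
        while True:
            m = choose(board)
            history.append((board, m, player))
            board = board[:m] + player + board[m + 1:]
            w = winner(board)
            if w or not legal_moves(board):
                # Push each move's value toward the final result:
                # +1 for the winner's moves, -1 for the loser's, 0 for a draw.
                for b, mv, p in history:
                    r = 0.0 if w is None else (1.0 if p == w else -1.0)
                    Q[(b, mv)] += ALPHA * (r - Q[(b, mv)])
                break
            player = 'O' if player == 'X' else 'X'

self_play()

Nothing about good play is hand-coded into the value table; after enough games 
the program plays a decent game purely from the statistics of its own wins and 
losses. The same loop, with a function approximator in place of the table, is 
how self-play learners handle games as large as chess.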


What we have not yet been able to do is scale this type of learning and 
problem solving up to general, human-level intelligence. I believe it is 
possible, but it will require lots of training data and lots of computing 
power. It is not something you could do on a PC, and it won't be cheap.

 
-- Matt Mahoney, matmaho...@yahoo.com 


--------------------------------------------------------------------------------
From: Mike Tintner <tint...@blueyonder.co.uk>
To: agi <agi@v2.listbox.com>
Sent: Mon, July 19, 2010 9:07:53 PM
Subject: Re: [agi] Of definitions and tests of AGI


The issue isn't what a computer can do. The issue is how you structure the 
computer's - or any agent's - thinking about a problem. Programs/Turing 
machines are only one way of structuring thinking/problem-solving - by, among 
other things, giving the computer a method/process of solution. There is an 
alternative way of structuring a computer's thinking, which includes, among 
other things, not giving it a method/process of solution, but making it, 
rather than a human programmer, do the real problem-solving. More on that 
another time.


From: Matt Mahoney 
Sent: Tuesday, July 20, 2010 1:38 AM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Creativity is the good feeling you get when you discover a clever solution to a 
hard problem without knowing the process you used to discover it.


I think a computer could do that.

 
-- Matt Mahoney, matmaho...@yahoo.com 


--------------------------------------------------------------------------------
From: Mike Tintner <tint...@blueyonder.co.uk>
To: agi <agi@v2.listbox.com>
Sent: Mon, July 19, 2010 2:08:28 PM
Subject: Re: [agi] Of definitions and tests of AGI


Yes that's what people do, but it's not what programmed computers do.

The useful formulation that emerges here is:

Narrow-AI problems (and in fact all rational problems) have *a method of 
solution* (to be equated with a "general" method) - and are programmable (a 
program is a method of solution).

AGI problems (and in fact all creative problems) do NOT have *a method of 
solution* in that general sense - rather, a one-off *way of solving the 
problem* has to be improvised each time.

AGI/creative problems do not in fact have a method of solution, period. There 
is no (general) method of solving either the toy-box or the build-a-rock-wall 
problem - one essential feature that makes them AGI problems.

You can learn, as you indicate, from *parts* of any given AGI/creative 
solution and apply the lessons to future problems - and indeed, with practice, 
you should improve at solving any given kind of AGI/creative problem. But you 
can never apply a *whole* solution/way to further problems.

P.S. One should add that in terms of computers, we are talking here of 
*complete, step-by-step* methods of solution.



From: rob levy 
Sent: Monday, July 19, 2010 5:09 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


  
  And are you happy with:

  AGI is about devising *one-off* methods of problem-solving (that apply only 
  to the individual problem, and cannot be re-used - at least not in their 
  totality)

Yes exactly - isn't that what people do?  Also, I think that being able to 
recognize where past solutions can be generalized and where they can be varied 
and reused is a detail of how intelligence works that is likely to be 
universal.

 
  vs

  narrow AI is about applying pre-existing *general* methods of 
  problem-solving (applicable to whole classes of problems)?




  From: rob levy 
  Sent: Monday, July 19, 2010 4:45 PM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI


  Well, solving ANY problem is a little too strong.  This is AGI, not AGH 
(artificial godhead), though AGH could be an unintended consequence ;).  So I 
would rephrase "solving any problem" as being able to come up with reasonable 
approaches and strategies to any problem (just as humans are able to do).


  On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner <tint...@blueyonder.co.uk> 
wrote:

    Whaddya mean by "solve the problem of how to solve problems"? Develop a 
universal approach to solving any problem? Or find a method of solving a class 
of problems? Or what?


    From: rob levy 
    Sent: Monday, July 19, 2010 1:26 PM
    To: agi 
    Subject: Re: [agi] Of definitions and tests of AGI



      However, I see that there are no valid definitions of AGI that explain 
what AGI is generally, and why these tests are indeed tests of AGI. Google it - 
there are very few definitions of AGI or Strong AI, period.




    I like Fogel's idea that intelligence is the ability to "solve the problem 
of how to solve problems" in new and changing environments.  I don't think 
Fogel's method accomplishes this, but the goal he expresses seems to be the 
goal of AGI as I understand it. 


    Rob
