Yes that's what people do, but it's not what programmed computers do.

The useful formulation that emerges here is:

narrow AI (and in fact all rational) problems have *a method of solution* (to 
be equated with a "general" method) - and are programmable (a program is a 
method of solution).

AGI (and in fact all creative) problems do NOT have *a method of solution* (in 
the general sense) - rather, a one-off *way of solving the problem* has to be 
improvised each time.

AGI/creative problems do not in fact have a method of solution, period. There 
is no (general) method of solving either the toy box or the build-a-rock-wall 
problem - an essential feature that makes them AGI problems.

You can learn, as you indicate, from *parts* of any given AGI/creative 
solution, and apply the lessons to future problems - and indeed, with practice, 
you should improve at solving any given kind of AGI/creative problem. But you 
can never apply a *whole* solution/way to further problems.

P.S. One should add that in terms of computers, we are talking here of 
*complete, step-by-step* methods of solution.
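To make the P.S. concrete, here is a minimal illustrative sketch (binary search is chosen here purely as an example, not something from the thread): a program really is a *complete, step-by-step* method of solution, and the same program solves every instance of its problem class - finding an item in any sorted list - which is the sense of "general method" being contrasted with one-off creative problem-solving.

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent.

    A fixed, complete, step-by-step method: it applies unchanged to the
    whole class of 'search a sorted list' problems, with no improvisation
    needed per instance.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            # Target can only lie in the upper half.
            lo = mid + 1
        else:
            # Target can only lie in the lower half.
            hi = mid - 1
    return -1
```

No analogous program exists for the toy-box or rock-wall problems: there is no fixed sequence of steps that covers every instance, which is exactly the distinction drawn above.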



From: rob levy 
Sent: Monday, July 19, 2010 5:09 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


  
  And are you happy with:

  AGI is about devising *one-off* methods of problem-solving (that only apply 
to the individual problem, and cannot be re-used - at least not in their 
totality)


Yes exactly, isn't that what people do?  Also, I think that being able to 
recognize where past solutions can be generalized and where past solutions can 
be varied and reused is a detail of how intelligence works that is likely to be 
universal.

 
  vs

  narrow AI is about applying pre-existing *general* methods of problem-solving 
(applicable to whole classes of problems)?




  From: rob levy 
  Sent: Monday, July 19, 2010 4:45 PM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI


  Well, solving ANY problem is a little too strong.  This is AGI, not AGH 
(artificial godhead), though AGH could be an unintended consequence ;).  So I 
would rephrase "solving any problem" as being able to come up with reasonable 
approaches and strategies to any problem (just as humans are able to do).


  On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner <tint...@blueyonder.co.uk> 
wrote:

    Whaddya mean by "solve the problem of how to solve problems"? Develop a 
universal approach to solving any problem? Or find a method of solving a class 
of problems? Or what?


    From: rob levy 
    Sent: Monday, July 19, 2010 1:26 PM
    To: agi 
    Subject: Re: [agi] Of definitions and tests of AGI



      However, I see that there are no valid definitions of AGI that explain 
what AGI is generally, and why these tests are indeed AGI tests. Google it - 
there are very few definitions of AGI or Strong AI, period.




    I like Fogel's idea that intelligence is the ability to "solve the problem 
of how to solve problems" in new and changing environments.  I don't think 
Fogel's method accomplishes this, but the goal he expresses seems to be the 
goal of AGI as I understand it. 


    Rob

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
