Kirschner, Sweller, and Clark's criticism matches my own subjective impressions to a 
great extent.

However, my theories about how an AGI program could learn fit their criticisms 
as well.  I have already given a great deal of thought to how an infant 
learns, by the way, and I have read a number of books that specifically studied 
aspects of neonatal learning as inferred through the contemporary methods 
of cognitive science.  That certainly does not mean that I (or anyone else) 
actually understand the details of how humans learn, though.

The problem of getting a computer program to integrate new information based on 
previously learned knowledge and previously learned methods of learning is 
obviously significant to AI research, especially for those methods of learning 
that have been shown to be useful with a given subject and with the way the 
subject is presented (i.e., the different ways the computer program might be 
exposed to the subject matter).  It is perfectly reasonable to examine the 
problem both by studying how humans (or other intelligent creatures) learn and 
by trying to get actual computer programs to learn.

I am thinking about writing a program with highly structured ways of learning, 
where it would be necessary to instruct the program to learn by being aware of 
its own tendencies to integrate new information in particular ways.  This 
structural learning would not be human-level learning, but the structural 
methods would be more sophisticated than those of any contemporary AI 
paradigm.  If I actually do this, I would hope to achieve an intermediate step 
toward human-level learning.  The goal here is to show, in as simple a manner 
as possible, how a more sophisticated AI program could integrate new 
information in a way that demonstrates continued extensibility, at least up to 
a point.
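To make the idea a little more concrete, here is a toy sketch of what I mean by a program that is "aware of its tendencies to integrate new information."  All of the names and the two strategies below are my own invented illustrations, not a worked-out design: the learner keeps per-strategy success statistics for each kind of subject presentation, and tries its historically successful integration strategies first.

```python
from collections import defaultdict

class StructuredLearner:
    """Toy learner that tracks which of its own integration
    strategies tend to succeed for each kind of presentation."""

    def __init__(self, strategies):
        # strategies: dict of name -> function(knowledge, item) -> bool
        self.strategies = strategies
        self.knowledge = set()
        # success/attempt counts keyed by (presentation kind, strategy name)
        self.successes = defaultdict(int)
        self.attempts = defaultdict(int)

    def integrate(self, item, kind):
        # Order strategies by their past success rate for this kind of
        # presentation; unseen strategies get a neutral 0.5 prior.
        def score(name):
            a = self.attempts[(kind, name)]
            return self.successes[(kind, name)] / a if a else 0.5

        for name in sorted(self.strategies, key=score, reverse=True):
            self.attempts[(kind, name)] += 1
            if self.strategies[name](self.knowledge, item):
                self.successes[(kind, name)] += 1
                self.knowledge.add(item)
                return name  # the strategy that succeeded
        return None

# Two toy strategies: relate new items to prior knowledge, or fall
# back to rote memorization when no relation is found.
def relate(knowledge, item):
    return any(item[0] == k[0] for k in knowledge)

def rote(knowledge, item):
    return True

learner = StructuredLearner({"relate": relate, "rote": rote})
learner.knowledge.add("cats")
first = learner.integrate("cows", kind="text")   # relating succeeds
second = learner.integrate("dogs", kind="text")  # falls back to rote
print(first, second)
```

Obviously this is far too simple to count as the structural learning I described above, but even a skeleton like this lets you study how the program's record of its own tendencies changes which integration path it takes.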

This concept of an intermediate, less ambitious goal is typical of developing 
technologies, and indeed typical of the advancement of all knowledge.

I mention this because the article reminded me of an idea I have had: to build 
a system of learning for a computer program and then study how it works.  The 
thing is, though, you have to start with something simple and build up from 
there.  On the other hand, if the program does not show some ability to 
continue learning and integrating new information past the bottlenecks 
encountered by past AI paradigms, then it is not likely to be a true 
intermediate step toward a better AI product.

Jim Bromer

----- Original Message ----
From: Mike Tintner <[EMAIL PROTECTED]>
To: [email protected]
Sent: Sunday, June 15, 2008 11:02:39 AM
Subject: Re: [agi] A criticism of minimal guidance during instruction.

Interesting paper. But it all depends on what level of intelligence you are 
looking at. Learning science or medicine is not likely to be the first thing 
an AGI tackles.

You also have to consider how an infant learns about the world. Clearly a great 
deal of that, at least, is independent activity: e.g., learning to crawl, walk, 
discriminate objects, follow them visually, play with and experiment with 
objects, learning the physical properties of objects, learning the rules 
of language, etc.

Jim:
> Here is an interesting criticism of minimal guidance during instruction.
> http://www.cogtech.usc.edu/publications/kirschner_Sweller_Clark.pdf

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com