Re: [agi] AGI bottlenecks

2006-06-09 Thread James Ratcliff
I had similar feelings about William Pearson's recent message about systems that use reinforcement learning: A reinforcement scenario, from Wikipedia, is defined as "Formally, the basic reinforcement learning model consists of: 1. a set of environment states S; 2. a set of actions A; and 3. a
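[A minimal sketch of the formal model quoted above, assuming Python; the names Environment, reset, and step are illustrative only and do not come from the original post.]

    # Sketch of the basic reinforcement-learning model: a set of environment
    # states S, a set of actions A, and scalar rewards in the reals.
    from typing import Optional, Set, Tuple

    class Environment:
        def __init__(self, states: Set[str], actions: Set[str]):
            self.states = states      # S: the set of environment states
            self.actions = actions    # A: the set of actions
            self.current: Optional[str] = None

        def reset(self, start_state: str) -> str:
            """Place the environment in a known starting state."""
            assert start_state in self.states
            self.current = start_state
            return self.current

        def step(self, action: str) -> Tuple[str, float]:
            """Apply an action; return the next state and a scalar reward."""
            assert action in self.actions
            # The transition and reward functions are left abstract in this sketch.
            next_state = self.current
            reward = 0.0
            return next_state, reward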

Re: [agi] AGI bottlenecks

2006-06-09 Thread James Ratcliff
Richard, can you explain the second part of this post differently, in other words? I am very interested in this as a large part of an AI system. I believe in some fashion there needs to be a controlling algorithm that tells the AI that it is doing "Right", be it either an internal or external human

Re: [agi] AGI bottlenecks

2006-06-09 Thread Richard Loosemore
James, It is a little hard to know where to start, to be honest. Do you have a background in any particular area already, or are you pre-college? If the latter, and if you are interested in the field in a serious way, I would recommend that you hunt down a good programme in cognitive

Motivational system (was Re: [agi] AGI bottlenecks)

2006-06-09 Thread William Pearson
On 09/06/06, Richard Loosemore [EMAIL PROTECTED] wrote: Likewise, an artificial general intelligence is not a set of environment states S, a set of actions A, and a set of scalar rewards in the Reals.) Watching history repeat itself is pretty damned annoying. While I would agree with you

Re: [agi] AGI bottlenecks

2006-06-09 Thread James Ratcliff
Richard, I am a grad student and have studied this for a number of years already. I have dabbled in a few of the areas, but have been unhappy in general with most people's approaches, which are generally too specific (expert systems) or study fringe problems of AI. I have been spending all my time reading

Re: [agi] AGI bottlenecks

2006-06-02 Thread William Pearson
On 01/06/06, Richard Loosemore [EMAIL PROTECTED] wrote: I had similar feelings about William Pearson's recent message about systems that use reinforcement learning: A reinforcement scenario, from Wikipedia, is defined as "Formally, the basic reinforcement learning model consists of: 1. a

Re: [agi] AGI bottlenecks

2006-06-02 Thread Richard Loosemore
Will, Comments taken, but the direction of my critique may have gotten lost in the details: Suppose I proposed a solution to the problem of unifying quantum mechanics and gravity, and suppose I came out with a solution that said that the unified theory involved (a) a specific interface to

Re: [agi] AGI bottlenecks

2006-06-01 Thread Richard Loosemore
Yan King Yin wrote: We need to identify AGI bottlenecks and tackle them systematically. Basically the AGI problem is to: 1. design a knowledge representation 2. design learning algorithms 3. fill the thing with knowledge The difficulties are: 1. the KR may be inadequate, but the designer
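[A minimal sketch of the three-part decomposition quoted above (a knowledge representation, learning algorithms, and filling the thing with knowledge), assuming Python; the names KnowledgeBase and learn are hypothetical and not from the thread.]

    # Illustrative skeleton only: the three ingredients named in the post.
    from dataclasses import dataclass, field
    from typing import Any, Dict, List

    @dataclass
    class KnowledgeBase:
        """1. A toy knowledge representation: a flat key/value store."""
        facts: Dict[str, Any] = field(default_factory=dict)

        def add(self, key: str, value: Any) -> None:
            """3. Filling the thing with knowledge."""
            self.facts[key] = value

    def learn(kb: KnowledgeBase, observations: List[Any]) -> None:
        """2. A learning algorithm: update the KB from observations (left abstract here)."""
        for i, obs in enumerate(observations):
            kb.add(f"obs_{i}", obs)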

Re: [agi] AGI bottlenecks

2006-05-30 Thread sanjay padmane
On 5/30/06, Yan King Yin [EMAIL PROTECTED] wrote: We need to identify AGI bottlenecks and tackle them systematically. Basically the AGI problem is to: 1. design a knowledge representation 2. design learning algorithms 3. fill the thing with knowledge What do you expect it to do after that? Or do