Philip Sutton wrote:

 ****
 I've just joined the list, so my comment might be re-ploughing well-tilled soil.

A couple of recent posts to the list, and an email conversation with Ben, have made me think that a key issue about AGI might be whether the distinguishing feature of an AGI is its raw cleverness in specific applications or its capacity to operate as a mind.  I have a feeling that what distinguishes a mind from raw intelligence is the ability of a mind to anticipate (model the future) and to be self-reflective (monitor & model oneself).  This suggests that an AGI would need the ability to model aspects of the world and also to monitor/model aspects of itself, so it can reflect on its own actions/intentions.

I don't think any intelligence can model everything about itself or the world: the task is too big, and much of the needed data is impossible to obtain.  But the ability to model parts/aspects of the world and parts/aspects of itself is a huge step forward for a mind.
****
 
 
As the list is very new, little ground has been covered here specifically, though many of us have known each other for a while and have discussed the same issues in other forums...
 
About anticipation and reflection, I agree that these are key aspects of intelligence.  But they too have degrees, and some narrow-AI systems display both of them in meaningful ways.
 
For instance, consider a financial prediction AI system like the Webmind Market Predictor I worked on (among other things) from 1997-2000.  This system anticipated the future (specifically, the future prices of financial instruments).  And it monitored and modeled itself, in a way ... it constantly monitored its behavior & its parameter values, and it analyzed its past behavior for flaws ("modeling itself").
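
To make the "prediction plus self-monitoring" combination concrete, here is a minimal toy sketch in Python. This is purely illustrative and is not Webmind MP's actual architecture; the class, its single `momentum` parameter, and the reflection rule are all hypothetical stand-ins.

```python
import statistics

class SelfMonitoringPredictor:
    """Toy predictor that (a) anticipates the next value in a price
    series and (b) logs its own errors so it can adjust its single
    parameter. Purely illustrative; not Webmind MP's actual design."""

    def __init__(self, momentum=0.5):
        self.momentum = momentum   # the one tunable parameter
        self.error_log = []        # rudimentary self-model: a record of past mistakes

    def predict(self, prices):
        # Anticipation: extrapolate the last price move, scaled by momentum.
        if len(prices) < 2:
            return prices[-1]
        return prices[-1] + self.momentum * (prices[-1] - prices[-2])

    def observe(self, predicted, actual):
        # Self-monitoring: record how wrong the last prediction was.
        self.error_log.append(actual - predicted)

    def reflect(self):
        # Crude self-analysis: if recent errors are consistently biased,
        # nudge the momentum parameter to compensate.
        if len(self.error_log) >= 5:
            bias = statistics.mean(self.error_log[-5:])
            self.momentum += 0.1 * bias

# Usage: feed in a price series one step at a time.
predictor = SelfMonitoringPredictor()
prices = [100.0, 101.0, 103.0, 102.0, 104.0, 105.0, 107.0, 106.0]
for i in range(2, len(prices)):
    guess = predictor.predict(prices[:i])
    predictor.observe(guess, prices[i])
    predictor.reflect()
print(f"momentum after reflection: {predictor.momentum:.3f}")
```

In this toy, "reflection" is just bias correction on one parameter; the point is only that anticipation and self-modeling come in degrees, and even a trivial system can display shallow versions of both.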
 
I guess there is a sense in which the self-modeling of Webmind MP wasn't as deep or thorough as the self-modeling of a "real mind", but it's not trivial to specify what this sense is, is it?   Webmind MP didn't fully model its own inner workings, but neither has any human ever modeled his/her own inner workings....
 
In terms of my def'n of intelligence as "achieving complex goals in complex environments", we can observe that prediction & self-modeling are key aspects of achieving many complex goals.  But as I've observed before, this def'n doesn't avoid subjectivity either, because "complexity" is subjective: it can only be formally defined by reference to some "reference universal Turing machine".  All these reference machines are equivalent only up to an additive constant, so the choice of machine stops mattering only in the limit of infinite memory and processing power...
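
For what it's worth, the standard invariance theorem from algorithmic information theory makes the "only in the limit" point precise. This is textbook material, not something specific to my def'n:

```latex
% Kolmogorov complexity of a string x relative to a universal
% machine U: the length of the shortest program p that makes U
% output x.
\[
  K_U(x) = \min \{\, |p| : U(p) = x \,\}
\]
% Invariance theorem: for any two universal machines U and V there
% is a constant c_{U,V}, depending on the machines but not on x,
% such that
\[
  \lvert K_U(x) - K_V(x) \rvert \le c_{U,V} \quad \text{for all } x.
\]
% So different reference machines agree asymptotically, but for any
% finite system the additive constant can dominate.
```

The constant c_{U,V} is essentially the length of a program that lets one machine simulate the other, which is why no finite choice of reference machine is privileged.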
 
-- Ben G