On Friday, August 13, 2021, at 10:45 PM, immortal.discoveries wrote:
> I like this part. Well, cognition is very similar to goals and priming, it 
> all weighs in in predicting the next word to a sentence.
> 

Thanks. But I think it's quite different: predictive value should be maximized 
indefinitely, while in goal pursuit you minimize the "error", which is bounded 
below by 0. 
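To sketch that asymmetry (a toy illustration only; the quantities and step sizes here are hypothetical, not part of any formalism in the intro):

```python
# Toy contrast between the two objectives (hypothetical quantities).

def goal_error(steps):
    """Goal pursuit: error shrinks toward its floor at 0, then stops improving."""
    error = 100.0
    for _ in range(steps):
        error = max(0.0, error - 10.0)  # bounded below by 0
    return error

def predictive_value(steps):
    """Curiosity: predictive value can keep accumulating with no upper bound."""
    value = 0.0
    for _ in range(steps):
        value += 10.0  # no ceiling; grows with experience
    return value

print(goal_error(20))        # 0.0: nothing left to minimize
print(predictive_value(20))  # 200.0: still growing
```

The point of the sketch: the first loop has a terminal state, the second does not.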

On Friday, August 13, 2021, at 10:45 PM, immortal.discoveries wrote:
> Yes I have all these studied in my Guide to AGI

I don't think instincts and conditioning are relevant to AGI per se, only pure 
curiosity is. The former, supervision and RL, are optional add-ons, useful as 
shortcuts to unsupervised learning, or as means to make it serve you.

On Friday, August 13, 2021, at 10:45 PM, immortal.discoveries wrote:
> it all weighs in in predicting the next word to a sentence.
 
I think that's an extremely short-sighted perspective, both because symbolic 
data should never be used as primary input, and because the scope of the predicted 
"next" is proportional to the scope of all represented experience.
But that's the subject of my intro: http://www.cognitivealgorithm.info, esp. part 
1.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5b614d3e3bb8e0da-Mb6ced10480ace4a5865292b7
Delivery options: https://agi.topicbox.com/groups/agi/subscription
