It's interesting: AGI needs certain ingredients unless you do a complete brute-force
search for an AGI program. So depending on the implementation, e.g. GANs,
Transformers, etc., you will be using some tricks, more or less. But there is a
sweet spot of tricks we want to use, like: more data, RL, causality, relatedness,
priming, gaps, delays, exponentials, multi-modality, categories, to name many of
them. So I think finding the right implementation is sort of easy; I think the
issue is more about the way you do it... e.g. backprop is one way to
update weights, I just do it another way.
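
To make the "backprop is one way to update weights" point concrete, here is a minimal sketch of a standard backprop-style update on a single linear neuron with squared-error loss. This is a hypothetical illustration of the conventional approach, not the alternative update rule mentioned above (which isn't specified here); the function name `train_step` and the toy data are my own.

```python
# Standard backprop-style update (illustrative sketch, not the author's method):
# one linear neuron y_hat = w*x + b trained by gradient descent on squared error.

def train_step(w, b, x, y, lr=0.1):
    """One update: forward pass, gradient via the chain rule, weight step."""
    y_hat = w * x + b       # forward pass
    error = y_hat - y       # dLoss/dy_hat for loss = 0.5 * (y_hat - y)**2
    grad_w = error * x      # chain rule: dLoss/dw
    grad_b = error          # chain rule: dLoss/db
    return w - lr * grad_w, b - lr * grad_b

# Fit y = 2x from three examples; w drifts toward 2 and b toward 0.
w, b = 0.0, 0.0
for _ in range(200):
    for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
        w, b = train_step(w, b, x, y)
```

Any alternative weight-update scheme would replace the gradient lines while keeping the same forward pass.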

BTW, I'm looking into GPT more; I'm combing some of my files and asking others, so
I can pursue both paths at once. I really like GPT. I know looking into GANs may be
an interesting approach, but I haven't seen as good results from GANs, nor do I
find how they work especially interesting; it seems they can 'use' Transformers.
The link below shows what they can do, and DALL-E can do many of these things, so
idk. As far as a human-like base to build on, I'm saying GPT would be an ideal
start for suggesting how to get closer to AGI, not that you 'can't' work on
possibly better but currently less mature approaches.

https://analyticsindiamag.com/gans-biggest-breakthrough-in-ai/
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5b614d3e3bb8e0da-M2e41e39bad96b37cc5f6e9bf