It really doesn't matter whether you use an advanced PPM model or GPT: you end up with roughly the same score, the same predictions, and similar resource needs. It doesn't matter whether you use backprop or trie-based counts. You can't get around the resource requirements, basically. You could add new mechanisms, like some reasoning ability or better categories, but that probably won't change the prediction score by very much.
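To make the "trie tree counts" idea concrete, here is a minimal sketch of next-symbol prediction from raw context counts, in the spirit of PPM. The class name, the dict-of-dicts trie, and the simple back-off scheme are my own illustration; a real PPM implementation also handles escape probabilities and blends across context orders.

```python
from collections import defaultdict

class TrieCountModel:
    """Minimal order-N context model: predict the next symbol
    from counts stored per context (illustrative sketch only)."""

    def __init__(self, order=2):
        self.order = order
        # context tuple -> {next_symbol: count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        # Record every symbol under all of its contexts, length 0..order.
        for i in range(len(text)):
            for n in range(self.order + 1):
                if i - n >= 0:
                    ctx = tuple(text[i - n:i])
                    self.counts[ctx][text[i]] += 1

    def predict(self, context):
        """Return P(symbol | context) from the longest matching context,
        backing off to shorter contexts when the long one is unseen."""
        ctx = tuple(context[-self.order:]) if self.order else ()
        while ctx and ctx not in self.counts:
            ctx = ctx[1:]  # back off to a shorter context
        dist = self.counts[ctx]
        total = sum(dist.values())
        return {s: c / total for s, c in dist.items()} if total else {}

model = TrieCountModel(order=2)
model.train("abracadabra")
print(model.predict("ab"))  # → {'r': 1.0}; 'r' is the only seen continuation
```

The same counting scheme is what a neural model has to learn implicitly via backprop; the trie just stores it explicitly, which is part of the point being made above about equivalent resource needs.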

In the end, it doesn't matter what you call it -- graph, hypergraph, atoms -- it all comes down to PPM plus some other machinery. I tend to find that most others' theories overcomplicate things, when the rules of AI are so simple and the code is even simpler. Their code is also 40 times bigger than mine: the same problem again.

If you want to beat GPT, your only chance is to make a very simple algorithm. GPT was built by many educated people over years of work. So if you want to reach its level fast, you need to either learn it or build something very simple. I don't see why others want to invent their own architecture everywhere when GPT exists. Do they really think they have some other real AGI, or that it will be more efficient than GPT?



Everyone tells me to code mine up first -- but others are allowed to claim they have some far better architecture that will be more efficient and smarter, while still having no results to prove it. Their systems do not yet predict as accurately or as diversely, and as far as I know they have not shown clearly what it is that GPT lacks. Why not add it to GPT? It's fine if you want to deconstruct GPT, but you haven't built your alternative yet, and you need to explain first how GPT works and how your AI works. Why is explaining it to other AGI researchers like me so hard, anyway?
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1675e049c274c867-Md3950509c94ded79f184c7dd
Delivery options: https://agi.topicbox.com/groups/agi/subscription