I took a break for four years, but I'm now picking up where I left off in the 
code I was working on.

I'm solving many problems quickly and have a clear, complete roadmap.

I'm still going to see where it leads. The idea is based on stemming from 
exact matches in a network, going as deep as possible (I'm not taking the 
backprop route like modern AIs). I know parts of this have definitely been 
tried before, but there are a lot of pieces to it, and I don't think anyone 
has successfully implemented it all. I haven't yet seen its limits, and I can 
see how all of it can be implemented. I already have a complete plan to make 
everything work while staying efficient and on GPU.
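The post doesn't spell out the matching scheme, but "stemming from exact 
matches ... as deep as possible" sounds like longest-suffix exact matching, 
in the spirit of PPM-style predictors. Here is a minimal sketch under that 
assumption; the function name and parameters are hypothetical, and symbols 
are characters for simplicity:

```python
def predict_next(history, k_max=8):
    """Predict the next symbol by finding the longest (deepest) suffix of
    `history` that also occurred earlier in `history`, and returning the
    symbol that followed that earlier occurrence."""
    n = len(history)
    # Try the deepest context first, then back off to shorter ones.
    for k in range(min(k_max, n - 1), 0, -1):
        suffix = history[n - k:]
        # Scan backward for an earlier exact occurrence of this suffix.
        for i in range(n - k - 1, -1, -1):
            if history[i:i + k] == suffix:
                return history[i + k]  # symbol that followed the match
    return None  # no exact match at any depth

# The deepest earlier match for the suffix "abc" is at the start,
# where it was followed by "a".
print(predict_next("abcabdabc"))  # -> a
```

A real system would replace the quadratic scan with a trie or suffix 
structure so matching stays fast (and GPU-friendly), but the backoff from 
deepest to shallowest context is the core of the idea as I read it.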

It's definitely a little silly to work on this when we already have advanced 
AIs and a clear roadmap for them, but the architecture I'm working on could 
be better: it's fully transparent, the code is extremely tiny, it trains 
fast, and it can keep training / upgrading its intelligence even after 
initial training.

Most "near-SOTA" AI code I see online and on the Large Text Compression 
Benchmark page is very large. A lot of that, though not all of it, is due to 
documentation and to using frameworks like TensorFlow, and that's not good 
either.

Even the nanoGPT and tinyGPT GitHub repos actually have something like 20 
files of 200 lines each under their source folders. Very weird. I'm aiming 
for very tiny code, say 200 lines total. I'll see how far I get.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cf3be509c7cd2f2-Mfd0067b19715d9e0e8445a8f