So we have a context of nine tic-tac-toe squares, with two of your Xs in a row and 
his Os all over the place; you predict something probable and rewarding: the 
3rd X to complete the row. GPT would naturally learn this, and Blender would 
also learn the reward part, basically.
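A minimal sketch of that prediction step, with made-up scores standing in for what a model like GPT would learn (the function and all the numbers are illustrative assumptions, not GPT's actual mechanism):

```python
# Sketch: treat the 9-square board as a context and pick the empty
# square with the highest learned "probable + rewarding" score.

def predict_move(board, scores):
    """Pick the empty square with the highest hypothetical learned score."""
    empty = [i for i, cell in enumerate(board) if cell == " "]
    return max(empty, key=lambda i: scores.get(i, 0.0))

# Board: X X _ on the top row; square 2 completes the row.
board = ["X", "X", " ", "O", " ", "O", " ", " ", " "]
# Hypothetical learned scores: square 2 looks probable and rewarding.
scores = {2: 0.9, 4: 0.3, 6: 0.1, 7: 0.1, 8: 0.1}
print(predict_move(board, scores))  # -> 2, the 3rd X completing the row
```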

As for a FORK, this is like having two favorite meals. "Give me some fries"... or I 
could have said "Give me some cake". I predict them about 50% each, based on how 
rewarding and popular they appear in the data. In that case, 50% of the time I 
choose fries, then next time cake, because fries has been inhibited after firing 
its neural energy, which changes the distribution.
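The fork-then-inhibit idea can be sketched like this; the 0.1 inhibition multiplier is a made-up assumption for illustration, not a claimed value:

```python
import random

# Two options start at ~50/50. After one fires, it is inhibited,
# so the next pick shifts strongly toward the other option.

def pick(weights):
    """Sample one option in proportion to its weight."""
    options = list(weights)
    total = sum(weights.values())
    return random.choices(options, [weights[o] / total for o in options])[0]

weights = {"fries": 0.5, "cake": 0.5}
first = pick(weights)        # ~50% fries, ~50% cake
weights[first] *= 0.1        # inhibit the option that just fired
second = pick(weights)       # now ~91% likely to be the other option
print(first, second)
```

After the inhibition step, the just-chosen option's share drops from 0.5/1.0 = 50% to 0.05/0.55, about 9%, so the distribution has changed exactly as described.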

It's OK to pursue logic, but I can't help but point out that this sounds exactly like 
my approach and Transformer AI. In fact, both of those are the same; only the 
approach to solving the efficiency problem differs. In this case, I don't see how 
yours would be efficient; it seems like GOFAI, no? Isn't it GOFAI? It is not 
something that scales like GPT. AFAIK your logic-based approach focuses on 
a few rules and disregards how many resources it needs (compute doesn't matter, 
and neither does memory).

*_How would your approach, predicting B for some context A, be efficient like 
GPT? There is a lot to leverage in a given context, and GPT leverages it. Or, 
if you intend to use Transformer+logic, why? The Transformer already does all 
the methods you mentioned to leverage context._*
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-Madc00c6b2628f6dd840d2df0
Delivery options: https://agi.topicbox.com/groups/agi/subscription