I'm still trying to figure out how this can be incorporated into a GPT-2/iGPT. 
It's like an Internet of Things: it's prediction, and it uses patterns, but it 
doesn't seem like it can answer questions. Yet my brain can solve the tests. OK, 
I gave it thought. Some of the tests are not physics-based, hence useless; for 
example the psychedelic pattern fill-in, which isn't sequence prediction or 
even object repair, just art repair... There is probably a maze test, and that 
and the laser test seem to be more of a tree search/video prediction than a 
static prediction. Good for predicting a string threaded through, or a video of 
a man escaping a cave system. You can ask it to de-noise or rotate or summarize 
or translate objects (cat2dog). There's probably a stack/group test. I'm not 
sure how these help answer big questions like GPT-2 "can". It seems like the 
dynamics of the net are controllable, and hence many of the tasks can be 
useless, some rarely used, and some often used. Is it confusing to anyone else? 
How often do you rotate objects or solve mazes in GPT-2??? Rarely, right? And 
what's the stacking for? I know an invention may stack memory cells in rows; I 
guess the word "row" can, in vision, actually modify the old object, e.g. 
GPT-2 may write that the apple turned brown, was cut in half and stacked, then 
melted in an oven. Generating video would require morphing/rearranging the 
object. But that's based on the objects' relative locations in the data fed to it.
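To make the maze point concrete: a minimal sketch of why a maze looks like tree search rather than a single static prediction is that a solver has to expand many candidate moves and backtrack before the path appears. The toy grid, coordinates, and function name below are made up for illustration; this is plain breadth-first search, not anything GPT-2 or iGPT actually does.

```python
from collections import deque

# Hypothetical toy maze: 0 = open cell, 1 = wall.
MAZE = [
    [0, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

def solve_maze(maze, start, goal):
    """Breadth-first search over moves: a tree of partial paths is
    explored step by step, unlike one-shot pattern completion."""
    rows, cols = len(maze), len(maze[0])
    frontier = deque([(start, [start])])  # (cell, path so far)
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path  # shortest path found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

path = solve_maze(MAZE, (0, 0), (3, 3))
```

The search visits cells a language model never has to "name"; that branching exploration is what makes the maze and laser tests feel closer to video prediction than to static fill-in.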

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T604ad04bc1ba220c-M3df9d3bd8fd8aae6948065d0
Delivery options: https://agi.topicbox.com/groups/agi/subscription
