It appears WebGPT is doing what the Google trillion-tokens system James Bowery 
posted about recently does, but retrieving from the live web rather than a fixed dataset.

It also seems to be doing what I suggested: predicting words, and having those 
words link to motor actions such as notepad and browser controls, or even a 
robot body. Using vision would work too. A rough sketch of that token-to-action 
dispatch idea follows.
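
(A minimal sketch in Python of what that coupling could look like; the action 
tokens and handlers below are my own hypothetical names, not anything from 
WebGPT or OpenAI. The idea is just that the model emits text, and any output 
recognized as an action token gets routed to a motor command.)

def click(arg: str) -> None:
    print(f"browser: click element {arg}")

def type_text(arg: str) -> None:
    print(f"notepad: type {arg!r}")

def scroll(arg: str) -> None:
    print(f"browser: scroll {arg}")

# Hypothetical action vocabulary: predicted tokens that double as motor commands.
ACTIONS = {"CLICK": click, "TYPE": type_text, "SCROLL": scroll}

def dispatch(predicted: str) -> None:
    # Split a predicted string like "CLICK 3" into a command and its argument.
    command, _, arg = predicted.partition(" ")
    handler = ACTIONS.get(command)
    if handler:
        handler(arg)
    else:
        print(f"plain text, no motor action: {predicted!r}")

# Example stream of model outputs mixing ordinary text and action tokens.
for output in ["the page lists food items", "SCROLL down", "CLICK 3", "TYPE hello"]:
    dispatch(output)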

They also seem to be making it "grab" (a motor command) a preferred passage. So 
it may be predicting a GrabIt token when it sees or predicts a preferred 
condition/state it wants, i.e. A > B. For example, they fine-tune it strongly on 
something like "lots of food button" > "grab it!" (the motor action), so that 
when it later sees a similar passage, it strongly predicts the grab. In effect 
they can make it predict "food" a lot, and also predict "grab_it", but only 
conditionally, after it has seen "food" first. You could fine-tune this by 
showing it lots of (or strongly upweighting) the "food" token and the 
"food, grab it" sequence. That way, when it sees "food", it predicts the rest of 
"food, grab it" because the association hasn't finished activating yet. At 
least in theory this makes sense; a toy sketch of the conditioning is below.
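
Here is a toy sketch of that conditional upweighting, assuming nothing about how 
WebGPT is actually trained: a bigram counter stands in for the language model, 
and the "fine-tuning" just upweights the "food" -> "grab_it" continuation. All 
tokens and corpora here are made up for illustration.

from collections import Counter, defaultdict

# Next-token counts conditioned on the previous token (a stand-in for the model).
counts = defaultdict(Counter)

def train(corpus, weight=1):
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += weight

# Base data: "food" appears in ordinary contexts.
train(["I see food on the table", "the food is cold", "buy food tomorrow"])

# "Fine-tuning": strongly upweight the conditional pattern food -> grab_it.
train(["food grab_it"], weight=10)

def predict_next(prev):
    nxt = counts[prev]
    return nxt.most_common(1)[0][0] if nxt else None

print(predict_next("food"))  # 'grab_it' -- the upweighted continuation wins
print(predict_next("the"))   # 'table'   -- no grab here; it is conditional on 'food'

That is the A > B conditioning in miniature: the grab action only becomes the 
top prediction once the preferred "food" state shows up first.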