This is perhaps absurd/ off/ wrong/ whatever, but you could imagine Elon 
Musk's plan for the Tesla Bot working like this: you ask it to (I think 
Elon's example was) "put that nut on that wheel using that wrench". That 
primes the features nut, wheel, wrench in its brain, so it has to start 
looking for those objects (seeks to recognize them externally), which seems 
within reach of the technology we have when you think how good DALL-E is at 
prediction based on recognized context, or how good Google's reverse image 
search is (it recognizes uploaded images too). Then once it sees them, it of 
course has to act in the right order. It must grab the nut, and is again 
going to need to recognize "turning 'whateverwasprimedhereobject' using 
'primedtool'". It is fun to think about. I hope they "put it together" using 
existing tools. That can take some work in this case, but maybe we are 
closer this time. They have to make it cheap but very useful, and both of 
those requirements are really hard, but you can almost feel them in reach.
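The prime-then-seek-then-act loop above can be sketched in a few lines. This is purely a toy, with everything hypothetical: the "scene" is just a list of labels standing in for a vision system, and the function names (prime, seek, plan) are made up for illustration, not any real robotics API.

```python
# Toy sketch of the loop: prime features from the command,
# seek them in the perceived scene, then act in order.
# All names and structures here are hypothetical stand-ins.

def prime(command, vocabulary):
    """Extract known object words from the command (the 'primed features')."""
    words = command.lower().replace(",", "").split()
    return [w for w in words if w in vocabulary]

def seek(primed, scene):
    """Check whether each primed object is recognized in the scene."""
    return {obj: (obj in scene) for obj in primed}

def plan(primed):
    """Order the steps; assumes the command names object, target, tool in order."""
    obj, target, tool = primed
    return [f"grab {obj}", f"place {obj} on {target}", f"turn {obj} using {tool}"]

if __name__ == "__main__":
    vocabulary = {"nut", "wheel", "wrench"}
    command = "put that nut on that wheel using that wrench"
    primed = prime(command, vocabulary)                     # ['nut', 'wheel', 'wrench']
    found = seek(primed, ["wrench", "nut", "wheel", "bench"])
    if all(found.values()):                                 # only act once all are seen
        for step in plan(primed):
            print(step)
```

The real problem is of course in the parts this toy skips entirely: recognizing the objects from pixels and executing the motor actions.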

There are a lot of pieces needed to make that work, and you shave that list 
down if the bot is more general purpose and learns those things itself. Even 
if it is just an experiment to build one of those pieces, it would be 
interesting to try, maybe.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf6dddebe1e89183a-Mb7d0ef24433a137b1c58d244