--- Mike Tintner <[EMAIL PROTECTED]> wrote:

> Matt: I realize that a
> full (Turing test) model can only be learned by having a full range of human
> experiences in a human body.
> 
> Pray expand. I thought v. few here think that. Your definition seems to 
> imply AGI must inevitably be embodied.  It also implies an evolutionary 
> model of embodied AGI -- a lower-intelligence, animal-level model will have 
> to have a proportionately lower-agility animal body. It also prompts the v. 
> interesting speculation (has it ever been discussed on either 
> forum?) of what kind of superbody a superagi would have to have?  (I would
> personally find *that* area of future speculation interesting, if not super.)
> 
> Thoughts there too? No superhero fans around? 

A superagi would have billions of sensors and actuators all over the world --
keyboards, cameras, microphones, speakers, display devices, robotic
manipulators, direct brain interfaces, etc.

My claim is that an ideal language model (not AGI) requires human embodiment. 
But we don't need -- or want -- an ideal model.  Turing realized that passing
the imitation game requires duplicating human weaknesses as well as strengths.
From his famous 1950 paper:

Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1.
It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.
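Note that the arithmetic answer is deliberately wrong: 34957 + 70764 = 105721,
not 105621, and no machine needs 30 seconds to add two five-digit numbers.
A minimal Python sketch, just to check the sum from the transcript:

  # Verify the addition from Turing's imitation-game transcript.
  true_sum = 34957 + 70764   # 105721
  claimed = 105621           # the answer given after the 30-second pause
  print(true_sum, true_sum == claimed)   # prints: 105721 False

The pause and the error are the point: a convincing imitation has to simulate
human slowness and human mistakes, not just human competence.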

Why would we want to do that?  We can do better.


-- Matt Mahoney, [EMAIL PROTECTED]
