On 2/13/2013 3:10 AM, Telmo Menezes wrote:
>> The main reason Watson and similar programs fail to have human like
>> intelligence is that they lack human like values and motivations

> True, but they could have generic intelligence -- the ability to learn
> something new in a new domain, just by being told to do it.


I don't know if that could work. If you wanted the robot to learn to do some task, you'd have to stand there and say "learn this, no, learn that, learn this"... Being able to learn already requires some degree of generality.

> Such slaves would be tremendously useful and free us from labor. There is
> no lack of motivation to create such things.

>> - and deliberately so

> Deliberately implies that we have the option. I'm pretty sure a lot of
> people would very much like to create an artificial human, but they
> have failed so far.

As Bruno would say, they want to create human-level *competence*. But they haven't thought about the problem of that entailing human-level intelligence (although some have; cf. John McCarthy's website).


>> because we don't want them to be making autonomous decisions
>> based on their internal values. That's why I usually take something like an
>> advanced Mars rover as an example of intelligence.

> I agree, but not general intelligence.

As my professor used to say, "Artificial intelligence is just whatever can't be done yet."

Brent
