On 2/12/2013 4:53 PM, Craig Weinberg wrote:


On Tuesday, February 12, 2013 5:49:04 PM UTC-5, Brent wrote:

    On 2/12/2013 2:40 PM, Telmo Menezes wrote:

        I don't know what sort of computer you typed your post on, but by 1997 standards it is almost certainly a supercomputer, probably the most powerful supercomputer in the world. I'll wager it would take you less than five minutes to find and download a free chess-playing program on the internet that, if run on the very machine you're writing your posts on, would beat the hell out of you. It wouldn't surprise me at all if Watson had a sub-sub-subroutine that enabled it to play Chess at least as well as Deep Blue,


    Maybe (although I believe you're underestimating the complexity of a good chess program). But can Watson, for example, introspect on the chess game and update his view of the world accordingly? Can he read a new text and figure out how to play better? I'm not saying that these things are impossible, just that they haven't been achieved yet.

        after all you never know when the subject of Jeopardy will turn out to be Chess. And if Watson didn't already have this capability it could be added at virtually no cost.


    But could you ask Watson to go and learn by himself? Because you could ask that of a person. Or to go and learn to fish.

            > I have no doubt that Watson is quite competent, but I don't see any of its behavior as reflecting intelligence.


        If a person did half of what Watson did you would not hesitate for one second in calling him intelligent, but Watson is made of silicon, not carbon, so you don't.


    Nor for another second in considering him/her profoundly autistic.

    The main reason Watson and similar programs fail to have human-like intelligence is that they lack human-like values and motivations - and deliberately so, because we don't want them to be making autonomous decisions based on their internal values. That's why I usually take something like an advanced Mars rover as an example of intelligence. Being largely autonomous, a Mars rover must have a hierarchy of values that it acts on.
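
    A minimal sketch of what such a value hierarchy could look like in code (illustrative only; the value names, actions, and numbers below are assumptions, not anything from actual rover software):

        # Illustrative sketch: an agent choosing among candidate actions by
        # scoring them against an ordered hierarchy of values, so that a
        # higher-priority value always dominates a lower-priority one.

        VALUE_HIERARCHY = ["preserve_self", "protect_hardware", "gather_science", "make_progress"]

        def score(action):
            # Python compares tuples lexicographically, so ordering the tuple
            # by the hierarchy lets the top value dominate the comparison.
            return tuple(action["effects"].get(v, 0.0) for v in VALUE_HIERARCHY)

        def choose_action(actions):
            return max(actions, key=score)

        candidates = [
            {"name": "drive_to_rock", "effects": {"preserve_self": 0.9, "gather_science": 0.7}},
            {"name": "enter_crater",  "effects": {"preserve_self": 0.2, "gather_science": 1.0}},
            {"name": "stay_put",      "effects": {"preserve_self": 1.0}},
        ]

        print(choose_action(candidates)["name"])  # -> stay_put (safety outranks science)

    Real rover software is of course far more elaborate, but the point is only that acting autonomously requires some such ordering of what matters.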


Just because something performs actions doesn't mean that it has values or motivations. As you say, "we don't want them to be making autonomous decisions based on their internal values" - and they don't, and they wouldn't even if we did want that, because no internal values are possible for a machine. Values arise directly and indirectly through experience, but a machine is just a collection of parts that embody only very simple experiences which never evolve or grow.

More fallacious and unsupported assertions. Machines can grow and learn - though of course in applications we try to give them as much knowledge as we can initially. But that's why Mars rovers are a good example: the builders and programmers have only limited knowledge of what will be encountered, and so, instead of trying to anticipate every possibility, they have to provide for some ability to learn from experience.
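
That "learn from experience" can be extremely simple in principle. Here is an illustrative sketch (the terrain names and numbers are assumptions, not actual rover behaviour) of keeping a running, smoothed success rate per terrain type and letting outcomes shift future preferences:

    # Illustrative sketch: learn which terrain is traversable from outcomes.
    from collections import defaultdict

    counts = defaultdict(lambda: [0, 0])  # terrain -> [successes, attempts]

    def record(terrain, succeeded):
        counts[terrain][1] += 1
        if succeeded:
            counts[terrain][0] += 1

    def estimated_success(terrain):
        s, n = counts[terrain]
        return (s + 1) / (n + 2)  # Laplace smoothing: 0.5 for unseen terrain

    record("loose sand", False)
    record("loose sand", False)
    record("bedrock", True)
    record("bedrock", True)
    print(estimated_success("loose sand"), estimated_success("bedrock"))  # 0.25 0.75

Even this crude kind of update changes the machine's future choices on the basis of what actually happened to it, which is the minimal sense of learning at issue here.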

Brent
