On Tue, Feb 12, 2013 at 11:49 PM, meekerdb <meeke...@verizon.net> wrote:
> On 2/12/2013 2:40 PM, Telmo Menezes wrote:
>> I don't know what sort of computer you typed your post on, but by 1997
>> standards it is almost certainly a supercomputer, probably the most powerful
>> supercomputer in the world. I'll wager it would take you less than five
>> minutes to find and download a free chess-playing program on the internet
>> that, if run on the very machine you're writing your posts on, would beat
>> the hell out of you. It wouldn't surprise me at all if Watson had a sub-sub-
>> sub-routine that enabled it to play chess at least as well as Deep Blue,
> Maybe (although I believe you're underestimating the complexity of a good
> chess program). But can Watson, for example, introspect on the chess game
> and update his view of the world accordingly? Can he read a new text and
> figure out how to play better? I'm not saying that these things are
> impossible, just that they haven't been achieved yet.
>> after all, you never know when the subject of Jeopardy will turn out to be
>> chess. And if Watson didn't already have this capability, it could be added
>> at virtually no cost.
> But could you ask Watson to go and learn by himself? Because you could ask
> that of a person. Or to go and learn to fish.
>>> I have no doubt that Watson is quite competent, but I don't see any of
>>> its behavior as reflecting intelligence.
>> If a person did half of what Watson did, you would not hesitate for one
>> second to call him intelligent; but Watson is made of silicon, not carbon,
>> so you don't.
> Nor for another second in considering him/her profoundly autistic.
> The main reason Watson and similar programs fail to have human-like
> intelligence is that they lack human-like values and motivations
True, but they could have general intelligence -- the ability to learn
something new in a new domain, just by being told to do it. Such
slaves would be tremendously useful and would free us from labor. There is
no lack of motivation to create such things.
> - and deliberately so
"Deliberately" implies that we have the option. I'm pretty sure a lot of
people would very much like to create an artificial human, but so far they
have failed.
> because we don't want them to be making autonomous decisions
> based on their internal values. That's why I usually take something like an
> advanced Mars rover as an example of intelligence.
I agree, but that's not general intelligence.
> Being largely autonomous, a Mars rover must have a hierarchy of values
> that it acts on.
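As a rough illustration (a minimal Python sketch, with all names, values and
priorities hypothetical), such a hierarchy can be as simple as a prioritized
list of constraints, where the rover takes the action demanded by the
highest-priority value that applies to its current state:

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Value:
    name: str
    priority: int                        # higher number = more important
    applies: Callable[[Dict], bool]      # does this value constrain the current state?
    preferred_action: str                # what this value wants the rover to do

def choose_action(state: Dict, values: List[Value], default: str = "continue_route") -> str:
    """Pick the action demanded by the highest-priority value that applies."""
    applicable = [v for v in values if v.applies(state)]
    if not applicable:
        return default
    return max(applicable, key=lambda v: v.priority).preferred_action

# A hypothetical hierarchy: keeping the rover powered outranks avoiding
# hazards, which outranks doing science.
values = [
    Value("protect_power", 3, lambda s: s["battery"] < 0.2,  "return_to_charger"),
    Value("avoid_hazard",  2, lambda s: s["slope_deg"] > 25, "back_off"),
    Value("do_science",    1, lambda s: s["rock_nearby"],    "sample_rock"),
]

state = {"battery": 0.9, "slope_deg": 10, "rock_nearby": True}
print(choose_action(state, values))   # -> sample_rock

A real rover weighs trade-offs rather than following a strict priority order,
but the point stands either way: the values and their ordering are fixed by
the designers, not chosen by the rover.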