On Mon, Mar 23, 2015 at 6:03 PM, meekerdb <[email protected]> wrote:

>  On 3/23/2015 1:24 AM, Telmo Menezes wrote:
>
>
>
> On Sun, Mar 22, 2015 at 5:50 PM, John Clark <[email protected]> wrote:
>
>>  On Sat, Mar 21, 2015  Kim Jones <[email protected]> wrote:
>>
>>    >> I said it before and I'll say it again: only somebody terrified of
>>>> machine intelligence would make that argument.
>>>
>>>
>>>  > Who is making that argument? Not me. Not Bruno.
>>>
>>
>>  I flat out don't believe that. Forget about consciousness; nobody would
>> say, as Bruno has, that the Turing Test can't even detect intelligence
>> unless they were terrified of machine intelligence.
>>
>
>  I would, and in my experience most AI researchers don't take the Turing
> Test half as seriously as you do. The efforts to pass it are mostly about
> getting the attention of the mainstream media.
>
>  In my opinion, the fundamental problem with the Turing Test is that
> passing it is an act of deception: the computer has to fake being a human.
> It's in the same situation you would be in if you had to prove your
> intelligence by successfully convincing a panel of female fashion models
> that you are a female fashion model yourself. But the computer is perhaps
> worse off, because it has no human body, human memories, human emotions,
> etc. It has to lie.
>
>
> An interesting choice of example.  The test Turing actually proposed was
> that an AI and a man both pretend to be a woman.
>

I know, my choice of example was not completely innocent :)


>   The question was whether you could tell which was which by conversing
> with them.  So they were both practicing deception.
>
> I agree with your point.  One telling observation is that programs that
> have done well in the Loebner competition make mistakes, i.e. act
> unintelligently in some ways.  This is because never making a mistake,
> e.g. a typo, is a sure sign of not being human.
>
>
>  I grant you that it would take intelligence on your part to sell the
> female fashion model story. So you could argue that the Turing Test detects
> intelligence, even though it does not necessarily set a good direction
> for useful research.
>
>  I think it's even worse, though. Human behavior is full of patterns that
> can be exploited by brute force. This is essentially what Watson does:
> Watson is more or less a traditional database of character strings with
> sophisticated indexing and querying algorithms. Watson appears to be an
> amazing piece of software, and I think it displays intelligence, but in a
> much narrower fashion than the hype surrounding it seems to assume.
>
>
>>   It's just intellectual cowardice, because he's insisting we use very
>> different rules when judging whether something is intelligent or not
>> depending on whether that something is made of protoplasm or silicon.
>>
>
>  I don't think we do. I propose a different test.
>
>  I show you a computer program that you can have a conversation with. You
> talk with it for half an hour and then I tell you I'm going to shut it down
> forever. It will essentially die. How distressed are you?
>
>
> There were protests at MIT when they shut Eliza off.
>

I didn't know that.

I once read a funny story about Prof. Weizenbaum leaving Eliza running on
his office computer. He forgot about a meeting with a salesperson, and the
man thought that Eliza was a chat system with Weizenbaum on the other side.
He carried on a long conversation and kept trying his sales pitch until he
left, frustrated. I can't find the story anymore, unfortunately. If anyone
knows where it is, I would appreciate a pointer.
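For what it's worth, the reason a story like that is even plausible is that an Eliza-style program needs nothing more than keyword reflection to keep a conversation going. A minimal sketch (these rules and responses are invented for illustration; they are not Weizenbaum's actual DOCTOR script):

```python
import re

# Toy ELIZA-style rules: each pattern may capture part of the user's
# input, which the template reflects back at the speaker.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbuy\b|\bprice\b", re.IGNORECASE), "Why does that matter to you?"),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return a canned reflection for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am offering a great deal"))
print(respond("What is the price?"))
```

A handful of such rules plus a non-committal default is enough to sustain a surprisingly long exchange, since the burden of making sense falls entirely on the human interlocutor.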

Telmo.


>
> Brent
>
>
>   What if I point a gun at a bonobo?
>
>  Here's an example where mistreating a robot causes me some distress:
> https://www.youtube.com/watch?v=M8YjvHYbZ9w
>
>  It could be in part because the robot is fairly anatomically close to a
> mammal, but the sophistication and intent of its movements play an
> important part. I wouldn't be distressed if it were an inanimate object.
>
>
>>   What's next, reserving judgement on whether a person behaved
>> intelligently until we know the gender and the color of the person's skin?
>>
>>  All I'm saying is that whatever method we use to judge the intelligence
>> of our fellow human beings (and we all do it every waking hour of every
>> day of our lives), we should use the same method to judge machines.
>>
>
>  And I'm saying we already do.
>
>  Telmo.
>
>
>>
>>
>>    John K Clark
>>
>>
>>
>>
>>>
>>   --
>> You received this message because you are subscribed to the Google Groups
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> To post to this group, send email to [email protected].
>> Visit this group at http://groups.google.com/group/everything-list.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
