On Monday, November 24, 2014 4:56:02 PM UTC, John Clark wrote:
>
> On Mon, Nov 24, 2014 Telmo Menezes <[email protected]> 
> wrote:
>
> > I am not opposed to this idea, but as usual the very hard problem of 
>> defining intelligence is hand-waved. 
>>
>
> I wave my hand only to indicate the wide sweep of definitions of 
> intelligence you are free to use without one word of complaint from me.  
> Well... that isn't entirely true; there is one definition I would object 
> to: "intelligence is whatever computers aren't good at YET".  
>
> > I don't even ask for "any measure of intelligence", I would just ask you 
>> to name one.
>>
>
> I will name several: winning at checkers, winning at chess, winning at 
> Jeopardy, solving equations, driving a car, translating a language, 
> recognizing images, becoming the world's best research librarian. 
>  
>
>> > All the AI we have so far gives us a little from a lot. The real goal 
>> of AI is to get a lot from a little.
>>
>
> A human translator can't get good at translating language X to Y unless he 
> hears a lot of both languages X and Y, and the same is true of computers.
>
>  > With what I consider real AI, an artificial translator could also be 
>> taught how to drive a car. 
>>
>
> Computers can do both, and subroutines exist, so what's the problem?  
>
> > The extreme compartmentalisation of capabilities is the smoking gun that 
>> the "intelligence" part of AI is not increasing.
>>
>
> A computer that beat the 2 best human players of Jeopardy on planet Earth 
> blew that argument into (sorry but I just have to say it) bits.  
>
> >> And human beings move from being mediocre translators to being very 
>>> good translators by observing how great translators do it.
>>>
>>
>> > And they can also do this for a number of different skills with the 
>> same software.
>>
>
> I see no evidence that humans use the same mental software to translate 
> languages, solve differential equations, walk and chew gum at the same time,
> and write about philosophy on the internet;  I think humans use different 
> subroutines for different tasks just as computers do.
>
> >> Translation certainly won't be the last profession where machines 
>>> become better at their job than any human; and I predict that the next 
>>> time it happens somebody will try to find an excuse for it just like you did 
>>> and say "Yes a machine is a better poet or surgeon or joke writer or 
>>> physicist than I am but it doesn't really count because (insert lame 
>>> excuse here)".
>>>
>>
>> > I am sure of that too, but I reserve my decision on which side of the 
>> argument I'm on until I see these "surgeons", "joke writers" or 
>> "physicists" that you talk about.
>>
>
> That just means you are a reasonable man. The people who exasperate me are 
> those who say that even though X does very intelligent things that doesn't 
> mean that X is intelligent. My point is that I don't believe in magic so I 
> think that all the brilliant things humans have done over the last few 
> thousand years happened because of the way the atoms in the 3 pounds of 
> grey goo inside their bone box were organized, and so there is no reason 
> that other things, like computers, couldn't be as intelligent or more so if 
> they were organized in the right way.   
>
>   John K Clark 
>

But what is distinctive about your position that would not be available 
if our knowledge of what intelligence is had not advanced? 

There are two logical explanations for your position. One of them is simply 
that you are saying what you would be saying if technology had advanced but 
the understanding of how to create A.I. had not. 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.