In a reverse Turing test, if a human could convince other humans that he was a machine/computer, would he then be unintelligent? From "Fooled by Randomness", if memory serves.


-----Original Message-----
From: Rob Howard <[EMAIL PROTECTED]>
Sent: Dec 25, 2006 12:32 PM
To: 'The Friday Morning Applied Complexity Coffee Group'
Subject: Re: [FRIAM] The what is AI question

What if the analogy of intelligence is unexpected predictability? I can roll a pair of dice, and that is unpredictable; but it's not unexpected. I expect a triangular distribution of totals, peaking at seven.
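That distinction between an unpredictable roll and an expected distribution can be made concrete with a short, purely illustrative Python sketch: enumerating all 36 equally likely outcomes of two dice shows exactly which distribution of totals to expect.

```python
from collections import Counter
from itertools import product

# Enumerate all 36 equally likely (die1, die2) outcomes and tally the totals.
totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))

# Any single roll is unpredictable, but the distribution of totals is not:
# it rises linearly to a peak at 7, then falls off symmetrically.
for total in sorted(totals):
    print(total, totals[total], "/ 36")
```

Running this prints 2 through 12 with counts 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1, so a 7 is six times as likely as a 2 even though no individual roll can be predicted.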

 

A few thousand years ago, the states of the moon were unpredictable (eclipses, elevation, and to some extent, phases). Humans consequently animated it with intelligence by calling it Luna, the moon goddess. All deities have intelligence. The same occurred with the planets, the weather, and even social conditions like love and war. Only when these things became expectedly predictable did they lose their intelligence. You all remember ELIZA! At least for the first five minutes of play, the game did take on intelligence. However, only after review of the actual code did the game instantly lose its mystery. Kasparov bestowed intelligence on Deep Blue, which I'm sure the programmers did not.
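As an illustration of how little machinery was behind that first impression, here is a minimal ELIZA-style sketch (a hypothetical toy, not Weizenbaum's actual script): a handful of pattern-substitution rules and no understanding at all. Reading rules like these is precisely the "review of the actual code" that dispels the mystery.

```python
import re

# A few hard-coded pattern -> response rules, in the spirit of ELIZA.
# {0} is filled with whatever the first capture group matched.
RULES = [
    (r"I am (.*)", "Why do you say you are {0}?"),
    (r"I feel (.*)", "Tell me more about feeling {0}."),
    (r".*\bmother\b.*", "Tell me about your family."),
]

def respond(text):
    """Return the first matching rule's response, else a stock prompt."""
    for pattern, template in RULES:
        match = re.match(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am confused about AI"))
# Why do you say you are confused about AI?
```

Five minutes of conversation with such a program can feel intelligent; five seconds of reading its rule table does not.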

 

In this sense, intelligence is not a property that external things have. It's something that we bestow upon, or perceive in, external things. Is not one of the all-time greatest insults to one's intelligence the accusation of being predictable?

 

I suspect that any measure of intelligence will be relative to the observer's ability to predict expected causal effects and be pleasantly surprised, not too unlike the Turing Test.

 

Robert Howard

Phoenix, Arizona

 


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Pamela McCorduck
Sent: Sunday, December 24, 2006 3:55 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] The what is AI question

 

 

On Dec 24, 2006, at 2:47 PM, Giles Bowkett wrote:



On 12/24/06, phil henshaw <[EMAIL PROTECTED]> wrote:

I'm a little confused. If AI is the art of replicating the mechanisms of human intelligence with machines, doesn't that assume that brain function is digital? I don't think that's been demonstrated as yet.

The metaphor makes sense, but the thing is, we really don't have enough there to generalize from. In practical terms, most implementations of AI tend to be very targeted. Like the techniques which emulate inference and causality are very, very different from the techniques which emulate language and grammar. (Just as an example.) What you really have is not a grand unified theory of human consciousness so much as a grab-bag of techniques that sorta work. Some techniques are effective enough to offer insight into the individual processes they emulate, but there really isn't anything consistent enough to offer general insight into intelligence itself.

 

Perhaps.  Newell and Simon might disagree, and say that at a certain level of abstraction, the ability to create and manipulate symbols is the sign of intelligence.

 

But I agree that AI has been targeted (to Minsky's loud regret) and we cannot yet draw from that a grand unified theory.  I'm serene; physics has been at it for a lot longer, and they're having trouble with grand unified theories too.

 

P.

 

 

 

"My idea of good company, Mr. Elliot, is the company of clever, well-informed people, who have a great deal of conversation; that is what I call good company."

 

"You are mistaken," said he gently, "that is not good company, that is the best."

 

                                                Jane Austen, Persuasion



 

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
