On Dec 24, 2006, at 2:47 PM, Giles Bowkett wrote:
On 12/24/06, phil henshaw <[EMAIL PROTECTED]> wrote:
I'm a little confused. If AI is the art of replicating the
mechanisms of human intelligence with machines, doesn't that assume
that brain function is digital? I don't think that's been
demonstrated as yet.
The metaphor makes sense, but the thing is, we really don't have
enough there to generalize from. In practical terms, most
implementations of AI tend to be very targeted. Like the techniques
which emulate inference and causality are very, very different from
the techniques which emulate language and grammar. (Just as an
example.) What you really have is not a grand unified theory of human
consciousness so much as a grab-bag of techniques that sorta work.
Some techniques are effective enough to offer insight into the
individual processes they emulate, but there really isn't anything
consistent enough to offer general insight into intelligence itself.
Perhaps. Newell and Simon might disagree, and say that at a certain
level of abstraction, the ability to create and manipulate symbols is
the sign of intelligence.
But I agree that AI has been targeted (to Minsky's loud regret) and
that we cannot yet draw from it a grand unified theory. I'm serene;
physics has been at it for a lot longer, and they're having trouble
with grand unified theories too.
P.
"My idea of good company, Mr. Elliot, is the company of clever, well-
informed people, who have a great deal of conversation; that is what
I call good company."
"You are mistaken," said he gently, "that is not good company, that
is the best."
Jane Austen, Persuasion
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org