Hi Brad,

Of course I understand that to get the academic community (or anyone else)
really excited about Novamente as an AGI system, we'll need splashy demos.
They will come in time, don't worry ;-) ....  We have specifically chosen to
develop Novamente in accordance with a solid long-term design, rather than
with a view toward creating splashy short-term demos.  When we have taken
short-cuts it has been in order to get the system to do commercially useful
things for generating revenue, rather than to make splashy demos for the
academic community.

And, I hope my comments didn't seem to be "dissing" Deb Roy's work.  It's
really good stuff, and was among the more interesting stuff at this
conference, for sure.

I don't fault the academic AI community for not being psyched about
Novamente, which is unproven.  I do fault them for such things as

* still being psyched about SOAR and ACT-R, which have been around for
decades and have proved, both theoretically and pragmatically, to be
severely limited

* foolishness such as Psychometric AI, which posits fairly trivial
puzzle-solving achievements as supposed progress toward human-level AI (I
note that Selmer Bringsjord is a very smart guy with some great research
achievements; I just don't think his "Psychometric AI" idea is one of
them...)

* being psyched about clearly impractical architectures like Minsky's
Emotion Machine, which is even more unproven than Novamente (unlike him, we
do have a partially-complete software system that does some useful stuff),
and seems unimplementable in principle due to its over-complexity

Regarding Minsky, here is a quote from p. 118 of the Summer 2004 issue of AI
Magazine:

"Minsky responded by arguing that today, when our theories still explain too
little, we should elaborate rather than simplify, and we should be building
theories with more parts, not fewer.  This general philosophy pervades his
architectural design, with its many layers, representations, critics,
reasoning methods and other diverse types of components.  Only once we have
built an architecture rich enough to explain most of what people can do will
it make sense to try and simplify things.  But today, we are still far from
an architecture that explains even a tiny fraction of human cognition."

Now, I understand well that the human brain is a mess with a lot of
complexity, a lot of different parts doing diverse things.  However, what I
think Minsky's architecture does is to explicitly embed, in his AI design, a
diversity of phenomena that are better thought of as being emergent.  My
disagreement with him then comes down to a series of detailed arguments as
to whether this or that particular cognitive phenomenon

a) is explicitly encoded or emergent in human cognitive neuroscience
b) is better explicitly encoded in, or coaxed to emerge from, an AI system

In each case, it's a judgment call, and some cases are better understood
based on current AI or neuroscience knowledge than others.  But I think
Minsky has a consistent, very strong bias toward explicit encoding.  This is
the same kind of bias underlying Cyc and a lot of GOFAI.

For instance, Minsky's architecture contains a separate component dealing
with "Self-Ideals: assessing one's activities with respect to the ideals
established via interactions with one's role models."  I don't think this
should be put into an AI system by drawing a little box around it with a
connector going to other components.  Rather, this seems to me like
something that should emerge from lower-level social and cognitive and
motivational components and dynamics.
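
To make the contrast concrete, here's a quick toy sketch in Python.  All the
names and numbers are hypothetical and purely illustrative -- they are not
taken from Minsky's design or from Novamente's actual code.  The first style
wires in a dedicated "Self-Ideals" box; the second gets roughly the same
behavior to emerge from a generic imitation/reinforcement dynamic, with no
component that explicitly "knows about" ideals:

    # Toy sketch only: hypothetical names, not from any real architecture.
    from dataclasses import dataclass, field

    # Style 1: explicit encoding.  "Self-Ideals" is its own box with a wire
    # into the rest of the system.
    class SelfIdealsModule:
        def __init__(self):
            self.ideals = {}                      # trait -> target value from role models

        def observe_role_model(self, trait, value):
            self.ideals[trait] = value            # hand-coded rule: adopt the role model's trait

        def assess(self, own_traits):
            # explicit comparison of one's own behavior against stored ideals
            return {t: own_traits.get(t, 0.0) - v for t, v in self.ideals.items()}

    # Style 2: emergence.  There is no "Self-Ideals" box; the same behavior
    # arises from a generic social-learning update over simple agents.
    @dataclass
    class Agent:
        traits: dict = field(default_factory=dict)
        imitation_rate: float = 0.1

        def interact(self, other, reward):
            # drift toward agents whose behavior was rewarded; "ideals" exist
            # only implicitly, as attractors of this dynamic
            for t, v in other.traits.items():
                own = self.traits.get(t, 0.0)
                self.traits[t] = own + self.imitation_rate * reward * (v - own)

    if __name__ == "__main__":
        m = SelfIdealsModule()
        m.observe_role_model("honesty", 0.9)
        print(m.assess({"honesty": 0.4}))         # explicit: {'honesty': -0.5}

        learner, model = Agent(), Agent(traits={"honesty": 0.9})
        for _ in range(50):
            learner.interact(model, reward=1.0)
        print(round(learner.traits["honesty"], 2))  # emergent: ~0.9, no module "knows" why

Obviously this is a cartoon, but it captures the design question: do you give
the phenomenon its own box and connector, or do you tune lower-level dynamics
so that it shows up as an attractor of the system's behavior?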

-- Ben G


> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Behalf Of Brad Wyble
> Sent: Sunday, October 24, 2004 11:05 AM
> To: [EMAIL PROTECTED]
> Subject: RE: [agi] Ben vs. the AI academics...
>
>
> On Sun, 24 Oct 2004, Ben Goertzel wrote:
>
> >
> > One idea proposed by Minsky at that conference is something I
> disagree with
> > pretty radically.  He says that until we understand human-level
> > intelligence, we should make our theories of mind as complex as
> possible,
> > rather than simplifying them -- for fear of leaving something out!  This
> > reminds me of some of the mistakes we made at Webmind Inc.  I
> believe our
> > approach to AI there was fundamentally sound, yet the theory
> underlying it
> > (not the philosophy of mind, but the intermediate level
> > computational-cog-sci theory) was too complex which led to a
> software system
> > that was too large and complex and hard to maintain and tune.
> Contra Minsky
> > and Webmind, in Novamente I've sought to create the simplest
> possible design
> > that accounts for all the diverse phenomena of mind on an
> emergent level.
> > Minsky is really trying to jam every aspect of the mind into
> his design on
> > the explicit level.
>
>
> Can you provide a quote from Minsky about this?  That's certainly an
> interesting position to take.  The entire field of cognitive
> psychology is
> intent on reducing the complexity of its own function so that it can be
> understood by itself.
>
> On the other hand, Minsky's point is probably more one of evolutionary
> progress across the entire field, we should try many avenues and select
> those that work best, rather than getting locked into narrow
> visions of how the brain works as has happened repeatedly throughout the
> history of Psychology.
>
>
>
>
> Re: Deb, his stuff is clearly an amazing accomplishment, although I think
> that his success is more of a technical than a deeply theoretical flavor.
>
>
>
> On a more general note, I wouldn't expect to impress the AI
> community with
> just your theories and ideas.  There are many AI frameworks out
> there, and
> it takes too much effort to understand new ones that come along
> until they
> do something amazing.
>
> So you'll need a truly impressive demo to make a splash.   Until you
> do that, every AI conference you go to will be like this one.  Deb's
> learned this lesson and learned it well :)
>
> -Brad
>

