Re: [agi] Ben vs. the AI academics...

2004-10-24 Thread Bill Hibbard
Yeah, it was fun to watch you stir them up, Ben. But they did
take you seriously in the discussions, for example when they
included your provocative quote in the plenary summary.

A lot of the systems had impressive behavior, but most were
dead-end approaches, in my opinion, because they made logical
reasoning fundamental with learning as an add-on. The most
impressive talk from the mainstream AI community was by Deb
Roy, who is achieving interesting vision-language coordination
with systems that are fundamentally about learning.

It was good to see Ben again, and to meet Moshe Looks and Pei
Wang.

Cheers,
Bill

On Sat, 23 Oct 2004, Ben Goertzel wrote:

 Hmmm...

 I just had a somewhat funny experience with the traditional AI research
 community

 Moshe Looks and I gave a talk Friday at the AAAI Symposium on Achieving
 Human-Level Intelligence Through Integrated Systems and Research.  Our talk
 was an overview of Novamente; if you're curious our conference paper is at

 http://www.realai.net/AAAI04.pdf

 Anyway, I began my talk by noting that, in my opinion, "seeking human-level
 intelligence" is not necessarily the best approach to AI.  We humans aren't
 all that smart anyway, in the grand scheme of things; and it may be that the
 best approach to superintelligence doesn't even pass through humanlike
 intelligence, since human wetware is pretty different from computer
 hardware.  Wow, did that piss off the audience!! (an audience which, as I
 later found out, consisted largely of advocates of the SOAR and ACT-R
 cognitive modeling systems, which seek to model human cognition in detail,
 not by modeling human brain function but via tuning various logic and search
 algorithms to have similar properties to human cognition.)  Moshe and I went
 on to give a talk on Novamente, which was hard to do because we (like many
 others who were accepted for the symposium but not part of the AAAI inner
 circle) were allocated only 12 minutes plus 3 minutes for questions.  (Of
 course, it's not hard to summarize Novamente at a certain level of
 abstraction in 12 minutes, but it's pretty much impossible to be at all
 *convincing* to skeptical AI experts in that time-frame.)  So far as I
 could tell, no one really understood much of what we were talking about --
 because they were so irritated at me for belittling humanity, and because
 the Novamente architecture is too different from the usual for these guys
 to really understand it from such a compressed presentation.

 After our talk, one of the more esteemed members of the audience irritably
 asked me how I knew human intelligence wasn't the maximal possible
 intelligence -- had I actually experienced superior intelligences myself?  I
 was tempted to refer him to Terence McKenna and his superintelligent
 9-dimensional machine-elves, but instead I just referred to computation
 theory and the obvious limitations of the human brain.  Then he asked
 whether our system actually did anything, and I mentioned the Biomind and
 language-processing applications, which seemed to surprise him even though
 we had just talked about them in our presentation.

 Most of the talks on Friday and Saturday were fairly unambitious, though
 some of them were interesting technically -- the only other person
 presenting a real approach to human-level intelligence, besides me and
 Moshe, was Pei Wang.  Nearly all of the work presented was from a
 logic-based approach to AI.  Then there were some folks who posited that
 logic is a bad approach and AI researchers should focus entirely on
 perception and action, and let cognition emerge directly from these.  Then
 someone proposed that if you get the right knowledge representation,
 human-level AI is solved and you can use just about any algorithms for
 learning and reasoning, etc.  In general I didn't think the discussion ever
 dug into the really deep and hard issues of achieving human-level AI, though
 it came close a couple times.  For instance, there was a talk describing
 work using robot vision and arm-motion to ground linguistic concepts -- but
 it never got beyond the trivial level of using supervised categorization to
 ground particular words in sets of pictures, or using preprogrammed
 arm-control schema triggered by the output of a language parser in
 preprogrammed ways.
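
 For concreteness, here is a minimal sketch of what "supervised categorization
 to ground particular words in sets of pictures" amounts to at its simplest --
 my own illustration, not the presenters' code; the features and data are
 hypothetical.  Each picture is reduced to a feature vector, and a word is
 grounded as the centroid of the vectors it was paired with in training:

from math import dist

def train_groundings(labeled_examples):
    # labeled_examples: list of (word, feature_vector) pairs
    sums, counts = {}, {}
    for word, vec in labeled_examples:
        acc = sums.get(word, [0.0] * len(vec))
        sums[word] = [a + x for a, x in zip(acc, vec)]
        counts[word] = counts.get(word, 0) + 1
    # a word's grounding is just the mean feature vector of its examples
    return {w: [x / counts[w] for x in s] for w, s in sums.items()}

def name_picture(groundings, vec):
    # label a new picture with the word whose grounding lies nearest
    return min(groundings, key=lambda w: dist(groundings[w], vec))

# toy 2-D features, e.g. (redness, roundness) -- purely illustrative
examples = [("apple", [0.9, 0.8]), ("apple", [0.8, 0.9]),
            ("block", [0.3, 0.1]), ("block", [0.2, 0.2])]
groundings = train_groundings(examples)
print(name_picture(groundings, [0.85, 0.75]))   # prints: apple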

 There was a lot of talk about how hard it is for academics to get funding
 for academic research aimed at human-level AI, and tomorrow morning's
 session (which I plan to skip -- better to stay home and work on Novamente!)
 will include some brainstorming on how to improve this situation gradually
 over the next N years.  It seemed that the only substantial funding source
 for the work presented in the symposium was DARPA.

 Then, Sat. night, there was a session in which the people from our symposium
 got together with the people from the 5 other AAAI symposia being held in
 the same hotel.  One member from each symposium was supposed to get up and
 give a 

p.s., Re: [agi] Ben vs. the AI academics...

2004-10-24 Thread Bill Hibbard
My talk is available at:

  http://www.ssec.wisc.edu/~billh/g/FS104HibbardB.pdf

There was a really interesting talk by the neuroscientist
Richard Granger with some publications available at:

  http://www.brainengineering.com/publications.html

Cheers,
Bill



RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Ben Goertzel



 A lot of the systems had impressive behavior, but most were
 dead-end approaches, in my opinion, because they made logical
 reasoning fundamental with learning as an add-on. The most
 impressive talk from the mainstream AI community was by Deb
 Roy, who is achieving interesting vision-language coordination
 with systems that are fundamentally about learning.

 It was good to see Ben again, and to meet Moshe Looks and Pei
 Wang.

 Cheers,
 Bill

Hey Bill --

Just a brief comment about Deb Roy's work, then some more comments on
human-level AI in general.

Deb's work is really interesting; however, it actually represents a move AWAY
from learning, as compared to his thesis work a few years ago.  Then he was
focusing on having his software system learn visuomotor groundings for
linguistic terms (nouns like "apple", etc.).  Now he has been making his
system do more complex stuff, but via hard-coding control schema into it,
rather than via learning.  When Moshe and I talked to him after his talk,
however, he said his next step would be to implement some approach to
learning these control schemata.  But he didn't seem to have a very clear
idea about how he'd do it.  When Moshe asked him how he'd implement
grounding of prepositional and subject-argument relationships (arguably the
nontrivial part of language-grounding), he said he didn't have an approach
to that yet because he didn't know any good way to represent that kind of
knowledge on the cognitive level.
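
To make concrete what grounding a prepositional relationship would even mean,
here is a toy illustration of my own (nothing to do with Deb's system; the
names and thresholds are hypothetical): given object positions and sizes as a
vision module might report them, a relation like "on" can be computed straight
from the geometry -- the hard part Deb was pointing at is learning such
relations and representing them cognitively, rather than hand-coding them
like this:

def on(a, b, tol=0.5):
    # a, b: dicts holding an object's centre (x, y) and size, as reported
    # by a (hypothetical) vision module; "a is on b" if a rests just above b
    horizontally_aligned = abs(a["x"] - b["x"]) < tol * b["width"]
    resting_on_top = abs((a["y"] - a["height"] / 2) - (b["y"] + b["height"] / 2)) < tol
    return horizontally_aligned and resting_on_top

red_block  = {"x": 1.0, "y": 2.0, "width": 1.0, "height": 1.0}
blue_block = {"x": 1.1, "y": 1.0, "width": 1.0, "height": 1.0}
print(on(red_block, blue_block))   # True: "the red block is on the blue block"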

So I think his work has tremendous promise; yet I couldn't help wishing he'd
pushed it in a more learning-based direction during the last few years.

On the other hand, I can sympathize, because -- for instance -- over the
last year we've had a Novamente team member (Mike Ross) create a hard-coded
language processing module.  Why?  Because we needed it for a commercial
project.  (Deb Roy is an academic -- academics don't have revenue pressures,
but they often have demo pressures associated with funding sources!).  Now,
during 2005 Mike will replace this hard-coded language processing module
with a learning-oriented language processing module.  Basically, the need
for incremental useful results can be a burden.  It's good because it keeps
you from moving a long way in a useless direction, but it can also
tremendously slow down progress toward long-term goals.

Earlier this year, in the US Virgin Islands, Marvin Minsky and his friends
had a private AI symposium on the topic of human-level intelligence.  It was
written up in the June 2004 issue of AI Magazine, and it seems to have
been a bit more interesting than the AAAI symposium that we just attended.
No real solutions were proposed, though; the focus was on Minsky's and
Sloman's architectures for human-level AI (e.g. Minsky's Emotion Machine
stuff).

One idea proposed by Minsky at that conference is something I disagree with
pretty radically.  He says that until we understand human-level
intelligence, we should make our theories of mind as complex as possible,
rather than simplifying them -- for fear of leaving something out!  This
reminds me of some of the mistakes we made at Webmind Inc.  I believe our
approach to AI there was fundamentally sound, yet the theory underlying it
(not the philosophy of mind, but the intermediate level
computational-cog-sci theory) was too complex, which led to a software system
that was too large and complex and hard to maintain and tune.  Contra Minsky
and Webmind, in Novamente I've sought to create the simplest possible design
that accounts for all the diverse phenomena of mind on an emergent level.
Minsky is really trying to jam every aspect of the mind into his design on
the explicit level.

Another idea that came up at the Virgin Islands symposium was to create a
simulation world in which AI systems control agents that collectively try to
solve simple object-manipulation tasks.  The prototype case is a bunch of
kids collaborating to build towers out of blocks.  The idea was also raised
of making the simulation more realistic by making the block-building take
place in a simulated living room or restaurant or some such.  I like this
line of thinking because it is closely related to the AGI-SIM simulation
world project that we're currently working on (an open-source sim-world to
be used for Novamente but also by other projects).

-- Ben





RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Brad Wyble
On Sun, 24 Oct 2004, Ben Goertzel wrote:
One idea proposed by Minsky at that conference is something I disagree with
pretty radically.  He says that until we understand human-level
intelligence, we should make our theories of mind as complex as possible,
rather than simplifying them -- for fear of leaving something out!  This
reminds me of some of the mistakes we made at Webmind Inc.  I believe our
approach to AI there was fundamentally sound, yet the theory underlying it
(not the philosophy of mind, but the intermediate level
computational-cog-sci theory) was too complex which led to a software system
that was too large and complex and hard to maintain and tune.  Contra Minsky
and Webmind, in Novamente I've sought to create the simplest possible design
that accounts for all the diverse phenomena of mind on an emergent level.
Minsky is really trying to jam every aspect of the mind into his design on
the explicit level.

Can you provide a quote from Minsky about this?  That's certainly an 
interesting position to take.  The entire field of cognitive psychology is 
intent on reducing the complexity of its own function so that it can be 
understood by itself.

On the other hand, Minsky's point is probably more one of evolutionary
progress across the entire field: we should try many avenues and select
those that work best, rather than getting locked into narrow
visions of how the brain works, as has happened repeatedly throughout the
history of psychology.


Re: Deb, his stuff is clearly an amazing accomplishment, although I think
that his success has more of a technical than a deeply theoretical flavor.


On a more general note, I wouldn't expect to impress the AI community with 
just your theories and ideas.  There are many AI frameworks out there, and 
it takes too much effort to understand new ones that come along until they 
do something amazing.

So you'll need a truly impressive demo to make a splash.   Until you 
do that, every AI conference you go to will be like this one.  Deb's 
learned this lesson and learned it well :)

-Brad


RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Ben Goertzel

Hi Brad,

Of course I understand that to get the academic community (or anyone else)
really excited about Novamente as an AGI system, we'll need splashy demos.
They will come in time, don't worry ;-)   We have specifically chosen to
develop Novamente in accordance with a solid long-term design, rather than
with a view toward creating splashy short-term demos.  When we have taken
short-cuts it has been in order to get the system to do commercially useful
things for generating revenue, rather than to make splashy demos for the
academic community.

And, I hope my comments didn't seem to be dissing Deb Roy's work.  It's
really good stuff, and was among the more interesting stuff at this
conference, for sure.

I don't fault the academic AI community for not being psyched about
Novamente, which is unproven.  I do fault them for such things as

* still being psyched about SOAR and ACT-R, which have been around for
decades and have proved very sorely limited, both theoretically and
pragmatically

* foolishness such as Psychometric AI, which posits fairly trivial
puzzle-solving achievements as supposed progress toward human-level AI (I
note that Selmer Bringsjord is a very smart guy with some great research
achievements; I just don't think his Psychometric AI idea is one of
them...)

* being psyched about clearly impractical architectures like Minsky's
Emotion Machine, which is even more unproven than Novamente (unlike him, we
do have a partially-complete software system that does some useful stuff),
and seems unimplementable in principle due to its over-complexity

Regarding Minsky, a quote from p. 118 of AI Magazine Summer 2004 is:

"Minsky responded by arguing that today, when our theories still explain too
little, we should elaborate rather than simplify, and we should be building
theories with more parts, not fewer.  This general philosophy pervades his
architectural design, with its many layers, representations, critics,
reasoning methods and other diverse types of components.  Only once we have
built an architecture rich enough to explain most of what people can do will
it make sense to try and simplify things.  But today, we are still far from
an architecture that explains even a tiny fraction of human cognition."

Now, I understand well that the human brain is a mess with a lot of
complexity, a lot of different parts doing diverse things.  However, what I
think Minsky's architecture does is to explicitly embed, in his AI design, a
diversity of phenomena that are better thought of as being emergent.  My
argument with him then comes down to a series of detailed arguments as to
whether this or that particular cognitive phenomenon

a) is explicitly encoded or emergent in human cognitive neuroscience
b) is better explicitly encoded, or coaxed to emerge, from an AI system

In each case, it's a judgment call, and some cases are better understood
based on current AI or neuroscience knowledge than others.  But I think
Minsky has a consistent, very strong bias toward explicit encoding.  This is
the same kind of bias underlying Cyc and a lot of GOFAI.

For instance, Minsky's architecture contains a separate component dealing
with "Self-Ideals": assessing one's activities with respect to the ideals
established via interactions with one's role models.  I don't think this
should be put into one's AI system via drawing a little box around it with a
connector going to other components.  Rather, this seems to me like
something that should emerge from lower-level social and cognitive and
motivational components and dynamics.

-- Ben G


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Behalf Of Brad Wyble
 Sent: Sunday, October 24, 2004 11:05 AM
 To: [EMAIL PROTECTED]
 Subject: RE: [agi] Ben vs. the AI academics...


 On Sun, 24 Oct 2004, Ben Goertzel wrote:

 
  One idea proposed by Minsky at that conference is something I
 disagree with
  pretty radically.  He says that until we understand human-level
  intelligence, we should make our theories of mind as complex as
 possible,
  rather than simplifying them -- for fear of leaving something out!  This
  reminds me of some of the mistakes we made at Webmind Inc.  I
 believe our
  approach to AI there was fundamentally sound, yet the theory
 underlying it
  (not the philosophy of mind, but the intermediate level
  computational-cog-sci theory) was too complex which led to a
 software system
  that was too large and complex and hard to maintain and tune.
 Contra Minsky
  and Webmind, in Novamente I've sought to create the simplest
 possible design
  that accounts for all the diverse phenomena of mind on an
 emergent level.
  Minsky is really trying to jam every aspect of the mind into
 his design on
  the explicit level.


 Can you provide a quote from Minsky about this?  That's certainly an
 interesting position to take.  The entire field of cognitive
 psychology is
 intent on reducing the complexity of its own function so that it can

RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Ben Goertzel
 Now, I understand well that the human brain is a mess with a lot
 of complexity, a lot of different parts doing diverse things.
 However, what I think Minsky's architecture does is to explicitly
 embed, in his AI design, a diversity of phenomena that are better
 thought of as being emergent.  My argument with him then comes
 down to a series of detailed arguments as to whether this or that
 particular cognitive phenomenon

 a) is explicitly encoded or emergent in human cognitive neuroscience
 b) is better explicitly encoded, or coaxed to emerge, from an AI system

A not incidental point here is that Minsky's design lacks any learning
dynamics that could possibly lead to anything emerging.

I had an argument with Minsky about this in the late 90's, and he basically
told me he thought the notion of emergence as applied to cognitive systems
was a crock of nonsense...

Basically, the people at this human-level AAAI symposium seemed divided
into:

* those who agree with  Minsky that cognitive emergence is a crock
* those who think that cognition emerges entirely from perception and action

Complex, self-organizing dynamics of cognition are a foreign concept, a kind
of gibberish, to most [of course, not all] of these folks ;-)

-- Ben G




RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Brad Wyble
Hi Brad,
really excited about Novamente as an AGI system, we'll need splashy demos.
They will come in time, don't worry ;-)   We have specifically chosen to
Looking forward to it as ever :)  I can understand your frustration with 
this state of affairs.  Getting people to buy into your theoretical 
framework requires a major time investment on their part.

This is why my own work stays within the bounds of conventional
experimental and psychological research.  I speak the same language as
everyone else, and so it's easy to cross-pollinate ideas.  Of course, this
is also why SOAR and similar architectures have such appeal despite their
limitations.  Because the SOAR community is speaking the same language to
one another, it's possible (in theory) for the whole of them to make
faster progress than if they each had their own pet architecture.

This synergy is very real, but may be outweighed by SOAR's 
limitations.


And, I hope my comments didn't seem to be dissing Deb Roy's work.  It's
really good stuff, and was among the more interesting stuff at this
conference, for sure.
Not at all, I think we're in general agreement about the value of his 
work.


Now, I understand well that the human brain is a mess with a lot of
complexity, a lot of different parts doing diverse things.  However, what I
think Minsky's architecture does is to explicitly embed, in his AI design, a
diversity of phenomena that are better thought of as being emergent.  My
argument with him then comes down to a series of detailed arguments as to
whether this or that particular cognitive phenomenon
a) is explicitly encoded or emergent in human cognitive neuroscience
b) is better explicitly encoded, or coaxed to emerge, from an AI system
In each case, it's a judgment call, and some cases are better understood
based on current AI or neuroscience knowledge than others.  But I think
Minsky has a consistent, very strong bias toward explicit encoding.  This is
the same kind of bias underlying Cyc and a lot of GOFAI.

Whether something is explicit or emergent depends only on your 
perspective of what counts as explicit.  I'll assume you mean anatomically 
explicit in some way (where anatomical refers to features of both
neurophysiology and box/arrow design).

With this assumption, I think b follows from a.   Evolution has 
always looked for the efficient solution, so if evolution has explicitly 
encoded these behaviors, it's likely the best way to do it, at least as 
far as we'll be able to determine with our stupid human brains :)

There's certainly a huge preponderance of evidence that our brains have 
leaned towards specific anatomically explicit solutions to 
problems in the domains that we can examine easily (near the motor and 
sensory areas).

Of course, in many cases these anatomically explicit solutions are
emergent from developmental processes, but I still think they should be 
considered explicit.

-Brad


RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Ben Goertzel

Hi,

 Looking forward to it as ever :)  I can understand your frustration with
 this state of affairs.  Getting people to buy into your theoretical
 framework requires a major time investment on their part.

 This is why my own work stays within the bounds of conventional
 experimental and psychological research.  I speak the same language as
 everyone else, and so it's easy to cross-pollinate ideas.  Of
 course, this
 is also why SOAR and similar architectures have such appeal despite their
 limitations.  Because the SOAR community is speaking the same language to
 one another, it's possible (in theory) for the whole of them to make
 faster progress than if they each had their own pet architecture.

Yes, this issue of specialized languages is a hard problem for AGI work.
This is one reason that, when hiring people for Novamente projects, I have a
bias toward former Webmind-ers...  Even though Novamente is a quite
different software system and mathematical framework from Webmind, it's
based on the same sort of conceptual language, and the folks who worked at
Webmind are used to that language.

I noticed at this conference that different researchers were using basic
words like "knowledge", "representation", "learning", and "evolution"
in very different ways -- which makes communication tricky!

When Pei Wang and I worked together in 1998-2001, we spent about a month
initially just establishing a common language in which we could communicate
to really understand what our agreements and disagreements were...


 Whether something is explicit or emergent depends only on your
 perspective of what counts as explicit.  I'll assume you mean
 anatomically
 explicit in some way (where anatomical refers to features of both
 neurophysiology and box/arrow design).

In an AI context, it means whether something exists explicitly in the source
code, rather than coming about dynamically as an indirect result of the
source code, in the bit-patterns in RAM created by the executable running...
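
To make the contrast concrete, a toy illustration (my own, and nothing to do
with Novamente's internals): in the first function the behavior of interest --
bunching items into a cluster -- is named right in the source; in the second,
no line mentions clustering, yet the run-time state ends up clustered anyway.

import random

def explicit_cluster(items, centre, radius=2.0):
    # "explicit": the clustering operation exists as a named step in the code
    return [x for x in items if abs(x - centre) < radius]

def emergent_cluster(items, steps=2000):
    # "emergent": each item just takes a small step toward a random neighbour;
    # nothing here mentions clusters, but the items end up bunched together
    items = list(items)
    for _ in range(steps):
        i, j = random.sample(range(len(items)), 2)
        items[i] += 0.1 * (items[j] - items[i])
    return items

positions = [random.uniform(0.0, 100.0) for _ in range(20)]
print(sorted(round(x, 1) for x in emergent_cluster(positions)))
# the printed values huddle around one point -- a "cluster" that exists only
# in the bit-patterns at run time, not anywhere in the source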

 With this assumption, I think b follows from a.   Evolution has
 always looked for the efficient solution, so if evolution has explicitly
 encoded these behaviors, it's likely the best way to do it, at least as
 far as we'll be able to determine with our stupid human brains :)


 There's certainly a huge preponderance of evidence that our brains have
 leaned towards specific anatomically explicit solutions to
 problems in the domains that we can examine easily (near the motor and
 sensory areas).

 Of course, in many cases these anatomically explicit solutions are
 emergent from developmental processes, but I still think they should be
 considered explicit.

Agreed.  And I think that sensorimotor stuff is more likely to be explicit
rather than emergent in the brain...  And that, in coding an AI system,
it's hopeless to try to make too much of cognition explicit rather than
emergent -- but the same statement probably doesn't hold for perception and
action...

-- Ben G




RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Brad Wyble
So much for getting work done today :)
I noticed at this conference that different researchers were using basic
words like "knowledge", "representation", "learning", and "evolution"
in very different ways -- which makes communication tricky!
Don't get me started on Working Memory.
In an AI context, it means whether something exists explicitly in the source
code, rather than coming about dynamically as an indirect result of the
source code, in the bit-patterns in RAM created by the executable running...
A fair definition.
Agreed.  And I think that sensorimotor stuff is more likely to be explicit
rather than emergent in the brain...  And that, in coding an AI system,
it's hopeless to try to make too much of cognition explicit rather than
emergent -- but the same statement probably doesn't hold for perception and
action...

If that were the case, would you not expect to see more variance in
high-level behaviors?  Instead we tend to see the same types of behavior
expressed, the only difference between people being the relative amount of 
expression of these tendencies.

But I guess that's an arguable point, whether these observed tendencies 
among a population of people are actually there, or are only a product of 
the theories used to classify them.


-Brad


Re: [agi] Ben vs. the AI academics...

2004-10-24 Thread Pei Wang
I just got home and have no time to write long emails --- I type much slower
than Ben does. ;-)
I was very glad to meet Ben again, and Bill and Moshe for the first time (as
well as some other people who are not on this list).
The Symposium description and schedule can be found at
http://xenia.media.mit.edu/~nlc/conferences/fss04.html, and it won't be hard
to find the homepage of a speaker if one talk sounds interesting.
A preprint version of my paper is at
http://www.cis.temple.edu/~pwang/drafts/PeiWang-FSS04.pdf.
Pei
- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED] Org [EMAIL PROTECTED]; [EMAIL PROTECTED] Listbox. Com
[EMAIL PROTECTED]
Sent: Saturday, October 23, 2004 10:05 PM
Subject: [agi] Ben vs. the AI academics...



Hmmm...
I just had a somewhat funny experience with the traditional AI research
community
Moshe Looks and I gave a talk Friday at the AAAI Symposium on Achieving
Human-Level Intelligence Through Integrated Systems and Research.  Our
talk
was an overview of Novamente; if you're curious our conference paper is at
http://www.realai.net/AAAI04.pdf
Anyway, I began my talk by noting that, in my opinion, Seeking
human-level
intelligence is not necessarily the best approach to AI.  We humans aren't
all that smart anyway, in the grand scheme of things; and it may be that
the
best approach to superintelligence doesn't even pass through humanlike
intelligence, since human wetware is pretty different from computer
hardware.  Wow, did that piss off the audience!! (an audience which, as I
later found out, consisted largely of advocates of the SOAR and ACT-R
cognitive modeling systems, which seek to model human cognition in detail,
not by modeling human brain function but via tuning various logic and
search
algorithms to have similar properties to human cognition.)  Moshe and I
went
on to give a talk on Novamente, which was hard to do because we (like many
others who were accepted for the symposium but not part of the AAAI inner
circle) were allocated only 12 minutes plus 3 minutes for questions.
(Of
course, it's not hard to summarize Novamente at a certain level of
abstraction in 12 minutes, but it's pretty much impossible to be at all
*convincing* to skeptical AI experts in that time-frame.)  So far as I
could tell, no one really understood much of what we were talking about --
because they were so irritated at me for belittling humanity, and because
the Novamente architecture is too different from the usual for these
guys
to really understand it from such a compressed presentation.
After our talk, one of the more esteemed members of the audience
irritably
asked me how I knew human intelligence wasn't the maximal possible
intelligence -- had I actually experienced superior intelligences myself?
I
was tempted to refer him to Terence McKenna and his superintelligent
9-dimensional machine-elves, but instead I just referred to computation
theory and the obvious limitations of the human brain.  Then he asked
whether our system actually did anything, and I mentioned the Biomind and
language-processing applications, which seemed to surprise him even though
we had just talked about them in our presentation.
Most of the talks on Friday and Saturday were fairly unambitious, though
some of them were interesting technically -- the only other person
presenting a real approach to human-level intelligence, besides me and
Moshe, was Pei Wang.  Nearly all of the work presented was from a
logic-based approach to AI.  Then there were some folks who posited that
logic is a bad approach and AI researchers should focus entirely on
perception and action, and let cognition emerge directly from these.  Then
someone proposed that if you get the right knowledge representation,
human-level AI is solved and you can use just about any algorithms for
learning and reasoning, etc.  In general I didn't think the discussion
ever
dug into the really deep and hard issues of achieving human-level AI,
though
it came close a couple times.  For instance, there was a talk describing
work using robot vision and arm-motion to ground linguistic concepts --
but
it never got beyond the trivial level of using supervised categorization
to
ground particular words in sets of pictures, or using preprogrammed
arm-control schema triggered by the output of a language parser in
preprogrammed ways.
There was a lot of talk about how hard it is for academics to get funding
for academic research aimed at human-level AI, and tomorrow morning's
session (which I plan to skip -- better to stay home and work on
Novamente!)
will include some brainstorming on how to improve this situation gradually
over the next N years.  It seemed that the only substantial funding source
for the work presented in the symposium was DARPA.
Then, Sat. night, there was a session in which the people from our
symposium
got together with the people from the 5 other AAAI symposia being held in
the same hotel.  One member from each symposium 

Re: [agi] Ben vs. the AI academics...

2004-10-24 Thread Pei Wang
One idea proposed by Minsky at that conference is something I disagree 
with
pretty radically.  He says that until we understand human-level
intelligence, we should make our theories of mind as complex as possible,
rather than simplifying them -- for fear of leaving something out!  This
reminds me of some of the mistakes we made at Webmind Inc.  I believe our
approach to AI there was fundamentally sound, yet the theory underlying 
it
(not the philosophy of mind, but the intermediate level
computational-cog-sci theory) was too complex which led to a software 
system
that was too large and complex and hard to maintain and tune.  Contra 
Minsky
and Webmind, in Novamente I've sought to create the simplest possible 
design
that accounts for all the diverse phenomena of mind on an emergent level.
Minsky is really trying to jam every aspect of the mind into his design 
on
the explicit level.

Can you provide a quote from Minsky about this?  That's certainly an 
interesting position to take.  The entire field of cognitive psychology is 
intent on reducing the complexity of its own function so that it can be 
understood by itself.
The AI Magazine paper is available online: "The St. Thomas Common Sense
Symposium: Designing Architectures for Human-Level Intelligence"
(http://web.media.mit.edu/~push/StThomas-AIMag.pdf)

Pei