On Apr 25, 2007, at 11:12 AM, Richard Loosemore wrote:
Benjamin Goertzel wrote:
Yes, in the mid-1990's (when I was working in a psych dept as a cog sci research fellow), I tried pretty hard... I got to a certain point, found big gaps, and filled them in with non-human-cognition-based stuff ... that's the approach that led to the Webmind design, Novamente's (inferior) predecessor...
And on the basis of your failure, you conclude that no other
attempt will succeed? Hardly an argument.
Also, if you think, today, that the best that can be done is to
glue together Baars and Edelman's theories, it sounds like you were
not even close to the kind of approach that I am talking about
anyway, so your conclusions about your own effort may not carry over.
I'm only a beginner at AGI but I'll try to describe my approach to
evaluating AGI projects, more in case anyone wants to give me
feedback on it than because I think I have a lot of insight.
First I want to understand what intelligence can do given real-world constraints. What are the problems that can be solved, or that I want solved? Existing intelligences are a key guide to what is manageable, but they are not the only possibilities. An important part of understanding intelligence is understanding the types and components of intelligence: high- and low-level components that solve problems. Some of the high-level parts are essential for particular sorts of problems, though they can be sliced up in various ways. Some low-level concepts may be fundamental to working designs, but many low- and mid-level methods can probably be done lots of ways. Looking at existing methods can often be a great way to get an answer.
If I want to evaluate a particular AGI design, I ask how it will solve some aspect of intelligent behavior. If the design seems to solve many or most problems of intelligence and I want to critique it, I would look for fundamental things it seems not to address (or not to address adequately). If I am to think human-level AGI unlikely in the near future, I must find fundamental parts of intelligence which no one seems close to developing. If I am to believe AGI will definitely work soon, I must see how all the aspects of intelligence are covered in practically doable ways. I don't need to know the details of the programs or math. I don't even need to know how a system works. Rather, I need to see that it is likely to handle all key aspects of the problem domain in question.
So I'm looking for things that are really hard and basic that no one knows how to do, or I am looking for a system that seems to fully cover the puzzle at whatever level of intelligence is being evaluated. Human-level AGI should handle most human-type intelligence. If I (or the evaluator) can see how the system will do that well enough for it to be a manageable engineering problem, the problem may be conceptually solved. If we do not know how it does something but it does do it, the problem is also solved.
When I look at AGI ideas, many or most problems seem understood at an intuitively plausible level. The details of making it all work may be difficult. It may be that some key but not obvious pieces of the puzzle are missing and will prove difficult. There is a back and forth between people suggesting hard parts of the problem and other people describing how they think those hard parts can be solved. Key hard parts of the problem that no one can convincingly solve suggest an unknown time before we get AGI. If the best-known solution requires better hardware, that is potentially a roadblock.
People on either side of the argument who can't describe a fairly complete system and are not familiar with most of the best work being done may contribute ideas, but I will not take their conclusions seriously (never mind that I can barely evaluate things one way or another until I can describe a fairly complete system myself and am familiar with most of the best work being done).
To me it is the understanding of the components of intelligence and how they interact that is interesting, rather than a particular test or definition. How does a system handle creativity? Is it scalable? How good can it get with existing or expected hardware? What are its flaws? Does it seem to have all the key parts? How well are they integrated? To me it is the testing and understanding of the parts, and of the integration of the parts, that is more significant than testing the system in general, until the system is close to completion (at some level). But if there is a key part of intelligence that one does not really know how to do, that's a problem. Or if the parts all seem okay but how to integrate them is a mystery, that is a problem.
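The evaluation procedure above can be sketched as a simple coverage checklist: score each key component of intelligence on how convincingly a design covers it, and treat any missing key part (or a missing integration story) as a blocker. This is only an illustrative sketch; the component names, scores, and threshold are made-up assumptions, not data from any real AGI project.

```python
# Hypothetical sketch of AGI-design evaluation as a coverage checklist.
# Component names and confidence scores are illustrative assumptions.

KEY_COMPONENTS = ["perception", "learning", "reasoning",
                  "creativity", "integration"]

def evaluate(design_coverage, threshold=0.5):
    """design_coverage maps component -> confidence in [0, 1] that the
    design handles that component in a practically doable way."""
    gaps = [c for c in KEY_COMPONENTS
            if design_coverage.get(c, 0.0) < threshold]
    # Per the text: one key part no one knows how to do, or a mysterious
    # integration story, is enough to call the design unconvincing.
    return ("plausible", []) if not gaps else ("unconvincing", gaps)

verdict, gaps = evaluate({"perception": 0.8, "learning": 0.7,
                          "reasoning": 0.6, "creativity": 0.2,
                          "integration": 0.4})
print(verdict, gaps)  # → unconvincing ['creativity', 'integration']
```

The point of the sketch is only that the verdict is gated on every key component at once, which matches the argument that a single convincing gap matters more than overall impressiveness.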
This sort of approach does not suggest easy answers or tests. It involves a good understanding of intelligence and of many aspects of intelligence, each of which has its own details. There may be some key insights which make it all more manageable. There may be some show-stoppers which no one has really brought up yet. My job as a beginner is to learn enough about intelligence, and about the projects people are working on, to actually apply the approach I describe.
-Will Wiser
-----
This list is sponsored by AGIRI: http://www.agiri.org/email