Mike,
Ben is a bit too much of a gentleman so let me rephrase it for you . . . .
This mailing list is a community that is somewhat isolated, in that it has a
shared goal of AGI and has evolved its vocabulary somewhat to further that
goal. Learning that vocabulary is not the easiest thing, but it is somewhat
incumbent on *you* to learn it if you wish to be effective in this group. It
is also incumbent upon you to learn the fundamental knowledge and assumptions
of the majority of the group (and there are a lot of very complex ones) and
get up to speed on them. Yes -- this is *NOT* easy.
Questions like "How does your system, or these other systems that you are
talking about, represent "goals", "move", "obstacle", "path"? Literally, what
form do those concepts take within any of these systems, and what
meanings/senses/referents are attached to them, and how? Do they actually use
the general concept "goal" as such -- as distinct, obviously, from having
their own specific goals?" are good questions -- FOR A NOVICE -- but they
indicate both that you haven't done your homework and that you don't realize
the difficulty and effort involved in answering them (much less the reality
that the information is already available if you'd expend some effort and go
looking for it).
Statements like "You see, if any computer system can represent those
concepts as the human brain actually does, then I would suggest that it's at
least half solved the problem of AGI." are generally regarded as being so
obvious that they shouldn't need to be said.
When Ben says that "the approach you are advocating is EXACTLY THE APPROACH
TAKEN BY THE MAINSTREAM OF ACADEMIC AI RESEARCH TODAY" (and uses caps -- which
is *very* unlike Ben), he is basically saying that (in his opinion -- but his
opinions tend to be well worth listening to) either you don't understand the
difference between narrow AI and AGI -- OR -- you are being 100% unsuccessful
in communicating. You are having *very* similar interactions with most of the
other members of this list.
We all can understand how nasty and huge the learning curve is, and we would
like to help newcomers. On the other hand, it would also help if your e-mails
were not frequently perceived as saying that the people in the group have
absolutely no clue what they're doing, or that they aren't doing AGI (or that
they aren't doing it THE RIGHT WAY). Yes, there is some dispute in the group
about what AGI is and much dispute about exactly how to do it -- but this is
because NO ONE has more than good guesses at this point. Also, to the new
observer, it may appear that there is far more disagreement than there
actually is, because what is most frequently talked about are the areas of
difference (as we attempt to hash things out and convince each other and
learn and develop) rather than the BROAD areas that most of us agree upon
(and those who don't agree on them -- like, frequently, Richard Loosemore --
at least understand them well enough to have valid and constructive
conversations about them).
We want to welcome new members to this group, but your assumptions and
communication style are not making it easy for us (and hopefully you can
recognize the time and effort spent bringing you up to speed). A total novice
debating an expert may be a great experience for the novice but does *very*
little for the group as a whole except expend time and attention (since the
novice is very unlikely to contribute to the expert's understanding until he
gets up to speed). I would suggest that it would be most effective if you
adopted a course of LEARNING what the group believes and how it communicates
FIRST and DEBATING LATER (after you have both something to debate about *and*
the ability to communicate it effectively).
Mark
----- Original Message -----
From: Mike Tintner
To: [email protected]
Sent: Thursday, May 03, 2007 10:16 AM
Subject: Re: [agi] The University of Phoenix Test [was: Why do you think your
AGI design will work?]
Ben,
It took a while for me to communicate to you what I meant by defining
"problem classes," etc. I suspect that something similar is happening here.
What I am saying LOOKS obvious to you, I understand. But I don't think it is
at all. I don't think you're understanding the actual meaning - because the
words here are so familiar that you're not appreciating that they can be, and
are being, used in radically different ways and on different levels.
Let's cut to the heart of it -
how does your system, or these other systems that you are talking about,
represent "goals", "move", "obstacle", "path"? Literally, what form do those
concepts take within any of these systems - and what meanings/senses/
referents are attached to them, and how? Do they actually use the general
concept "goal" as such - as distinct, obviously, from having their own
specific goals?
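[An illustrative aside to make that last distinction concrete: below is a
minimal, purely hypothetical Python sketch - my own toy, not the
representation used by any of the systems under discussion - contrasting a
system that merely *has* a specific goal, implicit in its control loop, with
one that *represents* "goal" as a general, first-class concept.]

    from dataclasses import dataclass
    from typing import Callable

    # System A: the goal (10, 4) exists only implicitly, inside the
    # control loop. The system cannot name, inspect, compare, or swap
    # its goal, because "goal" is nowhere represented as such.
    def step_toward_target(pos: tuple[int, int]) -> tuple[int, int]:
        x, y = pos
        return (x + (10 > x) - (10 < x), y + (4 > y) - (4 < y))

    # System B: "goal" is a first-class, general concept. Specific
    # goals are instances of it, so the system can reason about goals
    # as such - enumerate them, compare them, adopt new ones.
    @dataclass
    class Goal:
        description: str                      # an attached "sense"
        satisfied: Callable[[object], bool]   # its referent in the world

    def pursue(state, goal: Goal, act: Callable, limit: int = 100):
        """Apply `act` repeatedly until `goal` is satisfied (or give up)."""
        for _ in range(limit):
            if goal.satisfied(state):
                return state
            state = act(state)
        return state

    reach_corner = Goal("reach (10, 4)", lambda p: p == (10, 4))
    print(pursue((0, 0), reach_corner, step_toward_target))  # (10, 4)

Both systems reach (10, 4); only system B could be said to "use the general
concept 'goal' as such" in the sense asked about above.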
You see, if any computer system can represent those concepts as the human
brain actually does, then I would suggest that it's at least half solved the
problem of AGI.
----- Original Message -----
From: Benjamin Goertzel
To: [email protected]
Sent: Thursday, May 03, 2007 2:45 PM
Subject: Re: [agi] The University of Phoenix Test [was: Why do you think
your AGI design will work?]
On 5/3/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
James,
It's interesting - there is a huge general block here - and culture-wide -
to thinking about intelligence in terms of problems as opposed to the means
and media of solution (or, if you like, the tasks vs. the tools).
What you seem to be missing, Mike, is that the approach you are advocating
is EXACTLY THE APPROACH TAKEN BY THE MAINSTREAM OF ACADEMIC AI RESEARCH TODAY.
If you go to the AAAI conference, you can see 1000 papers presented
discussing AI in the context of particular problems, problem classes, etc.
There is really nothing unusual about what you are suggesting. It's just
that this particular email list is dominated by people who have decided a
different approach is more promising.
Your list is all about means - AGI that uses this language or that, and
that uses a body or not. Similarly, Pei's and Ben's expositions of their
systems are all about how-it-works rather than what-it-does.
Because how-it-works is the hard part! What-it-does is not the hard part.
What I suggest is an AGI - and almost certainly it will be a robot - that
is given a general set of concepts and education about "moving" and
"navigating" past "obstacles" towards "goals" - much, I guess, like an infant
first learns about navigating round its environment in a very general way,
before it gets down to complex, specific activities.
Ok, that's fine ;-) ... But that is already what we are doing, with the
exception that it's a virtual robot in a sim world rather than a physical robot
in the real world.
Enumerating such goals is neither very hard nor very fascinating; it's the
how-it-works that has been the bottleneck in AGI.
Obviously, a huge number of people have worked on the robotics goals you
mention above over the last few decades. The bottleneck has been knowing how
the software should work ... not articulation of the goal itself...
Note that infants - and the human brain - do have this central capacity to
hold very general concepts - to think in terms of "go there" or "move a bit",
which are supremely general - and to understand that "go" can mean "crawl",
"run", "hop", "jump", "ride on scooter", "walk", etc.; that "obstacle" or
"something in the way" can refer to literally an infinity of differently
shaped objects, from a carpet to a human being to a tricycle; and that "move"
can mean "move any part of your body - arms, legs, etc.".
[All this fits, I suspect, if loosely, with Hawkins' ideas.]
Once you have an AGI whose brain is structured in this way - with a tree of
generality/particularity and abstractness/concreteness - so that it
understands that there are many ways of moving towards goals, then you can
teach it, or it can learn, an in-principle infinite variety of physical,
navigational, goal-seeking activities - from navigating mazes to searching
buildings to hunting and chasing other agents/animals to navigating videogame
mazes, etc. - for it will know that there are many ways to move its body
along many different kinds of paths, past many different kinds of obstacles,
to many different kinds of goals.
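[Another illustrative aside: a minimal Python sketch of what such a "tree of
generality" might look like as a data structure. The class and the particular
concepts are assumptions of my own for illustration - this is not Hawkins'
model, nor the representation used by any system discussed on this list.]

    # A toy "tree of generality": general concepts at the root, ever
    # more concrete specializations beneath them.
    class Concept:
        def __init__(self, name: str, parent: "Concept | None" = None):
            self.name = name
            self.children: list["Concept"] = []
            if parent is not None:
                parent.children.append(self)

        def realizations(self) -> list[str]:
            """Every concrete way of enacting this concept (its leaves)."""
            if not self.children:
                return [self.name]
            return [leaf for c in self.children
                    for leaf in c.realizations()]

    # "go" is supremely general; each child is one concrete way of going.
    go = Concept("go")
    for mode in ["crawl", "run", "hop", "jump", "walk"]:
        Concept(mode, parent=go)
    ride = Concept("ride", parent=go)          # an intermediate level
    for vehicle in ["scooter", "tricycle"]:
        Concept(f"ride {vehicle}", parent=ride)

    print(go.realizations())
    # ['crawl', 'run', 'hop', 'jump', 'walk', 'ride scooter', 'ride tricycle']

The point of the structure is simply that asking the general node "go" for
its realizations yields every concrete way of going the agent knows - the
many-ways-to-a-goal property described in the paragraph above.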
I thought I made much of that last paragraph clear already - but obviously
it didn't communicate - and I'm curious why not. Do try and explain what you
found confusing.
It's not that you didn't communicate these ideas ... it's just that,
frankly, these are fairly obvious ideas, and articulating them doesn't take you
very far toward creating an AGI! ;-)
Ben
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936