> I think your approach here *is* representative - &, as you indicate,
> the details of the different approaches to AGI in this discussion
> aren't that important. What is common IMO to your thinking and the
> thinking of others here is that you all start by asking yourselves:
> what kinds of programming will solve AGI? Because programming is what
> interests you most and is your life.
Actually, that isn't necessarily accurate. I'm currently collaborating
with a cognitive scientist, and I've seen other people here hint at
drawing their own inspiration from cognitive science and other
non-programming disciplines.
I reason about the problem like this:
1. I know intelligence is possible, from looking at the animal kingdom.
2. I don't believe that the animal kingdom is doing something that is
formally uncomputable (i.e., intelligence is computable).
3. I can see the things that intelligence can do, and I have ideas about
how it may work.
4. I recognize that biological computing machinery is vastly different
from artificial computing machinery.
5. I assume that it is possible to build intelligence on current
artificial computing machinery (i.e., intelligence is computable on
current computers).
6. So, my goal is to translate those ideas about intelligence to the
hardware that we have available.
Programming comes into it not because we are obsessed with programming,
but because we have to make do with the computing machinery that is
available to us. We're attempting to exploit the strengths of computing
machinery (such as its ability to do fast search and precise logical
deduction) to make up for its weaknesses (such as its difficulty with
analogizing and associative learning). I don't believe there is only one
path to intelligence, so we must be very conscious of the platform that
we are building on.
> What you have to do in order to produce a true, crux idea, I suggest,
> is not just define your approach but APPLY IT TO A PROBLEM EXAMPLE OR
> TWO of general intelligence - show how it might actually work.
Well, that is what many of us are doing. We have these plausible crux
ideas, and we're now attempting to apply them to problems of general
intelligence. It takes time to build systems, and the more ambitious
the demonstration, the longer it takes to build. I have my own challenge
problems in the pipeline (I have to start very small, and have been
using the commonsense problem page*), and I know most serious groups
involved in system building have their own problems too.
* http://www-formal.stanford.edu/leora/commonsense/
I've mentioned Semantic Web reasoning and General Game Playing. Even
something like the Weka toolkit could be seen as a kind of general
intelligence - you can run its machine learning algorithms on any kind
of dataset and they will discover novel patterns. I admit that those are
weak examples from an AGI perspective because they are purely symbolic
domains, but it seems that AGI comes in where those kinds of examples
end. My point is, however, that general purpose reasoning is possible -
I think there are plenty of signs of how it might actually work.
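To make that point concrete, here is a minimal sketch (in Python, with
scikit-learn standing in for Weka; the two dataset loaders are just
examples of "any kind of dataset"). The same learning algorithm,
untouched, runs on entirely different datasets and finds the patterns
in each:

    # A sketch of domain-general pattern discovery, in the spirit of
    # Weka. The two datasets are arbitrary stand-ins.
    from sklearn.datasets import load_iris, load_wine
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    for load in (load_iris, load_wine):
        data = load()  # any tabular dataset with features and labels
        learner = DecisionTreeClassifier()  # the SAME algorithm
        scores = cross_val_score(learner, data.data, data.target, cv=5)
        print(load.__name__, "accuracy: %.2f" % scores.mean())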
> You have to show how, for example, your GA might enable your
> lego-constructing system to solve an unfamiliar problem about building
> a dam of rocks in water. You must show that even though it had only
> learned about regularly-shaped bricks, it could nevertheless recognize
> irregularly-shaped rocks as, say, "building blocks"; and even though
> it had only learned to build on solid ground, it could nevertheless
> proceed to build on ground submerged in water. [I think BTW, when you
> try to do this, you will find that GA's *won't* work]
Why not?
Genetic algorithms have been used in robots that learn how to move. You
can connect a GA up to a set of motors and set up the algorithm so that
movement is rewarded. Attach the motors to legs and put it on land, and
the robot will eventually learn that walking maximizes its goals. Put
the motors into fins and a tail and put it in water, and the robot will
eventually learn that swimming maximizes its goals. Isn't this a perfect
example of how GAs can problem-solve across domains?
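For illustration only, the core of that loop is tiny. In the sketch
below, simulate() is a hypothetical stand-in for a physics simulator
that returns distance moved; attach the same loop to legs or to fins,
and only the fitness signal couples the GA to the body and the
environment:

    import random

    def simulate(genome):
        # Hypothetical: run the motor parameters in a simulator and
        # return distance travelled. A dummy fitness surface here.
        return -sum((g - 0.7) ** 2 for g in genome)

    def mutate(genome, rate=0.1):
        return [g + random.gauss(0, rate) for g in genome]

    population = [[random.random() for _ in range(8)] for _ in range(50)]
    for generation in range(100):
        population.sort(key=simulate, reverse=True)  # reward movement
        parents = population[:10]
        population = [mutate(random.choice(parents)) for _ in range(50)]

    print("best fitness:", simulate(max(population, key=simulate)))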
Or to address your specific (but more challenging) problem directly...
Let's say, instead, that we're using GAs to generate high-level
strategies, plans and reasoning... the GA may evolve, on land, some
wall-building strategies:
1. Start with the base
2. Put lego blocks on top of other lego blocks
3. Make sure lego blocks are stacked at an even height
4. Make sure there are no gaps
When we give the robot the goal of building a dam, it may then take
those existing strategies and evolve generalizations:
Here's one:
1. Start with the base
2. Put things on top of other things
3. Make sure things are stacked at an even height
4. Make sure there are no gaps
This could happen by a cross-over or mutation that generalizes
categories (Lego block -> Thing) -- and it may be the case that an
AGI-optimized GA would have a bias towards generalization and
specialization mutations because they are so useful in problem solving.
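To sketch what such a mutation might look like (the names and the
category hierarchy below are invented for illustration), a
generalization mutation could simply climb a category hierarchy within
one step of a strategy:

    import random

    # Hypothetical category hierarchy: child -> parent.
    HIERARCHY = {"lego block": "building element",
                 "building element": "thing"}

    # A strategy is a list of steps that mention categories.
    strategy = ["start with the base",
                "put lego block on top of other lego block",
                "stack lego block at an even height"]

    def generalize_mutation(strategy):
        """Replace a category in one step with its parent category."""
        i = random.randrange(len(strategy))
        for child, parent in HIERARCHY.items():
            if child in strategy[i]:
                mutated = list(strategy)
                mutated[i] = strategy[i].replace(child, parent)
                return mutated
        return strategy  # the chosen step had nothing to generalize

    print(generalize_mutation(strategy))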
As for the recognition of irregular objects as building blocks, again, I
see no reason that genetic algorithms could not evolve classification or
categorization routines: the system would take low-level features
supplied by the vision processing code and evolve classifiers that pick
out interesting objects in the vicinity.
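Purely as a sketch, with invented features and a toy reward signal: if
the genome is a weight vector plus a threshold over the low-level
features, selection alone is enough to evolve a crude "building block"
classifier:

    import random

    # Invented low-level features: [roundness, bulk, graspability],
    # labelled by a stand-in for task feedback (did treating this
    # object as a building block help?).
    examples = [([0.2, 0.9, 0.8], True),   # irregular rock: usable
                ([0.9, 0.1, 0.2], False)]  # beach ball: not a block

    def classify(genome, features):
        weights, threshold = genome
        return sum(w * f for w, f in zip(weights, features)) > threshold

    def fitness(genome):
        return sum(classify(genome, f) == label for f, label in examples)

    population = [([random.uniform(-1, 1) for _ in range(3)],
                   random.uniform(-1, 1)) for _ in range(30)]
    for _ in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        population = [([w + random.gauss(0, 0.1) for w in ws],
                       t + random.gauss(0, 0.1))
                      for ws, t in (random.choice(survivors)
                                    for _ in range(30))]

    print("best fitness:", fitness(max(population, key=fitness)))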
In RoboCup, this process of learning (rather than hard-coding)
categorization is of interest to many groups. The group that I share a
lab with recently abstracted their object categorization code so that
the transformation of low-level visual features to high-level categories
is performed by a semantic web reasoner. The system lets you throw a
differently colored or shaped ball onto the field, and the robot will
chase the new ball after simply changing the ball declaration in the
ontology.
OWL is a (fairly) general language, so it seems reasonable to claim
that it would be possible to set up another version of the system in
which the declarations in the ontology itself are evolved with a GA (a
GA searches the strings of a language to find an optimal solution, so
surely it could search for OWL statements that provide categorizations
that maximize goal scoring).
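As a toy version of that idea (every identifier below is hypothetical),
the genome could be the pair of property values in the ball
declaration, with fitness supplied by a goal-scoring simulation; the
winning genome would then be written back into the ontology as the new
ball declaration:

    import random

    COLOURS = ["orange", "purple", "green", "white"]
    SHAPES = ["round", "oval", "cube"]

    def goals_scored(colour, shape):
        # Hypothetical stand-in for running the robot in simulation
        # with "Ball = thing with this colour and shape" declared in
        # the ontology, and counting the goals it scores.
        return (colour == "purple") + (shape == "round")

    def mutate(genome):
        colour, shape = genome
        if random.random() < 0.5:
            colour = random.choice(COLOURS)
        else:
            shape = random.choice(SHAPES)
        return (colour, shape)

    population = [(random.choice(COLOURS), random.choice(SHAPES))
                  for _ in range(20)]
    for _ in range(30):
        population.sort(key=lambda g: goals_scored(*g), reverse=True)
        population = [mutate(random.choice(population[:5]))
                      for _ in range(20)]

    best = max(population, key=lambda g: goals_scored(*g))
    print("evolved ball declaration:", best)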
> You don't just have to tell me in general terms what your programming
> approach can do, you have to apply it to specific true AGI
> END-PROBLEMS - and invite additional tests.
> I suggest you look again at any of the approaches you mention, as
> formally outlined, and I suggest you will not find a single one that
> is actually applied to an end-problem, to a true test of its AGI
> domain-crossing potential.
I thought I had already provided evidence that many approaches could
succeed on an "end-problem", particularly in the sections on logic and
hybrid systems.
> And I think if you go through the archives here you also won't find a
> single attempt in relevant discussions to do likewise. On the
> contrary, end-problems are shunned like the plague.
I don't believe end-problems are "shunned like the plague" - in fact, I
think it is the opposite case. We all have our long-term challenge
problems, but we also have pragmatic constraints - time, resources,
knowledge, algorithms - that prevent us from being overly ambitious.
For those of us who are research students or professional researchers,
we know that in order to be taken seriously in academic circles we must
find ways of evaluating our claims: some work can appeal to
computational or mathematical arguments, but other work, like my own,
faces the problem that there is no objective, measurable definition of
intelligence, and there's unlikely to be one anytime soon. This means
that evaluation on challenge problems plays a crucial part in our
research programs.
Why are challenge problems not discussed much on this mailing list? I
can think of some examples of people who have discussed their goals,
and when I read between the lines of work in the area I do see that
people have their ideas about what they're aiming for, so I think
you're being pessimistic if you think it isn't discussed at all.
However, I can also think of good reasons why specific end goals aren't
discussed that much...
If I were to jump up and say I'm addressing some grand problem far
beyond the current state of the art, not only is this
head-in-the-clouds dreaming, there is also a danger of coming across as
a "kook". You have to be realistic.
Similarly, if I were to discuss "toy" problems, I would be dismissed as
thinking too small (even if I have a coherent idea about why the toy
problem is a crucial first step).
If I were to present a medium-sized problem, then it will (by necessity)
have flaws and holes that will be attacked for not being general enough.
I suspect that discussing research in the context of these challenge
problems may be better received in more substantial publications -
theses, books or journal articles - where you have time to lay out a
coherent argument and discuss the limitations.
Furthermore, an argument could be made for having a vision, but leaving
the specific end-problem until AFTER the system building. Exam
questions are rarely handed out before the test, because if teachers
did so, students would only memorize the answers. Similarly, any given
formal measure of intelligence could easily be gamed by an AGI builder.
When you proposed a "simple mathematical test of cog sci", many people
responded by pointing out that if you've got a clear mathematical
definition of intelligence, it really isn't hard to optimize for that
specific definition. For example, I suspect you could use something
like a randomized L-system - it could create aesthetically pleasing
diagrams for the imagination3 website that you would swear were highly
original and creative (themed and patterned in parts, but neither
entirely random nor entirely structured) if you had never seen an
L-system before.
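For anyone who hasn't seen one, a randomized L-system really is only a
few lines of code. The rewrite rules below are invented for
illustration; the output string is meant to be interpreted as
turtle-graphics drawing commands:

    import random

    # A stochastic L-system: each symbol rewrites to one of several
    # alternatives chosen at random, so every run yields a different
    # but structurally related pattern.
    RULES = {"F": ["F[+F]F[-F]F", "F[+F]F", "F[-F]F"]}

    def expand(axiom, depth):
        if depth == 0:
            return axiom
        out = []
        for symbol in axiom:
            choices = RULES.get(symbol)
            out.append(random.choice(choices) if choices else symbol)
        return expand("".join(out), depth - 1)

    # "F" = draw forward, "+"/"-" = turn, "[" / "]" = push/pop state,
    # as in standard turtle-graphics readings of L-systems.
    print(expand("F", 3))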
-Benjamin Johnston