Hi Mike,
I have five comments.
------------
1. You seem to be using a more specific definition of AGI than I. I
don't believe that all AGI work must necessarily focus on real-world
embodiment. Don't you think it is possible to have an artificial general
intelligence (such as an AGI info-bot) that inhabits a virtual symbolic
world (such as a database); a world in which initial classification of
objects is irrelevant to the agent?
I think AGI can encompass a range of different kinds of intelligences
that inhabit not just real world environments, but also virtual
environments, language-based environments or even purely formal symbolic
environments. Some approaches might be better suited to particular
environments.
------------
2. I don't believe it is right to say that nobody is looking at
generalization. I illustrated how generalization might be achieved
automatically by a mutation operator in a GA biased towards
generalization (for all instances of a given symbol, substitute it with
a more general symbol), or how GA might be used to automatically acquire
categorizations of abstract concepts from raw sensory features.
Generalization lies at the very core of machine learning and AGI and
there are plenty of formal and informal attempts to describe it.
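To make that concrete, here is a toy sketch in Python of the kind of
mutation operator I mean (the taxonomy and the symbols are invented
purely for illustration):

    import random

    # Hypothetical is-a taxonomy: child symbol -> more general parent.
    TAXONOMY = {
        "lego_block": "building_unit",
        "rock": "building_unit",
        "building_unit": "thing",
    }

    def generalization_mutation(genome):
        """For one randomly chosen symbol, substitute every instance
        with its more general parent symbol."""
        candidates = [s for s in set(genome) if s in TAXONOMY]
        if not candidates:
            return genome  # nothing left to generalize
        target = random.choice(candidates)
        return [TAXONOMY[target] if s == target else s for s in genome]

    plan = ["place", "lego_block", "on", "lego_block"]
    print(generalization_mutation(plan))
    # -> ['place', 'building_unit', 'on', 'building_unit']

A GA biased towards an operator like this stumbles on generalizations
such as "rocks are building units too" for free.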
------------
3. It is certainly my own experience that I got into this area because I
was intrigued by the feeling that true intelligence is different from
classical logical deduction and the standard kinds of machine learning
algorithms. I suspect that most people here have felt (and still do) the
same way, and it looks like you feel that way too.
When I look at various approaches, if I focus on the similarities
instead of the differences, it strikes me that we're all attempting to
attack the same deep issue from different perspectives. When I read your
post, claiming that generalization is important, I think to myself
"yeah, that is what everybody else is saying and attempting to solve --
I even gave you several examples of how generalization could work", so I
then find myself surprised that you claim that nobody is looking at it!
I'll illustrate my point with fuzzy/uncertain logics, because you
directly attacked them in a previous post...
My own initial reaction to modified logics that support "fuzzy"
propositions was also that they didn't match my intuitions of how
intelligence works - that they're "not even looking at the same
fundamental problem". But, if - as you also say - the problem is that
formal methods can't be used until after "you've classified the real
world and predigested its irregularities into nice neat regular units",
then I realize that maybe these fuzzy approaches really do make sense:
they're trying to use the universality of logic but they're also trying
to skip over the need for "nice neat regular units" by letting the logic
natively accept "ugly messy irregular units". You might not buy this
particular reasoning and so you may need to find one of your own that
maps their objectives to your own view of the challenge of intelligence;
but I think you will find that with an open mind, you really will start
seeing that there are connections to your own ideas. That is, everybody
does have some kind of grasp on the same fundamental problem, but
they're just looking at it from different angles.
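(As an aside, the fuzzy core itself is tiny. Here is a minimal sketch
in Python - these are the textbook fuzzy connectives, not any
particular AGI system - of what it means for a logic to natively
accept "ugly messy irregular units":)

    # Degrees of truth in [0, 1]; standard min/max/complement connectives.
    def f_and(a, b): return min(a, b)
    def f_or(a, b):  return max(a, b)
    def f_not(a):    return 1.0 - a

    # An "ugly messy irregular unit": a rock is only somewhat block-like.
    is_blocklike = 0.6
    is_stackable = 0.8

    # The logic consumes the messy degrees as-is; no predigestion into
    # crisp true/false units is required first.
    print(f_and(is_blocklike, is_stackable))  # 0.6 - sort-of usable for a wall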
When you start to formulate your ideas into a coherent argument (that
doesn't use vague words like "structured" without definition), you might
then start forming your own ideas of how to approach the problem.
Hypothetically... you might reason that generalization is fundamental,
so you could (again, hypothetically) start off by experimenting with a
translation of this abstract idea into a concrete computational model
where self-modifying programs can take their own subroutines and
automatically search for generalizations of those subroutines (and maybe
you also have another process of hierarchical learning to discover "x is
a generalization of y" patterns). At that point you'll have your own AGI
system-building program, and then maybe you'll come across somebody else
who sees what you are doing, who ignores the background work that got
you there and your long term vision of where you want to go with it, but
simply claims "hey, no, intelligence isn't self-modifying subroutine
abstraction, duh! why don't you come up with a crux idea?".
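(To push the hypothetical one step further, the "generalize your own
subroutines" move could start as small as this toy sketch, where
routines are plain data and generalization abstracts arguments into
parameter slots - every name here is invented for illustration:)

    # Toy sketch: a subroutine as an expression tree (nested tuples).
    stack_bricks = ("repeat", 10, ("place", "lego_block"))

    def abstract_constants(routine):
        """Yield candidate generalizations of a routine by replacing
        each argument with a parameter slot; some learning process
        would then score the candidates and keep the useful ones."""
        for i, part in enumerate(routine):
            if i == 0:
                continue  # keep the operator itself
            if isinstance(part, tuple):
                for g in abstract_constants(part):
                    yield routine[:i] + (g,) + routine[i + 1:]
            else:
                yield routine[:i] + ("<param>",) + routine[i + 1:]

    for candidate in abstract_constants(stack_bricks):
        print(candidate)
    # ('repeat', '<param>', ('place', 'lego_block'))
    # ('repeat', 10, ('place', '<param>'))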
------------
4. If you're trying to develop your own argument, then I'd recommend
taking a look at some of the more philosophical works in the research
literature - not just in AGI but also in areas like embodied robotics,
commonsense reasoning, cognitive science, qualitative reasoning and
cognitive robotics. I personally found that writings on the symbol
grounding problem were very helpful in clarifying a lot of my own
thoughts (and in understanding how my own opinion relates to established
positions). I'm sure there's something out there that would do the same
for you, whether it be in the grounding problem (like me) or something
completely different.
------------
5. Finally, I think you should be a little more careful when you say
things like "...such human powers of generalization are still way, way
beyond current computers." If human powers of generalization weren't
still far beyond current computers, then AGI wouldn't be a
research problem. You can't use a lack of current progress to argue
against the feasibility of "crux ideas" that haven't even been properly
tried yet! It is like saying that I must be a bad cook because the raw
chicken in my fridge is currently inedible.
-Ben
Mike Tintner wrote:
Benjamin,
Thank you for another really constructive response - and I think that
I, at any rate, am really starting to get somewhere. I didn't quite
get to the nub of things with my last post. I think I can do a better
job this time & develop the argument still more fully later.
Why are those ideas not crux ideas - those schools of programming not
true AGI? You almost hit the nail on the head with:
"My point is, however, that general purpose reasoning is possible -
I think there are plenty of signs of how it might actually work."
i.e. none of those approaches actually show true "general purpose
reasoning," you only hope and believe that some new ones will in the
future (and have some good suggestions about how).
What all those schools lack, to be a bit more precise, is an explicit
"generalizing procedure" - let's call it a "real world
generalization procedure." They don't tell you directly how they are
going to generalize across domains - how, having learnt one skill,
they can move on to another. The GAs, if I've understood, didn't
generalize their skills - didn't recognize that they could adapt their
walking skills to water - their minders did. An explicit real-world
generalization procedure must tell you how the system itself is going
to recognize an unfamiliar domain as related to the familiar one(s).
How the lego construction system will recognize irregular-shaped rocks
as belonging to a larger class that includes the lego bricks. How
Ben's pet who, say, knows all about navigating neat, flat office
buildings will be able to recognize a very different, messy bomb site
or rocky hillside as nevertheless all examples of navigable terrains.
How a football playing robot will recognize other games such as rugby,
hockey etc as examples of "ball games [I may be able to play]". How in
other words the AGI system will recognize unfamiliar (and not
obviously classifiable) problems as having something in common with
familiar ones. And how those systems will have general ways of
adapting their skills/ solutions. How the lego system will adapt its
bricklaying movements to form rocklaying movements, or the soccer
player will adapt its arm and leg movements to rugby.
I think you'll find that all the schools of programming only wave at
this... they don't offer you an explicit method. I'll take a bet, for
example, that Ben G cannot provide you with even a virtual world
generalization procedure. The AGI systems/agents, it must be stressed,
have to be able to recognize *independently* that they can move on to
new domains - even though they will of course also need to seek help
to learn the rules etc, as we humans do.
And I think it's clear, if only in a very broad way, how the human
mind achieves this (which I'll expound in more detail another time) -
it has what you could call a "general activity language" - and learns
every skill *as an example of a general activity*. Every particular
skill is learned in a broad, general way in terms of concepts that can
be and are applied to all skills/ activities (as well as more
skill-specific terminology). Very general, "modular" concepts.
But such human powers of generalization are still way, way beyond
current computers. The human mind's ability to cross domains is
dependent on the ability, for example, to generalize from something
as concrete as "taking steps across a field" to something as abstract
as "taking steps to solve a problem in philosophy or formal logic".
And the reason that I classify all this as *real world* generalization
is that it cannot be achieved by logic or mathematics, which is what
all the schools you mention depend on (no?). They can't help you
classify the bricks and rocks as alike, or rugby as like football, or
a rocky bomb site as like an office floor, let alone steps across a
field as like steps in an argument. They can only be brought into play
*after* you've classified the real world and predigested its
irregularities into nice neat regular units that they can operate on.
That initial classification/ generalization requires the general skill
that is still beyond all AGIs - and actually, I think, doesn't even
have a name.
BENJAMIN:
MT:>> I think your approach here *is* representative - &, as you
indicate, the details of different approaches to AGI in this discussion
aren't that important. What is common IMO to your thinking and the
thinking of others here is that you all start by asking yourselves: what
kinds of programming will solve AGI? Because programming is what
interests you most and is your life.
Actually, that isn't necessarily accurate. I'm currently
collaborating with a cognitive scientist, and I've seen other people
here hint at drawing their own inspiration from cognitive science and
other non-programming disciplines.
I reason the problem like this:
1. I know intelligence is possible, by looking at the animal kingdom.
2. I don't believe that the animal kingdom is doing something that is
formally uncomputable (i.e., intelligence is computable).
3. I can see the things that intelligence can do, and have ideas
about how it may work.
4. I recognize that biological computing machinery is vastly
different to artificial computing machinery.
5. I assume that it is possible to build intelligence on current
artificial computing machinery (i.e., intelligence is computable on
current computers).
6. So, my goal is to translate those ideas about intelligence to the
hardware that we have available.
Programming comes into it not because we are obsessed with
programming, but because we have to make do with the computing
machinery that is available to us. We're attempting to exploit the
strengths of computing machinery (such as its ability to do fast
search and precise logical deduction) to make up for the weaknesses
of the machinery (such as the difficulty in analogizing or
associative learning). I don't believe there is only one path to
intelligence, and we must be very conscious of the platform that we
are building on.
What you have to do in order to produce a true, crux idea, I
suggest, is not just define your approach but APPLY IT TO A PROBLEM
EXAMPLE OR TWO of general intelligence - show how it might actually
work.
Well, that is what many of us are doing. We have these plausible crux
ideas, and we're now attempting to apply them to problems of general
intelligence. It takes time to build systems, and the more ambitious
the demonstration the longer it takes to build. I have my own
challenge problems in the pipeline (I have to start very small, and
have been using the commonsense problem page*), and I know most
serious groups involved in system building have their own problems too.
* http://www-formal.stanford.edu/leora/commonsense/
I've mentioned Semantic Web reasoning and General Game Playing. Even
something like the Weka toolkit could be seen as a kind of general
intelligence - you can run their machine learning algorithms on any
kind of dataset and they will discover novel patterns. I admit that
those are weak examples from an AGI perspective because they are
purely symbolic domains, but it seems that AGI comes in where those
kinds of examples end. My point is, however, that general purpose
reasoning is possible - I think there are plenty of signs of how it
might actually work.
You have to show how, for example, your GA might enable your
lego-constructing system to solve an unfamiliar problem about
building a dam of rocks in water. You must show that even though it
had only learned about regularly-shaped bricks, it could nevertheless
recognize irregularly-shaped rocks as, say, "building blocks"; and
even though it had only learned to build on solid ground, it could
nevertheless proceed to build on ground submerged in water. [I think
BTW, when you try to do this, you will find that GAs *won't* work]
Why not?
Genetic algorithms have been used in robots that learn how to move.
You can connect a GA up to a set of motors and set up the algorithm
so that movement is rewarded. Attach the motors to legs and put it on
land, and the robot will eventually learn that walking maximizes its
goals. Put the motors into fins and a tail and put it in water, and
the robot will eventually learn that swimming maximizes its goals.
Isn't this a perfect example of how GAs can problem-solve across
domains?
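In sketch form, the point is that the GA loop never changes - only
the environment that scores it does. (Everything below is a stub
invented for illustration; a real setup would use a physics simulator
or actual hardware as the fitness function.)

    import random

    GENES, POP, GENERATIONS = 20, 30, 50

    def land_environment(genome):
        """Stub fitness: pretend that alternating left/right motor
        commands move a legged robot forward. A 'water' stub would
        reward a different pattern - the GA itself stays the same."""
        return sum(1 for a, b in zip(genome, genome[1:]) if a != b)

    def evolve(environment):
        pop = [[random.choice("LR") for _ in range(GENES)]
               for _ in range(POP)]
        for _ in range(GENERATIONS):
            pop.sort(key=environment, reverse=True)
            parents = pop[:POP // 2]
            children = [[g if random.random() > 0.1 else random.choice("LR")
                         for g in random.choice(parents)]
                        for _ in range(POP - len(parents))]
            pop = parents + children
        return max(pop, key=environment)

    print("".join(evolve(land_environment)))  # tends towards LRLRLR...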
Or to address your specific (but more challenging) problem directly...
Let's say, instead, that we're using GAs to generate high-level
strategies, plans and reasoning... the GA may evolve, on land, some
wall-building strategies:
1. Start with the base
2. Put lego blocks on top of other lego blocks
3. Make sure lego blocks are stacked at an even height
4. Make sure there are no gaps
When we give the robot the goal of building a dam, it may then
take those existing strategies and evolve generalizations:
Here's one:
1. Start with the base
2. Put things on top of other things
3. Make sure things are stacked at an even height
4. Make sure there are no gaps
This could happen by a cross-over or mutation that generalizes
categories (Lego block -> Thing) -- and it may be the case that an
AGI-optimized GA would have a bias towards generalization and
specialization mutations because they are so useful in problem solving.
As for the recognition of irregular objects as building blocks,
again, I see no reason that genetic algorithms could not evolve
classification or categorization routines: the system would take
low-level features supplied by the vision processing code and evolve
classifiers of interesting objects in the vicinity.
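In toy form, that might look like this (the features, data and
encoding are all made up; real inputs would come from the vision
code and real labels from goal feedback):

    import random

    # Pretend low-level features: (hue, roundness) with labels.
    samples = [((0.08, 0.9), True), ((0.10, 0.8), True),    # balls
               ((0.55, 0.2), False), ((0.60, 0.1), False)]  # not balls

    def classify(genome, features):
        """Genome = one (low, high) interval per feature; an object
        counts as a ball if every feature falls in its interval."""
        return all(lo <= f <= hi for f, (lo, hi) in zip(features, genome))

    def accuracy(genome):
        return sum(classify(genome, f) == y for f, y in samples) / len(samples)

    pop = [[(random.random(), 1 + random.random()) for _ in range(2)]
           for _ in range(40)]
    for _ in range(100):
        pop.sort(key=accuracy, reverse=True)
        pop = pop[:20] + [[(lo + random.gauss(0, 0.05),
                            hi + random.gauss(0, 0.05))
                           for lo, hi in random.choice(pop[:20])]
                          for _ in range(20)]
    print(accuracy(pop[0]))  # selection tightens the intervals over time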
In RoboCup, this process of learning (rather than hard-coding)
categorization is of interest to many groups. The group that I share
a lab with recently abstracted their object categorization code so
that the transformation of low-level visual features to high-level
categories is performed by a semantic web reasoner. The system lets
you throw a ball of a different color or shape onto the field, and the
robot will chase the new ball after nothing more than a change to the
ball declaration in the ontology. OWL is a (fairly) general language, so it
seems reasonable to claim that it would be possible to set up another
version of the system where the declarations in the ontology itself
can be evolved with a GA (a GA searches strings in a language to find
an optimal solution, so surely it could search for OWL statements
that provide categorizations that maximize goal scoring).
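(This is not their actual OWL or reasoner code - just a Python
stand-in for the idea that "ball" is a declarative description matched
by a reasoner, and that the description is itself data a GA could
mutate and select on:)

    # Hypothetical stand-in ontology: "ball" is just a declaration.
    ontology = {"ball": {"color": "orange", "shape": "sphere"}}

    def matches(category, percept):
        """A (very) poor man's reasoner: test the declared properties."""
        return all(percept.get(k) == v
                   for k, v in ontology[category].items())

    percepts = [{"color": "orange", "shape": "sphere", "x": 3.1},
                {"color": "blue",   "shape": "sphere", "x": 0.4}]
    print([p for p in percepts if matches("ball", p)])  # chases orange

    # Retargeting the robot means editing the declaration, not the code -
    # exactly the kind of string a GA could search over.
    ontology["ball"] = {"color": "blue", "shape": "sphere"}
    print([p for p in percepts if matches("ball", p)])  # now chases blue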
You don't just have to tell me in general terms what your
programming approach can do, you have to apply it to specific true
AGI END-PROBLEMS - and invite additional tests.
I suggest you look again at any of the approaches you mention, as
formally outlined, and I suggest you will not find a single one,
that is actually applied to an end-problem, to a true test of its
AGI domain-crossing potential.
I thought I had already provided evidence that many approaches could
succeed on an "end-problem", particularly in the sections on logic
and hybrid systems.
And I think if you go through the archives here you also won't find
a single attempt in relevant discussions to do likewise. On the
contrary, end-problems are shunned like the plague.
I don't believe end-problems are "shunned like the plague" - in fact,
I think it is quite the opposite. We all have our long-term
challenge problems, but we also have pragmatic constraints - time,
resources, knowledge, algorithms - that prevent us from being overly
ambitious.
For those of us who are research students or professional
researchers, we know that in order to be taken seriously in academic
circles we must find ways of evaluating our claims: some work can
appeal to computational or mathematical arguments, but others, like
my own, are faced with the problem that there is no objective,
measurable definition of intelligence, and there's unlikely to be one
anytime soon. This means that evaluation on challenge problems
plays a crucial part in our research programs.
Why are challenge problems not discussed much on this mailing list? I
can think of some examples of people who have discussed their goals,
and when I read behind the lines of work in the area I do see that
people have their ideas about what their aiming for, so I think
you're being pessimistic if you think it isn't discussed at all.
However, I can also think of good reasons why specific end goals
aren't discussed that much...
If I were to jump up and say I'm addressing some grand problem far
beyond the current state of the art, not only is this
head-in-the-clouds dreaming, but there is a danger of coming across as a
"kook". You have to be realistic.
Similarly, if I were to discuss "toy" problems, I would be dismissed
as thinking too small (even if I have a coherent idea about why the
toy problem is a crucial first step).
If I were to present a medium-sized problem, then it would (by
necessity) have flaws and holes that would be attacked for not being
general enough. I suspect that discussing research in the context of
these challenge problems may be better received in more significant
publications like theses, books or journal articles where you have
time to lay out a coherent argument and discuss the limitations.
Furthermore, an argument could be made for having a vision, but
leaving the specific end-problem until AFTER the system building.
Exam questions are rarely handed out before the test - because if
teachers did, students would simply memorize the answers.
Similarly, any given formal measure of intelligence could easily be
gamed by an AGI builder. When you proposed a "simple mathematical
test of cog sci", many people responded by pointing out that if
you've got a clear mathematical definition of intelligence, it really
isn't hard to optimize for that specific definition. For example, I
suspect you could use something like a randomized L-system - it could
create aesthetically pleasing diagrams for the imagination3 website
that you would swear were highly original and creative (themed and
patterned in parts, but neither entirely random nor entirely
structured) if you had never seen an L-system before.
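(The rewriting core of an L-system really is only a few lines. The
rules below are the standard Koch-curve ones; drawing the resulting
string, e.g. with the turtle module, is the usual extra step:)

    # Classic L-system string rewriting (Koch-curve rules).
    rules = {"F": "F+F-F-F+F"}

    def rewrite(axiom, generations):
        s = axiom
        for _ in range(generations):
            s = "".join(rules.get(c, c) for c in s)
        return s

    print(rewrite("F", 2))
    # Reading F as "draw forward" and +/- as fixed turns yields figures
    # that are patterned but not regular; choosing randomly among
    # several rules per symbol gives the "themed and patterned in
    # parts" look described above.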
-Benjamin Johnston