Abram Demski wrote:
To be honest, I am not completely satisfied with my conclusion on the
post you refer to. I'm not so sure now that the fundamental split
between logical/messy methods should occur at the line between perfect
& approximate methods. This is one type of messiness, but one only. I
think you are referring to a related but different messiness: not
knowing what kind of environment your AI is dealing with. Since we
don't know which kinds of models will fit best with the world, we
should (1) trust our intuitions to some extent, and (2) try things and
see how well they work. This is as Loosemore suggests.
On the other hand, I do not want to agree with Loosemore too strongly.
Mathematics and mathematical proof is a very important tool, and I
feel like he wants to reject it. His image of an AGI seems to be a
system built up out of totally dumb pieces, with intelligence emerging
unexpectedly. Mine is a system built out of somewhat smart pieces,
cooperating to build somewhat smarter pieces, and so on. Each piece
has provable smarts.
Okay, let me try to make some kind of reply to your comments here and in
your original blog post.
It is very important to understand that the paper I wrote was about the
methodology of AGI research, not about specific theories/models/systems
within AGI. It is about the way that we come up with ideas for systems
and the way that we explore those systems, not about the content of
anyone's particular ideas.
So, in the above text you refer to a split between logical and messy
methods - now, it may well be that my paper would lead someone to
embrace 'messy' methods and reject 'logical' ones, but that is a side
effect of the argument, not the argument itself. It does happen to be
the case that I believe that logic-based methods are mistaken, but I
could be wrong about that, and it could turn out that the best way to
build an AGI is with a completely logic-based AGI, along with just one
small mechanism that was Complex. That would be perfectly consistent
with my argument (though a little surprising, for other reasons).
Similarly, you suggest that I "have an image of an AGI that is built out
of totally dumb pieces, with intelligence emerging unexpectedly." Some
people have suggested that that is my view of AGI, but whether or not
those people are correct in saying that [aside: they are not!], that
does not relate to the argument I presented, because it is all about
specific AGI design preferences, whereas the thing that I have called
the "Complex Systems Problem" is fairly neutral on most design decisions.
In your original blog post, also, you mention the way that AGI planning
mechanisms can be built in such a way that they contain a logical
substrate, but with heuristics that force the systems to make
'sub-optimal' choices. This is a specific instance of a more general
design pattern: logical engines that have 'inference control
mechanisms' riding on their backs, preventing them from deducing
everything in the universe whilst trying to come to a simple decision.
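For concreteness, the general pattern described above (a pure logical substrate with an inference control mechanism riding on its back) could be sketched roughly as follows. This is a hypothetical illustration only, not a model of any particular system; all the names (`forward_chain`, `score`, `budget`) are invented for the example:

```python
def forward_chain(facts, rules, score, budget):
    """A pure forward-chaining logical substrate, reined in by a heuristic.

    The heuristic `score` picks which rule firing to pursue, and `budget`
    stops inference early, so the engine never deduces everything in the
    universe while coming to a simple decision.
    """
    facts = set(facts)
    for _ in range(budget):
        # Everything the pure logical substrate would be entitled to derive.
        candidates = [(premises, conclusion)
                      for premises, conclusion in rules
                      if premises <= facts and conclusion not in facts]
        if not candidates:
            break
        # The 'messy' part: a heuristic chooses one firing, making the
        # system deliberately sub-optimal as a theorem prover.
        premises, conclusion = max(candidates, key=score)
        facts.add(conclusion)
    return facts

# Rules as (premises, conclusion) pairs; the toy heuristic prefers rules
# with fewer premises.
rules = [(frozenset({"a"}), "b"),
         (frozenset({"a", "b"}), "c"),
         (frozenset({"c"}), "d")]
derived = forward_chain({"a"}, rules, score=lambda r: -len(r[0]), budget=2)
print(sorted(derived))  # prints ['a', 'b', 'c']: the budget stops it short of 'd'
```

With a larger budget the same engine reaches the full deductive closure; the point is that the 'sub-optimality' lives entirely in the control mechanism, not in the logic.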
The problem is that you have portrayed the distinction between 'pure'
logical mechanisms and 'messy' systems that have heuristics riding on
their backs, as equivalent to a distinction that you thought I was
making between non-complex and complex AGI systems. I hope you can see
now that this is not what I was trying to argue. My target would be the
methodologies that people use to decide such questions as which
heuristics to use in a planning mechanism, whether the representation
used by the planning mechanism can co-exist with the learning
mechanisms, and so on.
Now, having said all of that, what does the argument actually say, and
does it make *any* claims at all about what sort of content to put in an
AGI design?
The argument says that IF intelligent systems belong to the 'complex
systems' class, THEN it would be a dreadful mistake to use a certain
type of scientific or engineering approach to build intelligent systems.
I tried to capture this with an analogy at one point: if you were John
Horton Conway, sitting down on Day 1 of your project to find a cellular
automaton with certain global properties, you would not be able to use
any standard scientific, engineering or mathematical tools to discover
the rules that should go into your system - you would, in fact, have no
option but to try rules at random until you found rules that gave the
global behavior that you desired.
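The Day-1 predicament can be caricatured in a few lines of code. This is only a toy (a one-dimensional binary cellular automaton, and a made-up stand-in for the 'desired global behavior', both invented purely for illustration), but it shows the shape of the search: nothing in a rule table tells you what global behavior it will produce, so you sample rules at random and just run them:

```python
import random

def random_rule():
    # A random rule table mapping each of the 8 possible three-cell
    # neighborhoods to a new cell state.
    return {i: random.randint(0, 1) for i in range(8)}

def step(cells, rule):
    # One synchronous update of a ring of cells.
    n = len(cells)
    return [rule[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

def has_desired_behavior(rule, steps=50, width=31):
    # Stand-in for the global property we want: starting from a single
    # live cell, the pattern neither dies out nor saturates the ring.
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        cells = step(cells, rule)
    return 0 < sum(cells) < width

# The only available strategy: generate-and-test, blindly.
random.seed(0)
rule, tries = None, 0
while rule is None and tries < 500:
    tries += 1
    candidate = random_rule()
    if has_desired_behavior(candidate):
        rule = candidate
print(f"stumbled on a qualifying rule after {tries} random tries")
```

The loop has no gradient to follow and no theory to consult; whether it succeeds quickly or slowly depends entirely on how common the desired global behavior happens to be in rule space.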
My point was that a modified form of that same problem (that inability
to use our scientific intuitions to just go from a desired global
behavior to the mechanisms that will generate that global behavior)
could apply to the question of building an AGI. I do not suggest that
the problem will manifest itself in exactly the same way (it is not that
we would make zero progress with current techniques, and have to use
completely random trial and error, like Conway had to), but what I do
claim is that there could be *enough* complexity to mean that our
current approaches will simply go round and round in circles, never
getting beyond a certain level of intelligence.
The details of the argument are there to be argued about, but it is
crucial not to misunderstand it and think that my paper was some kind of
attack on particular AGI designs. It is an attack on how we go about
choosing AGI designs, and how we develop them.
In the end, my conclusion was that the only way to make progress was to
build a system that stays as close to the human design as possible. The
reasoning behind that claim was pretty simple: if you have a working
example in front of you, then that working example is the result of some
hard work that somebody else (evolution) did to explore all the possible
complex mechanisms, so rather than spend the next billion years looking
for another set of algorithms that give the desired global behavior, use
the design that is right there in front of you.
Finally, I should mention one general misunderstanding about
mathematics. This argument has a superficial similarity to Gödel's
theorem, but you should not be deceived by that. Gödel was talking
about formal deductive systems, and the fact that there are unreachable
truths within such systems. My argument is about the feasibility of
scientific discovery, when applied to systems of different sorts. These
are two very different domains. You might think that they could be
mapped onto one another by using the work of Chaitin, Kolmogorov et al.
to formalize the scientific process (specifying theories as algorithms,
for example), but this is really not possible. Quite apart from
anything else, if you try to pretend that scientific discovery can be
formalized, then you immediately pre-empt the main question that
underlies all of this: if scientific discovery is just a formal
(logico-deductive) process, then thinking is a formal process, and then
you have built in the assumption that intelligence is NOT a complex
system. That would clearly be nonsensical.
Richard Loosemore
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/