Derek Zahn wrote:
Richard,
If I can make a guess at where Jim is coming from:
Clearly, "intelligent systems" CAN be produced. Assuming we can
define "intelligent system" well enough to recognize it, we can
generate systems at random until one is found. That is impractical,
however. So, we can look at the problem as one of search
optimization. Evolution produced intelligent systems through a
biased search, for example, so it is at least possible to improve
search over completely random generate and test.
What other ways can be used to speed up search? Jim is suggesting
some methods that he believes may help. If I understand what you've
said about your approach, you have some very different methods than
what he is proposing to focus the search. I do not understand
exactly what Jim is proposing; presumably he is aiming to use his SAT
solver to guide the search toward areas that contain partial
solutions or promising partial models of some sort.
It seems to me very difficult to define the goal formally, very
difficult to develop a meta system in which a sufficiently broad
class of candidate systems can be expressed, and very difficult to
describe the "splices" or "reductions" or partial models in such a
way to smooth the fitness landscape and thus speed up search. So I
don't know how practical such a plan is.
But (again assuming I understand Jim's approach) it avoids your
complex system arguments because it is not making any effort to
predict global behavior from the low-level system components, it's
just searching through possibilities.
I hear what you say here, but the crucial issue is defining this thing
called intelligence. And, in the end, that is where the complex systems
argument makes itself felt (so this is not really avoiding the complex
systems problem, but just hiding it).
Let me explain these thoughts. If we really could only "define
'intelligent system' well enough to recognize it" then the generate and
test you are talking about would be extremely blind ... we would not
make any specific design decisions, but generate completely random
systems and say "Is this one intelligent?" each time we built one.
Clearly, that would be ridiculously slow (as you point out). Even the
evolutionary biased search - in which you build simple systems and
gradually elaborate them as you test them in combat - would still take a
few billion years and a planet-sized computer.
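(To make it concrete just how blind that procedure is, here is a toy
sketch in Python. The "is_intelligent" predicate is of course a
stand-in - a hypothetical recognition test over bit-string "systems" -
since defining the real test is the whole problem.)

```python
import random

def is_intelligent(system):
    # Hypothetical recognition test: a stand-in predicate over a
    # bit-string "system".  In reality, defining this test IS the
    # hard problem; here it just asks for more than 95 ones.
    return sum(system) > 95

def blind_generate_and_test(n_bits=100, max_tries=10_000, seed=0):
    # Pure generate and test: build completely random systems and ask
    # "Is this one intelligent?" each time, with nothing at all
    # guiding the search from one try to the next.
    rng = random.Random(seed)
    for tries in range(1, max_tries + 1):
        candidate = [rng.randint(0, 1) for _ in range(n_bits)]
        if is_intelligent(candidate):
            return candidate, tries
    return None, max_tries  # gave up: blind search almost never succeeds

found, tries = blind_generate_and_test()
print(found)  # None - ten thousand random tries, not one passes the test
```

(And this toy space has only 2^100 candidates; with anything like a
realistic design space the odds are astronomically worse, which is
the point of the billion-year estimate.)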
But then you introduce the idea of speeding up the search in some way.
Ahhh... now there's the rub. To make the search more efficient, you
have to have some idea of an error function: you look at the
intelligence of the current best try, and you feed that into a function
that suggests what kind of changes in the low-level mechanisms will give
rise to a *beneficial* change in the overall intelligence (an
improvement, that is). To do any better than random, you really must
have an error function... this is almost the very definition of doing a
search that is not random, no? You have to have some idea of how a change in
design will cause a change in high level behavior, and that is the error
function.
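(As a toy sketch of the point, in Python: a hill climber - with an
invented "intelligence_score" function standing in for the error
function - only works because it can ask, for each local change,
whether the global score improved.)

```python
import random

def intelligence_score(system):
    # Hypothetical error/fitness function.  Its existence is exactly
    # the assumption at issue: that the effect of a local change on
    # the system's global "intelligence" can be measured or predicted.
    return sum(system)

def hill_climb(n_bits=100, steps=5_000, seed=0):
    # Non-random search: propose a local change (flip one bit), and
    # keep it only if the error function says the global score rose.
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        i = rng.randrange(n_bits)
        candidate = list(current)
        candidate[i] ^= 1  # local change to one low-level component
        if intelligence_score(candidate) > intelligence_score(current):
            current = candidate
    return current

best = hill_climb()
print(intelligence_score(best))  # climbs to (or very near) the maximum, 100
```

(Remove the intelligence_score comparison and the accept/reject step
has nothing to consult: the search collapses straight back to the
blind case above.)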
If the system you are talking about is not complex, then, no problem:
an error function is findable, at least in principle. But the very
definition of a complex system is that such an error function cannot
(absolutely cannot) be found. You cannot say, "I need to improve the
overall intelligence, *thus*, and THEREFORE I will make this change in
the local mechanisms, because I have reason to believe that such a
global change will be effected by this local change". That is the one
statement about a complex system that is verboten.
So it is that one quiet little statement about finding better ways to do
the search that brings down the whole argument. If intelligent systems
can be built without making them complex, all well and good. But if
that is not possible (and the evidence indicates that it is not), then
you must be very careful not to set up a research methodology in which
you make the assumption that you are going to adjust the low level
mechanisms in a way that will 'improve' the global performance in a
desired way. If anyone does include that implicit assumption in their
methodology, they are unknowingly inserting a "And Then A Miracle
Happens Here" step.
I shouold quickly add one comment about that last paragraph. AI
researchers clearly do do exactly what i have just said is impossible!
They frequently look at the poor performance of an AI system and say "I
think a change in this mechanism will improve things" ... and then, sure
enough, they do get an improvement. So does that mean that my argument
about a complex systems problem is just wrong? No: I have clearly said
(though many people have missed this point I think) that what AI
researchers have been doing is implicitly using their understanding of
human psychology (of their own minds, for the most part) to get ideas
for how to design AI systems, and that quiet, haphazard borrowing from
the human design is what has enabled AI to get as far as it has.
It is because AI researchers have been using a really clumsy,
unsystematic version of the research methodology that I propose that
they have made any progress at all. But what I believe is that this
clumsy version of the methodology has gone as far as it can, so what I
am effectively saying, with all my talk about a complex systems problem,
is that people need to understand why their work has sometimes
succeeded, and stop kidding themselves (as many, many AI researchers do)
that they are actually designing AI systems without regard to the human
design.
Okay, enough for now.
Richard Loosemore
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now