Benjamin Goertzel wrote:
"Self-organizing complexity" and "computational complexity"
are quite separate technical uses of the word "complexity", though I
do think there
are subtle relationships.
As an example of a relationship between the two kinds of complexity, look at Crutchfield's work on using formal languages to model the symbolic dynamics generated by dynamical systems as they approach chaos. He shows that as the parameter values of a dynamical system approach those that induce a chaotic regime, the formal languages implicit in the symbolic-dynamics representation of the system's dynamics pass through more and more complex language classes.
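To make "symbolic dynamics" concrete, here is a minimal sketch (my own illustration, not Crutchfield's actual construction): partition the state space of the logistic map at x = 0.5 and record which side each iterate lands on, turning the continuous orbit into a symbol string. The parameter values and partition choice here are illustrative assumptions.

```python
# Illustrative sketch of symbolic dynamics (not Crutchfield's construction):
# iterate the logistic map x -> r*x*(1-x), partition the unit interval at
# x = 0.5, and record 'L' or 'R' at each step.

def symbolic_orbit(r, x0=0.4, n_transient=500, n_symbols=32):
    """Return the symbol sequence of the logistic map after transients decay."""
    x = x0
    for _ in range(n_transient):          # discard transient behaviour
        x = r * x * (1.0 - x)
    symbols = []
    for _ in range(n_symbols):
        symbols.append('L' if x < 0.5 else 'R')
        x = r * x * (1.0 - x)
    return ''.join(symbols)

# At r = 3.5 the map has settled onto a period-4 orbit, so the symbol
# sequence repeats every 4 symbols; near and beyond the onset of chaos
# (r around 3.57) the sequence becomes far less regular, and richer
# language classes are needed to describe the set of sequences produced.
periodic = symbolic_orbit(3.5)
chaotic = symbolic_orbit(3.9)
```

The interesting point is that the *set* of symbol strings a system can emit forms a formal language, and the grammar needed to describe that language grows more complex as the dynamics approach chaos.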
This is true: connections can be found between the two usages, even though they are in principle distinct.
The Crutchfield work sounds like a good illustration of the "complexity
= edge of chaos" idea.
And of course, recognizing a language in a more complex language class incurs a higher computational complexity.
So, Crutchfield's work shows a connection between self-organizing complexity and computational complexity, via the medium of formal languages and symbolic dynamics.
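The cost gap between language classes can be seen directly in standard recognition algorithms. As a hedged illustration (my own example grammars, chosen for brevity): a DFA decides membership in a regular language in O(n), while general context-free recognition via CYK on a Chomsky-normal-form grammar costs O(n^3).

```python
# Membership in a regular language: O(n) via a DFA.
def dfa_accepts(s):
    """Strings over {a, b} with an even number of a's (a regular language)."""
    state = 0                       # 0 = even number of a's seen, 1 = odd
    for ch in s:
        if ch == 'a':
            state = 1 - state
    return state == 0

# General context-free recognition: O(n^3) via CYK on a CNF grammar.
def cyk_accepts(s, rules, start='S'):
    """rules maps each nonterminal to a list of bodies; a body is either a
    1-tuple (terminal,) or a 2-tuple (B, C) of nonterminals."""
    n = len(s)
    if n == 0:
        return False
    # table[i][j] = set of nonterminals deriving the substring s[i : i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(s):
        for head, bodies in rules.items():
            if (ch,) in bodies:
                table[i][0].add(head)
    for span in range(2, n + 1):                # substring length
        for i in range(n - span + 1):           # start position
            for k in range(1, span):            # split point
                for head, bodies in rules.items():
                    for body in bodies:
                        if len(body) == 2:
                            B, C = body
                            if B in table[i][k - 1] and C in table[i + k][span - k - 1]:
                                table[i][span - 1].add(head)
    return start in table[0][n - 1]

# CNF grammar for the non-regular language a^n b^n (n >= 1):
#   S -> A T | A B,  T -> S B,  A -> a,  B -> b
cnf = {'S': [('A', 'T'), ('A', 'B')],
       'T': [('S', 'B')],
       'A': [('a',)],
       'B': [('b',)]}
```

The triple nested loop over span, start position, and split point is where the O(n^3) comes from: climbing from regular to context-free buys expressive power at a real computational price.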
As another, more pertinent example, the Novamente design seeks to avoid the combinatorial explosions implicit in each of its individual AI learning/reasoning components by integrating those components together in an appropriate way. This integration, via its impact on the overall system dynamics, leads to a certain degree of complexity in the self-organizing-systems sense.
Indeed: that being one of the ways that complexity creeps in. All AI
systems have to allow for the fact that some mechanisms have to be told
to time out and submit their best guess, for example, and when that
happens the overall behavior of the system becomes a good deal more
subtly related to its design spec.
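The timeout-and-best-guess pattern can be sketched in a few lines. This is a generic anytime-algorithm illustration under my own assumed names and budget, not anything specific to Novamente:

```python
# Minimal sketch of the "time out and submit your best guess" pattern:
# an anytime component improves its answer iteratively and, when its
# budget expires, returns whatever it has so far -- so the system's
# behavior depends on timing, not only on the algorithm's specification.
import time

def anytime_minimize(f, candidates, budget_seconds=0.01):
    """Scan candidates for a minimizer of f, but stop when the time
    budget runs out and return the best guess found so far."""
    deadline = time.monotonic() + budget_seconds
    best_x, best_val = None, float('inf')
    for x in candidates:
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
        if time.monotonic() >= deadline:
            break                     # out of time: submit the best guess
    return best_x, best_val
```

With a generous budget this finds the true minimum; under time pressure it may return a worse answer, which is exactly why the overall behavior becomes only subtly related to the design spec.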
Richard Loosemore
-- Ben G
On Dec 11, 2007 10:09 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Mike Tintner wrote:
Essentially, Richard & others are replaying the same old problems of
computational explosions - see "computational complexity" in this
history of cog. sci. review - no?
No: this is a misunderstanding of "complexity" unfortunately (cf the
footnote on p1 of my AGIRI paper): computational complexity refers to
how computations scale up, which is not at all the same as the
"complexity" issue, which is about whether or not a particular system
can be explained.
To see the difference, imagine an algorithm that was good enough to be
intelligent, but scaling it up to the size necessary for human-level
intelligence would require a computer the size of a galaxy. Nothing
wrong with the algorithm, and maybe with a quantum computer it would
actually work. This algorithm would be suffering from a computational
complexity problem.
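A back-of-the-envelope calculation makes the scaling point vivid (my numbers, purely illustrative): an algorithm that is perfectly correct but must enumerate all 2^n states is crippled by scaling alone.

```python
# An exhaustive search over n binary variables visits 2^n states. The
# algorithm may be perfectly correct, yet at n = 300 the state count
# already dwarfs the roughly 10^80 atoms in the observable universe --
# a galaxy-sized computer would not help.

def brute_force_states(n):
    """Number of states an exhaustive search over n binary variables visits."""
    return 2 ** n

small = brute_force_states(20)    # about a million: trivially feasible
huge = brute_force_states(300)    # roughly 10^90: astronomically infeasible
```

Nothing about the algorithm is wrong; only its resource scaling is, which is the defining mark of a computational complexity problem.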
By contrast, there might be proposed algorithms for implementing a
human-level intelligence which will never work, no matter how much they
are scaled up (indeed, they may actually deteriorate as they are scaled
up). If this was happening because the designers were not appreciating
that they needed to make subtle and completely non-obvious changes in
the algorithm, to get its high-level behavior to be what they wanted it
to be, and if this were because intelligence requires
complexity-generating processes inside the system, then this would be a
complex systems problem.
Two completely different issues.
Richard Loosemore
-----
This list is sponsored by AGIRI: http://www.agiri.org/email