Vladimir Nesov wrote:
> Richard,
> These last two messages with replies to Mark's questions state your
> position more clearly than much of your prior writing (although I
> didn't keep track of later discussions too closely). I think it's
> important to show all the controversial aspects in the same example:
> relatively simple rules, use cases where an aspect of global behavior
> can be modeled by a simple theory (two-body problem, F-14, most of the
> planets in the short term, gliders in GoL), and use cases for the same
> global system where there is no simple model (n-body problem, Pluto,
> more general initial states in GoL).
Yes, I am coming to the view that this stuff needs to be explained with
many examples, if the message has any hope of getting across.
More generally, I find it incredibly strange that these complex system
ideas cause *so* much consternation. Back in the early 90s I read all
about the early history of complex systems research, and it was
noticeable that these ideas provoked some extremely strong reactions.
People didn't just disagree with the ideas, they were beside themselves
with fury. (I am not saying that Mark is doing that, btw; I'm talking
about the broader reaction.)
The funny thing is that I thought all of that was over, and that people
now understood what the deal was with complex systems, but what I am
finding is that I am fighting exactly the same battles as the earlier
folks did, back in the 80s and 90s.
> But all the same, problems that you describe as complex are just
> numerical calculation problems. In the case of symbol interaction,
> the initial conditions (rules) are unknown and the results are
> discontinuous, which requires much methodical enumeration to find the
> rules that give the required global behavior; no clever tricks work.
I am not quite sure what you mean by this, but my general answer is that
it really is not a matter of "numerical calculation problems".
I think the basic idea that nobody gets (because everyone just dances
around the issue) is that if you had a God's-eye perspective, you would
be able to plot a distribution graph showing the amount of difficulty
that humans have in understanding various kinds of systems (natural and
artificial).
Looking at that distribution, you would see that most of nature's
systems just happen to be clustered in a hump quite close to the origin
(i.e. they are low-difficulty), whereas most of the artificial systems
in the universe are way, way up at the high end of the scale, in a
second 'hump'. What this means is that there are two qualitatively
different types of systems in the universe: low-difficulty ones, and a
second group of extremely high-difficulty ones.
But the problem is that people assume that this graph does not have two
humps, but is in fact continuous, and that as time goes on our ability
to understand systems further up the graph becomes greater. According
to that idea science is a relentless march into higher and higher
regions of this "difficulty-space", so if you came back in a hundred
years' time you would find that people are routinely deciphering systems
that today require superhuman intellect... and then in a thousand years'
time our elementary school kids will learn String Theory (if it survives
that long!), and so on.
This view of the relentless march of the human intellect is so strong
that I think it comes as a shock to people to be told that things might
be different, and that it might be trivially easy to create systems of a
certain sort which have a difficulty-level that is so far off the scale
that we do not know where to start analysing them, and we may *never*
know how to analyze them.
But this is exactly what the complex systems idea is about. It really
is almost trivial to build an artificial system in which the overall,
global behavior of the system is interesting and regular and "lawful",
but where we have no idea how to prove that this behavior should emerge
from the local rules.
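To make that concrete, here is a minimal sketch (my own illustration, not anything from the earlier messages) of the Game of Life glider mentioned above. The rules are purely local -- each cell consults only its eight neighbors -- yet the glider exhibits lawful global behavior, translating itself one cell diagonally every four generations, and nothing in the local rules makes it obvious that this should happen:

```python
from collections import Counter

def step(cells):
    """Advance one generation; cells is a set of live (x, y) coordinates."""
    # Count how many live neighbors every cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Conway's rules: birth on exactly 3 neighbors, survival on 2 or 3.
    return {pos for pos, n in counts.items()
            if n == 3 or (n == 2 and pos in cells)}

# The standard glider pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider reappears, shifted by (1, 1).
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The "lawful" behavior (a self-propagating pattern with a fixed period and velocity) is trivially observable by running the simulation, but it is not derived from the update rule; it was discovered empirically, which is exactly the gap between local rules and global regularity at issue here.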
In that context, it would be a complete misunderstanding to say that the
"problems that you describe as complex are just numerical calculation
problems." If all you mean is that we can simulate them if we want to
understand them (the way we simulate the weather in order to predict
it), then this is true, but in the context of the problem we have - the
problem of building intelligent systems - this fact is of no practical use.
Richard Loosemore
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/