Mark Waser wrote:
> I'm afraid that I'm losing track of your major point but . . . .
>
> First off, you are violating your own definition of complexity . . . .
>
> You said --> A system is deemed "complex" if the smallest size of a theory that will explain that system is so large that, for today's human minds, the discovery of that theory is simply not practical. Notice that this definition does not imply that there are any such systems in the real world, it just says that *if* the theory size were ever to go off the scale *then* the system would (by definition) be complex.
>
> By this definition, gravity is not complex. Yet, below you are arguing that it is, at least, a little bit complex (which seems to be getting more and more analogous to "a little bit pregnant" :-).

No, wait, this is not right.

When we talk of 'gravity' we often mean solar-system dynamics (remember, the SYSTEM is what we need to talk about, not the RULES .... gravity itself is just the low-level mechanism, i.e. the rules). But solar systems are a special case of a gravitational system in which most of the behavior is analyzable (thanks to Newton). As I said, if you consider the general case of an n-body system then it is fully and completely complex.

But when I say that gravity (by which I mean the solar system) is *partially* complex, I mean that when the orbits are as badly behaved as Pluto's is, the system is unstable. In the specific case of our solar system the presence of Pluto means that the dynamics become grossly unpredictable once in a while. That is the 'little bit of complexity'.

That idea of partial complexity is not to be sniffed at. This is not like partial pregnancy. It just means that we can explain some fraction of the system's behavior, but not all of it. Or that we can explain it completely most of the time, while some of the time the explanation breaks down.
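To make the n-body point concrete, here is a toy sketch (everything in it is illustrative: the classic "Pythagorean" three-body starting configuration, crude Euler integration, G = 1, and a small softening term to avoid singularities -- not a model of the actual solar system). Two runs that differ by one part in 10^8 in a single coordinate end up macroscopically different:

```python
# Toy three-body sketch: illustrative numbers only, not a solar-system model.
def accelerations(pos, masses, eps=1e-3):
    """Pairwise softened gravitational accelerations in 2D (G = 1)."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                inv_r3 = (dx * dx + dy * dy + eps * eps) ** -1.5
                acc[i][0] += masses[j] * dx * inv_r3
                acc[i][1] += masses[j] * dy * inv_r3
    return acc

def integrate(pos, masses, dt=1e-3, steps=20000):
    """Crude Euler integration from rest; returns final positions."""
    pos = [list(p) for p in pos]
    vel = [[0.0, 0.0] for _ in pos]
    for _ in range(steps):
        acc = accelerations(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

masses = [3.0, 4.0, 5.0]                      # Pythagorean configuration
a = integrate([(1.0, 3.0), (-2.0, -1.0), (1.0, -1.0)], masses)
b = integrate([(1.0 + 1e-8, 3.0), (-2.0, -1.0), (1.0, -1.0)], masses)

drift = max(abs(a[i][k] - b[i][k]) for i in range(3) for k in range(2))
print(drift)  # far larger than the 1e-8 nudge
```

That divergence is the reason the general n-body case is complex in my sense: the rule (Newtonian gravity) is tiny, but no practical theory predicts the system far ahead.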

Hope that clears it up: I think I have stayed consistent with the original definition I laid out.


> Second, you keep whip-sawing between dismissing obviously complex systems like the adaptive aerodynamics of an F-14 as not complex (because "whatever that complexity was, it was simple and predictable enough that the control software could actually be written and the complexity could be cancelled out.") and then saying that the least little bit of complexity will make an AI virtually impossible to design.
>
> You can't have it both ways. WHY is it that engineers can manage the complexity of high-speed adaptive aerodynamics yet you are absolutely positive that they can't do the same thing for intelligence?

The problem is that you cannot treat all complex systems as if they are all complex in the same way. Each one must be examined for its own particular characteristics.

In the case of the F-14, it is not the case that there are large numbers of elements that each interact with the others in ways that give rise to the worst kinds of complexity. The plane's designers treat the system as having only TWO components: the plane's body and the environment, with the environment having an unpredictable effect on the plane (it is a noisy signal). They simply [sic!] build a reactive system into the plane so that the plane is measuring the behavior of the environment and cancelling it out all the time. These two system components do not interact with one another in a way that includes any of the elements that give rise to complexity: the plane's control system performs just one simple function, which is to cancel out all fluctuations to make the plane fly straight.

In this case there is a clear situation in which the complexity and randomness is ignored ... the *content* of that randomness and complexity is of no significance whatsoever, because the control system is designed to do only one thing, and that is to cancel it out. At the level of the control system, the F-14 is diabolically simple; it is not complex.

The engineers do *not* "manage the complexity", they ignore it completely, and pretend that it is just a random signal (in fact, it may be just a random signal with no structure, for all I know: I have not investigated in detail).
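A stripped-down sketch of that "cancel it, don't understand it" strategy (the function name, gains, and disturbance model are all made up for illustration; no relation to real avionics): a proportional controller that only ever looks at the current deviation, never at the structure of the disturbance, keeps the state bounded, while the uncontrolled state random-walks away:

```python
import random

def fly(gain, steps=1000, seed=0):
    """Worst deviation from level flight under a random disturbance.

    The controller sees only the current deviation and pushes back on it;
    the *content* of the disturbance is never examined. Illustrative only.
    """
    rng = random.Random(seed)
    state = 0.0
    worst = 0.0
    for _ in range(steps):
        disturbance = rng.uniform(-1.0, 1.0)  # structure, if any, is ignored
        state += disturbance - gain * state   # correction cancels deviation
        worst = max(worst, abs(state))
    return worst

print(fly(gain=0.0))  # no controller: deviation random-walks outward
print(fly(gain=0.9))  # controller on: deviation stays bounded near zero
```

Note that the controller works no matter what generates the disturbance; that is exactly the sense in which the F-14 designers can ignore the environment's complexity.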

By contrast (and as I did say before), in the case of intelligence the complexity is happening in and among the very things that cannot be cancelled out. It is impossible to build an AGI by putting into one box all of the symbols and symbol-mechanisms that might possibly cause any complexity, and then having an outside system treat that entire box as if it were just a noise source! That outside system would just do its best to pretend that all the stuff going on with the symbols was meaningless noise, cancel it out, and deliver a final output from the system that was .... well, what? Nothing. The AGI would do nothing intelligent. All symbol activity would have to be cancelled by the compensating mechanism.

And within those symbols, there would be huge potential for complexity to arise, unlike in the F-14 case. The components in a symbol system interact in the most floridly tangled ways possible, whereas in the F-14 the components that give rise to the signal that is treated as noise do not even have many of those tangled characteristics (the four forces of doom) anyway. (Plus, even if they did, those effects would then be cancelled out.)

So, these two examples are about as far apart as they possibly could be. There is no way that I am trying to have it both ways.



> I think that the shoe is really on the other foot . . . . what problems *haven't* been eventually solved once we learn enough? True -- intelligence is the mother of all problems, but that doesn't mean that it's too difficult to engineer (like virtually anything else that humankind has put its mind to).

What complex systems have *not* been solved?

I could set up a program that generated rule-sets to insert into a cellular automaton, and I could get this program to generate thousands or millions of rule-sets every second, and run it for years without it ever repeating the same rule-set. I could accumulate countless billions of systems that way. I doubt that any of those systems would be analysable.
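A sketch of that generator (the parameters are arbitrary choices for illustration: two states, radius-2 neighborhoods, which gives 2^32 possible rule tables, so random draws essentially never repeat):

```python
import random

random.seed(1)

RADIUS = 2
TABLE_SIZE = 2 ** (2 * RADIUS + 1)  # 32 neighborhoods -> 2^32 possible rule-sets

def random_rule():
    """One randomly drawn rule table: each 5-cell neighborhood maps to 0 or 1."""
    return [random.randint(0, 1) for _ in range(TABLE_SIZE)]

def step(cells, rule):
    """Apply the rule table once, with wrap-around boundaries."""
    w = len(cells)
    return [rule[sum(cells[(i + d) % w] << (d + RADIUS)
                     for d in range(-RADIUS, RADIUS + 1))]
            for i in range(w)]

def run(rule, width=48, steps=50):
    """Evolve a random initial row under the rule; return the final row."""
    cells = [random.randint(0, 1) for _ in range(width)]
    for _ in range(steps):
        cells = step(cells, rule)
    return tuple(cells)

rules = [random_rule() for _ in range(200)]
finals = {run(r) for r in rules}
print(len(finals))  # a large fraction of the rule-sets yield distinct behavior
```

Each draw is a fresh system whose global behavior has to be examined on its own terms; nothing about the rule table tells you, by inspection, what the system will do.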

Or, to put it the other way around, if I asked you to engineer a system to have a specified global behavior, but stipulated that the rules of interaction between the components should be as tangled as those in intelligent systems, nobody would ever be able to find the rules that would generate the prescribed global behavior.

People have studied huge numbers of artificial complex systems (I have no idea how many), and as far as I know nobody has ever built one (with that much tangledness in the local rules) to have a particular behavior.

If you count up the number of "unsolved" complex systems, I daresay it would grossly outnumber all of the solved natural systems that have been understood since the birth of science.



Richard Loosemore





----- Original Message ----- From: "Richard Loosemore" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Tuesday, April 29, 2008 7:52 PM
Subject: [agi] Complexity is in the system, not the rules themselves


Mark Waser wrote:
> If I understand Richard correctly, he is assuming that it is
> necessary to make symbols themselves complex and that each symbol
> needs his four forces of doom: Memory, Development, Identity, and
> Non-Linearity.
>
> I have no problem with the first three but am not so sure that I
> agree with the non-linearity.  Certainly, the interactions between
> symbols are non-linear but I believe that they are reasonably bounded
> -- particularly if you use some intelligent design principles (pun
> intended).  For example, nature re-uses virtually everything -- I
> have to believe that this applies to cognition as well.  Similarly,
> look at software design patterns (as per Gamma et al.).  I don't
> believe at all that rules governing the behavior of inter-symbol
> interactions are necessarily complex.  I believe that inter-symbol
> interaction will eventually be soluble with a reasonable number of
> rules (and rules generated from those rules).  Just like gravity, the
> behavior generated by the rules WILL be complex but the rules will
> not.  And just like gravity, there will be more than enough
> regularity that we will be able to predict and control the stability
> of inter-symbol interaction *as long as* we understand the rules well
> enough.


More than once in your recent posts, you have said one particular thing that does not make any sense to me, so I need to focus on it.

What you said in the above case was "I don't believe ... that rules governing the behavior of inter-symbol interactions are necessarily complex".

The problem with this statement is that strictly speaking one can never say that the RULES governing a system are "complex".

Now, before you jump on me (because I have probably made the same mistake), I should say that we sometimes talk that way as a kind of shorthand, but right now we must tread very carefully, so I am going to be very precise:

The rules that govern a system are just rules - they are not, by themselves, "complex". The SYSTEM can be complex (meaning: you cannot understand global behavior from local rules), but the rules themselves are not complex.

But then what can you say about the rules? What you can say about them is whether or not they seem likely to generate complexity. Certain kinds of simple, linear, elegant and separable rules tend not to generate complexity, but other kinds of ugly, tangled rules do tend to generate complexity in the system as a whole.

What do I mean by "ugly, tangled rules"? Well, that was the whole point of me listing the so-called four forces of doom. That list of rule characteristics:

  - Memory
  - Development
  - Identity
  - Nonlinearity

... is just the sort that tends to make the system as a whole complex. These rules are not "complex" by themselves, it is just that in our empirical studies of large numbers of experimental systems, putting THOSE kinds of rules in tends to make the system as a whole behave in a complex way. Most often it makes the system just random, of course! But if complexity is going to happen, it is usually because the rules have one or more of those features.

So, to illustrate why this is a big deal, look at the quote above: you say that

> I have no problem with the first three but am not so sure that I
> agree with the non-linearity.  Certainly, the interactions between
> symbols are non-linear but I believe that they are reasonably bounded...

This is not something you can defend: if you think that the rules that govern the behavior of symbols do tend to have three of the four characteristics, then you must expect that the system as a whole will be complex, because this is just an empirical fact.

In particular, you cannot say "... the interactions between symbols are non-linear but I believe that they are reasonably bounded...". Reasonably bounded? That does not buy you anything at all: we can put the tiniest amount of nonlinearity into a system and leave out all the others, and the system still can be complex!
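The textbook illustration of that point (a standard example, not something from this thread) is the logistic map: a single quadratic term is the only nonlinearity, there is no memory, development, or identity, and yet at r = 4 two trajectories that start a trillionth apart become completely decorrelated:

```python
# Logistic map x -> r*x*(1-x): one quadratic nonlinearity, nothing else.
def trajectory(x, r=4.0, steps=60):
    """Iterate the map from x; return the whole trajectory."""
    path = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        path.append(x)
    return path

a = trajectory(0.2)
b = trajectory(0.2 + 1e-12)  # a one-part-in-10^12 nudge
gaps = [abs(p - q) for p, q in zip(a, b)]
print(gaps[0], max(gaps))  # tiny at first; grows to order 1
```

So "reasonably bounded" nonlinearity buys nothing: the map above is bounded in [0, 1] for its entire lifetime and is still unpredictable in practice.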

Now, it is certainly true that we sometimes utter phrases like "the rules governing the system are complex", but that is sloppy, because what we mean is that the rules have enough of these characteristics that the system is complex. I sometimes do this myself, even though I shouldn't, but it is generally harmless.

So when you say:

> Just like gravity, the
> behavior generated by the rules WILL be complex but the rules will
> not.

... I have to say that this is a meaningless statement on two counts. First of all, if the rules have some of those four complexity-generating characteristics, the system as a whole will almost always be either complex or random-and-boring. We just do not know of any (many?) examples of a system that has those four in the elements, but where the system as a whole is easily predictable or analysable from its element rules! For anyone to say that they believe that intelligent systems will be the exception is to fly in the face of all empirical evidence... it would be the biggest fluke in the history of the universe if a system had all those, and yet was not either complex or random.

Second, you compare to gravity. Disastrous example: the gravitational force only has one of the four characteristics, and that is nonlinearity in n-body systems, where n > 2. And if n > 2, but the system is dominated by one large mass and a few widely separated little ones, then the system is mostly not complex (and this, of course, is what applies in the case of the solar system and Earth satellites). Hoping that the intelligent systems case will happen to be like the *special* case that is like the solar system is a big stretch: there is almost no mapping between the cases.

Reading and re-reading your passage above, I cannot find anything that says why we should expect the case of interacting symbols to not give rise to complexity. I hear you when you say you *believe* that there will not be a problem .... but if you keep in mind everything I have just said, can you say why you believe that in this case the evidence for complexity will be overwhelming, but the complexity will simply not be there?




Richard Loosemore

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com





