Mark Waser wrote:
> Hey Richard,
>
>    You and I seem to have stalled out on the complexity question . . . .

Oh, it is only that I am under so much time pressure that I have become exhausted by it. Sorry about that.

(Plus I have become distracted by comments elsewhere...)



> I don't mean to be difficult but I'm still not sure that I'm getting the point . . . . so let me try to recap what I think you're saying . . . .
>
> Question 1.
>
> Richard > The purpose of the argument
> Richard > My purpose is to explain that if the task of building an artificial intelligence involves trying to engineer a "complex system", then we are in big trouble because all the methods currently used by AI researchers depend on the fact that intelligent systems are not complex systems.
>
> OK. I have no problem with saying that most methods used depend upon the fact that intelligent systems are not complex and that that is a problem.
>
> Richard > When is a system complex?
> Richard > When trying to decide if a given system is complex, it is important to be clear about some of the distinctions I made in the definition of complexity (yesterday's post).
> Richard > First, the strict definition of a complex system is that it has some observable behavior that can only be explained by a theory that is too large for us to discover (and possibly there is no explanation at all, except for simulating the entire system). So the most basic criterion for complexity is the size of the theory that "explains" the system's behavior.
>
> I have a couple of problems here. First, your strict definition is not the definition that I am accustomed to. I would be willing to accept your definition *EXCEPT* that you repeatedly refer to systems that do not meet your definition as complex. For example, the theory of gravity is *VERY* simple. The theory perfectly explains observable behavior if you have the necessary computing power and accuracy of initial measurements. Yet, you insist that "if you consider the general case of an n-body system then it is fully and completely complex." You did leave yourself an odd out by including the phrase "except for simulating the entire system" BUT since the theory of gravitation is THE correct explanation for an n-body system I don't see why you believe you can maintain this definition and also claim that an n-body system is complex.

Okay, let me see if I can untangle this.

One preliminary issue needs to be cleared up first:

Remember that it is not really 'theories' themselves that are complex, it is systems. The theory of gravity is simple, yes, but I would never say that the theory itself is 'complex', I would say that a particular gravitational system is complex. (At least, I should be talking that way: if I have erred, that was a simple mistake, or a shortcut that I should not have made).

To illustrate why this is important, consider that when I tried to define complexity, I talked about whether or not a particular system had 'regularities' in its behavior - for example, elliptical orbits for planets - which could or could not be explained by a theory. This talk of regularities is about a given system, not about the laws or theories that govern that system. So it would make sense to say "I observe a planetary system, and it appears that the planets are moving in elliptical orbits: how come?", whereas if I said "The law of gravity has elliptical orbits that need to be explained", that would be pretty meaningless.

So now, going back to your question, we have to talk about specific gravitational systems. We are so used to the solar system that we kind of assume that it is the only interesting case, but of course it is not. If we do stick to a solar system, it is mostly "not complex" because most of the regularities that we observe can be explained by analysing the equations of gravitation and showing that elliptical orbits are a prediction of the theory. The exceptions are some weird effects in ring systems (braiding) and Pluto's orbit (it goes berserk every once in a while and disturbs the whole system).

If you look at a 3-body system in which the masses are similar, though, it is very hard in practice to predict any regularities that appear (if there are any ... such a system might just be random). Okay: but hold that thought for a moment, because I will come back to the 3-body case shortly.
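To make that 3-body point concrete, here is a toy sketch (my own illustration, not anything from the thread): a crude Newtonian integration of three similar masses, run twice with initial conditions that differ in the sixth decimal place. The units, masses, and starting positions are all made up; the only point is that the two runs part company, which is exactly why predicting regularities in practice is so hard even when the governing law is trivially simple.

```python
import math

G = 1.0           # toy gravitational constant
SOFTENING = 1e-4  # keeps accelerations finite during close encounters

def accelerations(pos, masses):
    """Pairwise Newtonian gravity for 2D point masses (softened)."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy + SOFTENING) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def simulate(pos, vel, masses, dt=0.001, steps=20000):
    """Naive Euler integration -- crude, but enough to show divergence."""
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    for _ in range(steps):
        acc = accelerations(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

masses = [1.0, 1.0, 1.0]
vels = [[0.0, -0.5], [0.0, 0.5], [0.5, 0.0]]

a = simulate([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]], vels, masses)
# Same system, one coordinate nudged in the sixth decimal place:
b = simulate([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.500001]], vels, masses)

# How far apart the third body ends up in the two runs:
divergence = math.dist(a[2], b[2])
```

The law being integrated is one line of physics; the trajectories are still, in practice, unpredictable from that line alone.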

Now, it sounds like you are saying that IN PRINCIPLE the law of gravity explains everything about the solar system, in the sense that if we had enough computational power, accurate enough initial conditions, etc., we could predict the orbits perfectly. Indeed, according to this point of view the law of gravity explains everything about all systems, including 3-body systems, because you do not need anything else to account for their behavior. In other words, there is nothing voodoo about gravitational orbits, because Newton (+ Einstein) explained it all in principle. We may not be able to do the practical calculations, but that is beside the point: we have "explained" motion under gravity because we have pinned down the laws.

I understand this fully, and do not disagree with it in any way.

BUT, that general sense of "explain" is not the one I am talking about, because the only thing that matters in the complex systems idea is whether you can "explain" the regularities observed in a system, and "explain" here does not mean "explain in principle", it means explain in the sense of being able to predict the regularities in practice.

So, let me jump to the case of the braiding effects in Saturn's rings. The last I heard (and I hope I am not out of date on this), people could only 'explain' these by doing computer simulations in which there were particles and planets and little moons. When you have this combination, you observe (in the simulations) something like braiding. So the braiding is a high-level regularity, and the explanation is .... well, the only explanation is that a simulation does the same kind of thing. Nobody did an analysis of the basic Newtonian equation and said "I predict that braiding effects will occur when there are rings and little moons". As far as I know, we do not expect such an analysis to be possible.

So now, a planet + rings + little moons situation is a complex system ONLY because the explanation for the braiding regularity can only be done with a simulation of the system: you cannot use the equations of gravity to make an actual prediction of this regularity.

Why this focus on a particular meaning of the word "explanation"? Well, because in many other cases it is a very important practical consideration, whether we can look at the rules and know what the system will do. When we get down to brass tacks, the complex system idea is ONLY about whether we can practically explain a given regularity. A situation where someone says "This system is governed by Law X, and in principle we can explain all cases with Law X" is of no earthly use to us in some situations. Sometimes, we really care about where a particular regularity 'comes from', or what factors in the system will affect it.

There are, for example, some interesting circumstances arising in the exoplanet searches right now. It appears that there are some n-body regularities involved in solar systems with big, Jupiter-like gas giants. It may be that in these systems, taken as a class, there is a tendency for Jupiters to migrate inwards. Nobody predicted this, but astronomers have observed so many systems that have been swept clean by a 'hot Jupiter' that ended up very close to its sun, that they now think this is a consistent pattern of behavior ... a regularity. The explanation, as always, is that simulations show the same tendency, now that we go looking for it.

http://www.newscientist.com/article/dn6432-jupiter-drifted-towards-sun-in-its-youth.html

Does this make sense so far? When I say "explain" I really mean the practical problem of accounting for particular regularities without having to do simulations. I think that I have been consistent here, but only because I (try to) keep clear of saying that a law is complex, and only talk about a particular system (or class of systems) being complex. In the next few steps of the argument, it becomes critically important whether you are talking about explanation in principle, or explanation of particular regularities, because most of the complex systems cases of interest are not as simple as the various gravitational cases. Gravity gives us very few reasons to suspect that it will give rise to complexity, but there are other systems where the basic mechanisms scream "I am probably going to be complex!", and those are the more interesting cases.

If this much is clear, we can go to the next step, but if not, tell me what is not clear.


*************

Okay, I just read your question number 2 in a separate post, so I will deal with that as well.

Mark Waser wrote:
> Question 2.
>
> A lot of the time it seems as if you are saying that engineering a
> complex system is impossible . . . . (and that AGI *IS* complex).
>
> Am I correct in this interpretation of your words?
>
> If so, are you saying that AGI is entirely doomed or do you have some
> solution?


Oh no no no!

The first statement is correct in the sense that if you go through the following steps:

1) Decide (or admit) that the target system you want to build will have interesting high-level regularities that are complex, and then

2) Write down what you want those high-level regularities to look like, and then

3) Set out to 'engineer' some low-level mechanisms that will make those high-level regularities appear ...

... then you would be doomed, in the general case.

[Try to imagine that you take on a commission to design a replacement for the laws of Game of Life which will cause a particular set of 'creatures' to appear. There is no way that you would even try.]
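The Game of Life example can be made concrete with a minimal sketch of my own (the rule itself is standard; the code is just an illustration). The update rule says nothing whatsoever about gliders, yet run it on the classic glider pattern and the pattern reappears one cell diagonally displaced every four generations: a high-level regularity you discover by running the system, not by inspecting the rule. Designing a *replacement* rule to produce a chosen creature would mean inverting that relationship, which is the hopeless part.

```python
from collections import Counter

def step(live):
    """One Game of Life generation on an unbounded plane.
    `live` is a set of (row, col) tuples marking live cells."""
    # Count how many live neighbours every nearby cell has.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider:
#   .O.
#   ..O
#   OOO
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

gen = glider
for _ in range(4):
    gen = step(gen)

# After 4 generations the glider has moved one cell diagonally.
shifted = {(r + 1, c + 1) for (r, c) in glider}
```

Nothing in the two-line rule mentions "moving diagonal spaceship"; the regularity lives at a level the rule does not talk about.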

The only exception to this doomsday scenario is if someone else has already invented such a system by trial and error - for example, if evolution has produced a system that is intelligent, and your goal is to make a system that is also intelligent.

Under those circumstances, where there is already an example of something like your target, you could conceivably build one that worked the same way by doing two things:

1) Design yours to be as much like the example as possible, and

2) Change your methodology so that you are always making explorations of the 'near vicinity' in the space of intelligent systems.

What that second one means is this. If your current best guess as to the design of the human cognitive system is one point in the space of all possible cognitive systems, make sure that you build a full working model of that best-guess system, and arrange to do as many variations on that design as possible .... exploring these variations by both automatic parameter-adjusting techniques and by getting people to think of variations on the design that just make sense.

The theory behind this approach is that nature will not have built something so fragile that one small deviation from the exact design will result in a total loss of intelligence, so you can probably score a near miss in the design space and still get something worthwhile. So even if your best guess is wrong, maybe a viable solution is nearby, and by exploring you will hit it.
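The exploration strategy above can be sketched as a toy local search (entirely my own illustration: the fitness function here is a made-up stand-in for "build a full working model of the candidate design and test it", and the notion that the true design sits near the best guess is exactly the nature-is-not-fragile assumption):

```python
import random

def explore_neighbourhood(best_guess, evaluate, step=0.1, rounds=200, seed=0):
    """Toy 'near-vicinity' search: perturb a best-guess design and keep
    any variant that scores at least as well as the current one."""
    rng = random.Random(seed)
    current, score = list(best_guess), evaluate(best_guess)
    for _ in range(rounds):
        variant = [p + rng.uniform(-step, step) for p in current]
        s = evaluate(variant)
        if s >= score:
            current, score = variant, s
    return current, score

# Hypothetical fitness landscape: the 'true' design sits at (0.3, 0.7),
# and our best guess is a near miss rather than an exact hit.
def evaluate(params):
    return -((params[0] - 0.3) ** 2 + (params[1] - 0.7) ** 2)

guess = [0.5, 0.5]
design, fitness = explore_neighbourhood(guess, evaluate)
```

In the real methodology each `evaluate` call is enormously expensive (a full working system), which is why the starting point has to be as close to the natural example as possible.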

Most importantly, one thing you would not do is to insist that everything in the low-level mechanism that you are designing *must* be interpretable in high-level terms! There is absolutely no reason to believe that all the components in the low level should mean something higher up. This is a general point that could do with a lot more explaining, but it should be intuitively obvious: that is what complexity is about, after all ... that there should be a disconnect between low-level and high-level.

You can see many examples of this last issue in the way that people go about designing AGI systems. First they start with a 'clean' idea about what the low-level stuff means: there are logical atoms, and probabilities and predicates and truth values, etc. etc. Then, inevitably, they find that these things do not, by themselves, actually give rise to a fully intelligent system. So they start to introduce extras. First it is the p-values that are not really probabilities. Or fuzzy truth values. Or there are 'activation levels'. Or belief values. But even these do not work, so more and more junk is added to try to kludge it. Eventually you start to wonder *why* someone's particular kludge is supposed to work: the clean version had some theory behind it, but now the kludge is probably going to make the system complex, so where are the guarantees that the overall behavior will be what it is supposed to be? And then, finally, you look at the mess and say to yourself "Why do we bother to make any of these parameters interpretable anyway, because now things are so unclean, the original interpretations have all gone out of the window?"





Now, one last point.

I do not believe for a minute that all of cognition is complex, in the same way that all of the Game of Life is complex. So the situation is not that dire.

But what I do believe is that it would only take a small amount of complexity - like the existence of one simple parameter, maintained by a little mechanism, which is located inside every symbol in the system - for the building of a stable intelligence to be impossible UNLESS we were deliberately and systematically looking for such things. If there were a parameter like the one I just mentioned, nobody would ever find it, because nobody is even looking for such a thing. That parameter could play the role of a catalyst in an enzymatic reaction: with it, everything works, but without it, complete failure.

Nature may have already discovered that the only way to get an intelligent system to work is to have just a few things in the design that work in a complex way. Then we come along, do some introspection (which is, ultimately, what AGI researchers do) and build something that looks a bit like the human design. But our introspections do not happen to go down to the level of the 'catalyst' parameter .... so all of our AGI systems turn out to become unstable when scaled up to full size.

All because we thought we could do it without that little 'catalyst' parameter, when nature had already discovered that there were no solutions that function without it.

That is all that would be needed to turn another 50 years of AI/AGI research into a complete waste of time, but the response from the community is that they *hope* that this is not how it will be. They are flying on hope alone.





Richard Loosemore
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/