John G. Rose wrote:
Could you say that it takes a complex system to know a complex system? If an
AGI is going to try to, say, predict the weather, it doesn't have infinite CPU
cycles to run a full simulation, so it'll have to come up with something
better. Sure, it can build a probabilistic historical model, but that is kind
of cheating. So for it to emulate the weather, I think, or to semi-understand
it, there has to be some complex-systems activity going on in its cognition. No?

I'm not sure that this is what Richard is talking about, but an AGI is going
to bump into complex systems all over the place. It will also encounter things
that seem complex but that it may later determine are not. And perhaps, for
the cognition engine to understand complexity differentials between systems
from a relational standpoint, it would need some sort of complexity... not a
comparator, but a... sort of harmonic leverage. Can't think of the right words....

Either way, this complexity thing is getting rather annoying, because on one
hand you think it can drastically enhance an AGI and is required, and on the
other hand you think it is unnecessary. I'm not talking about creativity or
thought emergence or the like, but about complexity as an integral component
of a computational cognition system.

There has always been a lot of confusion about what exactly I mean by the "complex systems problem" (CSP), so let me try, once again, to give a quick example of how it could have an impact on AGI, rather than restating the argument itself.

(One thing to bear in mind is that the complex systems problem is about how researchers and engineers should go about building an AGI. The whole point of the CSP is to say that IF intelligent systems are of a certain sort, THEN it will be impossible to build intelligent systems using today's methodology).

What I am going to do is give an example of how the CSP might make an impact on intelligent systems. This is only a made-up example, so try to see it as just an illustration.

Suppose that when evolution was trying to make improvements to the design of simple nervous systems, it hit upon the idea of using mechanisms that I will call "concept-builder" units, or CB units. The simplest way to understand the CB units is to say that each one is forever locked into a peculiar kind of battle with the other units. The CBs spend a lot of energy engaging in the battle with other CB units, but they also sometimes do other things, like fall asleep (in fact, most of them are asleep at any given moment), or have babies (they spawn new CB units), and sometimes they decide to lock onto a small cluster of other CB units and become obsessed with what those other CBs are doing.

So you should get the idea that these CB units take part in what can only be described as "organized mayhem".
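To make the "organized mayhem" picture concrete, here is a toy agent-based sketch of a population of CB units. Every behavior, probability, and name here is invented purely for illustration; nothing is claimed about real neural mechanisms or about any actual AGI design.

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable

class CBUnit:
    """A hypothetical 'concept-builder' unit: mostly asleep, occasionally
    battling peers, spawning offspring, or fixating on a small cluster."""
    def __init__(self, uid):
        self.uid = uid
        self.energy = 1.0
        self.asleep = True
        self.watching = []  # cluster of other units it is obsessed with

def step(units):
    """One tick of 'organized mayhem' (all probabilities are made up)."""
    newborns = []
    for u in units:
        u.asleep = random.random() < 0.8      # most units sleep at any moment
        if u.asleep:
            continue
        rival = random.choice(units)
        if rival is not u:                    # battle: both sides spend energy
            u.energy -= 0.1
            rival.energy -= 0.1
        if random.random() < 0.05:            # 'have babies': spawn a new unit
            newborns.append(CBUnit(uid=len(units) + len(newborns)))
        if random.random() < 0.1:             # lock onto a small cluster
            u.watching = random.sample(units, k=min(3, len(units)))
    units.extend(newborns)
    return units

units = [CBUnit(i) for i in range(20)]
for _ in range(10):
    step(units)
print(len(units))  # population after ten ticks of mayhem
```

Nothing in this sketch "represents concepts" in any deep sense; the point is only that local, competitive, mostly-uncoordinated interactions like these can still produce aggregate behavior that looks regular from the outside.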

Now, if we were able to look inside a CB system and see what the CBs are doing [Note: we can do this, to a limited extent: it is called "introspection"], we would notice many aspects of CB behavior that were quite regular and sensible. We would say, for example, that the CB units appear to be representing concepts like [chair] and [upside-down] and [desperation], and we would also say that when some CB units have babies, it looks rather like a couple of existing concepts being combined to form a new concept.

In fact, we might notice so many regular, ordered, understandable things happening in the CB-system that we would start to believe that the CB units were not engaging in what I just called "organized mayhem" at all! We might say that the whole thing was pretty comprehensible and ordered.

In fact, we might be tempted to try to build a version of the system in which the behaviors were tidied up and cleaned - a system in which the 'meaning' of each CB unit was precisely defined, and in which the building of new CBs always proceeded in a very precise, understandable way. And then, after we started our project to build a "cleaned-up" version of a CB system, we would say that all we were doing was eliminating a lot of wasteful noise and inefficiency in the original CB system that was built by evolution.

But now, here is a little problem that we have to deal with. It turns out that the CB system built by evolution was functioning *because* of all that chaotic, organized mayhem, *not* in spite of it. It was not really a nice, organized, understandable mechanism plus a bit of noise and wastefulness... it was a mechanism whose proper functioning absolutely depended on a proper balance of those fighting CB units. In fact, the overall intelligence of the system would drop like a stone if some of those mechanisms were taken away. It was like an ecology: all the competing species are in perfect balance, not because they are cooperating so that everyone gets the resources they need, but because nobody is cooperating with anyone else at all.

Now, here comes a crucial idea that many people seem to miss. The CB system would not have to be as bad as an ecology, with intelligence "emerging" suddenly out of complete randomness. I would not expect the situation to be nearly as drastic as that. Instead, it could be the case that the CB system is 99% understandable, but with just a 1% part that appears to serve no meaningful purpose. Really: just a small touch of random, incomprehensible stuff embedded in what otherwise seems to be a fairly logical system.

But so what? you might ask. We can just go ahead and build a "cleaned up" version of a CB system, and when we get it mostly working, we just look at what else we need to add to it, to get that extra ingredient of random, incomprehensible stuff that nature seems to have included in its design. One way or another, you might think, we find our own substitute for that last 1%: we could figure out how the last 1% actually has an effect on the natural system, and having understood it, we build an equivalent for our system. Or, we just keep tweaking some parameters until we get there.

Sorry: not going to work. This is why the CSP is so serious. If you start by building a cleaned-up version of the system, and then try to discover those last few mechanisms, you will find that there is no relationship between what you want those mechanisms to do for the system, and what the mechanisms actually look like. Repeat: no relationship at all. You might as well start by opening up God's Compleat Catalogue of All Possible Mechanisms, then work your way through the book from "Aaaaaaaaaaaaaabbbcceddghjduy" on page 1 to "Zzzzzzzzzzzzzzzzyxyshzsusido" on page Googolplex, because any one of those mechanisms might be the one that does the trick.
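A quick back-of-the-envelope calculation shows why a blind walk through the "Catalogue" is hopeless. The encoding size and test rate below are invented, and deliberately generous; the conclusion survives any realistic choice of numbers.

```python
# Toy sizing of a blind search through 'all possible mechanisms'.
# Suppose each candidate mechanism could be encoded in a mere 200 bits
# (a wildly generous simplification), and we could test one candidate
# per nanosecond.
candidates = 2 ** 200
tests_per_second = 10 ** 9
seconds_per_year = 60 * 60 * 24 * 365
years = candidates / (tests_per_second * seconds_per_year)
print(f"{years:.2e} years")  # astronomically longer than the age of the universe
```

Because there is (by hypothesis) no relationship between a mechanism's description and what it does for the system, you cannot prune this space with gradients or heuristics; exhaustive enumeration really is the worst case you face.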

Even worse, if you start by building your own version of the CB system, there might not be any extra mechanisms you can add to it to get it up to the same level of performance as nature's CB system.

Those last two paragraphs can be summarized as follows. Evolution explored the space of possible intelligent mechanisms. In the course of doing so, it discovered a class of systems that work, but it may well be that the ONLY systems in the whole universe that can function as well as a human intelligence involve a small percentage of weirdness that just balances out to make the system work. There may be no cleaned-up versions that work.

That leaves you, the would-be system builder, with these choices:

(1) Build your own cleaned-up, rationalized version anyway, and hope that it will work without any complex-weirdness. If the above circumstances are the real situation, this will never work.

(2) Build your own cleaned-up, rationalized version anyway, and then try to find out what extra complex-weirdness you have to add to it to make it work. If the above circumstances are the real situation, this could conceivably work, but you will have to go through God's Compleat Catalogue of All Possible Mechanisms in order to do it: you could therefore need a computer the size of a planet and spend the same amount of time that evolution spent on the problem. It would probably not be quite as long as that, but even if it were one billionth the effort, it would still take forever.

(3) Build a version of the CB system that is as close as possible to the human cognitive system, because we know that evolution has already done the heavy lifting. If the above circumstances are the real situation, this could work, provided we can get close enough to the human design to close the gap with some jiggling of parameters.

Option 1 is what many AI researchers have been doing, and continue to do. When you tell these people that they should take some notice of the complex systems issue, they don't simply disagree with you, they start foaming at the mouth and screaming "fraud!" or "crackpot!". Eliezer Yudkowsky is the paradigm case of this reaction.

Option 2 is what some more enlightened AGI researchers are doing. Ben is in this group.

Now the final conclusions.

A) Please be clear about one thing. This argument is about risks, not about certainties. I am not saying "All intelligent systems are DEFINITELY complex systems", I am asking "What is the chance that they are complex?". I am not saying "The impact of complexity would DEFINITELY be catastrophic", I am asking "What are the risks of failure, if we ignore the problem and it turns out to be real?". My conclusion is that the risks are so high that we should assume the worst.

B) One difficulty with all of this is that when you look into the detailed argument you find that it will never be possible to ascertain the exact risk of this scenario being real. Whatever decision you make, you will have to make it blind. No waiting around to find out for sure that there is a problem.

C) Lastly, if you look at what you would expect to happen if the CSP is a real problem, you will notice something interesting: the pattern of failure we have seen over the last fifty years in AI is exactly what we would have expected if the CSP is indeed as real as I think it is.




Richard Loosemore
