Mark Waser wrote:
> Richard,
> I'm afraid that you have successfully talked me out of the complex systems camp.
>
> Richard Loosemore wrote:
>> 6) A system is deemed “complex” if the smallest size of a theory that will explain that system is so large that, for today’s human minds, the discovery of that theory is simply not practical. Notice that this definition does not imply that there are any such systems in the real world; it just says that *if* the theory size were ever to go off the scale *then* the system would (by definition) be complex.
>
> I just don't believe that the core of intelligence is complex according to this definition. The combination of the core of intelligence *plus* the world is clearly complex, but I don't believe that a boot-strap intelligence need be complex (by this definition). This definition is *not* what I understand to be complex.
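
Before launching into an example, let me pin my definition down a little more. In rough symbols (this formalization is mine, and is only meant to be suggestive):

    complex(S)  <=>  min{ |T| : T is a theory that explains S }  >  B

where |T| is the size of the theory T, and B is the (admittedly fuzzy) bound on what today's human minds can practically discover. Nothing below depends on making B precise.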


Okay, let's take an example to illustrate it.

You know about those (abysmally depressing) experiments in which cats are raised so that they only ever see vertical stripes, and then as adults their brains cannot detect horizontal stripes?

Assume, for the sake of argument, that a similar principle exists at a higher level of the cognitive system. Specifically, assume that the adult human mind is largely driven by "operators" which act on the "symbols", in such a way that the operators are a bit like biological catalyst molecules and the symbols are like the atoms and molecules that are manipulated by the catalysts.

We will suppose (quite plausibly) that most of these operators do not come hard-wired, but are grown as a result of developmental experience (which means both experience of thinking and experience of the world).
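
To make the analogy concrete, here is a toy sketch of the kind of arrangement I mean. Everything in it is invented for illustration (the names, the feature-bundle representation, the growth rule); Python is just a convenient notation, and none of this is a claim about the real mechanisms:

    import random

    Symbol = frozenset          # a symbol is just a bundle of features

    def make_operator(pattern, rewrite):
        """Catalyst-like: recognizes symbols matching 'pattern' and
        rewrites them, without itself being consumed in the process."""
        def operator(pool):
            for sym in list(pool):
                if pattern <= sym:      # pattern is a subset of the features
                    pool.discard(sym)
                    pool.add(Symbol(rewrite(sym)))
            return pool
        return operator

    def grow_operator(experience):
        """Operators are not hard-wired: they are grown out of
        developmental experience (here, crudely, out of features that
        happened to co-occur in that experience)."""
        a, b = random.sample(sorted(experience), 2)
        return make_operator(pattern=Symbol({a}),
                             rewrite=lambda s, b=b: s | {b})

    # Developmental loop: experience grows the operators, and the grown
    # operators then do all of the later "thinking".
    pool = {Symbol({"edge", "vertical"}), Symbol({"edge", "horizontal"})}
    operators = [grow_operator({"vertical", "stripe", "motion"})
                 for _ in range(3)]
    for op in operators:
        pool = op(pool)
    print(pool)

The only point of the sketch is the shape of the thing: the symbols are passive data, the operators act on them catalytically, and the operators themselves are manufactured by experience rather than written in by hand.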

Now one more assumption, which requires a little more imagination. This has to do with what kinds of things these (finished, adult-version) operators really are ... because we have two choices:

Choice 1: The operators end up being clean and modular in their design, which means that if we were able to examine them from the outside, we would be able to understand how they worked, because their structure was NOT deeply entangled with the design of the symbols and the other stuff in the system. Call this the "God Is A Smalltalk Programmer" choice, because in this case the adult version of the cognitive system, after all the operators have developed, looks like a nice piece of OO programming, with no hideous dependencies between the entities. (I will sketch both choices in toy code, after Choice 2.)

Choice 2: [And I am sure you can see this coming already] Suppose that the operators develop in such a way that there are hideous dependencies between them ... like, really horrible design in which it is almost impossible to see how the operators work because everything developed as a big kludge. This is the "God Is A Spaghetti Programmer" choice, for obvious reasons.
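
To make the two choices vivid, here is a caricature of each (toy code again, with every name invented, and the second class deliberately awful):

    class Sym:
        """A minimal symbol: just enough to show the contrast."""
        def __init__(self, tags):
            self._tags = set(tags)
        def has(self, tag):
            return tag in self._tags
        def plus(self, tag):                # returns a NEW symbol
            return Sym(self._tags | {tag})

    # Choice 1: the operator touches symbols only through their public
    # interface, and can be understood in complete isolation.
    class CleanOperator:
        def apply(self, sym):
            return sym.plus("boundary") if sym.has("edge") else sym

    # Choice 2: the operator's behavior depends on other operators'
    # histories, on global state, and on frozen accidents of its own
    # development; to predict what it does, you must in effect
    # understand the whole system at once.
    class KludgeOperator:
        def __init__(self, siblings, junk):
            self.siblings = siblings    # entangled with the other operators
            self.junk = junk            # an accident of developmental history
            self.fired = False
        def apply(self, sym, global_state):
            key = (len(sym._tags) + self.junk) % 7       # history-dependent
            if global_state.get(key) or any(s.fired for s in self.siblings):
                sym._tags.add("boundary")   # pokes at the symbol's insides
            self.fired = True               # ...and leaves traces for others
            return sym

Notice that nothing stops the two operators from computing much the same input-output mapping. The difference is that the first one can be read off the page, while the second one means nothing without the rest of the system wrapped around it.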

In Choice 2, we are talking about cognitive systems that have the same core mechanisms (the same basic drivers that are there before the development process kicks off, the operators start getting built, and the symbols start being collected), but in which the actual form of the resulting operators and symbols is deeply ugly. It may well be that every individual mind actually looks quite different on the inside, even though the net result is a system that thinks in roughly the same way as all the others.

Now, am I saying that the cognitive system is definitely built this way? No, but it is at least a serious possibility. I could make a stronger case in favor, but rather than do that I will just say that this is at least as plausible as the alternatives.

It should be clear that the second choice describes a complex system, in the sense of the definition above. Can I take that to be agreed?

Is the second choice more plausible than the first? Well, yes: when a bunch of mechanisms interact in the way described, the usual result is not Smalltalk-like OOPSLA beauty, but a horrible (though functioning) kludge. That is just the way nature is. Agreed?

The question is, can we come along and try to build a clean, OO-like version of this operator system and get a non-complex version of an intelligent system working? My argument is this: if the only working examples of an AGI are these nature-built kludges, what are the chances that a clean version can be built? In reasonable time? Without being able to dissect and play with a working model of a nature-built system?

I cannot see any reason that COMPELS me to believe that such a clean version can be built. It may be that the only version of intelligence that can ever be built is one which grows up from some initial (pre-operator) mechanisms, and that ANY system that attempts to function with operators built after-the-fact will slowly diverge from stability as it interacts with the world.

In fact, looking at complex systems themselves (the real examples that people play with in the computer laboratory), I strongly suspect that we cannot do this.

How does this relate to what you said above? Well, your statement that you "don't believe that the core of intelligence is complex according to this definition" seems, to me, to indicate that you have compelling reasons to suppose that the above scenario really cannot be correct.

What makes you believe so (and believe it so strongly)?




Richard Loosemore