Mark Waser wrote:
[snip]
Personally, I believe that Richard's complex systems problem is overblown as well. I believe that if a mind is composed of many heterogeneous units/subsystems (though the units may themselves be composed of nearly identical units/design) that have bounded interfaces, and if the organization of these pieces and their aggregate interfaces are scale-invariant in this boundedness -- then the complexity will be firewalled and generally not cause rampant instability. I believe that the complexity problems that many systems have shown are due to overly large subunits, excessive connectivity, and inadequate firewalling/interface controlling. This is one of the reasons why I don't believe that single mathematical designs of everything are likely to work. Not because they can't be made to work if the higher-level organization is designed correctly (as nature might have done), but because I don't believe that the necessary higher-level organization is going to form spontaneously (unless you've got evolutionary-sized time scales and numbers of entities -- and if you look at the fact that recent scientific evidence is pushing much of neural stuff all the way back to sponges . . . ). My biggest complaint about OpenCog has always been that I think that you have the lower-level details down correctly but you're not worrying about the higher-level stuff (or, at least, while you believe that you have it handled, there's been nothing for others to look at). That seems to be less the case with some of the new OpenCog documentation, but I haven't finished absorbing that yet.

Mark,

Actually, this is addressed not just to your thoughts above, but also to Dave Hart's comments yesterday.

I think I have been guilty of not being specific enough about exactly what is meant by saying that 'complexity' is a problem, and as a result I have seen many interpretations that don't fit the idea that I actually had in mind. So, when I read your description above, my first thought was "Oh: that is not where I would have seen the complexity as being a problem at all".

Roughly speaking, what I am most concerned about when I mention the CSP (the complex systems problem) is when researchers make *design* decisions that then have subtle, unintended and uncontrollable consequences. So, a person might choose a particular knowledge representation scheme, say, and then try to tweak it later, when (as always happens) they find that it seems to be limiting what they can do. The important question, to my mind, is what kind of relationship exists between the (lowest-level, most basic) design choices and the (high-level) behavior of the system.

Let me be more concrete. Suppose that someone decided to use traditional symbols (items that simply had a label attached, like [cat] and [like] and [running]), together with a method for combining these symbols into statements, and with simple truth values attached to the statements. In this kind of system you would find statements like "I hate Brahms. (p = 0.91)".
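To pin the example down, here is a minimal sketch of that first kind of representation. Every name in it is hypothetical -- it is just the bare idea of labeled symbols combined into a statement with a single scalar truth value attached:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Statement:
    """Bare labeled symbols combined into a statement, with one truth value."""
    symbols: tuple   # e.g. ("I", "hate", "Brahms")
    p: float         # scalar truth value in [0, 1]

    def __str__(self):
        return f"{' '.join(self.symbols)}. (p = {self.p})"

print(Statement(("I", "hate", "Brahms"), 0.91))  # I hate Brahms. (p = 0.91)
```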

Now, we all know that there are differences between how easy it is to get different AI systems to work, depending on these design choices. For example, if some other AI researcher picked a different scheme in which there were not just 'truth values' like p = 0.91, but also confidence intervals like p = 0.91, c = 0.04, we might expect this new AI system to be able to do some things a lot better than the first one.
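The richer scheme could be sketched as a truth value that carries a confidence component, plus a toy revision rule. The pooling rule here (inverse-weighting by interval width) is my own illustrative choice, not taken from any particular AI system; it just shows one thing the extra component buys -- when two estimates of the same statement are merged, the more confident one dominates:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TruthValue:
    p: float   # expected truth, e.g. 0.91
    c: float   # width of the confidence interval, e.g. 0.04 (smaller = surer)

def revise(a: TruthValue, b: TruthValue) -> TruthValue:
    """Pool two estimates of the same statement, weighting narrower
    (more confident) intervals more heavily. Illustrative only."""
    wa, wb = 1.0 / a.c, 1.0 / b.c
    return TruthValue(p=(wa * a.p + wb * b.p) / (wa + wb),
                      c=1.0 / (wa + wb))

# With equal confidence the pooled value is the plain average,
# and the combined interval is narrower than either input.
print(revise(TruthValue(0.9, 0.04), TruthValue(0.5, 0.04)))
```

A system with only scalar p values cannot even express the difference between "p = 0.5 on heavy evidence" and "p = 0.5 on almost none", which is the kind of thing we would expect the second system to handle better.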

However, what exactly is the relationship between these different choices of design at the lowest level and the 'power' of the overall system? Is it the case, for example, that a system like the two I just described could be improved to an arbitrary extent by adding new, more subtle combinations of truth values, confidence intervals, and so on? If we added enough machinery to the truth-value part of the design, would we always be able to find a way to get from our initial design to a design that had human-level intelligence?

Or, is it possible that the initial choice of design could never be improved up to the human level, no matter what truth-value stuff was added to it? Could it be that other aspects of the initial design have already boxed it into a You-Can't-Get-There-From-Here situation?

What I am pointing to, then, is a 'complexity' that exists in the relationship between low-level design choices and high level consequences. So, when I see you write that "complexity problems that many systems have shown are due to overly large subunits, excessive connectivity, and inadequate firewalling/interface controlling", I see something that is way, way higher up than the root cause that I am referring to. In the kind of hypothetical AI design that I just described, all that stuff you referred to (all the subunits, connectivity, interfaces, etc) could be absolutely perfect and not suffer from any of the troubles you mentioned, but at the same time the system might be completely dead in the water, just because of the design decision regarding the symbols and the p values. If the system was already trapped in a dead end, no amount of cleanliness in the rest of the design would make any difference.

I am not, of course, saying that most AI systems *are* boxed in like this; I am only saying that there is a *possibility* that a design choice can make it impossible to improve a system beyond a certain level. I am saying that situations like that really do happen.

The big problem with my line of argument, so far, is that we all know many examples of scientific or engineering challenges, in the past, that were superficially similar to this, and many of these were overcome! So when you hear me complain about the possibility of getting boxed into a corner like this, you might be tempted to say that scientists are smart, and they often have good intuition, so in cases like this they will eventually get some clues about what kind of low-level design choices would work. At the end of the day, that is how all discoveries are made: people come to understand a system intuitively, and as a result are able to work backwards from the overall behavior of the system to the underlying mechanisms that cause the behavior. If the same story happens in AGI research as happened in all the other branches of science until now, it will only be a matter of time before the right combination of design choices is found.

I probably don't need to labor the rest of the story, because you have heard it before. If there is a brick wall between the overall behavior of the system and the design choices that go into it - if it is impossible to go from 'I want the system to behave like [that]' to 'therefore I need to make [this] choice of design at the low level' - then all the stuff about using intuition to sense the right design would go out the window. This is why the conversation yesterday about what John Conway actually did when he came up with the Game of Life was so important: the documentary evidence suggests that what he and his team did was just blind search. Other people have tried to assert that he used mathematical intuition. The complex systems community would say that in almost all projects like the one Conway undertook, there would be absolutely no choice whatsoever but to do a blind search.
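To make 'blind search' concrete, here is a toy sketch -- in no way a reconstruction of what Conway's group actually did, which was worked by hand -- of what blindly searching a rule space looks like. It tries a handful of Life-like cellular automaton rules, runs each from random soups, and keeps only rules whose populations neither die out nor explode. Every threshold is arbitrary, which is exactly the point: nothing in a rule's definition tells you in advance how it will behave, so you run it and look.

```python
import random

def step(cells, birth, survive):
    """One update of a Life-like cellular automaton.
    cells is a set of (x, y) live coordinates; birth/survive are sets of
    neighbour counts (Conway's Game of Life is birth={3}, survive={2, 3})."""
    counts = {}
    for (x, y) in cells:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    key = (x + dx, y + dy)
                    counts[key] = counts.get(key, 0) + 1
    return {cell for cell, n in counts.items()
            if n in (survive if cell in cells else birth)}

def looks_interesting(birth, survive, steps=40, trials=2):
    """Crude, arbitrary filter: reject rules whose random soups
    quickly die out or blow up."""
    rng = random.Random(0)
    for _ in range(trials):
        cells = {(rng.randrange(12), rng.randrange(12)) for _ in range(40)}
        for _ in range(steps):
            cells = step(cells, birth, survive)
            if not cells or len(cells) > 1500:
                return False
    return True

# The blind search itself: sample rules and keep whatever passes the filter.
rng = random.Random(1)
survivors = []
for _ in range(20):
    birth = set(rng.sample(range(1, 9), rng.randint(1, 2)))
    survive = set(rng.sample(range(1, 9), rng.randint(0, 3)))
    if looks_interesting(birth, survive):
        survivors.append((birth, survive))
```

The filter says nothing about *why* a surviving rule behaves well, and there is no obvious way to invert it -- which is the predicament I am describing for AGI design choices.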

The problem, then, is that if the AGI case does have some resemblance to the project that Conway undertook (and this seems to be a distinct possibility, to say the least), then the *way* in which the complexity is going to get you might be in the first few design decisions you make. The whole show would be over long before you got to the details of the design, so it would not matter how careful you were to keep the subunits, connections and interfaces clean.

Let me know if this distinction makes sense. It is pretty close to the one I tried to make in the paper, when I talked about 'static' and 'dynamic' complexity.

Cheers,



Richard Loosemore




-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/