Several posts on this coming in quick succession -- you might want to read all of them before replying to any of them.

I've just realized that much of the problem we're all having in this discussion is a failure to recognize how much of a bottom-up design Richard is assuming versus how much of a top-down design many others are assuming.

If I understand Richard correctly, he is assuming that it is necessary to make symbols themselves complex and that each symbol needs his four forces of doom: Memory, Development, Identity, and Non-Linearity.

I have no problem with the first three but am not so sure that I agree with the non-linearity. Certainly, the interactions between symbols are non-linear, but I believe that they are reasonably bounded -- particularly if you use some intelligent design principles (pun intended). For example, nature re-uses virtually everything -- I have to believe that this applies to cognition as well. Similarly, look at software design patterns (as per Gamma et al.). I don't believe at all that the rules governing inter-symbol interactions are necessarily complex. I believe that inter-symbol interaction will eventually be soluble with a reasonable number of rules (and rules generated from those rules). Just like gravity, the behavior generated by the rules WILL be complex, but the rules will not be. And just like gravity, there will be more than enough regularity that we will be able to predict and control the stability of inter-symbol interaction *as long as* we understand the rules well enough.
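To make the gravity analogy concrete, here is a minimal sketch of my own (not anything anyone in this thread has proposed): the *entire* rule set is one inverse-square attraction loop, yet the three-body trajectories it generates are famously complex. All names and constants are arbitrary toy values.

```python
import math

G = 1.0  # gravitational constant in arbitrary toy units

def accelerations(positions, masses):
    """Inverse-square attraction: this one loop is the whole 'rule set'."""
    acc = [[0.0, 0.0] for _ in positions]
    for i, (xi, yi) in enumerate(positions):
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r = math.hypot(dx, dy)
            f = G * masses[j] / (r ** 3)  # inverse-square law in vector form
            acc[i][0] += f * dx
            acc[i][1] += f * dy
    return acc

def step(positions, velocities, masses, dt=0.01):
    """One Euler time step; iterate it and the behavior is complex,
    even though the rule producing it is trivially simple."""
    acc = accelerations(positions, masses)
    for i in range(len(positions)):
        velocities[i][0] += acc[i][0] * dt
        velocities[i][1] += acc[i][1] * dt
        positions[i][0] += velocities[i][0] * dt
        positions[i][1] += velocities[i][1] * dt
    return positions, velocities
```

The point of the sketch: the rules stay small and fixed while the generated behavior does not.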

So . . . . to directly address Richard's points . . . .

To be able to say that you cannot see any reason that COMPELS you to believe that there is a significant amount of complexity in cognition, I think you have to be sure of several things (and this list could be longer, but I'll stop at these for the moment):

OK.  Let's do it  :-)

1) That analogy-making, whatever it is, is definitely not the sort of tangled operators-begetting-new-operators mechanism that I described last time.

Analogy-making is a reasonably simple operation with fairly standard operator TYPES. The trick is in all the variants of matching and translation from a known symbol/concept to a possible new analogous symbol/concept. Again, I think that this will proceed along the lines of starting with a fairly simple set of basic rules and then expanding in a logical/rational manner. The main problem is the sheer size of the world, but if learning didn't proceed this way then I don't believe that humans would be able to learn to function in the world.
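A toy illustration of what I mean by "matching and translation": given relations in a known domain, search for a symbol mapping that preserves relational structure in the new domain. This brute-force matcher is purely my own illustration of a standard operator type, not a proposed cognitive mechanism, and the solar-system/atom example is just the classic textbook case.

```python
from itertools import permutations

def analogy(source_relations, source_symbols, target_relations, target_symbols):
    """Return the source->target symbol mapping that preserves the
    largest number of (relation, arg1, arg2) triples."""
    best, best_score = None, -1
    for perm in permutations(target_symbols, len(source_symbols)):
        mapping = dict(zip(source_symbols, perm))
        score = sum(
            (rel, mapping[a], mapping[b]) in target_relations
            for rel, a, b in source_relations
        )
        if score > best_score:
            best, best_score = mapping, score
    return best

# Known domain: the solar system; new domain: the atom.
solar = {("orbits", "planet", "sun"), ("attracts", "sun", "planet")}
atom = {("orbits", "electron", "nucleus"), ("attracts", "nucleus", "electron")}
print(analogy(solar, ["planet", "sun"], atom, ["electron", "nucleus"]))
# → {'planet': 'electron', 'sun': 'nucleus'}
```

The operator itself is simple; the hard part, as I said, is the sheer number of candidate matches in a world-sized symbol set.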

2) That when symbols are combined in the process of thinking, the combination process definitely does not involve any interactions that are complex. For example, understanding the syntax and semantics of a sentence must on no account resemble the process of folding that allows a string of amino acids to fold up into a protein (unquestionably complex); instead, understanding of a sentence must always proceed in a deterministic way.

I agree with the first part of the sentence and contend that you have cheated with your analogy. Protein folding is not so much complex as it is incalculable with current hardware. We pretty much *know* the forces that are involved, but they are so numerous (and necessarily simultaneous) that we can't do the calculations. I contend that there are far, far fewer things going on in understanding a sentence -- though there *are* more things going on than current systems handle (i.e., we need to broaden the context of current systems before we'll get to human-level sentence understanding -- but it's not as great a stretch as some people believe).
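A back-of-envelope calculation of the "incalculable, not complex" point, along the lines of Levinthal's paradox: the physical rules of folding are known, but the conformation space is too large to enumerate. The figures below (3 conformations per residue, a 100-residue chain, 10^12 samples per second) are standard illustrative assumptions, not measured data.

```python
# Illustrative Levinthal-style arithmetic: small rule set, hopeless search.
conformations_per_residue = 3
residues = 100
samples_per_second = 10 ** 12  # a generously fast hypothetical machine

total_conformations = conformations_per_residue ** residues
seconds_to_enumerate = total_conformations / samples_per_second
years = seconds_to_enumerate / (3600 * 24 * 365)

print(f"{total_conformations:.2e} conformations")  # ~5e47
print(f"~{years:.2e} years to enumerate them all")  # ~1.6e28 years
```

That is "complicated" in the sense of being computationally out of reach, which is a different thing from the rules themselves being complex.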

3) When new symbols are built from old, by whatever learning mechanisms do this, the process cannot involve any interactions that are tangled enough to be complex. Again, for example, this process cannot resemble protein folding in the sense of being a constraint-driven relaxation whereby the system finds an optimal new symbol to capture an abstraction of some existing symbols. This process must be deterministic.

The first sentence is fine. I'm not sure I understand the relevance of the second. Why would a system be required to find "an optimal new symbol"? Humans don't do that. And if the process were a constraint-driven relaxation, why would that be a problem if the constraints were simple enough? I also don't understand the need for the third sentence. Why must the process be deterministic? What if it is just very tightly bounded? And how would it *not* be deterministic in a computer system anyway?
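To show that "constraint-driven relaxation" need not imply either complexity or non-determinism, here is a minimal sketch of my own: each interior cell repeatedly relaxes toward the average of its neighbors (a discrete Laplace constraint). The whole constraint is one line, the process is fully deterministic, and it converges to a tightly bounded fixed point.

```python
def relax(values, iterations=500):
    """Jacobi-style relaxation with fixed endpoints; deterministic
    and bounded, despite being a constraint-driven relaxation."""
    v = list(values)
    for _ in range(iterations):
        new = v[:]
        for i in range(1, len(v) - 1):
            new[i] = 0.5 * (v[i - 1] + v[i + 1])  # the entire constraint
        v = new
    return v

# Converges to the straight line between the fixed endpoints, ~[0, 2, 4, 6, 8].
print(relax([0.0, 9.0, -4.0, 7.0, 8.0]))
```

The toy is mine, not anyone's proposed symbol-building mechanism; its only job is to show that simple constraints give you relaxation without any "taint of complexity."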

4) When reasoning or problem-solving processes occur, the system must choose the appropriate representations in which to express the problem to be solved, and this process of representation-choice must not involve any complex mechanisms (again, imagine the role that a relaxation mechanism like protein-folding might play here ... all those factors that come together to determine the best choice of representation, they MUST not be like a complex relaxation process). We all know that in an AI, the choice of representation can sometimes determine whether or not the system can actually solve the problem.

Looking at this and the previous points, all you are arguing is that I can't say that something isn't complex unless I agree that none of its parts are complex. Yes, I agree. That is true here. But the protein-folding example keeps coming back, and I think it is a horrible one. Protein-folding is incalculable (or complicated), but it is not complex according to your own "size of theory" argument.

5) When reasoning has to be controlled and curtailed by an Inference Control Engine (as it always does, in a real-world AGI), this ICE must not involve complex processes. No kludges are allowed to get the ICE working, no adaptive processes allowed inside the ICE to ensure that it remains effective as the system expands.

Yeah, yeah, yeah, I hate parameter tuning as much as you do.

6) When the grounding mechanisms operate to build symbols in a way that keeps their semantics consistent with the semantics implicit in the architecture of the AGI (remember, a properly grounded system does not have a semantics imposed on it, it must adhere to the semantics that is implicit in the way that the symbols are used), you must be sure that whatever symbols are built, the *implicit* meaning of the symbol-innards is consistent with whatever meaning you decided to assume when you designed the mechanisms that operate on those symbols. So, if you decide to attach a 'probability' parameter to symbols that represent facts, the way that your mechanisms use that p value must be semantically consistent with the implicit semantics coming out of the grounding mechanisms ... which means that the latter must all be non-complex and semantically transparent throughout.

I agree entirely.

You have to be SURE that in each of these areas, the mechanisms that you have got, or that you will find in the future, will all be free of any taint of complexity, to be able to say that "I see no reason why this cannot be done without complexity". This is the degree of certainty that you must have.

Yup.  I'm there.

Now, bear in mind that we do not know how to build most of these mechanisms, and that all attempts to build mechanisms to do these things have fallen woefully short of demonstrating their feasibility in an AGI context.

I've kept it in mind. I've also kept in mind that I think I have a simple, non-complex solution to ethics as well. I believe that the two problems are reasonably analogous.

And yet, in spite of that, you feel confident that all of these things can be done without any danger that complex mechanisms might creep in?

Yup.

How so?

... Because it seems to me that all the best efforts to understand these things are heading in the direction of interpreting these things, in the human cognitive system, as being rather closer to protein-folding than to deterministic programs. So in that context, how would one be so SURE that all of these can be done some other, non-complex way?

That is why I say that the boot is on the other foot.

What do you think?




Richard Loosemore









-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com