Mark Waser wrote:
> Richard Loosemore wrote:
> > I cannot see any reason that COMPELS me to believe that such a clean
> > version [i.e. a non-complex cognitive system] can be built.
>
> I cannot see any reason that COMPELS me to believe that it cannot.
> Humans look to me to be pretty complicated but not all that complex
> (using your definitions).

I just thought of a good way to state the reasons why we should strongly suspect that complexity is going to make itself felt in an intelligent system.

To be able to say that you cannot see any reason that COMPELS you to believe that there is a significant amount of complexity in cognition, I think you have to be sure of several things (and this list could be longer, but I'll stop at these for the moment):

1) That analogy-making, whatever it is, is definitely not the sort of tangled operators-begetting-new-operators mechanism that I described last time.

2) That when symbols are combined in the process of thinking, the combination process definitely does not involve any interactions that are complex. For example, understanding the syntax and semantics of a sentence must on no account resemble the process of folding that allows a string of amino acids to fold up into a protein (unquestionably complex); instead, understanding a sentence must always proceed in a deterministic way.

3) When new symbols are built from old, by whatever learning mechanisms do this, the process cannot involve any interactions that are tangled enough to be complex. Again, for example, this process cannot resemble protein folding in the sense of being a constraint-driven relaxation whereby the system finds an optimal new symbol to capture an abstraction of some existing symbols. This process must be deterministic.

4) When reasoning or problem-solving processes occur, the system must choose the appropriate representations in which to express the problem to be solved, and this process of representation-choice must not involve any complex mechanisms (again, imagine the role that a relaxation mechanism like protein folding might play here: all those factors that come together to determine the best choice of representation MUST not behave like a complex relaxation process). We all know that in an AI, the choice of representation can sometimes determine whether or not the system can actually solve the problem.

5) When reasoning has to be controlled and curtailed by an Inference Control Engine (as it always does, in a real-world AGI), this ICE must not involve complex processes. No kludges are allowed to get the ICE working, and no adaptive processes are allowed inside the ICE to ensure that it remains effective as the system expands.

6) When the grounding mechanisms operate to build symbols in a way that keeps their semantics consistent with the semantics implicit in the architecture of the AGI (remember, a properly grounded system does not have a semantics imposed on it; it must adhere to the semantics implicit in the way the symbols are used), you must be sure that whatever symbols are built, the *implicit* meaning of the symbol-innards is consistent with whatever meaning you decided to assume when you designed the mechanisms that operate on those symbols. So, if you decide to attach a 'probability' parameter to symbols that represent facts, the way your mechanisms use that p value must be semantically consistent with the implicit semantics coming out of the grounding mechanisms ... which means that the latter must all be non-complex and semantically transparent throughout.
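Since several of the points above lean on the protein-folding analogy, here is a minimal toy sketch (in Python; every name, weight, and number in it is hypothetical, chosen purely for illustration, and it is a sketch of the *style* of mechanism, not anyone's proposed AGI design) of what a constraint-driven relaxation of the kind mentioned in points 2-4 looks like: a candidate symbol is a vector of binary features, pairwise constraints among the features define an energy, and the system settles by flipping features until no single flip lowers the energy.

```python
import random

random.seed(0)

N = 8  # number of binary features in the candidate symbol (arbitrary)

# Random symmetric constraint weights: w[i][j] > 0 means features i and j
# "want" to agree; w[i][j] < 0 means they want to differ.
w = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        w[i][j] = w[j][i] = random.uniform(-1, 1)

def energy(state):
    """Total constraint violation: lower means a better-settled symbol."""
    return -sum(w[i][j] * state[i] * state[j]
                for i in range(N) for j in range(i + 1, N))

def relax(state):
    """Greedy single-flip relaxation until no flip lowers the energy."""
    improved = True
    while improved:
        improved = False
        for i in range(N):
            flipped = state[:]
            flipped[i] = -flipped[i]
            if energy(flipped) < energy(state):
                state = flipped
                improved = True
    return state

start = [random.choice([-1, 1]) for _ in range(N)]
settled = relax(start)
print("start energy:  ", round(energy(start), 3))
print("settled energy:", round(energy(settled), 3))
```

The point of the sketch: the settled state is determined globally, by the whole tangle of pairwise constraints acting at once, rather than by any local rule you can read off one feature at a time. That is exactly the kind of interaction the six requirements above would have to rule out.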

To be able to say "I see no reason why this cannot be done without complexity", you have to be SURE that in each of these areas, the mechanisms that you have now, or that you will find in the future, will all be free of any taint of complexity. That is the degree of certainty you must have.

Now, bear in mind that we do not know how to build most of these mechanisms, and that all attempts to build mechanisms to do these things have fallen woefully short of demonstrating their feasibility in an AGI context.

And yet, in spite of that, you feel confident that all of these things can be done without any danger that complex mechanisms might creep in?

How so?

... Because it seems to me that all the best efforts to understand these mechanisms are heading toward interpreting them, in the human cognitive system, as being rather closer to protein folding than to deterministic programs. So in that context, how can one be so SURE that all of these things can be done some other, non-complex way?

That is why I say that the boot is on the other foot.

What do you think?




Richard Loosemore
