A process should never be allowed to muck with the internal workings of another process. One process should be able to ask another process to return speculative results, but it shouldn't be able to say "set alpha to 1.7, beta to 3.4, and gamma to 75."
Except, of course, for a process whose sole purpose is to optimize the second process (through experimentation, etc.). But this is a totally separate case from what I am arguing.
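The encapsulation being argued for here can be sketched in a few lines of code. This is only an illustration, not anything from Novamente or any real blackboard system; every name in it (SpeculativeProcess, set_speculation_level, and the alpha/beta/gamma mapping) is invented for the example.

```python
# Sketch of the argument: a process keeps alpha/beta/gamma private and
# exposes only an abstracted control plus a query interface. Another
# process can ask for speculative results or adjust the advertised
# setting, but it cannot reach in and set the raw parameters.

class SpeculativeProcess:
    """A cognitive process whose tuning knobs are private; callers see
    only an abstracted control surface."""

    def __init__(self):
        # Internal parameters -- deliberately not part of the interface.
        self._alpha, self._beta, self._gamma = 1.0, 1.0, 50.0

    def set_speculation_level(self, level: float) -> None:
        """The advertised control: one high-level setting that the
        process itself maps onto its internal parameters."""
        if not 0.0 <= level <= 1.0:
            raise ValueError("speculation level must be in [0, 1]")
        self._alpha = 1.0 + level            # process-owned mapping,
        self._beta = 1.0 + 3.0 * level       # opaque to callers
        self._gamma = 50.0 + 50.0 * level

    def speculative_results(self, query: str) -> list:
        """What another process IS allowed to ask for."""
        return [f"speculation({query}, alpha={self._alpha:.1f})"]
```

The design point is that the mapping from "speculation level" to internal parameters belongs to the process that owns those parameters, so retuning the internals never breaks any caller.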
----- Original Message -----
From: "Mark Waser" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Wednesday, June 14, 2006 11:41 PM
Subject: Re: [agi] How the Brain Represents Abstract Knowledge
Hi,

The problem is that we are using the word interfere differently. Most of what you are calling interference, I would call interaction and claim as being absolutely necessary. I understand that using the word interfere in this way is logical if you think of wave interference, but it's not what I'm trying to prevent. I am using the word interfere solely in a negative sense (i.e. as the opposite of your "These processes all have to play together nicely as they are acting on the same data at the same time"). My contention is that the knowledge collection should enforce nice play.

> AI architectures in which cognitive processes do not significantly and
> complexly affect each other, via their interactions on a common data store,
> are NOT going to be capable of achieving powerful AGI given limited
> computational resources...

Agreed. That's the entire point to any blackboard system.

> The fact that different cognitive processes "interfere" with each other
> [i.e., significantly and complexly affect each others' activities] is NOT a
> flaw, it is intentional and it is VERY VERY NECESSARY.

Change the word in quotes from interfere to interact and you have my entire agreement.

> If classical blackboard systems have so much modularity that each cognitive
> process can be tuned independently of the others --- then very likely these
> blackboard systems are incapable of giving rise to the complex emergent
> dynamics that characterize intelligence.

A very bold statement and one I disagree with entirely. Why would modularity prevent the emergence of complex behavior? Processes should always be fine and function correctly if the knowledge base is in any acceptable state or acceptable dynamic motion (as defined by the knowledge base).
You can end up with a completely complex system with just two modular processes and a modular knowledge base, but you should never have two processes interfere with each other (in my sense of the word), since the knowledge base should NEVER be outside the realm of acceptable behavior and the processes should be able to cope with any acceptable behavior. You may end up with unexpected behavior like oscillations, etc., but the processes should NEVER be able to mess each other up, and the knowledge base can (and probably should) be designed to damp oscillations above a certain frequency and/or amplitude (and prevent other obviously problematical states). My concern about the Novamente design is that the knowledge base is not designed to protect itself, and one process in the middle of an operation can destructively interfere with another process in the middle of a different operation to the extent of breaking one process or the other (or both).

> And, it is very obvious that the parameters of these two cognitive
> processes DO affect each other.

Nope, it sounds to me like more conflation of design. If concept creation is looking for wacky designs, it should merely be advertising for an inference process whose parameters are set/tuned so that it returns the results that it is looking for. One thing that it should NOT be expected to do is to know another process well enough to be able to muck in the internal workings of that other process. A process should know its properties for a given set of parameter settings well enough to advertise them. The last thing that you should be doing is co-varying parameters all over the map. It's no wonder that you're having stability problems.

> The interactions between the parameter-tunings of different cognitive
> processes are necessary in order to make the artificial mind function
> effectively as a coherent whole.

You're going to have to back this up for me.
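The "knowledge base should protect itself" idea above can be sketched concretely. This is a toy illustration under invented assumptions (the class name, the acceptability test, and the damping rule are all made up); the point is only that the store, not the writing processes, enforces acceptable states and damps large swings.

```python
# Sketch of a self-protecting knowledge store: every write is validated
# against the store's own invariants, so no process can push it outside
# acceptable states, and large oscillations are damped by the store itself.

class ProtectedBlackboard:
    def __init__(self):
        self._store = {}

    def _acceptable(self, key, value):
        # The store, not the writing process, defines acceptability.
        # Toy invariant: numeric values within [-100, 100].
        return isinstance(value, (int, float)) and -100 <= value <= 100

    def write(self, key, value):
        if not self._acceptable(key, value):
            raise ValueError(f"rejected: {key}={value!r} is outside acceptable states")
        # Damp high-amplitude swings: if the requested change is large,
        # move only partway toward the requested value.
        prev = self._store.get(key)
        if prev is not None and abs(value - prev) > 50:
            value = prev + 0.5 * (value - prev)
        self._store[key] = value
        return value

    def read(self, key):
        return self._store.get(key)
```

With this shape, a misbehaving process can at worst have its write rejected or attenuated; it cannot leave the store in a state that breaks some other process mid-operation.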
I contend that different cognitive processes should be black boxes to each other, with easily manipulated settings/controls that are a provided abstraction of the behavior of specific parameter tunings. A process should never be allowed to muck with the internal workings of another process. One process should be able to ask another process to return speculative results, but it shouldn't be able to say "set alpha to 1.7, beta to 3.4, and gamma to 75."

> I am still not sure whether we really disagree, or are just using words
> differently -- or some combination of the two.

I think that it's a combination of the two. I think that we agree that complete interaction and flexibility is necessary. I think that we disagree about the necessity for and feasibility of good modularity and encapsulation.

Mark

----- Original Message -----
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Wednesday, June 14, 2006 6:09 PM
Subject: Re: [agi] How the Brain Represents Abstract Knowledge

Mark,

Hmmm.... In this conversation, we seem to be completely talking past each other and not communicating meaningfully at all... You say that

> In most "blackboard" systems (i.e. those where all processes share the same
> collection of "active knowledge") and, more particularly, in 100% of those
> that are generally considered to be well designed, the individual processes
> are all forced to follow certain standardized rules (enforced by the active
> knowledge collection itself, NOT the processes) so that the processes not
> only don't step on each other, but they CAN'T step on each other. Due to
> these rules, the individual processes themselves can't have ANY parameters
> that when tweaked can possibly cause them to interfere with other
> processes. If your design has this problem (i.e. that the active knowledge
> collection does not adequately protect itself), then you have a sub-optimal
> design.
> "Blackboard" systems have been around for decades longer than AGI systems
> (which are just a very complex sub-class of "blackboard" systems) and there
> is a considerable body of work that pretty definitively shows that any
> design with the behaviors that you are describing CAN be optimized so that
> it doesn't exhibit those negative behaviors without losing any
> functionality.

Apparently you are using the word "interfere" in a radically different sense than I would in this context, because if I interpret the word "interfere" in the way I naturally would, then the above paragraph reads like complete insanity!!

The fact that different cognitive processes "interfere" with each other [i.e., significantly and complexly affect each others' activities] is NOT a flaw, it is intentional and it is VERY VERY NECESSARY. AI architectures in which cognitive processes do not significantly and complexly affect each other, via their interactions on a common data store, are NOT going to be capable of achieving powerful AGI given limited computational resources...

If classical blackboard systems have so much modularity that each cognitive process can be tuned independently of the others --- then very likely these blackboard systems are incapable of giving rise to the complex emergent dynamics that characterize intelligence. This is one of the reasons why Novamente is not a classical blackboard system...

As an example, consider two cognitive processes, acting concurrently on a common data store:

* concept creation [by blending existing concepts via various heuristics]
* probabilistic inference

The parameters of the concept creation process govern which sorts of concepts tend to be created -- how different they tend to be from existing concepts, how general they tend to be, how many created concepts are related to current goals and how many are just "generally interesting" etc.
The parameters of the probabilistic inference process govern what sorts of inferences tend to be drawn -- including such aspects as how speculative the inferences are, how much effort is spent on a few highly complex and abstract inferences versus large masses of simple inferences, etc.

And, it is very obvious that the parameters of these two cognitive processes DO affect each other. The kinds of concepts needed to drive highly abstract inferences are different in various subtle ways than the kinds of concepts needed to drive simple inferences, to give just one example.... And if highly speculative inferences are to be focused on, then more whacky and speculative "conceptual blends" are going to be more valuable... Etc., etc., etc.

The interactions between the parameter-tunings of different cognitive processes are necessary in order to make the artificial mind function effectively as a coherent whole. If you don't agree with this, then indeed we have fundamentally different intuitions about AGI design. This is fine, but I want to be clear that the complex interactions between Novamente cognitive processes are not an accident, nor a compromise made for performance reasons -- they are a decision made because I believe this kind of interaction is a critical aspect of general intelligence given limited computational resources.

I am still not sure whether we really disagree, or are just using words differently -- or some combination of the two. Talking about this kind of thing can be very, very hard, due in large part to the lack of a commonly-understood, precisely-defined vocabulary.

-- Ben G

-------
To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
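[Editor's note: Ben's concept-creation/inference example admits a sketch that arguably satisfies both positions -- the two processes' settings co-vary, but only through advertised controls, never by reaching into each other's internals. Every name below (Inference, ConceptCreation, speculation, wackiness) is invented for illustration; this is not Novamente code.]

```python
# Two toy processes whose tunings are coupled, as in Ben's example, but
# where the coupling runs through each process's advertised setting
# rather than its private internals (Mark's constraint).

class Inference:
    def __init__(self):
        self.speculation = 0.2        # advertised, abstracted setting

    def infer(self, concept):
        tag = "speculative" if self.speculation > 0.5 else "simple"
        return f"{tag} inference on {concept}"

class ConceptCreation:
    def __init__(self, wackiness=0.0):
        self.wackiness = wackiness    # advertised, abstracted setting

    def blend(self, a, b):
        sep = "~" if self.wackiness > 0.5 else "+"
        return f"{a}{sep}{b}"

# The coupling Ben describes: wackier blends call for more speculative
# inference -- expressed here by co-setting the two advertised knobs.
creation = ConceptCreation(wackiness=0.8)
inference = Inference()
inference.speculation = creation.wackiness  # coordination via the interface
print(inference.infer(creation.blend("cat", "cloud")))
# → speculative inference on cat~cloud
```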
