> Are the current configurations of the initial state of Novamente
> proprietary? If you don't mind indulging my curiosity, I'd be
> curious to see how you define large scale goals of this sort
> within the context of Novamente.
>
> -Brad
Each goal is its own story. Let's talk about one example: novelty. How does the system know when it has identified something novel? (Note that this novelty identification has to be, at least in part, a very rapid, highly automated process; the system can't spend a large fraction of its time assessing each of its perceptions, thoughts, and actions for potential novelty!)

The answer differs across the different kinds of knowledge representation used in Novamente:

* Novelty is recognized when the truth value of an Atom (a node or a link) changes rapidly.

* Novelty is recognized when a new PredicateNode (representing an observed pattern) is created, and the system assesses that, prior to analyzing the particular data in which the PredicateNode was just recognized, it would have assigned that PredicateNode a much lower truth value. (That is: the system has seen a pattern it did not expect to see.)

* Novelty is recognized when a "map" (a set of Atoms that share a coherent activity pattern) forms that is dissimilar to any previously existing map.

It should be noted that the rules for recognizing novelty are similar to the rules for recognizing "learning". However, novelty focuses on the suddenness of changes in truth value, whereas learning focuses on the total amount of change in truth value. The two are conceptually similar but quantitatively different.

[Yeah, I know my answer is a bit technical and uses a bunch of insular terminology. The terminology is explained in the Novamente essay on the AGIRI website. Within a couple of weeks we're going to have a much nicer review article on Novamente up there, BTW.]

-- Ben G
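The suddenness-vs-total-change distinction between novelty and learning can be sketched in a few lines of code. This is a toy illustration only, not Novamente's actual machinery: the Atom class, the scalar truth values, and the two scoring functions are all hypothetical stand-ins.

```python
# Hypothetical sketch: novelty scores the *rate* of truth-value change,
# learning scores the *total* change. All names here are illustrative.

class Atom:
    """Toy stand-in for a Novamente Atom with a scalar truth value."""
    def __init__(self, truth=0.5):
        self.truth = truth
        self.history = [truth]  # truth value after each revision

    def update(self, new_truth):
        self.truth = new_truth
        self.history.append(new_truth)

def novelty(atom, window=2):
    """Suddenness: largest single-step truth-value change in a recent window."""
    recent = atom.history[-window - 1:]
    return max((abs(b - a) for a, b in zip(recent, recent[1:])), default=0.0)

def learning(atom):
    """Total amount of change: net truth-value drift since the first estimate."""
    return abs(atom.history[-1] - atom.history[0])

a = Atom(0.5)
for t in [0.52, 0.54, 0.9]:  # slow drift, then a sudden jump
    a.update(t)
print(novelty(a))   # large: the final step jumped by about 0.36
print(learning(a))  # net drift of about 0.4, from 0.5 to 0.9
```

On this toy signal the novelty score is dominated by the single sudden jump, while the learning score reflects the accumulated drift; a slowly but steadily changing Atom would score high on learning and low on novelty.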
