Hi Nil,
>> Observe that the triple above is an arrow: the tail of the arrow is
>> "some subset of the atomspace", the head of the arrow is "the result
>> of applying PLN rule X", and the shaft of the arrow is given a name:
>> it's "rule X".
>
> Aha, I finally understand what you meant all these years!
>
>> I already pointed out that some of the worlds are "impossible", i.e.
>> have a probability of zero. These can be discarded. But wait, there's
>> more. Suppose that one of the possible worlds contains the statement
>> "John Kennedy is alive" (with a very, very high confidence), while
>> another one contains the statement "John Kennedy is dead" (with a
>> very, very high confidence). What I wish to claim is that, no matter
>> what future PLN inferences might be made, these two worlds will never
>> become confluent.
>
> I don't think that's true. I believe they should at least be somewhat
> confluent, I hope at least; if not, then PLN inference control is
> pathological. Sure, you can't have John Kennedy being half-alive and
> half-dead, but that is not what a probability distribution means.

OK, the reason I focused on having separate, distinct copies of the
atomspace at each step is that you (or some algorithm) get to decide, at
each point, whether you want to merge two atomspaces back together again
into one, or not.

Today, by default, with the way the chainers are designed, the various
different atomspaces are *always* merged back together again (into one
single, global atomspace), and you are inventing things like the
"distributional TV" to control how that merge is done. I am trying to
point out that there is another possibility: one could, if desired,
maintain many distinct atomspaces, and only sometimes merge them.

So, for just a moment, pretend you actually did want to do that. How
could it actually be done? Doing it in the "naive" way is not practical,
but there are several ways of doing it more efficiently.
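To make the fork-and-merge-lazily idea concrete, here is a minimal
sketch in plain Python. Everything here is a hypothetical toy -- the
class, the merge policy, and the atom names are my own illustration,
not the actual OpenCog AtomSpace API or the PLN revision formula.

```python
class MiniAtomSpace:
    """Toy atomspace: maps an atom (a string) to a simple TV,
    a (strength, confidence) pair."""

    def __init__(self, atoms=None):
        self.atoms = dict(atoms or {})

    def fork(self):
        # Each inference step gets its own distinct copy.
        return MiniAtomSpace(self.atoms)

    def merge(self, other):
        # One naive merge policy (an assumption, not PLN's formula):
        # average the strengths, keep the larger confidence.
        merged = MiniAtomSpace(self.atoms)
        for atom, (s, c) in other.atoms.items():
            if atom in merged.atoms:
                s0, c0 = merged.atoms[atom]
                merged.atoms[atom] = ((s0 + s) / 2, max(c0, c))
            else:
                merged.atoms[atom] = (s, c)
        return merged


base = MiniAtomSpace({"JFK-is-a-person": (1.0, 0.99)})

world_a = base.fork()
world_a.atoms["JFK-alive"] = (0.99, 0.95)   # one possible world

world_b = base.fork()
world_b.atoms["JFK-alive"] = (0.01, 0.95)   # a contradictory world

# The two worlds stay distinct until *you* choose to merge them:
combined = world_a.merge(world_b)
print(combined.atoms["JFK-alive"])  # -> (0.5, 0.95)
```

Today's chainers, in effect, call merge() after every single step;
the alternative being proposed is to keep the forks around and merge
only sometimes, or never.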
One way is to create a new TV, which stores the pairs (atomspace-id,
simple-TV). Then, if you wanted to merge two of these "abstract"
atomspaces into one, you could just *erase* the atomspace-id. Just as
easy as that -- erase some info. You could even take two different
(atomspace-id, simple-TV) pairs and mash them into one distributional
TV.

The nice thing about keeping such pairs is that the atomspace-id
encodes the PLN inference chain. If you want to know *how* you arrived
at some simple-TV, you just look at the atomspace-id, and then you can
know how you got there -- the inference chain is recorded, folded into
the id. To create a distributional TV, you simply throw away the
records of the different inference chains, and combine the simple-TVs
into the distributional TV.

I hope this is clear. The above indicates how something like this could
work -- but we can't talk about whether it's a good idea, or how it
might be useful, until we get past that.

> I can't comment on link-grammar since I don't understand it.

Well, it's a lot like PLN -- it is a set of inference rules (called
"disjuncts") that get applied, and each of these inference rules has a
probability associated with it (actually, a log-probability -- the
"cost"). However, instead of always merging the result of each
inference step back into a single global atomspace (called a
"linkage"), one keeps track of multiple linkages (multiple distinct
atomspaces). One keeps going and going, until it is impossible to apply
any further inference rules. At this point, parsing is done. When
parsing is done, one has a few, or dozens, or hundreds of these
"linkages" (aka "atomspaces"). A parse is then the complete contents of
the "atomspace", aka "linkage".
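The (atomspace-id, simple-TV) bookkeeping described above can be
sketched in a few lines. The rule-chain labels and the mashing policy
below are illustrative assumptions of mine, not actual PLN code:

```python
def merge_to_distributional(pairs):
    # Merging the "abstract" atomspaces is just erasing the
    # atomspace-ids; what is left over is a distributional TV.
    return [tv for (_asid, tv) in pairs]


def mash_to_simple(dist_tv):
    # Do "even more violence": mash a distributional TV down to one
    # simple TV. The policy here (confidence-weighted mean strength,
    # max confidence) is one plausible choice, not the PLN formula.
    total_c = sum(c for (_s, c) in dist_tv)
    strength = sum(s * c for (s, c) in dist_tv) / total_c
    return (strength, max(c for (_s, c) in dist_tv))


# Each atomspace-id encodes how we got there -- the inference chain,
# folded into the id (hypothetical rule names):
pairs = [
    ("deduction;modus-ponens", (0.9, 0.8)),
    ("abduction;deduction",    (0.4, 0.6)),
]

dist_tv = merge_to_distributional(pairs)   # ids erased, chains forgotten
simple = mash_to_simple(dist_tv)
print(dist_tv)  # -> [(0.9, 0.8), (0.4, 0.6)]
print(simple)   # strength ~0.686, confidence 0.8
```

The point of keeping the pairs is that the erasure is reversible in
your records but not in the result: once the ids are gone, the
provenance of each simple-TV is gone with them.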
At the end of the parse, the "words" (aka OC Nodes; we actually use
WordNodes after conversion) are connected with "links" (aka OC
EvaluationLinks).

Let me be clear: when I say "it's a lot like PLN", I am NOT hand-waving
or being metaphorical, nor am I trying to be abstract or obtuse. I am
trying to state something very real, very concrete, very central. It
might not be easy to understand; you might have to tilt your head
sideways to get it, but it really is there.

Anyway, moving on -- now, you could, if you wished, mash all of the
"linkages" (atomspaces) back together again into just one -- you could
put a distributional TV on each "link" (EvaluationLink), and mash
everything into one. You could do even more violence, and mash such a
distributional TV down to a simple TV. It might even be a good idea to
do this! No one has actually done so.

Historically, linguists really dislike the
single-global-atomspace-with-probabilistic-TVs idea, and have always
gone for the many-parallel-universes-with-crisp-TVs model of parsing.
This dates back to before Chomsky, before Tesniere, and is rooted in
19th- or 18th-century or earlier concepts of grammar in, for example,
Latin -- scholastic thinking, maybe even back to the 12th century. The
core concepts are already present there; certainly in jurisprudence.

What I am suggesting is that perhaps, by stealing some of these rather
old ideas, and realizing that they also just happen to describe one way
of operating PLN, we could create better inference control algorithms.
You don't have to always work with just a single atomspace. It's OK to
conceptualize having many of them, and to think about what might happen
in each one.

--linas

> Nil

-- 
You received this message because you are subscribed to the Google
Groups "opencog" group. To unsubscribe from this group and stop
receiving emails from it, send an email to
[email protected]. To post to this group, send email
to [email protected].
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit
https://groups.google.com/d/msgid/opencog/CAHrUA34zXom3zF6oCg1aZZuxvfcNFULF%2BDWfJWhY3eVzc9v7Mg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
