Let me preface this by noting that I am really psyched we are having this sort of discussion ;) ...
> - Let's imagine I know 10000 people, and I'm asked if I know a family in
> which the difference in ages of two children is larger than 15 years (a
> wife is taller than a husband by more than 20 cm, or whatever you want).
> Such a dataset and query are essentially the same as in our example with
> video frames and bounding boxes, and PM behavior will be the same. Should
> information about these people be stored in long-term memory? Sure.
> Should we use the space-server to deal with this query? I guess no. By
> what means should this query be answered?

So this is an arithmetic query rather than a spatial query -- but the two
cases are similar in that both arithmetic operations and spatial operations
are "special domains" with their own algebras, and by using those algebras
one can answer queries in those domains more efficiently than one can by
generic means...

For questions regarding numerical comparison, we have GreaterThanLink. So
to efficiently handle queries like those you're mentioning, I would want
to use the PLN backward chainer rather than just the PM, and have the
backward chainer perhaps connected to some computer-algebra engine as one
option to use when encountering a GreaterThanLink...

Or one could tweak the PM to use the backward chainer only when
encountering a GreaterThanLink, and do plain vanilla pattern matching
otherwise...

This is a pragmatic solution but obviously doesn't solve the underlying
conceptual AGI-related problem. It raises the question "OK, but how would
something analogous to a computer-algebra engine be learned via
experience?" ...

The "engine" part of a computer-algebra engine can just be the URE. So
then the question is how the rules of algebra can be learned from
experience. But this is straightforward inductive reasoning from a bunch
of examples... QED ;)

> - I'm asked, "Do you remember a scene in any film, in which a planet was
> inside a luggage locker"? My brain says: "Yeah, I remember!".
> But if I'm asked: "Do you remember a scene in a film, in which there was
> a flower pot more than 30 cm above a bed?"... Huh, maybe, I need to
> think... My brain will not go into a loop for 10 billion years. It
> simply will not evaluate all possibilities. It will search only among
> already evaluated subgraphs. And if it fails, it will try to come up
> with a special search strategy. So, the unconscious process of recalling
> is restricted to matching existing patterns. I remember about the planet
> because my brain evaluated the Inside predicate when I saw the film,
> because it found this interesting or surprising. It doesn't evaluate
> this predicate while answering the query. Do we need a space-server for
> this? Maybe. Do we need PM for this? Maybe, but with restricted
> permissions.

Hey, this is incorrect as a statement about how human memory works...
Human memory is very *constructive*. Rather than searching among stored
memories, as in a database search or whatever, the "pattern matching" done
when a human mind searches its memory is a matter of inventing memories
that match the pattern being searched for. This is very well documented
and is why stuff like eyewitness evidence of crimes is so fucked up...

What human memory search does is way more like PLN abductive inference
based on the cues of stored memories (existing patterns)...

> - NP-complete problems... Yeah, there are a lot of them. But we don't
> use unconscious-level algorithms with exponential complexity to solve
> them. We either slowly solve them on the conscious level, or slowly
> learn to solve them approximately but fast on the unconscious level. My
> claim is trivial: we just should avoid the possibility that some
> question hangs our AGI system forever.

One jewel of wisdom from Pei Wang is: almost all algorithms used by
human-like minds have exponential complexity in the worst case...
The point is that they have tractable run time in the average case,
relative to the probability distribution over problems that is relevant to
the organism/mind in question, for problem sizes relevant to that
organism/mind, etc.

> You can say: just don't use PM for such queries. Well, OK... But I guess
> PM is a powerful enough tool to cope with them in a reasonable time,
> either being provided with appropriate callbacks represented in Atomese,
> or by some other means. Right now, we don't propose anything. We just
> ask: is there any sense in studying PM on the example of this problem?

My gut reaction is that it's perhaps often better to think about the PLN
backward chainer (which uses the URE, which uses the PM)...

I.e., often, instead of thinking about custom callbacks to the PM, one can
think about custom domain-specific inference rules to use within PLN...
But sometimes one may want to think about custom callbacks as well; I
wouldn't want to rule it out totally...

-- 
ben
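P.S. To make the "special domains with their own algebras" point concrete,
here is a minimal Python sketch of the age-difference query (toy data and
hypothetical function names, not OpenCog's actual API). The generic
PM-style search tests an opaque predicate over every pair of children,
while the algebra-aware version exploits the structure of ">" and "-" to
do one linear pass per family:

```python
# Toy dataset: family name -> children's ages (purely illustrative).
families = {
    "Smith": [3, 7, 12],
    "Jones": [1, 5, 21],   # 21 - 1 = 20 > 15
    "Brown": [8, 10],
}

def generic_match(families, threshold):
    """Generic pattern-matcher style: enumerate every pair of children
    and test an opaque predicate -- O(k^2) pairs per family."""
    for name, ages in families.items():
        for i in range(len(ages)):
            for j in range(i + 1, len(ages)):
                if abs(ages[i] - ages[j]) > threshold:
                    return name
    return None

def algebraic_match(families, threshold):
    """Using the algebra of '>' and '-': the largest age gap within a
    family is max(ages) - min(ages), so one pass per family suffices."""
    for name, ages in families.items():
        if max(ages) - min(ages) > threshold:
            return name
    return None

print(generic_match(families, 15))    # Jones
print(algebraic_match(families, 15))  # Jones
```

The analogous move for a GreaterThanLink would be for the backward chainer
(or a computer-algebra engine behind it) to apply such domain rules,
rather than the PM blindly enumerating bindings.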
