>
>
> So this is an arithmetic query, rather than a spatial query -- but the two
> cases
> are similar in that both arithmetic operations and spatial operations are
> "special domains" with their own algebras, and by using those algebras one
> can answer queries in those domains more efficiently than one can do by
> generic
> means...
>

Are you going to manually implement a special algorithm for each domain?


>
> So to efficiently handle queries like those you're mentioning, I would
> want to
> use the PLN backward chainer rather than just the PM, and have the backward
> chainer perhaps connected to some computer-algebra engine as one option to
> use
> when encountering a GreaterThanLink ...
>

What rules for the BC do you have in mind for this case? Let's try them and
see if the solution will be O(N).
Again, you're just saying: don't use the PM with GreaterThanLinks. But
then, for what reason is their support provided in the PM at all?
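To make the suggestion concrete, here is a minimal sketch (plain Python, not the real OpenCog API; `solve_goal` and the tuple goal encoding are purely illustrative) of a backward-chainer step that short-circuits a GreaterThan goal to arithmetic instead of searching for stored links:

```python
# Hypothetical sketch, NOT the actual URE/BC interface: when the
# chainer encounters a GreaterThan goal whose arguments are grounded
# to numbers, it evaluates them arithmetically in O(1) rather than
# pattern-searching the atomspace for an explicit GreaterThanLink.

def solve_goal(goal, bindings):
    """Resolve one goal; GreaterThan goals delegate to arithmetic."""
    kind, *args = goal
    if kind == "GreaterThan":
        x, y = (bindings.get(a, a) for a in args)
        return float(x) > float(y)   # domain-specific evaluation
    # ... other goal kinds would fall through to generic chaining ...
    raise NotImplementedError(kind)

print(solve_goal(("GreaterThan", "$x", "5"), {"$x": "7"}))  # True
```

The point is only that a domain-specific rule replaces a search with an evaluation; what the actual PLN rule set for this would look like is exactly the open question here.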


>
> Or one could tweak the PM to use the backward chainer only when
> encountering
> a GreaterThanLink, and just do plain vanilla pattern matching otherwise...
>

The PM doesn't need to know the algebra to deal with this query
efficiently. It just needs to avoid evaluating pairwise relations for every
pair of objects, and instead evaluate them only for objects belonging to
the same group.
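A minimal sketch of that point, using plain Python dicts in place of Atoms (the field names and grouping key are illustrative assumptions): bucket the N objects by group in one pass, then evaluate the pairwise relation only within each bucket, rather than over all N^2 pairs.

```python
from collections import defaultdict
from itertools import combinations

def greater_pairs_grouped(objects):
    """Find (a, b) with a['val'] > b['val'] among objects sharing a
    group, comparing pairs only inside each group instead of all N^2."""
    groups = defaultdict(list)           # one O(N) bucketing pass
    for obj in objects:
        groups[obj["group"]].append(obj)
    result = []
    for members in groups.values():
        for a, b in combinations(members, 2):  # pairs within a group only
            if a["val"] > b["val"]:
                result.append((a["name"], b["name"]))
            elif b["val"] > a["val"]:
                result.append((b["name"], a["name"]))
    return result

objs = [{"name": "a", "group": 1, "val": 3},
        {"name": "b", "group": 1, "val": 1},
        {"name": "c", "group": 2, "val": 5}]
print(greater_pairs_grouped(objs))  # [('a', 'b')]
```

If the groups are of bounded size k, this is O(N·k) rather than O(N^2), with no algebra involved -- just not generating the cross-group pairs in the first place.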


>
> It begs the question "OK but how would
> something analogous to a computer-algebra engine be learned via
> experience" ....
>

Yes, this is also the right question, although the problem under
consideration is not necessarily related to it.


> Human memory is very *constructive*.   Rather than searching among stored
> memories, as in a database search or whatever, the "pattern matching" done
> when a human mind searches its memory is a matter of inventing memories
> that match the pattern being searched for.


Yeah, I know. I just tried to imagine OpenCog in place of the human mind.
So, does OpenCog have anything to perform memory querying? To me, the PM
was a natural candidate. But we can consider the backward chainer.


> What human memory search does is way more like PLN abductive inference
> based on the cues of stored memories (existing patterns) ...
>

So you're saying we shouldn't use the PM to match data patterns, but should
use it to match patterns describing some general rules?


>
> One jewel of wisdom from Pei Wang is: Almost all algorithms used by
> human-like
> minds have exponential complexity in worst case....
>

I doubt this is true for unconscious algorithms. Or, at least, they are
anytime algorithms: they will fail rather than run for more than a certain
time.


>
> My gut reaction is it's perhaps often better to think about PLN
> backward chainer (which uses the
> URE which uses the PM).....   I.e. often, instead of thinking about
> custom callbacks to the PM,
> one can think about custom domain-specific inference rules to use within
> PLN...
>

Maybe. So, what rules will work in this case?

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/CABpRrhyVVUX4_ohu6%2BjEMu1h6Wk%3DKj-7yhYxV5V3gQ0nk3Nucg%40mail.gmail.com.