> Yes, for automatic term mapping the Cognitive Cyc sensor will read a
> symbolic proposition or the equivalent in some structured form.  For
> information-rich sensors I would follow the pattern given by James Albus:
> Grouping, Focus Attention, and Search.  He uses sliding windows to detect
> regions of interest at the lowest resolution, and so forth.  His (NIST)
> ideas have been demonstrated with autonomous vehicle navigation.
>
> And thanks again for the tip you gave me - to consider Albus.
>
> -Steve


Albus's work is really outstanding.  His architecture is part of the
inspiration for Novamente's perception-action architecture, as well.

But even with the sliding window approach, I still think you're gonna get a
LOT of data at the lower levels of the hierarchy, which will lead to huge
Bayes nets (in a Bayes net approach), which will give rise to a couple
scaling problems with typical Bayes net algorithms:

a) the scaling problem with Gibbs sampling, that I mentioned before

b) a scaling problem with the greedy algorithm typically used for Bayes net
structure learning

I think both these problems are surmountable -- a) by the approach in the
paper I referenced in an earlier e-mail, and b) by trickier methods that
we're currently experimenting with.
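To make problem a) concrete, here is a minimal sketch of Gibbs sampling on a toy three-node chain net (A -> B -> C, all binary). The structure and CPT numbers are made up for illustration; the point is that each sweep resamples every non-evidence variable from its Markov blanket, so the per-sweep cost grows with the number of nodes, and the number of sweeps needed for mixing tends to grow too as the net gets large and tightly coupled.

```python
import random

# Toy net A -> B -> C with assumed (illustrative) CPTs:
P_A = 0.3                  # P(A=1)
P_B = {0: 0.2, 1: 0.8}     # P(B=1 | A=a)
P_C = {0: 0.1, 1: 0.9}     # P(C=1 | B=b)

def gibbs_estimate(n_sweeps, c_evidence=1, seed=0):
    """Estimate P(A=1 | C=c_evidence) by Gibbs sampling."""
    rng = random.Random(seed)
    a, b = 0, 0
    count_a1 = 0
    for _ in range(n_sweeps):
        # Resample A from its Markov blanket {B}:
        #   P(A=1 | B=b) proportional to P(A=1) * P(B=b | A=1)
        w1 = P_A * (P_B[1] if b == 1 else 1 - P_B[1])
        w0 = (1 - P_A) * (P_B[0] if b == 1 else 1 - P_B[0])
        a = 1 if rng.random() < w1 / (w1 + w0) else 0
        # Resample B from its Markov blanket {A, C}:
        #   P(B=1 | A=a, C=c) proportional to P(B=1 | A=a) * P(C=c | B=1)
        pc1 = P_C[1] if c_evidence == 1 else 1 - P_C[1]
        pc0 = P_C[0] if c_evidence == 1 else 1 - P_C[0]
        w1 = P_B[a] * pc1
        w0 = (1 - P_B[a]) * pc0
        b = 1 if rng.random() < w1 / (w1 + w0) else 0
        count_a1 += a
    return count_a1 / n_sweeps

est = gibbs_estimate(20000)
```

With these CPTs the exact answer is P(A=1 | C=1) = 0.222 / 0.404, about 0.55, and the sampler gets close after a few thousand sweeps. On a three-node net this is trivial; the scaling issue Ben is pointing at appears when the same sweep has to visit millions of nodes, many times over, before the chain mixes.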

Anyway, if you do run into scaling issues with Bayes nets at some point in
the future, let me know because I may have some concrete insights to offer.
I don't have a lot of practical Bayes net experience so far, but we're just
now coding/testing Bayes net learning for use inside our optimization &
procedure learning modules.

I should note that I don't think Bayes nets are suitable to serve as the
primary inference engine of an AGI.  Some of their problems are solved by
moving to "loopy" Bayes nets....

But their biggest problem is that they're not really workable for huge
problems, for efficiency reasons AND because it's not all that meaningful in
practice to define a global pdf applicable to all of a system's experience.

So you need to take an approach like you've done in Cyc, and use small
localized Bayes nets.  But then you have the tricky probabilistic inference
problem of combining knowledge that belongs to different localized Bayes net
models....
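One minimal way to fuse what two localized models say about a shared variable is a naive-Bayes-style odds-product rule. This is a sketch, not the method Cyc or Novamente actually uses: the function name is made up, and it bakes in the often-unrealistic assumption that the two local models' evidence is conditionally independent given the variable -- which is exactly why the general combination problem is tricky.

```python
def combine_local_posteriors(prior, posteriors):
    """Fuse several local models' posteriors P(X=1 | evidence_i)
    for a shared binary variable X, given a common prior P(X=1).

    Assumes (naively) that each model's evidence is conditionally
    independent given X, so the likelihood-ratio contributions
    multiply in odds space.
    """
    prior_odds = prior / (1 - prior)
    odds = prior_odds
    for p in posteriors:
        # Each model contributes its posterior-to-prior odds ratio.
        odds *= (p / (1 - p)) / prior_odds
    return odds / (1 + odds)

# Two local nets that each say P(X=1 | their evidence) = 0.8,
# starting from a flat prior, combine to roughly 0.94:
fused = combine_local_posteriors(0.5, [0.8, 0.8])
```

A single model's posterior passes through unchanged (as it should), and a model that merely restates the prior contributes nothing. When the independence assumption fails -- the common case when local nets share variables or evidence -- this rule double-counts, which is one face of the combination problem Ben describes.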

We avoid this in Novamente by using a different sort of Bayesian inference
method than Bayes nets as our core inference method (probabilistic term
logic) ... but using Bayes nets for specialized purposes, and integrating
the output of Bayes nets into our non-Bayes-net (but still probabilistic)
knowledge representation.

-- Ben
