Actually, it consists of two completely different networks: one close to a neural net, and the other a regular Bayesian net. The first stores/relates patterns; the second simply does inference using the conditional probability matrices.
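To make the distinction concrete, here is a minimal sketch (my own illustration, not code from HTM or any implementation discussed here) of the kind of inference the second network performs: updating a belief over discrete hidden states from a conditional probability matrix via Bayes' rule. The state/observation names and sizes are made up for the example.

```python
def posterior(prior, cond_matrix, observation):
    """Return P(state | observation) by Bayes' rule.

    prior:       list of P(state_i)
    cond_matrix: cond_matrix[i][j] = P(obs_j | state_i)
    observation: index j of the observed symbol
    """
    # Unnormalized posterior: prior times likelihood of the observation.
    unnorm = [p * row[observation] for p, row in zip(prior, cond_matrix)]
    z = sum(unnorm)  # normalizing constant P(observation)
    return [u / z for u in unnorm]

# Toy example: two hidden states, two possible observations.
prior = [0.5, 0.5]
cpm = [[0.9, 0.1],   # state 0 mostly emits observation 0
       [0.2, 0.8]]   # state 1 mostly emits observation 1
print(posterior(prior, cpm, 0))  # seeing obs 0 shifts belief toward state 0
```

In a hierarchy each node would repeat this step, passing its belief up or down, but the single-node update above is the basic operation the conditional probability matrices support.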

On 10/28/06, Pei Wang <[EMAIL PROTECTED]> wrote:
Sounds interesting. I'm looking forward to reading your paper.

Yes, people sometimes take the HTM model to be similar to a neural
net, though it is actually much closer to a Bayesian net.

Pei

On 10/28/06, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
> Thank you. I've studied the paper and the tested 'improvements'. The
> experiments in the paper are certainly useful, and are of the
> parameter-testing kind, without modifying the actual model. My experiments,
> however, are somewhat different and you could say they explore a broader
> field of modifications for a more complete theory, with better
> multidimensional invariance. I also put it in a neural net perspective which
> Hawkins et al. may disagree with. I will put out a paper some time before
> February 2007.
>
> On 10/26/06, Pei Wang <[EMAIL PROTECTED]> wrote:
> >
> > Hi,
> >
> > You may find this work relevant:
> > http://www.phillylac.org/prediction/
> >
> > Pei
> >
> > On 10/26/06, Kingma, D.P. <[EMAIL PROTECTED] > wrote:
> > > I'm a Dutch student currently situated in Rome for six months. Due to my
> > > recent interest in AGI I have initiated a small research project into
> > > HTM theory (J. Hawkins / D. George). HTM learning is (in my eyes) an ANN,
> > > similar in function to Hebbian learning, just particularly more efficient
> > > at dealing with hierarchically structured, n-dimensional input.
> > >
> > >  At the moment, I'm creating an HTM implementation (in Java) to test my
> > > hypotheses and theories. My focus lies in:
> > >   - Description of HTM theory as a special kind of ANN and its relation
> > > to Hebbian learning.
> > >   - Tests of improvements:
> > >     - A more dynamic, scale/orientation-invariant pattern matching.
> > >     - A more efficient way of matching patterns to 'letters' (of the
> > > layer's alphabet).
> > >   - Depending on the time I have, I will do some field tests with vision
> > > (combining with SIFT?) and language grounding.
> > >
> > >  Currently I have a running java implementation for experimentation
> > > purposes.
> > >
> > >  Academic knowledge in this field is a bit scarce here at La Sapienza.
> > > Therefore, my question to you guys is the following:
> > >   - Does anyone have nice pointers to related Hebbian-learning theories?
> > >   - Does anyone use, or consider, HTMs or similar in their AGI design?
> > >   - Any other comments, warnings or wisdom to share?
> > >
> > >  Durk Kingma
> > >  ________________________________
> > >  This list is sponsored by AGIRI: http://www.agiri.org/email
> > > To unsubscribe or change your options, please go to:
> > > http://v2.listbox.com/member/[EMAIL PROTECTED]
> >
> >
>

