Fergal,

I am confident that the degradation you are worried about will be minor and
probably not even noticeable.  In an earlier post I mentioned that we have
run the SP such that each column learned more than one coincidence.  It is
easy to do this if you adjust the ratio of increment to decrement of the
synapse permanences.  We found the system is robust to these changes.  The
reason is that the combination of bits is what matters.  Individual bits can
have multiple meanings as long as the combination of bits is unique.  It is hard
to describe in an email.  Anyway, this is exactly what will happen in the
encoder scheme I am proposing.  
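In code, the knob Jeff is describing might look something like the sketch below. The names and constants are illustrative, not the actual NuPIC SP code; the point is only that a large increment-to-decrement ratio lets a column keep synapses to several distinct input patterns.

```python
def update_permanences(perms, active_inputs, inc=0.05, dec=0.01):
    """Reinforce synapses to active input bits, decay the rest.

    Raising inc relative to dec lets a column retain synapses to
    several distinct input patterns, i.e. learn more than one
    coincidence.  (Illustrative sketch, not the real SP code.)
    """
    return [min(1.0, p + inc) if i in active_inputs else max(0.0, p - dec)
            for i, p in enumerate(perms)]
```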

 

I remember one test we did.  It was for the TP.  We changed learning rates
of synapses on dendrite segments such that each dendrite segment learned
more than one pattern.  We called the potential errors "mix and match"
errors because what we were worried about was a coincidence getting a match
using some bits from one pattern and other bits from other patterns.  We ran
various sequence memory tests as we continued to "overload" each dendritic
segment.  On the tests we were running we didn't see any degradation until
segments represented 20 different coincidences!  That is, each coincidence
detector would respond to 20 completely different patterns and the TP
didn't get confused.  It depends on what tests you run, but these tests were
something like 10,000 sequences of some length, and we wanted to see when the
TP started making incorrect predictions due to overloading and mix and match
errors.
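The intuition behind that robustness can be sketched in a few lines. This is my own toy reconstruction (random 40-bit patterns in a 2048-bit space, a match threshold of 30), not the actual test Jeff describes: a segment storing the union of 20 patterns still matches each stored pattern perfectly, while a random pattern's overlap stays far below threshold.

```python
import random

def matches(segment_bits, input_bits, threshold=30):
    """A segment 'matches' when enough of its synapses see active bits."""
    return len(segment_bits & input_bits) >= threshold

random.seed(0)
universe = list(range(2048))

# Overload one segment with 20 completely different patterns.
patterns = [frozenset(random.sample(universe, 40)) for _ in range(20)]
segment = frozenset().union(*patterns)

# Every stored pattern still matches (all its bits are on the segment),
# while a fresh random pattern's overlap with the union stays far below
# threshold, so "mix and match" false positives remain rare at 20x load.
assert all(matches(segment, p) for p in patterns)
noise = frozenset(random.sample(universe, 40))
assert not matches(segment, noise)
```

The expected chance overlap here is roughly 40 * |segment| / 2048, around 13 bits, well under the threshold of 30 — which is why degradation only shows up once segments are heavily overloaded.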

 

We also tested this with the SP, although I don't remember the exact
results.  Obviously, there is no magic and there will be degradation, but it
will be gradual and virtually unnoticeable until the SP bits are heavily
overloaded.

 

I hope I am not misunderstanding your concern.

 

I can talk to you about this at the hackathon tomorrow.

 

Jeff

 

 

From: nupic [mailto:[email protected]] On Behalf Of Fergal
Byrn o
Sent: Friday, November 01, 2013 2:57 PM
To: NuPIC general mailing list.
Subject: Re: [nupic-dev] new encoder idea

 

Hi Jeff,

 

I'm concerned that this scheme will degrade the semantics of the encoding.
With all the current scalar encoders, a given bit will be on every time the
input value is within a certain range. Each bit also only has this single
semantic meaning. Thirdly, most of the bits should stay on if you adjust the
input by a small amount.
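Those three properties are easy to state in code. Below is a minimal fixed scalar encoder sketch (the parameters are illustrative, not nupic's exact ScalarEncoder): each bit is on for one contiguous range of inputs, each bit has only that one meaning, and nearby values share most of their on-bits.

```python
def encode_scalar(value, min_val=0.0, max_val=100.0, n=128, w=21):
    """Minimal fixed scalar encoder sketch (parameters illustrative).

    Each output bit is on for one contiguous range of inputs, a bit
    has only that single meaning, and nearby input values share most
    of their w on-bits.
    """
    value = max(min_val, min(max_val, value))   # clip out-of-range input
    start = int(round((value - min_val) / (max_val - min_val) * (n - w)))
    return {start + i for i in range(w)}        # w contiguous on-bits
```

Note the clipping line: every out-of-range value collapses onto the topmost (or bottommost) bucket, which is the resolution sacrifice of the fixed encoder.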

 

With the standard fixed encoder, these semantics never change. The downside
is that the top few bits mean "nearly at max, plus all the out-of-range
values," so you sacrifice some resolution above the max.

 

The current AdaptiveEncoder may drastically change these semantics (what
range the bit means) if given a large outlier value. The semantics of every
single bit will be altered abruptly, ruining previous learning. This is the
big weakness of this encoder.

 

The encoder you're discussing here also changes the semantics, but perhaps
in an even worse way due to the randomness. Let's say you start with the min
at 0, the max at 101, with a 128-bit input and 21 bits on.

 

Now you give it a value of 175. Are you going to select 25% of the bits and
split their semantics? Let's say you pick bit 64. It used to represent the
range 55-76 (ish) and now it also represents 155-175. You've doubled the
range of values for that bit (ie halving its information content), while the
others keep their ranges. The range represented by a bit is also
non-contiguous. And all this has happened because one outlier was
encountered.
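The halving-of-information point can be made exact. A small sketch (the widths and total range are illustrative numbers from the example above): doubling the set of values that turn a bit on costs that bit exactly one bit of self-information, independent of the total range assumed.

```python
import math

# Bit 64 used to fire for one ~21-wide range of values; after the
# outlier it also fires for a second range of similar width
# (illustrative numbers, per the example in the text).
width_before = 21.0
width_after = 2 * width_before

def self_information(width, total_range):
    """Self-information (in bits) of 'this bit is on', assuming a
    uniform input over total_range."""
    return -math.log2(width / total_range)

R = 202.0  # arbitrary total range; the loss is independent of R
loss = self_information(width_before, R) - self_information(width_after, R)
assert abs(loss - 1.0) < 1e-12   # exactly one bit of information lost
```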

 

It surely makes more sense to degrade the semantics of the bits near the
outlier (top or bottom few bits) most, and to do it based on how many
outliers occur. If you do this then all the ranges stay contiguous, and the
bits which suffer dilution are the ones which deserved it most.

 

Adjusting the encoder for outliers is best done gradually and based on the
statistics of those outliers. If there's lots of data appearing beyond the
current range, spread the encoding out gradually and allow the SP time to
adjust its learning. The NonUniformScalarEncoder (NUSE) seems to provide a
(half-complete) basis for this idea.
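One simple way to make the adjustment gradual and statistics-driven (this is my own sketch of the idea, not the NUSE code) is to move the boundary only a fraction of the way toward each out-of-range value:

```python
def adapt_range(min_val, max_val, value, rate=0.1):
    """Stretch the encoder's range a fraction of the way toward an
    out-of-range value.  Repeated outliers pull the boundary out
    steadily, while a single fluke barely moves it.  (Illustrative
    sketch of gradual, statistics-driven adjustment, not the NUSE
    implementation.)
    """
    if value > max_val:
        max_val += rate * (value - max_val)
    elif value < min_val:
        min_val -= rate * (min_val - value)
    return min_val, max_val
```

With the numbers above, a single 175 only stretches the max from 101 to 108.4, giving the SP time to track the change.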

 

In all three cases you're adjusting for the outlier by increasing the ranges
which will fire each bit. The AdaptiveEncoder makes a mistake by ignoring
the statistics of the outliers, but at least it ends up with the right
encoding if the outlier should have been chosen as the initial max (ie all
bits are equally representative of the data). Your encoder makes this mistake
too, and also makes the mistake of creating two populations of bits, one of
which is now suddenly responding to two ranges of values. 

 

The NUSE learns its encoding based on the data. This is biologically
justified. We adjust our sensitivity (in both directions) based on the
changing statistics of the physical intensity of the stimulus (light, sound
levels, etc), and we also adjust the shape of the response profile in a
gradual way. Note that this is different to adjusting the identity of the
stimulus (sound frequency, ultraviolet light etc) which is usually brought
up whenever this is discussed. We're talking here about encoding a single
scalar for a given datum, not changing what the sensor is measuring.

 

 

Regards,

 

Fergal Byrne

 

 

 

 

On Fri, Nov 1, 2013 at 8:57 PM, Jeff Hawkins <[email protected]> wrote:

Patrick,
Maybe we are thinking about this differently.

In my understanding there is no "age".  In the current encoder a bit in the
encoder represents a small range on the number line.  This never changes,
doesn't age.  What is new is that the same bit now also represents one or
more ranges of the number line in addition to the original one.  These new
ranges lie above and/or below the min and max values of the original
encoder.  A single bit in the encoder could represent many
different ranges of the number line, as many as we need to cover the full
range of values in the input.  No bits are "released".
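One concrete way to realize "a single bit represents many ranges" is to tile the number line into buckets and wrap bucket indices around the bit array. This is only my reading of the idea, not Jeff's exact proposal:

```python
def encode_wrapping(value, resolution=1.0, n=128, w=21):
    """Sketch of a multi-range bit encoding: tile the number line
    into buckets and map bucket b to bit b % n.  Bit k then stands
    for buckets k, k + n, k + 2n, ... so no input value is ever out
    of range and no bits are ever released.  (Illustrative reading
    of the proposal, not a spec.)
    """
    bucket = int(value // resolution)
    return {(bucket + i) % n for i in range(w)}
```

Under this sketch, values n buckets apart share an identical encoding, which is exactly the "mix and match" ambiguity the SP has to (and, per the tests above, can) tolerate.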

In your diagram, if all the colored bits were at full saturation it would
match what I was thinking.  The y axis would represent the encoding of
increasing values of the input.  The diagram could continue expanding
vertically
representing increasing values of the input.

Jeff

-----Original Message-----
From: nupic [mailto:[email protected]] On Behalf Of Patrick
Higgins

Sent: Friday, November 01, 2013 1:14 PM
To: NuPIC general mailing list.
Subject: Re: [nupic-dev] new encoder idea

Jeff,

Saturation indicates the bit's age: the oldest one changes to a 0 and a new
random one is selected as the new 5th bit. Green bits are 1 and yellow bits
are 0. It seemed necessary to facilitate consistent overlap, so I used the
age of the bit as the criterion for deciding which bit is released, to give
similar input numbers a similar SDR. If it's not needed or helpful, I can
change the green back to 100% saturation.

In the case of t=13, bit 9 is released to 0 and bit 38 is activated to 1.
Bit 38 was chosen randomly, but what if bit 21 were picked instead? Would
this be acceptable, or does the new bit need to be previously unused?
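As I understand the scheme, one step looks roughly like the sketch below (names are mine; here the replacement is drawn from any bit not currently active, which is one possible answer to the question above):

```python
import random

def step_encoding(active, ages, n=64, rng=None):
    """Sketch of the aging scheme: release the oldest on-bit to 0
    and activate a randomly chosen currently-inactive bit, keeping
    the number of on-bits constant.  (Illustrative reading of the
    description, not an agreed design.)
    """
    rng = rng or random.Random(0)
    oldest = ages.pop(0)          # ages is ordered oldest-first
    active.discard(oldest)
    unused = [b for b in range(n) if b not in active]
    new_bit = rng.choice(unused)
    active.add(new_bit)
    ages.append(new_bit)
    return active, ages
```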




Patrick

On Nov 1, 2013, at 11:54 AM, Jeff Hawkins wrote:

> Patrick,
> I am confused.  Why did you go to graded bits?  I was expecting to see
> five solid green dots per row.
> Jeff


_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org







 

-- 


Fergal Byrne

 

Brenter IT

[email protected] +353 83 4214179

Formerly of Adnet [email protected] http://www.adnet.ie


