Hey Matt,

I created this a couple of weeks ago: https://github.com/numenta/nupic/issues/1230. Last I checked, I think you assigned it to Scott when it was created.
Thanks,
Nicholas

On Sep 30, 2014, at 9:32 PM, Matthew Taylor <[email protected]> wrote:

> Nicholas,
>
> I never saw a new issue appear. I've created one here: https://github.com/numenta/nupic/issues/1382. Please review it for accuracy.
>
> ---------
> Matt Taylor
> OS Community Flag-Bearer
> Numenta
>
> On Mon, Aug 18, 2014 at 1:53 PM, Nicholas Mitri <[email protected]> wrote:
> Hey Matt,
>
> I'll give it a go tomorrow and report back in case any problems pop up.
>
> Nick
>
> On Aug 18, 2014, at 7:04 PM, Matthew Taylor <[email protected]> wrote:
> > Nicholas, do you need any help creating a GitHub issue for the problem you brought up earlier?
> >
> > https://github.com/numenta/nupic/issues/new
> > ---------
> > Matt Taylor
> > OS Community Flag-Bearer
> > Numenta
> >
> > On Sat, Aug 9, 2014 at 10:28 AM, Chetan Surpur <[email protected]> wrote:
> >> Nick,
> >>
> >> Thanks for looking into this. Please file an issue on GitHub, and someone will investigate.
> >>
> >> - Chetan
> >>
> >> On Aug 9, 2014 6:55 AM, "Nicholas Mitri" <[email protected]> wrote:
> >>>
> >>> Chetan,
> >>>
> >>> On the topic of encoders, and specifically scalar encoders, there seem to be some mistakes in scalar.py.
> >>> Resolution = (max - min)/(n - w) = range/(n - w) and radius = resolution * w.
> >>>
> >>> In the scalar.py documentation:
> >>>
> >>> You could specify resolution = 0.5, which gives
> >>> 1   -> 11111000... (22 bits total)
> >>> 1.5 -> 011111.....
> >>> 2.0 -> 0011111....
> >>> [resolution = 0.5; n=22; radius=2.5]
> >>>
> >>> It should be 23 bits instead of 22.
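The 22-vs-23 bit count is easy to verify with a minimal sketch of a non-periodic scalar encoding. This is a simplified illustration, not NuPIC's actual scalar.py, and the parameters minval=1, maxval=10, w=5 are assumed here purely to reproduce resolution = 0.5 and radius = 2.5:

```python
def encode_scalar(value, minval, maxval, n, w):
    """Sketch of a non-periodic scalar encoding (simplified, not NuPIC's scalar.py).

    resolution = (maxval - minval) / (n - w): the block of w active bits has
    n - w + 1 possible start positions, which must span the full value range.
    """
    resolution = (maxval - minval) / (n - w)
    start = int(round((value - minval) / resolution))
    bits = [0] * n
    bits[start:start + w] = [1] * w
    return bits

# With minval=1, maxval=10, w=5: resolution = 9/18 = 0.5, radius = 0.5*5 = 2.5,
# and the vector must be n = range/resolution + w = 18 + 5 = 23 bits wide.
print("".join(map(str, encode_scalar(1.0, 1.0, 10.0, 23, 5))))  # 11111 then 18 zeros
print("".join(map(str, encode_scalar(1.5, 1.0, 10.0, 23, 5))))  # shifted one bit right
```

Each 0.5 step in the input shifts the block of five 1s one position to the right, exactly as in the docstring's example, and the total width comes out to 23 bits, not 22.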
> >>> The same mistake appears in the implementation details:
> >>>
> >>> Implementation details:
> >>>
> >>> --------------------------------------------------------------------------
> >>> range = maxval - minval
> >>> h = (w-1)/2  (half-width)
> >>> resolution = radius / w
> >>> n = w * range/radius  (periodic)
> >>> n = w * range/radius + 2 * h  (non-periodic)
> >>>
> >>> The last equation for n results in n = range/res + (w-1) => res = range/(n-w+1), which is not what the code or the above calculations state. I'm not sure if I'm missing a condition that would allow for both equations in different scenarios, but I thought I'd bring this to your and Matt's attention.
> >>>
> >>> Best,
> >>> Nick
> >>>
> >>> On Aug 4, 2014, at 10:45 PM, Nicholas Mitri <[email protected]> wrote:
> >>>
> >>> Chetan, Fergal, and David,
> >>>
> >>> Thank you for your feedback! Your comments are very helpful. I really appreciate you taking the time to answer my questions thoroughly.
> >>>
> >>> Best,
> >>> Nick
> >>>
> >>> On Aug 4, 2014, at 9:00 PM, Chetan Surpur <[email protected]> wrote:
> >>>
> >>> Hi Nick,
> >>>
> >>> Great questions. Adding to the other answers that have been sent, here are my clarifications to some of your questions (out of order):
> >>>
> >>> 2- The wiki refers to encoder outputs as SDRs. Is that necessarily the case, and if so, to what properties of encoder design is that requirement attributed? (i.e., why do I need an SDR to be the output of the encoder as opposed to a binary vector unconstrained in density?)
> >>>
> >>> Not necessarily. Encoders can output any kind of binary vector, and the spatial pooler will convert them into SDRs. Most of the encoders that currently exist output a sparse binary vector because that makes it easy to distinguish between different outputs. But this is not a hard requirement.
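The discrepancy Nick points out above, between the resolution implied by the docstring's non-periodic formula and the one used by the code and the bit-count example, can be checked numerically. The parameters n = 23, w = 5, range = 9 are assumed here only as a worked example:

```python
# Numeric check of the two resolution formulas (example values assumed:
# n = 23, w = 5, range = maxval - minval = 9).
n, w = 23, 5
rng = 9.0
h = (w - 1) / 2.0              # half-width, as defined in the docstring

# What the code and the earlier bit-count example imply: res = range/(n - w)
res_code = rng / (n - w)       # 9/18 = 0.5

# Inverting the docstring's non-periodic formula n = w*range/radius + 2*h,
# with radius = resolution * w, gives n = range/res + (w - 1), i.e.:
res_doc = rng / (n - w + 1)    # 9/19 ~ 0.474, not 0.5

print(res_code, res_doc)
```

For the same n, w, and range the two formulas give different resolutions (0.5 vs. roughly 0.474), which is the inconsistency being reported.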
> >>>
> >>> 1- Are there any specific properties that encoders need to have when designing one? What's the rationale behind them, if they exist?
> >>>
> >>> As Fergal mentioned, semantic similarity between inputs should translate to overlapping bits in the outputs. The rationale is simply that this allows the CLA to generalize more easily between similar inputs; otherwise, learning would be slower.
> >>>
> >>> 3- Is there a biological counterpart for encoders in the general sense?
> >>>
> >>> Our biological sensors, like our retinas or cochleas. They have an evolutionarily driven, hard-coded way of generating electrical activations that have the above properties and allow the lowest levels of the cortex to learn patterns quickly.
> >>>
> >>> - Chetan
> >>>
> >>> On August 2, 2014 at 5:50:47 AM, Nicholas Mitri ([email protected]) wrote:
> >>>
> >>> Hey all,
> >>>
> >>> Just a few quick questions about encoders I'd appreciate some feedback on.
> >>>
> >>> 1- Are there any specific properties that encoders need to have when designing one? What's the rationale behind them, if they exist?
> >>> 2- The wiki refers to encoder outputs as SDRs. Is that necessarily the case, and if so, to what properties of encoder design is that requirement attributed? (i.e., why do I need an SDR to be the output of the encoder as opposed to a binary vector unconstrained in density?)
> >>> 3- Is there a biological counterpart for encoders in the general sense?
> >>> 4- Encoders perform quantization on the input stream by binning similar input patterns into hypercubes in feature space and assigning a single label (SDR or binary representation) to each bin. The encoder resolution determines the size of the hypercube. The SP essentially performs a very similar task by binning the outputs of the encoder in a binary feature space instead.
> >>> City-block distance, determined by a threshold parameter, controls the size of the hypercubes/bins. Why is this not viewed as a redundant operation by two consecutive modules of the HTM design? Is there a strong case for allowing it?
> >>> 5- Finally, is there any benefit to designing an encoding scheme that bins inputs into hyperspheres instead of hypercubes? Would the resulting combination of bins produce decision boundaries that might allow for better binary classification performance, for example?
> >>>
> >>> Thanks!
> >>> Nick
> >>>
> >>> _______________________________________________
> >>> nupic mailing list
> >>> [email protected]
> >>> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
