Thanks Fergal :)

Actually, I am testing the application using artificially generated data
based on the distribution factor.
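
For reference, here is roughly what I mean, as a toy sketch. The name
`distribution_factor` and the random-walk shape are just illustrative; my
actual generator is more involved:

```python
import random

random.seed(7)

def generate_series(n_points, base=100.0, distribution_factor=0.02):
    """Toy random-walk series: `distribution_factor` scales the Gaussian
    spread of each step relative to the base price (an assumption for
    this sketch, not my real generator's parameterisation)."""
    prices = [base]
    for _ in range(n_points - 1):
        step = random.gauss(0.0, base * distribution_factor)
        # Keep prices strictly positive.
        prices.append(max(0.01, prices[-1] + step))
    return prices

series = generate_series(10)
print(len(series))  # 10
```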

On Fri, Aug 9, 2013 at 9:01 PM, Fergal Byrne <[email protected]> wrote:

>
> Hi Ramesh,
>
>  It depends on your data (everything always depends on your data!).
>
>  If, for example, you are trying to learn and predict stock prices for 80
> stocks, then you have a couple of choices:
>
>  1. Keep them all separate, encode each separately, and feed a 10k-bit
> array into the CLA.
>
>  2. Group them by industry sector (or some other grouping), and combine
> each group's 128-bit arrays before feeding them to the CLA.
>
>  3. Add them all together and feed the single 128-bit value into the CLA.
>
>  What you'll get in each case:
>
>  1. Predictions of all your stocks, informed (hopefully) by any (possibly
> hidden) correlative relationship among stocks.
>
>  2. Same as 1 but with likely better performance (less noise, more
> correlation).
>
>  3. Predictions of the "index" of all your stocks (like the S&P 500)
>
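
Just to check my understanding, here is how I'd sketch the three options
(toy hand-rolled bit arrays, not NuPIC's actual encoder API; the 128 bits
per stock and the sector assignment are made up for illustration):

```python
import random

N_STOCKS = 80
BITS_PER_STOCK = 128
N_SECTORS = 8

random.seed(42)

def random_sdr(bits=BITS_PER_STOCK, on_bits=4):
    """A toy sparse binary encoding: `on_bits` ones among `bits` positions."""
    sdr = [0] * bits
    for i in random.sample(range(bits), on_bits):
        sdr[i] = 1
    return sdr

def union(sdrs):
    """Bitwise OR of equal-length bit arrays (combines encodings)."""
    return [int(any(bits)) for bits in zip(*sdrs)]

# Pretend each of the 80 stocks is already encoded as a 128-bit array.
encodings = [random_sdr() for _ in range(N_STOCKS)]

# Option 1: keep them separate -- concatenate into one ~10k-bit input.
option1 = [bit for sdr in encodings for bit in sdr]   # 80 * 128 = 10240 bits

# Option 2: group by (made-up) sector, OR each group's arrays together.
sectors = [i % N_SECTORS for i in range(N_STOCKS)]    # 10 stocks per sector
option2 = [bit
           for s in range(N_SECTORS)
           for bit in union([e for e, sec in zip(encodings, sectors)
                             if sec == s])]            # 8 * 128 = 1024 bits

# Option 3: collapse everything into a single 128-bit "index" array.
option3 = union(encodings)

print(len(option1), len(option2), len(option3))  # 10240 1024 128
```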
>  When you're deciding how much "detail" to feed the CLA, you are
> conversely deciding how much noise you're feeding it. The CLA is supposed
> to learn from whatever predictive information is embedded in the data;
> it'll do this (hopefully) if that information is there, somewhere. Your
> job is to oversee the diet of data and discover the best recipe for
> successful prediction.
>
>  One way to picture how the CLA learns is to regard it as building a
> structure of causal flow in space (ie across the input array, across the
> region) and in time (from one pattern in a sequence to the next), in
> response to the analogous flows of the data. It does this by making
> synaptic connections in the SP (for the data) and with previously active
> cells (for sequence memory and prediction).
>
>  These connections are constantly adjusting to better match the
> experienced flows of the data. The plan is for noise (or spurious,
> non-structural changes in the data) to cancel itself out over time, while
> information should monotonically (in toto) improve the structure.
>
>  So, if you think your 80-stock-wide 10k-bit array is a high-information
> diet, feed it into NuPIC and see if it can give you a) stable SDRs out of
> the SP, and b) any kind of predictive capacity! Make sure to donate 10% of
> your earnings to Jeff's favourite charity...
>
I am already doing it by contributing 10% of my time :)


>  Regards,
>
>  Fergal Byrne
>
>
>
>
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>
>