Ralf,

Please let me know whether I can be of any help with the network demo.
Are you porting the NAPI as well?

Sorry I forgot to ask before...

Cheers,
David

On Fri, Jul 17, 2015 at 3:49 AM, John Blackburn <[email protected]>
wrote:

> OK, thanks, I will try that. I currently haven't created a fork, just
> cloned the repo directly. But I can create a fork and then make a PR.
>
> Unfortunately, I am currently having trouble installing the latest
> NuPIC because I have 32-bit Linux, which NuPIC no longer officially
> supports. My computer is actually 64-bit, so I might reinstall 64-bit
> Ubuntu before trying to do anything further with git.
>
> In the meantime, of course, please feel free to add the file yourself.
>
> John.
>
> On Thu, Jul 16, 2015 at 6:37 PM, Matthew Taylor <[email protected]> wrote:
> > John, why don't you create a pull request? You can put your file at
> > "examples/tm/hello_tm.py".
> > ---------
> > Matt Taylor
> > OS Community Flag-Bearer
> > Numenta
> >
> >
> > On Thu, Jul 16, 2015 at 10:29 AM, John Blackburn
> > <[email protected]> wrote:
> >> Dear All
> >>
> >> I've now prepared a basic temporal memory example based on the
> >> "hello_tp.py" test for the Temporal Pooler. It does the same thing:
> >> it learns the ABCDE sequence and outputs in a similar format. I think
> >> this example might be useful for beginners so feel free to add it to
> >> the repo if you like.
> >>
> >> The TM is able to learn the sequence perfectly, as the TP did. However,
> >> the choice of which cell to activate in each column now seems to be
> >> random, whereas with the TP it was always the bottom cell in each
> >> column. Also, the TM is "surprised" by the first input, with all cells
> >> active in each activated column.
> >>
> >> In the next step I'll try to make it learn a simple sinusoid sequence.
> >>
> >> John.
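The "surprise" John describes is the temporal memory's bursting behavior: an active column with no predicted cells activates all of its cells. A minimal self-contained sketch of that activation rule (illustrative only, not NuPIC's actual implementation; the function and argument names are invented for this example):

```python
def activate_cells(active_columns, predicted_cells, cells_per_column):
    """Sketch of the temporal-memory activation rule: in a column that
    was predicted, only the predicted cell(s) become active; a column
    that was NOT predicted "bursts", activating every one of its cells
    (the all-cells-active "surprised" state on an unexpected input).
    Cells are numbered column * cells_per_column + i."""
    active = set()
    for col in active_columns:
        col_cells = {col * cells_per_column + i
                     for i in range(cells_per_column)}
        hits = col_cells & predicted_cells
        active |= hits if hits else col_cells  # no prediction -> burst
    return active
```

On the very first input nothing has been predicted yet, so every active column bursts, which matches the all-cells-active output John saw.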
> >>
> >> On Wed, Jul 15, 2015 at 9:40 AM, John Blackburn
> >> <[email protected]> wrote:
> >>> Thanks very much, Matthew. This will help many people.
> >>>
> >>> John
> >>>
> >>> On Tue, Jul 14, 2015 at 5:35 PM, Matthew Taylor <[email protected]>
> wrote:
> >>>> Fixed! http://numenta.org/docs/nupic/
> >>>> ---------
> >>>> Matt Taylor
> >>>> OS Community Flag-Bearer
> >>>> Numenta
> >>>>
> >>>>
> >>>> On Tue, Jul 14, 2015 at 9:23 AM, John Blackburn
> >>>> <[email protected]> wrote:
> >>>>> Looks like it might be time to run doxygen again! The last run was on May 19.
> >>>>>
> >>>>> John.
> >>>>>
> >>>>> On Tue, Jul 14, 2015 at 5:14 PM, cogmission (David Ray)
> >>>>> <[email protected]> wrote:
> >>>>>> Hey John,
> >>>>>>
> >>>>>> Nice self-sufficient researching! I like that in ya' !!!
> >>>>>>
> >>>>>> Anyway, yes, that last (stripNeverLearned) parameter was removed
> >>>>>> last month. The file I gave you is older than that...
> >>>>>>
> >>>>>> Remember, NuPIC is ever-evolving, and it is still technically
> >>>>>> "pre-release"!
> >>>>>>
> >>>>>> ;-)
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On Tue, Jul 14, 2015 at 11:09 AM, John Blackburn
> >>>>>> <[email protected]> wrote:
> >>>>>>>
> >>>>>>> Thanks, Ralf,
> >>>>>>>
> >>>>>>> Actually that reminds me, David Ray kindly sent me the QuickTest.py
> >>>>>>> example in Python so I just tried that. However, I ran into another
> >>>>>>> problem: there seems to be some confusion about how many parameters
> >>>>>>> sp.compute() takes (Spatial Pooler). In QuickTest.py the code reads
> >>>>>>>
> >>>>>>> sp.compute(encoding, True, output, False)
> >>>>>>>
> >>>>>>> However, on GitHub sp.compute takes only 3 parameters (apart from
> >>>>>>> self):
> >>>>>>>
> >>>>>>> https://github.com/numenta/nupic/blob/master/nupic/research/spatial_pooler.py#L658
> >>>>>>>
> >>>>>>> So this causes a crash. I notice the 4th parameter is indeed
> >>>>>>> mentioned in the API docs:
> >>>>>>>
> >>>>>>> http://numenta.org/docs/nupic/classnupic_1_1research_1_1spatial__pooler_1_1_spatial_pooler.html#aaa2084b96999fb1734fd2f330bfa01a6
> >>>>>>>
> >>>>>>> So I guess the 4th arg was recently removed. Pretty confusing!
> >>>>>>>
> >>>>>>> Can anyone shed light on this mystery?
> >>>>>>>
> >>>>>>> John.
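While the code and docs disagree on sp.compute()'s arity, one stopgap is a small compatibility shim that tries the newer 3-argument call first and falls back to the older 4-argument form. A hedged sketch (the wrapper name and the stand-in classes below are illustrative, not part of NuPIC's API):

```python
def call_sp_compute(sp, encoding, learn, active_array):
    """Call sp.compute() with whichever arity the installed version
    expects: try the newer 3-argument signature first, then fall back
    to the older 4-argument form on TypeError. Caveat: this can mask a
    genuine TypeError raised inside compute() itself, so it is only a
    stopgap while versions are in flux."""
    try:
        return sp.compute(encoding, learn, active_array)
    except TypeError:
        # Older versions took a fourth flag; False mirrors QuickTest.py.
        return sp.compute(encoding, learn, active_array, False)

# Hypothetical stand-ins for the two SP versions, for demonstration only:
class NewStyleSP(object):
    def compute(self, encoding, learn, active_array):
        return 3  # arity actually used

class OldStyleSP(object):
    def compute(self, encoding, learn, active_array, flag):
        return 4
```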
> >>>>>>>
> >>>>>>> On Tue, Jul 14, 2015 at 12:25 PM, Ralf Seliger
> >>>>>>> <[email protected]> wrote:
> >>>>>>> > Hey John,
> >>>>>>> >
> >>>>>>> > Why don't you try the QuickTest example in htm.java
> >>>>>>> > (https://github.com/numenta/htm.java) or htm.JavaScript
> >>>>>>> > (https://github.com/nupic-community/htm.JavaScript)? It
> >>>>>>> > involves the new temporal memory, and by stepping through the
> >>>>>>> > code with a debugger you can easily study the inner workings
> >>>>>>> > of the algorithm.
> >>>>>>> >
> >>>>>>> > Regards, RS
> >>>>>>> >
> >>>>>>> >
> >>>>>>> > Am 14.07.2015 um 11:39 schrieb John Blackburn:
> >>>>>>> >>
> >>>>>>> >> Thanks, Chetan,
> >>>>>>> >>
> >>>>>>> >> Are there any tutorials or examples of how to use
> >>>>>>> >> temporal_memory.py? The nice thing about the old TP is that it
> >>>>>>> >> has an example: hello_tp.py.
> >>>>>>> >>
> >>>>>>> >> John.
> >>>>>>> >>
> >>>>>>> >> On Mon, Jul 13, 2015 at 7:55 PM, Chetan Surpur
> >>>>>>> >> <[email protected]> wrote:
> >>>>>>> >>>
> >>>>>>> >>> Hi John,
> >>>>>>> >>>
> >>>>>>> >>> The TP is now called "Temporal Memory", and there's a new
> >>>>>>> >>> implementation of it in NuPIC [1]. Please use this latest
> >>>>>>> >>> version instead, and let us know if you still find issues
> >>>>>>> >>> with the results.
> >>>>>>> >>>
> >>>>>>> >>> [1]
> >>>>>>> >>> https://github.com/numenta/nupic/blob/master/nupic/research/temporal_memory.py
> >>>>>>> >>>
> >>>>>>> >>> Thanks,
> >>>>>>> >>> Chetan
> >>>>>>> >>>
> >>>>>>> >>> On Jul 13, 2015, at 4:44 AM, John Blackburn
> >>>>>>> >>> <[email protected]>
> >>>>>>> >>> wrote:
> >>>>>>> >>>
> >>>>>>> >>> Dear All
> >>>>>>> >>>
> >>>>>>> >>> I'm trying to use the temporal pooler (TP) directly, as I want
> >>>>>>> >>> to get into the details of how NuPIC works (rather than the
> >>>>>>> >>> high-level OPF etc.).
> >>>>>>> >>>
> >>>>>>> >>> Having trained the TP I used this code to get some predictions:
> >>>>>>> >>>
> >>>>>>> >>> for j in range(10):
> >>>>>>> >>>     x = 2*math.pi/100*j
> >>>>>>> >>>     y = math.sin(x)
> >>>>>>> >>>
> >>>>>>> >>>     print "Time step:", j
> >>>>>>> >>>
> >>>>>>> >>>     for k in range(nIntervals):
> >>>>>>> >>>         if y >= ybot[k] and y < ytop[k]:
> >>>>>>> >>>             print "input=", x, y, k, rep[k,:]
> >>>>>>> >>>             tp.compute(rep[k,:], enableLearn=False,
> >>>>>>> >>>                        computeInfOutput=True)
> >>>>>>> >>>             tp.printStates(printPrevious=False,
> >>>>>>> >>>                            printLearnState=False)
> >>>>>>> >>>             break
> >>>>>>> >>>
> >>>>>>> >>>
> >>>>>>> >>> Here is the result I got:
> >>>>>>> >>>
> >>>>>>> >>> Time step: 0
> >>>>>>> >>> input= 0.0 0.0 9 [0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0]
> >>>>>>> >>>
> >>>>>>> >>> Inference Active state
> >>>>>>> >>> 0000000001 0000000000
> >>>>>>> >>> 0000000000 0000000000
> >>>>>>> >>> Inference Predicted state
> >>>>>>> >>> 0000000000 0000000000
> >>>>>>> >>> 0000000001 0000000000
> >>>>>>> >>> Time step: 1
> >>>>>>> >>> input= 0.0628318530718 0.0627905195293 10
> >>>>>>> >>> [0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0]
> >>>>>>> >>>
> >>>>>>> >>> Inference Active state
> >>>>>>> >>> 0000000000 1000000000
> >>>>>>> >>> 0000000000 0000000000
> >>>>>>> >>> Inference Predicted state
> >>>>>>> >>> 0000000000 0000000000
> >>>>>>> >>> 0000000001 0000000000
> >>>>>>> >>> Time step: 2
> >>>>>>> >>> input= 0.125663706144 0.125333233564 11
> >>>>>>> >>> [0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0]
> >>>>>>> >>>
> >>>>>>> >>> Inference Active state
> >>>>>>> >>> 0000000000 0100000000
> >>>>>>> >>> 0000000000 0100000000
> >>>>>>> >>> Inference Predicted state
> >>>>>>> >>> 0000000000 0000000000
> >>>>>>> >>> 0000000000 1110000000
> >>>>>>> >>> Time step: 3
> >>>>>>> >>> input= 0.188495559215 0.187381314586 11
> >>>>>>> >>> [0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0]
> >>>>>>> >>>
> >>>>>>> >>> Inference Active state
> >>>>>>> >>> 0000000000 0000000000
> >>>>>>> >>> 0000000000 0100000000
> >>>>>>> >>> Inference Predicted state
> >>>>>>> >>> 0000000000 0000000000
> >>>>>>> >>> 0000000000 1110000000
> >>>>>>> >>>
> >>>>>>> >>> You can see that in time step 3, one cell (12th column) is
> >>>>>>> >>> shown as being both in the active and the predictive state,
> >>>>>>> >>> which I thought was impossible (its inference active state is
> >>>>>>> >>> 1 and its inference predicted state is 1).
> >>>>>>> >>>
> >>>>>>> >>> Also, if you look at time step 0, only one cell is in the
> >>>>>>> >>> predictive state. However, the input that comes in at time
> >>>>>>> >>> step 1 activates the column to the right of this cell (the
> >>>>>>> >>> 11th slot is "1"), so I would expect the 11th column to have
> >>>>>>> >>> both cells active, the "unexpected input" state, but this
> >>>>>>> >>> does not happen.
> >>>>>>> >>>
> >>>>>>> >>> Can anyone explain this?
> >>>>>>> >>>
> >>>>>>> >>> John.
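The loop above assumes interval edges ybot/ytop and one-hot input rows rep built elsewhere; that setup code wasn't posted, so the following is only a guess at its shape, with bins tiling [-1, 1] (the range of sin), not a reconstruction of John's actual code:

```python
import math

def make_bins(n_intervals):
    """Plausible construction of the interval edges (ybot/ytop) and the
    one-hot rows (rep) the quoted loop assumes. Each interval k covers
    [ybot[k], ytop[k]), and rep[k] is the one-hot vector selecting it."""
    width = 2.0 / n_intervals
    ybot = [-1.0 + width * k for k in range(n_intervals)]
    ytop = [b + width for b in ybot]
    rep = [[1 if i == k else 0 for i in range(n_intervals)]
           for k in range(n_intervals)]
    return ybot, ytop, rep

# Any y = sin(x) strictly inside (-1, 1) falls in exactly one interval:
ybot, ytop, rep = make_bins(20)
y = math.sin(0.3)
k = next(i for i in range(20) if ybot[i] <= y < ytop[i])
```

With 20 intervals this yields the 20-bit one-hot encodings seen in the printed "input=" lines, though the exact bin boundaries in John's run may differ.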
> >>>>>>> >>>
> >>>>>>> >>>
> >>>>>>> >
> >>>>>>> >
> >>>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> --
> >>>>>> With kind regards,
> >>>>>>
> >>>>>> David Ray
> >>>>>> Java Solutions Architect
> >>>>>>
> >>>>>> Cortical.io
> >>>>>> Sponsor of:  HTM.java
> >>>>>>
> >>>>>> [email protected]
> >>>>>> http://cortical.io
> >>>>>
> >>>>
> >
>
>


-- 
With kind regards,

David Ray
Java Solutions Architect

Cortical.io <http://cortical.io/>
Sponsor of: HTM.java <https://github.com/numenta/htm.java>

[email protected]
http://cortical.io
