Dear All

I've now prepared a basic temporal memory example based on the
"hello_tp.py" test for the Temporal Pooler. It does the same thing:
it learns the ABCDE sequence and prints the output in a similar
format. I think this example might be useful for beginners, so feel
free to add it to the repo if you like.

The TM is able to learn the sequence perfectly, as the TP did. However,
the choice of which cell to activate in each column now seems to be
random, whereas with the TP it was always the bottom cell in each
column. Also, the TM is "surprised" by the first input, with all cells
active in each activated column (the columns "burst").
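To illustrate what I mean by "surprised": with cellsPerColumn=2,
consecutive cell indices belong to the same column, and a column in
which every cell is active is "bursting". A minimal sketch of that
check (the cell numbers here are made up for illustration):

```python
from collections import Counter

CELLS_PER_COLUMN = 2  # matches the cellsPerColumn=2 used in the example

def column_of(cell):
    # Cells are numbered consecutively, so integer division gives the column.
    return cell // CELLS_PER_COLUMN

def bursting_columns(active_cells):
    # A column bursts when all of its cells are active at once.
    counts = Counter(column_of(c) for c in active_cells)
    return sorted(col for col, n in counts.items() if n == CELLS_PER_COLUMN)

# Column 5 has both cells (10, 11) active, so it is bursting;
# column 7 has only cell 14 active, so it is not.
print(bursting_columns({10, 11, 14}))  # -> [5]
```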

In the next step I'll try to make it learn a simple sinusoid sequence.
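In case it's useful, here is a rough sketch of how I might encode the
sinusoid into the set-of-active-columns format that tm.compute() takes,
along the lines of the ybot/ytop interval binning from my earlier TP
snippet (the bucket count and columns-per-bucket below are just
illustrative choices, not anything the TM requires):

```python
import math

N_BUCKETS = 10       # illustrative: number of amplitude intervals
COLS_PER_BUCKET = 5  # illustrative: columns dedicated to each interval

def encode(y):
    # Map y in [-1, 1] to a bucket index, clamping the top edge so
    # y == 1.0 falls into the last bucket.
    k = min(int((y + 1.0) / 2.0 * N_BUCKETS), N_BUCKETS - 1)
    # Return the set of active column indices for that bucket.
    return set(range(k * COLS_PER_BUCKET, (k + 1) * COLS_PER_BUCKET))

# Encode a few points of the sinusoid.
for j in range(5):
    y = math.sin(2 * math.pi / 100 * j)
    print(sorted(encode(y)))
```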

John.

On Wed, Jul 15, 2015 at 9:40 AM, John Blackburn
<[email protected]> wrote:
> Thanks very much, Matthew. This will help many people.
>
> John
>
> On Tue, Jul 14, 2015 at 5:35 PM, Matthew Taylor <[email protected]> wrote:
>> Fixed! http://numenta.org/docs/nupic/
>> ---------
>> Matt Taylor
>> OS Community Flag-Bearer
>> Numenta
>>
>>
>> On Tue, Jul 14, 2015 at 9:23 AM, John Blackburn
>> <[email protected]> wrote:
>>> Looks like it might be time to run doxygen again! Last run was on May 19.
>>>
>>> John.
>>>
>>> On Tue, Jul 14, 2015 at 5:14 PM, cogmission (David Ray)
>>> <[email protected]> wrote:
>>>> Hey John,
>>>>
>>>> Nice self-sufficient researching! I like that in ya' !!!
>>>>
>>>> Anyway, yes that last (stripNeverLearned) parameter was recently removed
>>>> last month. The file I gave you is older than that...
>>>>
>>>> Remember, NuPIC is ever evolving, and it is still technically 
>>>> "pre-release"!
>>>>
>>>> ;-)
>>>>
>>>>
>>>>
>>>> On Tue, Jul 14, 2015 at 11:09 AM, John Blackburn
>>>> <[email protected]> wrote:
>>>>>
>>>>> Thanks, Ralf,
>>>>>
>>>>> Actually that reminds me, David Ray kindly sent me the QuickTest.py
>>>>> example in Python so I just tried that. However, I ran into another
>>>>> problem: there seems to be some confusion about how many parameters
>>>>> sp.compute() takes (Spatial Pooler). In QuickTest.py the code reads
>>>>>
>>>>> sp.compute(encoding, True, output, False)
>>>>>
>>>>> However, on Github sp.compute takes only 3 parameters (apart from self):
>>>>>
>>>>>
>>>>> https://github.com/numenta/nupic/blob/master/nupic/research/spatial_pooler.py#L658
>>>>>
>>>>> So this causes a crash. I notice on the API docs the 4th parameter is
>>>>> indeed mentioned:
>>>>>
>>>>>
>>>>> http://numenta.org/docs/nupic/classnupic_1_1research_1_1spatial__pooler_1_1_spatial_pooler.html#aaa2084b96999fb1734fd2f330bfa01a6
>>>>>
>>>>> So I guess the 4th arg was recently removed. Pretty confusing!
>>>>>
>>>>> Can anyone shed light on this mystery?
>>>>>
>>>>> John.
>>>>>
>>>>> On Tue, Jul 14, 2015 at 12:25 PM, Ralf Seliger <[email protected]> wrote:
>>>>> > Hey John,
>>>>> >
>>>>> > why don't you try the QuickTest example in htm.java
>>>>> > (https://github.com/numenta/htm.java) or htm.JavaScript
>>>>> > (https://github.com/nupic-community/htm.JavaScript)? It involves the new
>>>>> > temporal memory, and stepping through the code with a debugger you can
>>>>> > easily study the inner workings of the algorithm.
>>>>> >
>>>>> > Regards, RS
>>>>> >
>>>>> >
>>>>> > Am 14.07.2015 um 11:39 schrieb John Blackburn:
>>>>> >>
>>>>> >> Thanks, Chetan,
>>>>> >>
>>>>> >> Any tutorials, examples of how to use temporal_memory.py? The nice
>>>>> >> thing about old TP is it has an example: hello_tp.py.
>>>>> >>
>>>>> >> John.
>>>>> >>
>>>>> >> On Mon, Jul 13, 2015 at 7:55 PM, Chetan Surpur <[email protected]>
>>>>> >> wrote:
>>>>> >>>
>>>>> >>> Hi John,
>>>>> >>>
>>>>> >>> The TP is now called "Temporal Memory", and there's a new
>>>>> >>> implementation
>>>>> >>> of
>>>>> >>> it in NuPIC [1]. Please use this latest version instead, and let us
>>>>> >>> know
>>>>> >>> if
>>>>> >>> you still find issues with the results.
>>>>> >>>
>>>>> >>> [1]
>>>>> >>>
>>>>> >>>
>>>>> >>> https://github.com/numenta/nupic/blob/master/nupic/research/temporal_memory.py
>>>>> >>>
>>>>> >>> Thanks,
>>>>> >>> Chetan
>>>>> >>>
>>>>> >>> On Jul 13, 2015, at 4:44 AM, John Blackburn
>>>>> >>> <[email protected]>
>>>>> >>> wrote:
>>>>> >>>
>>>>> >>> Dear All
>>>>> >>>
>>>>> >>> I'm trying to use the temporal pooler (TP) directly as I want to get
>>>>> >>> into the details of how Nupic works (rather than high level OPF etc)
>>>>> >>>
>>>>> >>> Having trained the TP I used this code to get some predictions:
>>>>> >>>
>>>>> >>> for j in range(10):
>>>>> >>>     x=2*math.pi/100*j
>>>>> >>>     y=math.sin(x)
>>>>> >>>
>>>>> >>>     print "Time step:",j
>>>>> >>>
>>>>> >>>     for k in range(nIntervals):
>>>>> >>>         if y>=ybot[k] and y<ytop[k]:
>>>>> >>>             print "input=",x,y,k,rep[k,:]
>>>>> >>>             tp.compute(rep[k,:],enableLearn=False,computeInfOutput=True)
>>>>> >>>             tp.printStates(printPrevious = False, printLearnState = False)
>>>>> >>>             break
>>>>> >>>
>>>>> >>>
>>>>> >>> Here is the result I got:
>>>>> >>>
>>>>> >>> Time step: 0
>>>>> >>> input= 0.0 0.0 9 [0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0]
>>>>> >>>
>>>>> >>> Inference Active state
>>>>> >>> 0000000001 0000000000
>>>>> >>> 0000000000 0000000000
>>>>> >>> Inference Predicted state
>>>>> >>> 0000000000 0000000000
>>>>> >>> 0000000001 0000000000
>>>>> >>> Time step: 1
>>>>> >>> input= 0.0628318530718 0.0627905195293 10 [0 0 0 0 0 0 0 0 0 0 1 0 0 0
>>>>> >>> 0 0 0 0 0 0]
>>>>> >>>
>>>>> >>> Inference Active state
>>>>> >>> 0000000000 1000000000
>>>>> >>> 0000000000 0000000000
>>>>> >>> Inference Predicted state
>>>>> >>> 0000000000 0000000000
>>>>> >>> 0000000001 0000000000
>>>>> >>> Time step: 2
>>>>> >>> input= 0.125663706144 0.125333233564 11 [0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
>>>>> >>> 0 0 0 0 0]
>>>>> >>>
>>>>> >>> Inference Active state
>>>>> >>> 0000000000 0100000000
>>>>> >>> 0000000000 0100000000
>>>>> >>> Inference Predicted state
>>>>> >>> 0000000000 0000000000
>>>>> >>> 0000000000 1110000000
>>>>> >>> Time step: 3
>>>>> >>> input= 0.188495559215 0.187381314586 11 [0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
>>>>> >>> 0 0 0 0 0]
>>>>> >>>
>>>>> >>> Inference Active state
>>>>> >>> 0000000000 0000000000
>>>>> >>> 0000000000 0100000000
>>>>> >>> Inference Predicted state
>>>>> >>> 0000000000 0000000000
>>>>> >>> 0000000000 1110000000
>>>>> >>>
>>>>> >>> You can see that in time step 3, one cell (12th column) is shown as
>>>>> >>> being both in the active and the predictive state, which I thought
>>>>> >>> was impossible. (Its inference active state is 1 and its inference
>>>>> >>> predicted state is 1.)
>>>>> >>>
>>>>> >>> Also, if you look at time step 0, only 1 cell is in the predictive
>>>>> >>> state. However, the input that comes in at time step 1 activates the
>>>>> >>> column to the right of this cell (the 11th slot is "1"), so I would
>>>>> >>> expect the 11th column to have both cells active, the "unexpected
>>>>> >>> input" state, but this does not happen.
>>>>> >>>
>>>>> >>> Can anyone explain this?
>>>>> >>>
>>>>> >>> John.
>>>>> >>>
>>>>> >>>
>>>>> >
>>>>> >
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> With kind regards,
>>>>
>>>> David Ray
>>>> Java Solutions Architect
>>>>
>>>> Cortical.io
>>>> Sponsor of:  HTM.java
>>>>
>>>> [email protected]
>>>> http://cortical.io
>>>
>>
#!/usr/bin/env python
# ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2013, Numenta, Inc.  Unless you have an agreement
# with Numenta, Inc., for a separate license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------

from nupic.research.temporal_memory import TemporalMemory
import numpy

def formatRow(x):
  s = ''
  for c in range(len(x)):
    if c > 0 and c % 10 == 0:
      s += ' '
    s += str(x[c])
  s += ' '
  return s

def printStates(tm):
  active=numpy.zeros([2,50],dtype="int")
  predictive=numpy.zeros([2,50],dtype="int")

  for cell in tm.activeCells:
    i=cell/2     # col number
    j=cell-i*2   # position in column

    active[j,i]=1

  print "Active cells:"
  print formatRow(active[0])
  print formatRow(active[1])

  for cell in tm.predictiveCells:
    i=cell/2     # col number
    j=cell-i*2   # position in column

    predictive[j,i]=1

  print "Predictive cells:"
  print formatRow(predictive[0])
  print formatRow(predictive[1])

def printInput(x):
  input=numpy.zeros([50],dtype="int")

  for col in x:
    input[col]=1
    
  print formatRow(input)

#######################################################################
#
# Step 1: create Temporal Memory instance with appropriate parameters
tm = TemporalMemory(columnDimensions=(50,), cellsPerColumn=2,
                    initialPermanence=0.5, connectedPermanence=0.5,
                    minThreshold=10,
                    permanenceIncrement=0.1, permanenceDecrement=0.0,
                    activationThreshold=8)

print "activeCells=",tm.activeCells


#######################################################################
#
# Step 2: create the input patterns to feed to the temporal memory. Each
# input is a set of active column indices. Here we create a simple sequence
# of 5 patterns representing the sequence A -> B -> C -> D -> E

x=[None, None, None, None, None]

x[0]=set([i for i in range( 0,10)])
x[1]=set([i for i in range(10,20)])
x[2]=set([i for i in range(20,30)])
x[3]=set([i for i in range(30,40)])
x[4]=set([i for i in range(40,50)])

#######################################################################
#
# Step 3: send this simple sequence to the temporal memory for learning
# We repeat the sequence 10 times
for i in range(10):

  # Send each letter in the sequence in order
  for j in range(5):

    # The compute method performs one step of learning and/or inference. Note:
    # here we just perform learning but you can perform prediction/inference and
    # learning in the same step if you want (online learning).
    tm.compute(x[j], learn = True)

  # The reset command tells the TM that a sequence just ended and essentially
  # zeros out all the states. It is not strictly necessary but it's a bit
  # messier without resets, and the TM learns quicker with resets.
  tm.reset()
  

#######################################################################
#
# Step 4: send the same sequence of patterns and look at the predictions
# made by the temporal memory
for j in range(5):
  print "\n\n--------","ABCDE"[j],"-----------"
  print "Raw input vector\n",x[j]
  printInput(x[j])
  
  # Send each vector to the TM, with learning turned off
  tm.compute(x[j], learn = False)
  
  # This method prints out the active state of each cell followed by the
  # predicted state of each cell. For convenience the cells are grouped
  # 10 at a time. When there are multiple cells per column the printout
  # is arranged so the cells in a column are stacked together
  #
  # What you should notice is that the columns where active state is 1
  # represent the SDR for the current input pattern and the columns where
  # predicted state is 1 represent the SDR for the next expected pattern
  print "\nAll the active and predicted cells:"
  printStates(tm)
