Sebastian, I don't have a lot of time to help you with this right now. Does
anyone else have any insight into his string-categorization prediction
problem?

---------
Matt Taylor
OS Community Flag-Bearer
Numenta

On Tue, Jan 12, 2016 at 11:57 AM, Sebastián Narváez <[email protected]>
wrote:

> Yeah, I think that sums it up. The generalization for novel inputs is a
> goal, of course, but for now I would be happy to have a higher success rate
> for the strict training set, which in theory should be achievable. I have a
> decent training set with lots of different ways of saying the same
> sentence. What really bugs me, however, is which sentences the model was
> having a difficult time predicting. I'll give some examples (one thing I
> forgot to mention: my training set is in Spanish, but I'll translate it
> and put the direction of the movement in CAPS):
>
> Well predicted:
>
> mover hacia la DERECHA - move to the right
> muevete hacia la IZQUIERDA - you move to the left
> por favor muevete hacia el NORTE - please move to the north
>
> Badly predicted:
>
> ¿ podrias moverte hacia ABAJO ? - could you move downwards ?
> mover a ARRIBA por favor - move upwards please
>
> As you can see, when the direction is the last word, the model tends to
> predict well. Special characters like '¿' and '?' are treated as their own
> categories too. I've also used custom encoders to give the inputs some more
> semantic meaning, but the results are not very different.
>
>
> On Tue, Jan 12, 2016 at 2:16 PM, Matthew Taylor <[email protected]> wrote:
>
>> Ok I think I see now. So you are training the model that certain
>> sequences of words are associated with certain event types. And you are
>> asking it to predict event types based on what it has already seen.
>>
>> Correct me if I am wrong, but it seems that your goal is to have a system
>> that generalizes on the input text. I don't think the approach you are
>> taking is going to work well at generalization unless you train it on
>> hundreds of different phrases over and over. It has to learn that terms
>> like "upwards" and "up" and "forward" and "north" have similar meanings,
>> and there is no semantic meaning encoded in those category terms until the
>> model has seen enough data to know that a certain event type follows each
>> of them.
>>
>>
>>
>> ---------
>> Matt Taylor
>> OS Community Flag-Bearer
>> Numenta
>>
>> On Tue, Jan 12, 2016 at 10:34 AM, Sebastián Narváez <[email protected]>
>> wrote:
>>
>>> Both Events and Words are strings, and both are encoded with the same
>>> NuPIC built-in Category Encoder. You could say my current model is
>>> learning a sequence of strings; it makes no differentiation whatsoever
>>> between an Event and a Word. The CLA Classifier does multistep
>>> predictions, 1 and 2 steps ahead. When I'm training the model, I pass the
>>> entire sequence, words and events. Then, when I'm testing, I only pass
>>> the words and let the CLA Classifier predict the next two elements, which
>>> should be the events.
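The setup described above can be sketched with a toy stand-in for the classifier's 1- and 2-step-ahead predictions (hypothetical code, not the NuPIC API): training sees the full word+event sequence, and testing passes only the words and looks up what followed them.

```python
from collections import defaultdict

# Toy stand-in for 1- and 2-step-ahead prediction over a single stream
# of strings, where the events simply follow the words in the sequence.
step1 = defaultdict(set)  # element -> what came 1 step later
step2 = defaultdict(set)  # element -> what came 2 steps later

def train(sequence):
    for i, token in enumerate(sequence):
        if i + 1 < len(sequence):
            step1[token].add(sequence[i + 1])
        if i + 2 < len(sequence):
            step2[token].add(sequence[i + 2])

# Training: the words AND their events are passed as one sequence.
train(["move", "to", "the", "left", "event-move", "event-left"])

# Testing: only the words are passed; the model predicts the next
# two elements, which should be the events.
last_word = "left"
print(step1[last_word])  # {'event-move'}
print(step2[last_word])  # {'event-left'}
```

Note that in this toy version the prediction hangs entirely off the last word seen, which hints at why sentences ending in the direction word are easier.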
>>>
>>> On Tue, Jan 12, 2016 at 1:02 PM, Matthew Taylor <[email protected]>
>>> wrote:
>>>
>>>> So you have a NuPIC model accepting sentences like "move to the left",
>>>> and it is classifying them into event types? If so, how are you doing that?
>>>> Are the words treated as string categories? Is each sentence a sequence?
>>>> How is a prediction being translated into an event type? I'm really
>>>> confused. :?
>>>>
>>>> ---------
>>>> Matt Taylor
>>>> OS Community Flag-Bearer
>>>> Numenta
>>>>
>>>> On Mon, Jan 11, 2016 at 12:39 PM, Sebastián Narváez <
>>>> [email protected]> wrote:
>>>>
>>>>> Ok, sorry for the inaccuracy, I was trying to explain it quickly. I'm
>>>>> building task-execution software for a virtual environment with NuPIC
>>>>> (it's like a game; so far the character can move up, down, left and
>>>>> right). It takes a sentence as a parameter (which is treated as a
>>>>> sequence of words) and outputs an event in the virtual environment
>>>>> (i.e. the movement of the character). So, for example, an input can be
>>>>> ['move', 'to', 'the', 'left'], and it should output ['event-move',
>>>>> 'event-left'].
>>>>>
>>>>> I've tried various encoders and structures. So far I've got the best
>>>>> results using the same Category Encoder to encode both the words of the
>>>>> input sentence and the output events, and passing them through an SP
>>>>> and a TM all the way to the CLA Classifier. I train the model by
>>>>> passing a sequence of words and then its corresponding sequence of
>>>>> events, then reset()ing the TM and passing another pair of sequences.
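The training loop described above might look roughly like this. All names here are hypothetical: `ToySequenceModel` stands in for the real SP -> TM -> CLA Classifier pipeline, and `reset()` mimics clearing the temporal state between independent sequence pairs.

```python
# Sketch of the training loop: each (words, events) pair is fed as one
# combined sequence, and temporal state is reset between pairs so that
# independent sentences are not chained together.

class ToySequenceModel:
    """Hypothetical stand-in for the SP -> TM -> classifier pipeline."""
    def __init__(self):
        self.transitions = {}  # token -> token seen next
        self.prev = None

    def compute(self, token, learn=True):
        if learn and self.prev is not None:
            self.transitions[self.prev] = token
        self.prev = token
        return self.transitions.get(token)  # 1-step prediction

    def reset(self):
        self.prev = None  # forget the previous sequence entirely

model = ToySequenceModel()
training_set = [
    (["move", "to", "the", "left"], ["event-move", "event-left"]),
    (["move", "it", "upwards"], ["event-move", "event-up"]),
]
for words, events in training_set:
    for token in words + events:   # one combined sequence per pair
        model.compute(token, learn=True)
    model.reset()                  # sequences are independent
```

Without the reset, the last event of one sentence would be learned as a predictor of the first word of the next, which is exactly the cross-sequence contamination the reset avoids.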
>>>>>
>>>>> After roughly 50 iterations, the model correctly predicts the sequence
>>>>> of events 77.77% of the time, given a sequence of words (or input
>>>>> sentence). Technically, it should always successfully predict the
>>>>> sentences, as I'm currently testing with the exact same training set
>>>>> (there is no noise whatsoever), so I thought it might improve if I
>>>>> adjusted the SP and/or TM parameters. In fact, before I played with
>>>>> them a little, my success rate was about 66.66%. What I've found odd,
>>>>> however, is that it almost always predicts correctly if the sentence
>>>>> ends with the direction (e.g. ['move', 'it', 'upwards']), but fails if
>>>>> it doesn't (e.g. ['could', 'you', 'move', 'it', 'upwards', '?']).
>>>>>
>>>>> I hope I explained myself clearly. Please tell me if there's something
>>>>> weird in my explanation; English is not my first language.
>>>>>
>>>>> On Mon, Jan 11, 2016 at 1:36 PM, Matthew Taylor <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Sebastian, is it possible to talk about this data without
>>>>>> abstraction? I'm having a hard time understanding what you mean by
>>>>>> "fixed-size sequence". Could you describe the actual sensors producing
>>>>>> this data?
>>>>>>
>>>>>> Either way, if you swarm for a model with two data sources in an
>>>>>> attempt to optimize the predictions of one of those fields, the swarm
>>>>>> often exposes an unexpected lack of correlation between the two
>>>>>> fields. Sometimes, to predict field A, the value of field B simply
>>>>>> does not matter enough to keep track of it.
>>>>>>
>>>>>> If both sequences are independent of each other like you say, I would
>>>>>> not expect the value of one field to affect the value of another field at
>>>>>> all.
>>>>>>
>>>>>>
>>>>>> ---------
>>>>>> Matt Taylor
>>>>>> OS Community Flag-Bearer
>>>>>> Numenta
>>>>>>
>>>>>> On Fri, Jan 8, 2016 at 7:26 PM, Sebastián Narváez <
>>>>>> [email protected]> wrote:
>>>>>>
>>>>>>> Sure. I have two inputs from different sensors. Given a sequence
>>>>>>> from the first sensor (which isn't of a fixed size), I want to
>>>>>>> predict a sequence from the second sensor (which right now has a
>>>>>>> fixed size of two, but it would be nice to generalize to non-fixed
>>>>>>> sizes later). When a sequence from the first sensor is passed to
>>>>>>> NuPIC and it returns a prediction, another sequence from the first
>>>>>>> sensor is then passed that has no relation whatsoever to the previous
>>>>>>> sequence. That's what I meant by non-streaming data. Each sequence is
>>>>>>> independent of the others.
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Jan 6, 2016 at 2:04 PM, Matthew Taylor <[email protected]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Sebastian,
>>>>>>>>
>>>>>>>> We recommend you use the swarming library to identify the best SP
>>>>>>>> and TM parameters. See
>>>>>>>> https://github.com/numenta/nupic/wiki/Running-Swarms for details.
>>>>>>>>
>>>>>>>> I'm not sure what you mean by non-streaming data, can you elaborate?
>>>>>>>>
>>>>>>>>
>>>>>>>> ---------
>>>>>>>> Matt Taylor
>>>>>>>> OS Community Flag-Bearer
>>>>>>>> Numenta
>>>>>>>>
>>>>>>>> On Tue, Jan 5, 2016 at 3:42 PM, Sebastián Narváez <
>>>>>>>> [email protected]> wrote:
>>>>>>>>
>>>>>>>>> Hey guys. Are there any recommendations for the SP and TM
>>>>>>>>> parameters based on the type of inputs or the encoder used? Also,
>>>>>>>>> how does swarming behave with non-streaming data (i.e. a dataset of
>>>>>>>>> independent sequences)?
>>>>>>>>>
>>>>>>>>> Thanks in advance!
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
