Chetan,

 The seeds for the pseudo-random number generators are often fixed (42 is a
common choice), so two networks initialized with the same seed start out
identical.
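A minimal sketch of the effect using NumPy (HTM implementations typically
expose their own seed parameter; the matrix shapes here are illustrative only):

```python
import numpy as np

# Two generators seeded with the same value (42) produce identical
# "random" numbers, so two networks initialized this way start out
# with the same wiring.
rng_a = np.random.RandomState(42)
rng_b = np.random.RandomState(42)

# Hypothetical initial permanence matrices for two networks.
perms_a = rng_a.rand(4, 8)
perms_b = rng_b.rand(4, 8)

print(np.array_equal(perms_a, perms_b))  # → True
```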

Ian


On Sat, Sep 14, 2013 at 2:10 PM, Chetan Surpur <[email protected]> wrote:

> Thanks, Ian, for your ideas. Now I'm curious as to what would happen if you
> trained two different HTM networks with the same data, and with a lot of
> it. Since they're randomly initialized, I imagine that the positions of
> patterns in the encodings would differ, but the relative existence of those
> patterns would be analogous between the networks. If this is true, then the
> problem becomes, can you identify and transfer these patterns between
> networks?
>
> I agree it sounds like a very challenging problem :)
>
>
> On Fri, Sep 13, 2013 at 4:49 PM, Ian Danforth <[email protected]> wrote:
>
>> Chetan,
>>
>>  This is a really hard problem, but there are a couple of datapoints that
>> give me hope.
>>
>> 1. Neural networks trained on natural image scenes end up with Gabor
>> filters
>>
>> Many different techniques for training autoencoders end up with Gabor-like
>> filters. And if you use the same techniques on different classes of
>> natural images, you still get Gabor-like filters, similar if not
>> perfectly aligned in shape and proportion.
>>
>> 2. There is a great deal of similarity in activation areas in the human
>> brain
>>
>> The general map of activity for certain perceptions and actions in the
>> brain is very similar between people. There is a lot of variation around
>> the edges of regions, but you can rely on some consistency in nearly every
>> brain area.
>>
>> So what do we do with this information? Well, my suspicion is that the
>> statement "the internal connections between neurons don't translate
>> between models" will turn out to be practically false.
>>
>> If the general characteristics of the experience are shared between two
>> models, then every layer of their representations will be analogous.
>>
>> In the CLA especially, if the SP (spatial pooler) states are shared
>> between two pretrained networks, I think transferring the TP (temporal
>> pooler) weights from one to the other could work quite well.
>>
>> I'd love to see a tool that can 'diff' two networks so that assumptions
>> like this could be evaluated.
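A first cut at such a diff might just compare which input bits each column of
two spatial poolers has connected to, column by column, and report an average
overlap. A toy sketch (the boolean matrix layout and function name here are
hypothetical illustrations, not NuPIC API):

```python
import numpy as np

def diff_networks(conn_a, conn_b):
    """Hypothetical 'diff' of two networks: compare each column's set of
    connected input bits (boolean matrices, columns x inputs) and report
    the mean Jaccard overlap. 1.0 means identical wiring, 0.0 disjoint."""
    overlaps = []
    for col_a, col_b in zip(conn_a, conn_b):
        union = np.logical_or(col_a, col_b).sum()
        inter = np.logical_and(col_a, col_b).sum()
        overlaps.append(inter / union if union else 1.0)
    return float(np.mean(overlaps))

# Toy example: two 3-column, 8-input connection maps.
a = np.array([[1, 1, 0, 0, 0, 0, 0, 0],
              [0, 0, 1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1, 0, 0]], dtype=bool)
b = np.array([[1, 1, 0, 0, 0, 0, 0, 0],   # identical column
              [0, 0, 1, 0, 1, 0, 0, 0],   # partially overlapping
              [0, 0, 0, 0, 0, 0, 1, 1]], dtype=bool)  # disjoint

print(diff_networks(a, b))  # → ~0.444 (mean of 1.0, 1/3, 0.0)
```

Running the same diff on two networks trained on the same data would be one way
to test the "analogous representations" hypothesis directly.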
>>
>> Ian
>>
>>
>> On Fri, Sep 13, 2013 at 4:18 PM, Chetan Surpur <[email protected]> wrote:
>>
>>> Anyone have any ideas about this? It's been something I've been curious
>>> about for a while now, and it just keeps popping into my head :)
>>>
>>>
>>> On Thursday, August 29, 2013, Chetan Surpur wrote:
>>>
>>>> Hello everyone,
>>>>
>>>> I've been wondering if it's possible to transfer knowledge from one
>>>> trained HTM network to another.
>>>>
>>>> For instance, let's say there's a trained language model on every
>>>> user's phone, and there's a global language model on the cloud. The
>>>> distributed client models were initially copies of the cloud model but
>>>> further trained on the user's own data, thus personalizing them. Then, you
>>>> train the cloud model with more public textual training data, and it learns
>>>> new patterns (new vocabulary, new phrases, etc.). What would be the best
>>>> way to transfer the new knowledge from the cloud model to the client 
>>>> models?
>>>>
>>>> Since the internal connections between neurons don't translate between
>>>> models, I imagine that only the externally facing layers (the input and
>>>> output layers) are useful in transferring data. So then one way would be to
>>>> have the cloud model generate patterns at the output layer, and feed that
>>>> to the client model's input layer. Kind of like the cloud model is
>>>> "talking", and the client model is "listening". After all, this is the only
>>>> effective way to transfer knowledge between humans, since we can't connect
>>>> our brains to each other directly. But it's at least faster than training
>>>> the client models directly on the raw training data, because the cloud
>>>> model can compress the patterns and transfer them more efficiently.
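The "talking/listening" scheme above resembles what is elsewhere called
training on teacher-generated data: the cloud model produces (input, output)
pairs, and each client trains on those instead of the raw corpus. A toy sketch
(both "models" here are hypothetical stand-ins, not NuPIC APIs):

```python
def teacher_predict(x):
    # Hypothetical cloud model: pretend it learned to uppercase words.
    return x.upper()

def distill(teacher, inputs):
    """Generate a compact training set from the teacher's behavior.

    The client then trains on these pairs rather than on the (much
    larger) raw public corpus the teacher saw.
    """
    return [(x, teacher(x)) for x in inputs]

pairs = distill(teacher_predict, ["hello", "world"])
print(pairs)  # → [('hello', 'HELLO'), ('world', 'WORLD')]
```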
>>>>
>>>> That's just one idea, and I'm not even sure how exactly it would
>>>> work. I pretty much just thought of it as analogous to human communication.
>>>> Are there better ways with HTMs?
>>>>
>>>> Thanks,
>>>> Chetan
>>>>
>>>
>>> _______________________________________________
>>> nupic mailing list
>>> [email protected]
>>> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>>>
>>>
>>
>
>