Anyone have any ideas about this? It's something I've been curious about for a while, and it just keeps popping into my head :)
On Thursday, August 29, 2013, Chetan Surpur wrote:

> Hello everyone,
>
> I've been wondering if it's possible to transfer knowledge from one
> trained HTM network to another.
>
> For instance, let's say there's a trained language model on every user's
> phone, and there's a global language model on the cloud. The distributed
> client models were initially copies of the cloud model but further trained
> on the user's own data, thus personalizing them. Then, you train the cloud
> model with more public textual training data, and it learns new patterns
> (new vocabulary, new phrases, etc.). What would be the best way to transfer
> the new knowledge from the cloud model to the client models?
>
> Since the internal connections between neurons don't translate between
> models, I imagine that only the externally facing layers (the input and
> output layers) are useful for transferring data. So one way would be to
> have the cloud model generate patterns at its output layer and feed them
> to the client model's input layer. Kind of like the cloud model is
> "talking" and the client model is "listening". After all, this is the only
> effective way to transfer knowledge between humans, since we can't connect
> our brains to each other directly. But it's at least faster than training
> the client models directly on the raw training data, because the cloud
> model can compress the patterns and transfer them more efficiently.
>
> That's just one idea, and I'm not even sure how exactly it would work. I
> pretty much just thought of it by analogy to human communication. Are there
> better ways with HTMs?
>
> Thanks,
> Chetan
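To make the "talking"/"listening" idea concrete, here's a toy sketch. This is not the NuPIC API -- the `Model` class below is a hypothetical stand-in (a simple bigram counter) just to show the transfer protocol: the cloud (teacher) model generates sequences from what it has learned, and the client (student) model trains on those generated sequences instead of the raw public data.

```python
# Illustrative sketch of "cloud talks, client listens" knowledge
# transfer. The Model class is a hypothetical stand-in, NOT NuPIC:
# it just counts bigram transitions and replays the most likely ones.

class Model:
    """Toy sequence model: learns bigram transition counts."""

    def __init__(self):
        self.transitions = {}

    def learn(self, sequence):
        # Count each observed transition a -> b.
        for a, b in zip(sequence, sequence[1:]):
            self.transitions.setdefault(a, {})
            self.transitions[a][b] = self.transitions[a].get(b, 0) + 1

    def generate(self, start, length):
        # Greedily replay the most likely continuation of `start`.
        seq, cur = [start], start
        for _ in range(length - 1):
            nexts = self.transitions.get(cur)
            if not nexts:
                break
            cur = max(nexts, key=nexts.get)
            seq.append(cur)
        return seq


# Cloud model trained on (stand-in) public data.
cloud = Model()
cloud.learn(["the", "cat", "sat", "on", "the", "mat"])

# Client model "listens": it never sees the raw data, only the
# sequences the cloud model "speaks".
client = Model()
client.learn(cloud.generate("the", 6))

print(client.generate("cat", 3))
```

The point of the sketch is only the data flow: the student's training input is the teacher's generated output, so the teacher can compress (here, replay only the dominant transitions) rather than forwarding the full raw corpus.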
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
