I fully agree. Thanks, Ian, for sharing the link to this great thesis. Deep NLP is my particular area of interest. I also believe this is the next layer of logic that should sit as an umbrella over our NuPIC CLA-HTM, once that is working and fine-tuned. We need the recursive concepts from this thesis to help build the hierarchies between regions in a self-organizing manner, because if I am not mistaken and haven't overlooked something, there is still a lot of work to be done in the area of hierarchy building. And our neocortex has some kind of universal, self-organizing hierarchy builder.
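For anyone who hasn't read the thesis yet, the core recursive idea is simple to sketch: one shared composition function merges two child vectors into a parent vector, and applying it repeatedly up a tree builds the hierarchy. Here's a minimal illustration of that composition step (the dimensions, random weights, and word vectors below are placeholders of my own, not values from the thesis):

```python
import numpy as np

d = 4                                        # embedding dimension (illustrative)
rng = np.random.default_rng(0)
W = rng.standard_normal((d, 2 * d)) * 0.1    # shared composition matrix
b = np.zeros(d)                              # bias

def compose(left, right):
    """Merge two child vectors into one parent vector (recursive net step)."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

# Leaves are word vectors; the SAME function builds every level of the
# tree, so deeper structure comes from repeated application alone.
the, cat, sat = (rng.standard_normal(d) for _ in range(3))
phrase = compose(the, cat)        # (the cat)
sentence = compose(phrase, sat)   # ((the cat) sat)
```

The self-organizing part we'd still need for HTM regions is deciding *which* children to merge; in the thesis that structure comes from a parse tree or is learned greedily, which is exactly the hierarchy-building machinery we're missing.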
Joe

From: Scott Purdy <[email protected]>
Reply-To: "NuPIC general mailing list." <[email protected]>
Date: Friday, 12 September 2014 01:17
To: Joseph-Anthony Perez <[email protected]>
Subject: Re: For NLP Folks - Somewhat off topic

Very interesting, Ian. This might give us some good direction for future NLP work.

On Thu, Sep 11, 2014 at 9:22 AM, Ian Danforth <[email protected]> wrote:
> Re: http://nlp.stanford.edu/~socherr/thesis.pdf
>
> All,
>
> A very interesting recently completed thesis out of Stanford that provides
> state of the art results using deep recurrent (recursive) nets.
>
> Ian
