Mike, we are working on a first paper, which we plan to submit to one of the 
major IR conferences in early 2014. I plan to release a technical white paper 
in parallel.

Francisco 

On 26.08.2013, at 00:44, Mike Lawrence wrote:

> Francisco, are the details (math, pseudocode, etc) of the methods by which 
> you create and train the retina available anywhere? Published paper, 
> conference paper, etc? 
> 
> 
> --
> Mike Lawrence
> Graduate Student
> Department of Psychology & Neuroscience
> Dalhousie University
> 
> ~ Certainty is (possibly) folly ~
> 
> 
> On Wed, Aug 21, 2013 at 5:00 PM, Francisco Webber <[email protected]> wrote:
> Hello,
> I am one of the founders of CEPT Systems and lead researcher of our retina 
> algorithm.
> 
> We have developed a method to represent words by a bitmap pattern capturing 
> most of their "lexical semantics" (a text sensor).
> Our word-SDRs fulfill all the requirements for "good" HTM input data.
> 
> - Words with similar meaning "look" similar
> - If you drop random bits from the representation, the semantics remain intact
> - Only a small number (up to 5%) of bits are set in a word-SDR
> - Every bit in the representation corresponds to a specific semantic feature 
> of the language used
> - The Retina (sensory organ for an HTM) can be trained on any language
> - The Retina training process is fully unsupervised.
> 
> We have found that the word-SDR by itself (without using any HTM yet) can 
> improve on many NLP problems that are only poorly solved by traditional 
> statistical approaches.
> We use the SDRs to:
> - Create fingerprints of text documents, which allows us to compare them for 
> semantic similarity using simple (Euclidean) similarity measures
> - Automatically detect polysemy and disambiguate multiple meanings
> - Characterize any text with context terms for automatic search-engine 
> query-expansion …
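A minimal sketch of how two such word-SDRs might be compared, assuming they are sparse binary vectors with the properties listed above; the vector size, bit positions, and word choices below are invented for illustration (the actual Retina encoding is not described in this thread):

```python
import math

# Hypothetical word-SDRs, modeled as sets of "on" bit positions in a
# large binary vector. With only a few percent of bits set, a set of
# indices is a natural representation. All values here are made up.
SDR_SIZE = 16384  # assumed fingerprint size

apple = {12, 87, 305, 1024, 2048, 4096, 8191}
pear  = {12, 87, 305, 1024, 3000, 5000, 8191}

# Overlap: number of shared set bits. Similar meanings "look" similar,
# so semantically related words should share many bits.
overlap = len(apple & pear)

# For 0/1 vectors, Euclidean distance reduces to the square root of the
# size of the symmetric difference (bits set in exactly one vector).
euclidean = math.sqrt(len(apple ^ pear))

print(overlap)    # shared semantic features
print(euclidean)  # simple (Euclidean) distance between the fingerprints
```

The same overlap measure extends to document fingerprints formed by merging the SDRs of a document's words, which is one plausible reading of the "fingerprints of text documents" point above.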
> 
> We hope to successfully link up our Retina to an HTM network to go beyond 
> lexical semantics into the field of "grammatical semantics".
> This would hopefully lead to improved abstracting, conversation, 
> question-answering, and translation systems…
> 
> Our correct web address is www.cept.at (no kangaroos in Vienna ;-)
> 
> I am interested in any form of cooperation to apply HTM technology to text.
> 
> Francisco
> 
> On 21.08.2013, at 20:16, Christian Cleber Masdeval Braz wrote:
> 
> >
> >  Hello.
> >
> >  Like many of you here, I am pretty new to HTM technology.
> >
> >  I am a researcher in Brazil, and I am going to start my PhD program soon. 
> > My field of interest is NLP and the extraction of knowledge from text. I am 
> > thinking of using the ideas behind the Memory Prediction Framework to 
> > investigate semantic information retrieval from the Web and answering 
> > questions in natural language. I intend to use the HTM implementation as a 
> > base for this.
> >
> >  I would appreciate it a lot if someone could answer some questions:
> >
> >  - Is there any research related to HTM and NLP? Could you point me to it?
> >
> >  - Is HTM suitable for addressing this problem? Could it learn, without 
> > supervision, the grammar of a language, or only help with some aspects such 
> > as Named Entity Recognition?
> >
> >
> >
> >  Regards,
> >
> >  Christian
> >
> >
> > _______________________________________________
> > nupic mailing list
> > [email protected]
> > http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
> 
> 
