I have found many words that are not tokenized (and indexed) by Lucene.NET but
are tokenized and indexed by SQL Server 2005 full-text search. Is there a way
to configure Lucene's tokenizer so it handles more of these words?