I have found many words that are not tokenized (and indexed) by Lucene.NET but are tokenized and indexed by SQL Server 2005 full-text search. Is there a way to configure Lucene's tokenizer so that it handles more of these words?
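One likely cause of the gap is how the analyzer splits text: a letter-only tokenizer breaks compound tokens such as "TCP/IP" or ".NET" apart at non-letter characters, while a whitespace-based tokenizer keeps them whole. The sketch below is illustrative only (it is not Lucene's actual code, and the sample string is invented); it approximates the two behaviors with plain regular expressions to show why some terms never reach the index in one form:

```python
import re

def letter_tokenize(text):
    # Approximates a letter-only tokenizer: splits on any run of
    # non-letter characters, so "TCP/IP" becomes ["tcp", "ip"] and
    # "2.0" disappears entirely.
    return [t.lower() for t in re.split(r"[^A-Za-z]+", text) if t]

def whitespace_tokenize(text):
    # Splits only on whitespace, so "TCP/IP" and ".NET" survive
    # as single tokens.
    return [t.lower() for t in text.split()]

sample = "Configure TCP/IP in .NET 2.0"
print(letter_tokenize(sample))      # ['configure', 'tcp', 'ip', 'in', 'net']
print(whitespace_tokenize(sample))  # ['configure', 'tcp/ip', 'in', '.net', '2.0']
```

In Lucene terms, this is roughly the difference between building your analysis chain on a letter/standard-style tokenizer versus a whitespace tokenizer, and it is usually the first thing to check when two engines disagree on which "words" exist.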
- tokenizer optimizations Michael Paine
- RE: tokenizer optimizations George Aroush
- Re: tokenizer optimizations Doug
