Do you have some examples?  Are you referring to stop words?

SQL Server's analyzer/tokenizer doesn't necessarily match Lucene.Net's.
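A minimal sketch of why the two engines index different words: if each engine drops a different stop-word list during tokenization, the same text yields different indexed terms. The stop lists below are hypothetical, not the actual lists shipped by Lucene.Net or SQL Server.

```python
# Hypothetical illustration -- NOT Lucene.Net's or SQL Server's actual code.
# Two engines tokenizing the same text with different stop-word lists
# end up indexing different terms.
import re

def tokenize(text, stop_words):
    """Lowercase, split on non-alphanumerics, and drop stop words."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return [t for t in tokens if t not in stop_words]

# Assumed stop lists for illustration only.
ENGINE_A_STOPS = {"a", "an", "and", "the", "is", "it", "to", "will"}
ENGINE_B_STOPS = {"a", "an", "and", "the"}

text = "It is an optimization the tokenizer will apply"
print(tokenize(text, ENGINE_A_STOPS))  # ['optimization', 'tokenizer', 'apply']
print(tokenize(text, ENGINE_B_STOPS))  # ['it', 'is', 'optimization', 'tokenizer', 'will', 'apply']
```

A word that "disappears" from one index but not the other is often just a casualty of one engine's stop list or token-splitting rules, which is why concrete examples would help narrow down the cause.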

-- George

> -----Original Message-----
> From: Michael Paine [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, April 04, 2007 5:29 AM
> To: [email protected]
> Subject: tokenizer optimizations
> 
> I have found many words that are not tokenized (and indexed) by
> Lucene.NET but are tokenized/indexed by SQL Server 2005 full-text.
> Is there a way to optimize Lucene's tokenizer so it handles more
> words?
>  
> 
