> Another case, which does not seem to be supported, is when a token
> is replaced with a sequence of tokens, each representing an
> *alternative* meaning. Here is an example:
> 
>  'dog' -> 'dog',  'pet'
>  'cat' -> 'cat',  'pet'
>  'pet' -> 'pet'
> 
> When you search for 'pet' you also want to match documents with 'dog' and
> 'cat', but when you search for 'dog' you don't want to match 'cat' or 'pet'.

This strategy can be accommodated in the current design as follows:
have two analyzers, one which expands tokens to include their synonyms,
and one that doesn't.  Use the first for document tokenization and the
second for query tokenization.  Voila, everyone's happy.
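To make the idea concrete, here is a minimal sketch of the two-analyzer
approach using a toy inverted index in Python. This is not Lucene code;
the synonym table, analyzer functions, and index structure are all
hypothetical names chosen just to illustrate the asymmetry between
index-time and query-time tokenization.

```python
# Toy illustration of index-time synonym expansion vs. plain query
# tokenization. All names here are made up for the example.
SYNONYMS = {"dog": ["pet"], "cat": ["pet"]}

def index_analyzer(text):
    # Index-time analyzer: emit each token plus its synonyms,
    # so a document containing 'dog' is also indexed under 'pet'.
    tokens = []
    for tok in text.lower().split():
        tokens.append(tok)
        tokens.extend(SYNONYMS.get(tok, []))
    return tokens

def query_analyzer(text):
    # Query-time analyzer: no expansion, so searching 'dog'
    # does not also search for 'pet'.
    return text.lower().split()

def build_index(docs):
    index = {}
    for doc_id, text in docs.items():
        for tok in index_analyzer(text):
            index.setdefault(tok, set()).add(doc_id)
    return index

def search(index, query):
    # AND semantics: intersect the posting sets of the query tokens.
    return set.intersection(
        *(index.get(t, set()) for t in query_analyzer(query)))

docs = {1: "my dog barks", 2: "my cat sleeps", 3: "a pet store"}
idx = build_index(docs)
print(sorted(search(idx, "pet")))  # -> [1, 2, 3]
print(sorted(search(idx, "dog")))  # -> [1]
```

Searching 'pet' finds the dog, cat, and pet documents, while searching
'dog' finds only the dog document, which is exactly the asymmetry asked
for above.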


_______________________________________________
Lucene-dev mailing list
[EMAIL PROTECTED]
http://lists.sourceforge.net/lists/listinfo/lucene-dev