Hi iorixxx!

Thanks for replying. I managed to get along well enough not to need a
customized tokenizer implementation. That would have been a pain in ...

Anyway, now I have another problem, which is related to the following:

 - I had previously used replace chars and replace patterns (char filters and
token filters) at index time to replace "EP" with "European Parliament". At
that point, it increased the facet_field count for "European Parliament".
Now I have a big problem: I have already deleted the document that
generated "European Parliament", and still that facet_field count will not
decrease!! Is there a way to either remove a facet_field value or decrease
its count manually?
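For context, the index-time replacement I described was configured with a char filter roughly along these lines (a simplified sketch; the field type name and tokenizer here are illustrative, not my exact schema):

```xml
<fieldType name="text_expanded" class="solr.TextField">
  <analyzer>
    <!-- Rewrite "EP" to "European Parliament" before tokenization -->
    <charFilter class="solr.PatternReplaceCharFilterFactory"
                pattern="\bEP\b"
                replacement="European Parliament"/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
  </analyzer>
</fieldType>
```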

Thanks!



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Implementing-a-customised-tokenizer-tp4121355p4121957.html
Sent from the Solr - User mailing list archive at Nabble.com.
