Hi Veit,

I want to build a filter that splits tokens containing both letters [a-zA-Z] and digits into two or more tokens. For example, given a token like "test123" the filter would split it into two tokens, "test" and "123", and it would split "ci7nucha" into "ci", "7", and "nucha". My implementation does the splitting, but at query time the split tokens end up combined into a PhraseQuery rather than separate Term queries. Essentially, I want to build a filter like Solr's WordDelimiterFilter.
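To make the splitting rule concrete, here is a minimal, self-contained sketch of the boundary logic I have in mind (plain C++, no CLucene types; splitAlphaNum is just an illustrative name, not my actual filter code):

    #include <cctype>
    #include <iostream>
    #include <string>
    #include <vector>

    // Split a token at every letter<->digit boundary,
    // e.g. "test123" -> {"test", "123"}, "ci7nucha" -> {"ci", "7", "nucha"}.
    std::vector<std::string> splitAlphaNum(const std::string& token) {
        std::vector<std::string> parts;
        std::string current;
        for (std::size_t i = 0; i < token.size(); ++i) {
            unsigned char c = static_cast<unsigned char>(token[i]);
            if (!current.empty()) {
                bool prevDigit = std::isdigit(
                    static_cast<unsigned char>(current[current.size() - 1])) != 0;
                bool currDigit = std::isdigit(c) != 0;
                if (prevDigit != currDigit) {   // character class changed: close the part
                    parts.push_back(current);
                    current.clear();
                }
            }
            current += static_cast<char>(c);
        }
        if (!current.empty())
            parts.push_back(current);
        return parts;
    }

    int main() {
        std::vector<std::string> parts = splitAlphaNum("ci7nucha");
        for (std::size_t i = 0; i < parts.size(); ++i)
            std::cout << parts[i] << '\n';   // prints: ci, 7, nucha
        return 0;
    }

Inside the real filter this logic would run on each incoming token, and each part would be emitted as its own token.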
Thank you.

2011/11/30, Veit Jahns <nuncupa...@googlemail.com>:
> Hi Ahmed,
>
> I'm sorry, I don't understand your problem completely. Do you mean
> that your query "test123" has to be parsed to two sub-queries "test"
> and "123", but you only get a query "test 123" performed on your
> default field? If so, I guess you have to extend the query parser
> also.
>
> Kind regards,
>
> Veit

--
Sent from my mobile
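PS: one crude workaround that might sidestep the PhraseQuery issue, short of extending the query parser (just a sketch, not tested): pre-split the raw query string at letter/digit boundaries before handing it to the QueryParser, so the parser sees separate terms (combined with the default operator) instead of one token that the analyzer then turns into a phrase. It ignores quoted phrases, field prefixes, ranges, etc. Plain C++ again; preSplitQuery is just an illustrative name:

    #include <cctype>
    #include <iostream>
    #include <string>

    // Insert a space at every letter<->digit boundary in the raw query
    // string, so "test123" becomes "test 123" before parsing.
    std::string preSplitQuery(const std::string& query) {
        std::string out;
        for (std::size_t i = 0; i < query.size(); ++i) {
            unsigned char c = static_cast<unsigned char>(query[i]);
            if (!out.empty()) {
                unsigned char prev =
                    static_cast<unsigned char>(out[out.size() - 1]);
                bool boundary = (std::isalpha(prev) && std::isdigit(c)) ||
                                (std::isdigit(prev) && std::isalpha(c));
                if (boundary)
                    out += ' ';
            }
            out += static_cast<char>(c);
        }
        return out;
    }

    int main() {
        std::cout << preSplitQuery("ci7nucha") << '\n';  // prints: ci 7 nucha
        return 0;
    }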