My objective is to keep the entire input as a single token, as the keyword
tokenizer does, while also splitting the input on whitespace and emitting
those tokens as well, as a whitespace tokenizer does.
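A minimal sketch of the desired token stream, independent of the Lucene Tokenizer API (the class and method names here are hypothetical, chosen just to illustrate the output): emit the whole input once as a keyword-style token, then each whitespace-separated piece. In a real Lucene analyzer this would typically be a custom Tokenizer or TokenFilter, with the split tokens likely given a position increment of 0 so they overlap the keyword token.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DualTokenizer {
    // Emit the trimmed input as one keyword-style token, followed by
    // each whitespace-separated token (whitespace-tokenizer style).
    public static List<String> tokenize(String input) {
        List<String> tokens = new ArrayList<>();
        String trimmed = input.trim();
        if (trimmed.isEmpty()) {
            return tokens;
        }
        tokens.add(trimmed);                     // keyword-style token
        String[] parts = trimmed.split("\\s+");
        if (parts.length > 1) {                  // skip duplicate for single words
            tokens.addAll(Arrays.asList(parts)); // whitespace-style tokens
        }
        return tokens;
    }

    public static void main(String[] args) {
        // "foo bar baz" -> [foo bar baz, foo, bar, baz]
        System.out.println(tokenize("foo bar baz"));
    }
}
```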
-- 
View this message in context: 
http://www.nabble.com/Tokenizer-Question-tp21295325p21298291.html
Sent from the Lucene - General mailing list archive at Nabble.com.