Thanks for the answer, Uwe!

So the behavior has changed since 3.6, hasn't it?

Do I now need to instantiate the analyzer each time I feed the field with a
TokenStream, or does that happen behind the scenes if I construct the field
with the (String name, String value, Field.Store store) constructor?
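
To make sure we're talking about the same thing, here is roughly what I mean; a minimal sketch against 4.1, where the field name, the text and the analyzer choice are just placeholders:

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class FieldVariants {
    public static void main(String[] args) throws IOException {
        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_41);
        Directory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir,
                new IndexWriterConfig(Version.LUCENE_41, analyzer));

        // Variant A: pass the raw String value; the analyzer configured on the
        // IndexWriterConfig is applied behind the scenes at addDocument() time.
        Document doc = new Document();
        doc.add(new TextField("body", "some text to index", Field.Store.YES));
        writer.addDocument(doc);

        // Variant B: feed a TokenStream myself -- here I have to obtain it
        // from an analyzer explicitly before adding the document.
        Document doc2 = new Document();
        TokenStream ts = analyzer.tokenStream("body", new StringReader("more text"));
        doc2.add(new TextField("body", ts));
        writer.addDocument(doc2);

        writer.close();
        dir.close();
    }
}

My question is whether variant B is the only place where I have to deal with the analyzer myself now.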

Another question, then: right now I try my best to reuse the Document and
Field instances when indexing more than one document. Is instantiating an
analyzer heavy enough that it should also be reused?
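
For reference, this is the reuse pattern I have in mind; again only a sketch, with the analyzer created once and handed to the IndexWriter, and the document texts being placeholders:

import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class ReuseSketch {
    public static void main(String[] args) throws IOException {
        // Analyzer created once; my assumption is that the IndexWriter
        // reuses it for every document from then on.
        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_41);
        Directory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir,
                new IndexWriterConfig(Version.LUCENE_41, analyzer));

        // Document and Field created once and reused across documents.
        Document doc = new Document();
        Field body = new TextField("body", "", Field.Store.NO);
        doc.add(body);

        String[] texts = { "first document", "second document", "third document" };
        for (String text : texts) {
            body.setStringValue(text);   // only the value changes per document
            writer.addDocument(doc);
        }

        writer.close();
        dir.close();
    }
}

Is reusing the Document and Field like this still worthwhile in 4.1, and does the analyzer fall into the same category?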


