Diego Cassinera wrote:
>
> Are you sure you are creating the fields with Field.Index.ANALYZED ?
>
>
Yes, my fields are all ANALYZED. (One was ANALYZED_NO_NORMS, but changing it
to ANALYZED did not solve the problem.)
I checked with the debugger, and the analyzer I use to update my index
does contain my ISOLatin1AccentFilter.
It looks like the IndexWriter does not go through the tokenStream method.
Maybe this is because I perform an updateDocument() instead of an
addDocument()?
Here is how I index a document (m_analyzer is the Analyzer returned by my
getAnalyzer method; field and fieldValue form a unique key for the document):

    IndexWriter luceneIndexWriter = new IndexWriter(m_indexDir, m_analyzer,
            IndexWriter.MaxFieldLength.UNLIMITED);
    luceneIndexWriter.updateDocument(new Term(field, fieldValue),
            luceneDocument);
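For reference, ISOLatin1AccentFilter essentially replaces accented ISO
Latin-1 characters with their unaccented equivalents, so the indexed form I
expect for a term can be checked standalone with java.text.Normalizer. This
is only a rough sketch of the filter's effect for debugging, not the Lucene
filter itself:

```java
import java.text.Normalizer;

public class AccentFold {
    // Decompose accented characters (NFD), then strip the combining
    // diacritical marks -- roughly what ISOLatin1AccentFilter does to
    // each token at index time.
    static String fold(String s) {
        String decomposed = Normalizer.normalize(s, Normalizer.Form.NFD);
        return decomposed.replaceAll("\\p{M}", "");
    }

    public static void main(String[] args) {
        System.out.println(fold("déjà"));   // prints "deja"
    }
}
```

If a search for the folded form ("deja") does not match a document
containing "déjà", the filter is probably not in the analyzer chain at
index time. As far as I can tell from the javadoc, updateDocument() is an
atomic delete-then-add and analyzes the new document with the writer's
analyzer exactly as addDocument() does, so the update path by itself should
not skip tokenStream.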
--
View this message in context:
http://www.nabble.com/Indexing-accented-characters%2C-then-searching-by-any-form-tp15412778p20696670.html
Sent from the Lucene - Java Users mailing list archive at Nabble.com.