Thanks Ian,
That's what I needed; things now work like a charm.
Someone really should put this in a blog post or something :D
Good day
2017-02-17 21:16 GMT+08:00 Ian Lea :
Hi
Sounds like you should use FieldType.setTokenized(false). For the
equivalent field in some of my Lucene indexes I use
FieldType idf = new FieldType();
idf.setStored(true);
idf.setOmitNorms(true);
idf.setIndexOptions(IndexOptions.DOCS);
idf.setTokenized(false);
idf.freeze();
There's also Per
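(For future readers of the archive: here's a minimal, self-contained sketch of that FieldType in action, showing that delete-by-term works once the id field is untokenized. The class name, the '_id' field name, the "doc-001" value, and the use of RAMDirectory and KeywordAnalyzer are my own assumptions for the demo, not from the thread.)

```java
import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.RAMDirectory;

public class IdFieldDemo {
    // Index one document with an untokenized '_id' field, then delete it
    // by term. Returns the number of live docs after the delete.
    static int remainingDocs() throws Exception {
        FieldType idf = new FieldType();
        idf.setStored(true);
        idf.setOmitNorms(true);
        idf.setIndexOptions(IndexOptions.DOCS);
        idf.setTokenized(false);   // the whole value is indexed as one term
        idf.freeze();

        RAMDirectory dir = new RAMDirectory();
        try (IndexWriter writer =
                 new IndexWriter(dir, new IndexWriterConfig(new KeywordAnalyzer()))) {
            Document doc = new Document();
            doc.add(new Field("_id", "doc-001", idf));
            writer.addDocument(doc);
            writer.commit();

            // deleteDocuments(Term) bypasses the analyzer; since the field
            // was not tokenized either, the raw term matches exactly.
            writer.deleteDocuments(new Term("_id", "doc-001"));
            writer.commit();

            try (DirectoryReader reader = DirectoryReader.open(dir)) {
                return reader.numDocs();   // expected: 0
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("docs left after delete: " + remainingDocs());
    }
}
```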
Thanks, Ian:
You saved my day!
And there is a further question:
Since the analyzer can only be configured through the IndexWriter,
using different
analyzers for different fields is not possible, right? I only want
this '_id' field to identify
the document in the index, so I could update or
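(In case it helps readers of the archive: per-field analyzers are in fact possible via Lucene's PerFieldAnalyzerWrapper, which delegates to a different analyzer per field name and falls back to a default for everything else. A sketch under my own assumptions; the '_id' field name is made up for the example:)

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.analysis.core.SimpleAnalyzer;
import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
import org.apache.lucene.index.IndexWriterConfig;

public class PerFieldDemo {
    // Build an IndexWriterConfig whose analyzer treats '_id' as a single
    // token (KeywordAnalyzer) while every other field uses SimpleAnalyzer.
    static IndexWriterConfig config() {
        Map<String, Analyzer> perField = new HashMap<>();
        perField.put("_id", new KeywordAnalyzer());
        Analyzer wrapper = new PerFieldAnalyzerWrapper(new SimpleAnalyzer(), perField);
        return new IndexWriterConfig(wrapper);
    }

    public static void main(String[] args) {
        // The single wrapper analyzer is what gets handed to the IndexWriter.
        System.out.println(config().getAnalyzer().getClass().getSimpleName());
    }
}
```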
Hi
SimpleAnalyzer uses LetterTokenizer, which divides text at non-letters.
Your add and search methods use the analyzer, but the delete method doesn't.
Replacing SimpleAnalyzer with KeywordAnalyzer in your program fixes it.
You'll need to make sure that your id field is left alone.
Good to see a
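(To make the failure mode concrete, a small sketch of my own; the '_id' value "doc42" is an invented example. With SimpleAnalyzer the indexed token is "doc", because LetterTokenizer drops the digits, so the raw delete term "doc42" matches nothing; with KeywordAnalyzer the token is "doc42" and the delete hits.)

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.analysis.core.SimpleAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.RAMDirectory;

public class DeleteByTermDemo {
    // Index one doc whose '_id' passes through the given analyzer, then try
    // to delete it with the raw term "doc42". Returns docs left in the index.
    static int docsAfterDelete(Analyzer analyzer) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new TextField("_id", "doc42", Field.Store.YES));
            w.addDocument(doc);
            w.commit();
            // deleteDocuments(Term) does NOT analyze the term
            w.deleteDocuments(new Term("_id", "doc42"));
            w.commit();
            try (DirectoryReader r = DirectoryReader.open(dir)) {
                return r.numDocs();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // SimpleAnalyzer indexed the token "doc", so the delete misses (1 left);
        // KeywordAnalyzer indexed "doc42", so the delete hits (0 left).
        System.out.println("SimpleAnalyzer:  " + docsAfterDelete(new SimpleAnalyzer()));
        System.out.println("KeywordAnalyzer: " + docsAfterDelete(new KeywordAnalyzer()));
    }
}
```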
Hi, all:
I am using version 5.5.4 and find that I can't delete a document via the
IndexWriter.deleteDocuments(term) method.
Here is the test code:
import org.apache.lucene.analysis.core.SimpleAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apac