> > Document doc = new Document();
> > for (int i = 0; i < pages.length; i++) {
> >     doc.add(new Field("text", pages[i], Field.Store.NO,
> >             Field.Index.TOKENIZED));
> >     doc.add(new Field("text", "$$", Field.Store.NO,
> >             Field.Index.UN_TOKENIZED));
> > }
>
> UN_TOKENIZED. Nice idea!
> I will check this out.
Hm... when I try this, something strange happens with my offsets.
When I use
doc.add(new Field("text", pages[i] +
        "012345678901234567890123456789012345678901234567890123456789",
        Field.Store.NO, Field.Index.TOKENIZED));
everything is fine. Offsets are as I expect.
But when I use
doc.add(new Field("text", pages[i], Field.Store.NO, Field.Index.TOKENIZED));
doc.add(new Field("text",
        "012345678901234567890123456789012345678901234567890123456789",
        Field.Store.NO, Field.Index.UN_TOKENIZED));
the offsets of my terms are too high.
What is the difference?
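
In case it helps narrow this down, here is a minimal, self-contained sketch of how the offsets can be read back through term vectors. This assumes a Lucene 2.x API (the era of Field.Index.TOKENIZED/UN_TOKENIZED); the sample field values are just placeholders, and the field setup mirrors the second variant above:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.TermPositionVector;
import org.apache.lucene.index.TermVectorOffsetInfo;
import org.apache.lucene.store.RAMDirectory;

public class OffsetCheck {
    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);

        Document doc = new Document();
        // Two instances of the same field, as in the second variant;
        // term vectors with offsets are stored so they can be inspected.
        doc.add(new Field("text", "first page text", Field.Store.NO,
                Field.Index.TOKENIZED, Field.TermVector.WITH_POSITIONS_OFFSETS));
        doc.add(new Field("text", "$$", Field.Store.NO,
                Field.Index.UN_TOKENIZED, Field.TermVector.WITH_POSITIONS_OFFSETS));
        writer.addDocument(doc);
        writer.close();

        IndexReader reader = IndexReader.open(dir);
        TermPositionVector tpv =
                (TermPositionVector) reader.getTermFreqVector(0, "text");
        String[] terms = tpv.getTerms();
        for (int i = 0; i < terms.length; i++) {
            TermVectorOffsetInfo[] offsets = tpv.getOffsets(i);
            for (int j = 0; j < offsets.length; j++) {
                System.out.println(terms[i] + ": "
                        + offsets[j].getStartOffset() + "-"
                        + offsets[j].getEndOffset());
            }
        }
        reader.close();
    }
}

Running this once with the single concatenated field and once with the two separate field instances should show directly where the start offsets begin to diverge.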
Thank you.