Okay, I figured out my issue (well, actually a coworker spotted it - I was
just too close). A word of warning:
Token "termBuffer" character arrays are fixed size, not sized to the number
of characters!
Yep, I was dropping the term buffer into a String without start and length,
thereby adding unused trailing characters from the buffer to the term.
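For anyone who hits the same thing, here is a minimal sketch of the pitfall. It uses a plain char[] standing in for the Token term buffer (the termBuffer()/termLength() names in the comments are the assumed Lucene 2.x Token accessors): the buffer is fixed size, so converting it without the (start, length) constructor arguments picks up the slack characters past the end of the term.

```java
public class TermBufferPitfall {
    public static void main(String[] args) {
        // Fixed-size buffer, analogous to what Token.termBuffer() returns.
        char[] termBuffer = new char[16];
        // The actual term occupies only the front of the buffer.
        "austell".getChars(0, 7, termBuffer, 0);
        int termLength = 7;  // analogous to Token.termLength()

        // Wrong: the whole buffer, term plus '\u0000' slack (16 chars).
        String wrong = new String(termBuffer);
        // Right: only the live portion of the buffer.
        String right = new String(termBuffer, 0, termLength);

        System.out.println(wrong.length());  // 16
        System.out.println(right);           // austell
    }
}
```

The "wrong" String compares unequal to "austell" even though it prints identically in most consoles, which is exactly the kind of silently-broken term that makes a query stop matching.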
LOL - I sure wish it was! :)
Sadly, that was a typo (Luke, for all its beauties, does not seem to grasp
the concept of a clipboard so the sample was a manual transcription).
A few more details - don't know if this will help or not.
Same query as before, when I do a rewrite of the query in Luke I
> ...
> And expect to match document 156297 (search_text=="Austell GA", type==1).
> ...
> System.out.println(searcher.explain(query, 156296));
156297 != 156296
Could that be it?
--
Ian.
On Thu, May 22, 2008 at 11:21 PM, Casey Dement <[EMAIL PROTECTED]> wrote:
> Hi - trying to execute a search in Lucene and getting results I don't
> understand :(
Hi - trying to execute a search in Lucene and getting results I don't
understand :(
The index contains fields search_text and type - both indexed tokenized.
I'm attempting to execute the query:
+(search_text:austell~0.9 search_text:ga~0.9) +(type:1 type:4)
And expect to match document 156297 (search_text=="Austell GA", type==1).