: Date: Sat, 4 Nov 2006 22:18:13 -0500
: From: James Rhodes <[EMAIL PROTECTED]>
: Reply-To: java-user@lucene.apache.org
: To: java-user@lucene.apache.org
: Subject: Re: 2.0 and Tokenized versus UN_TOKENIZED
:
: Thanks. That helps, but I've tried a lot of combinations and I forget now.
: I'm usi

[...] leave the query text untokenized so it can match the untokenized value you indexed.
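The mismatch behind that advice can be illustrated without Lucene at all: an analyzer in the StandardAnalyzer style lowercases and splits the query text, so none of the resulting tokens can ever equal the single, case-preserved term that an UN_TOKENIZED field stores. A minimal plain-Java sketch (the `analyze` method here is a simplified stand-in for the real analyzer, not Lucene's implementation):

```java
import java.util.Arrays;
import java.util.List;

public class UntokenizedMismatch {
    // Simplified stand-in for StandardAnalyzer: lowercase, then split on whitespace.
    static List<String> analyze(String text) {
        return Arrays.asList(text.toLowerCase().split("\\s+"));
    }

    public static void main(String[] args) {
        // An UN_TOKENIZED field stores the value as ONE term, case intact.
        String indexedTerm = "EAGLE RIVER";
        // Running the same text through the query analyzer yields [eagle, river].
        List<String> queryTokens = analyze("EAGLE RIVER");

        // The analyzed query tokens never equal the single stored term...
        System.out.println(queryTokens.contains(indexedTerm)); // false
        // ...but the raw, unanalyzed query text does.
        System.out.println("EAGLE RIVER".equals(indexedTerm)); // true
    }
}
```

This is why the untokenized query text matches where the parsed query does not: the comparison must be against the exact stored term.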
Date: Sat, 4 Nov 2006 22:18:13 -0500
From: James Rhodes <[EMAIL PROTECTED]>
Reply-To: java-user@lucene.apache.org
To: java-user@lucene.apache.org
Subject: Re: 2.0 and Tokenized versus UN_TOKENIZED

Thanks. That helps, but I've tried a lot of combinations and I forget now.
I'm using StandardAnalyzer for both the index and the query. I can't say for
sure whether I've tried other cases. The specific combination is
lastname:rhodes AND city:"EAGLE RIVER" AND state:AK. Before TOKENIZED there
was no match; after TOKENIZED it matches.
Two questions come to mind...
1> what analyzer are you using for the *query*? Is it possible that when you
query for city you're using a tokenizer that breaks up your city code?
2> what about case? I'll assume that you have tried to search one-word
cities, so how the stream is tokenized won't break [...]
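The case question alone can sink a one-word match: StandardAnalyzer lowercases terms at query time, while an UN_TOKENIZED field keeps the original case, so an upper-case stored value like AK can never be hit by the parsed query. A tiny plain-Java sketch of that effect (the lowercase call is a simplification of the analyzer, not Lucene's code):

```java
public class CaseMismatch {
    // StandardAnalyzer lowercases tokens; an UN_TOKENIZED field preserves case.
    static String analyzeSingleToken(String queryText) {
        return queryText.toLowerCase();
    }

    public static void main(String[] args) {
        String stored = "AK";                      // indexed UN_TOKENIZED, case preserved
        String queried = analyzeSingleToken("AK"); // analyzed query term becomes "ak"
        // Case alone breaks the match, even for a single-word value.
        System.out.println(queried.equals(stored)); // false
    }
}
```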
I'm using the 2.0 branch and I've had issues with searching indexes where
the fields aren't tokenized.
For instance, my index consists of count, lastname, city, state and I used the
following code to index it (the data is in a SQL Server db):

if (count != 0) {
    doc.add(new Field("count", NumberU