I wrote my indexer so that it added each field without tokenizing it:
// Field(name, value, store, index, token) -- stored, indexed, not tokenized
Field fnameField = new Field("fname", fname.toLowerCase(), true, true, false);
Field lnameField = new Field("lname", lname.toLowerCase(), true, true, false);
Field cityField = new Field("city", city.toLowerCase(), true, true, false);
By the way, if this is the case, is the indexer even using the analyzer that I pass to it?
No. Tokenized fields are analyzed. Non-tokenized fields are left as-is. It might be clearer if you used Field.Keyword instead, which is identical to what you have here.
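For example, the indexing code above could be written like this (just a sketch against the Lucene 1.x Field API you are already using; fname, lname, and city are the same strings from your snippet):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

Document doc = new Document();
// Field.Keyword stores the value and indexes it as a single untokenized term,
// exactly equivalent to new Field(name, value, true, true, false)
doc.add(Field.Keyword("fname", fname.toLowerCase()));
doc.add(Field.Keyword("lname", lname.toLowerCase()));
doc.add(Field.Keyword("city", city.toLowerCase()));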
Then in my search code I create the firstname query as a WildcardQuery if the first name is provided (adding a * to the end if it's not already there):
Term fnameTerm = null;
Query fnameQuery = null;
if (fnameIn.length() > 0) {
    if (!fnameIn.endsWith("*")) {
        fnameIn += "*";
    }
    fnameTerm = new Term("fname", fnameIn);
    fnameQuery = new WildcardQuery(fnameTerm);
}
I recommend PrefixQuery in this case.
I presume you lowercased fnameIn? You should, so that it matches what was indexed.
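Something along these lines, keeping your variable names (only a sketch; note that PrefixQuery wants the bare prefix, without the trailing *):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;

Query fnameQuery = null;
if (fnameIn.length() > 0) {
    // lowercase at query time to match the lowercased, untokenized indexed terms
    String prefix = fnameIn.toLowerCase();
    // PrefixQuery takes the bare prefix -- strip any trailing "*"
    if (prefix.endsWith("*")) {
        prefix = prefix.substring(0, prefix.length() - 1);
    }
    fnameQuery = new PrefixQuery(new Term("fname", prefix));
}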
Does this seem like a reasonable way to approach this problem, or am I missing something that's going to bite me in the you-know-what?
Seems reasonable to me as long as you are lowercasing the strings at query time also.
Erik
