Right you are!  Once I updated to the latest cvs version, it all became
apparent.  That'll teach me to be a laggard.  <sigh>

Thanks,
Scott

-----Original Message-----
From: Doug Cutting [mailto:[EMAIL PROTECTED]]
Sent: Monday, June 25, 2001 5:43 PM
To: 'Scott Ganyo'; Lucene-Dev (E-mail)
Subject: RE: [Lucene-dev] Allowing an Analyzer to choose a parsing
strategy based on context


> From: Scott Ganyo [mailto:[EMAIL PROTECTED]]
> 
> Ok, I've been looking at getting the QueryParser to work under this
> new world order and I'm having trouble understanding where to hook
> into it.

I think you just need to change QueryParser.jj line 122 to pass the field
name in to the tokenStream method.  Is that what you're asking?
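
For reference, here's a rough sketch of what that change might look
like.  The variable and method names below are from memory and only
illustrative, not a quote of the actual grammar file:

  // before: the analyzer is asked for a stream without knowing the field
  TokenStream source = analyzer.tokenStream(new StringReader(queryText));

  // after: pass the parsed field name through, so a per-field analyzer
  // (e.g. something like the FieldAnalyzers class below) can pick the
  // right tokenization for that field
  TokenStream source = analyzer.tokenStream(field, new StringReader(queryText));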

> From: Doug Cutting [mailto:[EMAIL PROTECTED]]
> > From: Scott Ganyo [mailto:[EMAIL PROTECTED]]
> 
> I've thought a bit more about this.  The new method should also be
> usable by the query parser, right?  But the query parser doesn't have
> a Document or a Field.  So I think the new method should instead be:
> 
>   public TokenStream tokenStream(String fieldName, Reader text);
> 
> That way the query parser can, after having parsed out field names,
> apply the appropriate analysis to the tokens.
> 
> A utility Analyzer class like the following would also be useful:
> 
>   public class FieldAnalyzers extends Analyzer {
>     private Hashtable fieldToAnalyzer = new Hashtable();
>     public void add(String fieldName, Analyzer analyzer) {
>       fieldToAnalyzer.put(fieldName, analyzer);
>     }
>     public TokenStream tokenStream(String field, Reader reader) {
>       return ((Analyzer)fieldToAnalyzer.get(field)).tokenStream(field, reader);
>     }
>   }
> 
> Probably needs a little more error checking, and maybe a default
> analyzer, but you get the idea...
> 
> Doug
> 
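
To illustrate the error checking and default analyzer mentioned above,
here is one way FieldAnalyzers might be fleshed out.  The constructor
and member names below are only an assumption for the sake of example,
not part of any actual Lucene class, and it presumes the Analyzer and
TokenStream classes from the Lucene analysis package are imported:

  import java.io.Reader;
  import java.util.Hashtable;

  public class FieldAnalyzers extends Analyzer {
    private Hashtable fieldToAnalyzer = new Hashtable();
    private Analyzer defaultAnalyzer;

    // Fields without an explicit entry fall back to this analyzer.
    public FieldAnalyzers(Analyzer defaultAnalyzer) {
      this.defaultAnalyzer = defaultAnalyzer;
    }

    public void add(String fieldName, Analyzer analyzer) {
      fieldToAnalyzer.put(fieldName, analyzer);
    }

    public TokenStream tokenStream(String fieldName, Reader reader) {
      // Look up the per-field analyzer; use the default if none is registered.
      Analyzer analyzer = (Analyzer)fieldToAnalyzer.get(fieldName);
      if (analyzer == null)
        analyzer = defaultAnalyzer;
      return analyzer.tokenStream(fieldName, reader);
    }
  }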

_______________________________________________
Lucene-dev mailing list
[EMAIL PROTECTED]
http://lists.sourceforge.net/lists/listinfo/lucene-dev
