Actually, I'm getting results if I add the attribute to the TokenStream instance, i.e. by calling addAttribute on ts rather than the bare addAttribute(...) call shown in the docs (sketch below).

Can you please confirm whether that is the right way to do it?
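
Just to be concrete, here is roughly what I am running now (only a quick sketch; the class name TokenStreamTest, the field name and the sample text are placeholders of mine), with addAttribute called on the ts instance:

    import java.io.IOException;
    import java.io.StringReader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
    import org.apache.lucene.util.Version;

    public class TokenStreamTest {
      public static void main(String[] args) throws IOException {
        Version matchVersion = Version.LUCENE_XY; // substitute the desired Lucene version for XY
        Analyzer analyzer = new StandardAnalyzer(matchVersion);
        TokenStream ts = analyzer.tokenStream("myfield", new StringReader("some text goes here"));

        // The only change from the doc snippet: addAttribute is called on the ts instance.
        OffsetAttribute offsetAtt = ts.addAttribute(OffsetAttribute.class);

        try {
          ts.reset();   // required before the first incrementToken()
          while (ts.incrementToken()) {
            System.out.println("token: " + ts.reflectAsString(true));
            System.out.println("  offsets: " + offsetAtt.startOffset() + "-" + offsetAtt.endOffset());
          }
          ts.end();     // end-of-stream work, e.g. sets the final offset
        } finally {
          ts.close();   // release resources held by the stream
        }
      }
    }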


Thanks in advance.


On Fri, Oct 5, 2012 at 5:02 PM, selvakumar netaji <vvekselva...@gmail.com> wrote:

> Hi All,
>
>
> I'm reading the docs of Apache Lucene.
>
> I just read through the analyzer docs at
> docs/core/org/apache/lucene/analysis/package-summary.html.
>
>
> Here they have given a code snippet, and I have an ambiguity about the addAttribute
> method. Should it be called on the TokenStream instance?
>
>     Version matchVersion = Version.LUCENE_XY; // Substitute desired Lucene version for XY
>     Analyzer analyzer = new StandardAnalyzer(matchVersion); // or any other analyzer
>     TokenStream ts = analyzer.tokenStream("myfield", new StringReader("some text goes here"));
>     OffsetAttribute offsetAtt = addAttribute(OffsetAttribute.class);
>
>     try {
>       ts.reset(); // Resets this stream to the beginning. (Required)
>       while (ts.incrementToken()) {
>         // Use AttributeSource.reflectAsString(boolean)
>         // for token stream debugging.
>         System.out.println("token: " + ts.reflectAsString(true));
>
>         System.out.println("token start offset: " + offsetAtt.startOffset());
>         System.out.println("  token end offset: " + offsetAtt.endOffset());
>       }
>       ts.end();   // Perform end-of-stream operations, e.g. set the final offset.
>     } finally {
>       ts.close(); // Release resources associated with this stream.
>     }