Hi Timo,
As I mentioned regarding your previous code, you can collect all the text
from the terms:
IndexReader reader = IndexReader.open(ram);
TermEnum te = reader.terms();
StringBuffer sb = new StringBuffer();
while (te.next()) {
    Term t = te.term();
    // Append a space, otherwise the terms run together and
    // cannot be split apart again by a StringTokenizer.
    sb.append(t.text()).append(' ');
}
te.close();
reader.close();
You can then get the tokens by running a StringTokenizer over sb.toString()
and put them into a Map, counting the occurrences.
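The counting step might look like this (a minimal sketch in plain Java; the class and method names are mine, not from the earlier code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class TermCounter {
    // Split the collected text on whitespace and count how often
    // each token occurs.
    public static Map<String, Integer> countOccurrences(String text) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        StringTokenizer st = new StringTokenizer(text);
        while (st.hasMoreTokens()) {
            String token = st.nextToken();
            Integer n = counts.get(token);
            counts.put(token, n == null ? 1 : n + 1);
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = countOccurrences("lucene index lucene");
        System.out.println(counts.get("lucene")); // 2
        System.out.println(counts.get("index"));  // 1
    }
}
```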
As mentioned, I didn't use any information from the index, so I didn't use a
TokenStream, but let me check it out.
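For what it's worth, iterating an analyzer's output might look roughly like this against the Lucene 1.x API (a sketch only, not compiled here since it needs the Lucene jar; the "contents" field name is arbitrary):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;

// Sketch: run raw text through an Analyzer and count each token it
// emits, using the old next()/termText() iteration style.
public static Map tokenCounts(Analyzer analyzer, String text) throws IOException {
    TokenStream stream = analyzer.tokenStream("contents", new StringReader(text));
    Map counts = new HashMap();
    Token token;
    while ((token = stream.next()) != null) {
        String term = token.termText();
        Integer n = (Integer) counts.get(term);
        counts.put(term, n == null ? new Integer(1) : new Integer(n.intValue() + 1));
    }
    stream.close();
    return counts;
}
```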
Kiran
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: 16 February 2004 11:38
To: Lucene Users List
Subject: Re: Did you mean...
On Thursday 12 February 2004 18:35, Viparthi, Kiran (AFIS) wrote:
> As mentioned the only way I can see is to get the output of the
> analyzer directly as a TokenStream iterate through it and insert it
> into a Map.
Could you provide or point me to some example code on how to get and use a
TokenStream? The API docs are somewhat unclear to me...
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]