Sent: Saturday, November 27, 2004 11:50 AM
To: Chuck Williams
Subject: Re: URGENT: Help indexing large document set
I found the reason for the degradation. It is because I was writing to
a RAMDirectory and then adding to an FSDirectory-backed writer. I guess
it makes sense, since the addIndexes call
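For anyone hitting the same thing, the batch-then-merge pattern being described looks roughly like this (a sketch against the Lucene 1.4 API; the index path, analyzer, and field name "key" are placeholders of mine, not from the original mails):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class BatchIndexer {
    // Build one batch of documents in a RAMDirectory, then merge the
    // whole in-memory segment into the on-disk index in one call.
    static void indexBatch(String indexPath, String[] keys) throws Exception {
        RAMDirectory ramDir = new RAMDirectory();
        IndexWriter ramWriter =
            new IndexWriter(ramDir, new StandardAnalyzer(), true);
        for (int i = 0; i < keys.length; i++) {
            Document doc = new Document();
            // unique, untokenized key field, as in the original question
            doc.add(Field.Keyword("key", keys[i]));
            ramWriter.addDocument(doc);
        }
        ramWriter.close();

        // false = append to the existing on-disk index
        IndexWriter fsWriter =
            new IndexWriter(indexPath, new StandardAnalyzer(), false);
        fsWriter.addIndexes(new Directory[] { ramDir });
        fsWriter.close();
    }
}
```

Note that addIndexes() triggers a merge/optimize of the target index, which is one plausible source of the per-batch cost growing as the on-disk index grows.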
On Wednesday 24 November 2004 00:37, John Wang wrote:
Hi:
I am trying to index 1M documents, with batches of 500 documents.
Each document has a unique text key, which is added as a
Field.Keyword(name,value).
For each batch of 500, I need to make sure I am not adding a
document with a key that is already in the current index.
Thanks Paul!
Using your suggestion, I have changed the update check code to use
only the indexReader:
try {
    localReader = IndexReader.open(path);
    while (keyIter.hasNext()) {
        key = (String) keyIter.next();
        term = new Term("key", key);
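Filling in the part the mail cut off (the field name "key", the docFreq() test, and the finally block are my guesses, not from the original message), the whole check would look something like:

```java
import java.util.Iterator;
import java.util.List;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;

public class DupCheck {
    // Remove from 'keys' every key already present in the index at 'path',
    // so the caller only adds genuinely new documents.
    static void dropExistingKeys(String path, List keys) throws Exception {
        IndexReader localReader = null;
        try {
            localReader = IndexReader.open(path);
            for (Iterator keyIter = keys.iterator(); keyIter.hasNext();) {
                String key = (String) keyIter.next();
                Term term = new Term("key", key);
                // docFreq > 0 means some document already has this key
                if (localReader.docFreq(term) > 0) {
                    keyIter.remove();
                }
            }
        } finally {
            if (localReader != null) {
                localReader.close();
            }
        }
    }
}
```

Using docFreq() avoids running a full search per key, which matters when you repeat this check for every batch of 500.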
the slowdown stops after a certain point, especially if you increase
your batch size.
Chuck
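If it helps, the knobs that control how often Lucene 1.4 merges on-disk segments are public fields on IndexWriter; the values below are illustrative only, and the index path is a placeholder:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class TuneWriter {
    public static void main(String[] args) throws Exception {
        // Append to the existing on-disk index (path is a placeholder).
        IndexWriter writer = new IndexWriter("/path/to/index",
                                             new StandardAnalyzer(), false);
        // Public tuning fields in Lucene 1.4 -- sample values only.
        writer.mergeFactor = 50;    // merge segments less frequently
        writer.minMergeDocs = 500;  // buffer a whole batch in memory
                                    // before writing a new segment
        // ... addDocument() calls for the batch go here ...
        writer.close();
    }
}
```

Larger values trade memory and open-file count for fewer merges, which is consistent with the observation that a bigger batch size flattens the slowdown.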
-----Original Message-----
From: John Wang [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 24, 2004 12:21 PM
To: Lucene Users List
Subject: Re: URGENT: Help indexing large document set
Hi:
I am trying to index 1M documents, with batches of 500 documents.
Each document has a unique text key, which is added as a
Field.Keyword(name,value).
For each batch of 500, I need to make sure I am not adding a
document with a key that is already in the current index.
To do
-----Original Message-----
From: John Wang [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 23, 2004 3:38 PM
To: [EMAIL PROTECTED]
Subject: URGENT: Help indexing large document set
Hi:
I am trying to index 1M documents, with batches of 500 documents.
Each document has a unique text key, which is added as a
Field.Keyword(name,value).
For each batch of 500, I need to make sure I am not adding a
document with a key that is already in the current index.