I currently have a setup that indexes into RAM and then periodically 
merges that into a disk-based index.
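
Concretely, the scheme is something along these lines (just a sketch
against the Lucene 2.3 API; the class, field names, path and analyzer
are placeholders, and it assumes the disk index already exists):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.RAMDirectory;

public class RamBufferedIndexer {

    private RAMDirectory ramDir;
    private IndexWriter ramWriter;
    private final Directory diskDir;

    public RamBufferedIndexer(String indexPath) throws Exception {
        diskDir = FSDirectory.getDirectory(indexPath);
        newRamIndex();
    }

    private void newRamIndex() throws Exception {
        ramDir = new RAMDirectory();
        ramWriter = new IndexWriter(ramDir, new StandardAnalyzer(), true);
    }

    // New documents go into the RAM index only.
    public void index(String id, String body) throws Exception {
        Document doc = new Document();
        doc.add(new Field("id", id, Field.Store.YES, Field.Index.UN_TOKENIZED));
        doc.add(new Field("body", body, Field.Store.NO, Field.Index.TOKENIZED));
        ramWriter.addDocument(doc);
    }

    // Periodically fold the RAM segments into the on-disk index and
    // start a fresh RAM index.
    public void mergeToDisk() throws Exception {
        ramWriter.close();
        IndexWriter diskWriter = new IndexWriter(diskDir, new StandardAnalyzer(), false);
        diskWriter.addIndexes(new Directory[] { ramDir });
        diskWriter.close();
        newRamIndex();
    }
}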

Searches are done against the disk-based index, and deletes are handled 
by keeping a list of deleted documents, filtering them out of search 
results, and applying them to the index at merge time.
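
The delete handling looks roughly like this (again only a sketch; the
"id" field and the surrounding plumbing are placeholders):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class DeleteTracker {

    // Ids of documents deleted since the last merge; rebuilt from the
    // replay log if the machine dies.
    private final Set deletedIds = new HashSet();

    public void delete(String id) {
        deletedIds.add(id);
    }

    // Search the disk index, dropping hits that are logically deleted.
    public List search(IndexSearcher searcher, Query query) throws Exception {
        List results = new ArrayList();
        Hits hits = searcher.search(query);
        for (int i = 0; i < hits.length(); i++) {
            Document doc = hits.doc(i);
            if (deletedIds.contains(doc.get("id"))) {
                continue; // deleted but not yet applied to the disk index
            }
            results.add(doc);
        }
        return results;
    }

    // At merge time, apply the buffered deletes to the disk index for real.
    public void applyDeletes(IndexWriter diskWriter) throws Exception {
        for (Iterator it = deletedIds.iterator(); it.hasNext();) {
            diskWriter.deleteDocuments(new Term("id", (String) it.next()));
        }
        deletedIds.clear();
    }
}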

All this was done to make sure we didn't corrupt the index (which we'd 
seen happen a few times when the indexing machine failed for whatever 
reason). With this scheme, if the machine fails, all that's lost is the 
RAM index and the list of deletes; we then simply replay all the actions 
since the last merge and we're back where we started.

However, it occurred to me that this might all be redundant with Lucene 
2.3 (it may always have been redundant, come to think of it): should I 
just open a disk-based index with autoCommit=false and periodically 
commit the changes by close()ing and then re-open()ing the disk index? 
Is that atomic? I.e. is there any situation in which this could leave 
the index corrupted?
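
In other words, something like the following (sketch only; I'm going
from memory of the 2.2/2.3 IndexWriter constructors that take an
autoCommit flag, and the path is a placeholder):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class CommitByClose {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.getDirectory("/path/to/index");

        // autoCommit=false: nothing the writer does is visible to readers
        // until the writer is closed.
        IndexWriter writer = new IndexWriter(dir, false, new StandardAnalyzer());

        // ... addDocument() / deleteDocuments() calls for this batch ...

        // close() writes a new segments_N file; searchers that re-open()
        // the index after this point see the whole batch at once.
        writer.close();
    }
}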

Thanks,

Simon


