Hi,
I am running into an out-of-memory problem with Lucene 1.4.1.
I am re-indexing a fairly large number (about 30,000) of documents.
I identify old instances by checking for a unique ID field, delete them with IndexReader.delete(), and add the new document version.
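
For reference, here is a stripped-down sketch of what my update step looks like; the class and field names ("Reindexer", "id") are simplified placeholders, and the loop over all 30,000 documents is omitted:

import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

public class Reindexer {
    // Stripped-down update step: delete the old copy of the document by its
    // unique ID, close the reader, then add the new version via an IndexWriter.
    public void update(String indexDir, Document newVersion, String uniqueId)
            throws IOException {
        IndexReader reader = IndexReader.open(indexDir);
        reader.delete(new Term("id", uniqueId)); // remove old instance(s) by term
        reader.close();                          // closed directly after delete(term)

        IndexWriter writer = new IndexWriter(indexDir, new StandardAnalyzer(), false);
        writer.addDocument(newVersion);          // new version already carries the "id" keyword field
        writer.close();
    }
}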


A heap dump shows a huge number of HashMaps holding SegmentTermEnum objects (256,891).

The IndexReader is closed directly after delete(term)...

This did not seem to happen with version 1.2 (same number of objects and all...).
Does anyone have an idea why these objects are "hanging" around, or what I can do to avoid them?


Thanks
Daniel
