I brought this up a couple of weeks ago on the Lucene-user list ("Out of memory in Lucene 1.4.1 when re-indexing large number of documents"), and it started a thread that surfaced some other out-of-memory problems. Unfortunately, the fixes for those introduced in 1.4.2 do not seem to apply to my problem: when (re-)indexing a considerable number of objects with any Lucene version later than 1.4RC3 on an IBM JDK 1.3.1 JVM, the number of SegmentTermEnum instances keeps mounting despite forced gc runs, until I get an out-of-memory error. With a JDK 1.4.2 JVM the same code runs fine.
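The JVM behaviour can be shown outside of Lucene. Below is a minimal sketch of the leak pattern (the demo class is made up; Lucene's TermInfosReader keeps its per-thread SegmentTermEnum in a ThreadLocal in much the same way): on the affected JVMs the value held by a ThreadLocal is not reclaimed even after the ThreadLocal itself becomes unreachable.

    // Hypothetical stand-alone demo, plain JDK, no Lucene involved.
    public class ThreadLocalLeakDemo {
        static class Holder {
            private final ThreadLocal perThread = new ThreadLocal();
            Holder() {
                // stands in for the per-thread SegmentTermEnum
                perThread.set(new byte[1024 * 1024]);
            }
        }

        public static void main(String[] args) {
            for (int i = 0; i < 10000; i++) {
                Holder h = new Holder(); // discarded right away, like a closed reader
                h = null;
                System.gc(); // on a leaking JVM the byte[] values still accumulate
            }
        }
    }

On a correct JVM the loop runs in constant memory; on IBM JDK 1.3.1 I see exactly the kind of growth described above.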
A quick fix that Doug Cutting proposed does not seem to work either (tried against 1.4.1):

Index: src/java/org/apache/lucene/index/TermInfosReader.java
===================================================================
RCS file: /home/cvs/jakarta-lucene/src/java/org/apache/lucene/index/TermInfosReader.java,v
retrieving revision 1.9
diff -u -r1.9 TermInfosReader.java
--- src/java/org/apache/lucene/index/TermInfosReader.java	6 Aug 2004 20:50:29 -0000	1.9
+++ src/java/org/apache/lucene/index/TermInfosReader.java	10 Sep 2004 17:46:47 -0000
@@ -45,6 +45,11 @@
     readIndex();
   }
 
+  protected final void finalize() {
+    // patch for pre-1.4.2 JVMs, whose ThreadLocals leak
+    enumerators.set(null);
+  }
+
   public int getSkipInterval() {
     return origEnum.skipInterval;
   }

I also checked this against Lucene 1.4.2, but under IBM JDK 1.3.1 the number of SegmentTermEnum instances still grows without bound.

Daniel
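PS: One possible reason the finalize() patch does not help (a guess on my part, not something I have verified inside Lucene): finalize() runs on the JVM's finalizer thread, so enumerators.set(null) clears the finalizer thread's ThreadLocal slot, not the indexing thread's. A small stand-alone demo of that thread behaviour (class name made up, plain JDK, no Lucene involved):

    public class FinalizerThreadDemo {
        private static final ThreadLocal slot = new ThreadLocal();

        protected void finalize() {
            // runs on the finalizer thread, not on main
            System.out.println("finalize() ran on: " + Thread.currentThread().getName());
            slot.set(null); // nulls the *finalizer* thread's entry only
        }

        public static void main(String[] args) throws Exception {
            slot.set(new byte[1024]);   // main thread's entry
            new FinalizerThreadDemo();  // becomes garbage immediately
            System.gc();
            System.runFinalization();
            Thread.sleep(500);
            // main's entry is untouched by the finalizer:
            System.out.println("main still holds value: " + (slot.get() != null));
        }
    }

If that is what is going on, the patch would never release the indexing thread's SegmentTermEnum, regardless of JVM version.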