Hi all.

The latest Lucene release (2.3.0) flushes buffered documents from memory to a
file-system index based on RAM usage, with 16 MB as the default threshold.
Fine; that works for me as long as I am using a single thread to write to the
index.

However, I have been trying to use multiple threads (say 'n'), each writing to
its own individual file-system index. Now, occasionally and seemingly at
random, I get "OutOfMemoryError: Java heap space", while other times it works
fine. I know there may be issues with my environment, but I haven't been able
to figure out why it works sometimes and fails at others.

So, I would be obliged if the following doubts could be cleared up:

a) Let's say there are 'n' threads writing to 'n' indexes. Each thread has its
own unique indexer; this is implemented by passing the corresponding
IndexWriter reference to the thread, which then uses it in its run() method.
Now, as we know, 16 MB is the default buffer size before the docs are flushed
to the file-system index. Thus, my question is:

         does this limit hold per thread's indexer? In other words, do we
need at least 16*n MB of heap at our disposal before we can be sure there
will be no OutOfMemoryError? If that is the case, does it also mean that if
we are working with a single main thread only, and give the JVM anything less
than 16 MB of heap, the error would always occur?
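For reference, my setup looks roughly like this (a simplified sketch, not my
actual code: the thread count, index paths, and the 4 MB buffer size are
illustrative; it uses the Lucene 2.3.0 IndexWriter and setRAMBufferSizeMB
API):

```java
import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class MultiIndexDemo {
    public static void main(String[] args) throws Exception {
        final int n = 4;                       // number of indexing threads (illustrative)
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            // Each thread gets its own IndexWriter over its own directory.
            final IndexWriter writer = new IndexWriter(
                FSDirectory.getDirectory(new File("index-" + i)),
                new StandardAnalyzer(),
                true /* create */);
            // Shrink the per-writer RAM buffer from the 16 MB default, on the
            // assumption that worst-case buffered RAM is (buffer size * n).
            writer.setRAMBufferSizeMB(4.0);
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    try {
                        // ... writer.addDocument(...) calls go here ...
                        writer.close();        // flushes remaining buffered docs
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
            threads[i].start();
        }
        for (int i = 0; i < n; i++) {
            threads[i].join();
        }
    }
}
```

With this sketch, is my reasoning right that the JVM heap must accommodate at
least the sum of all the writers' buffers (here 4 * 4 MB) plus whatever else
the application allocates?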

-- 
View this message in context: 
http://www.nabble.com/Query-in-Lucene-2.3.0-tp15175141p15175141.html
Sent from the Lucene - Java Users mailing list archive at Nabble.com.


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
