Matthias Jaekle wrote:
050721 071234 * Optimizing index...
... this takes a long time ...
Hello,
optimizing the index takes extremely long. I have the feeling that in earlier versions this was much faster. I am trying to index a segment of 7,000,000 pages, and it has been running for 10 days now.
Hi Andrzej,
thanks for your response. I am not really familiar with the Lucene internals. I am just running Nutch with the default parameters on a Debian sarge system with an ext3 file system, a maximum of 1024 open files, and 1 GB of RAM.
So is ext3 a bad file system for millions of files?
I could
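Since the 1024 open-file limit comes up here, a quick way to inspect and raise the per-process limit from within a process is Python's standard `resource` module (a minimal sketch; the thread itself does not show how the limit was set):

```python
import resource

# Check the per-process open-file limit (the poster's box allows 1024).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files: soft limit={soft}, hard limit={hard}")

# A process may raise its soft limit up to the hard limit without
# special privileges; going beyond the hard limit requires root or
# an entry in /etc/security/limits.conf.
if hard != resource.RLIM_INFINITY:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

On a stock Debian sarge install the same check is `ulimit -n` in the shell.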
Hi,
--- Andrzej Bialecki [EMAIL PROTECTED] wrote:
Matthias Jaekle wrote:
Hi Andrzej,
thanks for your response. I am not really familiar with the Lucene internals. I am just running Nutch with the default parameters on a Debian sarge system with an ext3 file system, a maximum of 1024
You probably don't want to touch indexer.termIndexInterval and indexer.maxMergeDocs (which determines the maximum size of an individual segment).
Why is maxMergeDocs 50 by default? Shouldn't this value be much higher?
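The properties discussed above live in Nutch's configuration and can be overridden in nutch-site.xml. A sketch of what such an override looks like (the values shown are illustrative, not recommendations from this thread):

```xml
<!-- nutch-site.xml overrides nutch-default.xml; values are examples only. -->
<property>
  <name>indexer.mergeFactor</name>
  <value>50</value>
  <description>How many segments are buffered before a merge; higher
  values index faster but keep more files open at once.</description>
</property>
<property>
  <name>indexer.maxMergeDocs</name>
  <value>2147483647</value>
  <description>Upper bound on documents per merged segment.</description>
</property>
<property>
  <name>indexer.termIndexInterval</name>
  <value>128</value>
  <description>Interval between indexed terms; affects memory used
  when the index is read.</description>
</property>
```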
I found out how to calculate the number of open files. But how could I calculate the
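For the open-file calculation mentioned above, a commonly cited Lucene-era rule of thumb (an approximation I am adding here, not a formula given in this thread) is that a merge can hold roughly (1 + mergeFactor) segments open at once, each segment consisting of about 7 files plus one per indexed field when the non-compound format is used:

```python
def lucene_open_files(merge_factor, indexed_fields, compound=False):
    """Rough upper bound on simultaneously open files during a merge.

    Approximation from old Lucene documentation: with the non-compound
    format each segment is about 7 files plus one .f file per indexed
    field, and (1 + mergeFactor) segments may be open during a merge.
    The compound format packs each segment into a single file.
    """
    files_per_segment = 1 if compound else 7 + indexed_fields
    return (1 + merge_factor) * files_per_segment

# Nutch's default mergeFactor of 50 with, say, 10 indexed fields
# (the field count here is a made-up example):
print(lucene_open_files(50, 10))  # 867 - close to a 1024-file limit
```

This suggests why a 1024-file limit can become tight with the default mergeFactor; the compound file format or a lower mergeFactor reduces the count sharply.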