Ben Halsted wrote:
I'm getting the dreaded "Too many open files" error.
I've checked my system settings for file-max:
$ cat /proc/sys/fs/file-nr
2677 1945 478412

$ cat /proc/sys/fs/file-max
478412

Your system-wide limit (fs.file-max) is nowhere near exhausted, so you are most likely hitting the per-process descriptor limit instead. What does 'ulimit -n' print? Look in /etc/security/limits.conf to increase the limit.
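For example (a sketch only; this assumes bash and a system that applies /etc/security/limits.conf via pam_limits, and the username and numbers below are placeholders):

$ ulimit -n
1024

$ ulimit -n 4096    # raise the soft limit for this shell, up to the hard limit

Lines to add in /etc/security/limits.conf (take effect at next login):

ben    soft    nofile    4096
ben    hard    nofile    8192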

What would be the best way to work around (or fix) this? Merging 10 indexes
at a time and then merging the results down until I get just one index?

Yes. You can decrease indexer.mergeFactor to make this happen. Perhaps we should decrease the default. With the addition of crc files, the number of open files is doubled. So 50 indexes with 10 open files each yields 1,000 open files once the crc files are counted, and the JVM itself needs more than 24 on top of that. So I guess the default should be decreased to 30 or so.
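In case it helps, here is roughly what that override might look like in conf/nutch-site.xml (a sketch, assuming the usual Nutch XML config override file; the value of 10 is just an example matching the batch size you suggested):

<!-- merge at most 10 segments at a time, keeping fewer files open at once -->
<property>
  <name>indexer.mergeFactor</name>
  <value>10</value>
</property>

A lower merge factor means fewer files open simultaneously during merging, at the cost of more merge passes.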

What about the dedup process? It seems able to manage the 100+ indexes
fine, but if I reverse the order, merging the indexes first and then
removing dupes, I think it may speed things up. Ideas?

Then you end up with dupes still taking up space in your final index, which is not optimal for search.

Doug
