Rasik Pandey writes:
> As a side note, regarding the "Too many open files" issue, has anyone noticed that
> this could be related to the JVM? For instance, I have a coworker who tried to run a
> number of "optimized" indexes in a JVM instance and received the "Too many open
> files" error. With the same number of available file descriptors (on Linux, ulimit =
> unlimited), when he split the indices over two JVM instances the problem
> disappeared. He also tested by increasing the memory available to the JVM instance,
> via the -Xmx parameter, with all indices running in one JVM instance, and again the
> problem disappeared. I think the issue deserves more testing to pin-point the exact
> problem, but I was just wondering if anyone has already experienced anything similar
> or if this information could be of use to anyone, in which case we should probably
> start a new thread dedicated to this issue.

The limit is per process. Two JVMs make two processes. (There's a per-system limit too, but it's much higher; I think you find it in /proc/sys/fs/file-max, and its default value depends on the amount of memory the system has.)
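For example, on most Linux systems you can check both limits from a bash shell with something like:

    # per-process limit for the current shell (and any process it starts, e.g. the JVM)
    ulimit -n

    # system-wide limit on open file handles
    cat /proc/sys/fs/file-max

    # currently allocated / free / maximum handles
    cat /proc/sys/fs/file-nr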
AFAIK there's no way of setting openfiles to unlimited. At least neither bash nor tcsh accepts that. But it should not be a problem to set it to a very high value. And you should be able to increase the system-wide limit by writing to /proc/sys/fs/file-max, as long as you have enough memory. I never used this, though.

Morus
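P.S. If it helps, raising the limits on a typical Linux box looks roughly like this (the numbers are just example values, pick whatever suits your setup):

    # raise the per-process limit before starting the JVM
    # (going above the hard limit requires root)
    ulimit -n 8192

    # raise the system-wide limit (as root)
    echo 65536 > /proc/sys/fs/file-max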
