The easiest way is to use sysctl to view and change the maximum open-files
setting. For some reason fs.file-max defaults to 8000 or some similarly small
value (on Mandrake, at least).

Run sysctl fs.file-nr to view what is currently in use and what the maximum is
set to.  It reports file handle usage as three numbers, xxx yyy zzz, where
xxx = the most the system has used, yyy = the number currently in use, and
zzz = the maximum allocated.  The in-use figure (yyy) should never get near
the maximum (zzz); if it does, you will get the "out of files" errors.  Try
running the command while you are seeing the issue and check what the system
values are.
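
For example, you might see something like this (the numbers below are purely
illustrative; yours will differ):

    $ sysctl fs.file-nr
    fs.file-nr = 2934       467     8192

Here the last value (8192) is the fs.file-max limit the kernel is enforcing.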

To change it, use:

    sysctl -w fs.file-max=32768

to give it something decent.
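
Note that a -w change only lasts until the next reboot. To make it permanent,
you can also add the setting to /etc/sysctl.conf (the standard location on
most Linux distributions) and reload it:

    fs.file-max = 32768

    sysctl -p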

Should solve your problems.

Stephen...


> -----Original Message-----
> From: Morus Walter [mailto:[EMAIL PROTECTED] 
> Sent: Friday, 20 February 2004 7:41 PM
> To: Lucene Users List
> Subject: open files under linux
> 
> Rasik Pandey writes:
>  
> > As a side note, regarding the "Too many open files" issue,
> > has anyone noticed that this could be related to the JVM? For
> > instance, I have a coworker who tried to run a number of
> > "optimized" indexes in a JVM instance and received the "Too
> > many open files" error. With the same number of available
> > file descriptors (on Linux, ulimit = unlimited), when he split the
> > number of indices over two JVM instances, his problem
> > disappeared.  He also tested the problem by increasing the
> > memory available to the JVM instance, via the -Xmx parameter,
> > with all indices running in one JVM instance, and again the
> > problem disappeared. I think the issue deserves more testing
> > to pinpoint the exact problem, but I was just wondering if
> > anyone has already experienced anything similar or if this
> > information could be of use to anyone, in which case we
> > should probably start a new thread dedicated to this issue.
> >
> The limit is per process. Two JVMs make two processes.
> (There's a per-system limit too, but it's much higher; I
> think you'll find it in /proc/sys/fs/file-max, and its default
> value depends on the amount of memory the system has.)
> 
> AFAIK there's no way of setting open files to unlimited. At
> least neither bash nor tcsh accepts that.
> But it should not be a problem to set it to a very high value.
> And you should be able to increase the system-wide limit by
> writing to /proc/sys/fs/file-max, as long as you have enough memory.
> 
> I never used this, though.
> 
> Morus
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
