Thank you all for helping me get this far. I suppose I just open a lot of
file descriptors.

How can I change the max number of file descriptors for the whole system
(I'm using Red Hat 5.2)?

When my servlets run out of file descriptors, other processes owned by
other users get messages like "Too many open files in system" too.
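
From what I've been able to dig up so far, the system-wide ceiling on the
2.0.x kernel that Red Hat 5.2 ships is exposed under /proc, so something
like the following (as root) should raise it. The numbers below are only a
guess on my part, and if I read the docs right the 2.0 kernels also want
inode-max kept at roughly three to four times file-max:

    # example values only; raise the system-wide open file limit
    echo 8192 > /proc/sys/fs/file-max
    # 2.0 kernels: keep inode-max at roughly 3-4x file-max
    echo 24576 > /proc/sys/fs/inode-max

As far as I can tell these settings do not survive a reboot, so they would
have to go into an init script such as /etc/rc.d/rc.local. Can anyone
confirm this is the right approach?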

> UNIX (Linux included) allows you to set a max number of file descriptors
> that a given process can have open.  The command controlling this
> depends on the shell you're using.  For instance, under Linux tcsh, it
> is "limit" and under bash (/bin/sh) it is "ulimit"
> 
> I think the default is 256 -- you can check the current setting by
> typing "limit descriptors" in tcsh or "ulimit -n" in bash.  So, to set
> it to something higher under tcsh, type "limit descriptors XXX" where XXX
> is the number you want.  Under bash, type "ulimit -n XXX".  Read the man
> page for bash or tcsh (limit and ulimit are builtin commands) for more
> information.  When I run servers, I usually set it to 1024.
> 
> Also, you really, really should not rely on garbage collection to close
> files and sockets for you.  While technically it's true that they will
> be closed and the resources will be freed when GC finally catches them,
> it's no way to write good software.
> 
> -nate
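
Point taken about not leaving it to the garbage collector to close things.
For what it's worth, this is roughly the pattern I'm switching the servlet
code to; the class and method names below are just placeholders, not the
real code:

    import java.io.FileInputStream;
    import java.io.IOException;

    public class CloseExample {
        // Count the bytes in a file, releasing the descriptor as soon as
        // we are done instead of waiting for finalization to close it.
        static int countBytes(String path) throws IOException {
            FileInputStream in = new FileInputStream(path);
            try {
                int count = 0;
                while (in.read() != -1) {
                    count++;
                }
                return count;
            } finally {
                in.close();  // runs even if read() throws, so the fd is not leaked
            }
        }
    }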

-- 
    _  _   _ ___    ___ __    __  __ ___  ___ ___
   | |/_\ / / _ \  | _ \  )  / _)/  ) _ \/ _ (   )
  _| /(_)\ ( (_) ) ||_) ) ) ( (_-| | (_)  (_) | |
 (__/_/ \_\_\___/  |___/__)  \___|_|\___/\___/|_|

