On Monday, 14.07.2008, at 09:50 -0400, Yonik Seeley wrote:
> Solr uses reference counting on IndexReaders to close them ASAP (since
> relying on gc can lead to running out of file descriptors).
> 
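If I understand the reference-counting idea correctly, it is roughly
the following (just a sketch of the concept, not Solr's actual code;
the class name is made up):

    import org.apache.lucene.index.IndexReader;

    import java.io.IOException;
    import java.util.concurrent.atomic.AtomicInteger;

    // Sketch only: callers incref() before using the reader and
    // decref() when done; the last release closes the reader right
    // away instead of waiting for GC/finalization to free the fd.
    class RefCountedReader {
        private final IndexReader reader;
        private final AtomicInteger refCount = new AtomicInteger(1);

        RefCountedReader(IndexReader reader) {
            this.reader = reader;
        }

        void incref() {
            refCount.incrementAndGet();
        }

        void decref() throws IOException {
            if (refCount.decrementAndGet() == 0) {
                reader.close();
            }
        }
    }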

How do you force them to close ASAP? I use File and FileOutputStream
objects; I close the output streams and then call delete on the files. I
still have problems with too many open files. After a while I get
exceptions saying that I cannot open any new files. After that the
threads stop working, and a day later the files are still open and
marked for deletion. I have to kill the server to get it running again,
or call System.gc() periodically.
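
For reference, my write-and-delete code looks roughly like this
(simplified; the names are made up):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class WriteAndDelete {
        static void writeThenDelete(File file, byte[] data) throws IOException {
            FileOutputStream out = new FileOutputStream(file);
            try {
                out.write(data);
            } finally {
                out.close(); // close first, or the descriptor stays open
            }
            if (!file.delete()) {
                // delete() just returns false on failure; the descriptor
                // itself is only freed once every stream that still holds
                // the file open has been closed
                System.err.println("could not delete " + file);
            }
        }
    }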

How do you force the VM to release the files?

This happens under Red Hat with a 2.4 kernel and under Debian Etch with
a 2.6 kernel.
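
In case it helps to reproduce this: I watch the descriptor count of the
JVM via /proc while the server runs (Linux-only sketch, assuming /proc
is mounted):

    import java.io.File;

    public class FdCount {
        static int openFds() {
            // each entry in /proc/self/fd is one open descriptor
            // of the current JVM process
            String[] fds = new File("/proc/self/fd").list();
            return fds == null ? -1 : fds.length;
        }
    }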

Thanks,

Brian
> -Yonik
> 
> On Mon, Jul 14, 2008 at 9:15 AM, Brian Carmalt <[EMAIL PROTECTED]> wrote:
> > Hello,
> >
> > I have a similar problem, not with Solr, but in Java. From what I have
> > found, it is a usage and OS problem: it comes from using too many files and
> > the time it takes the OS to reclaim the fds. I found the recommendation
> > that System.gc() should be called periodically. It works for me. It may not
> > be the most elegant solution, but it works.
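> >
> > For what it's worth, the workaround is just a scheduled task (a
> > sketch, assuming Java 5+ for ScheduledExecutorService):
> >
> >     import java.util.concurrent.Executors;
> >     import java.util.concurrent.ScheduledExecutorService;
> >     import java.util.concurrent.TimeUnit;
> >
> >     public class PeriodicGc {
> >         public static void start() {
> >             ScheduledExecutorService timer =
> >                 Executors.newSingleThreadScheduledExecutor();
> >             // workaround, not a fix: a GC run finalizes abandoned
> >             // streams, which releases their file descriptors
> >             timer.scheduleAtFixedRate(new Runnable() {
> >                 public void run() {
> >                     System.gc();
> >                 }
> >             }, 5, 5, TimeUnit.MINUTES);
> >         }
> >     }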
> >
> > Brian.
> >
> > On Monday, 14.07.2008, at 11:14 +0200, Alexey Shakov wrote:
> >> We have now set the limit to ~10000 files,
> >> but this is not the solution - the number of open files keeps
> >> increasing steadily.
> >> Sooner or later, this limit will be exhausted.
> >>
> >>
> >> Fuad Efendi wrote:
> >> > Have you tried [ulimit -n 65536]? I don't think it relates to files
> >> > marked for deletion...
> >> > ==============
> >> > http://www.linkedin.com/in/liferay
> >> >
> >> >
> >> >> Sooner or later, the system crashes with the message "Too many open files"
