Hi Florent,

700k open files sounds plain wrong and looks like a file descriptor leak. Could
you please create a bug report for this at 
https://github.com/Graylog2/graylog2-server/issues/new and include the list 
of open files of the Java process running Graylog on one of those servers?
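
If it helps, here is a minimal sketch of how you could dump that list on
Linux without extra tools, by reading the symlinks under /proc/<pid>/fd
(the class name OpenFileLister and the use of /proc are my assumptions,
nothing Graylog ships):

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class OpenFileLister {
        public static void main(String[] args) throws IOException {
            // Pass the PID of the Graylog JVM; "self" inspects this process.
            String pid = args.length > 0 ? args[0] : "self";
            // On Linux, /proc/<pid>/fd has one symlink per open descriptor.
            Path fdDir = Paths.get("/proc", pid, "fd");
            long count = 0;
            try (DirectoryStream<Path> fds = Files.newDirectoryStream(fdDir)) {
                for (Path fd : fds) {
                    count++;
                    try {
                        // The symlink target names the file, socket, or pipe.
                        System.out.println(fd.getFileName() + " -> "
                                + Files.readSymbolicLink(fd));
                    } catch (IOException e) {
                        // Descriptor closed between listing and resolving.
                    }
                }
            }
            System.out.println("Total open descriptors: " + count);
        }
    }

Run it as the same user as (or root over) the Graylog process; seeing
whether the descriptors are files, sockets, or pipes should narrow down
where the leak is.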

Please also upgrade to Graylog 1.0.1 and verify that the problem still 
exists.
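
To verify it over time, a JVM can also report its own descriptor usage.
Here is a minimal sketch, assuming a Unix HotSpot JVM that exposes
com.sun.management.UnixOperatingSystemMXBean; it would have to run inside
the process being watched, so take it as illustrative only:

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    // Hypothetical watcher, not part of Graylog.
    public class FdWatcher {
        public static void main(String[] args) throws InterruptedException {
            // Only available on Unix-like platforms with a HotSpot JVM.
            UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();
            while (true) {
                // A count that climbs steadily and never falls back is the
                // classic signature of a descriptor leak.
                System.out.printf("open fds: %d / max: %d%n",
                        os.getOpenFileDescriptorCount(),
                        os.getMaxFileDescriptorCount());
                Thread.sleep(10_000L);
            }
        }
    }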

Best regards,
Jochen

On Thursday, 26 March 2015 10:13:18 UTC+1, Florent B wrote:
>
> Hi everyone, 
>
> Last night, it seems we had some network instability. 
>
> We have 3 Graylog servers (1.0.0). 
>
> This morning we can't read logs; the web interface is showing lots of 
> errors (stack traces...). 
>
> In the logs of all 3 servers, we can see errors like this: 
>
> 2015-03-26T10:08:23.838+01:00 ERROR [KafkaJournal] Cannot write 
> /var/lib/graylog-server/journal/graylog2-committed-read-offset to disk. 
> java.io.FileNotFoundException: 
> /var/lib/graylog-server/journal/graylog2-committed-read-offset (Too many 
> open files) 
>
> 2015-03-26T10:08:23.974+01:00 WARN  [AbstractNioSelector] Failed to 
> accept a connection. 
> java.io.IOException: Too many open files 
>
> The Java server process has more than 700,000 (!!!) open files on each 
> server! 
> We are not running out of space, and CPU usage is very low. 
>
> So my questions are: 
>
> How can I handle this? What can I do to avoid losing messages? 
> Is it a bug? (Some resources not being freed, maybe?) 
>
> Thank you very much. 
>
