You may want to look at the actual filenames.  You might have an app
leaving them open.  Also, remember that sockets use FDs, so they show up
in the list too.
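One caveat before raising any limits: a raw `lsof -n | grep java | wc -l` overstates the real descriptor count, because lsof also lists memory-mapped files, the working directory, the binary, and (on some systems) one row per thread. A minimal sketch of checking the true count via /proc instead — demonstrated here against the current shell's own PID; in practice substitute the Cassandra PID (e.g. `pgrep -f CassandraDaemon`, an assumption about the daemon's class name):

```shell
# Check the true FD count via /proc rather than raw lsof output
# (lsof counts mmap'd files, cwd, txt, etc., which are not descriptors).
# NOTE: $$ (this shell) is used purely for demonstration; point `pid`
# at the Cassandra process instead, e.g. pid=$(pgrep -f CassandraDaemon).
pid=$$
fd_count=$(ls "/proc/$pid/fd" | wc -l)
echo "process $pid holds $fd_count file descriptors"
```

`ls -l /proc/$pid/fd` then shows what each descriptor actually points at — SSTables, commitlog segments, or sockets — which narrows down whether compaction, logging, or client connections are holding them open.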


On Fri, Aug 8, 2014 at 1:13 PM, Marcelo Elias Del Valle <
marc...@s1mbi0se.com.br> wrote:

> I am using DataStax Community, the packaged version for Debian. I am also
> using the latest versions of OpsCenter and datastax-agent.
>
> However, I just listed open files here:
>
> sudo lsof -n | grep java | wc -l
> 1096599
>
> It seems the limit has been exceeded. Should I just increase it? Or could
> this be a descriptor leak?
>
> Best regards,
> Marcelo.
>
>
>
> 2014-08-08 17:06 GMT-03:00 Shane Hansen <shanemhan...@gmail.com>:
>
>> Are you using Apache or DataStax Cassandra?
>>
>> The DataStax distribution raises the file handle limit to 100000. That
>> number is hard to exceed.
>>
>>
>>
>> On Fri, Aug 8, 2014 at 1:35 PM, Marcelo Elias Del Valle <
>> marc...@s1mbi0se.com.br> wrote:
>>
>>> Hi,
>>>
>>> I am running Cassandra 2.0.9 on Debian Wheezy, and I am getting
>>> "too many open files" exceptions when I perform a large number of
>>> operations on my 10-node cluster.
>>>
>>> I saw the documentation
>>> http://www.datastax.com/documentation/cassandra/2.0/cassandra/troubleshooting/trblshootTooManyFiles_r.html
>>> and I have set everything to the recommended settings, but I keep getting
>>> the errors.
>>>
>>> In the documentation it says: "Another, much less likely possibility,
>>> is a file descriptor leak in Cassandra. Run lsof -n | grep java to
>>> check that the number of file descriptors opened by Java is reasonable and
>>> reports the error if the number is greater than a few thousand."
>>>
>>> I guess that's not the case, or else a lot of people would be complaining
>>> about it, but I am not sure what I could do to solve the problem.
>>>
>>> Any hint about how to solve it?
>>>
>>> My client is written in python and uses Cassandra Python Driver. Here
>>> are the exceptions I am having in the client:
>>> [s1log] 2014-08-08 12:16:09,631 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.151, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,632 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.142, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,633 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.143, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,634 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.142, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,634 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.145, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,635 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.144, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,635 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.148, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,732 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.146, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,733 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.77, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,734 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.76, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,734 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.75, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,735 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.142, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,736 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.185, scheduling retry in 600.0
>>> seconds: [Errno 24] Too many open files
>>> [s1log] 2014-08-08 12:16:09,942 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.144, scheduling retry in 512.0
>>> seconds: Timed out connecting to 200.200.200.144
>>> [s1log] 2014-08-08 12:16:09,998 - cassandra.pool - WARNING - Error
>>> attempting to reconnect to 200.200.200.77, scheduling retry in 512.0
>>> seconds: Timed out connecting to 200.200.200.77
>>>
>>>
>>> And here is the exception I am having in the server:
>>>
>>>  WARN [Native-Transport-Requests:163] 2014-08-08 14:27:30,499
>>> BatchStatement.java (line 223) Batch of prepared statements for
>>> [identification.entity_lookup, identification.entity] is of size 25216,
>>> exceeding specified threshold of 5120 by 20096.
>>> ERROR [Native-Transport-Requests:150] 2014-08-08 14:27:31,611
>>> ErrorMessage.java (line 222) Unexpected exception during request
>>> java.io.IOException: Connection reset by peer
>>>         at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>>         at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>>>         at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:192)
>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:375)
>>>         at
>>> org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
>>>         at
>>> org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
>>>         at
>>> org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
>>>         at
>>> org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
>>>         at
>>> org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>         at java.lang.Thread.run(Thread.java:745)
>>>
>>> Here is the number of open files before and after I restart Cassandra:
>>>
>>> root@h:/etc/security/limits.d# lsof -n | grep java | wc -l
>>> 936580
>>> root@h:/etc/security/limits.d# sudo service cassandra restart
>>> [ ok ] Restarting Cassandra: cassandra.
>>> root@h:/etc/security/limits.d# lsof -n | grep java | wc -l
>>> 80295
>>>
>>>
>>> Best regards,
>>> Marcelo Valle.
>>>
>>
>>
>
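If the descriptor count turns out to be legitimate rather than a leak, the cap is raised via pam_limits. A sketch of the limits file, with path and values taken from the DataStax recommended settings — your packaged install may already ship a similar file, so adjust rather than blindly overwrite:

```
# /etc/security/limits.d/cassandra.conf  (illustrative values; the
# cassandra user name depends on how the package runs the daemon)
cassandra - memlock unlimited
cassandra - nofile  100000
```

Limits only apply at login/service start, so restart Cassandra afterwards and confirm what the running JVM actually received with `cat /proc/<pid>/limits` — editing the file has no effect on an already-running process.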


-- 

Founder/CEO Spinn3r.com
Location: *San Francisco, CA*
blog: http://burtonator.wordpress.com
