Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Jason Lewis
cat /proc/5980/limits
Limit          Soft Limit  Hard Limit  Units
Max cpu time   unlimited   unlimited   seconds
Max file size  unlimited   unlimited   bytes
Max data size  unlimited
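The snippet above cuts off before the line that actually matters here, Max open files. A minimal way to pull just that line, assuming PID 5980 is the Cassandra JVM as in the output above:

  grep 'Max open files' /proc/5980/limits

  # or resolve the PID dynamically; CassandraDaemon is the usual main class
  grep 'Max open files' /proc/$(pgrep -f CassandraDaemon)/limits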

Re: Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread 郝加来
Many connections? 郝加来 From: Jason Lewis Date: 2015-11-07 10:38 To: user@cassandra.apache.org Subject: Re: Too many open files Cassandra 2.1.11.872 cat /proc/5980/limits Limit Soft Limit Hard Limit Units Max cpu time unlimited
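A quick way to sanity-check the "many connections?" theory, assuming PID 5980 as in the quoted output and the 2.1 default ports:

  # TCP sockets held open by the Cassandra process
  lsof -n -p 5980 -i TCP | wc -l

  # or count connections on the default ports (9042 native, 9160 thrift, 7000 internode)
  ss -tan | grep -cE ':(9042|9160|7000)'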

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Branton Davis
We recently went down the rabbit hole of trying to understand the output of lsof. lsof -n has a lot of duplicates (files opened by multiple threads). Use 'lsof -p $PID' or 'lsof -u cassandra' instead. On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng wrote: > Is your
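For a duplicate-free count, a sketch assuming the Cassandra PID is in $PID and the process runs as user cassandra:

  # per-process view, without the per-thread duplicates that plain lsof -n shows
  lsof -p "$PID" | wc -l

  # the kernel's own count of open descriptors is the least ambiguous number
  ls /proc/"$PID"/fd | wc -l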

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Bryan Cheng
Is your compaction progressing as expected? If not, it can leave an excessive number of tiny db files behind. Had a node refuse to start recently because of this; we had to temporarily remove the limits on that process. On Fri, Nov 6, 2015 at 10:09 AM, Jason Lewis wrote: > I'm
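A sketch of how to check both halves of that suggestion, assuming the default data directory /var/lib/cassandra/data:

  # pending compactions that keep growing are a red flag
  nodetool compactionstats

  # rough SSTable count per table directory
  for d in /var/lib/cassandra/data/*/*; do
    echo "$d: $(ls "$d"/*-Data.db 2>/dev/null | wc -l)"
  done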

Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Jason Lewis
I'm getting "too many open files" errors and I'm wondering what the cause may be. lsof -n | grep java shows ~1.4M entries:
~90k are inodes
~70k are pipes
~500k are cassandra services in /usr
~700k are the data files.
What might be causing so many files to be open? jas
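One way to produce a breakdown like the above, assuming stock Linux lsof default columns (TYPE is field 5) and the default data path:

  # group open entries by type (REG, FIFO, DIR, ...)
  lsof -n | grep java | awk '{print $5}' | sort | uniq -c | sort -rn | head

  # how many entries point at Cassandra data files
  lsof -n | grep java | grep -c '/var/lib/cassandra/data'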

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Sebastian Estevez
You probably need to configure ulimits correctly. What does this give you? /proc/<pid>/limits All the best, Sebastián Estévez Solutions Architect |
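A sketch of the usual fix, with values along the lines of what the Cassandra/DataStax install docs recommended for the 2.1 series; adjust the user name and path to your install:

  # /etc/security/limits.d/cassandra.conf
  cassandra  -  memlock  unlimited
  cassandra  -  nofile   100000
  cassandra  -  nproc    32768
  cassandra  -  as       unlimited

After editing, restart Cassandra from a fresh login session and confirm the new values show up in /proc/<pid>/limits.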