Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Jason Lewis
cat /proc/5980/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             2063522              2063522              processes
Max open files            10                   10                   files
Max locked memory         unlimited            unlimited            bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       2063522              2063522              signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
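
A quick way to sanity-check that limit against actual usage for the same PID (a sketch; /proc/<pid>/fd lists the process's real descriptors, without the per-thread duplication lsof can introduce):

grep 'open files' /proc/5980/limits
sudo ls /proc/5980/fd | wc -l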


On Fri, Nov 6, 2015 at 4:01 PM, Sebastian Estevez <
sebastian.este...@datastax.com> wrote:

> You probably need to configure ulimits correctly.
>
> What does this give you?
>
> /proc//limits
>
>
> All the best,
>
>
>
> Sebastián Estévez
>
> Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com
>
>
>
> 
>
> DataStax is the fastest, most scalable distributed database technology,
> delivering Apache Cassandra to the world’s most innovative enterprises.
> DataStax is built to be agile, always-on, and predictably scalable to any
> size. With more than 500 customers in 45 countries, DataStax is the
> database technology and transactional backbone of choice for the world’s
> most innovative companies such as Netflix, Adobe, Intuit, and eBay.
>
> On Fri, Nov 6, 2015 at 1:56 PM, Branton Davis 
> wrote:
>
>> We recently went down the rabbit hole of trying to understand the output
>> of lsof.  lsof -n has a lot of duplicates (files opened by multiple
>> threads).  Use 'lsof -p $PID' or 'lsof -u cassandra' instead.
>>
>> On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng 
>> wrote:
>>
>>> Is your compaction progressing as expected? If not, this may cause an
>>> excessive number of tiny db files. Had a node refuse to start recently
>>> because of this, had to temporarily remove limits on that process.
>>>
>>> On Fri, Nov 6, 2015 at 10:09 AM, Jason Lewis 
>>> wrote:
>>>
 I'm getting too many open files errors and I'm wondering what the
 cause may be.

 lsof -n | grep java  shows 1.4M files

 ~90k are inodes
 ~70k are pipes
 ~500k are cassandra services in /usr
 ~700k are the data files.

 What might be causing so many files to be open?

 jas

>>>
>>>
>>
>


Re: Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread 郝加来
Many connections?
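
If open sockets are the suspicion, a quick count (a sketch; 9042 is the default native-transport port and 7000 the default inter-node port — adjust if yours differ):

ss -tn | grep -c ':9042'
ss -tn | grep -c ':7000'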





郝加来

From: Jason Lewis
Date: 2015-11-07 10:38
To: user@cassandra.apache.org
Subject: Re: Too many open files Cassandra 2.1.11.872
cat /proc/5980/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             2063522              2063522              processes
Max open files            10                   10                   files
Max locked memory         unlimited            unlimited            bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       2063522              2063522              signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us




On Fri, Nov 6, 2015 at 4:01 PM, Sebastian Estevez 
<sebastian.este...@datastax.com> wrote:

You probably need to configure ulimits correctly.


What does this give you?


/proc//limits


All the best,



Sebastián Estévez
Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com







DataStax is the fastest, most scalable distributed database technology, 
delivering Apache Cassandra to the world’s most innovative enterprises. 
DataStax is built to be agile, always-on, and predictably scalable to any size. 
With more than 500 customers in 45 countries, DataStax is the database 
technology and transactional backbone of choice for the world’s most innovative 
companies such as Netflix, Adobe, Intuit, and eBay. 


On Fri, Nov 6, 2015 at 1:56 PM, Branton Davis <branton.da...@spanning.com> 
wrote:

We recently went down the rabbit hole of trying to understand the output of 
lsof.  lsof -n has a lot of duplicates (files opened by multiple threads).  Use 
'lsof -p $PID' or 'lsof -u cassandra' instead.


On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng <br...@blockcypher.com> wrote:

Is your compaction progressing as expected? If not, this may cause an excessive 
number of tiny db files. Had a node refuse to start recently because of this, 
had to temporarily remove limits on that process.


On Fri, Nov 6, 2015 at 10:09 AM, Jason Lewis <jle...@packetnexus.com> wrote:

I'm getting too many open files errors and I'm wondering what the
cause may be.

lsof -n | grep java  shows 1.4M files

~90k are inodes
~70k are pipes
~500k are cassandra services in /usr
~700k are the data files.

What might be causing so many files to be open?

jas




Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Branton Davis
We recently went down the rabbit hole of trying to understand the output of
lsof.  lsof -n has a lot of duplicates (files opened by multiple threads).
Use 'lsof -p $PID' or 'lsof -u cassandra' instead.
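
For example (a sketch; the CassandraDaemon match and the cassandra user are assumptions about a stock install):

sudo lsof -n | grep java | wc -l                     # inflated: one entry per thread
sudo lsof -p "$(pgrep -f CassandraDaemon)" | wc -l   # per-process count
sudo lsof -u cassandra | wc -l                       # per-user count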

On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng  wrote:

> Is your compaction progressing as expected? If not, this may cause an
> excessive number of tiny db files. Had a node refuse to start recently
> because of this, had to temporarily remove limits on that process.
>
> On Fri, Nov 6, 2015 at 10:09 AM, Jason Lewis 
> wrote:
>
>> I'm getting too many open files errors and I'm wondering what the
>> cause may be.
>>
>> lsof -n | grep java  shows 1.4M files
>>
>> ~90k are inodes
>> ~70k are pipes
>> ~500k are cassandra services in /usr
>> ~700k are the data files.
>>
>> What might be causing so many files to be open?
>>
>> jas
>>
>
>


Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Bryan Cheng
Is your compaction progressing as expected? If not, this may cause an
excessive number of tiny db files. Had a node refuse to start recently
because of this, had to temporarily remove limits on that process.
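
A quick way to check, using the nodetool that ships with Cassandra:

nodetool compactionstats                # pending and active compactions
nodetool tpstats | grep -i compaction   # CompactionExecutor backlog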

On Fri, Nov 6, 2015 at 10:09 AM, Jason Lewis  wrote:

> I'm getting too many open files errors and I'm wondering what the
> cause may be.
>
> lsof -n | grep java  shows 1.4M files
>
> ~90k are inodes
> ~70k are pipes
> ~500k are cassandra services in /usr
> ~700k are the data files.
>
> What might be causing so many files to be open?
>
> jas
>


Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Jason Lewis
I'm getting too many open files errors and I'm wondering what the
cause may be.

lsof -n | grep java  shows 1.4M files

~90k are inodes
~70k are pipes
~500k are cassandra services in /usr
~700k are the data files.
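
For reference, one way to produce a breakdown like the above (a sketch; the CassandraDaemon match and the data path are assumptions about a stock install):

sudo lsof -n -p "$(pgrep -f CassandraDaemon)" > /tmp/cassandra-fds
awk '{print $5}' /tmp/cassandra-fds | sort | uniq -c | sort -rn   # tally by TYPE (REG, FIFO, ...)
grep -c '/var/lib/cassandra/data' /tmp/cassandra-fds              # data/SSTable files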

What might be causing so many files to be open?

jas


Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Sebastian Estevez
You probably need to configure ulimits correctly.
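
A minimal sketch of what that usually means on Linux, assuming Cassandra runs as the cassandra user (values mirror commonly published recommendations, not a prescription for your cluster), e.g. in /etc/security/limits.d/cassandra.conf:

cassandra - memlock unlimited
cassandra - nofile  100000
cassandra - nproc   32768
cassandra - as      unlimited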

What does this give you?

/proc//limits
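
Filling in the PID, for example (the CassandraDaemon class name assumes a stock install):

cat /proc/"$(pgrep -f CassandraDaemon)"/limits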


All the best,



Sebastián Estévez

Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com


DataStax is the fastest, most scalable distributed database technology,
delivering Apache Cassandra to the world’s most innovative enterprises.
DataStax is built to be agile, always-on, and predictably scalable to any
size. With more than 500 customers in 45 countries, DataStax is the
database technology and transactional backbone of choice for the world’s
most innovative companies such as Netflix, Adobe, Intuit, and eBay.

On Fri, Nov 6, 2015 at 1:56 PM, Branton Davis 
wrote:

> We recently went down the rabbit hole of trying to understand the output
> of lsof.  lsof -n has a lot of duplicates (files opened by multiple
> threads).  Use 'lsof -p $PID' or 'lsof -u cassandra' instead.
>
> On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng 
> wrote:
>
>> Is your compaction progressing as expected? If not, this may cause an
>> excessive number of tiny db files. Had a node refuse to start recently
>> because of this, had to temporarily remove limits on that process.
>>
>> On Fri, Nov 6, 2015 at 10:09 AM, Jason Lewis 
>> wrote:
>>
>>> I'm getting too many open files errors and I'm wondering what the
>>> cause may be.
>>>
>>> lsof -n | grep java  shows 1.4M files
>>>
>>> ~90k are inodes
>>> ~70k are pipes
>>> ~500k are cassandra services in /usr
>>> ~700k are the data files.
>>>
>>> What might be causing so many files to be open?
>>>
>>> jas
>>>
>>
>>
>