Is it better to use a global session object, or to create it and shut it down for every request?

From: n...@photonhost.com [mailto:n...@photonhost.com] On Behalf Of Nikolay Mihaylov
Sent: Monday, January 22, 2018 11:47 AM
To: user@cassandra.apache.org
Subject: Re: Too many open files

You can increase system open files, also ...
Andreou, Arys (Nokia - GR/Athens) <arys.andr...@nokia.com> wrote:
Hi,
I keep getting a "Last error: Too many open files" followed by a list of node
IPs.
The output of "lsof -n|grep java|wc -l" is about 674970 on each node.
What is a normal number of open files?
Thank you.
On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng <br...@blockcypher.com> wrote:
> Is your compaction progressing as expected? If not, this may cause an
> excessive number of tiny db files. Had a node refuse to start recently ...
... many connections?

郝加来

From: Jason Lewis
Date: 2015-11-07 10:38
To: user@cassandra.apache.org
Subject: Re: Too many open files Cassandra 2.1.11.872

cat /proc/5980/limits
Limit            Soft Limit   Hard Limit   Units
Max cpu time     unlimited    ...
I'm getting too many open files errors and I'm wondering what the
cause may be.
lsof -n | grep java shows 1.4M files
~90k are inodes
~70k are pipes
~500k are cassandra services in /usr
~700K are the data files.
What might be causing so many files to be open?
jas
remove limits on that process.
From: Marcelo Elias Del Valle
Sent: Saturday, August 9, 2014 12:41 AM
To: user@cassandra.apache.org
Subject: Re: too many open files
Indeed, that was my mistake, that was exactly what we were doing in the code.
[]s
2014-08-09 0:56 GMT-03:00 Brian Zhang yikebo...@gmail.com:
For cassandra
Tyler,
I'll see if I can reproduce this on a local instance, but just in case, the error was basically this: instead of storing the session in my connection factory, I stored a cluster and called "connect" each time I requested a Session. I had defined a max/min number of connections for the ...

I just had a generator that (in the incorrect way) had a cluster as a member variable, and would call .connect() repeatedly. I _thought_, incorrectly, that the Session was thread-unsafe, and so I should request a separate Session each time; obviously wrong in hindsight.
There was no special ...
Linux opens an FD for each connection received, and honestly I still don't know much about the details of this. When I got a too many open files error, it took a good while before I thought of checking the connections.
I think the documentation could point out this fact; it would help other people with the same problem.
Another way to detect this is to check whether the number of open sessions exceeds the number of threads.
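(As an illustration of that detection idea, a minimal sketch using the Java driver discussed in this thread; the SessionTracker helper and its names are made up for illustration, not anything from the thread:)

import java.util.concurrent.atomic.AtomicInteger;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// Hypothetical helper: route all session creation through here and warn when
// the number of open sessions exceeds the number of live threads.
public final class SessionTracker {
    private static final AtomicInteger OPEN = new AtomicInteger();

    public static Session connect(Cluster cluster, String keyspace) {
        Session session = cluster.connect(keyspace);
        int open = OPEN.incrementAndGet();
        if (open > Thread.activeCount()) {
            System.err.println("Suspicious: " + open + " open sessions for "
                    + Thread.activeCount() + " live threads");
        }
        return session;
    }

    public static void release(Session session) {
        session.close();
        OPEN.decrementAndGet();
    }
}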
On Aug 9, 2014 10:59 AM, Andrew redmu...@gmail.com wrote:
I just had a generator that (in the incorrect way) had a cluster as a
member variable, and would call .connect() repeatedly. I _thought_,
It really doesn't need to be this complicated. You only need 1
session per application. It's thread safe and manages the connection
pool for you.
http://www.datastax.com/drivers/java/2.0/com/datastax/driver/core/Session.html
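For illustration, a minimal sketch of that recommended pattern with the 2.0 Java driver linked above; the contact point, keyspace, and class name are placeholders, not anything from this thread:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// One Cluster and one Session for the whole application. The Session is
// thread safe and manages its own connection pool, so it should be created
// once and reused, never rebuilt per request.
public final class CassandraConnector {
    private static final Cluster CLUSTER =
            Cluster.builder().addContactPoint("127.0.0.1").build(); // placeholder contact point
    private static final Session SESSION = CLUSTER.connect("my_keyspace"); // placeholder keyspace

    private CassandraConnector() {}

    public static Session session() {
        // Reuse this everywhere; calling CLUSTER.connect() per request opens a
        // new connection pool each time and eventually exhausts file descriptors.
        return SESSION;
    }

    public static void shutdown() {
        SESSION.close();
        CLUSTER.close();
    }
}

With something like this in place, request handlers call CassandraConnector.session() instead of building a Cluster or calling connect() themselves.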
On Sat, Aug 9, 2014 at 1:29 PM, Kevin Burton bur...@spinn3r.com
Yes, that was the problem; I actually knew better, but had briefly overlooked this when I was doing some refactoring. I am not the OP (although he himself realized his mistake).
If you follow the thread, I was explaining that the DataStax Java driver allowed me to basically open a ...
Hi,
I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having too many open files exceptions when I try to perform a large number of operations in my 10 node cluster.
I saw the documentation
http://www.datastax.com/documentation/cassandra/2.0/cassandra/troubleshooting/trblshootTooManyFiles_r.html
and I have set everything to the recommended ...
Run netstat and check the number of established connections. This number should not be big.
Thank you,
Andrey
On Fri, Aug 8, 2014 at 5:52 PM, Redmumba redmu...@gmail.com wrote:
Just to chime in, I also ran into this issue when I was migrating to the
Datastax client. Instead of reusing the session, I was opening a new
session each time. For some reason, even though I was still closing the
session on
[s1log] 2014-08-08 12:16:09,631 - cassandra.pool - WARNING - Error attempting to reconnect to 200.200.200.151, scheduling retry in 600.0 seconds: [Errno 24] Too many open files
[s1log] 2014-08-08 12:16:09,632 - cassandra.pool - WARNING - Error attempting to reconnect to 200.200.200.142, scheduling ...
We are running ElasticMapReduce jobs from Amazon on a 25-node Cassandra cluster (with vnodes). Since we increased the size of the cluster, we have been facing a too many open files (due to sockets) exception when creating the splits. Does anyone have an idea?
Thanks,
Here is the stacktrace: ...
I have been experimenting with the latest Cassandra version for storing huge data in our application.
Writes are doing well, but when it comes to reads I have observed that Cassandra is getting into too many open files issues. When I check the logs, it is not able to open the Cassandra data files any more because of the file descriptor limits.
Can someone suggest what I am ...
To: user@cassandra.apache.org
Subject: RE: Getting into Too many open files issues

Hi Murthy,
32768 is a bit low (I know the DataStax docs recommend this). But our production environment is now running with 1kk (1,000,000), or you can even set it to unlimited.
Pieter
... was too low.
Kind regards,
Pieter Callewaert

From: Murthy Chelankuri [mailto:kmurt...@gmail.com]
Sent: Thursday, November 7, 2013 12:15
To: user@cassandra.apache.org
Subject: Getting into Too many open files issues
Thanks Pieter for the quick reply.
I have downloaded the tarball and have changed limits.conf as per the documentation, like below:

* soft nofile 32768
* hard nofile 32768
root soft nofile 32768
root hard nofile 32768
* soft memlock unlimited
* hard memlock unlimited
oleg.du...@gmail.com wrote:
Got this error:
WARN [Thread-8] 2013-10-29 02:58:24,565 CustomTThreadPoolServer.java (line 122) Transport error occurred during acceptance of message.
org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files
Hi,
I've noticed some nodes in our cluster are dying after some period of time.
WARN [New I/O server boss #17] 2013-10-29 12:22:20,725 Slf4JLogger.java (line
76) Failed to accept a connection.
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native
Also, looking through the log, it appears a lot of the files end with ic-
which I assume is associated with a secondary index I have on the table. Are
secondary indexes really expensive from a file descriptor standpoint? That
particular table uses the default compaction scheme...
On Jul
It doesn't tell you anything if the file name ends with ic-###, except pointing out the SSTable version it uses (ic in this case).
Files related to a secondary index contain something like this in the filename: KS-CF.IDX-NAME, while filenames in regular CFs do not contain any dots except the one just before ...
6 CFs.
ERROR [ReadStage:62384] 2013-07-14 18:04:26,062
AbstractCassandraDaemon.java (line 135) Exception in thread
Thread[ReadStage:62384,5,main]
java.io.IOError: java.io.FileNotFoundException:
/tmp_vol/cassandra/data/dev_a/portfoliodao/dev_a-portfoliodao-hf-166-Data.db
(Too many open files
I'm running into a problem where instances of my cluster are hitting over 450K
open files. Is this normal for a 4 node 1.2.6 cluster with replication factor
of 3 and about 50GB of data on each node? I can push the file descriptor limit
up, but I plan on having a much larger load so I'm
Are you using leveled compaction? If so, what do you have the file size
set at? If you're using the defaults, you'll have a ton of really small
files. I believe Albert Tobey recommended using 256MB for the
table sstable_size_in_mb to avoid this problem.
On Sun, Jul 14, 2013 at 5:10 PM, Paul
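(For reference, a minimal sketch of applying that suggestion through the Java driver; the keyspace and table names are placeholders, and 256 MB is simply the value suggested above, not a universal default:)

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// Sketch: raise the LCS target SSTable size to 256 MB for one table.
// "my_ks" and "my_table" are placeholder names.
public class SetLcsSstableSize {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        session.execute("ALTER TABLE my_ks.my_table WITH compaction = "
                + "{'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 256}");
        cluster.close();
    }
}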
... (via JMX).
compaction_throughput_mb_per_sec is 0.
concurrent_compactors is 3.
multithreaded_compaction = false.
No other load on these machines.
And when I start querying (using Thrift), I get a 'too many open files' error on the machine with pending compaction tasks.
The limits.conf setting for nofile is 65536.
Using 'lsof' and 'wc -l' I get a count of 59577 files for Cassandra.
Total count of keyspace files on disk: 20464.
The 3 machines have an equal (+/-) data load.
[mailto:jeremy.hanna1...@gmail.com]
Sent: Thursday, June 27, 2013 15:36
To: user@cassandra.apache.org
Subject: Re: Too many open files and stopped compaction with many pending compaction tasks

Are you on SSDs?

On 27 Jun 2013, at 14:24, Desimpel, Ignace ignace.desim...@nuance.com wrote:
On a test ...
While running nodetool repair, we are running into a FileNotFoundException with a too many open files error. We increased the ulimit value to 32768, and we still see this issue.
The number of files in the data directory is around 29500+.
If we further increase the ulimit, would ...
This bug is fixed in 1.1.5
Andrey
2012-01-12 20:37:06,327 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[CompactionExecutor:2918,1,main]
java.io.IOError: java.io.FileNotFoundException: /mnt/ebs/data/rslog_production/req_word_idx-hc-453661-Data.db (Too many open files in system)

After that it stopped working and just sat there with this error (understandable). I did an lsof and saw that it had 98567 open files, yikes! An ls in the data directory shows 234011 files. After restarting, it spent about 5 hours compacting, then quieted down. About ...
... during acceptance of message.
org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files
at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:124)
at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java ...
What does the following error mean? One of my Cassandra servers prints this error, and nodetool shows the state of the server as down. The netstat result shows that the number of sockets is very low.

The operating system enforced limits have been hit, so Cassandra is unable to create additional file ...
I'm guessing you've seen this already?
http://www.datastax.com/docs/0.8/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files
Check out the number of file descriptors opened with the lsof -n | grep java command.
On Tue, Jul 19, 2011 at 8:30 AM, cbert...@libero.it cbert
If you are using Linux, especially Ubuntu, check the linked document
below. This is my favorite: Using sudo has side effects in terms of
open file limits. On Ubuntu they’ll be reset to 1024, no matter what’s
set in /etc/security/limits.conf
http://wiki.basho.com/Open-Files-Limit.html
/Attila
-----Original Message-----
From: Jonathan Ellis [mailto:jbel...@gmail.com]
Sent: Friday, April 1, 2011 06:07
To: user@cassandra.apache.org
Cc: Roland Gude; Juergen Link; Johannes Hoerle
Subject: Re: too many open files - maybe a fd leak in indexslicequeries
I am experiencing something that looks exactly like https://issues.apache.org/jira/browse/CASSANDRA-1178 on Cassandra 0.7.3 when using index slice queries (lots of them), crashing multiple nodes and rendering the cluster useless. But I have no clue where to look if index queries still leak fds. Does ...
Index queries (ColumnFamilyStore.scan) don't do any low-level i/o
themselves, they go through CFS.getColumnFamily, which is what normal
row fetches also go through. So if there is a leak there it's
unlikely to be specific to indexes.
What is your open-file limit (remember that sockets count
... : java.net.SocketException: Too many open files

What worries me is this "/ by zero" exception when I try to restart Cassandra! At least, I want to back up the 3.50 rows so I can then continue my insertion; is there a way to do this?

Exception encountered during startup.
java.lang.ArithmeticException: / by zero
... file descriptors to unlimited.
Now, I get exactly the same exception after 3.50 rows:
CustomTThreadPoolServer.java (line 104) Transport error occurred during acceptance of message.
org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files
org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files
at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:124)
at org.apache.cassandra.thrift.TCustomServerSocket.acceptImpl(TCustomServerSocket.java:67)
at org.apache.cassandra.thrift.TCustomServerSocket.acceptImpl(...)
at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:229)
at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:134)
Caused by: java.net.SocketException: Too many open files
at java.net.PlainSocketImpl.socketAccept(Native Method)
http://www.riptano.com/docs/0.6/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files

On Wed, Dec 15, 2010 at 11:13 AM, Amin Sakka, Novapost amin.sa...@novapost.fr wrote:
Hello,
I'm using Cassandra 0.7.0 rc1, a single node configuration, replication factor 1 ...
... Aug 26, 2010 at 2:05 PM, Aaron Morton aa...@thelastpickle.com wrote:
Under 0.7.0 beta1 I am seeing Cassandra run out of file handles...
Caused by: java.io.FileNotFoundException: /local1/junkbox/cassandra/data/junkbox.wetafx.co.nz/ObjectIndex-e-31-Index.db (Too many open files)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:98) ...
for the first 30M or so mutates, then within
4 seconds they jumped to about 800, stayed there for about 30 seconds,
then within 5 seconds went over 2022, at which point the server entered
the cycle of SocketException: Too many open files. Interesting thing is
that the file limit
[snip]
I'm not sure that is the case.
When the server gets into the unrecoverable state, the repeating exceptions
are indeed SocketException: Too many open files.
[snip]
Although this is unquestionably a network error, I don't think it is
actually a
network problem per se, as the maximum
Each of my top-level functions was allocating a Hector client connection at
the top, and releasing it when returning. The problem arose when a top-level
function had to call another top-level function, which led to the same
thread allocating two connections. Hector was not releasing one of them
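In other words, the shape of the problem was nested per-function acquisition. A schematic sketch, with a made-up pool interface rather than the actual Hector API:

// Made-up pool interface, purely to illustrate the bug described above.
interface Pool {
    Object acquire();
    void release(Object conn);
}

class Service {
    private final Pool pool;

    Service(Pool pool) { this.pool = pool; }

    // Each top-level method acquires its own connection...
    void outer() {
        Object c = pool.acquire();
        try {
            inner();                     // ...so a nested call holds a second one
        } finally {
            pool.release(c);
        }
    }

    void inner() {
        Object c = pool.acquire();       // second connection on the same thread
        try {
            // do the actual work here
        } finally {
            pool.release(c);
        }
    }
}

Acquiring once at the request boundary, or simply reusing one long-lived connection or session, keeps a thread from holding more than one connection at a time.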