Re: Too many open files

2018-01-22 Thread Jeff Jirsa
… for every request? From: n...@photonhost.com [mailto:n...@photonhost.com] On Behalf Of Nikolay Mihaylov Sent: Monday, January 22, 2018 11:47 AM To: user@cassandra.apache.org Subject: Re: Too many open files. You can increase system open files, als…

RE: Too many open files

2018-01-22 Thread Andreou, Arys (Nokia - GR/Athens)
a global session object or to create it and shut it down for every request? From: n...@photonhost.com [mailto:n...@photonhost.com] On Behalf Of Nikolay Mihaylov Sent: Monday, January 22, 2018 11:47 AM To: user@cassandra.apache.org Subject: Re: Too many open files You can increase system open

Re: Too many open files

2018-01-22 Thread Nikolay Mihaylov
I keep getting a “Last error: Too many open files” followed by a list of node IPs. The output of “lsof -n|grep java|wc -l” is about 674970 on each node. What is a normal number of open files? Thank you.

Re: Too many open files

2018-01-22 Thread Dor Laor
…:59 PM, Andreou, Arys (Nokia - GR/Athens) <arys.andr...@nokia.com> wrote: Hi, I keep getting a “Last error: Too many open files” followed by a list of node IPs. The output of “lsof -n|grep java|wc -l” is about 674970 on each node.

Too many open files

2018-01-22 Thread Andreou, Arys (Nokia - GR/Athens)
Hi, I keep getting a "Last error: Too many open files" followed by a list of node IPs. The output of "lsof -n|grep java|wc -l" is about 674970 on each node. What is a normal number of open files? Thank you.
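As several replies in these threads note, a count from `lsof -n|grep java|wc -l` is usually inflated: many lsof builds print one line per task (thread) per descriptor, so a heavily threaded JVM can report hundreds of thousands of lines for far fewer real descriptors. A sketch of a more direct per-process count, assuming Linux and `/proc` (the `pgrep` pattern is only illustrative):

```shell
# Count a process's real open file descriptors via /proc/<pid>/fd.
# The current shell ($$) stands in here; in practice substitute the
# Cassandra PID, e.g. pid=$(pgrep -f CassandraDaemon).
pid=$$
actual=$(ls "/proc/$pid/fd" | wc -l)
echo "process $pid has $actual open file descriptors"
```

Comparing that number against the "Max open files" row of /proc/&lt;pid&gt;/limits shows how close the process really is to its cap.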

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Jason Lewis
…ead. On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng <br...@blockcypher.com> wrote: Is your compaction progressing as expected? If not, this may cause an excessive number of tiny db files. Had a node refuse to start recently…

Re: Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread 郝加来
…many connections? 郝加来 From: Jason Lewis Date: 2015-11-07 10:38 To: user@cassandra.apache.org Subject: Re: Too many open files Cassandra 2.1.11.872. cat /proc/5980/limits: Limit | Soft Limit | Hard Limit | Units; Max cpu time | unlimited…

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Branton Davis
… wrote: I'm getting too many open files errors and I'm wondering what the cause may be. lsof -n | grep java shows 1.4M files: ~90k are inodes, ~70k are pipes, ~500k are cassandra services in /usr, ~700K are the data files. What might be causing so many files to be open? jas

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Bryan Cheng
I'm getting too many open files errors and I'm wondering what the cause may be. lsof -n | grep java shows 1.4M files: ~90k are inodes, ~70k are pipes, ~500k are cassandra services in /usr, ~700K are the data files. What might be causing so many files to be open? jas

Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Jason Lewis
I'm getting too many open files errors and I'm wondering what the cause may be. lsof -n | grep java shows 1.4M files: ~90k are inodes, ~70k are pipes, ~500k are cassandra services in /usr, ~700K are the data files. What might be causing so many files to be open? jas

Re: Too many open files Cassandra 2.1.11.872

2015-11-06 Thread Sebastian Estevez
…remove limits on that process. On Fri, Nov 6, 2015 at 10:09 AM, Jason Lewis <jle...@packetnexus.com> wrote: I'm getting too many open files errors and I'm wondering what the cause may be. lsof -n | grep ja…

Re: too many open files

2014-08-09 Thread Jack Krupansky
From: Marcelo Elias Del Valle Sent: Saturday, August 9, 2014 12:41 AM To: user@cassandra.apache.org Subject: Re: too many open files Indeed, that was my mistake, that was exactly what we were doing in the code. []s 2014-08-09 0:56 GMT-03:00 Brian Zhang yikebo...@gmail.com: For cassandra

Re: too many open files

2014-08-09 Thread Andrew
Tyler, I’ll see if I can reproduce this on a local instance, but just in case, the error was basically—instead of storing the session in my connection factory, I stored a cluster and called “connect” each time I requested a Session.  I had defined a max/min number of connections for the

Re: too many open files

2014-08-09 Thread Andrew
I just had a generator that (in the incorrect way) had a cluster as a member variable, and would call .connect() repeatedly.  I _thought_, incorrectly, that the Session was thread unsafe, and so I should request a separate Session each time—obviously wrong in hind sight. There was no special

Re: too many open files

2014-08-09 Thread Marcelo Elias Del Valle
Linux opens a FD for each connection received, and honestly I still don't know much about the details of this. When I got a too many open files error it took a good while to think about checking the connections. I think the documentation could point out this fact; it would help other people with the same…

Re: too many open files

2014-08-09 Thread Kevin Burton
Another idea to detect this is when the number of open sessions exceeds the number of threads. On Aug 9, 2014 10:59 AM, Andrew redmu...@gmail.com wrote: I just had a generator that (in the incorrect way) had a cluster as a member variable, and would call .connect() repeatedly. I _thought_,

Re: too many open files

2014-08-09 Thread Jonathan Haddad
It really doesn't need to be this complicated. You only need 1 session per application. It's thread safe and manages the connection pool for you. http://www.datastax.com/drivers/java/2.0/com/datastax/driver/core/Session.html On Sat, Aug 9, 2014 at 1:29 PM, Kevin Burton bur...@spinn3r.com

Re: too many open files

2014-08-09 Thread Andrew
Yes, that was the problem. I actually knew better, but had briefly overlooked this when I was doing some refactoring. I am not the OP (although he himself realized his mistake). If you follow the thread, I was explaining that the Datastax Java driver allowed me to basically open a…

too many open files

2014-08-08 Thread Marcelo Elias Del Valle
Hi, I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having too many open files exceptions when I try to perform a large number of operations in my 10 node cluster. I saw the documentation http://www.datastax.com/documentation/cassandra/2.0/cassandra/troubleshooting

Re: too many open files

2014-08-08 Thread Shane Hansen
having too many open files exceptions when I try to perform a large number of operations in my 10 node cluster. I saw the documentation http://www.datastax.com/documentation/cassandra/2.0/cassandra/troubleshooting/trblshootTooManyFiles_r.html and I have set everything to the recommended

Re: too many open files

2014-08-08 Thread Marcelo Elias Del Valle
...@s1mbi0se.com.br wrote: Hi, I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having too many open files exceptions when I try to perform a large number of operations in my 10 node cluster. I saw the documentation http://www.datastax.com/documentation/cassandra/2.0/cassandra

Re: too many open files

2014-08-08 Thread Kevin Burton
Cassandra 2.0.9 running on Debian Wheezy, and I am having too many open files exceptions when I try to perform a large number of operations in my 10 node cluster. I saw the documentation http://www.datastax.com/documentation/cassandra/2.0/cassandra/troubleshooting/trblshootTooManyFiles_r.html

Re: too many open files

2014-08-08 Thread Marcelo Elias Del Valle
, I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having too many open files exceptions when I try to perform a large number of operations in my 10 node cluster. I saw the documentation http://www.datastax.com/documentation/cassandra/2.0/cassandra/troubleshooting

Re: too many open files

2014-08-08 Thread Andrey Ilinykh
wrote: Hi, I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having too many open files exceptions when I try to perform a large number of operations in my 10 node cluster. I saw the documentation http://www.datastax.com/documentation/cassandra/2.0/cassandra/troubleshooting

Re: too many open files

2014-08-08 Thread Redmumba
netstat and check the number of established connections. This number should not be big. Thank you, Andrey On Fri, Aug 8, 2014 at 12:35 PM, Marcelo Elias Del Valle marc...@s1mbi0se.com.br wrote: Hi, I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having too many open files
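The netstat check suggested above can also be done without netstat installed by reading /proc/net/tcp directly (hex state 01 in the st column is ESTABLISHED). A minimal sketch, assuming Linux:

```shell
# Count ESTABLISHED TCP connections by parsing /proc/net/tcp and
# /proc/net/tcp6: field 4 (st) is the hex socket state, 01 = ESTABLISHED.
established=$(awk 'FNR > 1 && $4 == "01"' /proc/net/tcp /proc/net/tcp6 2>/dev/null | wc -l)
echo "established TCP connections: $established"
```

If this number grows with the client's request rate instead of staying near the driver's configured pool size, the application is likely opening a new session (and its connection pool) per request.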

Re: too many open files

2014-08-08 Thread Tyler Hobbs
On Fri, Aug 8, 2014 at 5:52 PM, Redmumba redmu...@gmail.com wrote: Just to chime in, I also ran into this issue when I was migrating to the Datastax client. Instead of reusing the session, I was opening a new session each time. For some reason, even though I was still closing the session on

Re: too many open files

2014-08-08 Thread J. Ryan Earl
: Hi, I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having too many open files exceptions when I try to perform a large number of operations in my 10 node cluster. I saw the documentation http://www.datastax.com/documentation/cassandra/2.0/cassandra/troubleshooting

Re: too many open files

2014-08-08 Thread Brian Zhang
, Aug 8, 2014 at 12:35 PM, Marcelo Elias Del Valle marc...@s1mbi0se.com.br wrote: Hi, I am using Cassandra 2.0.9 running on Debian Wheezy, and I am having too many open files exceptions when I try to perform a large number of operations in my 10 node cluster. I saw the documentation http

Re: too many open files

2014-08-08 Thread Marcelo Elias Del Valle
: [s1log] 2014-08-08 12:16:09,631 - cassandra.pool - WARNING - Error attempting to reconnect to 200.200.200.151, scheduling retry in 600.0 seconds: [Errno 24] Too many open files [s1log] 2014-08-08 12:16:09,632 - cassandra.pool - WARNING - Error attempting to reconnect to 200.200.200.142, scheduling

Re: Too Many Open Files (sockets) - VNodes - Map/Reduce Job

2014-06-04 Thread Michael Shuler
…We are running ElasticMapReduce jobs from Amazon on a 25-node Cassandra cluster (with VNodes). Since we have increased the size of the cluster, we are facing a too many open files (due to sockets) exception when creating the splits. Does anyone have an idea? Thanks. Here is the stacktrace: 14…

Re: Getting into Too many open files issues

2013-11-20 Thread J. Ryan Earl
…in our application. Writes are doing well, but when it comes to reads I have observed that Cassandra is getting into too many open files issues. When I check the logs, it is not able to open the Cassandra data files any more because of the file descriptor limits. Can someone suggest what I am…

Re: Getting into Too many open files issues

2013-11-11 Thread Aaron Morton
To: user@cassandra.apache.org Subject: RE: Getting into Too many open files issues Hi Murthy, 32768 is a bit low (I know datastax docs recommend this). But our production env is now running on 1kk, or you can even put it on unlimited. Pieter From: Murthy Chelankuri [mailto:kmurt

Getting into Too many open files issues

2013-11-07 Thread Murthy Chelankuri
I have been experimenting with the latest Cassandra version for storing huge data in our application. Writes are doing well, but when it comes to reads I have observed that Cassandra is getting into too many open files issues. When I check the logs, it is not able to open the Cassandra data files any more because…

RE: Getting into Too many open files issues

2013-11-07 Thread Pieter Callewaert
…@cassandra.apache.org Subject: Getting into Too many open files issues. I have been experimenting with the latest Cassandra version for storing huge data in our application. Writes are doing well, but when it comes to reads I have observed that Cassandra is getting into too many open files issues. When I check the logs, it is…

Re: Getting into Too many open files issues

2013-11-07 Thread Murthy Chelankuri
…was too low. Kind regards, Pieter Callewaert. From: Murthy Chelankuri [mailto:kmurt...@gmail.com] Sent: Thursday, 7 November 2013 12:15 To: user@cassandra.apache.org Subject: Getting into Too many open files issues. I have been experimenting with the latest Cassandra version for storing…

RE: Getting into Too many open files issues

2013-11-07 Thread Pieter Callewaert
…Getting into Too many open files issues. Thanks, Pieter, for the quick reply. I have downloaded the tarball and have changed limits.conf as per the documentation, like below: * soft nofile 32768, * hard nofile 32768, root soft nofile 32768, root hard nofile 32768, * soft memlock unlimited, * hard…
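Laid out one entry per line, the /etc/security/limits.conf settings quoted above are (only the lines fully present in the message are reproduced; the preview cuts off mid-entry):

```
* soft nofile 32768
* hard nofile 32768
root soft nofile 32768
root hard nofile 32768
* soft memlock unlimited
```

Note that limits.conf changes only apply to sessions started afterwards; an already-running Cassandra process keeps the limits it was launched with.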

RE: Getting into Too many open files issues

2013-11-07 Thread Arindam Barua
, November 07, 2013 4:22 AM To: user@cassandra.apache.org Subject: RE: Getting into Too many open files issues Hi Murthy, 32768 is a bit low (I know datastax docs recommend this). But our production env is now running on 1kk, or you can even put it on unlimited. Pieter From: Murthy Chelankuri

Re: Too many open files with Cassandra 1.2.11

2013-10-31 Thread Aaron Morton
oleg.du...@gmail.com wrote: Got this error: WARN [Thread-8] 2013-10-29 02:58:24,565 CustomTThreadPoolServer.java (line 122) Transport error occurred during acceptance of message. org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files…

Too many open files with Cassandra 1.2.11

2013-10-29 Thread Oleg Dulin
Got this error: WARN [Thread-8] 2013-10-29 02:58:24,565 CustomTThreadPoolServer.java (line 122) Transport error occurred during acceptance of message. org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files…

Too many open files (Cassandra 2.0.1)

2013-10-29 Thread Pieter Callewaert
Hi, I've noticed some nodes in our cluster are dying after some period of time. WARN [New I/O server boss #17] 2013-10-29 12:22:20,725 Slf4JLogger.java (line 76) Failed to accept a connection. java.io.IOException: Too many open files at sun.nio.ch.ServerSocketChannelImpl.accept0(Native

Re: too many open files

2013-07-15 Thread Paul Ingalls
Also, looking through the log, it appears a lot of the files end with ic- which I assume is associated with a secondary index I have on the table. Are secondary indexes really expensive from a file descriptor standpoint? That particular table uses the default compaction scheme... On Jul

Re: too many open files

2013-07-15 Thread Michał Michalski
It doesn't tell you anything if a file ends with ic-###, except pointing out the SSTable version it uses (ic in this case). Files related to a secondary index contain something like this in the filename: KS-CF.IDX-NAME, while files of regular CFs do not contain any dots except the one just before…

Re: too many open files

2013-07-15 Thread Brian Tarbox
6 CFs. ERROR [ReadStage:62384] 2013-07-14 18:04:26,062 AbstractCassandraDaemon.java (line 135) Exception in thread Thread[ReadStage:62384,5,main] java.io.IOError: java.io.FileNotFoundException: /tmp_vol/cassandra/data/dev_a/portfoliodao/dev_a-portfoliodao-hf-166-Data.db (Too many open files

too many open files

2013-07-14 Thread Paul Ingalls
I'm running into a problem where instances of my cluster are hitting over 450K open files. Is this normal for a 4 node 1.2.6 cluster with replication factor of 3 and about 50GB of data on each node? I can push the file descriptor limit up, but I plan on having a much larger load so I'm

Re: too many open files

2013-07-14 Thread Jonathan Haddad
Are you using leveled compaction? If so, what do you have the file size set at? If you're using the defaults, you'll have a ton of really small files. I believe Albert Tobey recommended using 256MB for the table sstable_size_in_mb to avoid this problem. On Sun, Jul 14, 2013 at 5:10 PM, Paul
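The 256MB suggestion above is a per-table compaction option; one way it is commonly applied is via CQL (the keyspace/table names here are purely illustrative):

```sql
-- Hypothetical table; only the compaction options reflect the advice above.
ALTER TABLE ks.tbl
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 256};
```

With the old default of 5MB per SSTable, 50GB of leveled data means on the order of ten thousand data files per node, each with companion index/filter files, which eats file descriptors quickly.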

Too many open files and stopped compaction with many pending compaction tasks

2013-06-27 Thread Desimpel, Ignace
(via jmx). compaction_throughput_mb_per_sec is 0. Concurrent_compactors is 3. multithreaded_compaction = false. No other load on these machines. And when I start querying (using thrift), I get a 'too many open files' error on the machine with pending compaction tasks. Limits.conf setting

Re: Too many open files and stopped compaction with many pending compaction tasks

2013-06-27 Thread Jeremy Hanna
…get a ‘too many open files’ error on the machine with pending compaction tasks. The limits.conf setting for nofile is 65536. Using ‘lsof’ and ‘wc -l’ I get a count of 59577 files for Cassandra. Total count of keyspace files on disk: 20464. The 3 machines have an equal (+/-) data load…

RE: Too many open files and stopped compaction with many pending compaction tasks

2013-06-27 Thread Desimpel, Ignace
…[mailto:jeremy.hanna1...@gmail.com] Sent: Thursday, 27 June 2013 15:36 To: user@cassandra.apache.org Subject: Re: Too many open files and stopped compaction with many pending compaction tasks. Are you on SSDs? On 27 Jun 2013, at 14:24, Desimpel, Ignace ignace.desim...@nuance.com wrote: On a test…

Too Many Open files error

2012-12-20 Thread santi kumar
While running nodetool repair, we are running into FileNotFoundException with a too many open files error. We increased the ulimit value to 32768, and still we have seen this issue. The number of files in the data directory is around 29500+. If we further increase the ulimit, would…

Re: Too Many Open files error

2012-12-20 Thread Andrey Ilinykh
This bug is fixed in 1.1.5. Andrey. On Thu, Dec 20, 2012 at 12:01 AM, santi kumar santi.ku...@gmail.com wrote: While running nodetool repair, we are running into FileNotFoundException with a too many open files error. We increased the ulimit value to 32768, and still we have seen…

Re: Too Many Open files error

2012-12-20 Thread santi kumar
…at 12:01 AM, santi kumar santi.ku...@gmail.com wrote: While running nodetool repair, we are running into FileNotFoundException with a too many open files error. We increased the ulimit value to 32768, and still we have seen this issue. The number of files in the data directory is around…

Re: Too Many Open files error

2012-12-20 Thread Andrey Ilinykh
On Thu, Dec 20, 2012 at 12:01 AM, santi kumar santi.ku...@gmail.com wrote: While running nodetool repair, we are running into FileNotFoundException with a too many open files error. We increased the ulimit value to 32768, and still we have seen this issue. The number of files in the data…

Re: Too Many Open files error

2012-12-20 Thread aaron morton
…Ilinykh ailin...@gmail.com wrote: This bug is fixed in 1.1.5. Andrey. On Thu, Dec 20, 2012 at 12:01 AM, santi kumar santi.ku...@gmail.com wrote: While running nodetool repair, we are running into FileNotFoundException with a too many open files error. We increased the ulimit value to 32768…

Re: cassandra hit a wall: Too many open files (98567!)

2012-01-19 Thread Thorsten von Eicken
AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[CompactionExecutor:2918,1,main] java.io.IOError: java.io.FileNotFoundException: /mnt/ebs/data/rslog_production/req_word_idx-hc-453661-Data.db (Too many open files in system). After that it stopped working and just sat…

Re: cassandra hit a wall: Too many open files (98567!)

2012-01-18 Thread Sylvain Lebresne
…[CompactionExecutor:2918,1,main] java.io.IOError: java.io.FileNotFoundException: /mnt/ebs/data/rslog_production/req_word_idx-hc-453661-Data.db (Too many open files in system). After that it stopped working and just sat there with this error (understandable). I did an lsof and saw that it had 98567 open…

Re: cassandra hit a wall: Too many open files (98567!)

2012-01-18 Thread Janne Jalkanen
…133) Fatal exception in thread Thread[CompactionExecutor:2918,1,main] java.io.IOError: java.io.FileNotFoundException: /mnt/ebs/data/rslog_production/req_word_idx-hc-453661-Data.db (Too many open files in system). After that it stopped working and just sat there with this error (understandable)…

Re: cassandra hit a wall: Too many open files (98567!)

2012-01-17 Thread dir dir
…many open files in system). After that it stopped working and just sat there with this error (understandable). I did an lsof and saw that it had 98567 open files, yikes! An ls in the data directory shows 234011 files. After restarting it spent about 5 hours compacting, then quieted down. About…

Re: cassandra hit a wall: Too many open files (98567!)

2012-01-15 Thread aaron morton
] 2012-01-12 20:37:06,327 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[CompactionExecutor:2918,1,main] java.io.IOError: java.io.FileNotFoundException: /mnt/ebs/data/rslog_production/req_word_idx-hc-453661-Data.db (Too many open files in system) After

cassandra hit a wall: Too many open files (98567!)

2012-01-13 Thread Thorsten von Eicken
…/data/rslog_production/req_word_idx-hc-453661-Data.db (Too many open files in system). After that it stopped working and just sat there with this error (understandable). I did an lsof and saw that it had 98567 open files, yikes! An ls in the data directory shows 234011 files. After restarting it spent…

Too many open files

2011-07-27 Thread Donna Li
…during acceptance of message. org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:124) at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java…

Re: Too many open files

2011-07-27 Thread Peter Schuller
What does the following error mean? One of my Cassandra servers prints this error, and nodetool shows the state of the server as down. The netstat result shows very few sockets. The operating-system-enforced limits have been hit, so Cassandra is unable to create additional file…

Re: Too many open files during Repair operation

2011-07-19 Thread Sameer Farooqui
I'm guessing you've seen this already? http://www.datastax.com/docs/0.8/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files Check out the # of file descriptors opened with the lsof -n | grep java command. On Tue, Jul 19, 2011 at 8:30 AM, cbert...@libero.it cbert…

Re: Too many open files during Repair operation

2011-07-19 Thread Attila Babo
If you are using Linux, especially Ubuntu, check the linked document below. This is my favorite: Using sudo has side effects in terms of open file limits. On Ubuntu they’ll be reset to 1024, no matter what’s set in /etc/security/limits.conf http://wiki.basho.com/Open-Files-Limit.html /Attila
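Given caveats like the sudo one above, it is worth checking the limit the running process actually received rather than trusting limits.conf. A sketch, again assuming Linux /proc (the shell's own PID stands in for Cassandra's):

```shell
# /proc/<pid>/limits shows the soft and hard caps the process was
# actually started with, regardless of what limits.conf says now.
pid=$$
grep "Max open files" "/proc/$pid/limits"
```

If this row shows 1024 on a node whose limits.conf promises 32768 or more, the process was started under a context (sudo, init script, service manager) that reset the limit.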

Re: too many open files - maybe a fd leak in indexslicequeries

2011-04-05 Thread Jonathan Ellis
…Gude; Juergen Link; Johannes Hoerle Subject: Re: too many open files - maybe a fd leak in indexslicequeries. Index queries (ColumnFamilyStore.scan) don't do any low-level i/o themselves; they go through CFS.getColumnFamily, which is what normal row fetches also go through. So if there is a leak…

AW: too many open files - maybe a fd leak in indexslicequeries

2011-04-02 Thread Roland Gude
…Message- From: Jonathan Ellis [mailto:jbel...@gmail.com] Sent: Friday, 1 April 2011 06:07 To: user@cassandra.apache.org Cc: Roland Gude; Juergen Link; Johannes Hoerle Subject: Re: too many open files - maybe a fd leak in indexslicequeries. Index queries (ColumnFamilyStore.scan) don't do…

too many open files - maybe a fd leak in indexslicequeries

2011-03-31 Thread Roland Gude
I experience something that looks exactly like https://issues.apache.org/jira/browse/CASSANDRA-1178 on Cassandra 0.7.3 when using index slice queries (lots of them), crashing multiple nodes and rendering the cluster useless. But I have no clue where to look if index queries still leak fds. Does…

Re: too many open files - maybe a fd leak in indexslicequeries

2011-03-31 Thread Jonathan Ellis
Index queries (ColumnFamilyStore.scan) don't do any low-level i/o themselves, they go through CFS.getColumnFamily, which is what normal row fetches also go through. So if there is a leak there it's unlikely to be specific to indexes. What is your open-file limit (remember that sockets count

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Amin Sakka, Novapost
…java.net.SocketException: Too many open files. What worries me is this / by zero exception when I try to restart Cassandra! At least, I want to back up the 3.50 rows to continue my insertion; is there a way to do this? Exception encountered during startup. java.lang.ArithmeticException: / by zero

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Germán Kondolf
…file descriptors to unlimited. Now, I get exactly the same exception after 3.50 rows: CustomTThreadPoolServer.java (line 104) Transport error occurred during acceptance of message. org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files…

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Jake Luciani
…exactly the same exception after 3.50 rows: CustomTThreadPoolServer.java (line 104) Transport error occurred during acceptance of message. org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files. What worries me is this / by zero exception…

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Ryan King
. org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:124) at org.apache.cassandra.thrift.TCustomServerSocket.acceptImpl(TCustomServerSocket.java:67) at org.apache.cassandra.thrift.TCustomServerSocket.acceptImpl

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Nate McCall
) at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:229) at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:134) Caused by: java.net.SocketException: Too many open files at java.net.PlainSocketImpl.socketAccept(Native Method

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-16 Thread Germán Kondolf
during acceptance of message. org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:124) at org.apache.cassandra.thrift.TCustomServerSocket.acceptImpl

Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-15 Thread Amin Sakka, Novapost
CustomTThreadPoolServer.java (line 104) Transport error occurred during acceptance of message. org.apache.thrift.transport.TTransportException: java.net.SocketException: Too many open files at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:124

Re: Too many open files Exception + java.lang.ArithmeticException: / by zero

2010-12-15 Thread Jake Luciani
http://www.riptano.com/docs/0.6/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files On Wed, Dec 15, 2010 at 11:13 AM, Amin Sakka, Novapost amin.sa...@novapost.fr wrote: *Hello,* *I'm using cassandra 0.7.0 rc1, a single node configuration, replication factor 1

Re: too many open files 0.7.0 beta1

2010-08-26 Thread Aaron Morton
…Aug 26, 2010 at 2:05 PM, Aaron Morton aa...@thelastpickle.com wrote: Under 0.7.0 beta1 am seeing cassandra run out of file handles... Caused by: java.io.FileNotFoundException: /local1/junkbox/cassandra/data/junkbox.wetafx.co.nz/ObjectIndex-e-31-Index.db (Too many open files…

too many open files 0.7.0 beta1

2010-08-25 Thread Aaron Morton
Under 0.7.0 beta1 am seeing cassandra run out of file handles... Caused by: java.io.FileNotFoundException: /local1/junkbox/cassandra/data/junkbox.wetafx.co.nz/ObjectIndex-e-31-Index.db (Too many open files) at java.io.RandomAccessFile.open(Native Method) at java.io.RandomAccessFile.<init>…

Re: too many open files 0.7.0 beta1

2010-08-25 Thread Dan Washusen
…/junkbox.wetafx.co.nz/ObjectIndex-e-31-Index.db (Too many open files) at java.io.RandomAccessFile.open(Native Method) at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212) at java.io.RandomAccessFile.<init>(RandomAccessFile.java:98…

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread Jonathan Ellis
for the first 30M or so mutates, then within 4 seconds they jumped to about 800, stayed there for about 30 seconds, then within 5 seconds went over 2022, at which point the server entered the cycle of SocketException: Too many open files.  Interesting thing is that the file limit

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread Peter Schuller
[snip] I'm not sure that is the case. When the server gets into the unrecoverable state, the repeating exceptions are indeed SocketException: Too many open files. [snip] Although this is unquestionably a network error,  I don't think it is actually a network problem per se, as the maximum

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread Jorge Barrios
Schuller peter.schul...@infidyne.com wrote: [snip] I'm not sure that is the case. When the server gets into the unrecoverable state, the repeating exceptions are indeed SocketException: Too many open files. [snip] Although this is unquestionably a network error, I don't think

Re: Too many open files [was Re: Minimizing the impact of compaction on latency and throughput]

2010-07-14 Thread Jorge Barrios
Each of my top-level functions was allocating a Hector client connection at the top, and releasing it when returning. The problem arose when a top-level function had to call another top-level function, which led to the same thread allocating two connections. Hector was not releasing one of them