Ah, that explains part of the problem indeed. The whole situation still
doesn't make a lot of sense to me, unless the answer is that the default
sstable size with leveled compaction is just no good for large datasets. I
restarted Cassandra a few hours ago and it had to open about 32k files at
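For scale: each sstable in 1.0 is four to five files on disk (Data, Index,
Filter, Statistics), so ~32k open files is only around 7k sstables, and at
leveled compaction's default of about 5 MB per sstable that is a few tens
of GB of data. The size is tunable per column family; a minimal sketch in
cassandra-cli syntax (the column family name "events" and the 128 MB figure
are illustrative, not from this thread):

    update column family events
      with compaction_strategy = 'LeveledCompactionStrategy'
      and compaction_strategy_options = {sstable_size_in_mb: 128};

Note this only applies to sstables written after the change; existing small
ones are replaced gradually as compaction rewrites them.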
On Fri, Jan 13, 2012 at 8:01 PM, Thorsten von Eicken t...@rightscale.com
wrote:
I'm running a single node cassandra 1.0.6 server which hit a wall yesterday:
ERROR [CompactionExecutor:2918] 2012-01-12 20:37:06,327
AbstractCassandraDaemon.java (line 133) Fatal exception in thread
1.0.6 has a file leak problem, fixed in 1.0.7. Perhaps this is the reason?
https://issues.apache.org/jira/browse/CASSANDRA-3616
/Janne
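If the CASSANDRA-3616 leak is the culprit, descriptors for already-deleted
sstables should still be visible on the process. A rough check on Linux
(pid discovery via pgrep is illustrative, adjust to taste):

    pid=$(pgrep -f CassandraDaemon)
    # sstables compacted away but whose handles were never closed:
    lsof -p "$pid" | grep -c '(deleted)'
    # total descriptors currently in use by the process:
    ls /proc/"$pid"/fd | wc -l

A large '(deleted)' count that keeps growing would point at the leak rather
than at a genuinely huge sstable count.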
On Jan 18, 2012, at 03:52 , dir dir wrote:
Very interesting. Why do you have so many open files? What kind of
system have you built that needs to open so many files? Would you tell us?
Thanks...
On Sat, Jan 14, 2012 at 2:01 AM, Thorsten von Eicken t...@rightscale.com wrote:
I'm running a single node cassandra 1.0.6 server which hit a
That sounds like too many sstables.
Out of interest, were you using multi-threaded compaction? Just wondering
about this:
https://issues.apache.org/jira/browse/CASSANDRA-3711
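Whether it is enabled shows up in the yaml; a quick check, assuming a
package-style install path (adjust for a tarball layout):

    grep multithreaded_compaction /etc/cassandra/cassandra.yaml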
Can you set the file handle limit to unlimited?
Can you provide some more info on what you see in the data dir, in case it is
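On the file handle question, something along these lines (the 100000 figure
and the paths are illustrative; 1.0 keeps all column family files directly
under the keyspace directory):

    # limit the running process actually sees:
    grep 'open files' /proc/$(pgrep -f CassandraDaemon)/limits

    # raise it persistently in /etc/security/limits.conf, then restart:
    #   cassandra  -  nofile  100000

    # sstable count per keyspace, to see what compaction is up against:
    for ks in /var/lib/cassandra/data/*; do
      echo "$ks: $(ls "$ks"/*-Data.db 2>/dev/null | wc -l)"
    done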
I'm running a single node cassandra 1.0.6 server which hit a wall yesterday:
ERROR [CompactionExecutor:2918] 2012-01-12 20:37:06,327
AbstractCassandraDaemon.java (line 133) Fatal exception in thread
Thread[CompactionExecutor:2918,1,main] java.io.IOError:
java.io.FileNotFoundException: