[jira] [Created] (CASSANDRA-7468) Add time-based execution to cassandra-stress
Matt Kennedy created CASSANDRA-7468:

Summary: Add time-based execution to cassandra-stress
Key: CASSANDRA-7468
URL: https://issues.apache.org/jira/browse/CASSANDRA-7468
Project: Cassandra
Issue Type: Improvement
Components: Tools
Reporter: Matt Kennedy
Priority: Minor

-- This message was sent by Atlassian JIRA (v6.2#6252)
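To make the intent of the ticket concrete, here is a minimal sketch of "time-based execution": run operations against a wall-clock deadline rather than for a fixed operation count. This is illustrative only and is not taken from the attached patch; the class and method names (`TimedStressLoop`, `runFor`) are hypothetical.

```java
import java.util.concurrent.TimeUnit;

public class TimedStressLoop {
    // Run an operation repeatedly until the wall-clock budget expires,
    // instead of stopping after a fixed number of operations.
    public static long runFor(long durationMillis, Runnable op) {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(durationMillis);
        long ops = 0;
        while (System.nanoTime() < deadline) {
            op.run();   // one stress operation (e.g. an insert or read)
            ops++;
        }
        return ops;     // throughput can then be reported as ops / duration
    }
}
```

The appeal over count-based runs is that different schemas and hardware complete very different op counts in the same time, so a duration budget makes runs comparable.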
[jira] [Updated] (CASSANDRA-7468) Add time-based execution to cassandra-stress
[ https://issues.apache.org/jira/browse/CASSANDRA-7468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt Kennedy updated CASSANDRA-7468:

Attachment: trunk-7468.patch
buildbot failure in ASF Buildbot on cassandra-trunk
The Buildbot has detected a new failure on builder cassandra-trunk while building libcloud. Full details are available at:
http://ci.apache.org/builders/cassandra-trunk/builds/393

Buildbot URL: http://ci.apache.org/
Buildslave for this Build: portunus_ubuntu
Build Reason: scheduler
Build Source Stamp: [branch trunk] 0f369d715b96f7934e5d91a15e7fd59402444fa2
Blamelist: Marcus Devich m.dev...@gmx.at, Tomaz Muraus to...@apache.org

BUILD FAILED: failed

sincerely,
-The Buildbot
[jira] [Commented] (CASSANDRA-7467) flood of setting live ratio to maximum of 64 from repair
[ https://issues.apache.org/jira/browse/CASSANDRA-7467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047150#comment-14047150 ]

Jackson Chung commented on CASSANDRA-7467:

Please ignore the previous comment. Two nodes misbehaved overnight while repair was running (on another node), and the crontab flush is already disabled.

flood of setting live ratio to maximum of 64 from repair

Key: CASSANDRA-7467
URL: https://issues.apache.org/jira/browse/CASSANDRA-7467
Project: Cassandra
Issue Type: Bug
Reporter: Jackson Chung

We are on 2.0.8, running repair -pr -local KS; all nodes are i2.2x (60G RAM) with an 8G heap, using Java 8 (key cache size is 1G).

On occasion, when repair is run, the node running the repair, or another node in the cluster, or both, get into a bad state where system.log prints "setting live ratio to maximum of 64" every split second, forever. It usually happens when repairing one of the larger/wider CFs:

WARN [MemoryMeter:1] 2014-06-28 09:13:24,540 Memtable.java (line 470) setting live ratio to maximum of 64.0 instead of Infinity
INFO [MemoryMeter:1] 2014-06-28 09:13:24,540 Memtable.java (line 481) CFS(Keyspace='RIQ', ColumnFamily='MemberTimeline') liveRatio is 64.0 (just-counted was 64.0). calculation took 0ms for 0 cells

Table: MemberTimeline
SSTable count: 13
Space used (live), bytes: 17644018786
...
Compacted partition minimum bytes: 30
Compacted partition maximum bytes: 464228842
Compacted partition mean bytes: 54578

To give an idea of how bad this is: the log file is set to rotate 50 times at 21M each, and in less than 15 minutes all the logs fill up with just that line. C* is not responding and can't be killed normally; the only way is kill -9.
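The "calculation took 0ms for 0 cells" log line hints at where the Infinity comes from: a ratio of memtable heap usage to live cell count divides by zero when the cell count is zero, and the result is then clamped to the 64.0 maximum. The following is a hypothetical reconstruction of that clamping for illustration, not Cassandra's actual `Memtable` code; the class and method names are invented.

```java
public class LiveRatioClamp {
    public static final double MAX_LIVE_RATIO = 64.0;

    // With zero live cells, heapBytes / 0.0 evaluates to Infinity in
    // double arithmetic, which is then clamped to the 64.0 maximum --
    // matching the "setting live ratio to maximum of 64.0 instead of
    // Infinity" warning in the log above.
    public static double liveRatio(long heapBytes, long liveCells) {
        double ratio = (double) heapBytes / (double) liveCells;
        return Math.min(ratio, MAX_LIVE_RATIO);
    }
}
```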
[jira] [Created] (CASSANDRA-7469) RejectedExecutionException causes orphan SSTables
Yasuharu Goto created CASSANDRA-7469:

Summary: RejectedExecutionException causes orphan SSTables
Key: CASSANDRA-7469
URL: https://issues.apache.org/jira/browse/CASSANDRA-7469
Project: Cassandra
Issue Type: Bug
Components: Core
Reporter: Yasuharu Goto
Priority: Minor

I noticed that some old SSTables are not deleted and remain in the data dir. They are never compacted.

{code}
./ks2-cf2-he-9690-Data.db
./ks2-cf2-he-9691-Data.db
./ks2-cf2-he-9679-Data.db  <- current version id
./ks2-cf2-he-205-Data.db   <- very old version id
./ks2-cf2-he-201-Data.db
./ks2-cf2-he-202-Data.db
./ks2-cf2-he-203-Data.db
{code}

And I noticed that a RejectedExecutionException causes these orphan SSTables:

{code}
...
INFO 18:51:45,323 DRAINING: starting drain process
INFO 18:51:45,324 Stop listening to thrift clients
...
# This compaction is not finished. It is terminated by the following exception and never retried, so these SSTables are never deleted.
INFO 18:51:46,512 Compacting [SSTableReader(path='/var/cassandra/data/ks2/cf2/ks2-cf2-he-205-Data.db'), SSTableReader(path='/var/cassandra/data/ks2/cf2/ks2-cf2-he-203-Data.db'), SSTableReader(path='/var/cassandra/data/ks2/cf2/ks2-cf2-he-202-Data.db'), SSTableReader(path='/var/cassandra/data/ks2/cf2/ks2-cf2-he-201-Data.db')]
...
# This compaction is finished. These SSTables don't become orphans.
INFO 18:51:46,641 Compacting [SSTableReader(path='/var/cassandra/data/ks1/cf1/ks1-cf1-he-90-Data.db'), SSTableReader(path='/var/cassandra/data/ks1/cf1/ks1-cf1-he-89-Data.db'), SSTableReader(path='/var/cassandra/data/ks1/cf1/ks1-cf1-he-88-Data.db'), SSTableReader(path='/var/cassandra/data/ks1/cf1/ks1-cf1-he-87-Data.db')]
INFO 18:51:46,736 Compacted to [/var/cassandra/data/ks1/cf1/ks1-cf1-he-91-Data.db,]. 370,606 to 317,566 (~85% of original) bytes for 193 keys at 3.187943MB/s. Time: 95ms.
INFO 18:51:46,836 DRAINED
ERROR 18:51:49,807 Exception in thread Thread[CompactionExecutor:1927,1,RMI Runtime]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@32b5a2c6 rejected from org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@32d18f2c[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 3043]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2013)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:816)
    at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325)
    at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530)
    at java.util.concurrent.ScheduledThreadPoolExecutor.submit(ScheduledThreadPoolExecutor.java:629)
    at org.apache.cassandra.io.sstable.SSTableDeletingTask.schedule(SSTableDeletingTask.java:67)
    at org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:806)
    at org.apache.cassandra.db.DataTracker.removeOldSSTablesSize(DataTracker.java:358)
    at org.apache.cassandra.db.DataTracker.postReplace(DataTracker.java:330)
    at org.apache.cassandra.db.DataTracker.replace(DataTracker.java:324)
    at org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:253)
    at org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:992)
    at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:200)
    at org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:154)
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
INFO 18:52:54,010 Cassandra shutting down...
{code}

As a result of a log survey, we found some orphan SSTables caused by RejectedExecutionException. Maybe I can fix each orphan file with nodetool refresh, but I'd like to ask if this is a problem that has already been solved in an earlier release.
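The stack trace shows SSTableDeletingTask.schedule submitting the deletion task to a DebuggableScheduledThreadPoolExecutor that is already in the Terminated state after drain. The underlying JDK behavior is easy to reproduce in isolation: scheduling on a shut-down ScheduledThreadPoolExecutor throws RejectedExecutionException and the task is silently lost. The class and method names below (`DrainRejection`, `scheduleAfterShutdown`) are hypothetical, not Cassandra code.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DrainRejection {
    // Reproduces the failure mode: once the scheduled executor has been
    // shut down (as happens during drain), submitting the deletion task
    // throws RejectedExecutionException, so the task never runs.
    public static boolean scheduleAfterShutdown() {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        pool.shutdown(); // drain terminates the pool before the task is submitted
        try {
            pool.schedule(() -> {}, 1, TimeUnit.MILLISECONDS);
            return false; // task accepted: not the bug scenario
        } catch (RejectedExecutionException e) {
            return true;  // task rejected: the SSTable would never be deleted
        }
    }
}
```

This is why the unfinished compaction's inputs are left behind: the deletion is a one-shot task with no retry, so a rejection at shutdown time leaves the files on disk permanently.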
[jira] [Commented] (CASSANDRA-7311) Enable incremental backup on a per-keyspace level
[ https://issues.apache.org/jira/browse/CASSANDRA-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047369#comment-14047369 ]

pankaj mishra commented on CASSANDRA-7311:

Hi, can someone comment on the patch for the incremental_backup option on tables?

Enable incremental backup on a per-keyspace level

Key: CASSANDRA-7311
URL: https://issues.apache.org/jira/browse/CASSANDRA-7311
Project: Cassandra
Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor
Labels: lhf
Attachments: 7311-cqlsh-update.txt, table_incremental_7311.patch

Currently incremental backups are globally defined; however, this is not always appropriate or required for all keyspaces in a cluster. As this is quite expensive, it would be preferred to either specify the keyspaces that need it (or exclude the ones that don't).
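A minimal sketch of the include/exclude gate the ticket describes, as a decision function applied per keyspace when a flushed SSTable is considered for backup. This is illustrative only, not the attached patch; `IncrementalBackupPolicy` and its fields are hypothetical names.

```java
import java.util.Set;

public class IncrementalBackupPolicy {
    private final Set<String> include; // keyspaces explicitly opted in
    private final Set<String> exclude; // keyspaces explicitly opted out

    public IncrementalBackupPolicy(Set<String> include, Set<String> exclude) {
        this.include = include;
        this.exclude = exclude;
    }

    // Back up a keyspace only when it is not excluded and either the
    // include list is empty (back up everything, as today's global flag
    // does) or the keyspace was explicitly listed.
    public boolean shouldBackup(String keyspace) {
        if (exclude.contains(keyspace))
            return false;
        return include.isEmpty() || include.contains(keyspace);
    }
}
```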