[
https://issues.apache.org/jira/browse/CASSANDRA-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14959498#comment-14959498
]
Jeff Griffith commented on CASSANDRA-10515:
-------------------------------------------
A second iteration. I ran into a second instance of metrics calls over RMI getting
blocked, but caught it very early, while only a few were stuck behind the
compaction. It still looks like the same general place:
{code}
"CompactionExecutor:16" #1502 daemon prio=1 os_prio=4 tid=0x00007fb78c4f2000
nid=0xf7ff runnable [0x00007fb751941000]
java.lang.Thread.State: RUNNABLE
at java.util.HashMap.putVal(HashMap.java:641)
at java.util.HashMap.put(HashMap.java:611)
at java.util.HashSet.add(HashSet.java:219)
at
org.apache.cassandra.db.compaction.LeveledManifest.overlapping(LeveledManifest.java:512)
at
org.apache.cassandra.db.compaction.LeveledManifest.overlapping(LeveledManifest.java:497)
at
org.apache.cassandra.db.compaction.LeveledManifest.getCandidatesFor(LeveledManifest.java:572)
at
org.apache.cassandra.db.compaction.LeveledManifest.getCompactionCandidates(LeveledManifest.java:346)
- locked <0x00000004bcf24298> (a
org.apache.cassandra.db.compaction.LeveledManifest)
at
org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getMaximalTask(LeveledCompactionStrategy.java:101)
at
org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getNextBackgroundTask(LeveledCompactionStrategy.java:90)
- locked <0x00000004bcbec488> (a
org.apache.cassandra.db.compaction.LeveledCompactionStrategy)
at
org.apache.cassandra.db.compaction.WrappingCompactionStrategy.getNextBackgroundTask(WrappingCompactionStrategy.java:84)
- locked <0x00000004b98f1b00> (a
org.apache.cassandra.db.compaction.WrappingCompactionStrategy)
at
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:230)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
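For reference, the hot frames sit in the LCS candidate-selection path: getCandidatesFor()
calls overlapping(), which, judging by the frames, builds a HashSet of the sstables whose
token ranges intersect the candidate's. A rough standalone sketch of that shape (names and
ranges are illustrative only, not the actual LeveledManifest code) shows how the scan grows
when a level accumulates a very large number of sstables:
{code}
// Simplified sketch of the overlap scan implied by the stack above.
// Illustrative names only; this is NOT the real LeveledManifest code.
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class OverlapScanSketch
{
    // Stand-in for an sstable's first/last token range.
    static final class Table
    {
        final long first, last;
        Table(long first, long last) { this.first = first; this.last = last; }
    }

    // Collect every table whose range intersects the candidate's range.
    // Doing this once per candidate over N tables is O(N) per call and
    // O(N^2) for a full pass, which matches a RUNNABLE thread parked in
    // HashMap.putVal / HashSet.add when N is very large.
    static Set<Table> overlapping(Table candidate, List<Table> others)
    {
        Set<Table> overlapped = new HashSet<>();
        for (Table t : others)
        {
            boolean disjoint = t.last < candidate.first || t.first > candidate.last;
            if (!disjoint)
                overlapped.add(t); // the HashSet.add frame in the trace
        }
        return overlapped;
    }

    public static void main(String[] args)
    {
        // Build a "level" with many slightly overlapping ranges.
        List<Table> level = new ArrayList<>();
        for (int i = 0; i < 10_000; i++)
            level.add(new Table(i * 10L, i * 10L + 15));

        long start = System.nanoTime();
        long total = 0;
        for (Table t : level)
            total += overlapping(t, level).size();
        System.out.printf("%d overlaps, scan took %d ms%n",
                          total, (System.nanoTime() - start) / 1_000_000);
    }
}
{code}
If the executor spends a long time in a scan like this while holding the strategy locks
shown in the trace, that would line up with the metrics-over-RMI calls piling up behind it.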
> Commit logs back up with move to 2.1.10
> ---------------------------------------
>
> Key: CASSANDRA-10515
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10515
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: redhat 6.5, cassandra 2.1.10
> Reporter: Jeff Griffith
> Assignee: Branimir Lambov
> Priority: Critical
> Labels: commitlog, triage
> Attachments: CommitLogProblem.jpg, CommitLogSize.jpg, stacktrace.txt,
> system.log.clean
>
>
> After upgrading from Cassandra 2.0.x to 2.1.10, we began seeing problems
> where some nodes exceed the 12G commit log maximum we configured and grow as
> high as 65G or more before restarting. Once a node passes 12G of commit log
> files, "nodetool compactionstats" hangs. Eventually C* restarts without
> errors (not sure yet whether it is crashing, but I'm checking into it), the
> cleanup occurs, and the commit logs shrink back down again. Here is the
> nodetool compactionstats output immediately after restart.
> {code}
> [email protected]:~$ ndc
> pending tasks: 2185
>    compaction type   keyspace   table       completed          total   unit   progress
>         Compaction   SyncCore   *cf1*     61251208033   170643574558   bytes     35.89%
>         Compaction   SyncCore   *cf2*     19262483904    19266079916   bytes     99.98%
>         Compaction   SyncCore   *cf3*      6592197093     6592316682   bytes    100.00%
>         Compaction   SyncCore   *cf4*      3411039555     3411039557   bytes    100.00%
>         Compaction   SyncCore   *cf5*      2879241009     2879487621   bytes     99.99%
>         Compaction   SyncCore   *cf6*     21252493623    21252635196   bytes    100.00%
>         Compaction   SyncCore   *cf7*     81009853587    81009854438   bytes    100.00%
>         Compaction   SyncCore   *cf8*      3005734580     3005768582   bytes    100.00%
> Active compaction remaining time : n/a
> {code}
> I was also doing periodic "nodetool tpstats" checks, which were working, but
> nothing was being logged to system.log by the StatusLogger thread until after
> the compaction started working again.
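Since nodetool compactionstats and the metrics calls mentioned in the comment both go over
JMX/RMI, a minimal standalone probe can poll the pending-compactions gauge directly. The
sketch below assumes the default JMX port 7199 and the standard
org.apache.cassandra.metrics:type=Compaction,name=PendingTasks gauge (adjust both if your
JMX setup differs):
{code}
// Minimal JMX probe sketch: reads the pending-compactions gauge directly.
// Port 7199 and the metrics MBean name are assumed defaults for this sketch.
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PendingCompactionsProbe
{
    public static void main(String[] args) throws Exception
    {
        String host = args.length > 0 ? args[0] : "localhost";
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumed metric name, following Cassandra's metrics naming scheme.
            ObjectName pendingTasks = new ObjectName(
                    "org.apache.cassandra.metrics:type=Compaction,name=PendingTasks");
            System.out.println("Pending compaction tasks: "
                               + mbs.getAttribute(pendingTasks, "Value"));
        }
    }
}
{code}
If a simple read like this also blocks while the commit logs are growing, that would be
consistent with the executor holding the compaction-strategy locks in the stack trace above.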
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)