[
https://issues.apache.org/jira/browse/CASSANDRA-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14955530#comment-14955530
]
Jeff Griffith edited comment on CASSANDRA-10515 at 10/13/15 7:56 PM:
---------------------------------------------------------------------
Here is the tpstats output. Note the number of pending MemtablePostFlush:
{code}
Every 15.0s: nodetool tpstats                                                     Tue Oct 13 19:51:26 2015
Pool Name                  Active   Pending      Completed   Blocked  All time blocked
MutationStage                   0         0       33228570         0                 0
ReadStage                       4         0       29323052         0                 0
RequestResponseStage            0         0       25342306         0                 0
ReadRepairStage                 0         0        1776236         0                 0
CounterMutationStage            0         0              0         0                 0
MiscStage                       0         0              0         0                 0
HintedHandoff                   0         0             13         0                 0
GossipStage                     0         0          46134         0                 0
CacheCleanupExecutor            0         0              0         0                 0
InternalResponseStage           0         0              3         0                 0
CommitLogArchiver               0         0              0         0                 0
CompactionExecutor             10       105           3936         0                 0
ValidationExecutor              0         0              0         0                 0
MigrationStage                  0         0              1         0                 0
AntiEntropyStage                0         0              0         0                 0
PendingRangeCalculator          0         0              4         0                 0
Sampler                         0         0              0         0                 0
MemtableFlushWriter             1         1           1574         0                 0
MemtablePostFlush               1     13755         134889         0                 0
MemtableReclaimMemory           0         0           1574         0                 0
Message type           Dropped
READ                         3
RANGE_SLICE                  0
_TRACE                       0
MUTATION                237849
COUNTER_MUTATION             0
BINARY                       0
REQUEST_RESPONSE             3
PAGED_RANGE                  0
READ_REPAIR              11081
{code}
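To keep the backlog in view without scanning the whole table, a minimal watch along these lines works (a sketch; it just greps the pool names from the output above out of nodetool tpstats):
{code}
# Refresh every 15s and show only the flush/compaction pools from nodetool tpstats;
# a steadily growing Pending count on MemtablePostFlush is the symptom shown above.
watch -n 15 "nodetool tpstats | egrep 'MemtablePostFlush|MemtableFlushWriter|CompactionExecutor'"
{code}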
> Compaction hangs with move to 2.1.10
> ------------------------------------
>
> Key: CASSANDRA-10515
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10515
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: redhat 6.5, cassandra 2.1.10
> Reporter: Jeff Griffith
> Priority: Critical
> Attachments: CommitLogProblem.jpg
>
>
> After upgrading from Cassandra 2.0.x to 2.1.10, we began seeing problems where
> some nodes exceed the 12G commit log cap we configured and grow as high as 65G
> or more. Once a node reaches this state, "nodetool compactionstats" hangs. I
> watched the recovery live: when compactions began happening again, "nodetool
> compactionstats" suddenly completed and showed the outstanding jobs, most of
> them at 100% completion:
> {code}
> [email protected]:~$ ndc
> pending tasks: 2185
>    compaction type   keyspace                          table      completed          total    unit   progress
>         Compaction   SyncCore      ContactInformationUpdates    61251208033   170643574558   bytes     35.89%
>         Compaction   SyncCore                     CommEvents    19262483904    19266079916   bytes     99.98%
>         Compaction   SyncCore   EndpointPrefixIndexMinimized     6592197093     6592316682   bytes    100.00%
>         Compaction   SyncCore           EmailHistogramDeltas     3411039555     3411039557   bytes    100.00%
>         Compaction   SyncCore        ContactPrefixBytesIndex     2879241009     2879487621   bytes     99.99%
>         Compaction   SyncCore               EndpointProfiles    21252493623    21252635196   bytes    100.00%
>         Compaction   SyncCore                     CommEvents    81009853587    81009854438   bytes    100.00%
>         Compaction   SyncCore             EndpointIndexIntId     3005734580     3005768582   bytes    100.00%
> Active compaction remaining time :        n/a
> {code}
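> For reference, the 12G cap mentioned above is the commit log space limit set in
> cassandra.yaml; a minimal way to confirm what a node is running with is sketched
> below (the config path assumes the package-default layout and may differ):
> {code}
> # Print the configured commit log cap (12G == 12288 MB); the path is an assumption.
> grep commitlog_total_space_in_mb /etc/cassandra/conf/cassandra.yaml
> {code}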
> I was also running "nodetool tpstats" periodically; the command kept working,
> but no status was being logged to system.log by the StatusLogger thread until
> after compaction started working again.
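> A quick way to check whether StatusLogger output is reaching the log again is to
> tail its entries (a sketch; the log path is the package default and may differ):
> {code}
> # Show the most recent StatusLogger lines; a long gap in their timestamps matches
> # the window where status was not being written.
> grep StatusLogger /var/log/cassandra/system.log | tail -n 20
> {code}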
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)