[
https://issues.apache.org/jira/browse/CASSANDRA-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Raul Barroso resolved CASSANDRA-13511.
--------------------------------------
Resolution: Cannot Reproduce
> Compaction stats high with no CPU use.
> ---------------------------------------
>
> Key: CASSANDRA-13511
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13511
> Project: Cassandra
> Issue Type: Bug
> Components: Compaction
> Environment: Red Hat Enterprise Linux 7.3 (Maipo)
> $ nodetool -h localhost version
> ReleaseVersion: 2.2.8
> $ nodetool describecluster
> Cluster Information:
> Name: XXXXXXX Production Cassandra Cluster
> Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
> Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> Schema versions:
> 6938e859-955d-3ecb-aa0a-07bac9db1fc1: [172.16.121.4,
> 172.16.121.68, 172.16.121.5, 172.16.121.69, 172.16.121.6, 172.16.121.70,
> 172.16.121.7, 172.16.121.71]
> $ nodetool status
> Datacenter: DC1
> ===============
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address       Load     Tokens  Owns (effective)  Host ID                               Rack
> UN  172.16.121.4  1.59 TB  1       75.0%             cbf7608f-fd8e-49c6-83d7-2ac5a5a9104c  RAC1
> UN  172.16.121.5  1.44 TB  1       75.0%             fa10aa81-c336-4f8b-a6fe-09f7f92e2026  RAC1
> UN  172.16.121.6  1.52 TB  1       75.0%             d0ed7e9f-034f-490a-8112-30d0b0829c81  RAC1
> UN  172.16.121.7  2.01 TB  1       75.0%             e17ce089-d638-410e-816f-498567200c3d  RAC1
> Datacenter: DC2
> ===============
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address        Load     Tokens  Owns (effective)  Host ID                               Rack
> UN  172.16.121.68  2.3 TB   1       75.0%             eb92ecdc-4be1-452f-b638-67cb8c9c32fd  RAC1
> UN  172.16.121.69  2.77 TB  1       75.0%             cbd93cfc-48a4-4eb5-8015-d1d1f513d09c  RAC1
> UN  172.16.121.70  2.82 TB  1       75.0%             f6d415cf-40c0-4da5-8996-5551dadf2640  RAC1
> UN  172.16.121.71  1.65 TB  1       75.0%             160a7251-fe54-4d4d-8251-32abc6408753  RAC1
> Reporter: Raul Barroso
> Priority: Trivial
> Fix For: 2.2.x
>
>
> Hi Team,
> First of all, this is my first post on Apache's JIRA and I'm not entirely sure
> I'm doing it right. Please excuse any inconvenience and let me know if I've
> made a mistake on my side.
> We are currently facing some problems at our company with the delivered
> production environment; right now my focus is on compaction tasks.
> We are seeing a high number of compaction tasks on two nodes of our cluster,
> currently 84/92 tasks on those nodes, with low CPU usage and almost no disk
> activity (1-2 MB/s read and write values in iostat).
> I'm quite confused by the information shown in compactionstats & tpstats (see
> below):
> - 83 pending compaction tasks + 10 running
> - CompactionExecutor: 8 Active, 151 Pending
> Why don't these numbers match?
> Why are compactions accumulating if the system's CPU and I/O usage are low?
> Why are we running an "unreleased" version, and what does that mean?
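> In case it is relevant, this is how we are checking whether compaction
> throttling could explain the low disk activity (the cassandra.yaml path below
> is just where it lives on our install, so it may differ):
> $ nodetool -h localhost getcompactionthroughput
> $ grep -E 'compaction_throughput_mb_per_sec|concurrent_compactors' /etc/cassandra/conf/cassandra.yaml
> If the throughput cap turned out to be the bottleneck we would raise it with
> "nodetool setcompactionthroughput <MB/s>" (0 disables throttling), but we have
> not changed anything yet.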
> Thanks for your time and help here. Greatly appreciated. Rbarroso.
>
> $ nodetool compactionstats
> pending tasks: 83
> id                                    compaction type              keyspace        table                                   completed      total          unit   progress
> a3307690-3480-11e7-a556-b1cf2e788d44  Compaction                   revenue_events  usage_events                            29628223698    42281336944    bytes  70.07%
> d67dde80-3321-11e7-a556-b1cf2e788d44  Validation                   revenue_events  usage_events_by_agreement_id            884497156069   1000368079324  bytes  88.42%
> 925b4540-34a6-11e7-a556-b1cf2e788d44  Anticompaction after repair  cm              resources__history                      2907492207     5106868634     bytes  56.93%
> 760fa550-3395-11e7-a556-b1cf2e788d44  Compaction                   revenue_events  charging_balance_changes_by_source_id   195729008027   240875858343   bytes  81.26%
> 622374c0-3423-11e7-a556-b1cf2e788d44  Compaction                   revenue_events  event_charges                           93038948816    128189837056   bytes  72.58%
> 0b188320-3485-11e7-a556-b1cf2e788d44  Compaction                   revenue_events  recharge_events_by_agreement_id         27485751628    40857511701    bytes  67.27%
> e714bf40-3464-11e7-a556-b1cf2e788d44  Compaction                   revenue_events  charging_balance_changes_by_target_id   51642077893    104669802576   bytes  49.34%
> ef12b890-34a1-11e7-a556-b1cf2e788d44  Anticompaction after repair  cm              individuals                             6276572787     6987894450     bytes  89.82%
> f6073d80-34a4-11e7-a556-b1cf2e788d44  Anticompaction after repair  cm              agreements__history                     4081490766     10168203433    bytes  40.14%
> cc251310-3496-11e7-a556-b1cf2e788d44  Validation                   revenue_events  usage_events_by_agreement_id            169361885289   907665793695   bytes  18.66%
> Active compaction remaining time : 2h38m18s
> $ nodetool tpstats
> Pool Name                  Active  Pending  Completed   Blocked  All time blocked
> MutationStage              0       0        2751211799  0        0
> ReadStage                  0       0        31316       0        0
> RequestResponseStage       0       0        8071        0        0
> ReadRepairStage            0       0        1           0        0
> CounterMutationStage       0       0        0           0        0
> Repair#26                  1       139      137         0        0
> HintedHandoff              0       0        566         0        0
> MiscStage                  0       0        0           0        0
> CompactionExecutor         8       151      1617296     0        0
> MemtableReclaimMemory      0       0        49919       0        0
> PendingRangeCalculator     0       0        19          0        0
> GossipStage                0       0        4248064     0        0
> MigrationStage             0       0        68432       0        0
> MemtablePostFlush          0       0        74595       0        0
> ValidationExecutor         2       2        848         0        0
> Sampler                    0       0        0           0        0
> MemtableFlushWriter        0       0        49705       0        0
> InternalResponseStage      0       0        288         0        0
> AntiEntropyStage           0       0        4181        0        0
> CacheCleanupExecutor       0       0        0           0        0
> Native-Transport-Requests  0       0        283879      0        0
> Message type Dropped
> READ 0
> RANGE_SLICE 0
> _TRACE 0
> MUTATION 5
> COUNTER_MUTATION 0
> REQUEST_RESPONSE 0
> PAGED_RANGE 0
> READ_REPAIR 0