[
https://issues.apache.org/jira/browse/CASSANDRA-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15760984#comment-15760984
]
Tom van der Woerdt commented on CASSANDRA-13055:
------------------------------------------------
FAQ:
* No, this node was not experiencing GC pressure, not even with 1300+ threads
actively doing things
* No, the node hasn't been down for more than the hinted_handoff_window (3h)
since the last repair
* No, there's no reason for the data to be out of sync between the nodes
* This happens occasionally, not just this one instance, though in previous
cases I failed to trace it in time
* No, compaction is not backlogged
* The table is ~10GB per node, in a 24-node cluster with a replication factor
of 9.
* The nodes are normally idle until repairs start, when loadavg suddenly jumps
* loadavg graph, across all nodes, last 3 days: https://i.imgur.com/hcXYWH0.png
> DoS by StreamReceiveTask, during incremental repair
> ---------------------------------------------------
>
> Key: CASSANDRA-13055
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13055
> Project: Cassandra
> Issue Type: Bug
> Reporter: Tom van der Woerdt
>
> There's no limit on how many StreamReceiveTasks there can be, and during an
> incremental repair on a vnode cluster with high replication factors, this can
> lead to thousands of concurrent StreamReceiveTask threads, effectively DoSing
> the node.
> I just found one of my nodes with 1000+ loadavg, caused by 1363 concurrent
> StreamReceiveTask threads.
> That sucks :)
> I think:
> * Cassandra shouldn't allow more than X concurrent StreamReceiveTask threads
> * StreamReceiveTask threads should be at a lower priority, like compaction
> threads
> Alternative ideas welcome as well, of course. A rough sketch of the bounded,
> low-priority executor idea is below.
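> To illustrate both ideas, here's a sketch using plain java.util.concurrent.
> This is not Cassandra's actual StreamReceiveTask machinery; the class name,
> the constant, and the cap of 4 are all made up:
> {code:java}
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.ThreadFactory;
> import java.util.concurrent.atomic.AtomicInteger;
>
> public class StreamReceiveExecutor
> {
>     // Hypothetical cap; in practice this would want to be configurable.
>     private static final int MAX_CONCURRENT_STREAM_RECEIVES = 4;
>
>     private static final ThreadFactory LOW_PRIORITY_FACTORY = new ThreadFactory()
>     {
>         private final AtomicInteger seq = new AtomicInteger();
>
>         public Thread newThread(Runnable r)
>         {
>             Thread t = new Thread(r, "StreamReceiveTask:" + seq.incrementAndGet());
>             t.setDaemon(true);
>             // Run below normal priority so request handling wins, the same way
>             // compaction threads are deprioritized.
>             t.setPriority(Thread.MIN_PRIORITY);
>             return t;
>         }
>     };
>
>     // A fixed-size pool bounds concurrency: excess stream-receive work queues
>     // up instead of every task getting its own thread and driving loadavg up.
>     public static final ExecutorService EXECUTOR =
>         Executors.newFixedThreadPool(MAX_CONCURRENT_STREAM_RECEIVES, LOW_PRIORITY_FACTORY);
> }
> {code}
> (Whether the right bound is a fixed number or something derived from the core
> count is of course up for debate.)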