[
https://issues.apache.org/jira/browse/CASSANDRA-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14081040#comment-14081040
]
Brandon Williams edited comment on CASSANDRA-7567 at 7/31/14 4:04 PM:
----------------------------------------------------------------------
I applied CASSANDRA-7644 and finally got meaningful traces, but none were
fruitful - everything always completed quickly and exactly as it should. So I
went for the sure-fire method of reproducing and suspended the JVM on the 'dd'
node so it couldn't respond at all. Still nothing suspicious in the trace;
however, now I _did_ see stress report 12s latencies. Suspecting that stress
wasn't actually doing what I told it to (connect only to node1 and node2, not
node3, which I'm trying to beat up with dd), I discovered it was actually
connecting to all three nodes, and that's why suspending the 'unused' node
caused the latencies.
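For anyone reproducing this, a minimal sketch of the suspend/resume step; it
assumes the Cassandra process can be matched by its CassandraDaemon main class,
so adjust the pattern for your setup:

# Suspend the Cassandra JVM on the 'dd' node so it cannot respond at all
kill -STOP $(pgrep -f CassandraDaemon)
# ... run stress against node1 and node2 and watch the reported latencies ...
# Resume the node afterwards
kill -CONT $(pgrep -f CassandraDaemon)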
I don't think anything is wrong in Cassandra itself here any longer, but
something is wrong with stress, probably doing ring discovery and connecting to
everything when it shouldn't.
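A quick, generic way to confirm which nodes stress actually holds connections
to is to check the supposedly unused node; the port here is an assumption
(9042 for the native protocol, substitute 9160 if stress is going over thrift):

# On node3, the node stress was told not to use
netstat -tn | grep ':9042' | grep ESTABLISHED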
> when the commit_log disk for a single node is overwhelmed the entire cluster
> slows down
> ---------------------------------------------------------------------------------------
>
> Key: CASSANDRA-7567
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7567
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: debian 7.5, bare metal, 14 nodes, 64CPUs, 64GB RAM,
> commit_log disk sata, data disk SSD, vnodes, leveled compaction strategy
> Reporter: David O'Dell
> Assignee: Brandon Williams
> Fix For: 2.1.0
>
> Attachments: 7567.logs.bz2, write_request_latency.png
>
>
> We've run into a situation where a single node out of 14 is experiencing high
> disk io. This can happen when a node is being decommissioned or after it
> joins the ring and runs into the bug CASSANDRA-6621.
> When this occurs the write latency for the entire cluster spikes from 0.3ms
> to 170ms.
> To simulate this simply run dd on the commit_log disk (dd if=/dev/zero
> of=/tmp/foo bs=1024) and you will see that instantly all nodes in the cluster
> have slowed down.
> BTW overwhelming the data disk does not have this same effect.
> Also I've tried this where the overwhelmed node isn't connected to directly
> by the client, and it still has the same effect.