[
https://issues.apache.org/jira/browse/CASSANDRA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13804365#comment-13804365
]
Quentin Conner edited comment on CASSANDRA-6127 at 10/24/13 6:28 PM:
---------------------------------------------------------------------
*Feature Suggestion*
The current Gossip failure detector is characterized by a sliding window of
elapsed time, a heartbeat message period, and a threshold used to turn the
continuous random variable (lower-case phi) into a dichotomous (binary) one.
That uppercase-PHI threshold is called phi_convict_threshold.
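For reference, the phi calculation this describes (from the Hayashibara et al. accrual failure detector paper) can be sketched as follows. This is a minimal illustration under the Gaussian assumption discussed below, not Cassandra's actual FailureDetector code; the function and variable names are mine:

```python
import math
from collections import deque

def phi(elapsed_ms, intervals):
    """Accrual-style phi: -log10 of the probability that the next
    heartbeat arrives later than elapsed_ms, assuming the recorded
    inter-arrival times are Gaussian."""
    mean = sum(intervals) / len(intervals)
    var = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    std = max(math.sqrt(var), 1e-3)       # guard against zero variance
    # P(next arrival > elapsed) under a normal CDF
    p_later = 0.5 * math.erfc((elapsed_ms - mean) / (std * math.sqrt(2)))
    p_later = max(p_later, 1e-300)        # avoid log10(0)
    return -math.log10(p_later)

# Sliding window of heartbeat inter-arrival times (ms), as kept per peer.
window = deque([1000.0, 1010.0, 990.0, 1005.0], maxlen=1000)
```

With heartbeats arriving on schedule, phi stays small; once the silence stretches well past the observed mean, phi climbs rapidly and crosses phi_convict_threshold (default 8), at which point the peer is convicted as down.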
I don't have a better mathematical theory or derivation as of this writing, but
I do have an easy workaround for your consideration. While phi_convict_threshold
is adjustable, the period (or frequency) of Gossip messages is not. Adjusting
the gossip period to integrate over a longer time baseline reduced false
positives from the Gossip failure detector. The side effect is an increase in
the elapsed time needed to detect a legitimately failed node.
Depending on user workload characteristics and the related sources of latency
(CPU, disk, and network activity or transient delays) cited above, a System
Architect could present a reasonable use case for controlling the Gossip
message period.
The goal would be to set a detection window that accommodates common occurrences
in a given deployment scenario. Not all data centers are created equal.
Patches and results from implementation will follow in subsequent posts.
*Potential Next Steps*
Explore the concern about sensitivity to the gossip period. Do vnode gossip
messages exceed the capacity of peers to ingest them?
Explore the concern about phi estimates from an unfilled (new) deque. See Patch #3.
Explore the concern about assuming a Gaussian PDF. In networking (as opposed to
computing), expected arrival times are generally characterized by a Poisson
distribution, not a Gaussian.
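On that last point: a Poisson arrival process implies exponentially distributed inter-arrival times, under which phi takes a much simpler closed form. The sketch below (my own illustration, not a proposed patch) shows how the exponential model behaves; the function name is hypothetical:

```python
import math

def phi_exponential(elapsed_ms, mean_ms):
    """phi under an exponential inter-arrival model (Poisson process):
    P(arrival > t) = exp(-t / mean), so
    phi = -log10(exp(-t / mean)) = (t / mean) * log10(e)."""
    return (elapsed_ms / mean_ms) * math.log10(math.e)
```

Note that under the exponential model phi grows only linearly with elapsed time (reaching the default threshold of 8 at roughly 18.4x the mean interval), whereas under the Gaussian model the exponent grows quadratically, so a tight variance estimate can drive phi past the threshold after a comparatively short delay. That difference is one plausible source of the false-positive sensitivity described above.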
> vnodes don't scale to hundreds of nodes
> ---------------------------------------
>
> Key: CASSANDRA-6127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6127
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: Any cluster that has vnodes and consists of hundreds of
> physical nodes.
> Reporter: Tupshin Harper
> Assignee: Jonathan Ellis
> Attachments: 6000vnodes.patch, AdjustableGossipPeriod.patch,
> delayEstimatorUntilStatisticallyValid.patch
>
>
> There are a lot of gossip-related issues related to very wide clusters that
> also have vnodes enabled. Let's use this ticket as a master in case there are
> sub-tickets.
> The most obvious symptom I've seen is with 1000 nodes in EC2 with m1.xlarge
> instances. Each node configured with 32 vnodes.
> Without vnodes, cluster spins up fine and is ready to handle requests within
> 30 minutes or less.
> With vnodes, nodes are reporting constant up/down flapping messages with no
> external load on the cluster. After a couple of hours, they were still
> flapping, had very high cpu load, and the cluster never looked like it was
> going to stabilize or be useful for traffic.
--
This message was sent by Atlassian JIRA
(v6.1#6144)