Re: Virtual Nodes, lots of physical nodes and potentially increasing outage count?

2012-12-11 Thread Eric Parusel
Thanks for your thoughts, guys.

I agree that with vnodes total downtime is lessened, although it also
seems that the total number of outages (however small each one is) would be
greater.

But I think downtime is only lessened up to a certain cluster size.

I'm thinking that as the cluster continues to grow:
  - node rebuild time will max out (a single node only has so much write
bandwidth)
  - the probability of 2 nodes being down at any given time will continue
to increase -- even if you consider only non-correlated failures.

Therefore, when adding nodes beyond the point where node rebuild time maxes
out, both the total number of outages *and* overall downtime would increase?
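
A rough back-of-envelope sketch of the second point (this assumes
independent failures and a made-up per-node unavailability, so it's only
illustrative):

    # toy model: probability that at least 2 of N nodes are down at a given
    # moment, assuming independent failures with per-node unavailability p
    # (p = 0.001 is an assumption for illustration, not a measured value)
    def p_at_least_two_down(n, p=0.001):
        p_none = (1 - p) ** n
        p_one = n * p * (1 - p) ** (n - 1)
        return 1 - p_none - p_one

    for n in (12, 48, 144, 576):
        print(n, p_at_least_two_down(n))

For small p this grows roughly quadratically with N, which is the effect I'm
worried about.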

Thanks,
Eric




On Mon, Dec 10, 2012 at 7:00 AM, Edward Capriolo edlinuxg...@gmail.com wrote:

 Assume you need to work with quorum in a non-vnode scenario. That means
 that if 2 adjacent nodes in the ring are down, some number of quorum
 operations will fail with UnavailableException (TimeoutException right
 after the failures). This is because, for a given range of tokens, quorum
 will be impossible, while for other ranges it will still be possible.

 In a vnode world, if any two nodes are down, then the vnode token ranges
 they have in common become unavailable.

 I think it is two sides of the same coin.


 On Mon, Dec 10, 2012 at 7:41 AM, Richard Low r...@acunu.com wrote:

 Hi Tyler,

 You're right, the math does assume independence, which is unlikely to be
 accurate.  But if you do have correlated failure modes, e.g. shared power,
 racks, DCs, etc., then you can still use Cassandra's rack-aware or DC-aware
 features to ensure replicas are spread around so your cluster can survive
 the correlated failure mode.  So I would expect vnodes to improve uptime in
 all scenarios, but I haven't done the math to prove it.

 Richard.





Re: Virtual Nodes, lots of physical nodes and potentially increasing outage count?

2012-12-11 Thread Richard Low
Hi Eric,

The time to recover a single node is limited by that node, but the recovery
time that matters most is just the time to re-replicate the data that is
missing from that node.  This is the removetoken operation (called
removenode in 1.2), and it gets faster the more nodes you have.
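
To make that concrete, here's a toy model (the data size and per-node
streaming rate are made-up numbers, and it ignores overheads, so treat it
only as a sketch of the scaling):

    # toy model of re-replication time after a node failure; data size and
    # per-node streaming rate are made-up numbers for illustration
    def rebuild_hours(data_per_node_gb, stream_mb_s, n_nodes, vnodes=True):
        data_mb = data_per_node_gb * 1024.0
        if vnodes:
            # every surviving node streams a small slice in parallel
            effective_rate = stream_mb_s * (n_nodes - 1)
        else:
            # only a fixed handful of neighbours take part, so the rate
            # stays roughly constant regardless of cluster size
            effective_rate = stream_mb_s
        return data_mb / effective_rate / 3600.0

    for n in (12, 48, 144):
        print(n, rebuild_hours(500, 50, n, vnodes=False),
                 rebuild_hours(500, 50, n, vnodes=True))

So the window during which a second failure can hurt you shrinks as the
cluster grows, rather than staying fixed.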

Richard.


On 11 December 2012 08:39, Eric Parusel ericparu...@gmail.com wrote:

 Thanks for your thoughts, guys.

 I agree that with vnodes total downtime is lessened, although it also
 seems that the total number of outages (however small each one is) would be
 greater.

 But I think downtime is only lessened up to a certain cluster size.

 I'm thinking that as the cluster continues to grow:
   - node rebuild time will max out (a single node only has so much write
 bandwidth)
   - the probability of 2 nodes being down at any given time will continue
 to increase -- even if you consider only non-correlated failures.

 Therefore, when adding nodes beyond the point where node rebuild time
 maxes out, both the total number of outages *and* overall downtime would
 increase?

 Thanks,
 Eric




 On Mon, Dec 10, 2012 at 7:00 AM, Edward Capriolo edlinuxg...@gmail.com wrote:

 Assume you need to work with quorum in a non-vnode scenario. That means
 that if 2 adjacent nodes in the ring are down, some number of quorum
 operations will fail with UnavailableException (TimeoutException right
 after the failures). This is because, for a given range of tokens, quorum
 will be impossible, while for other ranges it will still be possible.

 In a vnode world, if any two nodes are down, then the vnode token ranges
 they have in common become unavailable.

 I think it is two sides of the same coin.


 On Mon, Dec 10, 2012 at 7:41 AM, Richard Low r...@acunu.com wrote:

 Hi Tyler,

 You're right, the math does assume independence, which is unlikely to be
 accurate.  But if you do have correlated failure modes, e.g. shared power,
 racks, DCs, etc., then you can still use Cassandra's rack-aware or DC-aware
 features to ensure replicas are spread around so your cluster can survive
 the correlated failure mode.  So I would expect vnodes to improve uptime in
 all scenarios, but I haven't done the math to prove it.

 Richard.






-- 
Richard Low
Acunu | http://www.acunu.com | @acunu


Re: Virtual Nodes, lots of physical nodes and potentially increasing outage count?

2012-12-11 Thread Eric Parusel
OK, thanks, Richard.  That's good to hear.

However, I still contend that as node count increases toward infinity, the
probability of there being at least two node failures in the cluster at any
given time approaches 100%.
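
Spelled out under the usual (over-)simplified assumption of independent
failures with a fixed per-node unavailability p, that's just:

    \lim_{N \to \infty} \Pr[\text{at least 2 of } N \text{ nodes down}]
      = \lim_{N \to \infty} \left[ 1 - (1-p)^N - N p (1-p)^{N-1} \right] = 1
      \quad \text{for any fixed } 0 < p < 1.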

I think of this as somewhat analogous to RAID -- I would not be comfortable
with a 144+ disk RAID 6 array, no matter the rebuild speed :)

Is it possible to configure or write a snitch that would create separate
distribution zones within the cluster?  (E.g., 144 nodes in the cluster,
split into 12 zones; data stored on node 1 could only be replicated to one
of the 11 other nodes in the same distribution zone.)
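
I don't mean an actual snitch implementation -- just the placement rule,
something like this sketch (zone size and node numbering are made up for
illustration):

    # sketch of the distribution-zone idea: replicas for data primarily
    # owned by a node may only be placed on other nodes in the same zone
    # (not a real snitch; just the zone logic)
    NODES_PER_ZONE = 12

    def zone_of(node_id):
        return node_id // NODES_PER_ZONE

    def allowed_replica_targets(node_id, all_nodes):
        return [n for n in all_nodes
                if n != node_id and zone_of(n) == zone_of(node_id)]

    nodes = list(range(144))
    print(allowed_replica_targets(0, nodes))   # nodes 1..11 only

That would bound the number of node pairs whose simultaneous failure can
break quorum, at the cost of less even load spreading.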


On Tue, Dec 11, 2012 at 3:24 AM, Richard Low r...@acunu.com wrote:

 Hi Eric,

 The time to recover a single node is limited by that node, but the recovery
 time that matters most is just the time to re-replicate the data that is
 missing from that node.  This is the removetoken operation (called
 removenode in 1.2), and it gets faster the more nodes you have.

 Richard.


 On 11 December 2012 08:39, Eric Parusel ericparu...@gmail.com wrote:

 Thanks for your thoughts, guys.

 I agree that with vnodes total downtime is lessened, although it also
 seems that the total number of outages (however small each one is) would be
 greater.

 But I think downtime is only lessened up to a certain cluster size.

 I'm thinking that as the cluster continues to grow:
   - node rebuild time will max out (a single node only has so much write
 bandwidth)
   - the probability of 2 nodes being down at any given time will continue
 to increase -- even if you consider only non-correlated failures.

 Therefore, when adding nodes beyond the point where node rebuild time
 maxes out, both the total number of outages *and* overall downtime would
 increase?

 Thanks,
 Eric




 On Mon, Dec 10, 2012 at 7:00 AM, Edward Capriolo 
 edlinuxg...@gmail.com wrote:

 Assume you need to work with quorum in a non-vnode scenario. That means
 that if 2 adjacent nodes in the ring are down, some number of quorum
 operations will fail with UnavailableException (TimeoutException right
 after the failures). This is because, for a given range of tokens, quorum
 will be impossible, while for other ranges it will still be possible.

 In a vnode world, if any two nodes are down, then the vnode token ranges
 they have in common become unavailable.

 I think it is two sides of the same coin.


 On Mon, Dec 10, 2012 at 7:41 AM, Richard Low r...@acunu.com wrote:

 Hi Tyler,

 You're right, the math does assume independence, which is unlikely to be
 accurate.  But if you do have correlated failure modes, e.g. shared power,
 racks, DCs, etc., then you can still use Cassandra's rack-aware or DC-aware
 features to ensure replicas are spread around so your cluster can survive
 the correlated failure mode.  So I would expect vnodes to improve uptime in
 all scenarios, but I haven't done the math to prove it.

 Richard.






 --
 Richard Low
 Acunu | http://www.acunu.com | @acunu



Re: Virtual Nodes, lots of physical nodes and potentially increasing outage count?

2012-12-10 Thread Richard Low
Hi Tyler,

You're right, the math does assume independence, which is unlikely to be
accurate.  But if you do have correlated failure modes, e.g. shared power,
racks, DCs, etc., then you can still use Cassandra's rack-aware or DC-aware
features to ensure replicas are spread around so your cluster can survive
the correlated failure mode.  So I would expect vnodes to improve uptime in
all scenarios, but I haven't done the math to prove it.
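
For the intuition, here's a toy sketch of the placement property those
features give you (this is not Cassandra's actual replication code -- it
just skips repeated racks while walking the ring, and unlike the real
strategies it doesn't handle having fewer racks than RF):

    # toy illustration: place the RF replicas for a range on distinct racks,
    # so losing any single rack still leaves enough replicas for quorum at
    # RF=3
    def place_replicas(ring, rack_of, start_index, rf=3):
        replicas, racks_used = [], set()
        for step in range(len(ring)):
            node = ring[(start_index + step) % len(ring)]
            if rack_of[node] not in racks_used:
                replicas.append(node)
                racks_used.add(rack_of[node])
            if len(replicas) == rf:
                break
        return replicas

    ring = ["n1", "n2", "n3", "n4", "n5", "n6"]
    rack_of = {"n1": "r1", "n2": "r1", "n3": "r2",
               "n4": "r2", "n5": "r3", "n6": "r3"}
    print(place_replicas(ring, rack_of, 0))   # ['n1', 'n3', 'n5']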

Richard.


On 9 December 2012 17:50, Tyler Hobbs ty...@datastax.com wrote:

 Nicolas,

 Strictly speaking, your math makes the assumption that the failures of
 different nodes are probabilistically independent events. This is, of
 course, not an accurate assumption for real-world conditions.  Nodes share
 racks, networking equipment, power, availability zones, data centers, etc.
 So, I think the mathematical assertion is not quite as strong as one would
 like, but it's certainly a good argument for handling certain types of node
 failures.


 On Fri, Dec 7, 2012 at 11:27 AM, Nicolas Favre-Felix nico...@acunu.com wrote:

 Hi Eric,

 Your concerns are perfectly valid.

 We (Acunu) led the design and implementation of this feature and spent a
 long time looking at the impact of such a large change.
 We summarized some of our notes and wrote about the impact of virtual
 nodes on cluster uptime a few months back:
 http://www.acunu.com/2/post/2012/10/improving-cassandras-uptime-with-virtual-nodes.html
 .
 The main argument in this blog post is that you only have a failure to
 perform quorum read/writes if at least RF replicas fail within the time it
 takes to rebuild the first dead node. We show that virtual nodes actually
 decrease the probability of failure, by streaming data from all nodes and
 thereby improving the rebuild time.

 Regards,

 Nicolas


 On Wed, Dec 5, 2012 at 4:45 PM, Eric Parusel ericparu...@gmail.com wrote:

 Hi all,

 I've been wondering about virtual nodes and how cluster uptime might
 change as cluster size increases.

 I understand clusters will benefit from increased reliability due to
 faster rebuild time, but does that hold true for large clusters?

 It seems that, since (and correct me if I'm wrong here) every physical
 node will likely share some small amount of data with every other node,
 the probability of at least one failed Quorum read/write occurring in a
 given time period would *increase* as the count of physical nodes in a
 Cassandra cluster grows (let's say into the triple digits).

 Would this hold true, at least until the number of physical nodes becomes
 greater than num_tokens per node?

 I understand that the window of failure for affected ranges would
 probably be small but we do Quorum reads of many keys, so we'd likely hit
 every virtual range with our queries, even if num_tokens was 256.

 Thanks,
 Eric





 --
 Tyler Hobbs
 DataStax http://datastax.com/




-- 
Richard Low
Acunu | http://www.acunu.com | @acunu


Re: Virtual Nodes, lots of physical nodes and potentially increasing outage count?

2012-12-10 Thread Edward Capriolo
Assume you need to work with quorum in a non-vnode scenario. That means
that if 2 adjacent nodes in the ring are down, some number of quorum
operations will fail with UnavailableException (TimeoutException right
after the failures). This is because, for a given range of tokens, quorum
will be impossible, while for other ranges it will still be possible.

In a vnode world, if any two nodes are down, then the vnode token ranges
they have in common become unavailable.

I think it is two sides of the same coin.
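
A quick Monte Carlo sketch of that trade-off (RF, node count and trial
count are arbitrary, and the replica walk is a simplified
SimpleStrategy-style placement, not Cassandra's real code):

    import random

    # for a random pair of down nodes, measure (a) how often quorum is lost
    # for at least one range and (b) the average fraction of ranges affected
    def simulate(n_nodes=48, tokens_per_node=1, rf=3, trials=200):
        any_outage, affected = 0, 0.0
        for _ in range(trials):
            ring = sorted((random.random(), node)
                          for node in range(n_nodes)
                          for _ in range(tokens_per_node))
            owners = [node for _, node in ring]
            down = set(random.sample(range(n_nodes), 2))
            broken = 0
            for i in range(len(owners)):
                replicas, j = set(), i
                while len(replicas) < rf:        # next rf distinct nodes
                    replicas.add(owners[j % len(owners)])
                    j += 1
                if len(replicas & down) >= 2:    # quorum (2 of 3) impossible
                    broken += 1
            any_outage += bool(broken)
            affected += broken / len(owners)
        return any_outage / trials, affected / trials

    for tokens in (1, 64):
        print(tokens, simulate(tokens_per_node=tokens))

With one token per node the outage is rare but takes out a whole range;
with many tokens per node almost every pair of failures breaks quorum
somewhere, but only for a sliver of the ring, and the expected unavailable
fraction comes out about the same.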


On Mon, Dec 10, 2012 at 7:41 AM, Richard Low r...@acunu.com wrote:

 Hi Tyler,

 You're right, the math does assume independence, which is unlikely to be
 accurate.  But if you do have correlated failure modes, e.g. shared power,
 racks, DCs, etc., then you can still use Cassandra's rack-aware or DC-aware
 features to ensure replicas are spread around so your cluster can survive
 the correlated failure mode.  So I would expect vnodes to improve uptime in
 all scenarios, but I haven't done the math to prove it.

 Richard.



Re: Virtual Nodes, lots of physical nodes and potentially increasing outage count?

2012-12-09 Thread Tyler Hobbs
Nicolas,

Strictly speaking, your math makes the assumption that the failures of
different nodes are probabilistically independent events. This is, of
course, not an accurate assumption for real-world conditions.  Nodes share
racks, networking equipment, power, availability zones, data centers, etc.
So, I think the mathematical assertion is not quite as strong as one would
like, but it's certainly a good argument for handling certain types of node
failures.


On Fri, Dec 7, 2012 at 11:27 AM, Nicolas Favre-Felix nico...@acunu.com wrote:

 Hi Eric,

 Your concerns are perfectly valid.

 We (Acunu) led the design and implementation of this feature and spent a
 long time looking at the impact of such a large change.
 We summarized some of our notes and wrote about the impact of virtual
 nodes on cluster uptime a few months back:
 http://www.acunu.com/2/post/2012/10/improving-cassandras-uptime-with-virtual-nodes.html
 .
 The main argument in this blog post is that you only have a failure to
 perform quorum read/writes if at least RF replicas fail within the time it
 takes to rebuild the first dead node. We show that virtual nodes actually
 decrease the probability of failure, by streaming data from all nodes and
 thereby improving the rebuild time.

 Regards,

 Nicolas


 On Wed, Dec 5, 2012 at 4:45 PM, Eric Parusel ericparu...@gmail.com wrote:

 Hi all,

 I've been wondering about virtual nodes and how cluster uptime might
 change as cluster size increases.

 I understand clusters will benefit from increased reliability due to
 faster rebuild time, but does that hold true for large clusters?

 It seems that, since (and correct me if I'm wrong here) every physical
 node will likely share some small amount of data with every other node,
 the probability of at least one failed Quorum read/write occurring in a
 given time period would *increase* as the count of physical nodes in a
 Cassandra cluster grows (let's say into the triple digits).

 Would this hold true, at least until the number of physical nodes becomes
 greater than num_tokens per node?

 I understand that the window of failure for affected ranges would
 probably be small but we do Quorum reads of many keys, so we'd likely hit
 every virtual range with our queries, even if num_tokens was 256.

 Thanks,
 Eric





-- 
Tyler Hobbs
DataStax http://datastax.com/


Re: Virtual Nodes, lots of physical nodes and potentially increasing outage count?

2012-12-07 Thread Edward Capriolo
Good point. Hadoop sprays its blocks around randomly, so if a replication
factor's worth of nodes are down, some blocks are not found. The larger the
cluster, the higher the chance that nodes are down.

To deal with this, increase RF once the cluster gets to be very large.


On Wednesday, December 5, 2012, Eric Parusel ericparu...@gmail.com wrote:
 Hi all,
 I've been wondering about virtual nodes and how cluster uptime might
change as cluster size increases.
 I understand clusters will benefit from increased reliability due to
faster rebuild time, but does that hold true for large clusters?
 It seems that, since (and correct me if I'm wrong here) every physical
node will likely share some small amount of data with every other node,
the probability of at least one failed Quorum read/write occurring in a
given time period would *increase* as the count of physical nodes in a
Cassandra cluster grows (let's say into the triple digits).
 Would this hold true, at least until the number of physical nodes becomes
greater than num_tokens per node?

 I understand that the window of failure for affected ranges would
probably be small but we do Quorum reads of many keys, so we'd likely hit
every virtual range with our queries, even if num_tokens was 256.
 Thanks,
 Eric


Re: Virtual Nodes, lots of physical nodes and potentially increasing outage count?

2012-12-07 Thread Nicolas Favre-Felix
Hi Eric,

Your concerns are perfectly valid.

We (Acunu) led the design and implementation of this feature and spent a
long time looking at the impact of such a large change.
We summarized some of our notes and wrote about the impact of virtual nodes
on cluster uptime a few months back:
http://www.acunu.com/2/post/2012/10/improving-cassandras-uptime-with-virtual-nodes.html
.
The main argument in this blog post is that you only have a failure to
perform quorum read/writes if at least RF replicas fail within the time it
takes to rebuild the first dead node. We show that virtual nodes actually
decrease the probability of failure, by streaming data from all nodes and
thereby improving the rebuild time.
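
As a rough illustration of that argument (the failure rate, data size and
streaming rate below are assumptions for the sake of the example, not
numbers from the post):

    import math

    # toy model: after one replica fails, an outage needs 2 more of the
    # remaining nodes to fail before re-replication finishes
    def p_outage_during_rebuild(n_nodes, rebuild_hours,
                                failures_per_node_hour=1e-4, needed=2):
        p = 1 - math.exp(-failures_per_node_hour * rebuild_hours)
        m = n_nodes - 1
        # P(at least `needed` of the m remaining nodes fail in the window)
        return sum(math.comb(m, k) * p**k * (1 - p)**(m - k)
                   for k in range(needed, m + 1))

    data_gb, stream_gb_per_hour = 500, 180
    for n in (12, 48, 144):
        t_single = data_gb / stream_gb_per_hour              # one node rebuilds
        t_vnodes = data_gb / (stream_gb_per_hour * (n - 1))  # all nodes stream
        print(n, p_outage_during_rebuild(n, t_single),
                 p_outage_during_rebuild(n, t_vnodes))

In this toy model the vnode column stays roughly flat as the cluster grows,
while the fixed-window column grows quickly, which is the gist of the
argument (the analysis in the post is more careful).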

Regards,

Nicolas


On Wed, Dec 5, 2012 at 4:45 PM, Eric Parusel ericparu...@gmail.com wrote:

 Hi all,

 I've been wondering about virtual nodes and how cluster uptime might
 change as cluster size increases.

 I understand clusters will benefit from increased reliability due to
 faster rebuild time, but does that hold true for large clusters?

 It seems that, since (and correct me if I'm wrong here) every physical node
 will likely share some small amount of data with every other node, the
 probability of at least one failed Quorum read/write occurring in a given
 time period would *increase* as the count of physical nodes in a Cassandra
 cluster grows (let's say into the triple digits).

 Would this hold true, at least until the number of physical nodes becomes
 greater than num_tokens per node?

 I understand that the window of failure for affected ranges would probably
 be small but we do Quorum reads of many keys, so we'd likely hit every
 virtual range with our queries, even if num_tokens was 256.

 Thanks,
 Eric