I believe shuffle has been removed recently.  I do not recommend using
it for any reason.

If you really want to go to vnodes, your only sane option is to add a new
DC that uses vnodes and switch over to it.
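
Roughly, that migration looks like the sketch below. This is only an outline
under a few assumptions on my part: GossipingPropertyFileSnitch with the DC
names set in cassandra-rackdc.properties, a keyspace that already uses
NetworkTopologyStrategy, and placeholder names (DC1, DC2, my_keyspace) plus
replication counts you would adjust to your own RF:

    # cassandra.yaml on every node in the new, vnode-enabled DC (assumed values)
    num_tokens: 256          # enables vnodes
    auto_bootstrap: false    # the new DC starts empty; data arrives via rebuild

    # once the new nodes are up, add the new DC to replication (run in cqlsh)
    ALTER KEYSPACE my_keyspace WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};

    # on each node in the new DC, stream the existing data from the old DC
    nodetool rebuild DC1

    # finally: point clients at DC2, then decommission the old DC node by node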

The downside to using vnodes on the 2.0.x branch is that repairs take
roughly N times as long, where N is the number of tokens you put on each
node. I can't think of any other reason why you wouldn't want to use
vnodes, but this one may be significant enough for you by itself.
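
To put a rough number on that (my own illustration, using the usual
num_tokens: 256 default): a sequential repair then has to walk 256 small
ranges per node instead of one, and each range carries its own validation
compaction and session overhead, so on a small cluster that per-range fixed
cost, rather than data volume, tends to dominate total repair time.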

2.1 should address the repair issue for most use cases.

Jon


On Mon, Sep 8, 2014 at 1:28 PM, Robert Coli <rc...@eventbrite.com> wrote:
> On Mon, Sep 8, 2014 at 1:21 PM, Tim Heckman <t...@pagerduty.com> wrote:
>>
>> We're still at the exploratory stage on systems that are not
>> production-facing but contain production-like data. Based on our
>> placement strategy we have some concerns that the new datacenter
>> approach may be riskier or more difficult. We're just trying to gauge
>> both paths and see what works best for us.
>
>
> Your case of RF=N is probably the best possible case for shuffle, but the
> general caveats about how much this code has been exercised still apply. :)
>
>>
>> The cluster I'm testing this on is a 5 node cluster with a placement
>> strategy such that all nodes contain 100% of the data. In practice we
>> have six clusters of similar size that are used for different
>> services. These different clusters may need additional capacity at
>> different times, so it's hard to answer the maximum size question. For
>> now let's just assume that the clusters may never see an 11th
>> member... but no guarantees.
>
>
> With an RF of 3, clusters of fewer than roughly 10 nodes tend to come out
> as a net loss with vnodes. If these clusters are not very likely to ever
> have more than 10 nodes, consider not using vnodes.
>
>>
>> We're looking to use vnodes to ease the administrative work of scaling
>> out the cluster, and for the improvements to streaming data during
>> repairs, among others.
>
>
> Most of these wins don't occur until you have a lot of nodes, but the fixed
> costs of having many ranges are paid all the time.
>
>>
>> For shuffle, it looks like it may be easier than adding a new
>> datacenter and then having to adjust the schema for a new "datacenter"
>> to come to life. And we weren't sure whether the same pitfalls of
>> shuffle would affect us, given that all data is on all nodes.
>
>
> Let us know! Good luck!
>
> =Rob
>



-- 
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade
