Thrift is still present in the 2.0 branch as well as 2.1.  Where did
you see that it's deprecated?

Let me elaborate on my earlier advice.  Shuffle was removed because it
doesn't work for anything beyond a trivial dataset.  It is definitely
"more risky" than adding a new vnode-enabled DC, in that it does not
work at all.
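
For reference, the new-DC route is roughly the following; the keyspace
and DC names below are placeholders for whatever your topology actually
uses:

    # cassandra.yaml on each node in the new, vnode-enabled DC
    # (leave initial_token unset; don't bootstrap into the old DC's ranges)
    num_tokens: 256
    auto_bootstrap: false

    -- in cqlsh: add the new DC to every keyspace's replication
    ALTER KEYSPACE my_keyspace WITH replication =
      {'class': 'NetworkTopologyStrategy', 'existing_dc': 3, 'new_vnode_dc': 3};

    # on each new node, stream its data over from the existing DC
    nodetool rebuild existing_dc

Once the new DC has rebuilt and clients have been pointed at it, the old
DC can be dropped from replication and decommissioned.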

On Mon, Sep 8, 2014 at 2:01 PM, Tim Heckman <t...@pagerduty.com> wrote:
> On Mon, Sep 8, 2014 at 1:45 PM, Jonathan Haddad <j...@jonhaddad.com> wrote:
>> I believe shuffle has been removed recently.  I do not recommend using
>> it for any reason.
>
> We're still using the 1.2.x branch of Cassandra, and will be for some
> time due to the Thrift deprecation. Has shuffle only been removed from
> the 2.x line?
>
>> If you really want to go vnodes, your only sane option is to add a new
>> DC that uses vnodes and switch to it.
>
> We use the NetworkTopologyStrategy across three geographically
> separated regions. Doing it this way feels a bit more risky given our
> replication strategy. Also, I'm not sure where our current datacenter
> names are defined across our different internal repositories, so there
> could be quite a large number of changes going this route.
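>
> (One way to at least enumerate which DC names our keyspaces currently
> replicate to, on 1.2, is the schema table, e.g.:
>
>     -- in cqlsh; strategy_options lists the DC names per keyspace
>     SELECT keyspace_name, strategy_options FROM system.schema_keyspaces;
>
> plus the DC names that nodetool status reports for the live nodes.)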
>
>> The downside to using vnodes in the 2.0.x branch is that repairs take
>> N times as long, where N is the number of tokens you put on each node.
>> I can't think of any other reason why you wouldn't want to use vnodes
>> (but this may be significant enough for you by itself).
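>>
>> (Concretely: with vnodes on, the stock setting is
>>
>>     # cassandra.yaml default once vnodes are enabled
>>     num_tokens: 256
>>
>> which gives each node roughly 256x as many, much smaller, ranges, and
>> pre-2.1 repair pays a separate Merkle-tree validation per range, which
>> is where that factor of N comes from.)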
>>
>> 2.1 should address the repair issue for most use cases.
>>
>> Jon
>
> Thank you for the notes on the behavior in the 2.x branch. If we do
> move to the 2.x version, that's something we'll be keeping in mind.
>
> Cheers!
> -Tim
>
>> On Mon, Sep 8, 2014 at 1:28 PM, Robert Coli <rc...@eventbrite.com> wrote:
>>> On Mon, Sep 8, 2014 at 1:21 PM, Tim Heckman <t...@pagerduty.com> wrote:
>>>>
>>>> We're still at the exploratory stage on systems that are not
>>>> production-facing but contain production-like data. Based on our
>>>> placement strategy we have some concerns that the new datacenter
>>>> approach may be riskier or more difficult. We're just trying to gauge
>>>> both paths and see what works best for us.
>>>
>>>
>>> Your case of RF=N is probably the best possible case for shuffle, but the
>>> general caveats about how little this code has been exercised still apply. :)
>>>
>>>>
>>>> The cluster I'm testing this on is a 5-node cluster with a placement
>>>> strategy such that all nodes contain 100% of the data. In practice we
>>>> have six clusters of similar size that are used for different
>>>> services. These different clusters may need additional capacity at
>>>> different times, so it's hard to answer the maximum size question. For
>>>> now let's just assume that the clusters may never see an 11th
>>>> member... but no guarantees.
>>>
>>>
>>> With an RF of 3, clusters of fewer than roughly 10 nodes tend to come out
>>> a net loser from vnodes. If these clusters are not very likely to ever have
>>> more than 10 nodes, consider not using vnodes.
>>>
>>>>
>>>> We're looking to use vnodes to help ease the administrative work of
>>>> scaling out the cluster, as well as for the improvements to streaming
>>>> data during repairs, among other things.
>>>
>>>
>>> Most of these wins don't occur until you have a lot of nodes, but the fixed
>>> costs of having many ranges are paid all the time.
>>>
>>>>
>>>> For shuffle, it looks like it may be easier than adding a new
>>>> datacenter and then having to adjust the schema for a new "datacenter"
>>>> to come to life. And we weren't sure whether the same pitfalls of
>>>> shuffle would affect us given that all data is on all nodes.
>>>
>>>
>>> Let us know! Good luck!
>>>
>>> =Rob
>>>
>>
>>
>>
>> --
>> Jon Haddad
>> http://www.rustyrazorblade.com
>> twitter: rustyrazorblade



-- 
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade
