I don't know about "new snitch protocol" (there is no snitch protocol), but
in the case of Ec2Snitch and GossipingPropertyFileSnitch (do not extrapolate
this to any other combination), you can switch them 1:1 before the move, and
everything should be fine, as long as care is taken to make
cassandra-rackdc.properties match the AWS region/AZ that Ec2Snitch discovered.
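
For example, a node in us-east-1a is reported by Ec2Snitch as dc "us-east",
rack "1a" (IIRC only the ...-1 regions drop the trailing digit, so
double-check what your nodes actually report), so the
cassandra-rackdc.properties you ship with GossipingPropertyFileSnitch on that
node would need to read, roughly:

    dc=us-east
    rack=1a

The datacenter/rack shown by nodetool status (or in system.local) is the
safest thing to copy, since that is exactly what Ec2Snitch discovered.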

If there were other snitches involved (especially SimpleSnitch), things would
get much more complicated.




On Thu, Jun 7, 2018 at 9:18 AM, Nitan Kainth <nitankai...@gmail.com> wrote:

> Jeff,
>
> In this case, if Riccardo is adding a new DC, then he can pick up the new
> snitch protocol, right?
>
> On Thu, Jun 7, 2018 at 12:15 PM, Jeff Jirsa <jji...@gmail.com> wrote:
>
>>
>>
>>
>> On Thu, Jun 7, 2018 at 9:12 AM, Nitan Kainth <nitankai...@gmail.com>
>> wrote:
>>
>>> Riccardo,
>>>
>>> The simplest method is to add the VPC as an additional datacenter to the
>>> existing cluster; once the new DC has synced up all data, just switch over
>>> your application. The only caveat is that there should be enough network
>>> bandwidth between EC2-Classic and the VPC.
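
For reference, that add-a-DC path maps to roughly the following (a sketch;
the keyspace and datacenter names are placeholders, and the keyspaces would
first have to move from SimpleStrategy to NetworkTopologyStrategy, since
SimpleStrategy has no notion of datacenters):

    ALTER KEYSPACE <your_keyspace> WITH replication =
        {'class': 'NetworkTopologyStrategy', '<existing_dc>': '3', '<vpc_dc>': '3'};

    -- then, on each node in the new VPC datacenter:
    nodetool rebuild -- <existing_dc>

Don't forget system_auth (if you use auth) and switching clients to LOCAL_*
consistency levels before cutting over.
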
>>>
>>> Horizontal scaling will help to some extent.
>>>
>>> Changing the snitch will cause data movement, but for a new DC I believe
>>> you can choose. I'll leave it to others to comment.
>>>
>>
>> If you change the snitch, and the data ownership changes, you run the
>> risk of violating at least consistency guarantees, if not outright losing
>> data.
>>
>> Changing the snitch is only something you should attempt if you
>> understand the implications. Cassandra does NOT re-stream data to the right
>> places if/when you change a snitch.
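
(A cheap way to see whether a snitch change would move replicas is to compare
the output of nodetool describering <keyspace> on a test cluster before and
after the change; if the endpoint lists differ, the new owners will not
receive that data on their own.)
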
>>
>>
>>
>>>
>>> With EBS, use the right instance type where IOPS are as high as possible.
>>> Generally, SSD IOPS are around 75K; what is best for you depends on the
>>> workload. I have seen 20K IOPS supporting production loads in a reasonably
>>> sized cluster.
>>>
>>
>> This is very much workload dependent. I know of people running thousands
>> of nodes / petabytes of EBS on 4T GP2 volumes which provide 10k iops, and
>> they RARELY go above 1k iops during normal use patterns.
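
(For the curious: gp2 provisions roughly 3 IOPS per GiB up to a per-volume
cap, which is where that 10k figure comes from, e.g.:

    4096 GiB * 3 IOPS/GiB = ~12,288 IOPS, capped at 10,000 IOPS per gp2 volume

Those limits change over time, so check the current EBS documentation before
sizing anything.)
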
>>
>> -  Jeff
>>
>>
>>
>>>
>>> On Thu, Jun 7, 2018 at 6:20 AM, Riccardo Ferrari <ferra...@gmail.com>
>>> wrote:
>>>
>>>> Dear list,
>>>>
>>>> We have a small cluster on AWS EC2-Classic and we are planning to move
>>>> it to a VPC.
>>>>
>>>> I know this has been discussed a few times already, including here:
>>>> https://mail-archives.apache.org/mod_mbox/cassandra-user/201406.mbox/%3CCA+VSrLopop7Th8nX20aOZ3As75g2jrJm3ryX119dekLYNHqf...@mail.gmail.com%3E
>>>> - Thanks Alain RODRIGUEZ!
>>>> or here:
>>>> http://grokbase.com/t/cassandra/user/1498y74hn7/moving-cassandra-from-ec2-classic-into-vpc
>>>> - Thanks Ben Bromhead!
>>>>
>>>> However, I can't find anyone suggesting ClassicLink to perform such a
>>>> migration, maybe because it was a recent addition compared to those posts.
>>>> So the current status is:
>>>>
>>>>
>>>>    - 3 nodes : m1.xlarge (ephemeral storage)
>>>>    - Cassandra 3.0.8
>>>>    - A few keyspaces with {'class': 'SimpleStrategy',
>>>>    'replication_factor': '3'}
>>>>    - endpoint_snitch: Ec2Snitch
>>>>    - connectivity on private IPs (listen_address, rpc_address, no
>>>>    broadcast_address)
>>>>    - average load pretty high (we need to scale up)
>>>>    - vnodes
>>>>
>>>> My idea is to migrate:
>>>>
>>>>    - Add instances to the existing cluster (1 by 1):
>>>>       - Same cassandra version
>>>>       - Same cluster name
>>>>       - Seed list from the EC2-Classic nodes
>>>>    - Run repair and cleanup
>>>>    - Update the seed list (cassandra config and services)
>>>>    - Decommission old instances (1 by 1)
>>>>    - Run repair and cleanup again
>>>>
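
For the repair/cleanup/decommission steps above, the per-node commands are
roughly the following (one node at a time, letting each operation finish
before starting the next):

    # on the pre-existing nodes, after each new node has finished joining:
    nodetool cleanup

    # on each node, once all new nodes are in and the seed list is updated:
    nodetool repair -pr

    # on each old EC2-Classic node being retired:
    nodetool decommission
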
>>>>
>>>> Question time:
>>>>
>>>>    - What is the best approach, in 2018, for migrating a cluster from
>>>>    EC2-Classic to VPC with zero downtime?
>>>>    - Should we scale up before or during the migration? I know I should
>>>>    expect some load from data streaming, but at the same time we're adding
>>>>    capacity.
>>>>    - We would like to migrate to GossipingPropertyFileSnitch; can we
>>>>    mix them?
>>>>    - Any gotchas?
>>>>    - We are thinking of moving to EBS (due to cluster size and
>>>>    snapshotting capabilities). Any hardware recommendations?
>>>>
>>>> Any suggestion is much appreciated!
>>>>
>>>>
>>>>
>>>>
>>>
>>
>
