Wanted to just circle back on all the above. Thanks, everyone, for your help and input.
I'm glad to hear someone else did a site-to-site tunnel with Cassandra
between regions. When I was originally setting up, all the docs and information
preached public IPs. I can totally understand
Nice that you solved the issue. I had some thoughts while reading:
> My original thought was a new DC parallel to the current, and then
> decommission the other DC.
I also think this is the best way to go when possible. It can be reverted
at any time in the process and respects distributions,
Ohh, I see now. That makes sense. Thanks a lot.
On Fri, Jun 29, 2018 at 9:17 PM, Randy Lynn wrote:
> Data is only lost if you stop the node. Between restarts the storage is
> On Fri, Jun 29, 2018 at 10:39 AM, Pradeep Chhetri
>> Isn't NVMe storage an instance storage, i.e. the
Data is only lost if you stop the node. Between restarts the storage is
On Fri, Jun 29, 2018 at 10:39 AM, Pradeep Chhetri
> Isn't NVMe storage an instance storage, i.e. the data will be lost in case
> the instance restarts? How are you going to make sure that there is no data
Isn't NVMe storage an instance storage, i.e. the data will be lost in case the
instance restarts? How are you going to make sure that there is no data
loss in case the instance gets rebooted?
On Fri, 29 Jun 2018 at 7:00 PM, Randy Lynn wrote:
> GPFS - Rahul FTW! Thank you for your help!
GPFS - Rahul FTW! Thank you for your help!
Yes, Pradeep - migrating to i3 from r3, moving for the NVMe storage. I did not
have the benefit of doing benchmarks, but we're moving from 1,500 IOPS, so
I intrinsically know we'll get better throughput.
On Fri, Jun 29, 2018 at 7:21 AM, Rahul Singh
Totally agree. GPFS for the win. The EC2 multi-region snitch just automates
what an automation tool like Ansible or Puppet can set for you. Unless you have
two orders of magnitude more servers than you do now, you don’t need it.
On Jun 29, 2018, 6:18 AM -0400, kurt greaves , wrote:
> Yes. You would just end up with a rack
Yes. You would just end up with a rack named differently to the AZ. This is
not a problem as racks are just logical. I would recommend migrating all
your DCs to GPFS though for consistency.
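For anyone following along, GPFS here is GossipingPropertyFileSnitch, where each node declares its own DC and rack in cassandra-rackdc.properties. A minimal sketch (the DC and rack names below are illustrative, not from this thread):

```properties
# cassandra-rackdc.properties on each node -- values are examples;
# many teams mirror the AWS region/AZ names
dc=us-east
rack=1e
# Optional: prefer private IPs for nodes in the same DC
# prefer_local=true
```

Since racks are just logical, as noted above, the rack name does not have to match the AZ; it only needs to stay consistent for a given node.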
On Fri., 29 Jun. 2018, 09:04 Randy Lynn, wrote:
> So we have two data centers already running..
Just curious -
From which instance type are you migrating to i3, and what are the
reasons to move to i3?
Are you going to take advantage of the NVMe instance storage - if yes, how?
We are also migrating our cluster on AWS - we are currently using
r4 instances, so I was
So we have two data centers already running..
AP-SYDNEY, and US-EAST.. I'm using Ec2Snitch over a site-to-site tunnel..
I'm wanting to move the current US-EAST from AZ 1a to 1e..
I know all docs say use ec2multiregion for multi-DC.
I like the GPFS idea. would that work with the multi-DC too?
There is a need for a repair with both DCs, as rebuild will not stream all
replicas; unless you can guarantee you were perfectly consistent at the time
of the rebuild, you'll want to do a repair after rebuild.
On another note you could just replace the nodes but use GPFS instead of
EC2 snitch, using the
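A command-level sketch of the rebuild-then-repair sequence described above (the old DC name us-east-old is hypothetical):

```
# Run on each node in the new DC: stream data from the old DC.
# rebuild streams only one replica per range, hence the repair after.
nodetool rebuild -- us-east-old

# Then repair, to fix any ranges that were inconsistent at the time
# of the rebuild (-pr repairs only each node's primary ranges)
nodetool repair -pr
```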
Parallel load is the best approach, and then switch your data access code to
only access the new hardware. After you verify that there are no local reads /
writes on the OLD DC and that updates arrive only via replication, then go ahead
and change the replication factor on the keyspace to have zero
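The last step mentioned above, taking the old DC to zero replicas, can be sketched in CQL (the keyspace and DC names are made up for illustration):

```sql
-- Once nothing reads or writes locally in the old DC, drop it from
-- the keyspace's replication settings; omitting a DC is equivalent
-- to giving it zero replicas.
ALTER KEYSPACE my_keyspace WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'us_east_new': 3
};
```

After that, the old DC's nodes can be decommissioned one at a time.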
Already running with Ec2.
My original thought was a new DC parallel to the current, and then
decommission the other DC.
Also my data load is small right now.. I know small is a relative term.. each
node is carrying about 6GB..
So given the data size, would you go with parallel DC or let the new
You don’t have to use the EC2 snitch on AWS, but if you have already started
with it, switching may put a node in a different DC.
If your data density won’t be ridiculous, you could add 3 nodes to a different
DC/region and then sync up. After the new DC is operational you can remove
nodes one at a time from the old DC
The single node in 1e will be a replica for every range (and you won’t be able
to tolerate an outage in 1c), potentially putting it under significant load
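The effect described above can be illustrated with a toy model of rack-aware replica placement. This is a simplification of NetworkTopologyStrategy, not the real Cassandra code, and the node names, tokens, and racks are invented:

```python
# Toy model of rack-aware replica placement (a simplification of
# Cassandra's NetworkTopologyStrategy, not the real implementation).
# Hypothetical ring: five nodes in rack "1a", one lone node in "1e".
from bisect import bisect_right

RING = sorted([
    (100, "n1", "1a"),
    (200, "n2", "1a"),
    (300, "n3", "1a"),
    (400, "n4", "1a"),
    (500, "n5", "1a"),
    (600, "n6", "1e"),
])
TOKENS = [t for t, _, _ in RING]

def replicas(token, rf=3):
    """Walk the ring clockwise from `token`, preferring nodes in racks
    not yet represented; fill any leftover slots from skipped nodes."""
    start = bisect_right(TOKENS, token) % len(RING)
    chosen, skipped, seen_racks = [], [], set()
    for i in range(len(RING)):
        if len(chosen) == rf:
            break
        _, node, rack = RING[(start + i) % len(RING)]
        if rack not in seen_racks:
            chosen.append(node)
            seen_racks.add(rack)
        else:
            skipped.append(node)
    chosen += skipped[: rf - len(chosen)]
    return chosen

# The lone 1e node appears in every replica set:
print(all("n6" in replicas(t) for t in range(0, 700, 50)))  # True
```

Because the placement prefers racks it has not seen yet, the single 1e node lands in every replica set, which is why it would take a replica of every range and the corresponding load.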
> On Jun 28, 2018, at 7:02 AM, Randy Lynn wrote:
> I have a 6-node cluster I'm migrating to the new i3 types.
> But at