We do use LOCAL_ONE and LOCAL_QUORUM currently. But these 8 nodes need to be in 2 different DCs, so we would end up creating 2 additional new DCs and dropping 2.
Are there any advantages to adding a DC over replacing one node at a time?

________________________________
From: Jeff Jirsa <[email protected]>
Sent: Wednesday, February 21, 2018 1:02 AM
To: [email protected]
Subject: Re: Best approach to Replace existing 8 smaller nodes in production cluster with New 8 nodes that are bigger in capacity, without a downtime

You add the nodes with RF=0 so there’s no streaming, then bump it to RF=1 and run repair, then RF=2 and run repair, then RF=3 and run repair. Then you either change the app to use LOCAL_QUORUM in the new DC, or reverse the process by decreasing the RF in the original DC by 1 at a time.

--
Jeff Jirsa

> On Feb 20, 2018, at 8:51 PM, Kyrylo Lebediev <[email protected]> wrote:
>
> I'd say the "add new DC, then remove old DC" approach is more risky, especially
> if they use QUORUM CL (in this case they will need to change CL to
> LOCAL_QUORUM, otherwise they'll run into a lot of blocking read repairs).
> Also, if there is a chance to get rid of streaming, it's worth doing, as
> direct data copy (not by means of C*) is usually more effective and less troublesome.
>
> Regards,
> Kyrill
>
> ________________________________________
> From: Nitan Kainth <[email protected]>
> Sent: Wednesday, February 21, 2018 1:04:05 AM
> To: [email protected]
> Subject: Re: Best approach to Replace existing 8 smaller nodes in production
> cluster with New 8 nodes that are bigger in capacity, without a downtime
>
> You can also create a new DC and then terminate the old one.
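[Editor's note] Jeff's RF-ramp procedure can be sketched roughly as below. This is a minimal sketch, not from the thread: the keyspace name `my_ks`, the DC names `DC1`/`DC2`/`DC_new`, and the use of NetworkTopologyStrategy are all assumptions, and the `-dc`/`--in-dc` repair option varies by Cassandra version.

```shell
# Assumed: NetworkTopologyStrategy, existing DCs DC1/DC2 at RF=3, and a new
# DC "DC_new" whose 8 bigger nodes have already joined the cluster.

# 1. Keep the new DC out of replication (RF=0: simply omit it), so joining
#    nodes trigger no streaming for this keyspace.
cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {
  'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};"

# 2. Bump the new DC's RF one step at a time, repairing after each bump so
#    the new replicas get their data.
for rf in 1 2 3; do
  cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {
    'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3, 'DC_new': ${rf}};"
  # Restrict repair to the new DC if your version supports it; otherwise
  # run a full repair of my_ks on each node in DC_new.
  nodetool repair -dc DC_new my_ks
done

# 3. Either point the app at the new DC with LOCAL_QUORUM, or reverse the
#    loop above to step the old DCs' RF down 1 at a time before dropping them.
```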
>
> Sent from my iPhone
>
>> On Feb 20, 2018, at 2:49 PM, Kyrylo Lebediev <[email protected]>
>> wrote:
>>
>> Hi,
>> Consider using this approach, replacing nodes one by one:
>> https://mrcalonso.com/2016/01/26/cassandra-instantaneous-in-place-node-replacement/
>>
>> Regards,
>> Kyrill
>>
>> ________________________________________
>> From: Leena Ghatpande <[email protected]>
>> Sent: Tuesday, February 20, 2018 10:24:24 PM
>> To: [email protected]
>> Subject: Best approach to Replace existing 8 smaller nodes in production
>> cluster with New 8 nodes that are bigger in capacity, without a downtime
>>
>> Best approach to replace the existing 8 smaller nodes in a production cluster
>> with 8 new nodes that are bigger in capacity, without a downtime.
>>
>> We have 4 nodes each in 2 DCs, and we want to replace these 8 nodes with 8 new
>> nodes that are bigger in capacity in terms of RAM, CPU and disk space,
>> without a downtime.
>> The RF is set to 3 currently, and we have 2 large tables with up to 70 million
>> rows.
>>
>> What would be the best approach to implement this?
>> - Add 1 new node and decommission 1 old node at a time?
>> - Add all new nodes to the cluster, and then decommission old nodes?
>>   If we do this, can we still keep RF=3 while we have 16 nodes at a
>>   point in the cluster before we start decommissioning?
>> - How long do we wait in between adding a node or decommissioning to ensure
>>   the process is complete before we proceed?
>> - Any tool that we can use to monitor if the add/decommission of a node is
>>   done before we proceed to the next?
>>
>> Any other suggestions?
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: [email protected]
>> For additional commands, e-mail: [email protected]
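[Editor's note] The one-by-one, no-streaming replacement that Kyrylo's link describes can be sketched as below. This is a hedged outline, not the blog's exact procedure: hostnames, the data path, and service names are assumptions, and it presumes the new node reuses the old node's tokens (and ideally its IP) with `auto_bootstrap: false`.

```shell
# Per old/new node pair, one node at a time:

# 1. On the old node: flush memtables, stop traffic, then stop the service.
nodetool drain
sudo systemctl stop cassandra

# 2. Copy the data directory straight to the new node (direct copy, not
#    Cassandra streaming).
rsync -aH --delete /var/lib/cassandra/data/ newnode:/var/lib/cassandra/data/

# 3. On the new node: configure cassandra.yaml with the old node's tokens
#    (initial_token) and auto_bootstrap: false, then start it.
sudo systemctl start cassandra

# 4. Repair the primary ranges to pick up writes that arrived while the
#    node was down.
nodetool repair -pr
```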
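[Editor's note] On the monitoring question in the list above: the standard `nodetool` commands below are the usual way to confirm a join or decommission has finished before moving to the next node (exact output wording varies by version).

```shell
# Ring view: a joining node shows UJ until it reaches UN (Up/Normal);
# a decommissioning node shows UL (leaving) until it drops out of the ring.
nodetool status

# Active streaming sessions: idle output (e.g. "Not sending any streams")
# means the bootstrap/decommission streaming has completed.
nodetool netstats

# Pending compactions should drain back toward zero after a join finishes.
nodetool compactionstats
```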
