Re: Changing existing Cassandra cluster from single rack configuration to multi racks configuration

2019-03-12 Thread Laxmikant Upadhyay
Hi Justin,

If you have a 6-node cluster with RF = 3 and 2 nodes in each rack, the nodes
in rac1 will be primary owners of different token ranges and their replicas
will live in rac2 and rac3.
If one of the nodes in rac1 goes down, its replicas in rac2 and rac3 will
serve the requests. However, note that hints for the down node will be stored
on the coordinator. If you are using a token-aware policy on the client side,
the replica in rac2/rac3 will be the coordinator and will store the hints for
the down node in rac1.
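
For illustration, this is roughly how that token-aware routing is enabled
with the DataStax Java driver 3.x (the contact point and DC name are
placeholders, not from this thread -- a minimal sketch, not a full client):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class TokenAwareClient {
    public static void main(String[] args) {
        // TokenAwarePolicy sends each request to a replica of the partition
        // key, so that replica acts as coordinator and stores the hint when
        // a peer replica is down.
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")          // placeholder
                .withLoadBalancingPolicy(new TokenAwarePolicy(
                        DCAwareRoundRobinPolicy.builder()
                                .withLocalDc("dc1")   // placeholder
                                .build()))
                .build()) {
            cluster.connect();
        }
    }
}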

regards,
Laxmikant


Re: Changing existing Cassandra cluster from single rack configuration to multi racks configuration

2019-03-12 Thread Justin Sanciangco
Maybe this was an issue specific to my topology in the past, where I had 9
nodes with a 3-rack implementation. Each rack contained a unique replica set,
so when a node went down it put very high load on the nodes in the same rack.
How does the data get distributed in this case, where there are only 2 nodes
in each of the 3 racks?

- Justin Sanciangco



Re: Changing existing Cassandra cluster from single rack configuration to multi racks configuration

2019-03-12 Thread Alexander Dejanovski
Hi Justin,

I'm not sure I follow your reasoning. In a 6-node cluster with 3 racks (2
nodes per rack) and RF 3, if a node goes down you'll still have one node in
each of the other racks to serve the requests. Nodes within the same rack
aren't replicas for the same tokens (as long as the number of racks is
greater than or equal to the RF).
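
If you want to double-check that on a live cluster, nodetool getendpoints
prints the replicas for a given partition key and nodetool status shows the
rack of each node (keyspace, table and key below are placeholders):

nodetool getendpoints my_keyspace my_table some_key
nodetool status my_keyspace

With 3 racks and RF 3, the three endpoints returned should sit in three
different racks.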

Regarding the other question about the decommission/rebootstrap procedure,
imbalances are indeed to be expected, and I'd favor the DC switch technique,
but it may not be an option.

Cheers,




Re: Changing existing Cassandra cluster from single rack configuration to multi racks configuration

2019-03-12 Thread Jeff Jirsa
On Tue, Mar 12, 2019 at 5:28 PM Justin Sanciangco wrote:

> I would recommend that you do not go into a 3-rack single-DC
> implementation with only 6 nodes. If a node goes down in this situation,
> the node that is paired with the downed node will have to service all of
> the load instead of it being evenly distributed throughout the cluster.
> While it's conceptually nice to have a 3-rack implementation, it does have
> some negative implications when not at a proper node count.

This isn't true unless you query at ALL. If you query at (local_)quorum,
the snitch will choose the fastest replicas, which probably means favoring
the other two racks.
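
For illustration, pinning a read to LOCAL_QUORUM with the Java driver 3.x
looks roughly like this (contact point, keyspace and query are placeholders,
a sketch only); such a read needs just 2 of the 3 replicas, which the two
healthy racks can provide:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class LocalQuorumRead {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")  // placeholder
                .build();
             Session session = cluster.connect()) {
            // Any two replicas (spread across racks) can satisfy this read.
            Statement stmt = new SimpleStatement(
                    "SELECT * FROM my_keyspace.my_table WHERE id = 1")
                    .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
            session.execute(stmt);
        }
    }
}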


Re: Changing existing Cassandra cluster from single rack configuration to multi racks configuration

2019-03-12 Thread Justin Sanciangco
I would recommend that you do not go into a 3-rack single-DC implementation
with only 6 nodes. If a node goes down in this situation, the node that is
paired with the downed node will have to service all of the load instead of
it being evenly distributed throughout the cluster. While it's conceptually
nice to have a 3-rack implementation, it does have some negative implications
when not at a proper node count.

What features are you trying to make use of by going multi-rack?

- Justin Sanciangco





Re: Changing existing Cassandra cluster from single rack configuration to multi racks configuration

2019-03-11 Thread Laxmikant Upadhyay
Hi Alex,

Regarding your point below, the admin needs to take care of a temporary
uneven distribution of data until the entire process is done:

"If you can't, then I guess you can for each node (one at a time),
decommission it, wipe it clean and re-bootstrap it after setting the
appropriate rack."

I believe that while doing so in the existing single-rack cluster, the first
new node that joins with a different rack (rac2) will become a replica for
100% of the token ranges (a full copy of the data), so its disk usage will be
proportionally very high in comparison to the other nodes in rac1.
So until the racks hold an equal number of nodes and we run nodetool cleanup,
the data will not be equally distributed.
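
For example, once every rack holds an equal number of nodes, running this on
each of the original rac1 nodes (keyspace name is a placeholder) drops the
ranges they no longer own, and nodetool status lets you watch the load even
out:

nodetool cleanup my_keyspace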




-- 

regards,
Laxmikant Upadhyay


Re: Changing existing Cassandra cluster from single rack configuration to multi racks configuration

2019-03-06 Thread Alexander Dejanovski
Hi Manish,

the best way, if you have the opportunity to easily add new
hardware/instances, is to create a new DC with racks and switch traffic to
the new DC when it's ready (then remove the old one). My co-worker Alain
just wrote a very handy blog post on that technique:
http://thelastpickle.com/blog/2019/02/26/data-center-switch.html
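
For reference, the core of that procedure (keyspace and DC names below are
placeholders; the post has the full checklist) is extending the replication
to the new DC and streaming the existing data to it:

ALTER KEYSPACE my_keyspace WITH replication =
  {'class': 'NetworkTopologyStrategy', 'old_dc': 3, 'new_dc': 3};

followed by running "nodetool rebuild -- old_dc" on each node of the new DC,
then switching clients over and removing old_dc from the replication settings.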

If you can't, then I guess you can for each node (one at a time),
decommission it, wipe it clean and re-bootstrap it after setting the
appropriate rack.
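
For a single node, that would look roughly like this (the data paths assume a
default package install -- adjust to your layout):

nodetool decommission     # run on the node that is leaving the ring
# stop cassandra on that node, then wipe its local state:
sudo rm -rf /var/lib/cassandra/data/* \
            /var/lib/cassandra/commitlog/* \
            /var/lib/cassandra/saved_caches/*
# set the new rack in cassandra-rackdc.properties, e.g.:
#   dc=dc1
#   rack=rac2
# then start cassandra again so the node re-bootstraps with its new rack
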
Also, note that your keyspaces must use NetworkTopologyStrategy for racks to
be taken into account. Change the strategy prior to adding the new nodes if
you're currently using SimpleStrategy.

You cannot (and shouldn't) try to change the rack on an existing node (the
GossipingPropertyFileSnitch won't allow it).

Cheers,

--
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


Changing existing Cassandra cluster from single rack configuration to multi racks configuration

2019-03-06 Thread manish khandelwal
We have a 6-node Cassandra cluster in which all the nodes are in the same
rack in a DC. We want to take advantage of a "multi rack" cluster (for
example: parallel upgrades of all the nodes in the same rack without
downtime). I would like to know the recommended process for changing an
existing cluster from a single-rack configuration to a multi-rack
configuration.


I want to introduce 3 racks with 2 nodes in each rack.


Regards
Manish