I've finished the migration to NetworkTopologyStrategy using
GossipingPropertyFileSnitch.

Now I have 4 nodes in zone a (rack1) and another 4 nodes in zone b
(rack2), in a single DC; there's no zone c in Frankfurt.

Can I get QUORUM consistency for reads (for writes I'm using ANY) by
adding a tiny node with only num_tokens = 3 in another location, or must
it be a node like the others with num_tokens = 256?

I only do inserts and queries; there are no updates or direct deletes,
only deletes triggered by TTL.
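
For context, this is roughly how the keyspace looks after the migration
(a sketch; I'm assuming the keyspace 'test' from my original mail and a
DC named 'dc1' — adjust to the real names):

    ALTER KEYSPACE test WITH replication = {'class':
    'NetworkTopologyStrategy', 'dc1': '2'} AND durable_writes = true;

With RF = 2, QUORUM needs floor(2/2) + 1 = 2 replicas, so a QUORUM read
requires both replicas (one per rack) to respond.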

Thanks in advance.


On 05-09-2017 13:41, kurt greaves wrote:
> Data will be distributed among racks correctly, but only if you are
> using a snitch that understands racks, together with
> NetworkTopologyStrategy; SimpleStrategy doesn't understand racks or
> DCs. You should use a snitch that understands racks and then
> transition to a 2-rack cluster, keeping only 1 DC. The whole
> DC-per-rack approach isn't necessary and will make your clients overly
> complicated.
>
> On 5 Sep. 2017 21:01, "Cogumelos Maravilha"
> <cogumelosmaravi...@sapo.pt <mailto:cogumelosmaravi...@sapo.pt>> wrote:
>
>     Hi list,
>
>     CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy',
>     'replication_factor': '2'}  AND durable_writes = true;
>
>     I'm using C* 3.11.0 with 8 nodes on AWS: 4 nodes in zone a and the
>     other 4 nodes in zone b. The idea is to keep the cluster alive if
>     zone a or b goes dark and to keep QUORUM for reads. For writes I'm
>     using ANY.
>
>     Using getendpoints I can see that lots of keys are in the same
>     zone. As far as I understand, a rack-based setup does not guarantee
>     full data replication between racks.
>
>     My idea to reach this goal is:
>
>     - Change replication_factor to 1
>
>     - Start decommissioning nodes one by one in one zone.
>
>     - When only 4 nodes are up and running in one zone, change the
>     keyspace configuration to use DCs, with the current data as DC1 and
>     the other 4 nodes as DC2.
>
>
>     Is this the best approach?
>
>
>     Thanks in advance.
>
>
>
>     ---------------------------------------------------------------------
>     To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>     <mailto:user-unsubscr...@cassandra.apache.org>
>     For additional commands, e-mail: user-h...@cassandra.apache.org
>     <mailto:user-h...@cassandra.apache.org>
>
