When a node or an entire DC is down, the coordinator nodes handling writes
will notice this and store hints (the mechanism is called hinted handoff);
once the down replicas come back, those hints are used to send them the
data that could not be replicated initially.
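To make the mechanism concrete, here is a minimal, purely illustrative sketch (not Cassandra's actual implementation; the class and method names are invented for this example): the coordinator buffers a hint for each unreachable replica and replays the queued mutations once that replica is reported up again.

```python
# Illustrative sketch of hinted handoff. Assumed/invented names:
# Coordinator, write(), replay_hints() -- none of these are Cassandra APIs.
from collections import defaultdict

class Coordinator:
    def __init__(self):
        self.hints = defaultdict(list)  # down replica -> queued mutations

    def write(self, replicas, mutation, down):
        acked = []
        for replica in replicas:
            if replica in down:
                self.hints[replica].append(mutation)  # store a hint instead
            else:
                acked.append(replica)                 # delivered directly
        return acked

    def replay_hints(self, replica):
        # Called when the replica comes back (within max_hint_window_in_ms,
        # after which real Cassandra stops storing hints for it).
        return self.hints.pop(replica, [])

coord = Coordinator()
coord.write(["n1", "n2", "n3"], "INSERT ...", down={"n3"})
print(coord.replay_hints("n3"))  # ['INSERT ...']
```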

http://www.datastax.com/dev/blog/modern-hinted-handoff

-Tupshin
On May 29, 2014 6:22 PM, "Vasileios Vlachos" <vasileiosvlac...@gmail.com>
wrote:

Hello All,

We have plans to add a second DC to our live Cassandra environment.
Currently RF=3 and we read and write at QUORUM. After adding DC2 we are
going to be reading and writing at LOCAL_QUORUM.

If my understanding is correct, when a client sends a write request and the
consistency level is satisfied on DC1 (that is, RF/2+1 replicas
acknowledge), success is returned to the client and DC2 eventually gets the
data as well. The assumption behind this is that the client always connects
to DC1 for reads and writes, and that DC1 and DC2 are linked by a
site-to-site VPN. Therefore DC1 will almost always return success before
DC2 (actually, I don't know whether it is even possible for DC2 to be more
up to date than DC1 with this setup...).
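The quorum arithmetic behind the setup described above can be sketched as follows (assuming RF=3 in each DC, which matches the current single-DC RF):

```python
# Quorum size in Cassandra is (replication_factor // 2) + 1.
def quorum(rf):
    return rf // 2 + 1

rf_per_dc = 3
# LOCAL_QUORUM counts only replicas in the coordinator's own DC:
print(quorum(rf_per_dc))       # 2 acks needed within DC1
# Plain QUORUM counts replicas across all DCs (RF 3 + 3 = 6 total):
print(quorum(rf_per_dc * 2))   # 4 acks needed cluster-wide
```

This is why LOCAL_QUORUM keeps latency local: 2 acknowledgements from DC1 suffice, instead of 4 that would have to include at least one node across the VPN.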

Now imagine DC1 loses connectivity and the client fails over to DC2.
Everything should work fine after that, the only difference being that DC2
will now be handling requests directly from the client. After some time,
say longer than max_hint_window_in_ms, DC1 comes back up. My question is:
how do I bring DC1 up to speed with DC2, which is now more up to date? Will
that require a nodetool repair on the DC1 nodes? And what is the answer
when the outage is shorter than max_hint_window_in_ms instead?
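The decision being asked about can be sketched as follows, assuming the standard Cassandra behaviour: hints are only stored for up to max_hint_window_in_ms (3 hours by default), so an outage longer than that leaves gaps that only a repair can close. The function name here is invented for illustration.

```python
# Hypothetical helper illustrating the recovery decision; the 3-hour
# default matches Cassandra's out-of-the-box max_hint_window_in_ms.
def recovery_action(outage_ms, max_hint_window_in_ms=3 * 60 * 60 * 1000):
    if outage_ms < max_hint_window_in_ms:
        return "hint replay"      # coordinators re-send stored hints
    return "nodetool repair"      # hints were not kept; repair is needed

print(recovery_action(10 * 60 * 1000))       # hint replay
print(recovery_action(5 * 60 * 60 * 1000))   # nodetool repair
```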

Thanks in advance!

Vasilis

-- 
Kind Regards,

Vasileios Vlachos
