> Local quorum works in the same data center as the coordinator node,
> but when an app server executes a write query, how is the coordinator
> node chosen?

It typically depends on the driver, and decent drivers offer several
options for this, usually called a load balancing policy.  You indicate
that you're using the node.js driver (presumably the DataStax version),
whose tuning policies are documented here:
http://docs.datastax.com/en/developer/nodejs-driver/3.0/common/drivers/reference/tuningPolicies.html

I'm not familiar with the node.js driver, but I am familiar with the Java
driver, and since they use the same terminology regarding load balancing,
I'll assume they work the same way.

A typical way to set that up is to use TokenAwarePolicy with
DCAwareRoundRobinPolicy as its child policy.  This prefers to route each
query to the primary replica (or a secondary replica if the primary is
offline) in the local datacenter for that query's partition, provided the
routing information can be discovered automatically by the driver, such as
with prepared statements.
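
For example, with the Java driver 3.x the setup looks roughly like this (a
sketch; the contact point, DC name, keyspace, and table are placeholders,
and the node.js driver exposes equivalently named policies):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
    import com.datastax.driver.core.policies.TokenAwarePolicy;

    public class TokenAwareExample {
        public static void main(String[] args) {
            // Token-aware routing layered over DC-aware round robin.
            Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")       // placeholder contact point
                .withLoadBalancingPolicy(new TokenAwarePolicy(
                    DCAwareRoundRobinPolicy.builder()
                        .withLocalDc("DC1")        // placeholder local DC name
                        .build()))
                .build();
            Session session = cluster.connect();

            // Prepared statements let the driver compute the partition token,
            // so TokenAwarePolicy can pick a local replica as the coordinator.
            session.execute(session.prepare(
                "SELECT * FROM ks.tbl WHERE id = ?").bind("some-id"));

            cluster.close();
        }
    }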

Where the replica can't be discovered, TokenAware defers to the child
policy to choose the host.  In the case of DCAwareRoundRobinPolicy, that
means it iterates through the hosts of the configured local datacenter
(which defaults to the DC of the seed nodes, provided they're all in the
same DC), advancing to the next host on each subsequent execution.
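
Conceptually, that fallback behaves like this (a simplified sketch of the
idea, not the driver's actual code; the host list is illustrative):

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Simplified model of DCAwareRoundRobinPolicy's fallback: rotate
    // through the up hosts of the local DC, starting at a different
    // offset for each query so the load spreads evenly.
    class LocalDcRoundRobin {
        private final AtomicInteger offset = new AtomicInteger();

        String nextCoordinator(List<String> localDcHosts) {
            int i = Math.floorMod(offset.getAndIncrement(), localDcHosts.size());
            return localDcHosts.get(i);
        }
    }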

On Fri, Mar 25, 2016 at 2:04 PM X. F. Li <lixf...@gmail.com> wrote:

> Hello,
>
> Local quorum works in the same data center as the coordinator node, but
> when an app server executes a write query, how is the coordinator node
> chosen?
>
> I use the node.js driver. How does the driver client determine which
> cassandra nodes are in the same DC as the client node? Does it use
> private network IPs [192.168.x.x etc.] to auto-detect, or must I manually
> provide a loadBalancing policy by `new DCAwareRoundRobinPolicy(
> localDcName )`?
>
> If a partition is not available in the local DC, i.e. if the local
> replica node fails or all replica nodes are in a remote DC, will local
> quorum fail? If it doesn't fail, there is no guarantee that all queries
> on a partition will be directed to the same data center, so does it mean
> strong consistency cannot be expected?
>
> Another question:
>
> Suppose I have replication factor 3. If one of the nodes fails, will
> queries with ALL consistency fail if the queried partition is on the
> failed node? Or would they continue to work with 2 replicas while
> Cassandra is replicating the partitions that were on the failed node to
> re-establish 3 replicas?
>
> Thank you.
> Regards,
>
> X. F. Li
>
