will reply back to the co-ordinator in DC1. So if you
have replication of DC1:3, DC2:3, a co-ordinator node will get 6 responses
back if it is not in the replica set.
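To make the arithmetic above concrete, here is a small sketch (the helper names are mine, not Cassandra API) of how many responses a coordinator outside the replica set can expect with those replication factors, and how many it must wait for at common consistency levels:

```python
# Sketch: with replication DC1:3, DC2:3 there are 6 replicas in total,
# so a coordinator that is not itself a replica hears back from 6 nodes.
# quorum() is the standard Cassandra majority formula.

def total_replicas(rf_by_dc):
    """Total replicas across all data centers."""
    return sum(rf_by_dc.values())

def quorum(replica_count):
    """Majority of replicas: floor(n/2) + 1."""
    return replica_count // 2 + 1

rf = {"DC1": 3, "DC2": 3}

print(total_replicas(rf))          # 6 responses reach the coordinator
print(quorum(total_replicas(rf)))  # QUORUM waits for 4 of them
print(quorum(rf["DC1"]))           # LOCAL_QUORUM in DC1 waits for 2
```

The coordinator replies to the client as soon as the consistency level is satisfied; the remaining responses still arrive but only feed things like read repair.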
Hope that answers your question.
On Tue, Feb 11, 2014 at 8:16 AM, Mullen, Robert robert.mul...@pearson.com
wrote:
I had
what nodes
should reply. It is not correct.
On Tue, Feb 11, 2014 at 9:36 AM, Mullen, Robert robert.mul...@pearson.com
wrote:
So is that picture incorrect, or just incomplete, missing the piece on how
the nodes reply to the coordinator node?
On Tue, Feb 11, 2014 at 9:38 AM, sankalp kohli
appreciated.
Regards,
Rob
On Tue, Feb 11, 2014 at 11:25 AM, Andrey Ilinykh ailin...@gmail.com wrote:
On Tue, Feb 11, 2014 at 10:14 AM, Mullen, Robert
robert.mul...@pearson.com wrote:
Thanks for the feedback.
The picture shows a sample request, which is why the coordinator points
index first, the CQL statement will run into a
syntax error.
On Tue, Jan 28, 2014 at 11:37 AM, Mullen, Robert
robert.mul...@pearson.com wrote:
I would do #2. Take a look at this blog which talks about secondary
indexes, cardinality, and what it means for cassandra. Secondary indexes
in cassandra are a different beast, so often old rules of thumb about
indexes don't apply. http://www.wentnet.com/blog/?p=77
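A rough back-of-envelope (illustrative only, not Cassandra internals) of why cardinality matters for a secondary index: each distinct indexed value points at roughly total_rows / cardinality rows, and the lookup fans out across nodes.

```python
# Sketch: estimated rows behind each distinct value of an indexed column.
# Very high cardinality (near-unique values) means each index entry yields
# almost nothing per node contacted; very low cardinality piles huge
# numbers of rows under a handful of entries.

def rows_per_indexed_value(total_rows, cardinality):
    """Estimated rows matched per distinct indexed value."""
    return total_rows / cardinality

total = 1_000_000
print(rows_per_indexed_value(total, 2))      # e.g. a boolean flag: 500000.0 rows per value
print(rows_per_indexed_value(total, total))  # e.g. a unique id: 1.0 row per value, a poor fit
```

The blog linked above discusses the middle ground where Cassandra secondary indexes actually pay off.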
On Tue, Jan 28, 2014 at
...@gmail.com wrote:
Robert, is it possible you've changed the partitioner during the upgrade?
(e.g. from RandomPartitioner to Murmur3Partitioner ?)
On Sat, Jan 4, 2014 at 9:32 PM, Mullen, Robert robert.mul...@pearson.com
wrote:
The nodetool repair command (which took about 8 hours) seems to have
upon that I would expect the counts across the nodes to all be 59 in this
case.
thanks,
Rob
On Fri, Jan 3, 2014 at 5:14 PM, Robert Coli rc...@eventbrite.com wrote:
On Fri, Jan 3, 2014 at 3:33 PM, Mullen, Robert
robert.mul...@pearson.com wrote:
I have a multi region cluster with 3 nodes in each
from cql:
cqlsh> select count(*) from topics;
On Sat, Jan 4, 2014 at 12:18 PM, Robert Coli rc...@eventbrite.com wrote:
On Sat, Jan 4, 2014 at 11:10 AM, Mullen, Robert robert.mul...@pearson.com
wrote:
I have a column family called topics which has a count of 47 on one
node, 59 on another
still don't understand why it's reporting 16% for each node when 100% seems
to reflect the state of the cluster better. I didn't find any info in
those issues you posted that would relate to the % changing from 100%
to 16%.
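One plausible explanation (my assumption, not stated in the thread): when no keyspace is given, nodetool reports raw token ownership, while with a keyspace it reports effective ownership, i.e. the raw share scaled by the replication factor. With 6 nodes in total and RF 3 in each 3-node DC, those two numbers are roughly 16.7% and 100%, matching both readings seen here. The arithmetic:

```python
# Sketch of the two 'owns %' figures (assumption: raw ownership is the
# token share across all nodes; effective ownership multiplies a node's
# in-DC share by that DC's replication factor).

nodes_total = 6    # 3 nodes in each of two DCs
nodes_per_dc = 3
rf_per_dc = 3

raw = 100 / nodes_total                    # raw token ownership per node
effective = (100 / nodes_per_dc) * rf_per_dc  # effective ownership per node

print(round(raw, 1))     # 16.7 -- the post-upgrade reading
print(round(effective))  # 100  -- the pre-upgrade reading
```

If that is the cause, running the ownership command with the keyspace name should bring the figure back to 100%.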
On Sat, Jan 4, 2014 at 12:26 PM, Mullen, Robert
robert.mul
Hello,
I have a multi region cluster with 3 nodes in each data center, ec2 us-east
and west. Prior to upgrading to 2.0.2 from 1.2.6, the owns % of each
node was 100%, which made sense because I had a replication factor of 3 for
each data center. After upgrading to 2.0.2 each node claims to