Re: Clarification on how multi-DC replication works

2014-02-11 Thread Mullen, Robert
will reply back to the co-ordinator in DC1. So if you have replication of DC1:3, DC2:3, a co-ordinator node will get 6 responses back if it is not in the replica set. Hope that answers your question. On Tue, Feb 11, 2014 at 8:16 AM, Mullen, Robert robert.mul...@pearson.com wrote: I had
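For reference, a keyspace replicated as DC1:3,DC2:3 could be declared roughly as follows in CQL; the keyspace name and data-center names here are placeholders, not taken from the thread:

```sql
-- Hypothetical keyspace; the per-DC replication factors mirror the example above.
CREATE KEYSPACE demo_ks
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};
```

At a consistency level that waits on every replica (e.g. ALL), a coordinator that is not itself a replica would collect one acknowledgement from each of the 6 replica nodes, matching the count described above.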

Re: Clarification on how multi-DC replication works

2014-02-11 Thread Mullen, Robert
what nodes should reply. It is not correct. On Tue, Feb 11, 2014 at 9:36 AM, Mullen, Robert robert.mul...@pearson.com wrote: So is that picture incorrect, or just incomplete, missing the piece on how the nodes reply to the coordinator node? On Tue, Feb 11, 2014 at 9:38 AM, sankalp kohli

Re: Clarification on how multi-DC replication works

2014-02-11 Thread Mullen, Robert
appreciated. Regards, Rob On Tue, Feb 11, 2014 at 11:25 AM, Andrey Ilinykh ailin...@gmail.com wrote: On Tue, Feb 11, 2014 at 10:14 AM, Mullen, Robert robert.mul...@pearson.com wrote: Thanks for the feedback. The picture shows a sample request, which is why the coordinator points

Re: question about secondary index or not

2014-01-29 Thread Mullen, Robert
index first, the CQL statement will run into a syntax error. On Tue, Jan 28, 2014 at 11:37 AM, Mullen, Robert robert.mul...@pearson.com wrote: I would do #2. Take a look at this blog which talks about secondary indexes, cardinality, and what it means for Cassandra. Secondary indexes

Re: question about secondary index or not

2014-01-28 Thread Mullen, Robert
I would do #2. Take a look at this blog which talks about secondary indexes, cardinality, and what it means for Cassandra. Secondary indexes in Cassandra are a different beast, so often old rules of thumb about indexes don't apply. http://www.wentnet.com/blog/?p=77 On Tue, Jan 28, 2014 at
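As a sketch of the cardinality trade-off the linked blog post discusses, the schema below is invented for illustration only (table, column, and index names are not from the thread):

```sql
-- Invented schema for illustration only.
CREATE TABLE users (
  user_id uuid PRIMARY KEY,
  country text,   -- moderate cardinality: a reasonable secondary-index target
  email   text    -- near-unique: usually a poor fit for a secondary index
);

-- A secondary index on a moderate-cardinality column lets Cassandra serve
-- equality queries on that column without ALLOW FILTERING:
CREATE INDEX users_by_country ON users (country);
SELECT user_id FROM users WHERE country = 'US';
```

The rough rule from the post: very high-cardinality columns (each index entry points at one row) and very low-cardinality columns (each entry points at most of the table) both index poorly.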

Re: nodetool status owns % calculation after upgrade to 2.0.2

2014-01-06 Thread Mullen, Robert
...@gmail.com wrote: Robert, is it possible you've changed the partitioner during the upgrade? (e.g. from RandomPartitioner to Murmur3Partitioner ?) On Sat, Jan 4, 2014 at 9:32 PM, Mullen, Robert robert.mul...@pearson.com wrote: The nodetool repair command (which took about 8 hours) seems to have

Re: nodetool status owns % calculation after upgrade to 2.0.2

2014-01-04 Thread Mullen, Robert
upon that I would expect the counts across the nodes to all be 59 in this case. thanks, Rob On Fri, Jan 3, 2014 at 5:14 PM, Robert Coli rc...@eventbrite.com wrote: On Fri, Jan 3, 2014 at 3:33 PM, Mullen, Robert robert.mul...@pearson.comwrote: I have a multi region cluster with 3 nodes in each

Re: nodetool status owns % calculation after upgrade to 2.0.2

2014-01-04 Thread Mullen, Robert
from cql: cqlsh> select count(*) from topics; On Sat, Jan 4, 2014 at 12:18 PM, Robert Coli rc...@eventbrite.com wrote: On Sat, Jan 4, 2014 at 11:10 AM, Mullen, Robert robert.mul...@pearson.com wrote: I have a column family called topics which has a count of 47 on one node, 59 on another

Re: nodetool status owns % calculation after upgrade to 2.0.2

2014-01-04 Thread Mullen, Robert
still don't understand why it's reporting 16% for each node when 100% seems to reflect the state of the cluster better. I didn't find any info in those issues you posted that would relate to the % changing from 100% to 16%. On Sat, Jan 4, 2014 at 12:26 PM, Mullen, Robert robert.mul

nodetool status owns % calculation after upgrade to 2.0.2

2014-01-03 Thread Mullen, Robert
Hello, I have a multi-region cluster with 3 nodes in each data center, EC2 us-east and us-west. Prior to upgrading to 2.0.2 from 1.2.6, the owns % of each node was 100%, which made sense because I had a replication factor of 3 for each data center. After upgrading to 2.0.2 each node claims to
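The expectation stated here can be sketched numerically: with RF=3 in a 3-node data center, every node stores a replica of every row, so 100% ownership per node is the sensible figure. This is a hedged illustration of the arithmetic, not Cassandra's actual ownership calculation, and the function name is hypothetical:

```python
# Illustrative sketch (not Cassandra source): average fraction of a DC's
# data held by each node, given the replication factor and node count.
def effective_ownership_pct(replication_factor: int, nodes_in_dc: int) -> float:
    """Percent of the DC's data each node stores, on average."""
    # Replicas of each token range land on distinct nodes, so a node holds
    # roughly min(RF, N) / N of the data.
    return min(replication_factor, nodes_in_dc) / nodes_in_dc * 100

print(effective_ownership_pct(3, 3))  # 100.0 -> each node owns all the data
print(effective_ownership_pct(3, 6))  # 50.0  -> with 6 nodes per DC
```

The 16% figure reported after the upgrade looks instead like 100% divided across all 6 nodes cluster-wide, i.e. ownership computed without accounting for replication.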