Hi, could it be due to a noisy neighbour? Do you have graphs or
statistics of ping times between the nodes?
Jason
On Mon, Jan 6, 2014 at 7:28 AM, Blake Eggleston bl...@shift.com wrote:
Hi,
I’ve been having a problem with 3 neighboring nodes in our cluster having
their read latencies jump up to 9000ms
Oh man, you know what my problem was: I was not specifying the keyspace
after nodetool status. After specifying the keyspace I get the 100%
ownership like I would expect.
nodetool status discussions
ubuntu@prd-usw2b-pr-01-dscsapi-cadb-0002:~$ nodetool status discussions
Datacenter: us-east-1
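For context on why the keyspace argument matters: without it, nodetool status reports raw token ownership, while with a keyspace it reports effective ownership, which accounts for that keyspace's replication factor. A hedged sketch of the difference for a hypothetical 3-node cluster with RF=3 (addresses, loads, and host IDs here are invented for illustration; output columns follow the 1.2-era format):

```
$ nodetool status
Datacenter: us-east-1
--  Address   Load     Tokens  Owns    Host ID  Rack
UN  10.0.0.1  1.1 GB   256     33.4%   ...      1a
UN  10.0.0.2  1.0 GB   256     33.3%   ...      1a
UN  10.0.0.3  1.1 GB   256     33.3%   ...      1b

$ nodetool status discussions
Datacenter: us-east-1
--  Address   Load     Tokens  Owns (effective)  Host ID  Rack
UN  10.0.0.1  1.1 GB   256     100.0%            ...      1a
UN  10.0.0.2  1.0 GB   256     100.0%            ...      1a
UN  10.0.0.3  1.1 GB   256     100.0%            ...      1b
```

With RF=3 on 3 nodes, every node holds a replica of every row, so effective ownership is 100% per node even though each owns only a third of the token space.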
On 01/02/2014 01:51 PM, Arindam Barua wrote:
1. the stability of vnodes in production
I'm happily using vnodes in production now, but I would have trouble
calling them stable for more than small clusters until very recently
(1.2.13). CASSANDRA-6127 served as a master ticket for most of
That’s a good point. CPU steal time is very low, but I haven’t observed
internode ping times during one of the peaks, I’ll have to check that out.
Another thing I’ve noticed is that Cassandra starts dropping read messages
during the spikes, as reported by tpstats. This indicates that there’s
Thanks for your responses. We are on 1.2.12 currently.
The fixes in 1.2.13 seem to help for clusters in the 500+ node range (like
CASSANDRA-6409). Ours is below 50 now, so we plan to go ahead and enable vnodes
with the 'add a new DC' procedure. We will try to upgrade to 1.2.13 or 1.2.14
This is a generally good interpretation of the state of vnodes with respect
to Cassandra versions 1.2.12 and 1.2.13.
Adding a new datacenter to a 1.2.12 cluster at your scale should be fine. I
consider vnodes fit for production at almost any scale after 1.2.13, or 50
nodes or less (ballpark) for
On 01/04/2014 08:04 AM, Ertio Lew wrote:
... my dual boot 4GB(RAM) machine.
... -Xms4G -Xmx4G -
You are allocating all of your RAM to the Java heap. Are you using the
same JVM parameters on the Windows side? You can try lowering the heap
size or adding RAM to your machine.
- Erik -
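For reference, a 4G heap on a 4GB machine leaves nothing for the OS page cache or off-heap structures. The stock cassandra-env.sh of that era sized the heap as max(min(1/2 RAM, 1024MB), min(1/4 RAM, 8192MB)); the exact constants are an assumption here, so verify against your own copy. A minimal sketch of that rule:

```shell
# Sketch of the default heap calculation (constants assumed from the
# 1.2-era cassandra-env.sh; check your own copy before relying on them).
system_memory_mb=4096                                        # hypothetical 4GB machine

half_mem=$((system_memory_mb / 2))
quarter_mem=$((system_memory_mb / 4))

cap_half=$(( half_mem > 1024 ? 1024 : half_mem ))            # min(1/2 RAM, 1024MB)
cap_quarter=$(( quarter_mem > 8192 ? 8192 : quarter_mem ))   # min(1/4 RAM, 8192MB)

max_heap_mb=$(( cap_half > cap_quarter ? cap_half : cap_quarter ))
echo "MAX_HEAP_SIZE=${max_heap_mb}M"
```

On this hypothetical 4GB box the rule picks 1024M, a quarter of what the -Xmx4G flag above grants, leaving the rest for the page cache.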
On 01/06/2014 01:56 PM, Arindam Barua wrote: