Re: need help tuning dropped mutation messages

2017-07-06 Thread Subroto Barua
c* version: 3.0.11
cross_node_timeout: true
range_request_timeout_in_ms: 1
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
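
These settings come from cassandra.yaml and bound how long a coordinator waits for replica acknowledgements before a write times out; with cross_node_timeout: true, replicas also drop mutations whose age, measured from the coordinator's send timestamp (so NTP-synchronized clocks are assumed), already exceeds the timeout. A minimal client-side sketch, not from the thread, using the Python cassandra-driver and a hypothetical contact point, that keeps the driver's own timeout above the server's 2000 ms write timeout so server-side timeouts surface as WriteTimeout rather than a client-side OperationTimedOut:

    from cassandra.cluster import Cluster

    # Hypothetical contact point; adjust for the cluster at hand.
    cluster = Cluster(['10.0.0.1'])
    session = cluster.connect()

    # Client-side timeout in seconds. Keep it comfortably above the
    # server's write_request_timeout_in_ms (2000 ms above) so the server,
    # not the client, decides when a write has timed out.
    session.default_timeout = 10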

need help tuning dropped mutation messages

2017-07-06 Thread Subroto Barua
I am seeing these errors: MessagingService.java:1013 -- MUTATION messages dropped in last 5000 ms: 0 for internal timeout and 4 for cross node timeout. Writes at LOCAL_QUORUM consistency are failing on both a 3-node cluster and an 18-node cluster.
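
For context, a replica that drops a mutation never acknowledges it, so enough drops in the local datacenter will push a LOCAL_QUORUM write past write_request_timeout_in_ms. A minimal sketch of a write at that consistency level, not from the thread, assuming the Python cassandra-driver and a hypothetical ks.events table:

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    session = Cluster(['10.0.0.1']).connect('ks')

    # Each statement carries its own consistency level; LOCAL_QUORUM waits
    # only for a quorum of replicas in the coordinator's local datacenter.
    insert = SimpleStatement(
        "INSERT INTO events (id, payload) VALUES (%s, %s)",
        consistency_level=ConsistencyLevel.LOCAL_QUORUM)
    session.execute(insert, (42, 'hello'))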

Dropped Mutation Messages in two DCs at different sites

2017-01-03 Thread Benyi Wang
I need to batch-load a lot of data every day into a keyspace that spans two DCs, one on the west coast and the other on the east coast. I assume the network delay between the two sites will cause a lot of dropped mutation messages if I write too fast into the local DC using LOCAL_QUORUM
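
One common way to keep such a load local, sketched below rather than taken from the thread, is to pin the driver to the nearby datacenter with a DC-aware load-balancing policy: coordinators are then chosen in the local DC, a LOCAL_QUORUM write never waits on the cross-country link, and the remote DC receives its replicas without being on the client's write path. The contact point and the DC name 'DC_WEST' are hypothetical:

    from cassandra.cluster import Cluster
    from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

    # Route requests to coordinators in the local DC; the coordinator still
    # forwards mutations to the remote DC but does not wait for it when the
    # consistency level is LOCAL_QUORUM.
    cluster = Cluster(
        ['10.0.0.1'],
        load_balancing_policy=TokenAwarePolicy(
            DCAwareRoundRobinPolicy(local_dc='DC_WEST')))
    session = cluster.connect('ks')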

Re: Dropped mutation messages

2015-06-13 Thread Robert Wille
the rpc_timeout it will return a TimedOutException to the client. I understand that, but that’s where this makes no sense: I’m running with RF=1 and CL=QUORUM, which means each update goes to one node and I need one response for success. Yet I have many thousands of dropped mutation messages
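
For the arithmetic behind that point: QUORUM needs floor(RF/2) + 1 acknowledgements, so with RF=1 a quorum is a single replica and QUORUM behaves exactly like ONE. A one-liner to make that concrete:

    def quorum(replication_factor: int) -> int:
        # Cassandra's quorum is a majority of the replicas.
        return replication_factor // 2 + 1

    assert quorum(1) == 1   # RF=1: QUORUM is equivalent to ONE
    assert quorum(3) == 2   # RF=3: two of three replicas must respond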

Re: Dropped mutation messages

2015-06-13 Thread Anuj Wadehra
You said RF=1... I missed that, so I'm not sure eventual consistency is creating the issue. Thanks, Anuj Wadehra

Re: Dropped mutation messages

2015-06-13 Thread Anuj Wadehra
Internode messages which are received by a node but do not get processed within rpc_timeout are dropped rather than processed. As the coordinator node will no longer be waiting for a response
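
On the client side that coordinator-gives-up case surfaces as a write timeout, and because some replicas may already have applied the mutation, a timed-out write is indeterminate rather than cleanly failed. A minimal handling sketch, not from the thread, assuming the Python cassandra-driver and a hypothetical ks.events table:

    from cassandra import WriteTimeout
    from cassandra.cluster import Cluster

    session = Cluster(['10.0.0.1']).connect('ks')

    try:
        session.execute("INSERT INTO events (id, payload) VALUES (1, 'x')")
    except WriteTimeout:
        # The coordinator stopped waiting, but the write may still have been
        # applied on some replicas. Decide whether to retry based on whether
        # the write is idempotent; here we simply re-raise.
        raise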

Dropped mutation messages

2015-06-12 Thread Robert Wille
mutation messages. Am I overloading my cluster? I never have more than about 10% CPU utilization (even my I/O wait is negligible). A curious thing is that the driver hasn’t thrown any exceptions, even though mutations have been dropped. I’ve seen dropped mutation messages on my

Re: Dropped mutation messages

2015-06-12 Thread Robert Wille
writing several tens of millions of records to my test cluster. My main concern is that I have a few tens of thousands of dropped mutation messages. Am I overloading my cluster? I never have more than about 10% CPU utilization (even my I/O wait is negligible). A curious thing about
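
When a bulk load outruns the cluster like this, one common mitigation, sketched here rather than taken from Robert's code, is to cap the number of in-flight asynchronous writes on the client so coordinators never queue more mutations than they can apply within the write timeout. The table, contact point, and limit below are hypothetical:

    from threading import Semaphore
    from cassandra.cluster import Cluster

    session = Cluster(['10.0.0.1']).connect('ks')
    insert = session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)")

    in_flight = Semaphore(128)   # allow at most 128 concurrent writes

    def write_row(row_id, payload):
        in_flight.acquire()
        future = session.execute_async(insert, (row_id, payload))
        # Release the slot whether the write succeeds or fails.
        future.add_callbacks(lambda _result: in_flight.release(),
                             lambda _exc: in_flight.release())

    for i in range(10_000_000):
        write_row(i, 'payload-%d' % i)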

Re: Dropped mutation messages

2013-06-20 Thread aaron morton
(Quoting cem’s original message of Tuesday, June 18, 2013:) Hi All, I have a cluster of 5 nodes with C* 1.2.4. Each node has 4 disks of 1 TB each. I see a lot of dropped messages after it stores 400 GB per disk (1.6 TB per node

Re: Dropped mutation messages

2013-06-19 Thread Shahab Yunus
on large data retrievals, so in short, you may need to revise how you query; the queries need to be lightened. /Arthur

Dropped mutation messages

2013-06-18 Thread cem
Hi All, I have a cluster of 5 nodes with C* 1.2.4. Each node has 4 disks of 1 TB each. I see a lot of dropped messages after it stores 400 GB per disk (1.6 TB per node). The recommendation was 500 GB max per node before 1.2. DataStax says that we can store terabytes of data per node with 1.2.

Re: Dropped mutation messages

2013-06-18 Thread Arthur Zubarev
(Quoting the original message:) Hi All, I have a cluster of 5 nodes with C* 1.2.4. Each node has 4 disks of 1 TB each. I see a lot of dropped messages after it stores 400 GB per disk (1.6 TB per node). The recommendation was 500 GB max per node before 1.2