Re: Is it okay to use a small t2.micro instance for OpsCenter and use m3.medium instances for the actual Cassandra nodes?

2015-06-26 Thread arun sirimalla
Hi Sid, I would recommend using either c3 or m3 instances for OpsCenter; for the Cassandra nodes it depends on your use case. You can go with either c3s or i2s for the Cassandra nodes. But I would recommend running performance tests before selecting an instance type. If your use case…

Re: Read Consistency

2015-06-23 Thread arun sirimalla
Scenario 1: A read query is fired for a key; the data is found on one node and not found on the other two nodes that are responsible for the token corresponding to the key. Your read query will fail, as it expects to receive data from 2 nodes with RF=3. Scenario 2: A read query is fired and all 3 replicas have…

Re: Read Consistency

2015-06-23 Thread arun sirimalla
Thanks, good to know that. On Tue, Jun 23, 2015 at 11:27 AM, Philip Thompson philip.thomp...@datastax.com wrote: Yes, that is what he means. CL is for how many nodes need to respond, not agree. On Tue, Jun 23, 2015 at 2:26 PM, arun sirimalla arunsi...@gmail.com wrote: So do you mean…
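The correction in this thread (a consistency level counts responses, not agreements) can be illustrated with a minimal sketch. This is not Cassandra's actual implementation; the `quorum` and `coordinator_read` helpers and their inputs are hypothetical stand-ins for the coordinator's behavior.

```python
# Minimal sketch (not Cassandra's real code) of the point made in this
# thread: a consistency level specifies how many replicas must RESPOND,
# not how many must agree. The coordinator reconciles the responses by
# write timestamp, so one replica holding the row is enough as long as a
# quorum of replicas answers.

def quorum(replication_factor: int) -> int:
    """Number of replicas that must respond for CL=QUORUM."""
    return replication_factor // 2 + 1

def coordinator_read(responses):
    """responses: one entry per responding replica, either None ("no data")
    or a (timestamp, value) pair. The newest value among responders wins."""
    hits = [r for r in responses if r is not None]
    if not hits:
        return None  # no responding replica had the row
    return max(hits)[1]  # highest write timestamp wins

# RF=3: a quorum read needs 2 responding replicas.
print(quorum(3))  # -> 2

# Scenario 1 from the thread: two replicas respond but only one has the
# row. The read still succeeds and returns that value; in real Cassandra,
# read repair would then bring the stale replica up to date.
print(coordinator_read([(100, "v1"), None]))  # -> v1
```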

Re: Read Consistency

2015-06-23 Thread arun sirimalla
*From*: arun sirimalla arunsi...@gmail.com *Date*: Tue, 23 Jun 2015 at 11:39 pm *Subject*: Re: Read Consistency. Scenario 1: Read query is fired for a key, data is found on one…

Re: nodetool repair

2015-06-19 Thread arun sirimalla
…does it mean that it would also clean up my tombstones from my LeveledCompactionStrategy tables at the same time? Thanks for your help. On 19 Jun 2015, at 07:56, arun sirimalla arunsi...@gmail.com wrote: Hi Jean, Running nodetool repair on a node will repair only that node in the cluster…

Re: nodetool repair

2015-06-18 Thread arun sirimalla
Hi Jean, Running nodetool repair on a node will repair only that node in the cluster. It is recommended to run nodetool repair on one node at a time. A few things to keep in mind while running repair: 1. Running repair will trigger compactions. 2. Expect an increase in CPU utilization. Run nodetool…

Re: EC2snitch in AWS

2015-05-27 Thread arun sirimalla
Hi Kaushal, Here is the reference: http://docs.datastax.com/en/cassandra/2.0/cassandra/architecture/architectureSnitchEC2_t.html On Wed, May 27, 2015 at 9:31 AM, Kaushal Shriyan kaushalshri...@gmail.com wrote: Hi, Can somebody please share details about setting up the EC2Snitch in AWS…

Re: Unexpected behavior after successfully adding a new node

2015-05-12 Thread arun sirimalla
Analia, Try running repair on node 3. On Tue, May 12, 2015 at 7:39 AM, Analia Lorenzatto analialorenza...@gmail.com wrote: Hello guys, I have a 2.1.0-2 cluster composed of 3 nodes, with a replication factor of 2. We successfully added the third node last week. After that, we ran cleanups…

Re: Can a Cassandra node accept writes while being repaired

2015-05-07 Thread arun sirimalla
Yes, Cassandra nodes accept writes during repair. Repair also triggers compactions to remove any tombstones. On Thu, May 7, 2015 at 9:31 AM, Khaja, Raziuddin (NIH/NLM/NCBI) [C] raziuddin.kh...@nih.gov wrote: I was not able to find a conclusive answer to this question on the internet, so I am…

Re: calculation of disk size

2015-04-29 Thread arun sirimalla
Hi Rahul, If you are expecting 15 GB of data per day, here is the calculation: 1 day = 15 GB, 1 month = 450 GB, 1 year = 5.4 TB. So your raw data size for one year is 5.4 TB; with a replication factor of 3 it would be around 16.2 TB of data for one year. Taking compaction into consideration and…
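The arithmetic in this reply can be sketched as a short script. The rounding follows the email (1 month = 30 days, roughly 360 days per year, 1 TB = 1000 GB); the 50% compaction headroom at the end is a common rule of thumb for size-tiered compaction, not a figure stated in the thread.

```python
# Hedged sketch of the capacity estimate from this reply: 15 GB/day of raw
# data with replication factor 3. Rounding follows the email; the 2x
# headroom factor is an assumption (size-tiered compaction worst case),
# not a number from the thread.
DAILY_GB = 15
REPLICATION_FACTOR = 3

monthly_gb = DAILY_GB * 30                                  # 450 GB/month
yearly_raw_tb = DAILY_GB * 360 / 1000                       # 5.4 TB/year raw
yearly_replicated_tb = yearly_raw_tb * REPLICATION_FACTOR   # ~16.2 TB/year

# Leave roughly half the disk free so compaction has room to rewrite
# SSTables (assumption: size-tiered compaction worst case).
provisioned_tb = yearly_replicated_tb * 2

print(monthly_gb, round(yearly_raw_tb, 1), round(yearly_replicated_tb, 1))
# -> 450 5.4 16.2
```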

Re: Best Practice to add a node in a Cluster

2015-04-27 Thread arun sirimalla
Hi Neha, After you add the node to the cluster, run nodetool cleanup on all nodes. Next, running repair on each node will replicate the data. Make sure you run repair on one node at a time, because repair is an expensive process (it drives high CPU utilization). On Mon, Apr 27, 2015 at 8:36 PM, Neha…
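The workflow in this reply can be sketched as a short script. This is a dry run under assumptions: node1 through node3 are hypothetical host names, and NODETOOL is set to `echo nodetool` so the commands are only printed; on a real cluster you would point NODETOOL at the actual nodetool binary (with whatever -h/-p flags your setup needs).

```shell
# Dry-run sketch of the add-node workflow described above.
# NODETOOL echoes the commands instead of executing them; swap in the real
# nodetool binary to run for real.
NODETOOL="echo nodetool"

# Hypothetical host names for a 3-node cluster.
NODES="node1 node2 node3"

# Step 1: after the new node has joined, clean up stale ranges everywhere.
for host in $NODES; do
  $NODETOOL -h "$host" cleanup
done

# Step 2: repair sequentially. Repair triggers compactions and is CPU-heavy,
# so never run it on more than one node at a time.
for host in $NODES; do
  $NODETOOL -h "$host" repair
done
```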

High Compactions Pending

2014-09-22 Thread arun sirimalla
I have a 6 (i2.2xlarge) node cluster on AWS with 4.5 DSE running on it. I notice high compaction pending on one of the node around 35. Compaction throughput set to 64 MB and flush writes to 4. Any suggestion is much appreciated. -- Arun Senior Hadoop Engineer Cloudwick Champion of Big Data