Re: Traffic inconsistent across nodes

2016-04-13 Thread Anishek Agarwal
Here is the output; every node in a single DC is in the same rack.

Datacenter: WDC5
Status=Up/Down |/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID  Rack
UN  10.125.138.33  299.22 GB  256     64.2%

Re: Balancing tokens over 2 datacenter

2016-04-13 Thread Jeff Jirsa
100% ownership on all nodes isn’t wrong with 3 nodes in each of 2 DCs with RF=3 in both of those DCs. That’s exactly what you’d expect it to be, and a perfectly viable production config for many workloads.
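
For illustration, a keyspace replicated 3 ways into each of the two DCs (as the schema quoted later in this thread is) gives that 100% effective ownership. A minimal sketch with the DataStax Java driver; the contact point and keyspace name are placeholders:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class CreateKeyspaceExample {
        public static void main(String[] args) {
            // Contact point is a placeholder for any node in the cluster.
            Cluster cluster = Cluster.builder().addContactPoint("10.0.0.1").build();
            Session session = cluster.connect();

            // RF=3 in each of the two DCs: with only 3 nodes per DC, every node
            // stores a full replica, so "Owns (effective)" reads 100%.
            session.execute("CREATE KEYSPACE IF NOT EXISTS my_ks WITH replication = "
                    + "{'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3'}");

            cluster.close();
        }
    }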

RE: Leak Detected while bootstrap

2016-04-13 Thread Anubhav Kale
Thanks, Updated with logs.

Re: Leak Detected while bootstrap

2016-04-13 Thread Tyler Hobbs
This looks like it might be https://issues.apache.org/jira/browse/CASSANDRA-11374. Can you comment on that ticket and share your logs leading up to the error?

Re: Compaction Error When upgrading from 2.1.9 to 3.0.2

2016-04-13 Thread Tyler Hobbs
Can you open a ticket here with your schema and the stacktrace? https://issues.apache.org/jira/browse/CASSANDRA I'm also curious why you're not upgrading to 3.0.5 instead of 3.0.2.

Compaction Error When upgrading from 2.1.9 to 3.0.2

2016-04-13 Thread Anthony Verslues
I got this compaction error when running 'nodetool upgradesstables -a' while upgrading from 2.1.9 to 3.0.2. According to the documentation this upgrade should work. Would upgrading to another intermediate version help? This is the line number:

RE: Set up authentication on a live production cluster

2016-04-13 Thread SEAN_R_DURITY
Do the clients already send the credentials? That is the first thing to address. Setting up a cluster for authentication (and authorization) requires a restart with the properties turned on in cassandra.yaml. However, the actual keyspace (system_auth) and tables are not created until the last
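
The first check, then, is whether the clients already pass credentials. A minimal sketch with the DataStax Java driver (contact point and account are hypothetical); credentials supplied this way are simply not used until authentication is enforced in cassandra.yaml, so clients configured like this keep working across the switch:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class AuthReadyClientExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.0.1")                 // placeholder node
                    .withCredentials("app_user", "app_password") // hypothetical account
                    .build();
            Session session = cluster.connect();
            // A trivial query to confirm connectivity.
            System.out.println(session.execute("SELECT release_version FROM system.local").one());
            cluster.close();
        }
    }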

[RELEASE] Apache Cassandra 3.5 released

2016-04-13 Thread Jake Luciani
The Cassandra team is pleased to announce the release of Apache Cassandra version 3.5. Apache Cassandra is a fully distributed database. It is the right choice when you need scalability and high availability without compromising performance. http://cassandra.apache.org/ Downloads of source and

Leak Detected while bootstrap

2016-04-13 Thread Anubhav Kale
Hello, Since we upgraded to Cassandra 2.1.12, we are noticing that the error below happens when we are trying to bootstrap nodes, and the process just gets stuck. Restarting the process / VM does not help. Our nodes are around 300 GB and run on local SSDs, and we haven't seen this problem on older

Re: Cassandra Golang Driver and Support

2016-04-13 Thread Bryan Cheng
Hi Yawei, While you're right that there's no first-party driver, we've had good luck using gocql (https://github.com/gocql/gocql) in production at moderate scale. What features in particular are you looking for that are missing? --Bryan

Re: Balancing tokens over 2 datacenter

2016-04-13 Thread Walsh, Stephen
Right again, Alain. We use the DCAwareRoundRobinPolicy in our Java DataStax driver in each DC's application to point to that Cassandra DC.
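
As a rough sketch of that setup with the Java driver (the contact point is a placeholder; the DC names match the ones used elsewhere in this thread), each application pins itself to its local datacenter:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
    import com.datastax.driver.core.policies.TokenAwarePolicy;

    public class LocalDcClientExample {
        public static void main(String[] args) {
            // The application running in DC1 names DC1 as its local DC;
            // the DC2 application would pass "DC2" instead.
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.0.1")  // placeholder: a node in the local DC
                    .withLoadBalancingPolicy(
                            new TokenAwarePolicy(new DCAwareRoundRobinPolicy("DC1")))
                    .build();
            cluster.connect();
            cluster.close();
        }
    }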

Re: Balancing tokens over 2 datacenter

2016-04-13 Thread Alain RODRIGUEZ
Steve, this cluster looks just great. "Now, due to a misconfiguration in our application, we saw that our applications in both DCs were pointing to DC1." This is the only thing to solve, and it happens in the client-side configuration. What client do you use? Are you using something like

Re: Balancing tokens over 2 datacenter

2016-04-13 Thread Walsh, Stephen
Thanks for your help, guys. As you guessed, our schema is {'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3'} AND durable_writes = false; Our reads and writes are on LOCAL_ONE, with each application (now) using its own DC as its preferred DC. Here is the nodetool status for one of our

Re: Balancing tokens over 2 datacenter

2016-04-13 Thread Alain RODRIGUEZ
Hi Steve, "As such, all keyspaces and tables were created on DC1. The effect of this is that all reads are now going to DC1 and ignoring DC2." I think this is not exactly true. When tables are created, they are created in a specific keyspace, no matter where you send the alter schema

Re: Creation of Async Datacenter for Monitoring purposes and Engineering Services purpose

2016-04-13 Thread Alain RODRIGUEZ
"Live data delay of 1-2 hours is acceptable. It is essential that replication to this DC not impact the other 2 data centers." Well, you can have immediate replication with no impact on the other DCs. Basically, set your clients to use LOCAL_ONE/QUORUM and specify a DC-aware policy on the
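
A hedged sketch of that client-side setting with the Java driver (contact point and DC name are placeholders): LOCAL_* consistency levels only wait on replicas in the driver's local DC, and the DC-aware policy keeps coordinator traffic away from the audit DC, so the new DC only receives background replication.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.QueryOptions;
    import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

    public class LiveDcOnlyClientExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.0.1")  // placeholder: a node in the live DC
                    // LOCAL_QUORUM (or LOCAL_ONE) never waits on remote replicas.
                    .withQueryOptions(new QueryOptions()
                            .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM))
                    // Route all requests to the live DC only.
                    .withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("DC1"))
                    .build();
            cluster.connect();
            cluster.close();
        }
    }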

Re: Balancing tokens over 2 datacenter

2016-04-13 Thread Bhuvan Rawal
This could be because of the way you have configured the policy; have a look at the links below for configuring it: https://datastax.github.io/python-driver/api/cassandra/policies.html http://stackoverflow.com/questions/22813045/ability-to-write-to-a-particular-cassandra-node Regards,

Re: C* 1.2.x vs Gossip marking DOWN/UP

2016-04-13 Thread Alain RODRIGUEZ
Hi Michael, I had critical issues using 1.2 (.11, I believe) around gossip (but that was about 2 years ago...). Are you using the latest minor version, C* 1.2.19? If not, you probably should go there asap. A lot of issues like this one https://issues.apache.org/jira/browse/CASSANDRA-6297 have been

Balancing tokens over 2 datacenter

2016-04-13 Thread Walsh, Stephen
Hi there, So we have 2 datacenters with 3 nodes each. The replication factor is 3 per DC (so each node has all the data). We have an application in each DC that writes to that Cassandra DC. Now, due to a misconfiguration in our application, we saw that our applications in both DCs were pointing to DC1.

Creation of Async Datacenter for Monitoring purposes and Engineering Services purpose

2016-04-13 Thread Bhuvan Rawal
Hi All, We have 2 running datacenters at physically separate sites with 3 nodes each. There is a requirement for an audit DC for issuing queries which will not be concerned with live application traffic. A live data delay of 1-2 hours is acceptable. It is essential that replication to this DC not

C* 1.2.x vs Gossip marking DOWN/UP

2016-04-13 Thread Michael Fong
Hi all, We have been running a 4-node Cassandra cluster (C* 1.2.x) where one node marked all the other 3 nodes DOWN, then marked them back UP a few seconds later. A compaction had kicked in a minute before, roughly 10 MB in size, followed by the marking of all the other nodes as DOWN. In the other

Set up authentication on a live production cluster

2016-04-13 Thread Vigneshwaran
Hi, I have a 16-node cluster (8 per DC; C* 2.2.4) up and running in our production setup. We use the DataStax Java driver 2.1.8. I would like to set up authentication and authorization in the cluster without breaking the live clients. From the references I found by googling, I can set up