Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread Stefano Ortolani
Hi Kurt, sorry, I forgot to specify. I am on 3.0.14. Cheers, Stefano On Wed, Aug 23, 2017 at 12:11 AM, kurt greaves wrote: > What version are you running? 2.2 has an improvement that will retain > levels when streaming and this shouldn't really happen. If you're on 2.1 >

Re: C* 3 node issue -Urgent

2017-08-23 Thread Akhil Mehra
The cqlsh image says bad credentials. Just confirming that you have the correct username/password when logging on. By turning on authentication I am assuming you mean using the PasswordAuthenticator instead of the AllowAllAuthenticator in the yaml. Cheers, Akhil > On 23/08/2017, at 8:59 PM,
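For reference, a minimal sketch of the cassandra.yaml check being discussed here (the file path is an assumption; package installs often use /etc/cassandra/conf instead):

    # Confirm which authenticator/authorizer the node is configured with.
    # PasswordAuthenticator enables username/password logins; AllowAllAuthenticator disables auth.
    grep -E '^(authenticator|authorizer):' /etc/cassandra/cassandra.yaml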

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread kurt greaves
Well, that sucks. I'd be interested to find out whether any of the streamed SSTables are retaining their levels. To answer your questions: 1) No. However, you could set your nodes to join in write_survey mode, which will stop them from joining the ring, and you can initiate the join over JMX when
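A rough sketch of the write_survey approach described here (flag and command names as commonly documented; verify against your Cassandra version before relying on them):

    # Start the bootstrapping node in write-survey mode: it bootstraps and accepts
    # writes, but does not become an active ring member or serve reads.
    cassandra -Dcassandra.write_survey=true
    # (or add the flag to JVM_OPTS in cassandra-env.sh)

    # Later, once compactions have caught up, finalize the join.
    # nodetool join drives the corresponding StorageService operation over JMX.
    nodetool join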

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread Stefano Ortolani
Hi Kurt, 1) You mean restarting the node in the middle of the bootstrap with join_ring=false? This option would require me to issue a nodetool bootstrap resume, correct? I didn't know you could instruct the join via JMX. Would it be the same as the nodetool bootstrap command? 2) Yes, they are

RE: C* 3 node issue -Urgent

2017-08-23 Thread Jonathan Baynes
Yes, I have the correct credentials; I’m using cassandra/cassandra (superuser). To test that theory I tried a different user and got this: Connection error: ('Unable to connect to any servers', {'10.172.115.63': AuthenticationFailed('Failed to authenticate to 10.172.115.63: Error from server:

Re: C* 3 node issue -Urgent

2017-08-23 Thread Akhil Mehra
I am assuming the following guide or similar was followed to add JMX authentication: http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/secureJmxAuthentication.html > On

Re: Cassandra isn't compacting old files

2017-08-23 Thread kurt greaves
Ignore me, I was getting the major compaction for LCS mixed up with STCS. Estimated droppable tombstones tends to be fairly accurate. If your SSTables in level 2 have that many tombstones I'd say that's definitely the reason L3 isn't being compacted. As for how you got here in the first place,
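If you want to check the tombstone estimate yourself, something like the following works (paths are placeholders; sstablemetadata ships in the Cassandra tools directory):

    # Print the droppable-tombstone estimate for each SSTable of a table.
    for f in /var/lib/cassandra/data/<keyspace>/<table>-*/*-Data.db; do
      echo "$f"
      sstablemetadata "$f" | grep -i "droppable tombstones"
    done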

C* 3 node issue -Urgent

2017-08-23 Thread Jonathan Baynes
Hi Everyone. I need the community's help here. This morning I attempted to turn on JMX authentication for nodetool. I've gone into the cassandra-env.sh file and updated the following: LOCAL_JMX=No JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true" JVM_OPTS="$JVM_OPTS
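For context, a typical set of cassandra-env.sh lines for enabling remote JMX authentication looks roughly like this (a generic sketch following the standard documentation, not necessarily the poster's exact file; paths vary by install):

    LOCAL_JMX=no
    JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
    JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"
    JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.access.file=/etc/cassandra/jmxremote.access"
    JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"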

RE: C* 3 node issue -Urgent

2017-08-23 Thread Jonathan Baynes
When trying to connect with cqlsh I get this: “Cannot achieve consistency level QUORUM”. 2 of my 3 nodes are down, so this error is correct. But how do I sign into cqlsh to change this? Or better, how do I get the other 2 nodes back up? From: Akhil Mehra [mailto:akhilme...@gmail.com] Sent: 23

Re: C* 3 node issue -Urgent

2017-08-23 Thread Akhil Mehra
You could try reverting your JMX authentication changes for the time being if getting your nodes up is a priority. At least you will be able to isolate the problem, i.e. whether it is the PasswordAuthenticator or the JMX changes causing it. Cheers, Akhil PS Sorry for the silly questions just

RE: C* 3 node issue -Urgent

2017-08-23 Thread Jonathan Baynes
I will also mention I am on: C* 3.0.11, Linux Oracle Red Hat 7.1, Java 1.8.0.31, Python 2.7. From: Jonathan Baynes Sent: 23 August 2017 09:47 To: 'user@cassandra.apache.org' Cc: Stewart Allman Subject: C* 3 node issue -Urgent Hi Everyone. I need the community's help here. I have attempted this

Re: C* 3 node issue -Urgent

2017-08-23 Thread kurt greaves
The cassandra user requires QUORUM consistency to be achieved for authentication. Normal users only require ONE. I suspect your system_auth keyspace has an RF of 1, and the node that owns the cassandra user's data is down. Steps to recover: 1. Turn off authentication on all the nodes 2. Restart
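For reference, the commands that usually accompany this kind of recovery look like the following (a sketch, not a continuation of Kurt's list; the datacenter name and replication factor are placeholders for your topology):

    # With authentication temporarily disabled (AllowAllAuthenticator), raise the
    # replication of system_auth. Use the DC name shown by `nodetool status`.
    cqlsh -e "ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};"

    # Repair system_auth on every node so role/credential data is fully replicated.
    nodetool repair system_auth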

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread Stefano Ortolani
Hi Kurt, On Wed, Aug 23, 2017 at 11:32 AM, kurt greaves wrote: > > 1) You mean restarting the node in the middle of the bootstrap with >> join_ring=false? Would this option require me to issue a nodetool bootstrap >> resume, correct? I didn't know you could instruct the

Cassandra Setup Question

2017-08-23 Thread Jonathan Baynes
Hi Community, Quick question regarding Replication Factor. In my Production Environment I currently have 6 nodes (this will grow to 10 shortly), over 2 datacentres, so currently 3 in each. We want an Active/Passive setup so the client will only speak to DC 1 via the load balancing policy, but

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread kurt greaves
> 1) You mean restarting the node in the middle of the bootstrap with > join_ring=false? Would this option require me to issue a nodetool bootstrap > resume, correct? I didn't know you could instruct the join via JMX. Would > it be the same as the nodetool bootstrap command? write_survey is

RE: C* 3 node issue -Urgent

2017-08-23 Thread Jonathan Baynes
@Kurt, You have hit the nail on the head. There was indeed an issue with the cassandra-env.sh file, which I fixed by overwriting it with the default file (essentially rolling it back), and that got me back in. The larger issue is with the system_auth keyspace: it has a replication factor of 1 and thus, as stated

Re: C* 3 node issue -Urgent

2017-08-23 Thread kurt greaves
Common trap. It's an unfortunate default that is not so easy to change.

Re: Cassandra Setup Question

2017-08-23 Thread Carlos Rolo
Use NetworkTopologyStrategy as the replication strategy and make sure you have dc1: 3 and dc2: 3. This way you have 3 replicas in each DC. On 23 Aug 2017 12:53, "Jonathan Baynes" wrote: > Hi Community, > > > > Quick question regarding Replication Factor. > > > > In
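A minimal sketch of what that keyspace definition looks like in CQL (keyspace and DC names are placeholders; DC names must match what `nodetool status` reports):

    cqlsh -e "CREATE KEYSPACE my_keyspace WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};"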

Re: Bootstrapping a node fails because of compactions not keeping up

2017-08-23 Thread kurt greaves
> > But if it also streams, it means I'd still be under pressure if I am not > mistaken. I am under the assumption that the compactions are the by-product > of streaming too many SSTables at the same time, and not because of my > current write load. > Ah yeah, I wasn't thinking about the capacity

RE: Restarting an existing node hangs

2017-08-23 Thread Mark Furlong
Cassandra doesn’t exit and continues to run with very little CPU usage shown in top. Thanks Mark 801-705-7115 office From: Jeff Jirsa [mailto:jji...@gmail.com] Sent: Wednesday, August 23, 2017 12:07 PM To: cassandra Subject: Re: Restarting an existing node hangs

Restarting an existing node hangs

2017-08-23 Thread Mark Furlong
I had an existing node go down. I don’t know the cause of this. I am starting Cassandra and I can see in the log that it starts and then hangs on the opening of an sstable. Is there anything I can do to fix the sstable? I’m on OSC 2.1.12. Thanks in advance, Mark Furlong Sr. Database

Re: Restarting an existing node hangs

2017-08-23 Thread Jeff Jirsa
Typically if that sstable is damaged you'd see some sort of message. If you recently changed bloom filter or index intervals for that table, it may be silently rebuilding the other components of that sstable. Does cassandra exit or does it just keep churning away? On Wed, Aug 23, 2017 at 10:20
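One quick way to check for such a silent rebuild (a sketch; the data directory path is an assumption):

    # Recently written Summary/Index/Filter components alongside much older Data
    # files can indicate the node is rebuilding them while it appears to hang.
    ls -lt /var/lib/cassandra/data/<keyspace>/<table>-*/ | head -20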

Re: Restarting an existing node hangs

2017-08-23 Thread Jeff Jirsa
See also: https://issues.apache.org/jira/browse/CASSANDRA-11163 On Wed, Aug 23, 2017 at 11:07 AM, Jeff Jirsa wrote: > Typically if that sstable is damaged you'd see some sort of message. If > you recently changed bloom filter or index intervals for that table, it may > be

Re: Restarting an existing node hangs

2017-08-23 Thread Jeff Jirsa
Use jstack to dump threads, get a heap dump, or turn up your logging (to DEBUG or TRACE) until you can figure out exactly what it's doing. On Wed, Aug 23, 2017 at 11:16 AM, Mark Furlong wrote: > Cassandra doesn’t exit and continues to run with very little CPU usage > shown
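A sketch of those diagnostics as shell commands (pid lookup and output paths are examples; nodetool setlogginglevel is available in recent 2.1+ releases):

    CASSANDRA_PID=$(pgrep -f CassandraDaemon)

    # Thread dump to see what the startup thread is blocked on.
    jstack "$CASSANDRA_PID" > /tmp/cassandra-threads.txt

    # Heap dump for offline analysis (note: this pauses the JVM while dumping).
    jmap -dump:live,format=b,file=/tmp/cassandra-heap.hprof "$CASSANDRA_PID"

    # Raise logging verbosity at runtime.
    nodetool setlogginglevel org.apache.cassandra DEBUG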

configure pooling options to avoid BusyPoolException

2017-08-23 Thread Avi Levi
Hi, I need to execute a large number (millions) of select queries, but I am getting BusyPoolException. How can I avoid that? I tried to configure the pooling options but couldn't see that it had any impact. Any advice? Failed to execute query SELECT * FROM my_table WHERE id = 'some_uuid' AND x >=

Re: Cassandra isn't compacting old files

2017-08-23 Thread Sotirios Delimanolis
These guesses will have to do. I thought something was wrong with such old SSTables. Thanks for your help investigating! On Wednesday, August 23, 2017, 3:09:34 AM PDT, kurt greaves wrote: Ignore me, I was getting the major compaction for LCS mixed up with STCS.

Re: configure pooling options to avoid BusyPoolException

2017-08-23 Thread Akhil Mehra
Since queries are executed asynchronously, you will need some mechanism in your code to queue your requests. Try setting setMaxQueueSize to meet your needs. By default it's 256

RE: Getting all unique keys

2017-08-23 Thread Durity, Sean R
DataStax Enterprise bundles Spark and the Spark connector on the DSE nodes and handles much of the plumbing work (and monitoring, etc.). Worth a look. Sean Durity From: Avi Levi [mailto:a...@indeni.com] Sent: Tuesday, August 22, 2017 2:46 AM To: user@cassandra.apache.org Subject: Re: Getting all