CS process killed by kernel OOM

2017-01-25 Thread Benjamin Roth
Hi there, we installed 2 new nodes in the last few days. They run Ubuntu 16.04.1 LTS with kernel 4.4.0-59-generic. On these nodes (and only on these) CS gets killed by the kernel due to OOM. It seems very strange to me because CS only takes roughly 20GB (out of 128GB); most of the RAM is allocated

Using Hostname in Seed_Provider

2017-01-25 Thread Nethi, Manoj
Hi, according to the documentation, seed_provider: The addresses of hosts designated as contact points in the cluster. A joining node contacts one of the

Re: Expensive to run nodetool status often?

2017-01-25 Thread Brooke Jensen
Have a look in org.apache.cassandra.net:type=FailureDetector *Brooke Jensen* VP Technical Operations & Customer Services www.instaclustr.com | support.instaclustr.com
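
For reference, a minimal sketch of reading that MBean remotely over JMX. The localhost:7199 endpoint (Cassandra's default JMX port) and the SimpleStates attribute name are assumptions based on recent 2.x/3.x FailureDetectorMBean interfaces; verify against your version.

    import java.util.Map;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class FailureDetectorCheck {
        public static void main(String[] args) throws Exception {
            // Cassandra exposes JMX on port 7199 by default; adjust host/port as needed.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url, null)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                ObjectName fd = new ObjectName("org.apache.cassandra.net:type=FailureDetector");
                // SimpleStates maps each endpoint to "UP" or "DOWN" (attribute name assumed;
                // check the FailureDetectorMBean interface shipped with your Cassandra version).
                @SuppressWarnings("unchecked")
                Map<String, String> states =
                        (Map<String, String>) mbs.getAttribute(fd, "SimpleStates");
                states.forEach((endpoint, state) -> System.out.println(endpoint + " -> " + state));
            }
        }
    }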

Re: Expensive to run nodetool status often?

2017-01-25 Thread Xiaolei Li
Thanks for the advice! I do export a lot via JMX already. But I couldn't find the equivalent of the Status column (Up/Down + Normal/Leaving/Joining/Moving) from the status output. Does anyone know if those are available via JMX? Thank you. Best, x.
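
If it helps, the nodetool-style states appear to be derivable from org.apache.cassandra.db:type=StorageService, which (in 2.x/3.x; attribute names assumed, worth verifying against your StorageServiceMBean) exposes LiveNodes, UnreachableNodes, JoiningNodes, LeavingNodes and MovingNodes as lists of endpoints. A rough sketch:

    import java.util.List;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class NodeStateCheck {
        @SuppressWarnings("unchecked")
        private static List<String> nodes(MBeanServerConnection mbs, ObjectName ss, String attr)
                throws Exception {
            // Each of these StorageService attributes is a list of endpoint addresses.
            return (List<String>) mbs.getAttribute(ss, attr);
        }

        public static void main(String[] args) throws Exception {
            // Default Cassandra JMX port; adjust host/port for your cluster.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url, null)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                ObjectName ss = new ObjectName("org.apache.cassandra.db:type=StorageService");
                // Up/Down corresponds to LiveNodes vs UnreachableNodes; the remaining lists
                // cover the Joining/Leaving/Moving part of the Status column.
                System.out.println("Live (U):        " + nodes(mbs, ss, "LiveNodes"));
                System.out.println("Unreachable (D): " + nodes(mbs, ss, "UnreachableNodes"));
                System.out.println("Joining (J):     " + nodes(mbs, ss, "JoiningNodes"));
                System.out.println("Leaving (L):     " + nodes(mbs, ss, "LeavingNodes"));
                System.out.println("Moving (M):      " + nodes(mbs, ss, "MovingNodes"));
            }
        }
    }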

Re: Expensive to run nodetool status often?

2017-01-25 Thread Jonathan Haddad
You're about to walk down an unfortunate path. I strongly recommend getting the information you need for monitoring using JMX. That's actually how nodetool gets all its information. Instead of parsing output, if you use JMX, you'll have access to a *ton* of useful (and some not so useful)
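
To get a sense of what is exposed, a small sketch that lists every MBean registered under Cassandra's domains (assumes the default localhost:7199 JMX endpoint with no authentication):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ListCassandraMBeans {
        public static void main(String[] args) throws Exception {
            // Assumed default JMX endpoint; adjust for remote nodes or authentication.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url, null)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                // Cassandra registers its MBeans in domains starting with org.apache.cassandra.
                for (ObjectName name : mbs.queryNames(new ObjectName("org.apache.cassandra*:*"), null)) {
                    System.out.println(name);
                }
            }
        }
    }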

Expensive to run nodetool status often?

2017-01-25 Thread Xiaolei Li
I'm planning to run "nodetool status -r" on every node every minute, storing the output in a file, and aggregating it somewhere else for monitoring. Is that a good idea? How expensive is it to run status every minute? Best, x.

Cassandra 2.2.8 NoSuchFileException

2017-01-25 Thread Jacob Willoughby
I got cassandra-reaper up and running the other day on our 5-node 2.2.8 cluster. Now every once in a while a node will get the below exception and the node shuts down its CQL port. Is this a bug? Something I am doing wrong? Are there any workarounds? -Jacob Willoughby ERROR