Re: Quick question to config Prometheus to monitor Cassandra cluster

2017-07-20 Thread Kiran mk
You have to download the Prometheus JMX exporter agent jar and the Cassandra config yaml for it, and mention the JMX port (7199) in that config. Run the agent on a specific port on all the Cassandra nodes. After this, go to your Prometheus server and add a scrape config to pull metrics from all the nodes.
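The steps above can be sketched as follows; the jar path, exporter port (7070), and node hostnames are assumptions for illustration, not details from the thread:

```shell
# Hypothetical sketch of wiring the Prometheus JMX exporter into Cassandra.
# 1. On each Cassandra node, attach the exporter as a Java agent
#    (e.g. in cassandra-env.sh; paths and port are examples):
#    JVM_OPTS="$JVM_OPTS -javaagent:/opt/jmx_prometheus_javaagent.jar=7070:/opt/jmx_cassandra.yml"
# 2. On the Prometheus server, add a scrape job for all the nodes:
cat <<'EOF' >> prometheus.yml
scrape_configs:
  - job_name: 'cassandra'
    static_configs:
      - targets: ['cassandra-node1:7070', 'cassandra-node2:7070']
EOF
```

Prometheus then polls each node's exporter port on its scrape interval; nothing needs to be pushed from the Cassandra side.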

secondary index use case

2017-07-20 Thread Micha
Hi, even after reading much about secondary index usage I'm not sure if I have the correct use case for it. My table will contain about 150'000'000 records (each about 2KB of data). There are two uuids used to identify a row. One uuid is unique for each row, the other uuid is something like a

Re: secondary index use case

2017-07-20 Thread Vladimir Yudovin
Hi, You didn't mention your C* version, but starting from 3.4 SASI indexes are available. You can try it with SPARSE option, as uuid corresponds to only one row. Best regards, Vladimir Yudovin, Winguzone - Cloud Cassandra Hosting On Thu, 20 Jul 2017 05:21:31 -0400 Micha
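The suggestion above might look like this in CQL; the table and column names are illustrative, not taken from the thread, and SPARSE mode assumes each indexed value maps to very few rows:

```sql
-- Hypothetical table for the use case: one unique uuid, one second uuid
CREATE TABLE records (
    id uuid PRIMARY KEY,   -- unique per row
    ref_id uuid,           -- the second uuid being considered for indexing
    payload blob
);

-- SASI index (Cassandra 3.4+); SPARSE is meant for columns where
-- each value corresponds to a small number of rows
CREATE CUSTOM INDEX records_ref_idx ON records (ref_id)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = {'mode': 'SPARSE'};
```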

RE: Cassandra 2.2.6 Fails to Boot Up correctly - JNA Class

2017-07-20 Thread William Boutin
Thank you for your help. We have been using jna-4.0.0.jar since using Cassandra 2.2(.6). Until last week, we had no issues. Now, we are experiencing the exception that I identified. We only have jna-4.0.0.jar loaded on our machines and the CLASSPATH that Cassandra builds only uses the jar from

Quick question to config Prometheus to monitor Cassandra cluster

2017-07-20 Thread wxn...@zjqunshuo.com
Hi, I'm going to set up Prometheus+Grafana to monitor a Cassandra cluster. I installed Prometheus and started it, but don't know how to configure it to support Cassandra. Any ideas or related articles are appreciated. Cheers, Simon

Re: MUTATION messages were dropped in last 5000 ms for cross node timeout

2017-07-20 Thread Anuj Wadehra
Hi Asad, You can do the following things: 1. Increase memtable_flush_writers, especially if you have a write-heavy load. 2. Make sure there are no big GC pauses on your nodes. If there are, go for heap tuning. Please let us know whether the above measures fixed your problem or not. Thanks, Anuj Sent from
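The first tuning knob lives in cassandra.yaml; the value shown is only illustrative, since the right number depends on disk count and write load:

```yaml
# cassandra.yaml -- illustrative value; tune for your hardware.
# More flush writers help when memtables back up under heavy writes.
memtable_flush_writers: 4
```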

Re: MUTATION messages were dropped in last 5000 ms for cross node timeout

2017-07-20 Thread Subroto Barua
In a cloud environment, cross_node_timeout = true can cause issues; we had this issue in our environment and it is set to false now. Dropped messages are a separate issue. Subroto > On Jul 20, 2017, at 8:27 AM, ZAIDI, ASAD A wrote: > > Hello Folks – > > I’m using
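The toggle being discussed is a cassandra.yaml setting. When true, a replica uses the coordinator's send timestamp to decide whether a request has already timed out, so clock skew between VMs (common in cloud environments) can make healthy requests look expired:

```yaml
# cassandra.yaml -- only enable with tightly NTP-synchronized clocks;
# false (as Subroto settled on) measures timeouts locally instead.
cross_node_timeout: false
```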

Re: write time for nulls is not consistent

2017-07-20 Thread Jeff Jirsa
On 2017-07-20 08:17 (-0700), Nitan Kainth wrote: > Jeff, > > It is really strange, look at below log, I inserted your data and then few > additional; finally, the issue is reproduced: > .. > (6 rows) > cqlsh> insert into test.t(a) values('b'); > cqlsh> select a,b,

Re: write time for nulls is not consistent

2017-07-20 Thread Nitan Kainth
Jeff, It is really strange, look at below log, I inserted your data and then few additional; finally, the issue is reproduced: [cqlsh 5.0.1 | Cassandra 3.0.10.1443 | DSE 5.0.4 | CQL spec 3.4.0 | Native protocol v4] Use HELP for help. cqlsh> CREATE KEYSPACE test WITH replication =
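A minimal version of the reproduction being discussed might look like this in cqlsh; the schema and values are assumed from the truncated listing, not copied from it:

```sql
CREATE KEYSPACE test WITH replication =
    {'class': 'SimpleStrategy', 'replication_factor': 1};
CREATE TABLE test.t (a text PRIMARY KEY, b text);

-- b is never written, so WRITETIME(b) has no cell to report on;
-- the thread is about when that surfaces as null vs. a value
INSERT INTO test.t (a) VALUES ('b');
SELECT a, b, WRITETIME(b) FROM test.t;
```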

MUTATION messages were dropped in last 5000 ms for cross node timeout

2017-07-20 Thread ZAIDI, ASAD A
Hello Folks - I'm using apache-cassandra 2.2.8. I see many messages like the below in my system.log file. In the cassandra.yaml file, cross_node_timeout: true is set, and an NTP server is also running, correcting clock drift on the 16-node cluster. I do not see pending or blocked HintedHandoff in tpstats

Re: Quick question to config Prometheus to monitor Cassandra cluster

2017-07-20 Thread Petrus Gomes
I use the same environment. Here are a few links. This one is the best for connecting Cassandra and Prometheus: https://www.robustperception.io/monitoring-cassandra-with-prometheus/ JMX agent: https://github.com/nabto/cassandra-prometheus

Re: Quick question to config Prometheus to monitor Cassandra cluster

2017-07-20 Thread wxn...@zjqunshuo.com
Petrus & Kiran, Thank you for the guide and suggestions. I will have a try. Cheers, Simon From: Petrus Gomes Date: 2017-07-21 00:45 To: user Subject: Re: Quick question to config Prometheus to monitor Cassandra cluster I use the same environment. Follow a few links: Use this link, is the best

Multi datacenter node loss

2017-07-20 Thread Roger Warner
Hi, I’m a little dim on what multi-datacenter implies in the 1-replica case. I know about replica recovery; what about “node recovery”? As I understand it, if there is a node failure or disk crash in a single-node cluster with replication factor 1, I lose data. Easy. nodetool tells me each node

Re: MUTATION messages were dropped in last 5000 ms for cross node timeout

2017-07-20 Thread Akhil Mehra
Hi Asad, http://cassandra.apache.org/doc/latest/faq/index.html#why-message-dropped As mentioned in the link above, this is a load-shedding mechanism used by Cassandra. Is your cluster under heavy load? Regards, Akhil

Re: RE: Cassandra 2.2.6 Fails to Boot Up correctly - JNA Class

2017-07-20 Thread Jeff Jirsa
So what precisely changed? You've got a custom build based on the jar name, which is perfectly reasonable, but what upgrade did you do? 2.2.5 to 2.2.6 ? Any other changes? On 2017-07-20 05:41 (-0700), William Boutin wrote: > Thank you for your help. > We have

Re: Multi datacenter node loss

2017-07-20 Thread Michael Shuler
Datacenter replication is defined in the keyspace schema, so I believe that ... WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 1, 'DC2': 1} ... you ought to be able to repair DC1 from DC2, once you have the DC1 node healthy again. If using the SimpleStrategy replication
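Written out in full, the keyspace definition Michael is describing would be something like the following; the keyspace name and datacenter names are placeholders:

```sql
-- One replica per datacenter; DC names must match the snitch's names
CREATE KEYSPACE myks WITH replication =
    {'class': 'NetworkTopologyStrategy', 'DC1': 1, 'DC2': 1};
```

With that in place, once the failed DC1 node is healthy again, running `nodetool repair` on it can stream the missing data back from DC2's replica.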