Re: Compaction process stuck

2018-07-05 Thread Jeff Jirsa
You probably have a very large partition in that file. Nodetool cfstats will show you the largest compacted partition now - I suspect it's much higher than before. On Thu, Jul 5, 2018 at 9:50 PM, atul atri wrote: > Hi Chris, > > Compaction process finally finished. It took a long time though. > >

Re: Compaction process stuck

2018-07-05 Thread atul atri
Hi Chris, The compaction process finally finished. It took a long time though. Thank you very much for all your help. Please let me know if you have any guidelines to make future compaction processes faster. Thanks & Regards, Atul Atri. On 5 July 2018 at 22:05, atul atri wrote: > Hi Chris, > >

Re: Cassandra Upgrade with Different Protocol Version

2018-07-05 Thread Jeff Jirsa
> On Jul 5, 2018, at 12:45 PM, Anuj Wadehra > wrote: > > Hi, > > I would like to know how people are doing rolling upgrades of Cassandra clusters > when there is a change in native protocol version, say from 2.1 to 3.11. > During a rolling upgrade, if the client application is restarted on nodes,

Re: Cassandra Upgrade with Different Protocol Version

2018-07-05 Thread Jeff Jirsa
There is replication between 2.1 and 3.x, but not hints. You will have to repair past the window, but you should be doing that anyway if you care about tombstones doing the right thing. Read quorum with 2/3 in either version should work fine - if it gives you an error please open a JIRA with

Re: Cassandra Upgrade with Different Protocol Version

2018-07-05 Thread James Shaw
Other concerns: there is no replication between 2.1 and 3.11; writes are stored in hints, and the hints are replayed once the remote node is on the same version. You have to run repair if you go past the hint window. With read quorum 2/3, you will get an error. If you roll back to 2.1, it cannot read the new-version 3.11 data files, but with an online rolling upgrade, some

Re: Cassandra Upgrade with Different Protocol Version

2018-07-05 Thread kooljava2
Hello Anuj, The 2nd workaround should work, as the app will auto-discover all the other nodes. It's the first contact the app makes with a node that determines the protocol version. So if you remove the newer-version nodes from the app configuration after the startup, it will auto-discover the newer
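
For illustration only, the contact-point workaround described above might look roughly like this with the DataStax Java driver 3.x; the addresses are hypothetical, and the point is that only nodes still running the old version are listed, so the protocol version negotiated with the first contacted host is the old one:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class OldNodesAsContactPoints {
        public static void main(String[] args) {
            // Only list nodes that are still on the old Cassandra version; the
            // driver negotiates the native protocol version with the first host
            // it reaches and then auto-discovers the rest of the cluster.
            Cluster cluster = Cluster.builder()
                    .addContactPoints("10.0.0.1", "10.0.0.2")  // hypothetical 2.1 nodes
                    .build();
            Session session = cluster.connect();
            // ... application queries ...
            cluster.close();
        }
    }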

Re: How to query 100 primary keys at once

2018-07-05 Thread Goutham reddy
Thank you, Jeff, for the solution. On Thu, Jul 5, 2018 at 12:01 PM Jeff Jirsa wrote: > Either of those solutions is fine; you just need to consider > throttling/limiting the number of concurrent queries (either in the > application, or on the server side) to avoid timeouts. > > > > On Thu, Jul

Cassandra Upgrade with Different Protocol Version

2018-07-05 Thread Anuj Wadehra
Hi, I would like to know how people are doing rolling upgrades of Cassandra clusters when there is a change in native protocol version, say from 2.1 to 3.11. During a rolling upgrade, if the client application is restarted on nodes, the client driver may first contact an upgraded Cassandra node with v4
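
A related mitigation, not necessarily one of the workarounds referred to in this thread, is to pin the native protocol version in the driver for the duration of the rolling upgrade rather than relying on which node happens to be contacted first. A minimal sketch, assuming the DataStax Java driver 3.x and a hypothetical contact point:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ProtocolVersion;
    import com.datastax.driver.core.Session;

    public class PinnedProtocolVersion {
        public static void main(String[] args) {
            // Force V3 (the highest version the remaining 2.1 nodes speak) so a
            // client restart mid-upgrade cannot negotiate V4 against a 3.11 node.
            Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.0.1")             // hypothetical node
                    .withProtocolVersion(ProtocolVersion.V3)
                    .build();
            Session session = cluster.connect();
            // ... run queries as usual during the upgrade ...
            cluster.close();
        }
    }

Once every node is on 3.11, the explicit version can be removed and the driver will negotiate V4 on its own.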

Jmx_exporter CPU spike

2018-07-05 Thread rajpal reddy
We have a Qualys security scan running that is causing the CPU spike. We are seeing the CPU spike only when JMX metrics are exposed using jmx_exporter. We tried setting up JMX authentication and still see the CPU spike. If I stop using jmx_exporter we don't see any CPU spike. Is there anything we have to tune

CPU Spike with Jmx_exporter

2018-07-05 Thread rajpal reddy
We are seeing the CPU spike only when JMX metrics are exposed using jmx_exporter. We tried setting up JMX authentication and still see the CPU spike. If I stop using jmx_exporter we don’t see any CPU spike. Is there anything we have to tune to make it work with jmx_exporter? > On Jun 14, 2018, at 2:18

Re: How to query 100 primary keys at once

2018-07-05 Thread Jeff Jirsa
Either of those solutions is fine; you just need to consider throttling/limiting the number of concurrent queries (either in the application, or on the server side) to avoid timeouts. On Thu, Jul 5, 2018 at 11:16 AM, Goutham reddy wrote: > Hi users, > Querying multiple primary keys can be
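
As a rough illustration of the client-side throttling mentioned above, here is a sketch assuming the DataStax Java driver 3.x and a hypothetical table my_keyspace.my_table keyed by a single text column id: one async query per partition key, with a cap on how many are in flight at once.

    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.ResultSetFuture;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import java.util.ArrayList;
    import java.util.List;

    public class ThrottledMultiKeyFetch {
        // Cap on concurrent requests; tune to what the cluster can absorb.
        static final int MAX_IN_FLIGHT = 32;

        public static List<Row> fetch(Session session, List<String> keys) {
            PreparedStatement ps = session.prepare(
                    "SELECT * FROM my_keyspace.my_table WHERE id = ?");
            List<Row> rows = new ArrayList<>();
            List<ResultSetFuture> inFlight = new ArrayList<>();
            for (String key : keys) {
                inFlight.add(session.executeAsync(ps.bind(key)));
                if (inFlight.size() >= MAX_IN_FLIGHT) {
                    drain(inFlight, rows);      // wait before issuing more
                }
            }
            drain(inFlight, rows);              // collect the remainder
            return rows;
        }

        private static void drain(List<ResultSetFuture> futures, List<Row> rows) {
            for (ResultSetFuture f : futures) {
                rows.addAll(f.getUninterruptibly().all());
            }
            futures.clear();
        }
    }

Each per-key query is routed to that key's replicas, which spreads the work across coordinators instead of funnelling one large IN list through a single node.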

How to query 100 primary keys at once

2018-07-05 Thread Goutham reddy
Hi users, Querying multiple primary keys can be achieved using the IN operator, but it puts the load on only a single node, which in turn causes READ timeout issues. Calling each primary key asynchronously is also not the right choice for a big partition key. Can anyone suggest the best practice to query it

Re: Compaction process stuck

2018-07-05 Thread atul atri
Hi Chris, Thank you for the reply. I have already tried running "nodetool stop compaction" and it does not help. I have restarted each node in the cluster one by one and compaction starts again. It gets stuck on the same table. Following is the 'nodetool compactionstats' output. It's stuck at *1336035468* for

Re: rebuild on running node

2018-07-05 Thread Hannu Kröger
You just have some extra data on those machines where you ran rebuild. Compaction will eventually take care of that. Nothing really harmful if you have the disk space available. Hannu > Randy Lynn wrote on 5.7.2018 at 19:19: > > Anyone ever make stupid mistakes? :) > > TL/DR: I ran

rebuild on running node

2018-07-05 Thread Randy Lynn
Anyone ever make stupid mistakes? :) TL/DR: I ran rebuild on a node that is already up and running in an existing data center... what happens? This is what I did... Assume I have DC_sydney and am adding DC_sydney_new, but I also have a DC_us. From a node in DC_sydney_new I intended to type "rebuild

Re: Compaction process stuck

2018-07-05 Thread Chris Lohfink
That looks a bit to me like it isn't stuck but is just a long-running compaction. Can you include the output of `nodetool compactionstats` and the `nodetool cfstats` with the schema for the table that's being compacted (redact names if necessary)? You can stop compaction with `nodetool stop COMPACTION`

Re: C* in multiple AWS AZ's

2018-07-05 Thread Randy Lynn
Thanks Alain, wanted to just circle back on all the above. Thanks everyone for your help and input. I'm glad to hear someone else did a site-to-site tunnel with Cassandra between regions. When we were originally setting up, all the docs and information preached public IPs. I can totally understand

Re: [ANNOUNCE] LDAP Authenticator for Cassandra

2018-07-05 Thread DuyHai Doan
Super great, thank you for this contribution, Kurt! On Thu, Jul 5, 2018 at 1:49 PM, kurt greaves wrote: > We've seen a need for an LDAP authentication implementation for Apache > Cassandra, so we've gone ahead and created an open source implementation > (ALv2) utilising the pluggable auth support

[ANNOUNCE] LDAP Authenticator for Cassandra

2018-07-05 Thread kurt greaves
We've seen a need for an LDAP authentication implementation for Apache Cassandra, so we've gone ahead and created an open source implementation (ALv2) utilising the pluggable auth support in C*. Now, I'm positive there are multiple implementations floating around that haven't been open sourced,