Is LWT on data mutated by a non-LWT operation valid?

2018-03-23 Thread Hiroyuki Yamada
Hi all, I have a question about LWT. I am wondering whether LWT works only on data mutated by LWT. In other words, is doing LWT on data mutated by non-LWT operations still valid? I don't fully understand how the system.paxos table works in LWT, but row_key should be empty for a data
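To make the question concrete, here is a toy model of LWT's compare-and-set semantics (a sketch only, not of Cassandra's Paxos internals): the IF condition is evaluated against the row's current value, regardless of whether that value was written by an LWT or by a plain write.

```python
# Toy model of INSERT ... IF NOT EXISTS semantics. A dict stands in for a
# table; this does NOT model Paxos, only the visible compare-and-set behavior.
def insert_if_not_exists(table, key, value):
    """Return (applied, current_value), like an LWT result row."""
    if key in table:
        return False, table[key]
    table[key] = value
    return True, value

rows = {}
rows["k1"] = "plain-write"            # simulates a non-LWT mutation
applied, current = insert_if_not_exists(rows, "k1", "lwt-write")
# applied is False: the condition still sees the value written without LWT
```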

Re: Listening to Cassandra on IPv4 and IPv6 at the same time

2018-03-23 Thread Niclas Hedhman
Hi, FQDN = Fully Qualified Domain Name; AAAA = IPv6 address record in DNS; A = IPv4 address record in DNS. He is saying that by making resolution of the domain name (e.g. cass1.acme.com) return only an IPv4 address, and never an IPv6 address, the issue that Cassandra only binds to

Re: Listening to Cassandra on IPv4 and IPv6 at the same time

2018-03-23 Thread Goutham reddy
Sudheer, seems interesting. Can you please elaborate on what FQDN is and where to remove the mapping? Appreciate your help. Thanks and Regards, Goutham On Fri, Mar 23, 2018 at 2:34 PM sudheer k wrote: > I found a solution for this. As Cassandra can’t bind to two

Re: Update to C* 3.0.14 from 3.0.10

2018-03-23 Thread Nitan Kainth
Thank you for your replies so far. We are just going to .14 because our repair is consuming CPU, and our management always wants to stay a couple of versions behind for stability reasons. Sent from my iPhone > On Mar 23, 2018, at 4:50 PM, Jeff Jirsa wrote: > > Why .14? I

Re: Update to C* 3.0.14 from 3.0.10

2018-03-23 Thread Jeff Jirsa
Why .14? I would consider 3.0.16 to be production worthy. -- Jeff Jirsa > On Mar 23, 2018, at 2:01 PM, Nitan Kainth wrote: > > Hi All, > > Our repairs are consuming CPU and some research shows that moving to 3.0.14 > will help us fix them. I just want to know

Re: Update to C* 3.0.14 from 3.0.10

2018-03-23 Thread Jonathan Haddad
3.0.16 is the latest, I recommend going all the way up. About a hundred bug fixes: https://github.com/apache/cassandra/blob/cassandra-3.0/CHANGES.txt Jon On Fri, Mar 23, 2018 at 2:22 PM Dmitry Saprykin wrote: > Hi, > > I successfully used 3.0.14 more than a year in

Re: Listening to Cassandra on IPv4 and IPv6 at the same time

2018-03-23 Thread sudheer k
I found a solution for this. As Cassandra can’t bind to two addresses at a time according to the comments in the cassandra.yaml file, we removed the AAAA (IPv6) mapping for the FQDN and kept only the A (IPv4) mapping. So the FQDN always resolves to IPv4 and we can use the FQDN in the application configuration while
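The effect of keeping only an A record can be sketched on the client side as well: restrict resolution to the IPv4 family so no IPv6 address is ever returned. Here "localhost" stands in for the real FQDN (e.g. cass1.acme.com) and 9042 for the native transport port.

```python
import socket

# Resolve a name but accept only IPv4 (A-record) results, mirroring the
# "FQDN resolves to IPv4 only" approach. Hostname and port are placeholders.
infos = socket.getaddrinfo("localhost", 9042,
                           family=socket.AF_INET,
                           type=socket.SOCK_STREAM)
ipv4_addrs = sorted({sockaddr[0] for *_, sockaddr in infos})
# every address is dotted-quad IPv4; no IPv6 results can appear
```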

Re: Update to C* 3.0.14 from 3.0.10

2018-03-23 Thread Dmitry Saprykin
Hi, I have successfully used 3.0.14 for more than a year in production. Moreover, 3.0.10 is definitely not stable and you need to upgrade ASAP: 3.0.10 contains a known bug which corrupts data during schema changes. Regards, Dmitrii On Fri, Mar 23, 2018 at 5:01 PM Nitan Kainth

Update to C* 3.0.14 from 3.0.10

2018-03-23 Thread Nitan Kainth
Hi All, Our repairs are consuming CPU and some research shows that moving to 3.0.14 will help us fix them. I just want to know the community's experience with the 3.0.14 version. Is it stable? Has anybody had any issues after upgrading to it? Regards, Nitan K.

Re: Using Spark to delete from Transactional Cluster

2018-03-23 Thread Charulata Sharma (charshar)
Yes, agree on “let really old data expire”. However, I could not find a way to TTL an entire row; only columns can be TTLed. Charu From: Rahul Singh Reply-To: "user@cassandra.apache.org" Date: Friday, March 23, 2018 at 1:45 PM To:
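One point worth noting on "TTL an entire row": a TTL supplied at write time with USING TTL applies to every column the statement writes, so a full-row INSERT with USING TTL expires the whole row together. A minimal sketch, with hypothetical keyspace, table, and column names:

```python
# Build a CQL INSERT whose TTL covers all columns written by the statement.
# The schema (txn.events, id/day/payload) is an assumption for illustration.
ttl_seconds = 90 * 24 * 3600          # expire after 90 days
insert_cql = (
    "INSERT INTO txn.events (id, day, payload) "
    f"VALUES (?, ?, ?) USING TTL {ttl_seconds}"
)
```

Rows not written in one statement (or later partial updates) get their own TTLs per column, which is where the "only columns can be TTLed" observation comes from.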

Re: Using Spark to delete from Transactional Cluster

2018-03-23 Thread Rahul Singh
I think there are better ways to leverage parallel processing than to use it to delete data. As I said, it works for one of my projects for the exact same reason you stated: business rules. Deleting data is an old way of thinking. Why not store the data and just use the relevant data .. let

Re: Using Spark to delete from Transactional Cluster

2018-03-23 Thread Charulata Sharma (charshar)
Yes, essentially it’s the same, but from a code-complexity perspective, writing it in Spark is more compact and execution is super fast. Spark uses the Cassandra connector, so the question was mostly about whether there is any issue with that; also, with Spark we will be deleting on analytics nodes, which

Re: Nodes unresponsive after upgrade 3.9 -> 3.11.2

2018-03-23 Thread Nitan Kainth
Martin, would you please share the settings you had before and what you changed? We have a similar issue. > On Mar 23, 2018, at 8:47 AM, Martin Mačura wrote: > > Nevermind, we resolved the issue JVM heap settings were misconfigured > > Martin > >> On Fri, Mar 23, 2018

Re: Using Spark to delete from Transactional Cluster

2018-03-23 Thread Nitan Kainth
We use Spark to do the same, because our partition contains data for a whole year and we delete one day at a time. C* does not allow us to delete without using the partition key. I know it’s the wrong data model, but we can’t change it, for the obvious reason that it would require a whole application redesign. Sent from my iPhone >
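The per-day deletion scheme described above can be sketched as generating one DELETE per day inside the year partition. The schema names (metrics.events, year as partition key, day as clustering column) are assumptions for illustration.

```python
from datetime import date, timedelta

# Generate one DELETE statement per day within a year partition.
# In the real job these would be executed in parallel by Spark workers.
def daily_delete_statements(year, start_day, n_days):
    stmts = []
    d = start_day
    for _ in range(n_days):
        stmts.append(
            f"DELETE FROM metrics.events "
            f"WHERE year = {year} AND day = '{d.isoformat()}'"
        )
        d += timedelta(days=1)
    return stmts

stmts = daily_delete_statements(2018, date(2018, 1, 1), 3)
```

Note each statement still carries the partition key, as required by Cassandra; Spark only parallelizes issuing them.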

Re: Using Spark to delete from Transactional Cluster

2018-03-23 Thread Jonathan Haddad
I'm confused as to what the difference is between deleting with prepared statements and deleting through Spark. To the best of my knowledge it's the same thing either way: a normal deletion, with tombstones replicated. Is it that you're doing deletes in the analytics DC instead of your real-time

Re: Using Spark to delete from Transactional Cluster

2018-03-23 Thread Charulata Sharma (charshar)
Hi Rahul, thanks for your answer. Why do you say that deleting from Spark is not elegant? This is the exact feedback I want: basically, why is it not elegant? I can delete either with prepared DELETE statements or through Spark. The TTL approach doesn’t work for us because, first of all, ttl

Re: Understanding Blocked and All Time Blocked columns in tpstats

2018-03-23 Thread John Sanda
We do small inserts. For a modest-size environment we do about 90,000 inserts every 30 seconds. For a larger environment, we could be doing 300,000 or more inserts every 30 seconds. In earlier versions of the project, each insert was a separate request, as each insert targets a different partition.
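The figures above work out to these sustained write rates:

```python
# Back-of-the-envelope insert rates from the numbers quoted above
modest_rate = 90_000 / 30           # inserts per second, modest environment
large_rate = 300_000 / 30           # inserts per second, larger environment
```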

Re: Understanding Blocked and All Time Blocked columns in tpstats

2018-03-23 Thread Chris Lohfink
Increasing the queue would increase the number of requests waiting. It could make GCs worse if the requests are large INSERTs, but for a lot of super-tiny queries it helps to increase the queue size (to a point). You might want to look into what queries are being made and how, since there are possibly

Re: Understanding Blocked and All Time Blocked columns in tpstats

2018-03-23 Thread John Sanda
Thanks for the explanation. In the past when I have run into problems related to CASSANDRA-11363, I have increased the queue size via the cassandra.max_queued_native_transport_requests system property. If I find that the queue is frequently at capacity, would that be an indicator that the node is

Re: Understanding Blocked and All Time Blocked columns in tpstats

2018-03-23 Thread Chris Lohfink
It blocks the caller attempting to add the task until there's room in the queue, applying back pressure. It does not reject it. It mimics the pre-SEP behavior of DebuggableThreadPoolExecutor's RejectedExecutionHandler that the other thread pools use (except on sampling/trace, which just throw
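The blocking-instead-of-rejecting behavior described above can be sketched with a bounded queue whose put() blocks the producer until a consumer frees a slot. This is an illustration of the back-pressure idea, not Cassandra's SEP executor.

```python
import queue
import threading
import time

# A bounded queue: once full, put() blocks the caller (back pressure)
# rather than raising or dropping the task.
q = queue.Queue(maxsize=2)
q.put("req-1")
q.put("req-2")                      # queue is now at capacity

def consumer():
    time.sleep(0.2)                 # simulate a busy executor
    q.get()                         # freeing one slot unblocks the producer

t = threading.Thread(target=consumer)
t.start()
start = time.monotonic()
q.put("req-3")                      # blocks here until the consumer drains one
blocked_for = time.monotonic() - start
t.join()
```

A rejecting policy would instead use q.put_nowait() and handle queue.Full, which is the behavior this executor deliberately avoids.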

Re: Cassandra 2.1 on Xenial

2018-03-23 Thread Cyril Scetbon
Hmm, interesting! So you suspect that cassandra-stress tries to use the Thrift protocol before actually using the native protocol, right? I might check what the difference is between cassandra-stress 3.1 and cassandra-stress 2.1 when I have some time. Thanks. > On Mar 23, 2018, at 10:43

Re: Cassandra 2.1 on Xenial

2018-03-23 Thread Michael Shuler
I downloaded the 3.0.16 tar to /tmp on the same host as my 2.1 node was running (without thrift), and this worked for me: ./tools/bin/cassandra-stress write n=1 -mode native cql3 protocolVersion=3 Michael On 03/23/2018 09:30 AM, Michael Shuler wrote: > Well, now I'm a little stumped. I

Re: Cassandra 2.1 on Xenial

2018-03-23 Thread Michael Shuler
Well, now I'm a little stumped. I tried native mode with Thrift enabled, wrote one row so the schema was created, then set start_rpc: false, restarted C*, and native mode fails in the same way. So it's not just the schema-creation phase. I also tried including -port native=9042 and -schema

Re: Nodes unresponsive after upgrade 3.9 -> 3.11.2

2018-03-23 Thread Martin Mačura
Nevermind, we resolved the issue; the JVM heap settings were misconfigured. Martin On Fri, Mar 23, 2018 at 1:18 PM, Martin Mačura wrote: > Hi all, > > We have a cluster of 3 nodes with RF 3 that ran fine until we upgraded > it to 3.11.2. > > Each node has 32 GB RAM, 8 GB

Re: disable compaction in bootstrap process

2018-03-23 Thread Peng Xiao
Many thanks Alain for the thorough explanation; we will not disable compaction for now. Thanks, Peng Xiao

Re: Cassandra 2.1 on Xenial

2018-03-23 Thread Cyril Scetbon
Here is the command I use: cassandra-stress user profile=cass_insert_bac.yaml ops\(insert=1\) -mode native cql3 user=cassandra password=cassandra -rate threads=1. Thrift is disabled (start_rpc: false), as I’m not supposed to use Thrift at all. But I was surprised by

Re: disable compaction in bootstrap process

2018-03-23 Thread Alain RODRIGUEZ
> > I mean to disable compaction in the bootstrapping process, then enable it > after the bootstrapping. That's how I understood it :-). Bootstrap can take a relatively long time and could affect all the nodes when using vnodes. Disabling compactions for hours is risky, even more so if the cluster

Nodes unresponsive after upgrade 3.9 -> 3.11.2

2018-03-23 Thread Martin Mačura
Hi all, We have a cluster of 3 nodes with RF 3 that ran fine until we upgraded it to 3.11.2. Each node has 32 GB RAM, 8 GB Cassandra heap size. After the upgrade, clients started reporting connection issues: cassandra | [ERROR] Closing established connection pool to host because of the