Inviting comments and opinion

2019-05-02 Thread Devopam Mittra
Hi 'Users' :), I just wanted to seek your opinion on this approach, if you could please spare some time for it. https://www.slideshare.net/devopam/cassandra-table-modeling-an-alternate-approach regards Dev

Re: Mixing LWT and normal operations for a partition

2019-05-02 Thread Shaurya Gupta
Hi, *1. The below sequence of commands also does not appear to give the expected output.* Since there is a delete command in the batch and then an LWT update using IF EXISTS, in the final result the row with id = 5 must get deleted. cassandra@cqlsh> select * from demo.tweets; *id* | *body*
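
For reference, a minimal sketch of the kind of batch being described, where a plain DELETE is mixed with an IF EXISTS update on the same partition. The demo.tweets table, its columns, and the replication settings below are assumptions based only on the names visible in the preview, not the poster's exact schema:

```
# Hypothetical repro sketch; schema and values are placeholders.
# All statements in the batch target the same partition (id = 5), which is
# required when a batch contains a conditional (IF EXISTS) statement.
cqlsh -e "
  CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
  CREATE TABLE IF NOT EXISTS demo.tweets (id int PRIMARY KEY, body text);
  INSERT INTO demo.tweets (id, body) VALUES (5, 'original body');
  BEGIN BATCH
    DELETE FROM demo.tweets WHERE id = 5;
    UPDATE demo.tweets SET body = 'updated body' WHERE id = 5 IF EXISTS;
  APPLY BATCH;
  SELECT * FROM demo.tweets;"
```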

Re: CL=LQ, RF=3: Can a Write be Lost If Two Nodes ACK'ing it Die

2019-05-02 Thread Avinash Mandava
Good catch, misread the detail. On Thu, May 2, 2019 at 4:56 PM Ben Slater wrote: > Reading more carefully, it could actually be either way: quorum requires > that a majority of nodes complete and ack the write but still aims to write > to RF nodes (with the last replica either written

Re: CL=LQ, RF=3: Can a Write be Lost If Two Nodes ACK'ing it Die

2019-05-02 Thread Ben Slater
Reading more carefully, it could actually be either way: quorum requires that a majority of nodes complete and ack the write but still aims to write to RF nodes (with the last replica either written immediately or eventually via hints or repairs). So, in the scenario outlined the replica may or

Re: CL=LQ, RF=3: Can a Write be Lost If Two Nodes ACK'ing it Die

2019-05-02 Thread Avinash Mandava
In scenario 2 it's lost: if both nodes die and get replaced entirely, there's no history anywhere that the write ever happened, as it wouldn't be in the commitlog, memtable, or sstable on node 3. Surviving that failure scenario of two nodes with the same data failing simultaneously requires upping CL or
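
For illustration, a sketch of the "upping CL" option mentioned above (keyspace and table names are placeholders, not from the thread): with RF=3 and CONSISTENCY ALL, an acknowledged write has reached every replica, so losing any two of them can no longer drop it. The trade-off is that the write fails outright if any replica is down.

```
# Illustrative only; names are placeholders. The coordinator waits for all
# three replicas before acknowledging the INSERT.
cqlsh -e "
  CREATE KEYSPACE IF NOT EXISTS demo_all
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
  CREATE TABLE IF NOT EXISTS demo_all.kv (id int PRIMARY KEY, val text);
  CONSISTENCY ALL;
  INSERT INTO demo_all.kv (id, val) VALUES (1, 'acked by every replica');"
```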

Re: CL=LQ, RF=3: Can a Write be Lost If Two Nodes ACK'ing it Die

2019-05-02 Thread Ben Slater
In scenario 2, if the row has been written to node 3 it will be replaced on the other nodes via rebuild or repair. --- *Ben Slater*, *Chief Product Officer*

Re: Mixing LWT and normal operations for a partition

2019-05-02 Thread Shaurya Gupta
One suggestion - I think the Cassandra community already has a drive to update the documentation. This could be added to the CQLSH documentation or some other relevant documentation. On Fri, May 3, 2019 at 12:56 AM Shaurya Gupta wrote: > Thanks Jeff. > > On Fri, May 3, 2019 at 12:38 AM Jeff Jirsa

Re: Mixing LWT and normal operations for a partition

2019-05-02 Thread Shaurya Gupta
Thanks Jeff. On Fri, May 3, 2019 at 12:38 AM Jeff Jirsa wrote: > No. Don’t mix LWT and normal writes. > > -- > Jeff Jirsa > > > > On May 2, 2019, at 11:43 AM, Shaurya Gupta > wrote: > > > > Hi, > > > > We are seeing really odd behaviour while trying to delete a row which is > simultaneously being

Re: Mixing LWT and normal operations for a partition

2019-05-02 Thread Jeff Jirsa
No. Don’t mix LWT and normal writes. -- Jeff Jirsa > On May 2, 2019, at 11:43 AM, Shaurya Gupta wrote: > > Hi, > > We are seeing really odd behaviour while trying to delete a row which is > simultaneously being updated in a lightweight transaction. > The delete command succeeds and the LWT

Mixing LWT and normal operations for a partition

2019-05-02 Thread Shaurya Gupta
Hi, We are seeing really odd behaviour while trying to delete a row which is simultaneously being updated in a lightweight transaction. The delete command succeeds and the LWT update fails with a timeout exception, but the next select statement shows that the row still exists. This occurs once
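
A sketch of the direction Jeff's reply above points at; this exact fix is not spelled out in the thread and the table/key are placeholders. The idea is to keep every operation on the contended partition on the Paxos path, making the delete conditional as well rather than racing a plain DELETE against an LWT UPDATE:

```
# Hypothetical sketch, not confirmed by the thread: both statements use
# IF EXISTS, so they serialize through Paxos instead of mixing LWT and
# non-LWT writes on the same partition.
cqlsh -e "UPDATE demo.tweets SET body = 'new body' WHERE id = 5 IF EXISTS;"
cqlsh -e "DELETE FROM demo.tweets WHERE id = 5 IF EXISTS;"
```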

Re: TWCS sstables not dropping even though all data is expired

2019-05-02 Thread Paul Chandler
Hi Mike, It sounds like that record may have been deleted; if that is the case, it would still be shown in this sstable, but the tombstone record for the delete would be in a later sstable. You can use nodetool getsstables to work out which sstables contain the data. I recommend reading The Last
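
A sketch of that suggestion (keyspace, table, key, and file paths are placeholders): find which SSTables hold the partition, then dump each one and look for the tombstone covering the row.

```
# Placeholders throughout; substitute the real keyspace, table and key.
nodetool getsstables my_keyspace my_table some_user_id_value

# For each file reported, dump it and look for a deletion/tombstone entry:
sstabledump /var/lib/cassandra/data/my_keyspace/my_table-*/mc-42-big-Data.db
```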

RE: TWCS sstables not dropping even though all data is expired

2019-05-02 Thread Nick Hatfield
Hi Mike, Have you checked to make sure you’re not a victim of timestamp overlap? From: Mike Torra [mailto:mto...@salesforce.com.INVALID] Sent: Thursday, May 02, 2019 11:09 AM To: user@cassandra.apache.org Subject: Re: TWCS sstables not dropping even though all data is expired I'm pretty stumped
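
One way to check for that (a sketch; the data path is a placeholder and may differ per node): compare each SSTable's min/max timestamps, since an older SSTable whose timestamp range overlaps newer ones can keep fully expired data from being dropped by TWCS.

```
# Placeholders; point this at the table's real data directory.
for f in /var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db; do
  echo "== $f"
  sstablemetadata "$f" | grep -iE 'timestamp|droppable'
done
```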

Re: TWCS sstables not dropping even though all data is expired

2019-05-02 Thread Mike Torra
I'm pretty stumped by this, so here is some more detail if it helps. Here is what the suspicious partition looks like in the `sstabledump` output (some pii etc redacted): ``` { "partition" : { "key" : [ "some_user_id_value", "user_id", "demo-test" ], "position" : 210 },
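
For anyone reproducing that output, a sketch of the command that produces it (file path and key are placeholders; the -k key filter may not be available in every sstabledump build):

```
# Placeholders; dump only the suspicious partition from one SSTable.
sstabledump /var/lib/cassandra/data/my_keyspace/my_table-*/mc-42-big-Data.db \
  -k some_user_id_value
```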

Re: Accidentaly removed SSTables of unneeded data

2019-05-02 Thread Nitan Kainth
You can run nodetool refresh and then sstablescrub to see if there is any corruption. On Thu, May 2, 2019 at 9:53 AM Simon ELBAZ wrote: > Hi, > > I am running Cassandra v2.1 on a 3 node cluster. > > *# yum list installed | grep cassa* > *cassandra21.noarch 2.1.12-1 > @datastax
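
A sketch of that sequence (keyspace and table names are placeholders):

```
# Pick up any SSTable files copied back into the table's data directory:
nodetool refresh my_keyspace my_table

# Offline scrub to detect corruption (run while the node is stopped;
# 'nodetool scrub my_keyspace my_table' is the online alternative):
sstablescrub my_keyspace my_table
```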

Re: Accidentaly removed SSTables of unneeded data

2019-05-02 Thread shalom sagges
Hi Simon, If you haven't done that already, try to drain and restart the node you deleted the data from. Then run the repair again. Regards, On Thu, May 2, 2019 at 5:53 PM Simon ELBAZ wrote: > Hi, > > I am running Cassandra v2.1 on a 3 node cluster. > > *# yum list installed | grep cassa* >
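
A sketch of that sequence (the service name varies by install; the keyspace is a placeholder):

```
nodetool drain                      # flush memtables and stop accepting new writes
sudo systemctl restart cassandra    # restart the node; service name may differ
nodetool repair my_keyspace         # re-run the repair once the node is back up
```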

CL=LQ, RF=3: Can a Write be Lost If Two Nodes ACK'ing it Die

2019-05-02 Thread Fd Habash
C*: 2.2.8, Write CL = LQ, Keyspace RF = 3, three racks. A write is received by node 1 in rack 1 with the above specs. Node 1 (rack 1) & node 2 (rack 2) acknowledge it to the client. Within some unit of time, nodes 1 & 2 die. Either …. - Scenario 1: C* process death: Row did not make it to sstable (it is
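
For reference, a sketch of the setup being described (keyspace, datacenter, and table names are placeholders, not the poster's schema): RF=3 and a write at LOCAL_QUORUM, which is acknowledged once two of the three replicas respond.

```
# Placeholders throughout; illustrates RF=3 + LOCAL_QUORUM only.
cqlsh -e "
  CREATE KEYSPACE IF NOT EXISTS app
    WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
  CREATE TABLE IF NOT EXISTS app.events (id int PRIMARY KEY, payload text);
  CONSISTENCY LOCAL_QUORUM;
  INSERT INTO app.events (id, payload) VALUES (1, 'acked by 2 of 3 replicas');"
```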

Accidentaly removed SSTables of unneeded data

2019-05-02 Thread Simon ELBAZ
Hi, I am running Cassandra v2.1 on a 3 node cluster. # yum list installed | grep cassa cassandra21.noarch 2.1.12-1 @datastax cassandra21-tools.noarch 2.1.12-1 @datastax Unfortunately, I accidentally removed the SSTables (using rm)

Re: Cassandra taking very long to start and server under heavy load

2019-05-02 Thread Evgeny Inberg
Yes, sstables were upgraded on each node. On Thu, 2 May 2019, 13:39 Nick Hatfield wrote: > Just curious, but did you make sure to run the sstable upgrade after you > completed the move from 2.x to 3.x? > > > > *From:* Evgeny Inberg [mailto:evg...@gmail.com] > *Sent:* Thursday, May 02, 2019 1:31 AM >

RE: Cassandra taking very long to start and server under heavy load

2019-05-02 Thread Nick Hatfield
Just curious, but did you make sure to run the sstable upgrade after you completed the move from 2.x to 3.x? From: Evgeny Inberg [mailto:evg...@gmail.com] Sent: Thursday, May 02, 2019 1:31 AM To: user@cassandra.apache.org Subject: Re: Cassandra taking very long to start and server under heavy
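
For reference, a sketch of what that check might look like on each node (the data path is a placeholder, and the old-format name prefixes are the common 2.x ones):

```
# Rewrite any SSTables still in an older on-disk format to the current one:
nodetool upgradesstables

# Old 2.x-format files still on disk (e.g. 'ka-*' or 'la-*' prefixes) would
# suggest the rewrite has not finished:
find /var/lib/cassandra/data -name 'ka-*' -o -name 'la-*'
```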