Re: Cassandra collection tombstones

2019-01-28 Thread Jeff Jirsa
The issue in 14861 doesn’t manifest itself in the data file (so you won’t see it in the sstable json), it’s in the min/max clustering of the metadata used in the read path. -- Jeff Jirsa > On Jan 28, 2019, at 7:08 AM, Ahmed Eljami wrote: > > Hi Alain, > > Just to confirm, range

Re: Cassandra collection tombstones

2019-01-28 Thread Ahmed Eljami
Hi Alain, just to confirm: the range tombstones we are talking about here are not related to this Jira, https://issues.apache.org/jira/browse/CASSANDRA-14861 ? Thanks a lot.

Re: Cassandra collection tombstones

2019-01-28 Thread Alain RODRIGUEZ
Hello, @Chris, I mostly agree with you. I will try to make clear what I had in mind, as it obviously was not well expressed. > it doesn't matter if the tombstone is overlapped it still need to be kept > for the gc_grace before purging or it can result in data resurrection. Yes, I agree. I do

Re: Cassandra collection tombstones

2019-01-27 Thread Ayub M
Thanks Alain/Chris. Firstly I am not seeing any difference when using gc_grace_seconds with sstablemetadata. CREATE TABLE ks.nmtest ( reservation_id text, order_id text, c1 int, order_details map, PRIMARY KEY (reservation_id, order_id) ) WITH CLUSTERING ORDER BY (order_id
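
A minimal sketch of the schema above, assuming order_details is a map<text, text> and an ascending clustering order (both assumptions, since the type parameters and the end of the statement were lost in the truncation):

    CREATE TABLE ks.nmtest (
        reservation_id text,
        order_id text,
        c1 int,
        order_details map<text, text>,          -- value types assumed
        PRIMARY KEY (reservation_id, order_id)
    ) WITH CLUSTERING ORDER BY (order_id ASC);  -- direction assumed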

Re: Cassandra collection tombstones

2019-01-25 Thread Chris Lohfink
> The "estimated droppable tombstone" value is actually always wrong. Because > it's an estimate that does not consider overlaps (and I'm not sure about the > fact it considers the gc_grace_seconds either). It considers the time the tombstone was created and the gc_grace_seconds; it doesn't
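
Roughly, and assuming the estimate works per sstable as described here, a tombstone counts toward the droppable estimate once

    tombstone creation time + gc_grace_seconds < now

with no check for overlapping sstables that might still shadow older data, which is why the ratio can overstate what compaction will actually purge.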

Re: Cassandra collection tombstones

2019-01-25 Thread Alain RODRIGUEZ
Hello, I think you might be inserting on top of an existing collection; when you do that, Cassandra implicitly creates a range tombstone. Cassandra does not update or delete data in place, it always inserts (data or a tombstone). Eventually, compaction merges the data and evicts the tombstones. Thus, when overwriting an
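
A minimal CQL sketch of that behaviour, reusing the nmtest table from earlier in the thread (map value types assumed):

    -- Writing the whole collection (INSERT, or SET to a literal) implicitly
    -- deletes any previous contents with a range tombstone before inserting
    -- the new entries.
    INSERT INTO ks.nmtest (reservation_id, order_id, c1, order_details)
    VALUES ('r1', 'o1', 1, {'qty': '2'});

    UPDATE ks.nmtest SET order_details = {'qty': '3'}
    WHERE reservation_id = 'r1' AND order_id = 'o1';

    -- Appending to the collection instead does not need to clear anything,
    -- so no range tombstone is written.
    UPDATE ks.nmtest SET order_details = order_details + {'colour': 'red'}
    WHERE reservation_id = 'r1' AND order_id = 'o1';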