Re: Too many tombstones using TTL

2018-09-07 Thread Charulata Sharma (charshar)
Thanks, Charu

Re: Too many tombstones using TTL

2018-01-16 Thread Python_Max
Thanks for a very helpful reply. Will try to refactor the code accordingly.

Re: Too many tombstones using TTL

2018-01-16 Thread Alexander Dejanovski
I would not plan on deleting data at the row level as you'll end up with a lot of tombstones eventually (and you won't even notice them). It's not healthy to allow that many tombstones to be read, and while your latency may fit your SLA now, it may not in the future. Tombstones are going to create
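A rough way to see what this costs in practice is cqlsh request tracing, which reports the tombstones a read has to skip over (the keyspace, table, and key values below are illustrative, not taken from the thread):

    -- Hypothetical cqlsh session; names and values are for illustration only.
    TRACING ON;

    SELECT * FROM geo.objects WHERE x = 10 AND y = 42;

    -- The trace output typically contains a line along the lines of
    -- "Read N live rows and M tombstone cells", showing how many tombstones
    -- the replica had to read and discard to answer the query.
    TRACING OFF;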

Re: Too many tombstones using TTL

2018-01-16 Thread Python_Max
Hello. I was planning to remove a row (not a partition). Most of the tombstones are seen in the use case of a geographic grid with X:Y as the partition key and an object id (timeuuid) as the clustering key, where objects can be either temporary with a TTL of about 10 hours or fully persistent. When I select all objects
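For reference, a minimal sketch of the schema described above (keyspace, table, and column names are assumptions, not the poster's actual DDL):

    -- Hypothetical reconstruction: a grid cell (x, y) as the partition key,
    -- a timeuuid object id as the clustering key.
    CREATE TABLE IF NOT EXISTS geo.objects (
        x    int,
        y    int,
        id   timeuuid,
        data text,
        PRIMARY KEY ((x, y), id)
    );

    -- Temporary object: the TTL (10 hours = 36000 s) is applied to every
    -- non-key cell written by this insert.
    INSERT INTO geo.objects (x, y, id, data)
    VALUES (10, 42, now(), 'temporary object')
    USING TTL 36000;

    -- Fully persistent object: no TTL at all.
    INSERT INTO geo.objects (x, y, id, data)
    VALUES (10, 42, now(), 'persistent object');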

Re: Too many tombstones using TTL

2018-01-16 Thread Alexander Dejanovski
Hi, could you be more specific about the deletes you're planning to perform? This will end up moving your problem somewhere else as you'll be generating new tombstones (and if you're planning on deleting rows, be aware that row level tombstones aren't reported anywhere in the metrics, logs and
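To make the distinction concrete, a hedged sketch using the same hypothetical table as above: a row-level delete writes a single row tombstone, whereas an expired TTL leaves one tombstone per cell.

    -- Illustrative only; the key values are made up.
    DELETE FROM geo.objects
    WHERE x = 10 AND y = 42 AND id = 123e4567-e89b-12d3-a456-426655440000;

    -- As noted above, row-level tombstones do not show up in the usual
    -- tombstone metrics and warnings, yet they still have to be read and
    -- merged during queries until compaction can purge them after
    -- gc_grace_seconds.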

Re: Too many tombstones using TTL

2018-01-16 Thread Python_Max
Hi. Thank you very much for the detailed explanation. It seems there is nothing I can do about it except delete records by key instead of letting them expire.

Re: Too many tombstones using TTL

2018-01-12 Thread Alexander Dejanovski
Hi, As DuyHai said, different TTLs could theoretically be set for different cells of the same row. And one TTLed cell could be shadowing another cell that has no TTL (say you forgot to set a TTL and set one afterwards by performing an update), or vice versa. One cell could also be missing from a
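A short sketch of the shadowing case described above, reusing the hypothetical table from earlier (column names are assumptions):

    -- Write a row with no TTL...
    INSERT INTO geo.objects (x, y, id, data)
    VALUES (10, 42, 123e4567-e89b-12d3-a456-426655440000, 'no TTL yet');

    -- ...then set a TTL afterwards by updating the same cell. The TTLed
    -- write shadows the earlier non-TTL cell; when it expires, only this
    -- cell becomes a tombstone, not the whole row.
    UPDATE geo.objects USING TTL 36000
    SET data = 'now with a TTL'
    WHERE x = 10 AND y = 42 AND id = 123e4567-e89b-12d3-a456-426655440000;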

Re: Too many tombstones using TTL

2018-01-12 Thread Python_Max
Thank you for the response. I know about the option of setting a TTL per column, or even per item in a collection. However, in my example the entire row has expired; shouldn't Cassandra be able to detect this situation and spawn a single tombstone for the entire row instead of many? Is there any reason not doing
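One way to see that expiry is tracked per cell rather than per row is the TTL() function, which is evaluated per column (again using the hypothetical table from above):

    -- TTL() and WRITETIME() report per-cell values, which is why each
    -- expired cell produces its own tombstone.
    SELECT id, TTL(data), WRITETIME(data)
    FROM geo.objects
    WHERE x = 10 AND y = 42;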

Re: Too many tombstones using TTL

2018-01-11 Thread kurt greaves
You should be able to avoid querying the tombstones if it's time series data. Using TWCS, just make sure you don't query data that you know is expired (assuming you have the time component in your clustering key).
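A hedged sketch of that suggestion, using the hypothetical table from above; the compaction window is a guess meant to roughly match a ~10 hour TTL, not a recommendation:

    -- Switch the table to TWCS so expired data ages out in whole SSTables.
    ALTER TABLE geo.objects
    WITH compaction = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_unit': 'HOURS',
        'compaction_window_size': '12'
    };

    -- Since the clustering key is a timeuuid, reads can be bounded so that
    -- time ranges known to be expired (and their tombstones) are never read.
    SELECT * FROM geo.objects
    WHERE x = 10 AND y = 42
      AND id > maxTimeuuid('2018-01-16 00:00:00+0000');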

Re: Too many tombstones using TTL

2018-01-10 Thread DuyHai Doan
"The question is why Cassandra creates a tombstone for every column instead of single tombstone per row?" --> Simply because technically it is possible to set different TTL value on each column of a CQL row On Wed, Jan 10, 2018 at 2:59 PM, Python_Max wrote: > Hello, C*