Re: Leveled Compaction Strategy with a really intensive delete workload

2015-05-26 Thread Stefano Ortolani
I see, thanks Jason! Can a dev confirm it is safe to apply those changes on live data? Also, if I understood correctly, those parameters still obey gc_grace_seconds, that is, no compaction to evict tombstones will take place before gc_grace_seconds has elapsed, correct? Cheers, Stefano On Tue,

Re: Leveled Compaction Strategy with a really intensive delete workload

2015-05-25 Thread Jason Wee
, due to a really intensive delete workload, the SSTable is promoted to t.. Is Cassandra designed for *delete* workloads? I doubt so. Perhaps look at some other alternative like TTL? jason On Mon, May 25, 2015 at 10:12 AM, Manoj Khangaonkar khangaon...@gmail.com wrote: Hi, For a
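The TTL approach Jason mentions can be sketched in CQL. This is a hedged, illustrative fragment only: the `events` table and its columns are hypothetical, and 86400 seconds (one day) is an example value, not a recommendation.

```sql
-- Let rows expire via TTL instead of issuing explicit deletes
-- (table and column names are illustrative only).
INSERT INTO events (id, payload) VALUES (uuid(), 'data') USING TTL 86400;

-- Alternatively, set a default TTL for every row written to the table:
ALTER TABLE events WITH default_time_to_live = 86400;
```

Expired TTL cells still become tombstones at expiry, but this avoids the separate delete write path.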

Re: Leveled Compaction Strategy with a really intensive delete workload

2015-05-25 Thread Stefano Ortolani
Hi all, Thanks for your answers! Yes, I agree that a delete intensive workload is not something Cassandra is designed for. Unfortunately this is to cope with some unexpected data transformations that I hope are a temporary thing. We chose LCS strategy because of really wide rows which were

Re: Leveled Compaction Strategy with a really intensive delete workload

2015-05-25 Thread Stefano Ortolani
Ok, I am reading a bit more about compaction subproperties here ( http://docs.datastax.com/en/cql/3.1/cql/cql_reference/compactSubprop.html) and it seems that tombstone_threshold and unchecked_tombstone_compaction might come in handy. Does anybody know if changing any of these values (via ALTER) is
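A change along the lines Stefano describes might look like the following sketch. The keyspace/table names and the threshold value are illustrative assumptions, not recommendations; as discussed elsewhere in the thread, evicted tombstones must still be older than gc_grace_seconds.

```sql
-- Illustrative sketch: tightening tombstone eviction on an LCS table
-- (my_ks.my_table and the 0.05 threshold are example values).
ALTER TABLE my_ks.my_table
WITH compaction = {
  'class': 'LeveledCompactionStrategy',
  'tombstone_threshold': '0.05',             -- consider an SSTable for tombstone
                                             -- compaction once ~5% of it is tombstones
  'unchecked_tombstone_compaction': 'true'   -- allow single-SSTable tombstone
                                             -- compactions without the overlap pre-check
};
```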

Re: Leveled Compaction Strategy with a really intensive delete workload

2015-05-25 Thread Jason Wee
Hi Stefano, I did a quick test: the ALTER looks almost instant, but remember, on my test machine there is no data loaded yet, and I was switching from STCS to LCS. cqlsh:jw_schema1 CREATE TABLE DogTypes ( block_id uuid, species text, alias text, population varint, PRIMARY KEY (block_id) )
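Jason's STCS-to-LCS switch can be reconstructed roughly as below, using the table from his snippet. This is a sketch under the assumption that the table started on SizeTieredCompactionStrategy; note the ALTER statement itself returns quickly, while re-compacting existing SSTables into levels happens in the background.

```sql
-- Table as in Jason's test (jw_schema1 keyspace), starting on STCS.
CREATE TABLE jw_schema1.dogtypes (
  block_id   uuid PRIMARY KEY,
  species    text,
  alias      text,
  population varint
) WITH compaction = {'class': 'SizeTieredCompactionStrategy'};

-- Switch to LCS; 160 MB is the default target SSTable size.
ALTER TABLE jw_schema1.dogtypes
WITH compaction = {'class': 'LeveledCompactionStrategy',
                   'sstable_size_in_mb': '160'};
```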

Leveled Compaction Strategy with a really intensive delete workload

2015-05-24 Thread Stefano Ortolani
Hi all, I have a question re leveled compaction strategy that has been bugging me quite a lot lately. Based on what I understood, a compaction takes place when a level reaches a specific size (each level being 10 times the size of the previous one). My question is about an edge case where, due to a

Re: Leveled Compaction Strategy with a really intensive delete workload

2015-05-24 Thread Manoj Khangaonkar
Hi, For a delete-intensive workload (which translates to write-intensive), is there any reason to use leveled compaction? The recommendation seems to be that leveled compaction is suited for read-intensive workloads. Depending on your use case, you might be better off with date-tiered or size-tiered