No, I didn't backport that one.

Thank you

Sent using https://www.zoho.com/mail/

============ Forwarded message ============
From: Kane Wilson <k...@raft.so>
To: <user@cassandra.apache.org>
Date: Mon, 01 Mar 2021 03:18:33 +0330
Subject: Re: using zstd cause high memtable switch count
============ Forwarded message ============


Did you also backport https://github.com/apache/cassandra/commit/9c1bbf3ac913f9bdf7a0e0922106804af42d2c1e to still use LZ4 for flushing? I would be curious if this is a side effect of using zstd for flushing.
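
For anyone following the thread: that commit is, if I recall correctly, the one behind the flush_compression option that ships in 4.0, so a backport would surface something along these lines in cassandra.yaml (option name and values as in 4.0; a 3.11 backport may differ):

# Compression applied when flushing memtables to disk, independent of
# each table's own compression setting:
#   none  - flush uncompressed
#   fast  - flush with a fast compressor (LZ4); the 4.0 default
#   table - flush with the table's configured compressor (zstd here)
flush_compression: fast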
https://raft.so - Cassandra consulting, support, and managed services

On Sun, Feb 28, 2021 at 9:22 PM onmstester onmstester <onmstes...@zoho.com.invalid> wrote:

Hi,

I'm using 3.11.2 with just the zstd patch added. I changed table compression from the default (LZ4) to zstd with level 1 and a 64 KB chunk size, and everything is fine (disk usage decreased by 40% and CPU usage is almost the same as before); only the memtable switch count changed dramatically: with LZ4 it was under 100 in a week, with zstd it was over 1000. I don't understand how that's related.
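
For reference, the change was a single ALTER (keyspace/table names are placeholders, and the class name assumes the backport keeps the 4.0 ZstdCompressor name):

ALTER TABLE my_ks.my_table
  WITH compression = {
    'class': 'ZstdCompressor',
    'compression_level': 1,
    'chunk_length_in_kb': 64
  };

Existing SSTables only pick up the new compressor when they are rewritten by compaction or by nodetool upgradesstables -a, so the full disk savings show up gradually.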



P.S. Thank you guys for bringing zstd to Cassandra; it had a huge impact on my use case, cutting costs by almost 40%. I wish I could have used it sooner (although some sort of patch for this feature was already available four years ago).

I just found out that the HBase folks have had zstd since 2017, and IMHO it would be good for the Cassandra community to change its release policy so that such features ship faster.


Best Regards
Sent using https://www.zoho.com/mail/
