Not sure whether you're asking me or the original poster, but the more
times data gets overwritten in a memtable, the less it has to be
compacted later on (and even without overwrites, larger memtables result
in less compaction).
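To illustrate the coalescing point with a toy Python sketch (not Cassandra code; just a dict standing in for a memtable that keeps the latest value per key):

```python
# Toy sketch: a memtable holds only the latest value per partition key,
# so overwrites coalesce in memory before anything is flushed.
def write_workload(memtable, writes):
    for key, value in writes:
        memtable[key] = value  # an overwrite replaces the old entry in place

memtable = {}
# 10,000 writes spread over only 100 distinct keys
writes = [(i % 100, f"v{i}") for i in range(10_000)]
write_workload(memtable, writes)

print(len(writes))    # 10000 rows written
print(len(memtable))  # 100 rows would actually reach an SSTable on flush
```

The 10,000 logical writes shrink to 100 rows on disk, which is why larger (or longer-lived) memtables mean less compaction work later.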
On 05/25/2017 05:59 PM, Jonathan Haddad wrote:
Why do you think keeping your data in the memtable is what you need
to do?
On Thu, May 25, 2017 at 7:16 AM Avi Kivity <a...@scylladb.com> wrote:
Then it doesn't have to (it still may, for other reasons).
On 05/25/2017 05:11 PM, preetika tyagi wrote:
What if the commit log is disabled?
On May 25, 2017 4:31 AM, "Avi Kivity" <a...@scylladb.com> wrote:
Cassandra has to flush the memtable occasionally, or the
commit log grows without bounds.
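A toy model of that relationship (assumptions only, not Cassandra internals): commit-log segments can only be recycled once the memtable data they cover has been flushed to an SSTable, so without flushes the log grows without bound.

```python
# Toy model: every mutation lands in both the memtable and the commit log;
# only a flush lets the covering log segments be reclaimed.
class Node:
    def __init__(self):
        self.memtable_bytes = 0
        self.commitlog_bytes = 0

    def write(self, nbytes):
        self.memtable_bytes += nbytes
        self.commitlog_bytes += nbytes  # every mutation is also logged

    def flush(self):
        # flushing persists the memtable, so the log segments can go
        self.memtable_bytes = 0
        self.commitlog_bytes = 0

node = Node()
for _ in range(1000):
    node.write(1024)
print(node.commitlog_bytes)  # 1024000: keeps growing until a flush
node.flush()
print(node.commitlog_bytes)  # 0
```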
On 05/25/2017 03:42 AM, preetika tyagi wrote:
Hi,
I'm running Cassandra with a very small dataset so that the
data can stay in the memtable only. Below are my configurations:
In jvm.options:
-Xms4G -Xmx4G
In cassandra.yaml,
memtable_cleanup_threshold: 0.50
memtable_allocation_type: heap_buffers
As per the documentation in cassandra.yaml,
memtable_heap_space_in_mb and
memtable_offheap_space_in_mb will each be set to 1/4 of the heap
size, i.e. ~1000MB.
According to the documentation here
(http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configCassandra_yaml.html#configCassandra_yaml__memtable_cleanup_threshold),
the memtable flush will trigger if the total size of the
memtable(s) goes beyond (1000+1000)*0.50=1000MB.
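Redoing that arithmetic in a short Python snippet (the space values are the documented 1/4-of-heap defaults; with a 4G heap that is 1024MB each, which the thread rounds to 1000MB):

```python
# Flush-trigger arithmetic from the thread, with exact numbers.
heap_mb = 4 * 1024                           # -Xmx4G
memtable_heap_space_in_mb = heap_mb // 4     # documented default: 1024
memtable_offheap_space_in_mb = heap_mb // 4  # documented default: 1024
memtable_cleanup_threshold = 0.50

flush_trigger_mb = (memtable_heap_space_in_mb
                    + memtable_offheap_space_in_mb) * memtable_cleanup_threshold
print(flush_trigger_mb)  # 1024.0, i.e. the ~1000MB figure above
```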
Now if I perform several write requests which result in
roughly ~300MB of data, the memtable still gets flushed: I
see SSTables being created on the file system (Data.db etc.),
and I don't understand why.
Could anyone explain this behavior and point out if I'm
missing something here?
Thanks,
Preetika