Subject: Re: Cassandra going OOM due to tombstones (heapdump screenshots
provided)
>
> It looks like the number of tables is the problem, with 5,000 - 10,000
> tables, that is way above the recommendations.
> Take a look here:
> https://docs.datastax.com/en/dse-planning/doc/planning/planningAntiPatterns.html#planningAntiPatterns__AntiPatTooManyTables
> This suggests that 5-10GB
It means that you are using 5-10GB of memory just to hold information about
tables. Memtables hold the data that is written to the database until it is
flushed to disk; flushes happen when memory is low or some other threshold is
reached. Every table will have a memtable that takes up memory.
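You can see the per-table memtable footprint with nodetool; a quick sketch
(the table name is a placeholder, and on versions before 3.0 the command is
cfstats rather than tablestats):

  # per-table memtable footprint; with thousands of tables even small
  # per-table numbers add up (my_keyspace.my_table is a placeholder)
  nodetool tablestats my_keyspace.my_table | grep -i memtable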
It doesn't seem to be the problem, but I do not have deep knowledge of C*
internals.
When do memtables come into play? Only at startup?
Hi Behroz,
It looks like the number of tables is the problem, with 5,000 - 10,000 tables,
that is way above the recommendations.
Take a look here:
https://docs.datastax.com/en/dse-planning/doc/planning/planningAntiPatterns.html#planningAntiPatterns__AntiPatTooManyTables
IIRC there is an overhead of about 1MB per table, which with 5,000-10,000
tables means 5GB-10GB of overhead just from having that many tables. To me it
looks like you need to increase the heap size and later potentially work
on the data models to have fewer tables.
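A quick way to confirm the table count (assumes C* 3.0+, where schema
metadata lives in the system_schema keyspace):

  # total number of tables across all keyspaces
  cqlsh -e "SELECT COUNT(*) FROM system_schema.tables;"
  # back-of-envelope: ~1MB x 5,000-10,000 tables ~= 5-10GB of heap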
Hannu
> On 29. Jan 2020, at
>> If it's after the host comes online and it's hint replay from the other
>> hosts, you probably want to throttle hint replay significantly on the rest
>> of the cluster. Whatever your hinted handoff throttle is, consider dropping
>> it by 50-90% to work around whichever of those two problems it is.
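For reference, the knob in question is hinted_handoff_throttle_in_kb in
cassandra.yaml (default 1024), and it can also be changed at runtime on each
sending node; a sketch:

  # ~90% below the 1024 KB default
  nodetool sethintedhandoffthrottlekb 128
  # or stop hint delivery entirely while the node catches up
  nodetool pausehandoff
  nodetool resumehandoff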
>> Startup would replay commitlog, which would re-materialize all of those
>> mutations and put them into the memtable. The memtable would flush over
>> time to disk, and clear the commitlog.
From our observation, the node is already online and it seems to be happening
after the commitlog replay.
>> Some environment details like Cassandra version, amount of physical RAM,
>> JVM configs (heap and others), and any other non-default cassandra.yaml
>> configs would help. The amount of data, number of keyspaces & tables,
>> since you mention "clients", would also be helpful for people to suggest
>> fixes.
Date: Friday, January 24, 2020 at 12:09 PM
To: "user@cassandra.apache.org"
Subject: Re: Cassandra going OOM due to tombstones (heapdump screenshots
provided)
Ah, I focused too much on the literal meaning of startup. If it's happening
JUST AFTER startup, it's probably getting flooded with hints from the other
hosts when it comes online.
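One way to confirm, run on the peer nodes (the hints path below is the
package default; adjust for your install):

  # peers store hints for the node that was down in their hints directory
  ls -lh /var/lib/cassandra/hints
  nodetool statushandoff   # is hint delivery currently running?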
If that's the case, it may simply be overrunning the memtable, or it may be a
deadlock.
The heap dump shows 6 GB of mutations on heap.
Startup would replay commitlog, which would re-materialize all of those
mutations and put them into the memtable. The memtable would flush over
time to disk, and clear the commitlog.
It looks like PERHAPS the commitlog replay is faster than the memtable
flush, so you're accumulating mutations on the heap faster than they can be
flushed.
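A way to check whether flushes are falling behind, using standard nodetool
output:

  # non-zero Pending on the flush stages means mutations arrive faster
  # than they are flushed
  nodetool tpstats | grep -iE 'MemtableFlushWriter|MemtablePostFlush'
  # force memtables to disk before a controlled restart
  nodetool flush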
Some environment details like Cassandra version, amount of physical RAM,
JVM configs (heap and others), and any other non-default cassandra.yaml
configs would help. The amount of data, number of keyspaces & tables,
since you mention "clients", would also be helpful for people to suggest
fixes.
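Most of those details can be gathered quickly (paths are package defaults;
adjust for your install):

  nodetool version                         # Cassandra version
  nodetool info | grep -i heap             # configured vs. used heap
  grep Xmx /etc/cassandra/jvm.options      # JVM heap flags
  cqlsh -e "SELECT COUNT(*) FROM system_schema.tables;"   # table count (3.0+)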
We recently had a lot of OOMs in C*, generally happening during startup.
We took some heap dumps but still cannot pinpoint the exact reason, so we
need some help from experts.
Our clients are not explicitly deleting data, but they have TTL enabled.
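For context, TTL'd writes still turn into tombstones when they expire, even
with no explicit DELETEs; a sketch with hypothetical keyspace/table names:

  # the cell expires after 86400s and is carried as a tombstone until
  # gc_grace_seconds passes and compaction purges it
  cqlsh -e "INSERT INTO my_ks.events (id, payload) VALUES (uuid(), 'x') USING TTL 86400;"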
C* details:
> show version
[cqlsh