Re: Why and how is Cassandra using all my RAM?

2018-07-23 Thread Mark Rose
On 19 July 2018 at 10:43, Léo FERLIN SUTTON wrote: > Hello list! > > I have a question about Cassandra memory usage. > > My Cassandra nodes are slowly using up all my RAM until they get OOM-killed. > > When I check the memory usage with nodetool info, the memory > (off-heap + heap) doesn't match
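
A quick way to compare what Cassandra reports with what the kernel is actually charging the process is to put the nodetool figures next to the JVM's resident set size. A minimal sketch, assuming a typical install where the JVM command line contains CassandraDaemon:

    # heap and off-heap as reported by Cassandra (MB)
    nodetool info | grep -i 'heap memory'

    # resident set size of the Cassandra JVM as seen by the OS (KB);
    # assumes pgrep finds exactly one matching process
    ps -o rss= -p "$(pgrep -f CassandraDaemon)"

If RSS keeps growing well past heap plus off-heap, the gap is usually mmapped sstables and other native allocations rather than anything visible to the JVM.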

Re: Migrating to LCS : Disk Size recommendation clashes

2017-04-20 Thread Mark Rose
Hi Amit, The size recommendations are based on balancing CPU and the amount of data stored on a node. LCS requires less disk space but generally requires much more CPU to keep up with compaction for the same amount of data, which is why the size recommendation is smaller. There is nothing wrong
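
For reference, moving a table to LCS is a single schema change; a minimal sketch via cqlsh, where the keyspace/table names and the 160 MB target sstable size are placeholders rather than values from this thread:

    cqlsh -e "ALTER TABLE my_ks.my_table
      WITH compaction = {'class': 'LeveledCompactionStrategy',
                         'sstable_size_in_mb': 160};"

Expect a burst of compaction activity afterwards while the existing sstables are re-levelled.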

Re: Maximum memory usage reached in cassandra!

2017-04-03 Thread Mark Rose
You may have better luck switching to G1GC and using a much larger heap (16 to 30GB). 4GB is likely too small for your amount of data, especially if you have a lot of sstables. Then try increasing file_cache_size_in_mb further. Cheers, Mark On Tue, Mar 28, 2017 at 3:01 AM, Mokkapati, Bhargav
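
A rough sketch of what that change might look like on a 3.x node; the 16 GB heap, the cache size, and the file locations are illustrative assumptions, not figures from this thread:

    # jvm.options: pin the heap and enable G1 (comment out the CMS flags)
    #   -Xms16G
    #   -Xmx16G
    #   -XX:+UseG1GC
    #
    # cassandra.yaml: give the chunk cache more headroom
    #   file_cache_size_in_mb: 2048
    #
    # restart the node to pick up both changes
    sudo systemctl restart cassandra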

Re: Adding disk capacity to a running node

2016-10-17 Thread Mark Rose
I've had luck using the st1 EBS type, too, for situations where reads are rare (the commit log still needs to be on its own high IOPS volume; I like using ephemeral storage for that). On Mon, Oct 17, 2016 at 3:03 PM, Branton Davis wrote: > I doubt that's true anymore.
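
Concretely, that split comes down to two directory settings in cassandra.yaml; a sketch with made-up mount points:

    # cassandra.yaml
    #   data_file_directories:
    #       - /mnt/st1/cassandra/data            # throughput-oriented st1 EBS volume
    #   commitlog_directory: /mnt/ephemeral/cassandra/commitlog   # low-latency local SSD
    #
    # sanity-check which device actually backs each path
    df -h /mnt/st1/cassandra/data /mnt/ephemeral/cassandra/commitlog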

Re: Is it ok to restart DECOMMISSION

2016-09-15 Thread Mark Rose
I've done that several times. Kill the process, restart it, let it sync, decommission. You'll need enough space on the receiving nodes for the full set of data, on top of the other data that was already sent earlier, plus room to clean up and compact it. Before you kill it, check system.log to see if it
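
An outline of that sequence on the node being removed; the service name and log path assume a typical package install:

    # stop the stuck decommission by restarting the node
    sudo systemctl stop cassandra
    sudo systemctl start cassandra

    # wait for it to come back Up/Normal and for streams to settle
    nodetool status
    nodetool netstats

    # see how far the earlier decommission got before it died
    grep -i 'decommission\|stream' /var/log/cassandra/system.log | tail -n 50

    # then run it again
    nodetool decommission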

Re: large number of pending compactions, sstables steadily increasing

2016-08-19 Thread Mark Rose
Hi Ezra, Are you making frequent changes to your rows (including TTL'ed values), or mostly inserting new ones? If you're only inserting new data, it's probable that size-tiered compaction would work better for you. If you are TTL'ing whole rows, consider date-tiered. If leveled compaction is
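
A sketch of how that plays out in practice: first check whether compaction really is falling behind, then switch the strategy if the workload is insert-only. Keyspace and table names below are placeholders:

    # pending/active compactions on this node
    nodetool compactionstats

    # insert-mostly workload: size-tiered compaction
    cqlsh -e "ALTER TABLE my_ks.events
      WITH compaction = {'class': 'SizeTieredCompactionStrategy'};"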

Re: My cluster shows high system load without any apparent reason

2016-07-25 Thread Mark Rose
re's a screenshot from the latencies from our application point of view, > which uses the Cassandra cluster to do reads. I started a rolling restart at > around 09:30 and you can clearly see how the system latency dropped. > http://imgur.com/a/kaPG7 > > On Sat, Jul 23, 2016 at 2:25 A

Re: My cluster shows high system load without any apparent reason

2016-07-22 Thread Mark Rose
Hi Garo, Did you put the commit log on its own drive? Spiking CPU during stalls is a symptom of not doing that. The commitlog is very latency sensitive, even under low load. Do be sure you're using the deadline or noop scheduler for that reason, too. -Mark On Fri, Jul 22, 2016 at 4:44 PM, Juho
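
Checking and changing the scheduler is a one-liner per block device; a sketch in which the device name is an assumption and the change does not survive a reboot:

    # the active scheduler is shown in brackets
    cat /sys/block/xvdb/queue/scheduler

    # switch the commit log device to deadline until the next reboot
    echo deadline | sudo tee /sys/block/xvdb/queue/scheduler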

Re: My cluster shows high system load without any apparent reason

2016-07-22 Thread Mark Rose
Hi Garo, Are you using XFS or Ext4 for data? XFS is much better at deleting large files, such as may happen after a compaction. If you have 26 TB in just two tables, I bet you have some massive sstables which may take a while for Ext4 to delete, which may be causing the stalls. The underlying
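
For a fresh data volume, the XFS route is a couple of commands; the device name and mount point below are placeholders:

    # format the data volume as XFS and mount it where cassandra.yaml expects it
    sudo mkfs.xfs /dev/xvdf
    sudo mkdir -p /var/lib/cassandra/data
    sudo mount -o noatime /dev/xvdf /var/lib/cassandra/data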