Follow-up for anyone interested: disabling the Windows page file (which Windows 
makes kind of a pain) appears to resolve all issues. Cassandra is still using 
lots of memory but it gives it up as appropriate.
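For anyone who wants to do the same: one way to turn the page file off from an elevated prompt is roughly the following. This is a sketch of a system-config change, not a recipe — it assumes a single managed page file at C:\pagefile.sys, the wmic syntax varies slightly across Windows versions (and wmic is deprecated on recent builds), and a reboot is needed before the change takes effect.

```shell
:: Stop Windows from managing the page file automatically
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False

:: Remove the existing page file (assumed path; check yours first)
wmic pagefileset where name="C:\\pagefile.sys" delete

:: Reboot for the change to take effect
```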

From: DuyHai Doan []
Sent: Friday, November 3, 2017 11:25
To: user <>
Subject: Re: Cassandra using a ton of native memory

8GB of RAM is a recommended production setting for most of the workloads out 
there. With only 16GB of RAM, and because Cassandra relies heavily on the 
system page cache, it should be no surprise that your 16GB is being eaten up.

On Fri, Nov 3, 2017 at 5:40 PM, Austin Sharp <> wrote:
I’ve investigated further. It appears that the performance issues are because 
Cassandra’s memory-mapped files (*.db files) fill up the physical memory and 
start being swapped to disk. Is this related to recommendations to disable 
swapping on a machine where Cassandra is installed? Should I disable 
memory-mapped IO?

I can see issues in JIRA related to Windows memory-mapped I/O but they all 
appear to be fixed prior to 3.11.
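For reference, the knob in question is disk_access_mode in cassandra.yaml. It is largely undocumented; the default value auto memory-maps both data and index files on 64-bit JVMs, and the other values are assumed from the source rather than official docs. A sketch:

```yaml
# cassandra.yaml -- assumed values; default is 'auto' (mmap everything on 64-bit).
# 'mmap_index_only' maps only index files; 'standard' disables mmap entirely.
disk_access_mode: mmap_index_only
```

Note that turning mmap off trades memory pressure for extra read I/O, so it is worth benchmarking before committing to it.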

From: Austin Sharp [<>]
Sent: Thursday, November 2, 2017 17:51
Subject: Cassandra using a ton of native memory


I have a problem with Cassandra 3.11.0 on Windows. I'm testing a workload with 
a lot of read-then-writes that had no significant problems on Cassandra 2.x. 
However, now when this workload continues for a while (perhaps an hour), 
Cassandra or its JVM effectively use up all of the machine's 16GB of memory. 
Cassandra is started with -Xmx2147M, and JMX shows <2GB heap memory and 
<100MB of off-heap memory. However, when I use something like Process 
Explorer, I see that Cassandra has 10 to 11GB of memory in its working set, 
and Windows shows essentially no free memory at all. Once the system has no 
free memory, other processes suffer long sequences of unresponsiveness.

I can't see anything terribly wrong from JMX metrics or log files - they 
never show more than 1GB of non-heap memory. Where should I look to 
investigate this further?
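A quick sanity check on the numbers quoted above makes the gap concrete (a sketch; the working set is taken as ~10.5GB, the midpoint of the 10-11GB range):

```shell
# All figures in MiB, taken from the message above.
working_set=10752    # ~10.5 GiB working set seen in Process Explorer
heap=2147            # -Xmx2147M heap ceiling
off_heap=100         # <100 MiB off-heap reported via JMX

# Memory in the process that the JVM's own accounting never sees.
echo "$(( working_set - heap - off_heap )) MiB unaccounted by JMX"
# prints "8505 MiB unaccounted by JMX"
```

Roughly 8.5GB invisible to JMX is consistent with the OS page cache backing memory-mapped *.db files, which shows up in the working set but not in any JVM metric.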
