[ https://issues.apache.org/jira/browse/CASSANDRA-14495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16660862#comment-16660862 ]
Chris Lohfink commented on CASSANDRA-14495:
-------------------------------------------

If your GC pause target is set to 500ms, as it is in the default G1 settings, that on its own doesn't mean much; the JVM is doing what it's supposed to do. It fills up enough eden regions and tries to size the number of regions such that, at the current allocation rate, a collection takes roughly the targeted pause time. Take a look at https://www.oracle.com/technetwork/tutorials/tutorials-1876574.html and the GC logs; there are many YouTube presentations and blog posts that can walk you through the phases and how to read the logs.

Do you have an actual problem you're experiencing? Bad latencies? Timeouts? If so, that's different, and nodetool tablestats output and the schema are helpful if it's a data model issue. But try to describe the problem you're having, and perhaps move this to the user list or Stack Overflow, as this JIRA is for bug reports, new features, and changes to the C* source. Your GCs are fairly frequent, though, so if this is impacting your system, people can help identify a bad data model and maybe some mitigation approaches, but there are better forums to reach out to for that kind of help.

> Memory Leak /High Memory usage post 3.11.2 upgrade
> --------------------------------------------------
>
>                 Key: CASSANDRA-14495
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14495
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Metrics
>            Reporter: Abdul Patel
>            Priority: Major
>         Attachments: cas_heap.txt
>
>
> Hi All,
>
> I recently upgraded my non-prod Cassandra cluster (4 nodes, single DC) from
> 3.10 to 3.11.2.
> No issues reported, apart from nodetool info reporting 80% heap usage.
> I initially had 16GB memory on each node; later I bumped it up to 20GB and
> rebooted all nodes.
> Waited for a week, and now again I have seen memory usage of more than 80%,
> 16GB+.
> This means some memory leak is happening over time.
> Has anyone faced such an issue, or do we have any workaround?
> My 3.11.2 upgrade rollout has been halted because of this bug.
>
> ===================================================================
> ID                     : 65b64f5a-7fe6-4036-94c8-8da9c57718cc
> Gossip active          : true
> Thrift active          : true
> Native Transport active: true
> Load                   : 985.24 MiB
> Generation No          : 1526923117
> Uptime (seconds)       : 1097684
> Heap Memory (MB)       : 16875.64 / 20480.00
> Off Heap Memory (MB)   : 20.42
> Data Center            : DC7
> Rack                   : rac1
> Exceptions             : 0
> Key Cache              : entries 3569, size 421.44 KiB, capacity 100 MiB, 7931933 hits, 8098632 requests, 0.979 recent hit rate, 14400 save period in seconds
> Row Cache              : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 requests, NaN recent hit rate, 0 save period in seconds
> Counter Cache          : entries 0, size 0 bytes, capacity 50 MiB, 0 hits, 0 requests, NaN recent hit rate, 7200 save period in seconds
> Chunk Cache            : entries 2361, size 147.56 MiB, capacity 3.97 GiB, 2412803 misses, 72594047 requests, 0.967 recent hit rate, NaN microseconds miss latency
> Percent Repaired       : 99.88086234106282%
> Token                  : (invoke with -T/--tokens to see all 256 tokens)

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
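As a starting point for the GC-log reading the comment recommends: Cassandra 3.11 runs on Java 8, where detailed GC logging can be enabled with HotSpot flags along these lines. This is a sketch, not the project's shipped configuration; the log path and rotation sizes below are assumptions, so adjust them to your install (Cassandra's conf/jvm.options already contains similar commented-out entries).

```shell
# Hypothetical GC-logging entries for conf/jvm.options (Java 8 flag names).
# The log path and file sizes are assumptions - adjust to your environment.
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintGCApplicationStoppedTime
-Xloggc:/var/log/cassandra/gc.log
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=10
-XX:GCLogFileSize=10M
```

With these in place, the resulting gc.log shows each young/mixed pause with its duration, which you can compare against the 500ms MaxGCPauseMillis target discussed above.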