[ https://issues.apache.org/jira/browse/CASSANDRA-13754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16149207#comment-16149207 ]

Robert Stupp commented on CASSANDRA-13754:
------------------------------------------

Well, yeah. Looking at the heap dump that [~markusdlugi] provided, it looks 
like the node is "just" overloaded with too many (and possibly too large) writes 
in combination with a small heap. There are lots of {{BTree$Builder}} instances 
with live references in their {{Object[] values}} arrays to {{HeapByteBuffer}} 
instances, each holding a 1MB {{byte[]}}.
{{BTree$Builder}} instances reset their {{Object[] values}} when finished - i.e. 
the builders that still hold references are actively doing something (writes are 
happening at that time).
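To illustrate the reasoning (a simplified, hypothetical sketch - not the actual 
{{BTree$Builder}} code): a builder that parks incoming buffers in an 
{{Object[]}} until it is built keeps those buffers strongly reachable, so 
populated builders in a heap dump point to in-flight writes rather than a leak.
{code:java}
import java.nio.ByteBuffer;
import java.util.Arrays;

// Hypothetical stand-in for the builder behaviour described above.
final class SketchBuilder
{
    // Live references are held here while the builder is in use.
    private Object[] values = new Object[16];
    private int count;

    SketchBuilder add(ByteBuffer buf)
    {
        if (count == values.length)
            values = Arrays.copyOf(values, count * 2);
        values[count++] = buf; // e.g. a HeapByteBuffer wrapping a 1MB byte[]
        return this;
    }

    Object[] build()
    {
        Object[] result = Arrays.copyOf(values, count);
        // "Reset" on completion: references are dropped here, so builders that
        // still show a populated values[] in a heap dump are mid-write.
        Arrays.fill(values, 0, count, null);
        count = 0;
        return result;
    }
}
{code}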
TL;DR I don't think this is actually related to the issue that [~urandom] 
describes.

[~urandom], can you explain what these {{ThreadLocal}} instances actually 
referenced?
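
For context on why I'm asking (a minimal, self-contained sketch of the general 
{{FastThreadLocal}} mechanics - the thread-local name and the 1MB size are made 
up, and nothing is assumed about the actual Cassandra call sites): whatever a 
{{FastThreadLocal}} hands out stays strongly reachable from the owning 
{{FastThreadLocalThread}} until {{remove()}} is called or the thread dies, so 
knowing what those values are tells us whether large objects are being pinned 
per thread.
{code:java}
import io.netty.util.concurrent.FastThreadLocal;
import io.netty.util.concurrent.FastThreadLocalThread;

public class FastThreadLocalSketch
{
    // Hypothetical per-thread scratch buffer: each FastThreadLocalThread that
    // calls get() keeps its own 1MB byte[] reachable until remove() is called
    // or the thread terminates.
    static final FastThreadLocal<byte[]> SCRATCH = new FastThreadLocal<byte[]>()
    {
        @Override
        protected byte[] initialValue()
        {
            return new byte[1024 * 1024];
        }
    };

    public static void main(String[] args) throws InterruptedException
    {
        Thread t = new FastThreadLocalThread(() ->
        {
            byte[] scratch = SCRATCH.get(); // allocates and pins 1MB for this thread
            // ... do work with scratch ...
            SCRATCH.remove();               // without this, the value lives as long as the thread
        });
        t.start();
        t.join();
    }
}
{code}
With many such threads, per-thread values of that size add up quickly, which 
would match the heap picture in the description.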

> FastThreadLocal leaks memory
> ----------------------------
>
>                 Key: CASSANDRA-13754
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13754
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Cassandra 3.11.0, Netty 4.0.44.Final, OpenJDK 8u141-b15
>            Reporter: Eric Evans
>            Assignee: Robert Stupp
>             Fix For: 3.11.1
>
>
> After a chronic bout of {{OutOfMemoryError}} in our development environment, 
> a heap analysis is showing that more than 10G of our 12G heaps are consumed 
> by the {{threadLocals}} members (instances of {{java.lang.ThreadLocalMap}}) 
> of various {{io.netty.util.concurrent.FastThreadLocalThread}} instances.  
> Reverting 
> [cecbe17|https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=commit;h=cecbe17e3eafc052acc13950494f7dddf026aa54]
>  fixes the issue.


