[ https://issues.apache.org/jira/browse/SPARK-24938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16580195#comment-16580195 ]

Marcelo Vanzin commented on SPARK-24938:
----------------------------------------

The line you mention is this, right?

{code}
    ByteBuf header = ctx.alloc().heapBuffer(headerLength);
{code}

My understanding of that line is that it uses the allocator configured when 
building the client or server. So perhaps the fix here is not to fall back to 
the default netty pool, but to call {{ctx.alloc().buffer()}} instead of 
{{ctx.alloc().heapBuffer()}}? That way you'd actually be using the shared 
buffers when the allocator is configured for direct buffers, instead of 
initializing a heap pool just for the message encoder...
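
In code, the suggested change is just the one line (a hypothetical sketch; 
the rest of MessageEncoder's encode() would stay as it is):

{code}
// Current line: always carves the header out of the allocator's *heap*
// arenas, even when the allocator is configured for pooled direct buffers.
ByteBuf header = ctx.alloc().heapBuffer(headerLength);

// Suggested change: let the allocator hand out its preferred buffer type,
// so the encoder shares the existing direct-buffer pool instead of forcing
// dedicated heap arenas to grow.
ByteBuf header = ctx.alloc().buffer(headerLength);
{code}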

> Understand usage of netty's onheap memory use, even with offheap pools
> ----------------------------------------------------------------------
>
>                 Key: SPARK-24938
>                 URL: https://issues.apache.org/jira/browse/SPARK-24938
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Imran Rashid
>            Priority: Major
>              Labels: memory-analysis
>
> When I added some instrumentation (using SPARK-24918 and 
> https://github.com/squito/spark-memory), I observed that netty uses a large 
> amount of onheap memory in its pools, in addition to the expected offheap 
> memory. We should figure out why it's using that memory, and whether it's 
> really necessary.
> It might be just this one line:
> https://github.com/apache/spark/blob/master/common/network-common/src/main/java/org/apache/spark/network/protocol/MessageEncoder.java#L82
> which means that even with a small burst of messages, each arena will grow by 
> 16 MB, which could lead to a 128 MB spike of an almost entirely unused pool 
> (see the sketch below). Switching to requesting a buffer from the default 
> pool would probably fix this.
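
A back-of-the-envelope sketch of the arena growth described above. The page 
size and max order are netty's PooledByteBufAllocator defaults; the arena 
count of 8 is hypothetical, since netty derives the real default from the 
number of cores and the configured memory limits:

{code}
// Rough arithmetic behind the 16 MB-per-arena / 128 MB-spike estimate.
public class ArenaSpikeEstimate {
  public static void main(String[] args) {
    int pageSize = 8192;                   // netty default page size
    int maxOrder = 11;                     // netty default max order
    int chunkSize = pageSize << maxOrder;  // 8 KB << 11 = 16 MB per chunk
    int numHeapArenas = 8;                 // hypothetical arena count
    long spike = (long) numHeapArenas * chunkSize;
    System.out.printf("chunk: %d MB, worst-case spike: %d MB%n",
        chunkSize >> 20, spike >> 20);     // prints 16 MB and 128 MB
  }
}
{code}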


