[ https://issues.apache.org/jira/browse/SPARK-24938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16580140#comment-16580140 ]
Imran Rashid commented on SPARK-24938:
--------------------------------------
Cool, that sounds like the info we need to make this change, then. [~zsxwing]
[~vanzin], do you have any thoughts on this? Is there a reason MessageEncoder
explicitly requests buffers from the onheap pool, rather than from the
configured default netty pool?
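For context, here is a minimal standalone sketch of the distinction (not the
actual Spark code; the class name, buffer sizes, and printed labels are made up
for illustration, and the linked MessageEncoder line appears to call
ctx.alloc().heapBuffer(...)): with a pooled allocator configured to prefer
direct buffers, an explicit heapBuffer() request still comes out of the onheap
arenas, while a plain buffer() request respects the configured default.

{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class HeapVsDefaultBufferDemo {
  public static void main(String[] args) {
    // Pooled allocator that prefers direct (offheap) buffers, similar to what the
    // transport layer uses when direct buffers are preferred.
    PooledByteBufAllocator alloc = new PooledByteBufAllocator(true /* preferDirect */);

    // Explicitly asking for a heap buffer always comes from the onheap arenas,
    // regardless of the preferDirect setting.
    ByteBuf heapHeader = alloc.heapBuffer(16);
    System.out.println("heapBuffer() isDirect = " + heapHeader.isDirect()); // false

    // Asking for the allocator's default buffer type respects preferDirect and
    // comes out of the offheap pool instead (on a typical JVM with unsafe available).
    ByteBuf defaultHeader = alloc.buffer(16);
    System.out.println("buffer()     isDirect = " + defaultHeader.isDirect()); // true

    heapHeader.release();
    defaultHeader.release();
  }
}
{code}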
> Understand usage of netty's onheap memory use, even with offheap pools
> ----------------------------------------------------------------------
>
> Key: SPARK-24938
> URL: https://issues.apache.org/jira/browse/SPARK-24938
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 2.4.0
> Reporter: Imran Rashid
> Priority: Major
> Labels: memory-analysis
>
> After adding some instrumentation (using SPARK-24918 and
> https://github.com/squito/spark-memory), we've observed that netty uses a
> large amount of onheap memory in its pools, in addition to the expected
> offheap memory. We should figure out why it's using that memory, and whether
> it's really necessary.
> It might come down to just this one line:
> https://github.com/apache/spark/blob/master/common/network-common/src/main/java/org/apache/spark/network/protocol/MessageEncoder.java#L82
> Because that line explicitly requests a heap buffer, even a small burst of
> messages makes each onheap arena grow by a full 16 MB chunk; with, say, 8
> arenas that adds up to a 128 MB spike from an almost entirely unused pool.
> Switching to requesting a buffer from the default pool would probably fix
> this.
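To make the 16 MB-per-arena growth concrete, here is a minimal standalone
sketch. Assumptions: a reasonably recent Netty 4.1.x where
PooledByteBufAllocator.metric() is available, and the classic pooled-allocator
defaults of 8 KiB pages and maxOrder 11 (i.e. 16 MiB chunks); the class name
and printed labels are illustrative. A single tiny heap-buffer request forces
the owning arena to allocate a whole chunk, which is exactly the mostly-unused
onheap pool growth described in the issue.

{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocatorMetric;

public class OnheapArenaGrowthDemo {
  public static void main(String[] args) {
    PooledByteBufAllocator alloc = new PooledByteBufAllocator(true /* preferDirect */);
    PooledByteBufAllocatorMetric metric = alloc.metric();

    System.out.println("chunk size       = " + metric.chunkSize());      // 16 MiB with classic defaults
    System.out.println("heap arenas      = " + metric.numHeapArenas());
    System.out.println("used heap before = " + metric.usedHeapMemory()); // 0

    // A single small heap request still makes the owning arena carve out a full
    // chunk, so used heap memory jumps by chunkSize even though only a few bytes
    // of it are actually in use.
    ByteBuf header = alloc.heapBuffer(16);
    System.out.println("used heap after  = " + metric.usedHeapMemory()); // one full chunk

    header.release();
  }
}
{code}

Multiply that per-arena chunk by however many heap arenas the allocator is
configured with and you get the kind of spike described above.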