[ https://issues.apache.org/jira/browse/SPARK-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208391#comment-14208391 ]
Apache Spark commented on SPARK-4370:
-------------------------------------
User 'aarondav' has created a pull request for this issue:
https://github.com/apache/spark/pull/3155
> Limit cores used by Netty transfer service based on executor size
> -----------------------------------------------------------------
>
> Key: SPARK-4370
> URL: https://issues.apache.org/jira/browse/SPARK-4370
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.2.0
> Reporter: Aaron Davidson
> Assignee: Aaron Davidson
> Priority: Critical
>
> Right now, the NettyBlockTransferService uses the total number of cores
> on the machine as both the number of threads and the number of buffer
> arenas to create. The latter is more troubling: it can lead to significant
> allocation of extra heap and direct memory when executors are small
> relative to the whole machine. For instance, on a 32-core machine we
> allocate (32 arenas * 16MB per arena = 512MB) * 2 for client and server
> = 1GB of direct and heap memory, a huge overhead if the executor is only
> using, say, 8 of those cores.
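
The arithmetic above suggests the shape of a fix: size the Netty thread and
arena count from the executor's core allotment rather than from
Runtime.getRuntime.availableProcessors(). A minimal Scala sketch of that
idea follows; it is not the actual patch in PR #3155, and the object name,
helper name, and the cap of 8 threads are illustrative assumptions.

    object NettyThreadSketch {
      // Illustrative cap; the real default may differ.
      private val MaxDefaultNettyThreads = 8

      // numUsableCores <= 0 means "unknown": fall back to every core on
      // the machine, matching the current behavior described above.
      def numNettyThreads(numUsableCores: Int): Int = {
        val available =
          if (numUsableCores > 0) numUsableCores
          else Runtime.getRuntime.availableProcessors()
        math.min(available, MaxDefaultNettyThreads)
      }

      def main(args: Array[String]): Unit = {
        // An 8-core executor on a 32-core box gets 8 threads/arenas:
        // (8 arenas * 16MB) * 2 for client and server = 256MB, not 1GB.
        println(numNettyThreads(8))  // 8
        println(numNettyThreads(0))  // machine core count, capped at 8
      }
    }

With a cap like this, arena memory scales with the executor rather than the
host, so co-locating several small executors on one large machine no longer
multiplies the 1GB figure above.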