[ https://issues.apache.org/jira/browse/SPARK-32661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223727#comment-17223727 ]
Apache Spark commented on SPARK-32661:
--------------------------------------
User 'tgravescs' has created a pull request for this issue:
https://github.com/apache/spark/pull/30204
> Spark executors on K8S do not request extra memory for off-heap allocations
> ---------------------------------------------------------------------------
>
> Key: SPARK-32661
> URL: https://issues.apache.org/jira/browse/SPARK-32661
> Project: Spark
> Issue Type: Sub-task
> Components: Kubernetes
> Affects Versions: 3.0.0, 3.0.1, 3.1.0
> Reporter: Luca Canali
> Priority: Minor
>
> Off-heap memory allocations are configured using
> `spark.memory.offHeap.enabled=true` and
> `spark.memory.offHeap.size=<size>`. Spark on YARN adds the off-heap memory
> size to the executor container resources. Spark on Kubernetes does not
> request the additional memory for off-heap allocations. Currently, this can
> be worked around by setting spark.executor.memoryOverhead large enough to
> reserve memory for off-heap allocations. This issue proposes making Spark on
> Kubernetes behave as on YARN, that is, adding spark.memory.offHeap.size to
> the memory request for executor containers.
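
As an illustration of the workaround mentioned in the description, here is a minimal spark-submit sketch for Kubernetes that sizes spark.executor.memoryOverhead to cover the off-heap allocation explicitly. The API server address, container image, and all sizes are placeholders, not recommendations:

    spark-submit \
      --master k8s://https://<k8s-apiserver>:6443 \
      --deploy-mode cluster \
      --conf spark.kubernetes.container.image=<spark-image> \
      --conf spark.memory.offHeap.enabled=true \
      --conf spark.memory.offHeap.size=2g \
      --conf spark.executor.memory=4g \
      --conf spark.executor.memoryOverhead=2g \
      <application-jar>

With this configuration the executor pod memory request is roughly spark.executor.memory + spark.executor.memoryOverhead. Under the proposal, Kubernetes would add spark.memory.offHeap.size to the request automatically, as YARN already does, making the explicit overhead unnecessary for this purpose.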