[ https://issues.apache.org/jira/browse/SPARK-37358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17791638#comment-17791638 ]
Björn Boschman commented on SPARK-37358:
----------------------------------------
Is anybody looking into this? We can provide a patch; a sketch of one
possible approach follows below the quoted description.
> Spark-on-K8S: Allow disabling of resources.limits.memory in executor pod spec
> -----------------------------------------------------------------------------
>
> Key: SPARK-37358
> URL: https://issues.apache.org/jira/browse/SPARK-37358
> Project: Spark
> Issue Type: Improvement
> Components: Kubernetes
> Affects Versions: 3.2.0
> Reporter: Andrew de Quincey
> Priority: Major
>
> When spark creates an executor pod on my Kubernetes cluster, it adds the
> following resources definition:
>   resources:
>     limits:
>       memory: 896Mi
>     requests:
>       cpu: '4'
>       memory: 896Mi
> Note that resources.limits.cpu is not set. This is controlled by the
> spark.kubernetes.executor.limit.cores setting (which we intentionally do
> not set).
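> For reference, a configuration along these lines reproduces the pod spec
> above. The 896Mi figure is consistent with spark.executor.memory=512m plus
> the default 384Mi memory overhead, though that breakdown is an assumption
> based on our setup:
>
>   spark.executor.memory=512m
>   spark.kubernetes.executor.request.cores=4
>   # spark.kubernetes.executor.limit.cores deliberately left unset,
>   # so no resources.limits.cpu is emitted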
> We'd like to be able to omit the resources.limit.memory setting as well to
> let the spark worker expand its memory as necessary.
> However, this isn't possible. The scala code in
> BasicExecutorFeatureStep.scala is as follows:
>   .editOrNewResources()
>     .addToRequests("memory", executorMemoryQuantity)
>     .addToLimits("memory", executorMemoryQuantity)
>     .addToRequests("cpu", executorCpuQuantity)
>     .addToLimits(executorResourceQuantities.asJava)
>   .endResources()
>
> i.e. it always adds the memory limit, and there's no way to stop it.
> Oh - most of our code is in Python, so it is not bound by the JVM memory
> settings.
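A minimal sketch of the kind of patch we have in mind, assuming a
hypothetical spark.kubernetes.executor.disableMemoryLimit flag (not an
existing Spark setting) and the fabric8 builder API that
BasicExecutorFeatureStep.scala already uses:

  import scala.collection.JavaConverters._
  import io.fabric8.kubernetes.api.model.{ContainerBuilder, Quantity}

  // Sketch only: configure the executor container's resources, optionally
  // omitting the memory limit.
  def withExecutorResources(
      builder: ContainerBuilder,
      executorMemoryQuantity: Quantity,
      executorCpuQuantity: Quantity,
      executorResourceQuantities: Map[String, Quantity],
      disableMemoryLimit: Boolean): ContainerBuilder = {
    // Build the limits map up front so "memory" can be left out when disabled.
    val limits =
      if (disableMemoryLimit) executorResourceQuantities
      else executorResourceQuantities + ("memory" -> executorMemoryQuantity)

    builder
      .editOrNewResources()
        .addToRequests("memory", executorMemoryQuantity)
        .addToRequests("cpu", executorCpuQuantity)
        .addToLimits(limits.asJava)
      .endResources()
  }

Requests stay exactly as today; only the memory limit becomes optional,
mirroring how spark.kubernetes.executor.limit.cores already makes the CPU
limit optional.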