Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/20553
    > This is to avoid changing the semantics of spark.executor.cores and
    > spark.task.cpus and their role in task scheduling, task parallelism, dynamic
    > resource allocation, etc. The new configuration property only determines the
    > physical CPU cores available to an executor.
    Do you mean `spark.kubernetes.executor.cores` will only be used with k8s
    for static allocation? It looks to me that if we want k8s to work better with
    Spark dynamic allocation, we have to change the semantics of
    `spark.executor.cores` to support fractions. Or we could introduce a new
    dynamic allocation module for k8s that reads `spark.kubernetes.executor.cores`.
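    For illustration only, here is a minimal sketch of the split being discussed, assuming the proposed `spark.kubernetes.executor.cores` lands as described in this PR: `spark.executor.cores` keeps its integer, task-slot meaning for scheduling, while the k8s-specific setting carries the (possibly fractional) physical CPU request for the executor pod. This is not the final API, just how the two settings could diverge under static allocation.
    ```scala
    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    // Hypothetical configuration sketch (names per this PR's discussion, not a
    // settled API): spark.executor.cores stays an integer and keeps driving task
    // scheduling, while spark.kubernetes.executor.cores expresses the fractional
    // CPU requested for the executor pod on Kubernetes.
    val conf = new SparkConf()
      .set("spark.executor.cores", "1")              // task-slot semantics unchanged
      .set("spark.kubernetes.executor.cores", "0.5") // pod CPU request, fraction allowed

    val spark = SparkSession.builder()
      .config(conf)
      .getOrCreate()
    ```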
---