GitHub user liyinan926 commented on the issue:

    https://github.com/apache/spark/pull/20553
  
    > Do you mean `spark.kubernetes.executor.cores` will only be used with k8s for static allocation? It looks to me that if we wanna k8s work with Spark dynamic allocation better, we have to change the semantics of `spark.executor.cores` to support fraction. Or we introduce a new dynamic allocation module for k8s, which reads `spark.kubernetes.executor.cores`.
    
    Since it's only used to define the **physical** CPU request for executor pods, it can be used for both static and dynamic allocation in k8s mode. IMO, `spark.executor.cores` and `spark.task.cpus` define CPU resource availability and demand in a virtual sense. An executor with a `100m` physical CPU request can still run 2 tasks if they fit, which is equivalent to a scenario with `spark.executor.cores=2` and `spark.task.cpus=1`. Task parallelism per executor and dynamic resource allocation are still based on the semantics of `spark.executor.cores` and `spark.task.cpus`, so I don't think we need to change the semantics of these two properties, and we don't need a new version of dynamic resource allocation for k8s. IMO, it's unfortunate that `spark.executor.cores` is used both for defining the physical CPU request and for defining task parallelism.
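
    As a minimal sketch of the split described above, assuming the `spark.kubernetes.executor.cores` key as discussed in this thread (the exact property name may differ in the final patch):

```scala
import org.apache.spark.SparkConf

// Hypothetical configuration illustrating the separation of concerns:
val conf = new SparkConf()
  // Virtual semantics: each executor advertises 2 task slots,
  // and each task claims 1 slot, so 2 tasks run per executor.
  .set("spark.executor.cores", "2")
  .set("spark.task.cpus", "1")
  // Physical semantics: the executor pod requests only 100 millicores
  // from Kubernetes (property name as proposed in this thread).
  .set("spark.kubernetes.executor.cores", "100m")
```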

