Github user liyinan926 commented on the issue:

    https://github.com/apache/spark/pull/20553
  
    @foxish I think the confusion mostly comes from the property name. I don't
think we need to change the semantics of `spark.executor.cores` or the way
dynamic resource allocation works in K8s mode. As I said above, the ratio
`spark.executor.cores` / `spark.task.cpus` still determines the number of tasks
that can run simultaneously in an executor, even when the physical cpus of the
executor pod are set through `spark.kubernetes.executor.cores`.
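
    A minimal sketch of how these three settings interact, assuming the
semantics described in this comment (the concrete values are illustrative,
not recommendations):

```scala
import org.apache.spark.SparkConf

object CoresExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .set("spark.executor.cores", "4")            // logical cores, used for task scheduling only
      .set("spark.task.cpus", "2")                 // logical cores claimed by each task
      .set("spark.kubernetes.executor.cores", "2") // cpu request on the executor pod

    // Concurrent task slots per executor come from the first two settings,
    // independent of the pod's cpu request:
    val slots = conf.get("spark.executor.cores").toInt /
                conf.get("spark.task.cpus").toInt
    println(s"task slots per executor = $slots") // 4 / 2 = 2
  }
}
```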

