Github user foxish commented on the issue:

    https://github.com/apache/spark/pull/20553
  
    @cloud-fan @jiangxb1987 I assume you're referring to 
[ExecutorAllocationManager.scala#L114-L118](https://github.com/apache/spark/blob/fc6fe8a1d0f161c4788f3db94de49a8669ba3bcc/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala#L114-L118)
    
    @liyinan926, while this might be the least obtrusive way to give Kubernetes 
mode fractional executor cores without changing other backends, separating the 
notion of a pod's CPU request from `spark.executor.cores` seems likely to 
confuse users. We don't expect Spark users to necessarily understand pods or 
containers. Elaborating on @srowen's point on the JIRA: in theory, could we 
relax `spark.executor.cores` to accept doubles and push the validation down to 
the individual resource managers, forcing conversion to an integer where 
needed? It might require a similar change to `spark.task.cpus` as well. 
Thoughts?
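
    A rough sketch of what I have in mind (hypothetical helper names, not 
Spark's actual config plumbing; the real change would go through the 
`ConfigBuilder` machinery):

```scala
// Hypothetical sketch: parse spark.executor.cores as a double and let each
// backend decide whether fractional values are allowed.
object ExecutorCoresCheck {
  // Parse the raw config value; accepts "0.5", "1", "2.0", etc.
  def parseExecutorCores(raw: String): Double = {
    val cores = raw.toDouble
    require(cores > 0, s"spark.executor.cores must be positive, got $cores")
    cores
  }

  // Backends that schedule whole cores (YARN, Standalone) would reject or
  // round fractional requests; Kubernetes could pass the value through as a
  // fractional CPU request on the executor pod.
  def coresForBackend(cores: Double, allowsFractional: Boolean): Double =
    if (allowsFractional) cores
    else {
      require(cores == cores.floor,
        s"This resource manager requires integral executor cores, got $cores")
      cores
    }
}

object Demo extends App {
  val cores = ExecutorCoresCheck.parseExecutorCores("0.5")
  // Kubernetes-like backend: fractional value passes through as 0.5
  println(ExecutorCoresCheck.coresForBackend(cores, allowsFractional = true))
  // A YARN-like backend would instead fail fast here:
  // ExecutorCoresCheck.coresForBackend(cores, allowsFractional = false)
}
```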

