dfercode commented on PR #40771:
URL: https://github.com/apache/spark/pull/40771#issuecomment-1507988511

   > > The cpu limits are set by 
spark.kubernetes.{driver,executor}.limit.cores. The cpu is set by 
spark.{driver,executor}.cores. The memory request and limit are set by summing 
the values of spark.{driver,executor}.memory and 
spark.{driver,executor}.memoryOverhead. Other resource limits are set by 
spark.{driver,executor}.resources.{resourceName}.* configs.
   > 
   > Referring to the doc, we can actually set driver pod memory alone
   
   The current config sets the pod's request.memory and limit.memory to the 
**same value**, computed by summing spark.{driver,executor}.memory and 
spark.{driver,executor}.memoryOverhead.
   But request.memory and limit.memory are different kinds of parameters in 
k8s, and always keeping them equal may not be good practice. In most cases, 
the request-memory quota we can get from the infrastructure team is smaller 
than the limit.memory quota. If a Spark pod's request.memory can only equal 
its limit.memory, then the total memory we can use is bounded by the smaller 
one.
   `requests` defines the minimum amount of resources the container needs.
   `limits` defines the upper bound on resources the container may consume.
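
   To make the concern concrete, here is a minimal sketch (in Python, with hypothetical names, not Spark's actual implementation) of the behavior described above: a single total is derived from memory plus memoryOverhead and used for both the pod's request and limit, so the two can never diverge.

   ```python
   # Sketch of the current behavior described in this comment (not Spark source):
   # both request.memory and limit.memory are set to memory + memoryOverhead.
   def pod_memory_mib(memory_mib: int, memory_overhead_mib: int) -> dict:
       total = memory_mib + memory_overhead_mib
       # Current behavior: request and limit always carry the same value,
       # so a smaller request quota cannot be expressed independently.
       return {"request.memory": total, "limit.memory": total}

   # e.g. spark.driver.memory=4096m, spark.driver.memoryOverhead=512m
   print(pod_memory_mib(4096, 512))
   # -> {'request.memory': 4608, 'limit.memory': 4608}
   ```

   With a separate request setting, the first value could be lower than the second, letting the scheduler pack pods by the smaller request while still allowing bursts up to the limit.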


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

