TongWei1105 commented on PR #49109:
URL: https://github.com/apache/spark/pull/49109#issuecomment-2530218485

   > Thank you for making a PR, but this doesn't comply with K8s `LimitRange`, 
does it? For me, this looks like a breaking change instead of an improvement.
   > 
   > As you know, K8s has the `LimitRange` feature, which already provides default 
limit values like the following. Apache Spark has been respecting it. 
https://kubernetes.io/docs/concepts/policy/limit-range/#limitrange-and-admission-checks-for-pods
   
   Indeed, `LimitRange` provides default resource requests and limits for each 
Pod at the namespace level. However, in production environments, where users 
may configure various values for `spark.executor.cores`, it seems more reasonable 
to default the executor's CPU limit to its request, making 
resource usage behavior more predictable.
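   For reference, a minimal sketch of the namespace-level `LimitRange` mechanism 
being discussed, following the K8s docs linked above (the namespace name and 
CPU values here are hypothetical, chosen only for illustration):

   ```yaml
   apiVersion: v1
   kind: LimitRange
   metadata:
     name: cpu-defaults
     namespace: spark-jobs   # hypothetical namespace
   spec:
     limits:
     - type: Container
       default:              # limit applied when a container sets none
         cpu: "2"
       defaultRequest:       # request applied when a container sets none
         cpu: "1"
   ```

   With such an object in place, the admission controller injects these values 
into any executor Pod that omits its own request/limit, which is the existing 
behavior the PR's default-limit change would interact with.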


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

