mridulm commented on PR #45240:
URL: https://github.com/apache/spark/pull/45240#issuecomment-1980260288

   I would like to understand the use case better here - it is still unclear to me what characteristics you are aiming for with this PR.
   
   Reduction in OOMs is mentioned [as a use case](https://github.com/apache/spark/pull/45240#issuecomment-1976891862) - but overhead memory does not affect that; heap memory does.
   
   > The reason why we don't explicitly set the memory overhead is because we 
could accidentally be reducing the overall memory the user has access to.
   
   Any increase in overhead memory implies a reduction in the heap memory available to the user when the container memory has an upper limit and is already at that limit (if container memory is not capped, there is no impact on the memory available to the user - heap and overhead can be tuned independently).
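   To make the tradeoff concrete, here is a rough sketch of how the requested container size relates to heap and overhead (the 0.10 factor and 384 MiB floor are the current defaults; the specific numbers are purely illustrative):
   
   ```scala
   // Rough sketch: containerMemory = executor heap + overhead, where overhead is
   // an explicit spark.executor.memoryOverhead if set, otherwise
   // max(memoryOverheadFactor * executorMemory, minimum overhead).
   val executorMemoryMiB    = 8192L   // spark.executor.memory = 8g (illustrative)
   val memoryOverheadFactor = 0.10    // spark.executor.memoryOverheadFactor default
   val minOverheadMiB       = 384L    // current hard-coded minimum overhead
   
   val overheadMiB  = math.max((memoryOverheadFactor * executorMemoryMiB).toLong, minOverheadMiB)
   val containerMiB = executorMemoryMiB + overheadMiB
   // => 8192 + 819 = 9011 MiB requested per executor.
   
   // If the cluster caps containers at that same 9011 MiB, raising the overhead
   // floor to 1024 MiB forces the heap down to ~7987 MiB to stay under the cap -
   // the extra overhead comes directly out of the heap available to the user.
   ```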
   
   The reason behind the question is that I am trying to understand whether the default minimum (which was estimated quite a while ago) needs to be revisited - and understanding why your deployment needs a higher minimum will help.

