CodingCat commented on PR #2358:
URL: 
https://github.com/apache/incubator-celeborn/pull/2358#issuecomment-1998918364

   > If a Spark application has enough memory to buffer push data, why not 
simply set a larger push threshold instead of adapting the threshold from 64k 
to executorMemory * 0.4, which could increase the risk of encountering an OOM 
error?
   
   Sorry, I am a bit confused by your question... If a Spark application has 
enough memory to buffer push data and we can "simply set a larger push 
threshold", why would we still hit OOM when setting executorMemory * 0.4 as 
the threshold?
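
   To make the trade-off concrete, here is a minimal sketch of the kind of
threshold adaptation under discussion: instead of a fixed 64 KiB push
threshold, derive one from executor memory capped by a fraction (0.4 here).
All class, method, and constant names are illustrative assumptions, not
Celeborn's actual API.

```java
// Hypothetical sketch: adapt the push-data threshold from a fixed 64 KiB
// baseline to a fraction of executor memory. Names are illustrative only.
public class PushThresholdSketch {
    static final long DEFAULT_THRESHOLD = 64 * 1024; // 64 KiB baseline
    static final double MEMORY_FRACTION = 0.4;       // fraction under discussion

    // Take the larger of the fixed default and executorMemory * fraction,
    // so small executors keep the safe 64 KiB floor.
    static long adaptiveThreshold(long executorMemoryBytes) {
        long fromMemory = (long) (executorMemoryBytes * MEMORY_FRACTION);
        return Math.max(DEFAULT_THRESHOLD, fromMemory);
    }

    public static void main(String[] args) {
        // 1 GiB executor -> threshold grows well past 64 KiB
        System.out.println(adaptiveThreshold(1024L * 1024 * 1024));
    }
}
```

Whether this risks OOM depends on what else shares that 40% of executor
memory, which is exactly the point being debated above.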
-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
