cfmcgrady commented on PR #2358:
URL: 
https://github.com/apache/incubator-celeborn/pull/2358#issuecomment-1998910249

   > small partition data triggered too many pushes therefore high cost of 
compression
   
   I also encountered the same issue recently.
   
   I know this feature is optional.
   The question is: if a Spark application has enough memory to buffer push 
data, why not simply set a larger push threshold, instead of adapting the 
threshold from 64k up to `executorMemory * 0.4`, which could increase the risk 
of hitting an OOM error?
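
   To make the concern concrete, here is a small back-of-the-envelope sketch (the executor size and the 0.4 fraction are illustrative assumptions, not values taken from the PR) comparing the fixed 64k threshold with one derived as `executorMemory * 0.4`:

   ```python
   # Hypothetical illustration of the OOM concern: how much data could be
   # buffered per push under a fixed threshold vs. an adaptive one.
   def adaptive_threshold(executor_memory_bytes: int, fraction: float = 0.4) -> int:
       """Threshold derived as a fraction of executor memory (assumed policy)."""
       return int(executor_memory_bytes * fraction)

   FIXED_THRESHOLD = 64 * 1024          # 64 KiB, the fixed default mentioned above

   executor_memory = 4 * 1024 ** 3      # assume a 4 GiB executor
   adaptive = adaptive_threshold(executor_memory)

   print(FIXED_THRESHOLD)               # 65536 bytes
   print(adaptive)                      # 1717986918 bytes, roughly 1.6 GiB
   ```

   With numbers like these, a single task buffering ~1.6 GiB before pushing can plausibly exhaust executor memory, which is why a user-chosen larger fixed threshold may be the safer knob.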


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
