mridulm edited a comment on pull request #32733:
URL: https://github.com/apache/spark/pull/32733#issuecomment-852688429


   I am missing something here: if a block is smaller than
`spark.shuffle.accurateBlockThreshold`, it is recorded as a small block;
otherwise it is marked as a huge block. The point of
`spark.shuffle.accurateBlockThreshold` is that it is not very high, so it
should not cause OOM - are you configuring it to a very high value?
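
   To make that classification concrete, here is a minimal, self-contained
sketch - assumed names (`BlockClassificationSketch`, `classify`) and
deliberately simplified logic, not the actual `HighlyCompressedMapStatus`
code, which also handles compression and skew:

```scala
object BlockClassificationSketch {
  // Stand-in for spark.shuffle.accurateBlockThreshold (default 100 MiB).
  val accurateBlockThreshold: Long = 100L * 1024 * 1024

  // Blocks below the threshold collapse into one average size; blocks at or
  // above it keep their exact sizes (the "huge" blocks).
  def classify(blockSizes: Array[Long]): (Long, Map[Int, Long]) = {
    val huge = blockSizes.zipWithIndex.collect {
      case (size, i) if size >= accurateBlockThreshold => i -> size
    }.toMap
    val small = blockSizes.filter(s => s > 0 && s < accurateBlockThreshold)
    val avgSmall = if (small.nonEmpty) small.sum / small.length else 0L
    (avgSmall, huge)
  }

  def main(args: Array[String]): Unit = {
    val sizes = Array(1L << 20, 2L << 20, 512L << 20) // 1 MiB, 2 MiB, 512 MiB
    val (avg, huge) = classify(sizes)
    println(s"avg small block size: $avg bytes; huge blocks (index -> size): $huge")
  }
}
```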
   Note that this typically gets triggered when you have > 2k partitions - so
the benefit of using `HighlyCompressedMapStatus` is to prevent the other
issues that more accurate tracking runs into for a very large number of
partitions.
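
   The > 2k trigger, again as a toy sketch rather than Spark's actual
`MapStatus.apply` - the real gate is the config
`spark.shuffle.minNumPartitionsToHighlyCompress` (default 2000):

```scala
object MapStatusChoiceSketch {
  // Stand-in for spark.shuffle.minNumPartitionsToHighlyCompress (default 2000).
  val minPartitionsToHighlyCompress = 2000

  // Which map-status representation a shuffle with this many reduce
  // partitions would use under the simplified rule above.
  def statusKind(numPartitions: Int): String =
    if (numPartitions > minPartitionsToHighlyCompress) "HighlyCompressedMapStatus"
    else "CompressedMapStatus"

  def main(args: Array[String]): Unit = {
    println(statusKind(1000)) // CompressedMapStatus
    println(statusKind(5000)) // HighlyCompressedMapStatus
  }
}
```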

