[ https://issues.apache.org/jira/browse/SPARK-20801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16016813#comment-16016813 ]

Apache Spark commented on SPARK-20801:
--------------------------------------

User 'jinxing64' has created a pull request for this issue:
https://github.com/apache/spark/pull/18031

> Store accurate size of blocks in MapStatus when it's above threshold.
> ---------------------------------------------------------------------
>
>                 Key: SPARK-20801
>                 URL: https://issues.apache.org/jira/browse/SPARK-20801
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.1.1
>            Reporter: jin xing
>
> Currently, when the number of reducers is above 2000, HighlyCompressedMapStatus is
> used to store block sizes. HighlyCompressedMapStatus stores only the average size
> of the non-empty blocks, so the size of an unusually large block is badly
> underestimated, which is bad for memory control when fetching shuffle blocks.
> It makes sense to store the accurate size of a block when it is above a
> threshold.
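
For illustration, here is a minimal Scala sketch of the idea (this is not the code in the pull request; the names and the 100 MB cutoff are hypothetical): exact sizes are kept only for blocks above the threshold, while everything else falls back to the average.

import scala.collection.mutable

object AccurateSizeSketch {
  // Hypothetical cutoff above which a block's exact size is kept.
  val accurateThreshold: Long = 100L * 1024 * 1024  // 100 MB

  // Returns the average size of the small non-empty blocks plus a map
  // of reduceId -> exact size for the blocks above the threshold.
  def compress(sizes: Array[Long]): (Long, Map[Int, Long]) = {
    val hugeBlocks = mutable.Map.empty[Int, Long]
    var smallTotal = 0L
    var smallNonEmpty = 0
    sizes.zipWithIndex.foreach { case (size, i) =>
      if (size > accurateThreshold) {
        hugeBlocks(i) = size          // outlier: store the exact size
      } else if (size > 0) {
        smallTotal += size            // small block: fold into the average
        smallNonEmpty += 1
      }
    }
    val avg = if (smallNonEmpty > 0) smallTotal / smallNonEmpty else 0L
    (avg, hugeBlocks.toMap)
  }

  // Lookup mirrors the shape of MapStatus.getSizeForBlock: exact size
  // for tracked outliers, average for everything else.
  def getSizeForBlock(reduceId: Int, avg: Long, huge: Map[Int, Long]): Long =
    huge.getOrElse(reduceId, avg)
}

With this layout the common case still costs O(1) memory per map status, and only the few outlier blocks pay for an exact entry, so the reducer side can reserve memory based on a realistic estimate for the large blocks.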



