[ https://issues.apache.org/jira/browse/SPARK-20801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wenchen Fan resolved SPARK-20801.
---------------------------------
       Resolution: Fixed
    Fix Version/s: 2.2.0

Issue resolved by pull request 18031
[https://github.com/apache/spark/pull/18031]

> Store accurate size of blocks in MapStatus when it's above threshold.
> ---------------------------------------------------------------------
>
>                 Key: SPARK-20801
>                 URL: https://issues.apache.org/jira/browse/SPARK-20801
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>    Affects Versions: 2.1.1
>            Reporter: jin xing
>             Fix For: 2.2.0
>
>
> Currently, when the number of reducers is above 2000, HighlyCompressedMapStatus is
> used to store the sizes of blocks. HighlyCompressedMapStatus stores only the
> average size of the non-empty blocks, which is not good for memory control when we
> fetch shuffle blocks. It makes sense to store the accurate size of a block when it
> is above a threshold.
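
The idea can be sketched as follows. This is a minimal, hypothetical illustration of the approach described above, not Spark's actual implementation: the class name, constructor, and the fixed threshold are all assumptions made for the sketch; the real HighlyCompressedMapStatus in Spark Core differs in its fields and encoding.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the SPARK-20801 idea: keep only an average size
// for typical shuffle blocks, but record exact sizes for blocks whose size
// exceeds a threshold, so the reducer can budget memory for the big ones.
// Names and the threshold parameter are illustrative, not Spark's API.
class CompressedSizes {
    private final long avgSize;                         // average over non-empty blocks
    private final Map<Integer, Long> hugeBlockSizes = new HashMap<>();

    CompressedSizes(long[] sizes, long threshold) {
        long total = 0;
        int nonEmpty = 0;
        for (long s : sizes) {
            if (s > 0) { total += s; nonEmpty++; }
        }
        avgSize = nonEmpty > 0 ? total / nonEmpty : 0;
        // Record the exact size only for blocks above the threshold.
        for (int i = 0; i < sizes.length; i++) {
            if (sizes[i] > threshold) {
                hugeBlockSizes.put(i, sizes[i]);
            }
        }
    }

    // Exact size for huge blocks; average as an estimate for the rest.
    long getSizeForBlock(int reduceId) {
        Long exact = hugeBlockSizes.get(reduceId);
        return exact != null ? exact : avgSize;
    }
}
```

With only the average stored, one 5000-byte block among 100-byte blocks would be reported far too small; tracking it exactly avoids under-reserving memory when fetching it.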



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
