Sen Fang created SPARK-10543:
--------------------------------

             Summary: Peak Execution Memory Quantile should be Per-task Basis
                 Key: SPARK-10543
                 URL: https://issues.apache.org/jira/browse/SPARK-10543
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 1.5.0
            Reporter: Sen Fang
            Priority: Minor


Currently the Peak Execution Memory quantiles appear to be cumulative rather than 
on a per-task basis. For example, in one of my jobs the quantile metric shows a 
value of 2TB, while each individual task in the bottom table shows less than 
1GB.

[~andrewor14] In your PR https://github.com/apache/spark/pull/7770, the 
screenshot shows a Max Peak Execution Memory of 792.5KB, while the bottom table 
shows about 50KB per task (unless your workload is skewed).

The fix seems straightforward: use the `update` rather than the `value` from the 
accumulable. I'm happy to provide a PR if people agree this is the right 
behavior.
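
To illustrate the distinction, here is a minimal, self-contained sketch (not 
Spark's actual UI code; it uses a simplified stand-in for `AccumulableInfo`): 
the per-task `update` carries only what that task reported, while `value` is 
the running accumulated total, which is what inflates the quantiles.

{code:scala}
// Simplified stand-in for Spark's AccumulableInfo, for illustration only.
case class AccumulableInfo(name: String, update: Option[String], value: String)

object PeakMemoryQuantileSketch {
  def main(args: Array[String]): Unit = {
    // Three tasks, each peaking below 1GB. `value` is the running total
    // accumulated so far, so it grows far beyond any single task's peak.
    val tasks = Seq(
      AccumulableInfo("peakExecutionMemory", update = Some("800000000"),  value = "800000000"),
      AccumulableInfo("peakExecutionMemory", update = Some("600000000"),  value = "1400000000"),
      AccumulableInfo("peakExecutionMemory", update = Some("900000000"),  value = "2300000000")
    )

    // Current behavior: quantiles fed from the cumulative `value`.
    val fromValue  = tasks.map(_.value.toLong.toDouble)
    // Proposed behavior: quantiles fed from each task's own `update`.
    val fromUpdate = tasks.map(_.update.getOrElse("0").toLong.toDouble)

    println(s"max from value:  ${fromValue.max}")   // 2.3e9, inflated
    println(s"max from update: ${fromUpdate.max}")  // 9.0e8, true per-task peak
  }
}
{code}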



