[ https://issues.apache.org/jira/browse/SPARK-29273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939827#comment-16939827 ]

angerszhu commented on SPARK-29273:
-----------------------------------

[~UncleHuang]

{code}
  /**
   * Peak memory used by internal data structures created during shuffles, aggregations and
   * joins. The value of this accumulator should be approximately the sum of the peak sizes
   * across all such data structures created in this task. For SQL jobs, this only tracks all
   * unsafe operators and ExternalSort.
   */
  def peakExecutionMemory: Long = _peakExecutionMemory.sum
{code}

> Spark peakExecutionMemory metrics is zero
> -----------------------------------------
>
>                 Key: SPARK-29273
>                 URL: https://issues.apache.org/jira/browse/SPARK-29273
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.4.3
>         Environment: hadoop 2.7.3
> spark 2.4.3
> jdk 1.8.0_60
>            Reporter: huangweiyi
>            Priority: Major
>
> With Spark 2.4.3 in our production environment, I want to get the 
> peakExecutionMemory which is exposed by TaskMetrics, but I always get a 
> zero value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
