[jira] [Comment Edited] (SPARK-29273) Spark peakExecutionMemory metrics is zero

2019-09-27 Thread angerszhu (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939827#comment-16939827
 ] 

angerszhu edited comment on SPARK-29273 at 9/28/19 3:07 AM:


[~UncleHuang]


{code}
  /**
   * Peak memory used by internal data structures created during shuffles, aggregations and
   * joins. The value of this accumulator should be approximately the sum of the peak sizes
   * across all such data structures created in this task. For SQL jobs, this only tracks all
   * unsafe operators and ExternalSort.
   */
  def peakExecutionMemory: Long = _peakExecutionMemory.sum
{code}


was (Author: angerszhuuu):
[~UncleHuang]
PS: I work for exa spark now.

{code}
  /**
   * Peak memory used by internal data structures created during shuffles, aggregations and
   * joins. The value of this accumulator should be approximately the sum of the peak sizes
   * across all such data structures created in this task. For SQL jobs, this only tracks all
   * unsafe operators and ExternalSort.
   */
  def peakExecutionMemory: Long = _peakExecutionMemory.sum
{code}
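
For reference, here is a minimal sketch (not from the original thread) of how one could watch this metric from a {{SparkListener}} in a Spark 2.4.x application. The object name {{PeakMemoryCheck}} and the sample jobs are made up for illustration; {{onTaskEnd}} and {{TaskMetrics.peakExecutionMemory}} are the standard APIs. A plain RDD map job should log 0, while a shuffle aggregation should log a non-zero peak:
{code}
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}
import org.apache.spark.sql.SparkSession

// Illustrative only: log peakExecutionMemory for every finished task so we
// can see which jobs actually feed the accumulator.
object PeakMemoryCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("peak-memory-check").getOrCreate()

    spark.sparkContext.addSparkListener(new SparkListener {
      override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
        // taskMetrics may be null for tasks that failed before metrics were set.
        Option(taskEnd.taskMetrics).foreach { m =>
          println(s"stage=${taskEnd.stageId} peakExecutionMemory=${m.peakExecutionMemory}")
        }
      }
    })

    // Map-only RDD job: no shuffle/aggregation data structures are allocated,
    // so the listener should print 0 -- the symptom reported in this issue.
    spark.sparkContext.parallelize(1 to 1000000).map(_ * 2).count()

    // Shuffle aggregation: runs through unsafe operators, so a non-zero
    // peak should be reported for its tasks.
    spark.range(0, 1000000L).selectExpr("id % 100 AS k").groupBy("k").count().collect()

    spark.stop()
  }
}
{code}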

> Spark peakExecutionMemory metrics is zero
> -----------------------------------------
>
> Key: SPARK-29273
> URL: https://issues.apache.org/jira/browse/SPARK-29273
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.4.3
> Environment: hadoop 2.7.3
> spark 2.4.3
> jdk 1.8.0_60
>Reporter: huangweiyi
>Priority: Major
>
> With Spark 2.4.3 in our production environment, I want to get the 
> peakExecutionMemory value that is exposed by TaskMetrics, but I always 
> get zero.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org


