wypoon commented on a change in pull request #23767: [SPARK-26329][CORE][WIP] Faster polling of executor memory metrics.
URL: https://github.com/apache/spark/pull/23767#discussion_r257418114
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/SparkContext.scala
 ##########
 @@ -2380,10 +2381,14 @@ class SparkContext(config: SparkConf) extends Logging {
 
   /** Reports heartbeat metrics for the driver. */
   private def reportHeartBeat(): Unit = {
-    val driverUpdates = _heartbeater.getCurrentMetrics()
+    val currentMetrics = ExecutorMetrics.getCurrentMetrics(env.memoryManager)
+    val driverUpdates = new HashMap[(Int, Int), ExecutorMetrics]
+    // In the driver, we do not track per-stage metrics, so use a dummy stage
+    // for the key
+    driverUpdates.put((-1, -1), new ExecutorMetrics(currentMetrics))
 
 Review comment:
  The driver case is a bit different: we do not know which stage(s) are running when we poll the metrics, so we use a dummy key to keep the format of the update the same, a Map[(Int, Int), ExecutorMetrics]. In the EventLoggingListener, in onExecutorMetricsUpdate, if an update came from the driver, we record those peaks for all active stages. Thus, we do get per-stage metrics from the driver in the event logs.
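To illustrate the mechanism described above, here is a hedged sketch (not the actual Spark source; the object, type alias, and method names are hypothetical) of how a listener could expand a driver update keyed by the dummy stage `(-1, -1)` into per-stage peaks for all active stages:

```scala
import scala.collection.mutable

// Hypothetical illustration of the dummy-key expansion described above;
// simplified stand-ins, not Spark's real EventLoggingListener.
object DriverMetricsExpansion {
  // Simplified stand-in for ExecutorMetrics: one peak value per metric name.
  type Metrics = Map[String, Long]

  // The dummy key the driver uses, since it has no per-stage tracking.
  val DummyStageKey: (Int, Int) = (-1, -1)

  // Per-stage peaks accumulated so far; key is (stageId, stageAttemptId).
  val stagePeaks = mutable.Map.empty[(Int, Int), Metrics]

  // Merge two sets of peaks by taking the element-wise maximum.
  private def mergePeaks(a: Metrics, b: Metrics): Metrics =
    (a.keySet ++ b.keySet).map { k =>
      k -> math.max(a.getOrElse(k, 0L), b.getOrElse(k, 0L))
    }.toMap

  private def record(key: (Int, Int), m: Metrics): Unit =
    stagePeaks(key) = mergePeaks(stagePeaks.getOrElse(key, Map.empty), m)

  // On an update: a driver update (dummy key) is recorded against every
  // active stage; an executor update goes to its own stage key.
  def onUpdate(updates: Map[(Int, Int), Metrics],
               activeStages: Seq[(Int, Int)]): Unit =
    updates.foreach {
      case (DummyStageKey, m) => activeStages.foreach(record(_, m))
      case (stageKey, m)      => record(stageKey, m)
    }
}
```

For example, a driver update `Map((-1, -1) -> peaks)` arriving while stages `(0, 0)` and `(1, 0)` are active would fold `peaks` into both stages' entries, which matches the "record those peaks for all active stages" behavior described above.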

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
