Github user Ngone51 commented on a diff in the pull request:
https://github.com/apache/spark/pull/21221#discussion_r187824094
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1753,9 +1766,21 @@ class DAGScheduler(
messageScheduler.shutdownNow()
eventProcessLoop.stop()
taskScheduler.stop()
+ heartbeater.stop()
+ }
+
+ /** Reports heartbeat metrics for the driver. */
+ private def reportHeartBeat(): Unit = {
--- End diff ---
> With cluster mode, including YARN, there isn't a local executor, so the metrics for the driver would not be collected.
Yes. But the question is whether we can use the `executor`'s
`getCurrentExecutorMetrics()` method to collect memory metrics for the `driver`.
IIRC, the `driver` does not acquire memory from the execution memory pool, at the very least.
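For illustration only, here is a minimal sketch of how driver-side memory metrics could be gathered directly from the JVM instead of going through the executor path; `DriverMetrics` and `DriverMetricsCollector` are hypothetical names, not part of this PR or of Spark:

```scala
import java.lang.management.ManagementFactory

// Hypothetical sketch: the driver reads its own JVM memory usage rather than
// reusing Executor.getCurrentExecutorMetrics(), since the driver does not
// allocate from the execution memory pool.
case class DriverMetrics(jvmHeapUsed: Long, jvmOffHeapUsed: Long)

object DriverMetricsCollector {
  def collect(): DriverMetrics = {
    val memoryMXBean = ManagementFactory.getMemoryMXBean
    DriverMetrics(
      jvmHeapUsed = memoryMXBean.getHeapMemoryUsage.getUsed,
      jvmOffHeapUsed = memoryMXBean.getNonHeapMemoryUsage.getUsed)
  }
}
```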
---