Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/20259#discussion_r161956457
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -179,6 +181,7 @@ private[deploy] class Master(
}
persistenceEngine = persistenceEngine_
leaderElectionAgent = leaderElectionAgent_
+ startupTime = System.currentTimeMillis()
--- End diff --
> The Spark master process can become a zombie; we have a background shell
script that automatically restarts the Spark master process to ensure high
availability, but during that restart some applications may fail.
I don't quite understand what you're trying to express, but my guess is that
you have some external script monitoring the availability of the master
process, and if it disappears the script restarts it?
If that's the case, why not record the last restart time in your own
monitoring system? You can get it easily in that "script" instead of leaking
it into Spark's code base, and then you wouldn't need to query the Spark UI
for the timestamp anyway.
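For context, the patch itself only records a wall-clock timestamp when the
master comes up and exposes it for display. A minimal, self-contained sketch
of that idea follows; the object and helper names here are illustrative only,
not Spark's actual classes:

```scala
import java.text.SimpleDateFormat
import java.util.Date

// Sketch: record the time the master (re)starts and format it for a status page.
object StartupTimeSketch {
  // set once when the master finishes startup / leader election
  @volatile var startupTime: Long = 0L

  def markStarted(): Unit = {
    startupTime = System.currentTimeMillis()
  }

  // formatted value a UI page could render
  def startedAt: String =
    new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date(startupTime))

  def main(args: Array[String]): Unit = {
    markStarted()
    println(s"Master started at: $startedAt")
  }
}
```

An external watchdog script could log the same timestamp at the moment it
restarts the master, which is the alternative suggested above.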
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]