Ngone51 commented on a change in pull request #25943:
[WIP][SPARK-29261][SQL][CORE] Support recover live entities from KVStore for
(SQL)AppStatusListener
URL: https://github.com/apache/spark/pull/25943#discussion_r335526820
##########
File path: core/src/main/scala/org/apache/spark/status/storeTypes.scala
##########
@@ -76,6 +109,29 @@ private[spark] class JobDataWrapper(
@JsonIgnore @KVIndex("completionTime")
private def completionTime: Long =
info.completionTime.map(_.getTime).getOrElse(-1L)
+
+ def toLiveJob: LiveJob = {
Review comment:
With the current logic we'd write a live job out to the KVStore whenever it needs
to be updated. It doesn't write on every task completion, but only when the
configured `liveUpdatePeriodNs` has elapsed since the last write, or when a
"last" event (e.g. job/stage end) forces a flush.
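For illustration, the throttling described above might look roughly like the
sketch below (the names `LiveEntitySketch`, `lastWriteTime` and `write` are my
own placeholders, not the actual `AppStatusListener` code):

```scala
// Rough sketch of the periodic live-entity write, using hypothetical
// field/method names; the real AppStatusListener logic may differ.
abstract class LiveEntitySketch {
  var lastWriteTime: Long = -1L
  def write(now: Long): Unit  // persist current state into the KVStore
}

class ThrottledUpdater(liveUpdatePeriodNs: Long) {
  // Write only when the configured period has elapsed since the last write,
  // or unconditionally when a "last" event (job/stage end) happens.
  def maybeUpdate(entity: LiveEntitySketch, now: Long, last: Boolean = false): Unit = {
    if (last || (liveUpdatePeriodNs >= 0 && now - entity.lastWriteTime > liveUpdatePeriodNs)) {
      entity.write(now)
      entity.lastWriteTime = now
    }
  }
}
```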
But as you mentioned, this could require a lot of memory and put pressure on
snapshotting.
Also, can this info be lossy? A user may find that a job consisting of X tasks
shows X + m finished tasks, where the extra m tasks are duplicate completed
indices produced when speculative execution is enabled.
We could instead use counters that are known to be accurate, e.g. `numTasks`,
`activeTasks`, `completedTasks`, `failedTasks`, to calculate the correct
`completedIndices.size` / `completedStages.size` when a job finishes.
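A rough sketch of what I mean, assuming `completedTasks` may double-count
speculative attempts while `numTasks` is the number of distinct partition
indices (hypothetical names mirroring `JobData`, not the PR's actual code):

```scala
// Hypothetical counters mirroring JobData; assumption: completedTasks may
// include speculative duplicates, numTasks is the number of distinct indices.
case class JobCountersSketch(
    numTasks: Int,
    activeTasks: Int,
    completedTasks: Int,
    failedTasks: Int)

// Approximate completedIndices.size for a finished job without replaying
// per-task events.
def approxCompletedIndices(c: JobCountersSketch, jobSucceeded: Boolean): Int = {
  if (jobSucceeded) {
    // On success every partition index completed at least once.
    c.numTasks
  } else {
    // Otherwise we can only bound it: no more than the successful completions,
    // and never more than the total number of indices.
    math.min(c.completedTasks, c.numTasks)
  }
}
```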