Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/19399#discussion_r142959826
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -850,6 +869,18 @@ private[history] class AppListingListener(log:
FileStatus, clock: Clock) extends
fileSize)
}
+  def applicationStatus: Option[String] = {
+    if (startTime.getTime == -1) {
+      Some("<Not Started>")
+    } else if (endTime.getTime == -1) {
+      Some("<In Progress>")
+    } else if (jobToStatus.isEmpty || jobToStatus.exists(_._2 != "Succeeded")) {
--- End diff ---
Also, I'm not sure this criterion is even accurate. You could have a
successful app that doesn't run any jobs -- e.g., it's kicked off by cron
regularly, checks some metadata to see whether any work needs to be done,
and if not, it just quits. It doesn't seem right to call that "failed".
"In Progress" is also tricky, as the app may have been killed without endTime
getting written.
Anyway, I guess this is OK, just pointing out some reasons why this can be
misleading. In particular, I think it would be nicer if Spark actually logged
whether or not the app was successful.
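To make the point concrete, here is a hedged sketch of the alternative criterion the comment argues for: an app with a written endTime but no jobs is treated as completed rather than failed. This is a standalone illustration, not Spark's actual code -- the helper shape, the `Long` timestamps, and the `"<Completed>"` label are assumptions; only the names `applicationStatus` and `jobToStatus` mirror the diff:

```scala
// Hypothetical standalone helper mirroring the diff's applicationStatus.
// Assumption: an app with no jobs but a recorded endTime is treated as
// completed, not failed -- e.g. a cron-launched app that checks some
// metadata, finds no work, and exits cleanly.
object StatusSketch {
  def applicationStatus(
      startTime: Long,
      endTime: Long,
      jobToStatus: Map[Int, String]): Option[String] = {
    if (startTime == -1L) {
      Some("<Not Started>")
    } else if (endTime == -1L) {
      // Caveat from the review: the app may have been killed before
      // endTime was written, so "<In Progress>" can be stale.
      Some("<In Progress>")
    } else if (jobToStatus.values.exists(_ != "Succeeded")) {
      Some("<Failed>")
    } else {
      // Covers both "all jobs succeeded" and "no jobs at all".
      Some("<Completed>")
    }
  }
}
```

As the comment notes, even this version is only a heuristic; the reliable fix would be for Spark to log the app's success or failure explicitly in the event log.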
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]