deshanxiao commented on a change in pull request #23637:
[SPARK-26714][CORE][WEBUI] Show 0 partition job in WebUI
URL: https://github.com/apache/spark/pull/23637#discussion_r251268717
##########
File path: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
##########
@@ -693,6 +693,10 @@ private[spark] class DAGScheduler(
val jobId = nextJobId.getAndIncrement()
if (partitions.size == 0) {
+ listenerBus.post(
Review comment:
I don't mean that we have to specify exactly the same start/end time. But
when the partition count is zero, the job is never scheduled and returns
immediately, so semantically it would be better to post the events in `runJob`:
```
case scala.util.Success(_) =>
  // post the start/end events here when the partition count is zero
  // post(jobEnd) ...
  logInfo("Job %d finished: %s, took %f s".format
    (waiter.jobId, callSite.shortForm, (System.nanoTime - start) / 1e9))
case scala.util.Failure(exception) =>
  logInfo("Job %d failed: %s, took %f s".format
    (waiter.jobId, callSite.shortForm, (System.nanoTime - start) / 1e9))
  // SPARK-8644: Include user stack trace in exceptions coming from DAGScheduler.
  val callerStackTrace = Thread.currentThread().getStackTrace.tail
  exception.setStackTrace(exception.getStackTrace ++ callerStackTrace)
  throw exception
}
```
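For what it's worth, a rough sketch of what that posting could look like, assuming the existing `listenerBus` and `clock` fields of `DAGScheduler` and the `SparkListenerJobStart` / `SparkListenerJobEnd` events; the exact location and guard are of course up to this PR:
```
// Sketch only: synthesize start/end events for a job with zero partitions,
// since it never goes through the scheduler and would otherwise not show up
// in the Web UI. Assumes this runs inside DAGScheduler, with listenerBus,
// clock, waiter, partitions and properties in scope from runJob.
if (partitions.isEmpty) {
  val time = clock.getTimeMillis()
  listenerBus.post(
    SparkListenerJobStart(waiter.jobId, time, Seq.empty[StageInfo], properties))
  listenerBus.post(SparkListenerJobEnd(waiter.jobId, time, JobSucceeded))
}
```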