Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/10595#discussion_r48830663
--- Diff: streaming/src/main/scala/org/apache/spark/streaming/scheduler/JobSet.scala ---
@@ -59,17 +59,15 @@ case class JobSet(
   // Time taken to process all the jobs from the time they were submitted
   // (i.e. including the time they wait in the streaming scheduler queue)
-  def totalDelay: Long = {
-    processingEndTime - time.milliseconds
-  }
+  def totalDelay: Long = processingEndTime - time.milliseconds
 
   def toBatchInfo: BatchInfo = {
     BatchInfo(
       time,
       streamIdToInputInfo,
       submissionTime,
-      if (processingStartTime >= 0) Some(processingStartTime) else None,
-      if (processingEndTime >= 0) Some(processingEndTime) else None,
+      if (hasStarted) Some(processingStartTime) else None,
--- End diff --
Tested it locally (and can't wait to see the results from Jenkins).
The current code *overly* assumes that the times can be `0` (which they
never can be). It is also clearer that once `hasCompleted` holds,
`processingEndTime` is already set. The code is over-complicated as it
stands, IMHO.
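
For illustration, here is a minimal self-contained sketch of the idea, not the actual Spark `JobSet` code: the `JobSetSketch` name, the `startTimeOption`/`endTimeOption` helpers, and the exact `> 0` threshold are assumptions for this example. The point is that named predicates like `hasStarted`/`hasCompleted` state intent once, instead of repeating raw `>= 0` sentinel checks at every call site.

```scala
// Sketch only (hypothetical class, not org.apache.spark.streaming.scheduler.JobSet):
// names the sentinel checks as predicates instead of repeating comparisons.
case class JobSetSketch(
    submissionTime: Long,
    processingStartTime: Long = -1L, // -1 until the first job starts running
    processingEndTime: Long = -1L    // -1 until the last job finishes
) {
  // Named predicates replace the raw sentinel comparisons at call sites
  def hasStarted: Boolean = processingStartTime > 0
  def hasCompleted: Boolean = processingEndTime > 0

  // Call sites now read as intent, not arithmetic on sentinel values
  def startTimeOption: Option[Long] =
    if (hasStarted) Some(processingStartTime) else None
  def endTimeOption: Option[Long] =
    if (hasCompleted) Some(processingEndTime) else None
}

object JobSetSketchDemo extends App {
  val pending = JobSetSketch(submissionTime = 1000L)
  val done = JobSetSketch(
    submissionTime = 1000L,
    processingStartTime = 1005L,
    processingEndTime = 1042L)
  println(pending.startTimeOption) // None
  println(done.endTimeOption)      // Some(1042)
}
```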