Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8466#discussion_r38005894
  
    --- Diff: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
    @@ -260,13 +267,18 @@ class DAGSchedulerSuite
     
       test("zero split job") {
         var numResults = 0
    +    var failureReason: Option[Exception] = None
         val fakeListener = new JobListener() {
    -      override def taskSucceeded(partition: Int, value: Any) = numResults += 1
    -      override def jobFailed(exception: Exception) = throw exception
    +      override def taskSucceeded(partition: Int, value: Any): Unit = numResults += 1
    +      override def jobFailed(exception: Exception): Unit = {
    +        failureReason = Some(exception)
    +      }
         }
         val jobId = submit(new MyRDD(sc, 0, Nil), Array(), listener = fakeListener)
         assert(numResults === 0)
         cancel(jobId)
    +    assert(failureReason.isDefined)
    +    assert(failureReason.get.getMessage() === "Job 0 cancelled ")
    --- End diff ---
    
    this test used to just log an exception on `cancel(jobId)`, and I'm not sure what it was supposed to be testing before.  I made the minimal change here, capturing the exception and asserting on it.  But maybe `cancel(jobId)` should *not* be raising an exception at all?  Is the idea that a job submitted with no partitions should stop immediately?  Then trying to cancel it would just hit [this case](https://github.com/apache/spark/blob/bb1640529725c6c38103b95af004f8bd90eeee5c/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala#L1224) with a harmless `logDebug`.  That suggests we should change [`handleJobSubmitted`](https://github.com/apache/spark/blob/bb1640529725c6c38103b95af004f8bd90eeee5c/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala#L757) to handle empty jobs the same way [`submitMissingTasks`](https://github.com/apache/spark/blob/bb1640529725c6c38103b95af004f8bd90eeee5c/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala#L910) handles stages with no partitions.
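
    FWIW, here is a rough standalone sketch of that idea (not the real DAGScheduler code; `EmptyJobSketch`, `SubmittedJob`, `activeJobs`, etc. are just illustrative stand-ins): if a zero-partition job is never registered as active at submission time, then a later `cancel(jobId)` falls through to the harmless debug-log case instead of surfacing a cancellation exception to the listener.

    ```scala
    object EmptyJobSketch {

      trait JobListener {
        def taskSucceeded(partition: Int, value: Any): Unit
        def jobFailed(exception: Exception): Unit
      }

      final case class SubmittedJob(jobId: Int, partitions: Seq[Int], listener: JobListener)

      private var activeJobs = Map.empty[Int, SubmittedJob]

      // Submission path: a job with no partitions is trivially complete, so it is
      // never registered as active (analogous to how submitMissingTasks marks a
      // stage with no tasks as finished instead of scheduling anything).
      def handleJobSubmitted(job: SubmittedJob): Unit = {
        if (job.partitions.nonEmpty) {
          activeJobs += job.jobId -> job
          // ... submit stages / tasks for the non-empty case ...
        }
      }

      // Cancelling a job that was never registered becomes a no-op debug log,
      // rather than a jobFailed(...) call on the listener.
      def handleJobCancellation(jobId: Int): Unit = {
        activeJobs.get(jobId) match {
          case Some(job) =>
            activeJobs -= jobId
            job.listener.jobFailed(new Exception(s"Job $jobId cancelled "))
          case None =>
            println(s"Trying to cancel unregistered job $jobId")  // stand-in for logDebug
        }
      }
    }
    ```

    With that behaviour, this test would presumably assert that `failureReason` stays empty after `cancel(jobId)`, rather than checking the exception message.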

