Github user JoshRosen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15986#discussion_r89256962
  
    --- Diff: core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala ---
    @@ -274,4 +274,30 @@ class TaskSchedulerImplSuite extends SparkFunSuite with LocalSparkContext with L
         assert("executor1" === taskDescriptions3(0).executorId)
       }
     
    +  test("if an executor is lost then state for tasks running on that 
executor is cleaned up") {
    +    sc = new SparkContext("local", "TaskSchedulerImplSuite")
    +    val taskScheduler = new TaskSchedulerImpl(sc)
    +    taskScheduler.initialize(new FakeSchedulerBackend)
    +    // Need to initialize a DAGScheduler for the taskScheduler to use for callbacks.
    +    new DAGScheduler(sc, taskScheduler) {
    +      override def taskStarted(task: Task[_], taskInfo: TaskInfo) {}
    +      override def executorAdded(execId: String, host: String) {}
    +    }
    +
    +    val e0Offers = Seq(new WorkerOffer("executor0", "host0", 1))
    +    val attempt1 = FakeTask.createTaskSet(1)
    +
    +    // submit attempt 1, offer resources, task gets scheduled
    +    taskScheduler.submitTasks(attempt1)
    +    val taskDescriptions = taskScheduler.resourceOffers(e0Offers).flatten
    +    assert(1 === taskDescriptions.length)
    +
    +    // mark executor0 as dead
    +    taskScheduler.executorLost("executor0", SlaveLost())
    +
    +    // Check that state associated with the lost task attempt is cleaned up:
    +    assert(taskScheduler.taskIdToExecutorId.isEmpty)
    --- End diff --
    
    I suppose that we should also strengthen the assertions in the existing tests to check that these maps are updated following task successes, but this may be tricky given that the existing tests aren't exercising the `statusUpdate` path. Instead, we may have to test this more end-to-end by asserting that these maps always become empty once all jobs and tasks are done.
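    For reference, a rough sketch of what that end-to-end-style check could look like, assuming the test can drive `statusUpdate` directly, that an empty `ByteBuffer` is an acceptable serialized result here, and that `taskIdToTaskSetManager` is visible to the test (none of which I've verified against this branch):
    
    ```scala
    import java.nio.ByteBuffer
    
    import org.apache.spark.TaskState
    
    // Drive the success path through statusUpdate for the task scheduled above.
    // (Assumes an empty buffer is tolerated as the serialized task result.)
    val taskId = taskDescriptions(0).taskId
    taskScheduler.statusUpdate(taskId, TaskState.FINISHED, ByteBuffer.allocate(0))
    
    // Once all jobs and tasks are done, none of the scheduler's per-task
    // bookkeeping maps should retain entries.
    assert(taskScheduler.taskIdToExecutorId.isEmpty)
    assert(taskScheduler.taskIdToTaskSetManager.isEmpty)
    ```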


