GitHub user kayousterhout commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11996#discussion_r64655524
  
    --- Diff: core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala ---
    @@ -789,6 +791,51 @@ class TaskSetManagerSuite extends SparkFunSuite with LocalSparkContext with Logg
         assert(TaskLocation("executor_host1_3") === ExecutorCacheTaskLocation("host1", "3"))
       }
     
    +  test("Kill other task attempts when one attempt belonging to the same 
task succeeds") {
    +    sc = new SparkContext("local", "test")
    +    val sched = new FakeTaskScheduler(sc, ("exec1", "host1"), ("exec2", "host2"))
    +    val taskSet = FakeTask.createTaskSet(4)
    +    val manager = new TaskSetManager(sched, taskSet, MAX_TASK_FAILURES)
    +    val accumUpdatesByTask: Array[Seq[AccumulableInfo]] = taskSet.tasks.map { task =>
    +      task.initialAccumulators.map { a => a.toInfo(Some(0L), None) }
    +    }
    +    // Offer resources for 4 tasks to start
    +    for ((k, v) <- List(
    +        "exec1" -> "host1",
    +        "exec1" -> "host1",
    +        "exec2" -> "host2",
    +        "exec2" -> "host2")) {
    +      val taskOption = manager.resourceOffer(k, v, NO_PREF)
    +      assert(taskOption.isDefined)
    +      val task = taskOption.get
    +      assert(task.executorId === k)
    +    }
    +    assert(sched.startedTasks.toSet === Set(0, 1, 2, 3))
    +    // Complete the 3 tasks and leave 1 task in running
    +    for (id <- Set(0, 1, 2)) {
    +      manager.handleSuccessfulTask(id, createTaskResult(id, accumUpdatesByTask(id)))
    +      assert(sched.endedTasks(id) === Success)
    +    }
    +
    +    // Wait for the threshold time to start speculative attempt for the running task
    +    Thread.sleep(100)
    --- End diff --
    
    This does seem a little trickier than I'd anticipated. I think the best
    thing to do is: (1) change Schedulable's checkSpeculatableTasks to take
    minTimeToSpeculation as a required argument. This looks pretty simple to
    do -- you just need to change two implementations, and Schedulable is a
    private Spark class, so we're not changing a public or developer API.
    (2) Add a MIN_TIME_TO_SPECULATION constant to the TaskSchedulerImpl
    object, and pass that value in when TaskSchedulerImpl calls
    checkSpeculatableTasks.
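
    Roughly, here's a self-contained sketch of what (1) and (2) could look
    like. This is just an illustration -- the surrounding members and the
    constructor parameter here are simplified stand-ins, not the real Spark
    signatures:

        // Sketch only: simplified stand-ins for the real Spark classes.
        trait Schedulable {
          // (1) The speculation threshold is now a required argument, so a
          // test can pass 0 instead of sleeping past a hard-coded threshold.
          def checkSpeculatableTasks(minTimeToSpeculation: Int): Boolean
        }

        object TaskSchedulerImpl {
          // (2) The production threshold lives in one constant...
          val MIN_TIME_TO_SPECULATION = 100
        }

        class TaskSchedulerImpl(rootPool: Schedulable) {
          def checkSpeculatableTasks(): Boolean =
            // ...and is passed through at the production call site.
            rootPool.checkSpeculatableTasks(TaskSchedulerImpl.MIN_TIME_TO_SPECULATION)
        }

    With that in place, the test above could call
    manager.checkSpeculatableTasks(0) directly instead of Thread.sleep(100).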
    
    I think over time we should have more tests to verify the speculation
    behavior, so this relatively small change to make this code path more
    testable seems worthwhile.

