Github user kayousterhout commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14079#discussion_r86473290
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/ExecutorFailuresInTaskSet.scala ---
    @@ -25,26 +25,30 @@ import scala.collection.mutable.HashMap
     private[scheduler] class ExecutorFailuresInTaskSet(val node: String) {
       /**
        * Mapping from index of the tasks in the taskset, to the number of times it has failed on this
    -   * executor.
    +   * executor and the expiry time.
        */
    -  val taskToFailureCount = HashMap[Int, Int]()
    +  val taskToFailureCountAndExpiryTime = HashMap[Int, (Int, Long)]()
    --- End diff --
    
    Given that the expiry time is only used in the BlacklistTracker, I think it would be better to store the failure time here, so that all of the logic for handling the expiration time can be encapsulated in the BlacklistTracker (and that also makes it MARGINALLY clearer that the task-set isn't actually doing any expiration).
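
    For illustration, a minimal Scala sketch of that suggestion: the task-set records the raw failure time, and the BlacklistTracker owns the expiry computation. The field name taskToFailureCountAndFailureTime and the updateWithFailure helper below are hypothetical, not the actual patch:

        package org.apache.spark.scheduler

        import scala.collection.mutable.HashMap

        // Hypothetical sketch: store the most recent failure time rather than
        // the expiry time, so this class stays ignorant of blacklist timeouts.
        private[scheduler] class ExecutorFailuresInTaskSet(val node: String) {
          /**
           * Mapping from index of the task in the taskset, to the number of times
           * it has failed on this executor and the time of the most recent failure.
           */
          val taskToFailureCountAndFailureTime = HashMap[Int, (Int, Long)]()

          // Illustrative helper: bump the failure count and record when it happened.
          def updateWithFailure(taskIndex: Int, failureTimeMs: Long): Unit = {
            val (prevCount, _) =
              taskToFailureCountAndFailureTime.getOrElse(taskIndex, (0, 0L))
            taskToFailureCountAndFailureTime(taskIndex) = (prevCount + 1, failureTimeMs)
          }
        }

        // The BlacklistTracker would then compute the expiry itself, e.g.
        //   val expiryTime = failureTime + BLACKLIST_TIMEOUT_MILLIS
        // keeping all expiration logic in one place.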

