Github user hthuynh2 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21729#discussion_r201411743
  
    --- Diff: 
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
    @@ -87,7 +87,7 @@ private[spark] class TaskSetManager(
       // Set the coresponding index of Boolean var when the task killed by 
other attempt tasks,
       // this happened while we set the `spark.speculation` to true. The task 
killed by others
       // should not resubmit while executor lost.
    -  private val killedByOtherAttempt: Array[Boolean] = new 
Array[Boolean](numTasks)
    +  private val killedByOtherAttempt = new HashSet[Long]
    --- End diff ---
    
    I think we should use ArrayBuffer[Long] instead of Array[Long], because the number of elements can grow as more attempts are killed.
    Also, I think there is a downside to using an array-like data structure for this variable: lookup takes linear time, and that operation is performed many times when we check whether a task needs to be resubmitted (inside the executorLost method of TSM). This will not matter much if the array is small, but I still think it is something we might want to consider.
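    To illustrate the lookup-cost point above, here is a minimal, hypothetical sketch (not the actual TaskSetManager code; the variable name and task ID are assumptions) showing why a HashSet[Long] is a better fit than an array-like collection for membership checks:
    
    ```scala
    import scala.collection.mutable.HashSet
    
    object KilledAttemptSketch {
      // Tracks task IDs (tids) of attempts killed by another attempt.
      // HashSet gives average O(1) add and contains, versus O(n) for
      // ArrayBuffer.contains, which matters when executorLost checks
      // every task on a lost executor.
      val killedByOtherAttempt = new HashSet[Long]
    
      def main(args: Array[String]): Unit = {
        // Record a kill: O(1) amortized.
        killedByOtherAttempt += 42L
    
        // Membership check, as would happen per task in executorLost: O(1) average.
        println(killedByOtherAttempt.contains(42L)) // true
        println(killedByOtherAttempt.contains(7L))  // false
      }
    }
    ```
    
    With a HashSet the per-task check in executorLost stays constant-time regardless of how many attempts have been killed, which addresses the linear-lookup concern without needing a growable array.
    
    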


---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
