Github user markhamstra commented on a diff in the pull request:

    https://github.com/apache/spark/pull/159#discussion_r10684023
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
    @@ -59,6 +59,15 @@ private[spark] class TaskSetManager(
       // CPUs to request per task
       val CPUS_PER_TASK = conf.getInt("spark.task.cpus", 1)
     
    +  /*
    +   * Sometimes if an executor is dead or in an otherwise invalid state, the driver
    +   * does not realize right away leading to repeated task failures. If enabled,
    +   * this temporarily prevents a task from re-launching on an executor where
    +   * it just failed.
    +   */
    +  private[this] val EXECUTOR_TASK_BLACKLIST_TIMEOUT =
    +    conf.getLong("spark.task.executorBlacklistTimeout", 0L)
    --- End diff --
    
    Converting the entire codebase to `private[this]` might not be an 
unreasonable thing to do.  Only in very rare instances does `private[this]` not 
work where plain `private` does (it wouldn't surprise me if every instance of 
`private` in Spark could safely be made `private[this]`), and it can provide 
some performance benefits (not everything that you would expect to get inlined 
away actually does), so arguably `private[this]` should be the default instead 
of `private`.
    
    If someone has a few spare cycles, doing the search-and-replace and running 
the spark-perf benchmarks could at least give us some idea of how much we are 
giving away by using the slightly simpler syntax.  Digging out exactly when 
`private[this]` makes a difference would be more work, and maybe not worth 
doing.
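    For anyone unfamiliar with the distinction: plain `private` is class-private 
(any instance of the class can read another instance's field, so the compiler 
emits an accessor method), while `private[this]` is object-private (the field is 
only visible on `this`, so the compiler can use direct field access).  A minimal 
sketch, with hypothetical class and field names:

    ```scala
    class Counter {
      // Class-private: visible from any Counter instance, so the compiler
      // must generate an accessor method for the field.
      private val step = 1

      // Object-private: visible only on `this` instance, so the compiler
      // can access the backing field directly, without an accessor.
      private[this] val offset = 10

      def next(n: Int): Int = n + step + offset

      // Legal only because `step` is plain `private`; declaring `step` as
      // `private[this]` would make `other.step` a compile error.
      def sameStep(other: Counter): Boolean = step == other.step
    }
    ```

    The cross-instance access in `sameStep` is the main case where plain 
`private` is actually required; everything else could in principle be 
`private[this]`.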

