GitHub user dragos commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7532#discussion_r35086953
  
    --- Diff: core/src/main/scala/org/apache/spark/deploy/master/ApplicationInfo.scala ---
    @@ -43,6 +42,18 @@ private[spark] class ApplicationInfo(
       @transient var endTime: Long = _
       @transient var appSource: ApplicationSource = _
     
    +  // A cap on the number of executors this application can have at any given time.
    +  // By default, this is infinite. Only after the first allocation request is issued
    +  // by the application will this be set to a finite value.
    +  @transient var executorLimit: Int = _
    +
    +  // A set of workers on which this application cannot launch executors.
    +  // This is used to handle kill requests when `spark.executor.cores` is NOT set. In this mode,
    +  // at most one executor from this application can be run on each worker. When an executor is
    +  // killed, its worker is added to the blacklist to avoid having the master immediately schedule
    +  // a new executor on the worker.
    +  @transient private var blacklistedWorkers: mutable.HashSet[String] = _
    --- End diff --
    
    I still think it doesn't need to be *both* a `var` and a mutable collection. You could (with minimal changes) make this an `immutable.HashSet`.
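
    For concreteness, something along these lines would do it (a rough sketch; the helper methods and their names are mine, not from this patch):

    ```scala
    import scala.collection.immutable

    class ApplicationInfo /* ...other fields elided... */ {
      // Keep the `var` but hold an immutable set, so there is only one
      // source of mutation: `+=` on a `var` of an immutable Set simply
      // reassigns the var to a new set.
      @transient private var blacklistedWorkers: immutable.HashSet[String] =
        immutable.HashSet.empty

      // Hypothetical helpers, only to show the update pattern.
      private def blacklistWorker(workerId: String): Unit = {
        blacklistedWorkers += workerId
      }

      private def isWorkerBlacklisted(workerId: String): Boolean =
        blacklistedWorkers.contains(workerId)
    }
    ```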
    
    My 2c.

