Github user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7532#discussion_r35017480
  
    --- Diff: core/src/main/scala/org/apache/spark/deploy/master/ApplicationInfo.scala ---
    @@ -43,6 +42,18 @@ private[spark] class ApplicationInfo(
       @transient var endTime: Long = _
       @transient var appSource: ApplicationSource = _
     
    +  // A cap on the number of executors this application can have at any given time.
    +  // By default, this is infinite. Only after the first allocation request is issued
    +  // by the application will this be set to a finite value.
    +  @transient var executorLimit: Int = _
    +
    +  // A set of workers on which this application cannot launch executors.
    +  // This is used to handle kill requests when `spark.executor.cores` is NOT set. In this mode,
    +  // at most one executor from this application can be run on each worker. When an executor is
    +  // killed, its worker is added to the blacklist to avoid having the master immediately schedule
    +  // a new executor on the worker.
    +  @transient var blacklistedWorkers: mutable.HashSet[String] = _
    --- End diff --
    
    I think it needs to be a `var` because we mark it `@transient`, but yes, I believe this can be `private[master]`.
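
    For context on why `@transient` forces a `var` here: transient fields are skipped by Java serialization, so after deserialization they come back holding default values (`0`, `null`) and must be reassigned, which a `val` would not permit. A minimal sketch of the pattern (the class name `AppInfoSketch` and its `init()` helper are illustrative, not the actual Spark source):

    ```scala
    import scala.collection.mutable

    // Hypothetical stand-in for ApplicationInfo, illustrating the
    // "@transient var + init()" pattern under discussion.
    class AppInfoSketch extends Serializable {
      // Transient fields are not serialized; after deserialization they hold
      // default values, so they must be vars that init() can repopulate.
      @transient var executorLimit: Int = _
      @transient var blacklistedWorkers: mutable.HashSet[String] = _

      init()

      private def init(): Unit = {
        // Effectively "infinite" until the app issues its first allocation request.
        executorLimit = Integer.MAX_VALUE
        blacklistedWorkers = new mutable.HashSet[String]
      }

      // Java deserialization hook: re-run init() to restore transient state.
      private def readObject(in: java.io.ObjectInputStream): Unit = {
        in.defaultReadObject()
        init()
      }
    }
    ```

    Marking the fields `private[master]` on top of this would narrow their visibility without affecting the serialization behavior.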


