Github user vanzin commented on the pull request:

    https://github.com/apache/spark/pull/5018#issuecomment-81957526
  
    > the driver can't communicate to the AM that it wants fewer executors than 
are currently running
    
    It can. But the AM won't kill running executors if the target count goes 
below the number of running executors. It will just set the target to the new, 
lower value, so that when the driver kills executors explicitly, the AM won't 
start new ones.
    
    > killing executors basically doesn't affect any of the bookkeeping for 
pending / target num executors
    
    That's exactly what this patch does, on the AM side. What we had discussed 
is that on the driver side, you do need to update the target number when you 
kill an executor explicitly via `SparkContext.killExecutors`. Otherwise 
the AM will just start new ones for you.
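
    To make the bookkeeping concrete, here is a simplified model of the behavior 
described above. It is a sketch, not Spark's actual allocator code; the class 
and method names (`AllocatorModel`, `setTargetNumExecutors`, `missingExecutors`) 
are hypothetical. It shows that lowering the target below the running count 
kills nothing, and that an explicit kill only avoids a replacement request if 
the target was lowered first.

    ```java
    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical model of the AM-side executor bookkeeping discussed above.
    class AllocatorModel {
        private int targetNumExecutors;
        private final Set<String> running = new HashSet<>();

        AllocatorModel(int initialTarget) {
            this.targetNumExecutors = initialTarget;
        }

        void executorStarted(String id) {
            running.add(id);
        }

        // Driver lowers (or raises) the target. Running executors are never
        // killed here, even when the new target is below the running count.
        void setTargetNumExecutors(int target) {
            this.targetNumExecutors = target;
        }

        // Driver explicitly kills an executor. Note this does NOT touch the
        // target; keeping the target in sync is the driver's job.
        void killExecutor(String id) {
            running.remove(id);
        }

        // How many new executors the AM would request to reach the target.
        int missingExecutors() {
            return Math.max(0, targetNumExecutors - running.size());
        }

        int runningCount() {
            return running.size();
        }
    }
    ```

    With three executors running and the target lowered to two, nothing is 
killed and nothing is requested; killing one executor then leaves the AM 
satisfied. But a second kill without lowering the target first leaves 
`missingExecutors() == 1`, i.e. the AM would start a replacement.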


