Github user kevincox commented on the pull request:

    https://github.com/apache/spark/pull/10761#issuecomment-171850023
  
    I'm implementing a system where Spark can reduce the number of executors in
    low-resource situations. This lets a job use the entire cluster while the
    cluster is otherwise idle, and when another job starts (especially a
    development or interactive job) the running jobs can scale back to make room
    for it. With this in place there is no longer a trade-off between cluster
    utilization and how quickly interactive jobs can launch.
    
    Before this refactor the changes for that feature were messy and
    disorganized; with the class refactored, the feature needs only a single new
    message and a single new state (a rough sketch of the shape of that change
    follows below).
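    
    Purely as an illustration, here is a minimal, hedged sketch of what "a single
    new message and a single new state" could look like. All names here
    (ScaleBack, ScalingBack, etc.) are hypothetical and are not the actual
    classes touched by this PR.
    
    ```scala
    // Hypothetical sketch only: these names do not exist in Spark or in this PR.
    object ScaleBackSketch {
    
      // The single new message: something external asks the allocation manager
      // to shrink this job to at most `maxExecutors` executors.
      case class ScaleBack(maxExecutors: Int)
    
      // The allocation states; `ScalingBack` would be the single new state.
      sealed trait AllocationState
      case object Steady extends AllocationState
      case class ScalingBack(target: Int) extends AllocationState
    
      // Deciding the next state is a pure function of the message and the
      // current executor count.
      def handle(msg: ScaleBack, currentExecutors: Int): AllocationState =
        if (currentExecutors > msg.maxExecutors) ScalingBack(msg.maxExecutors)
        else Steady
    
      def main(args: Array[String]): Unit = {
        println(handle(ScaleBack(maxExecutors = 4), currentExecutors = 10)) // ScalingBack(4)
        println(handle(ScaleBack(maxExecutors = 4), currentExecutors = 2))  // Steady
      }
    }
    ```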
    
    There are also immediate benefits of the new design: the 100ms polling loop
    is replaced by an event-driven approach that will likely only wake up every
    minute or so, and the cleaner code encourages future improvements :)
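    
    To make the polling vs. event-driven point concrete, here is a small sketch
    of the difference, again with made-up names rather than the PR's actual code:
    
    ```scala
    import java.util.concurrent.{Executors, ScheduledExecutorService, TimeUnit}
    
    // Hypothetical sketch only; names are illustrative, not Spark's.
    object WakeupSketch {
      private val scheduler: ScheduledExecutorService =
        Executors.newSingleThreadScheduledExecutor()
    
      // Old style: wake up every 100ms and check whether anything changed,
      // even when nothing is pending.
      def startPolling(check: () => Unit): Unit = {
        val task = new Runnable { def run(): Unit = check() }
        scheduler.scheduleWithFixedDelay(task, 0L, 100L, TimeUnit.MILLISECONDS)
      }
    
      // New style: only schedule a wakeup when a timeout is actually pending,
      // e.g. the next executor idle timeout, which may be a minute or more away.
      def scheduleNextTimeout(delayMs: Long)(onTimeout: () => Unit): Unit = {
        val task = new Runnable { def run(): Unit = onTimeout() }
        scheduler.schedule(task, delayMs, TimeUnit.MILLISECONDS)
      }
    }
    ```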


