Github user andrewor14 commented on the pull request:

    https://github.com/apache/spark/pull/2746#issuecomment-59870868
  
    With regard to the configuration barrier, I actually think the exposed 
configs are pretty straightforward. Even an inexperienced user can reason 
about the number of executors being scaled up and down within a custom 
range. All the user needs to set is the min and the max; everything else is 
optional.
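    
    For concreteness, here is a minimal sketch of what such a setup could 
look like, assuming the `spark.dynamicAllocation.*` keys that eventually 
shipped (the exact config names in this patch may differ):
    
        import org.apache.spark.{SparkConf, SparkContext}
    
        val conf = new SparkConf()
          .setAppName("dynamic-allocation-example")
          // Assumed flag for turning the feature on.
          .set("spark.dynamicAllocation.enabled", "true")
          // The custom range the executor count scales within.
          .set("spark.dynamicAllocation.minExecutors", "2")
          .set("spark.dynamicAllocation.maxExecutors", "20")
          // Add/remove intervals, timeouts, etc. are optional and fall
          // back to defaults.
    
        val sc = new SparkContext(conf)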
    
    That said, I should clarify that I am not dismissing this other policy 
outright. I do believe in its merits, but I think the default scaling 
policies in Spark should be as simple as possible, both in implementation 
and in semantics. I am open to introducing it as a pluggable policy in a 
future release, but for the aforementioned reasons I prefer a different 
approach for the first-cut implementation.

