GitHub user ehnalis commented on the pull request:

    https://github.com/apache/spark/pull/6082#issuecomment-102044222
  
    If you run it on a cluster with free resources, you will save 4.8 seconds
    with this improvement each time there are pending allocations for
    executors. If you run it on a very crowded cluster you might end up saving
    0 seconds, but at least on a large cluster Spark jobs will not overwhelm
    the RM and will not generate unnecessary heartbeats (HBs). It's as simple
    as that. Of course, this will be most effective on larger clusters. On
    small clusters with not so many apps but a longer queue, one might wish to
    lower the intervals to a desired rate to decrease the makespan of Spark
    jobs.
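
    As a minimal sketch of what that tuning looks like in practice (assuming
    the YARN-mode configuration keys `spark.yarn.scheduler.heartbeat.interval-ms`
    for the regular AM-RM heartbeat and `spark.yarn.scheduler.initial-allocation.interval`
    for the faster interval used while allocations are pending; the values
    below are illustrative, not recommendations):

        import org.apache.spark.SparkConf

        val conf = new SparkConf()
          // Regular AM -> RM heartbeat interval, in milliseconds. Lowering it
          // speeds up executor allocation on a quiet cluster, at the cost of
          // more requests hitting the RM.
          .set("spark.yarn.scheduler.heartbeat.interval-ms", "3000")
          // Shorter interval used only while container allocations are still
          // pending, so pending requests are satisfied faster without keeping
          // the aggressive rate once all executors have been allocated.
          .set("spark.yarn.scheduler.initial-allocation.interval", "200ms")

    The same keys can also be passed on the command line via
    `spark-submit --conf`, which keeps cluster-specific tuning out of
    application code.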

