Github user ehnalis commented on the pull request:

    https://github.com/apache/spark/pull/6082#issuecomment-101820063
  
    I've opened that issue for YARN, but it's not good practice to rely on 
that.
    
    Multiplicative back-off is a long-established technique: it's predictable 
and reduces congestion nicely. There are more effective models in network 
rate limiting, but this one is simple and effective. We just can't heartbeat 
every 200ms; when our first heartbeat fails to get containers, there's only 
a slightly better chance that the next one will succeed. Also, consider a 
contended server with thousands of Spark jobs. That said, we want to provide 
a faster start-up for jobs on clusters with plenty of free resources, so we 
start at 200ms.
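    A minimal sketch of the back-off scheme described above. The 200ms 
starting interval comes from the discussion; the multiplier, the cap, and the 
reset-on-success behavior are illustrative assumptions, not Spark's actual 
implementation:

    ```python
    # Multiplicative back-off for heartbeat scheduling: start fast, then
    # multiply the wait after each unsuccessful heartbeat.
    # INITIAL_MS comes from the discussion above; MULTIPLIER and CAP_MS
    # are illustrative assumptions.

    INITIAL_MS = 200
    MULTIPLIER = 2
    CAP_MS = 60_000  # never wait longer than a minute between heartbeats

    def next_interval(current_ms, got_containers):
        """Return the next heartbeat interval in milliseconds."""
        if got_containers:
            return INITIAL_MS  # resources arrived: resume fast polling
        return min(current_ms * MULTIPLIER, CAP_MS)  # back off

    # First few intervals while the cluster stays contended:
    interval = INITIAL_MS
    schedule = []
    for _ in range(5):
        schedule.append(interval)
        interval = next_interval(interval, got_containers=False)
    print(schedule)  # [200, 400, 800, 1600, 3200]
    ```

    On an idle cluster the first heartbeat succeeds and the interval never 
grows, giving the fast start-up mentioned above; on a contended one the 
interval quickly decays toward the cap, keeping pressure off the 
ResourceManager.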

