Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/6082#discussion_r30536989
--- Diff: docs/running-on-yarn.md ---
@@ -71,9 +71,22 @@ Most of the configs are the same for Spark on YARN as for other deployment modes
</tr>
<tr>
<td><code>spark.yarn.scheduler.heartbeat.interval-ms</code></td>
- <td>5000</td>
+ <td>3000</td>
<td>
The interval in ms in which the Spark application master heartbeats into the YARN ResourceManager.
+ The value is capped at half the value of YARN's configuration for the expiry interval
+ (<code>yarn.am.liveness-monitor.expiry-interval-ms</code>).
+ </td>
+</tr>
+<tr>
+ <td><code>spark.yarn.scheduler.initial-allocation.interval</code></td>
+ <td>200ms</td>
+ <td>
+ The initial interval in which the Spark application master eagerly heartbeats to the YARN ResourceManager
+ when there are pending container allocation requests. It should be no larger than
+ <code>spark.yarn.scheduler.heartbeat.interval-ms</code>. The allocation interval will double on
--- End diff ---
s/double/doubled
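For context, here is a minimal, illustrative Scala sketch (not the actual ApplicationMaster code) of the schedule the new docs describe: the interval starts at <code>spark.yarn.scheduler.initial-allocation.interval</code>, doubles on each eager heartbeat while container requests are pending, and is bounded by <code>spark.yarn.scheduler.heartbeat.interval-ms</code>, which the docs say is itself capped at half of YARN's <code>yarn.am.liveness-monitor.expiry-interval-ms</code>. The object/function names and the 120000 ms expiry value are assumptions for illustration only.

```scala
// Illustrative sketch only -- not the actual Spark ApplicationMaster code.
object AllocationIntervalSketch {
  // Documented defaults from this diff.
  val initialAllocationIntervalMs = 200L    // spark.yarn.scheduler.initial-allocation.interval
  val heartbeatIntervalMs         = 3000L   // spark.yarn.scheduler.heartbeat.interval-ms

  // Assumed YARN expiry interval (yarn.am.liveness-monitor.expiry-interval-ms);
  // the heartbeat interval is documented as capped at half of this value.
  val yarnExpiryIntervalMs = 120000L
  val effectiveHeartbeatIntervalMs = math.min(heartbeatIntervalMs, yarnExpiryIntervalMs / 2)

  /** Next wait before heartbeating the ResourceManager: while container
   *  requests are pending, double the current interval (eager heartbeats),
   *  but never exceed the regular heartbeat interval. */
  def nextInterval(currentMs: Long, pendingAllocations: Boolean): Long =
    if (pendingAllocations) math.min(effectiveHeartbeatIntervalMs, currentMs * 2)
    else effectiveHeartbeatIntervalMs

  def main(args: Array[String]): Unit = {
    var interval = initialAllocationIntervalMs
    for (round <- 1 to 6) {
      println(s"heartbeat $round: wait $interval ms")
      interval = nextInterval(interval, pendingAllocations = true)
    }
    // Prints 200, 400, 800, 1600, 3000, 3000 ms (capped at the heartbeat interval).
  }
}
```

The sketch only shows the capped geometric schedule described in the documentation text; exactly where the doubling happens relative to each heartbeat is an implementation detail of the actual ApplicationMaster code.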