Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/6082#discussion_r30178192
--- Diff: docs/running-on-yarn.md ---
@@ -74,6 +74,14 @@ Most of the configs are the same for Spark on YARN as
for other deployment modes
<td>5000</td>
<td>
The interval, in ms, at which the Spark application master heartbeats
to the YARN ResourceManager.
+ To prevent the application master from being expired due to late reporting,
if a higher value is configured, the interval is capped at half of the expiry
interval in YARN's configuration
<code>(yarn.am.liveness-monitor.expiry-interval-ms / 2)</code>.
+ </td>
+</tr>
+<tr>
+ <td><code>spark.yarn.scheduler.allocation.interval-ms</code></td>
--- End diff --
We don't define options with units anymore. Instead, use
`spark.yarn.scheduler.allocation.interval` and the default `200ms`. And remove
any mention of units from the docs.
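
To make the convention concrete, here is a minimal sketch (not Spark's actual implementation; the class and method names are hypothetical) of parsing a unit-suffixed duration string such as the suggested `200ms` default, together with the expiry-capping behavior described in the quoted docs:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper illustrating unit-suffixed duration configs,
// e.g. spark.yarn.scheduler.allocation.interval = "200ms".
public class TimeConf {
    private static final Pattern TIME = Pattern.compile("(\\d+)\\s*(ms|s|m|h)?");

    // Convert "200ms", "3s", "2m", "1h", or a bare number to milliseconds.
    public static long timeStringAsMs(String str) {
        Matcher m = TIME.matcher(str.trim());
        if (!m.matches()) {
            throw new IllegalArgumentException("Invalid time string: " + str);
        }
        long n = Long.parseLong(m.group(1));
        String unit = m.group(2);
        if (unit == null || unit.equals("ms")) {
            return n; // no suffix is treated as milliseconds here
        }
        switch (unit) {
            case "s": return n * 1_000L;
            case "m": return n * 60_000L;
            case "h": return n * 3_600_000L;
            default:  return n;
        }
    }

    // Cap the heartbeat interval at half of YARN's AM expiry interval
    // (yarn.am.liveness-monitor.expiry-interval-ms / 2), mirroring the
    // behavior described in the quoted docs.
    public static long effectiveHeartbeatMs(long configuredMs, long expiryMs) {
        return Math.min(configuredMs, expiryMs / 2);
    }
}
```

With this convention the docs can stay unit-free: the value string itself (`200ms`, `3s`) carries the unit.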