Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r19002923
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala ---
@@ -362,9 +372,19 @@ private[spark] class Worker(
}
}
- def masterDisconnected() {
+ private def masterDisconnected() {
logError("Connection to master failed! Waiting for master to
reconnect...")
connected = false
+ scheduleAttemptsToReconnectToMaster()
+ }
+
+ private def scheduleAttemptsToReconnectToMaster() {
--- End diff ---
I dug into the Hadoop source and found that the default reconnect policy is to
retry every 10 seconds for 6 attempts, then every 60 seconds for 10 attempts.
Each attempt also has a fuzz factor of [0.5t, 1.5t] applied to its interval to
prevent a thundering herd of reconnect attempts across the cluster.
I don't have a strong opinion on infinite retries vs. roughly 10 minutes of
retries -- I'd vote for following Hadoop's lead unless there's a compelling
argument to do something different.