Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r18988031
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala ---
@@ -362,9 +372,19 @@ private[spark] class Worker(
     }
   }

-  def masterDisconnected() {
+  private def masterDisconnected() {
     logError("Connection to master failed! Waiting for master to reconnect...")
     connected = false
+    scheduleAttemptsToReconnectToMaster()
+  }
+
+  private def scheduleAttemptsToReconnectToMaster() {
--- End diff ---
hmmm..... now I think exiting after several retries might be better.
In your case, not restarting the worker after the master restarts may bring
some problems, especially when the user didn't set RECOVERY_MODE: all
application information is lost. For instance, an application whose resource
requirements haven't been filled yet will not be served anymore, so the whole
system ends up in a weird state, and you eventually need to restart the
applications (i.e. kill executors -> restart, which is equivalent to restarting
all the workers).
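
To make the suggestion concrete, here is a minimal, hypothetical sketch (not the PR's actual code, which the diff above cuts off) of "retry for a while, then exit" using the actor system's scheduler; the constants, the message name, and tryRegisterAllMasters() are assumptions for illustration:

    import scala.concurrent.duration._
    import akka.actor.{Actor, Cancellable}

    // Hypothetical stand-in for the real Worker actor.
    class ReconnectingWorker extends Actor {
      import context.dispatcher                      // ExecutionContext for the scheduler

      private val RETRY_INTERVAL = 30.seconds        // assumed retry period
      private val MAX_RECONNECT_ATTEMPTS = 10        // assumed retry budget
      private var connected = true
      private var attempts = 0
      private var retryTimer: Option[Cancellable] = None

      private def masterDisconnected(): Unit = {
        connected = false
        scheduleAttemptsToReconnectToMaster()
      }

      private def scheduleAttemptsToReconnectToMaster(): Unit = {
        retryTimer = Some(context.system.scheduler.schedule(0.seconds, RETRY_INTERVAL) {
          if (connected) {
            retryTimer.foreach(_.cancel())           // a master answered; stop retrying
          } else if (attempts >= MAX_RECONNECT_ATTEMPTS) {
            retryTimer.foreach(_.cancel())
            System.exit(1)                           // give up instead of running detached forever
          } else {
            attempts += 1
            tryRegisterAllMasters()                  // hypothetical re-registration call
          }
        })
      }

      private def tryRegisterAllMasters(): Unit = {
        // would resend RegisterWorker to every known master URL
      }

      def receive = {
        case "masterDisconnected" => masterDisconnected()
      }
    }

Exiting here assumes an external supervisor (or the admin) restarts the worker, which gives it a clean registration with the recovered master instead of leaving it half-connected.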