Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/7839#discussion_r36121140
--- Diff: network/yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java ---
@@ -100,11 +119,33 @@ private boolean isAuthenticationEnabled() {
    */
   @Override
   protected void serviceInit(Configuration conf) {
+
+    // In case this NM was killed while there were running spark applications, we need to restore
+    // lost state for the existing executors. We look for an existing file in the NM's local dirs.
+    // If we don't find one, then we choose a file to use to save the state next time. However, we
+    // do *not* immediately register all the executors in that file, just in case the application
+    // was terminated while the NM was restarting. We wait until YARN tells the service about the
+    // app again via #initializeApplication, so we know it's still running. That is important
+    // for preventing a leak where the app data would stick around *forever*. This does leave
+    // a small race -- if the NM restarts *again*, after only some of the existing apps have been
+    // re-registered, their info will be lost.
+    registeredExecutorFile =
+      findRegisteredExecutorFile(conf.get("yarn.nodemanager.local-dirs").split(","));
+    try {
+      reloadRegisteredExecutors();
+    } catch (Exception e) {
+      logger.error("Failed to load previously registered executors", e);
+    }
+
     TransportConf transportConf = new TransportConf(new HadoopConfigProvider(conf));
     // If authentication is enabled, set up the shuffle server to use a
     // special RPC handler that filters out unauthenticated fetch requests
     boolean authEnabled = conf.getBoolean(SPARK_AUTHENTICATE_KEY, DEFAULT_SPARK_AUTHENTICATE);
-    blockHandler = new ExternalShuffleBlockHandler(transportConf);
+    try {
+      blockHandler = new ExternalShuffleBlockHandler(transportConf, registeredExecutorFile);
+    } catch (Exception e) {
+      logger.error("Failed to initialize external shuffle service", e);
--- End diff --
I was really uncertain about this -- we could either have the service keep going even
though it has failed to reload the executors, in which case all existing apps on this
node would be doomed; or we could fail the service, in which case all existing and
future executors on this node would fail. Which is preferable? I think there are a
handful of other places where I'm just logging exceptions, though I wasn't really sure
that made sense.
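
For comparison, a minimal sketch of the fail-fast alternative (the `RuntimeException` wrapper is just illustrative, not what the patch currently does): instead of logging and limping along with no reloaded state, we would let `serviceInit` blow up so the aux service never comes up on this node:

```java
// Hypothetical fail-fast variant of the block above: abort service init instead of
// logging and continuing without the previously registered executors.
try {
  blockHandler = new ExternalShuffleBlockHandler(transportConf, registeredExecutorFile);
} catch (Exception e) {
  // Rethrowing makes the aux-service startup fail loudly, so the operator notices the
  // problem right away -- at the cost of taking shuffle service away from every
  // existing and future executor on this node, not just the apps whose state was lost.
  throw new RuntimeException("Failed to initialize external shuffle service", e);
}
```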