GitHub user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/7839#discussion_r36133197
--- Diff:
network/yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
---
@@ -100,11 +119,33 @@ private boolean isAuthenticationEnabled() {
*/
@Override
protected void serviceInit(Configuration conf) {
+
+ // In case this NM was killed while there were running spark applications, we need to restore
+ // lost state for the existing executors. We look for an existing file in the NM's local dirs.
+ // If we don't find one, then we choose a file to use to save the state next time. However, we
+ // do *not* immediately register all the executors in that file, just in case the application
+ // was terminated while the NM was restarting. We wait until yarn tells the service about the
+ // app again via #initializeApplication, so we know its still running. That is important
+ // for preventing a leak where the app data would stick around *forever*. This does leave
+ // a small race -- if the NM restarts *again*, after only some of the existing apps have been
+ // re-registered, their info will be lost.
--- End diff --
Just to be clear, I know this leak is most likely very small, but the
problem is how hard it is to ever clean it up. If you always re-registered
everything in that file, then any bogus apps in there could never be removed
unless you manually went in and deleted the file. Even if you restart the NM
again, you'd just re-read the file with that bogus app still in there.
You will know better than I do how strong the guarantees from Yarn are for
calling `stopApplication` when the NM comes back, vs. how often NMs get
restarted (and thus how much we increase the chance of an app stopping during
an NM restart), vs. how long it is until we do a "hard reset" of an NM where
the local dir gets cleaned up.
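To make the trade-off concrete, here is a minimal standalone sketch of the lazy
re-registration idea the diff comment describes. It is not the actual
`YarnShuffleService` code, and every name in it (`LazyRecoverySketch`,
`recoveredButUnregistered`, etc.) is made up for illustration: recovered state
is held aside at `serviceInit` time and only promoted once YARN confirms the
app via `initializeApplication`.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Standalone illustration of "recover lazily, register only on initializeApplication".
public class LazyRecoverySketch {

  // App state read back from the recovery file at startup, keyed by app id,
  // but not yet trusted to belong to a live application.
  private final Map<String, String> recoveredButUnregistered = new HashMap<>();

  // Apps that YARN has confirmed are still running on this node.
  private final Set<String> registeredApps = new HashSet<>();

  // Called once at service start: load the saved state but do *not* register it yet.
  // Registering everything here would leak any app that stopped while the NM was down.
  public void serviceInit(Map<String, String> savedState) {
    recoveredButUnregistered.putAll(savedState);
  }

  // Called by YARN for each application that is (still) running on this node.
  // Only now do we trust the recovered state for that app and restore it.
  public void initializeApplication(String appId) {
    String recovered = recoveredButUnregistered.remove(appId);
    if (recovered != null) {
      // restore the recovered executor info for this app (elided in this sketch)
    }
    registeredApps.add(appId);
  }

  // Called by YARN when an application finishes: drop its state so nothing leaks.
  public void stopApplication(String appId) {
    registeredApps.remove(appId);
    recoveredButUnregistered.remove(appId);
  }
}
```

The eager alternative would register everything from the file inside
`serviceInit`, which is exactly the case where a bogus app could never be
cleaned up without deleting the file by hand.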