zhouyejoe commented on code in PR #35906:
URL: https://github.com/apache/spark/pull/35906#discussion_r903171091


##########
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##########
@@ -632,6 +737,14 @@ public void registerExecutor(String appId, ExecutorShuffleInfo executorInfo) {
           appsShuffleInfo.compute(appId, (id, appShuffleInfo) -> {
             if (appShuffleInfo == null || attemptId > appShuffleInfo.attemptId) {
               originalAppShuffleInfo.set(appShuffleInfo);
+              AppPathsInfo appPathsInfo = new AppPathsInfo(appId, executorInfo.localDirs,
+                  mergeDir, executorInfo.subDirsPerLocalDir);
+              // Clean up the outdated app attempt local path info in the DB and
+              // put the newly registered local path info from the newer attempt into the DB.
+              if (appShuffleInfo != null) {
+                removeAppAttemptPathInfoFromDB(new AppAttemptId(appId, appShuffleInfo.attemptId));
+              }
+              writeAppPathsInfoToDb(appId, attemptId, appPathsInfo);

Review Comment:
   Hmm, so should we still add the multiple-attempts handling during DB reloading? And if we have to guarantee that the DB removal succeeds, but it fails, should we then fail the new app attempt's executor registration here? I feel the latter is a bit of an overkill. WDYT?
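
   To make the "log and continue" alternative concrete, here is a rough, self-contained sketch (hypothetical names like `AttemptPathDb`, `registerAttempt`, and `RegistrationSketch` are mine, not the actual `RemoteBlockPushResolver` code): the stale-attempt removal is best effort, so a DB failure never fails the newer attempt's registration, and the reload path is expected to resolve duplicates by keeping the highest `attemptId` per app.

   ```java
   import java.util.concurrent.ConcurrentHashMap;

   // Hypothetical sketch of best-effort DB cleanup during executor registration.
   public class RegistrationSketch {
     // Stand-in for the DB-backed attempt-path store; remove() may throw to
     // simulate a failed deletion of the outdated attempt's entry.
     interface AttemptPathDb {
       void remove(String appId, int attemptId) throws Exception;
       void put(String appId, int attemptId, String pathsInfo) throws Exception;
     }

     private final AttemptPathDb db;
     private final ConcurrentHashMap<String, Integer> appAttempts = new ConcurrentHashMap<>();

     RegistrationSketch(AttemptPathDb db) {
       this.db = db;
     }

     /** Registers a newer attempt; DB cleanup failures are logged, not fatal. */
     void registerAttempt(String appId, int attemptId, String pathsInfo) {
       appAttempts.compute(appId, (id, prevAttempt) -> {
         if (prevAttempt == null || attemptId > prevAttempt) {
           if (prevAttempt != null) {
             try {
               // Best-effort removal of the outdated attempt's path info.
               db.remove(appId, prevAttempt);
             } catch (Exception e) {
               // A stale entry may linger in the DB; the DB-reload code must
               // then dedupe by keeping the highest attemptId per app.
               System.err.println("Failed to remove stale attempt path info: " + e);
             }
           }
           try {
             db.put(appId, attemptId, pathsInfo);
           } catch (Exception e) {
             System.err.println("Failed to persist new attempt path info: " + e);
           }
           // In-memory state still moves to the newer attempt either way.
           return attemptId;
         }
         return prevAttempt;
       });
     }

     int currentAttempt(String appId) {
       return appAttempts.get(appId);
     }
   }
   ```

   With this shape, even a DB whose `remove` always throws leaves the in-memory state on the newest attempt, which is the behavior I'd expect if we decide failing registration is overkill.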



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]