mridulm commented on code in PR #37922:
URL: https://github.com/apache/spark/pull/37922#discussion_r1059526011
##########
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##########
@@ -396,6 +403,56 @@ public void applicationRemoved(String appId, boolean cleanupLocalDirs) {
     }
   }
+  @Override
+  public void removeShuffleMerge(RemoveShuffleMerge msg) {
+    AppShuffleInfo appShuffleInfo = validateAndGetAppShuffleInfo(msg.appId);
+    if (appShuffleInfo.attemptId != msg.appAttemptId) {
+      throw new IllegalArgumentException(
+        String.format("The attempt id %s in this RemoveShuffleMerge message does not match "
+          + "with the current attempt id %s stored in shuffle service for application %s",
+          msg.appAttemptId, appShuffleInfo.attemptId, msg.appId));
+    }
+    appShuffleInfo.shuffles.computeIfPresent(msg.shuffleId, (shuffleId, mergePartitionsInfo) -> {
+      boolean deleteCurrent =
+        msg.shuffleMergeId == DELETE_CURRENT_MERGED_SHUFFLE_ID ||
+        msg.shuffleMergeId == mergePartitionsInfo.shuffleMergeId;
+      AppAttemptShuffleMergeId currentAppAttemptShuffleMergeId =
+        new AppAttemptShuffleMergeId(
+          msg.appId, msg.appAttemptId, msg.shuffleId, mergePartitionsInfo.shuffleMergeId);
+      AppAttemptShuffleMergeId appAttemptShuffleMergeId = new AppAttemptShuffleMergeId(
+        msg.appId, msg.appAttemptId, msg.shuffleId, msg.shuffleMergeId);
+      if (deleteCurrent) {
+        // request to clean up shuffle we are currently hosting
+        if (!mergePartitionsInfo.isFinalized()) {
+          submitCleanupTask(() -> {
+            closeAndDeleteOutdatedPartitions(
+              currentAppAttemptShuffleMergeId, mergePartitionsInfo.shuffleMergePartitions);
+            writeAppAttemptShuffleMergeInfoToDB(appAttemptShuffleMergeId);
+          });
+        } else {
+          submitCleanupTask(() -> {
+            deleteMergedFiles(currentAppAttemptShuffleMergeId,
+              mergePartitionsInfo.getReduceIds());
+            writeAppAttemptShuffleMergeInfoToDB(appAttemptShuffleMergeId);
+            mergePartitionsInfo.setReduceIds(new int[0]);
+          });
+        }
+      } else if (msg.shuffleMergeId < mergePartitionsInfo.shuffleMergeId) {
+        throw new RuntimeException(String.format("Asked to remove old shuffle merged data for " +
+          "application %s shuffleId %s shuffleMergeId %s, but current shuffleMergeId %s ",
+          msg.appId, msg.shuffleId, msg.shuffleMergeId, mergePartitionsInfo.shuffleMergeId));
+      } else if (msg.shuffleMergeId > mergePartitionsInfo.shuffleMergeId) {
+        // cleanup request for newer shuffle - remove the outdated data we have.
+        submitCleanupTask(() -> {
+          closeAndDeleteOutdatedPartitions(
+            currentAppAttemptShuffleMergeId, mergePartitionsInfo.shuffleMergePartitions);
+          writeAppAttemptShuffleMergeInfoToDB(appAttemptShuffleMergeId);
Review Comment:
I am not sure I understand the concern.
`closeAndDeleteOutdatedPartitions` does two things:
* Cleans up finalization details from the DB.
* Cleans up files from disk.

Both of these can be done lazily.
We only keep `appShuffleInfo.shuffles` consistent with the DB, and use that to filter during recovery.
Note - there is a bug in `finalizeShuffleMerge`, as I mentioned earlier [here](https://github.com/apache/spark/pull/37922#discussion_r990753031) (see the last part of my comment) ... so please keep that in mind when analyzing this codepath.
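To illustrate the lazy-cleanup pattern being discussed, here is a minimal, hypothetical sketch (these are stand-in classes, not Spark's actual `RemoteBlockPushResolver` internals): the in-memory shuffle map is updated synchronously, while DB and disk cleanup are deferred to a single-threaded executor, mirroring how `submitCleanupTask` defers `closeAndDeleteOutdatedPartitions`:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the lazy-cleanup pattern: synchronous in-memory
// update, asynchronous DB/disk cleanup. Names here are illustrative only.
public class LazyCleanupSketch {
  // Stand-ins for the in-memory shuffle map and the persisted DB state.
  static final Map<Integer, String> shuffles = new ConcurrentHashMap<>();
  static final Map<Integer, String> db = new ConcurrentHashMap<>();
  static final ExecutorService cleanupExecutor = Executors.newSingleThreadExecutor();

  static void removeShuffleMerge(int shuffleId) {
    // Synchronous part: keep the in-memory view consistent immediately,
    // since recovery filters against this state.
    shuffles.remove(shuffleId);
    // Lazy part: DB (and, in the real code, file) cleanup is deferred to a
    // cleanup thread; exact timing is not critical for correctness.
    cleanupExecutor.submit(() -> db.remove(shuffleId));
  }

  public static void main(String[] args) throws InterruptedException {
    shuffles.put(1, "mergeDir-1");
    db.put(1, "finalized");
    removeShuffleMerge(1);
    cleanupExecutor.shutdown();
    cleanupExecutor.awaitTermination(5, TimeUnit.SECONDS);
    System.out.println("shuffles contains 1: " + shuffles.containsKey(1));
    System.out.println("db contains 1: " + db.containsKey(1));
  }
}
```

After the executor drains, both the in-memory map and the DB stand-in agree that the shuffle is gone; in the interim, only the in-memory map is guaranteed consistent, which is exactly the property relied on above.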
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]