zuston commented on code in PR #1681:
URL: https://github.com/apache/incubator-uniffle/pull/1681#discussion_r1596209895
##########
storage/src/main/java/org/apache/uniffle/storage/handler/api/ShuffleDeleteHandler.java:
##########
@@ -24,5 +24,5 @@ public interface ShuffleDeleteHandler {
*
* @param appId ApplicationId for delete
*/
- void delete(String[] storageBasePaths, String appId, String user);
+ void delete(String[] storageBasePaths, String appId, String user, String shuffleServerId);
Review Comment:
I don't like that shuffleServerId is passed in as a method parameter here. I would prefer shuffleServerId to be a private member variable of this handler.
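A minimal sketch of the suggested direction, assuming constructor injection; the class name `HadoopShuffleDeleteHandler` and the `ownsFile` helper are illustrative assumptions, not the actual Uniffle code:

```java
// Hypothetical sketch: inject shuffleServerId once at construction time so the
// delete() signature stays as it was before this PR.
interface ShuffleDeleteHandler {
    void delete(String[] storageBasePaths, String appId, String user);
}

class HadoopShuffleDeleteHandler implements ShuffleDeleteHandler {
    // shuffleServerId held as a private member, set once when the handler is built.
    private final String shuffleServerId;

    HadoopShuffleDeleteHandler(String shuffleServerId) {
        this.shuffleServerId = shuffleServerId;
    }

    // Illustrative helper: a data file belongs to this server if it carries
    // the server id prefix, e.g. "127.0.0.1:19999_1.data".
    boolean ownsFile(String fileName) {
        return fileName.startsWith(shuffleServerId + "_");
    }

    @Override
    public void delete(String[] storageBasePaths, String appId, String user) {
        for (String basePath : storageBasePaths) {
            // Placeholder for the real deletion logic; only files owned by
            // this server would be removed.
            System.out.println("would delete files under " + basePath + "/" + appId
                + " owned by " + shuffleServerId);
        }
    }
}
```

With this shape, callers that construct the handler already know which server they run on, and every call site of delete() stays untouched.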
##########
server/src/test/java/org/apache/uniffle/server/storage/HadoopStorageManagerTest.java:
##########
@@ -112,4 +117,82 @@ public void testRegisterRemoteStorage() {
assertNull(hs3.getConf().get("k2"));
assertNull(hs3.getConf().get("k3"));
}
+
+ @Test
+ public void testRemoveExpiredResources(@TempDir File remoteBasePath) throws Exception {
+ ShuffleServerConf conf = new ShuffleServerConf();
+ conf.setString(
+ ShuffleServerConf.RSS_STORAGE_TYPE.key(), StorageType.MEMORY_LOCALFILE_HDFS.name());
+ String shuffleServerId = "127.0.0.1:19999";
+ conf.setString(ShuffleServerConf.SHUFFLE_SERVER_ID, shuffleServerId);
+ HadoopStorageManager hadoopStorageManager = new HadoopStorageManager(conf);
+ final String remoteStoragePath1 = new File(remoteBasePath, "path1").getAbsolutePath();
+ String appId = "testRemoveExpiredResources";
+ hadoopStorageManager.registerRemoteStorage(
+ appId, new RemoteStorageInfo(remoteStoragePath1, ImmutableMap.of("k1", "v1", "k2", "v2")));
+ Map<String, HadoopStorage> appStorageMap = hadoopStorageManager.getAppIdToStorages();
+
+ HadoopStorage storage = appStorageMap.get(appId);
+ String appPath = ShuffleStorageUtils.getFullShuffleDataFolder(storage.getStoragePath(), appId);
+ File appPathFile = new File(appPath);
+ File partitionDir = new File(appPathFile, "1/1-1/");
+ partitionDir.mkdirs();
+ // Simulate the case where two shuffle servers write data.
+ File dataFile = new File(partitionDir, shuffleServerId + "_1.data");
+ dataFile.createNewFile();
+ File dataFile2 = new File(partitionDir, "shuffleserver2_1.data");
+ dataFile2.createNewFile();
+ assertTrue(partitionDir.exists());
+ // purge triggered by expiration
+ assertEquals(1, appStorageMap.size());
+ AppPurgeEvent shufflePurgeEvent = new AppPurgeEvent(appId, "", null, true);
+ hadoopStorageManager.removeResources(shufflePurgeEvent);
+ assertEquals(0, appStorageMap.size());
+ assertTrue(partitionDir.exists());
+ assertFalse(dataFile.exists());
+ assertTrue(dataFile2.exists());
+
+ // purge triggered by unregistration
+ AppPurgeEvent appPurgeEvent = new AppPurgeEvent(appId, "");
+ hadoopStorageManager.removeResources(appPurgeEvent);
+ assertEquals(0, appStorageMap.size());
+ assertFalse(appPathFile.exists());
+ }
+
+ @Test
+ public void testRemoveExpiredResources2(@TempDir File remoteBasePath) throws Exception {
Review Comment:
What does the `2` mean? You should add a comment describing what this case tests.
##########
storage/src/main/java/org/apache/uniffle/storage/handler/api/ShuffleDeleteHandler.java:
##########
@@ -24,5 +24,5 @@ public interface ShuffleDeleteHandler {
*
* @param appId ApplicationId for delete
*/
- void delete(String[] storageBasePaths, String appId, String user);
+ void delete(String[] storageBasePaths, String appId, String user, String shuffleServerId);
Review Comment:
If so, we don't need to modify the AppPurgeEvent's structure.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]