errose28 commented on code in PR #7726:
URL: https://github.com/apache/ozone/pull/7726#discussion_r1925933064


##########
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueHandler.java:
##########
@@ -456,6 +459,68 @@ public void testDeleteContainer() throws IOException {
     }
   }
 
+
+  @Test
+  public void testDeleteContainerTimeout() throws IOException, InterruptedException {
+    final String testDir = tempDir.toString();
+    try {
+      final long containerID = 1L;
+      final String clusterId = UUID.randomUUID().toString();
+      final String datanodeId = UUID.randomUUID().toString();
+      final ConfigurationSource conf = new OzoneConfiguration();
+      final ContainerSet containerSet = new ContainerSet(1000);
+      final MutableVolumeSet volumeSet = mock(MutableVolumeSet.class);
+      final Clock clock = mock(Clock.class);
+      long startTime = System.currentTimeMillis();
+      when(clock.millis()).thenReturn(startTime)
+          .thenReturn(startTime + conf.getTimeDuration(OzoneConfigKeys.OZONE_DELETE_CONTAINER_TIMEOUT,
+              OzoneConfigKeys.OZONE_DELETE_CONTAINER_TIMEOUT_DEFAULT, TimeUnit.MILLISECONDS) + 1);
+
+      HddsVolume hddsVolume = new HddsVolume.Builder(testDir).conf(conf)
+          .clusterID(clusterId).datanodeUuid(datanodeId)
+          .volumeSet(volumeSet)
+          .build();
+      hddsVolume.format(clusterId);
+      hddsVolume.createWorkingDir(clusterId, null);
+      hddsVolume.createTmpDirs(clusterId);
+
+      when(volumeSet.getVolumesList())
+          .thenReturn(Collections.singletonList(hddsVolume));
+
+      List<HddsVolume> hddsVolumeList = StorageVolumeUtil
+          .getHddsVolumesList(volumeSet.getVolumesList());
+
+      assertEquals(1, hddsVolumeList.size());
+
+      final ContainerMetrics metrics = ContainerMetrics.create(conf);
+
+      final AtomicInteger icrReceived = new AtomicInteger(0);
+
+      final KeyValueHandler kvHandler = new KeyValueHandler(conf,
+          datanodeId, containerSet, volumeSet, metrics,
+          c -> icrReceived.incrementAndGet(), clock);
+      kvHandler.setClusterID(clusterId);
+
+      final ContainerCommandRequestProto createContainer =
+          createContainerRequest(datanodeId, containerID);
+      kvHandler.handleCreateContainer(createContainer, null);
+      assertEquals(1, icrReceived.get());
+      assertNotNull(containerSet.getContainer(containerID));
+
+      // The delete should not have gone through due to the mocked clock
+      kvHandler.deleteContainer(containerSet.getContainer(containerID), true);
+      assertEquals(1, icrReceived.get());
+      assertNotNull(containerSet.getContainer(containerID));
+
+      // Delete the container normally, and it should go through
+      kvHandler.deleteContainer(containerSet.getContainer(containerID), true);
+      assertEquals(2, icrReceived.get());
+      assertNull(containerSet.getContainer(containerID));
+    } finally {
+      FileUtils.deleteDirectory(new File(testDir));

Review Comment:
   I don't think this is necessary, since `testDir` points to the same location as `tempDir`, which is tagged with `@TempDir`. I know `testDeleteContainer` does this too, but I think that is a leftover artifact from before the JUnit 5 migration.
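
For context, a minimal standalone sketch (class and test names below are made up for illustration) of the cleanup behaviour being referred to: anything created under a `@TempDir` directory is removed by JUnit 5 after each test, so the `try`/`finally` with `FileUtils.deleteDirectory` can simply be dropped.

```java
import java.io.File;
import java.nio.file.Path;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;

import static org.junit.jupiter.api.Assertions.assertTrue;

class TempDirCleanupExample {

  @TempDir
  Path tempDir;

  @Test
  void filesUnderTempDirNeedNoManualCleanup() {
    // JUnit 5 deletes tempDir (and everything under it) once the test finishes,
    // so no finally block or FileUtils.deleteDirectory call is required.
    File workingDir = new File(tempDir.toFile(), "hdds");
    assertTrue(workingDir.mkdirs());
  }
}
```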



##########
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueHandler.java:
##########
@@ -456,6 +459,68 @@ public void testDeleteContainer() throws IOException {
     }
   }
 
+
+  @Test
+  public void testDeleteContainerTimeout() throws IOException, InterruptedException {
+    final String testDir = tempDir.toString();
+    try {
+      final long containerID = 1L;
+      final String clusterId = UUID.randomUUID().toString();
+      final String datanodeId = UUID.randomUUID().toString();
+      final ConfigurationSource conf = new OzoneConfiguration();
+      final ContainerSet containerSet = new ContainerSet(1000);
+      final MutableVolumeSet volumeSet = mock(MutableVolumeSet.class);
+      final Clock clock = mock(Clock.class);
+      long startTime = System.currentTimeMillis();
+      when(clock.millis()).thenReturn(startTime)
+          .thenReturn(startTime + conf.getTimeDuration(OzoneConfigKeys.OZONE_DELETE_CONTAINER_TIMEOUT,
+              OzoneConfigKeys.OZONE_DELETE_CONTAINER_TIMEOUT_DEFAULT, TimeUnit.MILLISECONDS) + 1);
+
+      HddsVolume hddsVolume = new HddsVolume.Builder(testDir).conf(conf)
+          .clusterID(clusterId).datanodeUuid(datanodeId)
+          .volumeSet(volumeSet)
+          .build();
+      hddsVolume.format(clusterId);
+      hddsVolume.createWorkingDir(clusterId, null);
+      hddsVolume.createTmpDirs(clusterId);
+
+      when(volumeSet.getVolumesList())
+          .thenReturn(Collections.singletonList(hddsVolume));
+
+      List<HddsVolume> hddsVolumeList = StorageVolumeUtil
+          .getHddsVolumesList(volumeSet.getVolumesList());
+
+      assertEquals(1, hddsVolumeList.size());
+
+      final ContainerMetrics metrics = ContainerMetrics.create(conf);
+
+      final AtomicInteger icrReceived = new AtomicInteger(0);
+
+      final KeyValueHandler kvHandler = new KeyValueHandler(conf,
+          datanodeId, containerSet, volumeSet, metrics,
+          c -> icrReceived.incrementAndGet(), clock);
+      kvHandler.setClusterID(clusterId);
+
+      final ContainerCommandRequestProto createContainer =
+          createContainerRequest(datanodeId, containerID);
+      kvHandler.handleCreateContainer(createContainer, null);
+      assertEquals(1, icrReceived.get());
+      assertNotNull(containerSet.getContainer(containerID));
+
+      // The delete should not have gone through due to the mocked clock
+      kvHandler.deleteContainer(containerSet.getContainer(containerID), true);
+      assertEquals(1, icrReceived.get());
+      assertNotNull(containerSet.getContainer(containerID));
+
+      // Delete the container normally, and it should go through

Review Comment:
   Can we explain the clock mocking setup in these comments a little more? It's technically correct but a little confusing:
   - First delete sees start time = start time of this test, and lock hold time = start time + max wait + 1
     - This equals a wait longer than the configured duration and is expected to time out
   - Second delete sees start time = start time + max wait + 1, and lock hold time = start time + max wait + 1
     - This equals no wait on the lock and is expected to pass
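
One way the requested comments might read, using the same stubbing as in the diff above (the note about which `clock.millis()` call is consumed where follows the explanation in this review and is an assumption about the handler's internals):

```java
// Stub the clock so that the first clock.millis() value (read as the delete's start
// time) is startTime, and the second value (read while the first delete waits on the
// container lock) is startTime + timeout + 1:
//  - First deleteContainer call: apparent wait = timeout + 1 > configured timeout,
//    so it is expected to time out and leave the container in place.
//  - Second deleteContainer call: start time and lock wait time are both the second
//    stubbed value, so the apparent wait is zero and the delete is expected to pass.
when(clock.millis()).thenReturn(startTime)
    .thenReturn(startTime + conf.getTimeDuration(OzoneConfigKeys.OZONE_DELETE_CONTAINER_TIMEOUT,
        OzoneConfigKeys.OZONE_DELETE_CONTAINER_TIMEOUT_DEFAULT, TimeUnit.MILLISECONDS) + 1);
```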



##########
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java:
##########
@@ -322,6 +322,11 @@ public final class OzoneConfigKeys {
   public static final String
       OZONE_RECOVERING_CONTAINER_TIMEOUT_DEFAULT = "20m";
 
+  // Specifies how long a delete container command can wait on locks before commencing the
+  // actual delete process. This is to avoid a delete command hanging for an undetermined
+  // amount of time before SCM is informed the delete has been actioned.
+  public static final String OZONE_DELETE_CONTAINER_TIMEOUT = "ozone.delete.container.timeout";
+  public static final String OZONE_DELETE_CONTAINER_TIMEOUT_DEFAULT = "1m";

Review Comment:
   These should go in `DatanodeConfiguration` with the prefix `hdds.datanode`.
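
For illustration only, a rough sketch of how the setting might look once moved into `DatanodeConfiguration`, which already carries `@ConfigGroup(prefix = "hdds.datanode")`, giving a full key of `hdds.datanode.delete.container.timeout`. The key name, field name, getter, and description below are assumptions, and the annotation attributes are written in the `@Config` style that class uses elsewhere; the exact form may differ.

```java
// Sketch only; lives inside DatanodeConfiguration, so the effective key becomes
// "hdds.datanode.delete.container.timeout" with a 1 minute default.
@Config(key = "delete.container.timeout",
    defaultValue = "1m",
    type = ConfigType.TIME,
    tags = { ConfigTag.DATANODE },
    description = "Maximum time a delete container command may wait on locks before "
        + "starting the actual delete, so SCM is not left waiting indefinitely to hear "
        + "that the delete has been actioned.")
private long deleteContainerTimeoutMs = Duration.ofMinutes(1).toMillis();

public long getDeleteContainerTimeoutMs() {
  return deleteContainerTimeoutMs;
}
```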



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

