neils-dev commented on code in PR #3741:
URL: https://github.com/apache/ozone/pull/3741#discussion_r1121086980
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java:
##########
@@ -116,8 +117,25 @@ public HddsDatanodeService(boolean printBanner, String[] args) {
this.args = args != null ? Arrays.copyOf(args, args.length) : null;
}
+ private void cleanTmpDir() {
+ if (datanodeStateMachine != null) {
+ MutableVolumeSet volumeSet =
+ datanodeStateMachine.getContainer().getVolumeSet();
+ for (HddsVolume hddsVolume : StorageVolumeUtil.getHddsVolumesList(
Review Comment:
Would be safer to check whether each volume in the `MutableVolumeSet` is an `instanceof HddsVolume`. Similar to
https://github.com/apache/ozone/blob/9f77588334854c2e610cce9e0d59391207d6849a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/upgrade/ScmHAFinalizeUpgradeActionDatanode.java#L55
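A minimal sketch of the suggested defensive check, using hypothetical stub classes in place of the real `StorageVolume`/`HddsVolume` types from the Ozone codebase:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-ins for Ozone's volume hierarchy; the real classes
// live in org.apache.hadoop.ozone.container.common.volume.
class StorageVolume { }
class HddsVolume extends StorageVolume { }

class VolumeFilterSketch {
  // Keep only the volumes that really are HddsVolume instances instead of
  // blindly casting every entry in the set.
  static List<HddsVolume> hddsVolumesOf(List<? extends StorageVolume> volumes) {
    List<HddsVolume> result = new ArrayList<>();
    for (StorageVolume v : volumes) {
      if (v instanceof HddsVolume) {
        result.add((HddsVolume) v);
      }
    }
    return result;
  }

  public static void main(String[] args) {
    List<StorageVolume> mixed =
        Arrays.asList(new HddsVolume(), new StorageVolume(), new HddsVolume());
    System.out.println(hddsVolumesOf(mixed).size()); // prints 2
  }
}
```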
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java:
##########
@@ -350,6 +368,63 @@ public void createDbStore(MutableVolumeSet dbVolumeSet) throws IOException {
}
}
+ /**
+ * Ensure that volume is initialized properly with
+ * cleanup path. Should disk be re-inserted into
+ * cluster, cleanup path should already be on
+ * disk. This method syncs the HddsVolume
+ * with the path on disk.
+ *
+ * @param id clusterId or scmId
+ */
+ public void checkTmpDirPaths(String id) {
Review Comment:
With the tmp directory configuration restructured into `HddsVolume`, passing the `clusterID` to the tmp dir path config is redundant. _Remove_ all interfaces that take `id` (clusterId) as an argument, since `clusterID` is a private variable in the `HddsVolume` (`StorageVolume`) instance (use `getClusterID`).
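A minimal sketch of the suggested interface change, assuming the volume already holds its cluster id (class and field names here are illustrative, not the real Ozone code):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative stand-in for HddsVolume: the cluster id is a field of the
// volume, so path helpers no longer need an id parameter.
class TmpDirSketch {
  private static final String TMP_DIR = "tmp";

  private final String volumeRoot;
  private final String clusterID;

  TmpDirSketch(String volumeRoot, String clusterID) {
    this.volumeRoot = volumeRoot;
    this.clusterID = clusterID;
  }

  String getClusterID() {
    return clusterID;
  }

  // Before: createTmpPath(String id). After: reads getClusterID() itself.
  Path createTmpPath() {
    return Paths.get(volumeRoot).resolve(getClusterID()).resolve(TMP_DIR);
  }

  public static void main(String[] args) {
    TmpDirSketch vol = new TmpDirSketch("/data/hdds", "CID-1234");
    System.out.println(vol.createTmpPath());
  }
}
```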
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java:
##########
@@ -350,6 +368,63 @@ public void createDbStore(MutableVolumeSet dbVolumeSet) throws IOException {
}
}
+ /**
+ * Ensure that volume is initialized properly with
+ * cleanup path. Should disk be re-inserted into
+ * cluster, cleanup path should already be on
+ * disk. This method syncs the HddsVolume
+ * with the path on disk.
+ *
+ * @param id clusterId or scmId
+ */
+ public void checkTmpDirPaths(String id) {
+ Path tmpPath = createTmpPath(id);
+ deleteServiceDirPath = tmpPath.resolve(TMP_DELETE_SERVICE_DIR);
+ }
+
+ private Path createTmpPath(String id) {
+
+ // HddsVolume root directory path
+ String hddsRoot = getHddsRootDir().toString();
+
+ // HddsVolume path
+ String vol = HddsVolumeUtil.getHddsRoot(hddsRoot);
+
+ Path volPath = Paths.get(vol);
+ Path idPath = Paths.get(id);
+
+ return volPath.resolve(idPath).resolve(TMP_DIR);
+ }
+
+ private void createTmpDir(String id) {
Review Comment:
Update the interface; the `id` argument is unnecessary (clusterId is already defined in `HddsVolume` / `StorageVolume`).
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java:
##########
@@ -1236,6 +1237,29 @@ private void deleteInternal(Container container, boolean force)
DELETE_ON_NON_EMPTY_CONTAINER);
}
}
+ if (container.getContainerData() instanceof KeyValueContainerData) {
+ KeyValueContainerData keyValueContainerData =
+ (KeyValueContainerData) container.getContainerData();
+ HddsVolume hddsVolume = keyValueContainerData.getVolume();
+
+ // Rename container location
+ boolean success = KeyValueContainerUtil.ContainerDeleteDirectory
+ .moveToTmpDeleteDirectory(keyValueContainerData, hddsVolume);
+
+ if (success) {
Review Comment:
This block can be restructured so the failure is handled first and the success message is only built when debug logging is enabled. Something like:
```java
if (!success) {
  LOG.error("Failed to move container under " +
      hddsVolume.getDeleteServiceDirPath());
  throw new StorageContainerException("Moving container failed",
      CONTAINER_INTERNAL_ERROR);
}
if (LOG.isDebugEnabled()) {
  String containerPath = keyValueContainerData
      .getContainerPath().toString();
  File containerDir = new File(containerPath);
  LOG.debug("Container {} has been successfully moved under {}",
      containerDir.getName(), hddsVolume.getDeleteServiceDirPath());
}
```
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java:
##########
@@ -350,6 +368,63 @@ public void createDbStore(MutableVolumeSet dbVolumeSet) throws IOException {
}
}
+ /**
+ * Ensure that volume is initialized properly with
+ * cleanup path. Should disk be re-inserted into
+ * cluster, cleanup path should already be on
+ * disk. This method syncs the HddsVolume
+ * with the path on disk.
+ *
+ * @param id clusterId or scmId
+ */
+ public void checkTmpDirPaths(String id) {
+ Path tmpPath = createTmpPath(id);
+ deleteServiceDirPath = tmpPath.resolve(TMP_DELETE_SERVICE_DIR);
+ }
+
+ private Path createTmpPath(String id) {
+
+ // HddsVolume root directory path
+ String hddsRoot = getHddsRootDir().toString();
+
+ // HddsVolume path
+ String vol = HddsVolumeUtil.getHddsRoot(hddsRoot);
+
+ Path volPath = Paths.get(vol);
+ Path idPath = Paths.get(id);
Review Comment:
The `id` is the clusterId or the scmId, depending on the layout version. It may be best to check and set the `id` from `VersionDatanodeFeatures.chooseContainerPathID` here. Once set here, all calls to the tmp dir container delete service path can be independent of the version, having set the path correctly in one place.
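The version-dependent choice could be resolved once, along these lines (a hypothetical stand-in for `VersionDatanodeFeatures.chooseContainerPathID`; the boolean flag and the signature are assumptions, not the real Ozone API):

```java
// Hypothetical sketch: pick the id used in container paths once, based on
// whether SCM HA has been finalized, so every later tmp-dir lookup is
// version-independent.
class ContainerPathIdSketch {
  static String choosePathId(boolean scmHAFinalized,
      String scmId, String clusterId) {
    // Assumption: finalized layouts key container paths by cluster id,
    // older layouts by SCM id (mirrors the clusterid/scmid note above).
    return scmHAFinalized ? clusterId : scmId;
  }

  public static void main(String[] args) {
    System.out.println(choosePathId(true, "scm-1", "CID-1")); // prints CID-1
  }
}
```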
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java:
##########
@@ -350,6 +368,63 @@ public void createDbStore(MutableVolumeSet dbVolumeSet) throws IOException {
}
}
+ /**
+ * Ensure that volume is initialized properly with
+ * cleanup path. Should disk be re-inserted into
+ * cluster, cleanup path should already be on
+ * disk. This method syncs the HddsVolume
+ * with the path on disk.
+ *
+ * @param id clusterId or scmId
+ */
+ public void checkTmpDirPaths(String id) {
+ Path tmpPath = createTmpPath(id);
+ deleteServiceDirPath = tmpPath.resolve(TMP_DELETE_SERVICE_DIR);
+ }
+
+ private Path createTmpPath(String id) {
+
+ // HddsVolume root directory path
+ String hddsRoot = getHddsRootDir().toString();
+
+ // HddsVolume path
+ String vol = HddsVolumeUtil.getHddsRoot(hddsRoot);
+
+ Path volPath = Paths.get(vol);
+ Path idPath = Paths.get(id);
+
+ return volPath.resolve(idPath).resolve(TMP_DIR);
+ }
+
+ private void createTmpDir(String id) {
+ tmpDirPath = createTmpPath(id);
+
+ if (Files.notExists(tmpDirPath)) {
+ try {
+ Files.createDirectories(tmpDirPath);
+ } catch (IOException ex) {
+ LOG.error("Error creating {}", tmpDirPath.toString(), ex);
+ }
+ }
+ }
+
+ public void createDeleteServiceDir(String id) {
Review Comment:
Update the interface; the `id` argument is unnecessary (clusterId is already defined in `HddsVolume` / `StorageVolume`).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]