sumitagrawl commented on code in PR #5964:
URL: https://github.com/apache/ozone/pull/5964#discussion_r1462918464
##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerReader.java:
##########
@@ -148,7 +148,18 @@ public void readVolume(File hddsVolumeRootDir) {
LOG.info("Start to verify containers on volume {}", hddsVolumeRootDir);
File currentDir = new File(idDir, Storage.STORAGE_DIR_CURRENT);
File[] containerTopDirs = currentDir.listFiles();
- if (containerTopDirs != null) {
+ if (containerTopDirs != null && containerTopDirs.length > 0) {
+ try {
+ // idDir is working directory having data
+ // and volume is initialized with temp path
+ hddsVolume.createTmpDirs(idDir.getName());
Review Comment:
@SaketaChalamchala
`StorageVolumeUtil.checkVolume` takes other actions such as `upgradeVolume`,
creating version files, and creating other directories. These actions cannot be
taken at this point because the SCM id is not available (it is only present
after SCM registration, and it is used to decide the working directory or
whether to upgrade).
Here, the existing code identifies the working directory from the directory
path and uses it for validation later, so this change initializes the temp
path in that same working directory so that the cleanup logic can work. Being
a temp directory, it has no other impact.
`ozone debug container inspect|info|export|list` should not create the temp
dir; I think there is an existing bug where it performs a delete as well. We
should fix that flow using `shouldDeleteRecovering`. This needs a fix.
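To illustrate the suggested direction, here is a minimal, self-contained sketch of guarding temp-dir creation behind a read/write flag, analogous to gating the flow on `shouldDeleteRecovering`. The class name `VolumeTmpDirs`, the `allowWrites` parameter, and the `tmp` subdirectory name are hypothetical and not taken from the Ozone codebase:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

/**
 * Hypothetical sketch: create a per-working-directory tmp dir only for
 * flows that are allowed to mutate the volume. Read-only flows such as
 * `ozone debug container inspect|info|export|list` would pass
 * allowWrites = false and skip both creation and any later cleanup.
 */
public class VolumeTmpDirs {

  /**
   * @param volumeRoot     root of the volume (e.g. the hdds dir)
   * @param workingDirName working directory name derived from the
   *                       on-disk path (stand-in for idDir.getName())
   * @param allowWrites    false for read-only/debug flows
   * @return the created tmp dir, or null when writes are not allowed
   */
  public static File createTmpDirs(File volumeRoot, String workingDirName,
                                   boolean allowWrites) throws IOException {
    if (!allowWrites) {
      // Read-only flow: do not touch the volume at all.
      return null;
    }
    File tmp = new File(new File(volumeRoot, workingDirName), "tmp");
    // createDirectories is idempotent, so repeated scans are safe.
    Files.createDirectories(tmp.toPath());
    return tmp;
  }

  public static void main(String[] args) throws IOException {
    File root = Files.createTempDirectory("vol").toFile();
    File skipped = createTmpDirs(root, "cluster-1", false);
    File created = createTmpDirs(root, "cluster-1", true);
    System.out.println(skipped == null);       // true: debug flow skipped
    System.out.println(created.isDirectory()); // true: normal flow created
  }
}
```

The point of the flag is that the debug subcommands never gain a side effect on the datanode's directory tree, while the normal container-scan path keeps its temp path in the same working directory so existing cleanup logic applies.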
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]