ivandika3 commented on code in PR #7566:
URL: https://github.com/apache/ozone/pull/7566#discussion_r1906536325
##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java:
##########
@@ -1373,4 +1376,116 @@ protected void validateEncryptionKeyInfo(OmBucketInfo bucketInfo, KeyArgs keyArg
keyArgs.getKeyName() + " in encrypted bucket " +
keyArgs.getBucketName(), INVALID_REQUEST);
}
}
+
+ protected void addMissingParentsToCache(OmBucketInfo omBucketInfo,
+ List<OmDirectoryInfo> missingParentInfos,
+ OMMetadataManager omMetadataManager,
+ long volumeId,
+ long bucketId,
+ long transactionLogIndex) throws IOException {
+
+ // validate and update namespace for missing parent directory.
+ checkBucketQuotaInNamespace(omBucketInfo, missingParentInfos.size());
+ omBucketInfo.incrUsedNamespace(missingParentInfos.size());
Review Comment:
In this case, if we are only adding directories to the cache that will be
cleaned up afterwards (for abort and expired abort), I don't think we should
increase the `usedNamespace` of the parent bucket, since the corresponding
`decrUsedNamespace` will not be called and `usedNamespace` will never get back
to 0.
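To make the concern concrete, here is a minimal sketch of one way to keep the quota bookkeeping symmetric, assuming a hypothetical `updateUsedNamespace` flag is added so abort/expired-abort paths can skip the increment (the other calls are the ones already in this diff):

```java
  // Hypothetical variant of addMissingParentsToCache (sketch only): callers
  // that only need the cache entries for cleanup would pass
  // updateUsedNamespace=false, so usedNamespace is never incremented without
  // a matching decrement.
  protected void addMissingParentsToCache(OmBucketInfo omBucketInfo,
      List<OmDirectoryInfo> missingParentInfos,
      OMMetadataManager omMetadataManager,
      long volumeId,
      long bucketId,
      long transactionLogIndex,
      boolean updateUsedNamespace) throws IOException {

    if (updateUsedNamespace) {
      // Only charge the bucket namespace quota when the request actually
      // creates the missing parent directories.
      checkBucketQuotaInNamespace(omBucketInfo, missingParentInfos.size());
      omBucketInfo.incrUsedNamespace(missingParentInfos.size());
    }

    // Add cache entries for the missing parent directories.
    OMFileRequest.addDirectoryTableCacheEntries(omMetadataManager,
        volumeId, bucketId, transactionLogIndex,
        missingParentInfos, null);
  }
```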
##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java:
##########
@@ -1373,4 +1376,116 @@ protected void validateEncryptionKeyInfo(OmBucketInfo bucketInfo, KeyArgs keyArg
keyArgs.getKeyName() + " in encrypted bucket " +
keyArgs.getBucketName(), INVALID_REQUEST);
}
}
+
+ protected void addMissingParentsToCache(OmBucketInfo omBucketInfo,
+ List<OmDirectoryInfo> missingParentInfos,
+ OMMetadataManager omMetadataManager,
+ long volumeId,
+ long bucketId,
+ long transactionLogIndex) throws IOException {
+
+ // validate and update namespace for missing parent directory.
+ checkBucketQuotaInNamespace(omBucketInfo, missingParentInfos.size());
+ omBucketInfo.incrUsedNamespace(missingParentInfos.size());
+
+ // Add cache entries for the missing parent directories.
+ OMFileRequest.addDirectoryTableCacheEntries(omMetadataManager,
+ volumeId, bucketId, transactionLogIndex,
+ missingParentInfos, null);
+ }
+
+ protected OmKeyInfo getOmKeyInfoFromOpenKeyTable(String dbMultipartKey,
+ String keyName,
+ OMMetadataManager omMetadataManager) throws IOException {
+ return omMetadataManager.getOpenKeyTable(getBucketLayout())
+ .get(dbMultipartKey);
+ }
+
+ protected void addMultiPartToCache(
+ OMMetadataManager omMetadataManager, String multipartOpenKey,
+ OMFileRequest.OMPathInfoWithFSO pathInfoFSO, OmKeyInfo omKeyInfo,
+ String keyName, long transactionLogIndex
+ ) {
+
+ // Add multi part to cache
+ OMFileRequest.addOpenFileTableCacheEntry(omMetadataManager,
+ multipartOpenKey, omKeyInfo, pathInfoFSO.getLeafNodeName(),
+ keyName, transactionLogIndex);
+
+ }
+
+ protected boolean addMissingDirectoriesToCacheEnabled() {
+ return false;
+ }
+
+ protected List<OmDirectoryInfo> addOrGetMissingDirectories(OzoneManager ozoneManager,
+ OzoneManagerProtocolProtos.KeyArgs keyArgs,
+ long trxnLogIndex) throws IOException {
+ OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+ final String volumeName = keyArgs.getVolumeName();
+ final String bucketName = keyArgs.getBucketName();
+ final String keyName = keyArgs.getKeyName();
+ OmBucketInfo omBucketInfo = getBucketInfo(omMetadataManager,
+ volumeName, bucketName);
+ OMFileRequest.OMPathInfoWithFSO pathInfoFSO = OMFileRequest
+ .verifyDirectoryKeysInPath(omMetadataManager, volumeName, bucketName,
+ keyName, Paths.get(keyName));
+ List<OmDirectoryInfo> missingParentInfos =
+ getAllMissingParentDirInfo(ozoneManager, keyArgs, omBucketInfo,
+ pathInfoFSO, trxnLogIndex);
+ if (!addMissingDirectoriesToCacheEnabled()) {
+ return missingParentInfos;
+ }
Review Comment:
May I know what the purpose of this is? Are there any OM requests that call
`addOrGetMissingDirectories` but do not actually add the missing directories
to the cache?
If there are, please add a corresponding log. If there aren't, I think we can
just remove this.
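If the flag is kept, a sketch of the suggested log could look like the following (assuming the class has an SLF4J `LOG` field; `missingParentInfos`, `volumeName`, `bucketName`, and `keyName` are already in scope in this method per the diff above):

```java
    if (!addMissingDirectoriesToCacheEnabled()) {
      // Make the skipped cache update visible for debugging; the missing
      // parents are still returned to the caller.
      LOG.debug("Not adding {} missing parent directories of key {}/{}/{} " +
          "to the directory table cache", missingParentInfos.size(),
          volumeName, bucketName, keyName);
      return missingParentInfos;
    }
```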
##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3InitiateMultipartUploadRequestWithFSO.java:
##########
@@ -213,6 +199,9 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager, TermIn
omMetadataManager.getMultipartInfoTable().addCacheEntry(
multipartKey, multipartKeyInfo, transactionLogIndex);
+ if (bucketInfo == null) {
+ throw new IOException("bucketInfo is null");
+ }
Review Comment:
Hm, `validateBucketAndVolume` already checks the cache as well. Moreover, the
OM request is executed under a BUCKET_LOCK, so the bucket should still exist.
The bucket-not-found check should happen earlier, before any OM state is
updated, and it should be an `OMException` thrown with the `BUCKET_NOT_FOUND`
`ResultCodes`.
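Something along these lines, as a sketch of the suggestion (assuming `volumeName` and `bucketName` are in scope at that point, and with the check moved before any cache/state updates):

```java
    if (bucketInfo == null) {
      // Fail fast with the proper OM error code instead of a generic
      // IOException.
      throw new OMException(
          "Bucket not found: " + volumeName + "/" + bucketName,
          OMException.ResultCodes.BUCKET_NOT_FOUND);
    }
```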
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]