prashantpogde commented on a change in pull request #1004:
URL: https://github.com/apache/hadoop-ozone/pull/1004#discussion_r436884838



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
##########
@@ -103,6 +103,16 @@
 
   String getOzoneKey(String volume, String bucket, String key);
 
+  /**
+   * Given a volume, bucket, return the corresponding DB key.
+   *
+   * @param fileHandleInfo - unique keyId that is used by NFS to create a

Review comment:
       yup, done.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
##########
@@ -118,12 +118,19 @@
    * |----------------------------------------------------------------------|
    * |  multipartInfoTable| /volumeName/bucketName/keyName/uploadId ->...   |
    * |----------------------------------------------------------------------|
+   * | keyIdTable         | /volumeName/bucketName/keyId -> KeyName         |
+   * |----------------------------------------------------------------------|
+   *
+   * TBD : Renames need to be made keyIdTable aware. Also KeyId based lookups
+   * should be able to handle any possible race with renames/deletes.
+   *
    */
 
   public static final String USER_TABLE = "userTable";
   public static final String VOLUME_TABLE = "volumeTable";
   public static final String BUCKET_TABLE = "bucketTable";
   public static final String KEY_TABLE = "keyTable";
+  public static final String KEY_ID_TABLE = "keyIdTable";

Review comment:
       If you insist, I will change :)
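
    (Purely illustrative, not code from this PR: the TBD in the table comment above notes that keyId-based lookups must tolerate races with renames/deletes. Assuming the getKeyIdTable()/getOzoneKeyIdTableKey() accessors added here and the existing Table#get, such a lookup could be sketched roughly as below; resolveKeyById is a hypothetical name.)

    // Hypothetical sketch only -- not part of this PR. Resolves a keyId
    // (fileHandleInfo) to an OmKeyInfo, treating a missing entry in either
    // table as "not found" so a concurrent rename/delete is handled safely.
    private OmKeyInfo resolveKeyById(OMMetadataManager omMetadataManager,
        long fileHandleInfo) throws IOException {
      // keyId -> ozone key name, as written by the *Response classes below.
      String ozoneKey = omMetadataManager.getKeyIdTable().get(
          omMetadataManager.getOzoneKeyIdTableKey(fileHandleInfo));
      if (ozoneKey == null) {
        return null; // raced with a delete, or mapping not yet committed
      }
      // ozone key name -> OmKeyInfo; may be null if a rename/delete landed
      // between the two reads, which the caller must treat as "not found".
      return omMetadataManager.getKeyTable().get(ozoneKey);
    }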

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/file/OMDirectoryCreateResponse.java
##########
@@ -81,6 +81,15 @@ protected void addToDBBatch(OMMetadataManager omMetadataManager,
       omMetadataManager.getKeyTable().putWithBatch(batchOperation, dirKey,
           dirKeyInfo);
 
+      // We can also persist the fileHandle to KeyIdTable.
+      if (dirKeyInfo.getFileHandleInfo() != 0) {
+        omMetadataManager.getKeyIdTable().putWithBatch(batchOperation,
+            omMetadataManager.getOzoneKeyIdTableKey(
+                dirKeyInfo.getFileHandleInfo()),
+            omMetadataManager.getOzoneKey(dirKeyInfo.getVolumeName(),
+                dirKeyInfo.getBucketName(),
+                dirKeyInfo.getKeyName()));

Review comment:
       yup, done.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCreateResponse.java
##########
@@ -87,6 +87,15 @@ protected void addToDBBatch(OMMetadataManager omMetadataManager,
         omKeyInfo.getBucketName(), omKeyInfo.getKeyName(), openKeySessionID);
     omMetadataManager.getOpenKeyTable().putWithBatch(batchOperation,
         openKey, omKeyInfo);
+
+    // We can also persist the fileHandle to KeyIdTable.
+    if (omKeyInfo.getFileHandleInfo() != 0) {
+      omMetadataManager.getKeyIdTable().putWithBatch(batchOperation,
+          omMetadataManager.getOzoneKeyIdTableKey(
+              omKeyInfo.getFileHandleInfo()),
+          omMetadataManager.getOzoneKey(omKeyInfo.getVolumeName(),
+              omKeyInfo.getBucketName(), omKeyInfo.getKeyName()));
+    }

Review comment:
       yes, moved to OmKeyCommitResponse.
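
    (For context, a hedged sketch of what the relocated block in OMKeyCommitResponse#addToDBBatch might look like, simply mirroring the pattern quoted above; the actual code in the PR may differ.)

    // Sketch only: persist the fileHandle -> key-name mapping at commit
    // time instead of at key creation, mirroring OMKeyCreateResponse.
    if (omKeyInfo.getFileHandleInfo() != 0) {
      omMetadataManager.getKeyIdTable().putWithBatch(batchOperation,
          omMetadataManager.getOzoneKeyIdTableKey(
              omKeyInfo.getFileHandleInfo()),
          omMetadataManager.getOzoneKey(omKeyInfo.getVolumeName(),
              omKeyInfo.getBucketName(), omKeyInfo.getKeyName()));
    }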

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
##########
@@ -103,6 +103,16 @@
 
   String getOzoneKey(String volume, String bucket, String key);
 
+  /**
+   * Given a volume, bucket, return the corresponding DB key.
+   *
+   * @param fileHandleInfo - unique keyId that is used by NFS to create a

Review comment:
       @hanishakoneru thank you for reviewing the changes.
   Yes, updated this.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyRenameResponse.java
##########
@@ -93,6 +93,15 @@ public void addToDBBatch(OMMetadataManager omMetadataManager,
       omMetadataManager.getKeyTable().putWithBatch(batchOperation,
           omMetadataManager.getOzoneKey(volumeName, bucketName, toKeyName),
           newKeyInfo);
+      // At this point we can also update the KeyIdTable.
+      if (newKeyInfo.getFileHandleInfo() != 0) {
+        omMetadataManager.getKeyIdTable().putWithBatch(batchOperation,
+            omMetadataManager.getOzoneKeyIdTableKey(
+                newKeyInfo.getFileHandleInfo()), toKeyName);
+        omMetadataManager.getKeyIdTable().deleteWithBatch(batchOperation,
+            omMetadataManager.getOzoneKeyIdTableKey(
+                newKeyInfo.getFileHandleInfo()));

Review comment:
       yup, I have changed this.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyRenameResponse.java
##########
@@ -93,6 +93,15 @@ public void addToDBBatch(OMMetadataManager omMetadataManager,
       omMetadataManager.getKeyTable().putWithBatch(batchOperation,

Review comment:
       Not sure if we need to do this.
   - In case of replay it's only deleting fromKey; in that case we do not have toKeyName available, it's NULL.
   - The first time, it seems the sequence would be:
       - update the id <-> toKeyName mapping
       - add toKey
       - delete fromKey
   
   So in case of replay, if the new key is already there, then the id <-> keyName mapping should already be updated as well (see the sketch below).
   
   I also noticed that the sequence of delete and new-key addition was reversed before this change. Not sure how this worked.
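
    (To make the intended ordering concrete, a rough sketch of how OMKeyRenameResponse#addToDBBatch could sequence the batch writes; variable names follow the surrounding diff, and the value written to keyIdTable is assumed to be the full ozone key, as in the create/initiate responses. This is illustrative only, not the PR's final code.)

    // 1. Repoint the keyId mapping at the new name first.
    if (newKeyInfo.getFileHandleInfo() != 0) {
      omMetadataManager.getKeyIdTable().putWithBatch(batchOperation,
          omMetadataManager.getOzoneKeyIdTableKey(
              newKeyInfo.getFileHandleInfo()),
          omMetadataManager.getOzoneKey(volumeName, bucketName, toKeyName));
    }
    // 2. Add the key under its new name.
    omMetadataManager.getKeyTable().putWithBatch(batchOperation,
        omMetadataManager.getOzoneKey(volumeName, bucketName, toKeyName),
        newKeyInfo);
    // 3. Finally remove the old name, so a replay that only sees the delete
    //    still finds the id <-> keyName mapping already updated.
    omMetadataManager.getKeyTable().deleteWithBatch(batchOperation,
        omMetadataManager.getOzoneKey(volumeName, bucketName, fromKeyName));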

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3InitiateMultipartUploadRequest.java
##########
@@ -156,6 +156,7 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
           .setReplicationFactor(keyArgs.getFactor())
           .setObjectID(objectID)
           .setUpdateID(transactionLogIndex)
+          .setFileHandleInfo(objectID)

Review comment:
       Let us keep this for now. If it becomes redundant, I will remove it in a subsequent PR.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3InitiateMultipartUploadResponse.java
##########
@@ -69,6 +69,16 @@ public void addToDBBatch(OMMetadataManager omMetadataManager,
         multipartKey, omKeyInfo);
     omMetadataManager.getMultipartInfoTable().putWithBatch(batchOperation,
         multipartKey, omMultipartKeyInfo);
+
+
+    if (omKeyInfo.getFileHandleInfo() != 0) {
+      omMetadataManager.getKeyIdTable().putWithBatch(batchOperation,
+          omMetadataManager.getOzoneKeyIdTableKey(
+              omKeyInfo.getFileHandleInfo()),
+          omMetadataManager.getOzoneKey(omKeyInfo.getVolumeName(),
+              omKeyInfo.getBucketName(),
+              omKeyInfo.getKeyName()));
+    }

Review comment:
       Yup, updated the changes.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


