errose28 commented on code in PR #6385:
URL: https://github.com/apache/ozone/pull/6385#discussion_r1534659562


##########
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneKey.java:
##########
@@ -70,11 +70,19 @@ public class OzoneKey {
    * Constructs OzoneKey from OmKeyInfo.
    *
    */
+
+  /**
+   * The object and update ID of an existing key. These will be null
+   * if the key is not yet created on OM and read from OM.

Review Comment:
   I'm not sure what this means. They will be null if you try to read a key 
that is not yet created? Should it say "These will be null if the key is not 
yet created on OM OR read from OM", i.e. they are only set when using the 
overwrite API?
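   
   For illustration, a rough sketch of the distinction I think the Javadoc is trying to make (getKey and getObjectID are real per this diff; the matching getUpdateID getter is assumed to exist alongside it):
   
   ```java
   import org.apache.hadoop.ozone.client.OzoneBucket;
   import org.apache.hadoop.ozone.client.OzoneKeyDetails;
   
   class ObjectIdNullability {
     static void show(OzoneBucket bucket) throws Exception {
       // Details fetched from OM: objectID/updateID come from OmKeyInfo, so the
       // getters return real values here.
       OzoneKeyDetails readFromOm = bucket.getKey("existing-key");
       Long objectId = readFromOm.getObjectID();  // non-null, set by OM
       Long updateId = readFromOm.getUpdateID();  // non-null (getter assumed to mirror getObjectID)
       System.out.println(objectId + "/" + updateId);
   
       // An OzoneKey constructed without OM-provided IDs (for example the
       // listKeys path in this PR that passes null, null) returns null from
       // both getters. That is the case the Javadoc should spell out.
     }
   }
   ```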



##########
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java:
##########
@@ -469,6 +469,25 @@ public OzoneOutputStream createKey(String key, long size,
         .createKey(volumeName, name, key, size, replicationConfig, keyMetadata);
   }
 
+  /**
+   * Overwrite an existing key using optimistic locking. The existingKey must exist in Ozone to allow
+   * the new key to be created with the same name. Additionally, the existingKey must not have been
+   * modified since the time its details were read. This is controlled by the objectID and updateID
+   * fields in the existingKey. If the key is replaced the objectID will change. If it has been updated
+   * (eg appended) the updateID will change. If either of these have changed since the existingKey
+   * was read, either the initial key create will fail, or the key will fail to commit after the data
+   * has been written.

Review Comment:
   This needs to document file and directory rename handling too.
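   
   To make the intended flow concrete, a minimal sketch (the new OzoneBucket method's name and signature are not visible in this hunk, so a stand-in interface marks where it would be called):
   
   ```java
   import java.io.IOException;
   import org.apache.hadoop.ozone.client.OzoneBucket;
   import org.apache.hadoop.ozone.client.OzoneKeyDetails;
   import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
   
   class OptimisticOverwriteFlow {
   
     // Stand-in for the new OzoneBucket overwrite method added by this PR; its
     // real name and signature are not shown in the quoted hunk.
     interface OverwriteApi {
       OzoneOutputStream overwrite(OzoneKeyDetails existingKey, long size) throws IOException;
     }
   
     static void replaceContents(OzoneBucket bucket, OverwriteApi api, String keyName,
         byte[] newData) throws IOException {
       // 1. Read the key; this captures its current objectID and updateID.
       OzoneKeyDetails existingKey = bucket.getKey(keyName);
   
       // 2. Write the replacement data. If another writer replaces the key
       //    (objectID changes) or updates it, e.g. an append (updateID changes;
       //    whether a rename counts is what the Javadoc needs to spell out),
       //    between the read and the commit, the create or the final commit is
       //    expected to fail rather than silently clobber the new key.
       try (OzoneOutputStream out = api.overwrite(existingKey, newData.length)) {
         out.write(newData);
       }  // close() commits; a concurrent modification surfaces here or at create
     }
   }
   ```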



##########
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneKey.java:
##########
@@ -179,6 +207,16 @@ public ReplicationConfig getReplicationConfig() {
     return replicationConfig;
   }
 
+  @JsonIgnore
+  public Long getObjectID() {

Review Comment:
   Why are these json ignored? I think they would be good to see from an `ozone sh key info` command.
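   
   Not a real Ozone class, but a tiny Jackson example of the effect in question: with `@JsonIgnore` present the field never reaches JSON output, which is why it would stay invisible to a JSON-based `ozone sh key info`:
   
   ```java
   import com.fasterxml.jackson.annotation.JsonIgnore;
   import com.fasterxml.jackson.databind.ObjectMapper;
   
   class JsonIgnoreEffect {
     // Stand-in class, not the real OzoneKey; it only shows what @JsonIgnore
     // does to serialized output.
     static class KeyView {
       public String getName() {
         return "mykey";
       }
   
       @JsonIgnore  // removing this would make objectID appear in the JSON
       public Long getObjectID() {
         return 12345L;
       }
     }
   
     public static void main(String[] args) throws Exception {
       // Prints {"name":"mykey"} -- objectID is hidden while @JsonIgnore is present.
       System.out.println(new ObjectMapper().writeValueAsString(new KeyView()));
     }
   }
   ```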



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java:
##########
@@ -775,10 +775,19 @@ protected OmKeyInfo prepareFileInfo(
       dbKeyInfo.setReplicationConfig(replicationConfig);
 
       // Construct a new metadata map from KeyArgs.
-      // Clear the old one when the key is overwritten.
-      dbKeyInfo.getMetadata().clear();
-      dbKeyInfo.getMetadata().putAll(KeyValueUtil.getFromProtobuf(
-          keyArgs.getMetadataList()));
+      // Clear the old one when the key is overwritten unless it is a key overwrite
+      // using optimistic commit.
+      if (!keyArgs.hasOverwriteUpdateID()) {
+        dbKeyInfo.getMetadata().clear();
+        dbKeyInfo.getMetadata().putAll(KeyValueUtil.getFromProtobuf(
+            keyArgs.getMetadataList()));
+      }
+      if (keyArgs.hasOverwriteObjectID()) {
+        dbKeyInfo.setOverwriteObjectID(keyArgs.getOverwriteObjectID());
+      }
+      if (keyArgs.hasOverwriteUpdateID()) {
+        dbKeyInfo.setOverwriteUpdateID(keyArgs.getOverwriteUpdateID());
+      }

Review Comment:
   I'm not sure I understand this section in the original code either. It looks 
like we are copying the key metadata as it exists at the time of create, when 
it seems we should actually be doing this at the time of commit. Wherever this 
logic is happening, this is probably where we want to copy the native ACLs for
the overwrite case as well.
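   
   Very roughly, the shape I had in mind, as a sketch only (the helper name and its placement in the commit path are hypothetical, and it assumes OmKeyInfo's metadata map and ACL accessors can be used this way at commit time):
   
   ```java
   import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
   
   final class CommitTimeCarryOver {
     // Hypothetical helper invoked from the commit path instead of from
     // prepareFileInfo at create time. For the optimistic-overwrite case the
     // committed key keeps the metadata and native ACLs of the key version that
     // was read, taken from the key table entry about to be replaced.
     static void carryOver(OmKeyInfo keyBeingCommitted, OmKeyInfo existingDbKey,
         boolean isOptimisticOverwrite) {
       if (isOptimisticOverwrite && existingDbKey != null) {
         keyBeingCommitted.getMetadata().clear();
         keyBeingCommitted.getMetadata().putAll(existingDbKey.getMetadata());
         keyBeingCommitted.setAcls(existingDbKey.getAcls());
       }
     }
   }
   ```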



##########
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java:
##########
@@ -1643,7 +1685,9 @@ public List<OzoneKey> listKeys(String volumeName, String bucketName,
               key.getCreationTime(),
               key.getModificationTime(),
               key.getReplicationConfig(),
-              key.isFile()))
+              key.isFile(),
+              null,
+              null))

Review Comment:
   nit. We can use the constructor without these two parameters instead. Same 
result but looks cleaner here.
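   
   i.e. something along these lines, shown with a generic stand-in class since OzoneKey's full argument list is elided above:
   
   ```java
   // Generic illustration only; not OzoneKey's real constructors.
   class Key {
     private final String name;
     private final Long objectId;
     private final Long updateId;
   
     Key(String name, Long objectId, Long updateId) {
       this.name = name;
       this.objectId = objectId;
       this.updateId = updateId;
     }
   
     // Shorter overload for callers with no IDs to pass, such as the listKeys
     // path above, instead of spelling out null, null at every call site.
     Key(String name) {
       this(name, null, null);
     }
   }
   ```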



##########
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java:
##########
@@ -240,6 +240,13 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager, TermIn
         throw new OMException("Failed to " + action + " key, as " + dbOpenKey +
             "entry is not found in the OpenKey table", KEY_NOT_FOUND);
       }
+
+      validateOptimisticLockingOverwrite(keyToDelete, omKeyInfo, auditMap);
+      // Optimistic locking validation has passed. Now set the overwrite fields to null so they are
+      // not persisted in the key table.
+      omKeyInfo.setOverwriteUpdateID(null);
+      omKeyInfo.setOverwriteObjectID(null);

Review Comment:
   Do we even need these persisted to the open key table? The client could just 
hold the update ID in memory from the original read, and pass it as the 
overwrite update ID on commit for the OM to use only during the in-memory 
portion of the commit. If the client dies before finishing the overwrite it 
will lose its client ID and not be able to access the open key to resume 
writing anyway.
   
   It seems like we can move all new logic to the commit phase without 
modifying key create requests or the protos that are written to the DB.
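   
   A sketch of that alternative from the client's point of view (commitWithExpected does not exist; it is only a placeholder for passing the remembered IDs in the commit request, and getUpdateID is assumed to exist alongside the getObjectID getter in this PR):
   
   ```java
   import java.io.IOException;
   import java.util.Collections;
   import org.apache.hadoop.ozone.client.OzoneBucket;
   import org.apache.hadoop.ozone.client.OzoneKeyDetails;
   import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
   
   class CommitOnlyValidation {
     static void overwrite(OzoneBucket bucket, String keyName, byte[] data) throws IOException {
       // Read once; the expected IDs live only in client memory from here on.
       OzoneKeyDetails existing = bucket.getKey(keyName);
       long expectedObjectId = existing.getObjectID();
       long expectedUpdateId = existing.getUpdateID();
   
       // Plain create/write path, unchanged: nothing extra is written to the
       // open key table. (Key metadata is left empty here for brevity.)
       try (OzoneOutputStream out = bucket.createKey(keyName, data.length,
           existing.getReplicationConfig(), Collections.emptyMap())) {
         out.write(data);
         // Hypothetical: ship the expected IDs with the commit itself, so the
         // OM only compares them against the current key table entry in memory
         // during validateAndUpdateCache of the commit request.
         // out.commitWithExpected(expectedObjectId, expectedUpdateId);
       }
     }
   }
   ```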



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

