dennishuo commented on code in PR #3205:
URL: https://github.com/apache/polaris/pull/3205#discussion_r2663645712


##########
persistence/relational-jdbc/src/main/java/org/apache/polaris/persistence/relational/jdbc/idempotency/PostgresIdempotencyStore.java:
##########
@@ -143,7 +144,23 @@ public boolean updateHeartbeat(
                       realmId,
                       idempotencyKey,
                       executorId)));
-      return rows > 0;
+      if (rows > 0) {
+        return HeartbeatResult.UPDATED;
+      }
+
+      // No rows updated: determine why by loading the current record, if any.
+      Optional<IdempotencyRecord> existing = load(realmId, idempotencyKey);

Review Comment:
   The benefit of returning `HeartbeatResult` directly from the persistence 
impl, instead of making the caller call `load` afterwards, is that the 
conflicting state can be read "atomically" with whatever caused the update to 
fail.
   
   We probably don't have to over-optimize for those types of edge cases 
initially though, so no action needed. Just leaving this here for posterity.
   
   Optionally, you could add a `// TODO` to do some kind of atomic or 
transactional read alongside the update.
   
   With Postgres, it might be possible to just use `RETURNING` to accomplish 
this, though I'm not a postgres expert so it's just an idea :)



##########
polaris-core/src/main/java/org/apache/polaris/core/persistence/IdempotencyStore.java:
##########
@@ -44,6 +44,35 @@ enum ReserveResultType {
     DUPLICATE
   }
 
+  /**
+   * Result of attempting to update the heartbeat for an in-progress 
idempotency record.
+   *
+   * <p>This allows callers to distinguish between different "no update" 
scenarios instead of
+   * overloading them into a boolean.
+   */
+  enum HeartbeatResult {

Review Comment:
   The most future-proof approach would be to return a struct that could 
include such an enum alongside, potentially, an inline conflicting 
`IdempotencyRecord`, an error string, etc.
   
   But I guess we'll know better once we have the callsites, so I'm okay with 
iterating on it once we see the entire thing working end-to-end. We should 
still try to minimize how much the SPI needs to change in the future, though.
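   
   A hedged sketch of what such a struct could look like (the name 
`HeartbeatOutcome` and its fields are illustrative only, not a concrete 
proposal):
   
   ```java
   import java.util.Optional;
   
   /**
    * Hypothetical wrapper result: the enum plus optional conflict details,
    * so the SPI can grow without changing the method signature.
    */
   record HeartbeatOutcome(
       HeartbeatResult result,
       // Populated when the update failed, so the caller can inspect the
       // conflicting reservation inline instead of issuing a separate load().
       Optional<IdempotencyRecord> conflictingRecord,
       // Free-form detail for logging/diagnostics.
       Optional<String> detail) {}
   ```
   
   Callers that only switch on `result()` wouldn't need changes if fields are 
added later, which hopefully keeps future SPI churn small.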



##########
polaris-core/src/main/java/org/apache/polaris/core/persistence/IdempotencyStore.java:
##########
@@ -44,6 +44,35 @@ enum ReserveResultType {
     DUPLICATE
   }
 
+  /**
+   * Result of attempting to update the heartbeat for an in-progress 
idempotency record.
+   *
+   * <p>This allows callers to distinguish between different "no update" 
scenarios instead of
+   * overloading them into a boolean.
+   */
+  enum HeartbeatResult {
+    /** The heartbeat was successfully updated for the in-progress record 
owned by this executor. */
+    UPDATED,
+
+    /**
+     * The idempotency record exists but has already been finalized with an 
HTTP status; there is no
+     * longer an in-progress reservation to heartbeat.
+     */
+    FINALIZED,
+
+    /**
+     * No idempotency record exists for the specified realm and key. This can 
happen if the record
+     * was never created or has already been purged.
+     */
+    NOT_FOUND,
+
+    /**
+     * An in-progress idempotency record exists for the key, but it is owned 
by a different
+     * executor. The caller should stop heartbeating as it no longer owns the 
reservation.
+     */
+    LOST_OWNERSHIP

Review Comment:
   I may have glossed over this case -- I assumed a mismatched executorId is 
normally an unexpected collision on an idempotencyKey. How are executors 
supposed to force leasing of the idempotency key? Will we actually support 
different executors/clients "correctly" sharing one idempotency key? And if 
so, why does it differ from a single executor sending (maybe accidental) 
async retries?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
