C0urante commented on code in PR #13801:
URL: https://github.com/apache/kafka/pull/13801#discussion_r1286347860


##########
connect/runtime/src/main/java/org/apache/kafka/connect/storage/ConnectorOffsetBackingStore.java:
##########
@@ -279,15 +290,52 @@ public Future<Void> set(Map<ByteBuffer, ByteBuffer> values, Callback<Void> callb
             throw new IllegalStateException("At least one non-null offset store must be provided");
         }
 
-        return primaryStore.set(values, (primaryWriteError, ignored) -> {
+        List<ByteBuffer> partitionsWithTombstoneOffsets = values.entrySet().stream()
+                .filter(offsetEntry -> offsetEntry.getValue() == null)
+                .map(Map.Entry::getKey).collect(Collectors.toList());
+
+        Map<ByteBuffer, ByteBuffer> tombstoneOffsets = new HashMap<>();
+        for (ByteBuffer partition : partitionsWithTombstoneOffsets) {
+            tombstoneOffsets.put(partition, null);
+        }
+        Map<ByteBuffer, ByteBuffer> regularOffsets = values.entrySet().stream()
+                .filter(offsetEntry -> offsetEntry.getValue() != null)
+                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
+
+        // If the supplied offsets contain tombstone values, then tombstone offsets are extracted first,
+        // written to the secondary store in a synchronous manner and then to the primary store.
+        // This is because, if a tombstone offset is successfully written to the per-connector offsets topic,
+        // but cannot be written to the global offsets topic, then the global offsets topic will still contain that
+        // source offset, but the per-connector topic will not. Due to the fallback-on-global logic used by the worker,
+        // if a task requests offsets for one of the tombstoned partitions, the worker will provide it with the
+        // offsets present in the global offsets topic, instead of indicating to the task that no offsets can be found.
+        CompletableFuture<Void> offsetWriteFuture = CompletableFuture.completedFuture(null);
+        if (secondaryStore != null && !tombstoneOffsets.isEmpty()) {
+            offsetWriteFuture.thenApply(v -> {
+                Future<Void> secondaryWriteFuture = secondaryStore.set(tombstoneOffsets, new FutureCallback<>());
+                try {
+                    if (exactlyOnce) {
+                        secondaryWriteFuture.get();
+                    } else {
+                        secondaryWriteFuture.get(offsetFlushTimeoutMs, TimeUnit.MILLISECONDS);
+                    }
+                } catch (ExecutionException e) {
+                    log.error("{} Flush of tombstone(s) offsets to secondary store threw an unexpected exception: ", this, e.getCause());
+                } catch (Exception e) {
+                    log.error("{} Got Exception when trying to flush tombstone(s) offsets to secondary store", this, e);
+                }
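
For context, the fallback-on-global behavior that the comment block above describes can be sketched roughly as follows. This is a simplified illustration with hypothetical names (readOffset, connectorStore, globalStore), not the actual worker code:

```java
import java.nio.ByteBuffer;
import java.util.Map;

// Simplified model of the worker's fallback-on-global lookup: prefer the
// per-connector (primary) store, and fall back to the global (secondary)
// store when the partition has no live entry in the primary one.
static ByteBuffer readOffset(Map<ByteBuffer, ByteBuffer> connectorStore,
                             Map<ByteBuffer, ByteBuffer> globalStore,
                             ByteBuffer partition) {
    ByteBuffer connectorValue = connectorStore.get(partition);
    if (connectorValue != null) {
        return connectorValue;
    }
    // If a tombstone reached the per-connector topic but the matching write to
    // the global topic failed, this fallback would resurrect the stale offset
    // instead of reporting that no offset exists. Writing tombstones to the
    // secondary (global) store first, and synchronously, closes that window.
    return globalStore.get(partition);
}
```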

Review Comment:
   Why are we catching these exceptions at all? Shouldn't we be throwing them so that the offset commit fails?
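   
   For illustration, failure propagation could look roughly like the sketch below. This is only a sketch of the suggestion: the helper name and the choice of ConnectException as the wrapper are assumptions, not something the PR prescribes.
   
   ```java
   import java.util.concurrent.ExecutionException;
   import java.util.concurrent.Future;
   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.TimeoutException;
   
   import org.apache.kafka.connect.errors.ConnectException;
   
   // Hypothetical helper: await the secondary-store write and surface any
   // failure as an unchecked ConnectException so the offset commit fails
   // instead of the error being logged and swallowed.
   static void awaitSecondaryWrite(Future<Void> write, boolean exactlyOnce, long timeoutMs) {
       try {
           if (exactlyOnce) {
               write.get();
           } else {
               write.get(timeoutMs, TimeUnit.MILLISECONDS);
           }
       } catch (ExecutionException e) {
           throw new ConnectException("Failed to flush tombstone offsets to secondary store", e.getCause());
       } catch (InterruptedException e) {
           Thread.currentThread().interrupt();
           throw new ConnectException("Interrupted while flushing tombstone offsets to secondary store", e);
       } catch (TimeoutException e) {
           throw new ConnectException("Timed out while flushing tombstone offsets to secondary store", e);
       }
   }
   ```
   
   Since ConnectException is unchecked, throwing it from inside the thenApply lambda would complete the resulting future exceptionally rather than letting the offset commit appear to succeed.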



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
