FrankChen021 commented on a change in pull request #12008:
URL: https://github.com/apache/druid/pull/12008#discussion_r771807303



##########
File path: extensions-core/kafka-indexing-service/src/main/java/org/apache/druid/indexing/kafka/IncrementalPublishingKafkaIndexTaskRunner.java
##########
@@ -126,38 +126,56 @@ private void possiblyResetOffsetsOrWait(
       TaskToolbox taskToolbox
   ) throws InterruptedException, IOException
   {
-    final Map<TopicPartition, Long> resetPartitions = new HashMap<>();
-    boolean doReset = false;
+    final Map<TopicPartition, Long> newOffsetInMetadata = new HashMap<>();
+
     if (task.getTuningConfig().isResetOffsetAutomatically()) {
      for (Map.Entry<TopicPartition, Long> outOfRangePartition : outOfRangePartitions.entrySet()) {
         final TopicPartition topicPartition = outOfRangePartition.getKey();
-        final long nextOffset = outOfRangePartition.getValue();
-        // seek to the beginning to get the least available offset
+        final long outOfRangeOffset = outOfRangePartition.getValue();
+
         StreamPartition<Integer> streamPartition = StreamPartition.of(
             topicPartition.topic(),
             topicPartition.partition()
         );
-        final Long leastAvailableOffset = recordSupplier.getEarliestSequenceNumber(streamPartition);
-        if (leastAvailableOffset == null) {
-          throw new ISE(
-              "got null sequence number for partition[%s] when fetching from kafka!",
-              topicPartition.partition()
-          );
+
+        final Long earliestAvailableOffset = recordSupplier.getEarliestSequenceNumber(streamPartition);
+        if (earliestAvailableOffset == null) {
+          throw new ISE("got null earliest sequence number for partition[%s] when fetching from kafka!",
+                        topicPartition.partition());
         }
-        // reset the seek
-        recordSupplier.seek(streamPartition, nextOffset);
-        // Reset consumer offset if resetOffsetAutomatically is set to true
-        // and the current message offset in the kafka partition is more than the
-        // next message offset that we are trying to fetch
-        if (leastAvailableOffset > nextOffset) {
-          doReset = true;
-          resetPartitions.put(topicPartition, nextOffset);
+
+        if (outOfRangeOffset < earliestAvailableOffset) {
+          //
+          // In this case, it's probably because the partition expired before Druid could read from the next offset,
+          // so the messages in [outOfRangeOffset, earliestAvailableOffset) are lost.
+          // These lost messages cannot be restored even if a manual reset is performed,
+          // so it's reasonable to reset the offset to the earliest available position.
+          //
+          recordSupplier.seek(streamPartition, earliestAvailableOffset);
+          newOffsetInMetadata.put(topicPartition, outOfRangeOffset);

Review comment:
       It should be the `earliestAvailableOffset`. This is a silly mistake. 
Thanks for pointing it out.
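The fix the review agrees on can be sketched as a standalone snippet. This is a hypothetical simplification (illustrative names, plain `String` partition keys instead of the actual Druid/Kafka classes), not the real task-runner code: when the offset we tried to read predates the earliest offset Kafka still retains, the lost range cannot be recovered, so we resume at, and record in metadata, the `earliestAvailableOffset` rather than the expired `outOfRangeOffset`.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the corrected reset logic, not the actual Druid classes.
public class OffsetResetSketch {
  /**
   * For each out-of-range partition, if the requested offset is older than the
   * earliest offset Kafka retains, record the earliest available offset (the
   * point the consumer will actually resume from) as the new metadata offset.
   */
  static Map<String, Long> resetExpiredPartitions(
      Map<String, Long> outOfRangeOffsets,   // partition -> offset we tried to read
      Map<String, Long> earliestAvailable    // partition -> earliest offset still retained
  ) {
    final Map<String, Long> newOffsetInMetadata = new HashMap<>();
    for (Map.Entry<String, Long> entry : outOfRangeOffsets.entrySet()) {
      final long outOfRangeOffset = entry.getValue();
      final long earliestAvailableOffset = earliestAvailable.get(entry.getKey());
      if (outOfRangeOffset < earliestAvailableOffset) {
        // Messages in [outOfRangeOffset, earliestAvailableOffset) are gone, so
        // publish earliestAvailableOffset, not the expired outOfRangeOffset.
        newOffsetInMetadata.put(entry.getKey(), earliestAvailableOffset);
      }
    }
    return newOffsetInMetadata;
  }

  public static void main(String[] args) {
    // Partition "topic-0" expired past offset 100; earliest retained is 250.
    System.out.println(resetExpiredPartitions(
        Map.of("topic-0", 100L),
        Map.of("topic-0", 250L)));
  }
}
```

Recording the expired offset instead, as the original diff did, would republish a position the consumer can never read again, which is exactly the mistake the comment above acknowledges.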




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


