samarthjain commented on a change in pull request #12008:
URL: https://github.com/apache/druid/pull/12008#discussion_r761332196



##########
File path: extensions-core/kafka-indexing-service/src/main/java/org/apache/druid/indexing/kafka/IncrementalPublishingKafkaIndexTaskRunner.java
##########
@@ -126,38 +126,56 @@ private void possiblyResetOffsetsOrWait(
       TaskToolbox taskToolbox
   ) throws InterruptedException, IOException
   {
-    final Map<TopicPartition, Long> resetPartitions = new HashMap<>();
-    boolean doReset = false;
+    final Map<TopicPartition, Long> newOffsetInMetadata = new HashMap<>();
+
     if (task.getTuningConfig().isResetOffsetAutomatically()) {
       for (Map.Entry<TopicPartition, Long> outOfRangePartition : outOfRangePartitions.entrySet()) {
         final TopicPartition topicPartition = outOfRangePartition.getKey();
-        final long nextOffset = outOfRangePartition.getValue();
-        // seek to the beginning to get the least available offset
+        final long outOfRangeOffset = outOfRangePartition.getValue();
+
         StreamPartition<Integer> streamPartition = StreamPartition.of(
             topicPartition.topic(),
             topicPartition.partition()
         );
-        final Long leastAvailableOffset = recordSupplier.getEarliestSequenceNumber(streamPartition);
-        if (leastAvailableOffset == null) {
-          throw new ISE(
-              "got null sequence number for partition[%s] when fetching from kafka!",
-              topicPartition.partition()
-          );
+
+        final Long earliestAvailableOffset = recordSupplier.getEarliestSequenceNumber(streamPartition);
+        if (earliestAvailableOffset == null) {
+          throw new ISE("got null earliest sequence number for partition[%s] when fetching from kafka!",
+                        topicPartition.partition());
         }
-        // reset the seek
-        recordSupplier.seek(streamPartition, nextOffset);
-        // Reset consumer offset if resetOffsetAutomatically is set to true
-        // and the current message offset in the kafka partition is more than the
-        // next message offset that we are trying to fetch
-        if (leastAvailableOffset > nextOffset) {
-          doReset = true;
-          resetPartitions.put(topicPartition, nextOffset);
+
+        if (outOfRangeOffset < earliestAvailableOffset) {

Review comment:
       @gianm - could you explain the scenario in which the offset we are asking the consumer to seek to could possibly be higher than the latest available offset in Kafka?
    ```
    @Nonnull
    @Override
    protected List<OrderedPartitionableRecord<Integer, Long, KafkaRecordEntity>> getRecords(
        RecordSupplier<Integer, Long, KafkaRecordEntity> recordSupplier,
        TaskToolbox toolbox
    ) throws Exception
    {
      try {
        return recordSupplier.poll(task.getIOConfig().getPollTimeout());
      }
      catch (OffsetOutOfRangeException e) {
        //
        // Handles OffsetOutOfRangeException, which is thrown if the seeked-to
        // offset is not present in the topic-partition. This can happen if we're asking a task to read from data
        // that has not been written yet (which is totally legitimate). So let's wait for it to show up
        //
        log.warn("OffsetOutOfRangeException with message [%s]", e.getMessage());
        possiblyResetOffsetsOrWait(e.offsetOutOfRangePartitions(), recordSupplier, toolbox);
        return Collections.emptyList();
      }
    }
    ```
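
For context, the two out-of-range scenarios behind that question can be shown with a minimal, standalone sketch against the plain Kafka consumer API rather than Druid's `RecordSupplier`. The class and helper names, the hard-coded wait, and the recovery policy below are illustrative assumptions, not the PR's implementation: if the requested offset is below the earliest retained offset, the data has been deleted and the only way forward is to seek ahead; if it is above the latest offset, the data has simply not been written yet, so the consumer should just poll again later.

```
import java.time.Duration;
import java.util.Collection;
import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetOutOfRangeException;
import org.apache.kafka.common.TopicPartition;

public class OffsetOutOfRangeSketch
{
  /**
   * Hypothetical helper: for each out-of-range partition, decide whether the requested
   * offset fell below the earliest retained offset (data already deleted, so seek ahead)
   * or above the latest offset (data not produced yet, so simply wait and poll again).
   */
  static void handleOutOfRange(Consumer<byte[], byte[]> consumer, Map<TopicPartition, Long> outOfRange)
      throws InterruptedException
  {
    final Collection<TopicPartition> partitions = outOfRange.keySet();
    final Map<TopicPartition, Long> earliest = consumer.beginningOffsets(partitions);
    final Map<TopicPartition, Long> latest = consumer.endOffsets(partitions);

    for (Map.Entry<TopicPartition, Long> entry : outOfRange.entrySet()) {
      final TopicPartition tp = entry.getKey();
      final long requested = entry.getValue();

      if (requested < earliest.get(tp)) {
        // Retention (or topic re-creation) removed the requested offset; the only way
        // to make progress is to skip ahead to the earliest offset still available.
        consumer.seek(tp, earliest.get(tp));
      } else if (requested > latest.get(tp)) {
        // The requested offset has not been written yet; nothing to reset, just wait
        // before the next poll. (The 1 s pause is an arbitrary placeholder.)
        Thread.sleep(1_000);
      }
    }
  }

  /** Poll once, routing OffsetOutOfRangeException to the helper above. */
  static ConsumerRecords<byte[], byte[]> pollOnce(Consumer<byte[], byte[]> consumer)
      throws InterruptedException
  {
    try {
      return consumer.poll(Duration.ofSeconds(1));
    }
    catch (OffsetOutOfRangeException e) {
      handleOutOfRange(consumer, e.offsetOutOfRangePartitions());
      return ConsumerRecords.empty();
    }
  }
}
```

The `outOfRangeOffset < earliestAvailableOffset` check in the diff above corresponds to the first branch of this sketch; what the PR does in the other case is not visible in the excerpt.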




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


