DL1231 commented on PR #20837:
URL: https://github.com/apache/kafka/pull/20837#issuecomment-3511639969

   @apoorvmittal10 Thanks for your patient reply. I'd like to follow up with 
another question:
   
   Suppose a batch of offsets 0-499 has already failed and offset 30 is the bad 
   record:
   - First acquisition: get half of the range (offsets 0-249) → fails again 
   because it includes offset 30
   - Second acquisition: get half of that range (offsets 0-124) → fails again 
   because it still includes offset 30
   - Final attempt: get only one record (offset 0) → succeeds, but offset 30 
   (the actual bad record) remains untouched
   - Next acquisition: the range becomes 1-499 → fails again because offset 30 
   is still in it
   
   In this scenario, records 1-124 would each be delivered 5 times and 
   eventually get archived, while the actual bad record at offset 30 keeps 
   causing failures.
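
   To make the pattern concrete, here is a toy simulation of one possible 
   halve-on-failure rule (halve the acquisition size after a failed delivery, 
   widen back to the full remaining range after a success). This is only my 
   reading of the behaviour, not the actual SharePartition code, and every 
   name in it is made up:

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Toy model, not Kafka code: BAD_OFFSET poisons every batch containing it.
   public class HalvingSketch {

       static final int BAD_OFFSET = 30;

       public static void main(String[] args) {
           int next = 0;     // first offset not yet acknowledged
           int last = 499;   // end of the initially acquired range
           int count = 250;  // records per acquisition, halved on failure
           Map<Integer, Integer> deliveries = new HashMap<>();

           for (int attempt = 1; attempt <= 12; attempt++) {
               int to = Math.min(next + count - 1, last);
               boolean fails = next <= BAD_OFFSET && BAD_OFFSET <= to;
               for (int o = next; o <= to; o++) {
                   deliveries.merge(o, 1, Integer::sum); // count a delivery
               }
               System.out.printf("attempt %2d: acquire %3d-%3d -> %s%n",
                       attempt, next, to, fails ? "fail" : "ok");
               if (fails) {
                   count = Math.max(1, count / 2);  // shrink and retry
               } else {
                   next = to + 1;                   // ack the clean prefix
                   count = last - next + 1;         // widen again
               }
           }
           System.out.println("offset 0 delivered " + deliveries.get(0) + " times");
           System.out.println("offset " + BAD_OFFSET + " delivered "
                   + deliveries.get(BAD_OFFSET) + " times");
       }
   }
   ```

   Under this rule the clean prefix that finally succeeds is 0-14 rather than 
   a single record (the exact size depends on the real halving rule), and 
   offset 0 reaches 5 deliveries, i.e. the default share-group delivery count 
   limit, while offset 30 just keeps accumulating failed deliveries as the 
   window snaps back to 15-499. Either way the shape matches the concern 
   above: neighbours get archived, the bad record is never isolated.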
   
   So this solution essentially reduces the impact range by about 3/4, but 
doesn't completely isolate the bad record. Is my understanding correct?
   
   This seems to reduce the collateral damage rather than surgically remove 
   the problematic record.

