apoorvmittal10 commented on PR #20837:
URL: https://github.com/apache/kafka/pull/20837#issuecomment-3511510542

   @DL1231 I am not sure we are on the same page about the change. I have tried to 
write out the problem statement and a probable solution below:
   
   Problem Statement:
   A batch can contain a bad record that triggers an application crash if the 
application does not handle bad records correctly. Each failed delivery keeps 
bumping the batch's delivery count until the full batch is archived. However, the 
whole batch gets archived even though a single bad record might have been causing 
the crash.
   
   Solution:
   The broker cannot determine which offset is bad. However, the broker can 
restrict the acquired records to a subset so that eventually only the bad record 
is skipped. We can do the following (see the sketch after this list):
   
   1. If a batch's delivery count is `>= 3`, then acquire only half of the batch's 
records, i.e. for a batch of 0-499 (500 records) with delivery count 3, start 
offset tracking and acquire 0-249 (250 records).
   2. If the delivery count is bumped again, keep acquiring half of the previously 
acquired offsets until the last delivery attempt, i.e. 0-124 (125 records).
   3. For the last delivery attempt, acquire only 1 offset, so that only the bad 
record is skipped.

