showuon commented on PR #16959:
URL: https://github.com/apache/kafka/pull/16959#issuecomment-2309775773

   > This patch might create holes in the remote log segment lineage if the 
RLMM plugin implementation is eventually consistent. Previously, we deleted 
the data only once it breached either the retention size or the retention 
time. With this patch, we delete the dangling segments aggressively.
   
   > If the RLMM plugin is eventually consistent and momentarily returns a 
previous state instead of the current one, then a segment may be deleted 
prematurely, creating holes in the segment lineage.
   
   Hmm.. good point! Indeed, we can't guarantee that the metadata manager 
always returns the latest state.
   
   > Another way to fix this problem is to track the failed segment IDs in the 
copy path itself and delete them there, similar to the case where we delete an 
uploaded segment when it breaches the configured custom metadata size.
   
   Deleting in the copy path is a good suggestion. That way, we no longer have 
to rely on the RLMM at all. Let me update the PR! Thanks.
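   To make the agreed-upon approach concrete, here is a minimal, hypothetical 
sketch of the copy-path cleanup idea (the class and method names are 
illustrative, not Kafka's actual API): when a segment upload fails partway, the 
copy path itself remembers the segment ID and deletes the dangling data 
immediately, mirroring how an uploaded segment is deleted when it breaches the 
custom metadata size limit, so no read of possibly-stale RLMM state is needed.

   ```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of copy-path cleanup; not Kafka's real classes.
public class CopyPathCleanupSketch {
    // Stands in for the remote object store contents.
    private final Set<String> remoteStore = new HashSet<>();
    // Segment IDs that failed to copy and were cleaned up in the copy path.
    private final List<String> deletedSegments = new ArrayList<>();

    // Simulates copying one segment; 'uploadFails' models a mid-copy failure.
    public boolean copySegment(String segmentId, boolean uploadFails) {
        remoteStore.add(segmentId); // data lands in remote storage first
        if (uploadFails) {
            // Copy failed: delete the dangling segment right here in the
            // copy path, without consulting the (possibly stale) RLMM.
            deleteSegment(segmentId);
            return false;
        }
        return true;
    }

    private void deleteSegment(String segmentId) {
        remoteStore.remove(segmentId);
        deletedSegments.add(segmentId);
    }

    public Set<String> remoteSegments() {
        return remoteStore;
    }

    public List<String> deletedSegments() {
        return deletedSegments;
    }
}
   ```

   In this sketch, a successful copy leaves the segment in the remote store, 
while a failed copy removes it synchronously, so no dangling segment survives 
for a later retention pass to mis-handle.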
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
