kamalcph commented on code in PR #16959:
URL: https://github.com/apache/kafka/pull/16959#discussion_r1731483586


##########
core/src/main/java/kafka/log/remote/RemoteLogManager.java:
##########
@@ -1254,7 +1265,10 @@ void cleanupExpiredRemoteLogSegments() throws RemoteStorageException, ExecutionE
                         canProcess = false;
                         continue;
                     }
-                    if (RemoteLogSegmentState.DELETE_SEGMENT_FINISHED.equals(metadata.state())) {
+                    // Skip COPY_SEGMENT_STARTED segments since they might be dangling segments that failed earlier
+                    // and block the normal segment deletion, e.g. they failed the `isRemoteSegmentWithinLeaderEpochs` check, etc.
+                    if (RemoteLogSegmentState.DELETE_SEGMENT_FINISHED.equals(metadata.state()) ||
+                            RemoteLogSegmentState.COPY_SEGMENT_STARTED.equals(metadata.state())) {

Review Comment:
   Can we avoid this check? The pending dangling segments (those that we missed deleting in the copy phase because remote storage was unavailable at the time) get removed once they breach the retention time / size.
   
   > blocks the normal segment deletion
   
   The normal segments still get deleted in the next run, since we calculate the size based only on segments in the COPY_SEGMENT_FINISHED and DELETE_SEGMENT_STARTED states.
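   
   For illustration, here is a minimal, self-contained sketch of the size accounting described above (the `RemoteLogRetentionSketch` class, the simplified `SegmentMetadata` record, and `totalRetentionSizeBytes` are hypothetical names, not the actual `RemoteLogManager` code; the enum mirrors Kafka's `RemoteLogSegmentState`). Because only COPY_SEGMENT_FINISHED and DELETE_SEGMENT_STARTED segments count toward the total, a dangling COPY_SEGMENT_STARTED segment never inflates the retention size:
   
   ```java
   import java.util.EnumSet;
   import java.util.Iterator;
   import java.util.Set;
   
   class RemoteLogRetentionSketch {
   
       // Simplified stand-in for Kafka's RemoteLogSegmentState enum.
       enum RemoteLogSegmentState {
           COPY_SEGMENT_STARTED, COPY_SEGMENT_FINISHED,
           DELETE_SEGMENT_STARTED, DELETE_SEGMENT_FINISHED
       }
   
       // Simplified stand-in for RemoteLogSegmentMetadata: just the state and size.
       record SegmentMetadata(RemoteLogSegmentState state, long segmentSizeInBytes) {}
   
       // Only these two states contribute to the retention-size total, so a dangling
       // COPY_SEGMENT_STARTED segment cannot block size-based deletion of normal segments.
       private static final Set<RemoteLogSegmentState> RETENTION_STATES =
               EnumSet.of(RemoteLogSegmentState.COPY_SEGMENT_FINISHED,
                          RemoteLogSegmentState.DELETE_SEGMENT_STARTED);
   
       // Sum the sizes of segments whose state counts toward retention.
       static long totalRetentionSizeBytes(Iterator<SegmentMetadata> segments) {
           long total = 0L;
           while (segments.hasNext()) {
               SegmentMetadata metadata = segments.next();
               if (RETENTION_STATES.contains(metadata.state())) {
                   total += metadata.segmentSizeInBytes();
               }
           }
           return total;
       }
   }
   ```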


