smiroslav commented on code in PR #2467:
URL: https://github.com/apache/jackrabbit-oak/pull/2467#discussion_r2294091627
##########
oak-segment-azure/src/main/java/org/apache/jackrabbit/oak/segment/azure/AzureArchiveManager.java:
##########
@@ -80,38 +80,16 @@ public AzureArchiveManager(BlobContainerClient readBlobContainerClient, BlobCont
     @Override
     public List<String> listArchives() throws IOException {
         try {
-            List<String> archiveNames = readBlobContainerClient.listBlobsByHierarchy(rootPrefix).stream()
+            return readBlobContainerClient.listBlobsByHierarchy(rootPrefix).stream()
Review Comment:
> Or you mean when someone manually deletes the blob in Azure?
Another Oak process could do it. Alternatively, the segment may not have
been uploaded due to the abrupt termination of the current process. Upon
restarting the application, the archive might contain segment 0001 but not
segment 0000. By filtering such archives out based on the absence of segment
0000, the new process remains unaware that the problem has happened.
> My concern is that not filtering out those archives anymore in
AzureArchiveManager.listArchives() is a change in behavior.
Yes, like getting rid of the part that deletes the segments :).
I can add the filter logic back, and once we introduce a marker for the
delete operation, we can filter based on that instead.
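For reference, the kind of filter being discussed can be sketched as below. This is a simplified, self-contained illustration: the `hasSegmentZero` helper and the plain string lists are stand-ins for the real Azure blob listing and the `AzureArchiveManager` internals, not the actual Oak implementation.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ArchiveFilterSketch {

    // An archive is treated as "complete" if it contains its first
    // segment, i.e. a blob whose name starts with "0000.".
    static boolean hasSegmentZero(List<String> blobNames) {
        return blobNames.stream().anyMatch(name -> name.startsWith("0000."));
    }

    // Keep only archives whose first segment is present. An archive
    // holding 0001 but not 0000 (e.g. after an abrupt termination or a
    // concurrent delete) is silently dropped by this filter, which is
    // exactly the behavior the review comment questions.
    static List<String> listCompleteArchives(Map<String, List<String>> archives) {
        return archives.entrySet().stream()
                .filter(e -> hasSegmentZero(e.getValue()))
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, List<String>> archives = Map.of(
                "data00000a.tar", List.of("0000.abc", "0001.def"),
                "data00001a.tar", List.of("0001.ghi")); // 0000 missing
        System.out.println(listCompleteArchives(archives)); // prints [data00000a.tar]
    }
}
```

The sketch makes the concern concrete: the incomplete archive simply disappears from the listing, so the restarted process never learns that segments were lost. A delete marker, as proposed, would let the filter distinguish an intentional deletion from this kind of corruption.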
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]