Hi Marco,

To delete blobs, segment compaction has to be executed beforehand so that the older revisions still referring to the blobs in the datastore are removed. You can take a look at the integration test for blob garbage collection [1].

I am not so sure that the creation of a new segment tar file should be directly related to a run of MarkSweepGarbageCollector, as it only reads the internal index for external binaries.
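For illustration, the order of operations looks roughly like this. This is only a sketch: `fileStore`, `blobStore`, `executor` and `repositoryId` are placeholders you would already have set up, and the `MarkSweepGarbageCollector` constructor arguments can differ between Oak versions, so check them against the version you run.

```java
// Sketch, not a complete program: assumes an already-built FileStore,
// a GarbageCollectableBlobStore, an Executor and a repositoryId.

// 1. Compact the segment store first, so that older revisions that
//    still reference the deleted binary are actually removed.
fileStore.fullGC();

// 2. Only then run the blob garbage collector; once the mark phase
//    no longer finds a reference, the sweep phase can delete the blob.
MarkSweepGarbageCollector collector = new MarkSweepGarbageCollector(
        new SegmentBlobReferenceRetriever(fileStore),
        blobStore,
        executor,
        "/tmp/blobgc",  // temporary working directory (placeholder)
        16,             // batch size (illustrative value)
        0,              // maxLastModifiedInterval, as in your test
        repositoryId);
collector.collectGarbage(false); // false = mark and sweep, not mark-only
```

If compaction has not run, the mark phase still sees the old revisions' references and the binary is never swept, which matches the behaviour you describe.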
Thanks,
Amit

[1] https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/test/java/org/apache/jackrabbit/oak/segment/SegmentDataStoreBlobGCIT.java#L218-L221

On Wed, Apr 15, 2020 at 12:06 AM Marco Piovesana <[email protected]> wrote:
> Hi all,
> I'm running some tests with the MarkSweepGarbageCollector to run a garbage
> collection on a local file store.
> I'm running it with the maxLastModifiedInterval set to 0, so I expected the
> garbage collector to remove the binary right after I delete the file.
> What happens, however, is that the binary is never deleted, and for each
> execution of the garbage collector I see a new segment tar file (doesn't
> matter if I run a FileStore.fullGC() or not before).
> Why is that? What am I missing?
>
> Marco.
