[ 
https://issues.apache.org/jira/browse/OAK-9765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17534703#comment-17534703
 ] 

Piercarlo Slavazza commented on OAK-9765:
-----------------------------------------

[~amitj] thanks for the hints.

??Also, need to create enough garbage in the segment store so that the segment 
GC is effective??

In the linked code, 500 blobs of 16 KB each are created, for a total of ~8 MB of 
blobs. In my test I usually create a single blob of 10 MB, and it still fails to 
be swept (I also added the exact snippet that you linked). So is it correct to 
infer that what matters is not just the total size of the blobs, but also the 
number of blobs (each of which has to exceed a certain threshold)?
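For reference, the sizes involved work out as follows (a quick stdlib-only check; 
the 500 × 16 KB figures come from the linked snippet, the 10 MB blob from my test):

```java
public class BlobGarbageMath {

    // Total bytes occupied by `count` blobs of `sizePerBlobBytes` each.
    static long totalBytes(long count, long sizePerBlobBytes) {
        return count * sizePerBlobBytes;
    }

    public static void main(String[] args) {
        long linked = totalBytes(500, 16 * 1024);      // 500 blobs of 16 KB
        long single = totalBytes(1, 10 * 1024 * 1024); // one 10 MB blob
        System.out.println(linked); // 8192000  (~7.8 MiB)
        System.out.println(single); // 10485760 (10 MiB)
        // The single 10 MB blob carries *more* total bytes than the 500
        // small blobs, yet is not swept - which is what suggests that the
        // blob count (or some per-blob threshold) matters, not only the
        // total amount of garbage.
    }
}
```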

> Garbage Collection does not remove blobs file from the file system
> ------------------------------------------------------------------
>
>                 Key: OAK-9765
>                 URL: https://issues.apache.org/jira/browse/OAK-9765
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>    Affects Versions: 1.42.0
>            Reporter: Piercarlo Slavazza
>            Priority: Blocker
>
> Using a NodeStore backed by a FileStore, with a blob store of type 
> FileBlobStore:
>  # (having configured GC with estimation {_}disabled{_})
>  # a file is added as a blob
>  # then the node where the blob is referenced is _removed_
>  # then the GC is run
>  # expected behaviour: the node is no longer accessible, _and_ no chunk of the 
> blob is present on the file system
>  # actual behaviour: the node is no longer accessible, BUT all the chunks are 
> still present on the file system
> Steps to reproduce: execute the (really tiny) main in 
> [https://github.com/PiercarloSlavazza/oak-garbage-collection-test/] 
> (instructions in the readme)
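To make the expected behaviour concrete: blob GC is a mark-and-sweep over the 
chunk files on disk. The following is a toy, stdlib-only sketch of that idea 
(it is NOT Oak's implementation - Oak resolves the referenced set from the 
segment store; here it is simply passed in, and chunk file names stand in for 
blob IDs):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;

// Toy mark-and-sweep over a directory of blob chunk files: any chunk whose
// name is not in the set of still-referenced chunks is deleted from disk.
public class ToyBlobGc {

    static int sweep(Path blobDir, Set<String> referenced) throws IOException {
        int deleted = 0;
        try (DirectoryStream<Path> chunks = Files.newDirectoryStream(blobDir)) {
            for (Path chunk : chunks) {
                if (!referenced.contains(chunk.getFileName().toString())) {
                    Files.delete(chunk); // unreferenced chunk -> garbage
                    deleted++;
                }
            }
        }
        return deleted;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("blobs");
        Files.write(dir.resolve("chunk-a"), new byte[16]);
        Files.write(dir.resolve("chunk-b"), new byte[16]);
        // Only chunk-a is still referenced; chunk-b should be swept.
        int deleted = sweep(dir, Set.of("chunk-a"));
        System.out.println("deleted=" + deleted);
    }
}
```

The bug reported here is precisely that the sweep phase leaves all chunks of 
the unreferenced blob on the file system.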



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
