[
https://issues.apache.org/jira/browse/OAK-9765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17534712#comment-17534712
]
Amit Jain commented on OAK-9765:
--------------------------------
[~piercarlo_s]
{quote} * so is it correct to infer that it is not just the total amount of
blobs but also the number of blobs (that have to overcome each a certain
threshold)?{quote}
Since we are talking about the segment store, it is the total number of
nodes, and the properties associated with those nodes, that affect it. The
test code that I linked accomplishes this by creating nodes with inlined blobs
(< 16 KB). A 10 MB blob will not be stored in the node store directly; only
a few bytes of its id will be.
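To illustrate the point above, here is a minimal sketch of the inline-vs-reference decision. The class and method names are hypothetical, not Oak's real API; the ~16 KB threshold is the value mentioned in this comment.

```java
// Sketch only: how a segment store might decide whether a binary is
// written inline into the node records or stored externally, with the
// node store keeping just an id reference.
public class BlobWriter {
    // Threshold from the comment above: binaries under ~16 KB are inlined.
    static final int INLINE_THRESHOLD = 16 * 1024; // bytes

    // Returns a description of where the blob's bytes end up.
    static String writeBlob(byte[] data) {
        if (data.length < INLINE_THRESHOLD) {
            // Small binaries are inlined, so each one grows the segment
            // store by roughly its full size.
            return "inlined:" + data.length + "B";
        }
        // Large binaries go to the blob store; the node store keeps
        // only a short identifier.
        return "reference:id";
    }

    public static void main(String[] args) {
        System.out.println(writeBlob(new byte[1024]));             // small: inlined
        System.out.println(writeBlob(new byte[10 * 1024 * 1024])); // 10 MB: reference
    }
}
```

This is why many small inlined blobs, rather than a few large ones, are what drive segment-store growth in the linked test.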
> Garbage Collection does not remove blobs file from the file system
> ------------------------------------------------------------------
>
> Key: OAK-9765
> URL: https://issues.apache.org/jira/browse/OAK-9765
> Project: Jackrabbit Oak
> Issue Type: Bug
> Affects Versions: 1.42.0
> Reporter: Piercarlo Slavazza
> Priority: Blocker
>
> Using a NodeStore backed by a FileStore, with a blob store of type
> FileBlobStore:
> # (having configured GC with estimation {_}disabled{_})
> # a file is added as a blob
> then the node where the blob is referenced is _removed_
> # then the GC is run
> # expected behaviour: the node is no more accessible, _and_ no chunk of the
> blob is present on the file system
> # actual behaviour: the node is no more accessible BUT all the chunks are
> still present on the file system
> Steps to reproduce: execute the (really tiny) main in
> [https://github.com/PiercarloSlavazza/oak-garbage-collection-test/]
> (instructions in the readme)
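The expected behaviour in the steps above amounts to a mark-sweep pass over the blob store. A self-contained sketch of that idea (hypothetical names; this is not Oak's MarkSweepGarbageCollector, and a map stands in for chunk files on disk):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch only: mark-sweep blob GC over a file-backed blob store.
// Chunks whose ids are no longer referenced by any live node should
// be deleted from the file system.
public class BlobGcSketch {
    // blob id -> chunk contents (stands in for chunk files on disk)
    final Map<String, byte[]> chunks = new HashMap<>();

    // Mark phase happens elsewhere: referencedIds holds every blob id
    // still reachable from a live node. Sweep phase: delete the rest.
    List<String> collectGarbage(Set<String> referencedIds) {
        List<String> deleted = new ArrayList<>();
        for (String id : new ArrayList<>(chunks.keySet())) {
            if (!referencedIds.contains(id)) {
                chunks.remove(id); // on a real FileBlobStore: delete the chunk file
                deleted.add(id);
            }
        }
        return deleted;
    }
}
```

The reported bug is that this sweep never happens for the chunk files: after the node is removed and GC runs, the chunks remain on disk.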
--
This message was sent by Atlassian Jira
(v8.20.7#820007)