[ https://issues.apache.org/jira/browse/OAK-4598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393971#comment-15393971 ]
Francesco Mari commented on OAK-4598:
-------------------------------------
The problem of duplicated binary references is not caused by the new binary
references index. The root cause seems to be the generation-based garbage
collection in the Segment Store. Assuming that {{N}} generations are retained
at any time by the Segment Store and that no other change happens to the
content between compaction cycles, the collector will receive the same binary
references {{N}} times. You can easily verify this by tuning the
{{SegmentGCOptions}} in {{SegmentDataStoreBlobGCIT}}. [~mduerig], can you
confirm this? [~amitjain], can the data store garbage collection algorithm be
adapted to discard duplicates?
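For the verification, something along these lines could go into the test setup
(a minimal sketch; {{defaultGCOptions()}}, {{setRetainedGenerations(int)}} and
{{getRetainedGenerations()}} follow the oak-segment-tar API of the time and may
differ across versions):

{code:java}
import org.apache.jackrabbit.oak.segment.compaction.SegmentGCOptions;

public class RetainedGenerationsSketch {

    public static void main(String[] args) {
        // Retain two generations: if the hypothesis holds, every binary
        // reference should be reported twice after a compaction cycle.
        SegmentGCOptions gcOptions = SegmentGCOptions.defaultGCOptions()
                .setRetainedGenerations(2);
        System.out.println("Retained generations: "
                + gcOptions.getRetainedGenerations());
    }
}
{code}

As for discarding duplicates, the collector side could simply track the
references it has already seen. The following is a minimal, self-contained
sketch; the {{Collector}} interface is hypothetical and merely stands in for
the actual reference callback used by the data store garbage collector:

{code:java}
import java.util.HashSet;
import java.util.Set;

public class DeduplicatingCollector {

    /** Hypothetical callback; stands in for the actual reference collector. */
    public interface Collector {
        void addReference(String blobId);
    }

    /**
     * Wraps a collector so that each blob reference is forwarded at most
     * once, even if the Segment Store reports it once per retained generation.
     */
    public static Collector deduplicate(Collector delegate) {
        Set<String> seen = new HashSet<>();
        return blobId -> {
            // Set#add returns false if the id was already present.
            if (seen.add(blobId)) {
                delegate.addReference(blobId);
            }
        };
    }
}
{code}

For the blob counts in this issue an in-memory set is cheap; for much larger
repositories, dropping duplicates while sorting the collected references might
be the better place to do this.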
> Collection of references retrieves less when large number of blobs added
> ------------------------------------------------------------------------
>
> Key: OAK-4598
> URL: https://issues.apache.org/jira/browse/OAK-4598
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: segment-tar
> Reporter: Amit Jain
> Assignee: Francesco Mari
> Labels: datastore, gc
> Fix For: Segment Tar 0.0.8
>
>
> When a large number of external blobs (50000) is added to the DataStore and a
> compaction cycle is executed, the reference collection logic returns fewer
> blob references than expected. It reports the correct number of blob
> references when fewer blobs are added, indicating some sort of overflow.
> Another related issue, observed when testing with a smaller number of blobs,
> is that the references returned are double the expected amount, so some sort
> of de-duplication should probably be added.
> Without compaction the blob references are returned correctly, at least up to
> 100000 (ExternalBlobId#testNullBlobId)