[
https://issues.apache.org/jira/browse/OAK-4598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394994#comment-15394994
]
Amit Jain commented on OAK-4598:
--------------------------------
bq. can the data store garbage collection algorithm be adapted to discard
duplicates
It already does that, so correctness-wise it is not a problem. It just adds a
performance overhead, since these records are externally sorted, and the
reporting (logging) while collection is running would be inaccurate.
bq. The root cause seems to be the generation-based garbage collection in the
Segment Store. Assuming that N generations are retained at any time by the
Segment Store and that no other change happens to the content between
compaction cycles, the collector will receive N times the same binary
references
This could become significant for larger repositories; for the test case itself
it already adds a factor of 2.
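As a rough sketch of the de-duplication discussed here (illustrative names only, not the actual Oak API), the duplicate references reported by the N retained segment generations could be dropped with a simple set before they reach the downstream collector:
{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.function.Consumer;

// Minimal sketch: the class and method names below are made up for
// illustration and do not correspond to the real Oak reference collection API.
public class DedupingReferenceCollector {

    // References already forwarded; duplicates from other retained generations are skipped.
    private final Set<String> seen = new HashSet<>();
    private final Consumer<String> downstream;

    public DedupingReferenceCollector(Consumer<String> downstream) {
        this.downstream = downstream;
    }

    // Forwards a blob reference only the first time it is seen, so a reference
    // held by several retained segment generations is counted and logged once.
    public void addReference(String blobId) {
        if (seen.add(blobId)) {
            downstream.accept(blobId);
        }
    }
}
{code}
This would keep the reported counts accurate while collection is running, at the cost of holding the seen references in memory, whereas relying on the external sort in the data store garbage collection only drops the duplicates later.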
> Collection of references retrieves less when large number of blobs added
> ------------------------------------------------------------------------
>
> Key: OAK-4598
> URL: https://issues.apache.org/jira/browse/OAK-4598
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: segment-tar
> Reporter: Amit Jain
> Assignee: Francesco Mari
> Labels: datastore, gc
> Fix For: Segment Tar 0.0.8
>
>
> When a large number of external blobs are added to the DataStore (50000) and a
> compaction cycle is executed, the reference collection logic returns fewer blob
> references than expected. It reports the correct number of blob references when
> fewer blobs are added, indicating some sort of overflow.
> Another related issue, observed when testing with fewer blobs, is that the
> references returned are double the amount expected, so some sort of
> de-duplication should probably be added.
> Without compaction the blob references are returned correctly, at least up to
> 100000 (ExternalBlobId#testNullBlobId)