[ https://issues.apache.org/jira/browse/OAK-632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting resolved OAK-632.
-------------------------------

       Resolution: Fixed
    Fix Version/s: 0.13

The latest numbers are pretty good, so resolving as Fixed:

{noformat}
100 nodes in 28 ms
200 nodes in 85 ms
400 nodes in 174 ms
800 nodes in 350 ms
1600 nodes in 498 ms
3200 nodes in 769 ms
6400 nodes in 1171 ms
12800 nodes in 1624 ms
25600 nodes in 3657 ms
51200 nodes in 7236 ms
{noformat}

Only a bit more than 1k segments are created (i.e. roughly one segment per 
revision, as expected), and the storageSize of the MongoDB database is about 
17MB, which is quite reasonable (less than 200 bytes per node).
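As a quick sanity check on those figures (using the node counts from the benchmark runs above and the quoted 17MB database size, taken here as 17 * 1024 * 1024 bytes):

```java
// Sanity check on the numbers quoted above: total nodes written across all
// benchmark runs, and the resulting bytes-per-node for a ~17MB database.
class StorageCheck {
    public static void main(String[] args) {
        long totalNodes = 0;
        for (int n = 100; n <= 51200; n *= 2) {
            totalNodes += n;                  // 100 + 200 + ... + 51200
        }
        double bytesPerNode = (17 * 1024 * 1024) / (double) totalNodes;
        System.out.println(totalNodes + " nodes, ~"
                + Math.round(bytesPerNode) + " bytes/node");
        // prints "102300 nodes, ~174 bytes/node"
    }
}
```

which indeed comes out below 200 bytes per node.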

> SegmentMK: Efficient updates of flat nodes
> ------------------------------------------
>
>                 Key: OAK-632
>                 URL: https://issues.apache.org/jira/browse/OAK-632
>             Project: Jackrabbit Oak
>          Issue Type: Sub-task
>          Components: segmentmk
>            Reporter: Jukka Zitting
>            Priority: Minor
>             Fix For: 0.13
>
>
> The SegmentMK already uses the Hash Array Mapped Trie (HAMT) data structure 
> for the child node entries of a node. This persistent data structure allows 
> child node entries to be added, modified, or removed with only O(log n) added 
> bytes of storage. However, so far we only have the code to write a new HAMT 
> data structure from scratch, not the code to selectively update just parts 
> of it. To properly support large, flat content we also need to implement 
> the latter.
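The selective update described above relies on path copying: only the nodes on the path from the root to the changed entry are rewritten, everything else is shared with the previous revision. A minimal sketch of that idea for a persistent hash trie (illustrative names only, not Oak's actual SegmentMK classes; collision handling omitted):

```java
// Minimal persistent hash trie with path-copying updates.
// Illustrative sketch only; not the actual SegmentMK implementation.
class Hamt {
    private static final int BITS = 5;           // 32-way branching
    private static final int MASK = (1 << BITS) - 1;

    private final String key;     // non-null for leaves
    private final String value;
    private final Hamt[] slots;   // non-null for branches

    private Hamt(String key, String value) {
        this.key = key; this.value = value; this.slots = null;
    }
    private Hamt(Hamt[] slots) {
        this.key = null; this.value = null; this.slots = slots;
    }

    // Insert or update a key: copies only the nodes on the path from the
    // root to the affected leaf (O(log n) new nodes), sharing the rest.
    static Hamt put(Hamt node, String k, String v, int shift) {
        if (node == null) return new Hamt(k, v);
        if (node.slots == null) {                        // leaf
            if (node.key.equals(k)) return new Hamt(k, v);
            // split: push the existing leaf one level down, then retry
            Hamt[] fresh = new Hamt[1 << BITS];
            fresh[(node.key.hashCode() >>> shift) & MASK] = node;
            return put(new Hamt(fresh), k, v, shift);
        }
        int idx = (k.hashCode() >>> shift) & MASK;
        Hamt[] copy = node.slots.clone();                // path copy here
        copy[idx] = put(copy[idx], k, v, shift + BITS);
        return new Hamt(copy);
    }

    static String get(Hamt node, String k, int shift) {
        if (node == null) return null;
        if (node.slots == null) return node.key.equals(k) ? node.value : null;
        return get(node.slots[(k.hashCode() >>> shift) & MASK], k, shift + BITS);
    }

    public static void main(String[] args) {
        Hamt r1 = put(null, "a", "1", 0);
        Hamt r2 = put(r1, "b", "2", 0);  // r1 is untouched; r2 shares with r1
        System.out.println(get(r2, "a", 0)); // prints "1"
        System.out.println(get(r1, "b", 0)); // prints "null"
    }
}
```

Because each `put` returns a new root while leaving the old one intact, every revision remains readable, and the storage cost per update is proportional to the trie depth rather than the number of child entries.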



--
This message was sent by Atlassian JIRA
(v6.1#6144)
