[ https://issues.apache.org/jira/browse/OAK-5192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996610#comment-15996610 ]

Thomas Mueller commented on OAK-5192:
-------------------------------------

> Also with recent changes in OakDirectory files should not be getting inlined 
> so the growth being seen here is purely due to blobId values.

Maybe this is not working as expected? 48'047'720 bytes for _just_ the Lucene 
index, if all of that is blobIds, is a _lot_ of blobIds.
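
As a rough sanity check (assuming a stored blobId costs on the order of 50 
bytes; that per-blobId cost is an assumption, not a measured value), that 
index size would correspond to nearly a million blobIds, i.e. close to a 
terabyte of referenced chunk data at 1 MB per chunk:

```java
public class BlobIdSanityCheck {
    public static void main(String[] args) {
        long indexBytes = 48_047_720L; // reported size of the lucene index node
        long perBlobId = 50L;          // assumption: rough cost of one stored blobId
        long blobIds = indexBytes / perBlobId;      // ~960'000 blobIds
        long referencedGb = blobIds / 1024;         // at 1 MB per chunk, ~938 GB
        System.out.println(blobIds + " blobIds -> ~" + referencedGb + " GB of chunks");
    }
}
```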

> Change the chunk size from 1MB to 10MB and see its impact
> So to represent 1GB file we would be using ~50KB of space in NodeStore
> have an OakDirectory implementation which does not chunk at all and streams 
> the complete binary in one piece

50 KB is just 0.005% of 1 GB. So you can save 0.005% of disk space with that, 
right? It doesn't sound like this would solve the problem, which is to "Reduce 
Lucene related growth of _repository_ size". The repository is _both_ the 
datastore and the nodestore. This would only shrink the nodestore, with a tiny 
impact on overall repository size.
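
The arithmetic behind that percentage (the 50 KB figure is taken from the 
proposal above):

```java
public class NodeStoreOverhead {
    public static void main(String[] args) {
        long fileBytes = 1_000_000_000L; // the 1 GB binary in the datastore
        long refBytes = 50_000L;         // ~50 KB of blobId references in the nodestore
        double savedPct = 100.0 * refBytes / fileBytes;
        System.out.println(savedPct + "% of the repository footprint"); // 0.005%
    }
}
```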

I still think that the following options will have a much bigger impact:
* try to store the index less often in the repository (e.g. only store it once 
per minute)
* change the merge policy

Another option might be to convert some of the Lucene indexes to asynchronous 
property indexes.

> Reduce Lucene related growth of repository size
> -----------------------------------------------
>
>                 Key: OAK-5192
>                 URL: https://issues.apache.org/jira/browse/OAK-5192
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: lucene, segment-tar
>            Reporter: Michael Dürig
>            Assignee: Tommaso Teofili
>              Labels: perfomance, scalability
>             Fix For: 1.8, 1.7.3
>
>         Attachments: added-bytes-zoom.png
>
>
> I observed Lucene indexing contributing to up to 99% of repository growth. 
> While the size of the index itself is well inside reasonable bounds, the 
> overall turnover of data being written and removed again can be as much as 
> 99%. 
> In the case of the TarMK this negatively impacts overall system performance 
> due to fast growing number of tar files / segments, bad locality of 
> reference, cache misses/thrashing when looking up segments and vastly 
> prolonged garbage collection cycles.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
