[ 
https://issues.apache.org/jira/browse/OAK-8950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mueller resolved OAK-8950.
---------------------------------
    Resolution: Fixed

> DataStore: FileCache should use one cache segment
> -------------------------------------------------
>
>                 Key: OAK-8950
>                 URL: https://issues.apache.org/jira/browse/OAK-8950
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: blob
>            Reporter: Thomas Mueller
>            Assignee: Thomas Mueller
>            Priority: Major
>
> The FileCache in the caching data store (Azure, S3) uses the default segment 
> count of 16. The effect of that is:
>  * if the maximum cache size is e.g. 16 GB,
>  * and there are e.g. 15 files of 1 GB each (15 GB total),
>  * some files may still be evicted,
>  * because internally the cache uses 16 segments of 1 GB each,
>  * and by chance 2 files can land in the same segment,
>  * so that one of those files is evicted.
> The workaround is to configure a much larger cache size (e.g. 100 GB if you 
> only want 15 GB of cache), but the drawback is that, if most files are very 
> small, the cache could actually grow to 100 GB.
> The best solution is probably to use only 1 segment. There is a tiny 
> concurrency issue: right now, deleting files is synchronized on the segment. 
> But I think that's not a big problem (to be tested).
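The eviction effect described above can be reproduced with a small simulation. This is a hypothetical sketch, not Oak's actual FileCache: a cache split into 16 LRU segments, each limited to 1/16 of the total weight, where a second 1 GB file hashing into an occupied segment evicts the first file even though the cache as a whole is under its 16 GB limit.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical model of a segmented weighted LRU cache (NOT Oak's FileCache):
// the total limit is split evenly across segments, so eviction is decided
// per segment, not against the global limit.
class SegmentedCacheDemo {
    static final int SEGMENTS = 16;
    static final long MAX_BYTES = 16L << 30;               // 16 GB total
    static final long SEGMENT_LIMIT = MAX_BYTES / SEGMENTS; // 1 GB per segment

    // one access-ordered (LRU) map per segment: key -> size in bytes
    final List<LinkedHashMap<Integer, Long>> segments = new ArrayList<>();
    final long[] used = new long[SEGMENTS];
    int evictions = 0;

    SegmentedCacheDemo() {
        for (int i = 0; i < SEGMENTS; i++) {
            segments.add(new LinkedHashMap<>(16, 0.75f, true));
        }
    }

    void put(int key, long size) {
        int s = key % SEGMENTS;                 // segment chosen by key hash
        LinkedHashMap<Integer, Long> seg = segments.get(s);
        seg.put(key, size);
        used[s] += size;
        // evict least-recently-used entries until this segment fits its share
        Iterator<Map.Entry<Integer, Long>> it = seg.entrySet().iterator();
        while (used[s] > SEGMENT_LIMIT && it.hasNext()) {
            Map.Entry<Integer, Long> e = it.next();
            if (e.getKey() == key) continue;    // keep the entry just added
            used[s] -= e.getValue();
            it.remove();
            evictions++;
        }
    }

    public static void main(String[] args) {
        SegmentedCacheDemo cache = new SegmentedCacheDemo();
        long oneGb = 1L << 30;
        // 14 files land in distinct segments; the 15th (key 16) collides
        // with key 0, because 16 % 16 == 0
        for (int key = 0; key < 14; key++) cache.put(key, oneGb);
        cache.put(16, oneGb);
        long cached = 0;
        for (long u : cache.used) cached += u;
        System.out.println("cached " + (cached >> 30) + " GB of "
                + (MAX_BYTES >> 30) + " GB limit, evictions: " + cache.evictions);
        // prints: cached 14 GB of 16 GB limit, evictions: 1
    }
}
```

With a single segment (SEGMENTS = 1), the same 15 files would total 15 GB against the full 16 GB limit and nothing would be evicted, which is the fix this issue proposes.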



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
