[ https://issues.apache.org/jira/browse/OAK-1042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting resolved OAK-1042.
--------------------------------

       Resolution: Fixed
    Fix Version/s: 0.10

I worked around the {{CacheLIRS.get(key, loader)}} bottleneck by explicitly 
tracking concurrent loads at a higher level, see http://svn.apache.org/r1528593.

With that I think we can mark this as resolved.
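For readers who want the shape of the workaround without digging through the commit: the usual way to track concurrent loads outside a cache is to keep an in-flight map of futures, so racing callers for the same key share one load instead of serializing on the cache's internal lock. The sketch below is a minimal illustration of that general technique under stated assumptions, not the actual Oak code from r1528593; the class and method names are hypothetical. Note it only deduplicates in-flight loads, it does not cache results.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

// Hypothetical sketch: callers racing on the same key share one
// FutureTask, so the expensive load runs once, without holding a
// cache-wide lock while loading. Not a cache: completed results
// are dropped from the map.
class LoadTracker<K, V> {
    private final ConcurrentHashMap<K, FutureTask<V>> inFlight =
            new ConcurrentHashMap<>();

    V load(K key, Callable<V> loader) {
        FutureTask<V> task = new FutureTask<>(loader);
        FutureTask<V> existing = inFlight.putIfAbsent(key, task);
        FutureTask<V> winner = (existing == null) ? task : existing;
        if (existing == null) {
            task.run();          // we won the race: perform the load once
        }
        try {
            return winner.get(); // everyone waits on the same in-flight load
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            inFlight.remove(key, task); // no-op unless we inserted this task
        }
    }
}
```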

> Segment node store caching
> --------------------------
>
>                 Key: OAK-1042
>                 URL: https://issues.apache.org/jira/browse/OAK-1042
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>            Reporter: Thomas Mueller
>            Assignee: Thomas Mueller
>             Fix For: 0.10
>
>
> Segment node store caching seems to use quite a lot of CPU. According to 
> the built-in profiler, the oak-run SimpleSearchTest spends about 50% of 
> its CPU time in segment node store caching:
> {code}
> java -mx1g -Dwarmup=3 -Druntime=15 -jar target/oak-run-*.jar benchmark 
> SimpleSearchTest Oak-Tar
> packages:
> 48%: com.google.common.cache <== cache
> 16%: org.apache.jackrabbit.oak.plugins.segment
> 8%: org.apache.jackrabbit.oak.plugins.memory
> 4%: org.apache.jackrabbit.oak.util
> 3%: org.apache.jackrabbit.oak.core
> 2%: org.apache.jackrabbit.oak.benchmark
> 2%: com.google.common.base   <== cache
> .
> Oak-Tar                          308     310     313     324     344      48
> {code}
> The problem seems to be the cache in the FileStore. As far as I can see, 
> the cache limit is 1000 <UUID, Segment> entries (size based, not weight 
> based).
> I wonder if there is a simple way to reduce CPU usage. I will try the 
> LIRS cache.
> I also wonder whether this cache should really be size limited rather 
> than weight limited, given that segments can have different sizes as far 
> as I know.



--
This message was sent by Atlassian JIRA
(v6.1#6144)
