[ 
https://issues.apache.org/jira/browse/OAK-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597300#comment-14597300
 ] 

Marcel Reutegger commented on OAK-2989:
---------------------------------------

bq. is the amount of change or the period configurable ?

The DocumentNodeStore persists changes to a branch once a commit accumulates 
10'000 changes. This includes calls to addNode() and setProperty(). The limit 
can be tweaked with the system property {{-Dupdate.limit}}.
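As a minimal sketch of how such a JVM system property can be read with a fallback default (the class and method names here are illustrative, not Oak's actual code; only the property name {{update.limit}} and the 10'000 default come from the comment above):

```java
public class UpdateLimitExample {

    // Default cited in this comment: a branch is persisted after 10'000 changes.
    private static final int DEFAULT_UPDATE_LIMIT = 10000;

    public static int updateLimit() {
        // Integer.getInteger reads the -Dupdate.limit system property,
        // falling back to the supplied default when it is not set.
        return Integer.getInteger("update.limit", DEFAULT_UPDATE_LIMIT);
    }

    public static void main(String[] args) {
        System.out.println("update.limit = " + updateLimit());
    }
}
```

Running the JVM with {{-Dupdate.limit=5000}} would then lower the threshold to 5'000 changes.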

bq. Do you have a mechanism already in place to reproduce issues due to large 
data set ?

We have LargeOperationIT in oak-jcr, but a standalone test class or a new test 
in oak-run is probably easier.

> Swap large commits to disk in order to avoid OOME
> -------------------------------------------------
>
>                 Key: OAK-2989
>                 URL: https://issues.apache.org/jira/browse/OAK-2989
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 1.2.2
>            Reporter: Timothee Maret
>             Fix For: 1.3.2
>
>
> As described in [0], large commits consume a fair amount of memory. With very 
> large commits this becomes problematic, as a commit may consume 100GB or more 
> of heap, causing an OOME and aborting the commit.
> Instead of keeping the whole commit in memory, the implementation could store 
> parts of it on disk once heap consumption reaches a configurable threshold.
> This would solve the issue rather than merely mitigate it, as was done in 
> OAK-2968 and OAK-2969.
> The behaviour may already be supported for some Oak configurations. At least 
> the Mongo + DocumentStore setup did not seem to support it.
> [0] http://permalink.gmane.org/gmane.comp.apache.jackrabbit.oak.devel/8196
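The spill-to-disk idea proposed above can be sketched roughly as follows. This is a hypothetical illustration, not Oak's implementation: the class name, the per-change string representation, and the count-based threshold (standing in for the heap-memory threshold the description proposes) are all assumptions.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: keep changes on the heap up to a threshold, then
// spill any further changes to a temporary file instead of growing the heap.
public class SpillingChangeBuffer {

    private final int threshold;                  // max changes kept in memory
    private final List<String> inMemory = new ArrayList<>();
    private BufferedWriter spillWriter;           // lazily created on first spill
    private long spilled;                         // changes written to disk

    public SpillingChangeBuffer(int threshold) {
        this.threshold = threshold;
    }

    public void add(String change) throws IOException {
        if (inMemory.size() < threshold) {
            inMemory.add(change);
            return;
        }
        if (spillWriter == null) {
            // First change over the threshold: open a temp file to spill into.
            Path spillFile = Files.createTempFile("commit-spill", ".log");
            spillFile.toFile().deleteOnExit();
            spillWriter = Files.newBufferedWriter(spillFile);
        }
        spillWriter.write(change);
        spillWriter.newLine();
        spilled++;
    }

    public int memoryCount() {
        return inMemory.size();
    }

    public long spilledCount() {
        return spilled;
    }
}
```

A real implementation would additionally need to read the spilled changes back in commit order when applying the commit, and would track heap bytes rather than a change count.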



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
