Hi,
Based on the page fault behaviour, I think the areas mapped and reported by
pmap are being actively accessed by the JVM. During an oak-run offline
compaction, the number of page faults for Oak 1.4.11 is well over 2x the
number for Oak 1.0.29, on the same VM with the same repository in the same
state. The tar files are not identical, but a single copy of them is 32GB
in both instances; as mentioned before, 1.4.11 maps 64GB.

I don't know if it's the behaviour seen in OAK-4274. I have seen similar
behaviour in the past. I was not confident that a GC cycle unmapped the
regions, but it would be logical.
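For anyone who wants to reproduce the observation, this is roughly how the
duplicate mappings can be counted from pmap output. The snippet below runs
against a fabricated pmap-style sample for illustration; on a live system,
replace it with `pmap -x <pid>` for the actual oak-run process (the PID and
the sample addresses here are not from the real run):

```shell
# Fabricated pmap-style output standing in for: pmap -x <oak-run-pid>
# Columns: address, size (KB), mode, mapped file.
sample='00007f0000000000  262144 r--s- data00000a.tar
00007f0010000000  262144 r--s- data00000a.tar
00007f0020000000  262144 r--s- data00001a.tar'

# Count how many times each .tar file appears in the mappings.
counts=$(printf '%s\n' "$sample" \
  | awk '/\.tar$/ { c[$NF]++ } END { for (f in c) print f, c[f] }' \
  | sort)
printf '%s\n' "$counts"
```

With the sample above this prints `data00000a.tar 2` and `data00001a.tar 1`;
any file with a count above 1 is mapped more than once.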
Best Regards
Ian

On 23 March 2017 at 09:07, Francesco Mari <[email protected]> wrote:

> You might be hitting OAK-4274, which I discovered quite some time ago.
> I'm not aware of a way to resolve this issue at the moment.
>
> 2017-03-22 16:47 GMT+01:00 Alex Parvulescu <[email protected]>:
> > Hi,
> >
> > To give more background this came about during an investigation into a
> slow
> > offline compaction but it may affect any running FileStore as well (to be
> > verified).
> > I don't think it's related to oak-run itself, but more with the way we
> map
> > files, and so far it looks like a bug (there is no reasonable explanation
> > for mapping each tar file twice).
> >
> > Took a quick look at the TarReader but there are not many changes in this
> > area 1.0 vs. 1.4 branches.
> > If no one has better ideas, I'll create an oak issue and investigate
> this a
> > bit further.
> >
> > thanks,
> > alex
> >
> >
> > On Wed, Mar 22, 2017 at 4:28 PM, Ian Boston <[email protected]> wrote:
> >
> >> Hi,
> >> I am looking at oak-run and I see 2x the mapped memory between 1.0.29
> and
> >> 1.4.10. It looks like in 1.0.29 each segment file is mapped into memory
> >> once, but in 1.4.10 it's mapped into memory twice.
> >>
> >> Is this expected ?
> >>
> >> It's not great for page faults.
> >> Best Regards
> >> Ian
> >>
>