[ https://issues.apache.org/jira/browse/OAK-4966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629253#comment-15629253 ]

Alex Parvulescu commented on OAK-4966:
--------------------------------------

In an interesting turn of events, it seems that setting the global usage 
threshold on a memory pool affects _all_ JMX clients of that pool. While 
testing a patch, I accidentally triggered a heap dump from 
{{org.apache.felix.webconsole.plugins.memoryusage}}:
bq. *WARN* [Service Thread] org.apache.felix.webconsole.plugins.memoryusage Received Memory Threshold Exceeded Notification, dumping Heap
There is some overlap in features here, with both components competing for 
the same threshold setting. It could be argued that the {{memoryusage}} 
plugin should check whether a notification actually crosses its own locally 
defined threshold instead of blindly dumping the heap on every event it 
receives [0]. That would allow registering another listener while keeping 
this one effectively disabled (or we could find a trick to let multiple 
listeners share the same pool, using the smallest of all their thresholds as 
common ground).
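
For reference, a minimal sketch of the mechanism using plain 
{{java.lang.management}} (nothing Oak-specific; the 75% figure is made up): 
each pool carries a single usage threshold, and threshold-exceeded 
notifications fan out to every listener registered on the {{MemoryMXBean}}, 
so whoever calls {{setUsageThreshold}} last wins for everyone.

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.NotificationEmitter;

public class ThresholdSketch {
    public static void main(String[] args) {
        // One usage threshold per pool: this call silently overwrites
        // whatever threshold another component (e.g. the webconsole
        // memoryusage plugin) may have set on the same pool.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP
                    && pool.isUsageThresholdSupported()
                    && pool.getUsage().getMax() > 0) {
                pool.setUsageThreshold(pool.getUsage().getMax() / 4 * 3);
            }
        }

        // Notifications are broadcast to every listener registered on
        // the MemoryMXBean, regardless of who configured the threshold.
        NotificationEmitter emitter =
                (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener((notification, handback) -> {
            if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED
                    .equals(notification.getType())) {
                System.out.println("threshold exceeded: "
                        + notification.getMessage());
            }
        }, null, null);
    }
}
{code}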

Our single purpose is to cancel compaction; I'm not sure dumping the entire 
heap is what we want here. I see two options:
* reverting to a polling approach (sketched below)?
* fixing the {{memoryusage}} plugin to allow multiple listeners.
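
For the first option, a rough sketch of what polling could look like (the 
one-second interval and the {{requiredBytes}} figure are placeholders, and 
the returned supplier is a stand-in for whatever cancellation hook compaction 
ends up using):

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.BooleanSupplier;

public class HeapPollingSketch {

    /**
     * Starts a background check of available heap; the returned supplier
     * flips to false once headroom drops below requiredBytes, which a
     * compaction loop could poll in order to cancel itself.
     * Assumes -Xmx is set, so heap.getMax() is defined.
     */
    public static BooleanSupplier startPolling(long requiredBytes) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        AtomicBoolean enoughHeap = new AtomicBoolean(true);
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            long available = heap.getMax() - heap.getUsed();
            enoughHeap.set(available >= requiredBytes);
        }, 0, 1, TimeUnit.SECONDS);
        return enoughHeap::get;
    }
}
{code}

This sidesteps the shared-threshold problem entirely, at the cost of a timing 
window between polls.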

[~mduerig], [~chetanm], thoughts?

[0] https://github.com/apache/felix/blob/trunk/webconsole-plugins/memoryusage/src/main/java/org/apache/felix/webconsole/plugins/memoryusage/internal/MemoryUsageSupport.java#L553

> Re-introduce a blocker for compaction based on available heap
> -------------------------------------------------------------
>
>                 Key: OAK-4966
>                 URL: https://issues.apache.org/jira/browse/OAK-4966
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: segment-tar
>            Reporter: Alex Parvulescu
>            Assignee: Alex Parvulescu
>             Fix For: 1.6, 1.5.13
>
>         Attachments: OAK-4966.patch
>
>
> As seen in a local test, running compaction on a tight heap can lead to 
> OOMEs. There used to be a best-effort barrier against this 'not enough heap 
> for compaction' situation, but it was removed along with the compaction maps.
> I think it makes sense to reintroduce it, based on the max size of some of 
> the caches: the segment cache ({{256MB}} by default [0]), some writer caches 
> (which can go up to {{2GB}} combined [1]), and probably others I missed.
> [0] https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/SegmentCache.java#L48
> [1] https://github.com/apache/jackrabbit-oak/blob/trunk/oak-segment-tar/src/main/java/org/apache/jackrabbit/oak/segment/WriterCacheManager.java#L50
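
For the record, a bare-bones sketch of the kind of guard the description asks 
for, using the default cache sizes quoted above; both figures are configurable 
defaults, so treat them as assumptions rather than hard numbers:

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class CompactionHeapCheck {
    // Defaults quoted in the description: a 256MB segment cache [0]
    // plus up to 2GB of writer caches combined [1].
    private static final long SEGMENT_CACHE_BYTES = 256L * 1024 * 1024;
    private static final long WRITER_CACHES_BYTES = 2048L * 1024 * 1024;

    // Returns true when the heap headroom covers the combined cache
    // budget, i.e. it should be safe to start compaction.
    static boolean enoughHeapForCompaction() {
        MemoryUsage heap =
                ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        long available = heap.getMax() - heap.getUsed();
        return available >= SEGMENT_CACHE_BYTES + WRITER_CACHES_BYTES;
    }
}
{code}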


