[
https://jira.codehaus.org/browse/MRM-1785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=335633#comment-335633
]
Alix Lourme commented on MRM-1785:
----------------------------------
Hi,
I took a new dump today after a full GC (1.5 GB file); Eclipse MAT reports one
leak suspect (equivalent to suspect n°2 below):
{code}
37 188 instances of "org.apache.jackrabbit.core.XASessionImpl", loaded by
"org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6f53dd5f8" occupy 529 004 456
(70,16%) bytes. These instances are referenced from one instance of
"java.util.HashMap$Entry[]", loaded by "<system class loader>"
Keywords
java.util.HashMap$Entry[]
org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6f53dd5f8
org.apache.jackrabbit.core.XASessionImpl
{code}
+Supposition+ : the Jackrabbit repository grows with user interactions (search,
etc.) and fails to clean up some cached items?
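If that supposition holds, the usual culprit is a JCR session opened per request but never logged out. A minimal, self-contained sketch of the pattern, using a hypothetical {{FakeSession}} stand-in for {{javax.jcr.Session}} (with the real API the fix has the same shape: {{Repository.login()}} ... finally {{Session.logout()}}):

```java
// Hypothetical stand-in for javax.jcr.Session; not Archiva/Jackrabbit code.
class FakeSession implements AutoCloseable {
    static int openSessions = 0;            // sessions never released

    FakeSession() { openSessions++; }       // analogous to Repository.login()

    @Override
    public void close() { openSessions--; } // analogous to Session.logout()
}

public class SessionLeakSketch {
    // Leaky pattern suspected here: a session is opened per user
    // interaction (search, browse, ...) and never logged out.
    static void leakyRequests(int n) {
        for (int i = 0; i < n; i++) {
            FakeSession s = new FakeSession();
            // ... do work with s ...
            // missing s.close(): the session stays strongly reachable
        }
    }

    // Safe pattern: try-with-resources guarantees the logout.
    static void safeRequests(int n) {
        for (int i = 0; i < n; i++) {
            try (FakeSession s = new FakeSession()) {
                // ... do work with s ...
            }
        }
    }

    public static void main(String[] args) {
        leakyRequests(3);
        System.out.println("open after leaky: " + FakeSession.openSessions);
        FakeSession.openSessions = 0;
        safeRequests(3);
        System.out.println("open after safe: " + FakeSession.openSessions);
    }
}
```

With 37 188 retained {{XASessionImpl}} instances, even a small per-request leak of this shape would accumulate quickly at ~430000 requests per day.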
{quote}
did you try a 2.0.0-SNAPSHOT?
{quote}
If that version changes or improves the internal mechanisms, we can consider
this issue "to be tested" against 2.0.0 and wait for the future 2.0.0 release
to be deployed in our production environment.
@Olivier : Do you want any further details from the dump (which could explain
this behaviour) ... or might the next version help with this problem? Thanks in
advance.
> Little memory leak detected
> ---------------------------
>
> Key: MRM-1785
> URL: https://jira.codehaus.org/browse/MRM-1785
> Project: Archiva
> Issue Type: Bug
> Components: Problem Reporting
> Affects Versions: 1.4-M4
> Environment: Linux SLES 11 86_64
> Reporter: Alix Lourme
> Priority: Critical
> Attachments: 20131105-091005-dump.png,
> 20131105-091005-ProblemSuspect-1-1-description.png,
> 20131105-091005-ProblemSuspect-1-2-ShortestPaths.png,
> 20131105-091005-ProblemSuspect-1-3-AccumulatedObjects.png,
> 20131105-091005-ProblemSuspect-1-4-AccumulatedObjectsByClass.png,
> 20131105-091005-ProblemSuspect-2-1-description.png,
> 20131105-091005-ProblemSuspect-2-2-CommonPathToTheAccumulationPoint.png,
> GC-HeapUsage-AfterProblem.png, GC-HeapUsage-BeforeProblem.png,
> GC-HeapUsage-OneWeek.png, GC-InvocationCountOneWeek.png
>
>
> Perhaps a duplicate of MRM-1741 (but no activity for 6 months => opened a
> new issue)
> ----
> We are using Archiva 1.4-M4 at our company, and we have found a memory usage
> problem in this version.
> It does not seem to be a "big problem": there is _not directly_ an
> OutOfMemoryError, but the Jetty server becomes very slow before that point.
> Example symptom from our CI platform:
> {quote}
> Server returned HTTP response code: 502 for URL:
> http://[url]/repository/[repoName]/[grouId]/[artifactId]/[version]/maven-metadata.xml
> {quote}
> ----
> +Information about volumes+ :
> Requests per day (wc -l request-XXX.log) : *430000* (average)
> Company repositories (_Total File Count_ from Stats in Repositories menu) :
> * company-releases : 574771
> * company-snapshots : 118905
> * proxied-releases : 232626
> * proxied-snapshots : 2136
> * extra-libs : 9232
> * commercial-libs : 4587
> * *Total : 950000*
> +Note+ : The "Skip Packed Index creation" option is enabled on each repository.
> ----
> +Analysis+ :
> Heap usage grows over the week:
> !GC-HeapUsage-OneWeek.png!
> GC invocations increase when memory runs short:
> !GC-InvocationCountOneWeek.png!
> Before the problem, we can see the impact (memory is hard to reclaim):
> !GC-HeapUsage-BeforeProblem.png!
> After an application restart, typical usage is less than 2 GB:
> !GC-HeapUsage-AfterProblem.png!
> => So the supposition is a small memory leak.
> ----
> A solution could be reducing cache lifetimes or using
> SoftReference/WeakReference.
> I have no more information about the problem today; the restart was urgent,
> and analyzing a 4 GB heap dump is a little difficult.
> I will take heap dumps next week to provide more detail about memory usage.
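The SoftReference idea proposed in the description can be sketched as below. This is a minimal, hypothetical {{SoftCache}} (not Archiva's actual cache): values are held only via {{java.lang.ref.SoftReference}}, so the JVM may reclaim them under memory pressure instead of letting the heap fill up until the server crawls.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a softly-referenced cache; names are illustrative.
public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    // Returns null both on a miss and after the referent was collected;
    // a stale entry is dropped so the map itself cannot grow unbounded.
    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        if (ref == null) {
            return null;
        }
        V value = ref.get();
        if (value == null) {
            map.remove(key); // referent reclaimed under memory pressure
        }
        return value;
    }

    public int size() {
        return map.size();
    }

    public static void main(String[] args) {
        SoftCache<String, String> cache = new SoftCache<>();
        cache.put("session-1", "metadata");
        System.out.println(cache.get("session-1"));
    }
}
```

By contrast, entries in a plain {{HashMap}} (like the {{HashMap$Entry[]}} holding the {{XASessionImpl}} instances in the MAT report) stay strongly reachable until explicitly removed.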
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira