[ http://issues.apache.org/jira/browse/JAMES-592?page=comments#action_12433750 ]

Noel J. Bergman commented on JAMES-592:
---------------------------------------

As per comments on mailing list:

Unfortunately, I can once again confirm (this time using the new memstat
check in RemoteManager) that we are losing about 2MB per day.  And it does
seem related to elapsed time, not to the amount of mail; the growth from
114MB to 130MB over roughly ten days in the original report works out to a
similar daily rate.

I wonder about the memory consumed by those Phoenix artifacts that showed up in 
the prior logs.  Considering that both Stefano and Norman run with larger heaps 
than I do, and that at least Stefano rotates logs relatively infrequently 
compared to my daily rotation, it is entirely possible that they would not see 
such leaks for extended periods of time.  Also, load testing wouldn't help in 
this case, since it seems to be independent of load.

I am still prepared to release, but we will have to document this issue.

To check for this in your environment, run memstat -gc (new in 2.3.0 RC3)
in the RemoteManager once a day, and record the available heap after the
garbage collection; a sketch of the kind of measurement involved appears
below.  I have done this for the past 3 days, and see ~2MB less memory
available each day.
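
For reference, here is a minimal Java sketch of the kind of measurement such
a check performs; memstat -gc is assumed to report something like the
Runtime free/total heap figures after requesting a collection.  Note that
these figures are only meaningful when taken from inside the James JVM
itself, which is what the RemoteManager command does; the HeapCheck class
name and the output format below are illustrative only, not part of James.

    import java.text.SimpleDateFormat;
    import java.util.Date;

    // HeapCheck: prints post-GC heap availability once, in the spirit of
    // what "memstat -gc" is assumed to report.  A standalone run reports
    // this program's own heap; the point of memstat is that it runs
    // inside the James JVM.
    public class HeapCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            rt.gc();  // request a full collection, as the -gc flag does
            long total = rt.totalMemory();
            long free  = rt.freeMemory();
            String stamp =
                new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date());
            System.out.println(stamp
                + " free=" + (free / 1024) + "K"
                + " used=" + ((total - free) / 1024) + "K"
                + " total=" + (total / 1024) + "K");
        }
    }

If the post-GC free heap drops by a steady amount per day regardless of
traffic, that is the leak signature described above.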

> James leaks memory slowly
> -------------------------
>
>                 Key: JAMES-592
>                 URL: http://issues.apache.org/jira/browse/JAMES-592
>             Project: James
>          Issue Type: Bug
>    Affects Versions: 2.3.0rc3, 2.3.0rc2
>            Reporter: Norman Maurer
>         Assigned To: Noel J. Bergman
>            Priority: Blocker
>
> Noel wrote on list:
> I do not know where in the application it is happening, but after running
> JAMES non-stop since Fri Aug 11 03:29:57 EDT 2006, this morning the JVM
> started to throw OutOfMemoryError exceptions, such as:
> 21/08/06 08:39:47 WARN  mailstore: Exception retrieving mail:
> java.lang.RuntimeException: Exception caught while retrieving an object,
> cause: java.lang.OutOfMemoryError, so we're deleting it.
> That did not recover, so it wasn't just due to a transient large allocation
> (which I limit, anyway); there is definitely something leaking, albeit
> slowly.  Keep in mind that the store was one of the victims, but not
> necessarily the cause.
> The JVM process size had steadily grown from a somewhat stable 114MB to
> 130MB last night.  I did not look at it this morning before restarting the
> server.
>         --- Noel
