[
https://issues.apache.org/jira/browse/ARTEMIS-3200?focusedWorklogId=595365&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-595365
]
ASF GitHub Bot logged work on ARTEMIS-3200:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 12/May/21 14:24
Start Date: 12/May/21 14:24
Worklog Time Spent: 10m
Work Description: gtully commented on pull request #3568:
URL: https://github.com/apache/activemq-artemis/pull/3568#issuecomment-839815064
the closeables go away when the connection closes; the issue/leak appears when
there are repeated producer open/close cycles on the same connection. It may
need more tests that verify error conditions; I don't think those exist at the
moment, there was one covering abort.
I will write some more tests and follow up with the commit.
At first look it seemed the closeable was just a catch-all, and it hid the
fact that close was not always called. I think close should be called, because
we already track the producer senders and we need to remove those in any event.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 595365)
Time Spent: 1h 50m (was: 1h 40m)
> ProtonAbstractReceiver memory leak
> ----------------------------------
>
> Key: ARTEMIS-3200
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3200
> Project: ActiveMQ Artemis
> Issue Type: Bug
> Components: AMQP
> Affects Versions: 2.17.0
> Environment:
> We were testing the new version of artemis, 2.16 and 2.17.
> We are running spring integration and are using sync and async message flows
> which make use of artemis. With our async message flow we encountered no
> problems, but with the sync message flows we noticed the issues stated above.
> Reporter: Bas
> Assignee: Gary Tully
> Priority: Blocker
> Time Spent: 1h 50m
> Remaining Estimate: 0h
>
> Hi,
> Thanks for a wonderful broker.
> We were testing the new versions of Artemis, 2.16 and 2.17, and we noticed
> that memory usage goes up during our performance tests until eventually the
> garbage collector can no longer free enough memory to keep running.
> We are running Spring Integration and use both sync and async message flows
> on top of Artemis. With our async message flow we encountered no problems,
> but with the sync message flows we noticed the issues stated above.
> Using MAT, we tracked the issue down to a lambda in the class
> [https://github.com/apache/activemq-artemis/blob/master/artemis-protocols/artemis-amqp-protocol/src/main/java/org/apache/activemq/artemis/protocol/amqp/proton/ProtonAbstractReceiver.java]
> A large-message cleanup action is registered through the method addClosable
> on line 92. This lambda is not tracked for life-cycle management and is never
> de-registered. It seems we create a lot of these ProtonAbstractReceivers
> through our way of messaging, and they pile up in the session; memory is only
> freed when we close the session. That we create such a large number of
> ProtonAbstractReceivers might itself be incorrect, but I doubt that, because
> we see no issues in our async flows. I'm not sure what these
> ProtonAbstractReceivers are used for, or when they are created and when not.
> Maybe there is somewhere I could look, such as related code or documentation.
> I would like to write a unit test and see whether this can be fixed, but
> could someone give me a hint about when this ProtonAbstractReceiver is
> created, and whether there is already a test case resembling what I want to
> do, so I can use it as a quick start?
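The leak pattern described above can be sketched as a minimal, self-contained
model. The names here (Session, Receiver, addCloseable, removeCloseable,
pendingCloseables) are illustrative stand-ins, not the actual Artemis API: a
cleanup lambda is registered with the session when the receiver is created,
and unless it is de-registered on close, every open/close cycle leaks one
entry into the session's list for the lifetime of the session.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the reported leak (hypothetical names, not the
// real Artemis classes). The session holds every registered closeable
// until the session itself closes, unless it is explicitly removed.
class Session {
    interface Closeable { void close(boolean failed); }

    private final List<Closeable> closeables = new ArrayList<>();

    void addCloseable(Closeable c)    { closeables.add(c); }
    void removeCloseable(Closeable c) { closeables.remove(c); }
    int pendingCloseables()           { return closeables.size(); }
}

class Receiver {
    private final Session session;
    // Stands in for the large-message cleanup lambda from the report.
    private final Session.Closeable cleanup = failed -> { /* tidy state */ };

    Receiver(Session session) {
        this.session = session;
        session.addCloseable(cleanup); // registered on create
    }

    void close() {
        // The fix discussed in the PR comment: de-register on close.
        // Without this line, each open/close cycle leaks one entry.
        session.removeCloseable(cleanup);
    }
}

public class LeakDemo {
    public static void main(String[] args) {
        Session session = new Session();
        for (int i = 0; i < 10_000; i++) {
            new Receiver(session).close();
        }
        // With the de-registration in close() the list stays empty;
        // commenting it out leaves 10000 stale entries in the session.
        System.out.println(session.pendingCloseables());
    }
}
```

This mirrors why the async flows were unaffected: if receivers are long-lived,
the list barely grows, but repeated short-lived receivers on one session make
it grow without bound.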
--
This message was sent by Atlassian Jira
(v8.3.4#803005)