I found it: it's https://issues.apache.org/jira/browse/JOHNZON-396
It's fixed in version 2.0.1, but since Artemis shades the old
version 1.21, the bug is still present. It affects all responses from
the broker that contain a string larger than ~200kB. In our case that is
a list of around 4,400 addresses.

Is there a way to access the value in the response without using the
buggy JSON parser?
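
For reference, this is roughly what I have in mind (untested sketch; it
assumes the management reply stores the JSON result as a nullable
SimpleString in the message body, which appears to be what
ManagementHelper.getResult() reads before handing it to the shaded
Johnzon, and it uses Jackson as a stand-in parser):

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.apache.activemq.artemis.api.core.SimpleString;
    import org.apache.activemq.artemis.api.core.client.ClientMessage;

    final class RawManagementResult {
        private static final ObjectMapper MAPPER = new ObjectMapper();

        // Read the raw JSON text from the reply body and parse it with
        // Jackson instead of going through ManagementHelper.getResult().
        static JsonNode parse(ClientMessage reply) throws Exception {
            SimpleString raw = reply.getBodyBuffer().readNullableSimpleString();
            return raw == null ? null : MAPPER.readTree(raw.toString());
        }
    }

If the body really is stored that way, this would let us sidestep the
shaded parser without changing how we make the management calls.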

Thorsten

On Tuesday, 07.10.2025 at 15:17 -0500, Justin Bertram wrote:
> Are you doing anything else with JMS and/or Artemis-specific code in your
> application other than this snippet you pasted?
> 
> Unfortunately I don't have any hunches about what the problem might be.
> This code has been in place for years and this is the first time I can
> recall ever seeing any kind of issue with it - memory or otherwise. I was
> hoping to get some clues about the problem based on a more detailed
> description of your use-case. It would be helpful to understand everything
> your application is doing with JMS and any Artemis-specific implementation
> classes.
> 
> 
> Justin
> 
> On Tue, Oct 7, 2025 at 2:45 PM Thorsten Meinl <[email protected]>
> wrote:
> 
> > On Tuesday, 07.10.2025 at 11:12 -0500, Justin Bertram wrote:
> > > The image indicates there's accumulation related to JSON parsing which
> > > I wouldn't expect during the normal process of consuming a message.
> > > 
> > > Can you share any more details about your use-case?
> > Not sure what details you need, but we access the message body with
> > getBody(byte[].class) as well as some string properties. We also send
> > very similar messages from the same application (also with a byte[]
> > body) inside the same transaction. Afterwards the JMS context is
> > committed. And then it repeats.
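> > 
> > In simplified form one loop iteration looks roughly like this (just a
> > sketch; process() stands for our own logic and the property and
> > destination names are made up for illustration):
> > 
> >     private void handleOne(JMSContext context, Message message,
> >             Topic outTopic) throws JMSException {
> >         byte[] payload = message.getBody(byte[].class);
> >         String someProp = message.getStringProperty("someProperty");
> >         byte[] result = process(payload, someProp); // our own logic
> >         BytesMessage outgoing = context.createBytesMessage();
> >         outgoing.writeBytes(result);
> >         context.createProducer().send(outTopic, outgoing);
> >         context.commit();
> >     }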
> > 
> > > Do you have a reproducer?
> > Not yet, I can work on one.
> > 
> > >  Are the client and the broker in the same JVM?
> > No.
> > 
> > Do you have any suspicion where to look closer? I am happy to debug
> > myself if I get some pointers. This might be easier than stripping
> > everything down to a minimal reproducer.
> > 
> > 
> > Thanks,
> > 
> > Thorsten
> > 
> > 
> > > On Tue, Oct 7, 2025 at 10:53 AM Thorsten Meinl
> > > <[email protected]> wrote:
> > > 
> > > > Bummer. I hope this works:
> > > > 
> > > > 
> > > > https://drive.google.com/file/d/1U7wLYGDibiJi08_egHLnRxeEcDPSZgT9/view?usp=sharing
> > > > 
> > > > On Tuesday, 07.10.2025 at 10:47 -0500, Justin Bertram wrote:
> > > > > I believe your attachment was stripped by the mailing list. Could
> > > > > you provide a link to it?
> > > > > 
> > > > > 
> > > > > Justin
> > > > > 
> > > > > On Tue, Oct 7, 2025 at 10:38 AM Thorsten Meinl
> > > > > <[email protected]> wrote:
> > > > > 
> > > > > > Hi,
> > > > > > 
> > > > > > We are using the Artemis JMS client 2.42.0 in an application
> > > > > > with the following pattern:
> > > > > > 
> > > > > > try (var context =
> > > > > >          connectionFactory.createContext(Session.SESSION_TRANSACTED);
> > > > > >      var consumer = context.createSharedDurableConsumer(topic,
> > > > > >          queueName, QUEUE_FILTER)) {
> > > > > >     while (!Thread.currentThread().isInterrupted()) {
> > > > > >         var message = consumer.receive();
> > > > > >         if (message == null) {
> > > > > >             break;
> > > > > >         }
> > > > > >         // do some stuff
> > > > > >         context.commit();
> > > > > >     }
> > > > > > }
> > > > > > 
> > > > > > The service regularly runs out of memory after some time. We
> > > > > > created a heap dump and found some data structures deep in the
> > > > > > Artemis client of more than 500MB, while our messages are in
> > > > > > almost all cases below 1MB and in some exceptional cases up to
> > > > > > 20MB. I have attached a snippet of the heap dump.
> > > > > > Is this an issue in the client code or are we doing something
> > > > > > wrong in our application code?
> > > > > > 
> > > > > > Thanks,
> > > > > > 
> > > > > > Thorsten
> > > > > > 

-- 
Dr.-Ing. Thorsten Meinl
KNIME AG
Talacker 50
8001 Zurich, Switzerland


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
For further information, visit: https://activemq.apache.org/contact

