-Xmx defines Heap memory. Direct memory is not part of the Heap.
To expand a little more on the topic: by default, if you don't set any memory
parameters, the JVM uses 25% of the total memory assigned to the pod for the
Heap, and the same amount as the cap for Direct memory. As you can imagine,
this is fine because the remaining 50% can be used by native JVM memory,
libraries, and "OS stuff". The default configuration is the most stable one.
Now, if you set -Xmx to 50% or more but don't set -XX:MaxDirectMemorySize,
Heap plus Direct memory can expand beyond 100% of the pod memory, hence a
less stable configuration. So you should set -XX:MaxDirectMemorySize not as a
percentage of -Xmx, but so that -XX:MaxDirectMemorySize + -Xmx stays below
80-85% of the pod limit. Of course, that means some of that reserved memory
will always be "wasted" and free, but if you want the most stable
configuration, this is it.
What share of that 80-85% budget should go to the Heap and what share to
Direct memory is another question. That depends on your load type. Memory
metrics should show you that over time. Maybe you need 50/50 for both, or
maybe 60/40 or 70/30.
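As a concrete sketch of that arithmetic (the 9 GiB pod limit and the 70/30
split below are just illustrative assumptions, not recommendations):

```shell
# Hypothetical sizing for a 9 GiB pod limit: keep heap + direct at
# ~83% of the limit, here with a 70/30 heap/direct split.
POD_LIMIT_MB=9216                          # 9 GiB pod memory limit
BUDGET_MB=$((POD_LIMIT_MB * 83 / 100))     # heap + direct budget (~83%)
HEAP_MB=$((BUDGET_MB * 70 / 100))          # 70% of the budget for heap
DIRECT_MB=$((BUDGET_MB - HEAP_MB))         # the rest for direct memory
echo "-Xmx${HEAP_MB}m -XX:MaxDirectMemorySize=${DIRECT_MB}m"
# prints: -Xmx5354m -XX:MaxDirectMemorySize=2295m
```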
Regarding the Netty configuration, I'm not an expert on the topic, but from
the information you've provided you should be fine with the default
configuration. In any case, given the somewhat convoluted Netty
documentation, be sure to read the links to the source code I have provided.
Also, you could enable Netty debug logging to see how much memory it actually
detects (it is written in the debug log) and whether you need to change
anything. For more advanced Netty questions you should ask on their mailing
list.
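If it helps, one way to turn on that debug output is sketched below. It
assumes a standard Artemis instance layout with etc/log4j2.properties, and
the logger id `netty` is an arbitrary name I picked:

```shell
# Append a DEBUG-level logger for io.netty to the broker's log4j2
# config; on startup Netty then logs the limits it detected (for
# example the effective io.netty.maxDirectMemory value).
# The path assumes a standard Artemis instance layout.
cat >> "$ARTEMIS_INSTANCE/etc/log4j2.properties" <<'EOF'
logger.netty.name = io.netty
logger.netty.level = DEBUG
EOF
```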
--
Best Regards,
Vilius
-----Original Message-----
From: Shiv Kumar Dixit <[email protected]>
Sent: Monday, January 19, 2026 10:43 AM
To: [email protected]
Cc: Vilius Šumskas <[email protected]>
Subject: RE: K8s broker pod getting killed with OOM
Hello Vilius,
Thanks for your input.
1. When we set the pod limit, it is much lower than the node's memory
capacity, so there is no issue of over-allocation resulting in OOM.
2. Do we need to set both -XX:MaxDirectMemorySize and
-Dio.netty.maxDirectMemory, or will Netty derive -Dio.netty.maxDirectMemory
from -XX:MaxDirectMemorySize?
3. We are using OpenJDK 17 with HotSpot.
4. We will enable the netty-allocator logging and see what data we can
extract from it. Thanks for the hint.
5. Based on your input that "for the most stable result you should set
-XX:MaxDirectMemorySize so that heap + direct memory size never goes above
80-85% of total memory", it appears that if we set -XX:MaxDirectMemorySize to
25% of -Xmx, we will be within the pod limit.
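A quick sanity check of that arithmetic, using the 6 GiB heap and 9 GiB pod
limit from the earlier mails:

```shell
# With -Xmx6g and MaxDirectMemorySize at 25% of the heap, heap +
# direct indeed stays at ~83% of a 9 GiB pod limit.
HEAP_MB=6144                               # -Xmx6g
DIRECT_MB=$((HEAP_MB / 4))                 # 25% of -Xmx = 1536 MB
POD_MB=9216                                # 9 GiB pod limit
PCT=$(( (HEAP_MB + DIRECT_MB) * 100 / POD_MB ))
echo "heap + direct = ${PCT}% of the pod limit"   # prints 83%
```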
Best Regards
Shiv
-----Original Message-----
From: Vilius Šumskas via users <[email protected]>
Sent: 16 January 2026 06:36 AM
To: [email protected]
Cc: Vilius Šumskas <[email protected]>
Subject: RE: K8s broker pod getting killed with OOM
That's a little bit off-topic, but would you consider replacing -Xmx with
-XX:MaxRAMPercentage in the default configuration for 3.0 too? This would
universally improve the default JVM memory settings under various scenarios,
without the need to change them every time more physical memory is added. And
if yes, should I create a ticket for this?
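For illustration, such a default might look like the fragment below. The 50%
values are just an example, and note that direct memory would still need an
absolute cap, since there is no percentage-based flag for it:

```shell
# Hypothetical JAVA_ARGS fragment for artemis.profile: on JDK 10+,
# -XX:MaxRAMPercentage sizes the heap as a percentage of the
# container's (cgroup) memory limit, so it scales automatically when
# the pod limit changes instead of needing a new fixed -Xmx.
JAVA_ARGS="-XX:InitialRAMPercentage=50.0 -XX:MaxRAMPercentage=50.0"
echo "$JAVA_ARGS"
```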
--
Vilius
-----Original Message-----
From: Clebert Suconic <[email protected]>
Sent: Thursday, January 15, 2026 12:36 AM
To: [email protected]
Subject: Re: K8s broker pod getting killed with OOM
So, in summary, what I'm recommending is:
use max-size-bytes for all the queues: for your large queues use something
like 10MB, and for your small queues 100K.
Also keep max-read-page-bytes in use; keep it at 20M.
If I could change the past, I would have a max-size on every address we
deploy, and keep global-max-size for the utmost emergency case.
It's something I'm looking to change in Artemis 3.0 or 4.0. (I can't change
that in a minor version, as it could break certain cases, as some users that
I know use heavy filtering and can't really rely on paging.)
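As a sketch, reading the 10MB/100K figures as max-size-bytes values, the
per-address settings in broker.xml might look like this (the address match
patterns are made up for illustration):

```xml
<!-- illustrative broker.xml fragment; address matches are hypothetical -->
<address-settings>
   <!-- large, busy addresses: start paging after ~10 MB -->
   <address-setting match="big.orders.#">
      <max-size-bytes>10MB</max-size-bytes>
      <max-read-page-bytes>20M</max-read-page-bytes>
   </address-setting>
   <!-- small, rarely consumed addresses: page early at 100 KB -->
   <address-setting match="small.audit.#">
      <max-size-bytes>100K</max-size-bytes>
      <max-read-page-bytes>20M</max-read-page-bytes>
   </address-setting>
</address-settings>
```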
On Wed, Jan 14, 2026 at 5:31 PM Clebert Suconic <[email protected]>
wrote:
>
> I would recommend against trusting global-max-size; use max-size
> for all the addresses.
>
> Also, what are your read attributes? I would recommend using the
> new prefetch values.
>
>
>
> And also what operator are you using? arkmq? your own?
>
> On Wed, Jan 14, 2026 at 7:44 AM Shiv Kumar Dixit
> <[email protected]> wrote:
> >
> > We are hosting the Artemis broker in Kubernetes using an operator-based
> > solution. We deploy the broker as a StatefulSet with 2 or 4 replicas. We
> > assign, e.g., 6 GB for the heap and 9 GB for the pod, and 1.2 GB (1/5 of
> > max heap) for global-max-size. All addresses normally use -1 for
> > max-size-bytes, but some less frequently used queues are defined with
> > 100KB for max-size-bytes to allow early paging.
> >
> >
> >
> > We have following observations:
> >
> > 1. As the broker pod starts, the broker container immediately occupies 6
> > GB for the max heap. This seems expected, as both min and max heap are
> > the same.
> >
> > 2. Pod memory usage starts at 6+ GB, and once we have pending messages,
> > good producers and consumers connecting to the broker, invalid SSL
> > attempts, broker GUI access, etc. during normal broker operations, pod
> > memory usage keeps increasing and eventually reaches 9 GB.
> >
> > 3. Once the pod hits the limit of 9 GB, K8s kills it with an OOMKilling
> > event and restarts it. Here we don't see the broker container getting
> > killed with OOM; rather, the whole pod is killed and restarted, which
> > forces the broker to restart.
> >
> > 4. We have configured artemis.profile to capture a memory dump in case
> > of a broker OOM, but it never happens. So we are assuming the broker
> > process is not running out of memory; rather, the pod is, due to
> > increased non-heap usage.
> >
> > 5. The only way to recover here is to increase the heap and pod memory
> > limits from 6 GB and 9 GB to higher values and wait for the next
> > occurrence.
> >
> >
> >
> > 1. Is there any way to analyse what is going wrong with the non-heap
> > native memory usage?
> >
> > 2. Is non-heap native memory expected to increase to such an extent due
> > to pending messages, SSL errors, etc.?
> >
> > 3. Is there any parameter we can use to restrict the non-heap native
> > memory usage?
> >
> > 4. Can Netty, which handles the connection aspects of the broker, create
> > such memory consumption and cause the pod OOM?
> >
> > 5. Is there any monitoring parameter that can hint that the pod is in
> > danger of getting killed?
> >
> >
> >
> > Thanks
> >
> > Shiv
>
>
>
> --
> Clebert Suconic
--
Clebert Suconic
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
---------------------------------------------------------------------