Eric,

It is important to note that Java process memory usage consists of several
components beyond the maximum heap size. Kubernetes limits apply to total
process memory, so it is essential to account for all of these components
when specifying Kubernetes memory limits.

The following article has a helpful summary of Java Virtual Machine memory
usage:

https://developers.redhat.com/articles/2021/09/09/how-jvm-uses-and-allocates-memory

The formula for memory usage is helpful:

> JVM memory = Heap memory + Metaspace + CodeCache + (ThreadStackSize *
> Number of Threads) + DirectByteBuffers + Jvm-native

Based on that formula, two key factors are the total number of threads and
the size of direct byte buffers.
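
As a rough illustration of the thread stack term, the default thread stack
size on 64-bit JVMs is typically 1 MB, so a few hundred threads can add a
few hundred megabytes outside the heap. The thread count here is only an
assumed example, not a measurement from your environment:

    500 threads * 1 MB default stack size = ~500 MB of non-heap memory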

Specifically, it is important to highlight that the default direct byte
buffer limit is equal to the maximum heap size. This means that unless
direct memory is capped with a custom JVM argument in bootstrap.conf, as
shown in the example below, the maximum heap size should be set to less
than half of the Kubernetes memory limit.
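
For example, direct memory can be capped independently of the heap by adding
a JVM argument in bootstrap.conf. The argument index and the 1g value below
are illustrative only; use any unused java.arg number and size the cap for
your workload:

    # bootstrap.conf (illustrative values)
    java.arg.2=-Xms2g
    java.arg.3=-Xmx2g
    # Cap direct byte buffer allocation separately from the heap
    java.arg.15=-XX:MaxDirectMemorySize=1g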

All of these factors vary based on the particular Processors configured and
the specific configuration parameters selected. Some Processors use direct
byte buffers and others do not, depending on the supporting libraries used.

That's a longer way of saying that Joe's suggestion of 2 GB for the maximum
heap size should be a safe starting point, with 8 GB as the Kubernetes
memory limit, to avoid OOM-killed pods.
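
As a rough sanity check, here is an illustrative budget using the formula
above. All of the non-heap values are assumptions for a typical instance,
not measurements from your environment:

    Heap                  2 GB    (-Xmx2g)
    Direct byte buffers   2 GB    (default cap equals max heap unless overridden)
    Metaspace             ~256 MB
    Code cache            ~240 MB
    Thread stacks         ~500 MB (per the thread example above)
    JVM native / other    ~512 MB
    Total                 ~5.5 GB, which leaves headroom under an 8 GB limit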

Regards,
David Handermann

On Wed, Aug 16, 2023 at 6:08 PM Joe Witt <[email protected]> wrote:

> Eric
>
> Try using 2GB for the heap and see if that helps. I also believe there are
> specific pod settings you'll want to use to avoid it getting nuked by k8s.
>
> This blog may give you great things to consider
> https://home.robusta.dev/blog/kubernetes-memory-limit
>
> Thanks
>
> On Wed, Aug 16, 2023 at 3:29 PM Eric Secules <[email protected]> wrote:
>
>> My nifi.properties uses the filesystem repository implementations for
>> everything except the component status repository, so it's sticking to the
>> defaults. Could the VolatileComponentStatusRepository be what's eating
>> up my memory? It would fit well with the use pattern of the test framework
>> creating many processors and connections, but deleting a processor also
>> makes its component status unavailable, so I would assume that also
>> releases the associated memory.
>>
>> On Wed, Aug 16, 2023 at 2:35 PM Eric Secules <[email protected]> wrote:
>>
>>> Hi,
>>>
>>> I was wondering what the recommendation is for how much memory to start
>>> the NiFi JVM with, and how to set the Docker container memory limit as a
>>> function of the JVM memory setting.
>>>
>>> I have NiFi started with -Xms and -Xmx set to 4 GB and a Docker memory
>>> resource limit of 8 GB on the container for a small test environment. It
>>> gets killed by k8s for consuming too much memory after the tests run for
>>> a while. My workload is automated tests where each test puts a flow (avg 70
>>> processors) on the canvas, runs files through, and removes the flow when
>>> the test is done. About 4 of these tests run in parallel, and we are only
>>> testing flow functionality, so the flows only deal with 10s of flowfiles
>>> each.
>>>
>>> I am still on version 1.14.0 and I would like to know what other things
>>> in NiFi contribute to memory use besides flowfiles in flight. Also, is it
>>> normal to see my containers using much more memory than I allotted to
>>> the JVM?
>>>
>>> Thanks,
>>> Eric
>>>
>>
