Hi,

In our project we implemented an in-memory cache based on Apache Ignite 3.1,
running on Kubernetes. The Ignite cluster is configured with 3 replicas.
We are noticing that the memory consumption of each of the 3 pods increases
progressively until it reaches the configured limit, at which point the pod
is restarted.

For your reference, we are using the following configuration (a sketch of how
we create the zone and table follows the list):

  * StatefulSet memory limit: 2Gi
  * 3 replicas
  * JVM_MAX_MEM = JVM_MIN_MEM: 1Gi
  * In-memory table created with:
      * a zone with high availability
      * a profile using the "aimem" engine
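
To make the setup concrete, we create the zone and table roughly as in the
sketch below. It uses the Ignite 3 Java client; the address, zone name, table
name, columns, and the storage-profile name "in_memory" are simplified
placeholders, and the exact DDL keywords may differ slightly from what we
actually run, so please treat it as an approximation rather than our literal
script:

    import org.apache.ignite.client.IgniteClient;

    public class CreateCacheTable {
        public static void main(String[] args) throws Exception {
            // Placeholder address of one Ignite pod's client port.
            try (IgniteClient client = IgniteClient.builder()
                    .addresses("ignite-0.ignite:10800")
                    .build()) {

                // Zone with 3 replicas for high availability; the storage
                // profile "in_memory" is defined in the node configuration
                // and backed by the "aimem" engine.
                client.sql().execute(null,
                    "CREATE ZONE IF NOT EXISTS CACHE_ZONE "
                    + "WITH REPLICAS=3, STORAGE_PROFILES='in_memory'");

                // In-memory cache table placed in that zone.
                client.sql().execute(null,
                    "CREATE TABLE IF NOT EXISTS CACHE_ENTRIES ("
                    + "ID VARCHAR PRIMARY KEY, "
                    + "PAYLOAD VARCHAR, "
                    + "CREATED_AT TIMESTAMP) "
                    + "ZONE CACHE_ZONE");
            }
        }
    }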

We also created a CronJob that keeps the number of entries in the cache table
within a limit of 6000, but this has no effect on the memory stability of the
pods. In fact, we even tried clearing all entries from the cache table and did
not observe any memory release from the pods; on the contrary, consumption
continued to rise slowly.
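
For completeness, the cleanup job does roughly the equivalent of the sketch
below, again via the Ignite 3 Java client and with the same placeholder names;
the exact SQL we run may differ:

    import org.apache.ignite.client.IgniteClient;
    import org.apache.ignite.sql.ResultSet;
    import org.apache.ignite.sql.SqlRow;

    public class TrimCacheJob {
        private static final long MAX_ENTRIES = 6000;

        public static void main(String[] args) throws Exception {
            try (IgniteClient client = IgniteClient.builder()
                    .addresses("ignite-0.ignite:10800")
                    .build()) {

                // Count the current number of cached entries.
                long count;
                try (ResultSet<SqlRow> rs = client.sql()
                        .execute(null, "SELECT COUNT(*) FROM CACHE_ENTRIES")) {
                    count = rs.next().longValue(0);
                }

                // Drop the oldest rows until we are back under the limit.
                if (count > MAX_ENTRIES) {
                    long excess = count - MAX_ENTRIES;
                    client.sql().execute(null,
                        "DELETE FROM CACHE_ENTRIES WHERE ID IN ("
                        + "SELECT ID FROM CACHE_ENTRIES "
                        + "ORDER BY CREATED_AT ASC LIMIT " + excess + ")");
                }
            }
        }
    }
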
Is there any known memory issue that could explain this behavior? Or could 
there be some setting we missed?

Thanks in advance.
Kind regards,
Joel Ferreira
