Hello,

Ignite 3 uses the MVCC approach for transaction isolation, which means that
multiple versions of the data (including removed data) are stored for a
configured period of time, see [1]. I would suggest lowering the
"dataAvailabilityTimeMillis" property, see [2].

[1]
https://ignite.apache.org/docs/ignite3/latest/administrators-guide/storage/data-partitions#version-storage
[2]
https://ignite.apache.org/docs/ignite3/latest/administrators-guide/config/cluster-config#garbage-collection-configuration
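
For example, from the Ignite 3 CLI it would look roughly like this (the exact
property path below is my assumption based on the garbage collection section
in [2], so please verify it with "cluster config show" on your 3.1 cluster
first, and treat the one-minute value as just an illustration):

    cluster config show
    cluster config update ignite.gc.lowWatermark.dataAvailabilityTimeMillis=60000

Lowering this value means that outdated row versions are kept for a shorter
time before garbage collection reclaims them, so memory should be released
sooner after entries are updated or removed.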

On Mon, Jan 19, 2026 at 12:22 PM Joel Ferreira (Nokia) via user <
[email protected]> wrote:

> Hi,
>
> In our project, we implemented an in-memory cache, based on Apache Ignite
> 3.1, running on Kubernetes. The Ignite cluster is configured with 3
> replicas.
> We are noticing that the memory consumption of each of the 3 pods
> progressively increases until it reaches the configured limit, at which
> point the pod is restarted.
>
> For your reference, we are using the following configuration:
>
>    - StatefulSet memory limit: 2Gi
>    - 3 replicas
>    - JVM_MAX_MEM = JVM_MIN_MEM: 1Gi
>    - In-memory table created with:
>       - Zone with high availability
>       - Profile using the "aimem" engine
>
>
> We also created a CronJob that keeps the number of entries in the cache table
> below 6000, but this has had no effect on the memory stability of the pods.
> In fact, we even tried clearing all entries from the cache table and did
> not observe any memory release from the pods. On the contrary, consumption
> continued to rise slowly.
> Is there any known memory issue that could explain this behavior? Or could
> there be some setting we missed?
>
> Thanks in advance.
> Kind regards,
> Joel Ferreira
>


-- 
With regards,
Aleksandr Polovtsev
