[
https://issues.apache.org/jira/browse/IGNITE-20327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Mikhail Petrov updated IGNITE-20327:
------------------------------------
Description:
1. A CQ is registered through the thin client. Assume that we filter out all
events except cache entry expired events (a minimal registration sketch follows
the steps below).
2. A huge number of cache entries expire on the cluster, and the corresponding
CQ events are created on the node that holds the CQ listener.
3. Assume that the thin client connection is slow. Thus, all events destined for
the thin client are accumulated in the selector queue
GridSelectorNioSessionImpl#queue before they are sent. Note that all thin
client messages are stored in serialized form.
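For illustration only, a minimal sketch of step 1, assuming the Java thin client
continuous query API (ClientCache#query(ContinuousQuery, ClientDisconnectListener))
and a hypothetical cache name "myCache"; the exact overload, listener signature
and cache are assumptions, not taken from this ticket:
{code:java}
import javax.cache.configuration.FactoryBuilder;
import javax.cache.event.EventType;

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheEntryEventSerializableFilter;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ExpiredOnlyCqSketch {
    public static void main(String[] args) {
        try (IgniteClient client = Ignition.startClient(
            new ClientConfiguration().setAddresses("127.0.0.1:10800"))) {

            // Hypothetical cache used for illustration only.
            ClientCache<Integer, byte[]> cache = client.cache("myCache");

            ContinuousQuery<Integer, byte[]> qry = new ContinuousQuery<>();

            // Ask the server to deliver expiry events (see IGNITE-8714).
            qry.setIncludeExpired(true);

            // Remote filter: drop everything except EXPIRED events.
            qry.setRemoteFilterFactory(FactoryBuilder.factoryOf(
                (CacheEntryEventSerializableFilter<Integer, byte[]>)
                    evt -> evt.getEventType() == EventType.EXPIRED));

            // Local listener that receives the filtered events on the client side.
            qry.setLocalListener(evts ->
                evts.forEach(evt -> System.out.println("Expired key: " + evt.getKey())));

            // Assumed thin-client overload; the second argument reacts to disconnects.
            try (QueryCursor<?> cur = cache.query(qry, reason -> {})) {
                Thread.sleep(60_000); // keep the query alive for the demo
            }
        }
        catch (Exception e) {
            e.printStackTrace();
        }
    }
}
{code}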
There are two main problems:
1. Currently, EXPIRY and REMOVE CacheContinuousQueryEntry entries are
initialized with oldValue and newValue referencing the same object (this was
done to meet JCache requirements; see
https://issues.apache.org/jira/browse/IGNITE-8714).
During thin client CQ event serialization, we process both oldValue and
newValue independently. As a result, the same value is serialized twice, which
can significantly increase the amount of memory consumed by the
GridSelectorNioSessionImpl#queue.
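A self-contained sketch of this effect (plain Java, no Ignite APIs; the Entry
class and serialize method are simplified stand-ins for
CacheContinuousQueryEntry and the thin client notification marshalling): when
oldValue and newValue reference the same object but are written independently,
the payload is carried twice.
{code:java}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class DoubleSerializationSketch {
    /** Simplified stand-in for an EXPIRY/REMOVE CacheContinuousQueryEntry. */
    static final class Entry {
        final byte[] oldVal;
        final byte[] newVal;

        Entry(byte[] val) {
            // For EXPIRY/REMOVE events both fields reference the same object.
            this.oldVal = val;
            this.newVal = val;
        }
    }

    /** Writes oldVal and newVal independently, analogous to the behaviour described above. */
    static byte[] serialize(Entry e) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();

        try (DataOutputStream out = new DataOutputStream(bos)) {
            out.writeInt(e.oldVal.length);
            out.write(e.oldVal);           // first copy of the value
            out.writeInt(e.newVal.length);
            out.write(e.newVal);           // second copy of the same value
        }

        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] cacheValue = new byte[1024];            // 1 KiB entry value
        byte[] msg = serialize(new Entry(cacheValue));

        // Prints 2056 bytes: the 1 KiB value is carried twice plus two 4-byte length headers.
        System.out.println("Serialized message size: " + msg.length + " bytes");
    }
}
{code}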
2. Messages destined for thin clients are serialized using the POOLED
allocator. The problem is that the POOLED allocator allocates memory in powers
of two. As a result, if the serialized message is slightly larger than 2^n
bytes, nearly twice as much memory as needed is allocated to store it.
Taken together, each EXPIRY/REMOVE CQ event that is awaiting delivery to the
thin client side can consume roughly <expired/removed cache entry size> * 4 of
Java heap: the value is serialized twice, and the power-of-two allocation can
double that again.
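A rough sketch of the combined arithmetic under the assumptions above (the
power-of-two rounding is modelled explicitly here; the sizes are illustrative,
not measured):
{code:java}
public class HeapOverheadSketch {
    /** Rounds a requested buffer size up to the next power of two, as a power-of-two pooled allocator does. */
    static int roundUpToPowerOfTwo(int size) {
        int highest = Integer.highestOneBit(size);
        return size == highest ? size : highest << 1;
    }

    public static void main(String[] args) {
        int entrySize = 1024;                   // expired/removed cache entry value size, bytes

        // Problem 1: the same value is serialized as both oldValue and newValue.
        int serializedMsg = 2 * entrySize + 64; // + 64 bytes of illustrative key/header overhead

        // Problem 2: the pooled buffer is rounded up to the next power of two.
        int allocated = roundUpToPowerOfTwo(serializedMsg);

        System.out.println("Entry value size:        " + entrySize);     // 1024
        System.out.println("Serialized message size: " + serializedMsg); // 2112
        System.out.println("Pooled buffer size:      " + allocated);     // 4096, i.e. ~4x the entry size
    }
}
{code}
So a backlog of N pending EXPIRY/REMOVE events for a slow thin client can pin
roughly 4 * N * <entry size> bytes of heap until the client drains the queue.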
was:
1. A CQ is registered through the thin client. Assume that we filter out all
events except cache entry expired events.
2. A huge number of cache entries expire on the cluster, and the corresponding
CQ events are created on the node that holds the CQ listener.
3. Assume that the thin client connection is slow. Thus, all events destined for
the thin client are accumulated in the selector queue
GridSelectorNioSessionImpl#queue before they are sent. Note that all thin
client messages are stored in serialized form.
There are two main problems:
1. EXPIRY and REMOVE CacheContinuousQueryEntry entries initialize both
oldValue and newValue with the same object to meet JCache requirements - see
https://issues.apache.org/jira/browse/IGNITE-8714
During thin client CQ event serialization, we process both oldValue and
newValue independently. As a result, the same value is serialized twice, which
can significantly increase the amount of memory consumed by the
GridSelectorNioSessionImpl#queue.
2. Messages destined for thin clients are serialized using the POOLED
allocator. The problem is that the POOLED allocator allocates memory in powers
of two. As a result, if the serialized message is slightly larger than 2^n
bytes, nearly twice as much memory as needed is allocated to store it.
As a result, each EXPIRY/REMOVE CQ event that is awaiting delivery to the
thin client side can consume <expired/removed cache entry size> * 4 of Java
heap.
> [Thin clients] Continuous Query EXPIRY/REMOVE events can consume a huge
> amount of heap
> ----------------------------------------------------------------------------------------
>
> Key: IGNITE-20327
> URL: https://issues.apache.org/jira/browse/IGNITE-20327
> Project: Ignite
> Issue Type: Task
> Reporter: Mikhail Petrov
> Assignee: Mikhail Petrov
> Priority: Major
>
> 1. A CQ is registered through the thin client. Assume that we filter out all
> events except cache entry expired events.
> 2. A huge number of cache entries expire on the cluster, and the corresponding
> CQ events are created on the node that holds the CQ listener.
> 3. Assume that the thin client connection is slow. Thus, all events destined
> for the thin client are accumulated in the selector queue
> GridSelectorNioSessionImpl#queue before they are sent. Note that all thin
> client messages are stored in serialized form.
> There are two main problems:
> 1. Currently, EXPIRY and REMOVE CacheContinuousQueryEntry entries are
> initialized with oldValue and newValue referencing the same object (this was
> done to meet JCache requirements; see
> https://issues.apache.org/jira/browse/IGNITE-8714).
> During thin client CQ event serialization, we process both oldValue and
> newValue independently. As a result, the same value is serialized twice,
> which can significantly increase the amount of memory consumed by the
> GridSelectorNioSessionImpl#queue.
> 2. Messages destined for thin clients are serialized using the POOLED
> allocator. The problem is that the POOLED allocator allocates memory in powers
> of two. As a result, if the serialized message is slightly larger than 2^n
> bytes, nearly twice as much memory as needed is allocated to store it.
> Taken together, each EXPIRY/REMOVE CQ event that is awaiting delivery to the
> thin client side can consume roughly <expired/removed cache entry size> * 4 of
> Java heap: the value is serialized twice, and the power-of-two allocation can
> double that again.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)