[ 
https://issues.apache.org/jira/browse/IGNITE-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999427#comment-16999427
 ] 

Stefan Miklosovic commented on IGNITE-8775:
-------------------------------------------

[~irudyak] [~kotamrajuyashasvi] I checked this one and it seems to me that the 
problem does not exist "out of the box".

If no LoadBalancingPolicy is set on the DataSource, the Cassandra driver falls 
back to a default; the documentation of "withLoadBalancingPolicy" says:

{quote}Configures the load balancing policy to use for the new cluster.
If no load balancing policy is set through this method, 
Policies.defaultLoadBalancingPolicy() will be used instead
{quote}

Investigating further, the documentation of "Policies.defaultLoadBalancingPolicy()" says:

{quote}
The default load balancing policy is DCAwareRoundRobinPolicy with token 
awareness (so new TokenAwarePolicy(new DCAwareRoundRobinPolicy())).
Note that this policy shuffles the replicas when token awareness is used, see 
TokenAwarePolicy.TokenAwarePolicy(LoadBalancingPolicy, boolean) for an 
explanation of the tradeoffs
{quote}

If you check the implementation of that policy and its "init" method, there is 
no case where we add hosts more than once: a host is added only if it is not 
already present.

The problem does exist in the init of "RoundRobinPolicy", that is true. But the 
only place this policy is used is in tests.

Hence, if one uses their own RoundRobinPolicy (as seems to be the case here; I 
wonder why, since DCAwareRoundRobinPolicy should be preferred over the 
"primitive" RoundRobinPolicy whenever possible), it is up to the user to 
implement a policy which does not have this problem: in this particular case, 
by extending RoundRobinPolicy and overriding its init() method, along with all 
other methods that touch the parent's private fields (since those fields would 
have to be re-declared in the custom policy).
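To illustrate, here is a minimal, self-contained simulation of the leak and of the workaround sketched above. The classes below are hypothetical stand-ins, not the real driver classes: the driver's RoundRobinPolicy keeps liveHosts private (which is exactly why the fields would need re-declaring), so for brevity this sketch makes the field protected and reduces init() to the one relevant line.

```java
import java.util.Collection;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Stand-in for the driver's RoundRobinPolicy, reduced to the relevant part:
// init() unconditionally appends all hosts to liveHosts, so calling it once
// per session refresh accumulates duplicates.
class LeakyRoundRobinPolicy {
    // protected here for brevity; private in the real driver class
    protected final CopyOnWriteArrayList<String> liveHosts = new CopyOnWriteArrayList<>();

    public void init(Collection<String> hosts) {
        liveHosts.addAll(hosts); // duplicates pile up across refresh() calls
    }

    public int liveHostCount() {
        return liveHosts.size();
    }
}

// Sketch of the suggested workaround: a subclass that resets its state on
// every init(), so repeated refresh() calls cannot grow the list.
class NonLeakyRoundRobinPolicy extends LeakyRoundRobinPolicy {
    @Override
    public void init(Collection<String> hosts) {
        liveHosts.clear();       // drop hosts from the previous session
        liveHosts.addAll(hosts); // register the current hosts exactly once
    }
}

public class LeakDemo {
    public static void main(String[] args) {
        List<String> hosts = List.of("cas-1", "cas-2", "cas-3");

        LeakyRoundRobinPolicy leaky = new LeakyRoundRobinPolicy();
        NonLeakyRoundRobinPolicy fixed = new NonLeakyRoundRobinPolicy();

        // Simulate 1000 session refreshes, each re-initializing the policy.
        for (int i = 0; i < 1000; i++) {
            leaky.init(hosts);
            fixed.init(hosts);
        }

        System.out.println("leaky liveHosts: " + leaky.liveHostCount()); // 3000
        System.out.println("fixed liveHosts: " + fixed.liveHostCount()); // 3
    }
}
```

The same effect could also be achieved by simply constructing a fresh policy object for every Cluster build, which avoids subclassing entirely.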

> Memory leak in ignite-cassandra module while using RoundRobinPolicy 
> LoadBalancingPolicy
> ---------------------------------------------------------------------------------------
>
>                 Key: IGNITE-8775
>                 URL: https://issues.apache.org/jira/browse/IGNITE-8775
>             Project: Ignite
>          Issue Type: Bug
>          Components: cassandra
>            Reporter: Yashasvi Kotamraju
>            Assignee: Igor Rudyak
>            Priority: Major
>
> An OutOfMemoryError is observed when hitting the issue IGNITE-8354. Although 
> that issue is solved, preventing OOM by avoiding unnecessary refreshes of the 
> Cassandra session, there seems to be a memory leak in the ignite-cassandra 
> module when RoundRobinPolicy is used as the LoadBalancingPolicy while 
> refreshing the Cassandra session, which appears to be the root cause of the 
> OOM.
> In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl, 
>  when the refresh() method is invoked to handle exceptions, a new Cluster is 
>  built with the same LoadBalancingPolicy object. Since we are using 
>  RoundRobinPolicy, the same RoundRobinPolicy object is reused while building 
>  the Cluster whenever refresh() is invoked. RoundRobinPolicy holds a 
>  CopyOnWriteArrayList<Host> liveHosts, and whenever init(Cluster cluster, 
>  Collection<Host> hosts) is called on RoundRobinPolicy it calls 
>  liveHosts.addAll(hosts), adding the whole Host collection to liveHosts. 
>  So every time a Cluster is built during refresh(), the hosts are added 
>  again to the liveHosts of the same RoundRobinPolicy. The same hosts are 
>  thus added on every refresh(), and the list grows indefinitely after many 
>  refresh() calls, causing the OOM. In the heap dump taken after the OOM we 
>  indeed found a huge number of objects in the liveHosts of the 
>  RoundRobinPolicy object.
> Some possible solutions would be: 
>  1. Use a new LoadBalancingPolicy object when building a new Cluster during 
>  refresh(). 
>  2. Somehow clear the objects in liveHosts during refresh(). 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
