[jira] [Created] (IGNITE-8788) Getting NullPointerException during commit into cassandra, after reconnecting to ignite server
Yashasvi Kotamraju created IGNITE-8788: -- Summary: Getting NullPointerException during commit into cassandra, after reconnecting to ignite server Key: IGNITE-8788 URL: https://issues.apache.org/jira/browse/IGNITE-8788 Project: Ignite Issue Type: Bug Components: cassandra Reporter: Yashasvi Kotamraju Assignee: Igor Rudyak
When an Ignite client reconnects to a restarted Ignite server, a NullPointerException is observed on random runs while committing data into Cassandra.
caused by: java.lang.NullPointerException
at org.apache.ignite.cache.store.cassandra.persistence.PojoField.getValueFromObject(PojoField.java:167)
at org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.bindValues(PersistenceController.java:450)
at org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.bindKeyValue(PersistenceController.java:202)
at org.apache.ignite.cache.store.cassandra.session.transaction.WriteMutation.bindStatement(WriteMutation.java:58)
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:499)
After going through the source code, there is a suspicion that it's a Java serialization issue in the ignite-cassandra module. In org.apache.ignite.cache.store.cassandra.persistence.PojoField.java there is a PojoFieldAccessor instance variable that is declared transient, so it is not part of serialization; if a PojoField object is serialized and then deserialized, its PojoFieldAccessor would be null. The exception shows exactly that: a NullPointerException when getValue(..) is called on the null PojoFieldAccessor in the PojoField.getValueFromObject() method. So whenever a PojoField object is serialized and then deserialized, we might observe this issue.
A reproducer can be found at: http://apache-ignite-users.70518.x6.nabble.com/Getting-NullPointerException-during-commit-into-cassandra-after-reconnecting-to-ignite-server-td22005.html
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
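The transient-field behavior suspected above can be sketched in plain Java. The class and field names below are illustrative stand-ins, not the actual Ignite types:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Illustrative stand-in for a class like PojoField: the accessor field is
// transient, so Java serialization skips it entirely.
class FieldHolder implements Serializable {
    transient Runnable accessor = () -> { };  // skipped during serialization
    String name = "age";                      // serialized normally
}

public class TransientDemo {
    static byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        FieldHolder copy = (FieldHolder) deserialize(serialize(new FieldHolder()));
        // The non-transient field survives the round trip...
        System.out.println(copy.name);             // prints: age
        // ...but the transient accessor comes back null, so any call
        // through it would throw NullPointerException.
        System.out.println(copy.accessor == null); // prints: true
    }
}
```

If this is indeed the cause, one possible direction would be a readObject()/readResolve() hook that re-initializes the accessor from the serialized field metadata after deserialization.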
[jira] [Updated] (IGNITE-8775) Memory leak in ignite-cassandra module while using RoundRobinPolicy LoadBalancingPolicy
[ https://issues.apache.org/jira/browse/IGNITE-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yashasvi Kotamraju updated IGNITE-8775: --- Description:
An OutOfMemory exception is observed when the issue IGNITE-8354 is encountered. Though that issue is solved, preventing OOM by avoiding unnecessary Cassandra session refreshes, there seems to be a memory leak in the ignite-cassandra module when the RoundRobinPolicy LoadBalancingPolicy is used while refreshing the Cassandra session, which seems to be the root cause of the OOM.
In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.java, when the refresh() method is invoked to handle exceptions, a new Cluster is built with the same LoadBalancingPolicy object. We are using RoundRobinPolicy, so the same RoundRobinPolicy object is used while building the Cluster when refresh() is invoked. RoundRobinPolicy holds a CopyOnWriteArrayList liveHosts. Whenever init(Cluster cluster, Collection hosts) is called on RoundRobinPolicy, it calls liveHosts.addAll(hosts), adding the whole Host collection to liveHosts. Whenever a Cluster is built during refresh(), the Host collection is added again, during the init call, to the liveHosts of the same RoundRobinPolicy. Thus the same Hosts are added to liveHosts on every refresh(), and its size grows indefinitely after many refresh() calls, causing OOM. Even in the heap dump taken after the OOM we found a huge number of objects in liveHosts of the RoundRobinPolicy object.
Some possible solutions would be:
1. Use a new LoadBalancingPolicy object while building the new Cluster during refresh().
2. Somehow clear the objects in liveHosts during refresh().
was:
An OutOfMemory exception is observed when the issue IGNITE-8354 is encountered. Though that issue is solved, preventing OOM by avoiding unnecessary Cassandra session refreshes, there seems to be a memory leak in the ignite-cassandra module when the RoundRobinPolicy LoadBalancingPolicy is used while refreshing the Cassandra session, which seems to be the root cause of the OOM. In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.java, when the refresh() method is invoked to handle exceptions, a new Cluster is built with the same LoadBalancingPolicy object. We are using RoundRobinPolicy, so the same RoundRobinPolicy object is used while building the Cluster when refresh() is invoked. RoundRobinPolicy holds a CopyOnWriteArrayList liveHosts. Whenever init(Cluster cluster, Collection hosts) is called on RoundRobinPolicy, it calls liveHosts.addAll(hosts), adding the whole Host collection to liveHosts. Whenever a Cluster is built during refresh(), the Host collection is added again to the liveHosts of the same RoundRobinPolicy. Thus the same Hosts are added to liveHosts on every refresh(), and its size grows indefinitely after many refresh() calls, causing OOM. Even in the heap dump taken after the OOM we found a huge number of objects in liveHosts of the RoundRobinPolicy object.
Some possible solutions would be:
1. Use a new LoadBalancingPolicy object while building the new Cluster during refresh().
2. Somehow clear the objects in liveHosts during refresh().
> Memory leak in ignite-cassandra module while using RoundRobinPolicy
> LoadBalancingPolicy
> ---
>
> Key: IGNITE-8775
> URL: https://issues.apache.org/jira/browse/IGNITE-8775
> Project: Ignite
> Issue Type: Bug
> Components: cassandra
> Reporter: Yashasvi Kotamraju
> Assignee: Igor Rudyak
> Priority: Major
>
> An OutOfMemory exception is observed when the issue IGNITE-8354 is encountered. Though that issue is solved, preventing OOM by avoiding unnecessary Cassandra session refreshes, there seems to be a memory leak in the ignite-cassandra module when the RoundRobinPolicy LoadBalancingPolicy is used while refreshing the Cassandra session, which seems to be the root cause of the OOM.
> In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.java, when the refresh() method is invoked to handle exceptions, a new Cluster is built with the same LoadBalancingPolicy object. We are using RoundRobinPolicy, so the same RoundRobinPolicy object is used while building the Cluster when refresh() is invoked. RoundRobinPolicy holds a CopyOnWriteArrayList liveHosts. Whenever init(Cluster cluster, Collection hosts) is called on RoundRobinPolicy, it calls liveHosts.addAll(hosts), adding the whole Host collection to liveHosts.
> Whenever a Cluster is built during refresh(), the Host collection is added again, during the init call, to the liveHosts of the same RoundRobinPolicy. Thus the same Hosts are added to liveHosts on every refresh(), and its size grows indefinitely after many refresh() calls, causing OOM.
[jira] [Assigned] (IGNITE-8775) Memory leak in ignite-cassandra module while using RoundRobinPolicy LoadBalancingPolicy
[ https://issues.apache.org/jira/browse/IGNITE-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yashasvi Kotamraju reassigned IGNITE-8775: -- Assignee: Yashasvi Kotamraju
> Memory leak in ignite-cassandra module while using RoundRobinPolicy
> LoadBalancingPolicy
> ---
>
> Key: IGNITE-8775
> URL: https://issues.apache.org/jira/browse/IGNITE-8775
> Project: Ignite
> Issue Type: Bug
> Components: cassandra
> Reporter: Yashasvi Kotamraju
> Assignee: Yashasvi Kotamraju
> Priority: Major
>
> An OutOfMemory exception is observed when the issue IGNITE-8354 is encountered. Though that issue is solved, preventing OOM by avoiding unnecessary Cassandra session refreshes, there seems to be a memory leak in the ignite-cassandra module when the RoundRobinPolicy LoadBalancingPolicy is used while refreshing the Cassandra session, which seems to be the root cause of the OOM.
> In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.java, when the refresh() method is invoked to handle exceptions, a new Cluster is built with the same LoadBalancingPolicy object. We are using RoundRobinPolicy, so the same RoundRobinPolicy object is used while building the Cluster when refresh() is invoked. RoundRobinPolicy holds a CopyOnWriteArrayList liveHosts. Whenever init(Cluster cluster, Collection hosts) is called on RoundRobinPolicy, it calls liveHosts.addAll(hosts), adding the whole Host collection to liveHosts.
> Whenever a Cluster is built during refresh(), the Host collection is added again to the liveHosts of the same RoundRobinPolicy. Thus the same Hosts are added to liveHosts on every refresh(), and its size grows indefinitely after many refresh() calls, causing OOM. Even in the heap dump taken after the OOM we found a huge number of objects in liveHosts of the RoundRobinPolicy object.
> Some possible solutions would be:
> 1. Use a new LoadBalancingPolicy object while building the new Cluster during refresh().
> 2. Somehow clear the objects in liveHosts during refresh().
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8775) Memory leak in ignite-cassandra module while using RoundRobinPolicy LoadBalancingPolicy
[ https://issues.apache.org/jira/browse/IGNITE-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yashasvi Kotamraju reassigned IGNITE-8775: -- Assignee: Igor Rudyak (was: Yashasvi Kotamraju)
> Memory leak in ignite-cassandra module while using RoundRobinPolicy
> LoadBalancingPolicy
> ---
>
> Key: IGNITE-8775
> URL: https://issues.apache.org/jira/browse/IGNITE-8775
> Project: Ignite
> Issue Type: Bug
> Components: cassandra
> Reporter: Yashasvi Kotamraju
> Assignee: Igor Rudyak
> Priority: Major
>
> An OutOfMemory exception is observed when the issue IGNITE-8354 is encountered. Though that issue is solved, preventing OOM by avoiding unnecessary Cassandra session refreshes, there seems to be a memory leak in the ignite-cassandra module when the RoundRobinPolicy LoadBalancingPolicy is used while refreshing the Cassandra session, which seems to be the root cause of the OOM.
> In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.java, when the refresh() method is invoked to handle exceptions, a new Cluster is built with the same LoadBalancingPolicy object. We are using RoundRobinPolicy, so the same RoundRobinPolicy object is used while building the Cluster when refresh() is invoked. RoundRobinPolicy holds a CopyOnWriteArrayList liveHosts. Whenever init(Cluster cluster, Collection hosts) is called on RoundRobinPolicy, it calls liveHosts.addAll(hosts), adding the whole Host collection to liveHosts.
> Whenever a Cluster is built during refresh(), the Host collection is added again to the liveHosts of the same RoundRobinPolicy. Thus the same Hosts are added to liveHosts on every refresh(), and its size grows indefinitely after many refresh() calls, causing OOM. Even in the heap dump taken after the OOM we found a huge number of objects in liveHosts of the RoundRobinPolicy object.
> Some possible solutions would be:
> 1. Use a new LoadBalancingPolicy object while building the new Cluster during refresh().
> 2. Somehow clear the objects in liveHosts during refresh().
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-8775) Memory leak in ignite-cassandra module while using RoundRobinPolicy LoadBalancingPolicy
[ https://issues.apache.org/jira/browse/IGNITE-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yashasvi Kotamraju updated IGNITE-8775: --- Description:
An OutOfMemory exception is observed when the issue IGNITE-8354 is encountered. Though that issue is solved, preventing OOM by avoiding unnecessary Cassandra session refreshes, there seems to be a memory leak in the ignite-cassandra module when the RoundRobinPolicy LoadBalancingPolicy is used while refreshing the Cassandra session, which seems to be the root cause of the OOM.
In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.java, when the refresh() method is invoked to handle exceptions, a new Cluster is built with the same LoadBalancingPolicy object. We are using RoundRobinPolicy, so the same RoundRobinPolicy object is used while building the Cluster when refresh() is invoked. RoundRobinPolicy holds a CopyOnWriteArrayList liveHosts. Whenever init(Cluster cluster, Collection hosts) is called on RoundRobinPolicy, it calls liveHosts.addAll(hosts), adding the whole Host collection to liveHosts. Whenever a Cluster is built during refresh(), the Host collection is added again to the liveHosts of the same RoundRobinPolicy. Thus the same Hosts are added to liveHosts on every refresh(), and its size grows indefinitely after many refresh() calls, causing OOM. Even in the heap dump taken after the OOM we found a huge number of objects in liveHosts of the RoundRobinPolicy object.
Some possible solutions would be:
1. Use a new LoadBalancingPolicy object while building the new Cluster during refresh().
2. Somehow clear the objects in liveHosts during refresh().
was:
An OutOfMemory exception is observed when the issue IGNITE-8354 is encountered. Though that issue is solved, preventing OOM by avoiding unnecessary Cassandra session refreshes, there seems to be a memory leak. In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.java, when the refresh() method is invoked to handle exceptions, a new Cluster is built with the same LoadBalancingPolicy object. We are using RoundRobinPolicy, so the same RoundRobinPolicy object is used while building the Cluster when refresh() is invoked. RoundRobinPolicy holds a CopyOnWriteArrayList liveHosts. Whenever init(Cluster cluster, Collection hosts) is called on RoundRobinPolicy, it calls liveHosts.addAll(hosts), adding the whole Host collection to liveHosts. Whenever a Cluster is built during refresh(), the Host collection is added again to the liveHosts of the same RoundRobinPolicy. Thus the same Hosts are added to liveHosts on every refresh(), and its size grows indefinitely after many refresh() calls, causing OOM. Even in the heap dump taken after the OOM we found a huge number of objects in liveHosts of the RoundRobinPolicy object.
> Memory leak in ignite-cassandra module while using RoundRobinPolicy
> LoadBalancingPolicy
> ---
>
> Key: IGNITE-8775
> URL: https://issues.apache.org/jira/browse/IGNITE-8775
> Project: Ignite
> Issue Type: Bug
> Components: cassandra
> Reporter: Yashasvi Kotamraju
> Priority: Major
>
> An OutOfMemory exception is observed when the issue IGNITE-8354 is encountered. Though that issue is solved, preventing OOM by avoiding unnecessary Cassandra session refreshes, there seems to be a memory leak in the ignite-cassandra module when the RoundRobinPolicy LoadBalancingPolicy is used while refreshing the Cassandra session, which seems to be the root cause of the OOM.
> In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.java, when the refresh() method is invoked to handle exceptions, a new Cluster is built with the same LoadBalancingPolicy object. We are using RoundRobinPolicy, so the same RoundRobinPolicy object is used while building the Cluster when refresh() is invoked. RoundRobinPolicy holds a CopyOnWriteArrayList liveHosts. Whenever init(Cluster cluster, Collection hosts) is called on RoundRobinPolicy, it calls liveHosts.addAll(hosts), adding the whole Host collection to liveHosts.
> Whenever a Cluster is built during refresh(), the Host collection is added again to the liveHosts of the same RoundRobinPolicy. Thus the same Hosts are added to liveHosts on every refresh(), and its size grows indefinitely after many refresh() calls, causing OOM. Even in the heap dump taken after the OOM we found a huge number of objects in liveHosts of the RoundRobinPolicy object.
> Some possible solutions would be:
> 1. Use a new LoadBalancingPolicy object while building the new Cluster during refresh().
> 2. Somehow clear the objects in liveHosts during refresh().
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-8775) Memory leak in ignite-cassandra module while using RoundRobinPolicy LoadBalancingPolicy
[ https://issues.apache.org/jira/browse/IGNITE-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yashasvi Kotamraju updated IGNITE-8775: --- Description:
An OutOfMemory exception is observed when the issue IGNITE-8354 is encountered. Though that issue is solved, preventing OOM by avoiding unnecessary Cassandra session refreshes, there seems to be a memory leak. In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.java, when the refresh() method is invoked to handle exceptions, a new Cluster is built with the same LoadBalancingPolicy object. We are using RoundRobinPolicy, so the same RoundRobinPolicy object is used while building the Cluster when refresh() is invoked. RoundRobinPolicy holds a CopyOnWriteArrayList liveHosts. Whenever init(Cluster cluster, Collection hosts) is called on RoundRobinPolicy, it calls liveHosts.addAll(hosts), adding the whole Host collection to liveHosts. Whenever a Cluster is built during refresh(), the Host collection is added again to the liveHosts of the same RoundRobinPolicy. Thus the same Hosts are added to liveHosts on every refresh(), and its size grows indefinitely after many refresh() calls, causing OOM. Even in the heap dump taken after the OOM we found a huge number of objects in liveHosts of the RoundRobinPolicy object.
was:
In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.java, when the refresh() method is invoked to handle exceptions, a new Cluster is built with the same LoadBalancingPolicy object. We are using RoundRobinPolicy, so the same RoundRobinPolicy object is used while building the Cluster when refresh() is invoked. RoundRobinPolicy holds a CopyOnWriteArrayList liveHosts. Whenever init(Cluster cluster, Collection hosts) is called on RoundRobinPolicy, it calls liveHosts.addAll(hosts), adding the whole Host collection to liveHosts. Whenever a Cluster is built during refresh(), the Host collection is added again to the liveHosts of the same RoundRobinPolicy. Thus the same Hosts are added to liveHosts on every refresh(), and its size grows indefinitely after many refresh() calls, causing OOM. Even in the heap dump taken after the OOM we found a huge number of objects in liveHosts of the RoundRobinPolicy object.
> Memory leak in ignite-cassandra module while using RoundRobinPolicy
> LoadBalancingPolicy
> ---
>
> Key: IGNITE-8775
> URL: https://issues.apache.org/jira/browse/IGNITE-8775
> Project: Ignite
> Issue Type: Bug
> Components: cassandra
> Reporter: Yashasvi Kotamraju
> Priority: Major
>
> An OutOfMemory exception is observed when the issue IGNITE-8354 is encountered. Though that issue is solved, preventing OOM by avoiding unnecessary Cassandra session refreshes, there seems to be a memory leak.
> In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.java, when the refresh() method is invoked to handle exceptions, a new Cluster is built with the same LoadBalancingPolicy object. We are using RoundRobinPolicy, so the same RoundRobinPolicy object is used while building the Cluster when refresh() is invoked. RoundRobinPolicy holds a CopyOnWriteArrayList liveHosts. Whenever init(Cluster cluster, Collection hosts) is called on RoundRobinPolicy, it calls liveHosts.addAll(hosts), adding the whole Host collection to liveHosts.
> Whenever a Cluster is built during refresh(), the Host collection is added again to the liveHosts of the same RoundRobinPolicy. Thus the same Hosts are added to liveHosts on every refresh(), and its size grows indefinitely after many refresh() calls, causing OOM. Even in the heap dump taken after the OOM we found a huge number of objects in liveHosts of the RoundRobinPolicy object.
>
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-8775) Memory leak in ignite-cassandra module while using RoundRobinPolicy LoadBalancingPolicy
Yashasvi Kotamraju created IGNITE-8775: -- Summary: Memory leak in ignite-cassandra module while using RoundRobinPolicy LoadBalancingPolicy Key: IGNITE-8775 URL: https://issues.apache.org/jira/browse/IGNITE-8775 Project: Ignite Issue Type: Bug Components: cassandra Reporter: Yashasvi Kotamraju
In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.java, when the refresh() method is invoked to handle exceptions, a new Cluster is built with the same LoadBalancingPolicy object. We are using RoundRobinPolicy, so the same RoundRobinPolicy object is used while building the Cluster when refresh() is invoked. RoundRobinPolicy holds a CopyOnWriteArrayList liveHosts. Whenever init(Cluster cluster, Collection hosts) is called on RoundRobinPolicy, it calls liveHosts.addAll(hosts), adding the whole Host collection to liveHosts. Whenever a Cluster is built during refresh(), the Host collection is added again to the liveHosts of the same RoundRobinPolicy. Thus the same Hosts are added to liveHosts on every refresh(), and its size grows indefinitely after many refresh() calls, causing OOM. Even in the heap dump taken after the OOM we found a huge number of objects in liveHosts of the RoundRobinPolicy object.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
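The growth pattern described in this report can be sketched with a plain CopyOnWriteArrayList. The class below is an illustrative stand-in for the driver's RoundRobinPolicy, not the real type:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative stand-in for the driver's RoundRobinPolicy: init() only
// ever appends to liveHosts, and nothing clears it between init() calls.
class ReusedPolicy {
    final CopyOnWriteArrayList<String> liveHosts = new CopyOnWriteArrayList<>();

    void init(Collection<String> hosts) {
        liveHosts.addAll(hosts); // duplicates accumulate on every call
    }
}

public class LiveHostsLeakDemo {
    public static void main(String[] args) {
        ReusedPolicy policy = new ReusedPolicy();
        List<String> hosts = Arrays.asList("10.0.0.1", "10.0.0.2");
        // Each simulated refresh() rebuilds the Cluster with the SAME
        // policy object, which re-runs init() against the same hosts.
        for (int refresh = 0; refresh < 3; refresh++)
            policy.init(hosts);
        System.out.println(policy.liveHosts.size()); // prints: 6 (not 2)
    }
}
```

The first proposed fix maps directly onto this sketch: constructing a fresh policy object per rebuild keeps liveHosts bounded by the actual host count.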
[jira] [Commented] (IGNITE-6500) POJO fields of java wrapper type are not retaining null values from Cassandra persistent store, while using ignite's CassandraCacheStoreFactory
[ https://issues.apache.org/jira/browse/IGNITE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479004#comment-16479004 ] Yashasvi Kotamraju commented on IGNITE-6500: Hi Igor, I created a new PR with the changes suggested by you. Please review the PR.
> POJO fields of java wrapper type are not retaining null values from Cassandra
> persistent store, while using ignite's CassandraCacheStoreFactory
> ---
>
> Key: IGNITE-6500
> URL: https://issues.apache.org/jira/browse/IGNITE-6500
> Project: Ignite
> Issue Type: Bug
> Components: cassandra
> Affects Versions: 2.1
> Reporter: Yashasvi Kotamraju
> Assignee: Yashasvi Kotamraju
> Priority: Minor
> Labels: patch
> Fix For: 2.6
>
> While using Ignite's CassandraCacheStoreFactory (part of ignite-cassandra-store.jar) as the cacheStoreFactory for a cache, if a POJO field is of a wrapper class type and the column value mapped in the Cassandra persistent store is null, then the POJO field gets set to the default primitive value instead of null.
> For example, assume a table 'person' in a Cassandra persistent store with the following structure and data:
> table person: columns person_no(int), phno(text), address(text), age(int), name(text)
> data row: person_no=1, phno=12353, address=null, age=null, name=yash
> person_no is the PRIMARY_KEY. This table is mapped to the person POJO for the Ignite cache:
> public class person {
> private int person_no;
> private String name;
> private Integer age = null;
> private String phno;
> private String address;
> ... getters and setters etc.
> }
> Now we load the row from Cassandra into the Ignite cache using cache.get(1) or cache.load(..), and we are using Ignite's CassandraCacheStoreFactory for this cache.
> Let person p1 = cache.get(1);
> Now p1.getName() returns "yash" and p1.getAddress() returns null, but p1.getAge() returns 0 instead of null. A null value is expected, since the value is null in the Cassandra persistent store.
> Hence, if the value is 0 for the age field, there is no way to differentiate whether it was null or actually 0. A similar problem exists for the other wrapper types: Long, Float, Double, Boolean.
> The cause of this problem is as follows.
> In org.apache.ignite.cache.store.cassandra.persistence.PojoField.setValueFromRow(..), the Cassandra field value is first obtained using the method PropertyMappingHelper.getCassandraColumnValue(..). This method calls the DataStax driver methods Row.getInt(), Row.getFloat(), Row.getDouble(), etc., depending on the column, and the value obtained is then set on the respective POJO field. But according to the DataStax documentation, getInt returns 0 if the column value is null, and similarly getLong returns 0L, getDouble returns 0.0, etc. Hence PropertyMappingHelper.getCassandraColumnValue returns 0 or 0L or 0.0 or false even if the value is null, and this value is then set on the wrapper-type POJO fields.
> The problem exists only for primitive Cassandra data types mapped to wrapper-type fields in the POJO; for other types like String, Date, etc., the null values are retained in the POJO fields.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
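One way to preserve null for wrapper-type fields is to test the column with the driver's Row.isNull(..) before calling the primitive getter. The sketch below uses a hypothetical RowLike interface in place of the real DataStax Row:

```java
// Illustrative stand-in for the DataStax Row API; the real driver's Row
// also exposes isNull(String) alongside the primitive getters.
interface RowLike {
    boolean isNull(String column);
    int getInt(String column);
}

public class NullColumnDemo {
    // Preserve SQL NULL for wrapper fields: check isNull() first, since
    // getInt() returns 0 when the column is null.
    static Integer getIntegerOrNull(RowLike row, String column) {
        return row.isNull(column) ? null : row.getInt(column);
    }

    public static void main(String[] args) {
        RowLike nullAge = new RowLike() {
            public boolean isNull(String c) { return true; }
            public int getInt(String c)     { return 0; } // driver default for null
        };
        System.out.println(getIntegerOrNull(nullAge, "age")); // prints: null
    }
}
```

The same pattern extends to Long, Float, Double, and Boolean fields, which are affected in exactly the same way.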
[jira] [Commented] (IGNITE-6252) Cassandra Cache Store Session does not retry if prepare statement failed
[ https://issues.apache.org/jira/browse/IGNITE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467639#comment-16467639 ] Yashasvi Kotamraju commented on IGNITE-6252: Ticket addressing the issue in the above comment, regarding the continuous Cassandra session refresh issue: https://issues.apache.org/jira/browse/IGNITE-8354
> Cassandra Cache Store Session does not retry if prepare statement failed
>
> Key: IGNITE-6252
> URL: https://issues.apache.org/jira/browse/IGNITE-6252
> Project: Ignite
> Issue Type: Bug
> Components: cassandra
> Affects Versions: 2.0, 2.1
> Reporter: Sunny Chan
> Assignee: Igor Rudyak
> Priority: Major
> Fix For: 2.6
>
> During our testing, we found the following warning about prepared statements:
> 2017-08-31 11:27:19.479 org.apache.ignite.cache.store.cassandra.CassandraCacheStore flusher-0-#265%% WARN CassandraCacheStore - Prepared statement cluster error detected, refreshing Cassandra session
> com.datastax.driver.core.exceptions.InvalidQueryException: Tried to execute unknown prepared query : 0xc7647611fd755386ef63478ee7de577b. You may have used a PreparedStatement that was created with another Cluster instance.
> We noticed that after this warning occurs, some of the data doesn't persist properly into the Cassandra cache.
> After further examining Ignite's CassandraSessionImpl code in the method execute(BatchExecutionAssistance, Iterable), we found that at around [line 283|https://github.com/apache/ignite/blob/86bd544a557663bce497134f7826be6b24d53330/modules/cassandra/store/src/main/java/org/apache/ignite/cache/store/cassandra/session/CassandraSessionImpl.java#L283], if the prepare statement fails in the async call, the operation is not retried: the error is stored at [line 269|https://github.com/apache/ignite/blob/86bd544a557663bce497134f7826be6b24d53330/modules/cassandra/store/src/main/java/org/apache/ignite/cache/store/cassandra/session/CassandraSessionImpl.java#L269] and cleared at [line 277|https://github.com/apache/ignite/blob/86bd544a557663bce497134f7826be6b24d53330/modules/cassandra/store/src/main/java/org/apache/ignite/cache/store/cassandra/session/CassandraSessionImpl.java#L277], but it is not checked again after going through the [ResultSetFuture|https://github.com/apache/ignite/blob/86bd544a557663bce497134f7826be6b24d53330/modules/cassandra/store/src/main/java/org/apache/ignite/cache/store/cassandra/session/CassandraSessionImpl.java#L307].
> I believe at line 307 you should check for error != null so that any failure will be retried.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
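The retry gap this report describes can be illustrated with a simplified synchronous stand-in for the async batch loop; names and structure here are hypothetical, not the actual CassandraSessionImpl code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified synchronous stand-in for the async batch loop: an error
// recorded for an item must be re-checked after the round completes so
// the item is queued for retry instead of being silently dropped.
public class RetryOnErrorDemo {
    static List<String> executeRound(List<String> batch, List<String> retryQueue) {
        List<String> persisted = new ArrayList<>();
        for (String item : batch) {
            // simulated prepare/execute: items marked "bad" record an error
            Exception error = item.startsWith("bad")
                ? new IllegalStateException("prepare failed") : null;
            // the check the report proposes (error != null after the wait):
            if (error != null)
                retryQueue.add(item); // retried on the next round
            else
                persisted.add(item);
        }
        return persisted;
    }

    public static void main(String[] args) {
        List<String> retryQueue = new ArrayList<>();
        List<String> ok = executeRound(Arrays.asList("row1", "bad-row2", "row3"), retryQueue);
        System.out.println(ok);         // prints: [row1, row3]
        System.out.println(retryQueue); // prints: [bad-row2]
    }
}
```

Without the error check, the failed item would land in neither list, which matches the observation that some data silently failed to persist.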
[jira] [Created] (IGNITE-8354) Ignite Continuously refreshes Cassandra Session when there is an Exception in execute method of CassandraSessionImpl
Yashasvi Kotamraju created IGNITE-8354: -- Summary: Ignite Continuously refreshes Cassandra Session when there is an Exception in execute method of CassandraSessionImpl Key: IGNITE-8354 URL: https://issues.apache.org/jira/browse/IGNITE-8354 Project: Ignite Issue Type: Bug Components: cassandra Reporter: Yashasvi Kotamraju Assignee: Igor Rudyak
*In CassandraSessionImpl.java* When the handlePreparedStatementClusterError method is called during an exception, the session is refreshed. There might be many prepared statements created with the old session (since a session object can be shared between different batches). So when we execute the prepared statements created with the old session on the newly created session, we get the exception "com.datastax.driver.core.exceptions.InvalidQueryException: You may have used a PreparedStatement that was created with another Cluster instance", which again calls handlePreparedStatementClusterError and refreshes the session again, and this happens continuously. We have observed continuous Cassandra session refresh warnings when this scenario occurred.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
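One way to break the refresh cycle, sketched below under the assumption that prepared statements can be tagged with the session they were prepared on, is to re-prepare a stale statement against the current session instead of refreshing again. All types here are illustrative stand-ins, not the actual Ignite or driver classes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-ins: a session with a generation counter bumped on
// every refresh(), and prepared statements tagged with the generation
// they were prepared on.
class SessionSketch {
    final int generation;
    SessionSketch(int generation) { this.generation = generation; }
}

class PreparedSketch {
    final String cql;
    final int preparedOn;
    PreparedSketch(String cql, int preparedOn) { this.cql = cql; this.preparedOn = preparedOn; }
}

public class ReprepareDemo {
    SessionSketch session = new SessionSketch(0);
    final Map<String, PreparedSketch> cache = new ConcurrentHashMap<>();

    // Re-prepare a statement whenever the cached copy predates the current
    // session, instead of triggering yet another session refresh.
    PreparedSketch prepare(String cql) {
        return cache.compute(cql, (q, cached) ->
            (cached == null || cached.preparedOn != session.generation)
                ? new PreparedSketch(q, session.generation)
                : cached);
    }

    void refresh() { session = new SessionSketch(session.generation + 1); }

    public static void main(String[] args) {
        ReprepareDemo demo = new ReprepareDemo();
        demo.prepare("INSERT INTO t (k, v) VALUES (?, ?)");
        demo.refresh(); // old prepared statements are now stale
        PreparedSketch fresh = demo.prepare("INSERT INTO t (k, v) VALUES (?, ?)");
        // the statement was transparently re-prepared on the new session
        System.out.println(fresh.preparedOn == demo.session.generation); // prints: true
    }
}
```

Executing only statements prepared on the current session means the InvalidQueryException that triggers the next refresh never occurs, which is the loop described in the report.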
[jira] [Resolved] (IGNITE-6252) Cassandra Cache Store Session does not retry if prepare statement failed
[ https://issues.apache.org/jira/browse/IGNITE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yashasvi Kotamraju resolved IGNITE-6252. Resolution: Fixed
> Cassandra Cache Store Session does not retry if prepare statement failed
>
> Key: IGNITE-6252
> URL: https://issues.apache.org/jira/browse/IGNITE-6252
> Project: Ignite
> Issue Type: Bug
> Components: cassandra
> Affects Versions: 2.0, 2.1
> Reporter: Sunny Chan
> Assignee: Igor Rudyak
> Priority: Major
> Fix For: 2.6
>
> During our testing, we found the following warning about prepared statements:
> 2017-08-31 11:27:19.479 org.apache.ignite.cache.store.cassandra.CassandraCacheStore flusher-0-#265%% WARN CassandraCacheStore - Prepared statement cluster error detected, refreshing Cassandra session
> com.datastax.driver.core.exceptions.InvalidQueryException: Tried to execute unknown prepared query : 0xc7647611fd755386ef63478ee7de577b. You may have used a PreparedStatement that was created with another Cluster instance.
> We noticed that after this warning occurs, some of the data doesn't persist properly into the Cassandra cache.
> After further examining Ignite's CassandraSessionImpl code in the method execute(BatchExecutionAssistance, Iterable), we found that at around [line 283|https://github.com/apache/ignite/blob/86bd544a557663bce497134f7826be6b24d53330/modules/cassandra/store/src/main/java/org/apache/ignite/cache/store/cassandra/session/CassandraSessionImpl.java#L283], if the prepare statement fails in the async call, the operation is not retried: the error is stored at [line 269|https://github.com/apache/ignite/blob/86bd544a557663bce497134f7826be6b24d53330/modules/cassandra/store/src/main/java/org/apache/ignite/cache/store/cassandra/session/CassandraSessionImpl.java#L269] and cleared at [line 277|https://github.com/apache/ignite/blob/86bd544a557663bce497134f7826be6b24d53330/modules/cassandra/store/src/main/java/org/apache/ignite/cache/store/cassandra/session/CassandraSessionImpl.java#L277], but it is not checked again after going through the [ResultSetFuture|https://github.com/apache/ignite/blob/86bd544a557663bce497134f7826be6b24d53330/modules/cassandra/store/src/main/java/org/apache/ignite/cache/store/cassandra/session/CassandraSessionImpl.java#L307].
> I believe at line 307 you should check for error != null so that any failure will be retried.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (IGNITE-6252) Cassandra Cache Store Session does not retry if prepare statement failed
[ https://issues.apache.org/jira/browse/IGNITE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433582#comment-16433582 ] Yashasvi Kotamraju edited comment on IGNITE-6252 at 4/12/18 6:00 AM:

Hi Igor, I was looking at the final commit changes mentioned here in the ticket: [https://github.com/apache/ignite/pull/2583]. The if (row != null) code change was done as part of another issue fix, https://issues.apache.org/jira/browse/IGNITE-5779. But I see there is another issue here.

*In CassandraSessionImpl.java:* when the handlePreparedStatementClusterError method is called on an Exception, the session is refreshed. There might be many prepared statements created with the old session (since a session object can be shared between different batches). So when we execute prepared statements created with the old session on the newly created session, we get the Exception "com.datastax.driver.core.exceptions.InvalidQueryException: You may have used a PreparedStatement that was created with another Cluster instance", which again calls handlePreparedStatementClusterError and refreshes the session, and this happens continuously. We have observed continuous Cassandra session refresh warnings when this scenario occurred.

was (Author: kotamrajuyashasvi): Hi Igor, sorry. I was looking at the final commit changes mentioned here in the ticket: [https://github.com/apache/ignite/pull/2583]. But I see there is another issue here. *In CassandraSessionImpl.java:* when the handlePreparedStatementClusterError method is called on an Exception, the session is refreshed. There might be many prepared statements created with the old session (since a session object can be shared between different batches). So when we execute prepared statements created with the old session on the newly created session, we get the Exception "com.datastax.driver.core.exceptions.InvalidQueryException: You may have used a PreparedStatement that was created with another Cluster instance", which again calls handlePreparedStatementClusterError and refreshes the session, and this happens continuously. We have observed continuous Cassandra session refresh warnings when this scenario occurred.

> Cassandra Cache Store Session does not retry if prepare statement failed
>
> Key: IGNITE-6252
> URL: https://issues.apache.org/jira/browse/IGNITE-6252
> Project: Ignite
> Issue Type: Bug
> Components: cassandra
> Affects Versions: 2.0, 2.1
> Reporter: Sunny Chan
> Assignee: Igor Rudyak
> Priority: Major
> Fix For: 2.5
>
> During our testing, we found a certain warning about a prepared statement:
> 2017-08-31 11:27:19.479 org.apache.ignite.cache.store.cassandra.CassandraCacheStore flusher-0-#265%% WARN CassandraCacheStore - Prepared statement cluster error detected, refreshing Cassandra session
> com.datastax.driver.core.exceptions.InvalidQueryException: Tried to execute unknown prepared query : 0xc7647611fd755386ef63478ee7de577b. You may have used a PreparedStatement that was created with another Cluster instance.
> We notice that after this warning occurs, some of the data doesn't persist properly in the Cassandra cache. After further examining Ignite's CassandraSessionImpl code in the method execute(BatchExecutionAssistance, Iterable), we found that at around [line 283|https://github.com/apache/ignite/blob/86bd544a557663bce497134f7826be6b24d53330/modules/cassandra/store/src/main/java/org/apache/ignite/cache/store/cassandra/session/CassandraSessionImpl.java#L283], if the prepare statement fails in the async call, it will not retry the operation: the error is stored in [line 269|https://github.com/apache/ignite/blob/86bd544a557663bce497134f7826be6b24d53330/modules/cassandra/store/src/main/java/org/apache/ignite/cache/store/cassandra/session/CassandraSessionImpl.java#L269] and cleared in [line 277|https://github.com/apache/ignite/blob/86bd544a557663bce497134f7826be6b24d53330/modules/cassandra/store/src/main/java/org/apache/ignite/cache/store/cassandra/session/CassandraSessionImpl.java#L277], but it is not checked again after going through the [ResultSetFuture|https://github.com/apache/ignite/blob/86bd544a557663bce497134f7826be6b24d53330/modules/cassandra/store/src/main/java/org/apache/ignite/cache/store/cassandra/session/CassandraSessionImpl.java#L307]. I believe that in line 307 you should check for error != null so that any failure will be retried.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-6252) Cassandra Cache Store Session does not retry if prepare statement failed
[ https://issues.apache.org/jira/browse/IGNITE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433582#comment-16433582 ] Yashasvi Kotamraju commented on IGNITE-6252:

Hi Igor, sorry. I was looking at the final commit changes mentioned here in the ticket: [https://github.com/apache/ignite/pull/2583]. But I see there is another issue here. *In CassandraSessionImpl.java:* when the handlePreparedStatementClusterError method is called on an Exception, the session is refreshed. There might be many prepared statements created with the old session (since a session object can be shared between different batches). So when we execute prepared statements created with the old session on the newly created session, we get the Exception "com.datastax.driver.core.exceptions.InvalidQueryException: You may have used a PreparedStatement that was created with another Cluster instance", which again calls handlePreparedStatementClusterError and refreshes the session, and this happens continuously. We have observed continuous Cassandra session refresh warnings when this scenario occurred.
[jira] [Comment Edited] (IGNITE-6252) Cassandra Cache Store Session does not retry if prepare statement failed
[ https://issues.apache.org/jira/browse/IGNITE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16427115#comment-16427115 ] Yashasvi Kotamraju edited comment on IGNITE-6252 at 4/10/18 6:41 AM:

Also, whenever session refresh() is called to handle an Exception, a new session is created. But there might be many prepared statements created with the old session (since a session object can be shared between different batches). So when we execute prepared statements created with the old session on the newly created session, we get the Exception "com.datastax.driver.core.exceptions.InvalidQueryException: Tried to execute unknown prepared query. You may have used a PreparedStatement that was created with another Cluster instance", which again refreshes and creates a new Cassandra session, and so on: refresh() will be called continuously, producing the same Exception each time.

A solution would be to inspect the Exception message while async-executing a prepared statement and check whether it contains the String "You may have used a PreparedStatement that was created with another Cluster instance". If so, get a new prepared statement from the newly created session and restart the batch method.

was (Author: kotamrajuyashasvi): Also, whenever session refresh() is called to handle an Exception, a new session is created. But there might be many prepared statements created with the old session.

So when we execute prepared statements created with the old session on the newly created session, we get the Exception "com.datastax.driver.core.exceptions.InvalidQueryException: Tried to execute unknown prepared query", which again refreshes and creates a new Cassandra session, and so on: refresh() will be called continuously.
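The re-prepare approach proposed in the comment above can be illustrated with a toy model. This is a hypothetical sketch, not Ignite's actual code: the class name, the session "generation" counter, and `prepareIfStale` are all invented here to show the idea of re-preparing a statement against the current session instead of looping on refresh().

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical sketch of the proposed fix: track which session "generation"
 * each CQL string was prepared against, and re-prepare when the cached
 * statement belongs to an older (refreshed-away) session, rather than
 * executing a stale PreparedStatement and triggering another refresh.
 */
public class StatementCacheSketch {
    private int sessionGeneration = 0;                 // bumped on every refresh()
    private final Map<String, Integer> preparedIn = new HashMap<>();

    /** Simulates a session refresh: all previously prepared statements become stale. */
    public void refresh() {
        sessionGeneration++;
    }

    /** Returns true if the statement had to be (re-)prepared on the current session. */
    public boolean prepareIfStale(String cql) {
        Integer gen = preparedIn.get(cql);
        boolean stale = gen == null || gen != sessionGeneration;
        if (stale)
            preparedIn.put(cql, sessionGeneration);    // re-prepare on the current session
        return stale;
    }

    public static void main(String[] args) {
        StatementCacheSketch s = new StatementCacheSketch();
        System.out.println(s.prepareIfStale("INSERT INTO t (k, v) VALUES (?, ?)")); // first prepare
        System.out.println(s.prepareIfStale("INSERT INTO t (k, v) VALUES (?, ?)")); // still valid
        s.refresh();
        System.out.println(s.prepareIfStale("INSERT INTO t (k, v) VALUES (?, ?)")); // re-prepared
    }
}
```

With this kind of bookkeeping, a refreshed session never sees a statement prepared on the old Cluster, so the "another Cluster instance" refresh loop cannot start.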
[jira] [Commented] (IGNITE-6252) Cassandra Cache Store Session does not retry if prepare statement failed
[ https://issues.apache.org/jira/browse/IGNITE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16427115#comment-16427115 ] Yashasvi Kotamraju commented on IGNITE-6252:

Also, whenever session refresh() is called to handle an Exception, a new session is created. But there might be many prepared statements created with the old session. So when we execute prepared statements created with the old session on the newly created session, we get the Exception "com.datastax.driver.core.exceptions.InvalidQueryException: Tried to execute unknown prepared query", which again refreshes and creates a new Cassandra session, and so on: refresh() will be called continuously.
[jira] [Reopened] (IGNITE-6252) Cassandra Cache Store Session does not retry if prepare statement failed
[ https://issues.apache.org/jira/browse/IGNITE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yashasvi Kotamraju reopened IGNITE-6252:

The code fix using the condition *assistant.processedCount() == dataSize* will not work if the prepared statement is not a select statement.
[jira] [Commented] (IGNITE-6252) Cassandra Cache Store Session does not retry if prepare statement failed
[ https://issues.apache.org/jira/browse/IGNITE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16426566#comment-16426566 ] Yashasvi Kotamraju commented on IGNITE-6252:

*In CassandraSessionImpl.java*, in the method *@Override public R execute(BatchExecutionAssistant assistant, Iterable data)*:

*ResultSet resSet = futureResult.getValue().getUninterruptibly();*
*Row row = resSet != null && resSet.iterator().hasNext() ? resSet.iterator().next() : null;*
*if (row != null)*
*assistant.process(row, futureResult.getKey());*
*...*

If the prepared statements are Insert/Update/Delete, then no result will be returned, according to the Cassandra docs. Hence *resSet.iterator().hasNext()* will always return false, so *row* will always be null. It is therefore never passed to *assistant.process*, so *assistant.processedCount()* will always be 0 and never equal to *dataSize*. We end up retrying even though the data was inserted/deleted/updated without any Exceptions, make CQL_EXECUTION_ATTEMPTS_COUNT attempts, and finally throw the Exception:

*"Failed to process " + (dataSize - assistant.processedCount()) + " of " + dataSize + " elements, during " + assistant.operationName() + " operation with Cassandra"*

Hence the code fix using the condition assistant.processedCount() == dataSize will not work if the prepared statement is not a select statement. We can add a boolean flag *retry*, set to false at the start of every attempt. If at any point the execution flow enters an Exception code block, we set *retry* to true. Then, instead of the condition *if (tblAbsenceEx == null && hostsAvailEx == null && prepStatEx == null)*, we can use *if (!retry)* as the condition for whether to return the processed data or retry.

In addition to this, we need to maintain a separate HashSet of the key values of each ResultSetFuture where *resSet != null* and *resSet.iterator().hasNext() == false*, which is the case for Insert/Update/Delete prepared statements. Then, while re-attempting, in addition to the !assistant.alreadyProcessed(seqNum) check, we also check that the HashSet does not contain the seqNum.
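The retry-flag and completed-set idea described above can be sketched as follows. This is a hypothetical simulation, not Ignite's actual implementation: the class, the injected "transient failure", and the loop bounds are all invented to show why an explicit retry flag works where the processedCount() == dataSize check does not (writes return no rows, so processedCount() stays 0).

```java
import java.util.HashSet;
import java.util.Set;

/** Sketch of batch retry driven by a per-attempt retry flag and a set of completed writes. */
public class BatchRetrySketch {

    /**
     * Runs a simulated batch of {@code dataSize} write statements, where the statement
     * with sequence number {@code failSeq} throws on attempt {@code failAttempt}.
     * Successful writes are recorded in {@code completedWithoutRows} (they return no
     * Row, so a processed-row count would never reach dataSize). Returns attempts used.
     */
    static int runBatch(int dataSize, int failAttempt, int failSeq, int maxAttempts,
                        Set<Integer> completedWithoutRows) {
        int attempts = 0;
        while (attempts < maxAttempts) {               // stands in for CQL_EXECUTION_ATTEMPTS_COUNT
            boolean retry = false;                     // reset at the start of every attempt
            for (int seqNum = 0; seqNum < dataSize; seqNum++) {
                if (completedWithoutRows.contains(seqNum))
                    continue;                          // already executed successfully: skip on re-attempt
                try {
                    // session.execute(preparedStatement) would go here; a write returns
                    // no rows, so on success we record the seqNum instead of a Row.
                    if (attempts == failAttempt && seqNum == failSeq)
                        throw new RuntimeException("transient failure");
                    completedWithoutRows.add(seqNum);
                } catch (RuntimeException e) {
                    retry = true;                      // any failure forces another attempt
                }
            }
            attempts++;
            if (!retry)                                // replaces the processedCount() == dataSize check
                return attempts;
        }
        return attempts;
    }

    public static void main(String[] args) {
        Set<Integer> done = new HashSet<>();
        int attempts = runBatch(4, 0, 2, 3, done);
        System.out.println(done.size() + " of 4 writes done in " + attempts + " attempts");
    }
}
```

Note how the completed-set doubles as the "HashSet of key values" from the comment: on a re-attempt, already-successful writes are skipped instead of being executed (and billed) again.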
[jira] [Issue Comment Deleted] (IGNITE-6500) POJO fields of java wrapper type are not retaining null values from Cassandra persistent store, while using ignite's CassandraCacheStoreFactory
[ https://issues.apache.org/jira/browse/IGNITE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yashasvi Kotamraju updated IGNITE-6500: --- Comment: was deleted (was: IGNITE-6500 POJO fields of java wrapper type are not retaining null values from Cassandra persistent store, while using ignite's CassandraCacheStoreFactory.)

> POJO fields of java wrapper type are not retaining null values from Cassandra persistent store, while using ignite's CassandraCacheStoreFactory
>
> Key: IGNITE-6500
> URL: https://issues.apache.org/jira/browse/IGNITE-6500
> Project: Ignite
> Issue Type: Bug
> Components: cassandra
> Affects Versions: 2.1
> Reporter: Yashasvi Kotamraju
> Assignee: Yashasvi Kotamraju
> Priority: Minor
> Labels: patch
> Fix For: 2.4
>
> While using ignite's CassandraCacheStoreFactory (part of ignite-cassandra-store.jar) as the cacheStoreFactory for a cache, if a POJO field is of a wrapper class type and the column value mapped in the Cassandra persistent store is null, then the POJO field is set to the default primitive value instead of null.
> For example, assume a table 'person' in a Cassandra persistent store with the following structure and data:
> *table person:*
> *columns*: person_no(int), phno(text), address(text), age(int), name(text)
> *data*: 1, 12353, null, null, yash
> person_no is the PRIMARY_KEY. This table is mapped to a person POJO for the ignite cache:
> public class person {
>     private int person_no;
>     private String name;
>     private Integer age = null;
>     private String phno;
>     private String address;
>     // getters and setters etc.
> }
> Now we load the row from Cassandra into the ignite cache using cache.get(1) or cache.load(..), and we are using ignite's CassandraCacheStoreFactory for this cache. Let person p1 = cache.get(1); now p1.getName() returns "yash" and p1.getAddress() returns null, but p1.getAge() returns 0 instead of null. A null value is expected, since the value is null in the Cassandra persistent store.
> Hence if the value is 0 for the age field, there is no way to differentiate whether it was null or actually 0. A similar problem exists for the other wrapper types: Long, Float, Double, Boolean.
> The cause of this problem is as follows. In the org.apache.ignite.cache.store.cassandra.persistence.PojoField.setValueFromRow(..) method, the Cassandra field value is first obtained using the method PropertyMappingHelper.getCassandraColumnValue(..). This method calls the DataStax Driver methods Row.getInt(), Row.getFloat(), Row.getDouble(), etc., depending upon the column, and the value obtained is then set on the respective POJO field. But according to the DataStax documentation, getInt returns 0 if the column value is null, and similarly getLong returns 0L, getDouble returns 0.0, etc. Hence PropertyMappingHelper.getCassandraColumnValue returns 0, 0L, 0.0, or false even if the value is null, and this value is then set on wrapper-typed POJO fields. The problem exists only for primitive data types in Cassandra mapped to wrapper-typed fields in the POJO; for other types like String, Date, etc., the null values are retained in the POJO fields.

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (IGNITE-6500) POJO fields of java wrapper type are not retaining null values from Cassandra persistent store, while using ignite's CassandraCacheStoreFactory
[ https://issues.apache.org/jira/browse/IGNITE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yashasvi Kotamraju updated IGNITE-6500: --- Description:

While using ignite's CassandraCacheStoreFactory (part of ignite-cassandra-store.jar) as the cacheStoreFactory for a cache, if a POJO field is of a wrapper class type and the column value mapped in the Cassandra persistent store is null, then the POJO field is set to the default primitive value instead of null. For example, assume a table 'person' in a Cassandra persistent store with the following structure and data:

*table person:*
*columns*: person_no(int), phno(text), address(text), age(int), name(text)
*data*: 1, 12353, null, null, yash

person_no is the PRIMARY_KEY. This table is mapped to a person POJO for the ignite cache:

public class person {
    private int person_no;
    private String name;
    private Integer age = null;
    private String phno;
    private String address;
    // getters and setters etc.
}

Now we load the row from Cassandra into the ignite cache using cache.get(1) or cache.load(..), and we are using ignite's CassandraCacheStoreFactory for this cache. Let person p1 = cache.get(1); now p1.getName() returns "yash" and p1.getAddress() returns null, but p1.getAge() returns 0 instead of null. A null value is expected, since the value is null in the Cassandra persistent store. Hence if the value is 0 for the age field, there is no way to differentiate whether it was null or actually 0. A similar problem exists for the other wrapper types: Long, Float, Double, Boolean.

The cause of this problem is as follows. In the org.apache.ignite.cache.store.cassandra.persistence.PojoField.setValueFromRow(..) method, the Cassandra field value is first obtained using the method PropertyMappingHelper.getCassandraColumnValue(..). This method calls the DataStax Driver methods Row.getInt(), Row.getFloat(), Row.getDouble(), etc., depending upon the column, and the value obtained is then set on the respective POJO field. But according to the DataStax documentation, getInt returns 0 if the column value is null, and similarly getLong returns 0L, getDouble returns 0.0, etc. Hence PropertyMappingHelper.getCassandraColumnValue returns 0, 0L, 0.0, or false even if the value is null, and this value is then set on wrapper-typed POJO fields. The problem exists only for primitive data types in Cassandra mapped to wrapper-typed fields in the POJO; for other types like String, Date, etc., the null values are retained in the POJO fields.

was: While using ignite-cassandra-store, if a POJO field is of a wrapper class type and the column value mapped in the Cassandra persistent store is null, then the POJO field is set to the default primitive value instead of null.

Summary: POJO fields of java wrapper type are not retaining null values from Cassandra persistent store, while using ignite's CassandraCacheStoreFactory (was: While using ignite-cassandra-store, POJO field having wrapper type, mapped to Cassandra table are getting initialized to respective default value of primitive type instead of null if column value is null.)

To fix this problem, we can simply make an initial check in the org.apache.ignite.cache.store.cassandra.persistence.PojoField.setValueFromRow(..) method to see whether the column value is null, using the DataStax Driver method *Row.isNull(String column)*. If this method returns true, then setValueFromRow(..) returns without making the further method calls:

*PropertyMappingHelper.getCassandraColumnValue(row, col, accessor.getFieldType(), serializer);*
*accessor.setValue(obj, val);*

Hence the POJO fields will retain the default values of their respective Java types: null for wrapper/Object types, and for primitive types the same as the return value of the respective Row.getXxx(..) method when the column value is null. Please let me know if there are any flaws with this approach or if it has any impact on other modules.
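The isNull-guard fix described above can be sketched with a stand-in for the driver's Row. This is an illustrative assumption, not the real DataStax API or Ignite's actual PojoField code: the Row stand-in, the "age" column, and readAge are all hypothetical, showing only that checking isNull() before a primitive getter lets a wrapper-typed field keep null instead of the driver's default 0.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of retaining null for wrapper-typed POJO fields via an isNull() pre-check. */
public class NullColumnSketch {

    /** Minimal stand-in for com.datastax.driver.core.Row (hypothetical, for illustration). */
    static class Row {
        private final Map<String, Integer> cols = new HashMap<>();
        Row set(String c, Integer v) { cols.put(c, v); return this; }
        boolean isNull(String c)     { return cols.get(c) == null; }
        // Mimics the documented driver behavior: primitive getters return 0 for null columns.
        int getInt(String c)         { Integer v = cols.get(c); return v == null ? 0 : v; }
    }

    /** The proposed setValueFromRow(..) guard: skip the read entirely when the column is null. */
    static Integer readAge(Row row) {
        if (row.isNull("age"))
            return null;              // wrapper field keeps its Java default (null)
        return row.getInt("age");
    }

    public static void main(String[] args) {
        System.out.println(readAge(new Row().set("age", null)));  // null, not 0
        System.out.println(readAge(new Row().set("age", 30)));
    }
}
```

Without the isNull() check, the first call would see getInt() return 0 and the POJO's Integer field would be set to 0, which is exactly the ambiguity the issue describes.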
[jira] [Updated] (IGNITE-6500) While using ignite-cassandra-store, POJO field having wrapper type, mapped to Cassandra table are getting initialized to respective default value of primitive type inste
[ https://issues.apache.org/jira/browse/IGNITE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yashasvi Kotamraju updated IGNITE-6500: --- Affects Version/s: 2.1 Fix Version/s: 2.3
[jira] [Assigned] (IGNITE-6500) While using ignite-cassandra-store, POJO field having wrapper type, mapped to Cassandra table are getting initialized to respective default value of primitive type inst
[ https://issues.apache.org/jira/browse/IGNITE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yashasvi Kotamraju reassigned IGNITE-6500: -- Assignee: Yashasvi Kotamraju
[jira] [Created] (IGNITE-6500) While using ignite-cassandra-store, POJO field having wrapper type, mapped to Cassandra table are getting initialized to respective default value of primitive type inste
Yashasvi Kotamraju created IGNITE-6500: -- Summary: While using ignite-cassandra-store, POJO field having wrapper type, mapped to Cassandra table are getting initialized to respective default value of primitive type instead of null if column value is null. Key: IGNITE-6500 URL: https://issues.apache.org/jira/browse/IGNITE-6500 Project: Ignite Issue Type: Bug Components: cassandra Reporter: Yashasvi Kotamraju Priority: Minor

While using ignite-cassandra-store, if a POJO field is of a wrapper class type and the column value mapped in the Cassandra persistent store is null, then the POJO field is set to the default primitive value instead of null.