[jira] [Commented] (IGNITE-12329) Invalid handling of remote entries causes partition desync and transaction hanging in COMMITTING state.
[ https://issues.apache.org/jira/browse/IGNITE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16962306#comment-16962306 ] Alexei Scherbakov commented on IGNITE-12329: The contribution also includes a fix for GridDhtLocalPartition equals and hashCode. [~ivan.glukos] Ready for review. > Invalid handling of remote entries causes partition desync and transaction > hanging in COMMITTING state. > --- > > Key: IGNITE-12329 > URL: https://issues.apache.org/jira/browse/IGNITE-12329 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.7.6 >Reporter: Alexei Scherbakov >Assignee: Alexei Scherbakov >Priority: Major > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > This can happen if a transaction is mapped on a partition which is about to be > evicted on a backup. > Due to bugs, an entry belonging to another cache may be excluded from commit, or an > entry holding a lock can be removed without releasing the lock, causing dependent > transactions to hang. -- This message was sent by Atlassian Jira (v8.3.4#803005)
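The comment above mentions a fix for GridDhtLocalPartition equals and hashCode. As an illustration only (the class name and id-based identity below are assumptions, not Ignite's actual implementation), the contract such a fix restores is that equals and hashCode must be derived from the same state, so two objects that compare equal always hash equally:

```java
// Illustrative sketch: a partition identified by its id, with equals and
// hashCode defined on the same field so hash-based collections behave
// correctly. Not GridDhtLocalPartition's real code.
public class PartitionSketch {
    private final int id;

    public PartitionSketch(int id) {
        this.id = id;
    }

    @Override public boolean equals(Object o) {
        // Equal iff the other object is a PartitionSketch with the same id.
        return o instanceof PartitionSketch && ((PartitionSketch)o).id == id;
    }

    @Override public int hashCode() {
        // Derived from the same field used by equals, as the contract requires.
        return Integer.hashCode(id);
    }
}
```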
[jira] [Commented] (IGNITE-12329) Invalid handling of remote entries causes partition desync and transaction hanging in COMMITTING state.
[ https://issues.apache.org/jira/browse/IGNITE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16962305#comment-16962305 ] Ignite TC Bot commented on IGNITE-12329: {panel:title=Branch: [pull/7018/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=4732671&buildTypeId=IgniteTests24Java8_RunAll] > Invalid handling of remote entries causes partition desync and transaction > hanging in COMMITTING state.
[jira] [Commented] (IGNITE-5247) TcpCommunicationSpi calls GridNioRecoveryDescriptor with looks unusually large rcvCnt and fail with null.
[ https://issues.apache.org/jira/browse/IGNITE-5247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16962088#comment-16962088 ] Alexey Goncharuk commented on IGNITE-5247: -- Confirming the issue running a client on x86 and a server on zOS. > TcpCommunicationSpi calls GridNioRecoveryDescriptor with looks unusually > large rcvCnt and fail with null. > -- > > Key: IGNITE-5247 > URL: https://issues.apache.org/jira/browse/IGNITE-5247 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 2.0 >Reporter: Chandra Bose Renganathan >Assignee: Alexey Goncharuk >Priority: Major > > TcpCommunicationSpi calls GridNioRecoveryDescriptor with an unusually > large rcvCnt and fails with null. > This happens when a new node tries to join an existing grid of size 2 or 3. > Environment Machine 1: > SunOS betapm 5.10 Generic_150400-48 sun4v sparc sun4v > Environment Machine 2: > Linux sbpmwsv1 2.6.32-642.13.1.el6.x86_64 #1 SMP Wed Nov 23 16:03:01 EST 2016 > x86_64 x86_64 x86_64 GNU/Linux > rcvCnt=216172782113783808 ??? > The configuration follows. MyTcpDiscoveryIpFinderAdapter returns 3 > InetSocketAddress instances, including self. > IgniteConfiguration cfg = new IgniteConfiguration(); > TcpDiscoverySpi spi = new TcpDiscoverySpi(); > JdkMarshaller marshaller = new JdkMarshaller(); > cfg.setMarshaller(marshaller); > spi.setLocalPort(8087); > spi.setSocketTimeout(8L); > spi.setAckTimeout(8L); > spi.setLocalPortRange(1); > spi.setIpFinder(new MyTcpDiscoveryIpFinderAdapter()); > cfg.setDiscoverySpi(spi); > cfg.setGridName("demo-cluster"); > cfg.setClientMode(false); > Ignite ignite = Ignition.start(cfg); > IgniteCache cache = ignite.getOrCreateCache("bose"); > System.out.println(cache.get("key1")); > May 19, 2017 10:09:13 AM org.apache.ignite.logger.java.JavaLogger error > SEVERE: Closing NIO session because of unhandled exception. 
> class org.apache.ignite.internal.util.nio.GridNioException: null > at > org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2043) > at > org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1868) > at > org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1573) > at > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.NullPointerException > at > org.apache.ignite.internal.util.nio.GridNioRecoveryDescriptor.ackReceived(GridNioRecoveryDescriptor.java:211) > at > org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:647) > at > org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onMessage(TcpCommunicationSpi.java:342) > at > org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279) > at > org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109) > at > org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:117) > at > org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109) > at > org.apache.ignite.internal.util.nio.GridConnectionBytesVerifyFilter.onMessageReceived(GridConnectionBytesVerifyFilter.java:88) > at > org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109) > at > org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3062) > at > org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:175) > at > org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:1121) > at > 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2031) > ... 4 more > May 19, 2017 10:09:14 AM org.apache.ignite.logger.java.JavaLogger error > SEVERE: Closing NIO session because of unhandled exception. > class org.apache.ignite.internal.util.nio.GridNioException: null > at > org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2043) > at >
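The confirming comment pairs a client on x86 with a server on zOS, machines of opposite byte order (the original report likewise pairs a SPARC box with an x86 box). One plausible reading of the strange counter, offered here purely as a speculation and not an established root cause, is that rcvCnt=216172782113783808 is a small value whose bytes were read in the wrong order:

```java
// Speculative check: if the reported rcvCnt is a byte-order artifact,
// reversing its byte order should yield a plausibly small ack counter.
public class RcvCntCheck {
    public static void main(String[] args) {
        long rcvCnt = 216172782113783808L; // value from the bug report
        // 216172782113783808 == 3L << 56, i.e. the value 3 with its
        // most-significant byte where the least-significant one should be.
        System.out.println(Long.reverseBytes(rcvCnt));
    }
}
```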
[jira] [Assigned] (IGNITE-5247) TcpCommunicationSpi calls GridNioRecoveryDescriptor with looks unusually large rcvCnt and fail with null.
[ https://issues.apache.org/jira/browse/IGNITE-5247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk reassigned IGNITE-5247: Assignee: Alexey Goncharuk > TcpCommunicationSpi calls GridNioRecoveryDescriptor with looks unusually > large rcvCnt and fail with null.
[jira] [Commented] (IGNITE-12300) ComputeJob#cancel executes with wrong SecurityContext
[ https://issues.apache.org/jira/browse/IGNITE-12300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16962063#comment-16962063 ] Ignite TC Bot commented on IGNITE-12300: {panel:title=Branch: [pull/7017/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=4731963&buildTypeId=IgniteTests24Java8_RunAll] > ComputeJob#cancel executes with wrong SecurityContext > - > > Key: IGNITE-12300 > URL: https://issues.apache.org/jira/browse/IGNITE-12300 > Project: Ignite > Issue Type: Bug >Reporter: Denis Garus >Assignee: Denis Garus >Priority: Major > Attachments: ComputeJobCancelReproducerTest.java > > Time Spent: 10m > Remaining Estimate: 0h > > ComputeJob#cancel executes with the security context of the current node rather > than the security context of the node that initiated the ComputeJob. > > Reproducer: > [https://github.com/apache/ignite/pull/6984/files]
[jira] [Created] (IGNITE-12336) CacheMetricsImpl instance will be created twice in case of near cache is configured
Andrey N. Gura created IGNITE-12336: --- Summary: CacheMetricsImpl instance will be created twice in case of near cache is configured Key: IGNITE-12336 URL: https://issues.apache.org/jira/browse/IGNITE-12336 Project: Ignite Issue Type: Bug Reporter: Andrey N. Gura Assignee: Andrey N. Gura Fix For: 2.8 A {{CacheMetricsImpl}} instance will be created twice for the DHT cache when a near cache is configured. The second instance is redundant because the cache context already contains a metrics instance.
[jira] [Resolved] (IGNITE-11866) Add ability to activate cluster in read-only mode
[ https://issues.apache.org/jira/browse/IGNITE-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Antonov resolved IGNITE-11866. - Assignee: Sergey Antonov Resolution: Duplicate > Add ability to activate cluster in read-only mode > - > > Key: IGNITE-11866 > URL: https://issues.apache.org/jira/browse/IGNITE-11866 > Project: Ignite > Issue Type: Improvement >Reporter: Sergey Antonov >Assignee: Sergey Antonov >Priority: Major > > After IGNITE-11256 we have a cluster read-only mode. We should have the ability to > activate the cluster and enable read-only mode as an atomic operation.
[jira] [Updated] (IGNITE-12225) Add enum for cluster state
[ https://issues.apache.org/jira/browse/IGNITE-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Antonov updated IGNITE-12225: Description: We have 3 cluster states at the moment: inactive, active, read-only. For getting and changing the current cluster state, {{IgniteCluster}} has methods: * {{boolean active()}}, {{void active(boolean active)}} - for cluster activation/deactivation * {{boolean readOnly()}}, {{void readOnly(boolean readOnly)}} - for enabling/disabling read-only mode. We also have control.sh commands for changing the cluster state: * {{--activate}} * {{--deactivate}} * {{--read-only-on}} * {{--read-only-off}} The current API looks inconvenient to me. My proposal: # Create enum {{ClusterState}} with values {{ACTIVE}}, {{INACTIVE}}, {{READ-ONLY}}. # Add methods to {{IgniteCluster}}: #* {{ClusterState state()}} returns the current cluster state #* {{void state(ClusterState newState)}} changes the cluster state to {{newState}} # Deprecate the following methods in {{IgniteCluster}}: {{boolean active()}}, {{void active(boolean active)}} # Add a new command to control.sh: {{control.sh --set-state (ACTIVE|INACTIVE|READ-ONLY)}} [--yes] # Add a warning in control.sh that the --activate and --deactivate commands are deprecated # Remove the --read-only-on and --read-only-off commands from control.sh (no release has been published with this functionality) # Add new methods to {{IgniteConfiguration}}: #* {{ClusterState getClusterStateOnStart()}} #* {{IgniteConfiguration setClusterStateOnStart(ClusterState state)}} # Deprecate methods in {{IgniteConfiguration}}: #* {{boolean isActiveOnStart()}} #* {{IgniteConfiguration setActiveOnStart(boolean activeOnStart)}} > Add enum for cluster state > -- > > Key: IGNITE-12225 > URL: https://issues.apache.org/jira/browse/IGNITE-12225 > Project: Ignite > Issue Type: Improvement >Reporter: Sergey Antonov >Assignee: Sergey Antonov >Priority: Major > Fix For: 2.8
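The proposed enum can be sketched in plain Java. Note that the ticket writes the third value as READ-ONLY; it is spelled READ_ONLY below because a hyphen is not a valid Java identifier character. The accessor pair mirrors the method signatures in the proposal, not any released Ignite API:

```java
// Sketch of the proposed cluster-state API. Names follow the ticket's
// proposal; nothing here is released Ignite API.
public enum ClusterState {
    INACTIVE,
    ACTIVE,
    READ_ONLY; // "READ-ONLY" in the ticket; underscore required in Java

    // Proposed accessor pair on IgniteCluster, shown as an interface sketch.
    public interface ClusterStateAware {
        ClusterState state();              // returns the current cluster state
        void state(ClusterState newState); // switches the cluster to newState
    }
}
```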
[jira] [Updated] (IGNITE-12328) IgniteException "Failed to resolve nodes topology" during cache.removeAll() and constantly changing topology
[ https://issues.apache.org/jira/browse/IGNITE-12328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Rakov updated IGNITE-12328: Release Note: Fixed possible exception on removeAll during topology change > IgniteException "Failed to resolve nodes topology" during cache.removeAll() > and constantly changing topology > > > Key: IGNITE-12328 > URL: https://issues.apache.org/jira/browse/IGNITE-12328 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.7.6 >Reporter: Alexei Scherbakov >Assignee: Alexei Scherbakov >Priority: Major > Fix For: 2.8 > > Time Spent: 20m > Remaining Estimate: 0h > > {noformat} > [2019-09-25 13:13:58,339][ERROR][TxThread-threadNum-3] Failed to complete > transaction. > org.apache.ignite.IgniteException: Failed to resolve nodes topology > [cacheGrp=cache_group_36, topVer=AffinityTopologyVersion [topVer=16, > minorTopVer=0], history=[AffinityTopologyVersion [topVer=13, minorTopVer=0], > AffinityTopologyVersion [topVer=14, minorTopVer=0], AffinityTopologyVersion > [topVer=15, minorTopVer=0]], snap=Snapshot [topVer=AffinityTopologyVersion > [topVer=15, minorTopVer=0]], locNode=TcpDiscoveryNode > [id=6cbf7666-9a8c-4b61-8b3f-6351ef44bd4a, > consistentId=poc-tester-client-172.25.1.21-id-0, addrs=ArrayList > [172.25.1.21], sockAddrs=HashSet [lab21.gridgain.local/172.25.1.21:0], > discPort=0, order=13, intOrder=0, lastExchangeTime=1569406379934, loc=true, > ver=2.5.10#20190922-sha1:02133315, isClient=true]] > at > org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.resolveDiscoCache(GridDiscoveryManager.java:2125) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.cacheGroupAffinityNodes(GridDiscoveryManager.java:2007) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheUtils.affinityNodes(GridCacheUtils.java:465) > ~[ignite-core-2.5.10.jar:2.5.10] > at > 
org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedLockFuture.map0(GridDhtColocatedLockFuture.java:939) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedLockFuture.map(GridDhtColocatedLockFuture.java:911) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedLockFuture.map(GridDhtColocatedLockFuture.java:811) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.lockAllAsync(GridDhtColocatedCache.java:656) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.GridDistributedCacheAdapter.txLockAsync(GridDistributedCacheAdapter.java:109) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.removeAllAsync0(GridNearTxLocal.java:1648) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.removeAllAsync(GridNearTxLocal.java:521) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter$33.inOp(GridCacheAdapter.java:2619) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter$SyncInOp.op(GridCacheAdapter.java:4701) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:3780) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.removeAll0(GridCacheAdapter.java:2617) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.removeAll(GridCacheAdapter.java:2606) > ~[ignite-core-2.5.10.jar:2.5.10] > at > 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.removeAll(IgniteCacheProxyImpl.java:1553) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.removeAll(GatewayProtectedCacheProxy.java:1026) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.scenario.TxBalanceTask$TxBody.doTxRemoveAll(TxBalanceTask.java:291) > ~[poc-tester-0.1.0-SNAPSHOT.jar:?] > at > org.apache.ignite.scenario.TxBalanceTask$TxBody.call(TxBalanceTask.java:93) > ~[poc-tester-0.1.0-SNAPSHOT.jar:?] > at > org.apache.ignite.scenario.TxBalanceTask$TxBody.call(TxBalanceTask.java:70) > ~[poc-tester-0.1.0-SNAPSHOT.jar:?] > at >
[jira] [Commented] (IGNITE-12049) Allow custom authenticators to use SSL certificates
[ https://issues.apache.org/jira/browse/IGNITE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16961906#comment-16961906 ] Ignite TC Bot commented on IGNITE-12049: {panel:title=Branch: [pull/6796/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=4729735&buildTypeId=IgniteTests24Java8_RunAll] > Allow custom authenticators to use SSL certificates > --- > > Key: IGNITE-12049 > URL: https://issues.apache.org/jira/browse/IGNITE-12049 > Project: Ignite > Issue Type: Improvement >Reporter: Ryabov Dmitrii >Assignee: Ryabov Dmitrii >Priority: Minor > Time Spent: 50m > Remaining Estimate: 0h > > Add SSL certificates to AuthenticationContext so that authenticators can make > additional checks based on SSL certificates.
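With certificates exposed on AuthenticationContext, a custom authenticator could, for example, gate access on the certificate subject. The sketch below is hypothetical (the helper name and the organization-based rule are invented for illustration) and uses only JDK types:

```java
import javax.security.auth.x500.X500Principal;

// Hypothetical shape of a certificate-based check a custom authenticator
// might perform once certificates are available; not Ignite API.
public class CertCheckSketch {
    // Accept only subjects that carry the required organization (O=) attribute.
    public static boolean subjectAllowed(X500Principal subject, String requiredOrg) {
        // getName() returns the RFC 2253 form, e.g. "CN=node1,O=ExampleOrg,C=US".
        return subject.getName().contains("O=" + requiredOrg);
    }

    public static void main(String[] args) {
        X500Principal subject = new X500Principal("CN=node1,O=ExampleOrg,C=US");
        System.out.println(subjectAllowed(subject, "ExampleOrg"));
    }
}
```

In a real authenticator the principal would come from the peer's verified certificate chain rather than be constructed directly.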
[jira] [Updated] (IGNITE-12335) IgniteDataStreamer flush cannot be really interrupted
[ https://issues.apache.org/jira/browse/IGNITE-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kamyshnikov updated IGNITE-12335: -- Description: IgniteDataStreamer flush operation cannot be interrupted: 1) datastreamer.close(true) does not interrupt flushing (though it has a cancellation mode) 2) flushingThread.interrupt does not interrupt flushing (though IgniteInterruptedException is declared in the flush method's throws clause) 3) dataStreamer timeout does not work at all if flushingThread is interrupted 4) dataStreamer timeout does not stop flushing (after catching IgniteDataStreamerTimeoutException) 5) Ignition.closeAll(true) can even result in a JVM halt if a dataStreamer flush was running Cases on the diagram: !image-2019-10-29-13-05-25-969.png! Reproducer: [^DataStreamerFlushInterruptionTest.java] RCA: For the cases with Thread.interrupt: 1) Probably, the org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.Buffer#flush method, when it enters org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl#acquireRemapSemaphore, does not trigger InterruptedException because it avoids all operations on the semaphore. 2) org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl#doFlush has a big while(true) loop that does not handle IgniteDataStreamerTimeoutException (treating it as just IgniteCheckedException, leading to "Remaps needed - flush buffers.") No RCA for dataStreamer.close(true). 
> IgniteDataStreamer flush cannot be really interrupted > - > > Key: IGNITE-12335 > URL: https://issues.apache.org/jira/browse/IGNITE-12335 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 2.5, 2.7.6 >Reporter: Igor Kamyshnikov >Priority: Major > Attachments: DataStreamerFlushInterruptionTest.java, > image-2019-10-29-13-05-25-969.png
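Point 2 of the RCA, a while(true) loop that treats the timeout as a generic checked exception and keeps remapping, can be reduced to a minimal self-contained pattern. Everything below is an invented illustration of that anti-pattern, not DataStreamerImpl's actual code:

```java
// Minimal illustration of the RCA: a retry loop that catches the broad
// checked type, so a timeout subtype never escapes and the loop keeps going.
public class FlushLoopSketch {
    static class CheckedFailure extends Exception {}
    static class TimeoutFailure extends CheckedFailure {} // should abort the loop

    static int attempts;

    // Stand-in for one flush pass: times out on the first two attempts.
    static void tryFlush() throws CheckedFailure {
        if (++attempts < 3)
            throw new TimeoutFailure();
    }

    // The problematic pattern: TimeoutFailure is swallowed as a plain
    // CheckedFailure ("Remaps needed - flush buffers.") and retried,
    // so a caller can never observe the timeout.
    public static void flush() {
        while (true) {
            try {
                tryFlush();
                return;
            }
            catch (CheckedFailure e) {
                // retry instead of rethrowing the timeout
            }
        }
    }

    public static void main(String[] args) {
        flush();
        System.out.println(attempts);
    }
}
```

The fix direction suggested by the RCA would be a separate catch clause for the timeout subtype that rethrows instead of remapping.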
[jira] [Created] (IGNITE-12335) IgniteDataStreamer flush cannot be really interrupted
Igor Kamyshnikov created IGNITE-12335: - Summary: IgniteDataStreamer flush cannot be really interrupted Key: IGNITE-12335 URL: https://issues.apache.org/jira/browse/IGNITE-12335 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 2.7.6, 2.5 Reporter: Igor Kamyshnikov Attachments: DataStreamerFlushInterruptionTest.java, image-2019-10-29-13-05-25-969.png
[jira] [Commented] (IGNITE-12328) IgniteException "Failed to resolve nodes topology" during cache.removeAll() and constantly changing topology
[ https://issues.apache.org/jira/browse/IGNITE-12328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961819#comment-16961819 ] Alexei Scherbakov commented on IGNITE-12328: The contribution also include fixes: 1. pessimistic tx lock request processing over incomplete topology. 2. atomic cache is remapped on the compatible topology. [~irakov] Ready for review. > IgniteException "Failed to resolve nodes topology" during cache.removeAll() > and constantly changing topology > > > Key: IGNITE-12328 > URL: https://issues.apache.org/jira/browse/IGNITE-12328 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.7.6 >Reporter: Alexei Scherbakov >Assignee: Alexei Scherbakov >Priority: Major > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > {noformat} > [2019-09-25 13:13:58,339][ERROR][TxThread-threadNum-3] Failed to complete > transaction. > org.apache.ignite.IgniteException: Failed to resolve nodes topology > [cacheGrp=cache_group_36, topVer=AffinityTopologyVersion [topVer=16, > minorTopVer=0], history=[AffinityTopologyVersion [topVer=13, minorTopVer=0], > AffinityTopologyVersion [topVer=14, minorTopVer=0], AffinityTopologyVersion > [topVer=15, minorTopVer=0]], snap=Snapshot [topVer=AffinityTopologyVersion > [topVer=15, minorTopVer=0]], locNode=TcpDiscoveryNode > [id=6cbf7666-9a8c-4b61-8b3f-6351ef44bd4a, > consistentId=poc-tester-client-172.25.1.21-id-0, addrs=ArrayList > [172.25.1.21], sockAddrs=HashSet [lab21.gridgain.local/172.25.1.21:0], > discPort=0, order=13, intOrder=0, lastExchangeTime=1569406379934, loc=true, > ver=2.5.10#20190922-sha1:02133315, isClient=true]] > at > org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.resolveDiscoCache(GridDiscoveryManager.java:2125) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.cacheGroupAffinityNodes(GridDiscoveryManager.java:2007) > ~[ignite-core-2.5.10.jar:2.5.10] > at > 
org.apache.ignite.internal.processors.cache.GridCacheUtils.affinityNodes(GridCacheUtils.java:465) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedLockFuture.map0(GridDhtColocatedLockFuture.java:939) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedLockFuture.map(GridDhtColocatedLockFuture.java:911) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedLockFuture.map(GridDhtColocatedLockFuture.java:811) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.lockAllAsync(GridDhtColocatedCache.java:656) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.GridDistributedCacheAdapter.txLockAsync(GridDistributedCacheAdapter.java:109) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.removeAllAsync0(GridNearTxLocal.java:1648) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.removeAllAsync(GridNearTxLocal.java:521) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter$33.inOp(GridCacheAdapter.java:2619) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter$SyncInOp.op(GridCacheAdapter.java:4701) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:3780) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.removeAll0(GridCacheAdapter.java:2617) > ~[ignite-core-2.5.10.jar:2.5.10] > at > 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.removeAll(GridCacheAdapter.java:2606) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.removeAll(IgniteCacheProxyImpl.java:1553) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.removeAll(GatewayProtectedCacheProxy.java:1026) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.scenario.TxBalanceTask$TxBody.doTxRemoveAll(TxBalanceTask.java:291) > ~[poc-tester-0.1.0-SNAPSHOT.jar:?] > at > org.apache.ignite.scenario.TxBalanceTask$TxBody.call(TxBalanceTask.java:93) >
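The exception above fires when the requested AffinityTopologyVersion (topVer=16) has already fallen out of the node's discovery history (which only holds 13-15). A minimal sketch of the remap idea the comment mentions, falling back to the newest available version instead of failing the transaction; the method and class names here are hypothetical, not the actual Ignite fix:

```java
import java.util.List;

// Hypothetical sketch (not the real Ignite code): resolve a topology
// version against the discovery history, remapping when it was evicted.
public class TopologyRemapSketch {
    /** Returns the topology version to map onto, or -1 if history is empty. */
    static long resolveOrRemap(long requestedVer, List<Long> historyVers) {
        if (historyVers.contains(requestedVer))
            return requestedVer; // Exact version is still in the discovery history.

        // Requested version fell out of history: instead of throwing
        // "Failed to resolve nodes topology", remap onto the latest known version.
        return historyVers.isEmpty() ? -1 : historyVers.get(historyVers.size() - 1);
    }

    public static void main(String[] args) {
        List<Long> hist = List.of(13L, 14L, 15L);
        System.out.println(resolveOrRemap(14L, hist)); // prints 14
        System.out.println(resolveOrRemap(16L, hist)); // prints 15 (remapped)
    }
}
```

This mirrors the log: the snapshot was taken at topVer=15, so an operation asking for topVer=16 can only succeed if it is remapped onto a version the node actually knows about.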
[jira] [Commented] (IGNITE-12328) IgniteException "Failed to resolve nodes topology" during cache.removeAll() and constantly changing topology
[ https://issues.apache.org/jira/browse/IGNITE-12328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961813#comment-16961813 ] Ignite TC Bot commented on IGNITE-12328: {panel:title=Branch: [pull/7015/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=4731699buildTypeId=IgniteTests24Java8_RunAll] > IgniteException "Failed to resolve nodes topology" during cache.removeAll() > and constantly changing topology > > > Key: IGNITE-12328 > URL: https://issues.apache.org/jira/browse/IGNITE-12328 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.7.6 >Reporter: Alexei Scherbakov >Assignee: Alexei Scherbakov >Priority: Major > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > {noformat} > [2019-09-25 13:13:58,339][ERROR][TxThread-threadNum-3] Failed to complete > transaction. > org.apache.ignite.IgniteException: Failed to resolve nodes topology > [cacheGrp=cache_group_36, topVer=AffinityTopologyVersion [topVer=16, > minorTopVer=0], history=[AffinityTopologyVersion [topVer=13, minorTopVer=0], > AffinityTopologyVersion [topVer=14, minorTopVer=0], AffinityTopologyVersion > [topVer=15, minorTopVer=0]], snap=Snapshot [topVer=AffinityTopologyVersion > [topVer=15, minorTopVer=0]], locNode=TcpDiscoveryNode > [id=6cbf7666-9a8c-4b61-8b3f-6351ef44bd4a, > consistentId=poc-tester-client-172.25.1.21-id-0, addrs=ArrayList > [172.25.1.21], sockAddrs=HashSet [lab21.gridgain.local/172.25.1.21:0], > discPort=0, order=13, intOrder=0, lastExchangeTime=1569406379934, loc=true, > ver=2.5.10#20190922-sha1:02133315, isClient=true]] > at > org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.resolveDiscoCache(GridDiscoveryManager.java:2125) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.cacheGroupAffinityNodes(GridDiscoveryManager.java:2007) > 
~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheUtils.affinityNodes(GridCacheUtils.java:465) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedLockFuture.map0(GridDhtColocatedLockFuture.java:939) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedLockFuture.map(GridDhtColocatedLockFuture.java:911) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedLockFuture.map(GridDhtColocatedLockFuture.java:811) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.lockAllAsync(GridDhtColocatedCache.java:656) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.GridDistributedCacheAdapter.txLockAsync(GridDistributedCacheAdapter.java:109) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.removeAllAsync0(GridNearTxLocal.java:1648) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.removeAllAsync(GridNearTxLocal.java:521) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter$33.inOp(GridCacheAdapter.java:2619) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter$SyncInOp.op(GridCacheAdapter.java:4701) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:3780) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.removeAll0(GridCacheAdapter.java:2617) > ~[ignite-core-2.5.10.jar:2.5.10] > at > 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.removeAll(GridCacheAdapter.java:2606) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.removeAll(IgniteCacheProxyImpl.java:1553) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.removeAll(GatewayProtectedCacheProxy.java:1026) > ~[ignite-core-2.5.10.jar:2.5.10] > at > org.apache.ignite.scenario.TxBalanceTask$TxBody.doTxRemoveAll(TxBalanceTask.java:291) > ~[poc-tester-0.1.0-SNAPSHOT.jar:?] > at >
[jira] [Updated] (IGNITE-12206) Partition state validation warns are not printed to log
[ https://issues.apache.org/jira/browse/IGNITE-12206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-12206: - Description: GridDhtPartitionsExchangeFuture.java {code:java} if (grpCtx == null || grpCtx.config().isReadThrough() || grpCtx.config().isWriteThrough() || grpCtx.config().getCacheStoreFactory() != null || grpCtx.config().getRebalanceDelay() == -1 || grpCtx.config().getRebalanceMode() == CacheRebalanceMode.NONE || grpCtx.config().getExpiryPolicyFactory() == null || SKIP_PARTITION_SIZE_VALIDATION) return null;{code} In case of using custom ExpiryPolicy, partition states validation should be skipped, so it looks like a typo, probably it should be grpCtx.config().getExpiryPolicyFactory() != null was: GridDhtPartitionsExchangeFuture.java {code:java} if (grpCtx == null || grpCtx.config().isReadThrough() || grpCtx.config().isWriteThrough() || grpCtx.config().getCacheStoreFactory() != null || grpCtx.config().getRebalanceDelay() == -1 || grpCtx.config().getRebalanceMode() == CacheRebalanceMode.NONE || grpCtx.config().getExpiryPolicyFactory() == null || SKIP_PARTITION_SIZE_VALIDATION) return null;{code} Looks like a typo, probably it should be grpCtx.config().getExpiryPolicyFactory() != null > Partition state validation warns are not printed to log > --- > > Key: IGNITE-12206 > URL: https://issues.apache.org/jira/browse/IGNITE-12206 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.7 >Reporter: Stepachev Maksim >Assignee: Stepachev Maksim >Priority: Major > Fix For: 2.8 > > Time Spent: 20m > Remaining Estimate: 0h > > GridDhtPartitionsExchangeFuture.java > > {code:java} > if (grpCtx == null > || grpCtx.config().isReadThrough() > || grpCtx.config().isWriteThrough() > || grpCtx.config().getCacheStoreFactory() != null > || grpCtx.config().getRebalanceDelay() == -1 > || grpCtx.config().getRebalanceMode() == > CacheRebalanceMode.NONE > || grpCtx.config().getExpiryPolicyFactory() == null > || 
SKIP_PARTITION_SIZE_VALIDATION) > return null;{code} > > When a custom ExpiryPolicy is used, partition state validation should be > skipped, so this looks like a typo; it should probably be > grpCtx.config().getExpiryPolicyFactory() != null -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (IGNITE-12206) Partition state validation warns are printed to log
[ https://issues.apache.org/jira/browse/IGNITE-12206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-12206: - Summary: Partition state validation warns are printed to log (was: Partition state validation warns are not printed to log) > Partition state validation warns are printed to log > --- > > Key: IGNITE-12206 > URL: https://issues.apache.org/jira/browse/IGNITE-12206 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.7 >Reporter: Stepachev Maksim >Assignee: Stepachev Maksim >Priority: Major > Fix For: 2.8 > > Time Spent: 20m > Remaining Estimate: 0h > > GridDhtPartitionsExchangeFuture.java > > {code:java} > if (grpCtx == null > || grpCtx.config().isReadThrough() > || grpCtx.config().isWriteThrough() > || grpCtx.config().getCacheStoreFactory() != null > || grpCtx.config().getRebalanceDelay() == -1 > || grpCtx.config().getRebalanceMode() == > CacheRebalanceMode.NONE > || grpCtx.config().getExpiryPolicyFactory() == null > || SKIP_PARTITION_SIZE_VALIDATION) > return null;{code} > > In case of using custom ExpiryPolicy, partition states validation should be > skipped, so it looks like a typo, probably it should be > grpCtx.config().getExpiryPolicyFactory() != null -- This message was sent by Atlassian Jira (v8.3.4#803005)
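Both IGNITE-12206 updates above concern the same guard in GridDhtPartitionsExchangeFuture. A minimal sketch of the suspected typo and its fix, with the condition reduced to the one clause in question and simplified, hypothetical signatures:

```java
// Simplified sketch of the guard from GridDhtPartitionsExchangeFuture;
// only the expiry-factory clause is modeled, with hypothetical names.
public class PartitionValidationGuard {
    /** Guard as currently written: skips validation when NO factory is set. */
    static boolean skipBuggy(Object expiryPolicyFactory) {
        return expiryPolicyFactory == null;
    }

    /**
     * Suggested fix: skip only when a custom expiry factory IS configured,
     * since (per the ticket) validation should be skipped for expiring caches.
     */
    static boolean skipFixed(Object expiryPolicyFactory) {
        return expiryPolicyFactory != null;
    }

    public static void main(String[] args) {
        System.out.println(skipFixed(null));         // false: validate as usual
        System.out.println(skipFixed(new Object())); // true: skip validation
    }
}
```

With the buggy `== null` form, every cache group without an expiry factory is skipped, which is why partition state validation warnings never reached the log.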
[jira] [Updated] (IGNITE-12200) More informative assertion message at constructor of CachedDeploymentInfo (GridCacheDeploymentManager class)
[ https://issues.apache.org/jira/browse/IGNITE-12200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Sorokin updated IGNITE-12200: - Fix Version/s: 2.8 > More informative assertion message at constructor of CachedDeploymentInfo > (GridCacheDeploymentManager class) > > > Key: IGNITE-12200 > URL: https://issues.apache.org/jira/browse/IGNITE-12200 > Project: Ignite > Issue Type: Improvement >Affects Versions: 2.5, 2.7.5 >Reporter: Dmitriy Sorokin >Assignee: Dmitriy Sorokin >Priority: Minor > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > {code:java} > /** > * @param sndId Sender. > * @param ldrId Loader ID. > * @param userVer User version. > * @param depMode Deployment mode. > * @param participants Participants. > */ > private CachedDeploymentInfo(UUID sndId, IgniteUuid ldrId, String userVer, > DeploymentMode depMode, > Map participants) { > assert sndId.equals(ldrId.globalId()) || participants != null; > this.sndId = sndId; > this.ldrId = ldrId; > this.userVer = userVer; > this.depMode = depMode; > this.participants = participants == null || participants.isEmpty() ? 
null > : > new ConcurrentLinkedHashMap<>(participants); > } > {code} > The code above may produce the following stacktrace, where AssertionError > should contain more informative message for better root cause analysis: > {noformat} > 2019-09-17 > 18:29:29.890[ERROR][query-#1577440%DPL_GRID%DplGridNodeName%][o.a.i.i.p.cache.GridCacheIoManager] > Failed to process message [senderId=4c071d12-325a-4bb1-a68d-cc910f636562, > msg=GridCacheQueryRequest [id=4922, > cacheName=com.sbt.limits.data.entity.LimitTemplateV1Entity_DPL_union-module, > type=SCAN, fields=false, clause=null, clsName=null, keyValFilter=null, > rdc=null, trans=null, pageSize=1024, incBackups=false, cancel=false, > incMeta=false, all=false, keepBinary=true, > subjId=4c071d12-325a-4bb1-a68d-cc910f636562, taskHash=0, part=-1, > topVer=AffinityTopologyVersion [topVer=191, minorTopVer=0], > super=GridCacheIdMessage [cacheId=-724666788]]]2019-09-17 > 18:29:29.890[ERROR][query-#1577440%DPL_GRID%DplGridNodeName%][o.a.i.i.p.cache.GridCacheIoManager] > Failed to process message [senderId=4c071d12-325a-4bb1-a68d-cc910f636562, > msg=GridCacheQueryRequest [id=4922, > cacheName=com.sbt.limits.data.entity.LimitTemplateV1Entity_DPL_union-module, > type=SCAN, fields=false, clause=null, clsName=null, keyValFilter=null, > rdc=null, trans=null, pageSize=1024, incBackups=false, cancel=false, > incMeta=false, all=false, keepBinary=true, > subjId=4c071d12-325a-4bb1-a68d-cc910f636562, taskHash=0, part=-1, > topVer=AffinityTopologyVersion [topVer=191, minorTopVer=0], > super=GridCacheIdMessage [cacheId=-724666788]]] > java.lang.AssertionError: null > at > org.apache.ignite.internal.processors.cache.GridCacheDeploymentManager$CachedDeploymentInfo.(GridCacheDeploymentManager.java:918) > at > org.apache.ignite.internal.processors.cache.GridCacheDeploymentManager$CachedDeploymentInfo.(GridCacheDeploymentManager.java:889) > at > 
org.apache.ignite.internal.processors.cache.GridCacheDeploymentManager.p2pContext(GridCacheDeploymentManager.java:422) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.unmarshall(GridCacheIoManager.java:1547) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:582) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:386) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:312) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:102) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:301) > at > org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556) > at > org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184) > at > org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125) > at > org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1091) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > 2019-09-17 > 18:29:29.912[ERROR][query-#1577440%DPL_GRID%DplGridNodeName%][org.apache.ignite.Ignite] > Critical system error detected. Will be handled accordingly
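The "java.lang.AssertionError: null" in the trace above is exactly what this ticket targets: the bare assert in CachedDeploymentInfo carries no message. A simplified sketch, with a hypothetical helper name, of how an informative assertion message could look:

```java
import java.util.UUID;

// Illustrative sketch only: the assert gains a message carrying the values
// involved, so a failure logs more than "AssertionError: null".
public class AssertMessageSketch {
    /** Builds the diagnostic text a failed assertion should report. */
    static String invalidDeploymentMsg(UUID sndId, UUID ldrGlobalId) {
        return "Invalid deployment info [sndId=" + sndId +
            ", ldrGlobalId=" + ldrGlobalId + ", participants=null]";
    }

    static void checkDeploymentInfo(UUID sndId, UUID ldrGlobalId, Object participants) {
        // Before: assert sndId.equals(ldrGlobalId) || participants != null;
        // which produces only "java.lang.AssertionError: null" on failure.
        assert sndId.equals(ldrGlobalId) || participants != null :
            invalidDeploymentMsg(sndId, ldrGlobalId);
    }

    public static void main(String[] args) {
        System.out.println(invalidDeploymentMsg(new UUID(0, 1), new UUID(0, 2)));
    }
}
```

The expression after the colon is only evaluated when the assertion fails, so the extra message costs nothing on the hot path.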
[jira] [Updated] (IGNITE-12207) Inclusion of super.toString() info into some descenders of GridCacheMessage
[ https://issues.apache.org/jira/browse/IGNITE-12207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Sorokin updated IGNITE-12207: - Fix Version/s: 2.8 > Inclusion of super.toString() info into some descenders of GridCacheMessage > --- > > Key: IGNITE-12207 > URL: https://issues.apache.org/jira/browse/IGNITE-12207 > Project: Ignite > Issue Type: Improvement >Affects Versions: 2.7, 2.7.6 >Reporter: Dmitriy Sorokin >Assignee: Dmitriy Sorokin >Priority: Minor > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > Sometimes, when errors related to processing of descenders of GridCacheMessage > happen, we need information contained in the GridCacheMessage > class, in particular the deployment information held in the depInfo field. In > some message classes which extend GridCacheMessage, the toString() method > doesn't include the 'super' part, so we don't have that information in error log > messages, as in the example below: > {noformat} > 2019-09-17 > 18:29:29.890[ERROR][query-#1577440%DPL_GRID%DplGridNodeName%][o.a.i.i.p.cache.GridCacheIoManager] > Failed to process message [senderId=4c071d12-325a-4bb1-a68d-cc910f636562, > msg=GridCacheQueryRequest [id=4922, > cacheName=com.sbt.limits.data.entity.LimitTemplateV1Entity_DPL_union-module, > type=SCAN, fields=false, clause=null, clsName=null, keyValFilter=null, > rdc=null, trans=null, pageSize=1024, incBackups=false, cancel=false, > incMeta=false, all=false, keepBinary=true, > subjId=4c071d12-325a-4bb1-a68d-cc910f636562, taskHash=0, part=-1, > topVer=AffinityTopologyVersion [topVer=191, minorTopVer=0], > super=GridCacheIdMessage [cacheId=-724666788]]]2019-09-17 > 18:29:29.890[ERROR][query-#1577440%DPL_GRID%DplGridNodeName%][o.a.i.i.p.cache.GridCacheIoManager] > Failed to process message [senderId=4c071d12-325a-4bb1-a68d-cc910f636562, > msg=GridCacheQueryRequest [id=4922, > cacheName=com.sbt.limits.data.entity.LimitTemplateV1Entity_DPL_union-module, > type=SCAN, fields=false, 
clause=null, clsName=null, keyValFilter=null, > rdc=null, trans=null, pageSize=1024, incBackups=false, cancel=false, > incMeta=false, all=false, keepBinary=true, > subjId=4c071d12-325a-4bb1-a68d-cc910f636562, taskHash=0, part=-1, > topVer=AffinityTopologyVersion [topVer=191, minorTopVer=0], > super=GridCacheIdMessage [cacheId=-724666788]]] > java.lang.AssertionError: null > at > org.apache.ignite.internal.processors.cache.GridCacheDeploymentManager$CachedDeploymentInfo.(GridCacheDeploymentManager.java:918) > at > org.apache.ignite.internal.processors.cache.GridCacheDeploymentManager$CachedDeploymentInfo.(GridCacheDeploymentManager.java:889) > at > org.apache.ignite.internal.processors.cache.GridCacheDeploymentManager.p2pContext(GridCacheDeploymentManager.java:422) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.unmarshall(GridCacheIoManager.java:1547) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:582) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:386) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:312) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:102) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:301) > at > org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556) > at > org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184) > at > org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125) > at > org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1091) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} > The assertion condition which produced error above includes the value which > obtained from GridCacheMessage.depInfo. -- This message was sent by Atlassian Jira (v8.3.4#803005)
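A minimal, self-contained sketch of the proposed toString() change, using illustrative class names rather than the real GridCacheMessage hierarchy: the subclass appends super.toString() so that base-class fields such as depInfo reach the log.

```java
// Sketch with stand-in classes (not the real Ignite message hierarchy).
public class ToStringSketch {
    /** Stand-in for GridCacheMessage: holds base-class state worth logging. */
    static class BaseMessage {
        String depInfo = "depInfo-sample";

        @Override public String toString() {
            return "BaseMessage [depInfo=" + depInfo + "]";
        }
    }

    /** Stand-in for a message subclass; note the "super=" part. */
    static class QueryRequest extends BaseMessage {
        int id = 4922;

        @Override public String toString() {
            // Appending super.toString() surfaces base-class fields in logs.
            return "QueryRequest [id=" + id + ", super=" + super.toString() + "]";
        }
    }

    public static void main(String[] args) {
        System.out.println(new QueryRequest());
    }
}
```

This is the same "super=GridCacheIdMessage [...]" pattern already visible in the GridCacheQueryRequest dump in the log above, extended to the message classes that currently omit it.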
[jira] [Commented] (IGNITE-12189) Implement correct limit for TextQuery
[ https://issues.apache.org/jira/browse/IGNITE-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961789#comment-16961789 ] Ivan Pavlukhin commented on IGNITE-12189: - [~Yuriy_Shuliha], sorry for the delay. My schedule is a little rushed these days. I will return to it tomorrow. > Implement correct limit for TextQuery > - > > Key: IGNITE-12189 > URL: https://issues.apache.org/jira/browse/IGNITE-12189 > Project: Ignite > Issue Type: Improvement > Components: general >Reporter: Yuriy Shuliha >Assignee: Yuriy Shuliha >Priority: Major > Fix For: 2.8 > > Time Spent: 8h 10m > Remaining Estimate: 0h > > PROBLEM > For now, each server node returns all response records to the client node, and > the response may contain thousands or hundreds of thousands of records, > even if we need only the first 10-100. All the results are added to a > queue in _*GridCacheQueryFutureAdapter*_ in arbitrary order by pages. > There are no means to deliver a deterministic result. > SOLUTION > Implement _*limit*_ as a parameter for _*TextQuery*_ and > _*GridCacheQueryRequest*_. > It should be passed as the limit parameter to Lucene's > _*IndexSearcher.search()*_ in _*GridLuceneIndex*_. > For distributed queries, _*limit*_ will also trim the response queue when merging > results. > Type: long > Special value: 0 -> No limit (Integer.MAX_VALUE); -- This message was sent by Atlassian Jira (v8.3.4#803005)
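The limit semantics described in the ticket (the special value 0 mapped to Integer.MAX_VALUE, and the merged response queue trimmed to the limit) can be sketched as follows; the names are illustrative stand-ins, not the actual Ignite internals:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of limit handling when merging per-node result pages.
public class TextQueryLimitSketch {
    /** Special value 0 means "no limit" (Integer.MAX_VALUE), per the ticket. */
    static int effectiveLimit(int limit) {
        return limit == 0 ? Integer.MAX_VALUE : limit;
    }

    /** Merges result pages from server nodes, trimming once the limit is hit. */
    static <T> List<T> mergePages(List<List<T>> pages, int limit) {
        int lim = effectiveLimit(limit);
        List<T> merged = new ArrayList<>();

        for (List<T> page : pages)
            for (T row : page) {
                if (merged.size() >= lim)
                    return merged; // Trim: further rows never reach the client.

                merged.add(row);
            }

        return merged;
    }

    public static void main(String[] args) {
        List<List<Integer>> pages = List.of(List.of(1, 2), List.of(3, 4));
        System.out.println(mergePages(pages, 3)); // [1, 2, 3]
        System.out.println(mergePages(pages, 0)); // [1, 2, 3, 4] (no limit)
    }
}
```

The same cap would be passed straight through as the `n` argument of Lucene's IndexSearcher.search(Query, int), so each node stops collecting results early instead of shipping its whole match set to the client.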
[jira] [Updated] (IGNITE-12317) Add EvictionFilter factory support in IgniteConfiguration.
[ https://issues.apache.org/jira/browse/IGNITE-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexei Scherbakov updated IGNITE-12317: --- Fix Version/s: (was: 2.9) 2.8 > Add EvictionFilter factory support in IgniteConfiguration. > -- > > Key: IGNITE-12317 > URL: https://issues.apache.org/jira/browse/IGNITE-12317 > Project: Ignite > Issue Type: Sub-task > Components: cache >Reporter: Nikolai Kulagin >Assignee: Nikolai Kulagin >Priority: Major > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > Some entities on cache configuration are configured via factories, while > others are set directly, for example, eviction policy and eviction filter. > Need to add new configuration properties for eviction filter factory and > deprecate old ones (do not remove for compatibility). -- This message was sent by Atlassian Jira (v8.3.4#803005)
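The factory-vs-instance configuration pattern this ticket describes can be sketched as below, using java.util.function.Supplier as a stand-in for the javax.cache Factory interface; the setter names and the CacheCfg class are illustrative, not the real IgniteConfiguration API:

```java
import java.util.function.Supplier;

// Hypothetical sketch of moving eviction-filter config from a direct
// instance to a factory, keeping the old setter deprecated for compatibility.
public class EvictionFilterConfigSketch {
    /** Stand-in for Ignite's EvictionFilter. */
    interface EvictionFilter { boolean evictAllowed(Object entry); }

    static class CacheCfg {
        private EvictionFilter evictFilter;                   // direct instance (old style)
        private Supplier<EvictionFilter> evictFilterFactory;  // factory (proposed style)

        /** Old-style setter: kept but deprecated, per the ticket. */
        @Deprecated
        CacheCfg setEvictionFilter(EvictionFilter f) {
            evictFilter = f;
            return this;
        }

        CacheCfg setEvictionFilterFactory(Supplier<EvictionFilter> factory) {
            evictFilterFactory = factory;
            return this;
        }

        /** The factory wins when both are set, so each node builds its own filter. */
        EvictionFilter resolveFilter() {
            return evictFilterFactory != null ? evictFilterFactory.get() : evictFilter;
        }
    }

    public static void main(String[] args) {
        CacheCfg cfg = new CacheCfg().setEvictionFilterFactory(() -> entry -> true);
        System.out.println(cfg.resolveFilter().evictAllowed("key")); // true
    }
}
```

Configuring via a factory avoids serializing a shared filter instance across the cluster, which is the usual reason Ignite prefers factories for pluggable cache components.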