[jira] [Commented] (IGNITE-18976) Affinity broken on thick client after reconnection

2023-03-11 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17699199#comment-17699199
 ] 

Sergey Kosarev commented on IGNITE-18976:
-

[~ivandasch], I appreciate your efforts! Thanks a lot!

> Affinity broken on thick client after reconnection
> --
>
> Key: IGNITE-18976
> URL: https://issues.apache.org/jira/browse/IGNITE-18976
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Affects Versions: 2.14
>Reporter: Sergey Kosarev
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: ise
> Fix For: 2.15
>
> Attachments: IgniteClientReconnectAffinityTest.java
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> 1. Use an AffinityKey together with a BinaryTypeConfiguration.
> 2. The client reconnects.
> 3. Affinity and binary marshalling are broken:
> Affinity.partition returns a wrong value:
> {noformat}
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.ignite.testframework.junits.JUnitAssertAware.assertEquals(JUnitAssertAware.java:95)
>   at 
> org.apache.ignite.internal.IgniteClientReconnectAffinityTest.doReconnectClientAffinityKeyPartition(IgniteClientReconnectAffinityTest.java:213)
>   at 
> org.apache.ignite.internal.IgniteClientReconnectAffinityTest.testReconnectClientAnnotatedAffinityKeyWithBinaryConfigPartition(IgniteClientReconnectAffinityTest.java:123)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2504)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat} [^IgniteClientReconnectAffinityTest.java] 
> Exception on cache.get :
> {noformat}
> class org.apache.ignite.binary.BinaryObjectException: Failed to serialize 
> object 
> [typeName=org.apache.ignite.internal.IgniteClientReconnectAffinityTest$TestAnnotatedKey]
>   at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:916)
>   at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:232)
>   at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:165)
>   at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:152)
>   at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:251)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:583)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toBinary(CacheObjectBinaryProcessorImpl.java:1492)
>   at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toCacheKeyObject(CacheObjectBinaryProcessorImpl.java:1287)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheKeyObject(GridCacheContext.java:1818)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.getAsync(GridDhtColocatedCache.java:279)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4759)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.repairableGet(GridCacheAdapter.java:4725)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1373)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:1108)
>   at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:686)
>   at 
> org.apache.ignite.internal.IgniteClientReconnectAffinityTest.doReconnectClientAffinityKeyGet(IgniteClientReconnectAffinityTest.java:180)
>   at 
> 

[jira] [Commented] (IGNITE-18976) Affinity broken on thick client after reconnection

2023-03-07 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697432#comment-17697432
 ] 

Sergey Kosarev commented on IGNITE-18976:
-

[~ivandasch], thank you for the prompt response.

> Affinity broken on thick client after reconnection
> --
>
> Key: IGNITE-18976
> URL: https://issues.apache.org/jira/browse/IGNITE-18976
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Affects Versions: 2.14
>Reporter: Sergey Kosarev
>Assignee: Ivan Daschinsky
>Priority: Major
> Attachments: IgniteClientReconnectAffinityTest.java
>
>
> 1. Use an AffinityKey together with a BinaryTypeConfiguration.
> 2. The client reconnects.
> 3. Affinity and binary marshalling are broken; Affinity.partition returns a wrong value.

[jira] [Updated] (IGNITE-18976) Affinity broken on thick client after reconnection

2023-03-07 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-18976:

Summary: Affinity broken on thick client after reconnection  (was: Affinity 
broken on the  client after reconnection)

> Affinity broken on thick client after reconnection
> --
>
> Key: IGNITE-18976
> URL: https://issues.apache.org/jira/browse/IGNITE-18976
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Affects Versions: 2.16
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
> Attachments: IgniteClientReconnectAffinityTest.java
>
>
> 1. Use an AffinityKey together with a BinaryTypeConfiguration.
> 2. The client reconnects.
> 3. Affinity and binary marshalling are broken; Affinity.partition returns a wrong value.

[jira] [Updated] (IGNITE-18976) Affinity broken on the client after reconnection

2023-03-07 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-18976:

Attachment: IgniteClientReconnectAffinityTest.java

> Affinity broken on the  client after reconnection
> -
>
> Key: IGNITE-18976
> URL: https://issues.apache.org/jira/browse/IGNITE-18976
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Affects Versions: 2.16
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
> Attachments: IgniteClientReconnectAffinityTest.java
>
>
> 1. Use an AffinityKey together with a BinaryTypeConfiguration.
> 2. The client reconnects.
> 3. Affinity is broken.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18976) Affinity broken on the client after reconnection

2023-03-07 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-18976:

Description: 
1. Use an AffinityKey together with a BinaryTypeConfiguration.
2. The client reconnects.
3. Affinity and binary marshalling are broken:

Affinity.partition returns a wrong value:
{noformat}
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.ignite.testframework.junits.JUnitAssertAware.assertEquals(JUnitAssertAware.java:95)
at 
org.apache.ignite.internal.IgniteClientReconnectAffinityTest.doReconnectClientAffinityKeyPartition(IgniteClientReconnectAffinityTest.java:213)
at 
org.apache.ignite.internal.IgniteClientReconnectAffinityTest.testReconnectClientAnnotatedAffinityKeyWithBinaryConfigPartition(IgniteClientReconnectAffinityTest.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2504)
at java.lang.Thread.run(Thread.java:748)
{noformat} [^IgniteClientReconnectAffinityTest.java] 
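To make the failure mode concrete, here is a plain-Java analogy (a sketch only, not Ignite internals): a key's partition is derived from a hash of its configured affinity field, so if a reconnected client effectively loses the binary type configuration and hashes the whole key instead, Affinity.partition comes back with a different value for the same key. All class, method, and field names below are hypothetical.

```java
import java.util.Objects;

// Plain-Java analogy for the reported symptom: the partition of a key
// is derived from a hash of its affinity field. If, after reconnect,
// the client no longer knows which field is the affinity field and
// hashes the whole key instead, the computed partition changes.
public class AffinityAnalogy {
    static final int PARTS = 1024;

    // Partition from the configured affinity field (correct behaviour).
    static int partitionByAffinityField(int affinityField) {
        return Math.abs(Integer.hashCode(affinityField)) % PARTS;
    }

    // Partition from the whole key (what a client that lost the type
    // configuration would effectively compute).
    static int partitionByWholeKey(int affinityField, String payload) {
        return Math.abs(Objects.hash(affinityField, payload)) % PARTS;
    }

    public static void main(String[] args) {
        int before = partitionByAffinityField(42);
        int after = partitionByWholeKey(42, "payload");

        // The two computations generally disagree, which is the symptom
        // the test's partition assertion catches after reconnect.
        System.out.println(before == after ? "same" : "different");
    }
}
```

This is only an illustration of why a consistent key-hashing configuration on both sides of the reconnect matters; the real partition function in Ignite is the configured AffinityFunction, not a plain modulo hash.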

Exception on cache.get :
{noformat}
class org.apache.ignite.binary.BinaryObjectException: Failed to serialize 
object 
[typeName=org.apache.ignite.internal.IgniteClientReconnectAffinityTest$TestAnnotatedKey]

at 
org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:916)
at 
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:232)
at 
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:165)
at 
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:152)
at 
org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:251)
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:583)
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toBinary(CacheObjectBinaryProcessorImpl.java:1492)
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toCacheKeyObject(CacheObjectBinaryProcessorImpl.java:1287)
at 
org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheKeyObject(GridCacheContext.java:1818)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.getAsync(GridDhtColocatedCache.java:279)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4759)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.repairableGet(GridCacheAdapter.java:4725)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1373)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:1108)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:686)
at 
org.apache.ignite.internal.IgniteClientReconnectAffinityTest.doReconnectClientAffinityKeyGet(IgniteClientReconnectAffinityTest.java:180)
at 
org.apache.ignite.internal.IgniteClientReconnectAffinityTest.testReconnectClientAnnotatedAffinityKeyWithBinaryConfigGet(IgniteClientReconnectAffinityTest.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 

[jira] [Updated] (IGNITE-18976) Affinity broken on the client after reconnection

2023-03-07 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-18976:

Attachment: (was: IgniteClientReconnectAffinityTest.java)

> Affinity broken on the  client after reconnection
> -
>
> Key: IGNITE-18976
> URL: https://issues.apache.org/jira/browse/IGNITE-18976
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Affects Versions: 2.16
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>
> 1. Use an AffinityKey together with a BinaryTypeConfiguration.
> 2. The client reconnects.
> 3. Affinity is broken.





[jira] [Updated] (IGNITE-18976) Affinity broken on the client after reconnection

2023-03-07 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-18976:

Description: 
1. Use an AffinityKey together with a BinaryTypeConfiguration.
2. The client reconnects.
3. Affinity is broken.
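Step 1 can be sketched with the same APIs the attached test uses (a minimal, hypothetical configuration fragment; TestKey and its fields are illustrative, not taken from the attachment):

```java
import java.util.Arrays;

import org.apache.ignite.binary.BinaryTypeConfiguration;
import org.apache.ignite.cache.affinity.AffinityKeyMapped;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Hypothetical key class: the annotated field drives partition mapping.
class TestKey {
    @AffinityKeyMapped
    private int affKey;

    private String payload;
}

public class ConfigSketch {
    // Register the key type in the binary configuration, as the
    // attached test does, so the affinity field is known cluster-wide.
    static IgniteConfiguration configure() {
        return new IgniteConfiguration()
            .setBinaryConfiguration(new BinaryConfiguration()
                .setTypeConfigurations(Arrays.asList(
                    new BinaryTypeConfiguration()
                        .setTypeName(TestKey.class.getName()))));
    }
}
```

Per this issue, it is exactly this combination (annotated affinity key plus an explicit BinaryTypeConfiguration) that stops working on the thick client after a reconnect.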


> Affinity broken on the  client after reconnection
> -
>
> Key: IGNITE-18976
> URL: https://issues.apache.org/jira/browse/IGNITE-18976
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Affects Versions: 2.16
>Reporter: Sergey Kosarev
>Priority: Major
> Attachments: IgniteClientReconnectAffinityTest.java
>
>
> 1. Use an AffinityKey together with a BinaryTypeConfiguration.
> 2. The client reconnects.
> 3. Affinity is broken.





[jira] [Assigned] (IGNITE-18976) Affinity broken on the client after reconnection

2023-03-07 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev reassigned IGNITE-18976:
---

Assignee: Sergey Kosarev

> Affinity broken on the  client after reconnection
> -
>
> Key: IGNITE-18976
> URL: https://issues.apache.org/jira/browse/IGNITE-18976
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Affects Versions: 2.16
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
> Attachments: IgniteClientReconnectAffinityTest.java
>
>
> 1. Use an AffinityKey together with a BinaryTypeConfiguration.
> 2. The client reconnects.
> 3. Affinity is broken.





[jira] [Updated] (IGNITE-18976) Affinity broken on the client after reconnection

2023-03-07 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-18976:

Description: (was: /*
 * Copyright 2019 GridGain Systems, Inc. and Contributors.
 *
 * Licensed under the GridGain Community Edition License (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * 
https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.ignite.internal;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteException;
import org.apache.ignite.IgniteLogger;
import org.apache.ignite.binary.BinaryTypeConfiguration;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheKeyConfiguration;
import org.apache.ignite.cache.affinity.AffinityKeyMapped;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.internal.managers.communication.GridIoMessage;
import org.apache.ignite.internal.util.typedef.F;
import org.apache.ignite.internal.util.typedef.T2;
import org.apache.ignite.lang.IgniteInClosure;
import org.apache.ignite.plugin.extensions.communication.Message;
import org.apache.ignite.resources.LoggerResource;
import org.apache.ignite.spi.IgniteSpiException;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.junit.Test;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

/**
 *
 */
public class IgniteClientReconnectAffinityTest extends 
IgniteClientReconnectAbstractTest {
/** */
private static final int SRV_CNT = 1;

/** */
private UUID nodeId;
private Ignite client;

/** {@inheritDoc} */
@Override protected IgniteConfiguration getConfiguration(String 
igniteInstanceName) throws Exception {
IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

TestCommunicationSpi commSpi = new TestCommunicationSpi();

commSpi.setSharedMemoryPort(-1);

cfg.setCommunicationSpi(commSpi);

cfg.setPeerClassLoadingEnabled(false);

((TcpDiscoverySpi)cfg.getDiscoverySpi()).setNetworkTimeout(5000);

cfg.setCacheKeyConfiguration(new 
CacheKeyConfiguration(TestNotAnnotatedKey.class.getName(), 
TestNotAnnotatedKey.AFFINITY_KEY_FIELD))
.setBinaryConfiguration(
new BinaryConfiguration()
.setTypeConfigurations(Arrays.asList(
new BinaryTypeConfiguration()

.setTypeName(TestNotAnnotatedKey.class.getName()),
new BinaryTypeConfiguration()

.setTypeName(TestAnnotatedKey.class.getName())
))
)
  ;

return cfg;
}

/** {@inheritDoc} */
@Override protected int serverCount() {
return 0;
}

/** {@inheritDoc} */
@Override protected void beforeTest() throws Exception {
startGrids(SRV_CNT);
}

/** {@inheritDoc} */
@Override protected void afterTest() throws Exception {
stopAllGrids();
}

@Test
public void testReconnectClientNotAnnotatedAffinityKeyGet() throws 
Exception {
clientMode = true;

final Ignite client = startGrid(SRV_CNT);

assertTrue(client.cluster().localNode().isClient());

final Ignite srv = clientRouter(client);

final IgniteCache clientCache = 
client.getOrCreateCache(new CacheConfiguration(DEFAULT_CACHE_NAME)
.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
);

final IgniteCache srvCache = 
srv.cache(DEFAULT_CACHE_NAME);

assertNotNull(srvCache);

final String val = "val";

clientCache.put(TestNotAnnotatedKey.of(1), val);

assertEquals(val, clientCache.get(TestNotAnnotatedKey.of(1)));

assertEquals(val, srvCache.get(TestNotAnnotatedKey.of(1)));

reconnectClientNode(client, srv, new Runnable() {
@Override public void run() {

[jira] [Updated] (IGNITE-18976) Affinity broken on the client after reconnection

2023-03-07 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-18976:

Attachment: IgniteClientReconnectAffinityTest.java

> Affinity broken on the  client after reconnection
> -
>
> Key: IGNITE-18976
> URL: https://issues.apache.org/jira/browse/IGNITE-18976
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Affects Versions: 2.16
>Reporter: Sergey Kosarev
>Priority: Major
> Attachments: IgniteClientReconnectAffinityTest.java
>
>

[jira] [Created] (IGNITE-18976) Affinity broken on the client after reconnection

2023-03-07 Thread Sergey Kosarev (Jira)
Sergey Kosarev created IGNITE-18976:
---

 Summary: Affinity broken on the  client after reconnection
 Key: IGNITE-18976
 URL: https://issues.apache.org/jira/browse/IGNITE-18976
 Project: Ignite
  Issue Type: Bug
  Components: binary
Affects Versions: 2.16
Reporter: Sergey Kosarev


/*
 * Copyright 2019 GridGain Systems, Inc. and Contributors.
 *
 * Licensed under the GridGain Community Edition License (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * 
https://www.gridgain.com/products/software/community-edition/gridgain-community-edition-license
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.ignite.internal;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteException;
import org.apache.ignite.IgniteLogger;
import org.apache.ignite.binary.BinaryTypeConfiguration;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheKeyConfiguration;
import org.apache.ignite.cache.affinity.AffinityKeyMapped;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.internal.managers.communication.GridIoMessage;
import org.apache.ignite.internal.util.typedef.F;
import org.apache.ignite.internal.util.typedef.T2;
import org.apache.ignite.lang.IgniteInClosure;
import org.apache.ignite.plugin.extensions.communication.Message;
import org.apache.ignite.resources.LoggerResource;
import org.apache.ignite.spi.IgniteSpiException;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.junit.Test;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

/**
 *
 */
public class IgniteClientReconnectAffinityTest extends IgniteClientReconnectAbstractTest {
/** */
private static final int SRV_CNT = 1;

/** */
private UUID nodeId;
private Ignite client;

/** {@inheritDoc} */
@Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

TestCommunicationSpi commSpi = new TestCommunicationSpi();

commSpi.setSharedMemoryPort(-1);

cfg.setCommunicationSpi(commSpi);

cfg.setPeerClassLoadingEnabled(false);

((TcpDiscoverySpi)cfg.getDiscoverySpi()).setNetworkTimeout(5000);

cfg.setCacheKeyConfiguration(new CacheKeyConfiguration(TestNotAnnotatedKey.class.getName(), TestNotAnnotatedKey.AFFINITY_KEY_FIELD))
    .setBinaryConfiguration(new BinaryConfiguration()
        .setTypeConfigurations(Arrays.asList(
            new BinaryTypeConfiguration().setTypeName(TestNotAnnotatedKey.class.getName()),
            new BinaryTypeConfiguration().setTypeName(TestAnnotatedKey.class.getName()))));

return cfg;
}

/** {@inheritDoc} */
@Override protected int serverCount() {
return 0;
}

/** {@inheritDoc} */
@Override protected void beforeTest() throws Exception {
startGrids(SRV_CNT);
}

/** {@inheritDoc} */
@Override protected void afterTest() throws Exception {
stopAllGrids();
}
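/* TestNotAnnotatedKey itself is not included in this truncated message. A
 * minimal sketch of what such a key class could look like; this is
 * hypothetical (the real class may differ), only AFFINITY_KEY_FIELD and
 * of() are implied by the surrounding code: */
static class TestNotAnnotatedKey {
    /** Affinity key field name registered via CacheKeyConfiguration above. */
    static final String AFFINITY_KEY_FIELD = "affKey";

    /** */
    int key;

    /** Not annotated with @AffinityKeyMapped; mapping comes from configuration only. */
    int affKey;

    /** Factory used by the test. */
    static TestNotAnnotatedKey of(int key) {
        TestNotAnnotatedKey k = new TestNotAnnotatedKey();
        k.key = key;
        k.affKey = key;
        return k;
    }

    @Override public boolean equals(Object o) {
        return o instanceof TestNotAnnotatedKey && ((TestNotAnnotatedKey)o).key == key;
    }

    @Override public int hashCode() {
        return key;
    }
}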

@Test
public void testReconnectClientNotAnnotatedAffinityKeyGet() throws Exception {
clientMode = true;

final Ignite client = startGrid(SRV_CNT);

assertTrue(client.cluster().localNode().isClient());

final Ignite srv = clientRouter(client);

final IgniteCache clientCache = client.getOrCreateCache(new CacheConfiguration(DEFAULT_CACHE_NAME)
.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
);

final IgniteCache srvCache = srv.cache(DEFAULT_CACHE_NAME);

assertNotNull(srvCache);

final String val = "val";

clientCache.put(TestNotAnnotatedKey.of(1), val);

assertEquals(val, clientCache.get(TestNotAnnotatedKey.of(1)));

assertEquals(val, 

[jira] [Commented] (IGNITE-17043) Performance degradation in Marshaller

2022-06-07 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550909#comment-17550909
 ] 

Sergey Kosarev commented on IGNITE-17043:
-

[~sdanilov], thank you for the contribution!

> Performance degradation in Marshaller
> -
>
> Key: IGNITE-17043
> URL: https://issues.apache.org/jira/browse/IGNITE-17043
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.13, 2.14
>Reporter: Sergey Kosarev
>Assignee: Semyon Danilov
>Priority: Major
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> There is a problem in ignite-core code in GridHandleTable, used inside 
> OptimizedMarshaller, where the internal buffers grow in size and do not 
> shrink back.
> What is problematic in GridHandleTable? Its reset() method fills the arrays 
> in memory. Done once, it's not a big deal. Done a million times on a long 
> buffer, it becomes really slow and CPU-consuming.
> Here is a simple reproducer (imports omitted for brevity):
> Marshalling of the same object at first takes about 50 ms; after the 
> degradation it takes more than 100 seconds.
> {code:title=DegradationReproducer.java|borderStyle=solid}
> public class DegradationReproducer extends BinaryMarshallerSelfTest {
> @Test
> public void reproduce() throws Exception {
> List<List<Integer>> obj = IntStream.range(0, 
> 10).mapToObj(Collections::singletonList).collect(Collectors.toList());
> for (int i = 0; i < 50; i++) {
> Assert.assertThat(measureMarshal(obj), Matchers.lessThan(1000L));
> }
> binaryMarshaller().marshal(
> Collections.singletonList(IntStream.range(0, 
> 1000_000).mapToObj(String::valueOf).collect(Collectors.toList()))
> );
> Assert.assertThat(measureMarshal(obj), Matchers.lessThan(1000L));
> }
> private long measureMarshal(List<List<Integer>> obj) throws 
> IgniteCheckedException {
> info("marshalling started ");
> long millis = System.currentTimeMillis();
> binaryMarshaller().marshal(obj);
> millis = System.currentTimeMillis() - millis;
> info("marshalling finished in " + millis + " ms");
> return millis;
> }
> }
> {code}
> On my machine the result is:
> {quote}
> .
> [2022-05-26 20:58:27,178][INFO 
> ][test-runner-#1%binary.DegradationReproducer%][root] marshalling finished in 
> 39 ms
> [2022-05-26 20:58:27,769][INFO 
> ][test-runner-#1%binary.DegradationReproducer%][root] marshalling started 
> [2022-05-26 21:02:03,588][INFO 
> ][test-runner-#1%binary.DegradationReproducer%][root] marshalling finished in 
> 215819 ms
> [2022-05-26 21:02:03,593][ERROR][main][root] Test failed 
> [test=DegradationReproducer#reproduce[useBinaryArrays = true], 
> duration=218641]
> java.lang.AssertionError: 
> Expected: a value less than <1000L>
>  but: <*215819L*> was greater than <1000L>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:956)
>   at org.junit.Assert.assertThat(Assert.java:923)
>   at 
> org.apache.ignite.internal.binary.DegradationReproducer.reproduce(DegradationReproducer.java:27)
> {quote}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17043) Performance degradation in Marshaller

2022-05-26 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-17043:

Description: 
There is a problem in ignite-core code in GridHandleTable, used inside 
OptimizedMarshaller, where the internal buffers grow in size and do not shrink 
back.
What is problematic in GridHandleTable? Its reset() method fills the arrays in 
memory. Done once, it's not a big deal. Done a million times on a long buffer, 
it becomes really slow and CPU-consuming.

Here is a simple reproducer (imports omitted for brevity):

Marshalling of the same object at first takes about 50 ms; after the 
degradation it takes more than 100 seconds.

{code:title=DegradationReproducer.java|borderStyle=solid}
public class DegradationReproducer extends BinaryMarshallerSelfTest {

@Test
public void reproduce() throws Exception {
List<List<Integer>> obj = IntStream.range(0, 
10).mapToObj(Collections::singletonList).collect(Collectors.toList());

for (int i = 0; i < 50; i++) {
Assert.assertThat(measureMarshal(obj), Matchers.lessThan(1000L));
}

binaryMarshaller().marshal(
Collections.singletonList(IntStream.range(0, 
1000_000).mapToObj(String::valueOf).collect(Collectors.toList()))
);

Assert.assertThat(measureMarshal(obj), Matchers.lessThan(1000L));
}

private long measureMarshal(List<List<Integer>> obj) throws 
IgniteCheckedException {
info("marshalling started ");
long millis = System.currentTimeMillis();

binaryMarshaller().marshal(obj);

millis = System.currentTimeMillis() - millis;

info("marshalling finished in " + millis + " ms");

return millis;
}
}

{code}

On my machine the result is:
{quote}
.
[2022-05-26 20:58:27,178][INFO 
][test-runner-#1%binary.DegradationReproducer%][root] marshalling finished in 
39 ms
[2022-05-26 20:58:27,769][INFO 
][test-runner-#1%binary.DegradationReproducer%][root] marshalling started 
[2022-05-26 21:02:03,588][INFO 
][test-runner-#1%binary.DegradationReproducer%][root] marshalling finished in 
215819 ms
[2022-05-26 21:02:03,593][ERROR][main][root] Test failed 
[test=DegradationReproducer#reproduce[useBinaryArrays = true], duration=218641]
java.lang.AssertionError: 
Expected: a value less than <1000L>
 but: <*215819L*> was greater than <1000L>

at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at 
org.apache.ignite.internal.binary.DegradationReproducer.reproduce(DegradationReproducer.java:27)
{quote}

  was:
There is a problem in ignite-core code in GridHandleTable, used inside 
OptimizedMarshaller, where the internal buffers grow in size and do not shrink 
back.
SingletonList is serialized with OptimizedMarshaller by default in Ignite. In 
contrast, for ArrayList serialization, BinaryMarshallerExImpl is used.
The difference between OptimizedMarshaller and BinaryMarshallerExImpl is that 
when OptimizedMarshaller starts to serialize an object node, all the descendant 
nodes continue to be serialized by OptimizedMarshaller using the same 
GridHandleTable associated with the current thread. GridHandleTable is static 
per thread and never shrinks in size; its buffer only grows over time.
BinaryMarshallerExImpl, though, can divert serialization to OptimizedMarshaller 
down the road.
What is problematic in GridHandleTable? Its reset() method fills the arrays in 
memory. Done once, it's not a big deal. Done a million times on a long buffer, 
it becomes really slow and CPU-consuming.

Here is a simple reproducer (imports omitted for brevity):
{code:title=DegradationReproducer.java|borderStyle=solid}
public class DegradationReproducer extends BinaryMarshallerSelfTest {

@Test
public void reproduce() throws Exception {
List<List<Integer>> obj = IntStream.range(0, 
10).mapToObj(Collections::singletonList).collect(Collectors.toList());

for (int i = 0; i < 50; i++) {
Assert.assertThat(measureMarshal(obj), Matchers.lessThan(1000L));
}

binaryMarshaller().marshal(
Collections.singletonList(IntStream.range(0, 
1000_000).mapToObj(String::valueOf).collect(Collectors.toList()))
);

Assert.assertThat(measureMarshal(obj), Matchers.lessThan(1000L));
}

private long measureMarshal(List<List<Integer>> obj) throws 
IgniteCheckedException {
info("marshalling started ");
long millis = System.currentTimeMillis();

binaryMarshaller().marshal(obj);

millis = System.currentTimeMillis() - millis;

info("marshalling finished in " + millis + " ms");

return millis;
}
}

{code}

On my machine the result is:
{quote}
.
[2022-05-26 20:58:27,178][INFO 
][test-runner-#1%binary.DegradationReproducer%][root] 

[jira] [Updated] (IGNITE-17043) Performance degradation in Marshaller

2022-05-26 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-17043:

Description: 
There is a problem in ignite-core code in GridHandleTable, used inside 
OptimizedMarshaller, where the internal buffers grow in size and do not shrink 
back.
SingletonList is serialized with OptimizedMarshaller by default in Ignite. In 
contrast, for ArrayList serialization, BinaryMarshallerExImpl is used.
The difference between OptimizedMarshaller and BinaryMarshallerExImpl is that 
when OptimizedMarshaller starts to serialize an object node, all the descendant 
nodes continue to be serialized by OptimizedMarshaller using the same 
GridHandleTable associated with the current thread. GridHandleTable is static 
per thread and never shrinks in size; its buffer only grows over time.
BinaryMarshallerExImpl, though, can divert serialization to OptimizedMarshaller 
down the road.
What is problematic in GridHandleTable? Its reset() method fills the arrays in 
memory. Done once, it's not a big deal. Done a million times on a long buffer, 
it becomes really slow and CPU-consuming.

Here is a simple reproducer (imports omitted for brevity):
{code:title=DegradationReproducer.java|borderStyle=solid}
public class DegradationReproducer extends BinaryMarshallerSelfTest {

@Test
public void reproduce() throws Exception {
List<List<Integer>> obj = IntStream.range(0, 
10).mapToObj(Collections::singletonList).collect(Collectors.toList());

for (int i = 0; i < 50; i++) {
Assert.assertThat(measureMarshal(obj), Matchers.lessThan(1000L));
}

binaryMarshaller().marshal(
Collections.singletonList(IntStream.range(0, 
1000_000).mapToObj(String::valueOf).collect(Collectors.toList()))
);

Assert.assertThat(measureMarshal(obj), Matchers.lessThan(1000L));
}

private long measureMarshal(List<List<Integer>> obj) throws 
IgniteCheckedException {
info("marshalling started ");
long millis = System.currentTimeMillis();

binaryMarshaller().marshal(obj);

millis = System.currentTimeMillis() - millis;

info("marshalling finished in " + millis + " ms");

return millis;
}
}

{code}

On my machine the result is:
{quote}
.
[2022-05-26 20:58:27,178][INFO 
][test-runner-#1%binary.DegradationReproducer%][root] marshalling finished in 
39 ms
[2022-05-26 20:58:27,769][INFO 
][test-runner-#1%binary.DegradationReproducer%][root] marshalling started 
[2022-05-26 21:02:03,588][INFO 
][test-runner-#1%binary.DegradationReproducer%][root] marshalling finished in 
215819 ms
[2022-05-26 21:02:03,593][ERROR][main][root] Test failed 
[test=DegradationReproducer#reproduce[useBinaryArrays = true], duration=218641]
java.lang.AssertionError: 
Expected: a value less than <1000L>
 but: <215819L> was greater than <1000L>

at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at 
org.apache.ignite.internal.binary.DegradationReproducer.reproduce(DegradationReproducer.java:27)
{quote}

  was:
There is a problem in ignite-core code in GridHandleTable, used inside 
OptimizedMarshaller, where the internal buffers grow in size and do not shrink 
back.
SingletonList is serialized with OptimizedMarshaller by default in Ignite. In 
contrast, for ArrayList serialization, BinaryMarshallerExImpl is used.
The difference between OptimizedMarshaller and BinaryMarshallerExImpl is that 
when OptimizedMarshaller starts to serialize an object node, all the descendant 
nodes continue to be serialized by OptimizedMarshaller using the same 
GridHandleTable associated with the current thread. GridHandleTable is static 
per thread and never shrinks in size; its buffer only grows over time.
BinaryMarshallerExImpl, though, can divert serialization to OptimizedMarshaller 
down the road.
What is problematic in GridHandleTable? Its reset() method fills the arrays in 
memory. Done once, it's not a big deal. Done a million times on a long buffer, 
it becomes really slow and CPU-consuming.




> Performance degradation in Marshaller
> -
>
> Key: IGNITE-17043
> URL: https://issues.apache.org/jira/browse/IGNITE-17043
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.13, 2.14
>Reporter: Sergey Kosarev
>Priority: Major
>
> There is a problem in ignite-core code in GridHandleTable, used inside 
> OptimizedMarshaller, where the internal buffers grow in size and do not 
> shrink back.
> SingletonList is serialized with OptimizedMarshaller by default in Ignite. In 
> contrast, for ArrayList serialization, BinaryMarshallerExImpl is used.
> The difference 

[jira] [Created] (IGNITE-17043) Performance degradation in Marshaller

2022-05-26 Thread Sergey Kosarev (Jira)
Sergey Kosarev created IGNITE-17043:
---

 Summary: Performance degradation in Marshaller
 Key: IGNITE-17043
 URL: https://issues.apache.org/jira/browse/IGNITE-17043
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.13, 2.14
Reporter: Sergey Kosarev


There is a problem in ignite-core code in GridHandleTable, used inside 
OptimizedMarshaller, where the internal buffers grow in size and do not shrink 
back.
SingletonList is serialized with OptimizedMarshaller by default in Ignite. In 
contrast, for ArrayList serialization, BinaryMarshallerExImpl is used.
The difference between OptimizedMarshaller and BinaryMarshallerExImpl is that 
when OptimizedMarshaller starts to serialize an object node, all the descendant 
nodes continue to be serialized by OptimizedMarshaller using the same 
GridHandleTable associated with the current thread. GridHandleTable is static 
per thread and never shrinks in size; its buffer only grows over time.
BinaryMarshallerExImpl, though, can divert serialization to OptimizedMarshaller 
down the road.
What is problematic in GridHandleTable? Its reset() method fills the arrays in 
memory. Done once, it's not a big deal. Done a million times on a long buffer, 
it becomes really slow and CPU-consuming.
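The grow-without-shrink behaviour described above can be sketched in a few lines of Java. This is a simplified model for illustration only; the class, field, and method names below are mine, not GridHandleTable's actual internals:

```java
import java.util.Arrays;

public class HandleTableDemo {
    /** Simplified model of a per-thread handle table; names are illustrative. */
    static class HandleTableSketch {
        private int[] spine = new int[32];

        /** Grows to fit the largest object graph ever marshalled; never shrinks. */
        void ensureCapacity(int size) {
            if (size > spine.length)
                spine = new int[size];
        }

        /** Runs once per marshalling; fills the WHOLE array, so the cost is
         * O(peak capacity), not O(entries actually used). */
        void reset() {
            Arrays.fill(spine, -1);
        }

        int capacity() {
            return spine.length;
        }
    }

    public static void main(String[] args) {
        HandleTableSketch tbl = new HandleTableSketch();

        tbl.ensureCapacity(64); // small marshallings are cheap to reset
        tbl.reset();

        tbl.ensureCapacity(1_000_000); // one huge marshal (the 1M-element list)...
        tbl.reset();

        // ...and from now on every reset(), even for tiny objects, touches 1M slots.
        System.out.println(tbl.capacity()); // 1000000
    }
}
```

This is why a single marshalling of a huge collection permanently slows down every subsequent marshalling on the same thread in the reproducer above.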





--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2021-09-17 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17416807#comment-17416807
 ] 

Sergey Kosarev commented on IGNITE-12793:
-

[~sergey-chugunov], are there any plans to fix this bug? It happened to me 
again recently.

> Deadlock in the System Pool on Metadata processing
> --
>
> Key: IGNITE-12793
> URL: https://issues.apache.org/jira/browse/IGNITE-12793
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8, 2.7.6
>Reporter: Sergey Kosarev
>Assignee: Sergey Chugunov
>Priority: Major
> Attachments: ignite-12793-threaddump.txt
>
>
> I've recently tried to apply Ilya's idea 
> (https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread 
> pools and tried setting the system pool size to 3 in my own tests.
>  It caused a deadlock on a client node, and I think it can happen not only 
> with such small pool values.
> The details are as follows:
>  I'm not using persistence currently (if it matters).
>  On the client node I use Ignite compute to call a job on every server node 
> (there are 3 server nodes in the tests).
> Then I found the following in the logs:
> {noformat}
> [10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773]
> grid-timeout-worker-#8
> [WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
> task completed in last 3ms, is system thread pool size large enough?)
>  [10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]
> {noformat}
> I see in the thread dumps that all 3 system pool workers are doing the same 
> thing, processing job responses:
> {noformat}
>   "sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
> waiting on condition [0x7b91d000]
>  java.lang.Thread.State: WAITING (parking)
>  at sun.misc.Unsafe.park(Native Method)
>  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
>  at 
> org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:285)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:184)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
>  at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:887)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:306)
>  at 
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
>  at 
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
>  at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10493)
>  at 
> 
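The starvation in the dump above, where every system-pool worker blocks waiting for metadata while the task that would deliver it sits in the same pool's queue, can be sketched with a plain executor. This is an illustration only; the class and method names below are mine, not Ignite's:

```java
import java.util.concurrent.*;

public class PoolStarvationDemo {
    /** Returns true if the metadata-delivering task starved behind blocked workers. */
    static boolean starves(int poolSize) throws InterruptedException {
        ExecutorService sysPool = Executors.newFixedThreadPool(poolSize);
        CountDownLatch metadataReady = new CountDownLatch(1);

        // Every worker blocks waiting for "metadata", like the sys-# threads in the dump.
        for (int i = 0; i < poolSize; i++)
            sysPool.submit(() -> { metadataReady.await(); return null; });

        // The task that would deliver the metadata is queued behind them and never runs.
        Future<?> metadataTask = sysPool.submit(metadataReady::countDown);

        boolean starved;
        try {
            metadataTask.get(500, TimeUnit.MILLISECONDS);
            starved = false;
        }
        catch (TimeoutException | ExecutionException e) {
            starved = true;
        }

        metadataReady.countDown(); // unblock the workers so the pool can shut down
        sysPool.shutdown();
        return starved;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(starves(3) ? "deadlocked: pool starved" : "completed");
    }
}
```

With a larger pool the window only shrinks; any pool where all workers can block on work queued behind them in the same pool has this failure mode.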

[jira] [Assigned] (IGNITE-14687) BinaryHeapOutputStream BinaryOffheapOutputStream corrupt memory in case of overflow and cause JVM crash

2021-05-06 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev reassigned IGNITE-14687:
---

Assignee: Sergey Kosarev

> BinaryHeapOutputStream BinaryOffheapOutputStream corrupt memory in case of 
> overflow and cause JVM crash
> ---
>
> Key: IGNITE-14687
> URL: https://issues.apache.org/jira/browse/IGNITE-14687
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>
> Reproducer is easy:
> while (true) out.writeByteArray(bytes);
> 
> #
>  # A fatal error has been detected by the Java Runtime Environment:
>  #
>  # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, 
> pid=17128, tid=0x24e4
>  #
> 
>  
> It actually happened to me occasionally when a compute job mistakenly tried 
> to return too many results. The JVM crashed on the job result serialization.
> See link to full reproducers below.
>  
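One classic way a buffer overflow like this crashes the JVM is a capacity check defeated by int overflow: once the write position nears Integer.MAX_VALUE, pos + len wraps negative, the grow check passes, and the raw write runs past the buffer. The sketch below illustrates only the arithmetic; it is a hypothesis about the failure mode, not Ignite's actual stream code:

```java
public class OverflowCheckDemo {
    public static void main(String[] args) {
        int pos = Integer.MAX_VALUE - 10; // write position near the int limit
        int len = 100;                    // bytes.length for the next writeByteArray
        int capacity = Integer.MAX_VALUE; // current buffer capacity

        // Broken check: pos + len wraps to a large negative int, so no grow happens
        // and an unchecked copy would write far outside the buffer.
        boolean brokenCheckGrows = pos + len > capacity;

        // Safe check: widening to long keeps the arithmetic exact.
        boolean safeCheckGrows = (long)pos + len > capacity;

        System.out.println("broken check requests grow: " + brokenCheckGrows); // false
        System.out.println("safe check requests grow: " + safeCheckGrows);     // true
    }
}
```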



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-14687) BinaryHeapOutputStream BinaryOffheapOutputStream corrupt memory in case of overflow and cause JVM crash

2021-05-06 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-14687:

Summary: BinaryHeapOutputStream BinaryOffheapOutputStream corrupt memory in 
case of overflow and cause JVM crash  (was: BinaryHeapOutputStream 
BinaryOffheapOutputStream corrupt memory in case of overflow and cause JVM 
crush)

> BinaryHeapOutputStream BinaryOffheapOutputStream corrupt memory in case of 
> overflow and cause JVM crash
> ---
>
> Key: IGNITE-14687
> URL: https://issues.apache.org/jira/browse/IGNITE-14687
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>
> Reproducer is easy:
> while (true) out.writeByteArray(bytes);
> 
> #
>  # A fatal error has been detected by the Java Runtime Environment:
>  #
>  # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, 
> pid=17128, tid=0x24e4
>  #
> 
>  
> It actually happened to me occasionally when a compute job mistakenly tried 
> to return too many results. The JVM crashed on the job result serialization.
> See link to full reproducers below.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-14687) BinaryHeapOutputStream BinaryOffheapOutputStream corrupt memory in case of overflow and cause JVM crush

2021-05-06 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-14687:

Description: 
Reproducer is easy:

while (true) out.writeByteArray(bytes);


#
 # A fatal error has been detected by the Java Runtime Environment:
 #
 # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, pid=17128, 
tid=0x24e4
 #



 

It actually happened to me occasionally when a compute job mistakenly tried to 
return too many results. The JVM crashed on the job result serialization.

See link to full reproducers below.

 

  was:
reproducer is easy:

while (true) out.writeByteArray(bytes);

#
 # A fatal error has been detected by the Java Runtime Environment:
 #
 # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, pid=17128, 
tid=0x24e4
 #



 

It actually happened to me occasionally when a compute job mistakenly tried to 
return too many results. The JVM crashed on the job result serialization.

 

 


> BinaryHeapOutputStream BinaryOffheapOutputStream corrupt memory in case of 
> overflow and cause JVM crush
> ---
>
> Key: IGNITE-14687
> URL: https://issues.apache.org/jira/browse/IGNITE-14687
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>
> Reproducer is easy:
> while (true) out.writeByteArray(bytes);
> 
> #
>  # A fatal error has been detected by the Java Runtime Environment:
>  #
>  # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, 
> pid=17128, tid=0x24e4
>  #
> 
>  
> It actually happened to me occasionally when a compute job mistakenly tried 
> to return too many results. The JVM crashed on the job result serialization.
> See link to full reproducers below.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-14687) BinaryHeapOutputStream BinaryOffheapOutputStream corrupt memory in case of overflow and cause JVM crush

2021-05-06 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-14687:

Description: 
reproducer is easy:

while (true) out.writeByteArray(bytes);

#
 # A fatal error has been detected by the Java Runtime Environment:
 #
 # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, pid=17128, 
tid=0x24e4
 #

---

 

It actually happened to me occasionally when a compute job mistakenly tried to 
return too many results. The JVM crashed on the job result serialization.

 

 

  was:
reproducer is easy:

while (true) out.writeByteArray(bytes);

#
 # A fatal error has been detected by the Java Runtime Environment:
 #
 # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, pid=17128, 
tid=0x24e4
 #

--

 

It actually happened to me occasionally when a compute job mistakenly tried to 
return too many results. The JVM crashed on the job result serialization.

 

 


> BinaryHeapOutputStream BinaryOffheapOutputStream corrupt memory in case of 
> overflow and cause JVM crush
> ---
>
> Key: IGNITE-14687
> URL: https://issues.apache.org/jira/browse/IGNITE-14687
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>
> reproducer is easy:
> while (true) out.writeByteArray(bytes);
> 
> #
>  # A fatal error has been detected by the Java Runtime Environment:
>  #
>  # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, 
> pid=17128, tid=0x24e4
>  #
> ---
>  
> It actually happened to me occasionally when a compute job mistakenly tried 
> to return too many results. The JVM crashed on the job result serialization.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-14687) BinaryHeapOutputStream BinaryOffheapOutputStream corrupt memory in case of overflow and cause JVM crush

2021-05-06 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-14687:

Description: 
reproducer is easy:

while (true) out.writeByteArray(bytes);

#
 # A fatal error has been detected by the Java Runtime Environment:
 #
 # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, pid=17128, 
tid=0x24e4
 #



 

It actually happened to me occasionally when a compute job mistakenly tried to 
return too many results. The JVM crashed on the job result serialization.

 

 

  was:
reproducer is easy:

while (true) out.writeByteArray(bytes);

#
 # A fatal error has been detected by the Java Runtime Environment:
 #
 # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, pid=17128, 
tid=0x24e4
 #

---

 

It actually happened to me occasionally when a compute job mistakenly tried to 
return too many results. The JVM crashed on the job result serialization.

 

 


> BinaryHeapOutputStream BinaryOffheapOutputStream corrupt memory in case of 
> overflow and cause JVM crush
> ---
>
> Key: IGNITE-14687
> URL: https://issues.apache.org/jira/browse/IGNITE-14687
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>
> reproducer is easy:
> while (true) out.writeByteArray(bytes);
> 
> #
>  # A fatal error has been detected by the Java Runtime Environment:
>  #
>  # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, 
> pid=17128, tid=0x24e4
>  #
> 
>  
> It actually happened to me occasionally when a compute job mistakenly tried 
> to return too many results. The JVM crashed on the job result serialization.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-14687) BinaryHeapOutputStream BinaryOffheapOutputStream corrupt memory in case of overflow and cause JVM crush

2021-05-06 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-14687:

Description: 
reproducer is easy:

while (true) out.writeByteArray(bytes);

#
 # A fatal error has been detected by the Java Runtime Environment:
 #
 # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, pid=17128, 
tid=0x24e4
 #

--

 

It actually happened to me occasionally when a compute job mistakenly tried to 
return too many results. The JVM crashed on the job result serialization.

 

 

  was:
reproducer is easy:

while (true) out.writeByteArray(bytes);

-

#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, pid=17128, 
tid=0x24e4
#

--

 

It actually happened to me occasionally when a compute job mistakenly tried to 
return too many results. The JVM crashed on the job result serialization.

 

 


> BinaryHeapOutputStream BinaryOffheapOutputStream corrupt memory in case of 
> overflow and cause JVM crush
> ---
>
> Key: IGNITE-14687
> URL: https://issues.apache.org/jira/browse/IGNITE-14687
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>
> The reproducer is simple:
> while (true) out.writeByteArray(bytes);
> 
> #
>  # A fatal error has been detected by the Java Runtime Environment:
>  #
>  # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, 
> pid=17128, tid=0x24e4
>  #
> --
>  
> It actually happened to me occasionally when, by mistake, a compute job 
> tried to return too many results. The JVM crashed during job result serialization.
>  
>  



--


[jira] [Updated] (IGNITE-14687) BinaryHeapOutputStream and BinaryOffheapOutputStream corrupt memory in case of overflow and cause a JVM crash

2021-05-06 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-14687:

Description: 
The reproducer is simple:

while (true) out.writeByteArray(bytes);

-

#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, pid=17128, 
tid=0x24e4
#

--

 

It actually happened to me occasionally when, by mistake, a compute job tried to 
return too many results. The JVM crashed during job result serialization.

 

 

> BinaryHeapOutputStream and BinaryOffheapOutputStream corrupt memory in case 
> of overflow and cause a JVM crash
> ---
>
> Key: IGNITE-14687
> URL: https://issues.apache.org/jira/browse/IGNITE-14687
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>
> The reproducer is simple:
> while (true) out.writeByteArray(bytes);
> -
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02742b26, 
> pid=17128, tid=0x24e4
> #
> --
>  
> It actually happened to me occasionally when, by mistake, a compute job 
> tried to return too many results. The JVM crashed during job result serialization.
>  
>  



--


[jira] [Created] (IGNITE-14687) BinaryHeapOutputStream and BinaryOffheapOutputStream corrupt memory in case of overflow and cause a JVM crash

2021-05-06 Thread Sergey Kosarev (Jira)
Sergey Kosarev created IGNITE-14687:
---

 Summary: BinaryHeapOutputStream and BinaryOffheapOutputStream corrupt 
memory in case of overflow and cause a JVM crash
 Key: IGNITE-14687
 URL: https://issues.apache.org/jira/browse/IGNITE-14687
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Kosarev






--


[jira] [Commented] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-04-13 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17082231#comment-17082231
 ] 

Sergey Kosarev commented on IGNITE-12793:
-

[~sergey-chugunov], one more question.

I found a method that helped me as a temporary work-around: 
org.apache.ignite.internal.binary.BinaryContext#registerUserTypesSchema

I called it after starting every client node (of course, I had also registered 
the user types in BinaryConfiguration beforehand):

((CacheObjectBinaryProcessorImpl)ignite0.context().cacheObjects()).binaryContext().registerUserTypesSchema();

 

I found that 
org.apache.ignite.internal.binary.BinaryContext#registerUserTypesSchema is 
executed on start of thin client 
(org.apache.ignite.internal.client.thin.TcpIgniteClient#TcpIgniteClient).

 

Do you know why this method is not executed on a usual (thick) client start?
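For context, the up-front type registration mentioned above is the standard public-API way to declare binary types in the node configuration. This is a sketch under the assumption of an Ignite 2.x classpath; "com.example.MyKey" is a hypothetical user type name, not something from this ticket:

```java
import java.util.Collections;

import org.apache.ignite.binary.BinaryTypeConfiguration;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch: pre-register a user type in BinaryConfiguration on a client node.
// "com.example.MyKey" is a hypothetical type name used for illustration.
IgniteConfiguration cfg = new IgniteConfiguration()
    .setClientMode(true)
    .setBinaryConfiguration(new BinaryConfiguration()
        .setTypeConfigurations(Collections.singleton(
            new BinaryTypeConfiguration("com.example.MyKey"))));
```

As the comment notes, even with this configuration the thick client apparently still needs the internal registerUserTypesSchema call to take effect.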

 

> Deadlock in the System Pool on Metadata processing
> --
>
> Key: IGNITE-12793
> URL: https://issues.apache.org/jira/browse/IGNITE-12793
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8, 2.7.6
>Reporter: Sergey Kosarev
>Assignee: Sergey Chugunov
>Priority: Major
> Fix For: 2.9
>
> Attachments: ignite-12793-threaddump.txt
>
>
> I've recently tried to apply Ilya's idea 
> (https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread 
> pools and tried to set the system pool size to 3 in my own tests.
>  It caused a deadlock on a client node, and I think it can happen not only 
> with such small pool values.
> The details are as follows:
>  I'm not using persistence currently (if it matters).
>  On the client node I use Ignite compute to call a job on every server node 
> (there are 3 server nodes in the tests).
> Then I found this in the logs:
>  {{[10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773]
> {grid-timeout-worker-#8}
> [WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
> task completed in last 3ms, is system thread pool size large enough?)
>  [10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]}}
> I see in the thread dumps that all 3 system pool workers are doing the same 
> thing: processing job responses:
>  {{ "sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
> waiting on condition [0x7b91d000]
>  java.lang.Thread.State: WAITING (parking)
>  at sun.misc.Unsafe.park(Native Method)
>  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
>  at 
> org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:285)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:184)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
>  at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:887)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> 

[jira] [Commented] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-04-13 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17082223#comment-17082223
 ] 

Sergey Kosarev commented on IGNITE-12793:
-

[~sergey-chugunov], sounds good. I agree with your solution.

 

  

 

> Deadlock in the System Pool on Metadata processing
> --
>
> Key: IGNITE-12793
> URL: https://issues.apache.org/jira/browse/IGNITE-12793
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8, 2.7.6
>Reporter: Sergey Kosarev
>Assignee: Sergey Chugunov
>Priority: Major
> Fix For: 2.9
>
> Attachments: ignite-12793-threaddump.txt
>
>
> I've recently tried to apply Ilya's idea 
> (https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread 
> pools and tried to set the system pool size to 3 in my own tests.
>  It caused a deadlock on a client node, and I think it can happen not only 
> with such small pool values.
> The details are as follows:
>  I'm not using persistence currently (if it matters).
>  On the client node I use Ignite compute to call a job on every server node 
> (there are 3 server nodes in the tests).
> Then I found this in the logs:
>  {{[10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773]
> {grid-timeout-worker-#8}
> [WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
> task completed in last 3ms, is system thread pool size large enough?)
>  [10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]}}
> I see in the thread dumps that all 3 system pool workers are doing the same 
> thing: processing job responses:
>  {{ "sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
> waiting on condition [0x7b91d000]
>  java.lang.Thread.State: WAITING (parking)
>  at sun.misc.Unsafe.park(Native Method)
>  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
>  at 
> org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:285)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:184)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
>  at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:887)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:306)
>  at 
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
>  at 
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
>  at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10493)
>  at 
> 

[jira] [Commented] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-04-09 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17079020#comment-17079020
 ] 

Sergey Kosarev commented on IGNITE-12793:
-

[~sergey-chugunov], can you look at this please? 

> Deadlock in the System Pool on Metadata processing
> --
>
> Key: IGNITE-12793
> URL: https://issues.apache.org/jira/browse/IGNITE-12793
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8, 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
> Attachments: ignite-12793-threaddump.txt
>
>
> I've recently tried to apply Ilya's idea 
> (https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread 
> pools and tried to set the system pool size to 3 in my own tests.
>  It caused a deadlock on a client node, and I think it can happen not only 
> with such small pool values.
> The details are as follows:
>  I'm not using persistence currently (if it matters).
>  On the client node I use Ignite compute to call a job on every server node 
> (there are 3 server nodes in the tests).
> Then I found this in the logs:
>  {{[10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773]
> {grid-timeout-worker-#8}
> [WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
> task completed in last 3ms, is system thread pool size large enough?)
>  [10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]}}
> I see in the thread dumps that all 3 system pool workers are doing the same 
> thing: processing job responses:
>  {{ "sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
> waiting on condition [0x7b91d000]
>  java.lang.Thread.State: WAITING (parking)
>  at sun.misc.Unsafe.park(Native Method)
>  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
>  at 
> org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:285)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:184)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
>  at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:887)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:306)
>  at 
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
>  at 
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
>  at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10493)
>  at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:828)
>  at 
> 

[jira] [Updated] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-04-01 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-12793:

Description: 
I've recently tried to apply Ilya's idea 
(https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread pools 
and tried to set the system pool size to 3 in my own tests.
 It caused a deadlock on a client node, and I think it can happen not only with 
such small pool values.

The details are as follows:
 I'm not using persistence currently (if it matters).
 On the client node I use Ignite compute to call a job on every server node 
(there are 3 server nodes in the tests).

Then I found this in the logs:
 {{[10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773]

{grid-timeout-worker-#8}

[WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
task completed in last 3ms, is system thread pool size large enough?)
 [10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]}}

I see in the thread dumps that all 3 system pool workers are doing the same 
thing: processing job responses:
 {{ "sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
waiting on condition [0x7b91d000]
 java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
 at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
 at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
 at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
 at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
 at 
org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:285)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:184)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
 at 
org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
 at 
org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
 at 
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
 at 
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:887)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
 at 
org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
 at 
org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:306)
 at 
org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
 at 
org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
 at 
org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10493)
 at 
org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:828)
 at 
org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1134)
 }}

As I found while analyzing this stack trace, unmarshalling a user object for 
the first time (per type) causes a binary metadata request (even though I had 
registered this type in BinaryConfiguration.setTypeConfiguration).

All these futures will be completed only after the corresponding 
MetadataResponseMessage is received and processed on the client node. But 
MetadataResponseMessage (GridTopic.TOPIC_METADATA_REQ) is also processed in 
the system pool (I see that GridIoManager#processRegularMessage routes it to 
the system pool). 
 So it causes a deadlock, as the system pool is already full.
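The starvation mechanism can be reproduced outside Ignite with a plain JDK fixed-size executor. This is a sketch with my own names (it stands in for Ignite's system pool, not its actual code): every worker blocks on a "metadata" future whose completion task is queued into the same, already-full pool, so nothing can make progress:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the deadlock described above, using a plain JDK pool.
// blockedJobs "job response" handlers park on the metadata future; the
// "metadata response" that would complete it waits behind them in the queue.
final class PoolStarvationSketch {
    static boolean starves(int poolSize, int blockedJobs) throws Exception {
        ExecutorService sysPool = Executors.newFixedThreadPool(poolSize);
        CompletableFuture<Void> metadata = new CompletableFuture<>();

        for (int i = 0; i < blockedJobs; i++)        // fill workers with "job
            sysPool.execute(() -> metadata.join());  // response" processing

        sysPool.execute(() -> metadata.complete(null)); // "metadata response"

        Thread.sleep(300);                    // grace period: still not done
        boolean starved = !metadata.isDone(); // means the pool is wedged
        metadata.complete(null);              // unpark workers for clean exit
        sysPool.shutdown();
        return starved;
    }
}
```

With a pool of 3 and 3 blocked handlers the completion task never runs (the deadlock from the ticket); with one spare worker it completes immediately.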


[jira] [Updated] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-04-01 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-12793:

Description: 
I've recently tried to apply Ilya's idea 
(https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread pools 
and tried to set the system pool size to 3 in my own tests.
 It caused a deadlock on a client node, and I think it can happen not only with 
such small pool values.

The details are as follows:
 I'm not using persistence currently (if it matters).
 On the client node I use Ignite compute to call a job on every server node 
(there are 3 server nodes in the tests).

Then I found this in the logs:
 {{[10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773]

{grid-timeout-worker-#8}

[WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
task completed in last 3ms, is system thread pool size large enough?)
 [10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]}}

I see in the thread dumps that all 3 system pool workers are doing the same 
thing: processing job responses:
 {{ "sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
waiting on condition [0x7b91d000]
 java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
 at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
 at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
 at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
 at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
 at 
org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:285)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:184)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
 at 
org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
 at 
org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
 at 
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
 at 
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:887)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
 at 
org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
 at 
org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:306)
 at 
org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
 at 
org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
 at 
org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10493)
 at 
org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:828)
 at 
org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1134)
 }}

As I found while analyzing this stack trace, unmarshalling a user object for 
the first time (per type) causes a binary metadata request (even though I had 
registered this type in BinaryConfiguration.setTypeConfiguration).

All these futures will be completed only after the corresponding 
MetadataResponseMessage is received and processed on the client node. But 
MetadataResponseMessage (GridTopic.TOPIC_METADATA_REQ) is also processed in 
the system pool (I see that GridIoManager#processRegularMessage routes it to 
the system pool). 
 So it causes a deadlock, as the system pool is already full.


[jira] [Updated] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-04-01 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-12793:

Affects Version/s: 2.8

> Deadlock in the System Pool on Metadata processing
> --
>
> Key: IGNITE-12793
> URL: https://issues.apache.org/jira/browse/IGNITE-12793
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8, 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
> Attachments: ignite-12793-threaddump.txt
>
>
> I've recently tried to apply Ilya's idea 
> (https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread 
> pools and tried to set the system pool size to 3 in my own tests.
>  It caused a deadlock on a client node, and I think it can happen not only 
> with such small pool values.
> The details are as follows:
>  I'm not using persistence currently (if it matters).
>  On the client node I use Ignite compute to call a job on every server node 
> (there are 3 server nodes in the tests).
> Then I found this in the logs:
>  {{[10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773]
> {grid-timeout-worker-#8}
> [WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
> task completed in last 3ms, is system thread pool size large enough?)
>  [10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]}}
> I see in the thread dumps that all 3 system pool workers are doing the same 
> thing: processing job responses:
>  {{ "sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
> waiting on condition [0x7b91d000]
>  java.lang.Thread.State: WAITING (parking)
>  at sun.misc.Unsafe.park(Native Method)
>  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
>  at 
> org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:285)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:184)
>  at 

[jira] [Updated] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-04-01 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-12793:

Description: 
I've recently tried to apply Ilya's idea 
(https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread pools 
and tried to set the system pool size to 3 in my own tests.
 It caused a deadlock on a client node, and I think it can happen not only with 
such small pool values.

Details are as follows:
 I'm not using persistence currently (if it matters).
 On the client node I use Ignite compute to call a job on every server node 
(there are 3 server nodes in the tests).

Then I've found in logs:
 {{[10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773] {grid-timeout-worker-#8} 
[WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
task completed in last 3ms, is system thread pool size large enough?)
 [10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]}}

I see in thread dumps that all 3 system pool workers are doing the same thing, 
processing job responses:
 {{ "sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
waiting on condition [0x7b91d000]
 java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
 at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
 at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
 at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
 at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
 at 
org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:285)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:184)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
 at 
org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
 at 
org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
 at 
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
 at 
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:887)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
 at 
org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
 at 
org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:306)
 at 
org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
 at 
org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
 at 
org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10493)
 at 
org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:828)
 at 
org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1134)
 }}

As I found while analyzing this stack trace, unmarshalling a user object for 
the first time (per type) triggers a binary metadata request (even though I've 
registered this type in BinaryConfiguration.setTypeConfiguration).

All these futures will only be completed once the corresponding 
MetadataResponseMessage is received and processed on the client node. But 
MetadataResponseMessage (GridTopic.TOPIC_METADATA_REQ) is also processed in the 
system pool (I see that GridIoManager#processRegularMessage routes it there). 
 So it causes a deadlock, as the system pool is already full.
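The wait cycle described above can be reproduced outside of Ignite with a plain fixed-size executor. The sketch below is illustrative only (the class and method names are mine, not Ignite's): three "job response" tasks park on a shared future, while the one "metadata response" task that could complete it sits queued behind them in the same pool.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDeadlockSketch {
    /**
     * Submits three tasks that block on a shared future (like job-response
     * handlers parked in GridFutureAdapter.get()) plus one task that would
     * complete it (standing in for the metadata response) -- all to the same
     * pool. Returns whether the future completed within the timeout.
     */
    static boolean completesInTime(int poolSize, long timeoutMs) {
        ExecutorService sysPool = Executors.newFixedThreadPool(poolSize);
        CompletableFuture<Void> metadata = new CompletableFuture<>();
        try {
            for (int i = 0; i < 3; i++)
                sysPool.submit(metadata::join); // occupies a worker until completion

            sysPool.submit(() -> metadata.complete(null)); // queued behind the blockers

            Thread.sleep(timeoutMs);
            return metadata.isDone();
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return metadata.isDone();
        }
        finally {
            sysPool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // With 3 workers all of them are parked and the completer never runs.
        System.out.println("pool=3 completes: " + completesInTime(3, 500));
        // A spare worker breaks the cycle.
        System.out.println("pool=4 completes: " + completesInTime(4, 500));
    }
}
```

With a pool of 3, every worker is parked in join() and the completing task never gets a thread, matching the thread dump above; a spare worker, or routing the completing message to a different pool, breaks the cycle.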


[jira] [Updated] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-04-01 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-12793:

Attachment: ignite-12793-threaddump.txt

> Deadlock in the System Pool on Metadata processing
> --
>
> Key: IGNITE-12793
> URL: https://issues.apache.org/jira/browse/IGNITE-12793
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
> Attachments: ignite-12793-threaddump.txt
>
>
> I've recently tried to apply Ilya's idea 
> (https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread 
> pools and tried to set system pool to 3 in my own tests.
>  It caused deadlock on a client node and I think it can happen not only on 
> such small pool values.
> Details are following:
>  I'm not using persistence currently (if it matters).
>  On the client node I use Ignite compute to call a job on every server node 
> (there are 3 server nodes in the tests).
> Then I've found in logs:
>  {{[10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773] {grid-timeout-worker-#8} 
> [WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
> task completed in last 3ms, is system thread pool size large enough?)
>  [10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]}}
> I see in threaddumps that all 3 system pool workers do the same - processing 
> of job responses:
>  {{ "sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
> waiting on condition [0x7b91d000]
>  java.lang.Thread.State: WAITING (parking)
>  at sun.misc.Unsafe.park(Native Method)
>  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
>  at 
> org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:285)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:184)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
>  at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:887)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:306)
>  at 
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
>  at 
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
>  at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10493)
>  at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:828)
>  at 
> 

[jira] [Commented] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-03-31 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17072220#comment-17072220
 ] 

Sergey Kosarev commented on IGNITE-12793:
-

[~sergey-chugunov], I've reproduced the problem on master.

See my commit: 
[https://github.com/macrergate/ignite/commit/9a7d2d27af30018a5f6faccb39176a35243ccfa2]

> Deadlock in the System Pool on Metadata processing
> --
>
> Key: IGNITE-12793
> URL: https://issues.apache.org/jira/browse/IGNITE-12793
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
>
> I've recently tried to apply Ilya's idea 
> (https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread 
> pools and tried to set system pool to 3 in my own tests.
>  It caused deadlock on a client node and I think it can happen not only on 
> such small pool values.
> Details are following:
>  I'm not using persistence currently (if it matters).
>  On the client node I use Ignite compute to call a job on every server node 
> (there are 3 server nodes in the tests).
> Then I've found in logs:
>  {{[10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773] {grid-timeout-worker-#8} 
> [WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
> task completed in last 3ms, is system thread pool size large enough?)
>  [10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]}}
> I see in threaddumps that all 3 system pool workers do the same - processing 
> of job responses:
>  {{ "sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
> waiting on condition [0x7b91d000]
>  java.lang.Thread.State: WAITING (parking)
>  at sun.misc.Unsafe.park(Native Method)
>  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
>  at 
> org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:285)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:184)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
>  at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:887)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:306)
>  at 
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
>  at 
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
>  at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10493)
>  at 
> 

[jira] [Commented] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-03-19 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17062448#comment-17062448
 ] 

Sergey Kosarev commented on IGNITE-12793:
-

[~sergey-chugunov], actually I've used GridGain Community Edition 8.7.10

> Deadlock in the System Pool on Metadata processing
> --
>
> Key: IGNITE-12793
> URL: https://issues.apache.org/jira/browse/IGNITE-12793
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
>
> I've recently tried to apply Ilya's idea 
> (https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread 
> pools and tried to set system pool to 3 in my own tests.
>  It caused deadlock on a client node and I think it can happen not only on 
> such small pool values.
> Details are following:
>  I'm not using persistence currently (if it matters).
>  On the client node I use Ignite compute to call a job on every server node 
> (there are 3 server nodes in the tests).
> Then I've found in logs:
>  {{[10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773] {grid-timeout-worker-#8} 
> [WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
> task completed in last 3ms, is system thread pool size large enough?)
>  [10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]}}
> I see in threaddumps that all 3 system pool workers do the same - processing 
> of job responses:
>  {{ "sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
> waiting on condition [0x7b91d000]
>  java.lang.Thread.State: WAITING (parking)
>  at sun.misc.Unsafe.park(Native Method)
>  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
>  at 
> org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:285)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:184)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
>  at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:887)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:306)
>  at 
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
>  at 
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
>  at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10493)
>  at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:828)
>  at 
> 

[jira] [Updated] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-03-17 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-12793:

Description: 
I've recently tried to apply Ilya's idea 
(https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread pools 
and tried to set the system pool size to 3 in my own tests.
 It caused a deadlock on a client node, and I think it can happen not only with 
such small pool values.

Details are as follows:
 I'm not using persistence currently (if it matters).
 On the client node I use Ignite compute to call a job on every server node 
(there are 3 server nodes in the tests).

Then I've found in logs:
 {{[10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773] {grid-timeout-worker-#8} 
[WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
task completed in last 3ms, is system thread pool size large enough?)
 [10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]}}

I see in thread dumps that all 3 system pool workers are doing the same thing, 
processing job responses:
 {{ "sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
waiting on condition [0x7b91d000]
 java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
 at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
 at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
 at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
 at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
 at 
org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:285)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:184)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
 at 
org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
 at 
org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
 at 
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
 at 
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:887)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
 at 
org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
 at 
org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
 at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
 at 
org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:306)
 at 
org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
 at 
org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
 at 
org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10493)
 at 
org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:828)
 at 
org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1134)
 }}

As I found while analyzing this stack trace, unmarshalling a user object for 
the first time (per type) triggers a binary metadata request (even though I've 
registered this type in BinaryConfiguration.setTypeConfiguration).

All these futures will only be completed once the corresponding 
MetadataResponseMessage is received and processed on the client node. But 
MetadataResponseMessage (GridTopic.TOPIC_METADATA_REQ) is also processed in the 
system pool (I see that GridIoManager#processRegularMessage routes it there). 
 So it causes a deadlock, as the system pool is already full.

 

[jira] [Updated] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-03-17 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-12793:

Description: 
I've recently tried to apply Ilya's idea 
(https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread pools 
and tried to set the system pool size to 3 in my own tests.
It caused a deadlock on a client node, and I think it can happen not only with 
such small pool values.

Details are as follows:
I'm not using persistence currently (if it matters).
On the client node I use Ignite compute to call a job on every server node 
(there are 3 server nodes in the tests).

Then I've found in logs:
{{[10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773] {grid-timeout-worker-#8} 
[WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
task completed in last 3ms, is system thread pool size large enough?)
[10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]}}

I see in thread dumps that all 3 system pool workers are doing the same thing, 
processing job responses:
{{
"sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
waiting on condition [0x7b91d000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
at 
org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:285)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:184)
at 
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
at 
org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
at 
org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
at 
org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
at 
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
at 
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:887)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
at 
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
at 
org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
at 
org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
at 
org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:306)
at 
org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
at 
org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
at 
org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10493)
at 
org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:828)
at 
org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1134)
}}

As I found analyzing this stack trace, unmarshalling a user object  the first 

[jira] [Created] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-03-17 Thread Sergey Kosarev (Jira)
Sergey Kosarev created IGNITE-12793:
---

 Summary: Deadlock in the System Pool on Metadata processing
 Key: IGNITE-12793
 URL: https://issues.apache.org/jira/browse/IGNITE-12793
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.7.6
Reporter: Sergey Kosarev






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-03-09 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055595#comment-17055595
 ] 

Sergey Kosarev commented on IGNITE-12549:
-

[~Pavlukhin], thank you! Shouldn't it go to 2.8.1 also? 

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Critical
> Fix For: 2.9
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Case 1
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start server node 2 
> 4. immediately execute scan query on the replicated cache (or just iterate 
> the cache) on node 2
> It can get empty or partial results (if rebalance on node 2 is not finished).
> Case 2
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start client node 2
> 4. start server node 3 
> 5. immediately execute scan query on the replicated cache (or just iterate 
> the cache) on the client node 2
> It can get empty or partial results (if rebalance on node 3 is not finished 
> and the query is mapped to node 3).
> It looks like problem in the 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()
> case REPLICATED:
> if (prj != null || part != null)
> return nodes(cctx, prj, part);
> if (cctx.affinityNode())
> return *Collections.singletonList(cctx.localNode())*;
> Collection affNodes = nodes(cctx, null, null);
> return affNodes.isEmpty() ? affNodes : 
> *Collections.singletonList(F.rand(affNodes))*;
> case PARTITIONED:
> return nodes(cctx, prj, part);
>  which is executed in 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery.
> If executed on a just started node it obviously returns the local node 
> disregarding was it rebalanced or not.
> If executed on a client it returns a random affinity node, so it also can be 
> not yet rebalanced node.
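The selection flaw described above can be sketched outside Ignite. The sketch below is hypothetical: QueryNode, pickAnyAffinityNode and pickRebalancedNode are stand-in names, not Ignite internals; the point is only that a correct selector for a replicated-cache scan must skip nodes that are still rebalancing.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class ReplicatedScanMapping {
    // Stand-in for a cluster node; 'rebalanced' models "no MOVING partitions".
    static final class QueryNode {
        final String id;
        final boolean rebalanced;

        QueryNode(String id, boolean rebalanced) {
            this.id = id;
            this.rebalanced = rebalanced;
        }
    }

    // Mirrors the reported behavior: any affinity node may be picked,
    // including one whose partitions are still being rebalanced after a join.
    static QueryNode pickAnyAffinityNode(List<QueryNode> affNodes, Random rnd) {
        return affNodes.get(rnd.nextInt(affNodes.size()));
    }

    // A possible fix: restrict the choice to fully rebalanced owners.
    static QueryNode pickRebalancedNode(List<QueryNode> affNodes, Random rnd) {
        List<QueryNode> owners = new ArrayList<>();
        for (QueryNode n : affNodes)
            if (n.rebalanced)
                owners.add(n);
        if (owners.isEmpty())
            throw new IllegalStateException("No fully rebalanced owner available");
        return owners.get(rnd.nextInt(owners.size()));
    }

    public static void main(String[] args) {
        List<QueryNode> nodes = new ArrayList<>();
        nodes.add(new QueryNode("node1", true));   // original server: has all entries
        nodes.add(new QueryNode("node3", false));  // just joined: rebalance in progress
        Random rnd = new Random(42);
        for (int i = 0; i < 100; i++) {
            QueryNode picked = pickRebalancedNode(nodes, rnd);
            if (!picked.rebalanced)
                throw new AssertionError("scan mapped to a rebalancing node");
        }
        System.out.println("all scans mapped to node1");
    }
}
```

With pickAnyAffinityNode the loop would intermittently select node3 and return partial results, which is exactly the race the two reproduction cases describe.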



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-02-28 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17047890#comment-17047890
 ] 

Sergey Kosarev commented on IGNITE-12549:
-

[~Pavlukhin], please take a look: there are no blockers here. Do you give the final approval?

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Critical
> Fix For: 2.9, 2.8.1
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>





[jira] [Commented] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-02-27 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17046772#comment-17046772
 ] 

Sergey Kosarev commented on IGNITE-12549:
-

[~Pavlukhin], thanks, I've made the changes and replied; the TC run is in progress.

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Critical
> Fix For: 2.9, 2.8.1
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>





[jira] [Commented] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-02-19 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17039789#comment-17039789
 ] 

Sergey Kosarev commented on IGNITE-12549:
-

[~Pavlukhin], no problem at all, it can wait. Just don't forget about me.

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Critical
> Fix For: 2.8
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>





[jira] [Comment Edited] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-02-18 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17038899#comment-17038899
 ] 

Sergey Kosarev edited comment on IGNITE-12549 at 2/18/20 8:54 AM:
--

[~Pavlukhin], I've updated the PR and replied to you; a TC run is in progress. Please 
check my changes: [PR-7277|https://github.com/apache/ignite/pull/7277].


was (Author: macrergate):
[~Pavlukhin], I've updated PR and replied to you, please check 
[PR-7277|https://github.com/apache/ignite/pull/7277].

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Critical
> Fix For: 2.8
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>





[jira] [Commented] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-02-18 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17038899#comment-17038899
 ] 

Sergey Kosarev commented on IGNITE-12549:
-

[~Pavlukhin], I've updated the PR and replied to you; please check 
[PR-7277|https://github.com/apache/ignite/pull/7277].

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Critical
> Fix For: 2.8
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>





[jira] [Commented] (IGNITE-8414) In-memory cache should use BLAT as their Affinity Topology

2020-02-12 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035180#comment-17035180
 ] 

Sergey Kosarev commented on IGNITE-8414:


[~EdShangGG], please see [dev 
list|http://apache-ignite-developers.2346864.n4.nabble.com/DISCUSSION-Deprecation-of-obsolete-rebalancing-functionality-td45824.html]

Has it really been implemented?



> In-memory cache should use BLAT as their Affinity Topology
> --
>
> Key: IGNITE-8414
> URL: https://issues.apache.org/jira/browse/IGNITE-8414
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Eduard Shangareev
>Assignee: Eduard Shangareev
>Priority: Major
>  Labels: IEP-4, Phase-2
>
> Now in-memory caches use all active server nodes as affinity topology and it 
> changes with each node join and exit. What differs from persistent caches 
> behavior which uses BLAT (BaseLine Affinity Topology) as their affinity 
> topology.
> It causes problems:
> - we lose (in general) co-location between different caches;
> - we can't avoid PME when a non-BLAT node joins cluster;
> - implementation should consider 2 different approaches to affinity
> calculation.
> To handle these problems we should make in-memory and persistent cache work 
> similar.





[jira] [Commented] (IGNITE-12662) Get rid of CacheConfiguration#getRebalanceDelay and related functionality.

2020-02-12 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035162#comment-17035162
 ] 

Sergey Kosarev commented on IGNITE-12662:
-

[~ascherbakov], I believe it can't be done until in-memory caches use the 
baseline topology, i.e. until 
[IGNITE-8414|https://issues.apache.org/jira/browse/IGNITE-8414] is implemented. 
Do you agree?

> Get rid of CacheConfiguration#getRebalanceDelay and related functionality.
> --
>
> Key: IGNITE-12662
> URL: https://issues.apache.org/jira/browse/IGNITE-12662
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Scherbakov
>Priority: Major
> Fix For: 2.9
>
>
> We have for a long time this property to mitigate a case with premature 
> rebalancing on node restart.
> Currently this is handled by baseline topology.
> I suggest to deprecate and remove related functionality in next releases.
> For example org.apache.ignite.IgniteCache#rebalance is no longer needed as 
> well.





[jira] [Assigned] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-22 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev reassigned IGNITE-12549:
---

Assignee: Sergey Kosarev

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Critical
> Fix For: 2.8
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>





[jira] [Commented] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-22 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17021095#comment-17021095
 ] 

Sergey Kosarev commented on IGNITE-12549:
-

[~Pavlukhin], thanks. Please check my reply there.

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Critical
> Fix For: 2.8
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>





[jira] [Commented] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-21 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17020154#comment-17020154
 ] 

Sergey Kosarev commented on IGNITE-12549:
-

[~Pavlukhin], can you please review my PR?

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Critical
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>





[jira] [Comment Edited] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-20 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019552#comment-17019552
 ] 

Sergey Kosarev edited comment on IGNITE-12549 at 1/20/20 3:20 PM:
--

[~Pavlukhin], there is a difference.

I've found a second problem, in the method 
{{org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl#projection}}.

Here is the stack trace:

{noformat}
projection:628, IgniteCacheProxyImpl (org.apache.ignite.internal.processors.cache)
query:809, IgniteCacheProxyImpl (org.apache.ignite.internal.processors.cache)
query:412, GatewayProtectedCacheProxy (org.apache.ignite.internal.processors.cache)
{noformat}

It looks like IgniteCacheProxyImpl#projection somewhat duplicates the logic of 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes(), 
and both have the same error: they lack the check for moving partitions.

 

 


was (Author: macrergate):
[~Pavlukhin], there is a difference. 

I've found the second problem in the method: 
{{org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl#projection}}

there is stacktrace:

projection:628, IgniteCacheProxyImpl 
(org.apache.ignite.internal.processors.cache)
query:809, IgniteCacheProxyImpl (org.apache.ignite.internal.processors.cache)
query:412, GatewayProtectedCacheProxy 
(org.apache.ignite.internal.processors.cache)

 

it looks like {{IgniteCacheProxyImpl#projection somewhat duplicates logic of 
}}org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()

and both have the same error - lacks the check for moving partitions.

 

 

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Critical
> Fix For: 2.8
>
>





[jira] [Commented] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-20 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019552#comment-17019552
 ] 

Sergey Kosarev commented on IGNITE-12549:
-

[~Pavlukhin], there is a difference.

I've found a second problem, in the method 
{{org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl#projection}}.

Here is the stack trace:

{noformat}
projection:628, IgniteCacheProxyImpl (org.apache.ignite.internal.processors.cache)
query:809, IgniteCacheProxyImpl (org.apache.ignite.internal.processors.cache)
query:412, GatewayProtectedCacheProxy (org.apache.ignite.internal.processors.cache)
{noformat}

It looks like IgniteCacheProxyImpl#projection somewhat duplicates the logic of 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes(), 
and both have the same error: they lack the check for moving partitions.
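When the same filter lives in two places, it is easy to fix one and miss the other. A common remedy is to extract the check into a single shared predicate used by both call sites. The sketch below is hypothetical (SharedNodeFilter, Node, hasMovingPartitions are stand-in names, not Ignite internals); it only illustrates the design choice:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

// Hypothetical sketch: both query entry points (analogues of
// IgniteCacheProxyImpl#projection and GridCacheQueryAdapter#nodes)
// reuse one predicate, so the moving-partitions check cannot be
// forgotten in just one of them.
public class SharedNodeFilter {
    record Node(String id, boolean hasMovingPartitions) {}

    // Single source of truth for "is this node safe to scan".
    static final Predicate<Node> FULLY_REBALANCED = n -> !n.hasMovingPartitions();

    // Analogue of projection(): node choice on the proxy/client path.
    static Optional<Node> projectionNode(List<Node> affNodes) {
        return affNodes.stream().filter(FULLY_REBALANCED).findFirst();
    }

    // Analogue of nodes(): node choice when mapping the scan query.
    static Optional<Node> queryMappingNode(List<Node> affNodes) {
        return affNodes.stream().filter(FULLY_REBALANCED).findFirst();
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(new Node("n1", false), new Node("n2", true));
        // Both call sites agree because they share the predicate.
        System.out.println(projectionNode(nodes).map(Node::id).orElse("none"));
        System.out.println(queryMappingNode(nodes).map(Node::id).orElse("none"));
    }
}
```

Both calls select n1 here, since n2 still has moving partitions; a divergent copy of the filter in one method is exactly the failure mode this comment describes.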

 

 

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Critical
> Fix For: 2.8
>
>





[jira] [Commented] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016959#comment-17016959
 ] 

Sergey Kosarev commented on IGNITE-12549:
-

[~ascherbakov], thanks for the workaround, I see.
As for the fix you suggested, I agree it can fix case 1, but how do we fix 
case 2, where the scan query is executed from a client node? 
Can the client check that a remote node doesn't have moving partitions?


> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
>





[jira] [Updated] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-12549:

Description: 
Case 1
1. Start server node 1.
2. Create and fill a replicated cache with CacheRebalanceMode.ASYNC (the default).
3. Start server node 2.
4. Immediately execute a scan query on the replicated cache (or just iterate the 
cache) on node 2.
It can get empty or partial results (if rebalance on node 2 is not finished).

Case 2
1. Start server node 1.
2. Create and fill a replicated cache with CacheRebalanceMode.ASYNC (the default).
3. Start client node 2.
4. Start server node 3.
5. Immediately execute a scan query on the replicated cache (or just iterate the 
cache) on the client node 2.
It can get empty or partial results (if rebalance on node 3 is not finished 
and the query is mapped to node 3).

The problem appears to be in 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes():

{noformat}
case REPLICATED:
    if (prj != null || part != null)
        return nodes(cctx, prj, part);

    if (cctx.affinityNode())
        return Collections.singletonList(cctx.localNode());

    Collection<ClusterNode> affNodes = nodes(cctx, null, null);

    return affNodes.isEmpty() ? affNodes :
        Collections.singletonList(F.rand(affNodes));

case PARTITIONED:
    return nodes(cctx, prj, part);
{noformat}

which is executed from 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery.

If executed on a just-started server node, it returns the local node regardless 
of whether that node has finished rebalancing.

If executed on a client, it returns a random affinity node, which may likewise 
be a not-yet-rebalanced node.






  was:
Case 1
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start server node 2
4. immediately execute a scan query on the replicated cache (or just iterate the cache) on node 2
It can get empty or partial results (if rebalance on node 2 is not finished).

Case 2
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start client node 2
4. start server node 3
5. immediately execute a scan query on the replicated cache (or just iterate the cache) on the client node 2
It can get empty or partial results (if rebalance on node 3 is not finished and the query is mapped onto node 3).

It looks like the problem is in
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes():

case REPLICATED:
    if (prj != null || part != null)
        return nodes(cctx, prj, part);

    if (cctx.affinityNode())
        return *Collections.singletonList(cctx.localNode())*;

    Collection<ClusterNode> affNodes = nodes(cctx, null, null);

    return affNodes.isEmpty() ? affNodes :
        *Collections.singletonList(F.rand(affNodes))*;

case PARTITIONED:
    return nodes(cctx, prj, part);

which is executed in
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery.

If executed on a just-started node, it obviously returns the local node regardless of whether it has been rebalanced or not.

If executed on a client, it returns a random affinity node, which may also be a not-yet-rebalanced node.







> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
>
> Case 1
> 1. start server node 1
> 2. create and fill a replicated cache with RebalanceMode.Async (the default)
> 3. start server node 2
> 4. immediately execute a scan query on the replicated cache (or just iterate the cache) on node 2
> It can get empty or partial results (if rebalance on node 2 is not finished).
> Case 2
> 1. start server node 1
> 2. create and fill a replicated cache with RebalanceMode.Async (the default)
> 3. start client node 2
> 4. start server node 3
> 5. immediately execute a scan query on the replicated cache (or just iterate the cache) on the client node 2
> It can get empty or partial results (if rebalance on node 3 is not finished and the query is mapped onto node 3).
> It looks like the problem is in 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes():
> case REPLICATED:
>     if (prj != null || part != null)
>         return nodes(cctx, prj, part);
>     if (cctx.affinityNode())
>         return *Collections.singletonList(cctx.localNode())*;
> 

[jira] [Updated] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-12549:

Description: 
Case 1
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start server node 2
4. immediately execute a scan query on the replicated cache (or just iterate the cache) on node 2
It can get empty or partial results (if rebalance on node 2 is not finished).

Case 2
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start client node 2
4. start server node 3
5. immediately execute a scan query on the replicated cache (or just iterate the cache) on the client node 2
It can get empty or partial results (if rebalance on node 3 is not finished and the query is mapped onto node 3).

It looks like the problem is in
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes():

case REPLICATED:
    if (prj != null || part != null)
        return nodes(cctx, prj, part);

    if (cctx.affinityNode())
        return *Collections.singletonList(cctx.localNode())*;

    Collection<ClusterNode> affNodes = nodes(cctx, null, null);

    return affNodes.isEmpty() ? affNodes :
        *Collections.singletonList(F.rand(affNodes))*;

case PARTITIONED:
    return nodes(cctx, prj, part);

which is executed in
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery.

If executed on a just-started node, it obviously returns the local node regardless of whether it has been rebalanced or not.

If executed on a client, it returns a random affinity node, which may also be a not-yet-rebalanced node.






  was:
Case 1
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start server node 2
4. immediately execute a scan query on the replicated cache (or just iterate the cache) on node 2
It can get empty or partial results (if rebalance on node 2 is not finished).

Case 2
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start client node 2
4. start server node 3
5. immediately execute a scan query on the replicated cache (or just iterate the cache) on the client node 2
It can get empty or partial results (if rebalance on node 3 is not finished and the query is mapped onto node 3).

It looks like the problem is in
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes():

case REPLICATED:
    if (prj != null || part != null)
        return nodes(cctx, prj, part);

    if (cctx.affinityNode())
        return *Collections.singletonList(cctx.localNode())*;

    Collection<ClusterNode> affNodes = nodes(cctx, null, null);

    return affNodes.isEmpty() ? affNodes :
        *Collections.singletonList(F.rand(affNodes))*;

case PARTITIONED:
    return nodes(cctx, prj, part);

which is executed in
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery.

If executed on a just-started node, it obviously returns the local node regardless of whether it has been rebalanced or not.

If executed on a client, it returns a random affinity node, which may also be a not-yet-rebalanced node.







> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
>
> Case 1
> 1. start server node 1
> 2. create and fill a replicated cache with RebalanceMode.Async (the default)
> 3. start server node 2
> 4. immediately execute a scan query on the replicated cache (or just iterate the cache) on node 2
> It can get empty or partial results (if rebalance on node 2 is not finished).
> Case 2
> 1. start server node 1
> 2. create and fill a replicated cache with RebalanceMode.Async (the default)
> 3. start client node 2
> 4. start server node 3
> 5. immediately execute a scan query on the replicated cache (or just iterate the cache) on the client node 2
> It can get empty or partial results (if rebalance on node 3 is not finished and the query is mapped onto node 3).
> It looks like the problem is in 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes():
> case REPLICATED:
>     if (prj != null || part != null)
>         return nodes(cctx, prj, part);
>     if (cctx.affinityNode())
>         return *Collections.singletonList(cctx.localNode())*;
>

[jira] [Updated] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-12549:

Description: 
Case 1
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start server node 2
4. immediately execute a scan query on the replicated cache (or just iterate the cache) on node 2
It can get empty or partial results (if rebalance on node 2 is not finished).

Case 2
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start client node 2
4. start server node 3
5. immediately execute a scan query on the replicated cache (or just iterate the cache) on the client node 2
It can get empty or partial results (if rebalance on node 3 is not finished and the query is mapped onto node 3).

It looks like the problem is in
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes():

case REPLICATED:
    if (prj != null || part != null)
        return nodes(cctx, prj, part);

    if (cctx.affinityNode())
        return *Collections.singletonList(cctx.localNode())*;

    Collection<ClusterNode> affNodes = nodes(cctx, null, null);

    return affNodes.isEmpty() ? affNodes :
        *Collections.singletonList(F.rand(affNodes))*;

case PARTITIONED:
    return nodes(cctx, prj, part);

which is executed in
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery.

If executed on a just-started node, it obviously returns the local node regardless of whether it has been rebalanced or not.

If executed on a client, it returns a random affinity node, which may also be a not-yet-rebalanced node.






  was:
Case 1
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start server node 2
4. immediately execute a scan query on the replicated cache (or just iterate the cache) on node 2
It can get empty or partial results (if rebalance on node 2 is not finished).

Case 2
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start client node 2
4. start server node 3
5. immediately execute a scan query on the replicated cache (or just iterate the cache) on the client node 2
It can get empty or partial results (if rebalance on node 3 is not finished and the query is mapped onto node 3).

It looks like the problem is in
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes():

case REPLICATED:
    if (prj != null || part != null)
        return nodes(cctx, prj, part);

    if (cctx.affinityNode())
        return *Collections.singletonList(cctx.localNode())*;

    Collection<ClusterNode> affNodes = nodes(cctx, null, null);

    return affNodes.isEmpty() ? affNodes :
        *Collections.singletonList(F.rand(affNodes))*;

case PARTITIONED:
    return nodes(cctx, prj, part);

 




> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
>
> Case 1
> 1. start server node 1
> 2. create and fill a replicated cache with RebalanceMode.Async (the default)
> 3. start server node 2
> 4. immediately execute a scan query on the replicated cache (or just iterate the cache) on node 2
> It can get empty or partial results (if rebalance on node 2 is not finished).
> Case 2
> 1. start server node 1
> 2. create and fill a replicated cache with RebalanceMode.Async (the default)
> 3. start client node 2
> 4. start server node 3
> 5. immediately execute a scan query on the replicated cache (or just iterate the cache) on the client node 2
> It can get empty or partial results (if rebalance on node 3 is not finished and the query is mapped onto node 3).
> It looks like the problem is in 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes():
> case REPLICATED:
>     if (prj != null || part != null)
>         return nodes(cctx, prj, part);
>     if (cctx.affinityNode())
>         return *Collections.singletonList(cctx.localNode())*;
>     Collection<ClusterNode> affNodes = nodes(cctx, null, null);
>     return affNodes.isEmpty() ? affNodes :
>         *Collections.singletonList(F.rand(affNodes))*;
> case PARTITIONED:
>     return nodes(cctx, prj, part);
> which is executed in 
> 

[jira] [Created] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Sergey Kosarev (Jira)
Sergey Kosarev created IGNITE-12549:
---

 Summary: Scan query/iterator on a replicated cache may get wrong 
results
 Key: IGNITE-12549
 URL: https://issues.apache.org/jira/browse/IGNITE-12549
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.7.6
Reporter: Sergey Kosarev


Case 1
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start server node 2
4. immediately execute a scan query on the replicated cache (or just iterate the cache) on node 2
It can get empty or partial results (if rebalance on node 2 is not finished).

Case 2
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start client node 2
4. start server node 3
5. immediately execute a scan query on the replicated cache (or just iterate the cache) on the client node 2
It can get empty or partial results (if rebalance on node 3 is not finished and the query is mapped onto node 3).

It looks like the problem is in
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes():

case REPLICATED:
    if (prj != null || part != null)
        return nodes(cctx, prj, part);

    if (cctx.affinityNode())
        return *Collections.singletonList(cctx.localNode())*;

    Collection<ClusterNode> affNodes = nodes(cctx, null, null);

    return affNodes.isEmpty() ? affNodes :
        *Collections.singletonList(F.rand(affNodes))*;

case PARTITIONED:
    return nodes(cctx, prj, part);

 







[jira] [Updated] (IGNITE-11909) Cache.invokeAll() returns a map with BinaryObjects as keys

2019-06-24 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11909:

Description: 
Preconditions:
1) AtomicityMode.Transactional
2) Key is a custom object (i.e. MyKey)

cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, but 
keys 
processed on remote node(s) are returned not unwrapped (as BinaryObject), so we can 
get a map with mixed keys:

{code}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=151593342, hash=31459296, i=2]
key.class = MyKey, key = MyKey{i=7}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=405215542, hash=31638042, i=8]
key.class = MyKey, key = MyKey{i=1}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=1617838096, hash=31548669, i=5]
key.class = MyKey, key = MyKey{i=0}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=138776324, hash=31578460, i=6]
key.class = MyKey, key = MyKey{i=9}
key.class = MyKey, key = MyKey{i=4}
{code}

Reproducer :

{code}
public class CacheEntryProcessorExample2 {
    /** Cache name. */
    private static final String CACHE_NAME = CacheEntryProcessorExample2.class.getSimpleName();

    /** Number of keys. */
    private static final int KEY_CNT = 10;

    /** Set of predefined keys. */
    private static final Set<MyKey> KEYS_SET;

    /**
     * Initializes keys set that is used in bulked operations in the example.
     */
    static {
        KEYS_SET = new HashSet<>();

        for (int i = 0; i < KEY_CNT; i++)
            KEYS_SET.add(new MyKey(i));
    }

    /**
     * Executes example.
     *
     * @param args Command line arguments, none required.
     * @throws IgniteException If example execution failed.
     */
    public static void main(String[] args) throws IgniteException {
        try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
            CacheConfiguration<MyKey, Integer> ccfg = new CacheConfiguration<MyKey, Integer>()
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
                .setName(CACHE_NAME);

            // Auto-close cache at the end of the example.
            try (IgniteCache<MyKey, Integer> cache = ignite.getOrCreateCache(ccfg)) {
                Map<MyKey, EntryProcessorResult<Integer>> map = cache.invokeAll(KEYS_SET, (entry, object) -> {
                    System.out.println("entry.key = " + entry.getKey());

                    return entry.getKey().getI();
                });

                map.entrySet().forEach(e -> {
                    Object key = e.getKey();
                    System.out.println("key.class = " + key.getClass().getSimpleName() + ", key = " + key);
                });

                map.entrySet().forEach(e -> {
                    Object key = e.getKey();
                    if (!(key instanceof MyKey)) {
                        throw new IllegalArgumentException("MyKey expected, but found: " + key.getClass());
                    }
                });
            }
            finally {
                // Distributed cache could be removed from cluster only by #destroyCache() call.
                ignite.destroyCache(CACHE_NAME);
            }
        }
    }

    public static class MyKey {

        private int i;

        public MyKey() {
        }

        public MyKey(int i) {
            this.i = i;
        }

        public int getI() {
            return i;
        }

        public void setI(int i) {
            this.i = i;
        }

        @Override
        public String toString() {
            return "MyKey{" +
                "i=" + i +
                '}';
        }
    }
}
{code}
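
As a side note on why mixed key classes are harmful in practice, here is a minimal pure-JDK sketch (no Ignite dependency; `BinaryKey` is a hypothetical stand-in for an unwrapped `BinaryObjectImpl`) showing that lookups by the user's key type silently miss entries stored under the binary key, because the two classes have different equals/hashCode:

```java
import java.util.HashMap;
import java.util.Map;

public class MixedKeyLookup {
    // The user's key type, as in the reproducer above.
    static class MyKey {
        final int i;
        MyKey(int i) { this.i = i; }
        @Override public boolean equals(Object o) { return o instanceof MyKey k && k.i == i; }
        @Override public int hashCode() { return i; }
    }

    // Hypothetical stand-in for an unwrapped BinaryObjectImpl key: it wraps the
    // same logical value but is a different class with different equals/hashCode.
    static class BinaryKey {
        final int i;
        BinaryKey(int i) { this.i = i; }
        @Override public boolean equals(Object o) { return o instanceof BinaryKey k && k.i == i; }
        @Override public int hashCode() { return 31 + i; }
    }

    public static void main(String[] args) {
        Map<Object, Integer> map = new HashMap<>();
        map.put(new MyKey(7), 7);     // key processed locally: deserialized
        map.put(new BinaryKey(2), 2); // key processed remotely: left in binary form

        // Lookups with the user's key type silently miss the binary-keyed entry:
        System.out.println(map.get(new MyKey(7))); // 7
        System.out.println(map.get(new MyKey(2))); // null -- entry exists, but under BinaryKey
    }
}
```

This is why a caller iterating the result of invokeAll() cannot reliably look entries up by `MyKey` until all keys are unwrapped to the same class.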


  was:
Preconditions:
1) AtomicityMode.Transactional
2) Key is a custom object (i.e. MyKey)

cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, but 
keys 
processed on remote node(s) are returned not unwrapped (as BinaryObject), so we can 
get a map with mixed keys:

{code}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=151593342, hash=31459296, i=2]
key.class = MyKey, key = MyKey{i=7}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=405215542, hash=31638042, i=8]
key.class = MyKey, key = MyKey{i=1}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=1617838096, hash=31548669, i=5]
key.class = MyKey, key = MyKey{i=0}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=138776324, hash=31578460, i=6]
key.class = MyKey, key = MyKey{i=9}
key.class = MyKey, key = MyKey{i=4}
{code}

Reproducer :

{code}
public class CacheEntryProcessorExample2 {
/** Cache name. */
private static 

[jira] [Updated] (IGNITE-11909) Cache.invokeAll() returns a map with BinaryObjects as keys

2019-06-10 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11909:

Component/s: cache

> Cache.invokeAll() returns a map with BinaryObjects as keys
> --
>
> Key: IGNITE-11909
> URL: https://issues.apache.org/jira/browse/IGNITE-11909
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7
>Reporter: Sergey Kosarev
>Priority: Major
>  Labels: APIBug
>
> Preconditions:
> 1) AtomicityMode.Transactional
> 2) Key is a custom object (i.e. MyKey)
> cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, 
> but keys 
> processed on remote node(s) are returned not unwrapped (as BinaryObject), so we can 
> get a map with mixed keys:
> {code}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=151593342, hash=31459296, i=2]
> key.class = MyKey, key = MyKey{i=7}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=405215542, hash=31638042, i=8]
> key.class = MyKey, key = MyKey{i=1}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=1617838096, hash=31548669, i=5]
> key.class = MyKey, key = MyKey{i=0}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=138776324, hash=31578460, i=6]
> key.class = MyKey, key = MyKey{i=9}
> key.class = MyKey, key = MyKey{i=4}
> {code}
> Reproducer :
> {code}
> public class CacheEntryProcessorExample2 {
> /** Cache name. */
> private static final String CACHE_NAME = 
> CacheEntryProcessorExample2.class.getSimpleName();
> /** Number of keys. */
> private static final int KEY_CNT = 10;
> /** Set of predefined keys. */
> private static final Set<MyKey> KEYS_SET;
> /**
>  * Initializes keys set that is used in bulked operations in the example.
>  */
> static {
> KEYS_SET = new HashSet<>();
> for (int i = 0; i < KEY_CNT; i++)
> KEYS_SET.add(new MyKey(i));
> }
> /**
>  * Executes example.
>  *
>  * @param args Command line arguments, none required.
>  * @throws IgniteException If example execution failed.
>  */
> public static void main(String[] args) throws IgniteException {
> try (Ignite ignite = 
> Ignition.start("examples/config/example-ignite.xml")) {
> CacheConfiguration<MyKey, Integer> ccfg = new 
> CacheConfiguration<MyKey, Integer>()
> .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
> .setName(CACHE_NAME);
> // Auto-close cache at the end of the example.
> try (IgniteCache<MyKey, Integer> cache = 
> ignite.getOrCreateCache(ccfg)) {
> Map<MyKey, EntryProcessorResult<Integer>> map = 
> cache.invokeAll(KEYS_SET, (entry, object) -> {
> System.out.println("entry.key = " + entry.getKey());
> return entry.getKey().getI();
> });
> map.entrySet().forEach( e -> {
> Object key = e.getKey();
> System.out.println("key.class = " + 
> key.getClass().getSimpleName() + ", key = " + key);
> });
> map.entrySet().forEach( e -> {
> Object key = e.getKey();
> if (!(key instanceof MyKey)) {
> throw new IllegalArgumentException("MyKey expected, 
> but found: " + key.getClass());
> }
> });
> }
> finally {
> // Distributed cache could be removed from cluster only by 
> #destroyCache() call.
> ignite.destroyCache(CACHE_NAME);
> }
> }
> }
> public static class MyKey {
> private int i;
> public MyKey() {
> }
> public MyKey(int i) {
> this.i = i;
> }
> public int getI() {
> return i;
> }
> public void setI(int i) {
> this.i = i;
> }
> @Override
> public String toString() {
> return "MyKey{" +
> "i=" + i +
> '}';
> }
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11909) Cache.invokeAll() returns a map with BinaryObjects as keys

2019-06-10 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11909:

Ignite Flags:   (was: Docs Required)

> Cache.invokeAll() returns a map with BinaryObjects as keys
> --
>
> Key: IGNITE-11909
> URL: https://issues.apache.org/jira/browse/IGNITE-11909
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Sergey Kosarev
>Priority: Major
>  Labels: APIBug
>
> Preconditions:
> 1) AtomicityMode.Transactional
> 2) Key is a custom object (i.e. MyKey)
> cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, 
> but keys 
> processed on remote node(s) are returned not unwrapped (as BinaryObject), so we can 
> get a map with mixed keys:
> {code}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=151593342, hash=31459296, i=2]
> key.class = MyKey, key = MyKey{i=7}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=405215542, hash=31638042, i=8]
> key.class = MyKey, key = MyKey{i=1}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=1617838096, hash=31548669, i=5]
> key.class = MyKey, key = MyKey{i=0}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=138776324, hash=31578460, i=6]
> key.class = MyKey, key = MyKey{i=9}
> key.class = MyKey, key = MyKey{i=4}
> {code}
> Reproducer :
> {code}
> public class CacheEntryProcessorExample2 {
> /** Cache name. */
> private static final String CACHE_NAME = 
> CacheEntryProcessorExample2.class.getSimpleName();
> /** Number of keys. */
> private static final int KEY_CNT = 10;
> /** Set of predefined keys. */
> private static final Set<MyKey> KEYS_SET;
> /**
>  * Initializes keys set that is used in bulked operations in the example.
>  */
> static {
> KEYS_SET = new HashSet<>();
> for (int i = 0; i < KEY_CNT; i++)
> KEYS_SET.add(new MyKey(i));
> }
> /**
>  * Executes example.
>  *
>  * @param args Command line arguments, none required.
>  * @throws IgniteException If example execution failed.
>  */
> public static void main(String[] args) throws IgniteException {
> try (Ignite ignite = 
> Ignition.start("examples/config/example-ignite.xml")) {
> CacheConfiguration<MyKey, Integer> ccfg = new 
> CacheConfiguration<MyKey, Integer>()
> .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
> .setName(CACHE_NAME);
> // Auto-close cache at the end of the example.
> try (IgniteCache<MyKey, Integer> cache = 
> ignite.getOrCreateCache(ccfg)) {
> Map<MyKey, EntryProcessorResult<Integer>> map = 
> cache.invokeAll(KEYS_SET, (entry, object) -> {
> System.out.println("entry.key = " + entry.getKey());
> return entry.getKey().getI();
> });
> map.entrySet().forEach( e -> {
> Object key = e.getKey();
> System.out.println("key.class = " + 
> key.getClass().getSimpleName() + ", key = " + key);
> });
> map.entrySet().forEach( e -> {
> Object key = e.getKey();
> if (!(key instanceof MyKey)) {
> throw new IllegalArgumentException("MyKey expected, 
> but found: " + key.getClass());
> }
> });
> }
> finally {
> // Distributed cache could be removed from cluster only by 
> #destroyCache() call.
> ignite.destroyCache(CACHE_NAME);
> }
> }
> }
> public static class MyKey {
> private int i;
> public MyKey() {
> }
> public MyKey(int i) {
> this.i = i;
> }
> public int getI() {
> return i;
> }
> public void setI(int i) {
> this.i = i;
> }
> @Override
> public String toString() {
> return "MyKey{" +
> "i=" + i +
> '}';
> }
> }
> }
> {code}





[jira] [Updated] (IGNITE-11909) Cache.invokeAll() returns a map with BinaryObjects as keys

2019-06-10 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11909:

Labels: APIBug  (was: )

> Cache.invokeAll() returns a map with BinaryObjects as keys
> --
>
> Key: IGNITE-11909
> URL: https://issues.apache.org/jira/browse/IGNITE-11909
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>  Labels: APIBug
>
> Preconditions:
> 1) AtomicityMode.Transactional
> 2) Key is a custom object (i.e. MyKey)
> cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, 
> but keys 
> processed on remote node(s) are returned not unwrapped (as BinaryObject), so we can 
> get a map with mixed keys:
> {code}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=151593342, hash=31459296, i=2]
> key.class = MyKey, key = MyKey{i=7}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=405215542, hash=31638042, i=8]
> key.class = MyKey, key = MyKey{i=1}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=1617838096, hash=31548669, i=5]
> key.class = MyKey, key = MyKey{i=0}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=138776324, hash=31578460, i=6]
> key.class = MyKey, key = MyKey{i=9}
> key.class = MyKey, key = MyKey{i=4}
> {code}
> Reproducer :
> {code}
> public class CacheEntryProcessorExample2 {
> /** Cache name. */
> private static final String CACHE_NAME = 
> CacheEntryProcessorExample2.class.getSimpleName();
> /** Number of keys. */
> private static final int KEY_CNT = 10;
> /** Set of predefined keys. */
> private static final Set<MyKey> KEYS_SET;
> /**
>  * Initializes keys set that is used in bulked operations in the example.
>  */
> static {
> KEYS_SET = new HashSet<>();
> for (int i = 0; i < KEY_CNT; i++)
> KEYS_SET.add(new MyKey(i));
> }
> /**
>  * Executes example.
>  *
>  * @param args Command line arguments, none required.
>  * @throws IgniteException If example execution failed.
>  */
> public static void main(String[] args) throws IgniteException {
> try (Ignite ignite = 
> Ignition.start("examples/config/example-ignite.xml")) {
> CacheConfiguration<MyKey, Integer> ccfg = new 
> CacheConfiguration<MyKey, Integer>()
> .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
> .setName(CACHE_NAME);
> // Auto-close cache at the end of the example.
> try (IgniteCache<MyKey, Integer> cache = 
> ignite.getOrCreateCache(ccfg)) {
> Map<MyKey, EntryProcessorResult<Integer>> map = 
> cache.invokeAll(KEYS_SET, (entry, object) -> {
> System.out.println("entry.key = " + entry.getKey());
> return entry.getKey().getI();
> });
> map.entrySet().forEach( e -> {
> Object key = e.getKey();
> System.out.println("key.class = " + 
> key.getClass().getSimpleName() + ", key = " + key);
> });
> map.entrySet().forEach( e -> {
> Object key = e.getKey();
> if (!(key instanceof MyKey)) {
> throw new IllegalArgumentException("MyKey expected, 
> but found: " + key.getClass());
> }
> });
> }
> finally {
> // Distributed cache could be removed from cluster only by 
> #destroyCache() call.
> ignite.destroyCache(CACHE_NAME);
> }
> }
> }
> public static class MyKey {
> private int i;
> public MyKey() {
> }
> public MyKey(int i) {
> this.i = i;
> }
> public int getI() {
> return i;
> }
> public void setI(int i) {
> this.i = i;
> }
> @Override
> public String toString() {
> return "MyKey{" +
> "i=" + i +
> '}';
> }
> }
> }
> {code}





[jira] [Updated] (IGNITE-11909) Cache.invokeAll() returns a map with BinaryObjects as keys

2019-06-10 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11909:

Affects Version/s: 2.7

> Cache.invokeAll() returns a map with BinaryObjects as keys
> --
>
> Key: IGNITE-11909
> URL: https://issues.apache.org/jira/browse/IGNITE-11909
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Sergey Kosarev
>Priority: Major
>  Labels: APIBug
>
> Preconditions:
> 1) AtomicityMode.Transactional
> 2) Key is a custom object (i.e. MyKey)
> cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, 
> but keys 
> processed on remote node(s) are returned not unwrapped (as BinaryObject), so we can 
> get a map with mixed keys:
> {code}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=151593342, hash=31459296, i=2]
> key.class = MyKey, key = MyKey{i=7}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=405215542, hash=31638042, i=8]
> key.class = MyKey, key = MyKey{i=1}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=1617838096, hash=31548669, i=5]
> key.class = MyKey, key = MyKey{i=0}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=138776324, hash=31578460, i=6]
> key.class = MyKey, key = MyKey{i=9}
> key.class = MyKey, key = MyKey{i=4}
> {code}
> Reproducer :
> {code}
> public class CacheEntryProcessorExample2 {
> /** Cache name. */
> private static final String CACHE_NAME = 
> CacheEntryProcessorExample2.class.getSimpleName();
> /** Number of keys. */
> private static final int KEY_CNT = 10;
> /** Set of predefined keys. */
> private static final Set<MyKey> KEYS_SET;
> /**
>  * Initializes keys set that is used in bulked operations in the example.
>  */
> static {
> KEYS_SET = new HashSet<>();
> for (int i = 0; i < KEY_CNT; i++)
> KEYS_SET.add(new MyKey(i));
> }
> /**
>  * Executes example.
>  *
>  * @param args Command line arguments, none required.
>  * @throws IgniteException If example execution failed.
>  */
> public static void main(String[] args) throws IgniteException {
> try (Ignite ignite = 
> Ignition.start("examples/config/example-ignite.xml")) {
> CacheConfiguration<MyKey, Integer> ccfg = new 
> CacheConfiguration<>()
> .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
> .setName(CACHE_NAME);
> // Auto-close cache at the end of the example.
> try (IgniteCache<MyKey, Integer> cache = 
> ignite.getOrCreateCache(ccfg)) {
> Map<MyKey, EntryProcessorResult<Integer>> map = 
> cache.invokeAll(KEYS_SET, (entry, object) -> {
> System.out.println("entry.key = " + entry.getKey());
> return entry.getKey().getI();
> });
> map.entrySet().forEach( e -> {
> Object key = e.getKey();
> System.out.println("key.class = " + 
> key.getClass().getSimpleName() + ", key = " + key);
> });
> map.entrySet().forEach( e -> {
> Object key = e.getKey();
> if (!(key instanceof MyKey)) {
> throw new IllegalArgumentException("MyKey expected, 
> but found: " + key.getClass());
> }
> });
> }
> finally {
> // Distributed cache could be removed from cluster only by 
> #destroyCache() call.
> ignite.destroyCache(CACHE_NAME);
> }
> }
> }
> public static class MyKey {
> private int i;
> public MyKey() {
> }
> public MyKey(int i) {
> this.i = i;
> }
> public int getI() {
> return i;
> }
> public void setI(int i) {
> this.i = i;
> }
> @Override
> public String toString() {
> return "MyKey{" +
> "i=" + i +
> '}';
> }
> }
> }
> {code}
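Until the unwrapping is fixed, a caller can normalize such a mixed map itself. The sketch below is a hedged, self-contained illustration of that workaround: the Binary interface is a minimal stand-in for Ignite's BinaryObject, and only its deserialize() call is assumed here.

```java
import java.util.HashMap;
import java.util.Map;

public class KeyNormalizer {
    /** Minimal stand-in for Ignite's BinaryObject; only deserialize() is assumed. */
    public interface Binary {
        <T> T deserialize();
    }

    /**
     * Returns a copy of {@code src} where every Binary key is replaced by its
     * deserialized form, so all keys end up in the user's key class.
     */
    @SuppressWarnings("unchecked")
    public static <K, V> Map<K, V> normalizeKeys(Map<?, V> src) {
        Map<K, V> res = new HashMap<>();

        for (Map.Entry<?, V> e : src.entrySet()) {
            Object k = e.getKey();

            // Deserialize wrapped keys, pass already-unwrapped keys through.
            K key = k instanceof Binary ? ((Binary)k).<K>deserialize() : (K)k;

            res.put(key, e.getValue());
        }

        return res;
    }
}
```

In real code the instanceof check would be against org.apache.ignite.binary.BinaryObject rather than the stand-in interface; applied to the reproducer above, the result of invokeAll could then be passed through normalizeKeys to get a map whose keys are all MyKey.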



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11909) Cache.invokeAll() returns a map with BinaryObjects as keys

2019-06-10 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11909:

Description: 
Preconditions:
1) AtomicityMode.Transactional
2) Key is a custom object (e.g. MyKey)

cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, but 
keys processed on remote node(s) are returned not unwrapped (as BinaryObject), 
so we can get a map with mixed keys:

{code}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=151593342, hash=31459296, i=2]
key.class = MyKey, key = MyKey{i=7}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=405215542, hash=31638042, i=8]
key.class = MyKey, key = MyKey{i=1}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=1617838096, hash=31548669, i=5]
key.class = MyKey, key = MyKey{i=0}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=138776324, hash=31578460, i=6]
key.class = MyKey, key = MyKey{i=9}
key.class = MyKey, key = MyKey{i=4}
{code}

Reproducer :

{code}
public class CacheEntryProcessorExample2 {
/** Cache name. */
private static final String CACHE_NAME = 
CacheEntryProcessorExample2.class.getSimpleName();

/** Number of keys. */
private static final int KEY_CNT = 10;

/** Set of predefined keys. */
private static final Set<MyKey> KEYS_SET;

/**
 * Initializes keys set that is used in bulked operations in the example.
 */
static {
KEYS_SET = new HashSet<>();

for (int i = 0; i < KEY_CNT; i++)
KEYS_SET.add(new MyKey(i));
}

/**
 * Executes example.
 *
 * @param args Command line arguments, none required.
 * @throws IgniteException If example execution failed.
 */
public static void main(String[] args) throws IgniteException {
try (Ignite ignite = 
Ignition.start("examples/config/example-ignite.xml")) {

CacheConfiguration<MyKey, Integer> ccfg = new 
CacheConfiguration<>()
.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
.setName(CACHE_NAME);

// Auto-close cache at the end of the example.
try (IgniteCache<MyKey, Integer> cache = 
ignite.getOrCreateCache(ccfg)) {
Map<MyKey, EntryProcessorResult<Integer>> map = 
cache.invokeAll(KEYS_SET, (entry, object) -> {
System.out.println("entry.key = " + entry.getKey());

return entry.getKey().getI();
});

map.entrySet().forEach( e -> {
Object key = e.getKey();
System.out.println("key.class = " + 
key.getClass().getSimpleName() + ", key = " + key);
});

map.entrySet().forEach( e -> {
Object key = e.getKey();
if (!(key instanceof MyKey)) {
throw new IllegalArgumentException("MyKey expected, but 
found: " + key.getClass());
}
});

}
finally {
// Distributed cache could be removed from cluster only by 
#destroyCache() call.
ignite.destroyCache(CACHE_NAME);
}
}
}

public static class MyKey {

private int i;

public MyKey() {
}

public MyKey(int i) {
this.i = i;
}

public int getI() {
return i;
}

public void setI(int i) {
this.i = i;
}

@Override
public String toString() {
return "MyKey{" +
"i=" + i +
'}';
}
}
}
{code}


  was:
Preconditions:
1) AtomicityMode.Transactional
2) Key is custom object. (i.e MyKey)

cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, but 
keys processed on remote node(s) are not unwrapped and return as BinaryObject, 
so we can get a map with mixed keys:

{code}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=151593342, hash=31459296, i=2]
key.class = MyKey, key = MyKey{i=7}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=405215542, hash=31638042, i=8]
key.class = MyKey, key = MyKey{i=1}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=1617838096, hash=31548669, i=5]
key.class = MyKey, key = MyKey{i=0}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=138776324, hash=31578460, i=6]
key.class = MyKey, key = MyKey{i=9}
key.class = MyKey, key = MyKey{i=4}
{code}

Reproducer :

{code}
public class CacheEntryProcessorExample2 {
/** Cache name. */
private 

[jira] [Updated] (IGNITE-11909) Cache.invokeAll() returns a map with BinaryObjects as keys

2019-06-10 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11909:

Description: 
Preconditions:
1) AtomicityMode.Transactional
2) Key is a custom object (e.g. MyKey)

cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, but 
keys processed on remote node(s) are not unwrapped and return as BinaryObject, 
so we can get a map with mixed keys:

{code}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=151593342, hash=31459296, i=2]
key.class = MyKey, key = MyKey{i=7}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=405215542, hash=31638042, i=8]
key.class = MyKey, key = MyKey{i=1}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=1617838096, hash=31548669, i=5]
key.class = MyKey, key = MyKey{i=0}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=138776324, hash=31578460, i=6]
key.class = MyKey, key = MyKey{i=9}
key.class = MyKey, key = MyKey{i=4}
{code}

Reproducer :

{code}
public class CacheEntryProcessorExample2 {
/** Cache name. */
private static final String CACHE_NAME = 
CacheEntryProcessorExample2.class.getSimpleName();

/** Number of keys. */
private static final int KEY_CNT = 10;

/** Set of predefined keys. */
private static final Set<MyKey> KEYS_SET;

/**
 * Initializes keys set that is used in bulked operations in the example.
 */
static {
KEYS_SET = new HashSet<>();

for (int i = 0; i < KEY_CNT; i++)
KEYS_SET.add(new MyKey(i));
}

/**
 * Executes example.
 *
 * @param args Command line arguments, none required.
 * @throws IgniteException If example execution failed.
 */
public static void main(String[] args) throws IgniteException {
try (Ignite ignite = 
Ignition.start("examples/config/example-ignite.xml")) {

CacheConfiguration<MyKey, Integer> ccfg = new 
CacheConfiguration<>()
.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
.setName(CACHE_NAME);

// Auto-close cache at the end of the example.
try (IgniteCache<MyKey, Integer> cache = 
ignite.getOrCreateCache(ccfg)) {
Map<MyKey, EntryProcessorResult<Integer>> map = 
cache.invokeAll(KEYS_SET, (entry, object) -> {
System.out.println("entry.key = " + entry.getKey());

return entry.getKey().getI();
});

map.entrySet().forEach( e -> {
Object key = e.getKey();
System.out.println("key.class = " + 
key.getClass().getSimpleName() + ", key = " + key);
});

map.entrySet().forEach( e -> {
Object key = e.getKey();
if (!(key instanceof MyKey)) {
throw new IllegalArgumentException("MyKey expected, but 
found: " + key.getClass());
}
});

}
finally {
// Distributed cache could be removed from cluster only by 
#destroyCache() call.
ignite.destroyCache(CACHE_NAME);
}
}
}

public static class MyKey {

private int i;

public MyKey() {
}

public MyKey(int i) {
this.i = i;
}

public int getI() {
return i;
}

public void setI(int i) {
this.i = i;
}

@Override
public String toString() {
return "MyKey{" +
"i=" + i +
'}';
}
}
}
{code}


  was:
Preconditions:
1) AtomicityMode.Transactional
2) Key is custom object. (i.e MyKey)

cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, but 
keys processed on remote node(s) are not unwrapped and return as BinaryObject, 
so we can get a map with mixed keys:

{code}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=151593342, hash=31459296, i=2]
key.class = MyKey, key = MyKey{i=7}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=405215542, hash=31638042, i=8]
key.class = MyKey, key = MyKey{i=1}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=1617838096, hash=31548669, i=5]
key.class = MyKey, key = MyKey{i=0}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=138776324, hash=31578460, i=6]
key.class = MyKey, key = MyKey{i=9}
key.class = MyKey, key = MyKey{i=4}
{code}

Reproducer :

{code}
public class CacheEntryProcessorExample2 {
/** Cache name. */

[jira] [Updated] (IGNITE-11909) Cache.invokeAll() returns a map with BinaryObjects as keys

2019-06-10 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11909:

Description: 
Preconditions:
1) AtomicityMode.Transactional
2) Key is a custom object (e.g. MyKey)

cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, but 
keys processed on remote node(s) are not unwrapped and return as BinaryObject, 
so we can get a map with mixed keys:

{code}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=151593342, hash=31459296, i=2]
key.class = MyKey, key = MyKey{i=7}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=405215542, hash=31638042, i=8]
key.class = MyKey, key = MyKey{i=1}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=1617838096, hash=31548669, i=5]
key.class = MyKey, key = MyKey{i=0}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=138776324, hash=31578460, i=6]
key.class = MyKey, key = MyKey{i=9}
key.class = MyKey, key = MyKey{i=4}
{code}

Reproducer :

{code}
public class CacheEntryProcessorExample2 {
/** Cache name. */
private static final String CACHE_NAME = 
CacheEntryProcessorExample2.class.getSimpleName();

/** Number of keys. */
private static final int KEY_CNT = 10;

/** Set of predefined keys. */
private static final Set<MyKey> KEYS_SET;

/**
 * Initializes keys set that is used in bulked operations in the example.
 */
static {
KEYS_SET = new HashSet<>();

for (int i = 0; i < KEY_CNT; i++)
KEYS_SET.add(new MyKey(i));
}

/**
 * Executes example.
 *
 * @param args Command line arguments, none required.
 * @throws IgniteException If example execution failed.
 */
public static void main(String[] args) throws IgniteException {
try (Ignite ignite = 
Ignition.start("examples/config/example-ignite.xml")) {

CacheConfiguration<MyKey, Integer> ccfg = new 
CacheConfiguration<>()
.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
.setName(CACHE_NAME);

// Auto-close cache at the end of the example.
try (IgniteCache<MyKey, Integer> cache = 
ignite.getOrCreateCache(ccfg)) {
Map<MyKey, EntryProcessorResult<Integer>> map = 
cache.invokeAll(KEYS_SET, (entry, object) -> {
System.out.println("entry.key = " + entry.getKey());

return entry.getKey().getI();
});

map.entrySet().forEach( e -> {
Object key = e.getKey();
System.out.println("key.class = " + 
key.getClass().getSimpleName() + ", key = " + key);
});

map.entrySet().forEach( e -> {
Object key = e.getKey();
if (!(key instanceof MyKey)) {
throw new IllegalArgumentException("MyKey expected, but 
found: " + key.getClass());
}
});

}
finally {
// Distributed cache could be removed from cluster only by 
#destroyCache() call.
ignite.destroyCache(CACHE_NAME);
}
}
}

public static class MyKey {

private int i;

public MyKey() {
}

public MyKey(int i) {
this.i = i;
}

public int getI() {
return i;
}

public void setI(int i) {
this.i = i;
}

@Override
public String toString() {
return "MyKey{" +
"i=" + i +
'}';
}
}
}
{code}


  was:
Preconditions:
1) AtomicityMode.Transactional
2) Key is custom object. (i.e MyKey)

cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, but 
keys processed on remote node(s) are not unwrapped and return as BinaryObject, 
so we can get a map with mixed keys:

{code}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=151593342, hash=31459296, i=2]
key.class = MyKey, key = MyKey{i=7}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=405215542, hash=31638042, i=8]
key.class = MyKey, key = MyKey{i=1}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=1617838096, hash=31548669, i=5]
key.class = MyKey, key = MyKey{i=0}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=138776324, hash=31578460, i=6]
key.class = MyKey, key = MyKey{i=9}
key.class = MyKey, key = MyKey{i=4}
{code}

Reproducer :





> Cache.invokeAll() returns a map with BinaryObjects as keys
> 

[jira] [Updated] (IGNITE-11909) Cache.invokeAll() returns a map with BinaryObjects as keys

2019-06-10 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11909:

Description: 
Preconditions:
1) AtomicityMode.Transactional
2) Key is a custom object (e.g. MyKey)

cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, but 
keys processed on remote node(s) are not unwrapped and return as BinaryObject, 
so we can get a map with mixed keys:

{code}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=151593342, hash=31459296, i=2]
key.class = MyKey, key = MyKey{i=7}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=405215542, hash=31638042, i=8]
key.class = MyKey, key = MyKey{i=1}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=1617838096, hash=31548669, i=5]
key.class = MyKey, key = MyKey{i=0}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=138776324, hash=31578460, i=6]
key.class = MyKey, key = MyKey{i=9}
key.class = MyKey, key = MyKey{i=4}
{code}

Reproducer :




  was:
Preconditions:
1) AtomicityMode.Transactional
2) Key is custom object. (i.e MyKey)

cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, but 
keys processed on remote node(s) are not unwrapped and return as BinaryObject, 
so we can get a map with mixed keys:

{code}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=151593342, hash=31459296, i=2]
key.class = MyKey, key = MyKey{i=7}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=405215542, hash=31638042, i=8]
key.class = MyKey, key = MyKey{i=1}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=1617838096, hash=31548669, i=5]
key.class = MyKey, key = MyKey{i=0}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=138776324, hash=31578460, i=6]
key.class = MyKey, key = MyKey{i=9}
key.class = MyKey, key = MyKey{i=4}
{code}

Reproducer is attached.


> Cache.invokeAll() returns a map with BinaryObjects as keys
> --
>
> Key: IGNITE-11909
> URL: https://issues.apache.org/jira/browse/IGNITE-11909
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>
> Preconditions:
> 1) AtomicityMode.Transactional
> 2) Key is a custom object (e.g. MyKey)
> cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, 
> but keys processed on remote node(s) are not unwrapped and return as 
> BinaryObject, so we can get a map with mixed keys:
> {code}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=151593342, hash=31459296, i=2]
> key.class = MyKey, key = MyKey{i=7}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=405215542, hash=31638042, i=8]
> key.class = MyKey, key = MyKey{i=1}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=1617838096, hash=31548669, i=5]
> key.class = MyKey, key = MyKey{i=0}
> key.class = BinaryObjectImpl, key = 
> org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
> [idHash=138776324, hash=31578460, i=6]
> key.class = MyKey, key = MyKey{i=9}
> key.class = MyKey, key = MyKey{i=4}
> {code}
> Reproducer :





[jira] [Created] (IGNITE-11909) Cache.invokeAll() returns a map with BinaryObjects as keys

2019-06-10 Thread Sergey Kosarev (JIRA)
Sergey Kosarev created IGNITE-11909:
---

 Summary: Cache.invokeAll() returns a map with BinaryObjects as keys
 Key: IGNITE-11909
 URL: https://issues.apache.org/jira/browse/IGNITE-11909
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Kosarev


Preconditions:
1) AtomicityMode.Transactional
2) Key is a custom object (e.g. MyKey)

cache.invokeAll() should return Map<MyKey, EntryProcessorResult<Integer>>, but 
keys processed on remote node(s) are not unwrapped and return as BinaryObject, 
so we can get a map with mixed keys:

{code}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=151593342, hash=31459296, i=2]
key.class = MyKey, key = MyKey{i=7}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=405215542, hash=31638042, i=8]
key.class = MyKey, key = MyKey{i=1}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=1617838096, hash=31548669, i=5]
key.class = MyKey, key = MyKey{i=0}
key.class = BinaryObjectImpl, key = 
org.apache.ignite.examples.datagrid.CacheEntryProcessorExample2$MyKey 
[idHash=138776324, hash=31578460, i=6]
key.class = MyKey, key = MyKey{i=9}
key.class = MyKey, key = MyKey{i=4}
{code}

Reproducer is attached.





[jira] [Updated] (IGNITE-11243) Not working control.sh / control.bat in master NPE in output

2019-02-18 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11243:

Description: 
./bin/control.sh  --host --port --baseline
Cluster state: active
Error: java.lang.NullPointerException

control.bat --host  --port  --baseline
Cluster state: active
Error: java.lang.NullPointerException
Press any key to continue . . .

No info in the cluster logs matches the call; it looks like a problem in the 
utility run.

This bug was introduced by IGNITE-8894 and is reproduced when the new utility 
runs against an old-version node.

  was:
./bin/control.sh  --host --port --baseline
Cluster state: active
Error: java.lang.NullPointerException

control.bat --host  --port  --baseline
Cluster state: active
Error: java.lang.NullPointerException
Press any key to continue . . .

No info in the cluster logs matches the call; it looks like a problem in the utility run.


> Not working control.sh / control.bat in master NPE in output
> 
>
> Key: IGNITE-11243
> URL: https://issues.apache.org/jira/browse/IGNITE-11243
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: ARomantsov
>Assignee: Sergey Kosarev
>Priority: Blocker
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ./bin/control.sh  --host --port --baseline
> Cluster state: active
> Error: java.lang.NullPointerException
> control.bat --host  --port  --baseline
> Cluster state: active
> Error: java.lang.NullPointerException
> Press any key to continue . . .
> No info in the cluster logs matches the call; it looks like a problem in the 
> utility run.
> This bug was introduced by IGNITE-8894 and is reproduced when the new utility 
> runs against an old-version node.





[jira] [Assigned] (IGNITE-11243) Not working control.sh / control.bat in master NPE in output

2019-02-16 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev reassigned IGNITE-11243:
---

Assignee: Sergey Kosarev

> Not working control.sh / control.bat in master NPE in output
> 
>
> Key: IGNITE-11243
> URL: https://issues.apache.org/jira/browse/IGNITE-11243
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: ARomantsov
>Assignee: Sergey Kosarev
>Priority: Blocker
> Fix For: 2.8
>
>
> ./bin/control.sh  --host --port --baseline
> Cluster state: active
> Error: java.lang.NullPointerException
> control.bat --host  --port  --baseline
> Cluster state: active
> Error: java.lang.NullPointerException
> Press any key to continue . . .
> No info in the cluster logs matches the call; it looks like a problem in the utility run.





[jira] [Commented] (IGNITE-10876) "Affinity changes (coordinator) applied" can be executed in parallel

2019-01-31 Thread Sergey Kosarev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757382#comment-16757382
 ] 

Sergey Kosarev commented on IGNITE-10876:
-

Overall the changes look good to me.
I'd suggest creating a shortcut doInParallel method to reduce copy-pasting of 
the parallelism and system executor service arguments, but it's optional.
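As a rough illustration of that suggestion (a self-contained sketch with assumed names, not Ignite's actual internal API), such a shortcut could hide the executor plumbing behind a single call:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.function.Consumer;

public final class ParallelUtils {
    private ParallelUtils() {
        // No instances.
    }

    /**
     * Hypothetical doInParallel shortcut: applies {@code body} to every item
     * on the given executor and waits for all tasks to finish.
     */
    public static <T> void doInParallel(ExecutorService exec, Collection<T> items,
        Consumer<T> body) throws Exception {
        List<Future<?>> futs = new ArrayList<>(items.size());

        // Submit one task per item; the executor bounds the actual parallelism.
        for (T item : items)
            futs.add(exec.submit(() -> body.accept(item)));

        // Wait for completion; get() rethrows the first task failure.
        for (Future<?> fut : futs)
            fut.get();
    }
}
```

With such a helper, the per-group topology preparation from the issue below could read roughly as one doInParallel(sysExec, cctx.cache().cacheGroups(), grp -> ...) call, instead of repeating the parallelism and executor handling at each call site.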

> "Affinity changes (coordinator) applied" can be executed in parallel
> 
>
> Key: IGNITE-10876
> URL: https://issues.apache.org/jira/browse/IGNITE-10876
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Voronkin
>Assignee: Pavel Voronkin
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> There is a for loop over all cache groups which executes N*P operations in 
> the exchange worker, where N is the number of cache groups and P is the 
> number of partitions.
> We spend 80% of the time in this loop:
> {code}
> for (CacheGroupContext grp : cctx.cache().cacheGroups()) {
>     GridDhtPartitionTopology top = grp != null ? grp.topology() :
>         cctx.exchange().clientTopology(grp.groupId(), events().discoveryCache());
>
>     top.beforeExchange(this, true, true);
> }
> {code}
> I believe we can execute it in parallel.





[jira] [Updated] (IGNITE-11044) CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Cache 8 Suite on master

2019-01-24 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11044:

Fix Version/s: 2.8

> CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Cache 8 Suite on 
> master
> 
>
> Key: IGNITE-11044
> URL: https://issues.apache.org/jira/browse/IGNITE-11044
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://ci.ignite.apache.org/viewLog.html?buildId=2880832=buildResultsDiv=IgniteTests24Java8_MvccCache8
> It looks like getHeapEntriesCount cache metrics does not work with MVCC.





[jira] [Commented] (IGNITE-11044) CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Cache 8 Suite on master

2019-01-24 Thread Sergey Kosarev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16750854#comment-16750854
 ] 

Sergey Kosarev commented on IGNITE-11044:
-

TC runs passed successfully:

https://ci.ignite.apache.org/viewLog.html?buildId=2883849=buildResultsDiv=IgniteTests24Java8_MvccCache8

https://ci.ignite.apache.org/viewLog.html?buildId=2883811=buildResultsDiv=IgniteTests24Java8_Cache8

> CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Cache 8 Suite on 
> master
> 
>
> Key: IGNITE-11044
> URL: https://issues.apache.org/jira/browse/IGNITE-11044
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://ci.ignite.apache.org/viewLog.html?buildId=2880832=buildResultsDiv=IgniteTests24Java8_MvccCache8
> It looks like getHeapEntriesCount cache metrics does not work with MVCC.





[jira] [Comment Edited] (IGNITE-11044) CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Cache 8 Suite on master

2019-01-23 Thread Sergey Kosarev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16750059#comment-16750059
 ] 

Sergey Kosarev edited comment on IGNITE-11044 at 1/23/19 3:44 PM:
--

Removed the heap cache metrics check for MVCC mode.


was (Author: macrergate):
failed test to  mute in TC.
suppose should be fixed in IGNITE-9224 

> CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Cache 8 Suite on 
> master
> 
>
> Key: IGNITE-11044
> URL: https://issues.apache.org/jira/browse/IGNITE-11044
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://ci.ignite.apache.org/viewLog.html?buildId=2880832=buildResultsDiv=IgniteTests24Java8_MvccCache8
> It looks like getHeapEntriesCount cache metrics does not work with MVCC.





[jira] [Updated] (IGNITE-11044) CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Cache 8 Suite on master

2019-01-23 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11044:

Summary: CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Cache 
8 Suite on master  (was: CacheMetricsEntitiesCountTest.testEnitiesCount fails 
in MVCC Suite on master)

> CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Cache 8 Suite on 
> master
> 
>
> Key: IGNITE-11044
> URL: https://issues.apache.org/jira/browse/IGNITE-11044
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://ci.ignite.apache.org/viewLog.html?buildId=2880832=buildResultsDiv=IgniteTests24Java8_MvccCache8
> It looks like getHeapEntriesCount cache metrics does not work with MVCC.





[jira] [Updated] (IGNITE-11044) CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Suite on master

2019-01-23 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11044:

Description: 
https://ci.ignite.apache.org/viewLog.html?buildId=2880832=buildResultsDiv=IgniteTests24Java8_MvccCache8

It looks like getHeapEntriesCount cache metrics does not work with MVCC.

  was:It looks like getHeapEntriesCount cache metrics does not work with MVCC.


> CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Suite on master
> 
>
> Key: IGNITE-11044
> URL: https://issues.apache.org/jira/browse/IGNITE-11044
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://ci.ignite.apache.org/viewLog.html?buildId=2880832=buildResultsDiv=IgniteTests24Java8_MvccCache8
> It looks like getHeapEntriesCount cache metrics does not work with MVCC.





[jira] [Comment Edited] (IGNITE-11044) CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Suite on master

2019-01-23 Thread Sergey Kosarev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16750059#comment-16750059
 ] 

Sergey Kosarev edited comment on IGNITE-11044 at 1/23/19 2:42 PM:
--

The failed test is to be muted in TC.
It is supposed to be fixed in IGNITE-9224.


was (Author: macrergate):
failed test to  mute in TC
suppose should be fixed in IGNITE-9224 

> CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Suite on master
> 
>
> Key: IGNITE-11044
> URL: https://issues.apache.org/jira/browse/IGNITE-11044
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> It looks like getHeapEntriesCount cache metrics does not work with MVCC.





[jira] [Assigned] (IGNITE-11044) CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Suite on master

2019-01-23 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev reassigned IGNITE-11044:
---

Assignee: Sergey Kosarev

The failed test is to be muted in TC.
It is supposed to be fixed in IGNITE-9224.

> CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Suite on master
> 
>
> Key: IGNITE-11044
> URL: https://issues.apache.org/jira/browse/IGNITE-11044
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> It looks like getHeapEntriesCount cache metrics does not work with MVCC.





[jira] [Updated] (IGNITE-11044) CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Suite on master

2019-01-23 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11044:

Labels: MakeTeamcityGreenAgain  (was: )

> CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Suite on master
> 
>
> Key: IGNITE-11044
> URL: https://issues.apache.org/jira/browse/IGNITE-11044
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>






[jira] [Updated] (IGNITE-11044) CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Suite on master

2019-01-23 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11044:

Description: It looks like getHeapEntriesCount cache metrics does not work 
with MVCC.

> CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Suite on master
> 
>
> Key: IGNITE-11044
> URL: https://issues.apache.org/jira/browse/IGNITE-11044
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Reporter: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> It looks like getHeapEntriesCount cache metrics does not work with MVCC.





[jira] [Updated] (IGNITE-11044) CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Suite on master

2019-01-23 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11044:

Component/s: mvcc

> CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Suite on master
> 
>
> Key: IGNITE-11044
> URL: https://issues.apache.org/jira/browse/IGNITE-11044
> Project: Ignite
>  Issue Type: Bug
>  Components: mvcc
>Reporter: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>






[jira] [Created] (IGNITE-11044) CacheMetricsEntitiesCountTest.testEnitiesCount fails in MVCC Suite on master

2019-01-23 Thread Sergey Kosarev (JIRA)
Sergey Kosarev created IGNITE-11044:
---

 Summary: CacheMetricsEntitiesCountTest.testEnitiesCount fails in 
MVCC Suite on master
 Key: IGNITE-11044
 URL: https://issues.apache.org/jira/browse/IGNITE-11044
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Kosarev








[jira] [Updated] (IGNITE-11036) IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest fail in master

2019-01-23 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11036:

Description: 
On my local environment (Mac OS X), IgniteTcpCommunicationHandshakeWaitSslTest 
fails 100% of the time.

{code}
junit.framework.AssertionFailedError: 
Expected :3
Actual   :1
 
at junit.framework.Assert.assertEquals(Assert.java:241)
at 
org.apache.ignite.spi.communication.tcp.IgniteTcpCommunicationHandshakeWaitTest.lambda$testHandshakeOnNodeJoining$0(IgniteTcpCommunicationHandshakeWaitTest.java:99)
at 
org.apache.ignite.testframework.GridTestUtils.lambda$runAsync$2(GridTestUtils.java:1009)
{code}

I've investigated and found that the test overrides TcpDiscoverySpi but does 
not set the ipFinder on the new object. Setting the ipFinder solves the problem.
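The fix can be sketched as follows. This is a minimal illustration rather than the actual patch; cfg and the shared ipFinder instance are assumed to come from the surrounding test-framework code:

```java
// Sketch only: 'cfg' and the shared ipFinder come from the surrounding
// test-framework code and are assumed here, not taken from the patch.
TcpDiscoveryVmIpFinder sharedIpFinder = new TcpDiscoveryVmIpFinder(true);

TcpDiscoverySpi spi = new TcpDiscoverySpi() {
    // test-specific overrides that motivated replacing the SPI
};

// Without this call the new SPI instance falls back to its own default
// ipFinder, so the test nodes never discover each other.
spi.setIpFinder(sharedIpFinder);

cfg.setDiscoverySpi(spi);
```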

  was:
On my local environment (Mac OS X) IgniteTcpCommunicationHandshakeWaitSslTest 
fails in 100% rate.
I've investigated and found that the test overrides TcpDiscoverySpi but does 
not set ipFinder to the new object. Setting ipFinder solves the problem.


> IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest
>  fail in master 
> --
>
> Key: IGNITE-11036
> URL: https://issues.apache.org/jira/browse/IGNITE-11036
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>
> On my local environment (Mac OS X), IgniteTcpCommunicationHandshakeWaitSslTest 
> fails 100% of the time.
> {code}
> junit.framework.AssertionFailedError: 
> Expected :3
> Actual   :1
>  
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at 
> org.apache.ignite.spi.communication.tcp.IgniteTcpCommunicationHandshakeWaitTest.lambda$testHandshakeOnNodeJoining$0(IgniteTcpCommunicationHandshakeWaitTest.java:99)
>   at 
> org.apache.ignite.testframework.GridTestUtils.lambda$runAsync$2(GridTestUtils.java:1009)
> {code}
> I've investigated and found that the test overrides TcpDiscoverySpi but does 
> not set the ipFinder on the new object. Setting the ipFinder solves the problem.





[jira] [Updated] (IGNITE-11036) IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest fail in master

2019-01-23 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11036:

Description: 
On my local environment (Mac OS X), IgniteTcpCommunicationHandshakeWaitSslTest 
fails 100% of the time.
I've investigated and found that the test overrides TcpDiscoverySpi but does 
not set the ipFinder on the new object. Setting the ipFinder solves the problem.

  was:
On my local environment (Mac OS X) IgniteTcpCommunicationHandshakeWaitSslTest 
fails in 100% rate.
I've investigated and found problem the test overrides TcpDiscoverySpi but does 
not set ipFinder to the new object. Setting ipFinder solves it.


> IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest
>  fail in master 
> --
>
> Key: IGNITE-11036
> URL: https://issues.apache.org/jira/browse/IGNITE-11036
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>
> On my local environment (Mac OS X), IgniteTcpCommunicationHandshakeWaitSslTest 
> fails 100% of the time.
> I've investigated and found that the test overrides TcpDiscoverySpi but does 
> not set the ipFinder on the new object. Setting the ipFinder solves the problem.





[jira] [Updated] (IGNITE-11036) IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest fail in master

2019-01-22 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11036:

Labels: MakeTeamcityGreenAgain  (was: )

> IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest
>  fail in master 
> --
>
> Key: IGNITE-11036
> URL: https://issues.apache.org/jira/browse/IGNITE-11036
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>






[jira] [Created] (IGNITE-11036) IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest fail in master

2019-01-22 Thread Sergey Kosarev (JIRA)
Sergey Kosarev created IGNITE-11036:
---

 Summary: 
IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest
 fail in master 
 Key: IGNITE-11036
 URL: https://issues.apache.org/jira/browse/IGNITE-11036
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Kosarev
Assignee: Sergey Kosarev








[jira] [Updated] (IGNITE-11036) IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest fail in master

2019-01-22 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11036:

Description: 
On my local environment (Mac OS X) IgniteTcpCommunicationHandshakeWaitSslTest 
fails in 100% rate.
I've investigated and found problem the test overrides TcpDiscoverySpi but does 
not set ipFinder to the new object. Setting ipFinder solves it.

  was:On my machine this test fails


> IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest
>  fail in master 
> --
>
> Key: IGNITE-11036
> URL: https://issues.apache.org/jira/browse/IGNITE-11036
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> On my local environment (Mac OS X), IgniteTcpCommunicationHandshakeWaitSslTest 
> fails 100% of the time.
> I've investigated and found that the test overrides TcpDiscoverySpi but does 
> not set the ipFinder on the new object. Setting the ipFinder solves it.





[jira] [Updated] (IGNITE-11036) IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest fail in master

2019-01-22 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11036:

Fix Version/s: 2.8

> IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest
>  fail in master 
> --
>
> Key: IGNITE-11036
> URL: https://issues.apache.org/jira/browse/IGNITE-11036
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>
> On my local environment (Mac OS X), IgniteTcpCommunicationHandshakeWaitSslTest 
> fails 100% of the time.
> I've investigated and found that the test overrides TcpDiscoverySpi but does 
> not set the ipFinder on the new object. Setting the ipFinder solves it.





[jira] [Updated] (IGNITE-11036) IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest fail in master

2019-01-22 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11036:

Affects Version/s: 2.8

> IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest
>  fail in master 
> --
>
> Key: IGNITE-11036
> URL: https://issues.apache.org/jira/browse/IGNITE-11036
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>
> On my local environment (Mac OS X), IgniteTcpCommunicationHandshakeWaitSslTest 
> fails 100% of the time.
> I've investigated and found that the test overrides TcpDiscoverySpi but does 
> not set the ipFinder on the new object. Setting the ipFinder solves it.





[jira] [Updated] (IGNITE-11036) IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest fail in master

2019-01-22 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-11036:

Description: On my machine this test fails

> IgniteTcpCommunicationHandshakeWaitTest/IgniteTcpCommunicationHandshakeWaitSslTest
>  fail in master 
> --
>
> Key: IGNITE-11036
> URL: https://issues.apache.org/jira/browse/IGNITE-11036
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Assignee: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> On my machine this test fails





[jira] [Comment Edited] (IGNITE-10925) Failure to submit affinity task from client node

2019-01-21 Thread Sergey Kosarev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748062#comment-16748062
 ] 

Sergey Kosarev edited comment on IGNITE-10925 at 1/21/19 4:14 PM:
--

Actually,
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream#available
is implemented as return -1;
as a result, when reading CacheMetricsSnapshot we don't read the new fields,
and that breaks
org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode#readExternal when it
reads the Map stored in the
org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode#metrics field.
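The effect can be reproduced outside Ignite. Below is a self-contained sketch: NoAvailabilityStream is a hypothetical stand-in for OptimizedObjectInputStream, and readOptionalField stands in for the availability-guarded reads in CacheMetricsSnapshot; neither name comes from the Ignite code base.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class AvailableGuard {
    /** Mimics OptimizedObjectInputStream#available, which always returns -1. */
    static class NoAvailabilityStream extends FilterInputStream {
        NoAvailabilityStream(InputStream in) { super(in); }

        @Override public int available() { return -1; }
    }

    /** Reads an optional trailing field only if the stream reports data --
     *  the kind of guard a versioned readExternal relies on. */
    static int readOptionalField(DataInputStream in) throws IOException {
        if (in.available() > 0) // never true when available() returns -1
            return in.readInt();

        return 0; // the new field is silently skipped
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeInt(42);
        byte[] bytes = bos.toByteArray();

        int direct = readOptionalField(
            new DataInputStream(new ByteArrayInputStream(bytes)));
        int wrapped = readOptionalField(
            new DataInputStream(new NoAvailabilityStream(new ByteArrayInputStream(bytes))));

        // The same bytes read fine directly but lose the field through the wrapper.
        System.out.println(direct + " " + wrapped);
    }
}
```

Run directly, this prints "42 0": the guard works over a plain stream but silently drops the field once available() lies.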



was (Author: macrergate):
Actually
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream#available
has implementation: return -1;
it results that when reading CacheMetricsSnapshot we don't read new fields and

it brokes  
org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode#readExternal when 
reading 
Map org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode#metrics 


> Failure to submit affinity task from client node
> 
>
> Key: IGNITE-10925
> URL: https://issues.apache.org/jira/browse/IGNITE-10925
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.7
>Reporter: Prasad
>Priority: Blocker
>
> Getting the following exception while submitting an affinity task from a client 
> node to a server node.
> Before submitting the affinity task, Ignite first gets the cached affinity 
> function (AffinityInfo) by submitting the cluster-wide task "AffinityJob". 
> While retrieving the output of this AffinityJob, Ignite 
> deserializes it, and that deserialization is where the exception is 
> thrown.
> Code fails while unmarshalling CacheMetricsSnapshot on the client node.
>  
> [Userlist 
> Discussion|http://apache-ignite-users.70518.x6.nabble.com/After-upgrading-2-7-getting-Unexpected-error-occurred-during-unmarshalling-td26262.html]
> [Reproducer 
> Project|https://github.com/prasadbhalerao1983/IgniteIssueReproducer.git]
>  
> Step to Reproduce:
> 1) First Run com.example.demo.Server class as a java program
> 2) Then run com.example.demo.Client as java program.
>  
> {noformat}
> 2019-01-14 15:37:02.723 ERROR 10712 --- [springDataNode%] 
> o.a.i.i.processors.task.GridTaskWorker   : Error deserializing job response: 
> GridJobExecuteResponse [nodeId=e9a24c20-0d00-4808-b2f5-13e1ce35496a, 
> sesId=76324db4861-1d85ad49-5b25-454a-b69c-d8685cfc73b0, 
> jobId=86324db4861-1d85ad49-5b25-454a-b69c-d8685cfc73b0, gridEx=null, 
> isCancelled=false, retry=null]
> org.apache.ignite.IgniteCheckedException: Failed to unmarshal object with 
> optimized marshaller
>  at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10146) 
> ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:831)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1081)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1316)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [na:1.8.0_144]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_144]
>  at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
> Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to 
> unmarshal object with optimized marshaller
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1765)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1964)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:313)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> 
[jira] [Commented] (IGNITE-10925) Failure to submit affinity task from client node

2019-01-21 Thread Sergey Kosarev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748062#comment-16748062
 ] 

Sergey Kosarev commented on IGNITE-10925:
-

Actually,
org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream#available
is implemented as return -1;
as a result, when reading CacheMetricsSnapshot we don't read the new fields,
and that breaks
org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode#readExternal when it
reads the Map stored in the
org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode#metrics field.


> Failure to submit affinity task from client node
> 
>
> Key: IGNITE-10925
> URL: https://issues.apache.org/jira/browse/IGNITE-10925
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.7
>Reporter: Prasad
>Priority: Blocker
>
> Getting the following exception while submitting an affinity task from a client 
> node to a server node.
> Before submitting the affinity task, Ignite first gets the cached affinity 
> function (AffinityInfo) by submitting the cluster-wide task "AffinityJob". 
> While retrieving the output of this AffinityJob, Ignite 
> deserializes it, and that deserialization is where the exception is 
> thrown.
> Code fails while unmarshalling CacheMetricsSnapshot on the client node.
>  
> [Userlist 
> Discussion|http://apache-ignite-users.70518.x6.nabble.com/After-upgrading-2-7-getting-Unexpected-error-occurred-during-unmarshalling-td26262.html]
> [Reproducer 
> Project|https://github.com/prasadbhalerao1983/IgniteIssueReproducer.git]
>  
> Step to Reproduce:
> 1) First Run com.example.demo.Server class as a java program
> 2) Then run com.example.demo.Client as java program.
>  
> {noformat}
> 2019-01-14 15:37:02.723 ERROR 10712 --- [springDataNode%] 
> o.a.i.i.processors.task.GridTaskWorker   : Error deserializing job response: 
> GridJobExecuteResponse [nodeId=e9a24c20-0d00-4808-b2f5-13e1ce35496a, 
> sesId=76324db4861-1d85ad49-5b25-454a-b69c-d8685cfc73b0, 
> jobId=86324db4861-1d85ad49-5b25-454a-b69c-d8685cfc73b0, gridEx=null, 
> isCancelled=false, retry=null]
> org.apache.ignite.IgniteCheckedException: Failed to unmarshal object with 
> optimized marshaller
>  at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10146) 
> ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:831)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1081)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1316)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)
>  [ignite-core-2.7.0.jar:2.7.0]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [na:1.8.0_144]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_144]
>  at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
> Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to 
> unmarshal object with optimized marshaller
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1765)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1964)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:313)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:102)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
>  ~[ignite-core-2.7.0.jar:2.7.0]
>  at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10140) 
> ~[ignite-core-2.7.0.jar:2.7.0]
>  ... 10 common frames omitted
> Caused by: 

[jira] [Commented] (IGNITE-6564) Incorrect calculation size and keySize for cluster cache metrics

2019-01-21 Thread Sergey Kosarev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747889#comment-16747889
 ] 

Sergey Kosarev commented on IGNITE-6564:


[~ilyak], I slightly changed the test method awaitMetricsUpdate, because 
EVT_NODE_METRICS_UPDATED is fired for every metrics update received from a 
node, so in the common case we need to wait for N * N messages, where N is the 
cluster size.
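The N * N bound can be sketched with a countdown latch, the way a test listener might count the events; the listener wiring itself is assumed, only the counting logic is shown:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class MetricsAwait {
    /** Every node fires EVT_NODE_METRICS_UPDATED once per metrics message it
     *  receives from each node (itself included), so a full round is N * N events. */
    static int expectedEvents(int clusterSize) {
        return clusterSize * clusterSize;
    }

    public static void main(String[] args) throws InterruptedException {
        int n = 3;
        CountDownLatch latch = new CountDownLatch(expectedEvents(n));

        // Stand-in for the event listener: each (receiver, sender) pair fires once.
        for (int receiver = 0; receiver < n; receiver++)
            for (int sender = 0; sender < n; sender++)
                latch.countDown();

        // All n * n events arrived, so the await succeeds.
        System.out.println(latch.await(1, TimeUnit.SECONDS));
    }
}
```

For a 3-node cluster this counts down 9 events and prints "true"; waiting for only N events would intermittently time out.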

> Incorrect calculation size and keySize for cluster cache metrics
> 
>
> Key: IGNITE-6564
> URL: https://issues.apache.org/jira/browse/IGNITE-6564
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.2
>Reporter: Ilya Kasnacheev
>Assignee: Sergey Kosarev
>Priority: Minor
>  Labels: iep-6
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> They are currently not passed by ring and therefore only taken from current 
> node, which returns incorrect (local) value.
> See CacheMetricsSnapshot class.





[jira] [Commented] (IGNITE-6564) Incorrect calculation size and keySize for cluster cache metrics

2019-01-18 Thread Sergey Kosarev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16746349#comment-16746349
 ] 

Sergey Kosarev commented on IGNITE-6564:


Restored the old behavior: 
https://github.com/apache/ignite/pull/5857
[~ilyak], please review.

> Incorrect calculation size and keySize for cluster cache metrics
> 
>
> Key: IGNITE-6564
> URL: https://issues.apache.org/jira/browse/IGNITE-6564
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.2
>Reporter: Ilya Kasnacheev
>Assignee: Sergey Kosarev
>Priority: Minor
>  Labels: iep-6
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> They are currently not passed by ring and therefore only taken from current 
> node, which returns incorrect (local) value.
> See CacheMetricsSnapshot class.





[jira] [Assigned] (IGNITE-6564) Incorrect calculation size and keySize for cluster cache metrics

2019-01-18 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev reassigned IGNITE-6564:
--

Assignee: Sergey Kosarev  (was: Alexand Polyakov)

> Incorrect calculation size and keySize for cluster cache metrics
> 
>
> Key: IGNITE-6564
> URL: https://issues.apache.org/jira/browse/IGNITE-6564
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.2
>Reporter: Ilya Kasnacheev
>Assignee: Sergey Kosarev
>Priority: Minor
>  Labels: iep-6
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> They are currently not passed by ring and therefore only taken from current 
> node, which returns incorrect (local) value.
> See CacheMetricsSnapshot class.





[jira] [Commented] (IGNITE-6564) Incorrect calculation size and keySize for cluster cache metrics

2019-01-18 Thread Sergey Kosarev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16746250#comment-16746250
 ] 

Sergey Kosarev commented on IGNITE-6564:


[~a-polyakov], do you mind if I fix the issues mentioned by Ilya? 

> Incorrect calculation size and keySize for cluster cache metrics
> 
>
> Key: IGNITE-6564
> URL: https://issues.apache.org/jira/browse/IGNITE-6564
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.2
>Reporter: Ilya Kasnacheev
>Assignee: Alexand Polyakov
>Priority: Minor
>  Labels: iep-6
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> They are currently not passed by ring and therefore only taken from current 
> node, which returns incorrect (local) value.
> See CacheMetricsSnapshot class.





[jira] [Updated] (IGNITE-10962) Some tests in ignite-spring module use StopOrHaltFailurehandler which can cause Halt of a Test Suite

2019-01-17 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-10962:

Description: 
If a test fails with a critical error, it causes the whole suite to fail:
 *Process exited with code 130 (Step: Run test suite (Maven))*

{code}
[org.apache.ignite:ignite-spring]   at 
org.apache.ignite.spring.injection.GridServiceInjectionSpringResourceTest.doOneTestIteration(GridServiceInjectionSpringResourceTest.java:107)
[05:15:46]  [org.apache.ignite:ignite-spring]   at 
org.apache.ignite.spring.injection.GridServiceInjectionSpringResourceTest.testDeployServiceWithSpring(GridServiceInjectionSpringResourceTest.java:92)
[05:15:46]  [org.apache.ignite:ignite-spring]   at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[05:15:46]  [org.apache.ignite:ignite-spring]   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[05:15:46]  [org.apache.ignite:ignite-spring]   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[05:15:46]  [org.apache.ignite:ignite-spring]   at 
java.lang.reflect.Method.invoke(Method.java:498)
[05:15:46]  [org.apache.ignite:ignite-spring]   at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
[05:15:46]  [org.apache.ignite:ignite-spring]   at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
[05:15:46]  [org.apache.ignite:ignite-spring]   at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
[05:15:46]  [org.apache.ignite:ignite-spring]   at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
[05:15:46]  [org.apache.ignite:ignite-spring]   at 
org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088)
[05:15:46]  [org.apache.ignite:ignite-spring]   at 
java.lang.Thread.run(Thread.java:748)
[05:15:46]  [org.apache.ignite:ignite-spring] [2019-01-16 
02:15:46,542][ERROR][exchange-worker-#7522%springTest0%][root] Critical system 
error detected. Will be handled accordingly to configured handler 
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet 
[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], 
failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class 
o.a.i.IgniteCheckedException: Node is stopping: springTest0]]
[05:15:46]  [org.apache.ignite:ignite-spring] class 
org.apache.ignite.IgniteCheckedException: Node is stopping: springTest0
{code}
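One possible mitigation, sketched under the assumption that the affected tests can set the handler on their IgniteConfiguration (NoOpFailureHandler is the stock no-op handler shipped with Ignite):

```java
// Sketch only: applied in each affected test's configuration method.
// Replaces the default StopNodeOrHaltFailureHandler so a critical error
// fails the test instead of halting the whole suite JVM.
cfg.setFailureHandler(new NoOpFailureHandler());
```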


 

> Some tests in ignite-spring module use StopOrHaltFailurehandler which can 
> cause Halt of a Test Suite
> 
>
> Key: IGNITE-10962
> URL: https://issues.apache.org/jira/browse/IGNITE-10962
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> If a test fails with a critical error, it causes the whole suite to fail:
>  *Process exited with code 130 (Step: Run test suite (Maven))*
> {code}
> [org.apache.ignite:ignite-spring] at 
> org.apache.ignite.spring.injection.GridServiceInjectionSpringResourceTest.doOneTestIteration(GridServiceInjectionSpringResourceTest.java:107)
> [05:15:46][org.apache.ignite:ignite-spring]   at 
> org.apache.ignite.spring.injection.GridServiceInjectionSpringResourceTest.testDeployServiceWithSpring(GridServiceInjectionSpringResourceTest.java:92)
> [05:15:46][org.apache.ignite:ignite-spring]   at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [05:15:46][org.apache.ignite:ignite-spring]   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> [05:15:46][org.apache.ignite:ignite-spring]   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [05:15:46][org.apache.ignite:ignite-spring]   at 
> java.lang.reflect.Method.invoke(Method.java:498)
> [05:15:46][org.apache.ignite:ignite-spring]   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> [05:15:46][org.apache.ignite:ignite-spring]   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> [05:15:46][org.apache.ignite:ignite-spring]   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> [05:15:46][org.apache.ignite:ignite-spring]   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> [05:15:46][org.apache.ignite:ignite-spring]   at 
> 

[jira] [Updated] (IGNITE-10962) Some tests in ignite-spring module use StopOrHaltFailurehandler which can cause Halt of TestSuite

2019-01-17 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-10962:

Description: In case of critical 

> Some tests in ignite-spring module use StopOrHaltFailurehandler which can 
> cause Halt of TestSuite
> -
>
> Key: IGNITE-10962
> URL: https://issues.apache.org/jira/browse/IGNITE-10962
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> In case of critical 





[jira] [Updated] (IGNITE-10962) Some tests in ignite-spring module use StopOrHaltFailurehandler which can cause Halt of a Test Suite

2019-01-17 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-10962:

Description: (was: In case of critical )

> Some tests in ignite-spring module use StopOrHaltFailurehandler which can 
> cause Halt of a Test Suite
> 
>
> Key: IGNITE-10962
> URL: https://issues.apache.org/jira/browse/IGNITE-10962
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>






[jira] [Updated] (IGNITE-10962) Some tests in ignite-spring module use StopOrHaltFailurehandler which can cause Halt of a Test Suite

2019-01-17 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-10962:

Summary: Some tests in ignite-spring module use StopOrHaltFailurehandler 
which can cause Halt of a Test Suite  (was: Some tests in ignite-spring module 
use StopOrHaltFailurehandler which can cause Halt of TestSuite)

> Some tests in ignite-spring module use StopOrHaltFailurehandler which can 
> cause Halt of a Test Suite
> 
>
> Key: IGNITE-10962
> URL: https://issues.apache.org/jira/browse/IGNITE-10962
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> In case of critical 





[jira] [Updated] (IGNITE-10962) Some tests in ignite-spring module use StopOrHaltFailurehandler which can cause Halt of TestSuite

2019-01-17 Thread Sergey Kosarev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-10962:

Labels: MakeTeamcityGreenAgain  (was: )

> Some tests in ignite-spring module use StopOrHaltFailurehandler which can 
> cause Halt of TestSuite
> -
>
> Key: IGNITE-10962
> URL: https://issues.apache.org/jira/browse/IGNITE-10962
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Kosarev
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>






[jira] [Created] (IGNITE-10962) Some tests in ignite-spring module use StopOrHaltFailurehandler which can cause Halt of TestSuite

2019-01-17 Thread Sergey Kosarev (JIRA)
Sergey Kosarev created IGNITE-10962:
---

 Summary: Some tests in ignite-spring module use 
StopOrHaltFailurehandler which can cause Halt of TestSuite
 Key: IGNITE-10962
 URL: https://issues.apache.org/jira/browse/IGNITE-10962
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Kosarev








[jira] [Commented] (IGNITE-10938) After restart cluster with non-blt nodes - they left by handler

2019-01-15 Thread Sergey Kosarev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742952#comment-16742952
 ] 

Sergey Kosarev commented on IGNITE-10938:
-

duplicates https://issues.apache.org/jira/browse/IGNITE-9739

> After restart cluster with non-blt nodes - they left by handler
> ---
>
> Key: IGNITE-10938
> URL: https://issues.apache.org/jira/browse/IGNITE-10938
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.8
>Reporter: ARomantsov
>Priority: Critical
> Fix For: 2.8
>
>
> I have a cluster whose topology contains both baseline (BLT) and non-BLT 
> nodes; after a restart, the non-BLT nodes are stopped by the failure handler:
> java.lang.IllegalStateException: Unable to find consistentId by UUID




