[jira] [Commented] (IGNITE-12542) Some tests failed due to incompatible changes in IGNITE-12108 and IGNITE-11987

2020-01-16 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016660#comment-17016660
 ] 

Ignite TC Bot commented on IGNITE-12542:


{panel:title=Branch: [pull/7261/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4930716&buildTypeId=IgniteTests24Java8_RunAll]

> Some tests failed due to incompatible changes in IGNITE-12108 and IGNITE-11987
> --
>
> Key: IGNITE-12542
> URL: https://issues.apache.org/jira/browse/IGNITE-12542
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Bessonov
>Assignee: Ivan Bessonov
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://ci.ignite.apache.org/buildConfiguration/IgniteTests24Java8_ComputeGrid?branch=%3Cdefault%3E=overview=builds]
>  
> [https://ci.ignite.apache.org/buildConfiguration/IgniteTests24Java8_Basic1?branch=%3Cdefault%3E=overview=builds]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12530) Pages list caching can cause IgniteOOME when checkpoint is triggered by "too many dirty pages" reason

2020-01-16 Thread Stanilovsky Evgeny (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016658#comment-17016658
 ] 

Stanilovsky Evgeny commented on IGNITE-12530:
-

I ran some benchmarks after applying this patch (3 servers, 1 client, 
persistence enabled, partitioned cache, WAL mode LOG_ONLY); looks good to me.
 !screenshot-1.png! 

> Pages list caching can cause IgniteOOME when checkpoint is triggered by "too 
> many dirty pages" reason
> -
>
> Key: IGNITE-12530
> URL: https://issues.apache.org/jira/browse/IGNITE-12530
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.8
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
> Attachments: screenshot-1.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When a checkpoint is triggered, we need some amount of page memory to store 
> the pages list on-heap cache.
> If the data region is too small, the checkpoint is triggered by the "too many 
> dirty pages" reason, and the pages list cache is rather big, we can get an 
> IgniteOutOfMemoryException.
> Reproducer:
> {code:java}
> @Override protected IgniteConfiguration getConfiguration(String name) throws Exception {
>     IgniteConfiguration cfg = super.getConfiguration(name);
>
>     cfg.setDataStorageConfiguration(new DataStorageConfiguration()
>         .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
>             .setPersistenceEnabled(true)
>             .setMaxSize(50 * 1024 * 1024)
>         ));
>
>     return cfg;
> }
>
> @Test
> public void testUpdatesNotFittingIntoMemoryRegion() throws Exception {
>     IgniteEx ignite = startGrid(0);
>
>     ignite.cluster().active(true);
>
>     ignite.getOrCreateCache(DEFAULT_CACHE_NAME);
>
>     try (IgniteDataStreamer streamer = ignite.dataStreamer(DEFAULT_CACHE_NAME)) {
>         for (int i = 0; i < 100_000; i++)
>             streamer.addData(i, new byte[i % 2048]);
>     }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12530) Pages list caching can cause IgniteOOME when checkpoint is triggered by "too many dirty pages" reason

2020-01-16 Thread Stanilovsky Evgeny (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanilovsky Evgeny updated IGNITE-12530:

Attachment: screenshot-1.png

> Pages list caching can cause IgniteOOME when checkpoint is triggered by "too 
> many dirty pages" reason
> -
>
> Key: IGNITE-12530
> URL: https://issues.apache.org/jira/browse/IGNITE-12530
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.8
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
> Attachments: screenshot-1.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When a checkpoint is triggered, we need some amount of page memory to store 
> the pages list on-heap cache.
> If the data region is too small, the checkpoint is triggered by the "too many 
> dirty pages" reason, and the pages list cache is rather big, we can get an 
> IgniteOutOfMemoryException.
> Reproducer:
> {code:java}
> @Override protected IgniteConfiguration getConfiguration(String name) throws Exception {
>     IgniteConfiguration cfg = super.getConfiguration(name);
>
>     cfg.setDataStorageConfiguration(new DataStorageConfiguration()
>         .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
>             .setPersistenceEnabled(true)
>             .setMaxSize(50 * 1024 * 1024)
>         ));
>
>     return cfg;
> }
>
> @Test
> public void testUpdatesNotFittingIntoMemoryRegion() throws Exception {
>     IgniteEx ignite = startGrid(0);
>
>     ignite.cluster().active(true);
>
>     ignite.getOrCreateCache(DEFAULT_CACHE_NAME);
>
>     try (IgniteDataStreamer streamer = ignite.dataStreamer(DEFAULT_CACHE_NAME)) {
>         for (int i = 0; i < 100_000; i++)
>             streamer.addData(i, new byte[i % 2048]);
>     }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12530) Pages list caching can cause IgniteOOME when checkpoint is triggered by "too many dirty pages" reason

2020-01-16 Thread Aleksey Plekhanov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016645#comment-17016645
 ] 

Aleksey Plekhanov commented on IGNITE-12530:


[~ivan.glukos] I've added this check to the existing test. Please have a look 
again.

> Pages list caching can cause IgniteOOME when checkpoint is triggered by "too 
> many dirty pages" reason
> -
>
> Key: IGNITE-12530
> URL: https://issues.apache.org/jira/browse/IGNITE-12530
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.8
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When a checkpoint is triggered, we need some amount of page memory to store 
> the pages list on-heap cache.
> If the data region is too small, the checkpoint is triggered by the "too many 
> dirty pages" reason, and the pages list cache is rather big, we can get an 
> IgniteOutOfMemoryException.
> Reproducer:
> {code:java}
> @Override protected IgniteConfiguration getConfiguration(String name) throws Exception {
>     IgniteConfiguration cfg = super.getConfiguration(name);
>
>     cfg.setDataStorageConfiguration(new DataStorageConfiguration()
>         .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
>             .setPersistenceEnabled(true)
>             .setMaxSize(50 * 1024 * 1024)
>         ));
>
>     return cfg;
> }
>
> @Test
> public void testUpdatesNotFittingIntoMemoryRegion() throws Exception {
>     IgniteEx ignite = startGrid(0);
>
>     ignite.cluster().active(true);
>
>     ignite.getOrCreateCache(DEFAULT_CACHE_NAME);
>
>     try (IgniteDataStreamer streamer = ignite.dataStreamer(DEFAULT_CACHE_NAME)) {
>         for (int i = 0; i < 100_000; i++)
>             streamer.addData(i, new byte[i % 2048]);
>     }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12542) Some tests failed due to incompatible changes in IGNITE-12108 and IGNITE-11987

2020-01-16 Thread Ivan Bessonov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016694#comment-17016694
 ] 

Ivan Bessonov commented on IGNITE-12542:


[~nizhikov] can you please review and merge?

> Some tests failed due to incompatible changes in IGNITE-12108 and IGNITE-11987
> --
>
> Key: IGNITE-12542
> URL: https://issues.apache.org/jira/browse/IGNITE-12542
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Bessonov
>Assignee: Ivan Bessonov
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://ci.ignite.apache.org/buildConfiguration/IgniteTests24Java8_ComputeGrid?branch=%3Cdefault%3E=overview=builds]
>  
> [https://ci.ignite.apache.org/buildConfiguration/IgniteTests24Java8_Basic1?branch=%3Cdefault%3E=overview=builds]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12049) Add user attributes to thin clients

2020-01-16 Thread Ryabov Dmitrii (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016764#comment-17016764
 ] 

Ryabov Dmitrii commented on IGNITE-12049:
-

[~ascherbakov], the problem is in peer class loading - it doesn't work for thin 
clients. So both the server and the client should have {{MyClass}}, and it 
should be declared in {{META-INF/classnames.properties}}. I added an 
appropriate message to the user attribute setters.

> Add user attributes to thin clients
> ---
>
> Key: IGNITE-12049
> URL: https://issues.apache.org/jira/browse/IGNITE-12049
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ryabov Dmitrii
>Assignee: Ryabov Dmitrii
>Priority: Minor
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Add user attributes to thin clients (like node attributes for server nodes). 
> Make sure that custom authenticators can use these attributes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-12549:

Description: 
Case 1
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start server node 2
4. immediately execute a scan query on the replicated cache (or just iterate the 
cache) on node 2
It can get empty or partial results (if rebalance on node 2 is not finished).

Case 2
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start client node 2
4. start server node 3
5. immediately execute a scan query on the replicated cache (or just iterate the 
cache) on the client node 2
It can get empty or partial results (if rebalance on node 2 is not finished 
and the query is mapped to node 2).

It looks like the problem is in 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()

case REPLICATED:
if (prj != null || part != null)
return nodes(cctx, prj, part);

if (cctx.affinityNode())
return *Collections.singletonList(cctx.localNode())*;

Collection affNodes = nodes(cctx, null, null);

return affNodes.isEmpty() ? affNodes : 
*Collections.singletonList(F.rand(affNodes))*;

case PARTITIONED:
return nodes(cctx, prj, part);

which is executed in 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery.

If executed on a just-started node, it obviously returns the local node 
regardless of whether it has been rebalanced or not.

If executed on a client, it returns a random affinity node, so it also can be a 
not-yet-rebalanced node.






  was:
Case 1
1.  start server node 1
2  create and fill replicated cache with RebalanceMode.Async (as by default)
3  start server node 2 
3 immediately execute scan query  on the replicated cache((or just iterate the 
cache)) on the node 2
It can get empty or partial results. (if rebalance on node 2 is not finished)

Case 2
1.  start server node 1
2  create and fill replicated cache with RebalanceMode.Async (as by default)
3 start client node 2
3  start server node 3 
3 immediately execute scan query  on the replicated cache((or just iterate the 
cache)) on the client node 2
It can get empty or partial results. (if rebalance on node 2 is not finished 
and query is mapped on the node 2)

It looks like problem in the 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()

case REPLICATED:
if (prj != null || part != null)
return nodes(cctx, prj, part);

if (cctx.affinityNode())
return *Collections.singletonList(cctx.localNode())*;

Collection affNodes = nodes(cctx, null, null);

return affNodes.isEmpty() ? affNodes : 
*Collections.singletonList(F.rand(affNodes))*;

case PARTITIONED:
return nodes(cctx, prj, part);

 




> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
>
> Case 1
> 1. start server node 1
> 2. create and fill a replicated cache with RebalanceMode.Async (the default)
> 3. start server node 2
> 4. immediately execute a scan query on the replicated cache (or just iterate 
> the cache) on node 2
> It can get empty or partial results (if rebalance on node 2 is not finished).
> Case 2
> 1. start server node 1
> 2. create and fill a replicated cache with RebalanceMode.Async (the default)
> 3. start client node 2
> 4. start server node 3
> 5. immediately execute a scan query on the replicated cache (or just iterate 
> the cache) on the client node 2
> It can get empty or partial results (if rebalance on node 2 is not finished 
> and the query is mapped to node 2).
> It looks like the problem is in 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()
> case REPLICATED:
> if (prj != null || part != null)
> return nodes(cctx, prj, part);
> if (cctx.affinityNode())
> return *Collections.singletonList(cctx.localNode())*;
> Collection affNodes = nodes(cctx, null, null);
> return affNodes.isEmpty() ? affNodes : 
> *Collections.singletonList(F.rand(affNodes))*;
> case PARTITIONED:
> return nodes(cctx, prj, part);
>  which is executed in 
> 

[jira] [Updated] (IGNITE-12400) Remove the stopProcess method from the DiscoveryCustomMessage interface

2020-01-16 Thread Amelchev Nikita (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amelchev Nikita updated IGNITE-12400:
-
Fix Version/s: 2.9

> Remove the stopProcess method from the DiscoveryCustomMessage interface
> ---
>
> Key: IGNITE-12400
> URL: https://issues.apache.org/jira/browse/IGNITE-12400
> Project: Ignite
>  Issue Type: Task
>Reporter: Amelchev Nikita
>Assignee: Amelchev Nikita
>Priority: Minor
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the {{stopProcess}} method works only if {{zookeeper discovery}} 
> is configured. It doesn't work in {{TcpDiscoverySpi}}. There are no usages of 
> this method except in tests. I suggest removing it from the discovery custom 
> message interface. 
> [Dev-list 
> discussion.|http://apache-ignite-developers.2346864.n4.nabble.com/Unclear-to-use-methods-in-the-DiscoverySpiCustomMessage-interface-td44144.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Sergey Kosarev (Jira)
Sergey Kosarev created IGNITE-12549:
---

 Summary: Scan query/iterator on a replicated cache may get wrong 
results
 Key: IGNITE-12549
 URL: https://issues.apache.org/jira/browse/IGNITE-12549
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.7.6
Reporter: Sergey Kosarev


Case 1
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start server node 2
4. immediately execute a scan query on the replicated cache (or just iterate the 
cache) on node 2
It can get empty or partial results (if rebalance on node 2 is not finished).

Case 2
1. start server node 1
2. create and fill a replicated cache with RebalanceMode.Async (the default)
3. start client node 2
4. start server node 3
5. immediately execute a scan query on the replicated cache (or just iterate the 
cache) on the client node 2
It can get empty or partial results (if rebalance on node 2 is not finished 
and the query is mapped to node 2).

It looks like the problem is in 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()

case REPLICATED:
if (prj != null || part != null)
return nodes(cctx, prj, part);

if (cctx.affinityNode())
return *Collections.singletonList(cctx.localNode())*;

Collection affNodes = nodes(cctx, null, null);

return affNodes.isEmpty() ? affNodes : 
*Collections.singletonList(F.rand(affNodes))*;

case PARTITIONED:
return nodes(cctx, prj, part);

 





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12538) GridAffinityAssignmentV2 can return modifiable collection in some cases.

2020-01-16 Thread Vyacheslav Koptilin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016777#comment-17016777
 ] 

Vyacheslav Koptilin commented on IGNITE-12538:
--

Hello [~amashenkov],

The change looks good to me. Please proceed with the merge.

> GridAffinityAssignmentV2 can return modifiable collection in some cases.
> 
>
> Key: IGNITE-12538
> URL: https://issues.apache.org/jira/browse/IGNITE-12538
> Project: Ignite
>  Issue Type: Task
>Reporter: Andrey Mashenkov
>Assignee: Andrey Mashenkov
>Priority: Minor
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> GridAffinityAssignmentV2.nodes() and 
> GridAffinityAssignmentV2.primaryPartitionNodes() methods can return 
> modifiable collections.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12536) Inconsistency between cache data and indexes when cache operation is interrupted

2020-01-16 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017010#comment-17017010
 ] 

Ignite TC Bot commented on IGNITE-12536:


{panel:title=Branch: [pull/7257/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4931670&buildTypeId=IgniteTests24Java8_RunAll]

> Inconsistency between cache data and indexes when cache operation is 
> interrupted
> 
>
> Key: IGNITE-12536
> URL: https://issues.apache.org/jira/browse/IGNITE-12536
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.7
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Root cause:*
> Inconsistency between the cache and indexes happens when a cache put/remove 
> operation is interrupted (e.g. the thread is interrupted). The cache 
> operation is finished, but {{GridH2Table#lock(boolean)}} is interrupted 
> because {{Lock#lockInterruptibly}} is used.
> *Possible fix:*
> Use a non-interruptible lock for cache operations and an interruptible lock 
> for SQL operations.
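
A minimal, hypothetical sketch of the locking pattern described above (this is 
not the actual GridH2Table code; the class and method names are made up for 
illustration): cache operations acquire the table lock non-interruptibly, so an 
interrupted thread cannot leave the index out of sync, while SQL operations 
keep using lockInterruptibly() so long-running queries remain cancellable.

{code:java}
import java.util.concurrent.locks.ReentrantLock;

// Illustrative only: mirrors the "possible fix" from the issue description.
class TableLockSketch {
    private final ReentrantLock tableLock = new ReentrantLock();

    /** Cache put/remove path: acquisition must not be aborted by Thread.interrupt(). */
    void lockForCacheOperation() {
        tableLock.lock(); // non-interruptible acquire
    }

    /** SQL path: acquisition may be interrupted, e.g. to cancel the query. */
    void lockForSqlOperation() throws InterruptedException {
        tableLock.lockInterruptibly(); // interruptible acquire
    }

    void unlock() {
        tableLock.unlock();
    }
}
{code}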



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (IGNITE-11434) SQL: Create a view with list of existing COLUMNS

2020-01-16 Thread Taras Ledkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov reassigned IGNITE-11434:
-

Assignee: (was: Taras Ledkov)

> SQL: Create a view with list of existing COLUMNS
> 
>
> Key: IGNITE-11434
> URL: https://issues.apache.org/jira/browse/IGNITE-11434
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Yury Gerzhedovich
>Priority: Major
>  Labels: iep-29
> Fix For: 2.9
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We need to expose an SQL system view with COLUMNS information.
> We need to investigate in more detail which information should be included.
>  
> As a starting point we can take 
> [https://dev.mysql.com/doc/refman/8.0/en/columns-table.html] 
> Columns description:
> || Name || Type || Description||
> |  SCHEMA_NAME | string | Schema name |
> | TABLE_NAME | string | Table name |
> | COLUMN_NAME | string | Column name | 
> | ORDINAL_POSITION | int | Column ordinal. Starts with 1 | 
> | DEFAULT_VALUE | string | Default column value |
> | IS_NULLABLE | boolean | Nullable flag corresponds to 
> {{QueryEntity#setNotNullFields}} |
> | DATA_TYPE | string | SQL data type |
> | CHARACTER_LENGTH | int | Size for CHAR and VARCHAR types |
> | NUMERIC_PRECISION | int | Precision for numeric types |
> | NUMERIC_SCALE | int | Scale for numeric types |
> | IS_AFFINITY_KEY | boolean | {{true}} when the column is an affinity key |
> | IS_HIDDEN | boolean | {{true}} for hidden _key and _val columns; {{false}} 
> for all columns available by the asterisk mask |
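
A hypothetical JDBC sketch of how such a view could be consumed, assuming it 
ends up exposed in the SQL system schema under a name like SYS.TABLE_COLUMNS 
(the view name and final column set are not fixed by this issue; the selected 
columns follow the proposal above):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ColumnsViewSketch {
    public static void main(String[] args) throws Exception {
        // Connect with the JDBC thin driver; the host is a placeholder.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT SCHEMA_NAME, TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE " +
                 "FROM SYS.TABLE_COLUMNS " +
                 "ORDER BY SCHEMA_NAME, TABLE_NAME, ORDINAL_POSITION")) {
            while (rs.next()) {
                System.out.printf("%s.%s.%s %s nullable=%s%n",
                    rs.getString(1), rs.getString(2), rs.getString(3),
                    rs.getString(4), rs.getString(5));
            }
        }
    }
}
{code}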



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12552) [IEP-35] Expose MetricRegistry to the public API

2020-01-16 Thread Nikolay Izhikov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017045#comment-17017045
 ] 

Nikolay Izhikov commented on IGNITE-12552:
--

[~agoncharuk] Can you please review my changes?

It fixes the issue with the usage of the internal API in the 
ReadOnlyMetricManager interface.

> [IEP-35] Expose MetricRegistry to the public API
> 
>
> Key: IGNITE-12552
> URL: https://issues.apache.org/jira/browse/IGNITE-12552
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.8
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Blocker
>  Labels: IEP-35
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> MetricRegistry is not a part of the public API, but it is used in 
> MetricExporter, which is part of the public API.
> We should expose MetricRegistry to the public API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12537) Util DbH2ServerStartup failed

2020-01-16 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016928#comment-17016928
 ] 

Ignite TC Bot commented on IGNITE-12537:


{panel:title=Branch: [pull/7254/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4927831&buildTypeId=IgniteTests24Java8_RunAll]

> Util DbH2ServerStartup failed
> -
>
> Key: IGNITE-12537
> URL: https://issues.apache.org/jira/browse/IGNITE-12537
> Project: Ignite
>  Issue Type: Bug
>  Components: examples
>Affects Versions: 2.7
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{DbH2ServerStartup}} fails with exception:
> {code:java}Exception in thread "main" class 
> org.apache.ignite.IgniteException: Failed to start database TCP server
>   at 
> org.apache.ignite.examples.util.DbH2ServerStartup.main(DbH2ServerStartup.java:86)
> Caused by: org.h2.jdbc.JdbcSQLNonTransientConnectionException: Database 
> "mem:ExampleDb" not found, and IFEXISTS=true, so we cant auto-create it 
> [90146-199]
>   at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
>   at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
>   at org.h2.message.DbException.get(DbException.java:205)
>   at org.h2.message.DbException.get(DbException.java:181)
>   at org.h2.engine.Engine.openSession(Engine.java:67)
>   at org.h2.engine.Engine.openSession(Engine.java:201)
>   at org.h2.engine.Engine.createSessionAndValidate(Engine.java:178)
>   at org.h2.engine.Engine.createSession(Engine.java:161)
>   at org.h2.server.TcpServerThread.run(TcpServerThread.java:160)
>   at java.lang.Thread.run(Thread.java:748)
>   at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
>   at org.h2.engine.SessionRemote.done(SessionRemote.java:607)
>   at org.h2.engine.SessionRemote.initTransfer(SessionRemote.java:143)
>   at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:431)
>   at 
> org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:317)
>   at org.h2.jdbc.JdbcConnection.(JdbcConnection.java:169)
>   at org.h2.jdbc.JdbcConnection.(JdbcConnection.java:148)
>   at org.h2.Driver.connect(Driver.java:69)
>   at 
> org.h2.jdbcx.JdbcDataSource.getJdbcConnection(JdbcDataSource.java:189)
>   at org.h2.jdbcx.JdbcDataSource.getXAConnection(JdbcDataSource.java:352)
>   at 
> org.h2.jdbcx.JdbcDataSource.getPooledConnection(JdbcDataSource.java:384)
>   at 
> org.h2.jdbcx.JdbcConnectionPool.getConnectionNow(JdbcConnectionPool.java:234)
>   at 
> org.h2.jdbcx.JdbcConnectionPool.getConnection(JdbcConnectionPool.java:199)
>   at 
> org.apache.ignite.examples.util.DbH2ServerStartup.populateDatabase(DbH2ServerStartup.java:56)
>   at 
> org.apache.ignite.examples.util.DbH2ServerStartup.main(DbH2ServerStartup.java:74){code}
> This issue blocks all store examples
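
For context, a hypothetical sketch of the pattern such an example relies on, 
assuming H2 1.4.199 (the version in the trace above): the named in-memory 
database must be created inside the server JVM, and kept alive with 
DB_CLOSE_DELAY=-1, before remote clients connect; otherwise H2 reports the 
"Database ... not found, and IFEXISTS=true" error shown above. Alternatively, 
the TCP server can be started with the "-ifNotExists" flag so that clients may 
auto-create the database.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import org.h2.tools.Server;

public class H2ServerSketch {
    public static void main(String[] args) throws Exception {
        // Start the H2 TCP server that remote clients (e.g. the store examples) connect to.
        Server srv = Server.createTcpServer("-tcpAllowOthers").start();

        // Opening this local connection creates the in-memory database "ExampleDb";
        // DB_CLOSE_DELAY=-1 keeps it alive after the connection is closed.
        try (Connection ignored =
                 DriverManager.getConnection("jdbc:h2:mem:ExampleDb;DB_CLOSE_DELAY=-1")) {
            // ... populate the database and keep the server running while examples use it ...
        }

        srv.stop();
    }
}
{code}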



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12551) Partition desync if partition is evicted when owned again and historically rebalanced

2020-01-16 Thread Alexey Scherbakov (Jira)
Alexey Scherbakov created IGNITE-12551:
--

 Summary: Partition desync if partition is evicted when owned again 
and historically rebalanced
 Key: IGNITE-12551
 URL: https://issues.apache.org/jira/browse/IGNITE-12551
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Affects Versions: 2.7.6
Reporter: Alexey Scherbakov
Assignee: Alexey Scherbakov
 Fix For: 2.9


There is a possibility of partition desync in the following scenario:

1. Some partition is evicted with non-zero counters.
2. It is owned again and is going to be rebalanced.
3. Some node in grid has history for the partition defined by its (initial, 
current) counters pair.

In this scenario partition will be historically rebalanced having only partial 
data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IGNITE-12537) Util DbH2ServerStartup failed

2020-01-16 Thread Taras Ledkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov resolved IGNITE-12537.
---
Resolution: Invalid

The {{DbH2ServerStartup}} example runs successfully without any patch.

> Util DbH2ServerStartup failed
> -
>
> Key: IGNITE-12537
> URL: https://issues.apache.org/jira/browse/IGNITE-12537
> Project: Ignite
>  Issue Type: Bug
>  Components: examples
>Affects Versions: 2.7
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{DbH2ServerStartup}} fails with exception:
> {code:java}Exception in thread "main" class 
> org.apache.ignite.IgniteException: Failed to start database TCP server
>   at 
> org.apache.ignite.examples.util.DbH2ServerStartup.main(DbH2ServerStartup.java:86)
> Caused by: org.h2.jdbc.JdbcSQLNonTransientConnectionException: Database 
> "mem:ExampleDb" not found, and IFEXISTS=true, so we cant auto-create it 
> [90146-199]
>   at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
>   at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
>   at org.h2.message.DbException.get(DbException.java:205)
>   at org.h2.message.DbException.get(DbException.java:181)
>   at org.h2.engine.Engine.openSession(Engine.java:67)
>   at org.h2.engine.Engine.openSession(Engine.java:201)
>   at org.h2.engine.Engine.createSessionAndValidate(Engine.java:178)
>   at org.h2.engine.Engine.createSession(Engine.java:161)
>   at org.h2.server.TcpServerThread.run(TcpServerThread.java:160)
>   at java.lang.Thread.run(Thread.java:748)
>   at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
>   at org.h2.engine.SessionRemote.done(SessionRemote.java:607)
>   at org.h2.engine.SessionRemote.initTransfer(SessionRemote.java:143)
>   at org.h2.engine.SessionRemote.connectServer(SessionRemote.java:431)
>   at 
> org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:317)
>   at org.h2.jdbc.JdbcConnection.(JdbcConnection.java:169)
>   at org.h2.jdbc.JdbcConnection.(JdbcConnection.java:148)
>   at org.h2.Driver.connect(Driver.java:69)
>   at 
> org.h2.jdbcx.JdbcDataSource.getJdbcConnection(JdbcDataSource.java:189)
>   at org.h2.jdbcx.JdbcDataSource.getXAConnection(JdbcDataSource.java:352)
>   at 
> org.h2.jdbcx.JdbcDataSource.getPooledConnection(JdbcDataSource.java:384)
>   at 
> org.h2.jdbcx.JdbcConnectionPool.getConnectionNow(JdbcConnectionPool.java:234)
>   at 
> org.h2.jdbcx.JdbcConnectionPool.getConnection(JdbcConnectionPool.java:199)
>   at 
> org.apache.ignite.examples.util.DbH2ServerStartup.populateDatabase(DbH2ServerStartup.java:56)
>   at 
> org.apache.ignite.examples.util.DbH2ServerStartup.main(DbH2ServerStartup.java:74){code}
> This issue blocks all store examples



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12049) Add user attributes to thin clients

2020-01-16 Thread Ryabov Dmitrii (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016911#comment-17016911
 ] 

Ryabov Dmitrii commented on IGNITE-12049:
-

It is Ignite's magic! The class must be declared in 
{{META-INF/classnames.properties}}.

> Add user attributes to thin clients
> ---
>
> Key: IGNITE-12049
> URL: https://issues.apache.org/jira/browse/IGNITE-12049
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ryabov Dmitrii
>Assignee: Ryabov Dmitrii
>Priority: Minor
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Add user attributes to thin clients (like node attributes for server nodes). 
> Make sure that custom authenticators can use these attributes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12400) Remove the stopProcess method from the DiscoveryCustomMessage interface

2020-01-16 Thread Amelchev Nikita (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016924#comment-17016924
 ] 

Amelchev Nikita commented on IGNITE-12400:
--

I have removed the method from the internal interface and marked it as 
deprecated in the public interface. The issue is ready for review.

> Remove the stopProcess method from the DiscoveryCustomMessage interface
> ---
>
> Key: IGNITE-12400
> URL: https://issues.apache.org/jira/browse/IGNITE-12400
> Project: Ignite
>  Issue Type: Task
>Reporter: Amelchev Nikita
>Assignee: Amelchev Nikita
>Priority: Minor
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the {{stopProcess}} method works only if {{zookeeper discovery}} 
> is configured. It doesn't work in {{TcpDiscoverySpi}}. There are no usages of 
> this method except in tests. I suggest removing it from the discovery custom 
> message interface. 
> [Dev-list 
> discussion.|http://apache-ignite-developers.2346864.n4.nabble.com/Unclear-to-use-methods-in-the-DiscoverySpiCustomMessage-interface-td44144.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12285) Remove boilerplate code in test PluginProvider implementations.

2020-01-16 Thread Amelchev Nikita (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017079#comment-17017079
 ] 

Amelchev Nikita commented on IGNITE-12285:
--

[~PetrovMikhail], LGTM

> Remove boilerplate code in test PluginProvider implementations.
> ---
>
> Key: IGNITE-12285
> URL: https://issues.apache.org/jira/browse/IGNITE-12285
> Project: Ignite
>  Issue Type: Improvement
>Reporter: PetrovMikhail
>Assignee: PetrovMikhail
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We need to remove boilerplate code in test PluginProvider implementations.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12551) Partition desync if partition is evicted then owned again and historically rebalanced

2020-01-16 Thread Alexey Scherbakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Scherbakov updated IGNITE-12551:
---
Summary: Partition desync if partition is evicted then owned again and 
historically rebalanced  (was: Partition desync if partition is evicted when 
owned again and historically rebalanced)

> Partition desync if partition is evicted then owned again and historically 
> rebalanced
> -
>
> Key: IGNITE-12551
> URL: https://issues.apache.org/jira/browse/IGNITE-12551
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.7.6
>Reporter: Alexey Scherbakov
>Assignee: Alexey Scherbakov
>Priority: Major
> Fix For: 2.9
>
>
> There is a possibility of partition desync in the following scenario:
> 1. Some partition is evicted with non-zero counters.
> 2. It is owned again and is going to be rebalanced.
> 3. Some node in grid has history for the partition defined by its (initial, 
> current) counters pair.
> In this scenario partition will be historically rebalanced having only 
> partial data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12049) Add user attributes to thin clients

2020-01-16 Thread Ryabov Dmitrii (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016979#comment-17016979
 ] 

Ryabov Dmitrii commented on IGNITE-12049:
-

[~ascherbakov], maybe we should allow primitive types only, and add user 
classes later?

> Add user attributes to thin clients
> ---
>
> Key: IGNITE-12049
> URL: https://issues.apache.org/jira/browse/IGNITE-12049
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ryabov Dmitrii
>Assignee: Ryabov Dmitrii
>Priority: Minor
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Add user attributes to thin clients (like node attributes for server nodes). 
> Make sure that custom authenticators can use these attributes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12342) Continuous Queries: Remote filter and transformer have to run with appropriate SecurityContext.

2020-01-16 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017050#comment-17017050
 ] 

Ignite TC Bot commented on IGNITE-12342:


{panel:title=Branch: [pull/7125/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4934693&buildTypeId=IgniteTests24Java8_RunAll]

> Continuous Queries: Remote filter and transformer have to run with 
> appropriate SecurityContext.
> ---
>
> Key: IGNITE-12342
> URL: https://issues.apache.org/jira/browse/IGNITE-12342
> Project: Ignite
>  Issue Type: Bug
>  Components: security
>Reporter: Denis Garus
>Assignee: Denis Garus
>Priority: Major
>  Labels: iep-38
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Remote filter and transformer of ContinuousQueries have to run on a remote 
> node with the SecurityContext of the initiator node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12049) Add user attributes to thin clients

2020-01-16 Thread Alexey Scherbakov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016909#comment-17016909
 ] 

Alexey Scherbakov commented on IGNITE-12049:


[~SomeFire]

I did local testing.
Everything was in the common classpath.

> Add user attributes to thin clients
> ---
>
> Key: IGNITE-12049
> URL: https://issues.apache.org/jira/browse/IGNITE-12049
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ryabov Dmitrii
>Assignee: Ryabov Dmitrii
>Priority: Minor
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Add user attributes to thin clients (like node attributes for server nodes). 
> Make sure that custom authenticators can use these attributes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12400) Remove the stopProcess method from the DiscoveryCustomMessage interface

2020-01-16 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016920#comment-17016920
 ] 

Ignite TC Bot commented on IGNITE-12400:


{panel:title=Branch: [pull/7268/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4934381&buildTypeId=IgniteTests24Java8_RunAll]

> Remove the stopProcess method from the DiscoveryCustomMessage interface
> ---
>
> Key: IGNITE-12400
> URL: https://issues.apache.org/jira/browse/IGNITE-12400
> Project: Ignite
>  Issue Type: Task
>Reporter: Amelchev Nikita
>Assignee: Amelchev Nikita
>Priority: Minor
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the {{stopProcess}} method works only if {{zookeeper discovery}} 
> is configured. It doesn't work in {{TcpDiscoverySpi}}. There are no usages of 
> this method except in tests. I suggest removing it from the discovery custom 
> message interface. 
> [Dev-list 
> discussion.|http://apache-ignite-developers.2346864.n4.nabble.com/Unclear-to-use-methods-in-the-DiscoverySpiCustomMessage-interface-td44144.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12540) Update versions of vulnerable dependencies

2020-01-16 Thread Ilya Kasnacheev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev updated IGNITE-12540:
-
Reviewer: Vladimir Pligin

> Update versions of vulnerable dependencies
> --
>
> Key: IGNITE-12540
> URL: https://issues.apache.org/jira/browse/IGNITE-12540
> Project: Ignite
>  Issue Type: Improvement
>  Components: general, hibernate, rest, spring
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Let's bump some crucial dependencies to their latest minor versions and try 
> to include this in 2.8 as well:
> Spring 4 and 5, Spring Data, Hibernate, Jetty, jackson-databind.
> Lesser-used packages, notably ZooKeeper discovery, are not affected.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12049) Add user attributes to thin clients

2020-01-16 Thread Alexey Scherbakov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016967#comment-17016967
 ] 

Alexey Scherbakov commented on IGNITE-12049:


[~SomeFire]

I think this should never be a requirement for user classes.

> Add user attributes to thin clients
> ---
>
> Key: IGNITE-12049
> URL: https://issues.apache.org/jira/browse/IGNITE-12049
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ryabov Dmitrii
>Assignee: Ryabov Dmitrii
>Priority: Minor
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Add user attributes to thin clients (like node attributes for server nodes). 
> Make sure that custom authenticators can use these attributes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Alexey Scherbakov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016906#comment-17016906
 ] 

Alexey Scherbakov commented on IGNITE-12549:


[~macrergate]

Looks like the fix is to avoid the local iterator for a replicated cache if the 
local node has moving partitions.

As a workaround, set the partition for the query explicitly: query.partition(p), 
where p is in 0..PARTS_CNT.
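
A minimal sketch of that workaround (the cache name and key/value types are 
placeholders; ScanQuery#setPartition is used for the "query.partition(p)" call 
mentioned above): iterate the partitions explicitly so each scan is mapped to a 
partition owner instead of a single, possibly not-yet-rebalanced, node.

{code:java}
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class ScanPerPartitionSketch {
    public static void scanAllPartitions(Ignite ignite, String cacheName) {
        IgniteCache<Object, Object> cache = ignite.cache(cacheName);

        int parts = ignite.affinity(cacheName).partitions(); // PARTS_CNT

        for (int p = 0; p < parts; p++) {
            // Pin the scan to a concrete partition, as suggested in the comment above.
            ScanQuery<Object, Object> qry = new ScanQuery<>().setPartition(p);

            try (QueryCursor<Cache.Entry<Object, Object>> cur = cache.query(qry)) {
                for (Cache.Entry<Object, Object> e : cur)
                    System.out.println(e.getKey() + " -> " + e.getValue());
            }
        }
    }
}
{code}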


> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
>
> Case 1
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start server node 2 
> 4. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the node 2
> It can get empty or partial results. (if rebalance on node 2 is not finished)
> Case 2
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start client node 2
> 4. start server node 3 
> 5. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the client node 2
> It can get empty or partial results. (if rebalance on node 2 is not finished 
> and query is mapped on the node 2)
> It looks like problem in the 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()
> case REPLICATED:
> if (prj != null || part != null)
> return nodes(cctx, prj, part);
> if (cctx.affinityNode())
> return *Collections.singletonList(cctx.localNode())*;
> Collection affNodes = nodes(cctx, null, null);
> return affNodes.isEmpty() ? affNodes : 
> *Collections.singletonList(F.rand(affNodes))*;
> case PARTITIONED:
> return nodes(cctx, prj, part);
>  which is executed in 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery.
> If executed on a just started node it obviously returns the local node 
> disregarding was it rebalanced or not.
> If executed on a client it returns a random affinity node, so it also can be 
> not yet rebalanced node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Sergey Kosarev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016959#comment-17016959
 ] 

Sergey Kosarev commented on IGNITE-12549:
-

[~ascherbakov], thanks for the workaround, I see.
As for the fix you suggested, I agree it can fix case 1, but how do we fix case 
2, when the scan query is executed from a client node? 
Can the client check that a remote node doesn't have moving partitions?


> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
>
> Case 1
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start server node 2 
> 4. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the node 2
> It can get empty or partial results. (if rebalance on node 2 is not finished)
> Case 2
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start client node 2
> 4. start server node 3 
> 5. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the client node 2
> It can get empty or partial results. (if rebalance on node 2 is not finished 
> and query is mapped on the node 2)
> It looks like problem in the 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()
> case REPLICATED:
> if (prj != null || part != null)
> return nodes(cctx, prj, part);
> if (cctx.affinityNode())
> return *Collections.singletonList(cctx.localNode())*;
> Collection affNodes = nodes(cctx, null, null);
> return affNodes.isEmpty() ? affNodes : 
> *Collections.singletonList(F.rand(affNodes))*;
> case PARTITIONED:
> return nodes(cctx, prj, part);
>  which is executed in 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery.
> If executed on a just started node it obviously returns the local node 
> disregarding was it rebalanced or not.
> If executed on a client it returns a random affinity node, so it also can be 
> not yet rebalanced node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated IGNITE-12549:
-
Priority: Critical  (was: Major)

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Critical
> Fix For: 2.8
>
>
> Case 1
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start server node 2 
> 4. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the node 2
> It can get empty or partial results. (if rebalance on node 2 is not finished)
> Case 2
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start client node 2
> 4. start server node 3 
> 5. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the client node 2
> It can get empty or partial results. (if rebalance on node 2 is not finished 
> and query is mapped on the node 2)
> It looks like problem in the 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()
> case REPLICATED:
> if (prj != null || part != null)
> return nodes(cctx, prj, part);
> if (cctx.affinityNode())
> return *Collections.singletonList(cctx.localNode())*;
> Collection affNodes = nodes(cctx, null, null);
> return affNodes.isEmpty() ? affNodes : 
> *Collections.singletonList(F.rand(affNodes))*;
> case PARTITIONED:
> return nodes(cctx, prj, part);
>  which is executed in 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery.
> If executed on a just started node it obviously returns the local node 
> disregarding was it rebalanced or not.
> If executed on a client it returns a random affinity node, so it also can be 
> not yet rebalanced node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated IGNITE-12549:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Critical
> Fix For: 2.8
>
>
> Case 1
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start server node 2 
> 4. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the node 2
> It can get empty or partial results. (if rebalance on node 2 is not finished)
> Case 2
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start client node 2
> 4. start server node 3 
> 5. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the client node 2
> It can get empty or partial results. (if rebalance on node 2 is not finished 
> and query is mapped on the node 2)
> It looks like problem in the 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()
> case REPLICATED:
> if (prj != null || part != null)
> return nodes(cctx, prj, part);
> if (cctx.affinityNode())
> return *Collections.singletonList(cctx.localNode())*;
> Collection affNodes = nodes(cctx, null, null);
> return affNodes.isEmpty() ? affNodes : 
> *Collections.singletonList(F.rand(affNodes))*;
> case PARTITIONED:
> return nodes(cctx, prj, part);
>  which is executed in 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery.
> If executed on a just started node it obviously returns the local node 
> disregarding was it rebalanced or not.
> If executed on a client it returns a random affinity node, so it also can be 
> not yet rebalanced node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated IGNITE-12549:
-
Fix Version/s: 2.8

> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
> Fix For: 2.8
>
>
> Case 1
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start server node 2 
> 4. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the node 2
> It can get empty or partial results. (if rebalance on node 2 is not finished)
> Case 2
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start client node 2
> 4. start server node 3 
> 5. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the client node 2
> It can get empty or partial results. (if rebalance on node 2 is not finished 
> and query is mapped on the node 2)
> It looks like problem in the 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()
> case REPLICATED:
> if (prj != null || part != null)
> return nodes(cctx, prj, part);
> if (cctx.affinityNode())
> return *Collections.singletonList(cctx.localNode())*;
> Collection affNodes = nodes(cctx, null, null);
> return affNodes.isEmpty() ? affNodes : 
> *Collections.singletonList(F.rand(affNodes))*;
> case PARTITIONED:
> return nodes(cctx, prj, part);
>  which is executed in 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery.
> If executed on a just started node it obviously returns the local node 
> disregarding was it rebalanced or not.
> If executed on a client it returns a random affinity node, so it also can be 
> not yet rebalanced node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12551) Partition desync if a partition is evicted then owned again and historically rebalanced

2020-01-16 Thread Alexey Scherbakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Scherbakov updated IGNITE-12551:
---
Summary: Partition desync if a partition is evicted then owned again and 
historically rebalanced  (was: Partition desync if partition is evicted then 
owned again and historically rebalanced)

> Partition desync if a partition is evicted then owned again and historically 
> rebalanced
> ---
>
> Key: IGNITE-12551
> URL: https://issues.apache.org/jira/browse/IGNITE-12551
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.7.6
>Reporter: Alexey Scherbakov
>Assignee: Alexey Scherbakov
>Priority: Major
> Fix For: 2.9
>
>
> There is a possibility of partition desync in the following scenario:
> 1. Some partition is evicted with non-zero counters.
> 2. It is owned again and is going to be rebalanced.
> 3. Some node in grid has history for the partition defined by its (initial, 
> current) counters pair.
> In this scenario partition will be historically rebalanced having only 
> partial data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12551) Partition desync if a partition is evicted then owned again and historically rebalanced

2020-01-16 Thread Alexey Scherbakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Scherbakov updated IGNITE-12551:
---
Description: 
There is a possibility of partition desync in the following scenario:

1. Some partition is evicted with non-zero counters.
2. It is owned again and is going to be rebalanced.
3. Some node in a grid has history for the partition defined by its (initial, 
current) counters pair.

In this scenario the partition will be historically rebalanced having only 
partial data.

  was:
There is a possibility of partition desync in the following scenario:

1. Some partition is evicted with non-zero counters.
2. It is owned again and is going to be rebalanced.
3. Some node in grid has history for the partition defined by its (initial, 
current) counters pair.

In this scenario the partition will be historically rebalanced having only 
partial data.


> Partition desync if a partition is evicted then owned again and historically 
> rebalanced
> ---
>
> Key: IGNITE-12551
> URL: https://issues.apache.org/jira/browse/IGNITE-12551
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.7.6
>Reporter: Alexey Scherbakov
>Assignee: Alexey Scherbakov
>Priority: Major
> Fix For: 2.9
>
>
> There is a possibility of partition desync in the following scenario:
> 1. Some partition is evicted with non-zero counters.
> 2. It is owned again and is going to be rebalanced.
> 3. Some node in a grid has history for the partition defined by its 
> (initial, current) counters pair.
> In this scenario the partition will be historically rebalanced having only 
> partial data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12551) Partition desync if a partition is evicted then owned again and historically rebalanced

2020-01-16 Thread Alexey Scherbakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Scherbakov updated IGNITE-12551:
---
Description: 
There is a possibility of partition desync in the following scenario:

1. Some partition is evicted with non-zero counters.
2. It is owned again and is going to be rebalanced.
3. Some node in grid has history for the partition defined by its (initial, 
current) counters pair.

In this scenario the partition will be historically rebalanced having only 
partial data.

  was:
There is a possibility of partition desync in the following scenario:

1. Some partition is evicted with non-zero counters.
2. It is owned again and is going to be rebalanced.
3. Some node in grid has history for the partition defined by its (initial, 
current) counters pair.

In this scenario partition will be historically rebalanced having only partial 
data.


> Partition desync if a partition is evicted then owned again and historically 
> rebalanced
> ---
>
> Key: IGNITE-12551
> URL: https://issues.apache.org/jira/browse/IGNITE-12551
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.7.6
>Reporter: Alexey Scherbakov
>Assignee: Alexey Scherbakov
>Priority: Major
> Fix For: 2.9
>
>
> There is a possibility of partition desync in the following scenario:
> 1. Some partition is evicted with non-zero counters.
> 2. It is owned again and is going to be rebalanced.
> 3. Some node in grid has history for the partition defined by its (initial, 
> current) counters pair.
> In this scenario the partition will be historically rebalanced having only 
> partial data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12535) Jdbc Thin: Add SSL CipherSuites support to JDBC thin client.

2020-01-16 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-12535:
--
Summary: Jdbc Thin: Add SSL CipherSuites support to JDBC thin client.   
(was: Jdbc Thin: Pass custom CipherSuites to JDBC thin client.)

> Jdbc Thin: Add SSL CipherSuites support to JDBC thin client. 
> -
>
> Key: IGNITE-12535
> URL: https://issues.apache.org/jira/browse/IGNITE-12535
> Project: Ignite
>  Issue Type: Task
>  Components: thin client
>Reporter: Andrey Mashenkov
>Assignee: Andrey Mashenkov
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We widely own Ignite SSL Factory implementation and allow (e.g. control.sh 
> tool) to pass cipher suites in most cases, but ThinClient.
> Let's allow user to restrict cipher suites for ThinClient as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12535) Jdbc Thin: Pass custom CipherSuites to JDBC thin client.

2020-01-16 Thread Andrey Mashenkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017084#comment-17017084
 ] 

Andrey Mashenkov commented on IGNITE-12535:
---

I've removed useless H2 snapshot repository from parent pom file:
1. We definitely won't to use any snapshot dependencies in project.
2. Ignite use maven central repo for dependencies.

> Jdbc Thin: Pass custom CipherSuites to JDBC thin client.
> 
>
> Key: IGNITE-12535
> URL: https://issues.apache.org/jira/browse/IGNITE-12535
> Project: Ignite
>  Issue Type: Task
>  Components: thin client
>Reporter: Andrey Mashenkov
>Assignee: Andrey Mashenkov
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We widely own Ignite SSL Factory implementation and allow (e.g. control.sh 
> tool) to pass cipher suites in most cases, but ThinClient.
> Let's allow user to restrict cipher suites for ThinClient as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12552) [IEP-35] Expose MetricRegistry to the public API

2020-01-16 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created IGNITE-12552:


 Summary: [IEP-35] Expose MetricRegistry to the public API
 Key: IGNITE-12552
 URL: https://issues.apache.org/jira/browse/IGNITE-12552
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.8
Reporter: Nikolay Izhikov
Assignee: Nikolay Izhikov
 Fix For: 2.8


MetricRegistry is not a part of public API, but used in MetricExporter which is 
the part of public API.
We should export MetricRegistry to the public API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12552) [IEP-35] Expose MetricRegistry to the public API

2020-01-16 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov updated IGNITE-12552:
-
Labels: IEP-35  (was: )

> [IEP-35] Expose MetricRegistry to the public API
> 
>
> Key: IGNITE-12552
> URL: https://issues.apache.org/jira/browse/IGNITE-12552
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.8
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Blocker
>  Labels: IEP-35
> Fix For: 2.8
>
>
> MetricRegistry is not a part of public API, but used in MetricExporter which 
> is the part of public API.
> We should export MetricRegistry to the public API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IGNITE-8115) Add a warning on local node startup if the node is not in Baseline

2020-01-16 Thread Philipp Masharov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philipp Masharov resolved IGNITE-8115.
--
Resolution: Duplicate

Already done in ignite-8190

> Add a warning on local node startup if the node is not in Baseline
> --
>
> Key: IGNITE-8115
> URL: https://issues.apache.org/jira/browse/IGNITE-8115
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Assignee: Philipp Masharov
>Priority: Major
>  Labels: newbie
>
> The message should contain instructions on how to add the node to baseline.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12536) Inconsistency between cache data and indexes when cache operation is interrupted

2020-01-16 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-12536:
--
Fix Version/s: (was: 2.8)
   2.9

> Inconsistency between cache data and indexes when cache operation is 
> interrupted
> 
>
> Key: IGNITE-12536
> URL: https://issues.apache.org/jira/browse/IGNITE-12536
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.7
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Root cause:*
> Inconsistency between cache and indexes happens when cache operation 
> put/remove is interrupted (e.g. thread is interrupted). The cache operation 
> is finished, {{GridH2Table#lock(boolean)}} is interrupted because 
> {{Lock#lockInterruptibly}} is used.
> *Possible fix:*
> Use not interruptible lock for cache operation and interruptible lock for SQL 
> operation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12530) Pages list caching can cause IgniteOOME when checkpoint is triggered by "too many dirty pages" reason

2020-01-16 Thread Ivan Rakov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017412#comment-17017412
 ] 

Ivan Rakov commented on IGNITE-12530:
-

[~alex_pl] Looks good, please merge.

> Pages list caching can cause IgniteOOME when checkpoint is triggered by "too 
> many dirty pages" reason
> -
>
> Key: IGNITE-12530
> URL: https://issues.apache.org/jira/browse/IGNITE-12530
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.8
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
> Attachments: screenshot-1.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When a checkpoint is triggered, we need some amount of page memory to store 
> pages list on-heap cache.
> If data region is too small, a checkpoint is triggered by "too many dirty 
> pages" reason and pages list cache is rather big, we can get 
> IgniteOutOfMemoryException.
> Reproducer:
> {code:java}
> @Override protected IgniteConfiguration getConfiguration(String name) throws 
> Exception {
> IgniteConfiguration cfg = super.getConfiguration(name);
> cfg.setDataStorageConfiguration(new DataStorageConfiguration()
> .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
> .setPersistenceEnabled(true)
> .setMaxSize(50 * 1024 * 1024)
> ));
> return cfg;
> }
> @Test
> public void testUpdatesNotFittingIntoMemoryRegion() throws Exception {
> IgniteEx ignite = startGrid(0);
> ignite.cluster().active(true);
> ignite.getOrCreateCache(DEFAULT_CACHE_NAME);
> try (IgniteDataStreamer streamer = 
> ignite.dataStreamer(DEFAULT_CACHE_NAME)) {
> for (int i = 0; i < 100_000; i++)
> streamer.addData(i, new byte[i % 2048]);
> }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12545) Introduce listener interface for components to react to partition map exchange events

2020-01-16 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017439#comment-17017439
 ] 

Ignite TC Bot commented on IGNITE-12545:


{panel:title=Branch: [pull/7263/head] Base: [master] : Possible Blockers 
(7)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Queries 1{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4936312]]
* IgniteBinaryCacheQueryTestSuite: 
IndexingCachePartitionLossPolicySelfTest.testReadWriteSafeWithBackupsAfterKillThreeNodesWithPersistence[TRANSACTIONAL]
 - Test has low fail rate in base branch 0,0% and is not flaky

{color:#d04437}Basic 1{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4936264]]
* IgniteBasicTestSuite: 
IgniteDiagnosticMessagesMultipleConnectionsTest.testTimeOutTxLock - Test has 
low fail rate in base branch 0,0% and is not flaky

{color:#d04437}Data Structures{color} [[tests 
4|https://ci.ignite.apache.org/viewLog.html?buildId=4936297]]
* IgniteCacheDataStructuresSelfTestSuite: 
ReplicatedImplicitTransactionalReadRepairTest.test[getEntry=true, async=true] - 
Test has low fail rate in base branch 0,9% and is not flaky
* IgniteCacheDataStructuresSelfTestSuite: 
ReplicatedImplicitTransactionalReadRepairTest.test[getEntry=false, async=false] 
- Test has low fail rate in base branch 0,9% and is not flaky
* IgniteCacheDataStructuresSelfTestSuite: 
ReplicatedImplicitTransactionalReadRepairTest.test[getEntry=false, async=true] 
- Test has low fail rate in base branch 0,9% and is not flaky
* IgniteCacheDataStructuresSelfTestSuite: 
ReplicatedImplicitTransactionalReadRepairTest.test[getEntry=true, async=false] 
- Test has low fail rate in base branch 0,9% and is not flaky

{color:#d04437}Queries 2{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4936247]]
* IgniteBinaryCacheQueryTestSuite2: 
DynamicColumnsConcurrentAtomicPartitionedSelfTest.testClientReconnectWithNonDynamicCache
 - Test has low fail rate in base branch 0,0% and is not flaky

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4936336buildTypeId=IgniteTests24Java8_RunAll]

> Introduce listener interface for components to react to partition map 
> exchange events
> -
>
> Key: IGNITE-12545
> URL: https://issues.apache.org/jira/browse/IGNITE-12545
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> It would be handly to have listener interface for components that should 
> react to PME instead of just adding more and more calls to 
> GridDhtPartitionsExchangeFuture.
> In general, there are four possible moments when a compnent can be notified: 
> on exchnage init (before and after topologies are updates and exchange latch 
> is acquired) and on exchange done (before and after readyTopVer is 
> incremented and user operations are unlocked).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12535) Jdbc Thin: Add SSL CipherSuites support to JDBC thin client.

2020-01-16 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017381#comment-17017381
 ] 

Ignite TC Bot commented on IGNITE-12535:


{panel:title=Branch: [pull/7252/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4935249buildTypeId=IgniteTests24Java8_RunAll]

> Jdbc Thin: Add SSL CipherSuites support to JDBC thin client. 
> -
>
> Key: IGNITE-12535
> URL: https://issues.apache.org/jira/browse/IGNITE-12535
> Project: Ignite
>  Issue Type: Task
>  Components: thin client
>Reporter: Andrey Mashenkov
>Assignee: Andrey Mashenkov
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We widely own Ignite SSL Factory implementation and allow (e.g. control.sh 
> tool) to pass cipher suites in most cases, but ThinClient.
> Let's allow user to restrict cipher suites for ThinClient as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-6804) Print a warning if HashMap is passed into bulk update operations

2020-01-16 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017461#comment-17017461
 ] 

Ignite TC Bot commented on IGNITE-6804:
---

{panel:title=Branch: [pull/6976/head] Base: [master] : Possible Blockers 
(49)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Start Nodes{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4936485]]
* IgniteStartStopRestartTestSuite: 
IgniteProjectionStartStopRestartSelfTest.testCustomScript - Test has low fail 
rate in base branch 0,0% and is not flaky

{color:#d04437}Queries 1{color} [[tests 
42|https://ci.ignite.apache.org/viewLog.html?buildId=4936546]]
* IgniteBinaryCacheQueryTestSuite: 
IgniteCachePartitionedTransactionalSnapshotColumnConstraintTest.testPutLongStringKeyField
 - Test has low fail rate in base branch 0,0% and is not flaky
* IgniteBinaryCacheQueryTestSuite: 
IgniteCachePartitionedTransactionalSnapshotColumnConstraintTest.testPutValidDecimalKeyAndValueField
 - Test has low fail rate in base branch 0,0% and is not flaky
* IgniteBinaryCacheQueryTestSuite: 
IgniteCachePartitionedTransactionalSnapshotColumnConstraintTest.testPutLongStringValueField
 - Test has low fail rate in base branch 0,0% and is not flaky
* IgniteBinaryCacheQueryTestSuite: 
IgniteCacheReplicatedTransactionalSnapshotColumnConstraintTest.testPutValidDecimalKeyAndValueField
 - Test has low fail rate in base branch 0,0% and is not flaky
* IgniteBinaryCacheQueryTestSuite: 
IgniteCacheReplicatedTransactionalSnapshotColumnConstraintTest.testPutLongStringKeyField
 - Test has low fail rate in base branch 0,0% and is not flaky
* IgniteBinaryCacheQueryTestSuite: 
IgniteCacheReplicatedTransactionalSnapshotColumnConstraintTest.testPutLongStringValueField
 - Test has low fail rate in base branch 0,0% and is not flaky
* IgniteBinaryCacheQueryTestSuite: 
IgniteCachePartitionedAtomicColumnConstraintsTest.testPutValidDecimalKeyAndValueField
 - Test has low fail rate in base branch 0,0% and is not flaky
* IgniteBinaryCacheQueryTestSuite: 
IgniteCachePartitionedAtomicColumnConstraintsTest.testPutTooLongStringKeyFieldFail
 - Test has low fail rate in base branch 0,0% and is not flaky
* IgniteBinaryCacheQueryTestSuite: 
IgniteCachePartitionedAtomicColumnConstraintsTest.testPutLongStringKeyField - 
Test has low fail rate in base branch 0,0% and is not flaky
* IgniteBinaryCacheQueryTestSuite: 
IgniteCachePartitionedAtomicColumnConstraintsTest.testPutTooLongStringValueFieldFail
 - Test has low fail rate in base branch 0,0% and is not flaky
* IgniteBinaryCacheQueryTestSuite: 
IgniteCachePartitionedAtomicColumnConstraintsTest.testPutTooLongDecimalValueFieldScaleFail
 - Test has low fail rate in base branch 0,0% and is not flaky
... and 31 tests blockers

{color:#d04437}Data Structures{color} [[tests 
4|https://ci.ignite.apache.org/viewLog.html?buildId=4936531]]
* IgniteCacheDataStructuresSelfTestSuite: 
SingleBackupImplicitTransactionalReadRepairTest.test[getEntry=false, 
async=false] - Test has low fail rate in base branch 0,0% and is not flaky
* IgniteCacheDataStructuresSelfTestSuite: 
SingleBackupImplicitTransactionalReadRepairTest.test[getEntry=false, 
async=true] - Test has low fail rate in base branch 0,0% and is not flaky
* IgniteCacheDataStructuresSelfTestSuite: 
SingleBackupImplicitTransactionalReadRepairTest.test[getEntry=true, 
async=false] - Test has low fail rate in base branch 0,0% and is not flaky
* IgniteCacheDataStructuresSelfTestSuite: 
SingleBackupImplicitTransactionalReadRepairTest.test[getEntry=true, async=true] 
- Test has low fail rate in base branch 0,0% and is not flaky

{color:#d04437}Cache 9{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4936527]]
* IgniteCacheTestSuite9: 
TxPartitionCounterStateOnePrimaryTwoBackupsTest.testPartialPrepare_3TX_6_1 - 
Test has low fail rate in base branch 0,0% and is not flaky

{color:#d04437}MVCC Cache 5{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4936558]]
* IgniteCacheMvccTestSuite5: 
GridCacheHashMapPutAllWarningsTest.testHashMapPutAllExplicitOptimistic - 
History for base branch is absent.

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4936570buildTypeId=IgniteTests24Java8_RunAll]

> Print a warning if HashMap is passed into bulk update operations
> 
>
> Key: IGNITE-6804
> URL: https://issues.apache.org/jira/browse/IGNITE-6804
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Denis A. Magda
>Assignee: Ilya Kasnacheev
>Priority: Critical
>  Labels: usability
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Ignite newcomers tend to stumble on deadlocks simply because the keys are 
> passed in an unordered HashMap. 

[jira] [Updated] (IGNITE-12342) Continuous Queries: Remote filter and transformer have to run with appropriate SecurityContext.

2020-01-16 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12342:
--
Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> Continuous Queries: Remote filter and transformer have to run with 
> appropriate SecurityContext.
> ---
>
> Key: IGNITE-12342
> URL: https://issues.apache.org/jira/browse/IGNITE-12342
> Project: Ignite
>  Issue Type: Bug
>  Components: security
>Reporter: Denis Garus
>Assignee: Denis Garus
>Priority: Major
>  Labels: iep-38
> Fix For: 2.9
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Remote filter and transformer of ContinuousQueries have to run on a remote 
> node with the SecurityContext of the initiator node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12342) Continuous Queries: Remote filter and transformer have to run with appropriate SecurityContext.

2020-01-16 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12342:
--
Fix Version/s: 2.9

> Continuous Queries: Remote filter and transformer have to run with 
> appropriate SecurityContext.
> ---
>
> Key: IGNITE-12342
> URL: https://issues.apache.org/jira/browse/IGNITE-12342
> Project: Ignite
>  Issue Type: Bug
>  Components: security
>Reporter: Denis Garus
>Assignee: Denis Garus
>Priority: Major
>  Labels: iep-38
> Fix For: 2.9
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Remote filter and transformer of ContinuousQueries have to run on a remote 
> node with the SecurityContext of the initiator node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (IGNITE-12468) ClassCastException on thinClient in Apache Ignite

2020-01-16 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov reassigned IGNITE-12468:
--

Assignee: Aleksey Plekhanov

> ClassCastException on thinClient in Apache Ignite
> -
>
> Key: IGNITE-12468
> URL: https://issues.apache.org/jira/browse/IGNITE-12468
> Project: Ignite
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.6
>Reporter: LEE PYUNG BEOM
>Assignee: Aleksey Plekhanov
>Priority: Major
>
>  
> {code:java}
> ClientConfiguration cfg = new 
> ClientConfiguration().setAddresses("127.0.0.1:10800");
> try (IgniteClient igniteClient = Ignition.startClient(cfg)) {
> System.out.println(">>> Thin client put-get example started.");
> final String CACHE_NAME = "put-get-example";
> ClientCache cache = 
> igniteClient.getOrCreateCache(CACHE_NAME);
> Person p = new Person();
> //put
> HashMap hm = new HashMap();
> hm.put(1, p);
> cache.put(1, hm);
> //get
> HashMap map = (HashMap)cache.get(1);
> Person p2 = map.get(1);
> System.out.format(">>> Loaded [%s] from the cache.\n",p2);
> }
> catch (ClientException e) {
> System.err.println(e.getMessage());
> e.printStackTrace();
> }
> catch (Exception e) {
> System.err.format("Unexpected failure: %s\n", e);
> e.printStackTrace();
> }
> {code}
>  
> I use the thin client of apache-ignite.
> I Create a hashmap and put the Person 
> class(org.apache.ignite.examples.model.Person) object into it.
> And when I take it out of the hashmap, I get the following exceptions:
>  
> {code:java}
> > java.lang.ClassCastException:
> > org.apache.enite.internal.binary.BinaryObjectImpl cannot be cast to
> > org.apache.engite.examples.model.Person.
> {code}
> An exception is given in the code below.
>  
> {code:java}
> Person p2 = map.get(1);
> {code}
>  
> However, there is no exception if I modify the code as follows:
>  
> {code:java}
> BinaryObject bo = (BinaryObject) map.get(1);
> Person p2 = bo.deserialize();
> {code}
> I don't think that's necessary. Is there another solution?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12342) Continuous Queries: Remote filter and transformer have to run with appropriate SecurityContext.

2020-01-16 Thread Anton Vinogradov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017731#comment-17017731
 ] 

Anton Vinogradov commented on IGNITE-12342:
---

Merged to master branch.
Thanks for your contribution.

> Continuous Queries: Remote filter and transformer have to run with 
> appropriate SecurityContext.
> ---
>
> Key: IGNITE-12342
> URL: https://issues.apache.org/jira/browse/IGNITE-12342
> Project: Ignite
>  Issue Type: Bug
>  Components: security
>Reporter: Denis Garus
>Assignee: Denis Garus
>Priority: Major
>  Labels: iep-38
> Fix For: 2.9
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Remote filter and transformer of ContinuousQueries have to run on a remote 
> node with the SecurityContext of the initiator node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12439) More descriptive message in situation of IgniteOutOfMemoryException, warning message if risk of IOOME is found

2020-01-16 Thread Ivan Bessonov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016865#comment-17016865
 ] 

Ivan Bessonov commented on IGNITE-12439:


[~sergey-chugunov] looks good to me, please proceed with merge.

> More descriptive message in situation of IgniteOutOfMemoryException, warning 
> message if risk of IOOME is found
> --
>
> Key: IGNITE-12439
> URL: https://issues.apache.org/jira/browse/IGNITE-12439
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Chugunov
>Assignee: Sergey Chugunov
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In persistent mode starting many caches in a data region of a small size may 
> lead to IgniteOutOfMemoryException being thrown.
> The root cause is that each partition requires allocation of one or more 
> metapages that should be stored during checkpoint and cannot be replaced by 
> other types of pages.
> As a result when too many metapages occupy significant portion of data 
> region's space a request to replace a page in memory (with one on disk) may 
> not be able to find clean page for replacement. In this situation 
> IgniteOutOfMemoryException is thrown.
> It is not easy to prevent IOOME in general case, but we should provide more 
> descriptive message when the exception is thrown and/or print out warning to 
> logs when too many caches (or one cache with huge number of partitions) are 
> started in the same data region.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12550) Add page read latency histogram per data region

2020-01-16 Thread Alexey Goncharuk (Jira)
Alexey Goncharuk created IGNITE-12550:
-

 Summary: Add page read latency histogram per data region
 Key: IGNITE-12550
 URL: https://issues.apache.org/jira/browse/IGNITE-12550
 Project: Ignite
  Issue Type: Improvement
  Components: persistence
Reporter: Alexey Goncharuk
Assignee: Alexey Goncharuk
 Fix For: 2.9


During an incident I experienced a large checkpoint mark duration. It was 
impossible to determine whether this was caused by a stalled disk because of 
large number of long page reads or by some other reasons.
Having a metric showing the page read latency histogram would help in such 
cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12550) Add page read latency histogram per data region

2020-01-16 Thread Alexey Goncharuk (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-12550:
--
Description: 
During an incident I experienced a large checkpoint mark duration. It was 
impossible to determine whether this was caused by a stalled disk because of 
large number of long page reads or by some other reasons.
Having a metric showing the page read latency histogram would help in such 
cases.
We already have a {{pagesRead}} metric, just need to measure the read timings.

  was:
During an incident I experienced a large checkpoint mark duration. It was 
impossible to determine whether this was caused by a stalled disk because of 
large number of long page reads or by some other reasons.
Having a metric showing the page read latency histogram would help in such 
cases.


> Add page read latency histogram per data region
> ---
>
> Key: IGNITE-12550
> URL: https://issues.apache.org/jira/browse/IGNITE-12550
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Alexey Goncharuk
>Assignee: Alexey Goncharuk
>Priority: Major
> Fix For: 2.9
>
>
> During an incident I experienced a large checkpoint mark duration. It was 
> impossible to determine whether this was caused by a stalled disk because of 
> large number of long page reads or by some other reasons.
> Having a metric showing the page read latency histogram would help in such 
> cases.
> We already have a {{pagesRead}} metric, just need to measure the read timings.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-12549:

Description: 
Case 1
1.  start server node 1
2.  create and fill replicated cache with RebalanceMode.Async (as by default)
3.  start servr node 2 
4. immediately execute scan query  on the replicated cache((or just iterate the 
cache)) on the node 2
It can get empty or partial results. (if rebalance on node 2 is finished)

Case 2
1. start server node 1
2. create and fill replicated cache with RebalanceMode.Async (as by default)
3. start client node 2
4. start server node 3 
5. immediately execute scan query  on the replicated cache((or just iterate the 
cache)) on the client node 2
It can get empty or partial results. (if rebalance on node 2 is not finished 
and query is mapped on the node 2)

It looks like problem in the 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()

case REPLICATED:
if (prj != null || part != null)
return nodes(cctx, prj, part);

if (cctx.affinityNode())
return *Collections.singletonList(cctx.localNode())*;

Collection affNodes = nodes(cctx, null, null);

return affNodes.isEmpty() ? affNodes : 
*Collections.singletonList(F.rand(affNodes))*;

case PARTITIONED:
return nodes(cctx, prj, part);

 which is executed in 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery

if executed on a just started node it obviously returns the local node 
disregarding was it rebalanced or not.

if executed on a client it returns a random affinity node, so it also can be 
not yet rebalanced node.






  was:
Case 1
1.  start server node 1
2  create and fill replicated cache with RebalanceMode.Async (as by default)
3  start servr node 2 
3 immediately execute scan query  on the replicated cache((or just iterate the 
cache)) on the node 2
It can get empty or partial results. (if rebalance on node 2 is finished)

Case 2
1.  start server node 1
2  create and fill replicated cache with RebalanceMode.Async (as by default)
3 start client node 2
3  start server node 3 
3 immediately execute scan query  on the replicated cache((or just iterate the 
cache)) on the client node 2
It can get empty or partial results. (if rebalance on node 2 is not finished 
and query is mapped on the node 2)

It looks like problem in the 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()

case REPLICATED:
if (prj != null || part != null)
return nodes(cctx, prj, part);

if (cctx.affinityNode())
return *Collections.singletonList(cctx.localNode())*;

Collection affNodes = nodes(cctx, null, null);

return affNodes.isEmpty() ? affNodes : 
*Collections.singletonList(F.rand(affNodes))*;

case PARTITIONED:
return nodes(cctx, prj, part);

 which is executed in 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery

if executed on a just started node it obviously returns the local node 
disregarding was it rebalanced or not.

if executed on a client it returns a random affinity node, so it also can be 
not yet rebalanced node.







> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
>
> Case 1
> 1.  start server node 1
> 2.  create and fill replicated cache with RebalanceMode.Async (as by default)
> 3.  start servr node 2 
> 4. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the node 2
> It can get empty or partial results. (if rebalance on node 2 is finished)
> Case 2
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start client node 2
> 4. start server node 3 
> 5. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the client node 2
> It can get empty or partial results. (if rebalance on node 2 is not finished 
> and query is mapped on the node 2)
> It looks like problem in the 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()
> case REPLICATED:
> if (prj != null || part != null)
> return nodes(cctx, prj, part);
> if (cctx.affinityNode())
> return *Collections.singletonList(cctx.localNode())*;
>

[jira] [Updated] (IGNITE-12549) Scan query/iterator on a replicated cache may get wrong results

2020-01-16 Thread Sergey Kosarev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kosarev updated IGNITE-12549:

Description: 
Case 1
1. start server node 1
2. create and fill replicated cache with RebalanceMode.Async (as by default)
3. start servr node 2 
4. immediately execute scan query  on the replicated cache((or just iterate the 
cache)) on the node 2
It can get empty or partial results. (if rebalance on node 2 is finished)

Case 2
1. start server node 1
2. create and fill replicated cache with RebalanceMode.Async (as by default)
3. start client node 2
4. start server node 3 
5. immediately execute scan query  on the replicated cache((or just iterate the 
cache)) on the client node 2
It can get empty or partial results. (if rebalance on node 2 is not finished 
and query is mapped on the node 2)

It looks like problem in the 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()

case REPLICATED:
if (prj != null || part != null)
return nodes(cctx, prj, part);

if (cctx.affinityNode())
return *Collections.singletonList(cctx.localNode())*;

Collection affNodes = nodes(cctx, null, null);

return affNodes.isEmpty() ? affNodes : 
*Collections.singletonList(F.rand(affNodes))*;

case PARTITIONED:
return nodes(cctx, prj, part);

 which is executed in 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery.

If executed on a just started node it obviously returns the local node 
disregarding was it rebalanced or not.

If executed on a client it returns a random affinity node, so it also can be 
not yet rebalanced node.






  was:
Case 1
1.  start server node 1
2.  create and fill replicated cache with RebalanceMode.Async (as by default)
3.  start servr node 2 
4. immediately execute scan query  on the replicated cache((or just iterate the 
cache)) on the node 2
It can get empty or partial results. (if rebalance on node 2 is finished)

Case 2
1. start server node 1
2. create and fill replicated cache with RebalanceMode.Async (as by default)
3. start client node 2
4. start server node 3 
5. immediately execute scan query  on the replicated cache((or just iterate the 
cache)) on the client node 2
It can get empty or partial results. (if rebalance on node 2 is not finished 
and query is mapped on the node 2)

It looks like problem in the 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()

case REPLICATED:
if (prj != null || part != null)
return nodes(cctx, prj, part);

if (cctx.affinityNode())
return *Collections.singletonList(cctx.localNode())*;

Collection affNodes = nodes(cctx, null, null);

return affNodes.isEmpty() ? affNodes : 
*Collections.singletonList(F.rand(affNodes))*;

case PARTITIONED:
return nodes(cctx, prj, part);

 which is executed in 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#executeScanQuery

if executed on a just started node it obviously returns the local node 
disregarding was it rebalanced or not.

if executed on a client it returns a random affinity node, so it also can be 
not yet rebalanced node.







> Scan query/iterator on a replicated cache may get wrong results
> ---
>
> Key: IGNITE-12549
> URL: https://issues.apache.org/jira/browse/IGNITE-12549
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.7.6
>Reporter: Sergey Kosarev
>Priority: Major
>
> Case 1
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start servr node 2 
> 4. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the node 2
> It can get empty or partial results. (if rebalance on node 2 is finished)
> Case 2
> 1. start server node 1
> 2. create and fill replicated cache with RebalanceMode.Async (as by default)
> 3. start client node 2
> 4. start server node 3 
> 5. immediately execute scan query  on the replicated cache((or just iterate 
> the cache)) on the client node 2
> It can get empty or partial results. (if rebalance on node 2 is not finished 
> and query is mapped on the node 2)
> It looks like problem in the 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter#nodes()
> case REPLICATED:
> if (prj != null || part != null)
> return nodes(cctx, prj, part);
> if (cctx.affinityNode())
> return *Collections.singletonList(cctx.localNode())*;
> 

[jira] [Commented] (IGNITE-12101) IgniteQueue.removeAll throws NPE

2020-01-16 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016815#comment-17016815
 ] 

Ignite TC Bot commented on IGNITE-12101:


{panel:title=Branch: [pull/7266/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4932460buildTypeId=IgniteTests24Java8_RunAll]

> IgniteQueue.removeAll throws NPE
> 
>
> Key: IGNITE-12101
> URL: https://issues.apache.org/jira/browse/IGNITE-12101
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.5
>Reporter: Denis A. Magda
>Assignee: Vyacheslav Koptilin
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> See more details here:
> https://stackoverflow.com/questions/57473783/ignite-2-5-ignitequeue-removeall-throwing-npe
> {noformat}
> 2019-08-09 18:18:39,241 ERROR [Inbound-Main-Pool-13] [TransactionId: 
> e5b5bfe3-5246-4d54-a4d6-acd550240e13 Request ID - 27845] [ APP=Server, 
> ACTION=APP_PROCESS, USER=tsgops ] ProcessWorkflowProcessor - Error while 
> processing CLIENT process 
> class org.apache.ignite.IgniteException: Failed to serialize object 
> [typeName=LinkedList] 
>at 
> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:990)
>  
>at 
> org.apache.ignite.internal.processors.datastructures.GridCacheQueueAdapter$QueueIterator.remove(GridCacheQueueAdapter.java:687)
>  
>at 
> java.util.AbstractCollection.removeAll(AbstractCollection.java:376) 
>at 
> org.apache.ignite.internal.processors.datastructures.GridCacheQueueProxy.removeAll(GridCacheQueueProxy.java:180)
>  
>at 
> com.me.app.service.support.APPOrderProcessIgniteQueueService.removeAll(APPOrderProcessIgniteQueueService.java:63)
>  
>at 
> com.me.app.service.support.APPOrderContextProcessInputManager.removeAllFromCurrentProcessing(APPOrderContextProcessInputManager.java:201)
>  
>at 
> com.me.app.service.support.APPOrderContextProcessInputManager.lambda$removeAll$3(APPOrderContextProcessInputManager.java:100)
>  
>at java.lang.Iterable.forEach(Iterable.java:75) 
>at 
> com.me.app.service.support.APPOrderContextProcessInputManager.removeAll(APPOrderContextProcessInputManager.java:100)
>  
>at 
> com.me.app.service.support.APPOrderContextProcessInputManager.removeAll(APPOrderContextProcessInputManager.java:90)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.processOrders(ProcessWorkflowProcessor.java:602)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$null$13(ProcessWorkflowProcessor.java:405)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$null$14(ProcessWorkflowProcessor.java:368)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$null$15(ProcessWorkflowProcessor.java:354)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$null$16(ProcessWorkflowProcessor.java:345)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$executeProcess$17(ProcessWorkflowProcessor.java:337)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.executeProcess(ProcessWorkflowProcessor.java:330)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.executeProcess(ProcessWorkflowProcessor.java:302)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$processProcessFromQueue$6(ProcessWorkflowProcessor.java:282)
>  
>at 
> com.me.app.locking.support.IgniteLockingService.execute(IgniteLockingService.java:39)
>  
>at 
> com.me.app.locking.support.IgniteLockingService.execute(IgniteLockingService.java:68)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.processProcessFromQueue(ProcessWorkflowProcessor.java:281)
>  
>at 
> com.me.app.facade.listener.support.APPProcessEventListener.listen(APPProcessEventListener.java:49)
>  
>at 
> com.me.app.facade.listener.support.APPProcessEventListener.listen(APPProcessEventListener.java:19)
>  
>at 
> 

[jira] [Commented] (IGNITE-12531) Cluster is unable to change BLT on 2.8 if storage was initially created on 2.7 or less

2020-01-16 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016828#comment-17016828
 ] 

Ignite TC Bot commented on IGNITE-12531:


{panel:title=Branch: [pull/7265/head] Base: [master] : Possible Blockers 
(1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Platform .NET (Inspections)*{color} [[tests 0 TIMEOUT , 
TC_BUILD_FAILURE |https://ci.ignite.apache.org/viewLog.html?buildId=4934395]]

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4932221buildTypeId=IgniteTests24Java8_RunAll]

> Cluster is unable to change BLT on 2.8 if storage was initially created on 
> 2.7 or less
> --
>
> Key: IGNITE-12531
> URL: https://issues.apache.org/jira/browse/IGNITE-12531
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Ivan Rakov
>Assignee: Vyacheslav Koptilin
>Priority: Blocker
> Fix For: 2.8
>
> Attachments: TestBltChangeFail.java
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Due to bug in https://issues.apache.org/jira/browse/IGNITE-10348, after 
> storage migration from 2.7- to 2.8 any updates of metastorage are not 
> persisted.
> S2R:
> (on 2.7)
> - Activate persistent cluster with 2 nodes
> - Shutdown the cluster
> (on 2.8)
> - Start cluster with 2 nodes based on persistent storage from 2.7
> - Start 3rd node
> - Change baseline
> - Shutdown the cluster
> - Start initial two nodes
> - Start 3rd node (join is rejected: first two nodes has old BLT of two nodes, 
> 3rd node has new BLT of three nodes)
> Reproducer is attached.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12531) Cluster is unable to change BLT on 2.8 if storage was initially created on 2.7 or less

2020-01-16 Thread Vyacheslav Koptilin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016831#comment-17016831
 ] 

Vyacheslav Koptilin commented on IGNITE-12531:
--

This test failure does not seem to be related to the proposed change.

> Cluster is unable to change BLT on 2.8 if storage was initially created on 
> 2.7 or less
> --
>
> Key: IGNITE-12531
> URL: https://issues.apache.org/jira/browse/IGNITE-12531
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Ivan Rakov
>Assignee: Vyacheslav Koptilin
>Priority: Blocker
> Fix For: 2.8
>
> Attachments: TestBltChangeFail.java
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Due to bug in https://issues.apache.org/jira/browse/IGNITE-10348, after 
> storage migration from 2.7- to 2.8 any updates of metastorage are not 
> persisted.
> S2R:
> (on 2.7)
> - Activate persistent cluster with 2 nodes
> - Shutdown the cluster
> (on 2.8)
> - Start cluster with 2 nodes based on persistent storage from 2.7
> - Start 3rd node
> - Change baseline
> - Shutdown the cluster
> - Start initial two nodes
> - Start 3rd node (join is rejected: first two nodes has old BLT of two nodes, 
> 3rd node has new BLT of three nodes)
> Reproducer is attached.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12101) IgniteQueue.removeAll throws NPE

2020-01-16 Thread Alexander Lapin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016835#comment-17016835
 ] 

Alexander Lapin commented on IGNITE-12101:
--

[~slava.koptilin] LGTM

> IgniteQueue.removeAll throws NPE
> 
>
> Key: IGNITE-12101
> URL: https://issues.apache.org/jira/browse/IGNITE-12101
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.5
>Reporter: Denis A. Magda
>Assignee: Vyacheslav Koptilin
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> See more details here:
> https://stackoverflow.com/questions/57473783/ignite-2-5-ignitequeue-removeall-throwing-npe
> {noformat}
> 2019-08-09 18:18:39,241 ERROR [Inbound-Main-Pool-13] [TransactionId: 
> e5b5bfe3-5246-4d54-a4d6-acd550240e13 Request ID - 27845] [ APP=Server, 
> ACTION=APP_PROCESS, USER=tsgops ] ProcessWorkflowProcessor - Error while 
> processing CLIENT process 
> class org.apache.ignite.IgniteException: Failed to serialize object 
> [typeName=LinkedList] 
>at 
> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:990)
>  
>at 
> org.apache.ignite.internal.processors.datastructures.GridCacheQueueAdapter$QueueIterator.remove(GridCacheQueueAdapter.java:687)
>  
>at 
> java.util.AbstractCollection.removeAll(AbstractCollection.java:376) 
>at 
> org.apache.ignite.internal.processors.datastructures.GridCacheQueueProxy.removeAll(GridCacheQueueProxy.java:180)
>  
>at 
> com.me.app.service.support.APPOrderProcessIgniteQueueService.removeAll(APPOrderProcessIgniteQueueService.java:63)
>  
>at 
> com.me.app.service.support.APPOrderContextProcessInputManager.removeAllFromCurrentProcessing(APPOrderContextProcessInputManager.java:201)
>  
>at 
> com.me.app.service.support.APPOrderContextProcessInputManager.lambda$removeAll$3(APPOrderContextProcessInputManager.java:100)
>  
>at java.lang.Iterable.forEach(Iterable.java:75) 
>at 
> com.me.app.service.support.APPOrderContextProcessInputManager.removeAll(APPOrderContextProcessInputManager.java:100)
>  
>at 
> com.me.app.service.support.APPOrderContextProcessInputManager.removeAll(APPOrderContextProcessInputManager.java:90)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.processOrders(ProcessWorkflowProcessor.java:602)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$null$13(ProcessWorkflowProcessor.java:405)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$null$14(ProcessWorkflowProcessor.java:368)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$null$15(ProcessWorkflowProcessor.java:354)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$null$16(ProcessWorkflowProcessor.java:345)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$executeProcess$17(ProcessWorkflowProcessor.java:337)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.executeProcess(ProcessWorkflowProcessor.java:330)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.executeProcess(ProcessWorkflowProcessor.java:302)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$processProcessFromQueue$6(ProcessWorkflowProcessor.java:282)
>  
>at 
> com.me.app.locking.support.IgniteLockingService.execute(IgniteLockingService.java:39)
>  
>at 
> com.me.app.locking.support.IgniteLockingService.execute(IgniteLockingService.java:68)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.processProcessFromQueue(ProcessWorkflowProcessor.java:281)
>  
>at 
> com.me.app.facade.listener.support.APPProcessEventListener.listen(APPProcessEventListener.java:49)
>  
>at 
> com.me.app.facade.listener.support.APPProcessEventListener.listen(APPProcessEventListener.java:19)
>  
>at 
> com.me.app.common.listener.support.AbstractEventListener.onMessage(AbstractEventListener.java:44)
>  
>at 
> com.me.app.common.listener.support.AbstractEventListener$$FastClassBySpringCGLIB$$f1379f74.invoke()
>  
>at 
> 

[jira] [Updated] (IGNITE-12101) IgniteQueue.removeAll throws NPE

2020-01-16 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-12101:
-
Reviewer: Alexander Lapin

> IgniteQueue.removeAll throws NPE
> 
>
> Key: IGNITE-12101
> URL: https://issues.apache.org/jira/browse/IGNITE-12101
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.5
>Reporter: Denis A. Magda
>Assignee: Vyacheslav Koptilin
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> See more details here:
> https://stackoverflow.com/questions/57473783/ignite-2-5-ignitequeue-removeall-throwing-npe
> {noformat}
> 2019-08-09 18:18:39,241 ERROR [Inbound-Main-Pool-13] [TransactionId: 
> e5b5bfe3-5246-4d54-a4d6-acd550240e13 Request ID - 27845] [ APP=Server, 
> ACTION=APP_PROCESS, USER=tsgops ] ProcessWorkflowProcessor - Error while 
> processing CLIENT process 
> class org.apache.ignite.IgniteException: Failed to serialize object 
> [typeName=LinkedList] 
>at 
> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:990)
>  
>at 
> org.apache.ignite.internal.processors.datastructures.GridCacheQueueAdapter$QueueIterator.remove(GridCacheQueueAdapter.java:687)
>  
>at 
> java.util.AbstractCollection.removeAll(AbstractCollection.java:376) 
>at 
> org.apache.ignite.internal.processors.datastructures.GridCacheQueueProxy.removeAll(GridCacheQueueProxy.java:180)
>  
>at 
> com.me.app.service.support.APPOrderProcessIgniteQueueService.removeAll(APPOrderProcessIgniteQueueService.java:63)
>  
>at 
> com.me.app.service.support.APPOrderContextProcessInputManager.removeAllFromCurrentProcessing(APPOrderContextProcessInputManager.java:201)
>  
>at 
> com.me.app.service.support.APPOrderContextProcessInputManager.lambda$removeAll$3(APPOrderContextProcessInputManager.java:100)
>  
>at java.lang.Iterable.forEach(Iterable.java:75) 
>at 
> com.me.app.service.support.APPOrderContextProcessInputManager.removeAll(APPOrderContextProcessInputManager.java:100)
>  
>at 
> com.me.app.service.support.APPOrderContextProcessInputManager.removeAll(APPOrderContextProcessInputManager.java:90)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.processOrders(ProcessWorkflowProcessor.java:602)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$null$13(ProcessWorkflowProcessor.java:405)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$null$14(ProcessWorkflowProcessor.java:368)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$null$15(ProcessWorkflowProcessor.java:354)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$null$16(ProcessWorkflowProcessor.java:345)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$executeProcess$17(ProcessWorkflowProcessor.java:337)
>  
>at java.util.HashMap.forEach(HashMap.java:1289) 
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.executeProcess(ProcessWorkflowProcessor.java:330)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.executeProcess(ProcessWorkflowProcessor.java:302)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.lambda$processProcessFromQueue$6(ProcessWorkflowProcessor.java:282)
>  
>at 
> com.me.app.locking.support.IgniteLockingService.execute(IgniteLockingService.java:39)
>  
>at 
> com.me.app.locking.support.IgniteLockingService.execute(IgniteLockingService.java:68)
>  
>at 
> com.me.app.processor.support.ProcessWorkflowProcessor.processProcessFromQueue(ProcessWorkflowProcessor.java:281)
>  
>at 
> com.me.app.facade.listener.support.APPProcessEventListener.listen(APPProcessEventListener.java:49)
>  
>at 
> com.me.app.facade.listener.support.APPProcessEventListener.listen(APPProcessEventListener.java:19)
>  
>at 
> com.me.app.common.listener.support.AbstractEventListener.onMessage(AbstractEventListener.java:44)
>  
>at 
> com.me.app.common.listener.support.AbstractEventListener$$FastClassBySpringCGLIB$$f1379f74.invoke()
>  
>at 
>