[jira] [Commented] (IGNITE-12950) Partitions validator must check sizes even if update counters are different

2020-04-30 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097201#comment-17097201
 ] 

Ignite TC Bot commented on IGNITE-12950:


{panel:title=Branch: [pull/7735/head] Base: [master] : Possible Blockers 
(2)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Platform .NET (NuGet)*{color} [[tests 0 Exit Code , Compilation 
Error |https://ci.ignite.apache.org/viewLog.html?buildId=5274903]]

{color:#d04437}Platform .NET (Inspections)*{color} [[tests 0 Failure on metric 
|https://ci.ignite.apache.org/viewLog.html?buildId=5274905]]

{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=5273102&buildTypeId=IgniteTests24Java8_RunAll]

> Partitions validator must check sizes even if update counters are different
> ---
>
> Key: IGNITE-12950
> URL: https://issues.apache.org/jira/browse/IGNITE-12950
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Ivan Mironovich
>Assignee: Ivan Mironovich
>Priority: Major
> Fix For: 2.9
>
>   Original Estimate: 336h
>  Time Spent: 10m
>  Remaining Estimate: 335h 50m
>
> We have a method in {{GridDhtPartitionsStateValidator}}:
> {code:java}
> public void validatePartitionCountersAndSizes(
>     GridDhtPartitionsExchangeFuture fut,
>     GridDhtPartitionTopology top,
>     Map<UUID, GridDhtPartitionsSingleMessage> messages
> ) throws IgniteCheckedException {
>     final Set<UUID> ignoringNodes = new HashSet<>();
>
>     // Ignore just joined nodes.
>     for (DiscoveryEvent evt : fut.events().events()) {
>         if (evt.type() == EVT_NODE_JOINED)
>             ignoringNodes.add(evt.eventNode().id());
>     }
>
>     AffinityTopologyVersion topVer = fut.context().events().topologyVersion();
>
>     // Validate update counters.
>     Map<Integer, Map<UUID, Long>> result = validatePartitionsUpdateCounters(top, messages, ignoringNodes);
>
>     if (!result.isEmpty())
>         throw new IgniteCheckedException("Partitions update counters are inconsistent for " + fold(topVer, result));
>
>     // For sizes validation, also ignore nodes which are not able to send cache sizes.
>     for (UUID id : messages.keySet()) {
>         ClusterNode node = cctx.discovery().node(id);
>
>         if (node != null && node.version().compareTo(SIZES_VALIDATION_AVAILABLE_SINCE) < 0)
>             ignoringNodes.add(id);
>     }
>
>     if (!cctx.cache().cacheGroup(top.groupId()).mvccEnabled()) { // TODO: Remove "if" clause in IGNITE-9451.
>         // Validate cache sizes.
>         result = validatePartitionsSizes(top, messages, ignoringNodes);
>
>         if (!result.isEmpty())
>             throw new IgniteCheckedException("Partitions cache sizes are inconsistent for " + fold(topVer, result));
>     }
> }
> {code}
>  We should check partition sizes even if update counters are different. It 
> could be helpful for debugging problems in production.
>  We must print information about all copies if a partition is in an 
> inconsistent state. Currently we can get the following message on a cache 
> group with 3 backups:
> {code:java}
> Partition states validation has failed for group: CACHEGROUP. Partitions 
> update counters are inconsistent for Part 3415: [10.104.6.10:47500=2577263 
> 10.104.6.12:47500=2577263 10.104.6.23:47500=2577262 10.104.6.9:47500=2577263 
> ] Part 4960: [10.104.6.11:47500=2560994 10.104.6.23:47500=2560993 ]
> {code}
> (part 4960 lists only 2 of the 4 expected copies)
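The behavior requested above (keep validating sizes even when counters mismatch, and report every copy of an inconsistent partition) can be illustrated with a self-contained sketch. The node ids and values below are made up, and plain collections stand in for Ignite's internal types:

```java
import java.util.*;

public class PartitionValidationSketch {
    /**
     * Input: nodeId -> (partition -> value), e.g. update counters or sizes.
     * Output: partition -> (nodeId -> value) for every partition whose copies
     * disagree, listing ALL copies of that partition, not just the odd ones.
     */
    public static Map<Integer, Map<String, Long>> findInconsistent(Map<String, Map<Integer, Long>> perNode) {
        // Regroup values by partition so each partition's copies sit together.
        Map<Integer, Map<String, Long>> byPart = new TreeMap<>();
        for (Map.Entry<String, Map<Integer, Long>> node : perNode.entrySet())
            for (Map.Entry<Integer, Long> part : node.getValue().entrySet())
                byPart.computeIfAbsent(part.getKey(), k -> new TreeMap<>()).put(node.getKey(), part.getValue());

        Map<Integer, Map<String, Long>> inconsistent = new TreeMap<>();
        for (Map.Entry<Integer, Map<String, Long>> part : byPart.entrySet()) {
            // Report the partition with all its copies if any two values differ.
            if (new HashSet<>(part.getValue().values()).size() > 1)
                inconsistent.put(part.getKey(), part.getValue());
        }
        return inconsistent;
    }

    public static void main(String[] args) {
        Map<String, Map<Integer, Long>> counters = new HashMap<>();
        counters.put("nodeA", Map.of(3415, 2577263L, 4960, 2560994L));
        counters.put("nodeB", Map.of(3415, 2577262L, 4960, 2560994L));

        // The same shape of check would run for sizes as well, so a counter
        // mismatch does not hide a size mismatch.
        System.out.println(findInconsistent(counters));
    }
}
```

A real fix would additionally run the size validation unconditionally and fold both result maps into one exception message; the sketch only shows the "report all copies" grouping.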



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12973) Support inlining of BigDecimal

2020-04-30 Thread Evgeniy Rudenko (Jira)
Evgeniy Rudenko created IGNITE-12973:


 Summary: Support inlining of BigDecimal
 Key: IGNITE-12973
 URL: https://issues.apache.org/jira/browse/IGNITE-12973
 Project: Ignite
  Issue Type: Improvement
Reporter: Evgeniy Rudenko
Assignee: Evgeniy Rudenko


SQL currently doesn't support inlining for indexes over BigDecimal, although 
there seems to be no strong reason for that. Decimal types are quite often used 
in FinTech applications, and the inability to use them efficiently in SQL 
conditions can make migration to SQL a lot harder.

Need to implement support for BigDecimal inlining.





[jira] [Commented] (IGNITE-12965) Redirect ignite-website GitHub notifications

2020-04-30 Thread Denis A. Magda (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17097045#comment-17097045
 ] 

Denis A. Magda commented on IGNITE-12965:
-

Thanks Ivan! I've merged the changes.

> Redirect ignite-website GitHub notifications
> 
>
> Key: IGNITE-12965
> URL: https://issues.apache.org/jira/browse/IGNITE-12965
> Project: Ignite
>  Issue Type: Task
>Reporter: Ivan Pavlukhin
>Assignee: Ivan Pavlukhin
>Priority: Major
>
> GitHub notifications for all Ignite repositories are sent to 
> notificati...@ignite.apache.org after INFRA-17351. But notifications for a 
> new ignite-website repository (https://github.com/apache/ignite-website) are 
> sent to d...@ignite.apache.org. These notifications should be sent to 
> notificati...@ignite.apache.org as well.
> Apparently we can solve it on our side by editing the {{.asf.yaml}} file according to 
> https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories
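A sketch of the kind of {{.asf.yaml}} change described. The key names follow the INFRA documentation linked above; the full list address is an assumption, since it appears only in truncated form in the message:

```yaml
# .asf.yaml in the apache/ignite-website repository -- illustrative sketch.
notifications:
  # Assumed full form of the truncated "notificati...@ignite.apache.org" above.
  commits:      notifications@ignite.apache.org
  issues:       notifications@ignite.apache.org
  pullrequests: notifications@ignite.apache.org
```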





[jira] [Updated] (IGNITE-12937) Create pull-request template for the github repo

2020-04-30 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated IGNITE-12937:
-
Fix Version/s: 2.9

> Create pull-request template for the github repo
> 
>
> Key: IGNITE-12937
> URL: https://issues.apache.org/jira/browse/IGNITE-12937
> Project: Ignite
>  Issue Type: Task
>Reporter: Maxim Muzafarov
>Assignee: Maxim Muzafarov
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Create {{.github/pull_request_template.md}} so that a pull request carries all 
> useful information when contributing to Apache Ignite.
> Example:
> [ ] Coding Guidelines are followed
> [ ] TeamCity build passes
> [ ] JIRA ticket is in Patch Available state, a review has been requested in
> comments
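Rendered as a file, the checklist above would look roughly like this (a sketch; the exact wording is up to the ticket):

```markdown
<!-- .github/pull_request_template.md -->
- [ ] Coding Guidelines are followed
- [ ] TeamCity build passes
- [ ] JIRA ticket is in Patch Available state, a review has been requested in comments
```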





[jira] [Assigned] (IGNITE-12937) Create pull-request template for the github repo

2020-04-30 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov reassigned IGNITE-12937:


Assignee: Maxim Muzafarov

> Create pull-request template for the github repo
> 
>
> Key: IGNITE-12937
> URL: https://issues.apache.org/jira/browse/IGNITE-12937
> Project: Ignite
>  Issue Type: Task
>Reporter: Maxim Muzafarov
>Assignee: Maxim Muzafarov
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Create {{.github/pull_request_template.md}} so that a pull request carries all 
> useful information when contributing to Apache Ignite.
> Example:
> [ ] Coding Guidelines are followed
> [ ] TeamCity build passes
> [ ] JIRA ticket is in Patch Available state, a review has been requested in
> comments





[jira] [Updated] (IGNITE-12617) PME-free switch should wait for recovery only at affected nodes.

2020-04-30 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12617:
--
Description: 
Since IGNITE-9913, new-topology operations are allowed immediately after 
cluster-wide recovery finishes.

But is there any reason to wait for a cluster-wide recovery if only one node 
failed?
In this case, we should recover only the failed node's backups.
Unfortunately, {{RendezvousAffinityFunction}} tends to spread a node's backup 
partitions across the whole cluster. In this case, we obviously have to wait for 
cluster-wide recovery on switch.

But what if only some nodes were the backups for every primary?

If nodes are combined into virtual cells where, for each partition, backups are 
located in the same cell as the primaries, it is possible to finish the switch 
outside the affected cell before tx recovery finishes.

This optimization will allow us to start and even finish new operations outside 
the failed cell without waiting for the cluster-wide switch finish (broken cell 
recovery).

In other words, the switch (on left/fail + baseline + rebalanced) will have 
little effect on the latency of operations not related to the failed cell.

In short:
- We should wait for tx recovery before finishing the switch only on the broken 
cell.
- We should wait for replicated caches' tx recovery everywhere, since every node 
is a backup of the failed one.
- Upcoming operations related to the broken cell (including all replicated 
cache operations) will require the cluster-wide switch finish to be processed.

  was:
Since IGNITE-9913, new-topology operations allowed immediately after 
cluster-wide recovery finished.

But is there any reason to wait for a cluster-wide recovery if only one node 
failed?
In this case, we should recover only the failed node's backups.
Unfortunately, {{RendezvousAffinityFunction}} tends to spread the node's backup 
partitions to the whole cluster. In this case, we, obviously have to perform 
cluster-wide recovery on switch.

But what if only some nodes will be the backups for every primary?

In case nodes combined into virtual cells where, for each partition, backups 
located at the same cell with primaries, it's possible to finish the switch 
outside the affected cell before tx recovery finish.

This optimization will allow us to start and even finish new operations outside 
the failed cell without cluster-wide switch finish waiting.

In other words, switch (when left/fail + baseline + rebalanced) will have 
little effect on the operation's (not related to failed cell) latency.

Assumptions
- We should wait for tx recovery before finishing the global switch.
- We should wait for replicated caches recovery globally before finishing 
switch locally.
- Upcoming replicated caches operations and operations related to the broken 
cell will require a cluster-wide switch finish to be committed.


> PME-free switch should wait for recovery only at affected nodes.
> 
>
> Key: IGNITE-12617
> URL: https://issues.apache.org/jira/browse/IGNITE-12617
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since IGNITE-9913, new-topology operations are allowed immediately after 
> cluster-wide recovery finishes.
> But is there any reason to wait for a cluster-wide recovery if only one node 
> failed?
> In this case, we should recover only the failed node's backups.
> Unfortunately, {{RendezvousAffinityFunction}} tends to spread a node's 
> backup partitions across the whole cluster. In this case, we obviously have to 
> wait for cluster-wide recovery on switch.
> But what if only some nodes were the backups for every primary?
> If nodes are combined into virtual cells where, for each partition, backups 
> are located in the same cell as the primaries, it is possible to finish the 
> switch outside the affected cell before tx recovery finishes.
> This optimization will allow us to start and even finish new operations 
> outside the failed cell without waiting for the cluster-wide switch finish 
> (broken cell recovery).
> In other words, the switch (on left/fail + baseline + rebalanced) will have 
> little effect on the latency of operations not related to the failed cell.
> In short:
> - We should wait for tx recovery before finishing the switch only on the 
> broken cell.
> - We should wait for replicated caches' tx recovery everywhere, since every 
> node is a backup of the failed one.
> - Upcoming operations related to the broken cell (including all replicated 
> cache operations) will require the cluster-wide switch finish to be processed.
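The cell-based waiting rule described above can be sketched as a small model. The cell layout and node names are hypothetical, and this is an illustration of the rule, not Ignite code:

```java
import java.util.*;

public class CellSwitchSketch {
    /**
     * When 'failed' leaves, only its cell mates (which host the backups of its
     * partitions) must wait for tx recovery before finishing the switch; nodes
     * in other cells may finish the switch immediately.
     */
    public static Set<String> mustWaitForRecovery(Map<String, Set<String>> cells, String failed) {
        for (Set<String> cell : cells.values()) {
            if (cell.contains(failed)) {
                Set<String> waiting = new TreeSet<>(cell);
                waiting.remove(failed); // the failed node itself is gone
                return waiting;
            }
        }
        return Set.of(); // unknown node: nobody waits
    }

    public static void main(String[] args) {
        Map<String, Set<String>> cells = Map.of(
            "cell1", Set.of("n1", "n2", "n3"),
            "cell2", Set.of("n4", "n5", "n6"));

        // Only n1 and n3 (n2's cell mates) wait; cell2 proceeds at once.
        System.out.println(mustWaitForRecovery(cells, "n2")); // prints [n1, n3]
    }
}
```

Replicated caches are the exception noted in the ticket: every node holds a backup of the failed one, so for them the waiting set is the whole cluster.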




[jira] [Updated] (IGNITE-12617) PME-free switch should wait for recovery only at affected nodes.

2020-04-30 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12617:
--
Description: 
Since IGNITE-9913, new-topology operations allowed immediately after 
cluster-wide recovery finished.

But is there any reason to wait for a cluster-wide recovery if only one node 
failed?
In this case, we should recover only the failed node's backups.
Unfortunately, {{RendezvousAffinityFunction}} tends to spread the node's backup 
partitions to the whole cluster. In this case, we, obviously have to perform 
cluster-wide recovery on switch.

But what if only some nodes will be the backups for every primary?

In case nodes combined into virtual cells where, for each partition, backups 
located at the same cell with primaries, it's possible to finish the switch 
outside the affected cell before tx recovery finish.

This optimization will allow us to start and even finish new operations outside 
the failed cell without cluster-wide switch finish waiting.

In other words, switch (when left/fail + baseline + rebalanced) will have 
little effect on the operation's (not related to failed cell) latency.

Assumptions
- We should wait for tx recovery before finishing the global switch.
- We should wait for replicated caches recovery globally before finishing 
switch locally.
- Upcoming replicated caches operations and operations related to the broken 
cell will require a cluster-wide switch finish to be committed.

  was:
Since IGNITE-9913, new-topology operations allowed immediately after 
cluster-wide recovery finished.

But is there any reason to wait for a cluster-wide recovery if only one node 
failed?
In this case, we should recover only the failed node's backups.
Unfortunately, RendezvousAffinityFunction tends to spread some node's backup 
partitions to the whole cluster. In this case, we, obviously have to perform 
cluster-wide recovery on switch.

But what if only some nodes will be the backups for every primary?

In case nodes combined into virtual cells where, for each partition, backups 
located at the same cell with primaries, it's possible to finish the switch 
outside the affected cell before tx recovery finish.

This optimization will allow us to start and even finish new operations outside 
the failed cell immediately.

In other words
- We should wait for tx recovery before finishing the global switch.
- We should wait for replicated caches recovery globally.
- As to partitioned caches, we have to minimize the waiting group to allow 
upcoming operations where possible during the switch.


> PME-free switch should wait for recovery only at affected nodes.
> 
>
> Key: IGNITE-12617
> URL: https://issues.apache.org/jira/browse/IGNITE-12617
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since IGNITE-9913, new-topology operations allowed immediately after 
> cluster-wide recovery finished.
> But is there any reason to wait for a cluster-wide recovery if only one node 
> failed?
> In this case, we should recover only the failed node's backups.
> Unfortunately, {{RendezvousAffinityFunction}} tends to spread the node's 
> backup partitions to the whole cluster. In this case, we, obviously have to 
> perform cluster-wide recovery on switch.
> But what if only some nodes will be the backups for every primary?
> In case nodes combined into virtual cells where, for each partition, backups 
> located at the same cell with primaries, it's possible to finish the switch 
> outside the affected cell before tx recovery finish.
> This optimization will allow us to start and even finish new operations 
> outside the failed cell without cluster-wide switch finish waiting.
> In other words, switch (when left/fail + baseline + rebalanced) will have 
> little effect on the operation's (not related to failed cell) latency.
> Assumptions
> - We should wait for tx recovery before finishing the global switch.
> - We should wait for replicated caches recovery globally before finishing 
> switch locally.
> - Upcoming replicated caches operations and operations related to the broken 
> cell will require a cluster-wide switch finish to be committed.





[jira] [Updated] (IGNITE-12617) PME-free switch should wait for recovery only at affected nodes.

2020-04-30 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12617:
--
Description: 
Since IGNITE-9913, new-topology operations allowed immediately after 
cluster-wide recovery finished.

But is there any reason to wait for a cluster-wide recovery if only one node 
failed?
In this case, we should recover only the failed node's backups.
Unfortunately, RendezvousAffinityFunction tends to spread some node's backup 
partitions to the whole cluster. In this case, we, obviously have to perform 
cluster-wide recovery on switch.

But what if only some nodes will be the backups for every primary?

In case nodes combined into virtual cells where, for each partition, backups 
located at the same cell with primaries, it's possible to finish the switch 
outside the affected cell before tx recovery finish.

This optimization will allow us to start and even finish new operations outside 
the failed cell immediately.

In other words
- We should wait for tx recovery before finishing the global switch.
- We should wait for replicated caches recovery globally.
- As to partitioned caches, we have to minimize the waiting group to allow 
upcoming operations where possible during the switch.

  was:
In case nodes combined into virtual cells where, for each partition, backups 
located at the same cell with primaries, it's possible to finish the Switch 
outside the affected cell before tx recovery finish.

This optimization will allow us to start and even finish new operations without 
waiting for a cluster-wide Switch finish.

In other words
- We should wait for tx recovery before finishing the global switch.
- We should wait for replicated caches recovery globally.
- As to partitioned caches, we have to minimize the waiting group to allow 
upcoming operations where possible during the switch.


> PME-free switch should wait for recovery only at affected nodes.
> 
>
> Key: IGNITE-12617
> URL: https://issues.apache.org/jira/browse/IGNITE-12617
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since IGNITE-9913, new-topology operations allowed immediately after 
> cluster-wide recovery finished.
> But is there any reason to wait for a cluster-wide recovery if only one node 
> failed?
> In this case, we should recover only the failed node's backups.
> Unfortunately, RendezvousAffinityFunction tends to spread some node's backup 
> partitions to the whole cluster. In this case, we, obviously have to perform 
> cluster-wide recovery on switch.
> But what if only some nodes will be the backups for every primary?
> In case nodes combined into virtual cells where, for each partition, backups 
> located at the same cell with primaries, it's possible to finish the switch 
> outside the affected cell before tx recovery finish.
> This optimization will allow us to start and even finish new operations 
> outside the failed cell immediately.
> In other words
> - We should wait for tx recovery before finishing the global switch.
> - We should wait for replicated caches recovery globally.
> - As to partitioned caches, we have to minimize the waiting group to allow 
> upcoming operations where possible during the switch.





[jira] [Commented] (IGNITE-12794) Scan query fails with an assertion error: Unexpected row key

2020-04-30 Thread Nikolay Izhikov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096673#comment-17096673
 ] 

Nikolay Izhikov commented on IGNITE-12794:
--

[~dmekhanikov], [~gvvinblade]

Guys, can we proceed with the review, so that the fix gets into the 2.8.1 release?

> Scan query fails with an assertion error: Unexpected row key
> 
>
> Key: IGNITE-12794
> URL: https://issues.apache.org/jira/browse/IGNITE-12794
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Denis Mekhanikov
>Assignee: Denis Mekhanikov
>Priority: Major
> Fix For: 2.8.1
>
> Attachments: ScanQueryExample.java
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Scan query fails with an exception:
> {noformat}
> Exception in thread "main" java.lang.AssertionError: Unexpected row key
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.unswap(GridCacheMapEntry.java:548)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.unswap(GridCacheMapEntry.java:512)
>   at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.advance(GridCacheQueryManager.java:3045)
>   at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.onHasNext(GridCacheQueryManager.java:2997)
>   at 
> org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
>   at 
> org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
>   at 
> org.apache.ignite.internal.processors.cache.QueryCursorImpl.getAll(QueryCursorImpl.java:127)
>   at scan.ScanQueryExample.main(ScanQueryExample.java:31)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The issue is reproduced when performing concurrent scan queries and updates. 
> A reproducer is attached. You will need to enable asserts in order to 
> reproduce this issue.





[jira] [Commented] (IGNITE-12252) Unchecked exceptions during rebalancing should be handled

2020-04-30 Thread Anton Vinogradov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096662#comment-17096662
 ] 

Anton Vinogradov commented on IGNITE-12252:
---

Merged to master and 2.8.1.
Thanks for your contribution.

> Unchecked exceptions during rebalancing should be handled
> -
>
> Key: IGNITE-12252
> URL: https://issues.apache.org/jira/browse/IGNITE-12252
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Nikolai Kulagin
>Priority: Critical
> Fix For: 2.9, 2.8.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Rebalancing should handle unchecked exceptions via the failure handler. In the 
> current implementation unchecked exceptions are just ignored. They were handled 
> by the IO worker before IGNITE-3195.
> Reproducer:
> {code:java}
> @Test
> public void testRebalanceUncheckedError() throws Exception {
>     IgniteEx ignite0 = startGrid(new IgniteConfiguration().setIgniteInstanceName("ignite0"));
>
>     IgniteCache<Integer, Integer> cache = ignite0.getOrCreateCache(DEFAULT_CACHE_NAME);
>
>     IgniteDataStreamer<Integer, Integer> streamer = ignite0.dataStreamer(DEFAULT_CACHE_NAME);
>
>     for (int i = 0; i < 100_000; i++)
>         streamer.addData(i, i);
>
>     streamer.flush();
>
>     IgniteEx ignite1 = startGrid(new IgniteConfiguration().setIgniteInstanceName("ignite1")
>         .setIncludeEventTypes(EventType.EVT_CACHE_REBALANCE_OBJECT_LOADED));
>
>     ignite1.events().localListen(e -> {
>         throw new Error();
>     }, EventType.EVT_CACHE_REBALANCE_OBJECT_LOADED);
>
>     awaitPartitionMapExchange();
> }
> {code}





[jira] [Updated] (IGNITE-12617) PME-free switch should wait for recovery only at affected nodes.

2020-04-30 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12617:
--
Description: 
In case nodes combined into virtual cells where, for each partition, backups 
located at the same cell with primaries, it's possible to finish the Switch 
outside the affected cell before tx recovery finish.

This optimization will allow us to start and even finish new operations without 
waiting for a cluster-wide Switch finish.

In other words
- We should wait for tx recovery before finishing the global switch.
- We should wait for replicated caches recovery globally.
- As to partitioned caches, we have to minimize the waiting group to allow 
upcoming operations where possible during the switch.

  was:
In case nodes combined into virtual cells where, for each partition, backups 
located at the same cell with primaries, it's possible to finish the Switch 
outside the affected cell before tx recovery finish.

This optimization will allow us to start and even finish new operations without 
waiting for a cluster-wide Switch finish.

In brief
- We should wait for tx recovery before finishing the global switch.
- We should wait for replicated caches recovery globally.
- As to partitioned caches, we have to minimize the waiting group to allow 
upcoming operations where possible during the switch.


> PME-free switch should wait for recovery only at affected nodes.
> 
>
> Key: IGNITE-12617
> URL: https://issues.apache.org/jira/browse/IGNITE-12617
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In case nodes combined into virtual cells where, for each partition, backups 
> located at the same cell with primaries, it's possible to finish the Switch 
> outside the affected cell before tx recovery finish.
> This optimization will allow us to start and even finish new operations 
> without waiting for a cluster-wide Switch finish.
> In other words
> - We should wait for tx recovery before finishing the global switch.
> - We should wait for replicated caches recovery globally.
> - As to partitioned caches, we have to minimize the waiting group to allow 
> upcoming operations where possible during the switch.





[jira] [Updated] (IGNITE-12617) PME-free switch should wait for recovery only at affected nodes.

2020-04-30 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12617:
--
Description: 
In case nodes combined into virtual cells where, for each partition, backups 
located at the same cell with primaries, it's possible to finish the Switch 
outside the affected cell before tx recovery finish.

This optimization will allow us to start and even finish new operations without 
waiting for a cluster-wide Switch finish.

In brief
- We should wait for tx recovery before finishing the global switch.
- We should wait for replicated caches recovery globally.
- As to partitioned caches, we have to minimize the waiting group to allow 
upcoming operations where possible during the switch.

  was:
In case nodes combined into virtual cells where, for each partition, backups 
located at the same cell with primaries, it's possible to finish the Switch 
outside the affected cell before tx recovery finish.

This optimization will allow us to start and even finish new operations without 
waiting for a cluster-wide Switch finish.

In brief
- We should wait for recovery before finishing the switch.
- We should wait for replicated caches recovery globally.
- As to partitioned caches, we have to minimize the waiting group to allow 
upcoming operations where possible during the switch.


> PME-free switch should wait for recovery only at affected nodes.
> 
>
> Key: IGNITE-12617
> URL: https://issues.apache.org/jira/browse/IGNITE-12617
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In case nodes combined into virtual cells where, for each partition, backups 
> located at the same cell with primaries, it's possible to finish the Switch 
> outside the affected cell before tx recovery finish.
> This optimization will allow us to start and even finish new operations 
> without waiting for a cluster-wide Switch finish.
> In brief
> - We should wait for tx recovery before finishing the global switch.
> - We should wait for replicated caches recovery globally.
> - As to partitioned caches, we have to minimize the waiting group to allow 
> upcoming operations where possible during the switch.





[jira] [Updated] (IGNITE-12617) PME-free switch should wait for recovery only at affected nodes.

2020-04-30 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12617:
--
Description: 
In case nodes combined into virtual cells where, for each partition, backups 
located at the same cell with primaries, it's possible to finish the Switch 
outside the affected cell before tx recovery finish.

This optimization will allow us to start and even finish new operations without 
waiting for a cluster-wide Switch finish.

In brief
- We should wait for recovery before finishing the switch.
- We should wait for replicated caches recovery globally.
- As to partitioned caches, we have to minimize the waiting group to allow 
upcoming operations where possible during the switch.

  was:
Obviously, we should wait for recovery before finishing the switch.
We should wait for replicated caches recovery globally.
As to partitioned caches, we have to minimize the waiting group to allow 
upcoming operations where possible during the switch.


> PME-free switch should wait for recovery only at affected nodes.
> 
>
> Key: IGNITE-12617
> URL: https://issues.apache.org/jira/browse/IGNITE-12617
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In case nodes combined into virtual cells where, for each partition, backups 
> located at the same cell with primaries, it's possible to finish the Switch 
> outside the affected cell before tx recovery finish.
> This optimization will allow us to start and even finish new operations 
> without waiting for a cluster-wide Switch finish.
> In brief
> - We should wait for recovery before finishing the switch.
> - We should wait for replicated caches recovery globally.
> - As to partitioned caches, we have to minimize the waiting group to allow 
> upcoming operations where possible during the switch.





[jira] [Created] (IGNITE-12972) Calcite integration. Serialization refactoring

2020-04-30 Thread Igor Seliverstov (Jira)
Igor Seliverstov created IGNITE-12972:
-

 Summary: Calcite integration. Serialization refactoring
 Key: IGNITE-12972
 URL: https://issues.apache.org/jira/browse/IGNITE-12972
 Project: Ignite
  Issue Type: Improvement
Reporter: Igor Seliverstov
Assignee: Igor Seliverstov


Currently we need quite a lot of classes to serialize, send, and deserialize a 
prepared plan (in the scope of node-to-node communication). It would be better to 
do this by analogy with Calcite's RelJsonReader/RelJsonWriter; this way we avoid 
the need to maintain lots of classes while preserving the functionality.
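The RelJsonWriter-style approach serializes every relational node generically as its operator name plus an attribute map, so no per-operator serializer class is needed. A minimal self-contained sketch of the idea (the `Node` and `PlanJsonSketch` names are illustrative and are not Calcite's actual API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch of generic plan serialization: instead of one serializer class per
// operator, every node is written uniformly as {op, attributes, inputs},
// the way Calcite's RelJsonWriter emits relational nodes.
public class PlanJsonSketch {
    static class Node {
        final String op;                 // operator name, e.g. "Scan"
        final Map<String, String> attrs; // operator attributes
        final List<Node> inputs = new ArrayList<>();

        Node(String op, Map<String, String> attrs) {
            this.op = op;
            this.attrs = attrs;
        }
    }

    // One generic routine serializes the whole tree.
    static String write(Node n) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"op\":\"").append(n.op).append('"');

        for (Map.Entry<String, String> e : n.attrs.entrySet())
            sb.append(",\"").append(e.getKey()).append("\":\"").append(e.getValue()).append('"');

        if (!n.inputs.isEmpty()) {
            sb.append(",\"inputs\":[");
            for (int i = 0; i < n.inputs.size(); i++) {
                if (i > 0) sb.append(',');
                sb.append(write(n.inputs.get(i)));
            }
            sb.append(']');
        }

        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        Map<String, String> scanAttrs = new TreeMap<>();
        scanAttrs.put("table", "EMP");
        Node scan = new Node("Scan", scanAttrs);

        Node filter = new Node("Filter", new TreeMap<>(Map.of("condition", "age > 30")));
        filter.inputs.add(scan);

        System.out.println(write(filter));
    }
}
```

A symmetric reader would rebuild the node tree from the operator names, which is what lets the per-node serializer classes be dropped.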





[jira] [Commented] (IGNITE-12345) Remote listener of IgniteMessaging has to run inside the Ignite Sandbox.

2020-04-30 Thread Denis Garus (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096505#comment-17096505
 ] 

Denis Garus commented on IGNITE-12345:
--

[~alex_pl] thank you for review!

> Remote listener of IgniteMessaging has to run inside the Ignite Sandbox.
> 
>
> Key: IGNITE-12345
> URL: https://issues.apache.org/jira/browse/IGNITE-12345
> Project: Ignite
>  Issue Type: Task
>Reporter: Denis Garus
>Assignee: Denis Garus
>Priority: Major
>  Labels: iep-38
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Remote listener of IgniteMessaging has to run on a remote node inside the 
> Ignite Sandbox if it is turned on.





[jira] [Updated] (IGNITE-12345) Remote listener of IgniteMessaging has to run inside the Ignite Sandbox.

2020-04-30 Thread Denis Garus (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Garus updated IGNITE-12345:
-
Release Note: A remote listener of IgniteMessaging runs on a remote node 
inside the Ignite Sandbox if it is turned on.

> Remote listener of IgniteMessaging has to run inside the Ignite Sandbox.
> 
>
> Key: IGNITE-12345
> URL: https://issues.apache.org/jira/browse/IGNITE-12345
> Project: Ignite
>  Issue Type: Task
>Reporter: Denis Garus
>Assignee: Denis Garus
>Priority: Major
>  Labels: iep-38
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Remote listener of IgniteMessaging has to run on a remote node inside the 
> Ignite Sandbox if it is turned on.





[jira] [Commented] (IGNITE-12252) Unchecked exceptions during rebalancing should be handled

2020-04-30 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096461#comment-17096461
 ] 

Ignite TC Bot commented on IGNITE-12252:


{panel:title=Branch: [pull/6965/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=5268087&buildTypeId=IgniteTests24Java8_RunAll]

> Unchecked exceptions during rebalancing should be handled
> -
>
> Key: IGNITE-12252
> URL: https://issues.apache.org/jira/browse/IGNITE-12252
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Nikolai Kulagin
>Priority: Critical
> Fix For: 2.9, 2.8.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Rebalancing should pass unchecked exceptions to the failure handler. In the 
> current implementation, unchecked exceptions are just ignored. They were handled 
> by the IO worker before IGNITE-3195.
> Reproducer:
> {code:java}
> @Test
> public void testRebalanceUncheckedError() throws Exception {
> IgniteEx ignite0 = startGrid(new 
> IgniteConfiguration().setIgniteInstanceName("ignite0"));
> IgniteCache cache = 
> ignite0.getOrCreateCache(DEFAULT_CACHE_NAME);
> IgniteDataStreamer streamer = 
> ignite0.dataStreamer(DEFAULT_CACHE_NAME);
> for (int i = 0; i < 100_000; i++)
> streamer.addData(i, i);
> streamer.flush();
> IgniteEx ignite1 = startGrid(new 
> IgniteConfiguration().setIgniteInstanceName("ignite1")
> 
> .setIncludeEventTypes(EventType.EVT_CACHE_REBALANCE_OBJECT_LOADED));
> ignite1.events().localListen(e -> {
> throw new Error();
> }, EventType.EVT_CACHE_REBALANCE_OBJECT_LOADED);
> awaitPartitionMapExchange();
> }
> {code}





[jira] [Created] (IGNITE-12971) Create snapshot view to show available cluster snapshots

2020-04-30 Thread Maxim Muzafarov (Jira)
Maxim Muzafarov created IGNITE-12971:


 Summary: Create snapshot view to show available cluster snapshots
 Key: IGNITE-12971
 URL: https://issues.apache.org/jira/browse/IGNITE-12971
 Project: Ignite
  Issue Type: Improvement
Reporter: Maxim Muzafarov


Users must be able to see available information about cluster snapshots through 
the view:
1. Snapshot name
2. Affected BLT nodes
3. List of cache groups in a snapshot
4. Partition states (e.g. the snapshot has LOST partitions)





[jira] [Created] (IGNITE-12970)  Cluster snapshot must support encryption caches

2020-04-30 Thread Maxim Muzafarov (Jira)
Maxim Muzafarov created IGNITE-12970:


 Summary:  Cluster snapshot must support encryption caches
 Key: IGNITE-12970
 URL: https://issues.apache.org/jira/browse/IGNITE-12970
 Project: Ignite
  Issue Type: Improvement
Reporter: Maxim Muzafarov


Currently, the cluster snapshot operation does not support including encrypted 
caches in the snapshot. The {{EncryptionFileIO}} must be added for copying cache 
partition files and their deltas (see IEP-43 for details about copying cache 
partition files).





[jira] [Commented] (IGNITE-12345) Remote listener of IgniteMessaging has to run inside the Ignite Sandbox.

2020-04-30 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096437#comment-17096437
 ] 

Ignite TC Bot commented on IGNITE-12345:


{panel:title=Branch: [pull/7666/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=5219765&buildTypeId=IgniteTests24Java8_RunAll]

> Remote listener of IgniteMessaging has to run inside the Ignite Sandbox.
> 
>
> Key: IGNITE-12345
> URL: https://issues.apache.org/jira/browse/IGNITE-12345
> Project: Ignite
>  Issue Type: Task
>Reporter: Denis Garus
>Assignee: Denis Garus
>Priority: Major
>  Labels: iep-38
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Remote listener of IgniteMessaging has to run on a remote node inside the 
> Ignite Sandbox if it is turned on.





[jira] [Created] (IGNITE-12969) TxRecovery discovery listener should implement HighPriorityListener

2020-04-30 Thread Anton Vinogradov (Jira)
Anton Vinogradov created IGNITE-12969:
-

 Summary: TxRecovery discovery listener should implement 
HighPriorityListener
 Key: IGNITE-12969
 URL: https://issues.apache.org/jira/browse/IGNITE-12969
 Project: Ignite
  Issue Type: Task
Reporter: Anton Vinogradov
Assignee: Anton Vinogradov
 Fix For: 2.9


Currently, tx recovery is delayed for 300+ ms because it starts in a low-priority 
discovery listener.
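The intended fix marks the recovery listener so it is notified ahead of ordinary listeners. A self-contained sketch of such priority-aware dispatch (the names here are illustrative; Ignite's actual HighPriorityListener interface may differ):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of priority-aware listener dispatch: listeners implementing the
// marker interface are notified before ordinary ones, so a recovery routine
// is not stuck behind slow low-priority handlers.
public class PriorityDispatchSketch {
    interface Listener { void onEvent(String evt); }

    // Marker for listeners that must run first (analogous in spirit to
    // Ignite's HighPriorityListener; the real interface may differ).
    interface HighPriority extends Listener { }

    static List<String> dispatch(List<Listener> listeners, String evt) {
        List<String> order = new ArrayList<>();
        List<Listener> sorted = new ArrayList<>(listeners);

        // High-priority listeners sort first; the sort is stable, so
        // ordinary listeners keep their registration order.
        sorted.sort(Comparator.comparingInt(l -> l instanceof HighPriority ? 0 : 1));

        for (Listener l : sorted) {
            l.onEvent(evt);
            order.add(l instanceof HighPriority ? "recovery" : "regular");
        }
        return order;
    }

    public static void main(String[] args) {
        List<Listener> listeners = new ArrayList<>();
        listeners.add(e -> { /* ordinary listener, e.g. metrics */ });
        HighPriority recovery = e -> { /* start tx recovery immediately */ };
        listeners.add(recovery);

        System.out.println(dispatch(listeners, "NODE_FAILED")); // recovery runs first
    }
}
```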





[jira] [Updated] (IGNITE-12938) control.sh utility commands: IdleVerify and ValidateIndexes use eventual payload check.

2020-04-30 Thread Alexey Scherbakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Scherbakov updated IGNITE-12938:
---
Reviewer: Alexey Scherbakov

> control.sh utility commands: IdleVerify and ValidateIndexes use eventual 
> payload check.
> ---
>
> Key: IGNITE-12938
> URL: https://issues.apache.org/jira/browse/IGNITE-12938
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.8
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.9
>
>
> The "--cache idle_verify" and "--cache validate_indexes" commands of the 
> *control.sh* utility use an eventual payload check during execution. As a result, 
> they can run concurrently with an active payload, and no errors like "Checkpoint 
> with dirty pages started! Cluster not idle" will be triggered. Additionally, the 
> current functionality misses the check on caches without persistence. Remove the 
> old functionality from PageMemory and base the check on update counters.





[jira] [Commented] (IGNITE-11073) Persistence cache snapshot

2020-04-30 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096407#comment-17096407
 ] 

Ignite TC Bot commented on IGNITE-11073:


{panel:title=Branch: [pull/7760/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=5272003&buildTypeId=IgniteTests24Java8_RunAll]

> Persistence cache snapshot
> --
>
> Key: IGNITE-11073
> URL: https://issues.apache.org/jira/browse/IGNITE-11073
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Maxim Muzafarov
>Assignee: Maxim Muzafarov
>Priority: Major
>  Labels: iep-28, iep-43
> Fix For: 2.9
>
>  Time Spent: 52h 20m
>  Remaining Estimate: 0h
>
> *Snapshot requirements*
> # Users must have the ability to create a snapshot of persisted user data 
> (in-memory is out of the scope).
> # Users must have the ability to create a snapshot from the cluster under the 
> load without cluster deactivation.
> # The snapshot process must not block for a long time any of the user 
> transactions (short-time blocks are acceptable).
> # The snapshot process must allow creating a data snapshot on each node and 
> transfer it to any of the remote nodes for internal cluster needs.
> # The created snapshot at the cluster-level must be fully consistent from 
> cluster-wide terms, there should not be any incomplete transactions inside.
> # The snapshot of each node must be consistent – cache partitions, binary 
> meta, etc. must not have unnecessary changes.
> *The following API must be available:*
> # [public] Java API
> # [public] JMX MBean
> # [internal] File Transmission





[jira] [Updated] (IGNITE-9720) Initialize partition free lists lazily

2020-04-30 Thread Alexey Goncharuk (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-9720:
-
Labels: performance  (was: )

> Initialize partition free lists lazily
> --
>
> Key: IGNITE-9720
> URL: https://issues.apache.org/jira/browse/IGNITE-9720
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexey Goncharuk
>Assignee: Semen Boikov
>Priority: Major
>  Labels: performance
>
> When persistence is enabled, partition free lists metadata may take quite a 
> lot of pages.
> This results in a very long start time because 
> {{GridCacheOffheapManager.GridCacheDataStore#init0}} will read all metadata 
> for free list in each partition on exchange start (this is done in the 
> {{CacheFreeListImpl}} constructor)
> We should only read required information on exchange and defer actual free 
> list initialization to the first access.





[jira] [Updated] (IGNITE-9720) Initialize partition free lists lazily

2020-04-30 Thread Alexey Goncharuk (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-9720:
-
Labels:   (was: performance)

> Initialize partition free lists lazily
> --
>
> Key: IGNITE-9720
> URL: https://issues.apache.org/jira/browse/IGNITE-9720
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexey Goncharuk
>Assignee: Semen Boikov
>Priority: Major
>
> When persistence is enabled, partition free lists metadata may take quite a 
> lot of pages.
> This results in a very long start time because 
> {{GridCacheOffheapManager.GridCacheDataStore#init0}} will read all metadata 
> for free list in each partition on exchange start (this is done in the 
> {{CacheFreeListImpl}} constructor)
> We should only read required information on exchange and defer actual free 
> list initialization to the first access.





[jira] [Commented] (IGNITE-12660) [ML] The ParamGrid uses unserialized lambdas in interface to get an access to the trainer fields

2020-04-30 Thread Alexey Zinoviev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096375#comment-17096375
 ] 

Alexey Zinoviev commented on IGNITE-12660:
--

[~agoncharuk] Agree, I'll merge it to the master branch (it needs small changes 
for master and the correct order of changes).

I'll do it in May.

> [ML] The ParamGrid uses unserialized lambdas in interface to get an access to 
> the trainer fields
> 
>
> Key: IGNITE-12660
> URL: https://issues.apache.org/jira/browse/IGNITE-12660
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Affects Versions: 2.8
>Reporter: Alexey Zinoviev
>Assignee: Alexey Zinoviev
>Priority: Blocker
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Commented] (IGNITE-12587) ML examples failed on start

2020-04-30 Thread Alexey Zinoviev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096372#comment-17096372
 ] 

Alexey Zinoviev commented on IGNITE-12587:
--

[~agoncharuk] agree, the master branch was missed; I'll do it in May.

> ML examples failed on start
> ---
>
> Key: IGNITE-12587
> URL: https://issues.apache.org/jira/browse/IGNITE-12587
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Affects Versions: 2.8
> Environment: Java 8
> Linux/Win
>Reporter: Stepan Pilschikov
>Assignee: Alexey Zinoviev
>Priority: Blocker
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The new release build comes with missing data sets for ML 2.8.
> Steps:
> - Try to run any ML example that uses MLSandboxDatasets 
> (org.apache.ignite.examples.ml.environment.TrainingWithCustomPreprocessorsExample,
>  for example)
> Actual:
> - FileNotFoundException
> {code}
> Exception in thread "main" java.io.FileNotFoundException: 
> modules/ml/src/main/resources/datasets/boston_housing_dataset.txt
>   at 
> org.apache.ignite.ml.util.SandboxMLCache.fillCacheWith(SandboxMLCache.java:119)
>   at 
> org.apache.ignite.examples.ml.environment.TrainingWithCustomPreprocessorsExample.main(TrainingWithCustomPreprocessorsExample.java:62)
> {code}
> Release build - 
> https://ci.ignite.apache.org/viewLog.html?buildId=4957767&buildTypeId=Releases_ApacheIgniteMain_ReleaseBuild&tab=artifacts&branch_Releases_ApacheIgniteMain=ignite-2.8





[jira] [Commented] (IGNITE-12660) [ML] The ParamGrid uses unserialized lambdas in interface to get an access to the trainer fields

2020-04-30 Thread Alexey Goncharuk (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096362#comment-17096362
 ] 

Alexey Goncharuk commented on IGNITE-12660:
---

[~zaleslaw] the commit is merged to ignite-2.8 only. Should we merge it to 
master as well?

> [ML] The ParamGrid uses unserialized lambdas in interface to get an access to 
> the trainer fields
> 
>
> Key: IGNITE-12660
> URL: https://issues.apache.org/jira/browse/IGNITE-12660
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Affects Versions: 2.8
>Reporter: Alexey Zinoviev
>Assignee: Alexey Zinoviev
>Priority: Blocker
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Updated] (IGNITE-12273) Slow TX recovery

2020-04-30 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12273:
--
Labels: iep-45  (was: )

> Slow TX recovery
> 
>
> Key: IGNITE-12273
> URL: https://issues.apache.org/jira/browse/IGNITE-12273
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>
> TX recovery causes B*B*N*2 GridCacheTxRecoveryRequest messages to be sent (B - 
> number of backups, N - number of prepared txs).
> It seems we are able to perform recovery more efficiently.
> For example, we may send only B*B*2 messages by accumulating txs together.





[jira] [Updated] (IGNITE-12741) Allow exchange merges for PME free switch.

2020-04-30 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12741:
--
Labels: iep-45  (was: )

> Allow exchange merges for PME free switch.
> --
>
> Key: IGNITE-12741
> URL: https://issues.apache.org/jira/browse/IGNITE-12741
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.8
>Reporter: Alexey Scherbakov
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>
> Currently, exchange merges are disabled if multiple baseline nodes have left or failed.
> It's possible to have merges enabled together with the enabled optimization.





[jira] [Updated] (IGNITE-12470) Pme-free switch feature should be deactivatable

2020-04-30 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12470:
--
Labels: iep-45 newbie  (was: newbie)

> Pme-free switch feature should be deactivatable
> ---
>
> Key: IGNITE-12470
> URL: https://issues.apache.org/jira/browse/IGNITE-12470
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Blocker
>  Labels: iep-45, newbie
> Fix For: 2.8, 2.9
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> We should be able to disable this feature by some env/jvm property.





[jira] [Updated] (IGNITE-12788) Cluster achieved fully rebalanced (PME-free ready) state metric

2020-04-30 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12788:
--
Labels: iep-45  (was: )

> Cluster achieved fully rebalanced (PME-free ready) state metric
> ---
>
> Key: IGNITE-12788
> URL: https://issues.apache.org/jira/browse/IGNITE-12788
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Mikhail Petrov
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9, 2.8.1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Currently, there is no metric responsible for reporting the "PME-free ready 
> state achieved" condition.
> {{GridDhtPartitionsExchangeFuture#rebalanced}} can be used to provide such a 
> metric.
> It seems we should update the metric on each 
> {{GridDhtPartitionsExchangeFuture#onDone}}.
> P.s. Late Affinity Assignment should always set the metric value to {{true}}.





[jira] [Updated] (IGNITE-12272) Delayed TX recovery

2020-04-30 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12272:
--
Labels: iep-45  (was: )

> Delayed TX recovery
> ---
>
> Key: IGNITE-12272
> URL: https://issues.apache.org/jira/browse/IGNITE-12272
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> TX recovery currently starts in a delayed way.
> IGNITE_TX_SALVAGE_TIMEOUT = 100, which causes a 100+ ms delay on recovery.
> It seems we are able to get rid of this delay to make recovery faster.





[jira] [Updated] (IGNITE-12617) PME-free switch should wait for recovery only at affected nodes.

2020-04-30 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12617:
--
Labels: iep-45  (was: )

> PME-free switch should wait for recovery only at affected nodes.
> 
>
> Key: IGNITE-12617
> URL: https://issues.apache.org/jira/browse/IGNITE-12617
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Obviously, we should wait for recovery before finishing the switch.
> We should wait for replicated caches recovery globally.
> As to partitioned caches, we have to minimize the waiting group to allow 
> upcoming operations where possible during the switch.





[jira] [Updated] (IGNITE-9913) Prevent data updates blocking in case of backup BLT server node leave

2020-04-30 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-9913:
-
Labels: iep-45  (was: )

> Prevent data updates blocking in case of backup BLT server node leave
> -
>
> Key: IGNITE-9913
> URL: https://issues.apache.org/jira/browse/IGNITE-9913
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Reporter: Ivan Rakov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-45
> Fix For: 2.8, 2.9
>
> Attachments: 9913_yardstick.png, master_yardstick.png
>
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>
> Ignite cluster performs distributed partition map exchange when any server 
> node leaves or joins the topology.
> Distributed PME blocks all updates and may take a long time. If all 
> partitions are assigned according to the baseline topology and server node 
> leaves, there's no actual need to perform distributed PME: every cluster node 
> is able to recalculate new affinity assignments and partition states locally. 
> If we implement such a lightweight PME and handle mapping and lock requests 
> on the new topology version correctly, updates won't be stopped (except updates 
> of partitions that lost their primary copy).





[jira] [Commented] (IGNITE-12587) ML examples failed on start

2020-04-30 Thread Alexey Goncharuk (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096361#comment-17096361
 ] 

Alexey Goncharuk commented on IGNITE-12587:
---

[~zaleslaw] the PR was only merged to ignite-2.8 branch, therefore the fix will 
not be available in ignite-2.9. Should we merge the fix to master as well?

> ML examples failed on start
> ---
>
> Key: IGNITE-12587
> URL: https://issues.apache.org/jira/browse/IGNITE-12587
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Affects Versions: 2.8
> Environment: Java 8
> Linux/Win
>Reporter: Stepan Pilschikov
>Assignee: Alexey Zinoviev
>Priority: Blocker
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The new release build comes with missing data sets for ML 2.8.
> Steps:
> - Try to run any ML example that uses MLSandboxDatasets 
> (org.apache.ignite.examples.ml.environment.TrainingWithCustomPreprocessorsExample,
>  for example)
> Actual:
> - FileNotFoundException
> {code}
> Exception in thread "main" java.io.FileNotFoundException: 
> modules/ml/src/main/resources/datasets/boston_housing_dataset.txt
>   at 
> org.apache.ignite.ml.util.SandboxMLCache.fillCacheWith(SandboxMLCache.java:119)
>   at 
> org.apache.ignite.examples.ml.environment.TrainingWithCustomPreprocessorsExample.main(TrainingWithCustomPreprocessorsExample.java:62)
> {code}
> Release build - 
> https://ci.ignite.apache.org/viewLog.html?buildId=4957767&buildTypeId=Releases_ApacheIgniteMain_ReleaseBuild&tab=artifacts&branch_Releases_ApacheIgniteMain=ignite-2.8





[jira] [Updated] (IGNITE-12843) TDE Phase-3. Cache key rotation.

2020-04-30 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-12843:
--
Description: 
Add the ability to rotate (change) the cache encryption key.

Initial design: 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95652384

  was:
Add the ability to rotate (change) the cache encryption key.

Design (draft): 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95652384


> TDE Phase-3. Cache key rotation.
> 
>
> Key: IGNITE-12843
> URL: https://issues.apache.org/jira/browse/IGNITE-12843
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: IEP-18
>
> Add the ability to rotate (change) the cache encryption key.
> Initial design: 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95652384





[jira] [Updated] (IGNITE-12855) Node failed after get operation when entries from the cache expired concurrently

2020-04-30 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12855:
---
Fix Version/s: 2.8.1

> Node failed after get operation when entries from the cache expired 
> concurrently 
> -
>
> Key: IGNITE-12855
> URL: https://issues.apache.org/jira/browse/IGNITE-12855
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
> Fix For: 2.9, 2.8.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Failed with the error:
> {noformat}
> [12:10:50] (err) Failed to notify listener: 
> o.a.i.i.processors.cache.distributed.dht.GridDhtCacheAdapter$6...@7c956694java.lang.AssertionError
> at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.remove(GridCacheOffheapManager.java:2456)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:619)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:4401)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.onExpired(GridCacheMapEntry.java:4095)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet0(GridCacheMapEntry.java:767)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGetVersioned(GridCacheMapEntry.java:694)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.getAllAsync0(GridCacheAdapter.java:2175)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.getDhtAllAsync(GridDhtCacheAdapter.java:709)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.getAsync(GridDhtGetSingleFuture.java:413)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.map0(GridDhtGetSingleFuture.java:279)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.map(GridDhtGetSingleFuture.java:261)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.init(GridDhtGetSingleFuture.java:182)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.getDhtSingleAsync(GridDhtCacheAdapter.java:821)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processNearSingleGetRequest(GridDhtCacheAdapter.java:836)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$2.apply(GridDhtTransactionalCacheAdapter.java:152)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$2.apply(GridDhtTransactionalCacheAdapter.java:150)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1142)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:591)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:392)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:318)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:308)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1847)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1472)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1367)
> at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:565)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> {noformat}
> Reproducer:
>  
> {code:java}
> @Test
> public void shouldNotBeProblemToPutToExpiredCacheConcurrently() throws 
> Exception {
> final AtomicBoolean end = new AtomicBoolean();
> final IgniteEx srv = startGrid(3);
> srv.cluster().active(true);
> IgniteInternalFuture loadFut = runMultiThreadedAsync(() -> {
> while (!end.get() && !fail) {
> IgniteCache cache = srv.cache(CACHE_NAME);
> for (int i = 0; i < ENTRIES; i++)
> cache.put(i, new byte[1024]);
> for (int i = 0; i < ENTRIES; i++)
> cache.get(i); // touch entries
> }
> }, WORKLOAD_THREADS_CNT, "high-workload");
> try {
> loadFut.get(10, TimeUnit.SECONDS);
> }
> catch (Exception e) {
>

[jira] [Commented] (IGNITE-12855) Node failed after get operation when entries from the cache expired concurrently

2020-04-30 Thread Aleksey Plekhanov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096301#comment-17096301
 ] 

Aleksey Plekhanov commented on IGNITE-12855:


Cherry-picked to 2.8.1

> Node failed after get operation when entries from the cache expired 
> concurrently 
> -
>
> Key: IGNITE-12855
> URL: https://issues.apache.org/jira/browse/IGNITE-12855
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
> Fix For: 2.9, 2.8.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Failed with the error:
> {noformat}
> [12:10:50] (err) Failed to notify listener: 
> o.a.i.i.processors.cache.distributed.dht.GridDhtCacheAdapter$6...@7c956694java.lang.AssertionError
> at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.remove(GridCacheOffheapManager.java:2456)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:619)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:4401)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.onExpired(GridCacheMapEntry.java:4095)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet0(GridCacheMapEntry.java:767)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGetVersioned(GridCacheMapEntry.java:694)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.getAllAsync0(GridCacheAdapter.java:2175)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.getDhtAllAsync(GridDhtCacheAdapter.java:709)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.getAsync(GridDhtGetSingleFuture.java:413)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.map0(GridDhtGetSingleFuture.java:279)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.map(GridDhtGetSingleFuture.java:261)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtGetSingleFuture.init(GridDhtGetSingleFuture.java:182)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.getDhtSingleAsync(GridDhtCacheAdapter.java:821)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processNearSingleGetRequest(GridDhtCacheAdapter.java:836)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$2.apply(GridDhtTransactionalCacheAdapter.java:152)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTransactionalCacheAdapter$2.apply(GridDhtTransactionalCacheAdapter.java:150)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1142)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:591)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:392)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:318)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:308)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1847)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1472)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1367)
> at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:565)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> {noformat}
> Reproducer:
>  
> {code:java}
> @Test
> public void shouldNotBeProblemToPutToExpiredCacheConcurrently() throws Exception {
>     final AtomicBoolean end = new AtomicBoolean();
>     final IgniteEx srv = startGrid(3);
>
>     srv.cluster().active(true);
>
>     IgniteInternalFuture loadFut = runMultiThreadedAsync(() -> {
>         while (!end.get() && !fail) {
>             IgniteCache cache = srv.cache(CACHE_NAME);
>
>             for (int i = 0; i < ENTRIES; i++)
>                 cache.put(i, new byte[1024]);
>
>             for (int i = 0; i < ENTRIES; i++)
>                 cache.get(i); // Touch entries.
>         }
>     }, WORKLOAD_THREADS_CNT, "high-workload");
>
>     try {
>         loadFut.get(10, 

[jira] [Updated] (IGNITE-12933) Node failed after put incorrect key class for indexed type to transactional cache

2020-04-30 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12933:
---
Fix Version/s: 2.8.1

> Node failed after put incorrect key class for indexed type to transactional 
> cache
> -
>
> Key: IGNITE-12933
> URL: https://issues.apache.org/jira/browse/IGNITE-12933
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
> Fix For: 2.9, 2.8.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> A node fails after a key of an incorrect class is put for an indexed type into a 
> transactional cache when indexing is enabled.
> Reproducer:
> {code:java}
> public class IndexedTypesTest extends GridCommonAbstractTest {
>     private boolean failed;
>
>     @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
>         return super.getConfiguration(igniteInstanceName)
>             .setFailureHandler((ignite, ctx) -> failed = true)
>             .setCacheConfiguration(new CacheConfiguration<>(DEFAULT_CACHE_NAME)
>                 .setAtomicityMode(TRANSACTIONAL)
>                 .setIndexedTypes(String.class, String.class));
>     }
>
>     @Test
>     public void testPutIndexedType() throws Exception {
>         Ignite ignite = startGrids(2);
>
>         for (int i = 0; i < 10; i++) {
>             try {
>                 ignite.cache(DEFAULT_CACHE_NAME).put(i, "val" + i);
>             }
>             catch (Exception ignore) {
>                 // No-op.
>             }
>         }
>
>         assertFalse(failed);
>     }
> }
> {code}
> Node failed with exception:
> {noformat}
> [2020-04-22 
> 17:05:34,524][ERROR][sys-stripe-11-#76%cache.IndexedTypesTest1%][IgniteTestResources]
>  Critical system error detected. Will be handled accordingly to configured 
> handler 
> [hnd=o.a.i.i.processors.cache.IndexedTypesTest$$Lambda$115/0x00080024d040@147237db,
>  failureCtx=FailureContext [type=CRITICAL_ERROR, err=class 
> o.a.i.i.transactions.IgniteTxHeuristicCheckedException: Committing a 
> transaction has produced runtime exception]]
> class 
> org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException: 
> Committing a transaction has produced runtime exception
> at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter.heuristicException(IgniteTxAdapter.java:800)
> at 
> org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter.commitIfLocked(GridDistributedTxRemoteAdapter.java:838)
> at 
> org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter.commitRemoteTx(GridDistributedTxRemoteAdapter.java:893)
> at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.finish(IgniteTxHandler.java:1502)
> at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processDhtTxPrepareRequest(IgniteTxHandler.java:1233)
> at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$5.apply(IgniteTxHandler.java:229)
> at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$5.apply(IgniteTxHandler.java:227)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1142)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:591)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:392)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:318)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:308)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1847)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1472)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1367)
> at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:565)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to update 
> index, incorrect key class [expCls=java.lang.String, 
> actualCls=java.lang.Integer]
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.typeByValue(GridQueryProcessor.java:2223)
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:209
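[Editor's note] The root cause in the trace above reduces to a key-class check against the type registered via {{setIndexedTypes()}}. The following is a minimal, self-contained sketch of that check, not Ignite's actual implementation; the class and method names are illustrative only:

```java
// Minimal illustrative sketch (NOT Ignite's actual code): reject a key whose
// class does not match the indexed key type configured for the cache.
public class KeyTypeCheck {
    static void validateKey(Class<?> expCls, Object key) {
        if (!expCls.isInstance(key))
            throw new IllegalArgumentException("Failed to update index, incorrect key class [expCls="
                + expCls.getName() + ", actualCls=" + key.getClass().getName() + ']');
    }

    public static void main(String[] args) {
        validateKey(String.class, "val0"); // Matching key class: accepted.

        try {
            validateKey(String.class, 0); // Integer key for a String-indexed type: rejected.
        }
        catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The ticket's goal, as the reproducer's {{assertFalse(failed)}} shows, is for such a mismatch to surface as an exception on the put rather than break the transaction commit and trip the failure handler.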

[jira] [Commented] (IGNITE-12933) Node failed after put incorrect key class for indexed type to transactional cache

2020-04-30 Thread Aleksey Plekhanov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096299#comment-17096299
 ] 

Aleksey Plekhanov commented on IGNITE-12933:


Cherry-picked to 2.8.1


[jira] [Created] (IGNITE-12968) Create cluster snapshot documentation pages

2020-04-30 Thread Maxim Muzafarov (Jira)
Maxim Muzafarov created IGNITE-12968:


 Summary: Create cluster snapshot documentation pages
 Key: IGNITE-12968
 URL: https://issues.apache.org/jira/browse/IGNITE-12968
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Reporter: Maxim Muzafarov
Assignee: Maxim Muzafarov
 Fix For: 2.9


Add the following to the Apache Ignite documentation:
1. How to create a cluster snapshot (describe API, limitations)
2. How to configure a destination directory
3. Manual steps for a snapshot restore
4. Examples



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12968) Create cluster snapshot documentation pages

2020-04-30 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated IGNITE-12968:
-
Issue Type: Task  (was: Improvement)

> Create cluster snapshot documentation pages
> ---
>
> Key: IGNITE-12968
> URL: https://issues.apache.org/jira/browse/IGNITE-12968
> Project: Ignite
>  Issue Type: Task
>  Components: documentation
>Reporter: Maxim Muzafarov
>Assignee: Maxim Muzafarov
>Priority: Major
>  Labels: iep-43
> Fix For: 2.9
>
>
> Add the following to the Apache Ignite documentation:
> 1. How to create a cluster snapshot (describe API, limitations)
> 2. How to configure a destination directory
> 3. Manual steps for a snapshot restore
> 4. Examples





[jira] [Updated] (IGNITE-12968) Create cluster snapshot documentation pages

2020-04-30 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated IGNITE-12968:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Create cluster snapshot documentation pages
> ---
>
> Key: IGNITE-12968
> URL: https://issues.apache.org/jira/browse/IGNITE-12968
> Project: Ignite
>  Issue Type: Task
>  Components: documentation
>Reporter: Maxim Muzafarov
>Assignee: Maxim Muzafarov
>Priority: Major
>  Labels: iep-43
> Fix For: 2.9
>
>
> Add the following to the Apache Ignite documentation:
> 1. How to create a cluster snapshot (describe API, limitations)
> 2. How to configure a destination directory
> 3. Manual steps for a snapshot restore
> 4. Examples





[jira] [Updated] (IGNITE-12961) Start snapshot operation via control.sh

2020-04-30 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated IGNITE-12961:
-
Issue Type: Improvement  (was: Task)

> Start snapshot operation via control.sh
> ---
>
> Key: IGNITE-12961
> URL: https://issues.apache.org/jira/browse/IGNITE-12961
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maxim Muzafarov
>Assignee: Maxim Muzafarov
>Priority: Major
>  Labels: iep-43
> Fix For: 2.9
>
>
> Add the ability to start snapshot operation via {{control.sh}} command line.
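[Editor's note] A sketch of what such an invocation could look like; the flag names and argument order here are an assumption, and the actual syntax is whatever this ticket lands:

```shell
# Hypothetical command shape for IGNITE-12961; exact flags may differ.
./control.sh --snapshot create snapshot_2020_04_30
```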





[jira] [Created] (IGNITE-12967) Start cluster snapshot from client node

2020-04-30 Thread Maxim Muzafarov (Jira)
Maxim Muzafarov created IGNITE-12967:


 Summary: Start cluster snapshot from client node
 Key: IGNITE-12967
 URL: https://issues.apache.org/jira/browse/IGNITE-12967
 Project: Ignite
  Issue Type: Improvement
Reporter: Maxim Muzafarov
Assignee: Maxim Muzafarov
 Fix For: 2.9


Users should be able to start cluster snapshots from a client node by sending 
compute requests to a server node.





[jira] [Created] (IGNITE-12966) .NET: Use C# Source Generators for serialization

2020-04-30 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-12966:
---

 Summary: .NET: Use C# Source Generators for serialization
 Key: IGNITE-12966
 URL: https://issues.apache.org/jira/browse/IGNITE-12966
 Project: Ignite
  Issue Type: New Feature
  Components: platforms
Reporter: Pavel Tupitsyn
 Fix For: 3.0


C# Source Generators provide a way to replace reflection with compile-time code 
generation.
This can be very useful for Ignite serialization, compute invocations, and 
everything else that currently relies on reflection.

https://devblogs.microsoft.com/dotnet/introducing-c-source-generators/


