[jira] [Commented] (IGNITE-8801) Change default behaviour of atomic operations inside transactions

2024-01-19 Thread Scott Feldstein (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-8801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808840#comment-17808840
 ] 

Scott Feldstein commented on IGNITE-8801:
-

Hi, I understand this question is really late in the game, but could someone 
explain why this was ultimately merged when the last two comments in the [dev 
thread|https://lists.apache.org/thread/3wxpx2tw9xnn74139nkqopdom5mh6q74] 
correctly point out that there are valid use cases for commingling atomic and 
transactional caches?  Was there some other conversation that occurred 
around this?

> Change default behaviour of atomic operations inside transactions
> -
>
> Key: IGNITE-8801
> URL: https://issues.apache.org/jira/browse/IGNITE-8801
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Ryabov Dmitrii
>Assignee: Julia Bakulina
>Priority: Minor
>  Labels: ise
> Fix For: 2.15
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Need to change the default behaviour of atomic operations so that they fail 
> inside transactions.
> 1) Remove the IGNITE_ALLOW_ATOMIC_OPS_IN_TX system property.
> 2) Set the default value that restricts atomic operations in the no-argument 
> {{CacheOperationContext}} constructor, and pass it in the arguments to calls 
> of the other constructor.
> 3) Fix javadocs.
> As per the latest round of discussion on the Ignite dev list as of 28/10/2022, 
> we agreed on the following:
> 1) Revert the deprecation from IGNITE-17916 - reverted.
> 2) Change the default value in 2.15.
> 3) Notify users in the release notes and in the exception message how to 
> change the behavior back.
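The intended default can be sketched with a small stand-in guard (all class and method names below are hypothetical illustrations, not Ignite API; the real check would live around {{CacheOperationContext}}):

```java
// Stand-in sketch: under the new default, an atomic cache operation attempted
// inside an active transaction fails unless the caller explicitly opts back in.
public class AtomicOpGuard {
    private final boolean allowAtomicOpsInTx;

    public AtomicOpGuard(boolean allowAtomicOpsInTx) {
        this.allowAtomicOpsInTx = allowAtomicOpsInTx;
    }

    /** Throws if an atomic operation is attempted inside an active transaction. */
    public void checkAtomicOp(boolean txActive) {
        if (txActive && !allowAtomicOpsInTx) {
            throw new IllegalStateException(
                "Atomic operations are restricted inside transactions by default");
        }
    }
}
```

With the new default ({{allowAtomicOpsInTx = false}}) the guard rejects an atomic operation inside a transaction instead of silently allowing it; opting back in restores the pre-2.15 behaviour.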



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21316) Do schema sync before executing a DDL operation

2024-01-19 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-21316:
---
Issue Type: Improvement  (was: Bug)

> Do schema sync before executing a DDL operation
> ---
>
> Key: IGNITE-21316
> URL: https://issues.apache.org/jira/browse/IGNITE-21316
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> When executing a DDL operation on one node and then executing another 
> operation (which depends on the first operation having finished) on another 
> node, the second operation might not see the first operation's results.
> For example, if we create a zone via node A, wait for the DDL future to 
> complete and then we try to create a table using that new zone via node B, 
> the table creation might fail because node B does not see the newly-created 
> zone yet.
> This is because the zone creation future only makes us wait for the 
> activation timestamp to become non-future on all clocks in the cluster; but 
> when this happens, there is no guarantee that all nodes have actually 
> received the new catalog version.
> To fix this, we need to do a schema sync for timestamp equal to 'now' before 
> doing any DDL operation.
> This should probably be done in the DDL handler (but maybe it makes sense to 
> do it in the `execute()` method of the CatalogManager).
> An example of a test demonstrating the problem is 
> ItRebalanceDistributedTest.testOnLeaderElectedRebalanceRestart(). But this 
> test also has another problem: it interacts with the CatalogManager directly. 
> If we add the fix above the CatalogManager, the test will have to be fixed to 
> do schema sync by hand.
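The proposed fix, waiting until the local catalog has caught up before running a DDL, can be sketched roughly as follows (all names here are hypothetical illustrations; the real implementation would use Ignite's schema sync service from the DDL handler or CatalogManager#execute):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Toy sketch of "schema sync before DDL": the DDL path first waits until this
// node has seen every catalog version activated up to 'now', then executes.
public class ToySchemaSync {
    private int localCatalogVersion;
    private final Map<Integer, CompletableFuture<Void>> waiters = new ConcurrentHashMap<>();

    /** Called when this node receives a new catalog version. */
    public synchronized void onCatalogVersionReceived(int version) {
        localCatalogVersion = Math.max(localCatalogVersion, version);
        // Complete every waiter whose required version has now arrived.
        waiters.forEach((v, f) -> {
            if (v <= localCatalogVersion)
                f.complete(null);
        });
    }

    /** Completes once the local catalog has caught up to the required version. */
    public synchronized CompletableFuture<Void> waitFor(int requiredVersion) {
        if (localCatalogVersion >= requiredVersion)
            return CompletableFuture.completedFuture(null);
        return waiters.computeIfAbsent(requiredVersion, v -> new CompletableFuture<>());
    }

    /** DDL entry point: sync first, then execute. */
    public CompletableFuture<Void> executeDdl(int requiredVersion, Runnable ddl) {
        return waitFor(requiredVersion).thenRun(ddl);
    }
}
```

The sketch deliberately keeps completed waiters around; a real implementation would clean them up and would derive the required version from a timestamp equal to 'now' rather than an integer.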



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21173) Repeated call to commit/abort should not emit exceptions

2024-01-19 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808725#comment-17808725
 ] 

Vladislav Pyatkov commented on IGNITE-21173:


Merged bba00639c68970007b4ec837e03cf0aa10cb0b87

> Repeated call to commit/abort should not emit exceptions
> 
>
> Key: IGNITE-21173
> URL: https://issues.apache.org/jira/browse/IGNITE-21173
> Project: Ignite
>  Issue Type: Task
>Reporter:  Kirill Sizov
>Assignee:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As per the docs,
> ??A commit of a completed or ending transaction has no effect and always 
> succeeds when the transaction is completed.??
> At the moment, if the first call to commit/abort fails with an exception, the 
> subsequent ones will fail with the same exception.
> *Definition of Done*
> 1. Subsequent calls to commit/abort should succeed.
> 2. The tests should reflect this change.
> 3. If a transaction is being committed, but the finishing logic decides to 
> abort this transaction (due to primary replica expiration), the commit call 
> will fail with {{TransactionAlreadyFinishedException}}. 
> The exception should be renamed to 
> {{MismatchingTransactionOutcomeException}}, as it is thrown when the real 
> outcome is different from the one we provide as a parameter.
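The target semantics can be sketched with a toy transaction object (illustrative names only, not the Ignite 3 API):

```java
// Toy sketch of the Definition of Done: the first commit()/rollback() fixes
// the outcome; repeating the same call is a successful no-op, while requesting
// the opposite outcome fails, mirroring MismatchingTransactionOutcomeException.
public class ToyTx {
    public enum Outcome { NONE, COMMITTED, ABORTED }

    private Outcome outcome = Outcome.NONE;

    public void commit() { finish(Outcome.COMMITTED); }

    public void rollback() { finish(Outcome.ABORTED); }

    private void finish(Outcome requested) {
        if (outcome == Outcome.NONE)
            outcome = requested; // first call decides the outcome
        else if (outcome != requested)
            throw new IllegalStateException("Transaction already finished with " + outcome);
        // Same outcome requested again: succeeds silently.
    }

    public Outcome outcome() { return outcome; }
}
```

A second commit() after a successful commit() returns normally, while a rollback() after commit() throws, matching the mismatched-outcome case in point 3.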



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21173) Repeated call to commit/abort should not emit exceptions

2024-01-19 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808721#comment-17808721
 ] 

Vladislav Pyatkov commented on IGNITE-21173:


LGTM

> Repeated call to commit/abort should not emit exceptions
> 
>
> Key: IGNITE-21173
> URL: https://issues.apache.org/jira/browse/IGNITE-21173
> Project: Ignite
>  Issue Type: Task
>Reporter:  Kirill Sizov
>Assignee:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As per the docs,
> ??A commit of a completed or ending transaction has no effect and always 
> succeeds when the transaction is completed.??
> At the moment, if the first call to commit/abort fails with an exception, the 
> subsequent ones will fail with the same exception.
> *Definition of Done*
> 1. Subsequent calls to commit/abort should succeed.
> 2. The tests should reflect this change.
> 3. If a transaction is being committed, but the finishing logic decides to 
> abort this transaction (due to primary replica expiration), the commit call 
> will fail with {{TransactionAlreadyFinishedException}}. 
> The exception should be renamed to 
> {{MismatchingTransactionOutcomeException}}, as it is thrown when the real 
> outcome is different from the one we provide as a parameter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21213) Coordination of mechanisms of determination for primary on replica side

2024-01-19 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-21213:
-
Epic Link: IGNITE-21174

> Coordination of mechanisms of determination for primary on replica side
> --
>
> Key: IGNITE-21213
> URL: https://issues.apache.org/jira/browse/IGNITE-21213
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> h3. Motivation
> In the replica listener, we have two mechanisms for determining the primary 
> replica that are not coordinated with each other. The first one is based on 
> the placement driver API (used in 
> _PartitionReplicaListener#ensureReplicaIsPrimary_) and the other one is based 
> on the placement driver events (handled by two methods: 
> _ReplicaManager#onPrimaryReplicaElected_ and 
> _ReplicaManager#onPrimaryReplicaExpired_).
> Because replica messages and events are handled in different threads, any 
> interleaving of processing is possible. For example, the replica can release 
> all transaction locks (on the PRIMARY_REPLICA_EXPIRED event) and then handle 
> a message for this transaction (because ensureReplicaIsPrimary was done 
> earlier), assuming that all the locks are still held.
> h3. Definition of done
> The two mechanisms work in coordination.
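One common way to coordinate the two paths is to funnel the expiration event and the request-side primacy check through shared state under a single lock, so they cannot interleave; a toy sketch (hypothetical names, not the actual Ignite code):

```java
import java.util.concurrent.locks.ReentrantLock;

// Toy sketch: the expiration event handler and the request path share one lock
// and a primacy flag, so a request that passed the primary check cannot race
// with the lock release triggered by PRIMARY_REPLICA_EXPIRED.
public class PrimaryReplicaState {
    private final ReentrantLock lock = new ReentrantLock();
    private boolean primary = true;

    /** Handler for the PRIMARY_REPLICA_EXPIRED event. */
    public void onPrimaryReplicaExpired() {
        lock.lock();
        try {
            primary = false;
            // ... release transaction locks here, atomically with the flag flip
        } finally {
            lock.unlock();
        }
    }

    /** Request path: re-checks primacy under the same lock before proceeding. */
    public boolean handleRequest(Runnable op) {
        lock.lock();
        try {
            if (!primary)
                return false; // reject: transaction locks may already be released
            op.run();
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```

In this sketch a request can never observe "primary" state after the expiration handler has started releasing locks, which is exactly the interleaving described above.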



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21213) Coordination of mechanisms of determination for primary on replica side

2024-01-19 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-21213:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Coordination of mechanisms of determination for primary on replica side
> --
>
> Key: IGNITE-21213
> URL: https://issues.apache.org/jira/browse/IGNITE-21213
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> h3. Motivation
> In the replica listener, we have two mechanisms for determining the primary 
> replica that are not coordinated with each other. The first one is based on 
> the placement driver API (used in 
> _PartitionReplicaListener#ensureReplicaIsPrimary_) and the other one is based 
> on the placement driver events (handled by two methods: 
> _ReplicaManager#onPrimaryReplicaElected_ and 
> _ReplicaManager#onPrimaryReplicaExpired_).
> Because replica messages and events are handled in different threads, any 
> interleaving of processing is possible. For example, the replica can release 
> all transaction locks (on the PRIMARY_REPLICA_EXPIRED event) and then handle 
> a message for this transaction (because ensureReplicaIsPrimary was done 
> earlier), assuming that all the locks are still held.
> h3. Definition of done
> The two mechanisms work in coordination.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20752) Bump opencensus version to latest 0.31.1

2024-01-19 Thread Nikita Amelchev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Amelchev updated IGNITE-20752:
-
Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> Bump opencensus version to latest 0.31.1
> 
>
> Key: IGNITE-20752
> URL: https://issues.apache.org/jira/browse/IGNITE-20752
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 2.15
>Reporter: ZhangJian He
>Assignee: Aleksandr Nikolaev
>Priority: Minor
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21248) HeapUnboundedLockManager lacks abandoned locks handling

2024-01-19 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808690#comment-17808690
 ] 

Vladislav Pyatkov commented on IGNITE-21248:


I added a test in the PR that demonstrates the issue:
_ItTransactionRecoveryTest.testTsRecoveryForCursor_

> HeapUnboundedLockManager lacks abandoned locks handling
> ---
>
> Key: IGNITE-21248
> URL: https://issues.apache.org/jira/browse/IGNITE-21248
> Project: Ignite
>  Issue Type: Task
>Reporter:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{HeapLockManager}} notifies {{OrphanDetector}} of a lock conflict to check 
> whether the lock holder is still alive, and immediately fails the request if 
> the holder is not (done in IGNITE-21147).
> {{HeapUnboundedLockManager}} does not have similar changes; it does not check 
> the response from {{OrphanDetector}}.
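The missing behavior can be sketched as follows (a Predicate stands in for the OrphanDetector's liveness answer; all names here are illustrative, not the real Ignite 3 API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Predicate;

// Toy sketch: on a lock conflict, ask the orphan detector whether the current
// holder is still alive and, if not, fail the waiting request immediately
// instead of blocking on a lock no one will ever release.
public class ToyLockManager {
    private final Predicate<String> holderAlive; // stands in for OrphanDetector

    public ToyLockManager(Predicate<String> holderAlive) {
        this.holderAlive = holderAlive;
    }

    /** Returns a future that fails fast when the conflicting holder is abandoned. */
    public CompletableFuture<Void> acquireConflicting(String currentHolder) {
        if (!holderAlive.test(currentHolder)) {
            return CompletableFuture.failedFuture(
                new IllegalStateException("Lock held by abandoned tx: " + currentHolder));
        }
        return new CompletableFuture<>(); // waits until the live holder releases
    }
}
```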



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20752) Bump opencensus version to latest 0.31.1

2024-01-19 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808682#comment-17808682
 ] 

Ignite TC Bot commented on IGNITE-20752:


{panel:title=Branch: [pull/11187/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/11187/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7710407&buildTypeId=IgniteTests24Java8_RunAll]

> Bump opencensus version to latest 0.31.1
> 
>
> Key: IGNITE-20752
> URL: https://issues.apache.org/jira/browse/IGNITE-20752
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 2.15
>Reporter: ZhangJian He
>Assignee: Aleksandr Nikolaev
>Priority: Minor
>  Labels: ise
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20752) Bump opencensus version to latest 0.31.1

2024-01-19 Thread Aleksandr Nikolaev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Nikolaev updated IGNITE-20752:

Fix Version/s: 2.17

> Bump opencensus version to latest 0.31.1
> 
>
> Key: IGNITE-20752
> URL: https://issues.apache.org/jira/browse/IGNITE-20752
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 2.15
>Reporter: ZhangJian He
>Assignee: Aleksandr Nikolaev
>Priority: Minor
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21313) Incorrect behaviour when invalid zone filter is applied to zone

2024-01-19 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-21313:
---
Description: 
Let's consider this code to be run in a test:

 
{code:java}
sql("CREATE ZONE ZONE1 WITH DATA_NODES_FILTER = 'INCORRECT_FILTER'");
sql("CREATE TABLE TEST(ID INT PRIMARY KEY, VAL0 INT) WITH 
PRIMARY_ZONE='ZONE1'"); {code}
 Current behaviour is that the test hangs, spamming the log with 

 
{noformat}
[2024-01-19T12:56:25,163][ERROR][%ictdt_n_0%metastorage-watch-executor-2][WatchProcessor]
 Error occurred when notifying safe time advanced callback
 java.util.concurrent.CompletionException: 
com.jayway.jsonpath.PathNotFoundException: No results for path: 
$['INCORRECT_FILTER']
    at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:883)
 [?:?]
    at 
java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2257)
 [?:?]
    at 
org.apache.ignite.internal.metastorage.server.WatchProcessor.notifyWatches(WatchProcessor.java:213)
 ~[main/:?]
    at 
org.apache.ignite.internal.metastorage.server.WatchProcessor.lambda$notifyWatches$3(WatchProcessor.java:169)
 ~[main/:?]
    at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
 [?:?]
    at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.jayway.jsonpath.PathNotFoundException: No results for path: 
$['INCORRECT_FILTER']{noformat}
 

We need to fix that and define a reaction to an incorrect filter.

 

*Implementation notes:*

To fix it, we need to change the implementation of DistributionZonesUtil#filter.

Instead of 
{code:java}
List<Map<String, Object>> res = JsonPath.read(convertedAttributes, filter);{code}
we need to use
{code:java}
Configuration configuration = new Configuration.ConfigurationBuilder()
.options(Option.SUPPRESS_EXCEPTIONS, Option.ALWAYS_RETURN_LIST)
.build();

List<Map<String, Object>> res = JsonPath.using(configuration).parse(convertedAttributes).read(filter);{code}
In this case an incorrect filter will not throw PathNotFoundException but will 
return an empty 'res'.

  was:
Let's consider this code to be run in a test:

 
{code:java}
sql("CREATE ZONE ZONE1 WITH DATA_NODES_FILTER = 'INCORRECT_FILTER'");
sql("CREATE TABLE TEST(ID INT PRIMARY KEY, VAL0 INT) WITH 
PRIMARY_ZONE='ZONE1'"); {code}
 Current behaviour is that the test hangs, spamming the log with 

 
{noformat}
[2024-01-19T12:56:25,163][ERROR][%ictdt_n_0%metastorage-watch-executor-2][WatchProcessor]
 Error occurred when notifying safe time advanced callback
 java.util.concurrent.CompletionException: 
com.jayway.jsonpath.PathNotFoundException: No results for path: 
$['INCORRECT_FILTER']
    at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:883)
 [?:?]
    at 
java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2257)
 [?:?]
    at 
org.apache.ignite.internal.metastorage.server.WatchProcessor.notifyWatches(WatchProcessor.java:213)
 ~[main/:?]
    at 
org.apache.ignite.internal.metastorage.server.WatchProcessor.lambda$notifyWatches$3(WatchProcessor.java:169)
 ~[main/:?]
    at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
 [?:?]
    at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.jayway.jsonpath.PathNotFoundException: No results for path: 
$['INCORRECT_FILTER']{noformat}
 

We need to fix that and define a reaction to an incorrect filter.

 

*Implementation notes:*

To fix it, we need to change the DistributionZonesUtil#filter implementation.

Instead of 
{code:java}
List<Map<String, Object>> res = JsonPath.read(convertedAttributes, filter);{code}
we need to use
{code:java}
Configuration configuration = new Configuration.ConfigurationBuilder()
.options(Option.SUPPRESS_EXCEPTIONS, Option.ALW

[jira] [Updated] (IGNITE-21313) Incorrect behaviour when invalid zone filter is applied to zone

2024-01-19 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-21313:
---
Description: 
Let's consider this code to be run in a test:

 
{code:java}
sql("CREATE ZONE ZONE1 WITH DATA_NODES_FILTER = 'INCORRECT_FILTER'");
sql("CREATE TABLE TEST(ID INT PRIMARY KEY, VAL0 INT) WITH 
PRIMARY_ZONE='ZONE1'"); {code}
 Current behaviour is that the test hangs, spamming the log with 

 
{noformat}
[2024-01-19T12:56:25,163][ERROR][%ictdt_n_0%metastorage-watch-executor-2][WatchProcessor]
 Error occurred when notifying safe time advanced callback
 java.util.concurrent.CompletionException: 
com.jayway.jsonpath.PathNotFoundException: No results for path: 
$['INCORRECT_FILTER']
    at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:883)
 [?:?]
    at 
java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2257)
 [?:?]
    at 
org.apache.ignite.internal.metastorage.server.WatchProcessor.notifyWatches(WatchProcessor.java:213)
 ~[main/:?]
    at 
org.apache.ignite.internal.metastorage.server.WatchProcessor.lambda$notifyWatches$3(WatchProcessor.java:169)
 ~[main/:?]
    at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
 [?:?]
    at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.jayway.jsonpath.PathNotFoundException: No results for path: 
$['INCORRECT_FILTER']{noformat}
 

We need to fix that and define a reaction to an incorrect filter.

 

*Implementation notes:*

To fix it, we need to change the DistributionZonesUtil#filter implementation.

Instead of 
{code:java}
List<Map<String, Object>> res = JsonPath.read(convertedAttributes, filter);{code}
we need to use
{code:java}
Configuration configuration = new Configuration.ConfigurationBuilder()
.options(Option.SUPPRESS_EXCEPTIONS, Option.ALWAYS_RETURN_LIST)
.build();

List<Map<String, Object>> res = JsonPath.using(configuration).parse(convertedAttributes).read(filter);{code}
In this case an incorrect filter will not throw PathNotFoundException but will 
return an empty 'res'.

 

  was:
Let's consider this code to be run in a test:

 
{code:java}
sql("CREATE ZONE ZONE1 WITH DATA_NODES_FILTER = 'INCORRECT_FILTER'");
sql("CREATE TABLE TEST(ID INT PRIMARY KEY, VAL0 INT) WITH 
PRIMARY_ZONE='ZONE1'"); {code}
 Current behaviour is that the test hangs, spamming the log with 

 
{noformat}
[2024-01-19T12:56:25,163][ERROR][%ictdt_n_0%metastorage-watch-executor-2][WatchProcessor]
 Error occurred when notifying safe time advanced callback
 java.util.concurrent.CompletionException: 
com.jayway.jsonpath.PathNotFoundException: No results for path: 
$['INCORRECT_FILTER']
    at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:883)
 [?:?]
    at 
java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2257)
 [?:?]
    at 
org.apache.ignite.internal.metastorage.server.WatchProcessor.notifyWatches(WatchProcessor.java:213)
 ~[main/:?]
    at 
org.apache.ignite.internal.metastorage.server.WatchProcessor.lambda$notifyWatches$3(WatchProcessor.java:169)
 ~[main/:?]
    at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
 [?:?]
    at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.jayway.jsonpath.PathNotFoundException: No results for path: 
$['INCORRECT_FILTER']{noformat}
 

We need to fix that and define a reaction to an incorrect filter.


> Incorrect behaviour when invalid zone filter is applied to zone 
> 
>
> Key: IGNITE-21313
> URL: https://issues.apache.org/jira/browse/IGNITE-21313
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Alie

[jira] [Created] (IGNITE-21317) ItRebalanceDistributedTest.testRaftClientsUpdatesAfterRebalance rarely fails with "No such partition 0 in table TBL1"

2024-01-19 Thread Kirill Gusakov (Jira)
Kirill Gusakov created IGNITE-21317:
---

 Summary: 
ItRebalanceDistributedTest.testRaftClientsUpdatesAfterRebalance rarely fails with 
"No such partition 0 in table TBL1"
 Key: IGNITE-21317
 URL: https://issues.apache.org/jira/browse/IGNITE-21317
 Project: Ignite
  Issue Type: Bug
Reporter: Kirill Gusakov


We thought that this issue would be fixed by IGNITE-20210, but it is still there.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21317) ItRebalanceDistributedTest.testRaftClientsUpdatesAfterRebalance rarely fails with "No such partition 0 in table TBL1"

2024-01-19 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov updated IGNITE-21317:

Labels: ignite-3  (was: )

> ItRebalanceDistributedTest.testRaftClientsUpdatesAfterRebalance rarely fails 
> with "No such partition 0 in table TBL1"
> ---
>
> Key: IGNITE-21317
> URL: https://issues.apache.org/jira/browse/IGNITE-21317
> Project: Ignite
>  Issue Type: Bug
>Reporter: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> We thought that this issue would be fixed by IGNITE-20210, but it is still there.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21266) [Java Thin Client] Partition Awareness does not work after cluster restart

2024-01-19 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-21266:

Labels: ise  (was: )

> [Java Thin Client] Partition Awareness does not work after cluster restart
> --
>
> Key: IGNITE-21266
> URL: https://issues.apache.org/jira/browse/IGNITE-21266
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Petrov
>Priority: Major
>  Labels: ise
>
> Reproducer: 
> {code:java}
> /** */
> public class PartitionAwarenessClusterRestartTest extends 
> ThinClientAbstractPartitionAwarenessTest {
> /** */
> @Test
> public void testGroupNodesAfterClusterRestart() throws Exception {
> prepareCluster();
> initClient(getClientConfiguration(0), 0, 1);
> checkPartitionAwareness();
> stopAllGrids();
> prepareCluster();
> checkPartitionAwareness();
> }
> /** */
> private void checkPartitionAwareness() throws Exception {
> ClientCache<Integer, Integer> cache = client.cache(DEFAULT_CACHE_NAME);
> cache.put(0, 0);
> opsQueue.clear();
> for (int i = 1; i < 1000; i++) {
> cache.put(i, i);
> 
> assertOpOnChannel(nodeChannel(grid(0).affinity(DEFAULT_CACHE_NAME).mapKeyToNode(i).id()),
>  ClientOperation.CACHE_PUT);
> }
> }
> /** */
> private void prepareCluster() throws Exception {
> startGrids(3);
> grid(0).createCache(new CacheConfiguration<>(DEFAULT_CACHE_NAME));
> }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21316) Do schema sync before executing a DDL operation

2024-01-19 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-21316:
---
Description: 
When executing a DDL operation on one node and then executing another 
operation (which depends on the first operation having finished) on another 
node, the second operation might not see the first operation's results.

For example, if we create a zone via node A, wait for the DDL future to 
complete and then we try to create a table using that new zone via node B, the 
table creation might fail because node B does not see the newly-created zone 
yet.

This is because the zone creation future only makes us wait for the activation 
timestamp to become non-future on all clocks in the cluster; but when this 
happens, there is no guarantee that all nodes have actually received the new 
catalog version.

To fix this, we need to do a schema sync for timestamp equal to 'now' before 
doing any DDL operation.

This should probably be done in the DDL handler (but maybe it makes sense to do 
it in the `execute()` method of the CatalogManager).

An example of a test demonstrating the problem is 
ItRebalanceDistributedTest.testOnLeaderElectedRebalanceRestart(). But this test 
also has another problem: it interacts with the CatalogManager directly. If we 
add the fix above the CatalogManager, the test will have to be fixed to do the 
schema sync by hand.

  was:
When executing a DDL operation on one node and then executing another 
operation (which depends on the first operation having finished) on another 
node, the second operation might not see the first operation's results.

For example, if we create a zone via node A, wait for the DDL future to 
complete and then we try to create a table using that new zone via node B, the 
table creation might fail because node B does not see the newly-created zone 
yet.

This is because the zone creation future only makes us wait for the activation 
timestamp to become non-future on all clocks in the cluster; but when this 
happens, there is no guarantee that all nodes have actually received the new 
catalog version.

To fix this, we need to do a schema sync for timestamp equal to 'now' before 
doing any DDL operation.

This should probably be done in the DDL handler (but maybe it makes sense to do 
it in the `execute()` method of the CatalogManager).


> Do schema sync before executing a DDL operation
> ---
>
> Key: IGNITE-21316
> URL: https://issues.apache.org/jira/browse/IGNITE-21316
> Project: Ignite
>  Issue Type: Bug
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> When executing a DDL operation on one node and then executing another 
> operation (which depends on the first operation having finished) on another 
> node, the second operation might not see the first operation's results.
> For example, if we create a zone via node A, wait for the DDL future to 
> complete and then we try to create a table using that new zone via node B, 
> the table creation might fail because node B does not see the newly-created 
> zone yet.
> This is because the zone creation future only makes us wait for the 
> activation timestamp to become non-future on all clocks in the cluster; but 
> when this happens, there is no guarantee that all nodes have actually 
> received the new catalog version.
> To fix this, we need to do a schema sync for timestamp equal to 'now' before 
> doing any DDL operation.
> This should probably be done in the DDL handler (but maybe it makes sense to 
> do it in the `execute()` method of the CatalogManager).
> An example of a test demonstrating the problem is 
> ItRebalanceDistributedTest.testOnLeaderElectedRebalanceRestart(). But this 
> test also has another problem: it interacts with the CatalogManager directly. 
> If we add the fix above the CatalogManager, the test will have to be fixed to 
> do the schema sync by hand.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21316) Do schema sync before executing a DDL operation

2024-01-19 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-21316:
---
Description: 
When executing a DDL operation on one node and then executing another 
operation (which depends on the first operation having finished) on another 
node, the second operation might not see the first operation's results.

For example, if we create a zone via node A, wait for the DDL future to 
complete and then we try to create a table using that new zone via node B, the 
table creation might fail because node B does not see the newly-created zone 
yet.

This is because the zone creation future only makes us wait for the activation 
timestamp to become non-future on all clocks in the cluster; but when this 
happens, there is no guarantee that all nodes have actually received the new 
catalog version.

To fix this, we need to do a schema sync for timestamp equal to 'now' before 
doing any DDL operation.

This should probably be done in the DDL handler (but maybe it makes sense to do 
it in the `execute()` method of the CatalogManager).

An example of a test demonstrating the problem is 
ItRebalanceDistributedTest.testOnLeaderElectedRebalanceRestart(). But this test 
also has another problem: it interacts with the CatalogManager directly. If we 
add the fix above the CatalogManager, the test will have to be fixed to do 
schema sync by hand.

  was:
When executing a DDL operation on one node and then executing another 
operation (one that depends on the first operation finishing) on another node, 
the second operation might not see the first operation's results.

For example, if we create a zone via node A, wait for the DDL future to 
complete and then we try to create a table using that new zone via node B, the 
table creation might fail because node B does not see the newly-created zone 
yet.

This is because the zone creation future only makes us wait for the activation 
timestamp to become non-future on all clocks in the cluster, but when this 
happens, there is no guarantee that all nodes have actually received the new 
catalog version.

To fix this, we need to do a schema sync for a timestamp equal to 'now' before 
doing any DDL operation.

This should probably be done in the DDL handler (but maybe it makes sense to do 
it in the `execute()` method of the CatalogManager).

An example of a test demonstrating the problem is 
ItRebalanceDistributedTest.testOnLeaderElectedRebalanceRestart(). But this test 
also has another problem: it interacts with the CatalogManager directly. If we 
add the fix above the CatalogManager, the test will have to be fixed to do the 
schema sync by hand.


> Do schema sync before executing a DDL operation
> ---
>
> Key: IGNITE-21316
> URL: https://issues.apache.org/jira/browse/IGNITE-21316
> Project: Ignite
>  Issue Type: Bug
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> When executing a DDL operation on one node and then executing another 
> operation (one that depends on the first operation finishing) on another 
> node, the second operation might not see the first operation's results.
> For example, if we create a zone via node A, wait for the DDL future to 
> complete and then we try to create a table using that new zone via node B, 
> the table creation might fail because node B does not see the newly-created 
> zone yet.
> This is because the zone creation future only makes us wait for the 
> activation timestamp to become non-future on all clocks in the cluster, but 
> when this happens, there is no guarantee that all nodes have actually 
> received the new catalog version.
> To fix this, we need to do a schema sync for a timestamp equal to 'now' 
> before doing any DDL operation.
> This should probably be done in the DDL handler (but maybe it makes sense to 
> do it in the `execute()` method of the CatalogManager).
> An example of a test demonstrating the problem is 
> ItRebalanceDistributedTest.testOnLeaderElectedRebalanceRestart(). But this 
> test also has another problem: it interacts with the CatalogManager directly. 
> If we add the fix above the CatalogManager, the test will have to be fixed to 
> do schema sync by hand.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21316) Do schema sync before executing a DDL operation

2024-01-19 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-21316:
---
Description: 
When executing a DDL operation on one node and then executing another 
operation (one that depends on the first operation finishing) on another node, 
the second operation might not see the first operation's results.

For example, if we create a zone via node A, wait for the DDL future to 
complete and then we try to create a table using that new zone via node B, the 
table creation might fail because node B does not see the newly-created zone 
yet.

This is because the zone creation future only makes us wait for the activation 
timestamp to become non-future on all clocks in the cluster, but when this 
happens, there is no guarantee that all nodes have actually received the new 
catalog version.

To fix this, we need to do a schema sync for a timestamp equal to 'now' before 
doing any DDL operation.

This should probably be done in the DDL handler (but maybe it makes sense to do 
it in the `execute()` method of the CatalogManager).

> Do schema sync before executing a DDL operation
> ---
>
> Key: IGNITE-21316
> URL: https://issues.apache.org/jira/browse/IGNITE-21316
> Project: Ignite
>  Issue Type: Bug
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> When executing a DDL operation on one node and then executing another 
> operation (one that depends on the first operation finishing) on another 
> node, the second operation might not see the first operation's results.
> For example, if we create a zone via node A, wait for the DDL future to 
> complete and then we try to create a table using that new zone via node B, 
> the table creation might fail because node B does not see the newly-created 
> zone yet.
> This is because the zone creation future only makes us wait for the 
> activation timestamp to become non-future on all clocks in the cluster, but 
> when this happens, there is no guarantee that all nodes have actually 
> received the new catalog version.
> To fix this, we need to do a schema sync for a timestamp equal to 'now' 
> before doing any DDL operation.
> This should probably be done in the DDL handler (but maybe it makes sense to 
> do it in the `execute()` method of the CatalogManager).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21315) Node can't join the cluster when create index in progress and caches have the same deploymentId

2024-01-19 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-21315:
---
Summary: Node can't join the cluster when create index in progress and 
caches have the same deploymentId  (was: Node can't join then cluster when 
create index in progress and caches have the same deploymentId)

> Node can't join the cluster when create index in progress and caches have the 
> same deploymentId
> ---
>
> Key: IGNITE-21315
> URL: https://issues.apache.org/jira/browse/IGNITE-21315
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: ise
>
> Reproducer:
> {code:java}
> public class DynamicIndexCreateAfterClusterRestartTest extends 
> GridCommonAbstractTest {
> /** {@inheritDoc} */
> @Override protected IgniteConfiguration getConfiguration(String 
> igniteInstanceName) throws Exception {
> IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName)
> .setDataStorageConfiguration(
> new 
> DataStorageConfiguration().setDefaultDataRegionConfiguration(
> new 
> DataRegionConfiguration().setPersistenceEnabled(true)));
> cfg.setConsistentId(igniteInstanceName);
> return cfg;
> }
> /** */
> @Test
> public void testNodeJoinOnCreateIndex() throws Exception {
> IgniteEx grid = startGrids(2);
> grid.cluster().state(ClusterState.ACTIVE);
> grid.getOrCreateCache(new 
> CacheConfiguration<>("CACHE1").setSqlSchema("PUBLIC")
> .setIndexedTypes(Integer.class, Integer.class));
> grid.getOrCreateCache(new 
> CacheConfiguration<>("CACHE2").setSqlSchema("PUBLIC")
> .setIndexedTypes(Integer.class, TestValue.class));
> stopAllGrids();
> startGrids(2);
> try (IgniteDataStreamer<Integer, TestValue> ds = 
> grid(0).dataStreamer("CACHE2")) {
> for (int i = 0; i < 1_500_000; i++)
> ds.addData(i, new TestValue(i));
> }
> GridTestUtils.runAsync(() -> {
> grid(1).cache("CACHE2").query(new SqlFieldsQuery("CREATE INDEX ON 
> TestValue(val)")).getAll();
> });
> doSleep(100);
> stopGrid(0, true);
> cleanPersistenceDir(getTestIgniteInstanceName(0));
> startGrid(0);
> }
> /** */
> private static class TestValue {
> /** */
> @QuerySqlField
> private final int val;
> /** */
> private TestValue(int val) {
> this.val = val;
> }
> }
> }
>  {code}
> Fails on last node join with an exception:
> {noformat}
> java.lang.AssertionError
>     at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart0(GridQueryProcessor.java:1124)
>     at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart(GridQueryProcessor.java:1257)
>     at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$ff7b936b$1(GridCacheProcessor.java:1869)
>     at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$16(GridCacheProcessor.java:1754)
>     at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1863)
>     at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1753)
>     at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCachesOnLocalJoin(GridCacheProcessor.java:1699)
>     at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.initCachesOnLocalJoin(GridDhtPartitionsExchangeFuture.java:1162)
>     at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:1007)
>     at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3336)
>     at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3170)
>     at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
>     at java.lang.Thread.run(Thread.java:748){noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21316) Do schema sync before executing a DDL operation

2024-01-19 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-21316:
--

 Summary: Do schema sync before executing a DDL operation
 Key: IGNITE-21316
 URL: https://issues.apache.org/jira/browse/IGNITE-21316
 Project: Ignite
  Issue Type: Bug
Reporter: Roman Puchkovskiy
 Fix For: 3.0.0-beta2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21315) Node can't join then cluster when create index in progress and caches have the same deploymentId

2024-01-19 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-21315:
---
Labels: ise  (was: )

> Node can't join then cluster when create index in progress and caches have 
> the same deploymentId
> 
>
> Key: IGNITE-21315
> URL: https://issues.apache.org/jira/browse/IGNITE-21315
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: ise
>
> Reproducer:
> {code:java}
> public class DynamicIndexCreateAfterClusterRestartTest extends 
> GridCommonAbstractTest {
> /** {@inheritDoc} */
> @Override protected IgniteConfiguration getConfiguration(String 
> igniteInstanceName) throws Exception {
> IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName)
> .setDataStorageConfiguration(
> new 
> DataStorageConfiguration().setDefaultDataRegionConfiguration(
> new 
> DataRegionConfiguration().setPersistenceEnabled(true)));
> cfg.setConsistentId(igniteInstanceName);
> return cfg;
> }
> /** */
> @Test
> public void testNodeJoinOnCreateIndex() throws Exception {
> IgniteEx grid = startGrids(2);
> grid.cluster().state(ClusterState.ACTIVE);
> grid.getOrCreateCache(new 
> CacheConfiguration<>("CACHE1").setSqlSchema("PUBLIC")
> .setIndexedTypes(Integer.class, Integer.class));
> grid.getOrCreateCache(new 
> CacheConfiguration<>("CACHE2").setSqlSchema("PUBLIC")
> .setIndexedTypes(Integer.class, TestValue.class));
> stopAllGrids();
> startGrids(2);
> try (IgniteDataStreamer<Integer, TestValue> ds = 
> grid(0).dataStreamer("CACHE2")) {
> for (int i = 0; i < 1_500_000; i++)
> ds.addData(i, new TestValue(i));
> }
> GridTestUtils.runAsync(() -> {
> grid(1).cache("CACHE2").query(new SqlFieldsQuery("CREATE INDEX ON 
> TestValue(val)")).getAll();
> });
> doSleep(100);
> stopGrid(0, true);
> cleanPersistenceDir(getTestIgniteInstanceName(0));
> startGrid(0);
> }
> /** */
> private static class TestValue {
> /** */
> @QuerySqlField
> private final int val;
> /** */
> private TestValue(int val) {
> this.val = val;
> }
> }
> }
>  {code}
> Fails on last node join with an exception:
> {noformat}
> java.lang.AssertionError
>     at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart0(GridQueryProcessor.java:1124)
>     at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart(GridQueryProcessor.java:1257)
>     at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$ff7b936b$1(GridCacheProcessor.java:1869)
>     at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$16(GridCacheProcessor.java:1754)
>     at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1863)
>     at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1753)
>     at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCachesOnLocalJoin(GridCacheProcessor.java:1699)
>     at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.initCachesOnLocalJoin(GridDhtPartitionsExchangeFuture.java:1162)
>     at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:1007)
>     at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3336)
>     at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3170)
>     at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
>     at java.lang.Thread.run(Thread.java:748){noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21315) Node can't join then cluster when create index in progress and caches have the same deploymentId

2024-01-19 Thread Aleksey Plekhanov (Jira)
Aleksey Plekhanov created IGNITE-21315:
--

 Summary: Node can't join then cluster when create index in 
progress and caches have the same deploymentId
 Key: IGNITE-21315
 URL: https://issues.apache.org/jira/browse/IGNITE-21315
 Project: Ignite
  Issue Type: Bug
Reporter: Aleksey Plekhanov
Assignee: Aleksey Plekhanov


Reproducer:
{code:java}
public class DynamicIndexCreateAfterClusterRestartTest extends 
GridCommonAbstractTest {
/** {@inheritDoc} */
@Override protected IgniteConfiguration getConfiguration(String 
igniteInstanceName) throws Exception {
IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName)
.setDataStorageConfiguration(
new 
DataStorageConfiguration().setDefaultDataRegionConfiguration(
new DataRegionConfiguration().setPersistenceEnabled(true)));

cfg.setConsistentId(igniteInstanceName);

return cfg;
}

/** */
@Test
public void testNodeJoinOnCreateIndex() throws Exception {
IgniteEx grid = startGrids(2);
grid.cluster().state(ClusterState.ACTIVE);

grid.getOrCreateCache(new 
CacheConfiguration<>("CACHE1").setSqlSchema("PUBLIC")
.setIndexedTypes(Integer.class, Integer.class));
grid.getOrCreateCache(new 
CacheConfiguration<>("CACHE2").setSqlSchema("PUBLIC")
.setIndexedTypes(Integer.class, TestValue.class));

stopAllGrids();

startGrids(2);

try (IgniteDataStreamer<Integer, TestValue> ds = 
grid(0).dataStreamer("CACHE2")) {
for (int i = 0; i < 1_500_000; i++)
ds.addData(i, new TestValue(i));
}

GridTestUtils.runAsync(() -> {
grid(1).cache("CACHE2").query(new SqlFieldsQuery("CREATE INDEX ON 
TestValue(val)")).getAll();
});

doSleep(100);

stopGrid(0, true);

cleanPersistenceDir(getTestIgniteInstanceName(0));

startGrid(0);
}

/** */
private static class TestValue {
/** */
@QuerySqlField
private final int val;

/** */
private TestValue(int val) {
this.val = val;
}
}
}
 {code}
Fails on last node join with an exception:
{noformat}
java.lang.AssertionError
    at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart0(GridQueryProcessor.java:1124)
    at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart(GridQueryProcessor.java:1257)
    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$ff7b936b$1(GridCacheProcessor.java:1869)
    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$16(GridCacheProcessor.java:1754)
    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1863)
    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1753)
    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCachesOnLocalJoin(GridCacheProcessor.java:1699)
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.initCachesOnLocalJoin(GridDhtPartitionsExchangeFuture.java:1162)
    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:1007)
    at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3336)
    at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3170)
    at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
    at java.lang.Thread.run(Thread.java:748){noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21240) Remove deprecated authorization method from Security Context

2024-01-19 Thread Mikhail Petrov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808644#comment-17808644
 ] 

Mikhail Petrov commented on IGNITE-21240:
-

[~av] Thank you for the review.

> Remove deprecated authorization method from Security Context
> 
>
> Key: IGNITE-21240
> URL: https://issues.apache.org/jira/browse/IGNITE-21240
> Project: Ignite
>  Issue Type: Task
>Reporter: Mikhail Petrov
>Assignee: Mikhail Petrov
>Priority: Minor
> Fix For: 2.17
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We need to finally remove what was described as deprecated in 
> https://issues.apache.org/jira/browse/IGNITE-19807



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20812) Sql. Performance of queries affected by the number of partitions

2024-01-19 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-20812:
---
Epic Link: IGNITE-21312

> Sql. Performance of queries affected by the number of partitions
> --
>
> Key: IGNITE-20812
> URL: https://issues.apache.org/jira/browse/IGNITE-20812
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> Performance of INSERT and SELECT-by-primary-key queries is affected by the 
> number of partitions in the table. Most probably, the reason is a lack of 
> partition pruning in the SQL engine.
> Let's investigate and fix the problem.
> Here is the result of a local benchmark (on my laptop; 2.6 GHz 6-Core Intel 
> Core i7, 32 GB RAM):
> ||Benchmark||(clusterSize)||(fsync)||(partitionCount)||Mode||Cnt||Score||Error||Units||
> |InsertBenchmark.sqlInsert|1|false|1|avgt|20|0.643|± 0.069|ms/op|
> |InsertBenchmark.sqlInsert|1|false|5|avgt|20|0.756|± 0.049|ms/op|
> |InsertBenchmark.sqlInsert|1|false|25|avgt|20|1.114|± 0.093|ms/op|
> |SelectBenchmark.sqlGet|1|false|1|avgt|20|203.432|± 16.617|us/op|
> |SelectBenchmark.sqlGet|1|false|5|avgt|20|320.383|± 22.086|us/op|
> |SelectBenchmark.sqlGet|1|false|25|avgt|20|794.232|± 49.473|us/op|
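For intuition, partition pruning for a point lookup means deriving the single owning partition from the key instead of visiting all partitions, so per-query cost stays flat as partitionCount grows. A minimal sketch (the hash-modulo mapping below is purely illustrative, not Ignite's actual affinity function):

```java
import java.util.Objects;

/** Illustrative partition mapping; not Ignite's real affinity function. */
final class PartitionPruning {
    /** Maps a primary key to its single owning partition. */
    static int targetPartition(Object key, int partitions) {
        return Math.floorMod(Objects.hashCode(key), partitions);
    }

    public static void main(String[] args) {
        // A pruned "SELECT ... WHERE id = ?" plan visits exactly one partition
        // regardless of the total count; an unpruned plan visits all of them,
        // which matches the degradation from 1 to 25 partitions above.
        System.out.println("key=42, 25 partitions -> partition "
            + targetPartition(42, 25));
    }
}
```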



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-21200) IgniteRunner start fails in Windows via git-bash

2024-01-19 Thread Vadim Pakhnushev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Pakhnushev resolved IGNITE-21200.
---
Resolution: Won't Fix

> IgniteRunner start fails in Windows via git-bash
> 
>
> Key: IGNITE-21200
> URL: https://issues.apache.org/jira/browse/IGNITE-21200
> Project: Ignite
>  Issue Type: Bug
>  Components: binary, cli
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3, windows
> Fix For: 3.0.0-beta2
>
>
> h3. Steps to reproduce:
>  # Build ignite3-db-3.0.0-SNAPSHOT.zip distribution from sources.
>  # Use git-bash to run `./ignite3db start` in Windows.
> h3. Expected:
> IgniteRunner started.
> h3. Actual:
> Error:
> {code:java}
> ./ignite3db: line 38: C:\Program: No such file or directory{code}
> h3. Details:
> The space in the path to Java (C:\Program Files) is treated as a separator 
> between the command and its arguments. To avoid this, all variable usages 
> have to be replaced with arrays. For example, `${JAVA_CMD_WITH_ARGS}` has to 
> be replaced with `"${JAVA_CMD_WITH_ARGS[@]}"`, and so on.
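The failure mode and the array-based fix can be reproduced outside the ignite3db script (the helper path below is made up for the demo; only the variable name mirrors the report):

```shell
#!/usr/bin/env bash
set -eu
# Create a helper whose path contains a space, like "C:\Program Files\...\java".
mkdir -p "/tmp/prog files"
printf '#!/bin/sh\necho "$@"\n' > "/tmp/prog files/fake-java"
chmod +x "/tmp/prog files/fake-java"

# Broken pattern: an unquoted scalar word-splits at the space, so bash tries
# to execute "/tmp/prog" -- the same error as "C:\Program: No such file".
JAVA_CMD_WITH_ARGS_STR="/tmp/prog files/fake-java -version"
# $JAVA_CMD_WITH_ARGS_STR   # would fail: /tmp/prog: No such file or directory

# Fix: keep the command and each argument as separate array elements and
# expand with a quoted "${...[@]}", which preserves the space in the path.
JAVA_CMD_WITH_ARGS=("/tmp/prog files/fake-java" -version)
"${JAVA_CMD_WITH_ARGS[@]}"   # prints: -version
```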



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21200) IgniteRunner start fails in Windows via git-bash

2024-01-19 Thread Vadim Pakhnushev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808621#comment-17808621
 ] 

Vadim Pakhnushev commented on IGNITE-21200:
---

It's impossible to run Ignite under MinGW: it would run a Windows JVM but use 
Unix paths and conventions.

> IgniteRunner start fails in Windows via git-bash
> 
>
> Key: IGNITE-21200
> URL: https://issues.apache.org/jira/browse/IGNITE-21200
> Project: Ignite
>  Issue Type: Bug
>  Components: binary, cli
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3, windows
> Fix For: 3.0.0-beta2
>
>
> h3. Steps to reproduce:
>  # Build ignite3-db-3.0.0-SNAPSHOT.zip distribution from sources.
>  # Use git-bash to run `./ignite3db start` in Windows.
> h3. Expected:
> IgniteRunner started.
> h3. Actual:
> Error:
> {code:java}
> ./ignite3db: line 38: C:\Program: No such file or directory{code}
> h3. Details:
> The space in the path to Java (C:\Program Files) is treated as a separator 
> between the command and its arguments. To avoid this, all variable usages 
> have to be replaced with arrays. For example, `${JAVA_CMD_WITH_ARGS}` has to 
> be replaced with `"${JAVA_CMD_WITH_ARGS[@]}"`, and so on.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21309) DirectMessageWriter keeps holding used buffers

2024-01-19 Thread Kirill Tkalenko (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808615#comment-17808615
 ] 

Kirill Tkalenko commented on IGNITE-21309:
--

Looks good.

> DirectMessageWriter keeps holding used buffers
> --
>
> Key: IGNITE-21309
> URL: https://issues.apache.org/jira/browse/IGNITE-21309
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Thread-local optimized marshallers store links to write buffers in their 
> internal stacks, which could lead to occasional OOMs. We should release 
> buffers after writing nested messages in DirectMessageWriter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-21307) Drop the node in case of failure in watch listener

2024-01-19 Thread Mirza Aliev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808605#comment-17808605
 ] 

Mirza Aliev edited comment on IGNITE-21307 at 1/19/24 11:40 AM:


Once this ticket (https://issues.apache.org/jira/browse/IGNITE-20452) is 
implemented, we can start the current ticket, and we will need to call the 
corresponding method from the new FailureHandler when watch processing 
fails.


was (Author: maliev):
Once this ticket will be implemented, we can start the current ticket and we 
will need to call the corresponding method from the new FailureHandler when 
watch processing is failed.

> Drop the node in case of failure in watch listener
> --
>
> Key: IGNITE-21307
> URL: https://issues.apache.org/jira/browse/IGNITE-21307
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> For linearized watch processing, we have WatchProcessor#notificationFuture, 
> which is rewritten for each revision processing and each meta storage safe 
> time advance. If some watch notification future completes exceptionally, no 
> further updates will be processed, because they need the previous updates to 
> be processed successfully. This is implemented by chaining futures like this:
>  
> {code:java}
> notificationFuture = notificationFuture
> .thenRunAsync(() -> revisionCallback.onSafeTimeAdvanced(time), 
> watchExecutor)
> .whenComplete((ignored, e) -> {
> if (e != null) {
> LOG.error("Error occurred when notifying safe time advanced 
> callback", e);
> }
> }); {code}
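The effect of this chaining can be reproduced with plain CompletableFuture, no Ignite classes (revision numbers below are made up for the demo): once one stage fails, every later stage inherits the failure, so only the error-logging branch keeps firing.

```java
import java.util.concurrent.CompletableFuture;

public class ChainedNotificationDemo {
    public static void main(String[] args) {
        CompletableFuture<Void> notificationFuture = CompletableFuture.completedFuture(null);

        for (int revision = 1; revision <= 3; revision++) {
            int rev = revision;
            notificationFuture = notificationFuture
                .thenRun(() -> {
                    if (rev == 1)
                        throw new IllegalStateException("Peers are not ready");
                    System.out.println("processed revision " + rev);
                })
                .whenComplete((ignored, e) -> {
                    if (e != null)
                        System.out.println("Error for revision " + rev + ": "
                            + e.getCause().getMessage());
                });
        }
        // "processed revision" never prints: the thenRun actions for revisions
        // 2 and 3 are skipped because their upstream stage is already failed,
        // and the same exception is logged once per later revision.
    }
}
```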
> For now, we don't have any failure handling for an exceptionally completed 
> notification future. This leads to endless log records with the same 
> exception's stack trace, caused by meta storage safe time advances:
>  
> {code:java}
> [2024-01-16T21:42:35,515][ERROR][%isot_n_0%JRaft-FSMCaller-Disruptor-metastorage-_stripe_0-0][WatchProcessor]
>  Error occurred when notifying safe time advanced callback
> java.util.concurrent.CompletionException: 
> org.apache.ignite.internal.lang.IgniteInternalException: IGN-CMN-65535 
> TraceId:3877e098-6a1b-4f30-88a8-a4c13411d573 Peers are not ready 
> [groupId=5_part_0]
>     at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
>  ~[?:?]
>     at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
>  ~[?:?]
>     at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1081)
>  ~[?:?]
>     at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
>  ~[?:?]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  ~[?:?]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  ~[?:?]
>     at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: org.apache.ignite.internal.lang.IgniteInternalException: Peers are 
> not ready [groupId=5_part_0]
>     at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.randomNode(RaftGroupServiceImpl.java:725)
>  ~[ignite-raft-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.randomNode(RaftGroupServiceImpl.java:709)
>  ~[ignite-raft-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.refreshLeader(RaftGroupServiceImpl.java:234)
>  ~[ignite-raft-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.start(RaftGroupServiceImpl.java:190)
>  ~[ignite-raft-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.start(TopologyAwareRaftGroupService.java:187)
>  ~[ignite-replicator-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupServiceFactory.startRaftGroupService(TopologyAwareRaftGroupServiceFactory.java:73)
>  ~[ignite-replicator-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.raft.Loza.startRaftGroupService(Loza.java:350) 
> ~[ignite-raft-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.table.distributed.TableManager.lambda$startPartitionAndStartClient$27(TableManager.java:917)
>  ~[ignite-table-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.util.IgniteUtils.inBusyLock(IgniteUtils.java:827) 
> ~[ignite-core-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.table.distributed.TableManager.lambda$startPartitionAndStartClient$28(TableManager.java:913)
>  ~[ignite-table-9.0.127-SNAPSHOT.jar:?]
>     at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
>  ~[?:?]
>     ... 4 more{code}

[jira] [Comment Edited] (IGNITE-21307) Drop the node in case of failure in watch listener

2024-01-19 Thread Mirza Aliev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808605#comment-17808605
 ] 

Mirza Aliev edited comment on IGNITE-21307 at 1/19/24 11:39 AM:


Once this ticket is implemented, we can start the current ticket, and we will 
need to call the corresponding method from the new FailureHandler when watch 
processing fails.


was (Author: maliev):
Once this ticket will be implemented, we can start the current ticket and we 
will need to call corresponding method from the new FailureHandler when watch 
processing is failed.

> Drop the node in case of failure in watch listener
> --
>
> Key: IGNITE-21307
> URL: https://issues.apache.org/jira/browse/IGNITE-21307
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> For linearized watch processing, we have WatchProcessor#notificationFuture, 
> which is rewritten for each revision processing and each meta storage safe 
> time advance. If some watch notification future completes exceptionally, no 
> further updates will be processed, because they need the previous updates to 
> be processed successfully. This is implemented by chaining futures like this:
>  
> {code:java}
> notificationFuture = notificationFuture
> .thenRunAsync(() -> revisionCallback.onSafeTimeAdvanced(time), 
> watchExecutor)
> .whenComplete((ignored, e) -> {
> if (e != null) {
> LOG.error("Error occurred when notifying safe time advanced 
> callback", e);
> }
> }); {code}
> For now, we don't have any failure handling for an exceptionally completed 
> notification future. This leads to endless log records with the same 
> exception's stack trace, caused by meta storage safe time advances:
>  
> {code:java}
> [2024-01-16T21:42:35,515][ERROR][%isot_n_0%JRaft-FSMCaller-Disruptor-metastorage-_stripe_0-0][WatchProcessor]
>  Error occurred when notifying safe time advanced callback
> java.util.concurrent.CompletionException: 
> org.apache.ignite.internal.lang.IgniteInternalException: IGN-CMN-65535 
> TraceId:3877e098-6a1b-4f30-88a8-a4c13411d573 Peers are not ready 
> [groupId=5_part_0]
>     at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
>  ~[?:?]
>     at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
>  ~[?:?]
>     at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1081)
>  ~[?:?]
>     at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
>  ~[?:?]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  ~[?:?]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  ~[?:?]
>     at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: org.apache.ignite.internal.lang.IgniteInternalException: Peers are 
> not ready [groupId=5_part_0]
>     at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.randomNode(RaftGroupServiceImpl.java:725)
>  ~[ignite-raft-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.randomNode(RaftGroupServiceImpl.java:709)
>  ~[ignite-raft-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.refreshLeader(RaftGroupServiceImpl.java:234)
>  ~[ignite-raft-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.start(RaftGroupServiceImpl.java:190)
>  ~[ignite-raft-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupService.start(TopologyAwareRaftGroupService.java:187)
>  ~[ignite-replicator-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.raft.client.TopologyAwareRaftGroupServiceFactory.startRaftGroupService(TopologyAwareRaftGroupServiceFactory.java:73)
>  ~[ignite-replicator-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.raft.Loza.startRaftGroupService(Loza.java:350) 
> ~[ignite-raft-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.table.distributed.TableManager.lambda$startPartitionAndStartClient$27(TableManager.java:917)
>  ~[ignite-table-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.util.IgniteUtils.inBusyLock(IgniteUtils.java:827) 
> ~[ignite-core-9.0.127-SNAPSHOT.jar:?]
>     at 
> org.apache.ignite.internal.table.distributed.TableManager.lambda$startPartitionAndStartClient$28(TableManager.java:913)
>  ~[ignite-table-9.0.127-SNAPSHOT.jar:?]
>     at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
>  ~[?:?]
>     ... 4 more {code}
> So, the node can't operate properly and just pr

[jira] [Comment Edited] (IGNITE-21307) Drop the node in case of failure in watch listener

2024-01-19 Thread Mirza Aliev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808605#comment-17808605
 ] 

Mirza Aliev edited comment on IGNITE-21307 at 1/19/24 11:39 AM:


Once this ticket is implemented, we can start the current one; we will then need 
to call the corresponding method from the new FailureHandler when watch 
processing fails.


was (Author: maliev):
Once [this|https://issues.apache.org/jira/browse/IGNITE-20452] ticket is 
implemented, we can start the current one; we will then need to call the 
corresponding method from the new FailureHandler that will be propagated to all 
components when watch processing fails.

> Drop the node in case of failure in watch listener
> --
>
> Key: IGNITE-21307
> URL: https://issues.apache.org/jira/browse/IGNITE-21307
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>

[jira] [Commented] (IGNITE-21307) Drop the node in case of failure in watch listener

2024-01-19 Thread Mirza Aliev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808605#comment-17808605
 ] 

Mirza Aliev commented on IGNITE-21307:
--

Once [this|https://issues.apache.org/jira/browse/IGNITE-20452] ticket is 
implemented, we can start the current one; we will then need to call the 
corresponding method from the new FailureHandler that will be propagated to all 
components when watch processing fails.

> Drop the node in case of failure in watch listener
> --
>
> Key: IGNITE-21307
> URL: https://issues.apache.org/jira/browse/IGNITE-21307
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> So, the node can't operate properly and just produces tons of logs. Such 
> nodes should be halted.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21314) Sql. Partition pruning. Introduce node pruning for system views.

2024-01-19 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21314:
--
Epic Link: IGNITE-21312

> Sql. Partition pruning. Introduce node pruning for system views.
> 
>
> Key: IGNITE-21314
> URL: https://issues.apache.org/jira/browse/IGNITE-21314
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> In order to support pruning for system views, we need to introduce node 
> pruning, because system views have no notion of partitions and the data of 
> node system views resides on specific nodes.
> Update the query plan to include node pruning metadata that can be used to 
> remove unnecessary nodes from execution. It should do so by using/modifying 
> the partition extractor from https://issues.apache.org/jira/browse/IGNITE-21277 
> to analyze node name columns of system views instead of table colocation keys.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21314) Sql. Partition pruning. Introduce node pruning for system views.

2024-01-19 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-21314:
-

 Summary: Sql. Partition pruning. Introduce node pruning for system 
views.
 Key: IGNITE-21314
 URL: https://issues.apache.org/jira/browse/IGNITE-21314
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 3.0.0-beta2
Reporter: Maksim Zhuravkov


In order to support pruning for system views, we need to introduce node pruning, 
because system views have no notion of partitions and the data of node system 
views resides on specific nodes.

Update the query plan to include node pruning metadata that can be used to remove 
unnecessary nodes from execution. It should do so by using/modifying the 
partition extractor from https://issues.apache.org/jira/browse/IGNITE-21277 to 
analyze node name columns of system views instead of table colocation keys.
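As an illustration of the idea, node pruning for a system view boils down to keeping only the nodes whose name satisfies the filter; a minimal sketch with illustrative names (not Ignite's actual mapping API):

```java
import java.util.List;
import java.util.stream.Collectors;

public class NodePruningSketch {
    /**
     * Node pruning for a system view: if the filter pins the node-name column
     * to a constant, only that node needs to execute the scan; otherwise all
     * owning nodes are kept (a null requiredNodeName means "no usable predicate").
     */
    static List<String> pruneNodes(List<String> owningNodes, String requiredNodeName) {
        if (requiredNodeName == null) {
            return owningNodes; // no usable predicate on the node-name column
        }
        return owningNodes.stream()
                .filter(requiredNodeName::equals)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // SELECT * FROM some_system_view WHERE node_name = 'node_1'
        System.out.println(pruneNodes(List.of("node_0", "node_1", "node_2"), "node_1"));
    }
}
```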





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21281) Sql. Partition pruning. Integrate static partition pruning into MODIFY statements execution pipeline.

2024-01-19 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21281:
--
Description: 
Update PartitionExtractor introduced in 
https://issues.apache.org/jira/browse/IGNITE-21277 to compile pruning metadata 
for DML operations.
In order to do that, we need to extract values of colocation key columns from 
input operators to ModifyNodes:
- For operations that do not accept a scan operation (e.g. plain INSERT), such 
information is included in the Values and Projection operators (the Projection 
is needed to get values for DEFAULT expressions).
- For operations that do accept a scan operation, we should use the pruning 
metadata provided by the scan operator.

*Some examples*

Plain insert:
{code:java}
INSERT INTO t (colo_col1, colo_col2) VALUES (1, 2), (3, 4)
Partition pruning metadata: t = [ (col_c1 = 1, col_c2 = 2) ||  (col_c1 = 3, 
col_c2 = 4)]
{code}

Plain update
{code:java}
UPDATE t SET col = 100 WHERE pk = 42
Partition pruning metadata: t = [ (pk = 42) ] // Uses metadata provided by a 
scan operation
{code}

Delete
{code:java}
DELETE FROM t WHERE pk = 42
Partition pruning metadata: t = [ (pk = 42) ] // Uses metadata provided by a 
scan operation
{code}


After this issue is resolved, partition pruning should work for INSERT, UPDATE, 
MERGE, and DELETE statements.
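The pruning metadata in the INSERT example above can be modeled as a disjunction of colocation-column bindings, one per VALUES row; a minimal sketch (names are illustrative, not the actual PartitionExtractor API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DmlPruningSketch {
    /**
     * Builds pruning metadata for a plain INSERT: each VALUES row yields one
     * conjunction of colocation-column bindings; rows are combined with OR.
     * E.g. VALUES (1, 2), (3, 4) over colocation columns (c1, c2) gives
     * [ {c1=1, c2=2} || {c1=3, c2=4} ].
     */
    static List<Map<String, Object>> metadataForInsert(List<String> colocationColumns,
                                                       List<List<Object>> valuesRows) {
        List<Map<String, Object>> disjunction = new ArrayList<>();
        for (List<Object> row : valuesRows) {
            Map<String, Object> conjunction = new LinkedHashMap<>();
            for (int i = 0; i < colocationColumns.size(); i++) {
                conjunction.put(colocationColumns.get(i), row.get(i));
            }
            disjunction.add(conjunction);
        }
        return disjunction;
    }

    public static void main(String[] args) {
        // INSERT INTO t (colo_col1, colo_col2) VALUES (1, 2), (3, 4)
        System.out.println(metadataForInsert(
                List.of("colo_col1", "colo_col2"),
                List.of(List.of(1, 2), List.of(3, 4))));
    }
}
```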



  was:
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that modify operations won't touch.

1. Traverse the fragment tree to analyze inputs of DML operations:
  - If a Modify operation accepts a Scan operation as an input (UPDATE), we do not 
need to do anything when both operations are colocated; this case is covered by 
https://issues.apache.org/jira/browse/IGNITE-21279. 
  - For operations that accept INSERT/MERGE inputs, we need to consider values of 
colocation key columns of both Values and Projection operators, since SQL's 
VALUES accepts DEFAULT expressions.

2. Use the affinity function and the statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned/modified.

After this issue is resolved, partition pruning should work for INSERT, UPDATE, 
MERGE, and DELETE statements.




> Sql. Partition pruning. Integrate static partition pruning into MODIFY 
> statements execution pipeline.
> -
>
> Key: IGNITE-21281
> URL: https://issues.apache.org/jira/browse/IGNITE-21281
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Update PartitionExtractor introduced in 
> https://issues.apache.org/jira/browse/IGNITE-21277 to compile pruning 
> metadata for DML operations.
> In order to do that, we need to extract values of colocation key columns from 
> input operators to ModifyNodes:
> - For operations that do not accept a scan operation (e.g. plain INSERT), such 
> information is included in the Values and Projection operators (the Projection 
> is needed to get values for DEFAULT expressions).
> - For operations that do accept a scan operation, we should use the pruning 
> metadata provided by the scan operator.
> *Some examples*
> Plain insert:
> {code:java}
> INSERT INTO t (colo_col1, colo_col2) VALUES (1, 2), (3, 4)
> Partition pruning metadata: t = [ (col_c1 = 1, col_c2 = 2) ||  (col_c1 = 3, 
> col_c2 = 4)]
> {code}
> Plain update
> {code:java}
> UPDATE t SET col = 100 WHERE pk = 42
> Partition pruning metadata: t = [ (pk = 42) ] // Uses metadata provided by a 
> scan operation
> {code}
> Delete
> {code:java}
> DELETE FROM t WHERE pk = 42
> Partition pruning metadata: t = [ (pk = 42) ] // Uses metadata provided by a 
> scan operation
> {code}
> After this issue is resolved, partition pruning should work for INSERT, 
> UPDATE, MERGE, and DELETE statements.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20495) Sql. Provide IgniteTableModify with source id

2024-01-19 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-20495:
---
Description: 
Currently, IgniteTableModify doesn't implement the SourceAwareIgniteRel 
interface and, as a result, is not assigned its own source id, although it needs 
to be mapped onto a particular set of nodes with regard to the distribution of 
the table it modifies.

To fully integrate TableModify into mapping process, let's make it implement 
SourceAwareIgniteRel. This id should be used during mapping phase to create 
colocation group properly, and inside execution to acquire assignments. See 
usages of {{UpdatableTableImpl#MODIFY_NODE_SOURCE_ID}} to find all places that 
should be changed. 
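The intended shape of the change can be sketched as follows; the interfaces below are simplified stand-ins, not the actual SourceAwareIgniteRel signature:

```java
/**
 * Illustrative shape of the change: IgniteTableModify gains its own source id
 * via SourceAwareIgniteRel, so the mapper can build a colocation group for it
 * instead of relying on the hard-coded UpdatableTableImpl#MODIFY_NODE_SOURCE_ID.
 */
public class SourceIdSketch {
    /** Simplified stand-in for the real SourceAwareIgniteRel interface. */
    interface SourceAwareIgniteRel {
        long sourceId();
    }

    /** Simplified stand-in for the rel node that modifies a table. */
    static class IgniteTableModify implements SourceAwareIgniteRel {
        private final long sourceId;

        IgniteTableModify(long sourceId) {
            this.sourceId = sourceId;
        }

        @Override
        public long sourceId() {
            return sourceId;
        }
    }

    public static void main(String[] args) {
        SourceAwareIgniteRel modify = new IgniteTableModify(5L);
        System.out.println("sourceId=" + modify.sourceId());
    }
}
```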


  was:
Currently, IgniteTableModify doesn't implement the SourceAwareIgniteRel 
interface and, as a result, is not assigned its own source id, although it needs 
to be mapped onto a particular set of nodes with regard to the distribution of 
the table it modifies.

To fully integrate TableModify into mapping process, let's make it implement 
SourceAwareIgniteRel. This id should be used during mapping phase to create 
colocation group properly, and inside execution to acquire assignments. See 
usages of {{UpdatableTableImpl#MODIFY_NODE_SOURCE_ID}} to find all places that 
should be changed. 


> Sql. Provide IgniteTableModify with source id 
> --
>
> Key: IGNITE-20495
> URL: https://issues.apache.org/jira/browse/IGNITE-20495
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> Currently, IgniteTableModify doesn't implement the SourceAwareIgniteRel 
> interface and, as a result, is not assigned its own source id, although it 
> needs to be mapped onto a particular set of nodes with regard to the 
> distribution of the table it modifies.
> To fully integrate TableModify into mapping process, let's make it implement 
> SourceAwareIgniteRel. This id should be used during mapping phase to create 
> colocation group properly, and inside execution to acquire assignments. See 
> usages of {{UpdatableTableImpl#MODIFY_NODE_SOURCE_ID}} to find all places 
> that should be changed. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21310) Sql. Partition pruning. Introduce partition provider

2024-01-19 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-21310:
---
Description: 
In order to implement partition pruning, we need a component that returns a 
partition id for given colocation key values. We already have 
HashCalculator+ColocationUtils to calculate the colocation hash, and 
TypesAwareHashFunction, which does pretty much the same thing but for SQL.

My proposal is to introduce a component that consumes values similarly to 
HashCalculator but returns a particular partition id with regard to the table's 
column types and the number of its partitions.

As part of this ticket, let's integrate new component into 
{{org.apache.ignite.internal.sql.engine.trait.Partitioned}} destination 
function.
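A minimal sketch of such a partition provider, using Arrays.hashCode as a stand-in for Ignite's colocation hash (HashCalculator/ColocationUtils):

```java
import java.util.Arrays;

public class PartitionProviderSketch {
    /**
     * Sketch of the proposed partition provider: maps colocation key values to
     * a partition id for a table with the given partition count. The real
     * component would reuse Ignite's type-aware colocation hash; Arrays.hashCode
     * is only a stand-in here.
     */
    static int partition(Object[] colocationKeyValues, int partitions) {
        int hash = Arrays.hashCode(colocationKeyValues);
        // floorMod keeps the result in [0, partitions) even for negative hashes.
        return Math.floorMod(hash, partitions);
    }

    public static void main(String[] args) {
        Object[] key = {42, "node"};
        System.out.println("partition = " + partition(key, 25));
    }
}
```

The destination function `org.apache.ignite.internal.sql.engine.trait.Partitioned` would then ask this component for the partition id instead of computing a raw hash itself.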


  was:
In order to implement partition pruning, we need a component that returns a 
partition id for given colocation key values. We already have 
HashCalculator+ColocationUtils to calculate the colocation hash, and 
TypesAwareHashFunction, which does pretty much the same thing but for SQL.

My proposal is to introduce a component that consumes values similarly to 
HashCalculator but returns a particular partition id with regard to the table's 
column types and the number of its partitions.

As part of this ticket, let's integrate new component into 
{{org.apache.ignite.internal.sql.engine.trait.Partitioned}} destination 
function.


> Sql. Partition pruning. Introduce partition provider
> 
>
> Key: IGNITE-21310
> URL: https://issues.apache.org/jira/browse/IGNITE-21310
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> In order to implement partition pruning, we need a component that returns a 
> partition id for given colocation key values. We already have 
> HashCalculator+ColocationUtils to calculate the colocation hash, and 
> TypesAwareHashFunction, which does pretty much the same thing but for SQL.
> My proposal is to introduce a component that consumes values similarly to 
> HashCalculator but returns a particular partition id with regard to the table's 
> column types and the number of its partitions.
> As part of this ticket, let's integrate new component into 
> {{org.apache.ignite.internal.sql.engine.trait.Partitioned}} destination 
> function.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21277) Sql. Partition pruning. Port partitionExtractor from AI2.

2024-01-19 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-21277:
---
Description: 
In order to prune unnecessary partitions, we need to obtain possible "values" of 
colocation key columns from the filter expressions of every scan operator prior 
to statement execution. This can be accomplished by traversing the expression 
tree of a scan's filter and collecting expressions over colocation key columns 
(this data is called partition pruning metadata for simplicity).

1. Implement a component that takes a physical plan, analyses the filter 
expressions of every scan operator, and creates (if possible) an expression that 
includes all colocation columns. (The PartitionExtractor from the patch 
https://github.com/apache/ignite/pull/10928/files can be used as a reference 
implementation.)

Expression types to analyze:
 * AND
 * EQUALS
 * IS_FALSE
 * IS_NOT_DISTINCT_FROM
 * IS_NOT_FALSE
 * IS_NOT_TRUE
 * IS_TRUE
 * NOT
 * OR
 * SEARCH (operation that tests whether a value is included in a certain range)

2. Update QueryPlan to include partition pruning metadata for every scan 
operator (source_id = ).

Basic examples:

{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
primary key, || denotes OR operation 

Statement: 
SELECT * FROM t WHERE pk = 7 OR col1 = 1
Partition metadata: [] // Empty, because col1 is not part of a colocation key.

Statement: 
SELECT * FROM t_colo_key1_colo_key2 WHERE colo_key1= 42
Partition metadata: [] // Empty, because colo_key2 is missing 
{code}

—

*Additional examples - partition pruning is possible*

Dynamic parameters:

{code:java}
SELECT * FROM t WHERE pk = ?1 
Partition pruning metadata: t = [ pk = ?1 ]
{code}

Colocation columns reside inside a nested expression:

{code:java}
SELECT * FROM t WHERE col1 = col2 AND (col2 = 100 AND pk = 2) 
Partition pruning metadata: t = [ pk = 2 ]
{code}

Multiple keys:

{code:java}
SELECT * FROM t WHERE col_c1 = 1 AND col_c2 = 2 
Partition pruning metadata:  t = [ (col_c1 = 1, col_c2 = 2) ]
{code}

Complex expression with multiple keys:

{code:java}
SELECT * FROM t WHERE (col_col1 = 100 and col_col2 = 4) OR (col_col1 = 4 and 
col_col2 = 100)
Partition pruning metadata: t = [ (col_col1 = 100, col_col2 = 4) || (col_col1 = 
4, col_col2 = 100) ]
{code}

Multiple tables, assuming that filter b_id = 42 is pushed into scan b, because 
a_id = b_id:

{code:java}
SELECT * FROM a JOIN b WHERE a_id = b_id AND a_id = 42 
Partition pruning metadata: a= [ a_id=42 ], b=[ b_id=42 ]
{code}

---

*Additional examples - partition pruning is not possible*

Columns named col* are not part of colocation key:

{code:java}
SELECT * FROM t WHERE col1 = 10 
// Filter does not use colocation key columns.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col1 = col2 OR pk = 42 
// We need to scan all partitions to figure out which tuples have ‘col1 = col2’
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = 10 AND col_col2 OR col_col1 = 42
// Although the first expression uses all colocation key columns, the second 
// one uses only some of them.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_c1 = 1 OR col_c2 = 2 
// We need to scan all partitions to figure out which tuples have col_c1 = 1 OR 
col_c2 = 2.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = col_col2 OR col_col2 = 42 
// We need to scan all partitions to figure out which tuples have ‘col_col1 = 
col_col2’
Partition pruning metadata: [] 
{code}



  was:
In order to prune unnecessary partitions, we need to obtain possible "values" of 
colocation key columns from the filter expressions of every scan operator prior 
to statement execution. This can be accomplished by traversing the expression 
tree of a scan's filter and collecting expressions over colocation key columns 
(this data is called partition pruning metadata for simplicity).

1. Implement a component that takes a physical plan, analyses the filter 
expressions of every scan operator, and creates (if possible) an expression that 
includes all colocation columns. (The PartitionExtractor from the patch 
https://github.com/apache/ignite/pull/10928/files can be used as a reference 
implementation.)

Expression types to analyze:
 * AND
 * EQUALS
 * IS_FALSE
 * IS_NOT_DISTINCT_FROM
 * IS_NOT_FALSE
 * IS_NOT_TRUE
 * IS_TRUE
 * NOT
 * OR
 * SEARCH (operation that tests whether a value is included in a certain range)

2. Update QueryPlan to include partition pruning metadata for every scan 
operator (source_id = ).

Basic examples:

{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is e

[jira] [Updated] (IGNITE-21281) Sql. Partition pruning. Integrate static partition pruning into MODIFY statements execution pipeline.

2024-01-19 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-21281:
---
Description: 
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that modify operations won't touch.

1. Traverse the fragment tree to analyze inputs of DML operations:
  - If a Modify operation accepts a Scan operation as an input (UPDATE), we do not 
need to do anything when both operations are colocated; this case is covered by 
https://issues.apache.org/jira/browse/IGNITE-21279. 
  - For operations that accept INSERT/MERGE inputs, we need to consider values of 
colocation key columns of both Values and Projection operators, since SQL's 
VALUES accepts DEFAULT expressions.

2. Use the affinity function and the statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned/modified.

After this issue is resolved, partition pruning should work for INSERT, UPDATE, 
MERGE, and DELETE statements.
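Step 2 can be sketched as follows: pruning metadata plus the execution context (dynamic parameter values) yields the set of partitions to enlist. The floorMod-based hashing below is a stand-in for Ignite's actual affinity function, and the "?n" parameter convention is illustrative:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class EnlistPruningSketch {
    /**
     * Given pruning metadata (a disjunction of colocation column bindings,
     * possibly referring to dynamic parameters like "?1") and the statement's
     * execution context, compute the set of partitions to enlist.
     */
    static Set<Integer> partitionsToEnlist(List<Map<String, Object>> pruningMetadata,
                                           Map<String, Object> dynamicParams,
                                           int partitions) {
        Set<Integer> result = new LinkedHashSet<>();
        for (Map<String, Object> conjunction : pruningMetadata) {
            int hash = 0;
            for (Object value : conjunction.values()) {
                // Resolve dynamic parameters like "?1" from the execution context.
                Object resolved = value instanceof String && ((String) value).startsWith("?")
                        ? dynamicParams.get(value)
                        : value;
                hash = 31 * hash + resolved.hashCode();
            }
            result.add(Math.floorMod(hash, partitions));
        }
        return result;
    }

    public static void main(String[] args) {
        // UPDATE t SET col = 100 WHERE pk = ?1, executed with ?1 = 42:
        // only one partition needs to be enlisted in the transaction.
        System.out.println(partitionsToEnlist(
                List.of(Map.of("pk", "?1")),
                Map.of("?1", 42),
                25));
    }
}
```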



  was:
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that modify operations won't touch.

1. Traverse the fragment tree to analyze inputs of DML operations:
  - If a Modify operation accepts a Scan operation as an input (UPDATE), we do not 
need to do anything when both operations are colocated; this case is covered by 
https://issues.apache.org/jira/browse/IGNITE-21279. 
  - For operations that accept INSERT/MERGE inputs, we need to consider values of 
colocation key columns of both Values and Projection operators, since SQL's 
VALUES accepts DEFAULT expressions.

2. Use the affinity function and the statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned/modified.

After this issue is resolved, partition pruning should work for INSERT, UPDATE, 
MERGE, and DELETE statements.



> Sql. Partition pruning. Integrate static partition pruning into MODIFY 
> statements execution pipeline.
> -
>
> Key: IGNITE-21281
> URL: https://issues.apache.org/jira/browse/IGNITE-21281
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Given partition pruning information for each scan operator of a QueryPlan, we 
> can evaluate a partition pruning predicate against statement's execution 
> context to prune partitions that modify operations won't touch.
> 1. Traverse the fragment tree to analyze inputs of DML operations:
>   - If a Modify operation accepts a Scan operation as an input (UPDATE), we do 
> not need to do anything when both operations are colocated; this case is 
> covered by https://issues.apache.org/jira/browse/IGNITE-21279. 
>   - For operations that accept INSERT/MERGE inputs, we need to consider values 
> of colocation key columns of both Values and Projection operators, since SQL's 
> VALUES accepts DEFAULT expressions.
> 2. Use the affinity function and the statement's execution context to evaluate 
> partition pruning predicates for each scan operator, so enlist is only called 
> for partitions that should be scanned/modified.
> After this issue is resolved, partition pruning should work for INSERT, 
> UPDATE, MERGE, and DELETE statements.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21311) Sql. Partition pruning. Introduce pruning for correlated scans

2024-01-19 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-21311:
---
Description: 
Apart from static pruning done at the preparation step, we may also introduce 
dynamic pruning for correlated scans.

In case of a correlated join, a predicate on the right-hand input is evaluated 
against the context provided by the left-hand input. This context is not known 
until runtime, thus partitions are not trimmed at the preparation step. 
However, this case doesn't differ much from static pruning: the only 
difference is the time when pruning is applied.

In order to support dynamic pruning, we should save pruning metadata in the 
scan node during the planning phase. Later, at runtime, we should evaluate the 
pruning function to derive the partitions satisfying the predicate. Those 
partitions should then be used to do a lookup into the table.
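The runtime side of this can be sketched as follows (all names here are hypothetical, and the hash-based mapping stands in for the real affinity function):

```java
import java.util.Objects;
import java.util.Set;
import java.util.function.Function;

public class CorrelatedPruningSketch {
    /**
     * Pruning function saved in the scan node at planning time for a predicate
     * like "t.pk = $cor.x": the correlate value fully binds the colocation key,
     * so exactly one partition qualifies per left-hand row.
     */
    static Function<Object, Set<Integer>> pruningFunction(int partitions) {
        return corValue -> Set.of(Math.floorMod(Objects.hashCode(corValue), partitions));
    }

    public static void main(String[] args) {
        Function<Object, Set<Integer>> prune = pruningFunction(25);
        // At runtime each left-hand row supplies a correlate value, and the
        // lookup touches a single partition instead of scanning all 25.
        System.out.println(prune.apply(42));  // prints [17]
    }
}
```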


  was:
Apart from static pruning done at the preparation step, we may also introduce 
dynamic pruning for correlated scans.

In case of a correlated join, a predicate on the right-hand input is evaluated 
against the context provided by the left-hand input. This context is not known 
until runtime, thus partitions are not trimmed at the preparation step. 
However, this case doesn't differ much from static pruning: the only 
difference is the time when pruning is applied.

In order to support dynamic pruning, we should save pruning metadata in the 
scan node during the planning phase. Later, at runtime, we should evaluate the 
pruning function to derive the partitions satisfying the predicate. Those 
partitions should then be used to do a lookup into the table.


> Sql. Partition pruning. Introduce pruning for correlated scans
> --
>
> Key: IGNITE-21311
> URL: https://issues.apache.org/jira/browse/IGNITE-21311
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> Apart from static pruning done at the preparation step, we may also 
> introduce dynamic pruning for correlated scans.
> In case of a correlated join, a predicate on the right-hand input is 
> evaluated against the context provided by the left-hand input. This context 
> is not known until runtime, thus partitions are not trimmed at the 
> preparation step. However, this case doesn't differ much from static 
> pruning: the only difference is the time when pruning is applied.
> In order to support dynamic pruning, we should save pruning metadata in the 
> scan node during the planning phase. Later, at runtime, we should evaluate 
> the pruning function to derive the partitions satisfying the predicate. 
> Those partitions should then be used to do a lookup into the table.





[jira] [Updated] (IGNITE-21279) Sql. Partition pruning. Integrate static partition pruning into READ statements execution pipeline

2024-01-19 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-21279:
---
Description: 
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that scan operations won't touch.

1. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned.
2. Support table scans and index scans.

After this issue is resolved, partition pruning should work for SELECT queries.
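Step 1 can be sketched for a query like SELECT * FROM t WHERE pk = ?1 as follows (names are hypothetical; the hash-based mapping stands in for the real affinity function):

```java
import java.util.Objects;
import java.util.Set;

public class SelectPruningSketch {
    /**
     * Pruning metadata for "SELECT * FROM t WHERE pk = ?1" binds the colocation
     * key to dynamic parameter 0. The predicate is evaluated against the
     * statement's execution context (the actual parameter values) right before
     * enlisting partitions.
     */
    static Set<Integer> prunedPartitions(Object[] dynamicParams, int paramIdx, int partitions) {
        Object key = dynamicParams[paramIdx];
        return Set.of(Math.floorMod(Objects.hashCode(key), partitions));
    }

    public static void main(String[] args) {
        // enlist is called only for the single partition owning pk = 42:
        System.out.println(prunedPartitions(new Object[]{42}, 0, 25));  // prints [17]
    }
}
```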

  was:
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that scan operations won't touch.

1. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned.
2. Support table scans, system view scans, and index scans.

After this issue is resolved, partition pruning should work for SELECT queries.


> Sql. Partition pruning. Integrate static partition pruning into READ 
> statements execution pipeline
> --
>
> Key: IGNITE-21279
> URL: https://issues.apache.org/jira/browse/IGNITE-21279
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Given partition pruning information for each scan operator of a QueryPlan, we 
> can evaluate a partition pruning predicate against statement's execution 
> context to prune partitions that scan operations won't touch.
> 1. Use affinity function and statement's execution context to evaluate 
> partition pruning predicates for each scan operator, so enlist is only called 
> for partitions that should be scanned.
> 2. Support table scans and index scans.
> After this issue is resolved, partition pruning should work for SELECT 
> queries.





[jira] [Updated] (IGNITE-20495) Sql. Provide IgniteTableModify with source id

2024-01-19 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-20495:
---
Description: 
Currently, IgniteTableModify doesn't implement the SourceAwareIgniteRel 
interface and, as a result, is not assigned its own source id, although it 
must be mapped onto a particular set of nodes according to the distribution of 
the table it modifies.

To fully integrate TableModify into the mapping process, let's make it 
implement SourceAwareIgniteRel. This id should be used during the mapping 
phase to create the colocation group properly, and during execution to acquire 
assignments. See usages of {{UpdatableTableImpl#MODIFY_NODE_SOURCE_ID}} to 
find all places that should be changed.

  was:
Currently, IgniteTableModify doesn't implement the SourceAwareIgniteRel 
interface and, as a result, is not assigned its own source id, although it 
must be mapped onto a particular set of nodes according to the distribution of 
the table it modifies.

To fully integrate TableModify into the mapping process, let's make it 
implement SourceAwareIgniteRel. This id should be used during the mapping 
phase to create the colocation group properly, and during execution to acquire 
assignments. See usages of {{UpdatableTableImpl#MODIFY_NODE_SOURCE_ID}} to 
find all places that should be changed.


> Sql. Provide IgniteTableModify with source id 
> --
>
> Key: IGNITE-20495
> URL: https://issues.apache.org/jira/browse/IGNITE-20495
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> Currently, IgniteTableModify doesn't implement the SourceAwareIgniteRel 
> interface and, as a result, is not assigned its own source id, although it 
> must be mapped onto a particular set of nodes according to the distribution 
> of the table it modifies.
> To fully integrate TableModify into the mapping process, let's make it 
> implement SourceAwareIgniteRel. This id should be used during the mapping 
> phase to create the colocation group properly, and during execution to 
> acquire assignments. See usages of 
> {{UpdatableTableImpl#MODIFY_NODE_SOURCE_ID}} to find all places that should 
> be changed.





[jira] [Updated] (IGNITE-20495) Sql. Provide IgniteTableModify with source id

2024-01-19 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-20495:
---
Epic Link: IGNITE-21312

> Sql. Provide IgniteTableModify with source id 
> --
>
> Key: IGNITE-20495
> URL: https://issues.apache.org/jira/browse/IGNITE-20495
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> Currently, IgniteTableModify doesn't implement the SourceAwareIgniteRel 
> interface and, as a result, is not assigned its own source id, although it 
> must be mapped onto a particular set of nodes according to the distribution 
> of the table it modifies.
> To fully integrate TableModify into the mapping process, let's make it 
> implement SourceAwareIgniteRel. This id should be used during the mapping 
> phase to create the colocation group properly, and during execution to 
> acquire assignments. See usages of 
> {{UpdatableTableImpl#MODIFY_NODE_SOURCE_ID}} to find all places that should 
> be changed.





[jira] [Commented] (IGNITE-21275) Up to 5x difference in performance between SQL API and key-value API

2024-01-19 Thread Konstantin Orlov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808559#comment-17808559
 ] 

Konstantin Orlov commented on IGNITE-21275:
---

One of the major contributors to the difference in performance is the lack of 
partition pruning (PP) and partition awareness (PA) in SQL. With both PP and 
PA I would expect roughly a 2x performance increase in SQL.

Another suspicious thing (I'm not 100% sure here) is the performance of table 
scan. On my laptop, the average time of "embedded kv get" on a single-node, 
single-partition cluster is 35.249 us, while InternalTable#scan (the method we 
are using in SQL) by primary key index takes ~50 us (I measured the time from 
invoking "request" on the subscription until "onComplete" on the subscriber).

The rest is spread across the entire query execution pipeline (this still 
needs to be investigated, but it will look more like polishing).

> Up to 5x difference in performance between SQL API and key-value API
> 
>
> Key: IGNITE-21275
> URL: https://issues.apache.org/jira/browse/IGNITE-21275
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Ivan Artiukhov
>Priority: Major
>  Labels: ignite-3
> Attachments: 1240-sql-insert.png, 1240-sql-select.png, 
> 1241-jdbc-insert.png, 1241-jdbc-select.png, 1242-kv-get.png, 1242-kv-put.png
>
>
> h1. Build under test
> AI3 rev. ca21384f85e8c779258cb3b21f54b6c30a7071e4 (Jan 16 2024)
> h1. Setup
> A single-node Ignite 3 cluster with default config. 
> h1. Benchmark
> Compare two benchmark runs:
>  * a benchmark which uses KeyValueView to perform single {{put()}} and 
> {{{}get(){}}}: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteClient.java]
>  
>  * a benchmark which performs {{INSERT}} and {{SELECT}} via {{Statement}} 
> objects by using Ignite SQL API: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> h1. Run 1, PUT/INSERT
> Insert N unique entries into a single-node cluster from a single-threaded 
> client. 
> h2. KeyValueView
> N = 25
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=25 -p warmupops=5 -p dataintegrity=true -p 
> measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.37 -s 
> {code}
> !1242-kv-put.png!
> h2. SQL API
> N = 15000
>  
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=15 -p warmupops=15000 -p dataintegrity=true -p 
> measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.47 -s 
> {code}
> !1240-sql-insert.png!
>  
> h1. Run 2, GET/SELECT
> Get N entries inserted on Run 1.
> h2. KeyValueView
> N = 25
>  
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteClient -t -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> operationcount=25 -p recordcount=25 -p warmupops=5 -p 
> dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
> hosts=192.168.1.37 -s{code}
> !1242-kv-get.png!
>  
> h2. SQL API
> N = 15
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -t -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> operationcount=15 -p recordcount=15 -p warmupops=15000 -p 
> dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
> hosts=192.168.1.47 -s {code}
> !1240-sql-select.png!
>  





[jira] [Created] (IGNITE-21313) Incorrect behaviour when invalid zone filter is applied to zone

2024-01-19 Thread Mirza Aliev (Jira)
Mirza Aliev created IGNITE-21313:


 Summary: Incorrect behaviour when invalid zone filter is applied 
to zone 
 Key: IGNITE-21313
 URL: https://issues.apache.org/jira/browse/IGNITE-21313
 Project: Ignite
  Issue Type: Bug
Reporter: Mirza Aliev


Let's consider this code to be run in a test:

 
{code:java}
sql("CREATE ZONE ZONE1 WITH DATA_NODES_FILTER = 'INCORRECT_FILTER'");
sql("CREATE TABLE TEST(ID INT PRIMARY KEY, VAL0 INT) WITH 
PRIMARY_ZONE='ZONE1'"); {code}
The current behaviour is that the test hangs, spamming the log with:

 
{noformat}
[2024-01-19T12:56:25,163][ERROR][%ictdt_n_0%metastorage-watch-executor-2][WatchProcessor]
 Error occurred when notifying safe time advanced callback
 java.util.concurrent.CompletionException: 
com.jayway.jsonpath.PathNotFoundException: No results for path: 
$['INCORRECT_FILTER']
    at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870)
 ~[?:?]
    at 
java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:883)
 [?:?]
    at 
java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2257)
 [?:?]
    at 
org.apache.ignite.internal.metastorage.server.WatchProcessor.notifyWatches(WatchProcessor.java:213)
 ~[main/:?]
    at 
org.apache.ignite.internal.metastorage.server.WatchProcessor.lambda$notifyWatches$3(WatchProcessor.java:169)
 ~[main/:?]
    at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
 [?:?]
    at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: com.jayway.jsonpath.PathNotFoundException: No results for path: 
$['INCORRECT_FILTER']{noformat}
 

We need to fix that and define how the system should react to an incorrect 
filter.
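One possible reaction, sketched below, is to treat a filter that matches nothing as selecting no data nodes, instead of letting the exception escape into the watch executor; whether to additionally reject such a filter at DDL time is a separate design choice. All names here are hypothetical, and the toy evaluator stands in for the real JsonPath-based one:

```java
import java.util.Set;
import java.util.function.Function;

public class ZoneFilterReactionSketch {
    /**
     * Evaluate a data-nodes filter defensively: if the JSONPath matches nothing
     * (jayway json-path throws PathNotFoundException), treat it as "no data
     * nodes" instead of letting the exception escape into the metastorage
     * watch executor.
     */
    static <T> Set<T> safeEvalFilter(Function<String, Set<T>> eval, String filter) {
        try {
            return eval.apply(filter);
        } catch (RuntimeException e) {  // PathNotFoundException in the real code
            return Set.of();
        }
    }

    public static void main(String[] args) {
        // Toy evaluator: only well-formed "$..." paths match anything.
        Function<String, Set<String>> eval = f -> {
            if (!f.startsWith("$")) {
                throw new RuntimeException("No results for path: $['" + f + "']");
            }
            return Set.of("node1", "node2");
        };
        System.out.println(safeEvalFilter(eval, "INCORRECT_FILTER"));  // prints []
    }
}
```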





[jira] [Updated] (IGNITE-21279) Sql. Partition pruning. Integrate static partition pruning into READ statements execution pipeline

2024-01-19 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-21279:
--
Epic Link: IGNITE-21312

> Sql. Partition pruning. Integrate static partition pruning into READ 
> statements execution pipeline
> --
>
> Key: IGNITE-21279
> URL: https://issues.apache.org/jira/browse/IGNITE-21279
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Given partition pruning information for each scan operator of a QueryPlan, we 
> can evaluate a partition pruning predicate against statement's execution 
> context to prune partitions that scan operations won't touch.
> 1. Use affinity function and statement's execution context to evaluate 
> partition pruning predicates for each scan operator, so enlist is only called 
> for partitions that should be scanned.
> 2. Support table scans, system view scans, and index scans.
> After this issue is resolved, partition pruning should work for SELECT 
> queries.





[jira] [Updated] (IGNITE-21281) Sql. Partition pruning. Integrate static partition pruning into MODIFY statements execution pipeline.

2024-01-19 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-21281:
--
Epic Link: IGNITE-21312

> Sql. Partition pruning. Integrate static partition pruning into MODIFY 
> statements execution pipeline.
> -
>
> Key: IGNITE-21281
> URL: https://issues.apache.org/jira/browse/IGNITE-21281
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Given partition pruning information for each scan operator of a QueryPlan, we 
> can evaluate a partition pruning predicate against statement's execution 
> context to prune partitions that modify operations won't touch.
> 1. Traverse fragment tree to analyze inputs of DML operations:
>   - If a Modify operation accepts a Scan operation as an input (UPDATE), we 
> do not need to do anything when both operations are colocated; this case is 
> covered by https://issues.apache.org/jira/browse/IGNITE-21279.
>   - For operations that accept INSERT/MERGE input, we need to consider the 
> values of colocation key columns of both the Values and Projection 
> operators, since SQL's VALUES accepts DEFAULT expressions.
> 2. Use affinity function and statement's execution context to evaluate 
> partition pruning predicates for each scan operator, so enlist is only called 
> for partitions that should be scanned/modified.
> After this issue is resolved, partition pruning should work for INSERT, 
> UPDATE, MERGE, and DELETE statements.





[jira] [Updated] (IGNITE-21311) Sql. Partition pruning. Introduce pruning for correlated scans

2024-01-19 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-21311:
--
Epic Link: IGNITE-21312

> Sql. Partition pruning. Introduce pruning for correlated scans
> --
>
> Key: IGNITE-21311
> URL: https://issues.apache.org/jira/browse/IGNITE-21311
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> Apart from static pruning done at the preparation step, we may also 
> introduce dynamic pruning for correlated scans.
> In case of a correlated join, a predicate on the right-hand input is 
> evaluated against the context provided by the left-hand input. This context 
> is not known until runtime, thus partitions are not trimmed at the 
> preparation step. However, this case doesn't differ much from static 
> pruning: the only difference is the time when pruning is applied.
> In order to support dynamic pruning, we should save pruning metadata in the 
> scan node during the planning phase. Later, at runtime, we should evaluate 
> the pruning function to derive the partitions satisfying the predicate. 
> Those partitions should then be used to do a lookup into the table.





[jira] [Updated] (IGNITE-21310) Sql. Partition pruning. Introduce partition provider

2024-01-19 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-21310:
--
Epic Link: IGNITE-21312

> Sql. Partition pruning. Introduce partition provider
> 
>
> Key: IGNITE-21310
> URL: https://issues.apache.org/jira/browse/IGNITE-21310
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> In order to implement partition pruning, we need a component that returns a 
> partition id for given colocation key values. We already have 
> HashCalculator+ColocationUtils to calculate the colocation hash, and 
> TypesAwareHashFunction, which does pretty much the same thing but for SQL.
> My proposal is to introduce a component that consumes values similarly to 
> HashCalculator but returns a particular partition id with regard to the 
> table's column types and the number of its partitions.
> As part of this ticket, let's integrate the new component into the 
> {{org.apache.ignite.internal.sql.engine.trait.Partitioned}} destination 
> function.
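A minimal sketch of such a component (names are hypothetical, and the simple 31-based hash is a stand-in for the real colocation hash):

```java
import java.util.Objects;

public class PartitionCalculatorSketch {
    private final int partitions;
    private int hash = 1;

    PartitionCalculatorSketch(int partitions) {
        this.partitions = partitions;
    }

    /** Consume colocation key values one by one, HashCalculator-style. */
    PartitionCalculatorSketch append(Object value) {
        hash = 31 * hash + Objects.hashCode(value);
        return this;
    }

    /** Resolve the accumulated colocation hash to a concrete partition id. */
    int partition() {
        return Math.floorMod(hash, partitions);
    }

    public static void main(String[] args) {
        // Two-column colocation key; the result is always a valid partition id.
        int part = new PartitionCalculatorSketch(25).append("abc").append(42).partition();
        System.out.println(part >= 0 && part < 25);  // prints true
    }
}
```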





[jira] [Updated] (IGNITE-21277) Sql. Partition pruning. Port partitionExtractor from AI2.

2024-01-19 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-21277:
--
Epic Link: IGNITE-21312

> Sql. Partition pruning. Port partitionExtractor from AI2.
> -
>
> Key: IGNITE-21277
> URL: https://issues.apache.org/jira/browse/IGNITE-21277
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> In order to prune unnecessary partitions, we need to obtain, prior to 
> statement execution, information about the possible "values" of colocation 
> key columns from the filter expression of every scan operator. This can be 
> accomplished by traversing the expression tree of a scan's filter and 
> collecting expressions with colocation key columns (for simplicity, this 
> data is called partition pruning metadata).
> 1. Implement a component that takes a physical plan, analyses the filter 
> expressions of every scan operator and creates (if possible) an expression 
> that includes all colocated columns. (The PartitionExtractor from the patch 
> https://github.com/apache/ignite/pull/10928/files can be used as a reference 
> implementation.)
> Expression types to analyze:
>  * AND
>  * EQUALS
>  * IS_FALSE
>  * IS_NOT_DISTINCT_FROM
>  * IS_NOT_FALSE
>  * IS_NOT_TRUE
>  * IS_TRUE
>  * NOT
>  * OR
>  * SEARCH (operation that tests whether a value is included in a certain 
> range)
> 2. Update QueryPlan to include partition pruning metadata for every scan 
> operator (source_id = ).
> Basic examples:
> {code:java}
> Statement: 
> SELECT * FROM t WHERE pk = 7 OR pk = 42
> Partition metadata: 
> t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
> primary key, || denotes OR operation 
> Statement: 
> SELECT * FROM t WHERE pk = 7 OR col1 = 1
> Partition metadata: [] // Empty, because col1 is not part of a colocation key.
> Statement: 
> SELECT * FROM t_colo_key1_colo_key2 WHERE colo_key1= 42
> Partition metadata: [] // Empty, because colo_key2 is missing 
> {code}
> —
> *Additional examples - partition pruning is possible*
> Dynamic parameters:
> {code:java}
> SELECT * FROM t WHERE pk = ?1 
> Partition pruning metadata: t = [ pk = ?1 ]
> {code}
> Colocation columns reside inside a nested expression:
> {code:java}
> SELECT * FROM t WHERE col1 = col2 AND (col2 = 100 AND pk = 2) 
> Partition pruning metadata: t = [ pk = 2 ]
> {code}
> Multiple keys:
> {code:java}
> SELECT * FROM t WHERE col_c1 = 1 AND col_c2 = 2 
> Partition pruning metadata:  t = [ (col_c1 = 1, col_c2 = 2) ]
> {code}
> Complex expression with multiple keys:
> {code:java}
> SELECT * FROM t WHERE (col_col1 = 100 and col_col2 = 4) OR (col_col1 = 4 and 
> col_col2 = 100)
> Partition pruning metadata: t = [ (col_col1 = 100, col_col2 = 4) || (col_col1 
> = 4, col_col2 = 100) ]
> {code}
> Multiple tables, assuming that filter b_id = 42 is pushed into scan b, 
> because a_id = b_id:
> {code:java}
> SELECT * FROM a JOIN b WHERE a_id = b_id AND a_id = 42 
> Partition pruning metadata: a= [ a_id=42 ], b=[ b_id=42 ]
> {code}
> ---
> *Additional examples - partition pruning is not possible*
> Columns named col* are not part of colocation key:
> {code:java}
> SELECT * FROM t WHERE col1 = 10 
> // Filter does not use colocation key columns.
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col1 = col2 OR pk = 42 
> // We need to scan all partitions to figure out which tuples have ‘col1 = 
> col2’
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col_col1 = 10 AND col_col2 OR col_col1 = 42
> // Although the first expression uses all colocation key columns, the second 
> // one uses only some of them.
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col_c1 = 1 OR col_c2 = 2 
> // We need to scan all partitions to figure out which tuples have col_c1 = 1 
> OR col_c2 = 2.
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col_col1 = col_col2 OR col_col2 = 42 
> // We need to scan all partitions to figure out which tuples have ‘col_col1 = 
> col_col2’
> Partition pruning metadata: [] 
> {code}
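The extraction logic described above can be sketched with a toy expression tree. All types and names below are hypothetical simplifications (the real extractor works on Calcite RexNode trees and handles many more operators):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class PartitionExtractorSketch {
    // Minimal expression tree: a leaf "column = literal", or AND/OR of children.
    sealed interface Expr permits Eq, And, Or {}
    record Eq(String column, Object value) implements Expr {}
    record And(Expr left, Expr right) implements Expr {}
    record Or(Expr left, Expr right) implements Expr {}

    /**
     * Returns the list of alternative column bindings (pruning metadata), or an
     * empty list when pruning is impossible because some alternative does not
     * bind every colocation key column. Bindings of non-key columns may appear
     * in the result; a consumer would simply ignore them.
     */
    static List<Map<String, Object>> extract(Expr e, Set<String> colocationKeys) {
        List<Map<String, Object>> alternatives = collect(e);
        for (Map<String, Object> alt : alternatives) {
            if (!alt.keySet().containsAll(colocationKeys)) {
                return List.of();  // empty metadata: a full scan is required
            }
        }
        return alternatives;
    }

    private static List<Map<String, Object>> collect(Expr e) {
        if (e instanceof Eq eq) {
            return List.of(Map.of(eq.column(), eq.value()));
        } else if (e instanceof Or or) {
            // OR produces the union of alternatives from both branches.
            List<Map<String, Object>> res = new ArrayList<>(collect(or.left()));
            res.addAll(collect(or.right()));
            return res;
        } else {
            // AND merges the bindings of each pair of alternatives.
            And and = (And) e;
            List<Map<String, Object>> res = new ArrayList<>();
            for (Map<String, Object> l : collect(and.left())) {
                for (Map<String, Object> r : collect(and.right())) {
                    Map<String, Object> merged = new HashMap<>(l);
                    merged.putAll(r);
                    res.add(merged);
                }
            }
            return res;
        }
    }

    public static void main(String[] args) {
        // SELECT * FROM t WHERE pk = 7 OR pk = 42  ->  [pk=7 || pk=42]
        Expr filter = new Or(new Eq("pk", 7), new Eq("pk", 42));
        System.out.println(extract(filter, Set.of("pk")));  // prints [{pk=7}, {pk=42}]
        // SELECT * FROM t WHERE pk = 7 OR col1 = 1 -> [] (col1 is not a colocation key)
        Expr mixed = new Or(new Eq("pk", 7), new Eq("col1", 1));
        System.out.println(extract(mixed, Set.of("pk")));   // prints []
    }
}
```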





[jira] [Created] (IGNITE-21312) Sql. Partition pruning

2024-01-19 Thread Konstantin Orlov (Jira)
Konstantin Orlov created IGNITE-21312:
-

 Summary: Sql. Partition pruning
 Key: IGNITE-21312
 URL: https://issues.apache.org/jira/browse/IGNITE-21312
 Project: Ignite
  Issue Type: Epic
  Components: sql
Reporter: Konstantin Orlov


Partition pruning is a technique that reduces the amount of work required to 
execute a query by skipping up front the parts of the table which definitely 
won't contribute to the result.



