[jira] [Commented] (IGNITE-17053) Incorrect configuration of spring-data example

2022-10-21 Thread Ilya Shishkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17622383#comment-17622383
 ] 

Ilya Shishkov commented on IGNITE-17053:


[~PetrovMikhail], thank you for the review!

> Incorrect configuration of spring-data example
> --
>
> Key: IGNITE-17053
> URL: https://issues.apache.org/jira/browse/IGNITE-17053
> Project: Ignite
>  Issue Type: Bug
>  Components: extensions, springdata
>Reporter: Ilya Shishkov
>Assignee: Ilya Shishkov
>Priority: Minor
>  Labels: ise, newbie
> Attachments: SpringDataExamples.patch
>
>
> After the removal of spring-data-2.2-ext, {{SpringDataExample}} fails to 
> start because of an incorrect path to the XML configuration [1] and an incorrect 
> fully qualified name of the Person class in the XML configuration [2].
> The fix is simple (see [^SpringDataExamples.patch]), but it would also be good to 
> add tests for the examples, similar to the example tests in Ignite.
> *Links:*
>  # 
> [https://github.com/apache/ignite-extensions/blob/master/modules/spring-data-ext/examples/src/main/java/org/apache/ignite/springdata/examples/SpringApplicationConfiguration.java#L51]
>  # 
> [https://github.com/apache/ignite-extensions/blob/master/modules/spring-data-ext/examples/config/example-spring-data.xml#L57]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17357) JMX metric exporter for Ignite 3

2022-10-21 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov updated IGNITE-17357:

Description: 
Metrics should be able to be exported via JMX as the first stage of metrics 
exposure.

Exporter implementation must provide the following behavior:
 * for each MetricSource, a separate MBean must be provided, with one attribute 
per metric
 * each MBean attribute must have the same name as the corresponding metric
 * on an enable/disable event for a MetricSource, the corresponding MBean must be 
registered/unregistered
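The behavior above can be sketched with plain JDK JMX APIs. This is a minimal illustration, not the actual Ignite 3 implementation: the class name, the `org.example` object-name domain, and the metric names are all assumptions.

```java
import java.lang.management.ManagementFactory;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.management.*;

// Sketch: one DynamicMBean per metric source; each metric becomes an
// attribute with the same name as the metric.
public class MetricSourceMBean implements DynamicMBean {
    private final Map<String, Number> metrics = new ConcurrentHashMap<>();

    public void set(String name, Number value) { metrics.put(name, value); }

    @Override public Object getAttribute(String name) throws AttributeNotFoundException {
        Number v = metrics.get(name);
        if (v == null) throw new AttributeNotFoundException(name);
        return v;
    }

    @Override public void setAttribute(Attribute a) { /* metrics are read-only */ }

    @Override public AttributeList getAttributes(String[] names) {
        AttributeList list = new AttributeList();
        for (String n : names) {
            Number v = metrics.get(n);
            if (v != null) list.add(new Attribute(n, v));
        }
        return list;
    }

    @Override public AttributeList setAttributes(AttributeList list) { return new AttributeList(); }

    @Override public Object invoke(String op, Object[] args, String[] sig) { return null; }

    @Override public MBeanInfo getMBeanInfo() {
        MBeanAttributeInfo[] attrs = metrics.keySet().stream()
            .map(n -> new MBeanAttributeInfo(n, "java.lang.Number", n, true, false, false))
            .toArray(MBeanAttributeInfo[]::new);
        return new MBeanInfo(getClass().getName(), "Metric source", attrs, null, null, null);
    }

    // Called when the source is enabled; unregisterMBean() on disable.
    public static ObjectName register(MBeanServer srv, String source, MetricSourceMBean bean) throws Exception {
        ObjectName name = new ObjectName("org.example:type=metrics,source=" + source);
        srv.registerMBean(bean, name);
        return name;
    }
}
```

Registering on enable and unregistering on disable keeps the JMX tree in sync with the set of enabled metric sources.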

  was:Metrics should be able to be exported via JMX. MBeans should be 
dynamically created and destroyed for metric sets that are enabled for metric 
sources (one MBean per metric source).


> JMX metric exporter for Ignite 3
> 
>
> Key: IGNITE-17357
> URL: https://issues.apache.org/jira/browse/IGNITE-17357
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Denis Chudov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> Metrics should be able to be exported via JMX as the first stage of metrics 
> exposure.
> Exporter implementation must provide the following behavior:
>  * for each MetricSource, a separate MBean must be provided, with one attribute 
> per metric
>  * each MBean attribute must have the same name as the corresponding metric
>  * on an enable/disable event for a MetricSource, the corresponding MBean must be 
> registered/unregistered





[jira] [Created] (IGNITE-17953) NPE and closed connection on some malformed SQL requests using third-party SQL clients

2022-10-21 Thread Andrey Khitrin (Jira)
Andrey Khitrin created IGNITE-17953:
---

 Summary: NPE and closed connection on some malformed SQL requests 
using third-party SQL clients
 Key: IGNITE-17953
 URL: https://issues.apache.org/jira/browse/IGNITE-17953
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Andrey Khitrin
 Fix For: 3.0.0-beta1


I tried to run various SQL queries against AI3 using the 
[SqlLine|https://github.com/julianhyde/sqlline] tool and a fresh ignite-client 
JAR downloaded from CI, with both correct and deliberately incorrect SQL queries. 
It looks like some incorrect SQL queries lead to an irrecoverable error on the 
client side. The stack trace is the following:
{code:java}
Oct 21, 2022 4:57:02 PM io.netty.channel.DefaultChannelPipeline 
onUnhandledInboundException
WARNING: An exceptionCaught() event was fired, and it reached at the tail of 
the pipeline. It usually means the last handler in the pipeline did not handle 
the exception.
java.lang.NullPointerException
at org.apache.ignite.lang.ErrorGroup.errorMessage(ErrorGroup.java:193)
at 
org.apache.ignite.lang.IgniteException.(IgniteException.java:190)
at 
org.apache.ignite.internal.client.TcpClientChannel.readError(TcpClientChannel.java:336)
at 
org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:301)
at 
org.apache.ignite.internal.client.TcpClientChannel.onMessage(TcpClientChannel.java:160)
at 
org.apache.ignite.internal.client.io.netty.NettyClientConnection.onMessage(NettyClientConnection.java:94)
at 
org.apache.ignite.internal.client.io.netty.NettyClientMessageHandler.channelRead(NettyClientMessageHandler.java:34)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at 
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327)
at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:299)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at 
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995)
at 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:829)

Oct 21, 2022 4:58:07 PM io.netty.channel.DefaultChannelPipeline 
onUnhandledInboundException
WARNING: An exceptionCaught() event was fired, and it reached at the tail of 
the pipeline. It usually means the last handler in the pipeline did not handle 
the exception.
java.io.IOException: Connection reset by peer
at java.base/sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276)
at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:233)
at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:223)
at 
java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:356)
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:258)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132)
at 
io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:357)
at 

[jira] [Updated] (IGNITE-17951) Sql. Enlist partitions to rw transaction

2022-10-21 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-17951:
--
Summary: Sql. Enlist partitions to rw transaction  (was: Enlist partitions 
to rw transaction)

> Sql. Enlist partitions to rw transaction
> 
>
> Key: IGNITE-17951
> URL: https://issues.apache.org/jira/browse/IGNITE-17951
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> In order to support distributed query execution with RW transactions, we need 
> to prepare the transaction before actual execution.
> It looks like we only need to enlist the involved partitions in the transaction. 
> That could be done right after the query mapping phase.





[jira] [Updated] (IGNITE-17950) Sql. Revise query cancellation flow

2022-10-21 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-17950:
--
Summary: Sql. Revise query cancellation flow  (was: Revise query 
cancellation flow)

> Sql. Revise query cancellation flow
> ---
>
> Key: IGNITE-17950
> URL: https://issues.apache.org/jira/browse/IGNITE-17950
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> To prevent locks from being held indefinitely, we need to make sure that the 
> root fragment of the query is the last to cancel.
> Let's revise the query cancellation flow in order to meet this requirement.





[jira] [Created] (IGNITE-17952) Sql. Make SQL distributed again

2022-10-21 Thread Konstantin Orlov (Jira)
Konstantin Orlov created IGNITE-17952:
-

 Summary: Sql. Make SQL distributed again
 Key: IGNITE-17952
 URL: https://issues.apache.org/jira/browse/IGNITE-17952
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Konstantin Orlov


As a first step in integrating RW transactions, we were forced to abandon 
distributed query execution.

Once IGNITE-17950 and IGNITE-17951 are implemented, we need to switch back 
to distributed execution. For this, let's remove LOCAL_TRAITS_SET and start 
passing the proper txId when performing a scan over a local partition.





[jira] [Updated] (IGNITE-17770) ItIgniteNodeRestartTest.testCfgGap is flaky

2022-10-21 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-17770:
-
Description: 
This test sporadically fails with the following undescriptive error:
{noformat}
java.lang.AssertionError
java.lang.AssertionError: java.util.concurrent.TimeoutException
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:482)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.testCfgGap(ItIgniteNodeRestartTest.java:980)
Caused by: java.util.concurrent.TimeoutException
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:482)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.testCfgGap(ItIgniteNodeRestartTest.java:980)
{noformat}
TC Run: 
[https://ci.ignite.apache.org/buildConfiguration/ignite3_Test_IntegrationTests_ModuleRunner/6772875]

Need to find out the root cause of the issue.

 

It is possible that this error will no longer occur after 
https://issues.apache.org/jira/browse/IGNITE-17302 is completed, so we should 
re-check this test several times after 
https://issues.apache.org/jira/browse/IGNITE-17302 is merged.

  was:
This test sporadically fails with the following undescriptive error:
{noformat}
java.lang.AssertionError
java.lang.AssertionError: java.util.concurrent.TimeoutException
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:482)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.testCfgGap(ItIgniteNodeRestartTest.java:980)
Caused by: java.util.concurrent.TimeoutException
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:482)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.testCfgGap(ItIgniteNodeRestartTest.java:980)
{noformat}
TC Run: 
[https://ci.ignite.apache.org/buildConfiguration/ignite3_Test_IntegrationTests_ModuleRunner/6772875]

Need to find out the root cause of the issue.

 

It is possible, that 


> ItIgniteNodeRestartTest.testCfgGap is flaky
> ---
>
> Key: IGNITE-17770
> URL: https://issues.apache.org/jira/browse/IGNITE-17770
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vyacheslav Koptilin
>Priority: Major
>  Labels: ignite-3
>
> This test sporadically fails with the following undescriptive error:
> {noformat}
> java.lang.AssertionError
> java.lang.AssertionError: java.util.concurrent.TimeoutException
>   at 
> org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:482)
>   at 
> org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.testCfgGap(ItIgniteNodeRestartTest.java:980)
> Caused by: java.util.concurrent.TimeoutException
>   at 
> org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:482)
>   at 
> org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.testCfgGap(ItIgniteNodeRestartTest.java:980)
> {noformat}
> TC Run: 
> [https://ci.ignite.apache.org/buildConfiguration/ignite3_Test_IntegrationTests_ModuleRunner/6772875]
> Need to find out the root cause of the issue.
>  
> It is possible that this error will no longer occur after 
> https://issues.apache.org/jira/browse/IGNITE-17302 is completed, so we should 
> re-check this test several times after 
> https://issues.apache.org/jira/browse/IGNITE-17302 is merged.





[jira] [Updated] (IGNITE-17770) ItIgniteNodeRestartTest.testCfgGap is flaky

2022-10-21 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-17770:
-
Description: 
This test sporadically fails with the following undescriptive error:
{noformat}
java.lang.AssertionError
java.lang.AssertionError: java.util.concurrent.TimeoutException
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:482)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.testCfgGap(ItIgniteNodeRestartTest.java:980)
Caused by: java.util.concurrent.TimeoutException
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:482)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.testCfgGap(ItIgniteNodeRestartTest.java:980)
{noformat}
TC Run: 
[https://ci.ignite.apache.org/buildConfiguration/ignite3_Test_IntegrationTests_ModuleRunner/6772875]

Need to find out the root cause of the issue.

 

It is possible, that 

  was:
This test sporadically fails with the following undescriptive error:

{noformat}
java.lang.AssertionError
java.lang.AssertionError: java.util.concurrent.TimeoutException
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:482)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.testCfgGap(ItIgniteNodeRestartTest.java:980)
Caused by: java.util.concurrent.TimeoutException
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:482)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.testCfgGap(ItIgniteNodeRestartTest.java:980)
{noformat}

TC Run: 
https://ci.ignite.apache.org/buildConfiguration/ignite3_Test_IntegrationTests_ModuleRunner/6772875

Need to find out the root cause of the issue.


> ItIgniteNodeRestartTest.testCfgGap is flaky
> ---
>
> Key: IGNITE-17770
> URL: https://issues.apache.org/jira/browse/IGNITE-17770
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vyacheslav Koptilin
>Priority: Major
>  Labels: ignite-3
>
> This test sporadically fails with the following undescriptive error:
> {noformat}
> java.lang.AssertionError
> java.lang.AssertionError: java.util.concurrent.TimeoutException
>   at 
> org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:482)
>   at 
> org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.testCfgGap(ItIgniteNodeRestartTest.java:980)
> Caused by: java.util.concurrent.TimeoutException
>   at 
> org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:482)
>   at 
> org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.testCfgGap(ItIgniteNodeRestartTest.java:980)
> {noformat}
> TC Run: 
> [https://ci.ignite.apache.org/buildConfiguration/ignite3_Test_IntegrationTests_ModuleRunner/6772875]
> Need to find out the root cause of the issue.
>  
> It is possible, that 





[jira] [Updated] (IGNITE-17816) Sort out and merge Calcite tickets to Ignite 3.0 (step 7)

2022-10-21 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-17816:
---
Fix Version/s: 3.0.0-beta1
   (was: 3.0.0-beta2)

>  Sort out and merge Calcite tickets to Ignite 3.0 (step 7)
> --
>
> Key: IGNITE-17816
> URL: https://issues.apache.org/jira/browse/IGNITE-17816
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Yury Gerzhedovich
>Assignee: Yury Gerzhedovich
>Priority: Major
>  Labels: calcite, ignite-3
> Fix For: 3.0.0-beta1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Let's merge the following tickets to ignite 3.0:
> https://issues.apache.org/jira/browse/IGNITE-16443
> https://issues.apache.org/jira/browse/IGNITE-16151
> https://issues.apache.org/jira/browse/IGNITE-16701
> https://issues.apache.org/jira/browse/IGNITE-16693
> https://issues.apache.org/jira/browse/IGNITE-16053
> After the merge, the {*}calcite3-required{*} label needs to be removed.
> Tickets that can be merged easily should be merged immediately. For hard cases, 
> let's create separate tickets with estimates and link them to IGNITE-15658 or 
> to the blocking ticket.





[jira] [Updated] (IGNITE-17894) Implement RAFT snapshot streaming receiver

2022-10-21 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-17894:
-
Reviewer: Semyon Danilov

> Implement RAFT snapshot streaming receiver
> --
>
> Key: IGNITE-17894
> URL: https://issues.apache.org/jira/browse/IGNITE-17894
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Roman Puchkovskiy
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> See IGNITE-17262





[jira] [Updated] (IGNITE-17950) Revise query cancellation flow

2022-10-21 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-17950:
---
Description: 
To prevent locks from being held indefinitely, we need to make sure that the 
root fragment of the query is the last to cancel.

Let's revise the query cancellation flow in order to meet this requirement.

  was:
To prevent locks from being held indefinitely, we need to make sure that the 
root fragment of the query is the last to cancel.

Let's revise the query cancellation flow in order to meet tis requirement.


> Revise query cancellation flow
> --
>
> Key: IGNITE-17950
> URL: https://issues.apache.org/jira/browse/IGNITE-17950
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> To prevent locks from being held indefinitely, we need to make sure that the 
> root fragment of the query is the last to cancel.
> Let's revise the query cancellation flow in order to meet this requirement.





[jira] [Created] (IGNITE-17951) Enlist partitions to rw transaction

2022-10-21 Thread Konstantin Orlov (Jira)
Konstantin Orlov created IGNITE-17951:
-

 Summary: Enlist partitions to rw transaction
 Key: IGNITE-17951
 URL: https://issues.apache.org/jira/browse/IGNITE-17951
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Konstantin Orlov


In order to support distributed query execution with RW transactions, we need 
to prepare the transaction before actual execution.

It looks like we only need to enlist the involved partitions in the transaction. 
That could be done right after the query mapping phase.
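The enlistment step described above can be sketched as follows. The class and method names are hypothetical, not the actual Ignite 3 API; the idea is simply that the set of partitions produced by the mapping phase is enlisted in the transaction, each partition once, before execution starts.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch: collect the partitions each mapped fragment touches and enlist
// every involved partition in the RW transaction exactly once.
public class PartitionEnlistment {
    public static Set<Integer> enlist(List<List<Integer>> mappedFragments) {
        Set<Integer> enlisted = new LinkedHashSet<>();
        for (List<Integer> fragmentParts : mappedFragments)
            enlisted.addAll(fragmentParts); // tx.enlist(part) would happen here
        return enlisted;
    }
}
```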





[jira] [Updated] (IGNITE-17950) Revise query cancellation flow

2022-10-21 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-17950:
--
Epic Link: IGNITE-15081

> Revise query cancellation flow
> --
>
> Key: IGNITE-17950
> URL: https://issues.apache.org/jira/browse/IGNITE-17950
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> To prevent locks from being held indefinitely, we need to make sure that the 
> root fragment of the query is the last to cancel.
> Let's revise the query cancellation flow in order to meet this requirement.





[jira] [Created] (IGNITE-17950) Revise query cancellation flow

2022-10-21 Thread Konstantin Orlov (Jira)
Konstantin Orlov created IGNITE-17950:
-

 Summary: Revise query cancellation flow
 Key: IGNITE-17950
 URL: https://issues.apache.org/jira/browse/IGNITE-17950
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Konstantin Orlov


To prevent locks from being held indefinitely, we need to make sure that the 
root fragment of the query is the last to cancel.

Let's revise the query cancellation flow in order to meet this requirement.
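The required ordering can be sketched as follows (fragment names and the list-based model are illustrative, not the engine's actual cancellation code): all non-root fragments are cancelled first, and the root fragment, which holds the locks handed out to the user, goes last.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: compute a cancellation order in which the root fragment is
// always last, so its locks are released only after every other
// fragment has finished cancelling.
public class CancellationOrder {
    public static List<String> cancel(List<String> fragments, String root) {
        List<String> order = new ArrayList<>();
        for (String f : fragments)
            if (!f.equals(root))
                order.add(f);   // 1. cancel leaf/intermediate fragments
        order.add(root);        // 2. root fragment is cancelled last
        return order;
    }
}
```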





[jira] [Updated] (IGNITE-17889) Calcite engine. Avoid full index scans in case of null dynamic parameter

2022-10-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-17889:
---
Labels: calcite calcite3-required  (was: calcite calcite2-required 
calcite3-required)

> Calcite engine. Avoid full index scans in case of null dynamic parameter
> 
>
> Key: IGNITE-17889
> URL: https://issues.apache.org/jira/browse/IGNITE-17889
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: calcite, calcite3-required
> Fix For: 2.15
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Currently, queries like:
> {code:java}
> SELECT * FROM tbl WHERE a >= ?
> {code}
> should return no rows if the dynamic parameter is null, but can be downgraded to 
> a full index scan in case the table has an index on column {{a}} (ASCENDING order, 
> NULLS FIRST).
> We should somehow analyse nulls in the search bounds and return an empty rows 
> iterator for regular field conditions (`=`, `<`, `>`, etc.). But nulls should 
> also be processed as-is in the search bounds for conditions like `IS NULL`, `IS 
> NOT NULL`, and `IS NOT DISTINCT FROM` (the last one is not currently supported).
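The bound analysis described above can be sketched like this (the enum and method names are illustrative assumptions, not Calcite or Ignite internals): a null dynamic parameter in a regular comparison can never match any row, so the scan can short-circuit to an empty iterator, while IS NULL-style predicates keep null as a legitimate search key.

```java
// Sketch: decide whether an index search bound with a null dynamic
// parameter makes the whole scan empty.
public class NullBoundCheck {
    public enum Op { EQ, LT, GT, LE, GE, IS_NULL, IS_NOT_DISTINCT_FROM }

    /** @return true if the scan can be replaced with an empty iterator. */
    public static boolean alwaysEmpty(Op op, Object boundValue) {
        if (boundValue != null)
            return false; // non-null bound: scan normally
        switch (op) {
            case IS_NULL:
            case IS_NOT_DISTINCT_FROM:
                return false; // null is a real search key for these predicates
            default:
                return true;  // a = / < / > / <= / >= NULL matches no rows
        }
    }
}
```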





[jira] [Updated] (IGNITE-15609) Calcite. Error WHERE clause must be a condition.

2022-10-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-15609:
---
Labels: calcite  (was: calcite calcite2-required)

> Calcite. Error WHERE clause must be a condition.
> 
>
> Key: IGNITE-15609
> URL: https://issues.apache.org/jira/browse/IGNITE-15609
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Minor
>  Labels: calcite
> Fix For: 2.15, 3.0.0-beta2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {noformat}
> statement ok
> CREATE TABLE item(i_manufact INTEGER)
> query I
> SELECT * FROM item i1 WHERE (SELECT count(*) AS item_cnt FROM item WHERE 
> (i_manufact = i1.i_manufact AND i_manufact=3) OR (i_manufact = i1.i_manufact 
> AND i_manufact=3)) ORDER BY 1 LIMIT 100;
> 
> {noformat}
> {noformat}
> org.apache.calcite.runtime.CalciteContextException: From line 1, column 30 to 
> line 1, column 167: WHERE clause must be a condition
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:506)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:917)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:902)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:5271)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateWhereOrOn(SqlValidatorImpl.java:4350)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateWhereClause(SqlValidatorImpl.java:4334)
> {noformat}
> {noformat}
> /subquery/scalar/test_tpcds_correlated_subquery.test[_ignore]
> {noformat}
> Tested with MySQL; everything works correctly there.





[jira] [Comment Edited] (IGNITE-17369) Snapshot is inconsistent under streamed loading with 'allowOverwrite==false'.

2022-10-21 Thread Vladimir Steshin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17612278#comment-17612278
 ] 

Vladimir Steshin edited comment on IGNITE-17369 at 10/21/22 12:39 PM:
--

A snapshot can begin its work with different states of kin partitions. The snapshot 
process waits for the data streamer futures 
(_GridCacheMvccManager.addDataStreamerFuture()_). The problem is that these 
futures are created separately and concurrently on primary and backup nodes by 
_IsolatedUpdater_. As a result, at the checkpoint some backups might be written 
without the primaries, and vice versa. No updates are accepted during the 
checkpoint, and late streamer updates are not written to the partitions being 
snapshotted. This verification could produce a warning about an active streamer.

Solutions:

1) V1 (PR 10285). 
The PR brings watching of _DataStreamer_ futures into the snapshot process. The futures are 
created before a streamer batch is written on any node. We cannot rely on such a 
future as a final and consistent write of a streamer batch or a certain entry, 
but we do know that the data streamer is in progress at the checkpoint and that it is 
paused. We can invalidate the snapshot at this moment.
In theory the solution is not resilient: one streamer batch could have been 
entirely written before the snapshot and a second one after it. The first batch 
writes the partition on the primaries or the backups; the second writes the rest. 
The snapshot is inconsistent.

2) V2 (PR 10286).
_IsolatedUpdater_ could simply notify the snapshot process, if one exists, that a 
concurrent inconsistent update is in progress. A notification about at least one entry on 
any node would be enough. This should work in practice, but in theory the solution is 
not resilient either: one streamer batch could have been entirely written before the 
snapshot and a second one after it. The first batch writes the partition on the primaries 
or the backups; the second writes the rest. The snapshot is inconsistent.

3) V3 (PR 10284).
We could mark that a _DataStreamer_ is active on the first streamer batch received, 
and unmark it somehow later. If the _DataStreamer_ is marked as active, the snapshot 
process could check this mark. Since the mark is set before the data is written, it 
is set before the data streamer future which the snapshot process waits for. This 
guarantees the mark is visible before the snapshot.

The problem is how to clear such a mark. When the streaming node leaves? A node can 
live forever. Send a special closing request? The streaming node may never close the 
streamer at all, meaning no _close()_ is invoked. Moreover, _DataStreamer_ 
works through _CommunicationSPI_, which doesn't guarantee delivery. We can't be 
sure that the closing request is delivered and the streamer is unmarked on the 
accepting node. Do we need to set this mark with a timeout and re-set it with the 
next data streamer batch? Which timeout? Bound to what? 
While closing requests are in flight, a rebalance can happen and would have to be 
handled too. It looks like we need a discovery closing message; that is much simpler 
and more reliable. 
Also, the data streamer can be cancelled, meaning some batches were written before 
the snapshot and others never will be. 

4) V4 (PR 10299).
We could check whether a streamer is already registered before the snapshot and 
concurrently with it. The problem is that at the snapshot start we would need to 
monitor even the clients and make sure they answered whether a streamer is active. 
We could adjust the snapshot process so that it gathers client responses at the start 
stage. The process already has snapshot verification routines. 

5) V5 (PR 10330)
We could quickly check the partition counters at the start stage. That would cover 
the case when the data streamer failed or was cancelled before the snapshot, but 
equal counters don't guarantee equal data. 

A shared problem is that the streamer could have failed, been cancelled, or been lost 
long ago, before the snapshot. The data is already corrupted, and since the streamer 
is gone, this is not visible.
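The V3 marking idea can be sketched as follows. The names are hypothetical and the real implementation still has to solve the unmark problem discussed above (undelivered close requests, node failures, timeouts): the receiving node sets the mark before writing a batch, and the snapshot start checks it.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the V3 idea: mark an active streamer before the first batch
// is written; snapshot start refuses (or invalidates) while the mark is
// set. How and when to reliably clear the mark is the open question.
public class StreamerMark {
    private final AtomicBoolean streamerActive = new AtomicBoolean();

    public void onStreamerBatch() { streamerActive.set(true); }  // before the write

    public void onStreamerClosed() { streamerActive.set(false); } // delivery not guaranteed!

    /** @return true if it is safe to start a consistent snapshot. */
    public boolean canStartSnapshot() { return !streamerActive.get(); }
}
```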


was (Author: vladsz83):
A snapshot can begin its work with different states of kin partitions. The snapshot 
process waits for the data streamer futures 
(_GridCacheMvccManager.addDataStreamerFuture()_). The problem is that these 
futures are created separately and concurrently on primary and backup nodes by 
_IsolatedUpdater_. As a result, at the checkpoint some backups might be written 
without the primaries, and vice versa. No updates are accepted during the 
checkpoint, and late streamer updates are not written to the partitions being 
snapshotted. This verification could produce a warning about an active streamer.

Solutions:

1) V1 (PR 10285). 
The PR brings watching of _DataStreamer_ futures into the snapshot process. The futures are 
created before a streamer batch is written on any node. We cannot rely on such a 
future as a final and consistent write of a streamer batch or a certain entry, 
but we do know that the data streamer is in progress at the checkpoint and that it is 
paused. We can invalidate the snapshot at this moment.
In theory the solution is not resilient: one streamer batch could have been 
entirely written before the snapshot. 

[jira] [Updated] (IGNITE-17738) Cluster must be able to fix the partition inconsistency on restart/node_join by itself

2022-10-21 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17738:
--
Description: 
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.
See [^PartialHistoricalRebalanceTest.java]

2) In case LWM is the same on primary and backup, rebalance will be skipped for 
such partition.
See [^SkippedRebalanceBecauseOfTheSameLwmTest.java]

Proposals:

1) Cheap fix
A possible solution is for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.

2) Complex fix (when baseline changed)
Rebalance must honor whole counter state (LWM, HWM, gaps).
2.0) Primary HWM must be set to the highest HWM across the copies to avoid 
reapplying of already applied update counters on backups.
2.1) In case when WAL is available all entries between LWM and HWM (including) 
must be rebalanced to other nodes where they are required.
Even from backups to the primary.
Such approach *may require rebalance as a prerequisite to activation finish*. 
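The "cheap fix" counter assignment above can be sketched as a small decision function. This is a hedged illustration under the stated rule only (the names are hypothetical, not Ignite API): on a same-baseline restart, counters are forced so that historical rebalance re-sends the potentially divergent LWM..HWM range.

```java
final class CounterFix {
    /**
     * Counter to force on a node after a same-baseline restart, per the rule above:
     * 2+ backups -> HWM on primary, LWM on backups; single backup -> the inverse.
     */
    static long fixedCounter(boolean primary, int backups, long lwm, long hwm) {
        if (backups >= 2)
            return primary ? hwm : lwm; // primary keeps HWM, backups drop to LWM
        return primary ? lwm : hwm;     // single backup: roles are inverted
    }
}
```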

  was:
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.
See [^PartialHistoricalRebalanceTest.java]

2) In case LWM is the same on primary and backup, rebalance will be skipped for 
such partition.
See [^SkippedRebalanceBecauseOfTheSameLwmTest.java]

Proposals:

1) Cheap fix
A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.

2) Complex fix (when baseline changed)
Rebalance must honor whole counter state (LWM, HWM, gaps).
2.0) Primary HWM must be set to the highest HWM across the copies to avoid 
reapplying of already applied update counters on backups.
2.1) In case when WAL is available all entries between LWM and HWM (including) 
must be rebalanced to other nodes where they are required.
Even from backups to the primary.
Such approach *may require rebalance as a prerequisite to activation finish*. 


> Cluster must be able to fix the partition inconsistency on restart/node_join 
> by itself
> --
>
> Key: IGNITE-17738
> URL: https://issues.apache.org/jira/browse/IGNITE-17738
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Assignee: Maxim Muzafarov
>Priority: Major
>  Labels: iep-31, ise
> Fix For: 2.15
>
> Attachments: PartialHistoricalRebalanceTest.java, 
> SkippedRebalanceBecauseOfTheSameLwmTest.java
>
>
> On cluster restart (because of power-off, OOM or some other problem) it's 
> possible to have PDS inconsistent (primary partitions may contain operations 
> missed on backups as well as counters may contain gaps even on primary).
> 1) Currently, "historical rebalance" is able to sync the data to the highest 
> LWM for every partition. 
> Most likely, a primary will be chosen as a rebalance source, but the data 
> after the LWM will not be rebalanced. So, all updates between LWM and HWM 
> will not be synchronized.
> See [^PartialHistoricalRebalanceTest.java]
> 2) In case LWM is the same on primary and backup, rebalance will be skipped 
> for such partition.
> See [^SkippedRebalanceBecauseOfTheSameLwmTest.java]
> Proposals:
> 1) Cheap fix
> A possible solution is for the case when the cluster failed and restarted 
> (same baseline) is to fix the counters automatically (when cluster 
> composition is equal to the baseline specified 

[jira] [Updated] (IGNITE-17949) Full rebalance must be restricted when it causes any updates loss.

2022-10-21 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17949:
--
Description: 
For example, it's
 - ok to replace _partition's copy_ B with _partition's copy_ A when
A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], hwm=120],
because A contains whole B.
 - NOT ok to replace B with A when
A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], hwm={*}148{*}], 
when update *142* will be lost.

But, currently, full (any) rebalance takes into account only LWM, and B will be 
replaced with A in both cases (where historical rebalance is impossible).
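The containment rule above can be sketched as a conservative check. This is an illustrative sketch only (the type and field names are hypothetical, not Ignite internals): copy A may safely replace copy B only if A holds every update that B could possibly hold, i.e. A's HWM covers B's and A has no gap at or below B's HWM.

```java
import java.util.Set;
import java.util.TreeSet;

class PartitionCounterState {
    final long lwm;           // all updates <= lwm are applied
    final long hwm;           // highest assigned update counter
    final TreeSet<Long> gaps; // counters in (lwm, hwm] that were never applied

    PartitionCounterState(long lwm, long hwm, Set<Long> gaps) {
        this.lwm = lwm;
        this.hwm = hwm;
        this.gaps = new TreeSet<>(gaps);
    }

    /** Conservative check: true if this copy contains every update {@code other} may contain. */
    boolean contains(PartitionCounterState other) {
        if (hwm < other.hwm)
            return false;
        // A gap of this copy at or below the other's HWM is an update the other might hold.
        return gaps.headSet(other.hwm, true).isEmpty();
    }
}
```

With the example above, A[lwm=100, gaps=[142], hwm=200] contains B[lwm=50, hwm=120] but not B[lwm=50, hwm=148], because of the gap at 142.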

  was:
For example, it's
 - ok to replace B with A when
A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], hwm=120],
because A contains whole B.
 - NOT ok to replace B with A when
A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], hwm={*}148{*}], 
when update *142* will be lost.


> Full rebalance must be restricted when it causes any updates loss.
> --
>
> Key: IGNITE-17949
> URL: https://issues.apache.org/jira/browse/IGNITE-17949
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
>  Labels: ise
>
> For example, it's
>  - ok to replace _partition's copy_ B with _partition's copy_ A when
> A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], hwm=120],
> because A contains whole B.
>  - NOT ok to replace B with A when
> A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], 
> hwm={*}148{*}], 
> when update *142* will be lost.
> But, currently, full (any) rebalance takes into account only LWM, and B will 
> be replaced with A in both cases (where historical rebalance is impossible).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17817) Update ItTablePersistenceTest to use Replica layer with new transaction protocol

2022-10-21 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-17817:
---
Description: 
The ItTablePersistenceTest is disabled. Need to update ItTablePersistenceTest 
to use Replica layer with new transaction protocol. Now components (for example 
TxStateTableStorage, ReplicaService) of InternalTableImpl and TxManagerImpl are 
mocked or null.

Also ItTablePersistenceTest cannot be enabled because MvPartitionStorage hasn't 
supported snapshots yet https://issues.apache.org/jira/browse/IGNITE-16644

  was:Update ItTablePersistenceTest to use Replica layer with new transaction 
protocol.


> Update ItTablePersistenceTest to use Replica layer with new transaction 
> protocol
> 
>
> Key: IGNITE-17817
> URL: https://issues.apache.org/jira/browse/IGNITE-17817
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> The ItTablePersistenceTest is disabled. Need to update ItTablePersistenceTest 
> to use Replica layer with new transaction protocol. Now components (for 
> example TxStateTableStorage, ReplicaService) of InternalTableImpl and 
> TxManagerImpl are mocked or null.
> Also ItTablePersistenceTest cannot be enabled because MvPartitionStorage 
> hasn't supported snapshots yet 
> https://issues.apache.org/jira/browse/IGNITE-16644





[jira] [Created] (IGNITE-17949) Full rebalance must be restricted when it causes any updates loss.

2022-10-21 Thread Anton Vinogradov (Jira)
Anton Vinogradov created IGNITE-17949:
-

 Summary: Full rebalance must be restricted when it causes any 
updates loss.
 Key: IGNITE-17949
 URL: https://issues.apache.org/jira/browse/IGNITE-17949
 Project: Ignite
  Issue Type: Sub-task
Reporter: Anton Vinogradov


For example, it's
 - ok to replace B with A when
A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], hwm=120],
because A contains whole B.
 - NOT ok to replace B with A when
A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], hwm={*}148{*}], 
when update *142* will be lost.





[jira] [Updated] (IGNITE-17738) Cluster must be able to fix the partition inconsistency on restart/node_join by itself

2022-10-21 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17738:
--
Description: 
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.
See [^PartialHistoricalRebalanceTest.java]

2) In case LWM is the same on primary and backup, rebalance will be skipped for 
such partition.
See [^SkippedRebalanceBecauseOfTheSameLwmTest.java]

Proposals:

1) Cheap fix
A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.

2) Complex fix (when baseline changed)
Rebalance must honor whole counter state (LWM, HWM, gaps).
2.0) Primary HWM must be set to the highest HWM across the copies to avoid 
reapplying of already applied update counters on backups.
2.1) In case when WAL is available all entries between LWM and HWM (including) 
must be rebalanced to other nodes where they are required.
Even from backups to the primary.
Such approach *may require rebalance as a prerequisite to activation finish*. 

  was:
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.
See [^PartialHistoricalRebalanceTest.java]

2) In case LWM is the same on primary and backup, rebalance will be skipped for 
such partition.
See [^SkippedRebalanceBecauseOfTheSameLwmTest.java]

Proposals:

1) Cheap fix
A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.

2) Complex fix
Rebalance must honor whole counter state (LWM, HWM, gaps).
2.0) Primary HWM must be set to the highest HWM across the copies to avoid 
reapplying of already applied update counters on backups.
2.1) In case when WAL is available all entries between LWM and HWM (including) 
must be rebalanced to other nodes where they are required.
Even from backups to the primary.
2.2) Full rebalance must be restricted when it causes any updates loss.
For example, it's
 - ok to replace B with A when
A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], hwm=120],
because A contains whole B.
 - NOT ok to replace B with A when
A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], hwm={*}148{*}], 
when update *142* will be lost.


> Cluster must be able to fix the partition inconsistency on restart/node_join 
> by itself
> --
>
> Key: IGNITE-17738
> URL: https://issues.apache.org/jira/browse/IGNITE-17738
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Assignee: Maxim Muzafarov
>Priority: Major
>  Labels: iep-31, ise
> Fix For: 2.15
>
> Attachments: PartialHistoricalRebalanceTest.java, 
> SkippedRebalanceBecauseOfTheSameLwmTest.java
>
>
> On cluster restart (because of power-off, OOM or some other problem) it's 
> possible to have PDS inconsistent (primary partitions may contain operations 
> missed on backups as well as counters may contain gaps even on primary).
> 1) Currently, "historical rebalance" is able to sync the data to the highest 
> LWM for every partition. 
> Most likely, a primary will be chosen as a rebalance source, but the data 
> after the LWM will not be rebalanced. So, all updates between LWM and HWM 
> will not be synchronized.
> See [^PartialHistoricalRebalanceTest.java]
> 2) In case LWM is the same on primary and backup, rebalance will be skipped 
> for such partition.
> See 

[jira] [Updated] (IGNITE-17738) Cluster must be able to fix the partition inconsistency on restart/node_join by itself

2022-10-21 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17738:
--
Description: 
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.
See [^PartialHistoricalRebalanceTest.java]

2) In case LWM is the same on primary and backup, rebalance will be skipped for 
such partition.
See [^SkippedRebalanceBecauseOfTheSameLwmTest.java]

Proposals:

1) Cheap fix
A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.

2) Complex fix
Rebalance must honor whole counter state (LWM, HWM, gaps).
2.0) Primary HWM must be set to the highest HWM across the copies to avoid 
reapplying of already applied update counters on backups.
2.1) In case when WAL is available all entries between LWM and HWM (including) 
must be rebalanced to other nodes where they are required.
Even from backups to the primary.
2.2) Full rebalance must be restricted when it causes any updates loss.
For example, it's
 - ok to replace B with A when
A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], hwm=120],
because A contains whole B.
 - NOT ok to replace B with A when
A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], hwm={*}148{*}], 
when update *142* will be lost.

  was:
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.
See [^PartialHistoricalRebalanceTest.java]

Such partition may be rebalanced correctly "later" in case of full rebalance 
will be triggered sometime.

2) In case LWM is the same on primary and backup, rebalance will be skipped for 
such partition.
See [^SkippedRebalanceBecauseOfTheSameLwmTest.java]

Proposals:

1) Cheap fix
A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.

2) Correct fix
Rebalance must honor whole counter state (LWM, HWM, gaps).
2.0) Primary HWM must be set to the highest HWM across the copies to avoid 
reapplying of already applied update counters on backups.
2.1) In case when WAL is available all entries between LWM and HWM (including) 
must be rebalanced to other nodes where they are required.
Even from backups to the primary.
2.2) Full rebalance must be restricted when it causes any updates loss.
For example, it's
 - ok to replace B with A when
A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], hwm=120],
because A contains whole B.
 - NOT ok to replace B with A when
A[lwm=100, gaps=[142], hwm=200] and B[lwm=50, gaps=[76,99,111], hwm={*}148{*}], 
when update *142* will be lost.


> Cluster must be able to fix the partition inconsistency on restart/node_join 
> by itself
> --
>
> Key: IGNITE-17738
> URL: https://issues.apache.org/jira/browse/IGNITE-17738
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Assignee: Maxim Muzafarov
>Priority: Major
>  Labels: iep-31, ise
> Fix For: 2.15
>
> Attachments: PartialHistoricalRebalanceTest.java, 
> SkippedRebalanceBecauseOfTheSameLwmTest.java
>
>
> On cluster restart (because of power-off, OOM or some other problem) it's 
> possible to have PDS inconsistent (primary partitions may contain operations 
> missed on backups as well as counters may contain gaps even on primary).
> 1) Currently, "historical rebalance" is able to sync the data to the 

[jira] [Commented] (IGNITE-17053) Incorrect configuration of spring-data example

2022-10-21 Thread Mikhail Petrov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17622201#comment-17622201
 ] 

Mikhail Petrov commented on IGNITE-17053:
-

LGTM. Merged to the master branch.

[~shishkovilja] Thank you for the contribution.

> Incorrect configuration of spring-data example
> --
>
> Key: IGNITE-17053
> URL: https://issues.apache.org/jira/browse/IGNITE-17053
> Project: Ignite
>  Issue Type: Bug
>  Components: extensions, springdata
>Reporter: Ilya Shishkov
>Assignee: Ilya Shishkov
>Priority: Minor
>  Labels: ise, newbie
> Attachments: SpringDataExamples.patch
>
>
> After removing of spring-data-2.2-ext, {{SpringDataExample}} will fail to 
> start because of incorrect path to XML-configuration [1], and incorrect FQDN 
> of Person class in XML-configuration [2].
> Fix is simple (see, [^SpringDataExamples.patch]) but it would be perfect to 
> add tests for examples similarly to tests of examples in Ignite.
> *Links:*
>  # 
> [https://github.com/apache/ignite-extensions/blob/master/modules/spring-data-ext/examples/src/main/java/org/apache/ignite/springdata/examples/SpringApplicationConfiguration.java#L51]
>  # 
> [https://github.com/apache/ignite-extensions/blob/master/modules/spring-data-ext/examples/config/example-spring-data.xml#L57]





[jira] [Updated] (IGNITE-17931) Blocking code inside SchemaRegistryImpl#schema(int), need to be refactored.

2022-10-21 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-17931:
-
Component/s: (was: data structures)

> Blocking code inside SchemaRegistryImpl#schema(int), need to be refactored.
> ---
>
> Key: IGNITE-17931
> URL: https://issues.apache.org/jira/browse/IGNITE-17931
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha5
>Reporter: Evgeny Stanilovsky
>Priority: Major
>  Labels: ignite-3
>
> Previously, a blocking fut.join() was contained in SchemaManager#tableSchema; 
> after refactoring it moved into SchemaRegistryImpl#schema(int) [1]. It is 
> necessary to remove the blocking approach.
> [1] 
> https://github.com/apache/ignite-3/blob/7b0b3395de97db09896272e03322bba302c0b556/modules/schema/src/main/java/org/apache/ignite/internal/schema/registry/SchemaRegistryImpl.java#L93
>  
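The non-blocking refactoring asked for above can be sketched with standard CompletableFuture composition. This is a minimal illustration only (the class and method names are hypothetical placeholders, not the actual SchemaRegistryImpl API): instead of calling join() on the future, a continuation is chained so no thread blocks.

```java
import java.util.concurrent.CompletableFuture;

class SchemaLookup {
    // Hypothetical async source of schema descriptors; the String payload
    // is a placeholder for a real schema object.
    CompletableFuture<String> schemaAsync(int ver) {
        return CompletableFuture.supplyAsync(() -> "schema-v" + ver);
    }

    // Blocking style to avoid: schemaAsync(ver).join() on a caller thread.
    // Non-blocking style: compose a continuation and return the future.
    CompletableFuture<Integer> schemaLengthAsync(int ver) {
        return schemaAsync(ver).thenApply(String::length);
    }
}
```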





[jira] [Assigned] (IGNITE-17948) Create documentation for AI3 packaging

2022-10-21 Thread Igor Gusev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Gusev reassigned IGNITE-17948:
---

Assignee: Igor Gusev

> Create documentation for AI3 packaging
> --
>
> Key: IGNITE-17948
> URL: https://issues.apache.org/jira/browse/IGNITE-17948
> Project: Ignite
>  Issue Type: Task
>  Components: documentation
>Reporter: Igor Gusev
>Assignee: Igor Gusev
>Priority: Major
>
> We have added packaging for AI3 beta. We need to describe how users can now 
> install the product.





[jira] [Commented] (IGNITE-17129) cli tool doesn't expand tilde in a config path

2022-10-21 Thread Ivan Artukhov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17622142#comment-17622142
 ] 

Ivan Artukhov commented on IGNITE-17129:


Duplicate of https://issues.apache.org/jira/browse/IGNITE-16463

> cli tool doesn't expand tilde in a config path 
> --
>
> Key: IGNITE-17129
> URL: https://issues.apache.org/jira/browse/IGNITE-17129
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha5
>Reporter: Andrey Khitrin
>Priority: Major
>  Labels: ignite-3
>
> Many Linux users use tilde ({{{}~{}}}) as a shortcut for a user's home 
> directory. CLI tool could expand environment variables (like {{{}$HOME{}}}) 
> in config path, but fails to expand tilde.
> An example:
> {code:java}
> $ ./ignite node start 
> --config=~/work/apache/ignite-3/examples/config/ignite-config.json 
> my-first-node
> Starting a new Ignite node...
> Can't start the node. Read logs for details: 
> /home/zloddey/opt/ai3/ignite-log/my-first-node.log
> $ cat /home/zloddey/opt/ai3/ignite-log/my-first-node.log
> Exception in thread "main" class org.apache.ignite.lang.IgniteException: 
> Unable to read user specific configuration.
> at 
> org.apache.ignite.internal.app.IgnitionImpl.start(IgnitionImpl.java:97)
> at org.apache.ignite.IgnitionManager.start(IgnitionManager.java:105)
> at 
> org.apache.ignite.app.IgniteCliRunner.start(IgniteCliRunner.java:109)
> at org.apache.ignite.app.IgniteCliRunner.main(IgniteCliRunner.java:44)
> Caused by: java.nio.file.NoSuchFileException: 
> /home/zloddey/opt/ai3/~/work/apache/ignite-3/examples/config/ignite-config.json
> at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
> at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
> at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
> at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
> at java.base/java.nio.file.Files.newByteChannel(Files.java:371)
> at java.base/java.nio.file.Files.newByteChannel(Files.java:422)
> at java.base/java.nio.file.Files.readAllBytes(Files.java:3206)
> at java.base/java.nio.file.Files.readString(Files.java:3284)
> at java.base/java.nio.file.Files.readString(Files.java:3243)
> at 
> org.apache.ignite.internal.app.IgnitionImpl.start(IgnitionImpl.java:92)
> ... 3 more
>  {code}
> When I use {{/home/zloddey}} or {{$HOME}} instead of tilde, it works fine.
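The missing expansion described above can be sketched in a few lines. This is a hedged illustration only (the class name is hypothetical, not the actual Ignite CLI code): a leading "~" or "~/" in a user-supplied path is replaced with the user's home directory before the path is resolved.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

final class TildeExpander {
    /** Expand a leading "~" to the user's home directory; other paths pass through. */
    static Path expand(String raw) {
        String home = System.getProperty("user.home");
        if (raw.equals("~"))
            return Paths.get(home);
        if (raw.startsWith("~/"))
            return Paths.get(home, raw.substring(2));
        return Paths.get(raw); // "~otheruser" forms are left untouched in this sketch
    }
}
```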





[jira] [Created] (IGNITE-17948) Create documentation for AI3 packaging

2022-10-21 Thread Igor Gusev (Jira)
Igor Gusev created IGNITE-17948:
---

 Summary: Create documentation for AI3 packaging
 Key: IGNITE-17948
 URL: https://issues.apache.org/jira/browse/IGNITE-17948
 Project: Ignite
  Issue Type: Task
  Components: documentation
Reporter: Igor Gusev


We have added packaging for AI3 beta. We need to describe how users can now 
install the product.





[jira] [Updated] (IGNITE-17940) Move rest-http to extensions

2022-10-21 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-17940:

Labels: ise  (was: )

> Move rest-http to extensions
> 
>
> Key: IGNITE-17940
> URL: https://issues.apache.org/jira/browse/IGNITE-17940
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Timonin
>Priority: Minor
>  Labels: ise
>
> Ignite rest-http module should be moved to extensions





[jira] [Updated] (IGNITE-17942) Replace jetty in rest-http module

2022-10-21 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-17942:

Labels: good-first-issue ise newbie  (was: good-first-issue newbie)

> Replace jetty in rest-http module
> -
>
> Key: IGNITE-17942
> URL: https://issues.apache.org/jira/browse/IGNITE-17942
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Timonin
>Priority: Minor
>  Labels: good-first-issue, ise, newbie
>
> jetty 9.x has been outdated since June 2022; versions 10.x and higher require 
> java11.
> Let's replace jetty with netty or another lightweight framework.





[jira] [Commented] (IGNITE-15609) Calcite. Error WHERE clause must be a condition.

2022-10-21 Thread Vladimir Steshin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17622094#comment-17622094
 ] 

Vladimir Steshin commented on IGNITE-15609:
---

[~zstan], yes. Thank you. I made similar PR to AI2.

> Calcite. Error WHERE clause must be a condition.
> 
>
> Key: IGNITE-15609
> URL: https://issues.apache.org/jira/browse/IGNITE-15609
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Minor
>  Labels: calcite, calcite2-required
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {noformat}
> statement ok
> CREATE TABLE item(i_manufact INTEGER)
> query I
> SELECT * FROM item i1 WHERE (SELECT count(*) AS item_cnt FROM item WHERE 
> (i_manufact = i1.i_manufact AND i_manufact=3) OR (i_manufact = i1.i_manufact 
> AND i_manufact=3)) ORDER BY 1 LIMIT 100;
> 
> {noformat}
> {noformat}
> org.apache.calcite.runtime.CalciteContextException: From line 1, column 30 to 
> line 1, column 167: WHERE clause must be a condition
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:506)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:917)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:902)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:5271)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateWhereOrOn(SqlValidatorImpl.java:4350)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateWhereClause(SqlValidatorImpl.java:4334)
> {noformat}
> {noformat}
> /subquery/scalar/test_tpcds_correlated_subquery.test[_ignore]
> {noformat}
> tested with mysql, all ok there.





[jira] [Updated] (IGNITE-17250) Calcite. Make 'min()/max()' use first/last index value.

2022-10-21 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-17250:
--
Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> Calcite. Make 'min()/max()' use first/last index value.
> ---
>
> Key: IGNITE-17250
> URL: https://issues.apache.org/jira/browse/IGNITE-17250
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: ise
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Currently, Calcite's plan for min()/max() is a table scan and an aggregation 
> on the indexed field:
> {code:sql}
> "select min(salary) from Person"
> {code}
> Plan:
> {code:java}
> IgniteReduceHashAggregate(group=[{}], MAX(SALARY)=[MAX($0)])
>   IgniteExchange(distribution=[single])
> IgniteMapHashAggregate(group=[{}], MAX(SALARY)=[MAX($0)])
>   IgniteTableScan(table=[[PUBLIC, PERSON]], requiredColumns=[{3}])
> {code}
> We could pick up the first index record; there is no need to scan.
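The proposed optimization can be sketched with an in-memory stand-in for a sorted index. This is an illustrative sketch only (the class is hypothetical, not Calcite or Ignite code): with an index ordered on the column, min()/max() reduce to reading the first/last index entry instead of scanning every row.

```java
import java.util.TreeMap;

class SalaryIndex {
    private final TreeMap<Integer, Long> idx = new TreeMap<>(); // salary -> row id

    void put(int salary, long rowId) {
        idx.put(salary, rowId);
    }

    int min() { return idx.firstKey(); } // O(log n): first index entry, no table scan
    int max() { return idx.lastKey(); }  // O(log n): last index entry, no table scan
}
```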





[jira] [Updated] (IGNITE-17053) Incorrect configuration of spring-data example

2022-10-21 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-17053:

Fix Version/s: (was: 2.15)

> Incorrect configuration of spring-data example
> --
>
> Key: IGNITE-17053
> URL: https://issues.apache.org/jira/browse/IGNITE-17053
> Project: Ignite
>  Issue Type: Bug
>  Components: extensions, springdata
>Reporter: Ilya Shishkov
>Assignee: Ilya Shishkov
>Priority: Minor
>  Labels: ise, newbie
> Attachments: SpringDataExamples.patch
>
>
> After removing of spring-data-2.2-ext, {{SpringDataExample}} will fail to 
> start because of incorrect path to XML-configuration [1], and incorrect FQDN 
> of Person class in XML-configuration [2].
> Fix is simple (see, [^SpringDataExamples.patch]) but it would be perfect to 
> add tests for examples similarly to tests of examples in Ignite.
> *Links:*
>  # 
> [https://github.com/apache/ignite-extensions/blob/master/modules/spring-data-ext/examples/src/main/java/org/apache/ignite/springdata/examples/SpringApplicationConfiguration.java#L51]
>  # 
> [https://github.com/apache/ignite-extensions/blob/master/modules/spring-data-ext/examples/config/example-spring-data.xml#L57]





[jira] [Updated] (IGNITE-17053) Incorrect configuration of spring-data example

2022-10-21 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-17053:

Fix Version/s: 2.15

> Incorrect configuration of spring-data example
> --
>
> Key: IGNITE-17053
> URL: https://issues.apache.org/jira/browse/IGNITE-17053
> Project: Ignite
>  Issue Type: Bug
>  Components: extensions, springdata
>Reporter: Ilya Shishkov
>Assignee: Ilya Shishkov
>Priority: Minor
>  Labels: ise, newbie
> Fix For: 2.15
>
> Attachments: SpringDataExamples.patch
>
>
> After the removal of spring-data-2.2-ext, {{SpringDataExample}} will fail to
> start because of an incorrect path to the XML configuration [1] and an
> incorrect FQDN of the Person class in the XML configuration [2].
> The fix is simple (see [^SpringDataExamples.patch]), but it would be good to
> add tests for the examples, similar to the tests of the examples in Ignite.
> *Links:*
>  # 
> [https://github.com/apache/ignite-extensions/blob/master/modules/spring-data-ext/examples/src/main/java/org/apache/ignite/springdata/examples/SpringApplicationConfiguration.java#L51]
>  # 
> [https://github.com/apache/ignite-extensions/blob/master/modules/spring-data-ext/examples/config/example-spring-data.xml#L57]





[jira] [Commented] (IGNITE-17946) .NET: PartitionLossTest.TestReadWriteAll is flaky

2022-10-21 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621518#comment-17621518
 ] 

Pavel Tupitsyn commented on IGNITE-17946:
-

Merged to master: 6aeb74588add4339ea531e45140f01c513055421

> .NET: PartitionLossTest.TestReadWriteAll is flaky
> -
>
> Key: IGNITE-17946
> URL: https://issues.apache.org/jira/browse/IGNITE-17946
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET
> Fix For: 2.15
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *PartitionLossTest* is flaky on Windows:
> https://ci.ignite.apache.org/test/-4373544711224269498?currentProjectId=IgniteTests24Java8=%3Cdefault%3E





[jira] [Commented] (IGNITE-17889) Calcite engine. Avoid full index scans in case of null dynamic parameter

2022-10-21 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621505#comment-17621505
 ] 

Ignite TC Bot commented on IGNITE-17889:


{panel:title=Branch: [pull/10338/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10338/head] Base: [master] : New Tests 
(1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Calcite SQL{color} [[tests 
1|https://ci2.ignite.apache.org/viewLog.html?buildId=6844840]]
* {color:#013220}IgniteCalciteTestSuite: 
IndexScanlIntegrationTest.testNullsInCNLJSearchRow - PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=6844483&buildTypeId=IgniteTests24Java8_RunAll]

> Calcite engine. Avoid full index scans in case of null dynamic parameter
> 
>
> Key: IGNITE-17889
> URL: https://issues.apache.org/jira/browse/IGNITE-17889
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: calcite, calcite2-required, calcite3-required
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, queries like:
> {code:java}
> SELECT * FROM tbl WHERE a >= ?
> {code}
> should return no rows if the dynamic parameter is null, but can be downgraded to
> a full index scan in case the table has an index on column {{a}} (ASCENDING
> order, NULLS FIRST).
> We should analyse nulls in search bounds and return an empty row iterator for
> regular field conditions (`=`, `<`, `>`, etc.). But nulls should still be
> processed as-is in search bounds for conditions like `IS NULL`, `IS NOT NULL`
> and `IS NOT DISTINCT FROM` (the last one is not currently supported).
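The reason a null parameter must yield no rows is SQL's three-valued logic: any ordinary comparison with NULL evaluates to UNKNOWN, and WHERE keeps only rows where the predicate is TRUE. A minimal sketch of those semantics outside SQL, with `null` standing in for UNKNOWN (names here are illustrative, not Ignite's engine code):

```java
import java.util.ArrayList;
import java.util.List;

public class NullParamFilter {
    // SQL three-valued logic for "a >= param": comparing anything with NULL
    // yields UNKNOWN, which we model as a null Boolean.
    static Boolean gte(Integer a, Integer param) {
        if (a == null || param == null)
            return null; // UNKNOWN
        return a >= param;
    }

    // Emulates "SELECT a FROM tbl WHERE a >= ?": rows whose predicate is
    // UNKNOWN are filtered out, so a null dynamic parameter must produce an
    // empty result; a full index scan would only do wasted work.
    static List<Integer> filter(List<Integer> column, Integer param) {
        List<Integer> out = new ArrayList<>();
        for (Integer a : column)
            if (Boolean.TRUE.equals(gte(a, param)))
                out.add(a);
        return out;
    }

    public static void main(String[] args) {
        List<Integer> col = List.of(1, 2, 3);
        System.out.println(filter(col, 2));
        System.out.println(filter(col, null));
    }
}
```

Note this covers only the regular comparison operators; `IS NULL`, `IS NOT NULL` and `IS NOT DISTINCT FROM` deliberately treat NULL as a matchable value, which is why the bounds analysis must special-case them.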



--
This message was sent by Atlassian Jira
(v8.20.10#820010)