[jira] [Commented] (IGNITE-17598) Unbind some JDBC/ODBC metadata requests from the ignite-indexing module

2022-10-14 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617899#comment-17617899
 ] 

Ignite TC Bot commented on IGNITE-17598:


{panel:title=Branch: [pull/10291/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10291/head] Base: [master] : New Tests 
(7)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Calcite SQL{color} [[tests 
7|https://ci2.ignite.apache.org/viewLog.html?buildId=6819785]]
* {color:#013220}IgniteCalciteTestSuite: JdbcQueryTest.testParametersMetadata - 
PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: QueryMetadataIntegrationTest.testDml - 
PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: QueryMetadataIntegrationTest.testDdl - 
PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: 
QueryMetadataIntegrationTest.testExplain - PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: 
QueryMetadataIntegrationTest.testMultipleQueries - PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: 
QueryMetadataIntegrationTest.testMultipleConditions - PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: QueryMetadataIntegrationTest.testJoin 
- PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=6819865&buildTypeId=IgniteTests24Java8_RunAll]

> Unbind some JDBC/ODBC metadata requests from the ignite-indexing module
> ---
>
> Key: IGNITE-17598
> URL: https://issues.apache.org/jira/browse/IGNITE-17598
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.14
>Reporter: Aleksey Plekhanov
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: ise
> Fix For: 2.15
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> After IGNITE-15424, some JDBC/ODBC metadata requests still require the indexing 
> module (the {{IgniteH2Indexing.resultMetaData}} and 
> {{IgniteH2Indexing.parameterMetaData}} methods are used). If a query engine is 
> used without the ignite-indexing module, such requests can fail.
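For illustration, here is a minimal, hypothetical example of the kind of client-side call that exercises such a metadata request (parameter metadata over the JDBC thin driver); the connection URL, table, and query below are assumptions for the sketch, not taken from the ticket:

{code}
// Hypothetical example: requesting parameter metadata via the Ignite JDBC thin
// driver. Per the ticket, the server-side handling of this request may fail if
// the query engine runs without the ignite-indexing module.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ParameterMetaData;
import java.sql.PreparedStatement;

public class ParameterMetadataExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             PreparedStatement stmt = conn.prepareStatement("SELECT name FROM person WHERE id = ?")) {
            // The metadata request is resolved on the server side.
            ParameterMetaData meta = stmt.getParameterMetaData();

            System.out.println("Parameter count: " + meta.getParameterCount());
        }
    }
}
{code}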



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17909) Extract Network configuration into corresponding modules

2022-10-14 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-17909:
-
Description: This is part of the work related to removing configuration 
from the ignite-api module. Network configuration schema should be moved to the 
{{ignite-network}} module.

> Extract Network configuration into corresponding modules
> 
>
> Key: IGNITE-17909
> URL: https://issues.apache.org/jira/browse/IGNITE-17909
> Project: Ignite
>  Issue Type: Task
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>
> This is part of the work related to removing configuration from the 
> ignite-api module. Network configuration schema should be moved to the 
> {{ignite-network}} module.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-17909) Extract Network configuration into corresponding modules

2022-10-14 Thread Aleksandr Polovtcev (Jira)
Aleksandr Polovtcev created IGNITE-17909:


 Summary: Extract Network configuration into corresponding modules
 Key: IGNITE-17909
 URL: https://issues.apache.org/jira/browse/IGNITE-17909
 Project: Ignite
  Issue Type: Task
Reporter: Aleksandr Polovtcev
Assignee: Aleksandr Polovtcev






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17908) AssertionError LWM after reserved on data insertion after the cluster restart

2022-10-14 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17908:
--
Labels: ise  (was: )

> AssertionError LWM after reserved on data insertion after the cluster restart
> -
>
> Key: IGNITE-17908
> URL: https://issues.apache.org/jira/browse/IGNITE-17908
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
>  Labels: ise
> Attachments: LwmAfterReservedTest.java
>
>
> After the cluster restart you may see the following assertion:
> {code}
> java.lang.AssertionError: LWM after reserved: lwm=2030, reserved=2010, 
> cntr=Counter [lwm=2030, missed=[], hwm=2030, reserved=2011]
>   at 
> org.apache.ignite.internal.processors.cache.PartitionUpdateCounterTrackingImpl.reserve(PartitionUpdateCounterTrackingImpl.java:270)
>   at 
> org.apache.ignite.internal.processors.cache.PartitionUpdateCounterErrorWrapper.reserve(PartitionUpdateCounterErrorWrapper.java:58)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.getAndIncrementUpdateCounter(IgniteCacheOffheapManagerImpl.java:1594)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.getAndIncrementUpdateCounter(GridCacheOffheapManager.java:2483)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition.getAndIncrementUpdateCounter(GridDhtLocalPartition.java:942)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.calculatePartitionUpdateCounters(IgniteTxLocalAdapter.java:510)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1356)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:726)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1132)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.prepareAsyncLocal(GridNearTxLocal.java:4282)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareColocatedTx(IgniteTxHandler.java:303)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.proceedPrepare(GridNearOptimisticTxPrepareFuture.java:565)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.prepareSingle(GridNearOptimisticTxPrepareFuture.java:392)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.prepare0(GridNearOptimisticTxPrepareFuture.java:335)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.prepareOnTopology(GridNearOptimisticTxPrepareFutureAdapter.java:205)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.prepare(GridNearOptimisticTxPrepareFutureAdapter.java:129)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.prepareNearTxLocal(GridNearTxLocal.java:3946)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.commitNearTxLocalAsync(GridNearTxLocal.java:3994)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.optimisticPutFuture(GridNearTxLocal.java:3051)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync0(GridNearTxLocal.java:729)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync(GridNearTxLocal.java:484)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter$20.op(GridCacheAdapter.java:2511)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter$20.op(GridCacheAdapter.java:2509)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4284)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2509)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2487)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2466)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1332)
>   at 
> 
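For context, a hypothetical, simplified sketch of the invariant that the reserve() call in the trace above enforces and that this report violates; the names below are illustrative and do not mirror the real PartitionUpdateCounterTrackingImpl:

{code}
// Hypothetical sketch: the reserved counter handed out to in-flight updates
// must never fall behind the low-water mark (LWM). After a cluster restart the
// persisted LWM can end up ahead of the restored 'reserved' value, which is
// exactly the "LWM after reserved" assertion shown in the ticket.
class UpdateCounterSketch {
    private long lwm;      // highest update counter applied in order
    private long reserved; // highest counter pre-reserved for in-flight updates

    UpdateCounterSketch(long lwm, long reserved) {
        this.lwm = lwm;
        this.reserved = reserved;
    }

    synchronized long reserve(long delta) {
        long start = reserved;

        reserved += delta;

        if (reserved < lwm)
            throw new AssertionError("LWM after reserved: lwm=" + lwm + ", reserved=" + reserved);

        return start;
    }
}
{code}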

[jira] [Updated] (IGNITE-17908) AssertionError LWM after reserved on data insertion after the cluster restart

2022-10-14 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17908:
--
Description: 
After the cluster restart you may see the following assertion:
{code}
java.lang.AssertionError: LWM after reserved: lwm=2030, reserved=2010, 
cntr=Counter [lwm=2030, missed=[], hwm=2030, reserved=2011]

at 
org.apache.ignite.internal.processors.cache.PartitionUpdateCounterTrackingImpl.reserve(PartitionUpdateCounterTrackingImpl.java:270)
at 
org.apache.ignite.internal.processors.cache.PartitionUpdateCounterErrorWrapper.reserve(PartitionUpdateCounterErrorWrapper.java:58)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.getAndIncrementUpdateCounter(IgniteCacheOffheapManagerImpl.java:1594)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.getAndIncrementUpdateCounter(GridCacheOffheapManager.java:2483)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition.getAndIncrementUpdateCounter(GridDhtLocalPartition.java:942)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.calculatePartitionUpdateCounters(IgniteTxLocalAdapter.java:510)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1356)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:726)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1132)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.prepareAsyncLocal(GridNearTxLocal.java:4282)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareColocatedTx(IgniteTxHandler.java:303)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.proceedPrepare(GridNearOptimisticTxPrepareFuture.java:565)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.prepareSingle(GridNearOptimisticTxPrepareFuture.java:392)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.prepare0(GridNearOptimisticTxPrepareFuture.java:335)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.prepareOnTopology(GridNearOptimisticTxPrepareFutureAdapter.java:205)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.prepare(GridNearOptimisticTxPrepareFutureAdapter.java:129)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.prepareNearTxLocal(GridNearTxLocal.java:3946)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.commitNearTxLocalAsync(GridNearTxLocal.java:3994)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.optimisticPutFuture(GridNearTxLocal.java:3051)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync0(GridNearTxLocal.java:729)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync(GridNearTxLocal.java:484)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$20.op(GridCacheAdapter.java:2511)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$20.op(GridCacheAdapter.java:2509)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4284)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2509)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2487)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2466)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1332)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:867)
at 
org.apache.ignite.util.BrokenRebalanceTest.testCountersOnCrachRecovery(BrokenRebalanceTest.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 

[jira] [Updated] (IGNITE-17908) AssertionError LWM after reserved on data insertion after the cluster restart

2022-10-14 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17908:
--
Description: 
After the cluster restart you may see the following assertion:
{code}
java.lang.AssertionError: LWM after reserved: lwm=2030, reserved=2010, 
cntr=Counter [lwm=2030, missed=[], hwm=2030, reserved=2011]

at 
org.apache.ignite.internal.processors.cache.PartitionUpdateCounterTrackingImpl.reserve(PartitionUpdateCounterTrackingImpl.java:270)
at 
org.apache.ignite.internal.processors.cache.PartitionUpdateCounterErrorWrapper.reserve(PartitionUpdateCounterErrorWrapper.java:58)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.getAndIncrementUpdateCounter(IgniteCacheOffheapManagerImpl.java:1594)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.getAndIncrementUpdateCounter(GridCacheOffheapManager.java:2483)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition.getAndIncrementUpdateCounter(GridDhtLocalPartition.java:942)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.calculatePartitionUpdateCounters(IgniteTxLocalAdapter.java:510)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1356)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:726)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1132)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.prepareAsyncLocal(GridNearTxLocal.java:4282)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareColocatedTx(IgniteTxHandler.java:303)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.proceedPrepare(GridNearOptimisticTxPrepareFuture.java:565)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.prepareSingle(GridNearOptimisticTxPrepareFuture.java:392)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.prepare0(GridNearOptimisticTxPrepareFuture.java:335)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.prepareOnTopology(GridNearOptimisticTxPrepareFutureAdapter.java:205)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.prepare(GridNearOptimisticTxPrepareFutureAdapter.java:129)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.prepareNearTxLocal(GridNearTxLocal.java:3946)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.commitNearTxLocalAsync(GridNearTxLocal.java:3994)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.optimisticPutFuture(GridNearTxLocal.java:3051)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync0(GridNearTxLocal.java:729)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync(GridNearTxLocal.java:484)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$20.op(GridCacheAdapter.java:2511)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$20.op(GridCacheAdapter.java:2509)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4284)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2509)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2487)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2466)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1332)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:867)
at 
org.apache.ignite.util.BrokenRebalanceTest.testCountersOnCrachRecovery(BrokenRebalanceTest.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 

[jira] [Updated] (IGNITE-17908) AssertionError LWM after reserved on data insertion after the cluster restart

2022-10-14 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17908:
--
Attachment: LwmAfterReservedTest.java

> AssertionError LWM after reserved on data insertion after the cluster restart
> -
>
> Key: IGNITE-17908
> URL: https://issues.apache.org/jira/browse/IGNITE-17908
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
> Attachments: LwmAfterReservedTest.java
>
>
> After the cluster restart you may see the following assertion:
> {code}
> java.lang.AssertionError: LWM after reserved: lwm=2030, reserved=2010, 
> cntr=Counter [lwm=2030, missed=[], hwm=2030, reserved=2011]
>   at 
> org.apache.ignite.internal.processors.cache.PartitionUpdateCounterTrackingImpl.reserve(PartitionUpdateCounterTrackingImpl.java:270)
>   at 
> org.apache.ignite.internal.processors.cache.PartitionUpdateCounterErrorWrapper.reserve(PartitionUpdateCounterErrorWrapper.java:58)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.getAndIncrementUpdateCounter(IgniteCacheOffheapManagerImpl.java:1594)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.getAndIncrementUpdateCounter(GridCacheOffheapManager.java:2483)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition.getAndIncrementUpdateCounter(GridDhtLocalPartition.java:942)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.calculatePartitionUpdateCounters(IgniteTxLocalAdapter.java:510)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1356)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:726)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1132)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.prepareAsyncLocal(GridNearTxLocal.java:4282)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareColocatedTx(IgniteTxHandler.java:303)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.proceedPrepare(GridNearOptimisticTxPrepareFuture.java:565)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.prepareSingle(GridNearOptimisticTxPrepareFuture.java:392)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.prepare0(GridNearOptimisticTxPrepareFuture.java:335)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.prepareOnTopology(GridNearOptimisticTxPrepareFutureAdapter.java:205)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.prepare(GridNearOptimisticTxPrepareFutureAdapter.java:129)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.prepareNearTxLocal(GridNearTxLocal.java:3946)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.commitNearTxLocalAsync(GridNearTxLocal.java:3994)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.optimisticPutFuture(GridNearTxLocal.java:3051)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync0(GridNearTxLocal.java:729)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync(GridNearTxLocal.java:484)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter$20.op(GridCacheAdapter.java:2511)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter$20.op(GridCacheAdapter.java:2509)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4284)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2509)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2487)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2466)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1332)
>   at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:867)
>   

[jira] [Created] (IGNITE-17908) AssertionError LWM after reserved on data insertion after the cluster restart

2022-10-14 Thread Anton Vinogradov (Jira)
Anton Vinogradov created IGNITE-17908:
-

 Summary: AssertionError LWM after reserved on data insertion after 
the cluster restart
 Key: IGNITE-17908
 URL: https://issues.apache.org/jira/browse/IGNITE-17908
 Project: Ignite
  Issue Type: Sub-task
Reporter: Anton Vinogradov


After the cluster restart you may see the following assertion:
{code}
java.lang.AssertionError: LWM after reserved: lwm=2030, reserved=2010, 
cntr=Counter [lwm=2030, missed=[], hwm=2030, reserved=2011]

at 
org.apache.ignite.internal.processors.cache.PartitionUpdateCounterTrackingImpl.reserve(PartitionUpdateCounterTrackingImpl.java:270)
at 
org.apache.ignite.internal.processors.cache.PartitionUpdateCounterErrorWrapper.reserve(PartitionUpdateCounterErrorWrapper.java:58)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.getAndIncrementUpdateCounter(IgniteCacheOffheapManagerImpl.java:1594)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.getAndIncrementUpdateCounter(GridCacheOffheapManager.java:2483)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition.getAndIncrementUpdateCounter(GridDhtLocalPartition.java:942)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.calculatePartitionUpdateCounters(IgniteTxLocalAdapter.java:510)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1356)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:726)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1132)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.prepareAsyncLocal(GridNearTxLocal.java:4282)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareColocatedTx(IgniteTxHandler.java:303)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.proceedPrepare(GridNearOptimisticTxPrepareFuture.java:565)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.prepareSingle(GridNearOptimisticTxPrepareFuture.java:392)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.prepare0(GridNearOptimisticTxPrepareFuture.java:335)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.prepareOnTopology(GridNearOptimisticTxPrepareFutureAdapter.java:205)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.prepare(GridNearOptimisticTxPrepareFutureAdapter.java:129)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.prepareNearTxLocal(GridNearTxLocal.java:3946)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.commitNearTxLocalAsync(GridNearTxLocal.java:3994)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.optimisticPutFuture(GridNearTxLocal.java:3051)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync0(GridNearTxLocal.java:729)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync(GridNearTxLocal.java:484)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$20.op(GridCacheAdapter.java:2511)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$20.op(GridCacheAdapter.java:2509)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4284)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2509)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2487)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2466)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1332)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:867)
at 
org.apache.ignite.util.BrokenRebalanceTest.testCountersOnCrachRecovery(BrokenRebalanceTest.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 

[jira] [Updated] (IGNITE-17590) C++ 3.0: Implement RecordBinaryView

2022-10-14 Thread Igor Sapego (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego updated IGNITE-17590:
-
Fix Version/s: 3.0.0-beta1

> C++ 3.0: Implement RecordBinaryView
> ---
>
> Key: IGNITE-17590
> URL: https://issues.apache.org/jira/browse/IGNITE-17590
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms, thin client
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta1
>
>
> Implement record_binary_view with mapping to ignite_tuple:
> * client_table::record_binary_view should return record_binary_view;
> * Design and implement ignite_tuple.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17588) C++ 3.0: Implement SQL API

2022-10-14 Thread Igor Sapego (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego updated IGNITE-17588:
-
Fix Version/s: 3.0.0-beta1

> C++ 3.0: Implement SQL API
> --
>
> Key: IGNITE-17588
> URL: https://issues.apache.org/jira/browse/IGNITE-17588
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms, thin client
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta1
>
>
> We need to implement the SQL API for the C++ client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17880) Topology version must be extended with topology epoch

2022-10-14 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17880:
--
Description: 
Epoch must be presented as a timestamp and version pair.

Epoch timestamp must represent epoch start time.
Epoch major version must be incremented each time the topology version changes 
from 0 to 1 (when the cluster is started or restarted).
Epoch minor version should be changed on every baseline change.

Node's epoch version must be increased or kept as is on node join.
Each decrease must be logged as an error.

Epoch (version and timestamp) must be logged at every topology version change.

This will 
- help to determine how many times the cluster was restarted (and make it 
easier to determine when)
- check that a part of the cluster which was restarted several times as a 
standalone/segmented cluster never joins the rest of the cluster with a 
lower epoch (catching some segmentation and management problems)

  was:
Epoch must be presented as a timestamp and version pair.

Epoch timestamp must represent epoch start time.
Epoch version must be incremented each time when topology version changed from 
0 to 1 (when the cluster started or restarted).
Each epoch version decrease or invariance on join must be logged as a warning.

Epoch (version and timestamp) must be logged at every topology version change.

This will 
- help to determine how many times the cluster was restarted (and make it 
easier to determine when)
- checks that the part of the cluster which was restarted several times as a 
standalone cluster will never join the rest of the cluster with the lower epoch 
(check some segmentation and management problems)


> Topology version must be extended with topology epoch
> -
>
> Key: IGNITE-17880
> URL: https://issues.apache.org/jira/browse/IGNITE-17880
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
>  Labels: ise
>
> Epoch must be presented as a timestamp and version pair.
> Epoch timestamp must represent epoch start time.
> Epoch major version must be incremented each time the topology version 
> changes from 0 to 1 (when the cluster is started or restarted).
> Epoch minor version should be changed on every baseline change.
> Node's epoch version must be increased or kept as is on node join.
> Each decrease must be logged as an error.
> Epoch (version and timestamp) must be logged at every topology version change.
> This will 
> - help to determine how many times the cluster was restarted (and make it 
> easier to determine when)
> - check that a part of the cluster which was restarted several times as a 
> standalone/segmented cluster never joins the rest of the cluster with a 
> lower epoch (catching some segmentation and management problems)
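For illustration, a hypothetical sketch of the epoch described above as a (start timestamp, major, minor) tuple with the join-time check; all names are illustrative and not taken from Ignite's code base:

{code}
// Hypothetical sketch: major grows when the topology version goes 0 -> 1
// (cluster start/restart), minor grows on every baseline change, and a node's
// epoch must never decrease as a result of joining a cluster.
class TopologyEpochSketch implements Comparable<TopologyEpochSketch> {
    final long startTs; // epoch start time
    final int major;    // incremented on every cluster (re)start
    final int minor;    // incremented on every baseline change

    TopologyEpochSketch(long startTs, int major, int minor) {
        this.startTs = startTs;
        this.major = major;
        this.minor = minor;
    }

    @Override public int compareTo(TopologyEpochSketch other) {
        int cmp = Integer.compare(major, other.major);

        return cmp != 0 ? cmp : Integer.compare(minor, other.minor);
    }

    /** A joining node with a higher epoch than the cluster would have to decrease it, which is an error. */
    static void checkJoin(TopologyEpochSketch cluster, TopologyEpochSketch joining) {
        if (joining.compareTo(cluster) > 0)
            throw new IllegalStateException("Node epoch " + joining.major + "." + joining.minor
                + " is higher than cluster epoch " + cluster.major + "." + cluster.minor);
    }
}
{code}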



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17738) Cluster must be able to fix the inconsistency on restart by itself

2022-10-14 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17738:
--
Description: 
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.
See  [^PartialHistoricalRebalanceTest.java] 

Such a partition may be rebalanced correctly "later" if a full rebalance is 
eventually triggered.

2) In case LWM is the same on primary and backup, rebalance will never happen 
for such partition.
See  [^SkippedRebalanceBecauseOfTheSameLwmTest.java] 

A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.

  was:
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.

see  [^PartialHistoricalRebalanceTest.java] 

Such partition may be rebalanced correctly "later" in case of full rebalance 
will be triggered sometime.

2) In case LWM is the same on primary and backup, rebalance will never happen 
for such partition.

A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.


> Cluster must be able to fix the inconsistency on restart by itself
> --
>
> Key: IGNITE-17738
> URL: https://issues.apache.org/jira/browse/IGNITE-17738
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
>  Labels: iep-31, ise
> Attachments: PartialHistoricalRebalanceTest.java, 
> SkippedRebalanceBecauseOfTheSameLwmTest.java
>
>
> On cluster restart (because of power-off, OOM or some other problem) it's 
> possible to have PDS inconsistent (primary partitions may contain operations 
> missed on backups as well as counters may contain gaps even on primary).
> 1) Currently, "historical rebalance" is able to sync the data to the highest 
> LWM for every partition. 
> Most likely, a primary will be chosen as a rebalance source, but the data 
> after the LWM will not be rebalanced. So, all updates between LWM and HWM 
> will not be synchronized.
> See  [^PartialHistoricalRebalanceTest.java] 
> Such a partition may be rebalanced correctly "later" if a full rebalance is 
> eventually triggered.
> 2) In case LWM is the same on primary and backup, rebalance will never happen 
> for such partition.
> See  [^SkippedRebalanceBecauseOfTheSameLwmTest.java] 
> A possible solution for the case when the cluster failed and restarted (same 
> baseline) is to fix the counters automatically (when cluster composition is 
> equal to the baseline specified before the crash).
> Counters should be set as
>  - HWM at primary and as LWM at backups for caches with 2+ backups,
>  - LWM at primary and as HWM at backups for caches with a single backup.
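As an illustration of the counter rule proposed above, a hypothetical sketch; the class, method name, and signature are illustrative only and not taken from Ignite's code base:

{code}
// Hypothetical sketch of the proposed counter finalization on restart:
// with 2+ backups use HWM on the primary and LWM on backups; with a single
// backup use LWM on the primary and HWM on the backup.
class RestartCounterRule {
    static long counterOnRestart(boolean primary, int backups, long lwm, long hwm) {
        if (backups >= 2)
            return primary ? hwm : lwm;

        return primary ? lwm : hwm;
    }
}
{code}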



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17738) Cluster must be able to fix the inconsistency on restart by itself

2022-10-14 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17738:
--
Description: 
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.
See  [^PartialHistoricalRebalanceTest.java] 

Such a partition may be rebalanced correctly "later" if a full rebalance is 
eventually triggered.

2) In case LWM is the same on primary and backup, rebalance will be skipped for 
such partition.
See  [^SkippedRebalanceBecauseOfTheSameLwmTest.java] 

A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.

  was:
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.
See  [^PartialHistoricalRebalanceTest.java] 

Such partition may be rebalanced correctly "later" in case of full rebalance 
will be triggered sometime.

2) In case LWM is the same on primary and backup, rebalance will never happen 
for such partition.
See  [^SkippedRebalanceBecauseOfTheSameLwmTest.java] 

A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.


> Cluster must be able to fix the inconsistency on restart by itself
> --
>
> Key: IGNITE-17738
> URL: https://issues.apache.org/jira/browse/IGNITE-17738
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
>  Labels: iep-31, ise
> Attachments: PartialHistoricalRebalanceTest.java, 
> SkippedRebalanceBecauseOfTheSameLwmTest.java
>
>
> On cluster restart (because of power-off, OOM or some other problem) it's 
> possible to have PDS inconsistent (primary partitions may contain operations 
> missed on backups as well as counters may contain gaps even on primary).
> 1) Currently, "historical rebalance" is able to sync the data to the highest 
> LWM for every partition. 
> Most likely, a primary will be chosen as a rebalance source, but the data 
> after the LWM will not be rebalanced. So, all updates between LWM and HWM 
> will not be synchronized.
> See  [^PartialHistoricalRebalanceTest.java] 
> Such a partition may be rebalanced correctly "later" if a full rebalance is 
> eventually triggered.
> 2) In case LWM is the same on primary and backup, rebalance will be skipped 
> for such partition.
> See  [^SkippedRebalanceBecauseOfTheSameLwmTest.java] 
> A possible solution for the case when the cluster failed and restarted (same 
> baseline) is to fix the counters automatically (when cluster composition is 
> equal to the baseline specified before the crash).
> Counters should be set as
>  - HWM at primary and as LWM at backups for caches with 2+ backups,
>  - LWM at primary and as HWM at backups for caches with a single backup.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17738) Cluster must be able to fix the inconsistency on restart by itself

2022-10-14 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17738:
--
Attachment: SkippedRebalanceBecauseOfTheSameLwmTest.java

> Cluster must be able to fix the inconsistency on restart by itself
> --
>
> Key: IGNITE-17738
> URL: https://issues.apache.org/jira/browse/IGNITE-17738
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
>  Labels: iep-31, ise
> Attachments: PartialHistoricalRebalanceTest.java, 
> SkippedRebalanceBecauseOfTheSameLwmTest.java
>
>
> On cluster restart (because of power-off, OOM or some other problem) it's 
> possible to have PDS inconsistent (primary partitions may contain operations 
> missed on backups as well as counters may contain gaps even on primary).
> 1) Currently, "historical rebalance" is able to sync the data to the highest 
> LWM for every partition. 
> Most likely, a primary will be chosen as a rebalance source, but the data 
> after the LWM will not be rebalanced. So, all updates between LWM and HWM 
> will not be synchronized.
> see  [^PartialHistoricalRebalanceTest.java] 
> Such a partition may be rebalanced correctly "later" if a full rebalance is 
> eventually triggered.
> 2) In case LWM is the same on primary and backup, rebalance will never happen 
> for such partition.
> A possible solution for the case when the cluster failed and restarted (same 
> baseline) is to fix the counters automatically (when cluster composition is 
> equal to the baseline specified before the crash).
> Counters should be set as
>  - HWM at primary and as LWM at backups for caches with 2+ backups,
>  - LWM at primary and as HWM at backups for caches with a single backup.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17907) Extract Table and Storage configuration into corresponding modules

2022-10-14 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-17907:
-
Description: 
This is part of the work related to removing configuration from the ignite-api 
module. The following configurations should be moved:

* Table configuration to {{ignite-schema}}.
* Storage configuration to {{ignite-storage-api}}. However, this may currently 
be impossible, because Table configuration depends on the storage 
configuration and {{ignite-storage-api}} already depends on the {{ignite-schema}} 
module.

> Extract Table and Storage configuration into corresponding modules
> --
>
> Key: IGNITE-17907
> URL: https://issues.apache.org/jira/browse/IGNITE-17907
> Project: Ignite
>  Issue Type: Task
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>
> This is part of the work related to removing configuration from the 
> ignite-api module. The following configurations should be moved:
> * Table configuration to {{ignite-schema}}.
> * Storage configuration to {{ignite-storage-api}}. However, this may 
> currently be impossible, because Table configuration depends on the storage 
> configuration and {{ignite-storage-api}} already depends on the 
> {{ignite-schema}} module.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-17907) Extract Table and Storage configuration into corresponding modules

2022-10-14 Thread Aleksandr Polovtcev (Jira)
Aleksandr Polovtcev created IGNITE-17907:


 Summary: Extract Table and Storage configuration into 
corresponding modules
 Key: IGNITE-17907
 URL: https://issues.apache.org/jira/browse/IGNITE-17907
 Project: Ignite
  Issue Type: Task
Reporter: Aleksandr Polovtcev
Assignee: Aleksandr Polovtcev






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17738) Cluster must be able to fix the inconsistency on restart by itself

2022-10-14 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17738:
--
Description: 
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.

see  [^PartialHistoricalRebalanceTest.java] 

Such a partition may be rebalanced correctly "later" if a full rebalance is 
eventually triggered.

2) In case LWM is the same on primary and backup, rebalance will never happen 
for such partition.

A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.

  was:
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.
Such partition may be rebalanced correctly "later" in case of full rebalance 
will be triggered sometime.

2) In case LWM is the same on primary and backup, rebalance will never happen 
for such partition.

A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.


> Cluster must be able to fix the inconsistency on restart by itself
> --
>
> Key: IGNITE-17738
> URL: https://issues.apache.org/jira/browse/IGNITE-17738
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
>  Labels: iep-31, ise
> Attachments: PartialHistoricalRebalanceTest.java
>
>
> On cluster restart (because of power-off, OOM or some other problem) it's 
> possible to have PDS inconsistent (primary partitions may contain operations 
> missed on backups as well as counters may contain gaps even on primary).
> 1) Currently, "historical rebalance" is able to sync the data to the highest 
> LWM for every partition. 
> Most likely, a primary will be chosen as a rebalance source, but the data 
> after the LWM will not be rebalanced. So, all updates between LWM and HWM 
> will not be synchronized.
> see  [^PartialHistoricalRebalanceTest.java] 
> Such a partition may be rebalanced correctly "later" if a full rebalance is 
> eventually triggered.
> 2) In case LWM is the same on primary and backup, rebalance will never happen 
> for such partition.
> A possible solution for the case when the cluster failed and restarted (same 
> baseline) is to fix the counters automatically (when cluster composition is 
> equal to the baseline specified before the crash).
> Counters should be set as
>  - HWM at primary and as LWM at backups for caches with 2+ backups,
>  - LWM at primary and as HWM at backups for caches with a single backup.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17738) Cluster must be able to fix the inconsistency on restart by itself

2022-10-14 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17738:
--
Attachment: PartialHistoricalRebalanceTest.java

> Cluster must be able to fix the inconsistency on restart by itself
> --
>
> Key: IGNITE-17738
> URL: https://issues.apache.org/jira/browse/IGNITE-17738
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
>  Labels: iep-31, ise
> Attachments: PartialHistoricalRebalanceTest.java
>
>
> On cluster restart (because of power-off, OOM or some other problem) it's 
> possible to have PDS inconsistent (primary partitions may contain operations 
> missed on backups as well as counters may contain gaps even on primary).
> 1) Currently, "historical rebalance" is able to sync the data to the highest 
> LWM for every partition. 
> Most likely, a primary will be chosen as a rebalance source, but the data 
> after the LWM will not be rebalanced. So, all updates between LWM and HWM 
> will not be synchronized.
> Such a partition may be rebalanced correctly "later" if a full rebalance is 
> eventually triggered.
> 2) In case LWM is the same on primary and backup, rebalance will never happen 
> for such partition.
> A possible solution for the case when the cluster failed and restarted (same 
> baseline) is to fix the counters automatically (when cluster composition is 
> equal to the baseline specified before the crash).
> Counters should be set as
>  - HWM at primary and as LWM at backups for caches with 2+ backups,
>  - LWM at primary and as HWM at backups for caches with a single backup.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17738) Cluster must be able to fix the inconsistency on restart by itself

2022-10-14 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17738:
--
Description: 
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.
Such a partition may be rebalanced correctly "later" if a full rebalance is 
eventually triggered.

2) In case LWM is the same on primary and backup, rebalance will never happen 
for such partition.

A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.

  was:
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.
Such partition may be rebalanced correctly "later" in case full rebalance will 
be triggered sometime.

2) In case LWM is the same on primary and backup, rebalance will never happen 
for such partition.

A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.


> Cluster must be able to fix the inconsistency on restart by itself
> --
>
> Key: IGNITE-17738
> URL: https://issues.apache.org/jira/browse/IGNITE-17738
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
>  Labels: iep-31, ise
>
> On cluster restart (because of power-off, OOM or some other problem) it's 
> possible to have PDS inconsistent (primary partitions may contain operations 
> missed on backups as well as counters may contain gaps even on primary).
> 1) Currently, "historical rebalance" is able to sync the data to the highest 
> LWM for every partition. 
> Most likely, a primary will be chosen as a rebalance source, but the data 
> after the LWM will not be rebalanced. So, all updates between LWM and HWM 
> will not be synchronized.
> Such a partition may be rebalanced correctly "later" if a full rebalance is 
> eventually triggered.
> 2) In case LWM is the same on primary and backup, rebalance will never happen 
> for such partition.
> A possible solution for the case when the cluster failed and restarted (same 
> baseline) is to fix the counters automatically (when cluster composition is 
> equal to the baseline specified before the crash).
> Counters should be set as
>  - HWM at primary and as LWM at backups for caches with 2+ backups,
>  - LWM at primary and as HWM at backups for caches with a single backup.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17738) Cluster must be able to fix the inconsistency on restart by itself

2022-10-14 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17738:
--
Description: 
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups as well as counters may contain gaps even on primary).

1) Currently, "historical rebalance" is able to sync the data to the highest 
LWM for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.
Such a partition may be rebalanced correctly "later" if a full rebalance is 
eventually triggered.

2) In case LWM is the same on primary and backup, rebalance will never happen 
for such partition.

A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix the counters automatically (when cluster composition is 
equal to the baseline specified before the crash).

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.

  was:
On cluster restart (because of power-off, OOM or some other problem) it's 
possible to have PDS inconsistent (primary partitions may contain operations 
missed on backups).

Currently, "historical rebalance" is able to sync the data to the highest LWM 
for every partition. 
Most likely, a primary will be chosen as a rebalance source, but the data after 
the LWM will not be rebalanced. So, all updates between LWM and HWM will not be 
synchronized.

A possible solution for the case when the cluster failed and restarted (same 
baseline) is to fix counters to help "historical rebalance" perform the sync.

Counters should be set as
 - HWM at primary and as LWM at backups for caches with 2+ backups,
 - LWM at primary and as HWM at backups for caches with a single backup.

Possible solutions:
 * This can be implemented as an extension for the "-consistency finalize` 
command, for example `-consistency finalize-on-restart` or
 * Counters can be finalized automatically when cluster composition is equal to 
the baseline specified before the crash (preferred)


> Cluster must be able to fix the inconsistency on restart by itself
> --
>
> Key: IGNITE-17738
> URL: https://issues.apache.org/jira/browse/IGNITE-17738
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
>  Labels: iep-31, ise
>
> On cluster restart (because of power-off, OOM or some other problem) it's 
> possible to have PDS inconsistent (primary partitions may contain operations 
> missed on backups as well as counters may contain gaps even on primary).
> 1) Currently, "historical rebalance" is able to sync the data to the highest 
> LWM for every partition. 
> Most likely, a primary will be chosen as a rebalance source, but the data 
> after the LWM will not be rebalanced. So, all updates between LWM and HWM 
> will not be synchronized.
> Such a partition may be rebalanced correctly later if a full rebalance 
> is eventually triggered.
> 2) In case LWM is the same on primary and backup, rebalance will never happen 
> for such partition.
> A possible solution for the case when the cluster failed and restarted (same 
> baseline) is to fix the counters automatically (when cluster composition is 
> equal to the baseline specified before the crash).
> Counters should be set as
>  - HWM at primary and as LWM at backups for caches with 2+ backups,
>  - LWM at primary and as HWM at backups for caches with a single backup.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-17854) Update IgniteReleasedVersion with 2.14.0

2022-10-14 Thread Taras Ledkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov resolved IGNITE-17854.
---
Resolution: Fixed

Merged to 
[master|https://github.com/apache/ignite/commit/74d5d6117e8eae3045d87a8eaa82741ddccdf1e6]

> Update IgniteReleasedVersion with 2.14.0
> 
>
> Key: IGNITE-17854
> URL: https://issues.apache.org/jira/browse/IGNITE-17854
> Project: Ignite
>  Issue Type: Bug
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-17854) Update IgniteReleasedVersion with 2.14.0

2022-10-14 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617772#comment-17617772
 ] 

Ignite TC Bot commented on IGNITE-17854:


{panel:title=Branch: [pull/10296/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10296/head] Base: [master] : New Tests 
(8)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}PDS (Compatibility){color} [[tests 
8|https://ci2.ignite.apache.org/viewLog.html?buildId=6812718]]
* {color:#013220}IgniteCompatibilityBasicTestSuite: 
PersistenceBasicCompatibilityTest.testNodeStartByOldVersionPersistenceData[version=2.14.0]
 - PASSED{color}
* {color:#013220}IgniteCompatibilityBasicTestSuite: 
IndexTypesCompatibilityTest.testQueryOldIndex[ver=2.14.0] - PASSED{color}
* {color:#013220}IgniteCompatibilityBasicTestSuite: 
InlineJavaObjectCompatibilityTest.testQueryOldInlinedIndex[ver=2.14.0, 
cfgInlineSize=true] - PASSED{color}
* {color:#013220}IgniteCompatibilityBasicTestSuite: 
InlineJavaObjectCompatibilityTest.testQueryOldInlinedIndex[ver=2.14.0, 
cfgInlineSize=false] - PASSED{color}
* {color:#013220}IgniteCompatibilityBasicTestSuite: 
JavaThinCompatibilityTest.testOldClientToCurrentServer[Version 2.14.0] - 
PASSED{color}
* {color:#013220}IgniteCompatibilityBasicTestSuite: 
JdbcThinCompatibilityTest.testCurrentClientToOldServer[Version 2.14.0] - 
PASSED{color}
* {color:#013220}IgniteCompatibilityBasicTestSuite: 
JdbcThinCompatibilityTest.testOldClientToCurrentServer[Version 2.14.0] - 
PASSED{color}
* {color:#013220}IgniteCompatibilityBasicTestSuite: 
JavaThinCompatibilityTest.testCurrentClientToOldServer[Version 2.14.0] - 
PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=6812757&buildTypeId=IgniteTests24Java8_RunAll]

> Update IgniteReleasedVersion with 2.14.0
> 
>
> Key: IGNITE-17854
> URL: https://issues.apache.org/jira/browse/IGNITE-17854
> Project: Ignite
>  Issue Type: Bug
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-8801) Change default behaviour of atomic operations inside transactions

2022-10-14 Thread Julia Bakulina (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-8801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617751#comment-17617751
 ] 

Julia Bakulina commented on IGNITE-8801:


[~timoninmaxim], hi, please review the changes

> Change default behaviour of atomic operations inside transactions
> -
>
> Key: IGNITE-8801
> URL: https://issues.apache.org/jira/browse/IGNITE-8801
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Ryabov Dmitrii
>Assignee: Julia Bakulina
>Priority: Minor
>  Labels: ise
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Need to change default behaviour of atomic operations to fail inside 
> transactions.
> 1) Remove IGNITE_ALLOW_ATOMIC_OPS_IN_TX system property.
> 2) Set default value to restrict atomic operations in 
> \{{CacheOperationContext}} constructor without arguments and arguments for 
> calls of another constructor.
> 3) Fix javadocs.
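For illustration only, a minimal usage sketch of the intended behaviour (assuming the default becomes "restricted"; the exact exception type may differ):

{code:java}
// Hypothetical usage sketch, not from the ticket.
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteException;
import org.apache.ignite.transactions.Transaction;

class AtomicOpInTxSketch {
    static void tryAtomicPutInTx(Ignite ignite) {
        IgniteCache<Integer, String> atomicCache = ignite.cache("atomicCache"); // an ATOMIC cache

        try (Transaction tx = ignite.transactions().txStart()) {
            atomicCache.put(1, "value"); // expected to fail once atomic ops are restricted inside transactions
            tx.commit();
        }
        catch (IgniteException e) {
            // With the new default, the put above throws instead of silently running outside the tx.
        }
    }
}
{code}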



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17906) Implement optimized marshaller based on network stream serialization

2022-10-14 Thread Ivan Bessonov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Bessonov updated IGNITE-17906:
---
Reviewer:   (was: Kirill Tkalenko)

> Implement optimized marshaller based on network stream serialization
> 
>
> Key: IGNITE-17906
> URL: https://issues.apache.org/jira/browse/IGNITE-17906
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta1
>
>
> Please refer to IGNITE-17871 for description



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-17906) Implement optimized marshaller based on network stream serialization

2022-10-14 Thread Ivan Bessonov (Jira)
Ivan Bessonov created IGNITE-17906:
--

 Summary: Implement optimized marshaller based on network stream 
serialization
 Key: IGNITE-17906
 URL: https://issues.apache.org/jira/browse/IGNITE-17906
 Project: Ignite
  Issue Type: Improvement
Reporter: Ivan Bessonov
Assignee: Ivan Bessonov
 Fix For: 3.0.0-beta1


Please refer to IGNITE-17871 for description



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17905) Extract REST, Client and Compute configuration into corresponding modules

2022-10-14 Thread Ivan Bessonov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Bessonov updated IGNITE-17905:
---
Reviewer: Ivan Bessonov

> Extract REST, Client and Compute configuration into corresponding modules
> -
>
> Key: IGNITE-17905
> URL: https://issues.apache.org/jira/browse/IGNITE-17905
> Project: Ignite
>  Issue Type: Task
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is part of the work related to removing configuration from the 
> {{ignite-api}} module. The following configurations should be moved:
> * Rest configuration to {{ignite-rest}}
> * Client configuration to {{ignite-client-handler}}
> * Compute configuration to {{ignite-compute}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-8801) Change default behaviour of atomic operations inside transactions

2022-10-14 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-8801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617741#comment-17617741
 ] 

Ignite TC Bot commented on IGNITE-8801:
---

{panel:title=Branch: [pull/10318/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10318/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=6819525&buildTypeId=IgniteTests24Java8_RunAll]

> Change default behaviour of atomic operations inside transactions
> -
>
> Key: IGNITE-8801
> URL: https://issues.apache.org/jira/browse/IGNITE-8801
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Ryabov Dmitrii
>Assignee: Julia Bakulina
>Priority: Minor
>  Labels: ise
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Need to change default behaviour of atomic operations to fail inside 
> transactions.
> 1) Remove IGNITE_ALLOW_ATOMIC_OPS_IN_TX system property.
> 2) Set default value to restrict atomic operations in 
> \{{CacheOperationContext}} constructor without arguments and arguments for 
> calls of another constructor.
> 3) Fix javadocs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-17748) Enrich InternalTable.scan API in order to support index scans

2022-10-14 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich reassigned IGNITE-17748:
--

Assignee: Andrey Mashenkov  (was: Konstantin Orlov)

> Enrich InternalTable.scan API in order to support index scans
> -
>
> Key: IGNITE-17748
> URL: https://issues.apache.org/jira/browse/IGNITE-17748
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
>
> h3. Motivation
> SQL engine might specify index to use for scanning data along with some 
> additional parameters like lower/upper bounds, including/excluding such 
> bounds, columns to include, etc. All in all, we should enrich the InternalTable 
> scan API and provide the corresponding data propagation logic. 
> h3. Definition of Done
> *1.* InternalTable scan API has following method for both hash and sorted 
> indexes scanning
> {code:java}
> Publisher scan(int partId, @Nullable InternalTransaction tx, 
> @NotNull UUID indexId, BinaryTuple key, BitSet columnsToInclude){code}
> and following method for sorted index scanning
> {code:java}
> Publisher scan(int partId, @Nullable InternalTransaction tx, 
> @NotNull UUID indexId, @Nullable BinaryTuple leftBound, @Nullable BinaryTuple 
> rightBound, int flags, BitSet columnsToInclude) {code}
> Please check org.apache.ignite.internal.storage.index.SortedIndexStorage#scan 
> for flags explanation, briefly
> {code:java}
> flags Control flags. {@link #GREATER} | {@link #LESS} by default. Other 
> available values are {@link #GREATER_OR_EQUAL}, {@link #LESS_OR_EQUAL}.{code}
> *2.* Besides API itself corresponding scan-meta should be propagated to 
> PartitionReplicaListener, so that some changes are also expected within 
> ScanRetrieveBatchReplicaRequest. Please pay attention that there is probably 
> no sense in propagating the same scan-meta within every 
> ScanRetrieveBatchReplicaRequest; we might do it only within the initial request.
> *3.* Proper index is chosen. See optional indexId param and proper method of 
> either IndexStorage or specific SortedIndexStorage is used.
> h3. Implementation Notes
> Mainly it's all specified in the section above. It seems there is only one 
> caveat left, however a non-trivial one - BinaryRow to BinaryTuple conversion, 
> with the schema involved.
> h3. UPD:
> As was discussed, indexId will always be specified so that table internals 
> will never select the PK index or any other by themselves, so that @Nullable UUID 
> indexId is now @NotNull UUID indexId.
>  
> Besides the API extension itself, it's required to build a bridge between 
> https://issues.apache.org/jira/browse/IGNITE-17655 and the internalTable.scan 
> API, meaning that it's required to create implementations for the SQL index 
> interfaces introduced in 17655 that will propagate index scans to the 
> corresponding internalTable.scan API.
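For illustration, a rough sketch of how the proposed sorted-index overload could be invoked; the generic type of the publisher and the surrounding wrapper class are assumptions, since the mail formatting stripped angle brackets from the signatures quoted above:

{code:java}
// Sketch only: imports of Ignite-internal types (InternalTable, InternalTransaction,
// BinaryTuple, BinaryRow, SortedIndexStorage) are omitted; the generic parameter of
// Publisher is an assumption.
import java.util.BitSet;
import java.util.UUID;
import java.util.concurrent.Flow.Publisher;

class SortedIndexScanSketch {
    Publisher<BinaryRow> scanRange(InternalTable table, int partId, InternalTransaction tx,
            UUID sortedIndexId, BinaryTuple leftBound, BinaryTuple rightBound) {
        // Include the left bound, exclude the right bound (flags described in SortedIndexStorage#scan).
        int flags = SortedIndexStorage.GREATER_OR_EQUAL | SortedIndexStorage.LESS;

        BitSet columnsToInclude = new BitSet();
        columnsToInclude.set(0); // return only the first column of the index

        return table.scan(partId, tx, sortedIndexId, leftBound, rightBound, flags, columnsToInclude);
    }
}
{code}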



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-17759) Need to pass commitTableId and commitPartitionId to MvPartitionStorage#addWrite

2022-10-14 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov reassigned IGNITE-17759:
--

Assignee: Vladislav Pyatkov

> Need to pass commitTableId and commitPartitionId to 
> MvPartitionStorage#addWrite
> ---
>
> Key: IGNITE-17759
> URL: https://issues.apache.org/jira/browse/IGNITE-17759
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Uttsel
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3, transaction3_ro
>
> Currently when PartitionListener invokes MvPartitionStorage#addWrite it 
> passes 
> UUID.randomUUID() as commitTableId and 0 as commitPartitionId. We need to pass 
> appropriate values.
>  
> For this:
>  # Need to create
> {code:java}
> class PartitionId {
>     UUID tableId;
>     int partId;
> }{code}
>  # In InternalTableImpl#enlistInTx need to save PartitionId of the first 
> operation to the Transaction.
>  # Need to change {color:#172b4d}Map Long>> enlisted = new ConcurrentSkipListMap<>(){color} to Map IgniteBiTuple> enlisted = new ConcurrentHashMap<>();
>  # Need to change String groupId to PartitionId groupId in all places.
>  # In InternalTableImpl#enlistInTx need to pass PartitionId into 
> ReplicaRequest (ReadWriteSingleRowReplicaRequest, 
> ReadWriteSwapRowReplicaRequest, ReadWriteMultiRowReplicaRequest)
>  # In PartitionReplicaListener need to pass PartitionId from ReplicaRequest 
> to UpdateCommand and UpdateAllCommand.
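For illustration, a minimal sketch of the proposed PartitionId value class; the equals/hashCode pair is an assumption implied by its use as a key of the enlisted map:

{code:java}
// Sketch only: the ticket proposes the class, equals/hashCode are added here for map-key use.
import java.util.Objects;
import java.util.UUID;

class PartitionId {
    private final UUID tableId;
    private final int partId;

    PartitionId(UUID tableId, int partId) {
        this.tableId = tableId;
        this.partId = partId;
    }

    @Override public boolean equals(Object o) {
        if (this == o)
            return true;
        if (!(o instanceof PartitionId))
            return false;
        PartitionId other = (PartitionId)o;
        return partId == other.partId && tableId.equals(other.tableId);
    }

    @Override public int hashCode() {
        return Objects.hash(tableId, partId);
    }
}
{code}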



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-17905) Extract REST, Client and Compute configuration into corresponding modules

2022-10-14 Thread Aleksandr Polovtcev (Jira)
Aleksandr Polovtcev created IGNITE-17905:


 Summary: Extract REST, Client and Compute configuration into 
corresponding modules
 Key: IGNITE-17905
 URL: https://issues.apache.org/jira/browse/IGNITE-17905
 Project: Ignite
  Issue Type: Task
Reporter: Aleksandr Polovtcev
Assignee: Aleksandr Polovtcev


This is part of the work related to removing configuration from the 
{{ignite-api}} module. The following configurations should be moved:

* Rest configuration to {{ignite-rest}}
* Client configuration to {{ignite-client-handler}}
* Compute configuration to {{ignite-compute}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-13024) Calcite integration. Support and simplification of complex expressions in index conditions

2022-10-14 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617655#comment-17617655
 ] 

Ignite TC Bot commented on IGNITE-13024:


{panel:title=Branch: [pull/10317/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10317/head] Base: [master] : New Tests 
(2)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Calcite SQL{color} [[tests 
2|https://ci2.ignite.apache.org/viewLog.html?buildId=6819332]]
* {color:#013220}IgniteCalciteTestSuite: 
CalciteBasicSecondaryIndexIntegrationTest.testComplexIndexExpression - 
PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: 
IndexSearchBoundsPlannerTest.testBoundsComplex - PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=6819412&buildTypeId=IgniteTests24Java8_RunAll]

> Calcite integration. Support and simplification of complex expressions in 
> index conditions
> --
>
> Key: IGNITE-13024
> URL: https://issues.apache.org/jira/browse/IGNITE-13024
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Roman Kondakov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: calcite2-required, calcite3-required
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Current implementation supports only trivial expressions in index conditions, 
> i.e. {{x=1}}. We need to support all evaluatable expressions (which do not 
> depend on input/table references) like {{x=?+10}}. Also we need to ensure 
> that complex expressions in index filters are simplified (see 
> {{RexSimplify}}).
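For illustration only (table, column and cache names are made up, this is not from the ticket), the kind of condition involved — the right-hand side depends only on a parameter, so it can be evaluated once and used as an index search bound:

{code:java}
// Hypothetical illustration of an "evaluatable" index condition like x = ? + 10.
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.SqlFieldsQuery;

class ComplexIndexConditionExample {
    static List<List<?>> query(Ignite ignite) {
        SqlFieldsQuery qry = new SqlFieldsQuery("SELECT name FROM Person WHERE x = ? + 10").setArgs(5);

        return ignite.cache("PERSON_CACHE").query(qry).getAll();
    }
}
{code}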



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17738) Cluster must be able to fix the inconsistency on restart by itself

2022-10-14 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-17738:
--
Summary: Cluster must be able to fix the inconsistency on restart by itself 
 (was: Historical rebalance must be able to fix the inconsistency on cluster 
restart by itself)

> Cluster must be able to fix the inconsistency on restart by itself
> --
>
> Key: IGNITE-17738
> URL: https://issues.apache.org/jira/browse/IGNITE-17738
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
>  Labels: iep-31, ise
>
> On cluster restart (because of power-off, OOM or some other problem) it's 
> possible to have PDS inconsistent (primary partitions may contain operations 
> missed on backups).
> Currently, "historical rebalance" is able to sync the data to the highest LWM 
> for every partition. 
> Most likely, a primary will be chosen as a rebalance source, but the data 
> after the LWM will not be rebalanced. So, all updates between LWM and HWM 
> will not be synchronized.
> A possible solution for the case when the cluster failed and restarted (same 
> baseline) is to fix counters to help "historical rebalance" perform the sync.
> Counters should be set as
>  - HWM at primary and as LWM at backups for caches with 2+ backups,
>  - LWM at primary and as HWM at backups for caches with a single backup.
> Possible solutions:
>  * This can be implemented as an extension for the "-consistency finalize` 
> command, for example `-consistency finalize-on-restart` or
>  * Counters can be finalized automatically when cluster composition is equal 
> to the baseline specified before the crash (preferred)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-17285) Ambiguous output of INDEXES SytemView if cache is created via DDL

2022-10-14 Thread Ilya Shishkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Shishkov resolved IGNITE-17285.

Resolution: Not A Problem

[~alex_pl], [~jooger], sorry for the delayed reply.

Thanks for your comments. I'm closing the ticket, because:
* _key_PK_hash are not present in system view now (since  IGNITE-15424 ), so no 
duplication occurs.
* IGNITE-8386 and IGNITE-10217 contain exhaustive information about unwrapping 
indexes for SQL.

> Ambiguous output of INDEXES SytemView  if cache is created via DDL
> --
>
> Key: IGNITE-17285
> URL: https://issues.apache.org/jira/browse/IGNITE-17285
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ilya Shishkov
>Priority: Minor
>  Labels: ise
> Attachments: SqlIndexSystemViewReproducerTest.patch, 
> create_table.txt, query_entity.txt
>
>
> There is a difference in 'COLUMNS ' and 'INLINE_SIZE' columns content in 
> system view 'INDEXES', when you create SQL-cache by means of QueryEntity and 
> by means of DDL.
> As you can see in reproducer [^SqlIndexSystemViewReproducerTest.patch] , 
> there are two "equal" attempts to create a cache: via DDL, and via Cache API + 
> QueryEntity.
> Primary keys contain an equal set of fields, and the affinity fields are the same. 
> Outputs of system views TABLES, TABLE_COLUMNS and BINARY_METADATA are the 
> same for both ways of cache creation. Table content (i.e. select *) is also 
> the same, if you do not take into account the order of output.
> There are example sqlline outputs for table from reproducer:
>  # [^create_table.txt] - for table, created by DDL.
>  # [^query_entity.txt] - for table, created by Cache API.
> As you can see, columns and content differ in the INDEXES view: in the case of DDL, 
> indexes do not have the '_KEY' column and have an explicit set of columns from the 
> primary key. Also, there is a duplication of the affinity column 'ID' for:
> {code}
> "ID" ASC, "FIRSTNAME" ASC, "LASTNAME" ASC, "ID" ASC   
> {code}
> In case of table creation via Cache API + QueryEntity, no explicit primary key 
> columns are shown, but the '_KEY' column is, and there is no duplication of the 
> affinity column 'ID' in the '_key_PK_hash' index.
> Reproducer dumps indexes ({{org.h2.index.Index}}) collection content, which 
> is obtained from {{GridH2Table#getIndexes}}. It seems that the information 
> differs in this class too.
> Example output:
> {code:java|title=Cache API + QueryEntity}
> Index name   Columns
> _key_PK__SCAN_   [_KEY, ID]
> _key_PK_hash [_KEY, ID]
> _key_PK  [_KEY, ID]
> AFFINITY_KEY [ID, _KEY]
> PERSONINFO_CITYNAME_IDX  [CITYNAME, _KEY, ID]
> {code}
> {code:java|title=DDL}
> Index name   Columns
> _key_PK__SCAN_   [ID, FIRSTNAME, LASTNAME]
> _key_PK_hash [_KEY, ID]
> _key_PK  [ID, FIRSTNAME, LASTNAME]
> AFFINITY_KEY [ID, FIRSTNAME, LASTNAME]
> PERSONINFO_CITYNAME_IDX  [CITYNAME, ID, FIRSTNAME, LASTNAME]
> {code}
> If such difference is not a bug, it should be documented.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17903) Remove random index names from AbstractSortedIndexTest

2022-10-14 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-17903:
-
Fix Version/s: 3.0.0-beta1

> Remove random index names from AbstractSortedIndexTest
> --
>
> Key: IGNITE-17903
> URL: https://issues.apache.org/jira/browse/IGNITE-17903
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Blocker
>  Labels: ignite-3
> Fix For: 3.0.0-beta1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> AbstractSortedIndexTest uses random strings as index names which can lead to 
> unsupported index names being generated



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17904) Fix flaky VolatilePageMemorySortedIndexStorageTest#testBoundsAndOrder

2022-10-14 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-17904:
-
Description: 
https://ci.ignite.apache.org/viewLog.html?buildId=6831695=ignite3_Test_RunUnitTests=true

> Fix flaky VolatilePageMemorySortedIndexStorageTest#testBoundsAndOrder
> -
>
> Key: IGNITE-17904
> URL: https://issues.apache.org/jira/browse/IGNITE-17904
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta1
>
>
> https://ci.ignite.apache.org/viewLog.html?buildId=6831695=ignite3_Test_RunUnitTests=true



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-17904) Fix flaky VolatilePageMemorySortedIndexStorageTest#testBoundsAndOrder

2022-10-14 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev resolved IGNITE-17904.
--
Resolution: Duplicate

> Fix flaky VolatilePageMemorySortedIndexStorageTest#testBoundsAndOrder
> -
>
> Key: IGNITE-17904
> URL: https://issues.apache.org/jira/browse/IGNITE-17904
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta1
>
>
> https://ci.ignite.apache.org/viewLog.html?buildId=6831695=ignite3_Test_RunUnitTests=true



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-17904) Fix flaky VolatilePageMemorySortedIndexStorageTest#testBoundsAndOrder

2022-10-14 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-17904:


 Summary: Fix flaky 
VolatilePageMemorySortedIndexStorageTest#testBoundsAndOrder
 Key: IGNITE-17904
 URL: https://issues.apache.org/jira/browse/IGNITE-17904
 Project: Ignite
  Issue Type: Task
Reporter: Kirill Tkalenko
 Fix For: 3.0.0-beta1






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-17901) .NET: Tests fail on TC due to Gradle daemon

2022-10-14 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617621#comment-17617621
 ] 

Pavel Tupitsyn commented on IGNITE-17901:
-

Second part merged to main: 66f1bd1069426ff7d68fd52041772985957e9e19

> .NET: Tests fail on TC due to Gradle daemon
> ---
>
> Key: IGNITE-17901
> URL: https://issues.apache.org/jira/browse/IGNITE-17901
> Project: Ignite
>  Issue Type: Bug
>  Components: build, ignite-3
>Reporter: Mikhail Pochatkin
>Assignee: Mikhail Pochatkin
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> .NET tests fail with "Starting a Gradle Daemon, 5 busy Daemons could not be 
> reused, use --status for details" error.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17903) Remove random index names from AbstractSortedIndexTest

2022-10-14 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-17903:
-
Labels: ignite-3  (was: )

> Remove random index names from AbstractSortedIndexTest
> --
>
> Key: IGNITE-17903
> URL: https://issues.apache.org/jira/browse/IGNITE-17903
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Blocker
>  Labels: ignite-3
>
> AbstractSortedIndexTest uses random strings as index names which can lead to 
> unsupported index names being generated



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-17903) Remove random index names from AbstractSortedIndexTest

2022-10-14 Thread Aleksandr Polovtcev (Jira)
Aleksandr Polovtcev created IGNITE-17903:


 Summary: Remove random index names from AbstractSortedIndexTest
 Key: IGNITE-17903
 URL: https://issues.apache.org/jira/browse/IGNITE-17903
 Project: Ignite
  Issue Type: Bug
Reporter: Aleksandr Polovtcev
Assignee: Aleksandr Polovtcev


AbstractSortedIndexTest uses random strings as index names which can lead to 
unsupported index names being generated



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17903) Remove random index names from AbstractSortedIndexTest

2022-10-14 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-17903:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Remove random index names from AbstractSortedIndexTest
> --
>
> Key: IGNITE-17903
> URL: https://issues.apache.org/jira/browse/IGNITE-17903
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Blocker
>  Labels: ignite-3
>
> AbstractSortedIndexTest uses random strings as index names which can lead to 
> unsupported index names being generated



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-17383) IdleVerify hangs when called on inactive cluster with persistence

2022-10-14 Thread Julia Bakulina (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julia Bakulina reassigned IGNITE-17383:
---

Assignee: Julia Bakulina

> IdleVerify hangs when called on inactive cluster with persistence
> -
>
> Key: IGNITE-17383
> URL: https://issues.apache.org/jira/browse/IGNITE-17383
> Project: Ignite
>  Issue Type: Bug
>  Components: control.sh
>Affects Versions: 2.13
>Reporter: Ilya Shishkov
>Assignee: Julia Bakulina
>Priority: Minor
>  Labels: ise
>
> When you call {{control.sh --cache idle_verify}} on an inactive cluster with 
> persistence, the control script hangs and no actions are performed. As you can 
> see below in the 'rest' thread dump, {{VerifyBackupPartitionsTaskV2}} waits for 
> a checkpoint start in {{GridCacheDatabaseSharedManager#waitForCheckpoint}}.
> It seems that we can interrupt the task execution and print a message in the 
> control script output saying that IdleVerify can't work on an inactive cluster.
> {code:title=Thread dump}
> "rest-#82%ignite-server%" #146 prio=5 os_prio=31 tid=0x7fe0cf97c000 
> nid=0x3607 waiting on condition [0x700010149000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.waitForCheckpoint(GridCacheDatabaseSharedManager.java:1869)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.waitForCheckpoint(IgniteCacheDatabaseSharedManager.java:1107)
>   at 
> org.apache.ignite.internal.processors.cache.verify.VerifyBackupPartitionsTaskV2$VerifyBackupPartitionsJobV2.execute(VerifyBackupPartitionsTaskV2.java:199)
>   at 
> org.apache.ignite.internal.processors.cache.verify.VerifyBackupPartitionsTaskV2$VerifyBackupPartitionsJobV2.execute(VerifyBackupPartitionsTaskV2.java:171)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:620)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:7366)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:614)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:539)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
>   at 
> org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1343)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.sendRequest(GridTaskWorker.java:1444)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.processMappedJobs(GridTaskWorker.java:674)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.body(GridTaskWorker.java:540)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.startTask(GridTaskProcessor.java:860)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.execute(GridTaskProcessor.java:470)
>   at 
> org.apache.ignite.internal.IgniteComputeImpl.executeAsync0(IgniteComputeImpl.java:514)
>   at 
> org.apache.ignite.internal.IgniteComputeImpl.executeAsync(IgniteComputeImpl.java:496)
>   at 
> org.apache.ignite.internal.visor.verify.VisorIdleVerifyJob.run(VisorIdleVerifyJob.java:70)
>   at 
> org.apache.ignite.internal.visor.verify.VisorIdleVerifyJob.run(VisorIdleVerifyJob.java:35)
>   at org.apache.ignite.internal.visor.VisorJob.execute(VisorJob.java:69)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:620)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:7366)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:614)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:539)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
>   at 
> org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1343)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.sendRequest(GridTaskWorker.java:1444)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.processMappedJobs(GridTaskWorker.java:674)
>   at 
> 

[jira] [Comment Edited] (IGNITE-11368) use the same information about indexes for JDBC drivers as for system view INDEXES

2022-10-14 Thread Ilya Shishkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617588#comment-17617588
 ] 

Ilya Shishkov edited comment on IGNITE-11368 at 10/14/22 9:16 AM:
--

*UPDATE:*

The previous comment is no longer relevant:
* _SCAN and _key_PK_hash are not present in system view.
* Origin of system view metadata was changed after IGNITE-15424: 
{{TableDescriptor}} and {{IndexDescriptor}} are used.


was (Author: shishkovilja):
*UPDATE:*
Previous comment is not actual now:

_SCAN and _key_PK_hash are not present in system view.

Origin of system view metadata was changed after IGNITE-15424: 
{{TableDescriptor}} and {{IndexDescriptor}} are used.

> use the same information about indexes for JDBC drivers as for system view 
> INDEXES
> --
>
> Key: IGNITE-11368
> URL: https://issues.apache.org/jira/browse/IGNITE-11368
> Project: Ignite
>  Issue Type: Task
>  Components: jdbc, odbc, sql
>Reporter: Yury Gerzhedovich
>Assignee: Ilya Shishkov
>Priority: Major
>  Labels: ise, newbie
> Attachments: indexes_sqlline.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As of now, index information for JDBC drivers is obtained in a different way than 
> for the system SQL view INDEXES. We need to use a single source of information to 
> have a consistent picture.
> So, JDBC drivers should use the same source as SQL view INDEXES 
> (org.apache.ignite.internal.processors.query.h2.sys.view.SqlSystemViewIndexes)
> Start point for JDBC index metadata is 
> org.apache.ignite.internal.jdbc2.JdbcDatabaseMetadata#getIndexInfo
> Also, the order of the result should correspond to the Javadoc ('ordered by NON_UNIQUE, 
> TYPE, INDEX_NAME, and ORDINAL_POSITION') - at present it is not.
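For illustration only, a sketch of the ordering required by the getIndexInfo Javadoc quoted above; the IndexRow holder type is made up, not an Ignite class:

{code:java}
// Hypothetical sketch of the ordering required by DatabaseMetaData#getIndexInfo:
// NON_UNIQUE, TYPE, INDEX_NAME, ORDINAL_POSITION.
import java.util.Comparator;

class IndexInfoOrdering {
    static class IndexRow {
        boolean nonUnique;
        short type;
        String indexName;
        short ordinalPosition;
    }

    static final Comparator<IndexRow> JDBC_ORDER = Comparator
        .comparing((IndexRow r) -> r.nonUnique)
        .thenComparing((IndexRow r) -> r.type)
        .thenComparing((IndexRow r) -> r.indexName)
        .thenComparing((IndexRow r) -> r.ordinalPosition);
}
{code}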



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-11368) use the same information about indexes for JDBC drivers as for system view INDEXES

2022-10-14 Thread Ilya Shishkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617588#comment-17617588
 ] 

Ilya Shishkov commented on IGNITE-11368:


*UPDATE:*
The previous comment is no longer relevant:

_SCAN and _key_PK_hash are not present in system view.

Origin of system view metadata was changed after IGNITE-15424: 
{{TableDescriptor}} and {{IndexDescriptor}} are used.

> use the same information about indexes for JDBC drivers as for system view 
> INDEXES
> --
>
> Key: IGNITE-11368
> URL: https://issues.apache.org/jira/browse/IGNITE-11368
> Project: Ignite
>  Issue Type: Task
>  Components: jdbc, odbc, sql
>Reporter: Yury Gerzhedovich
>Assignee: Ilya Shishkov
>Priority: Major
>  Labels: ise, newbie
> Attachments: indexes_sqlline.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As of now, index information for JDBC drivers is obtained in a different way than 
> for the system SQL view INDEXES. We need to use a single source of information to 
> have a consistent picture.
> So, JDBC drivers should use the same source as SQL view INDEXES 
> (org.apache.ignite.internal.processors.query.h2.sys.view.SqlSystemViewIndexes)
> Start point for JDBC index metadata is 
> org.apache.ignite.internal.jdbc2.JdbcDatabaseMetadata#getIndexInfo
> Also, the order of the result should correspond to the Javadoc ('ordered by NON_UNIQUE, 
> TYPE, INDEX_NAME, and ORDINAL_POSITION') - at present it is not.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17872) Fetch commit index on non-primary replicas instead of waiting for safe time in case of RO tx on idle cluster

2022-10-14 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-17872:
--
Description: 
Safe time for non-primary replicas (see IGNITE-17263) was conceived as an 
optimization to avoid unnecessary network hops. Safe time is propagated from the 
primary replica via raft appendEntries messages. When there is constant load on 
the cluster caused by RW transactions, these messages refresh safe 
time on replicas with decent frequency, but in the case of an idle cluster, or a cluster 
with a read-only load, safe time is propagated periodically via heartbeats. This 
means that if an RO transaction with a read timestamp in the present or future 
tries to read a value from a non-primary replica, it will wait for safe time 
first, which is bound to the frequency of heartbeat messages; hence, the 
duration of the read operation may be close to the heartbeat period. This 
is surprising and will cause performance issues. 

Example:
Heartbeat period is 500 ms. 
Current safe time on replica is 1.
We are processing read-only request with timestamp=2. 
There were no RW transactions for some time, and the next expected update of 
safe time, according to the heartbeat period, is 1 + 500 = 501.
This means that we should wait for about 499 ms (assuming the clock skew and 
ping in the cluster are 0) to proceed with RO request processing.

So, even though safe time is an optimization, we shouldn't use it in cases when 
there are no RW transactions affecting the given replica, and the timestamp of 
current RO transaction is greater than safe time. Instead of waiting for the 
safe time update, we should fall back to reading the index from the leader to 
minimize the time of processing the current RO request.

To do this, we should compare the read timestamp with the safe time. If the read 
timestamp is greater, and more time than some timeout has passed since the last RW 
transaction affecting this replica (i.e. we expect that the safe time will be 
updated only via periodic updates), we shouldn't wait for the safe time; instead, 
we should perform a read-index request to the leader to get the latest 
updates that may not have been replicated yet. 

  was:
Safe time for non-primary replicas (see IGNITE-17263 ) was conceived as 
optimization to avoid unnecessary network hops. Safe time is propagated from 
primary replica via raft appendEntries messages. When there is constant load on 
cluster that is caused by RW transactions, these messages are refreshing safe 
time on replicas with decent frequency, but in case of idle cluster, or cluster 
with read-only load, safe time is propagated periodically via heartbeats. This 
means that, if a RO transaction with read timestamp in present or future, is 
trying to read a value from non-primary replica, it will wait for safe time 
first, which is bound to frequency of heartbeat messages, and hence, the 
duration of the read operation may be close to the period of heartbeats. This 
looks weird and will cause performance issues. 

Example:
Heartbeat period is 500 ms. 
Current safe time on replica is 1.
We are processing read-only request with timestamp=2. 
There were no RW transactions for some time, and the next expected update of 
safe time, according to the heartbeat period, is 1 + 500 = 501.
This means that we should wait for about 499 ms (assuming the clock skew and 
ping in cluster is 0) to proceed with RO request processing.

So, even though safe time is an optimization, we shouldn't use it in cases when 
there are no RW transactions affecting the given replica, and the timestamp of 
current RO transaction is greater than safe time. Instead of waiting for the 
safe time update, we should fallback to reading index from the leader to 
minimize the time of processing the current RO request.


> Fetch commit index on non-primary replicas instead of waiting for safe time 
> in case of RO tx on idle cluster
> 
>
> Key: IGNITE-17872
> URL: https://issues.apache.org/jira/browse/IGNITE-17872
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Priority: Major
>  Labels: ignite-3, transaction3_ro
>
> Safe time for non-primary replicas (see IGNITE-17263 ) was conceived as 
> optimization to avoid unnecessary network hops. Safe time is propagated from 
> primary replica via raft appendEntries messages. When there is constant load 
> on cluster that is caused by RW transactions, these messages are refreshing 
> safe time on replicas with decent frequency, but in case of idle cluster, or 
> cluster with read-only load, safe time is propagated periodically via 
> heartbeats. This means that, if a RO transaction with read timestamp in 
> present or future, is trying to read a value from 
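A minimal sketch of the fallback decision proposed in this ticket; all names and the use of plain long timestamps are illustrative assumptions, not the actual replica code:

{code:java}
// Hypothetical decision sketch; names and long-based timestamps are illustrative only.
class SafeTimeFallbackSketch {
    /**
     * Decide whether to wait for safe time or to fall back to a read-index request
     * to the leader, as proposed above.
     */
    static boolean fallBackToReadIndex(long readTs, long safeTs, long lastRwActivityTs, long now, long idleTimeout) {
        boolean safeTimeBehindRead = readTs > safeTs;
        boolean replicaLooksIdle = now - lastRwActivityTs > idleTimeout;

        // If safe time is expected to advance only via periodic heartbeats, do not wait for it.
        return safeTimeBehindRead && replicaLooksIdle;
    }
}
{code}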

[jira] [Updated] (IGNITE-17599) Calcite engine. Support LocalDate/LocalTime types

2022-10-14 Thread Dmitry Pavlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pavlov updated IGNITE-17599:
---
Labels: calcite2-required calcite3-required ise  (was: calcite2-required 
calcite3-required)

> Calcite engine. Support LocalDate/LocalTime types
> -
>
> Key: IGNITE-17599
> URL: https://issues.apache.org/jira/browse/IGNITE-17599
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.14
>Reporter: Aleksey Plekhanov
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: calcite2-required, calcite3-required, ise
> Fix For: 2.15
>
>
> The H2-based engine works with LocalDate/LocalTime types as 
> java.sql.Date/java.sql.Time types. To check if a value with the LocalDate type can 
> be inserted into a descriptor with type java.sql.Date, some logic from 
> \{{IgniteH2Indexing.isConvertibleToColumnType}} is used. If the Calcite engine is 
> used without ignite-indexing, this logic is unavailable.
> We should:
>  # Provide an ability to work in the Calcite-based engine with the 
> LocalDate/LocalTime types in the same way as with java.sql.Date/java.sql.Time types.
>  # Move \{{IgniteH2Indexing.isConvertibleToColumnType}} logic to the core 
> module (perhaps delegating this call from the core to the QueryEngine 
> instance)
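For reference, the plain JDK conversions between the two representations mentioned above; this is an illustration only, not the Ignite implementation:

{code:java}
// Standard JDK conversions between the java.time and java.sql date/time types.
import java.sql.Date;
import java.sql.Time;
import java.time.LocalDate;
import java.time.LocalTime;

class LocalDateTimeMapping {
    static Date toSqlDate(LocalDate d) {
        return Date.valueOf(d);            // LocalDate -> java.sql.Date
    }

    static LocalDate toLocalDate(Date d) {
        return d.toLocalDate();            // java.sql.Date -> LocalDate
    }

    static Time toSqlTime(LocalTime t) {
        return Time.valueOf(t);            // LocalTime -> java.sql.Time (seconds precision)
    }

    static LocalTime toLocalTime(Time t) {
        return t.toLocalTime();            // java.sql.Time -> LocalTime
    }
}
{code}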



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17641) Investigate message coalescing in Casandra (versions < 4)

2022-10-14 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-17641:
-
Description: 
As a part of the work on the IGNITE-17625 we decided to investigate message 
coalescing in Casandra (versions < 4). They developed several techniques over 
TCP communication layer.

As a result, there should be some document or presentation with the 
investigation.

  was:
As a part of the work on the IGNITE-17625 we decided to investigate message 
coalescing in Casandra (versions < 4). The developed several techniques over 
TCP communication layer.

As a result, there should be some document or presentation with the 
investigation.


> Investigate message coalescing in Casandra (versions < 4)
> -
>
> Key: IGNITE-17641
> URL: https://issues.apache.org/jira/browse/IGNITE-17641
> Project: Ignite
>  Issue Type: Task
>Reporter: Mirza Aliev
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> As a part of the work on the IGNITE-17625 we decided to investigate message 
> coalescing in Casandra (versions < 4). They developed several techniques over 
> TCP communication layer.
> As a result, there should be some document or presentation with the 
> investigation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17599) Calcite engine. Support LocalDate/LocalTime types

2022-10-14 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-17599:
-
Affects Version/s: 2.14

> Calcite engine. Support LocalDate/LocalTime types
> -
>
> Key: IGNITE-17599
> URL: https://issues.apache.org/jira/browse/IGNITE-17599
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.14
>Reporter: Aleksey Plekhanov
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: calcite2-required, calcite3-required
>
> The H2-based engine works with LocalDate/LocalTime types as 
> java.sql.Date/java.sql.Time types. To check if a value with the LocalDate type can 
> be inserted into a descriptor with type java.sql.Date, some logic from 
> \{{IgniteH2Indexing.isConvertibleToColumnType}} is used. If the Calcite engine is 
> used without ignite-indexing, this logic is unavailable.
> We should:
>  # Provide an ability to work in the Calcite-based engine with the 
> LocalDate/LocalTime types in the same way as with java.sql.Date/java.sql.Time types.
>  # Move \{{IgniteH2Indexing.isConvertibleToColumnType}} logic to the core 
> module (perhaps delegating this call from the core to the QueryEngine 
> instance)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17599) Calcite engine. Support LocalDate/LocalTime types

2022-10-14 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-17599:
-
Fix Version/s: 2.15

> Calcite engine. Support LocalDate/LocalTime types
> -
>
> Key: IGNITE-17599
> URL: https://issues.apache.org/jira/browse/IGNITE-17599
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.14
>Reporter: Aleksey Plekhanov
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: calcite2-required, calcite3-required
> Fix For: 2.15
>
>
> The H2-based engine works with LocalDate/LocalTime types as 
> java.sql.Date/java.sql.Time types. To check if a value with the LocalDate type can 
> be inserted into a descriptor with type java.sql.Date, some logic from 
> \{{IgniteH2Indexing.isConvertibleToColumnType}} is used. If the Calcite engine is 
> used without ignite-indexing, this logic is unavailable.
> We should:
>  # Provide an ability to work in the Calcite-based engine with the 
> LocalDate/LocalTime types in the same way as with java.sql.Date/java.sql.Time types.
>  # Move \{{IgniteH2Indexing.isConvertibleToColumnType}} logic to the core 
> module (perhaps delegating this call from the core to the QueryEngine 
> instance)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17731) Possible LRT in case of postponed GridDhtLockRequest

2022-10-14 Thread Dmitry Pavlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pavlov updated IGNITE-17731:
---
Priority: Minor  (was: Major)

> Possible LRT in case of postponed GridDhtLockRequest
> 
>
> Key: IGNITE-17731
> URL: https://issues.apache.org/jira/browse/IGNITE-17731
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Petrov
>Priority: Minor
>  Labels: IEP-89, ise
>
> Let's assume the following scenario:
> 1. The TX coordinator starts a transaction and sends GridDhtLockRequest to "near" 
> nodes.
> 2. Some GridDhtLockRequest messages were delayed by the network. 
> 3. Not all "near" nodes receive GridDhtLockRequest, and as a result not all of 
> them respond to the TX coordinator.
> 4. The TX coordinator aborts the TX by the timeout.
> 5. The completed TX ID is stored in IgniteTxManager#completedVersHashMap.
> 6. TX load continues (assume puts into a TX cache) and the record about the 
> completed TX described above is evicted from the map.
> 7. The GridDhtLockRequest from clause 2 is finally received by the "near" 
> nodes. They lock keys, start the local TX, and respond to the TX coordinator.
> But currently the TX coordinator ignores the GridDhtLockResponse, as the info about 
> the initial TX was evicted, and does nothing.
> As a result, near nodes keep holding key locks and waiting for the next steps of 
> the TX protocol, which will never happen as the TX was already completed.
> As a workaround, the TX can be explicitly KILLED on the near node. 
> It is proposed to handle this situation and not acquire locks on the near node 
> if the TX coordinator or other cluster nodes have no notion of the TX to which 
> the current lock request belongs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17731) Possible LRT in case of postponed GridDhtLockRequest

2022-10-14 Thread Dmitry Pavlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pavlov updated IGNITE-17731:
---
Labels: IEP-89 ise  (was: IEP-89)

> Possible LRT in case of postponed GridDhtLockRequest
> 
>
> Key: IGNITE-17731
> URL: https://issues.apache.org/jira/browse/IGNITE-17731
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Petrov
>Priority: Major
>  Labels: IEP-89, ise
>
> Let's assume the following scenario:
> 1. The TX coordinator starts a transaction and sends GridDhtLockRequest to "near" 
> nodes.
> 2. Some GridDhtLockRequest messages were delayed by the network. 
> 3. Not all "near" nodes receive GridDhtLockRequest, and as a result not all of 
> them respond to the TX coordinator.
> 4. The TX coordinator aborts the TX by the timeout.
> 5. The completed TX ID is stored in IgniteTxManager#completedVersHashMap.
> 6. TX load continues (assume puts into a TX cache) and the record about the 
> completed TX described above is evicted from the map.
> 7. The GridDhtLockRequest from clause 2 is finally received by the "near" 
> nodes. They lock keys, start the local TX, and respond to the TX coordinator.
> But currently the TX coordinator ignores the GridDhtLockResponse, as the info about 
> the initial TX was evicted, and does nothing.
> As a result, near nodes keep holding key locks and waiting for the next steps of 
> the TX protocol, which will never happen as the TX was already completed.
> As a workaround, the TX can be explicitly KILLED on the near node. 
> It is proposed to handle this situation and not acquire locks on the near node 
> if the TX coordinator or other cluster nodes have no notion of the TX to which 
> the current lock request belongs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-17895) Assertion on histogram update if currentTimeMillis decreases

2022-10-14 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617561#comment-17617561
 ] 

Ignite TC Bot commented on IGNITE-17895:


{panel:title=Branch: [pull/10313/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10313/head] Base: [master] : New Tests 
(1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Basic 4{color} [[tests 
1|https://ci2.ignite.apache.org/viewLog.html?buildId=6817072]]
* {color:#013220}IgniteBasicTestSuite2: 
PeriodicHistogramMetricImplTest.testCurrentTimeDecreasing - PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=6817179&buildTypeId=IgniteTests24Java8_RunAll]

> Assertion on histogram update if currentTimeMillis decreases
> 
>
> Key: IGNITE-17895
> URL: https://issues.apache.org/jira/browse/IGNITE-17895
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Histogram metric classes (\{{HistogramMetricImpl}}, 
> \{{PeriodicHistogramMetricImpl}}) assume that timestamp is always increasing, 
> but in some cases method {{U.currentTimeMillis()}} can return decreasing 
> values (for example, if the time was set manually or on an NTP sync). In these cases 
> an assertion error can be thrown on histogram update:
> {noformat}
> java.lang.AssertionError: null
> at 
> org.apache.ignite.internal.processors.metric.impl.HistogramMetricImpl.value(HistogramMetricImpl.java:61)
>  ~[classes/:?]
> at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:643)
>  [classes/:?]
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125) 
> [classes/:?]
> at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]{noformat}
> We should fix {{PeriodicHistogramMetricImpl#add}} and 
> {{HistogramMetricImpl#value}} methods to work correctly with decreasing 
> timestamps.
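For illustration, one possible shape of a fix — clamping the timestamp so it never decreases; this is a sketch, not the actual patch to the histogram classes:

{code:java}
// Illustrative only: one way to make timestamp handling tolerant of a clock that steps back.
// The real HistogramMetricImpl/PeriodicHistogramMetricImpl fix may differ.
class MonotonicClock {
    private long lastTs;

    /** Returns a timestamp that never decreases, even if the wall clock does. */
    synchronized long next(long wallClockMillis) {
        lastTs = Math.max(lastTs, wallClockMillis);
        return lastTs;
    }
}
{code}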



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-17599) Calcite engine. Support LocalDate/LocalTime types

2022-10-14 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky reassigned IGNITE-17599:


Assignee: Ivan Daschinsky

> Calcite engine. Support LocalDate/LocalTime types
> -
>
> Key: IGNITE-17599
> URL: https://issues.apache.org/jira/browse/IGNITE-17599
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: calcite2-required, calcite3-required
>
> The H2-based engine works with LocalDate/LocalTime types as 
> java.sql.Date/java.sql.Time types. To check if a value with the LocalDate type can 
> be inserted into a descriptor with type java.sql.Date, some logic from 
> \{{IgniteH2Indexing.isConvertibleToColumnType}} is used. If the Calcite engine is 
> used without ignite-indexing, this logic is unavailable.
> We should:
>  # Provide an ability to work in the Calcite-based engine with the 
> LocalDate/LocalTime types in the same way as with java.sql.Date/java.sql.Time types.
>  # Move \{{IgniteH2Indexing.isConvertibleToColumnType}} logic to the core 
> module (perhaps delegating this call from the core to the QueryEngine 
> instance)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17598) Unbind some JDBC/ODBC metadata requests from the ignite-indexing module

2022-10-14 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-17598:
-
Fix Version/s: 2.15

> Unbind some JDBC/ODBC metadata requests from the ignite-indexing module
> ---
>
> Key: IGNITE-17598
> URL: https://issues.apache.org/jira/browse/IGNITE-17598
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.14
>Reporter: Aleksey Plekhanov
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: ise
> Fix For: 2.15
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> After IGNITE-15424 some JDBC/ODBC metadata requests still require indexing 
> module (Methods {{IgniteH2Indexing.resultMetaData}} and 
> \{{IgniteH2Indexing.parameterMetaData}} are used). If a query engine is used 
> without the ignite-indexing module, such requests can fail.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-17620) [Ignite Website] Update ignite-extensions header

2022-10-14 Thread Erlan Aytpaev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erlan Aytpaev resolved IGNITE-17620.

Resolution: Fixed

> [Ignite Website] Update ignite-extensions header
> 
>
> Key: IGNITE-17620
> URL: https://issues.apache.org/jira/browse/IGNITE-17620
> Project: Ignite
>  Issue Type: Task
>  Components: website
>Reporter: Alexey Alexandrov
>Assignee: Erlan Aytpaev
>Priority: Minor
>
> Please replace commented-out header in 
> [https://github.com/apache/ignite-extensions/blob/master/docs/_includes/header.html]
> With this:
> {code:xml}
> 
> 
> 
> 
>  width="18" height="12" alt="menu icon" />
> 
> 
> 
>  src="{{'assets/images/apache_ignite_logo.svg'|relative_url}}" alt="Apache 
> Ignite logo" width="103" height="36" >
> 
> 
> 
>  src='{{"assets/images/cancel.svg"|relative_url}}' alt="close" width="10" 
> height="10" />
> 
> 
> ⋮
> 
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-17622) [Ignite Website] Automate extensions documentation build

2022-10-14 Thread Erlan Aytpaev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erlan Aytpaev resolved IGNITE-17622.

Resolution: Fixed

> [Ignite Website] Automate extensions documentation build
> 
>
> Key: IGNITE-17622
> URL: https://issues.apache.org/jira/browse/IGNITE-17622
> Project: Ignite
>  Issue Type: Task
>  Components: website
>Reporter: Alexey Alexandrov
>Assignee: Erlan Aytpaev
>Priority: Minor
>
> Extensions part of the ignite.apache.org documentation is built from 
> [https://github.com/apache/ignite-extensions]; we need to automate it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17880) Topology version must be extended with topology epoch

2022-10-14 Thread Dmitry Pavlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pavlov updated IGNITE-17880:
---
Labels: ise  (was: )

> Topology version must be extended with topology epoch
> -
>
> Key: IGNITE-17880
> URL: https://issues.apache.org/jira/browse/IGNITE-17880
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
>  Labels: ise
>
> The epoch must be presented as a timestamp and version pair.
> The epoch timestamp must represent the epoch start time.
> The epoch version must be incremented each time the topology version changes 
> from 0 to 1 (when the cluster is started or restarted).
> Any decrease or invariance of the epoch version on join must be logged as a warning.
> The epoch (version and timestamp) must be logged at every topology version change.
> This will 
> - help to determine how many times the cluster was restarted (and make it 
> easier to determine when)
> - check that a part of the cluster which was restarted several times as a 
> standalone cluster never joins the rest of the cluster with a lower 
> epoch (catching some segmentation and management problems)
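
A minimal sketch of how the epoch pair and the join-time warning could look follows; 
all names below are illustrative assumptions, not existing Ignite classes:

{code:java}
import java.util.function.Consumer;

/** Illustrative sketch: a topology epoch as a (start timestamp, version) pair. */
public class TopologyEpoch {
    private final long startTimeMs; // epoch start time
    private final long version;     // incremented on every cluster start or restart

    public TopologyEpoch(long startTimeMs, long version) {
        this.startTimeMs = startTimeMs;
        this.version = version;
    }

    /** Logs a warning when the joining side's epoch version decreased or did not change. */
    public void checkOnJoin(TopologyEpoch joining, Consumer<String> warn) {
        if (joining.version <= version) {
            warn.accept("Epoch version did not increase on join: local=" + version
                + " (started at " + startTimeMs + "), joining=" + joining.version
                + " (started at " + joining.startTimeMs + ")");
        }
    }
}
{code}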



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-17873) [Ignite Website] Ignite Summit 2022 Europe_Update website banners

2022-10-14 Thread Evgenia (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgenia resolved IGNITE-17873.
--
Resolution: Done

> [Ignite Website] Ignite Summit 2022 Europe_Update website banners
> -
>
> Key: IGNITE-17873
> URL: https://issues.apache.org/jira/browse/IGNITE-17873
> Project: Ignite
>  Issue Type: Task
>  Components: website
>Reporter: Evgenia
>Assignee: Erlan Aytpaev
>Priority: Major
> Attachments: Event page.jpg, docs.jpg, ignite-Summit.jpg
>
>
> Please add a new Ignite Summit and update event banners.
>  
> All the links should lead to [Ignite Summit November 9, 2022 
> |https://ignite-summit.org/2022-november/]
> Places to update banners:
> 1) Featured events
> [Distributed Database - Apache Ignite|https://ignite.apache.org/]
> 2) Doc's banner
> [https://ignite.apache.org/docs/latest/]
> 3) Text banner at the top (doc's page)
> Ignite Summit Europe — November 9 — Join virtually! 
> 4) Update event page with a new image
> [Apache Ignite Events - Meetups, Summit, 
> Conference|https://ignite.apache.org/events.html#summit]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-17536) Implement BinaryTuple inlining in a hash index B+Tree

2022-10-14 Thread Ivan Bessonov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617536#comment-17617536
 ] 

Ivan Bessonov commented on IGNITE-17536:


Looks good to me, thank you!

> Implement BinaryTuple inlining in a hash index B+Tree
> -
>
> Key: IGNITE-17536
> URL: https://issues.apache.org/jira/browse/IGNITE-17536
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta1
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> In the simple implementation, instead of a *BinaryTuple*, we store its link 
> from the *FreeList* in the key. This is not optimal, and we need to inline the 
> *BinaryTuple* into the key; as a starting point, see how this is done in 2.0.
> What should be done:
> * Calculate approximate inline size:
> ** It is set by the user through the configuration or calculated 
> automatically, but should not be more than 2 KB (similar to 
> *PageIO#MAX_PAYLOAD_SIZE* from 2.0);
> ** For automatic calculation, the *BinaryTuple* format must be taken into 
> account: header + null map + offset table;
> *** To cover most cases, we will consider the size class to be 2 bytes 
> ([BinaryTupleFormat-Header|https://cwiki.apache.org/confluence/display/IGNITE/IEP-92%3A+Binary+Tuple+Format#IEP92:BinaryTupleFormat-Header]);
> *** For columns of variable length (for example, a string), we will assume 
> that 10 bytes should be enough;
> * Write *BinaryTuple* as is up to inline size;
> ** If the user *BinaryTuple* is larger than the inline size, we will write it 
> without any transformation (as it is), truncated to the inline size (cut 
> it off);
> ** If the user *BinaryTuple* is less than or equal to the inline size, then 
> we will write it as is without writing it to the FreeList;
> *** At the same time, we need to make a note somewhere that the *BinaryTuple* 
> is written in full; this will help us when comparing *BinaryTuple*'s inside 
> the B+tree;
> * Comparing *BinaryTuple*'s at B+tree nodes:
> ** If the *BinaryTuple* in the tree node is completely written, then compare 
> them without loading the *BinaryTuple* by link (since we will not write it to 
> the *FreeList*);
> ** Otherwise, compare *BinaryTuple*'s by loading them by link (we need to do 
> this optimally right away).
> The configuration will most likely be done later, since the user experience 
> for this is not yet clear.
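
As an illustration of the automatic inline-size estimate described above (header + null 
map + offset table with a 2-byte size class, roughly 10 bytes per variable-length 
column, capped at 2 KB), a hedged sketch follows; the constant values and names are 
assumptions for the example, not the final implementation:

{code:java}
/** Illustrative sketch: estimate the BinaryTuple inline size for an index. */
final class InlineSizeEstimator {
    private static final int MAX_INLINE_SIZE = 2048;      // 2 KB cap, as in 2.0's PageIO#MAX_PAYLOAD_SIZE
    private static final int HEADER_SIZE = 1;             // assumed BinaryTuple header size
    private static final int OFFSET_ENTRY_SIZE = 2;       // 2-byte size class per offset table entry
    private static final int VARLEN_COLUMN_ESTIMATE = 10; // assumed budget per variable-length column

    private InlineSizeEstimator() {
        // No instances.
    }

    /**
     * @param fixedSizes Sizes of the fixed-length indexed columns, in bytes.
     * @param varlenCnt  Number of variable-length indexed columns.
     */
    static int estimate(int[] fixedSizes, int varlenCnt) {
        int cols = fixedSizes.length + varlenCnt;

        int size = HEADER_SIZE
            + (cols + 7) / 8                // null map: one bit per column, rounded up
            + cols * OFFSET_ENTRY_SIZE      // offset table
            + varlenCnt * VARLEN_COLUMN_ESTIMATE;

        for (int fixedSize : fixedSizes)
            size += fixedSize;

        return Math.min(size, MAX_INLINE_SIZE);
    }
}
{code}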



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-17265) Support RAFT snapshot streaming for PageMemory storage

2022-10-14 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy resolved IGNITE-17265.

Resolution: Done

Fixed in IGNITE-17254

> Support RAFT snapshot streaming for PageMemory storage
> --
>
> Key: IGNITE-17265
> URL: https://issues.apache.org/jira/browse/IGNITE-17265
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta1
>
>
> IGNITE-17254 API needs to be implemented for PageMemory storage



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-17265) Support RAFT snapshot streaming for PageMemory storage

2022-10-14 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy reassigned IGNITE-17265:
--

Assignee: Roman Puchkovskiy

> Support RAFT snapshot streaming for PageMemory storage
> --
>
> Key: IGNITE-17265
> URL: https://issues.apache.org/jira/browse/IGNITE-17265
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta1
>
>
> IGNITE-17254 API needs to be implemented for PageMemory storage



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17265) Support RAFT snapshot streaming for PageMemory storage

2022-10-14 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-17265:
---
Description: IGNITE-17254 API needs to be implemented for PageMemory 
storage  (was: IGNITE-17254 API needs to be implemented for RocksDB storage)

> Support RAFT snapshot streaming for PageMemory storage
> --
>
> Key: IGNITE-17265
> URL: https://issues.apache.org/jira/browse/IGNITE-17265
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta1
>
>
> IGNITE-17254 API needs to be implemented for PageMemory storage



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-17902) Ignite 3. SQL. Dynamic parameter type can't be inferred for the most of built-in SQL functions

2022-10-14 Thread Yury Gerzhedovich (Jira)
Yury Gerzhedovich created IGNITE-17902:
--

 Summary: Ignite 3. SQL. Dynamic parameter type can't be inferred 
for the most of built-in SQL functions
 Key: IGNITE-17902
 URL: https://issues.apache.org/jira/browse/IGNITE-17902
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Yury Gerzhedovich


Queries like:
{code:java}
SELECT LOWER(?)
{code}
Fail with:
{noformat}
Caused by: org.apache.calcite.runtime.CalciteContextException: At line 1, 
column 14: Illegal use of dynamic parameter
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at 
org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:505)
    at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:932)
    at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:917)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:5266)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.inferUnknownTypes(SqlValidatorImpl.java:1975)
    at 
org.apache.ignite.internal.processors.query.calcite.prepare.IgniteSqlValidator.inferUnknownTypes(IgniteSqlValidator.java:534)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.inferUnknownTypes(SqlValidatorImpl.java:2057)
    at 
org.apache.ignite.internal.processors.query.calcite.prepare.IgniteSqlValidator.inferUnknownTypes(IgniteSqlValidator.java:534)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.expandSelectItem(SqlValidatorImpl.java:461)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelectList(SqlValidatorImpl.java:4409)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3652){noformat}
We can try to infer types via the type checker for SQL functions with an 
empty {{operandTypeInference}}.
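
For illustration only, a sketch of that fallback direction using Calcite's 
{{SqlOperandTypeInference}} hook is shown below; the class name and the VARCHAR default 
are assumptions for the example, and a real fix would derive the type from the 
function's operand type checker instead:

{code:java}
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.sql.SqlCallBinding;
import org.apache.calcite.sql.type.SqlOperandTypeInference;
import org.apache.calcite.sql.type.SqlTypeName;

/** Illustrative sketch: give unresolved dynamic parameters a default operand type. */
public class FallbackOperandTypeInference implements SqlOperandTypeInference {
    @Override
    public void inferOperandTypes(SqlCallBinding callBinding, RelDataType returnType,
        RelDataType[] operandTypes) {
        RelDataType fallback = callBinding.getTypeFactory().createSqlType(SqlTypeName.VARCHAR);

        for (int i = 0; i < operandTypes.length; i++)
            operandTypes[i] = fallback; // naive default for the sketch
    }
}
{code}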



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-17816) Sort out and merge Calcite tickets to Ignite 3.0 (step 7)

2022-10-14 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich reassigned IGNITE-17816:
--

Assignee: Yury Gerzhedovich

>  Sort out and merge Calcite tickets to Ignite 3.0 (step 7)
> --
>
> Key: IGNITE-17816
> URL: https://issues.apache.org/jira/browse/IGNITE-17816
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Yury Gerzhedovich
>Assignee: Yury Gerzhedovich
>Priority: Major
>  Labels: calcite, ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Let's merge the following tickets to ignite 3.0:
> https://issues.apache.org/jira/browse/IGNITE-16443
> https://issues.apache.org/jira/browse/IGNITE-16151
> https://issues.apache.org/jira/browse/IGNITE-16701
> https://issues.apache.org/jira/browse/IGNITE-16693
> https://issues.apache.org/jira/browse/IGNITE-15425
> https://issues.apache.org/jira/browse/IGNITE-16053
> After the merge, the *calcite3-required* label needs to be removed.
> Tickets that can be simply merged - merge immediately. For hard cases, let's 
> create separate tickets with an estimation and link them to IGNITE-15658 or 
> to the blocker ticket.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17900) Very slow SQL execution with LEFT JOIN and subquery after upgrade from 2.7.6

2022-10-14 Thread Dren (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dren updated IGNITE-17900:
--
Description: 
After migration from Ignite 2.7.6 to Ignite 2.13 I noticed that the query below 
executes very slowly. A big difference in execution time can already be seen in 
tables with several thousand records. If a table has more than 100,000 
records, the query never finishes.
{code:java}
// SQL
select T0.* , T1.HIDE
from TABLE1 as T0
left JOIN
( select key1, key2, count(*) AS HIDE  
from TABLE1
GROUP BY key1, key2
) as T1
ON T0.key1 = T1.key1 AND T0.key2 = T1.key2; {code}
 

– Ignite v2.13.0 and v2.14.0 
– execution time  8 seconds with 2100 records
– execution time 22 seconds with 4400 records

 

– Ignite v 2.7.6 
– execution time  3ms with 2100 records
– execution time  4ms with 4400 records

 

All DDL and test data can be found in attachment.

I tried adding indexes to the key1 and key2 columns, but the result is always 
the same.

  was:
After migration from Ignite 2.7.6 to Ignite 2.13 I noticed that the query below 
executes very slowly. A big difference in execution time can be seen already in 
tables with several thousands of records. If table have more than 100,000 
records, the query will never finish.

select T0.* , T1.HIDE
from TABLE1 as T0
left JOIN
( select key1, key2, count(*) AS HIDE  
    from TABLE1
    GROUP BY key1, key2
) as T1
ON T0.key1 = T1.key1 AND T0.key2 = T1.key2;

 

– Ignite v2.13.0 and v2.14.0 
– execution time  8 seconds with 2100 records
– execution time 22 seconds with 4400 records

 

– Ignite v 2.7.6 
– execution time  3ms with 2100 records
– execution time  4ms with 4400 records

 

All DDL and test data can be found in attachment.

I tried adding indexes to the key1 and key2 columns, but the result is always 
the same.


> Very slow SQL execution with LEFT JOIN and subquery after upgrade from 2.7.6
> 
>
> Key: IGNITE-17900
> URL: https://issues.apache.org/jira/browse/IGNITE-17900
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.13, 2.14
> Environment: One node test installation, 6 vCPU 8GB RAM.
> Ignite 2.14.0
> jdk-13.0.2
>  
>  
>Reporter: Dren
>Priority: Major
> Attachments: CREATE_TABLE1.sql, explain_plan.txt, ignite_log.txt, 
> insert_data.sql
>
>
> After migration from Ignite 2.7.6 to Ignite 2.13 I noticed that the query 
> below executes very slowly. A big difference in execution time can already be 
> seen in tables with several thousand records. If a table has more than 
> 100,000 records, the query never finishes.
> {code:java}
> // SQL
> select T0.* , T1.HIDE
> from TABLE1 as T0
> left JOIN
> ( select key1, key2, count(*) AS HIDE  
> from TABLE1
> GROUP BY key1, key2
> ) as T1
> ON T0.key1 = T1.key1 AND T0.key2 = T1.key2; {code}
>  
> – Ignite v2.13.0 and v2.14.0 
> – execution time  8 seconds with 2100 records
> – execution time 22 seconds with 4400 records
>  
> – Ignite v 2.7.6 
> – execution time  3ms with 2100 records
> – execution time  4ms with 4400 records
>  
> All DDL and test data can be found in attachment.
> I tried adding indexes to the key1 and key2 columns, but the result is always 
> the same.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17900) Very slow SQL execution with LEFT JOIN and subquery after upgrade from 2.7.6

2022-10-14 Thread Dren (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dren updated IGNITE-17900:
--
Description: 
After migration from Ignite 2.7.6 to Ignite 2.13 I noticed that the query below 
executes very slowly. A big difference in execution time can already be seen in 
tables with several thousand records. If a table has more than 100,000 
records, the query never finishes.

select T0.* , T1.HIDE
from TABLE1 as T0
left JOIN
( select key1, key2, count(*) AS HIDE  
    from TABLE1
    GROUP BY key1, key2
) as T1
ON T0.key1 = T1.key1 AND T0.key2 = T1.key2;

 

– Ignite v2.13.0 and v2.14.0 
– execution time  8 seconds with 2100 records
– execution time 22 seconds with 4400 records

 

– Ignite v 2.7.6 
– execution time  3ms with 2100 records
– execution time  4ms with 4400 records

 

All DDL and test data can be found in attachment.

I tried adding indexes to the key1 and key2 columns, but the result is always 
the same.

  was:
After migration form Ignite 2.7.6 to Ignite 2.13 I noticed that the query below 
executes very slowly. A big difference in execution time can be seen already in 
tables with several thousands of records. If table have more than 100,000 
records, the query will never finish.

select T0.* , T1.HIDE
from TABLE1 as T0
left JOIN
( select key1, key2, count(*) AS HIDE  
    from TABLE1
    GROUP BY key1, key2
) as T1
ON T0.key1 = T1.key1 AND T0.key2 = T1.key2;

 

-- Ignite v2.13.0 and v2.14.0 
-- execution time  8 seconds with 2100 records
-- execution time 22 seconds with 4400 records

 

-- Ignite v 2.7.6 
-- execution time  3ms with 2100 records
-- execution time  4ms seconds with 4400 records

 

All DDL and test data can be found in attachment.

I tried adding indexes to the key1 and key2 columns, but the result is always 
the same.


> Very slow SQL execution with LEFT JOIN and subquery after upgrade from 2.7.6
> 
>
> Key: IGNITE-17900
> URL: https://issues.apache.org/jira/browse/IGNITE-17900
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.13, 2.14
> Environment: One node test installation, 6 vCPU 8GB RAM.
> Ignite 2.14.0
> jdk-13.0.2
>  
>  
>Reporter: Dren
>Priority: Major
> Attachments: CREATE_TABLE1.sql, explain_plan.txt, ignite_log.txt, 
> insert_data.sql
>
>
> After migration from Ignite 2.7.6 to Ignite 2.13 I noticed that the query 
> below executes very slowly. A big difference in execution time can already be 
> seen in tables with several thousand records. If a table has more than 
> 100,000 records, the query never finishes.
> select T0.* , T1.HIDE
> from TABLE1 as T0
> left JOIN
> ( select key1, key2, count(*) AS HIDE  
>     from TABLE1
>     GROUP BY key1, key2
> ) as T1
> ON T0.key1 = T1.key1 AND T0.key2 = T1.key2;
>  
> – Ignite v2.13.0 and v2.14.0 
> – execution time  8 seconds with 2100 records
> – execution time 22 seconds with 4400 records
>  
> – Ignite v 2.7.6 
> – execution time  3ms with 2100 records
> – execution time  4ms with 4400 records
>  
> All DDL and test data can be found in attachment.
> I tried adding indexes to the key1 and key2 columns, but the result is always 
> the same.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-17857) The native client does not need to verify the configuration consistency of the DeploymentSpi

2022-10-14 Thread YuJue Li (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17617506#comment-17617506
 ] 

YuJue Li commented on IGNITE-17857:
---

[~sergeychugunov] thanks for your review, please check again.

> The native client does not need to verify the configuration consistency of 
> the DeploymentSpi
> 
>
> Key: IGNITE-17857
> URL: https://issues.apache.org/jira/browse/IGNITE-17857
> Project: Ignite
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.13
>Reporter: YuJue Li
>Assignee: YuJue Li
>Priority: Minor
> Fix For: 2.15
>
> Attachments: IgniteClient.java, e.log, example-deploy.xml
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Start a server node with example-deploy.xml via ignite.sh.
> Start a native client with IgniteClient.java.
> An error then occurs.
> Normally, the client node does not need to verify the configuration 
> consistency of the DeploymentSpi.
> Whether the client needs to configure DeploymentSpi should be decided by the 
> user.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-17781) Create DEB\RPM distributions for CLI

2022-10-14 Thread Aleksandr (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr reassigned IGNITE-17781:
--

Assignee: Aleksandr  (was: Mikhail Pochatkin)

> Create DEB\RPM distributions for CLI
> 
>
> Key: IGNITE-17781
> URL: https://issues.apache.org/jira/browse/IGNITE-17781
> Project: Ignite
>  Issue Type: New Feature
>  Components: build, cli
>Reporter: Mikhail Pochatkin
>Assignee: Aleksandr
>Priority: Major
>  Labels: ignite-3
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)