[jira] [Commented] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-09-01 Thread Raymond Wilson (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761417#comment-17761417
 ] 

Raymond Wilson commented on IGNITE-20299:
-

Thanks for confirming you can reproduce it.

In an earlier experiment I tried deleting the cache folder on my local system, 
and this did fix it (locally). Performing the same operation in our dev 
environment, which had data in it, did not recover the grid.

I will see if I can enhance the reproducer further.


> Creating a cache with an unknown data region name causes total unrecoverable 
> failure of the grid
> 
>
> Key: IGNITE-20299
> URL: https://issues.apache.org/jira/browse/IGNITE-20299
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.15
> Environment: Observed in:
> C# client and grid running on Linux in a container
> C# client and grid running on Windows
>  
>Reporter: Raymond Wilson
>Priority: Major
>
> Using the Ignite C# client.
>  
> Given a running grid, if a client (and perhaps a server) node in the grid 
> attempts to create a cache using a DataRegionName that does not exist in the 
> grid, the client node fails immediately with the following log output (a 
> minimal reproducer sketch follows the quoted log below).
>  
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Completed 
> partition exchange [localNode=15122bd7-bf81-44e6-a548-e70dbd9334c0, 
> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
> [topVer=15, minorTopVer=0], evt=NODE_FAILED, evtNode=TcpDiscoveryNode 
> [id=9d5ed68d-38bb-447d-aed5-189f52660716, 
> consistentId=9d5ed68d-38bb-447d-aed5-189f52660716, addrs=ArrayList 
> [127.0.0.1], sockAddrs=null, discPort=0, order=8, intOrder=8, 
> lastExchangeTime=1693112858024, loc=false, ver=2.15.0#20230425-sha1:f98f7f35, 
> isClient=true], rebalanced=false, done=true, newCrdFut=null], 
> topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Exchange timings 
> [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting in 
> exchange queue" (14850 ms), stage="Exchange parameters initialization" (2 
> ms), stage="Determine exchange type" (3 ms), stage="Exchange done" (4 ms), 
> stage="Total time" (14859 ms)]
> 2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange longest 
> local stages [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
> 2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished exchange 
> init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], crd=false]
> 2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]   
> AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED, 
> evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
> Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class 
> org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.
>  ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete exchange 
> process.
>  ---> Apache.Ignite.Core.Common.JavaException: javax.cache.CacheException: 
> class org.apache.ignite.IgniteCheckedException: Failed to complete exchange 
> process.
>         at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
>         at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
>         at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
>         at 
> org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutObject(PlatformProcessorImpl.java:643)
>         at 
> org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to complete 
> exchange process.
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3709)
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendExchangeFailureMessage(GridDhtPartitionsExchangeFuture.java:3737)
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:3832)
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:3813)
>         at 
> 
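For reference, a minimal Java reproducer sketch of the scenario above. The 
class, cache and region names are illustrative assumptions, not taken from the 
report; the C# client reaches the same getOrCreateCache path via 
PlatformProcessorImpl, as the stack trace shows.

{code:java}
// Hypothetical minimal reproducer (Java analogue of the C# scenario above).
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class UnknownDataRegionRepro {
    public static void main(String[] args) {
        // No data region named "missing-region" is configured anywhere.
        IgniteConfiguration cfg = new IgniteConfiguration();

        try (Ignite ignite = Ignition.start(cfg)) {
            CacheConfiguration<Integer, String> cacheCfg =
                new CacheConfiguration<>("test-cache");
            // "missing-region" is an illustrative placeholder name.
            cacheCfg.setDataRegionName("missing-region");

            // Expected: a CacheException that fails only this call.
            // Observed (per this issue): the partition map exchange fails
            // and the grid does not recover.
            ignite.getOrCreateCache(cacheCfg);
        }
    }
}
{code}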

[jira] [Commented] (IGNITE-20262) Refuse accepting partition Raft commands when not enough schemas are available

2023-09-01 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761378#comment-17761378
 ] 

Roman Puchkovskiy commented on IGNITE-20262:


Thanks!

> Refuse accepting partition Raft commands when not enough schemas are available
> --
>
> Key: IGNITE-20262
> URL: https://issues.apache.org/jira/browse/IGNITE-20262
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: iep-98, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is a leftover from IGNITE-19227. Analogously to processing incoming 
> AppendEntriesRequests, we should not accept Raft commands when we don't have 
> enough metadata. This could happen if a Primary replica turns out not to be 
> colocated with a partition Raft group leader and the leader lags behind the 
> MetaStorage.
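A sketch of the kind of guard this implies, with entirely hypothetical names 
(SchemaSyncService, PartitionCommand and PartitionCommandGate are placeholders, 
not actual Ignite 3 interfaces):

{code:java}
import java.util.concurrent.CompletableFuture;

/** Completes when schemas up to the given timestamp are locally available. */
interface SchemaSyncService {
    CompletableFuture<Void> waitForMetadataCompleteness(long timestamp);
}

/** A partition Raft command carrying the metadata timestamp it requires. */
interface PartitionCommand {
    long requiredMetadataTimestamp();
}

/** Gates command application on schema availability instead of applying eagerly. */
class PartitionCommandGate {
    private final SchemaSyncService schemaSync;

    PartitionCommandGate(SchemaSyncService schemaSync) {
        this.schemaSync = schemaSync;
    }

    CompletableFuture<Void> accept(PartitionCommand cmd, Runnable apply) {
        // Defer the command until the required schemas are available,
        // e.g. when this node lags behind the MetaStorage.
        return schemaSync.waitForMetadataCompleteness(cmd.requiredMetadataTimestamp())
                .thenRun(apply);
    }
}
{code}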



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20317) Meta storage invokes are not completed when events are handled in DZM

2023-09-01 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-20317:
---
Description: 
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager during a zone's 
lifecycle. The futures of these invokes are ignored, so when a lifecycle method 
completes, not all of its actions have actually completed. Therefore several 
invokes, for example on createZone and alterZone, can be reordered. Currently 
the meta storage invokes are done in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# DistributionZoneRebalanceEngine#onUpdateReplicas to update assignments on a 
replicas update.
# LogicalTopologyEventListener to update logical topology.
# DistributionZoneRebalanceEngine#createDistributionZonesDataNodesListener 
watch listener to update pending assignments.

h3. *Definition of Done*
Need to ensure event handling linearization.

h3. *Implementation Notes*
# ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
DistributionZoneManager#onUpdateFilter and 
DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
listeners. So we can just return the ms invoke future from these methods, which 
ensures that the invoke completes within the current event handling (see the 
sketch after this list).
# We cannot return a future from LogicalTopologyEventListener's methods. So we 
can chain their ms invoke futures in DZM, or we can add tasks with the ms 
invoke to an executor.
# Need to return futures from the WatchListener#onUpdate method of the data 
nodes listener.
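A minimal sketch of implementation note 1, under the assumption that the 
configuration framework awaits a future returned from a listener; MetaStorage 
and the helper names here are placeholders, not the actual Ignite 3 API:

{code:java}
import java.util.concurrent.CompletableFuture;

/** Placeholder for the meta storage facade; not the actual Ignite 3 API. */
interface MetaStorage {
    CompletableFuture<Boolean> invoke(Runnable updateOnSuccess);
}

class ZonesConfigurationListenerSketch {
    private final MetaStorage ms;

    ZonesConfigurationListenerSketch(MetaStorage ms) {
        this.ms = ms;
    }

    /**
     * Returning the invoke future (instead of dropping it) makes the
     * configuration framework wait for it, so a later alterZone cannot be
     * processed before this createZone's invoke is applied.
     */
    CompletableFuture<?> onCreate(String zoneName) {
        return ms.invoke(() -> initZoneKeys(zoneName));
    }

    private void initZoneKeys(String zoneName) {
        // Placeholder for writing the zone's data nodes / trigger keys.
    }
}
{code}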

  was:
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager during a zone's 
lifecycle. The futures of these invokes are ignored, so when a lifecycle method 
completes, not all of its actions have actually completed. Therefore several 
invokes, for example on createZone and alterZone, can be reordered. Currently 
the meta storage invokes are done in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# DistributionZoneRebalanceEngine#onUpdateReplicas to update assignments on a 
replicas update.
# LogicalTopologyEventListener to update logical topology.

h3. *Definition of Done*
Need to ensure event handling linearization.

h3. *Implementation Notes*
# ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
DistributionZoneManager#onUpdateFilter and 
DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
listeners. So we can just return the ms invoke future from these methods, which 
ensures that the invoke completes within the current event handling.
# We cannot return a future from LogicalTopologyEventListener's methods. So we 
can chain their ms invoke futures in DZM, or we can add tasks with the ms 
invoke to an executor.


> Meta storage invokes are not completed when events are handled in DZM 
> --
>
> Key: IGNITE-20317
> URL: https://issues.apache.org/jira/browse/IGNITE-20317
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager during a zone's 
> lifecycle. The futures of these invokes are ignored, so when a lifecycle 
> method completes, not all of its actions have actually completed. Therefore 
> several invokes, for example on createZone and alterZone, can be reordered. 
> Currently the meta storage invokes are done in:
> # ZonesConfigurationListener#onCreate to init a zone.
> # ZonesConfigurationListener#onDelete to clean up the zone data.
> # DistributionZoneManager#onUpdateFilter to save data nodes in the meta 
> storage.
> # DistributionZoneRebalanceEngine#onUpdateReplicas to update assignments on a 
> replicas update.
> # LogicalTopologyEventListener to update logical topology.
> # DistributionZoneRebalanceEngine#createDistributionZonesDataNodesListener 
> watch listener to update pending assignments.
> h3. *Definition of Done*
> Need to ensure event handling linearization.
> h3. *Implementation Notes*
> # ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
> DistributionZoneManager#onUpdateFilter and 
> DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
> listeners. So we can just return the ms invoke future from these methods, 
> which ensures that the invoke completes within the current event handling.
> # We cannot return a future from LogicalTopologyEventListener's methods. So we 
> can chain their ms invokes 

[jira] [Updated] (IGNITE-20317) Meta storage invokes are not completed when events are handled in DZM

2023-09-01 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-20317:
---
Description: 
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager during a zone's 
lifecycle. The futures of these invokes are ignored, so when a lifecycle method 
completes, not all of its actions have actually completed. Therefore several 
invokes, for example on createZone and alterZone, can be reordered. Currently 
the meta storage invokes are done in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# DistributionZoneRebalanceEngine#onUpdateReplicas to update assignments on a 
replicas update.
# LogicalTopologyEventListener to update logical topology.

h3. *Definition of Done*
Need to ensure event handling linearization.

h3. *Implementation Notes*
# ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
DistributionZoneManager#onUpdateFilter and 
DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
listeners. So we can just return the ms invoke future from these methods, which 
ensures that the invoke completes within the current event handling.
# We cannot return a future from LogicalTopologyEventListener's methods. So we 
can chain their ms invoke futures in DZM, or we can add tasks with the ms 
invoke to an executor.

  was:
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager during a zone's 
lifecycle. The futures of these invokes are ignored, so when a lifecycle method 
completes, not all of its actions have actually completed. Therefore several 
invokes, for example on createZone and alterZone, can be reordered. Currently 
the meta storage invokes are done in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# LogicalTopologyEventListener to update logical topology.

h3. *Definition of Done*
Need to ensure event handling linearization.

h3. *Implementation Notes*
# ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete and 
DistributionZoneManager#onUpdateFilter are invoked in configuration listeners. 
So we can just return the ms invoke future from these methods, which ensures 
that the invoke completes within the current event handling.
# We cannot return a future from LogicalTopologyEventListener's methods. So we 
can chain their ms invoke futures in DZM, or we can add tasks with the ms 
invoke to an executor.


> Meta storage invokes are not completed when events are handled in DZM 
> --
>
> Key: IGNITE-20317
> URL: https://issues.apache.org/jira/browse/IGNITE-20317
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager during a zone's 
> lifecycle. The futures of these invokes are ignored, so when a lifecycle 
> method completes, not all of its actions have actually completed. Therefore 
> several invokes, for example on createZone and alterZone, can be reordered. 
> Currently the meta storage invokes are done in:
> # ZonesConfigurationListener#onCreate to init a zone.
> # ZonesConfigurationListener#onDelete to clean up the zone data.
> # DistributionZoneManager#onUpdateFilter to save data nodes in the meta 
> storage.
> # DistributionZoneRebalanceEngine#onUpdateReplicas to update assignments on a 
> replicas update.
> # LogicalTopologyEventListener to update logical topology.
> h3. *Definition of Done*
> Need to ensure event handling linearization.
> h3. *Implementation Notes*
> # ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete, 
> DistributionZoneManager#onUpdateFilter and 
> DistributionZoneRebalanceEngine#onUpdateReplicas are invoked in configuration 
> listeners. So we can just return the ms invoke future from these methods, 
> which ensures that the invoke completes within the current event handling.
> # We cannot return a future from LogicalTopologyEventListener's methods. So we 
> can chain their ms invoke futures in DZM, or we can add tasks with the ms 
> invoke to an executor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20317) Meta storage invokes are not completed when events are handled in DZM

2023-09-01 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-20317:
---
Description: 
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager during a zone's 
lifecycle. The futures of these invokes are ignored, so when a lifecycle method 
completes, not all of its actions have actually completed. Therefore several 
invokes, for example on createZone and alterZone, can be reordered. Currently 
the meta storage invokes are done in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# LogicalTopologyEventListener to update logical topology.

h3. *Definition of Done*
Need to ensure event handling linearization.

h3. *Implementation Notes*
# ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete and 
DistributionZoneManager#onUpdateFilter are invoked in configuration listeners. 
So we can just return the ms invoke future from these methods, which ensures 
that the invoke completes within the current event handling.
# We cannot return a future from LogicalTopologyEventListener's methods. So we 
can chain their ms invoke futures in DZM, or we can add tasks with the ms 
invoke to an executor.

  was:
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager during a zone's 
lifecycle. The futures of these invokes are ignored, so when a lifecycle method 
completes, not all of its actions have actually completed. Therefore several 
invokes, for example on createZone and alterZone, can be reordered. Currently 
the meta storage invokes are done in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# LogicalTopologyEventListener to update logical topology.

h3. *Definition of Done*
Need to return meta storage futures from event handlers to ensure event 
linearization.

h3. *Implementation Notes*
# ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete and 
DistributionZoneManager#onUpdateFilter are invoked in configuration listeners. 
So we can just return the ms invoke future from these methods, which ensures 
that the invoke completes within the current event handling.
# We cannot return a future from LogicalTopologyEventListener's methods. So we 
can chain their ms invoke futures in DZM, or we can add tasks with the ms 
invoke to an executor.


> Meta storage invokes are not completed when events are handled in DZM 
> --
>
> Key: IGNITE-20317
> URL: https://issues.apache.org/jira/browse/IGNITE-20317
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager during a zone's 
> lifecycle. The futures of these invokes are ignored, so when a lifecycle 
> method completes, not all of its actions have actually completed. Therefore 
> several invokes, for example on createZone and alterZone, can be reordered. 
> Currently the meta storage invokes are done in:
> # ZonesConfigurationListener#onCreate to init a zone.
> # ZonesConfigurationListener#onDelete to clean up the zone data.
> # DistributionZoneManager#onUpdateFilter to save data nodes in the meta 
> storage.
> # LogicalTopologyEventListener to update logical topology.
> h3. *Definition of Done*
> Need to ensure event handling linearization.
> h3. *Implementation Notes*
> # ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete 
> and DistributionZoneManager#onUpdateFilter are invoked in configuration 
> listeners. So we can just return the ms invoke future from these methods, 
> which ensures that the invoke completes within the current event handling.
> # We cannot return a future from LogicalTopologyEventListener's methods. So we 
> can chain their ms invoke futures in DZM, or we can add tasks with the ms 
> invoke to an executor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20310) Meta storage invokes are not completed when DZM start is completed

2023-09-01 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20310:
-
Description: 
h3. *Motivation*

There are meta storage invokes during DistributionZoneManager start. Currently 
the meta storage invokes are done in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case a filter update was handled 
before DZM stop but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage haven't been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), the recalculation won't 
happen, because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().join()}}.

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component start flow so that a node 
is not ready to work until all {{IgniteComponent#start}} futures are completed. 
For example, we can chain our futures in 
{{IgniteImpl#recoverComponentsStateOnStart}}, so components' futures are 
completed before {{metaStorageMgr.deployWatches()}}. In 
{{DistributionZoneManager#start}} we can return {{CompletableFuture.allOf}} of 
the futures that need to be completed there (a sketch follows below).
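A sketch of the harder option under these assumptions; the IgniteComponent 
interface is reduced to a single method here, and the private helpers are 
placeholders for the two invokes listed in the motivation:

{code:java}
import java.util.concurrent.CompletableFuture;

/** Simplified stand-in for the component interface discussed above. */
interface IgniteComponent {
    CompletableFuture<Void> start();
}

class DistributionZoneManagerSketch implements IgniteComponent {
    @Override
    public CompletableFuture<Void> start() {
        // Placeholders for the two meta storage invokes named above.
        CompletableFuture<Void> defaultZoneInit = initDefaultZoneKeys();
        CompletableFuture<Void> timersRestore = restoreTimers();

        // The node start flow would wait on this instead of dropping the
        // invoke futures, so the node is ready only once both are applied.
        return CompletableFuture.allOf(defaultZoneInit, timersRestore);
    }

    private CompletableFuture<Void> initDefaultZoneKeys() {
        return CompletableFuture.completedFuture(null); // stands in for ms.invoke(...)
    }

    private CompletableFuture<Void> restoreTimers() {
        return CompletableFuture.completedFuture(null); // stands in for ms.invoke(...)
    }
}
{code}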



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.


  was:
h3. *Motivation*

There are meta storage invokes during DistributionZoneManager start. Currently 
the meta storage invokes are done in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case a filter update was handled 
before DZM stop but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage haven't been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), the recalculation won't 
happen, because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().join()}}.

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component start flow so that a node 
is not ready to work until all {{IgniteComponent#start}} futures are completed. 
For example, we can chain our futures in 
{{IgniteImpl#recoverComponentsStateOnStart}}, so components' futures are 
completed before {{metaStorageMgr.deployWatches()}}. In 
{{DistributionZoneManager#start}} we can return {{CompletableFuture.allOf}} of 
the futures that need to be completed there.



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.



> Meta storage invokes are not completed  when DZM start is completed
> ---
>
> Key: IGNITE-20310
> URL: https://issues.apache.org/jira/browse/IGNITE-20310
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes during DistributionZoneManager start. Currently 
> the meta storage invokes are done in 
> DistributionZoneManager#createOrRestoreZoneState:
> # DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init 
> the default zone.
> # DistributionZoneManager#restoreTimers in case a filter update was handled 
> before DZM stop but didn't update data nodes.
> The futures of these invokes are ignored, so when the start method completes, 
> not all start actions are 

[jira] [Updated] (IGNITE-20310) Meta storage invokes are not completed when DZM start is completed

2023-09-01 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20310:
-
Description: 
h3. *Motivation*

There are meta storage invokes during DistributionZoneManager start. Currently 
the meta storage invokes are done in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case a filter update was handled 
before DZM stop but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage haven't been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), the recalculation won't 
happen, because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().join()}}.

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component start flow so that a node 
is not ready to work until all {{IgniteComponent#start}} futures are completed. 
For example, we can chain our futures in 
{{IgniteImpl#recoverComponentsStateOnStart}}, so components' futures are 
completed before {{metaStorageMgr.deployWatches()}}. In 
{{DistributionZoneManager#start}} we can return {{CompletableFuture.allOf}} of 
the futures that need to be completed there.



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.


  was:
h3. *Motivation*

There are meta storage invokes during DistributionZoneManager start. Currently 
the meta storage invokes are done in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case a filter update was handled 
before DZM stop but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage haven't been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), the recalculation won't 
happen, because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().join()}}.

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component start flow so that a node 
is not ready to work until all {{IgniteComponent#start}} futures are completed. 
In {{DistributionZoneManager#start}} we can return {{CompletableFuture.allOf}} 
of the futures that need to be completed there.



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.



> Meta storage invokes are not completed  when DZM start is completed
> ---
>
> Key: IGNITE-20310
> URL: https://issues.apache.org/jira/browse/IGNITE-20310
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes during DistributionZoneManager start. Currently 
> the meta storage invokes are done in 
> DistributionZoneManager#createOrRestoreZoneState:
> # DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init 
> the default zone.
> # DistributionZoneManager#restoreTimers in case a filter update was handled 
> before DZM stop but didn't update data nodes.
> The futures of these invokes are ignored, so when the start method completes, 
> not all start actions have actually completed. This can lead to the following 
> situation: 
> * Initialisation of the default zone hangs for some reason, even after a 
> full restart of the cluster.
> * That 

[jira] [Updated] (IGNITE-20317) Meta storage invokes are not completed when events are handled in DZM

2023-09-01 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-20317:
---
Description: 
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager during a zone's 
lifecycle. The futures of these invokes are ignored, so when a lifecycle method 
completes, not all of its actions have actually completed. Therefore several 
invokes, for example on createZone and alterZone, can be reordered. Currently 
the meta storage invokes are done in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# LogicalTopologyEventListener to update logical topology.

h3. *Definition of Done*
Need to return meta storage futures from event handlers to ensure event 
linearization.

h3. *Implementation Notes*
# ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete and 
DistributionZoneManager#onUpdateFilter are invoked in configuration listeners. 
So we can just return the ms invoke future from these methods, which ensures 
that the invoke completes within the current event handling.
# We cannot return a future from LogicalTopologyEventListener's methods. So we 
can chain their ms invoke futures in DZM, or we can add tasks with the ms 
invoke to an executor.

  was:
h3. *Motivation*
There are meta storage invokes in DistributionZoneManager during a zone's 
lifecycle. The futures of these invokes are ignored, so when a lifecycle method 
completes, not all of its actions have actually completed. Therefore several 
invokes, for example on createZone and alterZone, can be reordered. Currently 
the meta storage invokes are done in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# LogicalTopologyEventListener to update logical topology.

h3. *Definition of Done*
Need to return meta storage futures from event handlers to ensure event 
linearization.

h3. *Implementation Notes*
ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete and 
DistributionZoneManager#onUpdateFilter are invoked in configuration listeners. 
So we can just return the ms invoke future from these methods, which ensures 
that the invoke completes within the current event handling.


> Meta storage invokes are not completed when events are handled in DZM 
> --
>
> Key: IGNITE-20317
> URL: https://issues.apache.org/jira/browse/IGNITE-20317
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager during a zone's 
> lifecycle. The futures of these invokes are ignored, so when a lifecycle 
> method completes, not all of its actions have actually completed. Therefore 
> several invokes, for example on createZone and alterZone, can be reordered. 
> Currently the meta storage invokes are done in:
> # ZonesConfigurationListener#onCreate to init a zone.
> # ZonesConfigurationListener#onDelete to clean up the zone data.
> # DistributionZoneManager#onUpdateFilter to save data nodes in the meta 
> storage.
> # LogicalTopologyEventListener to update logical topology.
> h3. *Definition of Done*
> Need to return meta storage futures from event handlers to ensure event 
> linearization.
> h3. *Implementation Notes*
> # ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete 
> and DistributionZoneManager#onUpdateFilter are invoked in configuration 
> listeners. So we can just return the ms invoke future from these methods, 
> which ensures that the invoke completes within the current event handling.
> # We cannot return a future from LogicalTopologyEventListener's methods. So we 
> can chain their ms invoke futures in DZM, or we can add tasks with the ms 
> invoke to an executor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20336) Sql. Remove conversion from java types to TypeSpec.

2023-09-01 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-20336:
--
Description: 
JavaTypes are no longer used by the execution engine, but they remain in 
execution node tests and TypeUtils::convertToTypeSpec.
- Update unit tests not to use TypeUtils::createRowType(TypeFactory, Class... 
fields).
- Remove conversion from JavaType from TypeUtils::convertToTypeSpec.
- Remove TypeUtils::createRowType(TypeFactory, Class... fields) as it is no 
longer needed.

  was:
JavaTypes are no longer used by the execution engine, but they remain in 
execution node tests and TypeUtils::convertToTypeSpec.
- Update unit tests not to use TypeUtils::createRowType(TypeFactory, Class... 
fields).
- Remove conversion from JavaType from TypeUtils::convertToTypeSpec.
- Remove TypeUtils::createRowType(TypeFactory, Class... fields) as it is no 
longer needed.


> Sql. Remove conversion from java types to TypeSpec.
> ---
>
> Key: IGNITE-20336
> URL: https://issues.apache.org/jira/browse/IGNITE-20336
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> JavaTypes are no longer used by the execution engine, but they remain in 
> execution node tests and TypeUtils::convertToTypeSpec.
> - Update unit tests not to use TypeUtils::createRowType(TypeFactory, Class... 
> fields).
> - Remove conversion from JavaType from TypeUtils::convertToTypeSpec.
> - Remove TypeUtils::createRowType(TypeFactory, Class... fields) as it is 
> no longer needed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20336) Sql. Remove conversion from java types to TypeSpec.

2023-09-01 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-20336:
-

 Summary: Sql. Remove conversion from java types to TypeSpec.
 Key: IGNITE-20336
 URL: https://issues.apache.org/jira/browse/IGNITE-20336
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Maksim Zhuravkov
 Fix For: 3.0.0-beta2


JavaTypes are no longer used by the execution engine, but they remain in 
execution node tests and TypeUtils::convertToTypeSpec.
- Update unit tests not to use TypeUtils::createRowType(TypeFactory, Class... 
fields).
- Remove conversion from JavaType from TypeUtils::convertToTypeSpec.
- Remove TypeUtils::createRowType(TypeFactory, Class... fields) as it is no 
longer needed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20310) Meta storage invokes are not completed when DZM start is completed

2023-09-01 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20310:
-
Description: 
h3. *Motivation*

There are meta storage invokes during DistributionZoneManager start. Currently 
the meta storage invokes are done in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case a filter update was handled 
before DZM stop but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage haven't been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), the recalculation won't 
happen, because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().join()}}.

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component start flow so that a node 
is not ready to work until all {{IgniteComponent#start}} futures are completed. 
In {{DistributionZoneManager#start}} we can return {{CompletableFuture.allOf}} 
of the futures that need to be completed there.



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.


  was:
h3. *Motivation*

There are meta storage invokes during DistributionZoneManager start. Currently 
the meta storage invokes are done in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case a filter update was handled 
before DZM stop but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage haven't been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), the recalculation won't 
happen, because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().join()}}.

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component start flow so that a node 
is not ready to work until all {{IgniteComponent#start}} futures are completed. 
In {{DistributionZoneManager#start}} we can return {{CompletableFuture.allOf}} 
of the futures that need to be completed there.



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.



> Meta storage invokes are not completed  when DZM start is completed
> ---
>
> Key: IGNITE-20310
> URL: https://issues.apache.org/jira/browse/IGNITE-20310
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes during DistributionZoneManager start. Currently 
> the meta storage invokes are done in 
> DistributionZoneManager#createOrRestoreZoneState:
> # DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init 
> the default zone.
> # DistributionZoneManager#restoreTimers in case a filter update was handled 
> before DZM stop but didn't update data nodes.
> The futures of these invokes are ignored, so when the start method completes, 
> not all start actions have actually completed. This can lead to the following 
> situation: 
> * Initialisation of the default zone hangs for some reason, even after a 
> full restart of the cluster.
> * That means the data-nodes-related keys in the meta storage haven't been 
> initialised.
> * For example, if a user adds a new node and the scale-up timer is immediate, 
> which 

[jira] [Updated] (IGNITE-20310) Meta storage invokes are not completed when DZM start is completed

2023-09-01 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20310:
-
Description: 
h3. *Motivation*

There are meta storage invokes during DistributionZoneManager start. Currently 
the meta storage invokes are done in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case a filter update was handled 
before DZM stop but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage haven't been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), the recalculation won't 
happen, because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().join()}}.

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component start flow so that a node 
is not ready to work until all {{IgniteComponent#start}} futures are completed. 
In {{DistributionZoneManager#start}} we can return {{CompletableFuture.allOf}} 
of the futures that need to be completed there.



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.


  was:
h3. *Motivation*

There are meta storage invokes during DistributionZoneManager start. Currently 
the meta storage invokes are done in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case a filter update was handled 
before DZM stop but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage haven't been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), the recalculation won't 
happen, because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().join()}}.

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component start flow so that a node 
is not ready to work until all {{IgniteComponent#start}} futures are completed. 



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.



> Meta storage invokes are not completed  when DZM start is completed
> ---
>
> Key: IGNITE-20310
> URL: https://issues.apache.org/jira/browse/IGNITE-20310
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes during DistributionZoneManager start. Currently 
> the meta storage invokes are done in 
> DistributionZoneManager#createOrRestoreZoneState:
> # DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init 
> the default zone.
> # DistributionZoneManager#restoreTimers in case a filter update was handled 
> before DZM stop but didn't update data nodes.
> The futures of these invokes are ignored, so when the start method completes, 
> not all start actions have actually completed. This can lead to the following 
> situation: 
> * Initialisation of the default zone hangs for some reason, even after a 
> full restart of the cluster.
> * That means the data-nodes-related keys in the meta storage haven't been 
> initialised.
> * For example, if a user adds a new node and the scale-up timer is immediate 
> (which leads to an immediate data nodes recalculation), the recalculation 
> won't happen, because the data nodes key has not been initialised. 
> h3. *Possible solutions*
> h4. 

[jira] [Updated] (IGNITE-20310) Meta storage invokes are not completed when DZM start is completed

2023-09-01 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20310:
-
Description: 
h3. *Motivation*

There are meta storage invokes during DistributionZoneManager start. Currently 
the meta storage invokes are done in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case a filter update was handled 
before DZM stop but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage haven't been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), the recalculation won't 
happen, because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().join()}}.

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component start flow so that a node 
is not ready to work until all {{IgniteComponent#start}} futures are completed. 



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.


  was:
h3. *Motivation*

There are meta storage invokes during DistributionZoneManager start. Currently 
the meta storage invokes are done in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case a filter update was handled 
before DZM stop but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage haven't been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), the recalculation won't 
happen, because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().join()}}.

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component start flow so that a node 
is not ready to work until all {{IgniteComponent#start}} futures are completed. 



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.



> Meta storage invokes are not completed  when DZM start is completed
> ---
>
> Key: IGNITE-20310
> URL: https://issues.apache.org/jira/browse/IGNITE-20310
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes during DistributionZoneManager start. Currently 
> the meta storage invokes are done in 
> DistributionZoneManager#createOrRestoreZoneState:
> # DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init 
> the default zone.
> # DistributionZoneManager#restoreTimers in case a filter update was handled 
> before DZM stop but didn't update data nodes.
> The futures of these invokes are ignored, so when the start method completes, 
> not all start actions have actually completed. This can lead to the following 
> situation: 
> * Initialisation of the default zone hangs for some reason, even after a 
> full restart of the cluster.
> * That means the data-nodes-related keys in the meta storage haven't been 
> initialised.
> * For example, if a user adds a new node and the scale-up timer is immediate, 
> which leads to an immediate data nodes recalculation, the recalculation won't 
> happen, because the data nodes key has not been initialised. 
> h3. *Possible solutions*
> h4. Easier
> We just need to wait for all async logic to complete within 
> {{DistributionZoneManager#start}} with {{ms.invoke().join()}}.
> h4. Harder
> We can 

[jira] [Updated] (IGNITE-20310) Meta storage invokes are not completed when DZM start is completed

2023-09-01 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20310:
-
Description: 
h3. *Motivation*

There are meta storage invokes during DistributionZoneManager start. Currently 
the meta storage invokes are done in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case a filter update was handled 
before DZM stop but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage haven't been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), the recalculation won't 
happen, because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().join()}}.

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component start flow so that a node 
is not ready to work until all {{IgniteComponent#start}} futures are completed. 



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.


  was:
h3. *Motivation*

There are meta storage invokes during DistributionZoneManager start. Currently 
the meta storage invokes are done in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case a filter update was handled 
before DZM stop but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage haven't been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), the recalculation won't 
happen, because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().join()}}.

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component start flow so that a node 
is not ready to work until all {{IgniteComponent#start}} futures are completed. 



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.



> Meta storage invokes are not completed  when DZM start is completed
> ---
>
> Key: IGNITE-20310
> URL: https://issues.apache.org/jira/browse/IGNITE-20310
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes during DistributionZoneManager start. Currently 
> the meta storage invokes are done in 
> DistributionZoneManager#createOrRestoreZoneState:
> # DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init 
> the default zone.
> # DistributionZoneManager#restoreTimers in case a filter update was handled 
> before DZM stop but didn't update data nodes.
> The futures of these invokes are ignored, so when the start method completes, 
> not all start actions have actually completed. This can lead to the following 
> situation: 
> * Initialisation of the default zone hangs for some reason, even after a 
> full restart of the cluster.
> * That means the data-nodes-related keys in the meta storage haven't been 
> initialised.
> * For example, if a user adds a new node and the scale-up timer is immediate, 
> which leads to an immediate data nodes recalculation, the recalculation won't 
> happen, because the data nodes key has not been initialised. 
> h3. *Possible solutions*
> h4. Easier
> We just need to wait for all async logic to complete within 
> {{DistributionZoneManager#start}} with {{ms.invoke().join()}}.
> h4. Harder
> We can enhance 

[jira] [Updated] (IGNITE-20310) Meta storage invokes are not completed when DZM start is completed

2023-09-01 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20310:
-
Description: 
h3. *Motivation*

There are meta storage invokes during DistributionZoneManager start. Currently 
the meta storage invokes are done in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case a filter update was handled 
before DZM stop but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage haven't been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), the recalculation won't 
happen, because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait, within `DistributionZoneManager#start`, for all async 
logic to complete, e.g. with ms.invoke().join().
h4. Harder
We can enhance `IgniteComponent#start` so that it returns a CompletableFuture, 
and then change the flow of starting components so that the node is not ready 
to work until all `IgniteComponent#start` futures are completed. 



h3. *Definition of done*

All asynchronous logic in `DistributionZoneManager#start` completes before the 
node is ready to work, in particular before it is ready to interact with zones.


  was:
h3. *Motivation*

There are meta storage invokes in the DistributionZoneManager start. Currently it 
does the meta storage invokes in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in the case when a filter update was 
handled before the DZM stopped but did not update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage have not been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which should trigger an immediate data nodes recalculation), the recalculation 
won't happen because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
`DistributionZoneManager#start`.

h4. Harder
We can enhance `IgniteComponent#start` so that it returns a CompletableFuture, 
and then change the flow of starting components so that the node is not ready 
to work until all `IgniteComponent#start` futures are completed. 



h3. *Definition of done*

All asynchronous logic in `DistributionZoneManager#start` completes before the 
node is ready to work, in particular before it is ready to interact with zones.



> Meta storage invokes are not completed  when DZM start is completed
> ---
>
> Key: IGNITE-20310
> URL: https://issues.apache.org/jira/browse/IGNITE-20310
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in the DistributionZoneManager start. Currently 
> it does the meta storage invokes in 
> DistributionZoneManager#createOrRestoreZoneState:
> # DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init 
> the default zone.
> # DistributionZoneManager#restoreTimers in the case when a filter update was 
> handled before the DZM stopped but did not update data nodes.
> The futures of these invokes are ignored, so when the start method completes, 
> not all start actions have actually completed. This can lead to the following 
> situation: 
> * Initialisation of the default zone hangs for some reason, even after a 
> full restart of the cluster.
> * That means the data-nodes-related keys in the meta storage have not been 
> initialised.
> * For example, if a user adds a new node and the scale-up timer is immediate 
> (which should trigger an immediate data nodes recalculation), the 
> recalculation won't happen because the data nodes key has not been initialised. 
> h3. *Possible solutions*
> h4. Easier
> We just need to wait, within `DistributionZoneManager#start`, for all async 
> logic to complete, e.g. with ms.invoke().join().
> h4. Harder
> We can enhance `IgniteComponent#start` so that it returns a CompletableFuture, 
> and then change the flow of starting components so that the node is not ready 
> to work until all `IgniteComponent#start` futures are completed.

[jira] [Updated] (IGNITE-20319) MultiActorPlacementDriverTest and PlacementDriverManagerTest incorrectly share hybridClock

2023-09-01 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20319:
-
Priority: Blocker  (was: Major)

> MultiActorPlacementDriverTest and PlacementDriverManagerTest incorrectly 
> share hybridClock
> --
>
> Key: IGNITE-20319
> URL: https://issues.apache.org/jira/browse/IGNITE-20319
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Blocker
>  Labels: ignite-3
>
> Within the aforementioned tests, placement drivers share the same clock 
> instance, whereas nodes have node-specific ones.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20319) MultiActorPlacementDriverTest and PlacementDriverManagerTest incorrectly share hybridClock

2023-09-01 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin reassigned IGNITE-20319:


Assignee: Alexander Lapin

> MultiActorPlacementDriverTest and PlacementDriverManagerTest incorrectly 
> share hybridClock
> --
>
> Key: IGNITE-20319
> URL: https://issues.apache.org/jira/browse/IGNITE-20319
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>
> Within the aforementioned tests, placement drivers share the same clock 
> instance, whereas nodes have node-specific ones.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20310) Meta storage invokes are not completed when DZM start is completed

2023-09-01 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20310:
-
Description: 
h3. *Motivation*

There are meta storage invokes in the DistributionZoneManager start. Currently it 
does the meta storage invokes in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in the case when a filter update was 
handled before the DZM stopped but did not update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage have not been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which should trigger an immediate data nodes recalculation), the recalculation 
won't happen because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
`DistributionZoneManager#start`.

h4. Harder
We can enhance `IgniteComponent#start` so that it returns a CompletableFuture, 
and then change the flow of starting components so that the node is not ready 
to work until all `IgniteComponent#start` futures are completed. 



h3. *Definition of done*

All asynchronous logic in `DistributionZoneManager#start` completes before the 
node is ready to work, in particular before it is ready to interact with zones.


  was:
h3. *Motivation*

There are meta storage invokes in the DistributionZoneManager start. Currently it 
does the meta storage invokes in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in the case when a filter update was 
handled before the DZM stopped but did not update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage have not been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which should trigger an immediate data nodes recalculation), the recalculation 
won't happen because the data nodes key has not been initialised. 



> Meta storage invokes are not completed  when DZM start is completed
> ---
>
> Key: IGNITE-20310
> URL: https://issues.apache.org/jira/browse/IGNITE-20310
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in the DistributionZoneManager start. Currently 
> it does the meta storage invokes in 
> DistributionZoneManager#createOrRestoreZoneState:
> # DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init 
> the default zone.
> # DistributionZoneManager#restoreTimers in the case when a filter update was 
> handled before the DZM stopped but did not update data nodes.
> The futures of these invokes are ignored, so when the start method completes, 
> not all start actions have actually completed. This can lead to the following 
> situation: 
> * Initialisation of the default zone hangs for some reason, even after a 
> full restart of the cluster.
> * That means the data-nodes-related keys in the meta storage have not been 
> initialised.
> * For example, if a user adds a new node and the scale-up timer is immediate 
> (which should trigger an immediate data nodes recalculation), the 
> recalculation won't happen because the data nodes key has not been initialised. 
> h3. *Possible solutions*
> h4. Easier
> We just need to wait for all async logic to complete within 
> `DistributionZoneManager#start`.
> h4. Harder
> We can enhance `IgniteComponent#start` so that it returns a CompletableFuture, 
> and then change the flow of starting components so that the node is not ready 
> to work until all `IgniteComponent#start` futures are completed. 
> h3. *Definition of done*
> All asynchronous logic in `DistributionZoneManager#start` completes before the 
> node is ready to work, in particular before it is ready to interact with zones.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20310) Meta storage invokes are not completed when DZM start is completed

2023-09-01 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20310:
-
Description: 
h3. *Motivation*

There are meta storage invokes in the DistributionZoneManager start. Currently it 
does the meta storage invokes in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in the case when a filter update was 
handled before the DZM stopped but did not update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage have not been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which should trigger an immediate data nodes recalculation), the recalculation 
won't happen because the data nodes key has not been initialised. 


  was:
There are meta storage invokes in the DistributionZoneManager start. Currently it 
does the meta storage invokes in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in the case when a filter update was 
handled before the DZM stopped but did not update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed.



> Meta storage invokes are not completed  when DZM start is completed
> ---
>
> Key: IGNITE-20310
> URL: https://issues.apache.org/jira/browse/IGNITE-20310
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in the DistributionZoneManager start. Currently 
> it does the meta storage invokes in 
> DistributionZoneManager#createOrRestoreZoneState:
> # DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init 
> the default zone.
> # DistributionZoneManager#restoreTimers in the case when a filter update was 
> handled before the DZM stopped but did not update data nodes.
> The futures of these invokes are ignored, so when the start method completes, 
> not all start actions have actually completed. This can lead to the following 
> situation: 
> * Initialisation of the default zone hangs for some reason, even after a 
> full restart of the cluster.
> * That means the data-nodes-related keys in the meta storage have not been 
> initialised.
> * For example, if a user adds a new node and the scale-up timer is immediate 
> (which should trigger an immediate data nodes recalculation), the 
> recalculation won't happen because the data nodes key has not been initialised. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20317) Meta storage invokes are not completed when events are handled in DZM

2023-09-01 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-20317:
---
Description: 
h3. *Motivation*
There are meta storage invokes in the DistributionZoneManager during a zone's 
lifecycle. The futures of these invokes are ignored, so when a lifecycle method 
completes, not all of its actions have actually completed. As a result, several 
invokes (for example, on createZone and alterZone) can be reordered. Currently 
the meta storage invokes are done in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# LogicalTopologyEventListener to update logical topology.

h3. *Definition of Done*
Need to return the meta storage futures from the event handlers to ensure event 
linearization.

h3. *Implementation Notes*
ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete and 
DistributionZoneManager#onUpdateFilter are invoked in configuration listeners, 
so we can just return the ms invoke future from these methods, which ensures 
that the invoke completes within the current event handling.
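
As an illustration of the implementation note above, a minimal self-contained 
sketch (all types here are hypothetical stand-ins, not the real Ignite 
classes): the event handler returns the meta storage invoke future instead of 
dropping it, so the framework can wait for the invoke within the current event 
handling.

{code:java}
// Hypothetical stand-ins illustrating the pattern; not the real Ignite classes.
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

class MetaStorage {
    /** Stand-in for an asynchronous conditional meta storage invoke. */
    CompletableFuture<Boolean> invoke(Supplier<Boolean> conditionalUpdate) {
        return CompletableFuture.supplyAsync(conditionalUpdate);
    }
}

class ZoneCreateListener {
    private final MetaStorage ms = new MetaStorage();

    /** Anti-pattern: the future is dropped, so the handler "completes" before the invoke does. */
    void onCreateIgnoringFuture(String zoneName) {
        ms.invoke(() -> true); // result ignored; a later event may observe stale state
    }

    /** Fix: return the future so the invoke completes within the current event handling. */
    CompletableFuture<Boolean> onCreate(String zoneName) {
        return ms.invoke(() -> true);
    }
}
{code}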

  was:
There are meta storage invokes in the DistributionZoneManager during a zone's 
lifecycle. The futures of these invokes are ignored, so when a lifecycle method 
completes, not all of its actions have actually completed. Currently the meta 
storage invokes are done in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# LogicalTopologyEventListener to update logical topology.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.

Need to return the meta storage futures from the event handlers to ensure event 
linearization.


> Meta storage invokes are not completed when events are handled in DZM 
> --
>
> Key: IGNITE-20317
> URL: https://issues.apache.org/jira/browse/IGNITE-20317
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> There are meta storage invokes in the DistributionZoneManager during a zone's 
> lifecycle. The futures of these invokes are ignored, so when a lifecycle 
> method completes, not all of its actions have actually completed. As a result, 
> several invokes (for example, on createZone and alterZone) can be reordered. 
> Currently the meta storage invokes are done in:
> # ZonesConfigurationListener#onCreate to init a zone.
> # ZonesConfigurationListener#onDelete to clean up the zone data.
> # DistributionZoneManager#onUpdateFilter to save data nodes in the meta 
> storage.
> # LogicalTopologyEventListener to update logical topology.
> h3. *Definition of Done*
> Need to return the meta storage futures from the event handlers to ensure 
> event linearization.
> h3. *Implementation Notes*
> ZonesConfigurationListener#onCreate, ZonesConfigurationListener#onDelete and 
> DistributionZoneManager#onUpdateFilter are invoked in configuration listeners, 
> so we can just return the ms invoke future from these methods, which ensures 
> that the invoke completes within the current event handling.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20322) Add ability to pass an observable timestamp to an implicit transaction

2023-09-01 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-20322:
--
Description: 
*Motivation*

An implicit transaction is a transaction that is started by the table API when 
a single operation is executed without specifying a transaction, and committed 
when the operation is finished. Currently, implicit transactions use the common 
observable timestamp. This timestamp is reserved for transactions started by 
the embedded node only. That leads to unnecessary adjustments of the observable 
timestamp (the timestamp is updated more often than required), which may have a 
serious performance impact on read-only transactions and on read operations 
that use implicit read-only transactions.

*Definition of done*

Implicit transactions should start with an observable timestamp that is 
specific for each embedded server or for each client.

  was:
*Motivation*

An implicit transaction is a transaction that is started by the table API when 
a single operation is executed without specifying a transaction, and committed 
when the operation is finished. Currently, implicit transactions use the common 
observable timestamp. The timestamp is reserved for transactions started by the 
embedded node only. That leads to unnecessary adjustments of the observable 
timestamp (the timestamp is updated more often than required), which may have a 
serious performance impact on read-only transactions and on read operations 
that use implicit read-only transactions.

*Definition of done*

Implicit transactions should start with an observable timestamp that is 
specific for each embedded server or for each client.
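
To illustrate the intended direction, a minimal sketch with hypothetical types: 
each client (or embedded server) keeps its own observable timestamp, and 
implicit read-only transactions read from that per-caller tracker, so unrelated 
callers no longer bump one shared timestamp.

{code:java}
// Hypothetical sketch of a per-client observable timestamp; not the actual
// Ignite transaction API.
import java.util.concurrent.atomic.AtomicLong;

class ObservableTimestampTracker {
    private final AtomicLong observableTs = new AtomicLong();

    /** Advance monotonically from a server response. */
    void update(long serverTs) {
        observableTs.accumulateAndGet(serverTs, Math::max);
    }

    long current() {
        return observableTs.get();
    }
}

class ImplicitTxExample {
    /**
     * Each client holds its own tracker, so an implicit read-only transaction
     * observes only the timestamps this client has actually seen.
     */
    static long beginImplicitReadOnlyTx(ObservableTimestampTracker perClientTracker) {
        return perClientTracker.current();
    }
}
{code}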


> Add ability to pass an observable timestamp to an implicit transaction
> --
>
> Key: IGNITE-20322
> URL: https://issues.apache.org/jira/browse/IGNITE-20322
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> An implicit transaction is a transaction that is started by the table API 
> when a single operation is executed without specifying a transaction, and 
> committed when the operation is finished. Currently, implicit transactions use 
> the common observable timestamp. This timestamp is reserved for transactions 
> started by the embedded node only. That leads to unnecessary adjustments of 
> the observable timestamp (the timestamp is updated more often than required), 
> which may have a serious performance impact on read-only transactions and on 
> read operations that use implicit read-only transactions.
> *Definition of done*
> Implicit transactions should start with an observable timestamp that is 
> specific for each embedded server or for each client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20322) Add ability to pass an observable timestamp to an implicit transaction

2023-09-01 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-20322:
--
Description: 
*Motivation*

An implicit transaction is a transaction that is started by the table API when 
a single operation is executed without specifying a transaction, and committed 
when the operation is finished. Currently, implicit transactions use the common 
observable timestamp. The timestamp is reserved for transactions started by the 
embedded node only. That leads to unnecessary adjustments of the observable 
timestamp (the timestamp is updated more often than required), which may have a 
serious performance impact on read-only transactions and on read operations 
that use implicit read-only transactions.

*Definition of done*

Implicit transactions should start with an observable timestamp that is 
specific for each embedded server or for each client.

  was:
*Motivation*

An implicit transaction is a transaction that is started by the table API when 
a single operation is executed and committed when the operation is finished. 
Currently, implicit transactions use the common observable timestamp. The 
timestamp is reserved for transactions started by the embedded node only. That 
leads to unnecessary adjustments of the observable timestamp (the timestamp is 
updated more often than required), that may have serious performance impact on 
read-only transactions and read operations which use implicit read-only 
transactions.

*Definition of done*

Implicit transactions should start with an observable timestamp that is 
specific for each embedded server or for each client.


> Add ability to pass an observable timestamp to an implicit transaction
> --
>
> Key: IGNITE-20322
> URL: https://issues.apache.org/jira/browse/IGNITE-20322
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> An implicit transaction is a transaction that is started by the table API 
> when a single operation is executed without specifying a transaction, and 
> committed when the operation is finished. Currently, implicit transactions use 
> the common observable timestamp. The timestamp is reserved for transactions 
> started by the embedded node only. That leads to unnecessary adjustments of 
> the observable timestamp (the timestamp is updated more often than required), 
> which may have a serious performance impact on read-only transactions and on 
> read operations that use implicit read-only transactions.
> *Definition of done*
> Implicit transactions should start with an observable timestamp that is 
> specific for each embedded server or for each client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20322) Add ability to pass an observable timestamp to an implicit transaction

2023-09-01 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-20322:
--
Description: 
*Motivation*

An implicit transaction is a transaction that is started by the table API when 
a single operation is executed and committed when the operation is finished. 
Currently, implicit transactions use the common observable timestamp. The 
timestamp is reserved for transactions started by the embedded node only. That 
leads to unnecessary adjustments of the observable timestamp (the timestamp is 
updated more often than required), that may have serious performance impact on 
read-only transactions and read operations which use implicit read-only 
transactions.

*Definition of done*

Implicit transactions should start with an observable timestamp that is 
specific for each embedded server or for each client.

  was:
*Motivation*

An implicit transaction is a transaction that is started by the table API when 
a single operation is executed and committed when the operation is finished. 
Currently, internal transactions use the only observable timestamp. The 
timestamp is reserved for embedded transactions only. That leads to incorrect 
calculations of the observable timestamp (the timestamp is updated more 
frequently than required).

*Definition of done*

Implicit transactions should start with a specific observable timestamp (for 
the embedded server or for each client).


> Add ability to pass an observable timestamp to an implicit transaction
> --
>
> Key: IGNITE-20322
> URL: https://issues.apache.org/jira/browse/IGNITE-20322
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> An implicit transaction is a transaction that is started by the table API 
> when a single operation is executed and committed when the operation is 
> finished. Currently, implicit transactions use the common observable 
> timestamp. The timestamp is reserved for transactions started by the embedded 
> node only. That leads to unnecessary adjustments of the observable timestamp 
> (the timestamp is updated more often than required), that may have serious 
> performance impact on read-only transactions and read operations which use 
> implicit read-only transactions.
> *Definition of done*
> Implicit transactions should start with an observable timestamp that is 
> specific for each embedded server or for each client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-19358) Sql. PrepareServiceImpl prepareDML cache does not take dynamic parameters into account

2023-09-01 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov resolved IGNITE-19358.
---
Resolution: Fixed

Was fixed in IGNITE-18831

> Sql. PrepareServiceImpl prepareDML cache does not take dynamic parameters 
> into account
> --
>
> Key: IGNITE-19358
> URL: https://issues.apache.org/jira/browse/IGNITE-19358
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-alpha2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: calcite3-required, ignite-3
>
> PrepareService prepareDml cache does not take dynamic parameters into account, 
> which can potentially lead to runtime errors when different queries reuse the 
> same plan.
> {code:java}
> private CompletableFuture<QueryPlan> prepareDml(SqlNode sqlNode, 
> PlanningContext ctx) {
> var key = new CacheKey(ctx.schemaName(), sqlNode.toString());
> 
> {code}
> Expected behaviour: the DML plan cache should take the dynamic parameters 
> (or the types of the dynamic parameters) into account.
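
To illustrate the expected behaviour, a minimal sketch of a plan cache key that 
also carries the dynamic parameter types (the field layout is illustrative, not 
the actual PrepareServiceImpl code): two queries with the same SQL text but 
differently typed parameters then map to different cache entries.

{code:java}
// Illustrative cache key; the real CacheKey in PrepareServiceImpl may differ.
import java.util.List;
import java.util.Objects;

final class CacheKey {
    private final String schemaName;
    private final String sql;
    private final List<Class<?>> dynamicParamTypes; // the missing ingredient

    CacheKey(String schemaName, String sql, List<Class<?>> dynamicParamTypes) {
        this.schemaName = schemaName;
        this.sql = sql;
        this.dynamicParamTypes = List.copyOf(dynamicParamTypes);
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof CacheKey)) {
            return false;
        }
        CacheKey k = (CacheKey) o;
        return schemaName.equals(k.schemaName)
                && sql.equals(k.sql)
                && dynamicParamTypes.equals(k.dynamicParamTypes);
    }

    @Override public int hashCode() {
        return Objects.hash(schemaName, sql, dynamicParamTypes);
    }
}
{code}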



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20329) Cannot build ODBC modules on MacOS

2023-09-01 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761302#comment-17761302
 ] 

Igor Sapego commented on IGNITE-20329:
--

Looks good to me.


> Cannot build ODBC modules on MacOS
> --
>
> Key: IGNITE-20329
> URL: https://issues.apache.org/jira/browse/IGNITE-20329
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Reporter: Andrey Khitrin
>Assignee: Andrey Khitrin
>Priority: Major
>  Labels: ignite-3
>
> I tried to build AI3 from master (hash: 59107180b) on my MacBook and faced an 
> obstacle while compiling the ODBC driver:
> {code}
> ./gradlew clean build -x check
> ...
> > Task :platforms:cmakeBuildOdbc FAILED
>   CMakePlugin.cmakeConfigure - ERRORS: 
> In file included from 
> /Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/app/application_data_buffer.cpp:18:
> In file included from 
> /Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/app/application_data_buffer.h:20:
> In file included from 
> /Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/common_types.h:20:
> /Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/system/odbc_constants.h:29:10:
>  fatal error: 'odbcinst.h' file not found
> #include <odbcinst.h>
>  ^~~~
> 1 error generated.
> make[2]: *** 
> [ignite/odbc/CMakeFiles/ignite3-odbc-obj.dir/app/application_data_buffer.cpp.o]
>  Error 1
> make[1]: *** [ignite/odbc/CMakeFiles/ignite3-odbc-obj.dir/all] Error 2
> make: *** [all] Error 2
> {code}
> I have the unixODBC package installed on my system, with odbcinst.h located in 
> /opt/local/include. My problem here is that this header is not found by the 
> build scripts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20329) Cannot build ODBC modules on MacOS

2023-09-01 Thread Andrey Khitrin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761300#comment-17761300
 ] 

Andrey Khitrin commented on IGNITE-20329:
-

[~isapego] Take a look at PR please

> Cannot build ODBC modules on MacOS
> --
>
> Key: IGNITE-20329
> URL: https://issues.apache.org/jira/browse/IGNITE-20329
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Reporter: Andrey Khitrin
>Assignee: Andrey Khitrin
>Priority: Major
>  Labels: ignite-3
>
> I tried to build AI3 from master (hash: 59107180b) on my MacBook and faced an 
> obstacle while compiling the ODBC driver:
> {code}
> ./gradlew clean build -x check
> ...
> > Task :platforms:cmakeBuildOdbc FAILED
>   CMakePlugin.cmakeConfigure - ERRORS: 
> In file included from 
> /Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/app/application_data_buffer.cpp:18:
> In file included from 
> /Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/app/application_data_buffer.h:20:
> In file included from 
> /Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/common_types.h:20:
> /Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/system/odbc_constants.h:29:10:
>  fatal error: 'odbcinst.h' file not found
> #include <odbcinst.h>
>  ^~~~
> 1 error generated.
> make[2]: *** 
> [ignite/odbc/CMakeFiles/ignite3-odbc-obj.dir/app/application_data_buffer.cpp.o]
>  Error 1
> make[1]: *** [ignite/odbc/CMakeFiles/ignite3-odbc-obj.dir/all] Error 2
> make: *** [all] Error 2
> {code}
> I have the unixODBC package installed on my system, with odbcinst.h located in 
> /opt/local/include. My problem here is that this header is not found by the 
> build scripts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20329) Cannot build ODBC modules on MacOS

2023-09-01 Thread Andrey Khitrin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Khitrin reassigned IGNITE-20329:
---

Assignee: Andrey Khitrin

> Cannot build ODBC modules on MacOS
> --
>
> Key: IGNITE-20329
> URL: https://issues.apache.org/jira/browse/IGNITE-20329
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Reporter: Andrey Khitrin
>Assignee: Andrey Khitrin
>Priority: Major
>  Labels: ignite-3
>
> I tried to build AI3 from master (hash: 59107180b) on my MacBook and faced an 
> obstacle while compiling the ODBC driver:
> {code}
> ./gradlew clean build -x check
> ...
> > Task :platforms:cmakeBuildOdbc FAILED
>   CMakePlugin.cmakeConfigure - ERRORS: 
> In file included from 
> /Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/app/application_data_buffer.cpp:18:
> In file included from 
> /Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/app/application_data_buffer.h:20:
> In file included from 
> /Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/common_types.h:20:
> /Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/system/odbc_constants.h:29:10:
>  fatal error: 'odbcinst.h' file not found
> #include <odbcinst.h>
>  ^~~~
> 1 error generated.
> make[2]: *** 
> [ignite/odbc/CMakeFiles/ignite3-odbc-obj.dir/app/application_data_buffer.cpp.o]
>  Error 1
> make[1]: *** [ignite/odbc/CMakeFiles/ignite3-odbc-obj.dir/all] Error 2
> make: *** [all] Error 2
> {code}
> I have the unixODBC package installed on my system, with odbcinst.h located in 
> /opt/local/include. My problem here is that this header is not found by the 
> build scripts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20286) Information about REST port disappeared from logs

2023-09-01 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20286:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Information about REST port disappeared from logs
> -
>
> Key: IGNITE-20286
> URL: https://issues.apache.org/jira/browse/IGNITE-20286
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Assignee: Vyacheslav Koptilin
>Priority: Major
>  Labels: ignite-3
> Attachments: image-2023-08-25-16-20-23-118.png
>
>
> The information about the REST port in use has disappeared from the logs.
> Previously this line was present:
> {code:java}
> 2023-08-24 05:01:08:983 +0300 [INFO][main][Micronaut] Startup completed in 
> 118ms. Server Running: http://5e85d1e4d3d1:10301{code}
> Now it is impossible to determine which port a node took (because the REST 
> endpoint supports `portRange`).
> A possible reason for the problem: IgniteRunner doesn't have an SLF4J 
> implementation in its dependencies.
> !image-2023-08-25-16-20-23-118.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20322) Add ability to pass an observable timestamp to an implicit transaction

2023-09-01 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-20322:
--
Summary: Add ability to pass an observable timestamp to an implicit 
transaction  (was: Ability to pass an observable time stamp to an implicit 
transaction)

> Add ability to pass an observable timestamp to an implicit transaction
> --
>
> Key: IGNITE-20322
> URL: https://issues.apache.org/jira/browse/IGNITE-20322
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> An implicit transaction is a transaction that is started by the table API 
> when a single operation is executed and committed when the operation is 
> finished. Currently, internal transactions use the only observable timestamp. 
> The timestamp is reserved for embedded transactions only. That leads to 
> incorrect calculations of the observable timestamp (the timestamp is updated 
> more frequently than required).
> *Definition of done*
> Implicit transactions should start with a specific observable timestamp (for 
> the embedded server or for each client).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20335) Move IgniteRunner to the internal package

2023-09-01 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20335:
-
Labels: ignite-3  (was: )

> Move IgniteRunner to the internal package
> -
>
> Key: IGNITE-20335
> URL: https://issues.apache.org/jira/browse/IGNITE-20335
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
>  Labels: ignite-3
>
> The `IgniteRunner` class is not supposed to be used directly, but it is 
> placed into the `org.apache.ignite` package, so it makes sense to move it to 
> an internal one.
> Definition of Done:
>  - `IgniteRunner` moved to the `org.apache.ignite.internal.app` package
>  - build and packaging scripts are updated in accordance with #1
>  - removed the outdated TODO in `IgniteRunnerTest`: /** TODO: Replace this 
> test by full integration test on the cli side IGNITE-15097. */



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20335) Move IgniteRunner to the internal package

2023-09-01 Thread Vyacheslav Koptilin (Jira)
Vyacheslav Koptilin created IGNITE-20335:


 Summary: Move IgniteRunner to the internal package
 Key: IGNITE-20335
 URL: https://issues.apache.org/jira/browse/IGNITE-20335
 Project: Ignite
  Issue Type: Bug
Reporter: Vyacheslav Koptilin
Assignee: Vyacheslav Koptilin


The `IgniteRunner` class is not supposed to be used directly, but it is placed 
into the `org.apache.ignite` package, so it makes sense to move it to an 
internal one.

Definition of Done:
 - `IgniteRunner` moved to the `org.apache.ignite.internal.app` package
 - build and packaging scripts are updated in accordance with #1
 - removed the outdated TODO in `IgniteRunnerTest`: /** TODO: Replace this 
test by full integration test on the cli side IGNITE-15097. */



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20335) Move IgniteRunner to the internal package

2023-09-01 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20335:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Move IgniteRunner to the internal package
> -
>
> Key: IGNITE-20335
> URL: https://issues.apache.org/jira/browse/IGNITE-20335
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
>  Labels: ignite-3
>
> The `IgniteRunner` class is not supposed to be used directly, but it is 
> placed into the `org.apache.ignite` package, so it makes sense to move it to 
> an internal one.
> Definition of Done:
>  - `IgniteRunner` moved to the `org.apache.ignite.internal.app` package
>  - build and packaging scripts are updated in accordance with #1
>  - removed the outdated TODO in `IgniteRunnerTest`: /** TODO: Replace this 
> test by full integration test on the cli side IGNITE-15097. */



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20334) Sql. Fix compilation after IGNITE-20077

2023-09-01 Thread Evgeny Stanilovsky (Jira)
Evgeny Stanilovsky created IGNITE-20334:
---

 Summary: Sql. Fix compilation after IGNITE-20077
 Key: IGNITE-20334
 URL: https://issues.apache.org/jira/browse/IGNITE-20334
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 3.0.0-beta1
Reporter: Evgeny Stanilovsky
Assignee: Evgeny Stanilovsky






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-09-01 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761283#comment-17761283
 ] 

Pavel Tupitsyn edited comment on IGNITE-20299 at 9/1/23 11:29 AM:
--

[~rpwilson] I can reproduce the issue. Some observations:
* To fix the grid, remove 
{{BadCacheCreationReproducer\Persistence\...\cache-ABadCache}} directory 
* Reproduces on Apache.Ignite 2.15, but not on GridGain.Ignite 8.8.33


was (Author: ptupitsyn):
[~rpwilson] I can reproduce the issue. Some observations:
* To fix the grid, remove 
{code}BadCacheCreationReproducer\Persistence\...\cache-ABadCache{code} 
directory 
* Reproduces on Apache.Ignite 2.15, but not on GridGain.Ignite 8.8.33

> Creating a cache with an unknown data region name causes total unrecoverable 
> failure of the grid
> 
>
> Key: IGNITE-20299
> URL: https://issues.apache.org/jira/browse/IGNITE-20299
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.15
> Environment: Observed in:
> C# client and grid running on Linux in a container
> C# client and grid running on Windows
>  
>Reporter: Raymond Wilson
>Priority: Major
>
> Using the Ignite C# client.
>  
> Given a running grid, having a client (and perhaps server) node in the grid 
> attempt to create a cache using a DataRegionName that does not exist in the 
> grid causes immediate failure in the client node with the following log 
> output. 
>  
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Completed 
> partition exchange [localNode=15122bd7-bf81-44e6-a548-e70dbd9334c0, 
> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
> [topVer=15, minorTopVer=0], evt=NODE_FAILED, evtNode=TcpDiscoveryNode 
> [id=9d5ed68d-38bb-447d-aed5-189f52660716, 
> consistentId=9d5ed68d-38bb-447d-aed5-189f52660716, addrs=ArrayList 
> [127.0.0.1], sockAddrs=null, discPort=0, order=8, intOrder=8, 
> lastExchangeTime=1693112858024, loc=false, ver=2.15.0#20230425-sha1:f98f7f35, 
> isClient=true], rebalanced=false, done=true, newCrdFut=null], 
> topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Exchange timings 
> [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting in 
> exchange queue" (14850 ms), stage="Exchange parameters initialization" (2 
> ms), stage="Determine exchange type" (3 ms), stage="Exchange done" (4 ms), 
> stage="Total time" (14859 ms)]
> 2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange longest 
> local stages [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
> 2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished exchange 
> init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], crd=false]
> 2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]   
> AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED, 
> evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
> Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class 
> org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.
>  ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete exchange 
> process.
>  ---> Apache.Ignite.Core.Common.JavaException: javax.cache.CacheException: 
> class org.apache.ignite.IgniteCheckedException: Failed to complete exchange 
> process.
>         at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
>         at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
>         at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
>         at 
> org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutObject(PlatformProcessorImpl.java:643)
>         at 
> org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to complete 
> exchange process.
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3709)
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendExchangeFailureMessage(GridDhtPartitionsExchangeFuture.java:3737)
>         at 
> 

[jira] [Commented] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-09-01 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761283#comment-17761283
 ] 

Pavel Tupitsyn commented on IGNITE-20299:
-

[~rpwilson] I can reproduce the issue. Some observations:
* To fix the grid, remove 
{code}BadCacheCreationReproducer\Persistence\...\cache-ABadCache{code} 
directory 
* Reproduces on Apache.Ignite 2.15, but not on GridGain.Ignite 8.8.33

> Creating a cache with an unknown data region name causes total unrecoverable 
> failure of the grid
> 
>
> Key: IGNITE-20299
> URL: https://issues.apache.org/jira/browse/IGNITE-20299
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.15
> Environment: Observed in:
> C# client and grid running on Linux in a container
> C# client and grid running on Windows
>  
>Reporter: Raymond Wilson
>Priority: Major
>
> Using the Ignite C# client.
>  
> Given a running grid, having a client (and perhaps server) node in the grid 
> attempt to create a cache using a DataRegionName that does not exist in the 
> grid causes immediate failure in the client node with the following log 
> output. 
>  
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Completed 
> partition exchange [localNode=15122bd7-bf81-44e6-a548-e70dbd9334c0, 
> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
> [topVer=15, minorTopVer=0], evt=NODE_FAILED, evtNode=TcpDiscoveryNode 
> [id=9d5ed68d-38bb-447d-aed5-189f52660716, 
> consistentId=9d5ed68d-38bb-447d-aed5-189f52660716, addrs=ArrayList 
> [127.0.0.1], sockAddrs=null, discPort=0, order=8, intOrder=8, 
> lastExchangeTime=1693112858024, loc=false, ver=2.15.0#20230425-sha1:f98f7f35, 
> isClient=true], rebalanced=false, done=true, newCrdFut=null], 
> topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Exchange timings 
> [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting in 
> exchange queue" (14850 ms), stage="Exchange parameters initialization" (2 
> ms), stage="Determine exchange type" (3 ms), stage="Exchange done" (4 ms), 
> stage="Total time" (14859 ms)]
> 2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange longest 
> local stages [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
> 2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished exchange 
> init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], crd=false]
> 2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]   
> AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED, 
> evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
> Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class 
> org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.
>  ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete exchange 
> process.
>  ---> Apache.Ignite.Core.Common.JavaException: javax.cache.CacheException: 
> class org.apache.ignite.IgniteCheckedException: Failed to complete exchange 
> process.
>         at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
>         at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
>         at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
>         at 
> org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutObject(PlatformProcessorImpl.java:643)
>         at 
> org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to complete 
> exchange process.
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3709)
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendExchangeFailureMessage(GridDhtPartitionsExchangeFuture.java:3737)
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:3832)
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:3813)
>         at 
> 

[jira] [Updated] (IGNITE-20164) Sql. Incorrect propagation of RelCollation trait for Sort-based map/reduce aggregates.

2023-09-01 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-20164:
--
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Sql. Incorrect propagation of RelCollation trait for Sort-based map/reduce 
> aggregates.
> --
>
> Key: IGNITE-20164
> URL: https://issues.apache.org/jira/browse/IGNITE-20164
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> RelCollation propagation does not take into account the remapping of group 
> keys between the MAP and REDUCE phases, and hence causes errors in queries 
> that are expected to use sort-based MAP/REDUCE: the RelCollation uses the same 
> keys on both phases. Example:
> {code:java}
> String[] rules = {
> "MapReduceHashAggregateConverterRule",
> "ColocatedHashAggregateConverterRule",
> "ColocatedSortAggregateConverterRule"
> };
> sql("CREATE TABLE testMe40 (a INTEGER, b INTEGER);");
> sql("INSERT INTO testMe40 VALUES (11, 2), (12, 2), (12, 3)");
> assertQuery("SELECT COUNT(a), COUNT(DISTINCT(b)) FROM testMe40")
>   .disableRules(rules)
>   .returns(3L, 2L)
>   .check();
> {code}
> Plan:
> {code:java}
> IgniteProject(EXPR$0=[CAST($0):BIGINT NOT NULL], EXPR$1=[$1]), 
>   IgniteReduceSortAggregate(group=[{}], EXPR$0=[$SUM0($1)], 
> EXPR$1=[COUNT($0)], collation=[[]]), 
> IgniteMapSortAggregate(group=[{}], EXPR$0=[$SUM0($1)], 
> EXPR$1=[COUNT($0)], collation=[[]]), 
>   IgniteReduceSortAggregate(group=[{1}], EXPR$0=[COUNT($0)], 
> collation=[[1]]), < HERE
> IgniteExchange(distribution=[single]),
>   IgniteMapSortAggregate(group=[{1}], EXPR$0=[COUNT($0)], 
> collation=[[1]]), 
> IgniteSort(sort0=[$1], dir0=[ASC]), 
>   IgniteTableScan(table=[[PUBLIC, TESTME40]], 
> requiredColumns=[{1, 2}]),
> {code}
> Error:
> {code:java}
> Caused by: java.lang.ClassCastException: class java.util.ArrayList cannot be 
> cast to class java.lang.Comparable (java.util.ArrayList and 
> java.lang.Comparable are in module java.base of loader 'bootstrap')
>   at 
> org.apache.ignite.internal.sql.engine.exec.exp.ExpressionFactoryImpl.compare(ExpressionFactoryImpl.java:247)
>   at 
> org.apache.ignite.internal.sql.engine.exec.exp.ExpressionFactoryImpl.lambda$comparator$0(ExpressionFactoryImpl.java:178)
>   at 
> java.base/java.util.Map$Entry.lambda$comparingByKey$6d558cbf$1(Map.java:539)
>   at 
> java.base/java.util.PriorityQueue.siftUpUsingComparator(PriorityQueue.java:675)
>   at java.base/java.util.PriorityQueue.siftUp(PriorityQueue.java:652)
>   at java.base/java.util.PriorityQueue.offer(PriorityQueue.java:345)
>   at 
> org.apache.ignite.internal.sql.engine.exec.rel.Inbox.pushOrdered(Inbox.java:235)
>   at 
> org.apache.ignite.internal.sql.engine.exec.rel.Inbox.push(Inbox.java:188)
>   at 
> org.apache.ignite.internal.sql.engine.exec.rel.Inbox.onBatchReceived(Inbox.java:168)
>   at 
> org.apache.ignite.internal.sql.engine.exec.ExchangeServiceImpl.onMessage(ExchangeServiceImpl.java:184)
>   ... 7 more
> {code}
> The query below works because the position of column b does not change after 
> the MAP phase.
> {code:java}
> String[] rules = {
> "MapReduceHashAggregateConverterRule",
> "ColocatedHashAggregateConverterRule",
> "ColocatedSortAggregateConverterRule"
> };
> sql("CREATE TABLE testMe40 (a INTEGER, b INTEGER);");
> sql("INSERT INTO testMe40 VALUES (11, 2), (12, 2), (12, 3)");
> assertQuery("SELECT COUNT(a), COUNT(DISTINCT(b)) FROM testMe40")
>   .disableRules(rules)
>   .returns(3L, 2L)
>   .check();
> {code}
> Plan:
> {code:java}
> IgniteProject(EXPR$0=[$0], EXPR$1=[CAST($1):BIGINT NOT NULL]), 
>   IgniteReduceSortAggregate(group=[{}], EXPR$0=[COUNT($0)], 
> EXPR$1=[$SUM0($1)], collation=[[]]), 
> IgniteMapSortAggregate(group=[{}], EXPR$0=[COUNT($0)], 
> EXPR$1=[$SUM0($1)], collation=[[]]), 
>   IgniteReduceSortAggregate(group=[{0}], EXPR$1=[COUNT($1)], 
> collation=[[0]]), 
> IgniteExchange(distribution=[single]),
>   IgniteMapSortAggregate(group=[{0}], EXPR$1=[COUNT($1)], 
> collation=[[0]]), 
> IgniteSort(sort0=[$0], dir0=[ASC]), 
>   IgniteTableScan(table=[[PUBLIC, TESTME40]], projects=[[$t1, 
> $t0]], requiredColumns=[{1, 2}]), 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20232) Java client: Propagate observable timestamp to sql engine using internal API

2023-09-01 Thread Pavel Pereslegin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761276#comment-17761276
 ] 

Pavel Pereslegin commented on IGNITE-20232:
---

[~ptupitsyn],

> _Which internal API should be used_

QueryProcessor#querySingleAsync (you can find an example in 
JdbcQueryEventHandlerImpl).

> _Which related tests should be reworked._

I assume that after switching to the internal API, some client tests that use 
FakeSession/FakeAsyncResultSet will fail.

> _How should the TODOs with this ticket number in IgniteTransactionsImpl and 
> IgniteImpl be addressed_

I think it's better to address this to the author of these TODOs.

[~v.pyatkov], can you help?

> Java client:  Propagate observable timestamp to sql engine using internal API
> -
>
> Key: IGNITE-20232
> URL: https://issues.apache.org/jira/browse/IGNITE-20232
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Pereslegin
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>
> Currently, the Java client internally uses the public API to execute SQL 
> queries (and some tests also rely on this fact).
> In order to pass the observable timestamp to the SQL engine and keep the code 
> clean, we need to switch to using the internal API and rework the related 
> tests.
> This must be done after the completion of IGNITE-19898.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20333) Sql. Backport processing DEFAULT constraints into CALCITE code.

2023-09-01 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky updated IGNITE-20333:

Labels: ignite-3  (was: )

> Sql. Backport processing DEFAULT constraints into CALCITE code.
> ---
>
> Key: IGNITE-20333
> URL: https://issues.apache.org/jira/browse/IGNITE-20333
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: ignite-3
>
> It would be helpful to backport the _default_ constraints processing into the 
> Calcite code base. See the appropriate issue [1].
> [1] CALCITE-5950



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20333) Sql. Backport processing DEFAULT constraints into CALCITE code.

2023-09-01 Thread Evgeny Stanilovsky (Jira)
Evgeny Stanilovsky created IGNITE-20333:
---

 Summary: Sql. Backport processing DEFAULT constraints into CALCITE 
code.
 Key: IGNITE-20333
 URL: https://issues.apache.org/jira/browse/IGNITE-20333
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 3.0.0-beta1
Reporter: Evgeny Stanilovsky
Assignee: Evgeny Stanilovsky


It would be helpful to backport the _default_ constraints processing into the 
Calcite code base. See the appropriate issue [1].

[1] CALCITE-5950



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20288) .NET: Thin 3.0: TestDroppedConnectionsAreRestoredInBackground is flaky

2023-09-01 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761271#comment-17761271
 ] 

Pavel Tupitsyn commented on IGNITE-20288:
-

Merged to main: f189bf979b88a3607305e2659134f582c377

> .NET: Thin 3.0: TestDroppedConnectionsAreRestoredInBackground is flaky
> --
>
> Key: IGNITE-20288
> URL: https://issues.apache.org/jira/browse/IGNITE-20288
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> 3 of 1000 failed
> https://ci.ignite.apache.org/test/3972078589555679559?currentProjectId=ApacheIgnite3xGradle_Test=true=
> {code}
> Condition not reached after 00:00:04.5004651
>at Apache.Ignite.Tests.TestUtils.WaitForConditionAsync(Func`1 condition, 
> Int32 timeoutMs, Func`1 messageFactory) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/TestUtils.cs:line
>  68
>at Apache.Ignite.Tests.TestUtils.WaitForCondition(Func`1 condition, Int32 
> timeoutMs, Func`1 messageFactory) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/TestUtils.cs:line
>  37
>at 
> Apache.Ignite.Tests.ReconnectTests.TestDroppedConnectionsAreRestoredInBackground()
>  in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/ReconnectTests.cs:line
>  95
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20288) .NET: Thin 3.0: TestDroppedConnectionsAreRestoredInBackground is flaky

2023-09-01 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761269#comment-17761269
 ] 

Igor Sapego commented on IGNITE-20288:
--

Looks good to me.

> .NET: Thin 3.0: TestDroppedConnectionsAreRestoredInBackground is flaky
> --
>
> Key: IGNITE-20288
> URL: https://issues.apache.org/jira/browse/IGNITE-20288
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> 3 of 1000 failed
> https://ci.ignite.apache.org/test/3972078589555679559?currentProjectId=ApacheIgnite3xGradle_Test=true=
> {code}
> Condition not reached after 00:00:04.5004651
>at Apache.Ignite.Tests.TestUtils.WaitForConditionAsync(Func`1 condition, 
> Int32 timeoutMs, Func`1 messageFactory) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/TestUtils.cs:line
>  68
>at Apache.Ignite.Tests.TestUtils.WaitForCondition(Func`1 condition, Int32 
> timeoutMs, Func`1 messageFactory) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/TestUtils.cs:line
>  37
>at 
> Apache.Ignite.Tests.ReconnectTests.TestDroppedConnectionsAreRestoredInBackground()
>  in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite.Tests/ReconnectTests.cs:line
>  95
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20232) Java client: Propagate observable timestamp to sql engine using internal API

2023-09-01 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761268#comment-17761268
 ] 

Pavel Tupitsyn commented on IGNITE-20232:
-

>  we need to switch to using the internal API and rework the related tests

[~xtern] please clarify the task:
* Which internal API should be used
* Which related tests should be reworked
* How should TODOs with this ticket number in IgniteTransactionsImpl and 
IgniteImpl be addressed, and why do we have HybridTimestampTracker on the 
server at all? The client can be connected to multiple servers, and it tracks 
the timestamp across all connections.

> Java client:  Propagate observable timestamp to sql engine using internal API
> -
>
> Key: IGNITE-20232
> URL: https://issues.apache.org/jira/browse/IGNITE-20232
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Pereslegin
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>
> Currently, the Java client internally uses a public API to execute SQL queries 
> (and some tests also rely on this fact).
> In order to pass the observable timestamp to the SQL engine and keep the code 
> clean, we need to switch to using the internal API and rework the related 
> tests.
> This must be done after the completion of IGNITE-19898.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19499) TableManager should listen CatalogService events instead of configuration

2023-09-01 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko reassigned IGNITE-19499:


Assignee: Kirill Tkalenko  (was: Andrey Mashenkov)

> TableManager should listen CatalogService events instead of configuration
> -
>
> Key: IGNITE-19499
> URL: https://issues.apache.org/jira/browse/IGNITE-19499
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Andrey Mashenkov
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> As of now, TableManager listens to configuration events to create internal 
> structures.
> Let's make TableManager listen to CatalogService events instead.
> Note: Some tests may fail due to changed guarantees and related tickets being 
> incomplete. So, let's do this in a separate feature branch.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20182) Sql. ExchangeExecutionTest#racesBetweenRewindAndBatchesFromPreviousRequest rarely failed

2023-09-01 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-20182:
--
Fix Version/s: 3.0.0-beta2

> Sql. ExchangeExecutionTest#racesBetweenRewindAndBatchesFromPreviousRequest 
> rarely failed
> 
>
> Key: IGNITE-20182
> URL: https://issues.apache.org/jira/browse/IGNITE-20182
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
> Attachments: image-2023-08-09-16-09-52-058.png
>
>
> run until failure:
> ExchangeExecutionTest#racesBetweenRewindAndBatchesFromPreviousRequest
>  !image-2023-08-09-16-09-52-058.png! 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20182) Sql. ExchangeExecutionTest#racesBetweenRewindAndBatchesFromPreviousRequest rarely failed

2023-09-01 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov reassigned IGNITE-20182:
-

Assignee: Konstantin Orlov

> Sql. ExchangeExecutionTest#racesBetweenRewindAndBatchesFromPreviousRequest 
> rarely failed
> 
>
> Key: IGNITE-20182
> URL: https://issues.apache.org/jira/browse/IGNITE-20182
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
> Attachments: image-2023-08-09-16-09-52-058.png
>
>
> run until failure:
> ExchangeExecutionTest#racesBetweenRewindAndBatchesFromPreviousRequest
>  !image-2023-08-09-16-09-52-058.png! 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20332) Fix ItIgniteDistributionZoneManagerNodeRestartTest#testScaleUpTriggeredByFilterUpdateIsRestoredAfterRestart

2023-09-01 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20332:
-
Description: 
*org.apache.ignite.internal.distribution.zones.ItIgniteDistributionZoneManagerNodeRestartTest#testScaleUpTriggeredByFilterUpdateIsRestoredAfterRestart*
 started to fail in the 
[catalog-feature|https://github.com/apache/ignite-3/tree/catalog-feature] 
branch, and on other branches created from it; this needs to be fixed.

https://ci.ignite.apache.org/viewLog.html?buildId=7470189=ApacheIgnite3xGradle_Test_RunAllTests=true

  was:
*org.apache.ignite.internal.distribution.zones.ItIgniteDistributionZoneManagerNodeRestartTest#testScaleUpTriggeredByFilterUpdateIsRestoredAfterRestart*
 started to fall in the feature branch, need to fix it.

https://ci.ignite.apache.org/viewLog.html?buildId=7470189=ApacheIgnite3xGradle_Test_RunAllTests=true


> Fix 
> ItIgniteDistributionZoneManagerNodeRestartTest#testScaleUpTriggeredByFilterUpdateIsRestoredAfterRestart
> ---
>
> Key: IGNITE-20332
> URL: https://issues.apache.org/jira/browse/IGNITE-20332
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
>
> *org.apache.ignite.internal.distribution.zones.ItIgniteDistributionZoneManagerNodeRestartTest#testScaleUpTriggeredByFilterUpdateIsRestoredAfterRestart*
>  started to fail in the 
> [catalog-feature|https://github.com/apache/ignite-3/tree/catalog-feature] 
> branch, and on other branches created from it; this needs to be fixed.
> https://ci.ignite.apache.org/viewLog.html?buildId=7470189=ApacheIgnite3xGradle_Test_RunAllTests=true



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20332) Fix ItIgniteDistributionZoneManagerNodeRestartTest#testScaleUpTriggeredByFilterUpdateIsRestoredAfterRestart

2023-09-01 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-20332:


 Summary: Fix 
ItIgniteDistributionZoneManagerNodeRestartTest#testScaleUpTriggeredByFilterUpdateIsRestoredAfterRestart
 Key: IGNITE-20332
 URL: https://issues.apache.org/jira/browse/IGNITE-20332
 Project: Ignite
  Issue Type: Improvement
Reporter: Kirill Tkalenko


*org.apache.ignite.internal.distribution.zones.ItIgniteDistributionZoneManagerNodeRestartTest#testScaleUpTriggeredByFilterUpdateIsRestoredAfterRestart*
 started to fail in the feature branch; this needs to be fixed.

https://ci.ignite.apache.org/viewLog.html?buildId=7470189=ApacheIgnite3xGradle_Test_RunAllTests=true



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20232) Java client: Propagate observable timestamp to sql engine using internal API

2023-09-01 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20232:

Summary: Java client:  Propagate observable timestamp to sql engine using 
internal API  (was: Java client:  Propagate observable timestamp to sql engine 
using internal API.)

> Java client:  Propagate observable timestamp to sql engine using internal API
> -
>
> Key: IGNITE-20232
> URL: https://issues.apache.org/jira/browse/IGNITE-20232
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Pereslegin
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>
> Currently, the Java client internally uses a public API to execute SQL queries 
> (and some tests also rely on this fact).
> In order to pass the observable timestamp to the SQL engine and keep the code 
> clean, we need to switch to using the internal API and rework the related 
> tests.
> This must be done after the completion of IGNITE-19898.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20331) Complete the changes associated with switching the TableManager to the catalog

2023-09-01 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-20331:


 Summary: Complete the changes associated with switching the 
TableManager to the catalog
 Key: IGNITE-20331
 URL: https://issues.apache.org/jira/browse/IGNITE-20331
 Project: Ignite
  Issue Type: Improvement
Reporter: Kirill Tkalenko


In the code, we need to look for TODOs referencing the current ticket and fix them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19712) Handle rebalance wrt indexes

2023-09-01 Thread Sergey Chugunov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov updated IGNITE-19712:
-
   Epic Link: IGNITE-17766
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Handle rebalance wrt indexes
> 
>
> Key: IGNITE-19712
> URL: https://issues.apache.org/jira/browse/IGNITE-19712
> Project: Ignite
>  Issue Type: Bug
>Reporter: Semyon Danilov
>Priority: Major
>  Labels: ignite-3
>
> After IGNITE-19363, index storages are no longer lazily instantiated. We need 
> to listen to assignment changes and start new storages.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20263) Get rid of DataStorageConfigurationSchema

2023-09-01 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko reassigned IGNITE-20263:


Assignee: Kirill Tkalenko

> Get rid of DataStorageConfigurationSchema
> -
>
> Key: IGNITE-20263
> URL: https://issues.apache.org/jira/browse/IGNITE-20263
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need to get rid of 
> *org.apache.ignite.internal.schema.configuration.storage.DataStorageConfigurationSchema*,
>  its descendants and the code associated with it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-18302) ignite-spring-sessions: IgniteSession serialization drags its parent class

2023-09-01 Thread Alexandr Shapkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandr Shapkin reassigned IGNITE-18302:
-

Assignee: Andrey Novikov  (was: Alexandr Shapkin)

> ignite-spring-sessions: IgniteSession serialization drags its parent class
> --
>
> Key: IGNITE-18302
> URL: https://issues.apache.org/jira/browse/IGNITE-18302
> Project: Ignite
>  Issue Type: Bug
>  Components: extensions
>Reporter: Alexandr Shapkin
>Assignee: Andrey Novikov
>Priority: Major
>
> In short, there is a bug in the ignite-spring-session-ext implementation.
> We store {{IgniteIndexedSessionRepository$IgniteSession}} in the cache, but 
> it's an inner non-static class, holding an indirect reference to its parent 
> {{IgniteIndexedSessionRepository}}.
> Hence, during serialization Ignite also writes {{name=this$0, type=Object, 
> fieldId=0xCBDD23AA (-874699862)}}, which is the reference to 
> {{IgniteIndexedSessionRepository}}. That leads to the following issues:
>  * we are serializing and saving internal utility data, like {{Ignite 
> ignite}}, {{private IndexResolver indexResolver}}, etc.
>  * one of IgniteIndexedSessionRepository's fields is the IgniteCache itself 
> ({{IgniteCache sessions}}) that basically keeps every session stored so far, 
> leading to a StackOverflowError after some time.
>  
> {code:java}
> [2022-11-25T17:27:29,268][ERROR][sys-stripe-0-#1%USERS_IGNITE%][GridCacheIoManager] Failed processing message [senderId=0f0ca915-d6cd-4580-92a3-1fbc3d2a5722, msg=GridNearSingleGetResponse [futId=1669397231378, res=-547701325, topVer=null, err=null, flags=0]]
> java.lang.StackOverflowError: null
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedMarshallerUtils.descriptorFromCache(OptimizedMarshallerUtils.java:328) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedMarshallerUtils.classDescriptor(OptimizedMarshallerUtils.java:273) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:354) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:211) ~[ignite-core-8.8.22.jar:8.8.22]
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:480) ~[?:?]
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:447) ~[?:?]
>     at org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.readExternal(GridCacheProxyImpl.java:1662) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readExternalizable(OptimizedObjectInputStream.java:569) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedClassDescriptor.read(OptimizedClassDescriptor.java:979) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:359) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:211) ~[ignite-core-8.8.22.jar:8.8.22]
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:480) ~[?:?]
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:447) ~[?:?]
>     ...
> [2022-11-25T17:27:29,276][ERROR][sys-stripe-0-#1%USERS_IGNITE%][] Critical system error detected. Will be handled accordingly to configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=CRITICAL_ERROR, err=java.lang.StackOverflowError]]{code}
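
The root cause is the implicit {{this$0}} field that Java adds to every 
non-static inner class. Below is a minimal standalone sketch of the effect; 
the Repository/Session classes are hypothetical stand-ins for illustration, 
not the actual extension code:

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class InnerClassCapture {
    // Hypothetical stand-in for IgniteIndexedSessionRepository: holds heavy
    // internal state that was never meant to leave the node.
    static class Repository implements Serializable {
        byte[] heavyState = new byte[1024 * 1024];

        // Non-static inner class: every instance carries a hidden this$0
        // reference to its enclosing Repository instance.
        class Session implements Serializable {
            String id = "s1";
        }
    }

    public static void main(String[] args) throws Exception {
        Repository repo = new Repository();
        Repository.Session session = repo.new Session();

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(out)) {
            // Serializing the tiny Session drags the whole Repository along
            // via this$0, so the output is over a megabyte.
            oos.writeObject(session);
        }
        System.out.println("Serialized session size: " + out.size() + " bytes");
    }
}
{code}

Declaring the session class static (or extracting it to a top-level class) 
removes the hidden reference and keeps the serialized form self-contained.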



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-15927) Implement one phase commit

2023-09-01 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-15927:
-
Fix Version/s: 3.0.0-beta2

> Implement one phase commit
> --
>
> Key: IGNITE-15927
> URL: https://issues.apache.org/jira/browse/IGNITE-15927
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Scherbakov
>Assignee: Alexey Scherbakov
>Priority: Major
>  Labels: ignite-3, ignite3_performance
> Fix For: 3.0.0-beta2
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> If all keys in an implicit transaction belong to the same partition, it can 
> be committed in one round-trip.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20330) Create an abstraction for building indexes

2023-09-01 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-20330:


 Summary: Create an abstraction for building indexes
 Key: IGNITE-20330
 URL: https://issues.apache.org/jira/browse/IGNITE-20330
 Project: Ignite
  Issue Type: Improvement
Reporter: Kirill Tkalenko
Assignee: Kirill Tkalenko
 Fix For: 3.0.0-beta2


At the moment, two components react to create index events:
1. IndexManager - creates indexes and performs other preparations;
2. PartitionReplicaListener - physically creates indexes, starts building 
them, and also stops building indexes.

The current implementation looks wrong and should be improved by creating an 
abstraction that starts and stops index building.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20329) Cannot build ODBC modules on MacOS

2023-09-01 Thread Andrey Khitrin (Jira)
Andrey Khitrin created IGNITE-20329:
---

 Summary: Cannot build ODBC modules on MacOS
 Key: IGNITE-20329
 URL: https://issues.apache.org/jira/browse/IGNITE-20329
 Project: Ignite
  Issue Type: Bug
  Components: odbc
Reporter: Andrey Khitrin


I tried to build AI3 from master (hash: 59107180b) on my MacBook and ran into 
an obstacle compiling the ODBC driver:

{code}
./gradlew clean build -x check
...
> Task :platforms:cmakeBuildOdbc FAILED
  CMakePlugin.cmakeConfigure - ERRORS: 
In file included from 
/Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/app/application_data_buffer.cpp:18:
In file included from 
/Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/app/application_data_buffer.h:20:
In file included from 
/Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/common_types.h:20:
/Users/zloddey/work/apache/ignite-3/modules/platforms/cpp/ignite/odbc/system/odbc_constants.h:29:10:
 fatal error: 'odbcinst.h' file not found
#include <odbcinst.h>
         ^~~~~~~~~~~~
1 error generated.
make[2]: *** 
[ignite/odbc/CMakeFiles/ignite3-odbc-obj.dir/app/application_data_buffer.cpp.o] 
Error 1
make[1]: *** [ignite/odbc/CMakeFiles/ignite3-odbc-obj.dir/all] Error 2
make: *** [all] Error 2
{code}

I have the unixODBC package installed on my system, and odbcinst.h is located 
in /opt/local/include. The problem is that this header is not found by the 
build scripts.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20328) Cleanup in BaseIgniteAbstractTest

2023-09-01 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-20328:
-
Labels: ignite-3  (was: )

> Cleanup in BaseIgniteAbstractTest
> -
>
> Key: IGNITE-20328
> URL: https://issues.apache.org/jira/browse/IGNITE-20328
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>
> {{BaseIgniteAbstractTest}} is written in a strange way:
> # It does not utilize JUnit lifecycle methods, and subclasses are expected to 
> manually call {{setUpBase}} and {{tearDownBase}};
> # The logger field is declared as static, but is initialized in the 
> constructor. This creates a false impression that the logger can be used in 
> static methods.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20328) Cleanup in BaseIgniteAbstractTest

2023-09-01 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-20328:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Cleanup in BaseIgniteAbstractTest
> -
>
> Key: IGNITE-20328
> URL: https://issues.apache.org/jira/browse/IGNITE-20328
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Major
>
> {{BaseIgniteAbstractTest}} is written in a strange way:
> # It does not utilize JUnit lifecycle methods, and subclasses are expected to 
> manually call {{setUpBase}} and {{tearDownBase}};
> # The logger field is declared as static, but is initialized in the 
> constructor. This creates a false impression that the logger can be used in 
> static methods.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20328) Cleanup in BaseIgniteAbstractTest

2023-09-01 Thread Aleksandr Polovtcev (Jira)
Aleksandr Polovtcev created IGNITE-20328:


 Summary: Cleanup in BaseIgniteAbstractTest
 Key: IGNITE-20328
 URL: https://issues.apache.org/jira/browse/IGNITE-20328
 Project: Ignite
  Issue Type: Improvement
Reporter: Aleksandr Polovtcev
Assignee: Aleksandr Polovtcev


{{BaseIgniteAbstractTest}} is written in a strange way:

# It does not utilize JUnit lifecycle methods, and subclasses are expected to 
manually call {{setUpBase}} and {{tearDownBase}};
# The logger field is declared as static, but is initialized in the 
constructor. This creates a false impression that the logger can be used in 
static methods.
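
For reference, a minimal sketch of the JUnit 5 direction implied by point 1, 
assuming the base class can own its lifecycle (the method names match the 
ticket; the bodies are placeholders):

{code:java}
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;

abstract class BaseIgniteAbstractTestSketch {
    // Invoked automatically by JUnit before each test; subclasses no longer
    // need to remember to call it manually.
    @BeforeEach
    void setUpBase() {
        // common setup
    }

    // Invoked automatically by JUnit after each test.
    @AfterEach
    void tearDownBase() {
        // common teardown
    }
}
{code}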



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19889) Implement observable timestamp on server

2023-09-01 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-19889:
-
Fix Version/s: 3.0.0-beta2

> Implement observable timestamp on server
> 
>
> Key: IGNITE-19889
> URL: https://issues.apache.org/jira/browse/IGNITE-19889
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> *Motivation*
> The client timestamp is used to determine a read timestamp for RO transactions 
> on the client side (IGNITE-19888). For consistent behavior, a similar 
> timestamp needs to be implemented on the server.
> *Implementation note*
> The last server observable timestamp should be updated at least when a 
> transaction is committed.
> Any RO transaction should use this timestamp: for SQL (IGNITE-19898) and 
> through the key-value API (IGNITE-19887).
> *Definition of done*
> All server-side created RO transactions execute in the past, with a timestamp 
> determined by the last observation time.
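
A minimal sketch of the tracking idea, assuming a single monotonically 
advancing timestamp; the class and method names are hypothetical, not the 
actual implementation:

{code:java}
import java.util.concurrent.atomic.AtomicLong;

class ObservableTimestampTracker {
    private final AtomicLong observed = new AtomicLong();

    // Advance the observable timestamp at least on every commit.
    void onTransactionCommit(long commitTimestamp) {
        observed.accumulateAndGet(commitTimestamp, Math::max);
    }

    // RO transactions read "in the past" at the last observed time.
    long readTimestampForRoTx() {
        return observed.get();
    }
}
{code}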



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19889) Implement observable timestamp on server

2023-09-01 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-19889:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Implement observable timestamp on server
> 
>
> Key: IGNITE-19889
> URL: https://issues.apache.org/jira/browse/IGNITE-19889
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> *Motivation*
> The client timestamp is used to determine a read timestamp for RO transactions 
> on the client side (IGNITE-19888). For consistent behavior, a similar 
> timestamp needs to be implemented on the server.
> *Implementation note*
> The last server observable timestamp should be updated at least when a 
> transaction is committed.
> Any RO transaction should use this timestamp: for SQL (IGNITE-19898) and 
> through the key-value API (IGNITE-19887).
> *Definition of done*
> All server-side created RO transactions execute in the past, with a timestamp 
> determined by the last observation time.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17601) Fix flaky tests and Windows incompatibility in tests

2023-09-01 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky updated IGNITE-17601:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Fix flaky tests and Windows incompatibility in tests
> 
>
> Key: IGNITE-17601
> URL: https://issues.apache.org/jira/browse/IGNITE-17601
> Project: Ignite
>  Issue Type: Epic
>Reporter: Mikhail Pochatkin
>Priority: Major
>  Labels: ignite-3
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20327) [Thin clients] Continuous Query EXPIRY/REMOVE events can consume a huge amount of heap

2023-09-01 Thread Mikhail Petrov (Jira)
Mikhail Petrov created IGNITE-20327:
---

 Summary: [Thin clients] Continuous Query EXPIRY/REMOVE events can 
consume a huge amount of heap 
 Key: IGNITE-20327
 URL: https://issues.apache.org/jira/browse/IGNITE-20327
 Project: Ignite
  Issue Type: Task
Reporter: Mikhail Petrov


1. A CQ is registered through the thin client. Assume that we filter out all 
events except cache entry expired events.
2. A huge number of cache entries expire on the cluster, and the corresponding 
CQ events are created on the node that holds the CQ listener.
3. Assume that the thin client connection is slow. Thus, all events designated 
for the thin client are accumulated in the selector queue 
GridSelectorNioSessionImpl#queue before they are sent. Note that all thin 
client messages are stored in serialized form.

There are two main problems:
1. EXPIRY and REMOVE CacheContinuousQueryEntry entries initialize both 
oldValue and newValue with the same object to meet JCache requirements - see 
https://issues.apache.org/jira/browse/IGNITE-8714

During thin client CQ event serialization, we process oldValue and newValue 
independently. As a result, the same value is serialized twice, which can 
significantly increase the amount of memory consumed by 
GridSelectorNioSessionImpl#queue.

2. Messages designated for thin clients are serialized with the POOLED 
allocator. The problem is that the POOLED allocator allocates memory in powers 
of two. As a result, if a serialized message is slightly larger than 2^n 
bytes, twice as much memory will be allocated to store it.

As a result, each EXPIRY/REMOVE CQ event awaiting its send to the thin client 
side can consume  * 4 of Java heap.
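
To illustrate the second problem, here is the power-of-two sizing effect in 
isolation; this is an illustrative sketch, not the allocator's actual code:

{code:java}
public class PooledSizing {
    // Round a message size up to the next power of two, as a pooled
    // allocator does when picking a buffer bucket.
    static int pooledCapacity(int messageSize) {
        int cap = Integer.highestOneBit(messageSize);
        return cap == messageSize ? cap : cap << 1;
    }

    public static void main(String[] args) {
        // A payload just over 64 KiB lands in a 128 KiB bucket: ~2x waste.
        // Combined with the duplicated oldValue/newValue serialization, a
        // single EXPIRY/REMOVE event can pin ~4x the entry size in heap.
        System.out.println(pooledCapacity(65_537)); // prints 131072
    }
}
{code}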









--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20305) Incorrect using of Assertions.assertThrows

2023-09-01 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-20305:
---
Fix Version/s: 3.0.0-beta2

> Incorrect using of Assertions.assertThrows
> --
>
> Key: IGNITE-20305
> URL: https://issues.apache.org/jira/browse/IGNITE-20305
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Yury Gerzhedovich
>Assignee: Yury Gerzhedovich
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The Ignite 3 source code contains incorrect usages of 
> org.junit.jupiter.api.Assertions.assertThrows(Class expectedType, 
> Executable executable, String message), where the developer assumed that the 
> last 'message' parameter means the expected exception message. In fact, it 
> is the failure message reported when the assertion itself fails.
> Let's find all such places and fix the checks.
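
For illustration, a JUnit 5 sketch of the incorrect pattern and its fix; 
{{parsePort}} is a hypothetical method used only for the example:

{code:java}
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class AssertThrowsUsageTest {
    // Hypothetical method under test.
    private static int parsePort(String s) {
        int port = Integer.parseInt(s);
        if (port < 0 || port > 65535) {
            throw new IllegalArgumentException("Invalid port");
        }
        return port;
    }

    @Test
    void incorrectUsage() {
        // BUG PATTERN: the third argument is never compared with the thrown
        // exception's message; it is only the assertion's own failure message.
        assertThrows(IllegalArgumentException.class,
                () -> parsePort("70000"),
                "Invalid port");
    }

    @Test
    void correctUsage() {
        // FIX: capture the exception and assert on its message explicitly.
        IllegalArgumentException ex = assertThrows(
                IllegalArgumentException.class, () -> parsePort("70000"));
        assertEquals("Invalid port", ex.getMessage());
    }
}
{code}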



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-18302) ignite-spring-sessions: IgniteSession serialization drags its parent class

2023-09-01 Thread Andrey Novikov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-18302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761211#comment-17761211
 ] 

Andrey Novikov commented on IGNITE-18302:
-

[~ashapkin] Can I take this issue? It looks like you haven't had enough time to finish it.

> ignite-spring-sessions: IgniteSession serialization drags its parent class
> --
>
> Key: IGNITE-18302
> URL: https://issues.apache.org/jira/browse/IGNITE-18302
> Project: Ignite
>  Issue Type: Bug
>  Components: extensions
>Reporter: Alexandr Shapkin
>Assignee: Alexandr Shapkin
>Priority: Major
>
> In short, there is a bug in the ignite-spring-session-ext implementation.
> We store {{IgniteIndexedSessionRepository$IgniteSession}} in the cache, but 
> it's an inner non-static class, holding an indirect reference to its parent 
> {{IgniteIndexedSessionRepository}}.
> Hence, during serialization Ignite also writes {{name=this$0, type=Object, 
> fieldId=0xCBDD23AA (-874699862)}}, which is the reference to 
> {{IgniteIndexedSessionRepository}}. That leads to the following issues:
>  * we are serializing and saving internal utility data, like {{Ignite 
> ignite}}, {{private IndexResolver indexResolver}}, etc.
>  * one of IgniteIndexedSessionRepository's fields is the IgniteCache itself 
> ({{IgniteCache sessions}}) that basically keeps every session stored so far, 
> leading to a StackOverflowError after some time.
>  
> {code:java}
> [2022-11-25T17:27:29,268][ERROR][sys-stripe-0-#1%USERS_IGNITE%][GridCacheIoManager] Failed processing message [senderId=0f0ca915-d6cd-4580-92a3-1fbc3d2a5722, msg=GridNearSingleGetResponse [futId=1669397231378, res=-547701325, topVer=null, err=null, flags=0]]
> java.lang.StackOverflowError: null
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedMarshallerUtils.descriptorFromCache(OptimizedMarshallerUtils.java:328) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedMarshallerUtils.classDescriptor(OptimizedMarshallerUtils.java:273) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:354) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:211) ~[ignite-core-8.8.22.jar:8.8.22]
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:480) ~[?:?]
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:447) ~[?:?]
>     at org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.readExternal(GridCacheProxyImpl.java:1662) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readExternalizable(OptimizedObjectInputStream.java:569) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedClassDescriptor.read(OptimizedClassDescriptor.java:979) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:359) ~[ignite-core-8.8.22.jar:8.8.22]
>     at org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:211) ~[ignite-core-8.8.22.jar:8.8.22]
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:480) ~[?:?]
>     at java.io.ObjectInputStream.readObject(ObjectInputStream.java:447) ~[?:?]
>     ...
> [2022-11-25T17:27:29,276][ERROR][sys-stripe-0-#1%USERS_IGNITE%][] Critical system error detected. Will be handled accordingly to configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=CRITICAL_ERROR, err=java.lang.StackOverflowError]]{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20317) Meta storage invokes are not completed when events are handled in DZM

2023-09-01 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-20317:
---
Description: 
There are meta storage invokes in DistributionZoneManager in a zone's lifecycle. 
The futures of these invokes are ignored, so after a lifecycle method 
completes, not all of its actions are actually completed. Currently the meta 
storage invokes are done in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# LogicalTopologyEventListener to update the logical topology.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.

We need to return the meta storage futures from the event handlers to ensure 
event linearization.

  was:
There are meta storage invokes in DistributionZoneManager in zone's lifecycle. 
The futures of these invokes are ignored, so after the lifecycle method is 
completed actually not all its actions are completed. Currently it does the 
meta storage invokes in:
# ZonesConfigurationListener#onCreate to init a zone.
# ZonesConfigurationListener#onDelete to clean up the zone data.
# LogicalTopologyEventListener to update logical topology.
# DistributionZoneManager#onUpdateFilter to save data nodes in the meta storage.
# Also saveDataNodesToMetaStorageOnScaleUp and 
saveDataNodesToMetaStorageOnScaleDown do invokes.

Need to return meta storage futures from event handlers to ensure event 
linearization.


> Meta storage invokes are not completed when events are handled in DZM 
> --
>
> Key: IGNITE-20317
> URL: https://issues.apache.org/jira/browse/IGNITE-20317
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> There are meta storage invokes in DistributionZoneManager in a zone's 
> lifecycle. The futures of these invokes are ignored, so after a lifecycle 
> method completes, not all of its actions are actually completed. Currently 
> the meta storage invokes are done in:
> # ZonesConfigurationListener#onCreate to init a zone.
> # ZonesConfigurationListener#onDelete to clean up the zone data.
> # LogicalTopologyEventListener to update the logical topology.
> # DistributionZoneManager#onUpdateFilter to save data nodes in the meta 
> storage.
> We need to return the meta storage futures from the event handlers to ensure 
> event linearization.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19499) TableManager should listen CatalogService events instead of configuration

2023-09-01 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko reassigned IGNITE-19499:


Assignee: Andrey Mashenkov  (was: Kirill Tkalenko)

> TableManager should listen CatalogService events instead of configuration
> -
>
> Key: IGNITE-19499
> URL: https://issues.apache.org/jira/browse/IGNITE-19499
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Andrey Mashenkov
>Assignee: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> As of now, TableManager listens to configuration events to create internal 
> structures.
> Let's make TableManager listen to CatalogService events instead.
> Note: Some tests may fail due to changed guarantees and related tickets being 
> incomplete. So, let's do this in a separate feature branch.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20326) Meta storage invokes are not completed when data nodes are recalculated in DZM

2023-09-01 Thread Sergey Uttsel (Jira)
Sergey Uttsel created IGNITE-20326:
--

 Summary: Meta storage invokes are not completed when data nodes 
are recalculated in DZM
 Key: IGNITE-20326
 URL: https://issues.apache.org/jira/browse/IGNITE-20326
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Uttsel


There are meta storage invokes in DistributionZoneManager in a zone's lifecycle. 
The futures of these invokes are ignored, so after a lifecycle method 
completes, not all of its actions are actually completed. Such invokes are 
used in:
# DistributionZoneManager#saveDataNodesToMetaStorageOnScaleUp
# DistributionZoneManager#saveDataNodesToMetaStorageOnScaleDown
to recalculate data nodes when timers fire.

We need to check whether we should await the futures from these invokes.
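
A minimal sketch of the awaiting direction, with hypothetical names 
({{MetaStorageFacade}}, {{invokeDataNodesUpdate}}); the real handlers would 
return their invoke futures instead of dropping them:

{code:java}
import java.util.concurrent.CompletableFuture;

class ScaleTimerHandler {
    interface MetaStorageFacade {
        CompletableFuture<Void> invokeDataNodesUpdate(int zoneId, long revision);
    }

    private final MetaStorageFacade metaStorage;

    ScaleTimerHandler(MetaStorageFacade metaStorage) {
        this.metaStorage = metaStorage;
    }

    // Return the invoke future instead of ignoring it, so a caller can
    // await it (or chain on it) and keep timer events linearized.
    CompletableFuture<Void> onScaleUpTimerFired(int zoneId, long revision) {
        return metaStorage.invokeDataNodesUpdate(zoneId, revision);
    }
}
{code}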



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20325) Wait for table assignments readiness on table methods invocations

2023-09-01 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-20325:
--

 Summary: Wait for table assignments readiness on table methods 
invocations
 Key: IGNITE-20325
 URL: https://issues.apache.org/jira/browse/IGNITE-20325
 Project: Ignite
  Issue Type: Improvement
Reporter: Roman Puchkovskiy
 Fix For: 3.0.0-beta2


Basically, there are two stages of internal table construction:
 # Create the TableImpl object and put it into tablesByIdVv
 # Start RAFT groups and mark assignments as ready

Currently, when obtaining a table via {{TableManager}}, we get a future that 
only completes when both stages are complete. But it might be useful to return 
a table when just the first stage is complete. In that case, we'll need to 
wait for stage 2 inside TableImpl method implementations (only for those 
methods that need assignments to be ready). Methods that don't need 
assignments (like {{name()}}) will work immediately.
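
A sketch of the proposed split, assuming a readiness future completed at 
stage 2; the names and method bodies are hypothetical:

{code:java}
import java.util.concurrent.CompletableFuture;

class TableSketch {
    private final String name;
    // Completed once RAFT groups are started and assignments are ready.
    private final CompletableFuture<Void> assignmentsReady;

    TableSketch(String name, CompletableFuture<Void> assignmentsReady) {
        this.name = name;
        this.assignmentsReady = assignmentsReady;
    }

    // Needs no assignments: usable right after stage 1.
    String name() {
        return name;
    }

    // Needs assignments: the read is chained behind the readiness future.
    CompletableFuture<String> get(String key) {
        return assignmentsReady.thenApply(unused -> readFromPartition(key));
    }

    private String readFromPartition(String key) {
        return "value-of-" + key; // placeholder for the real read path
    }
}
{code}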



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20324) Implement integration tests to cover questions in CLI

2023-09-01 Thread Dmitry Baranov (Jira)
Dmitry Baranov created IGNITE-20324:
---

 Summary: Implement integration tests to cover questions in CLI
 Key: IGNITE-20324
 URL: https://issues.apache.org/jira/browse/IGNITE-20324
 Project: Ignite
  Issue Type: Improvement
  Components: cli
Affects Versions: 3.0.0-beta1
Reporter: Dmitry Baranov


Currently, all CLI integration tests use the bindAnswer() method to prepare 
answers, but there are no tests that validate the questions themselves. This 
may hide some issues.

The CLI test framework needs to be updated to check questions as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)