[jira] [Assigned] (IGNITE-20360) Implement the set of zone supported storages

2023-12-06 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev reassigned IGNITE-20360:


Assignee: Mirza Aliev

> Implement the set of zone supported storages
> 
>
> Key: IGNITE-20360
> URL: https://issues.apache.org/jira/browse/IGNITE-20360
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> According to IGNITE-20357, we need an appropriate zone filter that selects 
> nodes based on their available storages.
> *Definition of done*
> - A zone has filters that can be used unambiguously to check whether a table 
> can be "deployed" in this zone (a sketch follows below)
> *Notes*
> - Avoid filter altering for now (but add the appropriate event types)
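
A hypothetical Java sketch of such a storage-based filter check; the class name and the node-attribute shape are illustrative assumptions, not the actual zone filter API:

{code:java}
import java.util.Set;
import java.util.function.Predicate;

// Hypothetical sketch: a node passes the zone filter only if it provides
// every storage engine that the zone requires.
final class ZoneStorageFilter implements Predicate<Set<String>> {
    private final Set<String> requiredStorages;

    ZoneStorageFilter(Set<String> requiredStorages) {
        this.requiredStorages = requiredStorages;
    }

    /** {@code nodeStorages} - the storage engines available on a candidate node. */
    @Override
    public boolean test(Set<String> nodeStorages) {
        return nodeStorages.containsAll(requiredStorages);
    }
}
{code}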



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20117) Implement index backfill process

2023-12-06 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20117:
---
Description: 
Currently, we have a backfill process for an index (aka 'index build'). It needs 
to be tuned to satisfy the following requirements (a sketch of rules 2-3 follows 
the list):
 # When starting the backfill process, we must first wait till 
safeTime(partition)>=’STARTING state activation timestamp’ to avoid a race 
between starting the backfill process and executing writes that happened before 
the index creation (as these writes should not write to the index).
 # If, for a row found during the backfill process, there are row versions with 
commitTs <= indexCreationActivationTs, then the most recent of them is written 
to the index.
 # If, for a row found during the backfill process, there are row versions with 
commitTs > indexCreationActivationTs, then all of them are added to the index; 
otherwise, if there are no such row versions but there is a write intent (and 
the transaction to which it belongs started before indexCreationActivationTs), 
it is added to the index.
 # When the backfill process is finished on all partitions, another schema 
update is installed that declares that the index is in the READY state. This 
installation should be conditional: if the index is still STARTING, it should 
succeed; otherwise (if the index was removed by installing a concurrent 
‘delete from the Catalog’ schema update due to a DROP command), nothing should 
be done here.
 # The backfill process stops early as soon as it detects that the index has 
moved to the ‘deleted from the Catalog’ state. Each step of the process might 
be supplied with a timestamp (from the same clock that moves the partition’s 
SafeTime ahead), and that timestamp could be used to check the index’s 
existence; this allows avoiding a race between index destruction and the 
backfill process.
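
A minimal Java sketch of rules 2 and 3 above; {{RowVersion}}, {{Index}} and their methods are hypothetical names (not the actual Ignite 3 interfaces), and the version list is assumed to be ordered from newest to oldest:

{code:java}
import java.util.List;

// Illustrative only: how a single row could be backfilled into the index.
static void backfillRow(List<RowVersion> committedVersions, // newest first
                        RowVersion writeIntent,             // may be null
                        HybridTimestamp activationTs,
                        Index index) {
    boolean hasNewerVersions = false;

    for (RowVersion v : committedVersions) {
        if (v.commitTimestamp().compareTo(activationTs) > 0) {
            index.put(v); // rule 3: every version committed after activation
            hasNewerVersions = true;
        } else {
            index.put(v); // rule 2: only the most recent older version
            break;        // remaining versions are even older, so stop
        }
    }

    // Rule 3, second clause: no versions newer than activation, but a write
    // intent of a transaction that began before the activation timestamp.
    if (!hasNewerVersions && writeIntent != null
            && writeIntent.txBeginTimestamp().compareTo(activationTs) < 0) {
        index.put(writeIntent);
    }
}
{code}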

  was:
Currently, we have backfill process for an index (aka 'index build'). It needs 
to be tuned to satisfy the following requirements:

# When starting the backfill process, we must first wait till 
safeTime(partition)>=’STARTING state activation timestamp’ to avoid a race 
between starting the backfill process and executing writes that are before the 
index creation (as these writes should not write to the index).
# If for a row found during the backfill process, there are row versions with 
commitTs <= indexCreationActivationTs, then the most recent of them is written 
to the index
# If for a row found during the backfill process, there are row versions with 
commitTs > indexCreationActivationTs, then the oldest of them is added to the 
index; otherwise, if there are no such row versions, but there is a write 
intent (and the transaction to which it belongs started before 
indexCreationActivationTs), it is added to the index
# When the backfill process is finished on all partitions, another schema 
update is installed that declares that the index is in the READY state. This 
installation should be conditional. That is, if the index is still STARTING, it 
should succeed; otherwise (if the index was removed by installing a concurrent 
‘delete from the Catalog’ schema update due to a DROP command), nothing should 
be done here
# The backfill process stops early as soon as it detects that the index moved 
to ‘deleted from the Catalog’ state. Each step of the process might be supplied 
with a timestamp (from the same clock that moves the partition’s SafeTime 
ahead) and that timestamp could be used to check the index existence; this will 
allow to avoid a race between index destruction and the backfill process.



> Implement index backfill process
> 
>
> Key: IGNITE-20117
> URL: https://issues.apache.org/jira/browse/IGNITE-20117
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Currently, we have a backfill process for an index (aka 'index build'). It 
> needs to be tuned to satisfy the following requirements:
>  # When starting the backfill process, we must first wait till 
> safeTime(partition)>=’STARTING state activation timestamp’ to avoid a race 
> between starting the backfill process and executing writes that are before 
> the index creation (as these writes should not write to the index).
>  # If for a row found during the backfill process, there are row versions 
> with commitTs <= indexCreationActivationTs, then the most recent of them is 
> written to the index
>  # If for a row found during the backfill process, there are row versions 
> with commitTs > indexCreationActivationTs, then all of them are added to the 
> index; otherwise, if there are no such row versions, but there is a write 
> intent (and the transaction to which it belongs started before 

[jira] [Created] (IGNITE-21031) Calcite engine. Query fails on performance statistics in case of nested scans

2023-12-06 Thread Aleksey Plekhanov (Jira)
Aleksey Plekhanov created IGNITE-21031:
--

 Summary: Calcite engine. Query fails on performance statistics in 
case of nested scans
 Key: IGNITE-21031
 URL: https://issues.apache.org/jira/browse/IGNITE-21031
 Project: Ignite
  Issue Type: Bug
Reporter: Aleksey Plekhanov
Assignee: Aleksey Plekhanov


A nested scan can be performed by the Calcite engine, for example in the case of 
UNION ALL: when the first table scan is completed (and the {{downstream().end()}} 
method is invoked), the UNION ALL operator proceeds to the next table scan.

Reproducer:
{code:java}
public void testPerformanceStatisticsNestedScan() throws Exception {
    sql(grid(0), "CREATE TABLE test_perf_stat_nested (a INT) WITH template=REPLICATED");
    sql(grid(0), "INSERT INTO test_perf_stat_nested VALUES (0), (1), (2), (3), (4)");

    startCollectStatistics();

    sql(grid(0), "SELECT * FROM test_perf_stat_nested UNION ALL SELECT * FROM test_perf_stat_nested");
}{code}
 Fails on:
{noformat}
    at 
org.apache.ignite.internal.metric.IoStatisticsQueryHelper.startGatheringQueryStatistics(IoStatisticsQueryHelper.java:35)
    at 
org.apache.ignite.internal.processors.query.calcite.exec.tracker.PerformanceStatisticsIoTracker.startTracking(PerformanceStatisticsIoTracker.java:65)
    at 
org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanStorageNode.processNextBatch(ScanStorageNode.java:68)
    at 
org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanNode.push(ScanNode.java:145)
    at 
org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanNode.request(ScanNode.java:95)
    at 
org.apache.ignite.internal.processors.query.calcite.exec.rel.UnionAllNode.end(UnionAllNode.java:79)
    at 
org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanNode.processNextBatch(ScanNode.java:185)
    at 
org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanStorageNode.processNextBatch(ScanStorageNode.java:70)
    at 
org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanNode.push(ScanNode.java:145)
    at 
org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanNode.request(ScanNode.java:95)
    at 
org.apache.ignite.internal.processors.query.calcite.exec.rel.UnionAllNode.request(UnionAllNode.java:56)
{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20310) Meta storage invokes are not completed when DZM start is completed

2023-12-06 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20310:
-
Description: 
h3. *Motivation*

There are meta storage invokes in the DistributionZoneManager start. Currently 
they are performed in DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers for the case when a filter update was 
handled before the DZM stopped but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage have not been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), this recalculation 
won't happen because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().get()}} (see the sketch 
below).

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component startup flow so that a 
node is not ready to work until all {{IgniteComponent#start}} futures are 
completed. For example, we can chain our futures on 
{{IgniteImpl#recoverComponentsStateOnStart}}, so that the components' futures 
complete before {{metaStorageMgr.deployWatches()}}.
 In {{DistributionZoneManager#start}} we can return {{CompletableFuture.allOf}} 
of the futures that need to be completed in {{DistributionZoneManager#start}}.
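
A minimal Java sketch of the easier option, reusing the method names from the Motivation section; the return types and the {{defaultZoneId}} argument are assumptions:

{code:java}
// Sketch: start() no longer ignores the invoke futures and returns only
// after both meta storage invokes have been applied.
public void start() {
    CompletableFuture<?> initFut =
            initDataNodesAndTriggerKeysInMetaStorage(defaultZoneId); // assumed argument
    CompletableFuture<?> timersFut = restoreTimers();

    CompletableFuture.allOf(initFut, timersFut).join();
}
{code}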



h3. *Definition of done*

All asynchronous logic in {{DistributionZoneManager#start}} is done before a 
node is ready to work, in particular, ready to interact with zones.

UPD:
We decided to implement the easier way; the harder one will be implemented in a 
separate ticket: https://issues.apache.org/jira/browse/IGNITE-20477


  was:
h3. *Motivation*

There are meta storage invokes in DistributionZoneManager start. Currently it 
does the meta storage invokes in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case when a filter update was 
handled before DZM stop, but it didn't update data nodes.

Futures of these invokes are ignored. So after the start method is completed 
actually not all start actions are completed. It can lead to the following 
situation: 
* Initialisation of the default zone is hanged for some reason even after full 
restart of the cluster.
* That means that all data nodes related keys in metastorage haven't been 
initialised.
* For example, if user add some new node, and scale up timer is immediate, 
which leads to immediate data nodes recalculation, this recalculation won't 
happen, because data nodes key have not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to be completed within the 
{{DistributionZoneManager#start}} with {{ms.invoke().get()}}

h4. Harder
We can enhance {{IgniteComponent#start}}, so it could return Completable 
future, and after that we need to change the flow of starting components, so 
node is not ready to work until all {{IgniteComponent#start}} futures are 
completed. For example, we can chain our futures on 
{{IgniteImpl#recoverComponentsStateOnStart}}, so components' futures are 
completed before {{metaStorageMgr.deployWatches()}}.
 In {{DistributionZoneManager#start}}  we can return 
{{CompletableFuture.allOf}} features, that are needed to be completed in the 
{{DistributionZoneManager#start}}



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.

UPD: 
We decided to implement the easier way, the harder will be implemented in the 
separate ticket https://issues.apache.org/jira/browse/IGNITE-20477



> Meta storage invokes are not completed  when DZM start is completed
> ---
>
> Key: IGNITE-20310
> URL: https://issues.apache.org/jira/browse/IGNITE-20310
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: dzm-reviewed, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager start. Currently 

[jira] [Updated] (IGNITE-20310) Meta storage invokes are not completed when DZM start is completed

2023-12-06 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20310:
-
Description: 
h3. *Motivation*

There are meta storage invokes in the DistributionZoneManager start. Currently 
they are performed in DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers for the case when a filter update was 
handled before the DZM stopped but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage have not been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), this recalculation 
won't happen because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().get()}}

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component startup flow so that a 
node is not ready to work until all {{IgniteComponent#start}} futures are 
completed. For example, we can chain our futures on 
{{IgniteImpl#recoverComponentsStateOnStart}}, so that the components' futures 
complete before {{metaStorageMgr.deployWatches()}}.
 In {{DistributionZoneManager#start}} we can return {{CompletableFuture.allOf}} 
of the futures that need to be completed in {{DistributionZoneManager#start}}.



h3. *Definition of done*

All asynchronous logic in {{DistributionZoneManager#start}} is done before a 
node is ready to work, in particular, ready to interact with zones.

UPD: 
We decided to implement the easier way; the harder one will be implemented in a 
separate ticket: https://issues.apache.org/jira/browse/IGNITE-20477


  was:
h3. *Motivation*

There are meta storage invokes in DistributionZoneManager start. Currently it 
does the meta storage invokes in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case when a filter update was 
handled before DZM stop, but it didn't update data nodes.

Futures of these invokes are ignored. So after the start method is completed 
actually not all start actions are completed. It can lead to the following 
situation: 
* Initialisation of the default zone is hanged for some reason even after full 
restart of the cluster.
* That means that all data nodes related keys in metastorage haven't been 
initialised.
* For example, if user add some new node, and scale up timer is immediate, 
which leads to immediate data nodes recalculation, this recalculation won't 
happen, because data nodes key have not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to be completed within the 
{{DistributionZoneManager#start}} with {{ms.invoke().get()}}

h4. Harder
We can enhance {{IgniteComponent#start}}, so it could return Completable 
future, and after that we need to change the flow of starting components, so 
node is not ready to work until all {{IgniteComponent#start}} futures are 
completed. For example, we can chain our futures on 
{{IgniteImpl#recoverComponentsStateOnStart}}, so components' futures are 
completed before {{metaStorageMgr.deployWatches()}}.
 In {{DistributionZoneManager#start}}  we can return 
{{CompletableFuture.allOf}} features, that are needed to be completed in the 
{{DistributionZoneManager#start}}



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.

UPD: 
We decided to implement the easier way, the harder will be implemented in the 
separate ticket



> Meta storage invokes are not completed  when DZM start is completed
> ---
>
> Key: IGNITE-20310
> URL: https://issues.apache.org/jira/browse/IGNITE-20310
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: dzm-reviewed, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager start. Currently it 
> does the meta storage invokes in 
> 

[jira] [Updated] (IGNITE-20310) Meta storage invokes are not completed when DZM start is completed

2023-12-06 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20310:
-
Description: 
h3. *Motivation*

There are meta storage invokes in the DistributionZoneManager start. Currently 
they are performed in DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers for the case when a filter update was 
handled before the DZM stopped but didn't update data nodes.

The futures of these invokes are ignored, so when the start method completes, 
not all start actions have actually completed. This can lead to the following 
situation: 
* Initialisation of the default zone hangs for some reason, even after a full 
restart of the cluster.
* That means the data-nodes-related keys in the meta storage have not been 
initialised.
* For example, if a user adds a new node and the scale-up timer is immediate 
(which leads to an immediate data nodes recalculation), this recalculation 
won't happen because the data nodes key has not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to complete within 
{{DistributionZoneManager#start}} with {{ms.invoke().get()}}

h4. Harder
We can enhance {{IgniteComponent#start}} so that it returns a 
{{CompletableFuture}}, and then change the component startup flow so that a 
node is not ready to work until all {{IgniteComponent#start}} futures are 
completed. For example, we can chain our futures on 
{{IgniteImpl#recoverComponentsStateOnStart}}, so that the components' futures 
complete before {{metaStorageMgr.deployWatches()}}.
 In {{DistributionZoneManager#start}} we can return {{CompletableFuture.allOf}} 
of the futures that need to be completed in {{DistributionZoneManager#start}}.



h3. *Definition of done*

All asynchronous logic in {{DistributionZoneManager#start}} is done before a 
node is ready to work, in particular, ready to interact with zones.

UPD: 
We decided to implement the easier way; the harder one will be implemented in a 
separate ticket


  was:
h3. *Motivation*

There are meta storage invokes in DistributionZoneManager start. Currently it 
does the meta storage invokes in 
DistributionZoneManager#createOrRestoreZoneState:
# DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init the 
default zone.
# DistributionZoneManager#restoreTimers in case when a filter update was 
handled before DZM stop, but it didn't update data nodes.

Futures of these invokes are ignored. So after the start method is completed 
actually not all start actions are completed. It can lead to the following 
situation: 
* Initialisation of the default zone is hanged for some reason even after full 
restart of the cluster.
* That means that all data nodes related keys in metastorage haven't been 
initialised.
* For example, if user add some new node, and scale up timer is immediate, 
which leads to immediate data nodes recalculation, this recalculation won't 
happen, because data nodes key have not been initialised. 

h3. *Possible solutions*
h4. Easier
We just need to wait for all async logic to be completed within the 
{{DistributionZoneManager#start}} with {{ms.invoke().get()}}

h4. Harder
We can enhance {{IgniteComponent#start}}, so it could return Completable 
future, and after that we need to change the flow of starting components, so 
node is not ready to work until all {{IgniteComponent#start}} futures are 
completed. For example, we can chain our futures on 
{{IgniteImpl#recoverComponentsStateOnStart}}, so components' futures are 
completed before {{metaStorageMgr.deployWatches()}}.
 In {{DistributionZoneManager#start}}  we can return 
{{CompletableFuture.allOf}} features, that are needed to be completed in the 
{{DistributionZoneManager#start}}



h3. *Definition of done*

All asynchronous logic in the {{DistributionZoneManager#start}} is done before 
a node is ready to work, in particular, ready to interact with zones.



> Meta storage invokes are not completed  when DZM start is completed
> ---
>
> Key: IGNITE-20310
> URL: https://issues.apache.org/jira/browse/IGNITE-20310
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: dzm-reviewed, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager start. Currently it 
> does the meta storage invokes in 
> DistributionZoneManager#createOrRestoreZoneState:
> # DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init 
> the default zone.
> # 

[jira] [Updated] (IGNITE-20976) Sql. Multistatement dynamic parameters adjusting works incorrectly for DELETE operator.

2023-12-06 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-20976:
--
Fix Version/s: 3.0.0-beta2

> Sql. Multistatement dynamic parameters adjusting works incorrectly for DELETE 
> operator.
> -
>
> Key: IGNITE-20976
> URL: https://issues.apache.org/jira/browse/IGNITE-20976
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Pavel Pereslegin
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
> Attachments: overwrite_sql_call_create.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A normalized statement cannot be obtained for a 'DELETE' statement with 
> dynamic parameters.
> It seems that after adjusting the dynamic parameters (see 
> {{ScriptParseResult.SqlDynamicParamsAdjuster}}), the new tree cannot be 
> converted to a string.
> Reproducer:
> {code:java}
> @Test
> public void testScriptDelete() {
>     String query1 = "SELECT 1;";
>     String query2 = "DELETE FROM TEST WHERE ID=?";
>     // Parse separately - ok.
>     StatementParseResult res1 = IgniteSqlParser.parse(query1, StatementParseResult.MODE);
>     StatementParseResult res2 = IgniteSqlParser.parse(query2, StatementParseResult.MODE);
>     System.out.println(res1.statement().toString());
>     System.out.println(res2.statement().toString());
>     // Parse script throws "UnsupportedOperationException" for `toString()` from `DELETE` statement.
>     ScriptParseResult scriptRes = IgniteSqlParser.parse(query1 + query2, ScriptParseResult.MODE);
>     for (StatementParseResult res : scriptRes.results()) {
>         System.out.println(res.statement().toString());
>     }
> }
> {code}
> Output
> {noformat}
> SELECT 1
> DELETE FROM `TEST`
> WHERE `ID` = ?
> SELECT 1
> java.lang.UnsupportedOperationException: class 
> org.apache.calcite.sql.SqlSyntax$7: SPECIAL
>   at org.apache.calcite.util.Util.needToImplement(Util.java:1119)
>   at org.apache.calcite.sql.SqlSyntax$7.unparse(SqlSyntax.java:129)
>   at org.apache.calcite.sql.SqlOperator.unparse(SqlOperator.java:385)
>   at org.apache.calcite.sql.SqlDialect.unparseCall(SqlDialect.java:466)
>   at org.apache.calcite.sql.SqlCall.unparse(SqlCall.java:131)
>   at org.apache.calcite.sql.SqlNode.toSqlString(SqlNode.java:156)
>   at org.apache.calcite.sql.SqlNode.toString(SqlNode.java:131)
>   at 
> org.apache.ignite.internal.sql.engine.sql.IgniteSqlParserTest.testScriptDelete(IgniteSqlParserTest.java:59)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21030) Sql. Dump debug info if the ExecutionService could not be stopped within a timeout

2023-12-06 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-21030:
--
Labels: ignite-3  (was: )

> Sql. Dump debug info if the ExecutionService could not be stopped within a 
> timeout
> --
>
> Key: IGNITE-21030
> URL: https://issues.apache.org/jira/browse/IGNITE-21030
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ignite-3
>
> To investigate the issue reported in IGNITE-20750 (about the ExecutionService 
> hanging during node shutdown), we need to collect more debugging information 
> when the SQL execution service fails to stop.
> It is proposed to add a timeout to the ExecutionServiceImpl stop procedure and 
> output some diagnostic information to the log when the timeout expires.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21030) Sql. Dump debug info if the ExecutionService could not be stopped within a timeout

2023-12-06 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-21030:
--
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Sql. Dump debug info if the ExecutionService could not be stopped within a 
> timeout
> --
>
> Key: IGNITE-21030
> URL: https://issues.apache.org/jira/browse/IGNITE-21030
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ignite-3
>
> To investigate the issue reported in IGNITE-20750 (about the ExecutionService 
> hanging during node shutdown), we need to collect more debugging information 
> when the SQL execution service fails to stop.
> It is proposed to add a timeout to the ExecutionServiceImpl stop procedure and 
> output some diagnostic information to the log when the timeout expires.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21030) Sql. Dump debug info if the ExecutionService could not be stopped within a timeout

2023-12-06 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-21030:
--
Component/s: sql

> Sql. Dump debug info if the ExecutionService could not be stopped within a 
> timeout
> --
>
> Key: IGNITE-21030
> URL: https://issues.apache.org/jira/browse/IGNITE-21030
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ignite-3
>
> To investigate the issue reported in IGNITE-20750 (about the ExecutionService 
> hanging during node shutdown), we need to collect more debugging information 
> when the SQL execution service fails to stop.
> It is proposed to add a timeout to the ExecutionServiceImpl stop procedure and 
> output some diagnostic information to the log when the timeout expires.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21030) Sql. Dump debug info if the ExecutionService could not be stopped within a timeout

2023-12-06 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-21030:
--
Summary: Sql. Dump debug info if the ExecutionService could not be stopped 
within a timeout  (was: Sql. Dump debug info to the log, if the 
ExecutionService could not be stopped within a timeout)

> Sql. Dump debug info if the ExecutionService could not be stopped within a 
> timeout
> --
>
> Key: IGNITE-21030
> URL: https://issues.apache.org/jira/browse/IGNITE-21030
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>
> To investigate the issue reported in IGNITE-20750 (about the ExecutionService 
> hanging during node shutdown), we need to collect more debugging information 
> when the SQL execution service fails to stop.
> It is proposed to add a timeout to the ExecutionServiceImpl stop procedure and 
> output some diagnostic information to the log when the timeout expires.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21030) Sql. Dump debug info to the log, if the ExecutionService could not be stopped within a timeout

2023-12-06 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-21030:
--
Summary: Sql. Dump debug info to the log, if the ExecutionService could not 
be stopped within a timeout  (was: Sql. Dump debug info to the log, if the 
ExecutionService could not be stopped within a specfied timeout)

> Sql. Dump debug info to the log, if the ExecutionService could not be stopped 
> within a timeout
> --
>
> Key: IGNITE-21030
> URL: https://issues.apache.org/jira/browse/IGNITE-21030
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>
> To investigate the issue reported in IGNITE-20750 (about the ExecutionService 
> hanging during node shutdown), we need to collect more debugging information 
> when the SQL execution service fails to stop.
> It is proposed to add a timeout to the ExecutionServiceImpl stop procedure and 
> output some diagnostic information to the log when the timeout expires.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (IGNITE-20750) ExecutionServiceImpl#stop() may hang forever

2023-12-06 Thread Pavel Pereslegin (Jira)


[ https://issues.apache.org/jira/browse/IGNITE-20750 ]


Pavel Pereslegin deleted comment on IGNITE-20750:
---

was (Author: xtern):
Last stacktrace that was observed on TC:

{noformat}
ForkJoinPool.commonPool-worker-7" #28 daemon prio=5 os_prio=0 cpu=7071.91ms 
elapsed=3563.38s tid=0x7f2ec0024800 nid=0x2a9b2 waiting on condition  
[0x7f2eab70d000]
   java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.17/Native Method)
- parking to wait for  <0x00070307ce20> (a 
java.util.concurrent.CompletableFuture$Signaller)
at 
java.util.concurrent.locks.LockSupport.park(java.base@11.0.17/LockSupport.java:194)
at 
java.util.concurrent.CompletableFuture$Signaller.block(java.base@11.0.17/CompletableFuture.java:1796)
at 
java.util.concurrent.ForkJoinPool.managedBlock(java.base@11.0.17/ForkJoinPool.java:3118)
at 
java.util.concurrent.CompletableFuture.waitingGet(java.base@11.0.17/CompletableFuture.java:1823)
at 
java.util.concurrent.CompletableFuture.get(java.base@11.0.17/CompletableFuture.java:1998)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl.stop(ExecutionServiceImpl.java:413)
at 
org.apache.ignite.internal.sql.engine.SqlQueryProcessor$$Lambda$2061/0x000800bf2840.close(Unknown
 Source)
at 
org.apache.ignite.internal.util.IgniteUtils.lambda$closeAll$0(IgniteUtils.java:534)
at 
org.apache.ignite.internal.util.IgniteUtils$$Lambda$2014/0x000800be7040.accept(Unknown
 Source)
at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(java.base@11.0.17/ForEachOps.java:183)
at 
java.util.stream.ReferencePipeline$2$1.accept(java.base@11.0.17/ReferencePipeline.java:177)
at 
java.util.stream.ReferencePipeline$3$1.accept(java.base@11.0.17/ReferencePipeline.java:195)
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(java.base@11.0.17/ArrayList.java:1655)
at 
java.util.stream.AbstractPipeline.copyInto(java.base@11.0.17/AbstractPipeline.java:484)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(java.base@11.0.17/AbstractPipeline.java:474)
at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(java.base@11.0.17/ForEachOps.java:150)
at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(java.base@11.0.17/ForEachOps.java:173)
at 
java.util.stream.AbstractPipeline.evaluate(java.base@11.0.17/AbstractPipeline.java:234)
at 
java.util.stream.ReferencePipeline.forEach(java.base@11.0.17/ReferencePipeline.java:497)
at 
org.apache.ignite.internal.util.IgniteUtils.closeAll(IgniteUtils.java:532)
at 
org.apache.ignite.internal.sql.engine.SqlQueryProcessor.stop(SqlQueryProcessor.java:364)
- locked <0x00072fa41df0> (a 
org.apache.ignite.internal.sql.engine.SqlQueryProcessor)
at 
org.apache.ignite.internal.app.LifecycleManager.lambda$stopAllComponents$1(LifecycleManager.java:133)
at 
org.apache.ignite.internal.app.LifecycleManager$$Lambda$2057/0x000800bf3840.accept(Unknown
 Source)
at 
java.util.Iterator.forEachRemaining(java.base@11.0.17/Iterator.java:133)
at 
org.apache.ignite.internal.app.LifecycleManager.stopAllComponents(LifecycleManager.java:131)
- locked <0x00072fa41ac0> (a 
org.apache.ignite.internal.app.LifecycleManager)
at 
org.apache.ignite.internal.app.LifecycleManager.stopNode(LifecycleManager.java:115)
at org.apache.ignite.internal.app.IgniteImpl.stop(IgniteImpl.java:899)
at 
org.apache.ignite.internal.app.IgnitionImpl.lambda$stop$0(IgnitionImpl.java:113)
at 
org.apache.ignite.internal.app.IgnitionImpl$$Lambda$2016/0x000800be7840.apply(Unknown
 Source)
at 
java.util.concurrent.ConcurrentHashMap.computeIfPresent(java.base@11.0.17/ConcurrentHashMap.java:1822)
- locked <0x000724d2b740> (a 
java.util.concurrent.ConcurrentHashMap$Node)
at 
org.apache.ignite.internal.app.IgnitionImpl.stop(IgnitionImpl.java:111)
at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:96)
at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:82)
at 
org.apache.ignite.internal.Cluster.lambda$shutdown$11(Cluster.java:458)
at 
org.apache.ignite.internal.Cluster$$Lambda$2278/0x000800dd7c40.accept(Unknown
 Source)
at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(java.base@11.0.17/ForEachOps.java:183)
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(java.base@11.0.17/ArrayList.java:1655)
at 
java.util.stream.AbstractPipeline.copyInto(java.base@11.0.17/AbstractPipeline.java:484)
at 
java.util.stream.ForEachOps$ForEachTask.compute(java.base@11.0.17/ForEachOps.java:290)
at 
java.util.concurrent.CountedCompleter.exec(java.base@11.0.17/CountedCompleter.java:746)
at 

[jira] [Commented] (IGNITE-20750) ExecutionServiceImpl#stop() may hang forever

2023-12-06 Thread Pavel Pereslegin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793781#comment-17793781
 ] 

Pavel Pereslegin commented on IGNITE-20750:
---

Last stacktrace that was observed on TC:

{noformat}
ForkJoinPool.commonPool-worker-7" #28 daemon prio=5 os_prio=0 cpu=7071.91ms 
elapsed=3563.38s tid=0x7f2ec0024800 nid=0x2a9b2 waiting on condition  
[0x7f2eab70d000]
   java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.17/Native Method)
- parking to wait for  <0x00070307ce20> (a 
java.util.concurrent.CompletableFuture$Signaller)
at 
java.util.concurrent.locks.LockSupport.park(java.base@11.0.17/LockSupport.java:194)
at 
java.util.concurrent.CompletableFuture$Signaller.block(java.base@11.0.17/CompletableFuture.java:1796)
at 
java.util.concurrent.ForkJoinPool.managedBlock(java.base@11.0.17/ForkJoinPool.java:3118)
at 
java.util.concurrent.CompletableFuture.waitingGet(java.base@11.0.17/CompletableFuture.java:1823)
at 
java.util.concurrent.CompletableFuture.get(java.base@11.0.17/CompletableFuture.java:1998)
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionServiceImpl.stop(ExecutionServiceImpl.java:413)
at 
org.apache.ignite.internal.sql.engine.SqlQueryProcessor$$Lambda$2061/0x000800bf2840.close(Unknown
 Source)
at 
org.apache.ignite.internal.util.IgniteUtils.lambda$closeAll$0(IgniteUtils.java:534)
at 
org.apache.ignite.internal.util.IgniteUtils$$Lambda$2014/0x000800be7040.accept(Unknown
 Source)
at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(java.base@11.0.17/ForEachOps.java:183)
at 
java.util.stream.ReferencePipeline$2$1.accept(java.base@11.0.17/ReferencePipeline.java:177)
at 
java.util.stream.ReferencePipeline$3$1.accept(java.base@11.0.17/ReferencePipeline.java:195)
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(java.base@11.0.17/ArrayList.java:1655)
at 
java.util.stream.AbstractPipeline.copyInto(java.base@11.0.17/AbstractPipeline.java:484)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(java.base@11.0.17/AbstractPipeline.java:474)
at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(java.base@11.0.17/ForEachOps.java:150)
at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(java.base@11.0.17/ForEachOps.java:173)
at 
java.util.stream.AbstractPipeline.evaluate(java.base@11.0.17/AbstractPipeline.java:234)
at 
java.util.stream.ReferencePipeline.forEach(java.base@11.0.17/ReferencePipeline.java:497)
at 
org.apache.ignite.internal.util.IgniteUtils.closeAll(IgniteUtils.java:532)
at 
org.apache.ignite.internal.sql.engine.SqlQueryProcessor.stop(SqlQueryProcessor.java:364)
- locked <0x00072fa41df0> (a 
org.apache.ignite.internal.sql.engine.SqlQueryProcessor)
at 
org.apache.ignite.internal.app.LifecycleManager.lambda$stopAllComponents$1(LifecycleManager.java:133)
at 
org.apache.ignite.internal.app.LifecycleManager$$Lambda$2057/0x000800bf3840.accept(Unknown
 Source)
at 
java.util.Iterator.forEachRemaining(java.base@11.0.17/Iterator.java:133)
at 
org.apache.ignite.internal.app.LifecycleManager.stopAllComponents(LifecycleManager.java:131)
- locked <0x00072fa41ac0> (a 
org.apache.ignite.internal.app.LifecycleManager)
at 
org.apache.ignite.internal.app.LifecycleManager.stopNode(LifecycleManager.java:115)
at org.apache.ignite.internal.app.IgniteImpl.stop(IgniteImpl.java:899)
at 
org.apache.ignite.internal.app.IgnitionImpl.lambda$stop$0(IgnitionImpl.java:113)
at 
org.apache.ignite.internal.app.IgnitionImpl$$Lambda$2016/0x000800be7840.apply(Unknown
 Source)
at 
java.util.concurrent.ConcurrentHashMap.computeIfPresent(java.base@11.0.17/ConcurrentHashMap.java:1822)
- locked <0x000724d2b740> (a 
java.util.concurrent.ConcurrentHashMap$Node)
at 
org.apache.ignite.internal.app.IgnitionImpl.stop(IgnitionImpl.java:111)
at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:96)
at org.apache.ignite.IgnitionManager.stop(IgnitionManager.java:82)
at 
org.apache.ignite.internal.Cluster.lambda$shutdown$11(Cluster.java:458)
at 
org.apache.ignite.internal.Cluster$$Lambda$2278/0x000800dd7c40.accept(Unknown
 Source)
at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(java.base@11.0.17/ForEachOps.java:183)
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(java.base@11.0.17/ArrayList.java:1655)
at 
java.util.stream.AbstractPipeline.copyInto(java.base@11.0.17/AbstractPipeline.java:484)
at 
java.util.stream.ForEachOps$ForEachTask.compute(java.base@11.0.17/ForEachOps.java:290)
at 

[jira] [Updated] (IGNITE-21030) Sql. Dump debug info to the log, if the ExecutionService could not be stopped within a specfied timeout

2023-12-06 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-21030:
--
Summary: Sql. Dump debug info to the log, if the ExecutionService could not 
be stopped within a specfied timeout  (was: Sql. Dump debug info to the log, if 
the executing service could not be stopped within a specfied timeout)

> Sql. Dump debug info to the log, if the ExecutionService could not be stopped 
> within a specfied timeout
> ---
>
> Key: IGNITE-21030
> URL: https://issues.apache.org/jira/browse/IGNITE-21030
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>
> To investigate the issue reported in IGNITE-20750 (about the ExecutionService 
> hanging during node shutdown), we need to collect more debugging information 
> when the SQL execution service fails to stop.
> It is proposed to add a timeout to the ExecutionServiceImpl stop procedure and 
> output some diagnostic information to the log when the timeout expires.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21030) Sql. Dump debug info to the log, if the executing service could not be stopped within a specfied timeout

2023-12-06 Thread Pavel Pereslegin (Jira)
Pavel Pereslegin created IGNITE-21030:
-

 Summary: Sql. Dump debug info to the log, if the executing service 
could not be stopped within a specfied timeout
 Key: IGNITE-21030
 URL: https://issues.apache.org/jira/browse/IGNITE-21030
 Project: Ignite
  Issue Type: Improvement
Reporter: Pavel Pereslegin
Assignee: Pavel Pereslegin


To investigate the issue reported in IGNITE-20750 (about the ExecutionService 
hanging during node shutdown), we need to collect more debugging information 
when the SQL execution service fails to stop.

It is proposed to add a timeout to the ExecutionServiceImpl stop procedure and 
output some diagnostic information to the log when the timeout expires.
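
A minimal sketch of the proposed stop-with-timeout; the timeout value, the future-returning internal stop, and {{dumpDebugInfo()}} are illustrative assumptions, not the actual implementation:

{code:java}
// Illustrative only: bound the stop procedure by a timeout and dump
// diagnostics (e.g. running queries, pending fragments) when it expires.
public void stop() throws Exception {
    CompletableFuture<Void> stopFut = stopInternallyAsync(); // assumed async stop

    try {
        stopFut.get(10, TimeUnit.SECONDS);
    } catch (TimeoutException e) {
        LOG.warn("ExecutionService did not stop within the timeout: {}", dumpDebugInfo());
        throw e;
    }
}
{code}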



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21029) Update Ignite dependency log4j2

2023-12-06 Thread Aleksandr Nikolaev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Nikolaev updated IGNITE-21029:

Labels: ise  (was: )

> Update Ignite dependency log4j2
> ---
>
> Key: IGNITE-21029
> URL: https://issues.apache.org/jira/browse/IGNITE-21029
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Aleksandr Nikolaev
>Priority: Major
>  Labels: ise
> Fix For: 2.17
>
>
> Update Ignite dependency log4j 2.20.0 to 2.22.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21029) Update Ignite dependency log4j2

2023-12-06 Thread Aleksandr Nikolaev (Jira)
Aleksandr Nikolaev created IGNITE-21029:
---

 Summary: Update Ignite dependency log4j2
 Key: IGNITE-21029
 URL: https://issues.apache.org/jira/browse/IGNITE-21029
 Project: Ignite
  Issue Type: Improvement
Reporter: Aleksandr Nikolaev
Assignee: Aleksandr Nikolaev
 Fix For: 2.17


Update Ignite dependency log4j 2.20.0 to 2.22.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20604) Reimplement CausalityDataNodesEngine#dataNodes according to Catalog and recovery changes

2023-12-06 Thread Alexander Lapin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793765#comment-17793765
 ] 

Alexander Lapin commented on IGNITE-20604:
--

[~maliev] LGTM

> Reimplement CausalityDataNodesEngine#dataNodes according to Catalog and 
> recovery changes
> 
>
> Key: IGNITE-20604
> URL: https://issues.apache.org/jira/browse/IGNITE-20604
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> h3. *Motivation*
> After the Catalog was introduced and recovery for the Distribution Zone 
> Manager was completed, {{CausalityDataNodesEngine#dataNodes}} needs to be 
> reimplemented according to the new circumstances. Most of the code in this 
> method must be removed, and entities from the Catalog must be reused.
> h3. *Definition of done*
> {{CausalityDataNodesEngine#dataNodes}} is reimplemented according to the new 
> circumstances.
> h3. *Implementation notes*
> It seems that {{CausalityDataNodesEngine#zonesVersionedCfg}} can be removed 
> and the Catalog must be used instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20971) The Ignite process quietly tears down while creating a lot of tables

2023-12-06 Thread Vladimir Pligin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Pligin reassigned IGNITE-20971:


Assignee: Ivan Bessonov

> The Ignite process quietly tears down while creating a lot of tables
> ---
>
> Key: IGNITE-20971
> URL: https://issues.apache.org/jira/browse/IGNITE-20971
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
>
> Creating 1000 tables with 5 columns each.
> *Expected:*
> 1000 tables are created.
>  
> *Actual:*
> After some number of tables (in my case, 75), the Ignite runner process is 
> silently torn down, with no errors in the output. The GC log doesn't show any 
> problem.
>  
> *Additional information:*
> On servers with more CPU, it can create up to 855 tables on a 4 GB heap and 
> then tears down with 
> `java.lang.OutOfMemoryError: Java heap space`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21028) .NET: Thin 3.0: Potential PooledBuffer leak

2023-12-06 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21028:

Description: 
Methods like *Transaction.CommitAsync* do not dispose the request buffer.

* Check all usages
* Write better leak detection tests (integrate with *ByteArrayPool*)
* Add a test to check that all buffers are rented via *ByteArrayPool*, not 
directly with *ArrayPool.Shared*

> .NET: Thin 3.0: Potential PooledBuffer leak
> ---
>
> Key: IGNITE-21028
> URL: https://issues.apache.org/jira/browse/IGNITE-21028
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Blocker
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>
> Methods like *Transaction.CommitAsync* do not dispose the request buffer.
> * Check all usages
> * Write better leak detection tests (integrate with *ByteArrayPool*)
> * Add a test to check that all buffers are rented via *ByteArrayPool*, not 
> directly with *ArrayPool.Shared*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21028) .NET: Thin 3.0: Potential PooledBuffer leak

2023-12-06 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-21028:
---

 Summary: .NET: Thin 3.0: Potential PooledBuffer leak
 Key: IGNITE-21028
 URL: https://issues.apache.org/jira/browse/IGNITE-21028
 Project: Ignite
  Issue Type: Bug
  Components: platforms, thin client
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 3.0.0-beta2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21027) The order of fields in the TABLE_COLUMNS view is inconsistent with the order specified during table creation

2023-12-06 Thread YuJue Li (Jira)
YuJue Li created IGNITE-21027:
-

 Summary: The order of fields in the TABLE_COLUMNS view is 
inconsistent with the order specified during table creation
 Key: IGNITE-21027
 URL: https://issues.apache.org/jira/browse/IGNITE-21027
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 2.15
Reporter: YuJue Li
Assignee: YuJue Li
 Fix For: 2.17
 Attachments: image-2023-12-06-20-37-37-282.png

!image-2023-12-06-20-37-37-282.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20477) Async component start

2023-12-06 Thread Mirza Aliev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793630#comment-17793630
 ] 

Mirza Aliev commented on IGNITE-20477:
--

Please be aware of the TODOs referencing this ticket that were added to the 
codebase after https://issues.apache.org/jira/browse/IGNITE-20310 was merged 

> Async component start
> -
>
> Key: IGNITE-20477
> URL: https://issues.apache.org/jira/browse/IGNITE-20477
> Project: Ignite
>  Issue Type: Task
>Reporter: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>
> Currently all Ignite components start synchronously (see 
> {{IgniteComponent#start}}). This is inconvenient, because some components can 
> complete their startup only when Meta Storage has initialized all Version 
> Values (see {{IgniteImpl#recoverComponentsStateOnStart}}). Because of this, 
> some components employ a hack which consists of having a "special" Versioned 
> Value, which is injected with a future that gets resolved only after the 
> enclosing component completes its startup (see {{startVv}} in 
> {{TableManager}} or {{IndexManager}}). This blocks the Watch Processor inside 
> Meta Storage, preventing it from processing further updates.
> The problem with this approach is that it is quite cryptic and hard to 
> understand. Instead, I propose to do the following:
> # Change the signature of {{IgniteComponent#start}} to 
> {{CompletableFuture start()}}.
> # All actions in the components startup will be divided into two categories: 
> sync actions, that can be executed synchronously in order for the component 
> to be usable by other components during their startup, and async actions, 
> which need to wait for any of the Versioned Values to be initialized by the 
> Meta Storage. Such async actions should be wrapped in a {{CompletableFuture}} 
> and returned from the {{start}} method.
> # {{IgniteImpl}} startup procedure should be updated to collect the futures 
> from all components and wait for their completion inside 
> {{recoverComponentsStateOnStart}}, after 
> {{metaStorageMgr.notifyRevisionUpdateListenerOnStart()}} has been called, but 
> before Watches are deployed ({{metaStorageMgr.deployWatches()}}) 
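
A minimal Java sketch of the proposal above; the generic parameter, the helper class, and the method bodies are assumptions, not the final API:

{code:java}
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Sketch of the proposed contract (simplified).
interface IgniteComponent {
    /**
     * Executes the sync part of the startup and wraps the async actions
     * (those waiting for Versioned Values) into the returned future.
     */
    CompletableFuture<Void> start();
}

// Sketch of the collection step: IgniteImpl would wait on this future inside
// recoverComponentsStateOnStart(), after notifyRevisionUpdateListenerOnStart()
// has been called and before deployWatches().
final class StartupFutures {
    static CompletableFuture<Void> collectStartFutures(List<IgniteComponent> components) {
        return CompletableFuture.allOf(
                components.stream()
                        .map(IgniteComponent::start)
                        .toArray(CompletableFuture[]::new));
    }
}
{code}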



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20310) Meta storage invokes are not completed when DZM start is completed

2023-12-06 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793627#comment-17793627
 ] 

Vladislav Pyatkov commented on IGNITE-20310:


Merged c082849bd8a7105f19c9d53d6878d91f07379d29

> Meta storage invokes are not completed  when DZM start is completed
> ---
>
> Key: IGNITE-20310
> URL: https://issues.apache.org/jira/browse/IGNITE-20310
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Uttsel
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: dzm-reviewed, ignite-3
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> h3. *Motivation*
> There are meta storage invokes in DistributionZoneManager start. Currently it 
> does the meta storage invokes in 
> DistributionZoneManager#createOrRestoreZoneState:
> # DistributionZoneManager#initDataNodesAndTriggerKeysInMetaStorage to init 
> the default zone.
> # DistributionZoneManager#restoreTimers in case when a filter update was 
> handled before DZM stop, but it didn't update data nodes.
> Futures of these invokes are ignored. So after the start method is completed 
> actually not all start actions are completed. It can lead to the following 
> situation: 
> * Initialisation of the default zone is hanged for some reason even after 
> full restart of the cluster.
> * That means that all data nodes related keys in metastorage haven't been 
> initialised.
> * For example, if user add some new node, and scale up timer is immediate, 
> which leads to immediate data nodes recalculation, this recalculation won't 
> happen, because data nodes key have not been initialised. 
> h3. *Possible solutions*
> h4. Easier
> We just need to wait for all async logic to be completed within the 
> {{DistributionZoneManager#start}} with {{ms.invoke().get()}}
> h4. Harder
> We can enhance {{IgniteComponent#start}}, so it could return Completable 
> future, and after that we need to change the flow of starting components, so 
> node is not ready to work until all {{IgniteComponent#start}} futures are 
> completed. For example, we can chain our futures on 
> {{IgniteImpl#recoverComponentsStateOnStart}}, so components' futures are 
> completed before {{metaStorageMgr.deployWatches()}}.
>  In {{DistributionZoneManager#start}}  we can return 
> {{CompletableFuture.allOf}} features, that are needed to be completed in the 
> {{DistributionZoneManager#start}}
> h3. *Definition of done*
> All asynchronous logic in the {{DistributionZoneManager#start}} is done 
> before a node is ready to work, in particular, ready to interact with zones.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21026) [Extensions] Fixed NPE in the performance statistics extension if there are no jobs for a task

2023-12-06 Thread Nikita Amelchev (Jira)
Nikita Amelchev created IGNITE-21026:


 Summary: [Extensions] Fixed NPE in the performance statistics 
extension if there are no jobs for a task
 Key: IGNITE-21026
 URL: https://issues.apache.org/jira/browse/IGNITE-21026
 Project: Ignite
  Issue Type: Bug
Reporter: Nikita Amelchev
Assignee: Nikita Amelchev



{noformat}
Exception in thread "main" java.lang.NullPointerException
at 
org.apache.ignite.internal.performancestatistics.handlers.ComputeHandler.lambda$results$5(ComputeHandler.java:137)
at java.base/java.lang.Iterable.forEach(Iterable.java:75)
at 
org.apache.ignite.internal.performancestatistics.handlers.ComputeHandler.results(ComputeHandler.java:130)
at 
org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.createReport(PerformanceStatisticsReportBuilder.java:129)
at 
org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.main(PerformanceStatisticsReportBuilder.java:69)
{noformat}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21021) control.sh cache scan permissions

2023-12-06 Thread Mikhail Petrov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793623#comment-17793623
 ] 

Mikhail Petrov commented on IGNITE-21021:
-

--cache destroy (CACHE_DESTROY)
--cache clear (CACHE_REMOVE)
--cache create (CACHE_CREATE)
--cache scan (CACHE_READ)

All the commands mentioned above have been corrected so that when executed, 
only the permissions specified in parentheses are checked. 
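
A rough Java sketch of the resulting check for --cache scan: only {{SecurityPermission.CACHE_READ}} is authorized on the target cache, without the former ADMIN_OPS check. {{IgniteSecurity#authorize}} and {{SecurityPermission}} are the existing internal entry points, but the handler shape here is hypothetical:

{code:java}
// Illustrative only: the scan command authorizes CACHE_READ on the target
// cache and nothing else.
void authorizeScan(IgniteSecurity security, String cacheName) {
    security.authorize(cacheName, SecurityPermission.CACHE_READ);
}
{code}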

> control.sh cache scan permissions
> -
>
> Key: IGNITE-21021
> URL: https://issues.apache.org/jira/browse/IGNITE-21021
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Mikhail Petrov
>Priority: Major
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When control.sh --cache scan CACHE_NAME is run, both the ADMIN_OPS and 
> CACHE_READ permissions are checked; it would be more logical to require only 
> CACHE_READ.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-21021) control.sh cache scan permissions

2023-12-06 Thread Mikhail Petrov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793623#comment-17793623
 ] 

Mikhail Petrov edited comment on IGNITE-21021 at 12/6/23 10:36 AM:
---

--cache destroy (CACHE_DESTROY)
--cache clear (CACHE_REMOVE)
--cache create (CACHE_CREATE)
--cache scan (CACHE_READ)

All the CONTROL UTILITY commands mentioned above have been corrected so that 
when executed, only the permissions specified in parentheses are checked. 


was (Author: petrovmikhail):
--cache destroy (CACHE_DESTROY)
--cache clear (CACHE_REMOVE)
--cache create (CACHE_CREATE)
--cache scan (CACHE_READ)

All the commands mentioned above have been corrected so that when executed, 
only the permissions specified in parentheses are checked. 

> control.sh cache scan permissions
> -
>
> Key: IGNITE-21021
> URL: https://issues.apache.org/jira/browse/IGNITE-21021
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Mikhail Petrov
>Priority: Major
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> At startup of control.sh --cache scan CACHE_NAME, the permissions for 
> ADMIN_OPS and CACHE_READ are both checked; it would be more logical to check 
> only CACHE_READ.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-21021) control.sh cache scan permissions

2023-12-06 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov resolved IGNITE-21021.
-
Resolution: Fixed

> control.sh cache scan permissions
> -
>
> Key: IGNITE-21021
> URL: https://issues.apache.org/jira/browse/IGNITE-21021
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Mikhail Petrov
>Priority: Major
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> At startup of control.sh --cache scan CACHE_NAME, the permissions for 
> ADMIN_OPS and CACHE_READ are both checked; it would be more logical to check 
> only CACHE_READ.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21021) control.sh cache scan permissions

2023-12-06 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-21021:

Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> control.sh cache scan permissions
> -
>
> Key: IGNITE-21021
> URL: https://issues.apache.org/jira/browse/IGNITE-21021
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Mikhail Petrov
>Priority: Major
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> At startup of control.sh --cache scan CACHE_NAME, the permissions for 
> ADMIN_OPS and CACHE_READ are both checked; it would be more logical to check 
> only CACHE_READ.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21021) control.sh cache scan permissions

2023-12-06 Thread Mikhail Petrov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793619#comment-17793619
 ] 

Mikhail Petrov commented on IGNITE-21021:
-

[~NSAmelchev] Thank you for the review.

> control.sh cache scan permissions
> -
>
> Key: IGNITE-21021
> URL: https://issues.apache.org/jira/browse/IGNITE-21021
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Mikhail Petrov
>Priority: Major
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> At startup of control.sh --cache scan CACHE_NAME, the permissions for 
> ADMIN_OPS and CACHE_READ are both checked; it would be more logical to check 
> only CACHE_READ.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21021) control.sh cache scan permissions

2023-12-06 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov reassigned IGNITE-21021:
---

Assignee: Mikhail Petrov

> control.sh cache scan permissions
> -
>
> Key: IGNITE-21021
> URL: https://issues.apache.org/jira/browse/IGNITE-21021
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Mikhail Petrov
>Priority: Major
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> At startup of control.sh --cache scan CACHE_NAME, the permissions for 
> ADMIN_OPS and CACHE_READ are both checked; it would be more logical to check 
> only CACHE_READ.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21021) control.sh cache scan permissions

2023-12-06 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793617#comment-17793617
 ] 

Ignite TC Bot commented on IGNITE-21021:


{panel:title=Branch: [pull/11077/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/11077/head] Base: [master] : New Tests 
(4)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Control Utility 2{color} [[tests 
4|https://ci2.ignite.apache.org/viewLog.html?buildId=7645237]]
* {color:#013220}IgniteControlUtilityTestSuite2: 
SecurityCommandHandlerPermissionsTest.testCacheClear[cmdHnd=cli] - PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite2: 
SecurityCommandHandlerPermissionsTest.testCacheCreate[cmdHnd=cli] - 
PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite2: 
SecurityCommandHandlerPermissionsTest.testCacheScan[cmdHnd=cli] - PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite2: 
SecurityCommandHandlerPermissionsTest.testCacheDestroy[cmdHnd=cli] - 
PASSED{color}

{panel}
[TeamCity *--> Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7644183&buildTypeId=IgniteTests24Java8_RunAll]

> control.sh cache scan permissions
> -
>
> Key: IGNITE-21021
> URL: https://issues.apache.org/jira/browse/IGNITE-21021
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Priority: Major
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> At startup of control.sh --cache scan CACHE_NAME, the permissions for 
> ADMIN_OPS and CACHE_READ are both checked; it would be more logical to check 
> only CACHE_READ.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21021) control.sh cache scan permissions

2023-12-06 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793616#comment-17793616
 ] 

Ignite TC Bot commented on IGNITE-21021:


{panel:title=Branch: [pull/11077/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/11077/head] Base: [master] : New Tests 
(4)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Control Utility 2{color} [[tests 
4|https://ci2.ignite.apache.org/viewLog.html?buildId=7645237]]
* {color:#013220}IgniteControlUtilityTestSuite2: 
SecurityCommandHandlerPermissionsTest.testCacheClear[cmdHnd=cli] - PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite2: 
SecurityCommandHandlerPermissionsTest.testCacheCreate[cmdHnd=cli] - 
PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite2: 
SecurityCommandHandlerPermissionsTest.testCacheScan[cmdHnd=cli] - PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite2: 
SecurityCommandHandlerPermissionsTest.testCacheDestroy[cmdHnd=cli] - 
PASSED{color}

{panel}
[TeamCity *--> Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7644183&buildTypeId=IgniteTests24Java8_RunAll]

> control.sh cache scan permissions
> -
>
> Key: IGNITE-21021
> URL: https://issues.apache.org/jira/browse/IGNITE-21021
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Priority: Major
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> At startup of control.sh --cache scan CACHE_NAME, the permissions for 
> ADMIN_OPS and CACHE_READ are both checked; it would be more logical to check 
> only CACHE_READ.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21025) Processing lease event is unreasonably long in case of several thousands of partitions

2023-12-06 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21025:
-
Description: 
The following test
{code:java}
@Test
public void testCreateThousandTables() {
for (int i = 0; i < 1000; i++) {
sql(String.format("create table %s (id varchar default 
gen_random_uuid primary key, val int)", "table" + i));
}
}{code}

shows unreasonably long lease event processing even with 100+ tables (with 25 
partitions each)
{code:java}
[2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1806ms
[2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] Entries: 
[placementdriver.leases]
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1707ms
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] Entries: 
[placementdriver.leases]
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1710ms
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] Entries: 
[placementdriver.leases] {code}

  was:
Following test
@Test
public void testCreateThousandTables() \{
for (int i = 0; i < 1000; i++) {
sql(String.format("create table %s (id varchar default 
gen_random_uuid primary key, val int)", "table" + i));
}
}
shows unreasonably long lease events processing even with 100+ tables (with 25 
partitions each)
{code:java}
[2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1806ms
[2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] Entries: 
[placementdriver.leases]
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1707ms
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] Entries: 
[placementdriver.leases]
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1710ms
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] Entries: 
[placementdriver.leases] {code}


> Processing lease event is unreasonably long in case of several thousands of 
> partitions
> 
>
> Key: IGNITE-21025
> URL: https://issues.apache.org/jira/browse/IGNITE-21025
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>
> The following test
> {code:java}
> @Test
> public void testCreateThousandTables() {
> for (int i = 0; i < 1000; i++) {
> sql(String.format("create table %s (id varchar default 
> gen_random_uuid primary key, val int)", "table" + i));
> }
> }{code}
> shows unreasonably long lease event processing even with 100+ tables (with 25 
> partitions each)
> {code:java}
> [2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] <$> 
> Watch event processing took too much time: 1806ms
> [2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] Entries: 
> [placementdriver.leases]
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] <$> 
> Watch event processing took too much time: 1707ms
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] Entries: 
> [placementdriver.leases]
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] <$> 
> Watch event processing took too much time: 1710ms
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] Entries: 
> [placementdriver.leases] {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21025) Processing lease event is unreasonably long in case of several thousands of partitions

2023-12-06 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21025:
-
Labels: ignite-3  (was: )

> Processing lease event is unreasonably long in case of several thousands of 
> partitions
> 
>
> Key: IGNITE-21025
> URL: https://issues.apache.org/jira/browse/IGNITE-21025
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>
> The following test
> {code:java}
> @Test
> public void testCreateThousandTables() {
>     for (int i = 0; i < 1000; i++) {
>         sql(String.format("create table %s (id varchar default gen_random_uuid primary key, val int)", "table" + i));
>     }
> }{code}
> shows unreasonably long lease events processing even with 100+ tables (with 25 
> partitions each)
> {code:java}
> [2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] <$> 
> Watch event processing took too much time: 1806ms
> [2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] Entries: 
> [placementdriver.leases]
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] <$> 
> Watch event processing took too much time: 1707ms
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] Entries: 
> [placementdriver.leases]
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] <$> 
> Watch event processing took too much time: 1710ms
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] Entries: 
> [placementdriver.leases] {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21025) Processing lease event is unreasonably long in case of several thousands of partitions

2023-12-06 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21025:
-
Description: 
The following test
{code:java}
@Test
public void testCreateThousandTables() {
    for (int i = 0; i < 1000; i++) {
        sql(String.format("create table %s (id varchar default gen_random_uuid primary key, val int)", "table" + i));
    }
}{code}
shows unreasonably long lease events processing even with 100+ tables (with 25 
partitions each)
{code:java}
[2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1806ms
[2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] Entries: 
[placementdriver.leases]
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1707ms
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] Entries: 
[placementdriver.leases]
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1710ms
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] Entries: 
[placementdriver.leases] {code}

  was:
{code:java}
[2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1806ms
[2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] Entries: 
[placementdriver.leases]
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1707ms
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] Entries: 
[placementdriver.leases]
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1710ms
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] Entries: 
[placementdriver.leases] {code}
in case of
@Test
public void testCreateThousandTables() \{
for (int i = 0; i < 1000; i++) {
sql(String.format("create table %s (id varchar default 
gen_random_uuid primary key, val int)", "table" + i));
}
}


> Processing lease event is unreasonably long in case of several thousands of 
> partitions
> 
>
> Key: IGNITE-21025
> URL: https://issues.apache.org/jira/browse/IGNITE-21025
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>
> The following test
> {code:java}
> @Test
> public void testCreateThousandTables() {
>     for (int i = 0; i < 1000; i++) {
>         sql(String.format("create table %s (id varchar default gen_random_uuid primary key, val int)", "table" + i));
>     }
> }{code}
> shows unreasonably long lease events processing even with 100+ tables (with 25 
> partitions each)
> {code:java}
> [2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] <$> 
> Watch event processing took too much time: 1806ms
> [2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] Entries: 
> [placementdriver.leases]
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] <$> 
> Watch event processing took too much time: 1707ms
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] Entries: 
> [placementdriver.leases]
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] <$> 
> Watch event processing took too much time: 1710ms
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] Entries: 
> [placementdriver.leases] {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21025) Processing lease event is unreasonably long in case of several thousands of partitions

2023-12-06 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21025:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Processing lease event is unreasonably long in case of several thousands of 
> partitions
> 
>
> Key: IGNITE-21025
> URL: https://issues.apache.org/jira/browse/IGNITE-21025
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>
> The following test
> {code:java}
> @Test
> public void testCreateThousandTables() {
>     for (int i = 0; i < 1000; i++) {
>         sql(String.format("create table %s (id varchar default gen_random_uuid primary key, val int)", "table" + i));
>     }
> }{code}
> shows unreasonably long lease events processing even with 100+ tables (with 25 
> partitions each)
> {code:java}
> [2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] <$> 
> Watch event processing took too much time: 1806ms
> [2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] Entries: 
> [placementdriver.leases]
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] <$> 
> Watch event processing took too much time: 1707ms
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] Entries: 
> [placementdriver.leases]
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] <$> 
> Watch event processing took too much time: 1710ms
> [2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] Entries: 
> [placementdriver.leases] {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21025) Processing lease event is unreasonably long in case of several thousands of partitions

2023-12-06 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-21025:


 Summary: Processing lease event is unreasonably long in case of 
several thousands of partitions
 Key: IGNITE-21025
 URL: https://issues.apache.org/jira/browse/IGNITE-21025
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


{code:java}
[2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1806ms
[2023-12-06T12:50:07,699][WARN ][%ictdt_n_0%vault-2][WatchProcessor] Entries: 
[placementdriver.leases]
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1707ms
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_1%vault-2][WatchProcessor] Entries: 
[placementdriver.leases]
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] <$> Watch 
event processing took too much time: 1710ms
[2023-12-06T12:50:07,702][WARN ][%ictdt_n_2%vault-2][WatchProcessor] Entries: 
[placementdriver.leases] {code}
in case of
{code:java}
@Test
public void testCreateThousandTables() {
    for (int i = 0; i < 1000; i++) {
        sql(String.format("create table %s (id varchar default gen_random_uuid primary key, val int)", "table" + i));
    }
}{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-19530) Reduce size of configuration keys

2023-12-06 Thread Alexander Lapin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17727371#comment-17727371
 ] 

Alexander Lapin edited comment on IGNITE-19530 at 12/6/23 9:36 AM:
---

I've tried to create tables with 1 and 1000 columns in order to vary the table 
creation message size. It turned out that the number of columns, and thus the 
table size, doesn't affect the ability of the meta storage to process 
tableCreation invokes. Locally, the test starts failing after roughly 84 
tables. Ticket priority was changed to "Minor".


was (Author: alapin):
I've tried to create tables with 1 and 1000 columns in order to vary table 
creation message size. Occurred that abount of columns and thus table size 
doesn't effect the ability of meta storage to process tableCreation invokes. 
Locally test starts failing after 84 +/- tables. Ticket priority was changed to 
"Minor".

> Reduce size of configuration keys
> -
>
> Key: IGNITE-19530
> URL: https://issues.apache.org/jira/browse/IGNITE-19530
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Priority: Minor
>  Labels: ignite-3
>
> *Motivation*
> The distributed configuration keys are byte arrays formed from strings that 
> contain some constant prefixes, postfixes, delimiters and identifiers, mostly 
> UUIDs. Example of the configuration key for the default value provider type 
> of a table column:
> {{dst-cfg.table.tables.d7b99c6a-de10-454d-9370-38d18b65e9c0.columns.d8482dae-cfb8-42b8-a759-9727dd3763a6.defaultValueProvider.type}}
> It contains 2 UUIDs in string representation. Unfortunately, there are 
> several configuration entries for each table column (having similar keys) 
> and, besides that, about a dozen keys for the table itself.
> As a result, configuration keys take 68% of a meta storage message related to 
> table creation (for a one-node cluster, for a table with 2 columns and 25 
> partitions), which creates excessive load on the meta storage raft group in 
> case of mass table creation (see IGNITE-19275).
> *Definition of done*
> We should get rid of the string representation of UUIDs in configuration 
> keys; UUIDs should be written as 16 bytes each into the byte array directly. 
> Also, string constants should be shortened (or even replaced with constants 
> consisting of a few bytes) because there is no need to keep them 
> human-readable.
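> A hedged sketch of writing a UUID as 16 raw bytes instead of its 36-character 
> string form, as suggested above (the helper name is illustrative):
> {code:java}
> import java.nio.ByteBuffer;
> import java.util.UUID;
> 
> static byte[] uuidToBytes(UUID uuid) {
>     // 16 bytes instead of the 36 characters of the string representation.
>     return ByteBuffer.allocate(16)
>             .putLong(uuid.getMostSignificantBits())
>             .putLong(uuid.getLeastSignificantBits())
>             .array();
> }
> {code}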



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20771) Implement tx coordinator liveness check

2023-12-06 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793554#comment-17793554
 ] 

Vladislav Pyatkov commented on IGNITE-20771:


[~Denis Chudov] Please look at the patch.

> Implement tx coordinator liveness check
> ---
>
> Key: IGNITE-20771
> URL: https://issues.apache.org/jira/browse/IGNITE-20771
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> In order to implement tx coordinator recovery, it's required to understand 
> whether the coordinator is dead or not. Every data node has its own local 
> volatile txn state map (txId -> 
> org.apache.ignite.internal.tx.TxStateMeta) where, besides other fields, we 
> can find txCoordinatorId. The liveness check assumes that if a node with the 
> given id is available in the physical topology then the coordinator is alive; 
> otherwise it's considered dead. However, even though such a local check is 
> fast, there's no sense in performing it too often, especially with subsequent 
> sends of initialRecoveryRequests. Thus, it seems reasonable to add one more 
> field to the TxStateMeta that stores the last liveness check timestamp. 
> Because these are always local checks, it's valid to use 
> System.currentTimeMillis or similar instead of HybridTimestamp in order to 
> reduce contention on the clock. Note that the triggers that will initiate 
> liveness checks will be implemented separately.
> h3. Definition of Done
>  * One more lastLivenessCheck timestamp is added to the TxStateMeta.
>  * Aforementioned field is updated locally on each tx operation with 
> currentTimeMillis.
>  * New cluster-wide tx liveness interval configuration property is introduced.
>  * Within liveness check
>  ** if (the lastLivenessCheck >= currentTimeMillis - livenessInterval) - no-op
>  ** else 
>  *** update lastLivenessCheck
> *** do the probe - check whether txCoordinatorId is still available in the 
> physical topology; if it's available, no further actions are required; if 
> it's not, then
> **** trigger the initiateRecovery procedure implemented in IGNITE-20685.
> **** if the commit partition is also unavailable (meaning that there's no 
> primary replica), mark the transaction as abandoned.
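> A hedged sketch of the liveness check described above; the TxStateMeta 
> accessors and the physical topology lookup are assumptions for illustration:
> {code:java}
> boolean coordinatorAlive(TxStateMeta meta, long livenessIntervalMillis) {
>     long now = System.currentTimeMillis();
> 
>     if (meta.lastLivenessCheck() >= now - livenessIntervalMillis) {
>         // Checked recently enough - treat the coordinator as alive (no-op).
>         return true;
>     }
> 
>     meta.lastLivenessCheck(now);
> 
>     // Probe: the coordinator is alive iff its node is in the physical topology.
>     return physicalTopology.contains(meta.txCoordinatorId());
> }
> {code}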



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20771) Implement tx coordinator liveness check

2023-12-06 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-20771:
---
Reviewer: Denis Chudov

> Implement tx coordinator liveness check
> ---
>
> Key: IGNITE-20771
> URL: https://issues.apache.org/jira/browse/IGNITE-20771
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> In order to implement tx coordinator recovery, it's required to understand 
> whether the coordinator is dead or not. Every data node has its own local 
> volatile txn state map (txId -> 
> org.apache.ignite.internal.tx.TxStateMeta) where, besides other fields, we 
> can find txCoordinatorId. The liveness check assumes that if a node with the 
> given id is available in the physical topology then the coordinator is alive; 
> otherwise it's considered dead. However, even though such a local check is 
> fast, there's no sense in performing it too often, especially with subsequent 
> sends of initialRecoveryRequests. Thus, it seems reasonable to add one more 
> field to the TxStateMeta that stores the last liveness check timestamp. 
> Because these are always local checks, it's valid to use 
> System.currentTimeMillis or similar instead of HybridTimestamp in order to 
> reduce contention on the clock. Note that the triggers that will initiate 
> liveness checks will be implemented separately.
> h3. Definition of Done
>  * One more lastLivenessCheck timestamp is added to the TxStateMeta.
>  * Aforementioned field is updated locally on each tx operation with 
> currentTimeMillis.
>  * New cluster-wide tx liveness interval configuration property is introduced.
>  * Within liveness check
>  ** if (the lastLivenessCheck >= currentTimeMillis - livenessInterval) - no-op
>  ** else 
>  *** update lastLivenessCheck
> *** do the probe - check whether txCoordinatorId is still available in the 
> physical topology; if it's available, no further actions are required; if 
> it's not, then
> **** trigger the initiateRecovery procedure implemented in IGNITE-20685.
> **** if the commit partition is also unavailable (meaning that there's no 
> primary replica), mark the transaction as abandoned.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17811) Implement efficient way to retrieve locks by txId

2023-12-06 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-17811:
-
Reviewer: Alexander Lapin

> Implement efficient way to retrieve locks by txId
> -
>
> Key: IGNITE-17811
> URL: https://issues.apache.org/jira/browse/IGNITE-17811
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Alexey Scherbakov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> org.apache.ignite.internal.tx.impl.HeapLockManager#locks requires a better 
> implementation that'll use an index or similar instead of iterating the full 
> locks set (one possible approach is sketched after the snippet below). 
> {code:java}
> public Iterator<Lock> locks(UUID txId) {
>     // TODO: tmp, use index instead.
>     List<Lock> result = new ArrayList<>();
> 
>     for (Map.Entry<LockKey, LockState> entry : locks.entrySet()) {
>         Waiter waiter = entry.getValue().waiter(txId);
> 
>         if (waiter != null) {
>             result.add(new Lock(entry.getKey(), waiter.lockMode(), txId));
>         }
>     }
> 
>     return result.iterator();
> }{code}
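> A hedged sketch of one possible index: maintain a secondary map from txId to 
> the lock keys that transaction holds, updated on lock acquire/release, so 
> that locks(txId) avoids scanning every lock (names mirror the snippet above 
> and are illustrative, not a final implementation):
> {code:java}
> private final Map<UUID, Set<LockKey>> locksByTx = new ConcurrentHashMap<>();
> 
> public Iterator<Lock> locks(UUID txId) {
>     Set<LockKey> keys = locksByTx.getOrDefault(txId, Collections.emptySet());
>     List<Lock> result = new ArrayList<>(keys.size());
> 
>     for (LockKey key : keys) {
>         LockState state = locks.get(key);
>         Waiter waiter = state == null ? null : state.waiter(txId);
> 
>         if (waiter != null) {
>             result.add(new Lock(key, waiter.lockMode(), txId));
>         }
>     }
> 
>     return result.iterator();
> }
> {code}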



--
This message was sent by Atlassian Jira
(v8.20.10#820010)