[jira] [Updated] (IGNITE-20638) Integration of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20638:
-
Description: 
We need to integrate 
*org.apache.ignite.internal.index.IndexAvailabilityController* into 
*org.apache.ignite.internal.app.IgniteImpl*.

But there are some nuances I would like to address in this ticket:
* We have created several components around the index building mechanism that 
would be convenient to keep in one place; I suggest creating an 
*IndexBuildingManager* that will contain all the components we need for 
building indexes (see the sketch below).
* *org.apache.ignite.internal.index.IndexBuildController* should not be an 
*IgniteComponent* and should not close the *IndexBuilder*, since the latter 
will be used by two components.
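
A rough sketch of what such a manager could look like (a sketch only: the 
class shape, constructor arguments, and lifecycle signatures below are 
assumptions for illustration, not the final design; the referenced types are 
the existing index components):

{code:java}
// Hypothetical IndexBuildingManager: a single IgniteComponent that owns
// the index-building machinery, so IgniteImpl wires up one component
// instead of several.
public class IndexBuildingManager implements IgniteComponent {
    private final IndexBuilder indexBuilder;
    private final IndexBuildController buildController;
    private final IndexAvailabilityController availabilityController;

    public IndexBuildingManager(
            IndexBuilder indexBuilder,
            IndexBuildController buildController,
            IndexAvailabilityController availabilityController
    ) {
        this.indexBuilder = indexBuilder;
        this.buildController = buildController;
        this.availabilityController = availabilityController;
    }

    @Override
    public void start() {
        // Start the contained components in dependency order.
    }

    @Override
    public void stop() throws Exception {
        // The manager, not IndexBuildController, closes the shared
        // IndexBuilder, since two components use it.
        indexBuilder.close();
    }
}
{code}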

> Integration of distributed index building
> -
>
> Key: IGNITE-20638
> URL: https://issues.apache.org/jira/browse/IGNITE-20638
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need to integrate 
> *org.apache.ignite.internal.index.IndexAvailabilityController* into 
> *org.apache.ignite.internal.app.IgniteImpl*.
> But there are some nuances I would like to address in this ticket:
> * We have created several components around the index building mechanism that 
> would be convenient to keep in one place; I suggest creating an 
> *IndexBuildingManager* that will contain all the components we need for 
> building indexes.
> * *org.apache.ignite.internal.index.IndexBuildController* should not be an 
> *IgniteComponent* and should not close the *IndexBuilder*, since the latter 
> will be used by two components.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20637) Implement recovery of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20637:
-
Description: 
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are *not* available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
*Notes:*
This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.
# For a partition whose distributed index has already been built, delete the 
key *partitionBuildIndex.<indexId>.<partitionId>* in the metastore.
*Notes:*
This point is probably the most difficult and requires thought before 
implementation, since it will most likely require raising a replication group 
and rolling up a replication log.
# If there are no keys 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*
 in the metastore, execute 
*org.apache.ignite.internal.catalog.commands.MakeIndexAvailableCommand*.
* For indexes that are available for read-write:
# Delete the key *startBuildIndex.<indexId>* in the metastore if it remains 
(a condensed sketch of this recovery flow follows the list).
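
A condensed sketch of the recovery flow above (the key names follow the 
placeholders in this description; the metastore and catalog calls are 
simplified pseudo-API for illustration, not the exact Ignite interfaces):

{code:java}
// Hypothetical recovery for one index that is not yet available for
// read-write. Assumes a simple string-keyed view of the metastore.
void recoverIndex(int indexId, int partitions) {
    if (metastore.get("startBuildIndex." + indexId) == null) {
        // No auxiliary key: there are no build keys for this index at all.
        return;
    }

    boolean anyPartitionLeft = false;

    for (int partId = 0; partId < partitions; partId++) {
        if (metastore.get("partitionBuildIndex." + indexId + "." + partId) != null) {
            // This partition still has to be built (its key is deleted
            // once the partition build completes).
            anyPartitionLeft = true;
        }
    }

    if (!anyPartitionLeft) {
        // All partition keys are gone: the index can be made available,
        // and the auxiliary key is removed per the last item above.
        catalog.execute(new MakeIndexAvailableCommand(indexId));
        metastore.remove("startBuildIndex." + indexId);
    }
}
{code}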

  was:
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
*Notes:*
This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.
# For a partition whose distributed index has already been built, delete the 
key *partitionBuildIndex.<indexId>.<partitionId>*.
*Notes:*
This point is probably the most difficult and requires thought before 
implementation, since it will most likely require raising a replication group 
and rolling up a replication log.


> Implement recovery of distributed index building
> 
>
> Key: IGNITE-20637
> URL: https://issues.apache.org/jira/browse/IGNITE-20637
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need to take care of recovery for the distributed index construction 
> mechanism:
> * For indexes that are *not* available for read-write:
> # Make sure that the corresponding keys are present in the metastore: 
> *startBuildIndex.<indexId>* and 
> *partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
> *Notes:*
> This is easy to implement, since there is an auxiliary key 
> *startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
> there are no keys for this index in the metastore.
> # For a partition whose distributed index has already been built, delete the 
> key *partitionBuildIndex.<indexId>.<partitionId>* in the metastore.
> *Notes:*
> This point is probably the most difficult and requires thought before 
> implementation, since it will most likely require raising a replication group 
> and rolling up a replication log.
> # If there are no keys 
> *partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*
>  in the metastore, execute 
> *org.apache.ignite.internal.catalog.commands.MakeIndexAvailableCommand*.
> * For indexes that are available for read-write:
> # Delete the key *startBuildIndex.<indexId>* in the metastore if it remains.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20637) Implement recovery of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20637:
-
Description: 
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.

*Notes:*
This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.
# For a partition whose distributed index has already been built, delete the 
key *partitionBuildIndex.<indexId>.<partitionId>*.

*Notes:*
This point is probably the most difficult and requires thought before 
implementation, since it will most likely require raising a replication group 
and rolling up a replication log.

  was:
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
*Notes:*
This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.
# 


> Implement recovery of distributed index building
> 
>
> Key: IGNITE-20637
> URL: https://issues.apache.org/jira/browse/IGNITE-20637
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need to take care of recovery for the distributed index construction 
> mechanism:
> * For indexes that are not available for read-write:
> # Make sure that the corresponding keys are present in the metastore: 
> *startBuildIndex.<indexId>* and 
> *partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
> *Notes:*
> This is easy to implement, since there is an auxiliary key 
> *startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
> there are no keys for this index in the metastore.
> # For a partition whose distributed index has already been built, delete the 
> key *partitionBuildIndex.<indexId>.<partitionId>*.
> *Notes:*
> This point is probably the most difficult and requires thought before 
> implementation, since it will most likely require raising a replication group 
> and rolling up a replication log.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20637) Implement recovery of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20637:
-
Description: 
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
*Notes:*
This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.
# For a partition whose distributed index has already been built, delete the 
key *partitionBuildIndex.<indexId>.<partitionId>*.
*Notes:*
This point is probably the most difficult and requires thought before 
implementation, since it will most likely require raising a replication group 
and rolling up a replication log.

  was:
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.

*Notes:*
This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.
# For a partition whose distributed index has already been built, delete the 
key *partitionBuildIndex.<indexId>.<partitionId>*.

*Notes:*
This point is probably the most difficult and requires thought before 
implementation, since it will most likely require raising a replication group 
and rolling up a replication log.


> Implement recovery of distributed index building
> 
>
> Key: IGNITE-20637
> URL: https://issues.apache.org/jira/browse/IGNITE-20637
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need to take care of recovery for the distributed index construction 
> mechanism:
> * For indexes that are not available for read-write:
> # Make sure that the corresponding keys are present in the metastore: 
> *startBuildIndex.<indexId>* and 
> *partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
> *Notes:*
> This is easy to implement, since there is an auxiliary key 
> *startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
> there are no keys for this index in the metastore.
> # For a partition whose distributed index has already been built, delete the 
> key *partitionBuildIndex.<indexId>.<partitionId>*.
> *Notes:*
> This point is probably the most difficult and requires thought before 
> implementation, since it will most likely require raising a replication group 
> and rolling up a replication log.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20637) Implement recovery of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20637:
-
Description: 
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
*Notes:*
This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.
# 

  was:
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.
# 


> Implement recovery of distributed index building
> 
>
> Key: IGNITE-20637
> URL: https://issues.apache.org/jira/browse/IGNITE-20637
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need to take care of recovery for the distributed index construction 
> mechanism:
> * For indexes that are not available for read-write:
> # Make sure that the corresponding keys are present in the metastore: 
> *startBuildIndex.<indexId>* and 
> *partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
> *Notes:*
> This is easy to implement, since there is an auxiliary key 
> *startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
> there are no keys for this index in the metastore.
> # 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20637) Implement recovery of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20637:
-
Description: 
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.

  was:
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
## 123 
## This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.


> Implement recovery of distributed index building
> 
>
> Key: IGNITE-20637
> URL: https://issues.apache.org/jira/browse/IGNITE-20637
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need to take care of recovery for the distributed index construction 
> mechanism:
> * For indexes that are not available for read-write:
> # Make sure that the corresponding keys are present in the metastore: 
> *startBuildIndex.<indexId>* and 
> *partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
> This is easy to implement, since there is an auxiliary key 
> *startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
> there are no keys for this index in the metastore.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20637) Implement recovery of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20637:
-
Description: 
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
## 123 
## This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.

  was:
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
## This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.


> Implement recovery of distributed index building
> 
>
> Key: IGNITE-20637
> URL: https://issues.apache.org/jira/browse/IGNITE-20637
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need to take care of recovery for the distributed index construction 
> mechanism:
> * For indexes that are not available for read-write:
> # Make sure that the corresponding keys are present in the metastore: 
> *startBuildIndex.<indexId>* and 
> *partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
> ## 123 
> ## This is easy to implement, since there is an auxiliary key 
> *startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
> there are no keys for this index in the metastore.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20637) Implement recovery of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20637:
-
Description: 
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.
# 

  was:
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.


> Implement recovery of distributed index building
> 
>
> Key: IGNITE-20637
> URL: https://issues.apache.org/jira/browse/IGNITE-20637
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need to take care of recovery for the distributed index construction 
> mechanism:
> * For indexes that are not available for read-write:
> # Make sure that the corresponding keys are present in the metastore: 
> *startBuildIndex.<indexId>* and 
> *partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
> This is easy to implement, since there is an auxiliary key 
> *startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
> there are no keys for this index in the metastore.
> # 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20637) Implement recovery of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20637:
-
Description: 
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
## This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.

  was:
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
## This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.


> Implement recovery of distributed index building
> 
>
> Key: IGNITE-20637
> URL: https://issues.apache.org/jira/browse/IGNITE-20637
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need to take care of recovery for the distributed index construction 
> mechanism:
> * For indexes that are not available for read-write:
> # Make sure that the corresponding keys are present in the metastore: 
> *startBuildIndex.<indexId>* and 
> *partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
> ## This is easy to implement, since there is an auxiliary key 
> *startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
> there are no keys for this index in the metastore.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20637) Implement recovery of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20637:
-
Description: 
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
# # 123
## This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.

  was:
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
## This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.


> Implement recovery of distributed index building
> 
>
> Key: IGNITE-20637
> URL: https://issues.apache.org/jira/browse/IGNITE-20637
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need to take care of recovery for the distributed index construction 
> mechanism:
> * For indexes that are not available for read-write:
> # Make sure that the corresponding keys are present in the metastore: 
> *startBuildIndex.<indexId>* and 
> *partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
> # # 123
> ## This is easy to implement, since there is an auxiliary key 
> *startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
> there are no keys for this index in the metastore.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20637) Implement recovery of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20637:
-
Description: 
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
## This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.

  was:
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
# # 123
## This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.


> Implement recovery of distributed index building
> 
>
> Key: IGNITE-20637
> URL: https://issues.apache.org/jira/browse/IGNITE-20637
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need to take care of recovery for the distributed index construction 
> mechanism:
> * For indexes that are not available for read-write:
> # Make sure that the corresponding keys are present in the metastore: 
> *startBuildIndex.<indexId>* and 
> *partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
> ## This is easy to implement, since there is an auxiliary key 
> *startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
> there are no keys for this index in the metastore.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20637) Implement recovery of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20637:
-
Description: 
We need to take care of recovery for the distributed index construction 
mechanism:
* For indexes that are not available for read-write:
# Make sure that the corresponding keys are present in the metastore: 
*startBuildIndex.<indexId>* and 
*partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
## This is easy to implement, since there is an auxiliary key 
*startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
there are no keys for this index in the metastore.

> Implement recovery of distributed index building
> 
>
> Key: IGNITE-20637
> URL: https://issues.apache.org/jira/browse/IGNITE-20637
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> We need to take care of recovery for the distributed index construction 
> mechanism:
> * For indexes that are not available for read-write:
> # Make sure that the corresponding keys are present in the metastore: 
> *startBuildIndex.<indexId>* and 
> *partitionBuildIndex.<indexId>.<partitionId0>*...*partitionBuildIndex.<indexId>.<partitionIdN>*.
> ## This is easy to implement, since there is an auxiliary key 
> *startBuildIndex.<indexId>* whose absence lets us reliably conclude that 
> there are no keys for this index in the metastore.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20642) "control.sh --snapshot check" command returns 0 even if check fails

2023-10-12 Thread Sergey Korotkov (Jira)
Sergey Korotkov created IGNITE-20642:


 Summary: "control.sh --snapshot check" command returns 0 even if 
check fails
 Key: IGNITE-20642
 URL: https://issues.apache.org/jira/browse/IGNITE-20642
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Korotkov


Even if the snapshot check fails, control.sh returns 0 (which is generally 
treated as a success indicator).

The output contains both error messages about the check failures and the final 
success line
_Command [SNAPSHOT] finished with code: 0_ 


{noformat}
[2023-10-11T11:32:39,167][INFO ][session=93ebdf33][CommandHandlerLog] The check 
procedure has failed, conflict partitions has been found: [counterConflicts=0, 
hashConflicts=1024]

[2023-10-11T11:32:39,168][INFO ][session=93ebdf33][CommandHandlerLog] Hash 
conflicts:

[2023-10-11T11:32:39,219][INFO ][session=93ebdf33][CommandHandlerLog] Conflict 
partition: PartitionKeyV2 [grpId=1845542353, grpName=sql_cache, partId=455]

[2023-10-11T11:32:39,223][INFO ][session=93ebdf33][CommandHandlerLog] Partition 
instances: [PartitionHashRecordV2 [isPrimary=false, consistentId=ducker06, 
updateCntr=null, partitionState=OWNING, size=960, partHash=371088385, 
partVerHash=0], PartitionHashRecordV2 [isPrimary=false, consistentId=ducker09, 
updateCntr=null, partitionState=OWNING, size=0, partHash=0, partVerHash=0]]

...
[2023-10-11T11:32:39,486][INFO ][session=93ebdf33][CommandHandlerLog] Command 
[SNAPSHOT] finished with code: 0
{noformat}
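
For illustration, this is the kind of automation the current behavior breaks 
(the snapshot name below is hypothetical): the guard never fires, because 
control.sh exits with 0 even when the check finds conflicts.

{noformat}
#!/bin/bash
# Expected: a non-zero exit code on conflicts, so this script aborts.
# Actual: control.sh exits with 0, and the error branch is never taken.
if ! ./control.sh --snapshot check my_snapshot; then
    echo "Snapshot check failed, aborting restore" >&2
    exit 1
fi
{noformat}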




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20512) REST API: Remove port range

2023-10-12 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17774783#comment-17774783
 ] 

Pavel Tupitsyn commented on IGNITE-20512:
-

[~aleksandr.pakhomov] looks good to me.

> REST API: Remove port range
> ---
>
> Key: IGNITE-20512
> URL: https://issues.apache.org/jira/browse/IGNITE-20512
> Project: Ignite
>  Issue Type: Improvement
>  Components: rest
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Aleksandr
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> See IGNITE-19601. We agreed to remove the port range from the client 
> connector; the same should be done for the REST connector:
> * Usually we know the exact port both on the client and on the server
> * Other products don't have port ranges; this is unusual for users to see
> * It brings additional complexity and issues (see IGNITE-19571)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20641) Entries added via data streamer to persistent cache are not written to cache dump

2023-10-12 Thread Sergey Korotkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Korotkov updated IGNITE-20641:
-
Labels: IEP-109  (was: )

> Entries added via data streamer to persistent cache are not written to cache 
> dump
> -
>
> Key: IGNITE-20641
> URL: https://issues.apache.org/jira/browse/IGNITE-20641
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Korotkov
>Priority: Minor
>  Labels: IEP-109
>
> Steps to reproduce the problem:
>  * start ignite with persistence
>  * load some entries via the data streamer
>  * restart ignite
>  * create cache dump
>  * check cache dump consistency
> Consistency check would fail with errors like
> {noformat}
> [2023-10-11T12:13:28,711][INFO ][session=427e7c47][CommandHandlerLog] Hash 
> conflicts:
> [2023-10-11T12:13:28,721][INFO ][session=427e7c47][CommandHandlerLog] 
> Conflict partition: PartitionKeyV2 [grpId=-1988013461, grpName=test-cache-1, 
> partId=947]
> [2023-10-11T12:13:28,725][INFO ][session=427e7c47][CommandHandlerLog] 
> Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
> consistentId=ducker03, updateCntr=null, partitionState=OWNING, size=0, 
> partHash=0, partVerHash=0], PartitionHashRecordV2 [isPrimary=false, 
> consistentId=ducker02, updateCntr=null, partitionState=OWNING, size=48, 
> partHash=731883010, partVerHash=0]]
> {noformat}
> *.dump files on the primary are empty, but those on the backups are not.
> ---
> The reason is that after an Ignite restart such entries are always considered 
> to have been added after the dump creation started (in 
> CreateDumpFutureTask::isAfterStart). That is because entries added via the 
> data streamer have a version equal to isolatedStreamerVer, but 
> isolatedStreamerVer changes on each Ignite restart and is always greater 
> than startVer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20641) Entries added via data streamer to persistent cache are not written to cache dump

2023-10-12 Thread Sergey Korotkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Korotkov updated IGNITE-20641:
-
Labels: IEP-109 ise  (was: IEP-109)

> Entries added via data streamer to persistent cache are not written to cache 
> dump
> -
>
> Key: IGNITE-20641
> URL: https://issues.apache.org/jira/browse/IGNITE-20641
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Korotkov
>Priority: Minor
>  Labels: IEP-109, ise
>
> Steps to reproduce the problem:
>  * start ignite with persistence
>  * load some entries via the data streamer
>  * restart ignite
>  * create cache dump
>  * check cache dump consistency
> Consistency check would fail with errors like
> {noformat}
> [2023-10-11T12:13:28,711][INFO ][session=427e7c47][CommandHandlerLog] Hash 
> conflicts:
> [2023-10-11T12:13:28,721][INFO ][session=427e7c47][CommandHandlerLog] 
> Conflict partition: PartitionKeyV2 [grpId=-1988013461, grpName=test-cache-1, 
> partId=947]
> [2023-10-11T12:13:28,725][INFO ][session=427e7c47][CommandHandlerLog] 
> Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
> consistentId=ducker03, updateCntr=null, partitionState=OWNING, size=0, 
> partHash=0, partVerHash=0], PartitionHashRecordV2 [isPrimary=false, 
> consistentId=ducker02, updateCntr=null, partitionState=OWNING, size=48, 
> partHash=731883010, partVerHash=0]]
> {noformat}
> *.dump files on the primary are empty, but those on the backups are not.
> ---
> The reason is that after an Ignite restart such entries are always considered 
> to have been added after the dump creation started (in 
> CreateDumpFutureTask::isAfterStart). That is because entries added via the 
> data streamer have a version equal to isolatedStreamerVer, but 
> isolatedStreamerVer changes on each Ignite restart and is always greater 
> than startVer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20641) Entries added via data streamer to persistent cache are not written to cache dump

2023-10-12 Thread Sergey Korotkov (Jira)
Sergey Korotkov created IGNITE-20641:


 Summary: Entries added via data streamer to persistent cache are 
not written to cache dump
 Key: IGNITE-20641
 URL: https://issues.apache.org/jira/browse/IGNITE-20641
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Korotkov


Steps to reproduce the problem:
 * start ignite with persistence
 * load some entries via the data streamer
 * restart ignite
 * create cache dump
 * check cache dump consistency

Consistency check would fail with errors like
{noformat}
[2023-10-11T12:13:28,711][INFO ][session=427e7c47][CommandHandlerLog] Hash 
conflicts:
[2023-10-11T12:13:28,721][INFO ][session=427e7c47][CommandHandlerLog] Conflict 
partition: PartitionKeyV2 [grpId=-1988013461, grpName=test-cache-1, partId=947]
[2023-10-11T12:13:28,725][INFO ][session=427e7c47][CommandHandlerLog] Partition 
instances: [PartitionHashRecordV2 [isPrimary=false, consistentId=ducker03, 
updateCntr=null, partitionState=OWNING, size=0, partHash=0, partVerHash=0], 
PartitionHashRecordV2 [isPrimary=false, consistentId=ducker02, updateCntr=null, 
partitionState=OWNING, size=48, partHash=731883010, partVerHash=0]]
{noformat}
*.dump files on the primary are empty, but those on the backups are not.

---

The reason is that after an Ignite restart such entries are always considered 
to have been added after the dump creation started (in 
CreateDumpFutureTask::isAfterStart). That is because entries added via the 
data streamer have a version equal to isolatedStreamerVer, but 
isolatedStreamerVer changes on each Ignite restart and is always greater than 
startVer.
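
A minimal model of the comparison described above (the class, fields, and 
method are simplified stand-ins, not the real Ignite classes):

{code:java}
// Hypothetical simplification of CreateDumpFutureTask::isAfterStart.
// startVer is captured when the dump starts. Streamer entries carry
// isolatedStreamerVer, which is regenerated on every node restart and
// always exceeds startVer, so after a restart isAfterStart() wrongly
// reports them as written after the dump began and they are skipped.
final class DumpVersionCheck {
    private final long startVer;

    DumpVersionCheck(long startVer) {
        this.startVer = startVer;
    }

    boolean isAfterStart(long entryVer) {
        // Entries "newer" than the dump start are excluded from the dump.
        return entryVer > startVer;
    }
}
{code}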




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20359) Expose node storage as a node attribute

2023-10-12 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov updated IGNITE-20359:

Description: 
*Motivation*

To introduce node filtering by storage type and profile, we need to expose 
the appropriate node storage configurations as node attributes (a kind of 
inner attribute, not suitable for general filters).

*Definition of done*
- Node storage and storage profile are exposed as node attributes for further 
filtering during the zone dataNodes setup.

*Implementation notes*
- These attributes must be a separate list of inner attributes, which are 
not exposed for the usual zone filtering. So, ClusterNodeMessage must be 
extended with an appropriate field.
- The attributes should look like a map of (String engineType -> String 
profileName); see the sketch below.
- While IGNITE-20564 is not done yet, the part about receiving attributes 
from the node configurations can be implemented around the engine and 
dataRegions, instead of profiles.
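
A minimal sketch of the attribute shape described above (the engine and 
profile names are made-up examples):

{code:java}
import java.util.Map;

// Hypothetical inner storage attributes: storage engine type mapped to
// the profile (or, until IGNITE-20564 lands, the data region) configured
// for it. These would travel in a separate ClusterNodeMessage field and
// stay invisible to user-defined zone filters.
Map<String, String> storageAttributes = Map.of(
        "aipersist", "default",
        "rocksdb", "lru_region");
{code}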

  was:
*Motivation*

To introduce node filtering by storage type and profile, we need to expose 
the appropriate node storage configurations as node attributes (a kind of 
inner attribute, not suitable for general filters).

*Definition of done*
- Node storage and storage profile are exposed as node attributes for further 
filtering during the zone dataNodes setup.

*Implementation notes*
- These attributes must be a separate list of inner attributes, which are 
not exposed for the usual zone filtering. So, ClusterNodeMessage must be 
extended with an appropriate field.
- The attributes should look like a map of (String engineType -> String 
profileName)
- While IGNITE-20564 is not done yet, the part about receiving attributes 
from the configurations can be implemented around the engineTypes 
and dataRegions, instead of profiles.


> Expose node storage as a node attribute
> ---
>
> Key: IGNITE-20359
> URL: https://issues.apache.org/jira/browse/IGNITE-20359
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> To introduce node filtering by storage type and profile, we need to 
> expose the appropriate node storage configurations as node attributes (a 
> kind of inner attribute, not suitable for general filters).
> *Definition of done*
> - Node storage and storage profile are exposed as node attributes for further 
> filtering during the zone dataNodes setup.
> *Implementation notes*
> - These attributes must be a separate list of inner attributes, which are 
> not exposed for the usual zone filtering. So, ClusterNodeMessage must be 
> extended with an appropriate field.
> - The attributes should look like a map of (String engineType -> String 
> profileName)
> - While IGNITE-20564 is not done yet, the part about receiving attributes 
> from the node configurations can be implemented around the engine and 
> dataRegions, instead of profiles.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20359) Expose node storage as a node attribute

2023-10-12 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov updated IGNITE-20359:

Description: 
*Motivation*

To introduce node filtering by storage type and profile, we need to expose 
the appropriate node storage configurations as node attributes (a kind of 
inner attribute, not suitable for general filters).

*Definition of done*
- Node storage and storage profile are exposed as node attributes for further 
filtering during the zone dataNodes setup.

*Implementation notes*
- These attributes must be a separate list of inner attributes, which are 
not exposed for the usual zone filtering. So, ClusterNodeMessage must be 
extended with an appropriate field.
- The attributes should look like a map of (String engineType -> String 
profileName)
- While IGNITE-20564 is not done yet, the part about receiving attributes 
from the configurations can be implemented around the engineTypes 
and dataRegions, instead of profiles.

  was:
*Motivation*

To introduce node filtering by storage type and profile, we need to expose 
the appropriate node storage configurations as node attributes (a kind of 
inner attribute, not suitable for general filters).

*Definition of done*
- Node storage and storage profile are exposed as node attributes for further 
filtering during the zone dataNodes setup.

*Implementation notes*
- These attributes must be a separate list of inner attributes, which are 
not exposed for the usual zone filtering. So, ClusterNodeMessage must be 
extended with an appropriate field.
- The attributes should look like a map of (String engineType -> String 
profileName)
- While IGNITE-20564 is not done yet, the part about receiving attributes 
from the configurations can be implemented 


> Expose node storage as a node attribute
> ---
>
> Key: IGNITE-20359
> URL: https://issues.apache.org/jira/browse/IGNITE-20359
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> To introduce node filtering by storage type and profile, we need to 
> expose the appropriate node storage configurations as node attributes (a 
> kind of inner attribute, not suitable for general filters).
> *Definition of done*
> - Node storage and storage profile are exposed as node attributes for further 
> filtering during the zone dataNodes setup.
> *Implementation notes*
> - These attributes must be a separate list of inner attributes, which are 
> not exposed for the usual zone filtering. So, ClusterNodeMessage must be 
> extended with an appropriate field.
> - The attributes should look like a map of (String engineType -> String 
> profileName)
> - While IGNITE-20564 is not done yet, the part about receiving attributes 
> from the configurations can be implemented around the engineTypes 
> and dataRegions, instead of profiles.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20359) Expose node storage as a node attribute

2023-10-12 Thread Kirill Gusakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Gusakov updated IGNITE-20359:

Description: 
*Motivation*

To introduce node filtering by storage type and profile, we need to expose 
the appropriate node storage configurations as node attributes (a kind of 
inner attribute, not suitable for general filters).

*Definition of done*
- Node storage and storage profile are exposed as node attributes for further 
filtering during the zone dataNodes setup.

*Implementation notes*
- These attributes must be a separate list of inner attributes, which are 
not exposed for the usual zone filtering. So, ClusterNodeMessage must be 
extended with an appropriate field.
- The attributes should look like a map of (String engineType -> String 
profileName)
- While IGNITE-20564 is not done yet, the part about receiving attributes 
from the configurations can be implemented 

  was:
*Motivation*

To introduce node filtering by storage type and profile, we need to expose 
the appropriate node storage configurations as node attributes (a kind of 
inner attribute, not suitable for general filters).

*Definition of done*
- Node storage and storage profile are exposed as node attributes (TODO: 
clarify the details when the design is finished)


> Expose node storage as a node attribute
> ---
>
> Key: IGNITE-20359
> URL: https://issues.apache.org/jira/browse/IGNITE-20359
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> To introduce node filtering by storage type and profile, we need to 
> expose the appropriate node storage configurations as node attributes (a 
> kind of inner attribute, not suitable for general filters).
> *Definition of done*
> - Node storage and storage profile are exposed as node attributes for further 
> filtering during the zone dataNodes setup.
> *Implementation notes*
> - These attributes must be a separate list of inner attributes, which are 
> not exposed for the usual zone filtering. So, ClusterNodeMessage must be 
> extended with an appropriate field.
> - The attributes should look like a map of (String engineType -> String 
> profileName)
> - While IGNITE-20564 is not done yet, the part about receiving attributes 
> from the configurations can be implemented 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20640) Raft node started in a node where it should not be

2023-10-12 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-20640:
---
Description: 
This behavior causes any RAFT operation to get stuck because the leader 
cannot be elected.
{noformat}
[2023-10-10T16:48:48,771][INFO ][%node1%tableManager-io-3][Loza] Start new raft 
node=RaftNodeId [groupId=3_part_15, peer=Peer [consistentId=node1, idx=0]] with 
initial configuration=PeersAndLearners [peers=Set12 [Peer [consistentId=node2, 
idx=0]], learners=SetN []]
{noformat}
This issue is reproduced by the test 
ItDataSchemaSyncTest#checkSchemasCorrectlyRestore; to surface it in the log, 
just add an assertion:

{code:title=Loza#startRaftGroupNodeInternal}
assert configuration.peers().contains(nodeId.peer()) || configuration.learners()
                .contains(nodeId.peer()) : "Raft node started on a peer where 
it should not be";
{code}
{noformat}
[2023-10-10T20:51:51,154][ERROR][%node0%tableManager-io-11][WatchProcessor] 
Error occurred when processing a watch event
 java.lang.AssertionError: Raft node started on a peer where it should not be
at 
org.apache.ignite.internal.raft.Loza.startRaftGroupNodeInternal(Loza.java:361) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:252) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:225) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.startPartitionRaftGroupNode(TableManager.java:1986)
 ~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$90(TableManager.java:1878)
 ~[main/:?]
at 
org.apache.ignite.internal.util.IgniteUtils.inBusyLock(IgniteUtils.java:805) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$91(TableManager.java:1848)
 ~[main/:?]
at 
java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
 [?:?]
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
{noformat}

  was:
This behavior causes any RAFT operation to get stuck because the leader 
cannot be elected. This issue is reproduced by the test 
ItDataSchemaSyncTest#checkSchemasCorrectlyRestore; to surface it in the log, 
just add an assertion:

{code:title=Loza#startRaftGroupNodeInternal}
assert configuration.peers().contains(nodeId.peer()) || configuration.learners()
                .contains(nodeId.peer()) : "Raft node started on a peer where 
it should not be";
{code}
{noformat}
[2023-10-10T20:51:51,154][ERROR][%node0%tableManager-io-11][WatchProcessor] 
Error occurred when processing a watch event
 java.lang.AssertionError: Raft node started on a peer where it should not be
at 
org.apache.ignite.internal.raft.Loza.startRaftGroupNodeInternal(Loza.java:361) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:252) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:225) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.startPartitionRaftGroupNode(TableManager.java:1986)
 ~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$90(TableManager.java:1878)
 ~[main/:?]
at 
org.apache.ignite.internal.util.IgniteUtils.inBusyLock(IgniteUtils.java:805) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$91(TableManager.java:1848)
 ~[main/:?]
at 
java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
 [?:?]
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
{noformat}


> Raft node started in a node where it should not be
> --
>
> Key: IGNITE-20640
> URL: https://issues.apache.org/jira/browse/IGNITE-20640
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Priority: Major
>
> This behavior causes any RAFT operation to get stuck because the leader 
> cannot be elected.
> {noformat}
> [2023-10-10T16:48:48,771][INFO ][%node1%tableManager-io-3][Loza] Start new 
> raft node=RaftNodeId [groupId=3_part_15, peer=Peer [consistentId=node1, 
> idx=0]] 

[jira] [Updated] (IGNITE-20640) Raft node started in a node where it should not be

2023-10-12 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-20640:
---
Description: 
This behavior causes any RAFT operation to get stuck because the leader 
cannot be elected. This issue is reproduced by the test 
ItDataSchemaSyncTest#checkSchemasCorrectlyRestore; to surface it in the log, 
just add an assertion:

{code:title='Loza#startRaftGroupNodeInternal'}

assert configuration.peers().contains(nodeId.peer()) || configuration.learners()
                .contains(nodeId.peer()) : "Raft node started on a peer where 
it should not be";

{code}
{noformat}
[2023-10-10T20:51:51,154][ERROR][%node0%tableManager-io-11][WatchProcessor] 
Error occurred when processing a watch event
 java.lang.AssertionError: Raft node started on a peer where it should not be
at 
org.apache.ignite.internal.raft.Loza.startRaftGroupNodeInternal(Loza.java:361) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:252) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:225) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.startPartitionRaftGroupNode(TableManager.java:1986)
 ~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$90(TableManager.java:1878)
 ~[main/:?]
at 
org.apache.ignite.internal.util.IgniteUtils.inBusyLock(IgniteUtils.java:805) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$91(TableManager.java:1848)
 ~[main/:?]
at 
java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
 [?:?]
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
{noformat}

  was:
This behavior causes any RAFT operation to get stuck because the leader 
cannot be elected. This issue is reproduced by the test 
ItDataSchemaSyncTest#checkSchemasCorrectlyRestore; to surface it in the log, 
just add an assertion:

{code:title="Loza#startRaftGroupNodeInternal"}

assert configuration.peers().contains(nodeId.peer()) || configuration.learners()
                .contains(nodeId.peer()) : "Raft node started on a peer where 
it should not be";

{code}
{noformat}
[2023-10-10T20:51:51,154][ERROR][%node0%tableManager-io-11][WatchProcessor] 
Error occurred when processing a watch event
 java.lang.AssertionError: Raft node started on a peer where it should not be
at 
org.apache.ignite.internal.raft.Loza.startRaftGroupNodeInternal(Loza.java:361) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:252) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:225) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.startPartitionRaftGroupNode(TableManager.java:1986)
 ~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$90(TableManager.java:1878)
 ~[main/:?]
at 
org.apache.ignite.internal.util.IgniteUtils.inBusyLock(IgniteUtils.java:805) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$91(TableManager.java:1848)
 ~[main/:?]
at 
java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
 [?:?]
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
{noformat}


> Raft node started in a node where it should not be
> --
>
> Key: IGNITE-20640
> URL: https://issues.apache.org/jira/browse/IGNITE-20640
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Priority: Major
>
> This behavior causes any RAFT operation to get stuck because the leader 
> cannot be elected. This issue is reproduced by the test 
> ItDataSchemaSyncTest#checkSchemasCorrectlyRestore; to surface it in the 
> log, just add an assertion:
> {code:title='Loza#startRaftGroupNodeInternal'}
> assert configuration.peers().contains(nodeId.peer()) || 
> configuration.learners()
>                 .contains(nodeId.peer()) : "Raft node started on a peer where 
> it should not be";
> {code}
> {noformat}
> 

[jira] [Updated] (IGNITE-20640) Raft node started in a node where it should not be

2023-10-12 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-20640:
---
Description: 
This behavior causes any RAFT operation to get stuck because the leader 
cannot be elected. This issue is reproduced by the test 
ItDataSchemaSyncTest#checkSchemasCorrectlyRestore; to surface it in the log, 
just add an assertion:

{code:title=Loza#startRaftGroupNodeInternal}

assert configuration.peers().contains(nodeId.peer()) || configuration.learners()
                .contains(nodeId.peer()) : "Raft node started on a peer where 
it should not be";

{code}
{noformat}
[2023-10-10T20:51:51,154][ERROR][%node0%tableManager-io-11][WatchProcessor] 
Error occurred when processing a watch event
 java.lang.AssertionError: Raft node started on a peer where it should not be
at 
org.apache.ignite.internal.raft.Loza.startRaftGroupNodeInternal(Loza.java:361) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:252) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:225) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.startPartitionRaftGroupNode(TableManager.java:1986)
 ~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$90(TableManager.java:1878)
 ~[main/:?]
at 
org.apache.ignite.internal.util.IgniteUtils.inBusyLock(IgniteUtils.java:805) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$91(TableManager.java:1848)
 ~[main/:?]
at 
java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
 [?:?]
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
{noformat}
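
For context, here is a minimal runnable sketch of the membership check that this 
assertion enforces; the {{Peer}} and {{Configuration}} types below are 
simplified stand-ins, not the actual Ignite classes (such as PeersAndLearners):

{code:java}
import java.util.Set;

public class MembershipCheck {
    /** Simplified stand-in for a Raft peer identified by a consistent node id. */
    record Peer(String consistentId) {}

    /** Simplified stand-in for a Raft group configuration. */
    record Configuration(Set<Peer> peers, Set<Peer> learners) {
        /** True if the given peer is allowed to host a Raft node of this group. */
        boolean allows(Peer peer) {
            return peers.contains(peer) || learners.contains(peer);
        }
    }

    public static void main(String[] args) {
        Peer local = new Peer("node0");
        Configuration cfg = new Configuration(Set.of(new Peer("node1")), Set.of());

        // Mirrors the assertion from the ticket: starting a Raft node on a peer
        // that is in neither the peers nor the learners list indicates a bug
        // upstream, e.g. a stale pending-assignments event. Run with -ea.
        assert cfg.allows(local) : "Raft node started on a peer where it should not be";
    }
}
{code}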

  was:
This behavior leads to any RAFT operation getting stuck because a leader 
cannot be elected. The issue is reproduced by the test 
ItDataSchemaSyncTest#checkSchemasCorrectlyRestore; to see it in the log, just 
add an assertion:

{code:title='Loza#startRaftGroupNodeInternal'}

assert configuration.peers().contains(nodeId.peer()) || configuration.learners()
                .contains(nodeId.peer()) : "Raft node started on a peer where 
it should not be";

{code}
{noformat}
[2023-10-10T20:51:51,154][ERROR][%node0%tableManager-io-11][WatchProcessor] 
Error occurred when processing a watch event
 java.lang.AssertionError: Raft node started on a peer where it should not be
at 
org.apache.ignite.internal.raft.Loza.startRaftGroupNodeInternal(Loza.java:361) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:252) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:225) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.startPartitionRaftGroupNode(TableManager.java:1986)
 ~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$90(TableManager.java:1878)
 ~[main/:?]
at 
org.apache.ignite.internal.util.IgniteUtils.inBusyLock(IgniteUtils.java:805) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$91(TableManager.java:1848)
 ~[main/:?]
at 
java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
 [?:?]
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
{noformat}


> Raft node started in a node where it should not be
> --
>
> Key: IGNITE-20640
> URL: https://issues.apache.org/jira/browse/IGNITE-20640
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Priority: Major
>
> This behavior leads to any RAFT operation getting stuck because a leader 
> cannot be elected. The issue is reproduced by the test 
> ItDataSchemaSyncTest#checkSchemasCorrectlyRestore; to see it in the log, just 
> add an assertion:
> {code:title=Loza#startRaftGroupNodeInternal}
> assert configuration.peers().contains(nodeId.peer()) || 
> configuration.learners()
>                 .contains(nodeId.peer()) : "Raft node started on a peer where 
> it should not be";
> {code}
> {noformat}
> 

[jira] [Updated] (IGNITE-20640) Raft node started in a node where it should not be

2023-10-12 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-20640:
---
Description: 
This behavior leads to any RAFT operation getting stuck because a leader 
cannot be elected. The issue is reproduced by the test 
ItDataSchemaSyncTest#checkSchemasCorrectlyRestore; to see it in the log, just 
add an assertion:

{code:title=Loza#startRaftGroupNodeInternal}
assert configuration.peers().contains(nodeId.peer()) || configuration.learners()
                .contains(nodeId.peer()) : "Raft node started on a peer where 
it should not be";
{code}
{noformat}
[2023-10-10T20:51:51,154][ERROR][%node0%tableManager-io-11][WatchProcessor] 
Error occurred when processing a watch event
 java.lang.AssertionError: Raft node started on a peer where it should not be
at 
org.apache.ignite.internal.raft.Loza.startRaftGroupNodeInternal(Loza.java:361) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:252) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:225) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.startPartitionRaftGroupNode(TableManager.java:1986)
 ~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$90(TableManager.java:1878)
 ~[main/:?]
at 
org.apache.ignite.internal.util.IgniteUtils.inBusyLock(IgniteUtils.java:805) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$91(TableManager.java:1848)
 ~[main/:?]
at 
java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
 [?:?]
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
{noformat}

  was:
This behavior leads to any RAFT operation getting stuck because a leader 
cannot be elected. The issue is reproduced by the test 
ItDataSchemaSyncTest#checkSchemasCorrectlyRestore; to see it in the log, just 
add an assertion:

{code:title=Loza#startRaftGroupNodeInternal}

assert configuration.peers().contains(nodeId.peer()) || configuration.learners()
                .contains(nodeId.peer()) : "Raft node started on a peer where 
it should not be";

{code}
{noformat}
[2023-10-10T20:51:51,154][ERROR][%node0%tableManager-io-11][WatchProcessor] 
Error occurred when processing a watch event
 java.lang.AssertionError: Raft node started on a peer where it should not be
at 
org.apache.ignite.internal.raft.Loza.startRaftGroupNodeInternal(Loza.java:361) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:252) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:225) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.startPartitionRaftGroupNode(TableManager.java:1986)
 ~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$90(TableManager.java:1878)
 ~[main/:?]
at 
org.apache.ignite.internal.util.IgniteUtils.inBusyLock(IgniteUtils.java:805) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$91(TableManager.java:1848)
 ~[main/:?]
at 
java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
 [?:?]
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
{noformat}


> Raft node started in a node where it should not be
> --
>
> Key: IGNITE-20640
> URL: https://issues.apache.org/jira/browse/IGNITE-20640
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Priority: Major
>
> This behavior leads to any RAFT operation getting stuck because a leader 
> cannot be elected. The issue is reproduced by the test 
> ItDataSchemaSyncTest#checkSchemasCorrectlyRestore; to see it in the log, just 
> add an assertion:
> {code:title=Loza#startRaftGroupNodeInternal}
> assert configuration.peers().contains(nodeId.peer()) || 
> configuration.learners()
>                 .contains(nodeId.peer()) : "Raft node started on a peer where 
> it should not be";
> {code}
> {noformat}
> 

[jira] [Created] (IGNITE-20640) Raft node started in a node where it should not be

2023-10-12 Thread Vladislav Pyatkov (Jira)
Vladislav Pyatkov created IGNITE-20640:
--

 Summary: Raft node started in a node where it should not be
 Key: IGNITE-20640
 URL: https://issues.apache.org/jira/browse/IGNITE-20640
 Project: Ignite
  Issue Type: Bug
Reporter: Vladislav Pyatkov


This behavior leads to any RAFT operation getting stuck because a leader 
cannot be elected. The issue is reproduced by the test 
ItDataSchemaSyncTest#checkSchemasCorrectlyRestore; to see it in the log, just 
add an assertion:

{code:title="Loza#startRaftGroupNodeInternal"}

assert configuration.peers().contains(nodeId.peer()) || configuration.learners()
                .contains(nodeId.peer()) : "Raft node started on a peer where 
it should not be";

{code}
{noformat}
[2023-10-10T20:51:51,154][ERROR][%node0%tableManager-io-11][WatchProcessor] 
Error occurred when processing a watch event
 java.lang.AssertionError: Raft node started on a peer where it should not be
at 
org.apache.ignite.internal.raft.Loza.startRaftGroupNodeInternal(Loza.java:361) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:252) 
~[main/:?]
at org.apache.ignite.internal.raft.Loza.startRaftGroupNode(Loza.java:225) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.startPartitionRaftGroupNode(TableManager.java:1986)
 ~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$90(TableManager.java:1878)
 ~[main/:?]
at 
org.apache.ignite.internal.util.IgniteUtils.inBusyLock(IgniteUtils.java:805) 
~[main/:?]
at 
org.apache.ignite.internal.table.distributed.TableManager.lambda$handleChangePendingAssignmentEvent$91(TableManager.java:1848)
 ~[main/:?]
at 
java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
 [?:?]
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
 [?:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20512) REST API: Remove port range

2023-10-12 Thread Aleksandr (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774583#comment-17774583
 ] 

Aleksandr commented on IGNITE-20512:


Hi, [~ptupitsyn]. Can you review the PR, please?

> REST API: Remove port range
> ---
>
> Key: IGNITE-20512
> URL: https://issues.apache.org/jira/browse/IGNITE-20512
> Project: Ignite
>  Issue Type: Improvement
>  Components: rest
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Aleksandr
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> See IGNITE-19601. We agreed to remove port range from client connector, the 
> same should be done for REST connector:
> * Usually we know the exact port both on client and on server
> * Other products don't have port ranges, this is unusual for the users to see
> * It brings additional complexity and issues (see IGNITE-19571)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20639) Remove portRange from network configuration

2023-10-12 Thread Aleksandr (Jira)
Aleksandr created IGNITE-20639:
--

 Summary: Remove portRange from network configuration
 Key: IGNITE-20639
 URL: https://issues.apache.org/jira/browse/IGNITE-20639
 Project: Ignite
  Issue Type: Task
  Components: networking
Reporter: Aleksandr


The goal of IGNITE-20512 was to remove {{rest.portRange}} from the REST server 
configuration and force the user to set the port directly via {{rest.port}}. 
That makes the configuration more robust and easier to understand.

I've noticed that we also have a {{network.portRange}} configuration option 
that looks weird now. It gets even stranger when we configure the node 
discovery part: {{network.nodeFinder.netClusterNodes: [localhost:3344, 
localhost:3345]}}. We have to specify peers with exact addresses but configure 
the current node's own address with a range.

I propose to get rid of {{portRange}} in the {{network}} section, as was 
already done for {{rest}} and {{clientConnector}}.
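
A hypothetical before/after of the {{network}} section (HOCON-style syntax and 
exact key names are assumptions based on the ticket text):

{code}
// Before: the node's own port is a range, while peers are exact addresses.
network {
  port: 3344
  portRange: 100   // the node may bind to any port in [port, port + portRange]
  nodeFinder.netClusterNodes: ["localhost:3344", "localhost:3345"]
}

// After the proposal: a single exact port, symmetric with how peers are listed.
network {
  port: 3344
  nodeFinder.netClusterNodes: ["localhost:3344", "localhost:3345"]
}
{code}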



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20638) Integration of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-20638:


 Summary: Integration of distributed index building
 Key: IGNITE-20638
 URL: https://issues.apache.org/jira/browse/IGNITE-20638
 Project: Ignite
  Issue Type: Improvement
Reporter: Kirill Tkalenko
 Fix For: 3.0.0-beta2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20637) Implement recovery of distributed index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20637:
-
Summary: Implement recovery of distributed index building  (was: Implement 
recovery ща distributed of index building)

> Implement recovery of distributed index building
> 
>
> Key: IGNITE-20637
> URL: https://issues.apache.org/jira/browse/IGNITE-20637
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20637) Implement recovery ща distributed of index building

2023-10-12 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-20637:
-
Fix Version/s: 3.0.0-beta2

> Implement recovery ща distributed of index building
> ---
>
> Key: IGNITE-20637
> URL: https://issues.apache.org/jira/browse/IGNITE-20637
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20637) Implement recovery ща distributed of index building

2023-10-12 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-20637:


 Summary: Implement recovery ща distributed of index building
 Key: IGNITE-20637
 URL: https://issues.apache.org/jira/browse/IGNITE-20637
 Project: Ignite
  Issue Type: Improvement
Reporter: Kirill Tkalenko






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20636) Add to the MakeIndexAvailableCommand the ability to use only the indexId

2023-10-12 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-20636:


 Summary: Add to the MakeIndexAvailableCommand the ability to use 
only the indexId
 Key: IGNITE-20636
 URL: https://issues.apache.org/jira/browse/IGNITE-20636
 Project: Ignite
  Issue Type: Improvement
Reporter: Kirill Tkalenko


It is necessary to allow 
*org.apache.ignite.internal.catalog.commands.MakeIndexAvailableCommand* to be 
created either with only the index identifier, or with the index name together 
with the table and schema names, but so that these parameter sets do not mix.
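
A hypothetical shape of such validation (illustrative names only, not the real 
catalog API):

{code:java}
/** Parameters for making an index available: either by id, or by schema + name. */
final class MakeIndexAvailableParams {
    private final Integer indexId;     // mode 1: index id only
    private final String schemaName;   // mode 2: schema + index name
    private final String indexName;

    private MakeIndexAvailableParams(Integer indexId, String schemaName, String indexName) {
        // The two parameter sets are mutually exclusive, as the ticket requires.
        if (indexId != null && (schemaName != null || indexName != null)) {
            throw new IllegalArgumentException("indexId must not be mixed with name parameters");
        }
        if (indexId == null && (schemaName == null || indexName == null)) {
            throw new IllegalArgumentException("either indexId or schemaName + indexName is required");
        }
        this.indexId = indexId;
        this.schemaName = schemaName;
        this.indexName = indexName;
    }

    static MakeIndexAvailableParams byId(int indexId) {
        return new MakeIndexAvailableParams(indexId, null, null);
    }

    static MakeIndexAvailableParams byName(String schemaName, String indexName) {
        return new MakeIndexAvailableParams(null, schemaName, indexName);
    }
}
{code}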



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19325) Unmute test MultiActorPlacementDriverTest#prolongAfterActiveActorChanger

2023-10-12 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774559#comment-17774559
 ] 

Vladislav Pyatkov commented on IGNITE-19325:


Merged 44c0c0783555d02d48f9450d79c8e2deec0e4d28

> Unmute test MultiActorPlacementDriverTest#prolongAfterActiveActorChanger
> 
>
> Key: IGNITE-19325
> URL: https://issues.apache.org/jira/browse/IGNITE-19325
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: iep-101, ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *Motivation*
> The general purpose of the issue is to have at least one test that checks the 
> active actor for the placement driver. The test is already present in the code 
> base (MultiActorPlacementDriverTest#prolongAfterActiveActorChanger), but it was 
> muted a long time ago.
> *Implementation notes*
> The exception that currently appears in the test looks like the Metastorage 
> one and does not depend on the placement driver:
> {noformat}
> [2023-10-11T14:55:14,672][INFO 
> ][%mapdt_paaac_1234%MessagingService-inbound--0][MultiActorPlacementDriverTest]
>  Meta storage is unavailable
> java.util.concurrent.CompletionException: 
> org.apache.ignite.raft.jraft.rpc.impl.RaftException: IGN-CMN-65535 
> TraceId:54e6d210-5660-4f98-b175-b66ac45aeaf6 ETIMEDOUT:RPC exception:null
>   at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>  ~[?:?]
>   at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.handleErrorResponse(RaftGroupServiceImpl.java:622)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.lambda$sendWithRetry$39(RaftGroupServiceImpl.java:536)
>  ~[main/:?]
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073) 
> ~[?:?]
>   at 
> org.apache.ignite.network.DefaultMessagingService.onInvokeResponse(DefaultMessagingService.java:416)
>  ~[main/:?]
>   at 
> org.apache.ignite.network.DefaultMessagingService.onMessage(DefaultMessagingService.java:368)
>  ~[main/:?]
>   at 
> org.apache.ignite.network.DefaultMessagingService.lambda$onMessage$4(DefaultMessagingService.java:350)
>  ~[main/:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>   at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: org.apache.ignite.raft.jraft.rpc.impl.RaftException: ETIMEDOUT:RPC 
> exception:null
>   ... 12 more
> {noformat}
> The test should be unmuted and corrected for the current circumstances.
> *Definition of done*
> We have a test (that runs in TC) that demonstrates the placement driver's 
> behavior when the active actor is changing.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20618) Sql. Degradation of SELECT operations performance over time

2023-10-12 Thread Konstantin Orlov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774552#comment-17774552
 ] 

Konstantin Orlov commented on IGNITE-20618:
---

The next candidate is PartitionReplicaListener, which creates an ArrayList of 
size {{request.batchSize()}} (which is 10_000) for every scan of every 
partition, even when we are searching by primary key.

 !Screenshot 2023-10-12 at 17.20.05.png!  !Screenshot 2023-10-12 at 17.20.20.png! 
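
An illustrative fix direction (plain JDK, not the actual PartitionReplicaListener 
code): bound the initial capacity by what the operation can actually return 
instead of always pre-allocating batchSize elements.

{code:java}
import java.util.ArrayList;
import java.util.List;

class BatchAllocation {
    static <T> List<T> resultBuffer(int batchSize, boolean exactKeyLookup) {
        // A primary-key lookup returns at most one row, so pre-allocating a
        // 10_000-element backing array per request is pure GC pressure.
        // The 512 cap for scans is an arbitrary illustrative default.
        int capacity = exactKeyLookup ? 1 : Math.min(batchSize, 512);
        return new ArrayList<>(capacity);
    }
}
{code}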

> Sql. Degradation of SELECT operations performance over time
> ---
>
> Key: IGNITE-20618
> URL: https://issues.apache.org/jira/browse/IGNITE-20618
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Ivan Artiukhov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: Screenshot 2023-10-12 at 16.34.16.png, Screenshot 
> 2023-10-12 at 16.34.58.png, Screenshot 2023-10-12 at 17.20.05.png, Screenshot 
> 2023-10-12 at 17.20.20.png, 
> gc-poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log, 
> ignite-config.json, 
> poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log, 
> sql-degr-insert.png, sql-degr-select.png
>
>
> Ignite 3, rev. 7d188ac7ae068bd69ff0e6e6cfe5a32ac5749d17
> Benchmark: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.3/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> The benchmark establishes an SQL {{Session}} and performs {{SELECTs}} via 
> {{preparedStatement}}.
> *Steps*:
>  * Start a 1 node cluster with the attached [^ignite-config.json]
>  ** *raft.fsync = false*
>  * Start the benchmark in pre-load mode – preload 100k entries:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.43 -p recordcount=10 -p operationcount=10}}
>  * Start the benchmark in 100% read mode:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.43 -p recordcount=10 -p operationcount=10}}
> The following *stable* throughput is observed in preload mode ({{INSERT}}):
> !sql-degr-insert.png!
> The following *unstable* throughput is observed on reads ({{SELECT}}):
> !sql-degr-select.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20618) Sql. Degradation of SELECT operations performance over time

2023-10-12 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-20618:
--
Attachment: Screenshot 2023-10-12 at 17.20.05.png
Screenshot 2023-10-12 at 17.20.20.png

> Sql. Degradation of SELECT operations performance over time
> ---
>
> Key: IGNITE-20618
> URL: https://issues.apache.org/jira/browse/IGNITE-20618
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Ivan Artiukhov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: Screenshot 2023-10-12 at 16.34.16.png, Screenshot 
> 2023-10-12 at 16.34.58.png, Screenshot 2023-10-12 at 17.20.05.png, Screenshot 
> 2023-10-12 at 17.20.20.png, 
> gc-poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log, 
> ignite-config.json, 
> poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log, 
> sql-degr-insert.png, sql-degr-select.png
>
>
> Ignite 3, rev. 7d188ac7ae068bd69ff0e6e6cfe5a32ac5749d17
> Benchmark: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.3/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> The benchmark establishes an SQL {{Session}} and performs {{SELECTs}} via 
> {{preparedStatement}}.
> *Steps*:
>  * Start a 1 node cluster with the attached [^ignite-config.json]
>  ** *raft.fsync = false*
>  * Start the benchmark in pre-load mode – preload 100k entries:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.43 -p recordcount=10 -p operationcount=10}}
>  * Start the benchmark in 100% read mode:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.43 -p recordcount=10 -p operationcount=10}}
> The following *stable* throughput is observed in preload mode ({{INSERT}}):
> !sql-degr-insert.png!
> The following *unstable* throughput is observed on reads ({{SELECT}}):
> !sql-degr-select.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20635) Clean up code wrt IGNITE-18733 mentions

2023-10-12 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20635:
---
Summary: Clean up code wrt IGNITE-18733 mentions  (was: Enable tests 
disabled with IGNITE-18733 and remove corresponding workarounds)

> Clean up code wrt IGNITE-18733 mentions
> ---
>
> Key: IGNITE-20635
> URL: https://issues.apache.org/jira/browse/IGNITE-20635
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3, tech-debt
> Fix For: 3.0.0-beta2
>
>
> IGNITE-18733 is to be looked up in the code. Tests disabled with this key 
> should be enabled or reassigned to other (maybe new) issues. Workarounds (like 
> waiting for tables/indexes to appear) tagged with this key should be removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20635) Cleanup code wrt IGNITE-18733 mentions

2023-10-12 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20635:
---
Description: IGNITE-18733 is to be looked up in the code. Tests disabled 
with this key should be enabled or remapped to other (maybe new) issues. 
Workarounds (like waiting for tables/indexes to appear) tagged with this key 
should be removed.  (was: IGNITE-18733 is to be looked up in the code. Tests 
disabled with this key should be enabled or reassined to other (maybe new) 
issues. Workarounds (like waiting for tables/indexes to appear) tagged with 
this key should be removed.)

> Cleanup code wrt IGNITE-18733 mentions
> --
>
> Key: IGNITE-20635
> URL: https://issues.apache.org/jira/browse/IGNITE-20635
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3, tech-debt
> Fix For: 3.0.0-beta2
>
>
> IGNITE-18733 is to be looked up in the code. Tests disabled with this key 
> should be enabled or remapped to other (maybe new) issues. Workarounds (like 
> waiting for tables/indexes to appear) tagged with this key should be removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20635) Cleanup code wrt IGNITE-18733 mentions

2023-10-12 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20635:
---
Summary: Cleanup code wrt IGNITE-18733 mentions  (was: Clean up code wrt 
IGNITE-18733 mentions)

> Cleanup code wrt IGNITE-18733 mentions
> --
>
> Key: IGNITE-20635
> URL: https://issues.apache.org/jira/browse/IGNITE-20635
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3, tech-debt
> Fix For: 3.0.0-beta2
>
>
> IGNITE-18733 is to be looked up in the code. Tests disabled with this key 
> should be enabled or reassigned to other (maybe new) issues. Workarounds (like 
> waiting for tables/indexes to appear) tagged with this key should be removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-17931) Blocking code inside SchemaRegistryImpl#schema(int), need to be refactored.

2023-10-12 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy resolved IGNITE-17931.

Fix Version/s: 3.0.0-beta2
   Resolution: Fixed

Fixed by IGNITE-19226

> Blocking code inside SchemaRegistryImpl#schema(int), need to be refactored.
> ---
>
> Key: IGNITE-17931
> URL: https://issues.apache.org/jira/browse/IGNITE-17931
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha5
>Reporter: Evgeny Stanilovsky
>Priority: Blocker
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Previously, a blocking fut.join() was contained in SchemaManager#tableSchema; 
> after refactoring it moved into SchemaRegistryImpl#schema(int) [1], and it is 
> necessary to remove the blocking approach.
> [1] 
> https://github.com/apache/ignite-3/blob/7b0b3395de97db09896272e03322bba302c0b556/modules/schema/src/main/java/org/apache/ignite/internal/schema/registry/SchemaRegistryImpl.java#L93
>  
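
A sketch of the refactoring direction only (the real SchemaRegistryImpl API 
differs): expose the future and let callers compose on it instead of joining.

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class SchemaRegistrySketch<S> {
    private final ConcurrentMap<Integer, CompletableFuture<S>> schemasByVersion =
            new ConcurrentHashMap<>();

    // Blocking variant (what the ticket asks to remove): join() stalls the
    // caller until the schema of the requested version arrives.
    S schemaBlocking(int version) {
        return future(version).join();
    }

    // Non-blocking variant: the caller chains continuations on the future.
    CompletableFuture<S> schemaAsync(int version) {
        return future(version);
    }

    private CompletableFuture<S> future(int version) {
        return schemasByVersion.computeIfAbsent(version, v -> new CompletableFuture<>());
    }

    /** Invoked when the schema of the given version becomes available. */
    void onSchemaReady(int version, S schema) {
        future(version).complete(schema);
    }
}
{code}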



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19226) Fetch table schema by timestamp

2023-10-12 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774540#comment-17774540
 ] 

Roman Puchkovskiy commented on IGNITE-19226:


Thanks

> Fetch table schema by timestamp
> ---
>
> Key: IGNITE-19226
> URL: https://issues.apache.org/jira/browse/IGNITE-19226
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: iep-98, ignite-3
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Currently, when obtaining a schema, its latest (from the local point of view) 
> version is returned.
>  # Table schema must always be obtained using a timestamp
>  # This might require a wait (until MetaStorage's SafeTime >= schemaTs-DD, 
> see 
> [https://cwiki.apache.org/confluence/display/IGNITE/IEP-98%3A+Schema+Synchronization#IEP98:SchemaSynchronization-Waitingforsafetimeinthepast]
>  )
> This includes the mechanisms that allow clients to obtain the 'current' schema 
> (like 'DESCRIBE ', 'list tables', etc.).
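
A sketch of the described wait-then-read protocol; the interfaces are 
hypothetical stand-ins for the MetaStorage safe-time watch and the catalog, and 
{{dd}} stands for the DD delay duration from IEP-98:

{code:java}
import java.util.concurrent.CompletableFuture;

class SchemaByTimestamp<S> {
    interface SafeTimeTracker {
        /** Completes when MetaStorage's SafeTime reaches the given timestamp. */
        CompletableFuture<Void> waitFor(long timestamp);
    }

    interface SchemaStore<T> {
        T schemaAt(long timestamp);
    }

    CompletableFuture<S> schemaAt(long schemaTs, long dd,
            SafeTimeTracker safeTime, SchemaStore<S> store) {
        // Wait until SafeTime >= schemaTs - DD; after that the locally visible
        // catalog state includes the schema version active at schemaTs, so the
        // read below cannot observe a stale schema.
        return safeTime.waitFor(schemaTs - dd)
                .thenApply(ignored -> store.schemaAt(schemaTs));
    }
}
{code}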



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20635) Enable tests disabled with IGNITE-18733 and remove corresponding workarounds

2023-10-12 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-20635:
--

 Summary: Enable tests disabled with IGNITE-18733 and remove 
corresponding workarounds
 Key: IGNITE-20635
 URL: https://issues.apache.org/jira/browse/IGNITE-20635
 Project: Ignite
  Issue Type: Improvement
Reporter: Roman Puchkovskiy
Assignee: Roman Puchkovskiy
 Fix For: 3.0.0-beta2


IGNITE-18733 is to be looked up in the code. Tests disabled with this key 
should be enabled or reassigned to other (maybe new) issues. Workarounds (like 
waiting for tables/indexes to appear) tagged with this key should be removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20566) CDC doesn't replicate complex objects when keepBinary is set to false

2023-10-12 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov reassigned IGNITE-20566:


Assignee: (was: Nikolay Izhikov)

> CDC doesn't replicate complex objects when keepBinary is set to false
> -
>
> Key: IGNITE-20566
> URL: https://issues.apache.org/jira/browse/IGNITE-20566
> Project: Ignite
>  Issue Type: Bug
>Reporter: Anton Vinogradov
>Priority: Major
>
> To reproduce, just change 
> {{org.apache.ignite.cdc.CdcConfiguration#DFLT_KEEP_BINARY}} to {{false}}.
> {{org.apache.ignite.cdc.AbstractReplicationTest#testActivePassiveReplication}} 
> will still be successful since it uses a primitive key/value.
> {{org.apache.ignite.cdc.AbstractReplicationTest#testActivePassiveReplicationComplexKeyWithKeyValue}} 
> will get stuck; the transaction on the destination cluster will never be 
> finished.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20621) Add a replication group index build completion listener to IndexBuilder

2023-10-12 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774523#comment-17774523
 ] 

Roman Puchkovskiy commented on IGNITE-20621:


The patch looks good to me

> Add a replication group index build completion listener to IndexBuilder
> ---
>
> Key: IGNITE-20621
> URL: https://issues.apache.org/jira/browse/IGNITE-20621
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We need to add a replication group index build completion listener to 
> *org.apache.ignite.internal.table.distributed.index.IndexBuilder* so that we 
> can switch the index state from write-only to read-write.
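
The listener could take roughly the following shape (illustrative only; the 
real IndexBuilder API may differ):

{code:java}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class IndexBuilderSketch {
    /** Notified when an index is fully built within one partition's replication group. */
    interface IndexBuildCompletionListener {
        void onBuildCompleted(int indexId, int tableId, int partitionId);
    }

    private final List<IndexBuildCompletionListener> listeners = new CopyOnWriteArrayList<>();

    void listen(IndexBuildCompletionListener listener) {
        listeners.add(listener);
    }

    // Called by a build task once the last batch of rows has been indexed;
    // a subscriber can then switch the index from write-only to read-write.
    private void fireBuildCompleted(int indexId, int tableId, int partitionId) {
        for (IndexBuildCompletionListener listener : listeners) {
            listener.onBuildCompleted(indexId, tableId, partitionId);
        }
    }
}
{code}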



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20618) Sql. Degradation of SELECT operations performance over time

2023-10-12 Thread Konstantin Orlov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774522#comment-17774522
 ] 

Konstantin Orlov commented on IGNITE-20618:
---

It looks like the main contributor to heap pollution is the SQL engine, which 
converts index keys using BinaryTupleBuilder with default estimates of the 
tuple size:

 !Screenshot 2023-10-12 at 16.34.16.png!  !Screenshot 2023-10-12 at 16.34.58.png! 
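
To illustrate the effect (the numbers below are hypothetical, not the real 
BinaryTupleBuilder defaults): when the builder's buffer is sized by a generic 
default estimate rather than derived from the key schema, every key conversion 
briefly allocates far more memory than the key needs.

{code:java}
class TupleSizeEstimate {
    public static void main(String[] args) {
        int defaultEstimate = 32 * 1024; // assumed generous default buffer size
        int actualKeySize = 16;          // a typical small primary key

        // Waste per lookup; multiplied by thousands of SELECTs per second this
        // becomes steady heap pollution and extra GC work.
        System.out.printf("waste per key: %d bytes%n", defaultEstimate - actualKeySize);

        // Fix direction: derive the estimate from the key schema, e.g. the sum
        // of fixed column widths plus a small slack for variable-length columns.
    }
}
{code}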

> Sql. Degradation of SELECT operations performance over time
> ---
>
> Key: IGNITE-20618
> URL: https://issues.apache.org/jira/browse/IGNITE-20618
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Ivan Artiukhov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: Screenshot 2023-10-12 at 16.34.16.png, Screenshot 
> 2023-10-12 at 16.34.58.png, 
> gc-poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log, 
> ignite-config.json, 
> poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log, 
> sql-degr-insert.png, sql-degr-select.png
>
>
> Ignite 3, rev. 7d188ac7ae068bd69ff0e6e6cfe5a32ac5749d17
> Benchmark: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.3/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> The benchmark establishes an SQL {{Session}} and performs {{SELECTs}} via 
> {{preparedStatement}}.
> *Steps*:
>  * Start a 1 node cluster with the attached [^ignite-config.json]
>  ** *raft.fsync = false*
>  * Start the benchmark in pre-load mode – preload 100k entries:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.43 -p recordcount=10 -p operationcount=10}}
>  * Start the benchmark in 100% read mode:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.43 -p recordcount=10 -p operationcount=10}}
> The following *stable* throughput is observed in preload mode ({{INSERT}}):
> !sql-degr-insert.png!
> The following *unstable* throughput is observed on reads ({{SELECT}}):
> !sql-degr-select.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20618) Sql. Degradation of SELECT operations performance over time

2023-10-12 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-20618:
--
Attachment: Screenshot 2023-10-12 at 16.34.16.png
Screenshot 2023-10-12 at 16.34.58.png

> Sql. Degradation of SELECT operations performance over time
> ---
>
> Key: IGNITE-20618
> URL: https://issues.apache.org/jira/browse/IGNITE-20618
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Ivan Artiukhov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: Screenshot 2023-10-12 at 16.34.16.png, Screenshot 
> 2023-10-12 at 16.34.58.png, 
> gc-poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log, 
> ignite-config.json, 
> poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log, 
> sql-degr-insert.png, sql-degr-select.png
>
>
> Ignite 3, rev. 7d188ac7ae068bd69ff0e6e6cfe5a32ac5749d17
> Benchmark: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.3/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> The benchmark establishes an SQL {{Session}} and performs {{SELECTs}} via 
> {{preparedStatement}}.
> *Steps*:
>  * Start a 1 node cluster with the attached [^ignite-config.json]
>  ** *raft.fsync = false*
>  * Start the benchmark in pre-load mode – preload 100k entries:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.43 -p recordcount=10 -p operationcount=10}}
>  * Start the benchmark in 100% read mode:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.43 -p recordcount=10 -p operationcount=10}}
> The following *stable* throughput is observed in preload mode ({{INSERT}}):
> !sql-degr-insert.png!
> The following *unstable* throughput is observed on reads ({{SELECT}}):
> !sql-degr-select.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20634) Sql. Indices with write-only status should not be accessible via sql schemas.

2023-10-12 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-20634:
--
Labels: ignite-3  (was: )

> Sql. Indices with write-only status should not be accessible via sql schemas. 
>
> -
>
> Key: IGNITE-20634
> URL: https://issues.apache.org/jira/browse/IGNITE-20634
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> At the moment SqlSchemaManager ignores the write-only index status and returns 
> all indices, which may lead to scans/key lookups over an index that is not 
> fully built.
> Update SqlSchemaManager to exclude such indices. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20634) Sql. Indices with write-only status should not be accessible via sql schemas.

2023-10-12 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-20634:
-

 Summary: Sql. Indices with write-only status should not be 
accessible via sql schemas.
 Key: IGNITE-20634
 URL: https://issues.apache.org/jira/browse/IGNITE-20634
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Maksim Zhuravkov
 Fix For: 3.0.0-beta2


At the moment SqlSchemaManager ignores the write-only index status and returns 
all indices, which may lead to scans/key lookups over an index that is not 
fully built.
Update SqlSchemaManager to exclude such indices. 
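
A minimal sketch of the intended filtering (the {{CatalogIndex}} and status 
types are illustrative, not the real catalog classes):

{code:java}
import java.util.List;
import static java.util.stream.Collectors.toList;

class SqlSchemaFilter {
    enum IndexStatus { WRITE_ONLY, READ_WRITE }

    record CatalogIndex(String name, IndexStatus status) {}

    /** Only fully built (read-write) indices should be visible to the SQL planner. */
    static List<CatalogIndex> visibleToSql(List<CatalogIndex> all) {
        return all.stream()
                .filter(index -> index.status() == IndexStatus.READ_WRITE)
                .collect(toList());
    }
}
{code}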



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19325) Unmute test MultiActorPlacementDriverTest#prolongAfterActiveActorChanger

2023-10-12 Thread Vyacheslav Koptilin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774490#comment-17774490
 ] 

Vyacheslav Koptilin commented on IGNITE-19325:
--

lgtm

> Unmute test MultiActorPlacementDriverTest#prolongAfterActiveActorChanger
> 
>
> Key: IGNITE-19325
> URL: https://issues.apache.org/jira/browse/IGNITE-19325
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: iep-101, ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Motivation*
> The general purpose of the issue is to have at least one test that checks the 
> active actor for the placement driver. The test is already present in the code 
> base (MultiActorPlacementDriverTest#prolongAfterActiveActorChanger), but it was 
> muted a long time ago.
> *Implementation notes*
> The exception that currently appears in the test looks like the Metastorage 
> one and does not depend on the placement driver:
> {noformat}
> [2023-10-11T14:55:14,672][INFO 
> ][%mapdt_paaac_1234%MessagingService-inbound--0][MultiActorPlacementDriverTest]
>  Meta storage is unavailable
> java.util.concurrent.CompletionException: 
> org.apache.ignite.raft.jraft.rpc.impl.RaftException: IGN-CMN-65535 
> TraceId:54e6d210-5660-4f98-b175-b66ac45aeaf6 ETIMEDOUT:RPC exception:null
>   at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>  ~[?:?]
>   at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.handleErrorResponse(RaftGroupServiceImpl.java:622)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.lambda$sendWithRetry$39(RaftGroupServiceImpl.java:536)
>  ~[main/:?]
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>  ~[?:?]
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073) 
> ~[?:?]
>   at 
> org.apache.ignite.network.DefaultMessagingService.onInvokeResponse(DefaultMessagingService.java:416)
>  ~[main/:?]
>   at 
> org.apache.ignite.network.DefaultMessagingService.onMessage(DefaultMessagingService.java:368)
>  ~[main/:?]
>   at 
> org.apache.ignite.network.DefaultMessagingService.lambda$onMessage$4(DefaultMessagingService.java:350)
>  ~[main/:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>   at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: org.apache.ignite.raft.jraft.rpc.impl.RaftException: ETIMEDOUT:RPC 
> exception:null
>   ... 12 more
> {noformat}
> The test should be unmuted and corrected for the current circumstances.
> *Definition of done*
> We have a test (that runs in TC) that demonstrates the placement driver's 
> behavior when the active actor is changing.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20575) Forbid mixed cache groups with both atomic and transactional caches (with system property able to allow)

2023-10-12 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-20575:
--
Release Note: Mixed atomicity cache groups are now restricted, but the 
IGNITE_ALLOW_MIXED_CACHE_GROUPS system option may temporarily allow them.  (was: 
Mixed atomicity cache groups are now restricted, but the 
IGNITE_ALLOW_MIXED_CACHE_GROUPS system option may temporarily allow it.)

> Forbid mixed cache groups with both atomic and transactional caches (with 
> system property able to allow)
> 
>
> Key: IGNITE-20575
> URL: https://issues.apache.org/jira/browse/IGNITE-20575
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20575) Forbid mixed cache groups with both atomic and transactional caches (with system property able to allow)

2023-10-12 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-20575:
--
Release Note: Mixed atomicity cache groups are now restricted, but the 
IGNITE_ALLOW_MIXED_CACHE_GROUPS system option may temporarily allow it.

> Forbid mixed cache groups with both atomic and transactional caches (with 
> system property able to allow)
> 
>
> Key: IGNITE-20575
> URL: https://issues.apache.org/jira/browse/IGNITE-20575
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19226) Fetch table schema by timestamp

2023-10-12 Thread Kirill Tkalenko (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774474#comment-17774474
 ] 

Kirill Tkalenko commented on IGNITE-19226:
--

Looks good.

> Fetch table schema by timestamp
> ---
>
> Key: IGNITE-19226
> URL: https://issues.apache.org/jira/browse/IGNITE-19226
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: iep-98, ignite-3
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Currently, when obtaining a schema, its latest (from the local point of view) 
> version is returned.
>  # Table schema must always be obtained using a timestamp
>  # This might require a wait (until MetaStorage's SafeTime >= schemaTs-DD, 
> see 
> [https://cwiki.apache.org/confluence/display/IGNITE/IEP-98%3A+Schema+Synchronization#IEP98:SchemaSynchronization-Waitingforsafetimeinthepast]
>  )
> This includes the mechanisms that allow clients to obtain the 'current' schema 
> (like 'DESCRIBE ', 'list tables', etc.).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-12622) Forbid mixed cache groups with both atomic and transactional caches

2023-10-12 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12622:
--
Description: 
Apparently it's possible in Ignite to configure a cache group with both ATOMIC 
and TRANSACTIONAL caches.
See the IgniteCacheGroupsTest#testContinuousQueriesMultipleGroups* tests.
As discussed on the dev list 
(http://apache-ignite-developers.2346864.n4.nabble.com/Forbid-mixed-cache-groups-with-both-atomic-and-transactional-caches-td45586.html),
 the community has concluded that such configurations should be prohibited.

Forbidden at IGNITE-20575 
IGNITE-20623 should be fixed prior to this fix.

  was:
Apparently it's possible in Ignite to configure a cache group with both ATOMIC 
and TRANSACTIONAL caches.
See the IgniteCacheGroupsTest#testContinuousQueriesMultipleGroups* tests.
As discussed on the dev list 
(http://apache-ignite-developers.2346864.n4.nabble.com/Forbid-mixed-cache-groups-with-both-atomic-and-transactional-caches-td45586.html),
 the community has concluded that such configurations should be prohibited.

Forbidden at IGNITE-20575 



> Forbid mixed cache groups with both atomic and transactional caches
> ---
>
> Key: IGNITE-12622
> URL: https://issues.apache.org/jira/browse/IGNITE-12622
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Ivan Rakov
>Priority: Major
>  Labels: IEP-80, newbie
>
> Apparently it's possible in Ignite to configure a cache group with both 
> ATOMIC and TRANSACTIONAL caches.
> See the IgniteCacheGroupsTest#testContinuousQueriesMultipleGroups* tests.
> As discussed on the dev list 
> (http://apache-ignite-developers.2346864.n4.nabble.com/Forbid-mixed-cache-groups-with-both-atomic-and-transactional-caches-td45586.html),
>  the community has concluded that such configurations should be prohibited.
> Forbidden at IGNITE-20575 
> IGNITE-20623 should be fixed prior to this fix.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-12622) Forbid mixed cache groups with both atomic and transactional caches

2023-10-12 Thread Anton Vinogradov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-12622:
--
Description: 
Apparently it's possible in Ignite to configure a cache group with both ATOMIC 
and TRANSACTIONAL caches.
See the IgniteCacheGroupsTest#testContinuousQueriesMultipleGroups* tests.
As discussed on the dev list 
(http://apache-ignite-developers.2346864.n4.nabble.com/Forbid-mixed-cache-groups-with-both-atomic-and-transactional-caches-td45586.html),
 the community has concluded that such configurations should be prohibited.

Forbidden at IGNITE-20575 


  was:
Apparently it's possible in Ignite to configure a cache group with both ATOMIC 
and TRANSACTIONAL caches.
See the IgniteCacheGroupsTest#testContinuousQueriesMultipleGroups* tests.
As discussed on the dev list 
(http://apache-ignite-developers.2346864.n4.nabble.com/Forbid-mixed-cache-groups-with-both-atomic-and-transactional-caches-td45586.html),
 the community has concluded that such configurations should be prohibited.


> Forbid mixed cache groups with both atomic and transactional caches
> ---
>
> Key: IGNITE-12622
> URL: https://issues.apache.org/jira/browse/IGNITE-12622
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache
>Reporter: Ivan Rakov
>Priority: Major
>  Labels: IEP-80, newbie
>
> Apparently it's possible in Ignite to configure a cache group with both 
> ATOMIC and TRANSACTIONAL caches.
> See the IgniteCacheGroupsTest#testContinuousQueriesMultipleGroups* tests.
> As discussed on the dev list 
> (http://apache-ignite-developers.2346864.n4.nabble.com/Forbid-mixed-cache-groups-with-both-atomic-and-transactional-caches-td45586.html),
>  the community has concluded that such configurations should be prohibited.
> Forbidden at IGNITE-20575 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20633) Remove QueryStartRequest#schemaVersion()

2023-10-12 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-20633:
--

 Summary: Remove QueryStartRequest#schemaVersion()
 Key: IGNITE-20633
 URL: https://issues.apache.org/jira/browse/IGNITE-20633
 Project: Ignite
  Issue Type: Improvement
Reporter: Roman Puchkovskiy
 Fix For: 3.0.0-beta2


According to the javadoc of this method, it should be removed. This needs to be 
clarified.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20454) Sql. Extend SQL cursor with ability to check if first page is ready

2023-10-12 Thread Pavel Pereslegin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774465#comment-17774465
 ] 

Pavel Pereslegin commented on IGNITE-20454:
---

[~korlov], [~mzhuravkov], 
please review the proposed changes.

> Sql. Extend SQL cursor with ability to check if first page is ready
> ---
>
> Key: IGNITE-20454
> URL: https://issues.apache.org/jira/browse/IGNITE-20454
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> For multi-statement queries, in order to advance to the next statement we 
> have to make sure that the first page of the result for the current statement 
> is ready to be served. This allows us not to depend on the user and to finish 
> the script even if no one consumes the results.
> Definition of done: there is an API available from within 
> {{SqlQueryProcessor}} that allows being notified about the completion of 
> prefetch ({{AsyncRootNode#prefetch}}).
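
One possible shape for such an API (names follow the ticket, but the signatures 
are assumptions):

{code:java}
import java.util.concurrent.CompletableFuture;

class PrefetchAwareCursor<T> {
    private final CompletableFuture<Void> firstPageReady = new CompletableFuture<>();

    /** Completed once the first page of the result set has been buffered. */
    CompletableFuture<Void> onFirstPageReady() {
        return firstPageReady;
    }

    // Called by the execution tree (e.g. from AsyncRootNode#prefetch) when
    // enough rows for the first page are buffered, or the statement has failed.
    void notifyPrefetchDone(Throwable error) {
        if (error == null) {
            firstPageReady.complete(null);
        } else {
            firstPageReady.completeExceptionally(error);
        }
    }
}
// A script runner can then advance without waiting for a consumer:
// cursor.onFirstPageReady().thenRun(() -> executeNextStatement());
{code}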



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20603) Restore topologyAugmentationMap on a node restart

2023-10-12 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20603:
-
Description: 
h3. *Motivation*
It is possible that some events were propagated to {{ms.logicalTopology}}, but a 
restart happened while we were updating topologyAugmentationMap in 
{{DistributionZoneManager#createMetastorageTopologyListener}}. That means that an 
augmentation that must be added to {{zone.topologyAugmentationMap}} wasn't 
added, and we need to recover this information.

h3. *Definition of done*
On a node restart, topologyAugmentationMap must be correctly restored according 
to {{ms.logicalTopology}} state.


h3. *Implementation notes*


For every zone, compare {{MS.local.logicalTopology.revision}} with 
max(maxScUpFromMap, maxScDownFromMap). If {{logicalTopology.revision}} is 
greater, that means that some topology changes had not been propagated to 
topologyAugmentationMap before the restart and the appropriate timers had not 
been scheduled.

To fill the gap in topologyAugmentationMap, compare {{MS.local.logicalTopology}} 
with {{lastSeenLogicalTopology}} and enhance topologyAugmentationMap with the 
nodes that did not have time to be propagated to it before the restart. 
{{lastSeenTopology}} is calculated in the following way: we read 
{{MS.local.dataNodes}}, take max(scaleUpTriggerKey, scaleDownTriggerKey), and 
retrieve all additions and removals of nodes from the topologyAugmentationMap 
using max(scaleUpTriggerKey, scaleDownTriggerKey) as the left bound. After 
that, we apply these changes to the map of node counters from 
{{MS.local.dataNodes}} and take only the nodes with positive counters. This is 
the lastSeenTopology.

Comparing the lastSeenTopology with {{MS.local.logicalTopology}} tells us which 
node additions and removals had not been propagated to topologyAugmentationMap 
before the restart. We take these differences and add them to the 
topologyAugmentationMap, using {{MS.local.logicalTopology.revision}} as the 
revision (the key for topologyAugmentationMap). It is safe to take this 
revision, because if some node was added to the {{ms.topology}} after an 
immediate data nodes recalculation, this added node must restore this immediate 
data nodes recalculation intent. A condensed sketch of this computation is 
given below.
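
A condensed, runnable sketch of that computation (the data shapes, node -> 
counter and revision -> delta, are simplified stand-ins, not the real classes):

{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.NavigableMap;
import java.util.Set;

class TopologyRecoverySketch {
    record Delta(Set<String> addedNodes, Set<String> removedNodes) {}

    static Set<String> lastSeenTopology(
            Map<String, Integer> dataNodesCounters,            // MS.local.dataNodes
            NavigableMap<Long, Delta> topologyAugmentationMap, // revision -> delta
            long maxTriggerRevision) {  // max(scaleUpTriggerKey, scaleDownTriggerKey)
        Map<String, Integer> counters = new HashMap<>(dataNodesCounters);

        // Re-apply every augmentation recorded after the last applied trigger.
        topologyAugmentationMap.tailMap(maxTriggerRevision, false).values().forEach(delta -> {
            delta.addedNodes().forEach(node -> counters.merge(node, 1, Integer::sum));
            delta.removedNodes().forEach(node -> counters.merge(node, -1, Integer::sum));
        });

        // Nodes with a positive counter form the topology the zone last saw;
        // diffing this set against MS.local.logicalTopology yields the entries
        // to add back to topologyAugmentationMap at logicalTopology.revision.
        Set<String> result = new HashSet<>();
        counters.forEach((node, counter) -> {
            if (counter > 0) {
                result.add(node);
            }
        });
        return result;
    }
}
{code}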



  was:
h3. *Motivation*
It is possible that some events were propagated to {{ms.logicalTopology}}, but a 
restart happened while we were updating topologyAugmentationMap in 
{{DistributionZoneManager#createMetastorageTopologyListener}}. That means that an 
augmentation that must be added to {{zone.topologyAugmentationMap}} wasn't 
added, and we need to recover this information.

h3. *Definition of done*
On a node restart, topologyAugmentationMap must be correctly restored according 
to {{ms.logicalTopology}} state.


h3. *Implementation notes*

For every zone, compare {{MS.local.logicalTopology.revision}} with 
max(maxScUpFromMap, maxScDownFromMap). If {{logicalTopology.revision}} is 
greater, that means that some topology changes had not been propagated to 
topologyAugmentationMap before the restart and the appropriate timers had not 
been scheduled.

To fill the gap in topologyAugmentationMap, compare {{MS.local.logicalTopology}} 
with {{lastSeenLogicalTopology}} and enhance topologyAugmentationMap with the 
nodes that did not have time to be propagated to it before the restart. 
{{lastSeenTopology}} is calculated in the following way: we read 
{{MS.local.dataNodes}}, take max(scaleUpTriggerKey, scaleDownTriggerKey), and 
retrieve all additions and removals of nodes from the topologyAugmentationMap 
using max(scaleUpTriggerKey, scaleDownTriggerKey) as the left bound. After 
that, we apply these changes to the map of node counters from 
{{MS.local.dataNodes}} and take only the nodes with positive counters. This is 
the lastSeenTopology.

Comparing the lastSeenTopology with {{MS.local.logicalTopology}} tells us which 
node additions and removals had not been propagated to topologyAugmentationMap 
before the restart. We take these differences and add them to the 
topologyAugmentationMap, using {{MS.local.logicalTopology.revision}} as the 
revision (the key for topologyAugmentationMap). It is safe to take this 
revision, because if some node was added to the {{ms.topology}} after an 
immediate data nodes recalculation, this added node must restore this immediate 
data nodes recalculation intent.




> Restore topologyAugmentationMap on a node restart
> -
>
> Key: IGNITE-20603
> URL: https://issues.apache.org/jira/browse/IGNITE-20603
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> It is possible that some events were propagated to 

[jira] [Commented] (IGNITE-20575) Forbid mixed cache groups with both atomic and transactional caches (with system property able to allow)

2023-10-12 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774424#comment-17774424
 ] 

Ignite TC Bot commented on IGNITE-20575:


{panel:title=Branch: [pull/10976/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10976/head] Base: [master] : New Tests 
(2)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Cache 3{color} [[tests 
2|https://ci2.ignite.apache.org/viewLog.html?buildId=7374317]]
* {color:#013220}IgniteBinaryObjectsCacheTestSuite3: 
IgniteCacheGroupsTest.mixedCacheGroupsForbiddenTest - PASSED{color}
* {color:#013220}IgniteBinaryObjectsCacheTestSuite3: 
IgniteCacheGroupsTest.mixedCacheGroupsAllowedTest - PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7374414&buildTypeId=IgniteTests24Java8_RunAll]

> Forbid mixed cache groups with both atomic and transactional caches (with 
> system property able to allow)
> 
>
> Key: IGNITE-20575
> URL: https://issues.apache.org/jira/browse/IGNITE-20575
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20632) Fix tests in Apache Ignite 3 after init script with default zone will be introduced

2023-10-12 Thread Mirza Aliev (Jira)
Mirza Aliev created IGNITE-20632:


 Summary: Fix tests in Apache Ignite 3 after init script with 
default zone will be introduced 
 Key: IGNITE-20632
 URL: https://issues.apache.org/jira/browse/IGNITE-20632
 Project: Ignite
  Issue Type: Improvement
Reporter: Mirza Aliev


h3. *Motivation*

Once https://issues.apache.org/jira/browse/IGNITE-20631 is implemented, tests 
that use the default zone will start to fail, and we need to provide a way to 
use an init script with a default zone setting in those tests 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (IGNITE-20575) Forbid mixed cache groups with both atomic and transactional caches (with system property able to allow)

2023-10-12 Thread Anton Vinogradov (Jira)


[ https://issues.apache.org/jira/browse/IGNITE-20575 ]


Anton Vinogradov deleted comment on IGNITE-20575:
---

was (Author: ignitetcbot):
{panel:title=Branch: [pull/10976/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10976/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7372964&buildTypeId=IgniteTests24Java8_RunAll]

> Forbid mixed cache groups with both atomic and transactional caches (with 
> system property able to allow)
> 
>
> Key: IGNITE-20575
> URL: https://issues.apache.org/jira/browse/IGNITE-20575
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20631) Provide init script for creating default zone

2023-10-12 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20631:
-
Description: 
h3. *Motivation*
In https://issues.apache.org/jira/browse/IGNITE-20613 we provide a new way to 
work with the default zone. There won't be any predefined default zone; it will 
be possible to set an already created zone as the default one. In this task we 
need to provide a way to supply an initial script for the cluster, which will 
be applied during the cluster init phase and will set the default zone.

h3. *Implementation notes*

We need to check the packaging; it seems we already have something similar
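
For illustration only, such an init script might look like the following sketch 
(the zone name, the options, and the {{ALTER ZONE ... SET DEFAULT}} syntax are 
assumptions until IGNITE-20613 settles the grammar):

{code:sql}
-- Hypothetical init script: create a zone and mark it as the cluster default.
CREATE ZONE IF NOT EXISTS DEFAULT_ZONE WITH PARTITIONS=25, REPLICAS=1;
ALTER ZONE DEFAULT_ZONE SET DEFAULT;
{code}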

> Provide init script for creating default zone 
> --
>
> Key: IGNITE-20631
> URL: https://issues.apache.org/jira/browse/IGNITE-20631
> Project: Ignite
>  Issue Type: Improvement
> Environment: h3. *Motivation*
> In https://issues.apache.org/jira/browse/IGNITE-20613 we provide a new way to 
> work with the default zone. There won't be any predefined default zone; it 
> will be possible to set an already created zone as the default one. In this 
> task we need to provide a way to supply an initial script for the cluster, 
> which will be applied during the cluster init phase and will set the default 
> zone.
> h3. *Implementation notes*
> We need to check the packaging; it seems we already have something similar
>Reporter: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> In https://issues.apache.org/jira/browse/IGNITE-20613 we provide a new way to 
> work with the default zone. There won't be any predefined default zone; it 
> will be possible to set an already created zone as the default one. In this 
> task we need to provide a way to supply an initial script for the cluster, 
> which will be applied during the cluster init phase and will set the default 
> zone.
> h3. *Implementation notes*
> We need to check the packaging; it seems we already have something similar



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20631) Provide init script for creating default zone

2023-10-12 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20631:
-
Environment: (was: h3. *Motivation*
In https://issues.apache.org/jira/browse/IGNITE-20613 we provide a new way to 
work with the default zone. There won't be any predefined default zone; it will 
be possible to set an already created zone as the default one. In this task we 
need to provide a way to supply an initial script for the cluster, which will 
be applied during the cluster init phase and will set the default zone.

h3. *Implementation notes*

We need to check the packaging; it seems we already have something similar)

> Provide init script for creating default zone 
> --
>
> Key: IGNITE-20631
> URL: https://issues.apache.org/jira/browse/IGNITE-20631
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> h3. *Motivation*
> In https://issues.apache.org/jira/browse/IGNITE-20613 we provide a new way to 
> work with the default zone. There won't be any predefined default zone; it 
> will be possible to set an already created zone as the default one. In this 
> task we need to provide a way to supply an initial script for the cluster, 
> which will be applied during the cluster init phase and will set the default 
> zone.
> h3. *Implementation notes*
> We need to check the packaging; it seems we already have something similar



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20631) Provide init script for creating default zone

2023-10-12 Thread Mirza Aliev (Jira)
Mirza Aliev created IGNITE-20631:


 Summary: Provide init script for creating default zone 
 Key: IGNITE-20631
 URL: https://issues.apache.org/jira/browse/IGNITE-20631
 Project: Ignite
  Issue Type: Improvement
 Environment: h3. *Motivation*
In https://issues.apache.org/jira/browse/IGNITE-20613 we provide a new way to 
work with the default zone. There won't be any predefined default zone; it will 
be possible to set an already created zone as the default one. In this task we 
need to provide a way to supply an initial script for the cluster, which will 
be applied during the cluster init phase and will set the default zone.

h3. *Implementation notes*

We need to check the packaging; it seems we already have something similar
Reporter: Mirza Aliev






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20620) Add index availability command to catalog

2023-10-12 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774407#comment-17774407
 ] 

Roman Puchkovskiy commented on IGNITE-20620:


The patch looks good to me.

> Add index availability command to catalog
> -
>
> Key: IGNITE-20620
> URL: https://issues.apache.org/jira/browse/IGNITE-20620
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> We need to add a command to the catalog that will change the index state 
> from write-only to read-write (available), and also an event for the state 
> change.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20442) Sql. Extend grammar with transaction related statements.

2023-10-12 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky updated IGNITE-20442:

Fix Version/s: 3.0.0-beta2

> Sql. Extend grammar with transaction related statements.
> 
>
> Key: IGNITE-20442
> URL: https://issues.apache.org/jira/browse/IGNITE-20442
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Pavel Pereslegin
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> In order to process multistatement queries we need to support the following 
> SQL grammar to start/finish transactions.
> {code}
> <start transaction statement> ::=
> START TRANSACTION [ <transaction access mode> ]
> <transaction access mode> ::= READ ONLY | READ WRITE
> {code}
> {code}
> <commit statement> ::=
> COMMIT
> {code}
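
As an illustration, a statement sequence this grammar is meant to accept (the 
table is hypothetical, shown for demonstration only):

{code:sql}
START TRANSACTION READ WRITE;
INSERT INTO accounts(id, balance) VALUES (1, 100);
COMMIT;
{code}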



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20629) Exclude odbc build from common compilation

2023-10-12 Thread Mikhail Pochatkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pochatkin reassigned IGNITE-20629:
--

Assignee: Mikhail Pochatkin

> Exclude odbc build from common compilation 
> ---
>
> Key: IGNITE-20629
> URL: https://issues.apache.org/jira/browse/IGNITE-20629
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mikhail Pochatkin
>Assignee: Mikhail Pochatkin
>Priority: Major
>  Labels: ignite-3
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20630) Fix NPE in download deployment unit method

2023-10-12 Thread Mikhail Pochatkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pochatkin reassigned IGNITE-20630:
--

Assignee: Mikhail Pochatkin

> Fix NPE in download deployment unit method
> --
>
> Key: IGNITE-20630
> URL: https://issues.apache.org/jira/browse/IGNITE-20630
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Mikhail Pochatkin
>Assignee: Mikhail Pochatkin
>Priority: Major
>  Labels: ignite-3
>
> {code:java}
> java.lang.AssertionError: java.util.concurrent.ExecutionException: 
> java.lang.NullPointerException
> at 
> org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:78)
> at 
> org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:35)
> at org.hamcrest.TypeSafeMatcher.matches(TypeSafeMatcher.java:67)
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:10)
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
> at 
> org.apache.ignite.internal.deployment.ItDeploymentUnitTest.testAbaValidation(ItDeploymentUnitTest.java:268)
> …
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.NullPointerException
> at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
> at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2022)
> at 
> org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:74)
> ... 92 more
> Caused by: java.lang.NullPointerException
> at 
> org.apache.ignite.network.DefaultMessagingService.invoke0(DefaultMessagingService.java:257)
> at 
> org.apache.ignite.network.DefaultMessagingService.invoke(DefaultMessagingService.java:174)
> at 
> org.apache.ignite.internal.deployunit.DeployMessagingService.downloadUnitContent(DeployMessagingService.java:116)
> at 
> org.apache.ignite.internal.deployunit.DeploymentManagerImpl.lambda$onDemandDeploy$20(DeploymentManagerImpl.java:361)
> at 
> java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
> at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
> at 
> org.apache.ignite.internal.util.subscription.AccumulatorSubscriber.onComplete(AccumulatorSubscriber.java:65)
> at 
> org.apache.ignite.internal.metastorage.impl.CursorSubscription.processRequest(CursorSubscription.java:137)
> at 
> org.apache.ignite.internal.metastorage.impl.CursorSubscription.lambda$requestNextBatch$0(CursorSubscription.java:159)
> at 
> java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at 
> java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at 
> java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20630) Fix NPE in download deployment unit method

2023-10-12 Thread Mikhail Pochatkin (Jira)
Mikhail Pochatkin created IGNITE-20630:
--

 Summary: Fix NPE in download deployment unit method
 Key: IGNITE-20630
 URL: https://issues.apache.org/jira/browse/IGNITE-20630
 Project: Ignite
  Issue Type: Improvement
Reporter: Mikhail Pochatkin


java.lang.AssertionError: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
  at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:78)
  at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:35)
  at org.hamcrest.TypeSafeMatcher.matches(TypeSafeMatcher.java:67)
  at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:10)
  at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
  at 
org.apache.ignite.internal.deployment.ItDeploymentUnitTest.testAbaValidation(ItDeploymentUnitTest.java:268)

…

Caused by: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
  at 
java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
  at 
java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2022)
  at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:74)
  ... 92 more
Caused by: java.lang.NullPointerException
  at 
org.apache.ignite.network.DefaultMessagingService.invoke0(DefaultMessagingService.java:257)
  at 
org.apache.ignite.network.DefaultMessagingService.invoke(DefaultMessagingService.java:174)
  at 
org.apache.ignite.internal.deployunit.DeployMessagingService.downloadUnitContent(DeployMessagingService.java:116)
  at 
org.apache.ignite.internal.deployunit.DeploymentManagerImpl.lambda$onDemandDeploy$20(DeploymentManagerImpl.java:361)
  at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
  at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
  at 
java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
  at 
org.apache.ignite.internal.util.subscription.AccumulatorSubscriber.onComplete(AccumulatorSubscriber.java:65)
  at 
org.apache.ignite.internal.metastorage.impl.CursorSubscription.processRequest(CursorSubscription.java:137)
  at 
org.apache.ignite.internal.metastorage.impl.CursorSubscription.lambda$requestNextBatch$0(CursorSubscription.java:159)
  at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
  at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
  at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:834)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20630) Fix NPE in download deployment unit method

2023-10-12 Thread Mikhail Pochatkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pochatkin updated IGNITE-20630:
---
Description: 
{code:java}
java.lang.AssertionError: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:78)
at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:35)
at org.hamcrest.TypeSafeMatcher.matches(TypeSafeMatcher.java:67)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:10)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
at 
org.apache.ignite.internal.deployment.ItDeploymentUnitTest.testAbaValidation(ItDeploymentUnitTest.java:268)
…
Caused by: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
at 
java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at 
java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2022)
at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:74)
... 92 more
Caused by: java.lang.NullPointerException
at 
org.apache.ignite.network.DefaultMessagingService.invoke0(DefaultMessagingService.java:257)
at 
org.apache.ignite.network.DefaultMessagingService.invoke(DefaultMessagingService.java:174)
at 
org.apache.ignite.internal.deployunit.DeployMessagingService.downloadUnitContent(DeployMessagingService.java:116)
at 
org.apache.ignite.internal.deployunit.DeploymentManagerImpl.lambda$onDemandDeploy$20(DeploymentManagerImpl.java:361)
at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at 
java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at 
org.apache.ignite.internal.util.subscription.AccumulatorSubscriber.onComplete(AccumulatorSubscriber.java:65)
at 
org.apache.ignite.internal.metastorage.impl.CursorSubscription.processRequest(CursorSubscription.java:137)
at 
org.apache.ignite.internal.metastorage.impl.CursorSubscription.lambda$requestNextBatch$0(CursorSubscription.java:159)
at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834) {code}

  was:
java.lang.AssertionError: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
  at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:78)
  at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:35)
  at org.hamcrest.TypeSafeMatcher.matches(TypeSafeMatcher.java:67)
  at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:10)
  at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
  at 
org.apache.ignite.internal.deployment.ItDeploymentUnitTest.testAbaValidation(ItDeploymentUnitTest.java:268)

…

Caused by: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
  at 
java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
  at 
java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2022)
  at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:74)
  ... 92 more
Caused by: java.lang.NullPointerException
  at 
org.apache.ignite.network.DefaultMessagingService.invoke0(DefaultMessagingService.java:257)
  at 
org.apache.ignite.network.DefaultMessagingService.invoke(DefaultMessagingService.java:174)
  at 
org.apache.ignite.internal.deployunit.DeployMessagingService.downloadUnitContent(DeployMessagingService.java:116)
  at 
org.apache.ignite.internal.deployunit.DeploymentManagerImpl.lambda$onDemandDeploy$20(DeploymentManagerImpl.java:361)
  at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
  at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
  at 
java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
  at 
org.apache.ignite.internal.util.subscription.AccumulatorSubscriber.onComplete(AccumulatorSubscriber.java:65)
  at 

[jira] [Created] (IGNITE-20629) Exclude odbc build from common compilation

2023-10-12 Thread Mikhail Pochatkin (Jira)
Mikhail Pochatkin created IGNITE-20629:
--

 Summary: Exclude odbc build from common compilation 
 Key: IGNITE-20629
 URL: https://issues.apache.org/jira/browse/IGNITE-20629
 Project: Ignite
  Issue Type: Improvement
Reporter: Mikhail Pochatkin






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20625) SELECT MIN(column), MAX(column) by ODBC throws exception

2023-10-12 Thread Igor Sapego (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego reassigned IGNITE-20625:


Assignee: Igor Sapego

> SELECT MIN(column), MAX(column) by ODBC throws exception
> 
>
> Key: IGNITE-20625
> URL: https://issues.apache.org/jira/browse/IGNITE-20625
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 3.0.0-beta2
>Reporter: Igor
>Assignee: Igor Sapego
>Priority: Major
>  Labels: ignite-3
>
> h3. Steps to reproduce:
>  # Connect to Ignite using ODBC driver (Python).
>  # Execute separate queries one by one 
> {code:java}
> DROP TABLE IF EXISTS PUBLIC.PARKING;
> CREATE TABLE PUBLIC.PARKING(ID INT, NAME VARCHAR(255), CAPACITY INT NOT NULL, 
> b decimal,c date, CITY VARCHAR(20), PRIMARY KEY (ID, CITY));
> INSERT INTO PUBLIC.PARKING(ID, NAME, CAPACITY, CITY) VALUES(1, 'parking_1', 
> 1, 'New York');
> SELECT MIN(CAPACITY), MAX(CAPACITY) FROM PUBLIC.PARKING; {code}
> h3. Expected result:
> Query executed successfully.
> h3. Actual result:
> The last query throws an exception.
> {code:java}
> The value in stream is not a Binary data : 5{code}
> No errors in server log.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20445) Clean up write intents for RW transaction on replication group nodes

2023-10-12 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IGNITE-20445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

 Kirill Sizov updated IGNITE-20445:
---
Fix Version/s: 3.0.0-beta2

> Clean up write intents for RW transaction on replication group nodes
> 
>
> Key: IGNITE-20445
> URL: https://issues.apache.org/jira/browse/IGNITE-20445
> Project: Ignite
>  Issue Type: Task
>Reporter:  Kirill Sizov
>Assignee:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> If a transaction was committed/aborted and for any reason the cleanup 
> operation was not performed on a node, the write intent entries would still 
> be present in the storage.
> When an RO transaction sees write intents, no matter on primary or on any 
> other node, it performs write intent resolution and returns the correct 
> result. 
> When an RW transaction sees write intents, we can get an exception.
> _Imagine the case where a finished transaction left its write intents in the 
> storage of all nodes A, B and C. A is primary._
> _An RO transaction is executed on the primary A, it kicks off an async 
> cleanup (IGNITE-20041)._ 
> _The cleanup is a local task (not distributed to the replication group), thus 
> only the A's storage is cleaned. B and C storages still contain the same 
> write intent._
> _Now an RW transaction starts. It sees no write intents on A, executes its 
> action and the action is replicated to B and C. Execution of this task on B 
> and C will result in a storage exception since it's not allowed to have more 
> than one write intent per row._
> *Definition of Done*
> The nodes of the replication group should perform cleanup of their storages 
> when they receive an UpdateCommand, before adding new write intents.
> *Implementation details*
> We can extend the update command with the timestamp of the latest commit on 
> primary. 
> If the nodes of the replication group see a write intent in their storage, 
> they will:
>  * +commit+ the write intent if the UpdateCommand's latestCommitTimestamp is 
> greater than the commit timestamp of the latest committed entry.
>  * +abort+ the write intent if the UpdateCommand's latestCommitTimestamp is 
> equal to the commit timestamp of the latest committed entry.
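
A minimal sketch of this resolution rule, assuming simplified stand-in types 
({{Storage}}, {{WriteIntent}} and plain {{long}} timestamps are illustrative, 
not the actual Ignite internals):

{code:java}
import java.util.UUID;

/** Stand-in for the partition storage API (illustrative only). */
interface Storage {
    WriteIntent writeIntent(UUID rowId);               // pending intent, or null
    long latestCommitTimestamp(UUID rowId);            // ts of the latest committed version
    void commitWriteIntent(UUID rowId, long commitTs); // promote the intent to a committed version
    void abortWriteIntent(UUID rowId);                 // drop the intent
}

record WriteIntent(UUID txId) {}

final class WriteIntentResolver {
    /** Applies the rule above before an UpdateCommand writes a new intent. */
    static void resolve(Storage storage, UUID rowId, long latestCommitTsOnPrimary) {
        if (storage.writeIntent(rowId) == null) {
            return; // nothing left over; the new intent can be written
        }

        long committedTs = storage.latestCommitTimestamp(rowId);

        if (latestCommitTsOnPrimary > committedTs) {
            // The primary saw a later commit: the leftover intent belongs to it.
            storage.commitWriteIntent(rowId, latestCommitTsOnPrimary);
        } else if (latestCommitTsOnPrimary == committedTs) {
            // The latest commit is already reflected here: the intent was aborted.
            storage.abortWriteIntent(rowId);
        }
    }
}
{code}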



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20484) NPE when some operation occurs when the primary replica is changing

2023-10-12 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20484:
-
Fix Version/s: 3.0.0-beta2

> NPE when some operation occurs when the primary replica is changing
> ---
>
> Key: IGNITE-20484
> URL: https://issues.apache.org/jira/browse/IGNITE-20484
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *Motivation*
> It can happen that when the request is created, the primary replica is on 
> this node, but by the time the request is executed on the replica, the node 
> has already lost that role.
> {noformat}
> [2023-09-25T11:03:24,408][WARN 
> ][%iprct_tpclh_2%metastorage-watch-executor-2][ReplicaManager] Failed to 
> process replica request [request=ReadWriteSingleRowReplicaRequestImpl 
> [binaryRowMessage=BinaryRowMessageImpl 
> [binaryTuple=java.nio.HeapByteBuffer[pos=0 lim=9 cap=9], schemaVersion=1], 
> commitPartitionId=TablePartitionIdMessageImpl [partitionId=0, tableId=4], 
> full=true, groupId=4_part_0, requestType=RW_UPSERT, term=24742070009862, 
> timestampLong=24742430588928, 
> transactionId=018acb5d-4e54-0006--705db0b1]]
>  java.util.concurrent.CompletionException: java.lang.NullPointerException
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
>  ~[?:?]
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
>  ~[?:?]
> at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1081)
>  ~[?:?]
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>  ~[?:?]
> at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073) 
> ~[?:?]
> at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.lambda$completeWaitersOnUpdate$0(PendingComparableValuesTracker.java:169)
>  ~[main/:?]
> at java.util.concurrent.ConcurrentMap.forEach(ConcurrentMap.java:122) 
> ~[?:?]
> at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.completeWaitersOnUpdate(PendingComparableValuesTracker.java:169)
>  ~[main/:?]
> at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.update(PendingComparableValuesTracker.java:103)
>  ~[main/:?]
> at 
> org.apache.ignite.internal.metastorage.server.time.ClusterTimeImpl.updateSafeTime(ClusterTimeImpl.java:146)
>  ~[main/:?]
> at 
> org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl.onSafeTimeAdvanced(MetaStorageManagerImpl.java:849)
>  ~[main/:?]
> at 
> org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl$1.onSafeTimeAdvanced(MetaStorageManagerImpl.java:456)
>  ~[main/:?]
> at 
> org.apache.ignite.internal.metastorage.server.WatchProcessor.lambda$advanceSafeTime$7(WatchProcessor.java:269)
>  ~[main/:?]
> at 
> java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
>  [?:?]
> at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
> at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: java.lang.NullPointerException
> at 
> org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$ensureReplicaIsPrimary$161(PartitionReplicaListener.java:2415)
>  ~[main/:?]
> at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
>  ~[?:?]
> ... 15 more
> {noformat}
> *Definition of done*
> In this case, we should throw the correct exception, because the request 
> cannot be handled by this replica anymore, and the corresponding transaction 
> will be rolled back.
> *Implementation notes*
> Do not forget to check all places where the issue is mentioned (especially in 
> TODO section).
> As discussed with [~sanpwc]:
> This exception is likely to be thrown when 
> - we successfully get a primary replica on one node
> - send a message and the message is slightly slow to be delivered
> - we handle the received message on the recipient node and run 
> {{placementDriver.getPrimaryReplica}}. 
> If the previous lease has expired by the time we handle the message, the call 
> to {{placementDriver}} will result in a {{null}} value instead of a 
> {{ReplicaMeta}} instance. Hence the NPE.
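
A minimal sketch of the intended null handling (simplified stand-in types; the 
real check lives in {{PartitionReplicaListener#ensureReplicaIsPrimary}}, and 
the exception name below is an assumption):

{code:java}
import java.util.concurrent.CompletableFuture;

// Simplified stand-ins for internal Ignite types (illustrative only).
record ReplicaMeta(String leaseholder) {}

class PrimaryReplicaMissException extends RuntimeException {
    PrimaryReplicaMissException(String groupId) {
        super("Primary replica lease expired for group " + groupId);
    }
}

final class EnsureReplicaIsPrimary {
    /**
     * Fails with a dedicated exception instead of an NPE when the lease expired
     * (the placement driver returned null) while the request was in flight.
     */
    static CompletableFuture<ReplicaMeta> ensurePrimary(
            CompletableFuture<ReplicaMeta> primaryReplicaFut, String localNodeName, String groupId) {
        return primaryReplicaFut.thenCompose(meta -> {
            if (meta == null || !localNodeName.equals(meta.leaseholder())) {
                return CompletableFuture.failedFuture(new PrimaryReplicaMissException(groupId));
            }
            return CompletableFuture.completedFuture(meta);
        });
    }
}
{code}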



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20628) testDropColumn and testMergeChangesAddDropAdd in ItSchemaChangeKvViewTest are disabled

2023-10-12 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-20628:
--

 Summary: testDropColumn and testMergeChangesAddDropAdd in 
ItSchemaChangeKvViewTest are disabled
 Key: IGNITE-20628
 URL: https://issues.apache.org/jira/browse/IGNITE-20628
 Project: Ignite
  Issue Type: Bug
Reporter: Roman Puchkovskiy
 Fix For: 3.0.0-beta2


It was supposed that IGNITE-17931 was the culprit, but even after removing the 
blocking code the tests are still flaky.

The tests fail with one of 3 symptoms:
 # An NPE happens in the test method code: a value that was put earlier for a 
key is not found when reading with the same key. This is probably caused by a 
transactional protocol implementation bug, maybe this one: IGNITE-20116
 # A PrimaryReplicaAwaitTimeoutException
 # A ReplicationTimeoutException

Items 2 and 3 need to be investigated.
h2. A stacktrace for 1

java.lang.NullPointerException
    at 
org.apache.ignite.internal.runner.app.ItSchemaChangeKvViewTest.testDropColumn(ItSchemaChangeKvViewTest.java:58)
h2. A stacktrace for 2
org.apache.ignite.tx.TransactionException: IGN-PLACEMENTDRIVER-1 
TraceId:0a32c369-b9ca-4091-b8de-af15d65a1f52 Failed to get the primary replica 
[tablePartitionId=3_part_5, awaitTimestamp=HybridTimestamp 
[time=111220884095959043, physical=1697096009765, logical=3]]
 
at 
org.apache.ignite.internal.util.ExceptionUtils.lambda$withCause$1(ExceptionUtils.java:400)
at 
org.apache.ignite.internal.util.ExceptionUtils.withCauseInternal(ExceptionUtils.java:461)
at 
org.apache.ignite.internal.util.ExceptionUtils.withCause(ExceptionUtils.java:400)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.lambda$enlist$71(InternalTableImpl.java:1659)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
at 
java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907)
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
at 
java.base/java.util.concurrent.CompletableFuture$Timeout.run(CompletableFuture.java:2792)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.util.concurrent.CompletionException: 
org.apache.ignite.internal.placementdriver.PrimaryReplicaAwaitTimeoutException: 
IGN-PLACEMENTDRIVER-1 TraceId:0a32c369-b9ca-4091-b8de-af15d65a1f52 The primary 
replica await timed out [replicationGroupId=3_part_5, 
referenceTimestamp=HybridTimestamp [time=111220884095959043, 
physical=1697096009765, logical=3], currentLease=Lease 
[leaseholder=isckvt_tmcada_3346, accepted=false, startTime=HybridTimestamp 
[time=111220884127809550, physical=1697096010251, logical=14], 
expirationTime=HybridTimestamp [time=111220891992129536, 
physical=1697096130251, logical=0], prolongable=false, 
replicationGroupId=3_part_5]]
at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
at 
java.base/java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:990)
at 
java.base/java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:970)
... 9 more
Caused by: 
org.apache.ignite.internal.placementdriver.PrimaryReplicaAwaitTimeoutException: 
IGN-PLACEMENTDRIVER-1 TraceId:0a32c369-b9ca-4091-b8de-af15d65a1f52 The primary 
replica await timed out [replicationGroupId=3_part_5, 
referenceTimestamp=HybridTimestamp [time=111220884095959043, 
physical=1697096009765, logical=3], currentLease=Lease 
[leaseholder=isckvt_tmcada_3346, accepted=false, startTime=HybridTimestamp 
[time=111220884127809550, physical=1697096010251, logical=14], 
expirationTime=HybridTimestamp [time=111220891992129536, 
physical=1697096130251, logical=0], prolongable=false, 
replicationGroupId=3_part_5]]
at 
org.apache.ignite.internal.placementdriver.leases.LeaseTracker.lambda$awaitPrimaryReplica$2(LeaseTracker.java:229)
at 
java.base/java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:986)
... 10 more
Caused by: java.util.concurrent.TimeoutException
... 7 more
h2. A stacktrace for 3
org.apache.ignite.tx.TransactionException: IGN-REP-3 
TraceId:d41dcd22-5370-47cd-837b-c23268480162 Replication is timed out 
[replicaGrpId=3_part_5]
 
at 

[jira] [Updated] (IGNITE-20577) Partial data loss after node restart

2023-10-12 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20577:
-
Description: 
How to reproduce:

1. Start a 1-node cluster
2. Create several simple tables (usually 5 is enough to reproduce):
{code:sql}
create table failoverTest00(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
create table failoverTest01(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
...
{code}
3. Fill every table with 1000 rows.
4. Ensure that every table contains 1000 rows:
{code:sql}
SELECT COUNT(*) FROM failoverTest00;
...
{code}
5. Restart node (kill a Java process and start node again).
6. Check all tables again.

Expected behavior: after restart, all tables still contain the same data as 
before.

Actual behavior: for some tables, 1 or 2 rows may be missing if we're fast 
enough on steps 3-4-5. Some contain 1000 rows, some contain 999 or 998.

This bug was first observed only near Sep 15, 2023. Most probably, it was 
introduced somewhere near that date. Probably, it's another facet of 
IGNITE-20425 (I'm not sure though). No errors in logs observed.

*UPD*: The problem is caused by 
https://issues.apache.org/jira/browse/IGNITE-20116; the current issue will be 
resolved once https://issues.apache.org/jira/browse/IGNITE-20116 is done
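
An illustrative fill statement for step 3 (arbitrary values, one table shown):

{code:sql}
INSERT INTO failoverTest00(k1, k2, v1, v2, v3)
VALUES (1, 1, 'value1', 'value2', TIMESTAMP '2023-10-12 10:00:00');
{code}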

  was:
How to reproduce:

1. Start a 1-node cluster
2. Create several simple tables (usually 5 is enough to reproduce):
{code:sql}
create table failoverTest00(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
create table failoverTest01(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
...
{code}
3. Fill every table with 1000 rows.
4. Ensure that every table contains 1000 rows:
{code:sql}
SELECT COUNT(*) FROM failoverTest00;
...
{code}
5. Restart node (kill a Java process and start node again).
6. Check all tables again.

Expected behavior: after restart, all tables still contain the same data as 
before.

Actual behavior: for some tables, 1 or 2 rows may be missing if we're fast 
enough on steps 3-4-5. Some contain 1000 rows, some contain 999 or 998.

This bug was first observed only near Sep 15, 2023. Most probably, it was 
introduced somewhere near that date. Probably, it's another facet of 
IGNITE-20425 (I'm not sure though). No errors in logs observed.

UPD: The problem is caused by 
https://issues.apache.org/jira/browse/IGNITE-20116; the current issue will be 
resolved once https://issues.apache.org/jira/browse/IGNITE-20116 is done


> Partial data loss after node restart
> 
>
> Key: IGNITE-20577
> URL: https://issues.apache.org/jira/browse/IGNITE-20577
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrey Khitrin
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> How to reproduce:
> 1. Start a 1-node cluster
> 2. Create several simple tables (usually 5 is enough to reproduce):
> {code:sql}
> create table failoverTest00(k1 INTEGER not null, k2 INTEGER not null, v1 
> VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
> create table failoverTest01(k1 INTEGER not null, k2 INTEGER not null, v1 
> VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
> ...
> {code}
> 3. Fill every table with 1000 rows.
> 4. Ensure that every table contains 1000 rows:
> {code:sql}
> SELECT COUNT(*) FROM failoverTest00;
> ...
> {code}
> 5. Restart node (kill a Java process and start node again).
> 6. Check all tables again.
> Expected behavior: after restart, all tables still contain the same data as 
> before.
> Actual behavior: for some tables, 1 or 2 rows may be missing if we're fast 
> enough on steps 3-4-5. Some contain 1000 rows, some contain 999 or 998.
> This bug was first observed only near Sep 15, 2023. Most probably, it was 
> introduced somewhere near that date. Probably, it's another facet of 
> IGNITE-20425 (I'm not sure though). No errors in logs observed.
> *UPD*: The problem is caused by 
> https://issues.apache.org/jira/browse/IGNITE-20116; the current issue will be 
> resolved once https://issues.apache.org/jira/browse/IGNITE-20116 is done



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20577) Partial data loss after node restart

2023-10-12 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-20577:
-
Description: 
How to reproduce:

1. Start a 1-node cluster
2. Create several simple tables (usually 5 is enough to reproduce):
{code:sql}
create table failoverTest00(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
create table failoverTest01(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
...
{code}
3. Fill every table with 1000 rows.
4. Ensure that every table contains 1000 rows:
{code:sql}
SELECT COUNT(*) FROM failoverTest00;
...
{code}
5. Restart node (kill a Java process and start node again).
6. Check all tables again.

Expected behavior: after restart, all tables still contain the same data as 
before.

Actual behavior: for some tables, 1 or 2 rows may be missing if we're fast 
enough on steps 3-4-5. Some contain 1000 rows, some contain 999 or 998.

This bug was first observed only near Sep 15, 2023. Most probably, it was 
introduced somewhere near that date. Probably, it's another facet of 
IGNITE-20425 (I'm not sure though). No errors in logs observed.

UPD: The problem is caused by 
https://issues.apache.org/jira/browse/IGNITE-20116; the current issue will be 
resolved once https://issues.apache.org/jira/browse/IGNITE-20116 is done

  was:
How to reproduce:

1. Start a 1-node cluster
2. Create several simple tables (usually 5 is enough to reproduce):
{code:sql}
create table failoverTest00(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
create table failoverTest01(k1 INTEGER not null, k2 INTEGER not null, v1 
VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
...
{code}
3. Fill every table with 1000 rows.
4. Ensure that every table contains 1000 rows:
{code:sql}
SELECT COUNT(*) FROM failoverTest00;
...
{code}
5. Restart node (kill a Java process and start node again).
6. Check all tables again.

Expected behavior: after restart, all tables still contain the same data as 
before.

Actual behavior: for some tables, 1 or 2 rows may be missing if we're fast 
enough on steps 3-4-5. Some contain 1000 rows, some contain 999 or 998.

This bug was first observed only near Sep 15, 2023. Most probably, it was 
introduced somewhere near that date. Probably, it's another facet of 
IGNITE-20425 (I'm not sure though). No errors in logs observed.


> Partial data loss after node restart
> 
>
> Key: IGNITE-20577
> URL: https://issues.apache.org/jira/browse/IGNITE-20577
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrey Khitrin
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> How to reproduce:
> 1. Start a 1-node cluster
> 2. Create several simple tables (usually 5 is enough to reproduce):
> {code:sql}
> create table failoverTest00(k1 INTEGER not null, k2 INTEGER not null, v1 
> VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
> create table failoverTest01(k1 INTEGER not null, k2 INTEGER not null, v1 
> VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP not null, primary key (k1, k2));
> ...
> {code}
> 3. Fill every table with 1000 rows.
> 4. Ensure that every table contains 1000 rows:
> {code:sql}
> SELECT COUNT(*) FROM failoverTest00;
> ...
> {code}
> 5. Restart node (kill a Java process and start node again).
> 6. Check all tables again.
> Expected behavior: after restart, all tables still contain the same data as 
> before.
> Actual behavior: for some tables, 1 or 2 rows may be missing if we're fast 
> enough on steps 3-4-5. Some contain 1000 rows, some contain 999 or 998.
> This bug was first observed only near Sep 15, 2023. Most probably, it was 
> introduced somewhere near that date. Probably, it's another facet of 
> IGNITE-20425 (I'm not sure though). No errors in logs observed.
> UPD: The problem is caused by 
> https://issues.apache.org/jira/browse/IGNITE-20116; the current issue will be 
> resolved once https://issues.apache.org/jira/browse/IGNITE-20116 is done



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20617) SQL: ~20x performance degradation in SELECTS (2 nodes VS 1 node)

2023-10-12 Thread Ivan Artiukhov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Artiukhov updated IGNITE-20617:

Description: 
Ignite 3, rev. 7d188ac7ae068bd69ff0e6e6cfe5a32ac5749d17

Benchmark: 
[https://github.com/gridgain/YCSB/blob/ycsb-2023.3/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
 

The benchmark establishes an SQL {{Session}} and performs {{SELECTs}} via 
{{preparedStatement}}.

 

Steps:
 * Run an Ignite cluster of 2 nodes with the attached config 
[^ignite-config.json] .
 ** *fsync = false*
 * Run the SQL YCSB benchmark in preload mode:
 ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
hosts=192.168.1.60}}
 * Run the SQL YCSB benchmark in 100% read mode: 
 ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -t -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
operationcount=1 -p recordcount=1 -p dataintegrity=true -p 
measurementtype=timeseries -p hosts=192.168.1.60 -s}}
 * Observe the following average throughput on reads:

!sql-2nodes-select.png!

Server node's logs: [^sql-logs-2-server-nodes.zip]

Repeat the test with only 1 server node and observe *~20x better throughput 
on reads*:

!sql-1node-select.png!

 

 

  was:
Ignite 3, rev. 7d188ac7ae068bd69ff0e6e6cfe5a32ac5749d17

Benchmark: 
[https://github.com/gridgain/YCSB/blob/ycsb-2023.3/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
 

The benchmark establishes an SQL {{Session}} and performs {{SELECTs}} via 
{{preparedStatement}}.

 

Steps:
 * Run an Ignite cluster of 2 nodes with the attached config 
[^ignite-config.json] . 
 ** *fsync = false*
 * Run the SQL YCSB benchmark in preload mode:
 ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
hosts=192.168.1.60}}
 * Run the SQL YCSB benchmark in 100% read mode: 
 ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -t -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
operationcount=1 -p recordcount=1 -p dataintegrity=true -p 
measurementtype=timeseries -p hosts=192.168.1.60 -s}}
 * Observe the following average throughput on reads:

!sql-2nodes-select.png!

Repeat the test with only 1 server node and observe *~20x better throughput 
on reads*:

!sql-1node-select.png!

 

 


> SQL: ~20x performance degradation in SELECTS (2 nodes VS 1 node)
> 
>
> Key: IGNITE-20617
> URL: https://issues.apache.org/jira/browse/IGNITE-20617
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Ivan Artiukhov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: ignite-config.json, jdbc-1node-select.png, 
> jdbc-2nodes-select.png, sql-1node-select.png, sql-2nodes-select.png, 
> sql-logs-2-server-nodes.zip
>
>
> Ignite 3, rev. 7d188ac7ae068bd69ff0e6e6cfe5a32ac5749d17
> Benchmark: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.3/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> The benchmark establishes an SQL {{Session}} and performs {{SELECTs}} via 
> {{preparedStatement}}.
>  
> Steps:
>  * Run an Ignite cluster of 2 nodes with the attached config 
> [^ignite-config.json] .
>  ** *fsync = false*
>  * Run the SQL YCSB benchmark in preload mode:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.60}}
>  * Run the SQL YCSB benchmark in 100% read mode: 
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -t -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> operationcount=1 -p recordcount=1 -p dataintegrity=true -p 
> measurementtype=timeseries -p hosts=192.168.1.60 -s}}
>  * Observe the following average throughput on reads:
> !sql-2nodes-select.png!
> Server node's logs: [^sql-logs-2-server-nodes.zip]
> Repeat the test with only 1 server node and observe *~20x better throughput 
> on reads*:
> !sql-1node-select.png!
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20617) SQL: ~20x performance degradation in SELECTS (2 nodes VS 1 node)

2023-10-12 Thread Ivan Artiukhov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Artiukhov updated IGNITE-20617:

Attachment: sql-logs-2-server-nodes.zip

> SQL: ~20x performance degradation in SELECTS (2 nodes VS 1 node)
> 
>
> Key: IGNITE-20617
> URL: https://issues.apache.org/jira/browse/IGNITE-20617
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Ivan Artiukhov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: ignite-config.json, jdbc-1node-select.png, 
> jdbc-2nodes-select.png, sql-1node-select.png, sql-2nodes-select.png, 
> sql-logs-2-server-nodes.zip
>
>
> Ignite 3, rev. 7d188ac7ae068bd69ff0e6e6cfe5a32ac5749d17
> Benchmark: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.3/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> The benchmark establishes an SQL {{Session}} and performs {{SELECTs}} via 
> {{preparedStatement}}.
>  
> Steps:
>  * Run an Ignite cluster of 2 nodes with the attached config 
> [^ignite-config.json] . 
>  ** *fsync = false*
>  * Run the SQL YCSB benchmark in preload mode:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.60}}
>  * Run the SQL YCSB benchmark in 100% read mode: 
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -t -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> operationcount=1 -p recordcount=1 -p dataintegrity=true -p 
> measurementtype=timeseries -p hosts=192.168.1.60 -s}}
>  * Observe the following average throughput on reads:
> !sql-2nodes-select.png!
> Repeat the test with only 1 server node and observe *~20x better throughput 
> on reads*:
> !sql-1node-select.png!
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20618) Sql. Degradation of SELECT operations performance over time

2023-10-12 Thread Ivan Artiukhov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774344#comment-17774344
 ] 

Ivan Artiukhov commented on IGNITE-20618:
-

[~amashenkov] Here are the node's log and GC log:

[^poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log]

[^gc-poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log]

> Sql. Degradation of SELECT operations performance over time
> ---
>
> Key: IGNITE-20618
> URL: https://issues.apache.org/jira/browse/IGNITE-20618
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Ivan Artiukhov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: 
> gc-poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log, 
> ignite-config.json, 
> poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log, 
> sql-degr-insert.png, sql-degr-select.png
>
>
> Ignite 3, rev. 7d188ac7ae068bd69ff0e6e6cfe5a32ac5749d17
> Benchmark: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.3/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> The benchmark establishes an SQL {{Session}} and performs {{SELECTs}} via 
> {{preparedStatement}}.
> *Steps*:
>  * Start a 1 node cluster with the attached [^ignite-config.json]
>  ** *raft.fsync = false*
>  * Start the benchmark in pre-load mode – preload 100k entries:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.43 -p recordcount=10 -p operationcount=10}}
>  * Start the benchmark in 100% read mode:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.43 -p recordcount=10 -p operationcount=10}}
> The following *stable* throughput is observed in preload mode 
> ({{INSERT}}):
> !sql-degr-insert.png!
> The following *unstable* throughput is observed on reads ({{SELECT}}):
> !sql-degr-select.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20618) Sql. Degradation of SELECT operations performance over time

2023-10-12 Thread Ivan Artiukhov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Artiukhov updated IGNITE-20618:

Attachment: poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log
gc-poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log

> Sql. Degradation of SELECT operations performance over time
> ---
>
> Key: IGNITE-20618
> URL: https://issues.apache.org/jira/browse/IGNITE-20618
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Ivan Artiukhov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: 
> gc-poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log, 
> ignite-config.json, 
> poc-tester-SERVER-192.168.1.43-id-0-2023-10-09-16-17-07.log, 
> sql-degr-insert.png, sql-degr-select.png
>
>
> Ignite 3, rev. 7d188ac7ae068bd69ff0e6e6cfe5a32ac5749d17
> Benchmark: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.3/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> The benchmark establishes an SQL {{Session}} and performs {{SELECTs}} via 
> {{preparedStatement}}.
> *Steps*:
>  * Start a 1 node cluster with the attached [^ignite-config.json]
>  ** *raft.fsync = false*
>  * Start the benchmark in pre-load mode – preload 100k entries:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.43 -p recordcount=10 -p operationcount=10}}
>  * Start the benchmark in 100% read mode:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=1 -p dataintegrity=true -p measurementtype=timeseries -p 
> hosts=192.168.1.43 -p recordcount=10 -p operationcount=10}}
> The following *stable* throughput is observed in preload mode 
> ({{INSERT}}):
> !sql-degr-insert.png!
> The following *unstable* throughput is observed on reads ({{SELECT}}):
> !sql-degr-select.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20618) Sql. Degradation of SELECT operations performance over time

2023-10-12 Thread Evgeny Stanilovsky (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774342#comment-17774342
 ] 

Evgeny Stanilovsky commented on IGNITE-20618:
-

[~Artukhov] yep, it is always useful to attach all the (system, GC) logs; the platform (bare metal, AWS, ...) is also interesting, and sometimes swap info and oversubscription details when dealing with Docker.

> Sql. Degradation of SELECT operations performance over time
> ---
>
> Key: IGNITE-20618
> URL: https://issues.apache.org/jira/browse/IGNITE-20618
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Ivan Artiukhov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: ignite-config.json, sql-degr-insert.png, 
> sql-degr-select.png
>
>
> Ignite 3, rev. 7d188ac7ae068bd69ff0e6e6cfe5a32ac5749d17
> Benchmark: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.3/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> The benchmark establishes an SQL {{Session}} and performs {{SELECTs}} via {{preparedStatement}}.
> *Steps:*
>  * Start a 1-node cluster with the attached [^ignite-config.json]
>  ** *raft.fsync = false*
>  * Start the benchmark in pre-load mode – preload 100k entries:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p recordcount=100000 -p dataintegrity=true -p measurementtype=timeseries -p hosts=192.168.1.43 -p operationcount=100000}}
>  * Start the benchmark in 100% read mode:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p recordcount=100000 -p dataintegrity=true -p measurementtype=timeseries -p hosts=192.168.1.43 -p operationcount=100000}}
> The following *stable* throughput is observed in preload mode ({{INSERT}}):
> !sql-degr-insert.png!
> The following *unstable* throughput is observed on reads ({{SELECT}}):
> !sql-degr-select.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20618) Sql. Degradation of SELECT operations performance over time

2023-10-12 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-20618:
--
Summary: Sql. Degradation of SELECT operations performance over time  (was: 
SQL: degradation of SELECT operations performance over time)

> Sql. Degradation of SELECT operations performance over time
> ---
>
> Key: IGNITE-20618
> URL: https://issues.apache.org/jira/browse/IGNITE-20618
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Ivan Artiukhov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: ignite-config.json, sql-degr-insert.png, 
> sql-degr-select.png
>
>
> Ignite 3, rev. 7d188ac7ae068bd69ff0e6e6cfe5a32ac5749d17
> Benchmark: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.3/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> The benchmark establishes an SQL {{Session}} and performs {{SELECTs}} via {{preparedStatement}}.
> *Steps:*
>  * Start a 1-node cluster with the attached [^ignite-config.json]
>  ** *raft.fsync = false*
>  * Start the benchmark in pre-load mode – preload 100k entries:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p recordcount=100000 -p dataintegrity=true -p measurementtype=timeseries -p hosts=192.168.1.43 -p operationcount=100000}}
>  * Start the benchmark in 100% read mode:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p recordcount=100000 -p dataintegrity=true -p measurementtype=timeseries -p hosts=192.168.1.43 -p operationcount=100000}}
> The following *stable* throughput is observed in preload mode ({{INSERT}}):
> !sql-degr-insert.png!
> The following *unstable* throughput is observed on reads ({{SELECT}}):
> !sql-degr-select.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20618) SQL: degradation of SELECT operations performance over time

2023-10-12 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov reassigned IGNITE-20618:
-

Assignee: Konstantin Orlov

> SQL: degradation of SELECT operations performance over time
> ---
>
> Key: IGNITE-20618
> URL: https://issues.apache.org/jira/browse/IGNITE-20618
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Ivan Artiukhov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3, performance
> Attachments: ignite-config.json, sql-degr-insert.png, 
> sql-degr-select.png
>
>
> Ignite 3, rev. 7d188ac7ae068bd69ff0e6e6cfe5a32ac5749d17
> Benchmark: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.3/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> The benchmark establishes an SQL {{Session}} and performs {{SELECTs}} via {{preparedStatement}}.
> *Steps:*
>  * Start a 1-node cluster with the attached [^ignite-config.json]
>  ** *raft.fsync = false*
>  * Start the benchmark in pre-load mode – preload 100k entries:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p recordcount=100000 -p dataintegrity=true -p measurementtype=timeseries -p hosts=192.168.1.43 -p operationcount=100000}}
>  * Start the benchmark in 100% read mode:
>  ** {{-db site.ycsb.db.ignite3.IgniteSqlClient -load -P /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p recordcount=100000 -p dataintegrity=true -p measurementtype=timeseries -p hosts=192.168.1.43 -p operationcount=100000}}
> The following *stable* throughput is observed in preload mode ({{INSERT}}):
> !sql-degr-insert.png!
> The following *unstable* throughput is observed on reads ({{SELECT}}):
> !sql-degr-select.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)