[jira] [Updated] (IGNITE-16975) Windows support for CLI
[ https://issues.apache.org/jira/browse/IGNITE-16975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pavel Tupitsyn updated IGNITE-16975:
Fix Version/s: 3.0.0-beta2

> Windows support for CLI
> ---
>
> Key: IGNITE-16975
> URL: https://issues.apache.org/jira/browse/IGNITE-16975
> Project: Ignite
> Issue Type: Task
> Reporter: Aleksandr
> Assignee: Ivan Gagarkin
> Priority: Minor
> Labels: ignite-3, ignite-3-cli-tool
> Fix For: 3.0.0-beta2
>
> Attachments: image-2022-08-25-12-40-20-454.png
>
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> "Support" here means:
> * command output is rendered well (tables, JSON)
> * autocompletion and command history support
> * ANSI color support
> Environments: Windows Command Prompt, PowerShell.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
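The ANSI-color requirement above can be illustrated with a minimal Java sketch (illustration only, not the actual Ignite CLI code; the real CLI uses terminal libraries for this). On Windows 10+ consoles with virtual-terminal processing enabled, such as Windows Terminal and recent PowerShell, these escape sequences render as colors:

```java
// Minimal sketch: wrap text in ANSI color escape sequences.
// On Windows consoles with VT processing enabled, these render as colors;
// older Command Prompt sessions may print the raw escape bytes instead,
// which is exactly the kind of gap this ticket is about.
public class AnsiDemo {
    static final String GREEN = "\u001B[32m";
    static final String RESET = "\u001B[0m";

    static String colorize(String s) {
        return GREEN + s + RESET;
    }

    public static void main(String[] args) {
        System.out.println(colorize("node started"));
    }
}
```

Libraries such as JLine and picocli handle terminal detection and Windows VT enabling, which is why the CLI tooling change is needed rather than raw escape output.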
[jira] [Commented] (IGNITE-16975) Windows support for CLI
[ https://issues.apache.org/jira/browse/IGNITE-16975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17635177#comment-17635177 ]

Pavel Tupitsyn commented on IGNITE-16975:
Merged to main: 5f09e12e7690eaef3305de87843abffb762bda95

> Windows support for CLI
> ---
>
> Key: IGNITE-16975
> URL: https://issues.apache.org/jira/browse/IGNITE-16975
> Project: Ignite
> Issue Type: Task
> Reporter: Aleksandr
> Assignee: Ivan Gagarkin
> Priority: Minor
> Labels: ignite-3, ignite-3-cli-tool
> Attachments: image-2022-08-25-12-40-20-454.png
>
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> "Support" here means:
> * command output is rendered well (tables, JSON)
> * autocompletion and command history support
> * ANSI color support
> Environments: Windows Command Prompt, PowerShell.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18180) Implement partition destruction for RocksDB
[ https://issues.apache.org/jira/browse/IGNITE-18180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kirill Tkalenko updated IGNITE-18180:
-
Description:
At the moment, destruction of a partition does not work quite correctly. The problems are:
# On an *org.apache.ignite.internal.storage.rocksdb.RocksDbTableStorage#destroyPartition* call we do not destroy the indexes;
# If the node fails after the call to destroy the partition and before flushing, the partition will continue to exist, which may lead to negative consequences.
Also important:
# Write appropriate tests;
# On destruction of the partition, we must prevent further reads from the partition and its indexes, and close all existing cursors over the partition and indexes.

was:
At the moment, destruction of a partition does not work quite correctly. The problems are:
# On an *org.apache.ignite.internal.storage.rocksdb.RocksDbTableStorage#destroyPartition* call we do not destroy the indexes;
# If the node fails after the call to destroy the partition and before flushing, the partition will continue to exist, which may lead to negative consequences.

> Implement partition destruction for RocksDB
> --
>
> Key: IGNITE-18180
> URL: https://issues.apache.org/jira/browse/IGNITE-18180
> Project: Ignite
> Issue Type: Improvement
> Reporter: Kirill Tkalenko
> Priority: Major
> Labels: ignite-3
> Fix For: 3.0.0-beta2
>
> At the moment, destruction of a partition does not work quite correctly. The problems are:
> # On an *org.apache.ignite.internal.storage.rocksdb.RocksDbTableStorage#destroyPartition* call we do not destroy the indexes;
> # If the node fails after the call to destroy the partition and before flushing, the partition will continue to exist, which may lead to negative consequences.
> Also important:
> # Write appropriate tests;
> # On destruction of the partition, we must prevent further reads from the partition and its indexes, and close all existing cursors over the partition and indexes.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
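The invariant described above can be sketched with plain Java (all names here are invented for illustration; this is not the real Ignite storage API): destroying a partition must drop both the partition data and its secondary index entries, and any later access must be rejected.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical model of the destroy-partition invariant from the ticket:
// destroy() must clear the index as well as the rows, and subsequent
// reads/writes must fail rather than observe stale data.
class PartitionSketch {
    private final Map<String, String> rows = new ConcurrentHashMap<>();
    private final Map<String, String> index = new ConcurrentHashMap<>();
    private volatile boolean destroyed;

    void put(String key, String value) {
        ensureAlive();
        rows.put(key, value);
        index.put(value, key); // secondary index entry
    }

    String get(String key) {
        ensureAlive();
        return rows.get(key);
    }

    void destroy() {
        destroyed = true; // reject new reads/writes first
        rows.clear();     // drop partition data
        index.clear();    // drop index data too -- the step the ticket says is missing
    }

    boolean isDestroyed() {
        return destroyed;
    }

    private void ensureAlive() {
        if (destroyed) {
            throw new IllegalStateException("Partition is destroyed");
        }
    }
}
```

The crash-before-flush problem is not modeled here; in the real storage it additionally requires persisting a destruction marker so that recovery completes the destruction instead of resurrecting the partition.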
[jira] [Created] (IGNITE-18180) Implement partition destruction for RocksDB
Kirill Tkalenko created IGNITE-18180:
Summary: Implement partition destruction for RocksDB
Key: IGNITE-18180
URL: https://issues.apache.org/jira/browse/IGNITE-18180
Project: Ignite
Issue Type: Improvement
Reporter: Kirill Tkalenko
Fix For: 3.0.0-beta2

At the moment, destruction of a partition does not work quite correctly. The problems are:
# On an *org.apache.ignite.internal.storage.rocksdb.RocksDbTableStorage#destroyPartition* call we do not destroy the indexes;
# If the node fails after the call to destroy the partition and before flushing, the partition will continue to exist, which may lead to negative consequences.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18179) KafkaProducer and KafkaConsumer classes missing in ignite-cdc-ext build
[ https://issues.apache.org/jira/browse/IGNITE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ilya Shishkov updated IGNITE-18179:
---
Description:
CDC through Kafka uses {{KafkaConsumer}} and {{KafkaProducer}}, which are parts of the {{kafka-clients}} module.
But when you build ignite-cdc-ext with the command below:
{code}
mvn clean package -f modules/cdc-ext/ -P checkstyle,extension-release -DskipTests
{code}
you will obtain a zip file with the structure [^ignite-cdc-ext-current.txt], where the {{kafka-clients}} dependency_* is missing*_.
So, when you try to start {{KafkaToIgniteCdcStreamer}} or {{IgniteToKafkaCdcStreamer}}, you will get a missing-classes error.
Building of the module changed after IGNITE-16847, IGNITE-16815. The structure was: [^ignite-cdc-ext-old.txt]. As you can see, many jar libraries were included, but I'm not sure whether all of them are needed to run CDC.
I have prepared a patch [^cdc-ext-build-patch.patch], which replaces _kafka_2.12-2.7.0.jar_ with _kafka-clients-2.7.0.jar_, and tested it locally: the missing-class problems are eliminated, and simple active-active replication cases seem to work fine. However, the patch does not restore the other dependencies from the old build assembly structure (I'm not sure that all dependencies are satisfied).
Structure after the patch: [^ignite-cdc-ext-patch.txt]
Also, the proposed patch affects other modules which use {{/assembly/bin-component-shared.xml}} in the build process.

was:
CDC through Kafka uses {{KafkaConsumer}} and {{KafkaProducer}}, which are parts of the {{kafka-clients}} module.
But when you build ignite-cdc-ext with the command below:
{code}
mvn clean package -f modules/cdc-ext/ -P checkstyle,extension-release -DskipTests
{code}
you will obtain a zip file with the structure [^ignite-cdc-ext-current.txt], where the {{kafka-clients}} dependency is missing.
So, when you try to start {{KafkaToIgniteCdcStreamer}} or {{IgniteToKafkaCdcStreamer}}, you will get a missing-classes error.
Building of the module changed after IGNITE-16847, IGNITE-16815. The structure was: [^ignite-cdc-ext-old.txt]. As you can see, many jar libraries were included, but I'm not sure whether all of them are needed to run CDC.
I have prepared a patch [^cdc-ext-build-patch.patch], which replaces _kafka_2.12-2.7.0.jar_ with _kafka-clients-2.7.0.jar_, and tested it locally: the missing-class problems are eliminated, and simple active-active replication cases seem to work fine. However, the patch does not restore the other dependencies from the old build assembly structure (I'm not sure that all dependencies are satisfied).
Structure after the patch: [^ignite-cdc-ext-patch.txt]
Also, the proposed patch affects other modules which use {{/assembly/bin-component-shared.xml}} in the build process.

> KafkaProducer and KafkaConsumer classes missing in ignite-cdc-ext build
> -
>
> Key: IGNITE-18179
> URL: https://issues.apache.org/jira/browse/IGNITE-18179
> Project: Ignite
> Issue Type: Bug
> Components: extensions
> Reporter: Ilya Shishkov
> Priority: Major
> Labels: ise
> Attachments: cdc-ext-build-patch.patch, ignite-cdc-ext-current.txt, ignite-cdc-ext-old.txt, ignite-cdc-ext-patch.txt
>
>
> CDC through Kafka uses {{KafkaConsumer}} and {{KafkaProducer}}, which are parts of the {{kafka-clients}} module.
> But when you build ignite-cdc-ext with the command below:
> {code}
> mvn clean package -f modules/cdc-ext/ -P checkstyle,extension-release -DskipTests
> {code}
> you will obtain a zip file with the structure [^ignite-cdc-ext-current.txt], where the {{kafka-clients}} dependency_* is missing*_.
> So, when you try to start {{KafkaToIgniteCdcStreamer}} or {{IgniteToKafkaCdcStreamer}}, you will get a missing-classes error.
> Building of the module changed after IGNITE-16847, IGNITE-16815. The structure was: [^ignite-cdc-ext-old.txt]. As you can see, many jar libraries were included, but I'm not sure whether all of them are needed to run CDC.
> I have prepared a patch [^cdc-ext-build-patch.patch], which replaces _kafka_2.12-2.7.0.jar_ with _kafka-clients-2.7.0.jar_, and tested it locally: the missing-class problems are eliminated, and simple active-active replication cases seem to work fine. However, the patch does not restore the other dependencies from the old build assembly structure (I'm not sure that all dependencies are satisfied).
> Structure after the patch: [^ignite-cdc-ext-patch.txt]
> Also, the proposed patch affects other modules which use {{/assembly/bin-component-shared.xml}} in the build process.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
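The missing-dependency symptom described above can be checked directly on the built distribution. A small self-contained Java sketch (the class name and the zip path argument are assumptions for illustration) scans the assembled zip and reports whether any kafka-clients jar is packaged:

```java
import java.io.IOException;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

// Scan a built ignite-cdc-ext distribution zip and report whether a
// kafka-clients jar is packaged. If it is missing, KafkaToIgniteCdcStreamer
// and IgniteToKafkaCdcStreamer will fail with missing-class errors.
public class CheckKafkaClients {
    static boolean containsKafkaClients(ZipFile zip) {
        Enumeration<? extends ZipEntry> entries = zip.entries();
        while (entries.hasMoreElements()) {
            String name = entries.nextElement().getName();
            if (name.contains("kafka-clients") && name.endsWith(".jar")) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        // args[0]: path to the zip produced by the mvn command above.
        try (ZipFile zip = new ZipFile(args[0])) {
            System.out.println(containsKafkaClients(zip)
                ? "kafka-clients is packaged"
                : "kafka-clients is MISSING");
        }
    }
}
```

Running this against the zip produced by the `mvn clean package` command above would confirm the reported problem before and after applying the patch.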
[jira] [Updated] (IGNITE-18179) KafkaProducer and KafkaConsumer classes missing in ignite-cdc-ext build
[ https://issues.apache.org/jira/browse/IGNITE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ilya Shishkov updated IGNITE-18179:
---
Description:
CDC through Kafka uses {{KafkaConsumer}} and {{KafkaProducer}}, which are parts of the {{kafka-clients}} module.
But when you build ignite-cdc-ext with the command below:
{code}
mvn clean package -f modules/cdc-ext/ -P checkstyle,extension-release -DskipTests
{code}
you will obtain a zip file with the structure [^ignite-cdc-ext-current.txt], where the {{kafka-clients}} dependency _*is missing*_.
So, when you try to start {{KafkaToIgniteCdcStreamer}} or {{IgniteToKafkaCdcStreamer}}, you will get a missing-classes error.
Building of the module changed after IGNITE-16847, IGNITE-16815. The structure was: [^ignite-cdc-ext-old.txt]. As you can see, many jar libraries were included, but I'm not sure whether all of them are needed to run CDC.
I have prepared a patch [^cdc-ext-build-patch.patch], which replaces _kafka_2.12-2.7.0.jar_ with _kafka-clients-2.7.0.jar_, and tested it locally: the missing-class problems are eliminated, and simple active-active replication cases seem to work fine. However, the patch does not restore the other dependencies from the old build assembly structure (I'm not sure that all dependencies are satisfied).
Structure after the patch: [^ignite-cdc-ext-patch.txt]
Also, the proposed patch affects other modules which use {{/assembly/bin-component-shared.xml}} in the build process.

was:
CDC through Kafka uses {{KafkaConsumer}} and {{KafkaProducer}}, which are parts of the {{kafka-clients}} module.
But when you build ignite-cdc-ext with the command below:
{code}
mvn clean package -f modules/cdc-ext/ -P checkstyle,extension-release -DskipTests
{code}
you will obtain a zip file with the structure [^ignite-cdc-ext-current.txt], where the {{kafka-clients}} dependency_* is missing*_.
So, when you try to start {{KafkaToIgniteCdcStreamer}} or {{IgniteToKafkaCdcStreamer}}, you will get a missing-classes error.
Building of the module changed after IGNITE-16847, IGNITE-16815. The structure was: [^ignite-cdc-ext-old.txt]. As you can see, many jar libraries were included, but I'm not sure whether all of them are needed to run CDC.
I have prepared a patch [^cdc-ext-build-patch.patch], which replaces _kafka_2.12-2.7.0.jar_ with _kafka-clients-2.7.0.jar_, and tested it locally: the missing-class problems are eliminated, and simple active-active replication cases seem to work fine. However, the patch does not restore the other dependencies from the old build assembly structure (I'm not sure that all dependencies are satisfied).
Structure after the patch: [^ignite-cdc-ext-patch.txt]
Also, the proposed patch affects other modules which use {{/assembly/bin-component-shared.xml}} in the build process.

> KafkaProducer and KafkaConsumer classes missing in ignite-cdc-ext build
> -
>
> Key: IGNITE-18179
> URL: https://issues.apache.org/jira/browse/IGNITE-18179
> Project: Ignite
> Issue Type: Bug
> Components: extensions
> Reporter: Ilya Shishkov
> Priority: Major
> Labels: ise
> Attachments: cdc-ext-build-patch.patch, ignite-cdc-ext-current.txt, ignite-cdc-ext-old.txt, ignite-cdc-ext-patch.txt
>
>
> CDC through Kafka uses {{KafkaConsumer}} and {{KafkaProducer}}, which are parts of the {{kafka-clients}} module.
> But when you build ignite-cdc-ext with the command below:
> {code}
> mvn clean package -f modules/cdc-ext/ -P checkstyle,extension-release -DskipTests
> {code}
> you will obtain a zip file with the structure [^ignite-cdc-ext-current.txt], where the {{kafka-clients}} dependency _*is missing*_.
> So, when you try to start {{KafkaToIgniteCdcStreamer}} or {{IgniteToKafkaCdcStreamer}}, you will get a missing-classes error.
> Building of the module changed after IGNITE-16847, IGNITE-16815. The structure was: [^ignite-cdc-ext-old.txt]. As you can see, many jar libraries were included, but I'm not sure whether all of them are needed to run CDC.
> I have prepared a patch [^cdc-ext-build-patch.patch], which replaces _kafka_2.12-2.7.0.jar_ with _kafka-clients-2.7.0.jar_, and tested it locally: the missing-class problems are eliminated, and simple active-active replication cases seem to work fine. However, the patch does not restore the other dependencies from the old build assembly structure (I'm not sure that all dependencies are satisfied).
> Structure after the patch: [^ignite-cdc-ext-patch.txt]
> Also, the proposed patch affects other modules which use {{/assembly/bin-component-shared.xml}} in the build process.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18179) KafkaProducer and KafkaConsumer classes missing in ignite-cdc-ext build
[ https://issues.apache.org/jira/browse/IGNITE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ilya Shishkov updated IGNITE-18179:
---
Description:
CDC through Kafka uses {{KafkaConsumer}} and {{KafkaProducer}}, which are parts of the {{kafka-clients}} module.
But when you build ignite-cdc-ext with the command below:
{code}
mvn clean package -f modules/cdc-ext/ -P checkstyle,extension-release -DskipTests
{code}
you will obtain a zip file with the structure [^ignite-cdc-ext-current.txt], where the {{kafka-clients}} dependency is missing.
So, when you try to start {{KafkaToIgniteCdcStreamer}} or {{IgniteToKafkaCdcStreamer}}, you will get a missing-classes error.
Building of the module changed after IGNITE-16847, IGNITE-16815. The structure was: [^ignite-cdc-ext-old.txt]. As you can see, many jar libraries were included, but I'm not sure whether all of them are needed to run CDC.
I have prepared a patch [^cdc-ext-build-patch.patch], which replaces _kafka_2.12-2.7.0.jar_ with _kafka-clients-2.7.0.jar_, and tested it locally: the missing-class problems are eliminated, and simple active-active replication cases seem to work fine. However, the patch does not restore the other dependencies from the old build assembly structure (I'm not sure that all dependencies are satisfied).
Structure after the patch: [^ignite-cdc-ext-patch.txt]
Also, the proposed patch affects other modules which use {{/assembly/bin-component-shared.xml}} in the build process.

was:
CDC through Kafka uses {{KafkaConsumer}} and {{KafkaProducer}}, which are parts of the {{kafka-clients}} module.
But when you build ignite-cdc-ext with the command below:
{code}
mvn clean package -f modules/cdc-ext/ -P checkstyle,extension-release -DskipTests
{code}
you will obtain a zip file with the structure [^ignite-cdc-ext-current.txt], where the {{kafka-clients}} dependency is missing.
So, when you try to start {{KafkaToIgniteCdcStreamer}} or {{IgniteToKafkaCdcStreamer}}, you will get a missing-classes error.
Building of the module changed after IGNITE-16847, IGNITE-16815. The structure was: [^ignite-cdc-ext-old.txt]. As you can see, many jar libraries were included, but I'm not sure whether all of them are needed to run CDC.
I have prepared a patch [^cdc-ext-build-patch.patch], which replaces _kafka_2.12-2.7.0.jar_ with _kafka-clients-2.7.0.jar_, and tested it locally: the missing-class problems are eliminated, and simple active-active replication cases seem to work fine. However, the patch does not restore the other dependencies from the old build assembly structure (I'm not sure that all dependencies are satisfied).
Structure after the patch: [^ignite-cdc-ext-patch.txt]

> KafkaProducer and KafkaConsumer classes missing in ignite-cdc-ext build
> -
>
> Key: IGNITE-18179
> URL: https://issues.apache.org/jira/browse/IGNITE-18179
> Project: Ignite
> Issue Type: Bug
> Components: extensions
> Reporter: Ilya Shishkov
> Priority: Major
> Labels: ise
> Attachments: cdc-ext-build-patch.patch, ignite-cdc-ext-current.txt, ignite-cdc-ext-old.txt, ignite-cdc-ext-patch.txt
>
>
> CDC through Kafka uses {{KafkaConsumer}} and {{KafkaProducer}}, which are parts of the {{kafka-clients}} module.
> But when you build ignite-cdc-ext with the command below:
> {code}
> mvn clean package -f modules/cdc-ext/ -P checkstyle,extension-release -DskipTests
> {code}
> you will obtain a zip file with the structure [^ignite-cdc-ext-current.txt], where the {{kafka-clients}} dependency is missing.
> So, when you try to start {{KafkaToIgniteCdcStreamer}} or {{IgniteToKafkaCdcStreamer}}, you will get a missing-classes error.
> Building of the module changed after IGNITE-16847, IGNITE-16815. The structure was: [^ignite-cdc-ext-old.txt]. As you can see, many jar libraries were included, but I'm not sure whether all of them are needed to run CDC.
> I have prepared a patch [^cdc-ext-build-patch.patch], which replaces _kafka_2.12-2.7.0.jar_ with _kafka-clients-2.7.0.jar_, and tested it locally: the missing-class problems are eliminated, and simple active-active replication cases seem to work fine. However, the patch does not restore the other dependencies from the old build assembly structure (I'm not sure that all dependencies are satisfied).
> Structure after the patch: [^ignite-cdc-ext-patch.txt]
> Also, the proposed patch affects other modules which use {{/assembly/bin-component-shared.xml}} in the build process.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18179) KafkaProducer and KafkaConsumer classes missing in ignite-cdc-ext build
[ https://issues.apache.org/jira/browse/IGNITE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ilya Shishkov updated IGNITE-18179:
---
Attachment: (was: ignite-cdc-ext-current-1.txt)

> KafkaProducer and KafkaConsumer classes missing in ignite-cdc-ext build
> -
>
> Key: IGNITE-18179
> URL: https://issues.apache.org/jira/browse/IGNITE-18179
> Project: Ignite
> Issue Type: Bug
> Components: extensions
> Reporter: Ilya Shishkov
> Priority: Major
> Labels: ise
> Attachments: cdc-ext-build-patch.patch, ignite-cdc-ext-current.txt, ignite-cdc-ext-old.txt, ignite-cdc-ext-patch.txt
>
>
> CDC through Kafka uses {{KafkaConsumer}} and {{KafkaProducer}}, which are parts of the {{kafka-clients}} module.
> But when you build ignite-cdc-ext with the command below:
> {code}
> mvn clean package -f modules/cdc-ext/ -P checkstyle,extension-release -DskipTests
> {code}
> you will obtain a zip file with the structure [^ignite-cdc-ext-current.txt], where the {{kafka-clients}} dependency is missing.
> So, when you try to start {{KafkaToIgniteCdcStreamer}} or {{IgniteToKafkaCdcStreamer}}, you will get a missing-classes error.
> Building of the module changed after IGNITE-16847, IGNITE-16815. The structure was: [^ignite-cdc-ext-old.txt]. As you can see, many jar libraries were included, but I'm not sure whether all of them are needed to run CDC.
> I have prepared a patch [^cdc-ext-build-patch.patch], which replaces _kafka_2.12-2.7.0.jar_ with _kafka-clients-2.7.0.jar_, and tested it locally: the missing-class problems are eliminated, and simple active-active replication cases seem to work fine. However, the patch does not restore the other dependencies from the old build assembly structure (I'm not sure that all dependencies are satisfied).
> Structure after the patch: [^ignite-cdc-ext-patch.txt]

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18179) KafkaProducer and KafkaConsumer classes missing in ignite-cdc-ext build
[ https://issues.apache.org/jira/browse/IGNITE-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ilya Shishkov updated IGNITE-18179:
---
Attachment: ignite-cdc-ext-current.txt

> KafkaProducer and KafkaConsumer classes missing in ignite-cdc-ext build
> -
>
> Key: IGNITE-18179
> URL: https://issues.apache.org/jira/browse/IGNITE-18179
> Project: Ignite
> Issue Type: Bug
> Components: extensions
> Reporter: Ilya Shishkov
> Priority: Major
> Labels: ise
> Attachments: cdc-ext-build-patch.patch, ignite-cdc-ext-current.txt, ignite-cdc-ext-old.txt, ignite-cdc-ext-patch.txt
>
>
> CDC through Kafka uses {{KafkaConsumer}} and {{KafkaProducer}}, which are parts of the {{kafka-clients}} module.
> But when you build ignite-cdc-ext with the command below:
> {code}
> mvn clean package -f modules/cdc-ext/ -P checkstyle,extension-release -DskipTests
> {code}
> you will obtain a zip file with the structure [^ignite-cdc-ext-current.txt], where the {{kafka-clients}} dependency is missing.
> So, when you try to start {{KafkaToIgniteCdcStreamer}} or {{IgniteToKafkaCdcStreamer}}, you will get a missing-classes error.
> Building of the module changed after IGNITE-16847, IGNITE-16815. The structure was: [^ignite-cdc-ext-old.txt]. As you can see, many jar libraries were included, but I'm not sure whether all of them are needed to run CDC.
> I have prepared a patch [^cdc-ext-build-patch.patch], which replaces _kafka_2.12-2.7.0.jar_ with _kafka-clients-2.7.0.jar_, and tested it locally: the missing-class problems are eliminated, and simple active-active replication cases seem to work fine. However, the patch does not restore the other dependencies from the old build assembly structure (I'm not sure that all dependencies are satisfied).
> Structure after the patch: [^ignite-cdc-ext-patch.txt]

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-18179) KafkaProducer and KafkaConsumer classes missing in ignite-cdc-ext build
Ilya Shishkov created IGNITE-18179:
--
Summary: KafkaProducer and KafkaConsumer classes missing in ignite-cdc-ext build
Key: IGNITE-18179
URL: https://issues.apache.org/jira/browse/IGNITE-18179
Project: Ignite
Issue Type: Bug
Components: extensions
Reporter: Ilya Shishkov
Attachments: cdc-ext-build-patch.patch, ignite-cdc-ext-current-1.txt, ignite-cdc-ext-old.txt, ignite-cdc-ext-patch.txt

CDC through Kafka uses {{KafkaConsumer}} and {{KafkaProducer}}, which are parts of the {{kafka-clients}} module.
But when you build ignite-cdc-ext with the command below:
{code}
mvn clean package -f modules/cdc-ext/ -P checkstyle,extension-release -DskipTests
{code}
you will obtain a zip file with the structure [^ignite-cdc-ext-current.txt], where the {{kafka-clients}} dependency is missing.
So, when you try to start {{KafkaToIgniteCdcStreamer}} or {{IgniteToKafkaCdcStreamer}}, you will get a missing-classes error.
Building of the module changed after IGNITE-16847, IGNITE-16815. The structure was: [^ignite-cdc-ext-old.txt]. As you can see, many jar libraries were included, but I'm not sure whether all of them are needed to run CDC.
I have prepared a patch [^cdc-ext-build-patch.patch], which replaces _kafka_2.12-2.7.0.jar_ with _kafka-clients-2.7.0.jar_, and tested it locally: the missing-class problems are eliminated, and simple active-active replication cases seem to work fine. However, the patch does not restore the other dependencies from the old build assembly structure (I'm not sure that all dependencies are satisfied).
Structure after the patch: [^ignite-cdc-ext-patch.txt]

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-18178) .NET: Add support for Native AOT publish
[ https://issues.apache.org/jira/browse/IGNITE-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17635006#comment-17635006 ] Ignite TC Bot commented on IGNITE-18178: {panel:title=Branch: [pull/10382/head] Base: [master] : Possible Blockers (3)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1} {color:#d04437}PDS 4{color} [[tests 1|https://ci.ignite.apache.org/viewLog.html?buildId=6915356]] * IgnitePdsTestSuite4: IgniteClusterActivateDeactivateTestWithPersistenceAndMemoryReuse.testDeactivateDuringEvictionAndRebalance - Test has low fail rate in base branch 0,0% and is not flaky {color:#d04437}Queries 2{color} [[tests 0 TIMEOUT , Exit Code |https://ci.ignite.apache.org/viewLog.html?buildId=6915371]] {color:#d04437}Cache (Failover SSL){color} [[tests 1|https://ci.ignite.apache.org/viewLog.html?buildId=6915313]] * IgniteCacheFailoverTestSuiteSsl: IgniteCacheSslStartStopSelfTest.testInvoke - Test has low fail rate in base branch 0,0% and is not flaky {panel} {panel:title=Branch: [pull/10382/head] Base: [master] : No new tests found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel} [TeamCity *--> Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=6915400&buildTypeId=IgniteTests24Java8_RunAll] > .NET: Add support for Native AOT publish > > > Key: IGNITE-18178 > URL: https://issues.apache.org/jira/browse/IGNITE-18178 > Project: Ignite > Issue Type: Improvement > Components: platforms >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: .NET > Fix For: 2.15 > > > .NET 7 provides Native AOT capabilities, where the app is published as native > code without dependencies: > https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ > However, if we try to use Ignite.NET in such a scenario, the following error > is produced (both for thin and thick APIs): > {code} > Unhandled Exception: System.TypeInitializationException: A type initializer > threw an 
exception. To determine which type, inspect the InnerException's > StackTrace property. > ---> System.InvalidOperationException: Unable to get memory copy function > delegate. >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils..cctor() + 0x163 >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0xb9 >--- End of inner exception stack trace --- >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0x153 >at > System.Runtime.CompilerServices.ClassConstructorRunner.CheckStaticClassConstructionReturnNonGCStaticBase(StaticClassConstructionContext*, > IntPtr) + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils.AllocatePool() + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryPool..ctor() + 0x20 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryManager.Pool() + 0x36 >at Apache.Ignite.Core.IgniteConfiguration..ctor(IgniteConfiguration) + 0x32 >at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration) + 0x54 >at Program.$(String[]) + 0x73 >at Apache.Ignite!+0xfd04bb > Aborted > {code} > * Get rid of unnecessary reflection in *PlatformMemoryUtils* to enable native > AOT in user apps. > * Review other cases of reflection usage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18178) .NET: Add support for Native AOT publish
[ https://issues.apache.org/jira/browse/IGNITE-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-18178: Fix Version/s: 2.15 > .NET: Add support for Native AOT publish > > > Key: IGNITE-18178 > URL: https://issues.apache.org/jira/browse/IGNITE-18178 > Project: Ignite > Issue Type: Improvement > Components: platforms >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: .NET > Fix For: 2.15 > > > .NET 7 provides Native AOT capabilities, where the app is published as native > code without dependencies: > https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ > However, if we try to use Ignite.NET in such a scenario, the following error > is produced (both for thin and thick APIs): > {code} > Unhandled Exception: System.TypeInitializationException: A type initializer > threw an exception. To determine which type, inspect the InnerException's > StackTrace property. > ---> System.InvalidOperationException: Unable to get memory copy function > delegate. 
>at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils..cctor() + 0x163 >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0xb9 >--- End of inner exception stack trace --- >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0x153 >at > System.Runtime.CompilerServices.ClassConstructorRunner.CheckStaticClassConstructionReturnNonGCStaticBase(StaticClassConstructionContext*, > IntPtr) + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils.AllocatePool() + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryPool..ctor() + 0x20 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryManager.Pool() + 0x36 >at Apache.Ignite.Core.IgniteConfiguration..ctor(IgniteConfiguration) + 0x32 >at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration) + 0x54 >at Program.$(String[]) + 0x73 >at Apache.Ignite!+0xfd04bb > Aborted > {code} > * Get rid of unnecessary reflection in *PlatformMemoryUtils* to enable native > AOT in user apps. > * Review other cases of reflection usage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-18178) .NET: Add support for Native AOT publish
[ https://issues.apache.org/jira/browse/IGNITE-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn reassigned IGNITE-18178: --- Assignee: Pavel Tupitsyn (was: Igor Sapego) > .NET: Add support for Native AOT publish > > > Key: IGNITE-18178 > URL: https://issues.apache.org/jira/browse/IGNITE-18178 > Project: Ignite > Issue Type: Improvement > Components: platforms >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: .NET > > .NET 7 provides Native AOT capabilities, where the app is published as native > code without dependencies: > https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ > However, if we try to use Ignite.NET in such a scenario, the following error > is produced (both for thin and thick APIs): > {code} > Unhandled Exception: System.TypeInitializationException: A type initializer > threw an exception. To determine which type, inspect the InnerException's > StackTrace property. > ---> System.InvalidOperationException: Unable to get memory copy function > delegate. 
>at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils..cctor() + 0x163 >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0xb9 >--- End of inner exception stack trace --- >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0x153 >at > System.Runtime.CompilerServices.ClassConstructorRunner.CheckStaticClassConstructionReturnNonGCStaticBase(StaticClassConstructionContext*, > IntPtr) + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils.AllocatePool() + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryPool..ctor() + 0x20 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryManager.Pool() + 0x36 >at Apache.Ignite.Core.IgniteConfiguration..ctor(IgniteConfiguration) + 0x32 >at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration) + 0x54 >at Program.$(String[]) + 0x73 >at Apache.Ignite!+0xfd04bb > Aborted > {code} > * Get rid of unnecessary reflection in *PlatformMemoryUtils* to enable native > AOT in user apps. > * Review other cases of reflection usage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-18178) .NET: Add support for Native AOT publish
[ https://issues.apache.org/jira/browse/IGNITE-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn reassigned IGNITE-18178: --- Assignee: Igor Sapego (was: Pavel Tupitsyn) [~isapego] please review. > .NET: Add support for Native AOT publish > > > Key: IGNITE-18178 > URL: https://issues.apache.org/jira/browse/IGNITE-18178 > Project: Ignite > Issue Type: Improvement > Components: platforms >Reporter: Pavel Tupitsyn >Assignee: Igor Sapego >Priority: Major > Labels: .NET > > .NET 7 provides Native AOT capabilities, where the app is published as native > code without dependencies: > https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ > However, if we try to use Ignite.NET in such a scenario, the following error > is produced (both for thin and thick APIs): > {code} > Unhandled Exception: System.TypeInitializationException: A type initializer > threw an exception. To determine which type, inspect the InnerException's > StackTrace property. > ---> System.InvalidOperationException: Unable to get memory copy function > delegate. 
>at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils..cctor() + 0x163 >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0xb9 >--- End of inner exception stack trace --- >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0x153 >at > System.Runtime.CompilerServices.ClassConstructorRunner.CheckStaticClassConstructionReturnNonGCStaticBase(StaticClassConstructionContext*, > IntPtr) + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils.AllocatePool() + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryPool..ctor() + 0x20 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryManager.Pool() + 0x36 >at Apache.Ignite.Core.IgniteConfiguration..ctor(IgniteConfiguration) + 0x32 >at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration) + 0x54 >at Program.$(String[]) + 0x73 >at Apache.Ignite!+0xfd04bb > Aborted > {code} > * Get rid of unnecessary reflection in *PlatformMemoryUtils* to enable native > AOT in user apps. > * Review other cases of reflection usage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18178) .NET: Add support for Native AOT publish
[ https://issues.apache.org/jira/browse/IGNITE-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-18178: Description: .NET 7 provides Native AOT capabilities, where the app is published as native code without dependencies: https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ However, if we try to use Ignite.NET in such a scenario, the following error is produced (both for thin and thick APIs): {code} Unhandled Exception: System.TypeInitializationException: A type initializer threw an exception. To determine which type, inspect the InnerException's StackTrace property. ---> System.InvalidOperationException: Unable to get memory copy function delegate. at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils..cctor() + 0x163 at System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) + 0xb9 --- End of inner exception stack trace --- at System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) + 0x153 at System.Runtime.CompilerServices.ClassConstructorRunner.CheckStaticClassConstructionReturnNonGCStaticBase(StaticClassConstructionContext*, IntPtr) + 0x9 at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils.AllocatePool() + 0x9 at Apache.Ignite.Core.Impl.Memory.PlatformMemoryPool..ctor() + 0x20 at Apache.Ignite.Core.Impl.Memory.PlatformMemoryManager.Pool() + 0x36 at Apache.Ignite.Core.IgniteConfiguration..ctor(IgniteConfiguration) + 0x32 at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration) + 0x54 at Program.$(String[]) + 0x73 at Apache.Ignite!+0xfd04bb Aborted {code} * Get rid of unnecessary reflection in *PlatformMemoryUtils* to enable native AOT in user apps. * Review other cases of reflection usage. 
was: .NET 7 provides Native AOT capabilities, where the app is published as native code without dependencies: https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ However, if we try to use Ignite.NET in such a scenario, the following error is produced (both for thin and thick APIs): {code} Unhandled Exception: System.TypeInitializationException: A type initializer threw an exception. To determine which type, inspect the InnerException's StackTrace property. ---> System.InvalidOperationException: Unable to get memory copy function delegate. at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils..cctor() + 0x163 at System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) + 0xb9 --- End of inner exception stack trace --- at System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) + 0x153 at System.Runtime.CompilerServices.ClassConstructorRunner.CheckStaticClassConstructionReturnNonGCStaticBase(StaticClassConstructionContext*, IntPtr) + 0x9 at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils.AllocatePool() + 0x9 at Apache.Ignite.Core.Impl.Memory.PlatformMemoryPool..ctor() + 0x20 at Apache.Ignite.Core.Impl.Memory.PlatformMemoryManager.Pool() + 0x36 at Apache.Ignite.Core.IgniteConfiguration..ctor(IgniteConfiguration) + 0x32 at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration) + 0x54 at Program.$(String[]) + 0x73 at Apache.Ignite!+0xfd04bb Aborted {code} Get rid of unnecessary reflection in *PlatformMemoryUtils* to enable native AOT in user apps. 
> .NET: Add support for Native AOT publish > > > Key: IGNITE-18178 > URL: https://issues.apache.org/jira/browse/IGNITE-18178 > Project: Ignite > Issue Type: Improvement > Components: platforms >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: .NET > > .NET 7 provides Native AOT capabilities, where the app is published as native > code without dependencies: > https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ > However, if we try to use Ignite.NET in such a scenario, the following error > is produced (both for thin and thick APIs): > {code} > Unhandled Exception: System.TypeInitializationException: A type initializer > threw an exception. To determine which type, inspect the InnerException's > StackTrace property. > ---> System.InvalidOperationException: Unable to get memory copy function > delegate. >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils..cctor() + 0x163 >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0xb9 >--- End of inner exception stack trace --- >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0x153 >at > System
[jira] [Updated] (IGNITE-18178) .NET: Add support for Native AOT publish
[ https://issues.apache.org/jira/browse/IGNITE-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-18178: Release Note: .NET: Added support for Native AOT. > .NET: Add support for Native AOT publish > > > Key: IGNITE-18178 > URL: https://issues.apache.org/jira/browse/IGNITE-18178 > Project: Ignite > Issue Type: Improvement > Components: platforms >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: .NET > > .NET 7 provides Native AOT capabilities, where the app is published as native > code without dependencies: > https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ > However, if we try to use Ignite.NET in such a scenario, the following error > is produced (both for thin and thick APIs): > {code} > Unhandled Exception: System.TypeInitializationException: A type initializer > threw an exception. To determine which type, inspect the InnerException's > StackTrace property. > ---> System.InvalidOperationException: Unable to get memory copy function > delegate. 
>at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils..cctor() + 0x163 >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0xb9 >--- End of inner exception stack trace --- >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0x153 >at > System.Runtime.CompilerServices.ClassConstructorRunner.CheckStaticClassConstructionReturnNonGCStaticBase(StaticClassConstructionContext*, > IntPtr) + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils.AllocatePool() + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryPool..ctor() + 0x20 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryManager.Pool() + 0x36 >at Apache.Ignite.Core.IgniteConfiguration..ctor(IgniteConfiguration) + 0x32 >at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration) + 0x54 >at Program.$(String[]) + 0x73 >at Apache.Ignite!+0xfd04bb > Aborted > {code} > Get rid of unnecessary reflection in *PlatformMemoryUtils* to enable native > AOT in user apps. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18178) .NET: Add support for Native AOT publish
[ https://issues.apache.org/jira/browse/IGNITE-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-18178: Ignite Flags: Release Notes Required (was: Docs Required,Release Notes Required) > .NET: Add support for Native AOT publish > > > Key: IGNITE-18178 > URL: https://issues.apache.org/jira/browse/IGNITE-18178 > Project: Ignite > Issue Type: Improvement > Components: platforms >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: .NET > > .NET 7 provides Native AOT capabilities, where the app is published as native > code without dependencies: > https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ > However, if we try to use Ignite.NET in such a scenario, the following error > is produced (both for thin and thick APIs): > {code} > Unhandled Exception: System.TypeInitializationException: A type initializer > threw an exception. To determine which type, inspect the InnerException's > StackTrace property. > ---> System.InvalidOperationException: Unable to get memory copy function > delegate. 
>at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils..cctor() + 0x163 >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0xb9 >--- End of inner exception stack trace --- >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0x153 >at > System.Runtime.CompilerServices.ClassConstructorRunner.CheckStaticClassConstructionReturnNonGCStaticBase(StaticClassConstructionContext*, > IntPtr) + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils.AllocatePool() + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryPool..ctor() + 0x20 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryManager.Pool() + 0x36 >at Apache.Ignite.Core.IgniteConfiguration..ctor(IgniteConfiguration) + 0x32 >at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration) + 0x54 >at Program.$(String[]) + 0x73 >at Apache.Ignite!+0xfd04bb > Aborted > {code} > Get rid of unnecessary reflection in *PlatformMemoryUtils* to enable native > AOT in user apps. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18178) .NET: Add support for Native AOT publish
[ https://issues.apache.org/jira/browse/IGNITE-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-18178: Description: .NET 7 provides Native AOT capabilities, where the app is published as native code without dependencies: https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ However, if we try to use Ignite.NET in such a scenario, the following error is produced (both for thin and thick APIs): {code} TBD {code} Get rid of unnecessary reflection in *PlatformMemoryUtils* to enable native AOT in user apps. > .NET: Add support for Native AOT publish > > > Key: IGNITE-18178 > URL: https://issues.apache.org/jira/browse/IGNITE-18178 > Project: Ignite > Issue Type: Improvement > Components: platforms >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: .NET > > .NET 7 provides Native AOT capabilities, where the app is published as native > code without dependencies: > https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ > However, if we try to use Ignite.NET in such a scenario, the following error > is produced (both for thin and thick APIs): > {code} > TBD > {code} > Get rid of unnecessary reflection in *PlatformMemoryUtils* to enable native > AOT in user apps. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18178) .NET: Add support for Native AOT publish
[ https://issues.apache.org/jira/browse/IGNITE-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-18178: Description: .NET 7 provides Native AOT capabilities, where the app is published as native code without dependencies: https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ However, if we try to use Ignite.NET in such a scenario, the following error is produced (both for thin and thick APIs): {code} Unhandled Exception: System.TypeInitializationException: A type initializer threw an exception. To determine which type, inspect the InnerException's StackTrace property. ---> System.InvalidOperationException: Unable to get memory copy function delegate. at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils..cctor() + 0x163 at System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) + 0xb9 --- End of inner exception stack trace --- at System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) + 0x153 at System.Runtime.CompilerServices.ClassConstructorRunner.CheckStaticClassConstructionReturnNonGCStaticBase(StaticClassConstructionContext*, IntPtr) + 0x9 at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils.AllocatePool() + 0x9 at Apache.Ignite.Core.Impl.Memory.PlatformMemoryPool..ctor() + 0x20 at Apache.Ignite.Core.Impl.Memory.PlatformMemoryManager.Pool() + 0x36 at Apache.Ignite.Core.IgniteConfiguration..ctor(IgniteConfiguration) + 0x32 at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration) + 0x54 at Program.$(String[]) + 0x73 at Apache.Ignite!+0xfd04bb Aborted {code} Get rid of unnecessary reflection in *PlatformMemoryUtils* to enable native AOT in user apps. 
was: .NET 7 provides Native AOT capabilities, where the app is published as native code without dependencies: https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ However, if we try to use Ignite.NET in such a scenario, the following error is produced (both for thin and thick APIs): {code} TBD {code} Get rid of unnecessary reflection in *PlatformMemoryUtils* to enable native AOT in user apps. > .NET: Add support for Native AOT publish > > > Key: IGNITE-18178 > URL: https://issues.apache.org/jira/browse/IGNITE-18178 > Project: Ignite > Issue Type: Improvement > Components: platforms >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: .NET > > .NET 7 provides Native AOT capabilities, where the app is published as native > code without dependencies: > https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/ > However, if we try to use Ignite.NET in such a scenario, the following error > is produced (both for thin and thick APIs): > {code} > Unhandled Exception: System.TypeInitializationException: A type initializer > threw an exception. To determine which type, inspect the InnerException's > StackTrace property. > ---> System.InvalidOperationException: Unable to get memory copy function > delegate. 
>at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils..cctor() + 0x163 >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0xb9 >--- End of inner exception stack trace --- >at > System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) > + 0x153 >at > System.Runtime.CompilerServices.ClassConstructorRunner.CheckStaticClassConstructionReturnNonGCStaticBase(StaticClassConstructionContext*, > IntPtr) + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryUtils.AllocatePool() + 0x9 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryPool..ctor() + 0x20 >at Apache.Ignite.Core.Impl.Memory.PlatformMemoryManager.Pool() + 0x36 >at Apache.Ignite.Core.IgniteConfiguration..ctor(IgniteConfiguration) + 0x32 >at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration) + 0x54 >at Program.$(String[]) + 0x73 >at Apache.Ignite!+0xfd04bb > Aborted > {code} > Get rid of unnecessary reflection in *PlatformMemoryUtils* to enable native > AOT in user apps. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18178) .NET: Add support for Native AOT publish
[ https://issues.apache.org/jira/browse/IGNITE-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-18178: Summary: .NET: Add support for Native AOT publish (was: .NET: Add support for NativeAot publish) > .NET: Add support for Native AOT publish > > > Key: IGNITE-18178 > URL: https://issues.apache.org/jira/browse/IGNITE-18178 > Project: Ignite > Issue Type: Improvement > Components: platforms >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: .NET > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-18178) .NET: Add support for NativeAot publish
Pavel Tupitsyn created IGNITE-18178: --- Summary: .NET: Add support for NativeAot publish Key: IGNITE-18178 URL: https://issues.apache.org/jira/browse/IGNITE-18178 Project: Ignite Issue Type: Improvement Components: platforms Reporter: Pavel Tupitsyn Assignee: Pavel Tupitsyn -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-18082) .NET: Thin 3.0: LINQ: Joins
[ https://issues.apache.org/jira/browse/IGNITE-18082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634907#comment-17634907 ] Pavel Tupitsyn commented on IGNITE-18082: - Merged to main: 135ad346da6ca489e3ffb44265bd1dd3881ca241 > .NET: Thin 3.0: LINQ: Joins > --- > > Key: IGNITE-18082 > URL: https://issues.apache.org/jira/browse/IGNITE-18082 > Project: Ignite > Issue Type: Improvement > Components: platforms, thin client >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: .NET, LINQ, ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 10m > Remaining Estimate: 0h > > Support queries with joins in the LINQ provider. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (IGNITE-18068) Flaky test GridDrBinaryMarshallerTestSuite2:DrReceiverHubRestartSelfTest.testReceiverHubRestart
[ https://issues.apache.org/jira/browse/IGNITE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yury Gerzhedovich resolved IGNITE-18068. Resolution: Invalid > Flaky test > GridDrBinaryMarshallerTestSuite2:DrReceiverHubRestartSelfTest.testReceiverHubRestart > > > > Key: IGNITE-18068 > URL: https://issues.apache.org/jira/browse/IGNITE-18068 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Yury Gerzhedovich >Priority: Major > > The test testReceiverHubRestart is flaky. Need to fix it. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18068) Flaky test GridDrBinaryMarshallerTestSuite2:DrReceiverHubRestartSelfTest.testReceiverHubRestart
[ https://issues.apache.org/jira/browse/IGNITE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yury Gerzhedovich updated IGNITE-18068: --- Description: The test is flaky. Need to fix it. was: The test [GridDrBinaryMarshallerTestSuite2:DrReceiverHubRestartSelfTest.testReceiverHubRestart|https://ggtc.gridgain.com/test/-7043116888680218123?currentProjectId=GridGain8_Test_EnterpriseUltimateEdition&branch=%3Cdefault%3E] is flaky. Need to fix it. > Flaky test > GridDrBinaryMarshallerTestSuite2:DrReceiverHubRestartSelfTest.testReceiverHubRestart > > > > Key: IGNITE-18068 > URL: https://issues.apache.org/jira/browse/IGNITE-18068 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Yury Gerzhedovich >Priority: Major > > The test is flaky. Need to fix it. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18068) Flaky test GridDrBinaryMarshallerTestSuite2:DrReceiverHubRestartSelfTest.testReceiverHubRestart
[ https://issues.apache.org/jira/browse/IGNITE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yury Gerzhedovich updated IGNITE-18068: --- Description: The test testReceiverHubRestart is flaky. Need to fix it. was: The test is flaky. Need to fix it. > Flaky test > GridDrBinaryMarshallerTestSuite2:DrReceiverHubRestartSelfTest.testReceiverHubRestart > > > > Key: IGNITE-18068 > URL: https://issues.apache.org/jira/browse/IGNITE-18068 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Yury Gerzhedovich >Priority: Major > > The test testReceiverHubRestart is flaky. Need to fix it. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18169) IoomFailureHandlerTest.testIoomErrorPdsHandling fails on 64 core machines
[ https://issues.apache.org/jira/browse/IGNITE-18169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Chugunov updated IGNITE-18169: - Ignite Flags: (was: Docs Required,Release Notes Required) > IoomFailureHandlerTest.testIoomErrorPdsHandling fails on 64 core machines > - > > Key: IGNITE-18169 > URL: https://issues.apache.org/jira/browse/IGNITE-18169 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.14 >Reporter: Semyon Danilov >Assignee: Semyon Danilov >Priority: Major > Fix For: 2.15 > > Time Spent: 10m > Remaining Estimate: 0h > > It seems the minimal fragment size of a data region is 1 megabyte, and the > quantity of fragments is equal to the concurrency level + 1 (checkpoint buffer). > So if we set the region size to 10 megabytes on a machine with 64 cores, the > default concurrency level is 64, and instead of a 10-megabyte region plus a > 10-megabyte checkpoint buffer we end up with a 64-megabyte region. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18169) IoomFailureHandlerTest.testIoomErrorPdsHandling fails on 64 core machines
[ https://issues.apache.org/jira/browse/IGNITE-18169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Chugunov updated IGNITE-18169: - Fix Version/s: 2.15 > IoomFailureHandlerTest.testIoomErrorPdsHandling fails on 64 core machines > - > > Key: IGNITE-18169 > URL: https://issues.apache.org/jira/browse/IGNITE-18169 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.14 >Reporter: Semyon Danilov >Assignee: Semyon Danilov >Priority: Major > Fix For: 2.15 > > Time Spent: 10m > Remaining Estimate: 0h > > It seems the minimal fragment size of a data region is 1 megabyte, and the > quantity of fragments is equal to the concurrency level + 1 (checkpoint buffer). > So if we set the region size to 10 megabytes on a machine with 64 cores, the > default concurrency level is 64, and instead of a 10-megabyte region plus a > 10-megabyte checkpoint buffer we end up with a 64-megabyte region. -- This message was sent by Atlassian Jira (v8.20.10#820010)
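The size inflation described in IGNITE-18169 can be sketched with a small, hypothetical calculation. The helper name and the 1 MiB minimal fragment size below are taken from the issue description, not from Ignite's actual sizing code:

```python
MIN_FRAGMENT_MIB = 1  # minimal fragment size per the issue description (assumption)

def effective_region_mib(configured_mib: int, concurrency_level: int) -> int:
    """One fragment per concurrency slot, each at least MIN_FRAGMENT_MIB large,
    so a small configured region is silently inflated on many-core machines."""
    return max(configured_mib, concurrency_level * MIN_FRAGMENT_MIB)

# 10 MiB configured on a 64-core machine inflates to a 64 MiB region:
print(effective_region_mib(10, 64))  # 64
# On an 8-core machine the configured size is respected:
print(effective_region_mib(10, 8))   # 10
```

Under this model the test's expected out-of-memory condition never triggers on 64-core agents: the region is several times larger than the test assumes.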
[jira] [Comment Edited] (IGNITE-18171) Describe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634809#comment-17634809 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 2:04 PM: - The scenarios we would like to cover are the cartesian product of # Node role combinations in the grid. # Scenarios: ** groups start in different order ** groups stop in different order ** group recovery after restart. # User actions that we want to check at each scenario step. ** RO transaction operation. This requires at least one follower. ** RW transaction operation. This requires a quorum (leader). ** DDL operation. E.g. create a table in an available distribution zone as well as in an unavailable one. This requires a Metastorage quorum and maybe a distribution zone leader. ** Stop an existing node. Changing the logical topology requires a CMG quorum. ** Start a new (non-initialized) node. CMG or CMG+MetaStore quorum? (Starting an initialized node is covered by the restart scenario.) ** Start an initialized node with a different cluster tag. Should never be accepted for join. ** -Some distributed operation that requires no quorum, e.g. metrics enable/disable?- NB: Some combinations may make no sense and might be excluded. E.g. a DDL operation on some steps of the "grid startup" scenarios, when the CMG is not available yet, because there is no entry point (e.g. a node instance) to start the operation. NB: DNG unavailability implies the expectations for transactional operations over persistent and in-memory tables might be different. was (Author: amashenkov): The scenarios we would like to cover are the cartesian product of # Node role combinations in the grid. # Scenarios: ** groups start in different order ** groups stop in different order ** group recovery after restart. # User actions that we want to check at each scenario step. ** RO transaction operation. This requires at least one follower. ** RW transaction operation. This requires a quorum (leader). ** DDL operation. E.g.
create a table in an available distribution zone as well as in an unavailable one. This requires a Metastorage quorum and maybe a distribution zone leader. ** Stop an existing node. Changing the logical topology requires a CMG quorum. ** Start a new (non-initialized) node. CMG or CMG+MetaStore quorum? (Starting an initialized node is covered by the restart scenario.) ** -Some distributed operation that requires no quorum, e.g. metrics enable/disable?- NB: Some combinations may make no sense and might be excluded. E.g. a DDL operation on some steps of the "grid startup" scenarios, when the CMG is not available yet, because there is no entry point (e.g. a node instance) to start the operation. NB: DNG unavailability implies the expectations for transactional operations over persistent and in-memory tables might be different. > Describe nodes start/stop scenarios > -- > > Key: IGNITE-18171 > URL: https://issues.apache.org/jira/browse/IGNITE-18171 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Andrey Mashenkov >Assignee: Andrey Mashenkov >Priority: Major > Labels: ignite-3 > > h2. Definitions. > We can distinguish the following cluster node groups; each node may be part > of one or more groups. > * Cluster Management Group (CMG), which controls the new node join process. > * MetaStorage group (MSG), which hosts the meta storage. > * Data node group (DNG), which just hosts table partitions. > The components (CMG, meta storage, table components) depend on each > other, but may reside on different (even disjoint) node subsets. So, some > components may become temporarily unavailable, and dependent components must be > aware of such issues and handle them (wait, retry, throw an exception or > whatever) in the expected way, which has to be documented as well. > [See IEP for > details|https://cwiki.apache.org/confluence/display/IGNITE/IEP-77%3A+Node+Join+Protocol+and+Initialization+for+Ignite+3] > h2. Motivation.
> As of now, the correct way to start the grid (after it was stopped) is: start > CMG nodes, then Meta Storage nodes, then Data nodes, and stop them in the > reverse order. Other scenarios are not tested and may lead to unexpected > behaviour. > Let's describe all possible scenarios and the expected behaviour for each of them, > and extend test coverage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
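The cartesian-product test matrix sketched in the comment above can be enumerated mechanically. The role, scenario, and action names below are illustrative placeholders drawn from the comment, not Ignite's actual test fixtures:

```python
from itertools import product

# Illustrative axes of the matrix: node role combinations x scenarios x user actions.
roles = ["CMG", "MSG", "DNG"]
scenarios = ["start order", "stop order", "restart recovery"]
actions = ["RO tx", "RW tx", "DDL", "stop node", "join new node"]

# Full matrix before pruning combinations that make no sense
# (e.g. a DDL operation before the CMG is available).
matrix = list(product(roles, scenarios, actions))
print(len(matrix))  # 3 * 3 * 5 = 45
```

Enumerating first and pruning afterwards keeps the excluded combinations explicit, which matches the comment's note that some cells of the matrix are intentionally skipped.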
[jira] [Comment Edited] (IGNITE-18171) Describe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634809#comment-17634809 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 2:01 PM: - The scenarios we would like to cover are the cartesian product of # Nodes' role combinations in the grid. # Scenarios: ** groups start in different order ** groups stop in different order ** group recovery after restart. # User actions that we want to check at each scenario step. ** RO transaction operation. This requires at least one follower. ** RW transaction operation. This requires a quorum (leader). ** DDL operation, e.g. create a table in an available distribution zone as well as in an unavailable one. This requires Metastorage quorum and maybe the distribution zone leader. ** Stop an existing node. Changing the logical topology requires CMG quorum. ** Start a new (non-initialized) node. CMG or CMG+MetaStore quorum? (Starting an initialized node is covered by the restart scenario.) ** -Some distributed operation that requires no quorum, e.g. metrics enable/disable?- NB: Some combinations may not make sense and might be excluded, e.g. a DDL operation on some steps of the "grid startup" scenarios, when CMG is not available yet, because there is no entry point (e.g. a node instance) to start the operation. NB: DNG unavailability implies that the expectations for transactional operations over persistent and in-memory tables might be different.
was (Author: amashenkov): The scenarios we would like to cover are the cartesian product of # Nodes' role combinations in the grid. # Scenarios: group start/stop in different order, plus recovery after restart. # User actions that we want to check at each scenario step. ## RO transaction operation. This requires at least one follower. ## RW transaction operation. This requires a quorum (leader). ## DDL operation, e.g. create a table in an available distribution zone as well as in an unavailable one. This requires Metastorage quorum and maybe the distribution zone leader. ## Stop an existing node. Changing the logical topology requires CMG quorum. ## Start a new (non-initialized) node. CMG or CMG+MetaStore quorum? (Starting an initialized node is covered by the restart scenario.) ## -Some distributed operation that requires no quorum, e.g. metrics enable/disable?- NB: Some combinations may not make sense and might be excluded, e.g. a DDL operation on some steps of the "grid startup" scenarios, when CMG is not available yet, because there is no entry point (e.g. a node instance) to start the operation. NB: DNG unavailability implies that the expectations for transactional operations over persistent and in-memory tables might be different.
> Descibe nodes start/stop scenarios > -- > > Key: IGNITE-18171 > URL: https://issues.apache.org/jira/browse/IGNITE-18171 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Andrey Mashenkov >Assignee: Andrey Mashenkov >Priority: Major > Labels: ignite-3 > > h2. Definitions. > We can distinguish the following cluster node groups; each node may be part of one or more groups. > * Cluster Management Group (CMG), which controls the process of new nodes joining. > * MetaStorage group (MSG), which hosts the meta storage. > * Data node group (DNG), which just hosts table partitions. > The components (CMG, meta storage, table components) depend on each other, but may reside on different (even disjoint) node subsets. So, some components may become temporarily unavailable, and dependent components must be aware of such issues and handle them (wait, retry, throw an exception, or whatever) in the expected way, which also has to be documented. > [See IEP for details|https://cwiki.apache.org/confluence/display/IGNITE/IEP-77%3A+Node+Join+Protocol+and+Initialization+for+Ignite+3] > h2. Motivation. > As of now, the correct way to start the grid (after it was stopped) is: start CMG nodes, then Meta Storage nodes, then Data nodes, and stop them in the reverse order. Other scenarios are not tested and may lead to unexpected behaviour. > Let's describe all possible scenarios, the expected behaviour for each of them, and extend test coverage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
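The "cartesian product" test matrix described in the comment above can be sketched in a few lines of Python. The dimension labels below are illustrative placeholders, not real Ignite identifiers, and the filtering predicate is only a stand-in for the "some combinations may not make sense" rule:

```python
from itertools import product

# Hypothetical labels for the three test dimensions; a real suite
# would enumerate concrete node-role assignments per grid.
role_combos = ["B=[CMG]", "E=[CMG,MSG]", "H=[CMG,MSG,DNG]"]
scenarios = ["start-order", "stop-order", "restart-recovery"]
actions = ["ro-tx", "rw-tx", "ddl", "stop-node", "start-new-node"]

# Full cartesian product of the three dimensions.
test_matrix = list(product(role_combos, scenarios, actions))

def is_meaningful(case):
    """Placeholder rule: a DDL needs a Metastorage member in the grid."""
    roles, scenario, action = case
    return not (action == "ddl" and "MSG" not in roles)

# Combinations that make no sense are excluded up front.
meaningful = [case for case in test_matrix if is_meaningful(case)]
print(len(test_matrix), len(meaningful))
```

Enumerating the matrix this way makes it easy to review which combinations were deliberately excluded rather than silently skipped.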
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634809#comment-17634809 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 1:58 PM: - The scenarios we would like to cover are the cartesian product of # Nodes' role combinations in the grid. # Scenarios: group start/stop in different order, plus recovery after restart. # User actions that we want to check at each scenario step. ## RO transaction operation. This requires at least one follower. ## RW transaction operation. This requires a quorum (leader). ## DDL operation, e.g. create a table in an available distribution zone as well as in an unavailable one. This requires Metastorage quorum and maybe the distribution zone leader. ## Stop an existing node. Changing the logical topology requires CMG quorum. ## Start a new (non-initialized) node. CMG or CMG+MetaStore quorum? (Starting an initialized node is covered by the restart scenario.) ## -Some distributed operation that requires no quorum, e.g. metrics enable/disable?- NB: Some combinations may not make sense and might be excluded, e.g. a DDL operation on some steps of the "grid startup" scenarios, when CMG is not available yet, because there is no entry point (e.g. a node instance) to start the operation. NB: DNG unavailability implies different expectations for transactional operations over persistent and in-memory tables.
was (Author: amashenkov): The scenarios we would like to cover are the cartesian product of # Nodes' role combinations in the grid. # Scenarios: group start/stop in different order, plus recovery after restart. # User actions that we want to check at each scenario step. ## RO transaction operation. This requires at least one follower. ## RW transaction operation. This requires a quorum (leader). ## DDL operation, e.g. create a table in an available distribution zone as well as in an unavailable one. This requires Metastorage quorum and maybe the distribution zone leader. ## Stop an existing node. Changing the logical topology requires CMG quorum. ## Start a new (non-initialized) node. CMG or CMG+MetaStore quorum? (Starting an initialized node is covered by the restart scenario.) ## -Some distributed operation that requires no quorum, e.g. metrics enable/disable?- NB: Some combinations may not make sense and might be excluded, e.g. a DDL operation on some steps of the "grid startup" scenarios, when CMG is not available yet, because there is no entry point (e.g. a node instance) to start the operation.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634809#comment-17634809 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 1:58 PM: - The scenarios we would like to cover are the cartesian product of # Nodes' role combinations in the grid. # Scenarios: group start/stop in different order, plus recovery after restart. # User actions that we want to check at each scenario step. ## RO transaction operation. This requires at least one follower. ## RW transaction operation. This requires a quorum (leader). ## DDL operation, e.g. create a table in an available distribution zone as well as in an unavailable one. This requires Metastorage quorum and maybe the distribution zone leader. ## Stop an existing node. Changing the logical topology requires CMG quorum. ## Start a new (non-initialized) node. CMG or CMG+MetaStore quorum? (Starting an initialized node is covered by the restart scenario.) ## -Some distributed operation that requires no quorum, e.g. metrics enable/disable?- NB: Some combinations may not make sense and might be excluded, e.g. a DDL operation on some steps of the "grid startup" scenarios, when CMG is not available yet, because there is no entry point (e.g. a node instance) to start the operation. NB: DNG unavailability implies that the expectations for transactional operations over persistent and in-memory tables might be different.
was (Author: amashenkov): The scenarios we would like to cover are the cartesian product of # Nodes' role combinations in the grid. # Scenarios: group start/stop in different order, plus recovery after restart. # User actions that we want to check at each scenario step. ## RO transaction operation. This requires at least one follower. ## RW transaction operation. This requires a quorum (leader). ## DDL operation, e.g. create a table in an available distribution zone as well as in an unavailable one. This requires Metastorage quorum and maybe the distribution zone leader. ## Stop an existing node. Changing the logical topology requires CMG quorum. ## Start a new (non-initialized) node. CMG or CMG+MetaStore quorum? (Starting an initialized node is covered by the restart scenario.) ## -Some distributed operation that requires no quorum, e.g. metrics enable/disable?- NB: Some combinations may not make sense and might be excluded, e.g. a DDL operation on some steps of the "grid startup" scenarios, when CMG is not available yet, because there is no entry point (e.g. a node instance) to start the operation. NB: DNG unavailability implies different expectations for transactional operations over persistent and in-memory tables.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634803#comment-17634803 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 1:53 PM: - As a node may participate in several groups, let's consider the following possible roles and their combinations. Node roles: * *CMG* (Cluster Management Group) is a subset of cluster nodes hosting a Raft group. The CMG leader is responsible for orchestrating the node join process. * *MSG* (Meta Storage Group) is a subset of cluster nodes hosting a Raft group responsible for storing a master copy of cluster metadata. * *DNG* (Data Node Group) is a subset of cluster nodes hosting a Raft group responsible for storing a master copy of user tables. Node types: A = [] B = [CMG] C = [MSG] D = [DNG] E = [CMG, MSG] F = [CMG, DNG] G = [MSG, DNG] H = [CMG, MSG, DNG]
was (Author: amashenkov): As a node may participate in several groups, let's consider the following possible roles and their combinations. Node roles: * *CMG* (Cluster Management Group) is a subset of cluster nodes hosting a Raft group. The CMG leader is responsible for orchestrating the node join process. * *MSG* (Meta Storage Group) is a subset of cluster nodes hosting a Raft group responsible for storing a master copy of cluster metadata. * *DNG* (Data Node Group) ** is a subset of cluster nodes hosting a Raft group responsible for storing a master copy of user tables. Node types: A = [] B = [CMG] C = [MSG] D = [DNG] E = [CMG, MSG] F = [CMG, DNG] G = [MSG, DNG] H = [CMG, MSG, DNG]
-- This message was sent by Atlassian Jira (v8.20.10#820010)
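The A–H node types listed in the comment above are exactly the power set of the three roles, which can be generated rather than written by hand. This is a sketch of the enumeration only; the role names come from the issue, the labels A–H from the comment:

```python
from itertools import combinations

roles = ["CMG", "MSG", "DNG"]

# Every subset of the three roles is a possible node type; iterating by
# subset size reproduces the A..H order from the comment.
node_types = [
    list(combo)
    for size in range(len(roles) + 1)
    for combo in combinations(roles, size)
]

for label, combo in zip("ABCDEFGH", node_types):
    print(label, "=", combo)
```

Generating the combinations this way also guards against the matrix silently drifting if a fourth group is ever introduced.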
[jira] [Updated] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Mashenkov updated IGNITE-18171: -- Description: h2. Definitions. We can distinguish the following cluster node groups; each node may be part of one or more groups. * Cluster Management Group (CMG), which controls the process of new nodes joining. * MetaStorage group (MSG), which hosts the meta storage. * Data node group (DNG), which just hosts table partitions. The components (CMG, meta storage, table components) depend on each other, but may reside on different (even disjoint) node subsets. So, some components may become temporarily unavailable, and dependent components must be aware of such issues and handle them (wait, retry, throw an exception, or whatever) in the expected way, which also has to be documented. [See IEP for details|https://cwiki.apache.org/confluence/display/IGNITE/IEP-77%3A+Node+Join+Protocol+and+Initialization+for+Ignite+3] h2. Motivation. As of now, the correct way to start the grid (after it was stopped) is: start CMG nodes, then Meta Storage nodes, then Data nodes, and stop them in the reverse order. Other scenarios are not tested and may lead to unexpected behaviour. Let's describe all possible scenarios, the expected behaviour for each of them, and extend test coverage.
was: h2. Definitions. We can distinguish the following cluster node groups; each node may be part of one or more groups. 1. Cluster Management Group (CMG), which controls the process of new nodes joining. 2. MetaStorage group (MSG), which hosts the meta storage. 3. Data node group (DNG), which just hosts table partitions. The components (CMG, meta storage, table components) depend on each other, but may reside on different (even disjoint) node subsets. So, some components may become temporarily unavailable, and dependent components must be aware of such issues and handle them (wait, retry, throw an exception, or whatever) in the expected way, which also has to be documented. [See IEP for details|https://cwiki.apache.org/confluence/display/IGNITE/IEP-77%3A+Node+Join+Protocol+and+Initialization+for+Ignite+3] h2. Motivation. As of now, the correct way to start the grid (after it was stopped) is: start CMG nodes, then Meta Storage nodes, then Data nodes, and stop them in the reverse order. Other scenarios are not tested and may lead to unexpected behaviour. Let's describe all possible scenarios, the expected behaviour for each of them, and extend test coverage.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634812#comment-17634812 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 1:50 PM: - Restart scenarios check recovery correctness after group unavailability: # No CMG # No MSG # No DNG
was (Author: amashenkov): Restart scenarios check recovery correctness after group unavailability: # No CMG # No MSG # No DNG
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634803#comment-17634803 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 1:49 PM: - As a node may participate in several groups, let's consider the following possible roles and their combinations. Node roles: * *CMG* (Cluster Management Group) is a subset of cluster nodes hosting a Raft group. The CMG leader is responsible for orchestrating the node join process. * *MSG* (Meta Storage Group) is a subset of cluster nodes hosting a Raft group responsible for storing a master copy of cluster metadata. * *DNG* (Data Node Group) ** is a subset of cluster nodes hosting a Raft group responsible for storing a master copy of user tables. Node types: A = [] B = [CMG] C = [MSG] D = [DNG] E = [CMG, MSG] F = [CMG, DNG] G = [MSG, DNG] H = [CMG, MSG, DNG]
was (Author: amashenkov): As a node may participate in several groups, let's consider the following possible roles and their combinations. Node roles: * *CMG* *is a subset of cluster nodes hosting a Raft group.* * The *CMG* leader is responsible for orchestrating the node join process. * *MSG* is a subset of cluster nodes hosting a Raft group responsible for storing a master copy of cluster metadata. * *DNG* is a subset of cluster nodes hosting a Raft group responsible for storing a master copy of user tables. Node types: A = [] B = [CMG] C = [MSG] D = [DNG] E = [CMG, MSG] F = [CMG, DNG] G = [MSG, DNG] H = [CMG, MSG, DNG]
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-17793) Historical rebalance must use HWM instead of LWM to seek the proper checkpoint to avoid the data loss
[ https://issues.apache.org/jira/browse/IGNITE-17793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Steshin reassigned IGNITE-17793: - Assignee: Vladimir Steshin
> Historical rebalance must use HWM instead of LWM to seek the proper > checkpoint to avoid the data loss > - > > Key: IGNITE-17793 > URL: https://issues.apache.org/jira/browse/IGNITE-17793 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Vladimir Steshin >Priority: Major > Labels: iep-31, ise > Attachments: HistoricalRebalanceCheckpointTest.java > > > Currently, historical rebalance at {{CheckpointHistory#searchEarliestWalPointer}} seeks the newest checkpoint with a counter less than the lowest entry that has to be rebalanced. Unfortunately, we may have more than one checkpoint with the same counter, and it's impossible to use the newest one as a rebalance start point. > For example, we have a partition with LWM=100, some gaps, and HWM=200. The checkpoint will have counter == 100. > Then we may close some gaps, excluding 101 (to keep LWM == 100). And again, the checkpoint will have counter == 100. > The newest checkpoint (marked with counter 100) will not contain all committed entries with counter > 100. > Then let's close the rest of the gaps to make historical rebalance possible. And after the rebalance finishes, we'll see a warning "Some partition entries were missed during historical rebalance" and an inconsistent cluster state. > See the reproducer at [^HistoricalRebalanceCheckpointTest.java] > A possible solution is to use HWM instead of LWM during the search. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634811#comment-17634811 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 1:45 PM: - Startup scenarios: # CMG -> MSG -> DNG # CMG -> DNG -> MSG # MSG -> CMG -> DNG # MSG -> DNG -> CMG # DNG -> CMG -> MSG # DNG -> MSG -> CMG NB: Groups may be disjoint, a subset of another group, or intersecting. These are different cases to be checked. Stop scenarios are the same; let's check that the service level degrades in the expected way. NB: Stopping a quorum is enough to achieve group unavailability. The stop order (leader or follower) doesn't matter, as it is covered by other tests.
was (Author: amashenkov): Startup scenarios: # CMG -> MSG -> DNG # CMG -> DNG -> MSG # MSG -> CMG -> DNG # MSG -> DNG -> CMG # DNG -> CMG -> MSG # DNG -> MSG -> CMG NB: Groups may be disjoint, a subset of another group, or intersecting. These are different cases to be checked. Stop scenarios are the same; let's check that the service level degrades in the expected way. NB: Stopping a quorum is enough to achieve group unavailability.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
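The six startup orders enumerated in the comment above are simply the permutations of the three groups, with each stop order being the reverse of a start order. A minimal sketch (group names from the issue; the arrow formatting is just for display):

```python
from itertools import permutations

groups = ["CMG", "MSG", "DNG"]

# All orderings of the three groups; itertools emits them in the same
# order as the comment's list.
start_orders = [" -> ".join(p) for p in permutations(groups)]

# Stop scenarios mirror the start scenarios in reverse order.
stop_orders = [" -> ".join(reversed(p)) for p in permutations(groups)]

for order in start_orders:
    print(order)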
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634812#comment-17634812 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 1:44 PM: - Restart scenarios check recovery correctness after group unavailability: # No CMG # No MSG # No DNG
was (Author: amashenkov): Restart scenarios check recovery correctness after group unavailability: # No CMG # No MSG # No DNG Do we want to check cases when a new node joins while CMG or another group is unavailable?
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634809#comment-17634809 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 1:43 PM: - The scenarios we would like to cover are the cartesian product of # Nodes' role combinations in the grid. # Scenarios: group start/stop in different order, plus recovery after restart. # User actions that we want to check at each scenario step. ## RO transaction operation. This requires at least one follower. ## RW transaction operation. This requires a quorum (leader). ## DDL operation, e.g. create a table in an available distribution zone as well as in an unavailable one. This requires Metastorage quorum and maybe the distribution zone leader. ## Stop an existing node. Changing the logical topology requires CMG quorum. ## Start a new (non-initialized) node. CMG or CMG+MetaStore quorum? (Starting an initialized node is covered by the restart scenario.) ## -Some distributed operation that requires no quorum, e.g. metrics enable/disable?- NB: Some combinations may not make sense and might be excluded, e.g. a DDL operation on some steps of the "grid startup" scenarios, when CMG is not available yet, because there is no entry point (e.g. a node instance) to start the operation.
was (Author: amashenkov): The scenarios we would like to cover are the cartesian product of # Nodes' role combinations. # Group start/stop/restart order. # User actions. ## RO transaction operation. ## RW transaction operation. ## DDL operation, e.g. create a table in an available distribution zone as well as in an unavailable one. ## Stop an existing node for logical topology change validation. ## Start a new (non-initialized) node. (Starting an initialized node is covered by the restart scenario.) NB: Some combinations may not make sense and might be excluded, e.g. a DDL operation on some steps of the "grid startup" scenarios, when CMG is not available yet, because there is no entry point (e.g. a node instance) to start the operation.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634809#comment-17634809 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 1:32 PM: - The scenarios we would like to cover are the Cartesian product of # Nodes' role combinations. # Group start/stop/restart order. # User actions. ## RO transaction operation. ## RW transaction operation. ## DDL operation. E.g. create a table in an available distribution zone as well as in an unavailable one. ## Stop an existing node to validate a logical topology change. ## Start a new (non-initialized) node. (Starting an initialized node is covered by the restart scenario.) NB: Some combinations may not make sense and might be excluded. E.g. a DDL operation on some steps of the "grid startup" scenarios, when the CMG is not available yet, because there is no entry point (e.g. a node instance) to start the operation. was (Author: amashenkov): The scenarios we would like to cover are the Cartesian product of # Nodes' role combinations. # Group start/stop/restart order. # User actions. ## RO transaction operation. ## RW transaction operation. ## DDL operation. E.g. create a table in an available distribution zone as well as in an unavailable one. ## Stop an existing node to validate a logical topology change. ## Start a new (non-initialized) node. (Starting an initialized node is covered by the restart scenario.) > Descibe nodes start/stop scenarios > -- > > Key: IGNITE-18171 > URL: https://issues.apache.org/jira/browse/IGNITE-18171 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Andrey Mashenkov >Assignee: Andrey Mashenkov >Priority: Major > Labels: ignite-3 > > h2. Definitions. > We can distinguish the following cluster node groups; each node may be part of one or more groups. > 1. Cluster Management Group (CMG), which controls the new node join process. > 2. MetaStorage group (MSG), which hosts the meta storage. > 3. Data node group (DNG), which just hosts table partitions. > The components (CMG, meta storage, table components) depend on each other, but may reside on different (even disjoint) node subsets. So, some components may become temporarily unavailable, and dependent components must be aware of such issues and handle them (wait, retry, throw an exception, or whatever) in the expected way, which also has to be documented. > [See IEP for details|https://cwiki.apache.org/confluence/display/IGNITE/IEP-77%3A+Node+Join+Protocol+and+Initialization+for+Ignite+3] > h2. Motivation. > As of now, the correct way to start the grid (after it was stopped) is: start CMG nodes, then Meta Storage nodes, then Data nodes, and stop them in the reverse order. Other scenarios are not tested and may lead to unexpected behaviour. > Let's describe all possible scenarios and the expected behaviour for each of them, and extend test coverage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
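The comment above defines the test matrix as a Cartesian product of role combinations, start/stop/restart order, and user actions. As a rough illustration of the size of that matrix before pruning the nonsensical combinations, one could enumerate it like this (the dimension values are taken from the comment; none of these names are real Ignite 3 APIs):

```python
from itertools import product

# Hypothetical scenario dimensions, paraphrased from the comment above.
role_combos = ["CMG", "MSG", "DNG", "CMG+MSG", "CMG+DNG", "MSG+DNG", "CMG+MSG+DNG"]
orders = ["start", "stop", "restart"]
actions = ["RO tx", "RW tx", "DDL", "stop existing node", "start new node"]

# Cartesian product of the three dimensions.
scenarios = list(product(role_combos, orders, actions))
print(len(scenarios))  # 7 * 3 * 5 = 105 combinations before pruning
```

This also makes the NB concrete: some tuples, e.g. a DDL action during a startup step where the CMG is not yet available, would be filtered out of the final test plan.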
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634811#comment-17634811 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 1:32 PM: - Startup scenarios: # CMG -> MSG -> DNG # CMG -> DNG -> MSG # MSG -> CMG -> DNG # MSG -> DNG -> CMG # DNG -> CMG -> MSG # DNG -> MSG -> CMG NB: Groups may be disjoint, a subset of another group, or intersecting. These are different cases to be checked. Stop scenarios are the same; let's check that the service level degrades in the expected way. NB: Stopping a quorum will be enough to make a group unavailable. was (Author: amashenkov): Startup scenarios: # CMG -> MSG -> DNG # CMG -> DNG -> MSG # MSG -> CMG -> DNG # MSG -> DNG -> CMG # DNG -> CMG -> MSG # DNG -> MSG -> CMG NB: Groups may be disjoint, a subset of another group, or intersecting. These are different cases to be checked. TBD: describe the expected grid state for each scenario, and the allowed user operations? Stop scenarios are the same; let's check that the service level degrades in the expected way. NB: Stopping a quorum will be enough to make a group unavailable.
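The six startup orders listed in the comment above are exactly the permutations of the three groups; a quick sketch (illustrative only, not Ignite code) to generate and check them:

```python
from itertools import permutations

# The three node groups defined in the issue description.
groups = ["CMG", "MSG", "DNG"]

# All possible startup orders; stop scenarios mirror the same list.
startup_orders = [" -> ".join(p) for p in permutations(groups)]
print(startup_orders)
```

Per the comment, the same six orders double as stop scenarios, and stopping a quorum of a group is already enough to make that group unavailable.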
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634809#comment-17634809 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 1:28 PM: - The scenarios we would like to cover are the Cartesian product of # Nodes' role combinations. # Group start/stop/restart order. # User actions. ## RO transaction operation. ## RW transaction operation. ## DDL operation. E.g. create a table in an available distribution zone as well as in an unavailable one. ## Stop an existing node to validate a logical topology change. ## Start a new (non-initialized) node. (Starting an initialized node is covered by the restart scenario.) was (Author: amashenkov): The scenarios we would like to cover are the Cartesian product of # Nodes' role combinations. # Group start/stop/restart order. # User actions: RO and RW operations, DDL operations, stop an existing node to validate a logical topology change, start a new (non-initialized) node. Starting an initialized node is covered by the restart scenario.
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634809#comment-17634809 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 1:24 PM: - The scenarios we would like to cover are the Cartesian product of # Nodes' role combinations. # Group start/stop/restart order. # User actions: RO and RW operations, DDL operations, stop an existing node to validate a logical topology change, start a new (non-initialized) node. Starting an initialized node is covered by the restart scenario. was (Author: amashenkov): The scenarios we would like to cover are the Cartesian product of 1. Initialized/non-initialized nodes. (Check non-initialized for the (re)start scenario only?) 2. Nodes' role combinations. 3. Group start/stop/restart order. 4. User actions???
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634811#comment-17634811 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 12:51 PM: -- Startup scenarios: # CMG -> MSG -> DNG # CMG -> DNG -> MSG # MSG -> CMG -> DNG # MSG -> DNG -> CMG # DNG -> CMG -> MSG # DNG -> MSG -> CMG NB: Groups may be disjoint, a subset of another group, or intersecting. These are different cases to be checked. TBD: describe the expected grid state for each scenario, and the allowed user operations? Stop scenarios are the same; let's check that the service level degrades in the expected way. NB: Stopping a quorum will be enough to make a group unavailable. was (Author: amashenkov): Startup scenarios: # CMG -> MSG -> DNG # CMG -> DNG -> MSG # MSG -> CMG -> DNG # MSG -> DNG -> CMG # DNG -> CMG -> MSG # DNG -> MSG -> CMG TBD: describe the expected grid state for each scenario, and the allowed user operations? Stop scenarios are the same; let's check that the service level degrades in the expected way. Stopping a quorum will be enough to make a group unavailable.
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634811#comment-17634811 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 12:47 PM: -- Startup scenarios: # CMG -> MSG -> DNG # CMG -> DNG -> MSG # MSG -> CMG -> DNG # MSG -> DNG -> CMG # DNG -> CMG -> MSG # DNG -> MSG -> CMG TBD: describe the expected grid state for each scenario, and the allowed user operations? Stop scenarios are the same; let's check that the service level degrades in the expected way. Stopping a quorum will be enough to make a group unavailable. was (Author: amashenkov): Startup scenarios: # CMG -> MSG -> DNG # CMG -> DNG -> MSG # MSG -> CMG -> DNG # MSG -> DNG -> CMG # DNG -> CMG -> MSG # DNG -> MSG -> CMG TBD: describe the expected grid state for each scenario, and the allowed user operations? Stop scenarios are the same; let's check that the service level degrades in the expected way.
[jira] [Created] (IGNITE-18177) Clarify DEVNOTES.md in modules/platforms/cpp
Ivan Artukhov created IGNITE-18177: -- Summary: Clarify DEVNOTES.md in modules/platforms/cpp Key: IGNITE-18177 URL: https://issues.apache.org/jira/browse/IGNITE-18177 Project: Ignite Issue Type: Improvement Components: platforms Affects Versions: 3.0.0-beta1 Reporter: Ivan Artukhov I've checked the C++ module in Apache Ignite 3 beta1 RC2 (https://dist.apache.org/repos/dist/dev/ignite/3.0.0-beta1-rc2/apache-ignite-3.0.0-beta1-cpp.zip). I found no issues with building the module and running the tests, but DEVNOTES.md needs the following clarifications in the "Build Java" section: * it is not clear whether a user needs to build the Java part of the project to run the C++ tests; * `mvn` is deprecated in favor of `gradle`, so `DEVNOTES.md` should not mention `mvn`.
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634812#comment-17634812 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 12:29 PM: -- Restart scenarios check recovery correctness after group unavailability: # No CMG # No MSG # No DNG Do we want to check the case when a new node joins while the CMG or another group is unavailable? was (Author: amashenkov): Restart scenarios check recovery correctness after service unavailability: # No CMG # No MSG # No DNG
[jira] [Commented] (IGNITE-18169) IoomFailureHandlerTest.testIoomErrorPdsHandling fails on 64 core machines
[ https://issues.apache.org/jira/browse/IGNITE-18169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634813#comment-17634813 ] Ignite TC Bot commented on IGNITE-18169: {panel:title=Branch: [pull/10381/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} {panel:title=Branch: [pull/10381/head] Base: [master] : No new tests found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel} [TeamCity *--> Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=6913455&buildTypeId=IgniteTests24Java8_RunAll] > IoomFailureHandlerTest.testIoomErrorPdsHandling fails on 64 core machines > - > > Key: IGNITE-18169 > URL: https://issues.apache.org/jira/browse/IGNITE-18169 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.14 >Reporter: Semyon Danilov >Assignee: Semyon Danilov >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > It seems the minimal fragment size of the data region is 1 megabyte, and the > number of fragments is equal to the concurrency level + 1 (for the checkpoint > buffer). So if we set the region size to 10 megabytes on a 64-core machine, > the default concurrency level is 64, and instead of a 10-megabyte region plus > a 10-megabyte checkpoint buffer we will have a 64-megabyte region.
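The arithmetic behind the bug report above can be sketched as follows. This is illustrative only, with a hypothetical helper rather than Ignite's actual region-sizing code: with a 1 MB minimum fragment size, 64 concurrency-level fragments already force a 64 MB region regardless of the configured 10 MB.

```python
MB = 1024 * 1024

# Hypothetical model of the sizing described in the comment: the region
# cannot be smaller than one minimum-size fragment per concurrency slot.
def min_effective_region(concurrency_level, min_fragment=MB):
    return concurrency_level * min_fragment

configured = 10 * MB
effective = max(configured, min_effective_region(64))
print(effective // MB)  # 64, not the configured 10
```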
[jira] [Commented] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634812#comment-17634812 ] Andrey Mashenkov commented on IGNITE-18171: --- Restart scenarios check recovery correctness after service unavailability: # No CMG # No MSG # No DNG
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634811#comment-17634811 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 12:25 PM: -- Startup scenarios: # CMG -> MSG -> DNG # CMG -> DNG -> MSG # MSG -> CMG -> DNG # MSG -> DNG -> CMG # DNG -> CMG -> MSG # DNG -> MSG -> CMG TBD: describe the expected grid state for each scenario, and the allowed user operations? Stop scenarios are the same; let's check that the service level degrades in the expected way. was (Author: amashenkov): Startup scenario: # CMG -> MSG -> DNG # CMG -> DNG -> MSG # MSG -> CMG -> DNG # MSG -> DNG -> CMG # DNG -> CMG -> MSG # DNG -> MSG -> CMG TBD: describe the expected grid state for each scenario, and the allowed user operations?
[jira] [Commented] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634811#comment-17634811 ] Andrey Mashenkov commented on IGNITE-18171: --- Startup scenarios: # CMG -> MSG -> DNG # CMG -> DNG -> MSG # MSG -> CMG -> DNG # MSG -> DNG -> CMG # DNG -> CMG -> MSG # DNG -> MSG -> CMG TBD: describe the expected grid state for each scenario, and the allowed user operations?
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634803#comment-17634803 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 12:16 PM: -- As a node may participate in several groups, let's consider the possible roles and their combinations. Node roles: * *CMG* is a subset of cluster nodes hosting a Raft group; the *CMG* leader is responsible for orchestrating the node join process. * *MetaStorage* is a subset of cluster nodes hosting a Raft group responsible for storing the master copy of cluster metadata. * *DataNode* is a subset of cluster nodes hosting a Raft group responsible for storing the master copy of user tables. Node types: A = [] B = [CMG] C = [MSG] D = [DNG] E = [CMG, MSG] F = [CMG, DNG] G = [MSG, DNG] H = [CMG, MSG, DNG] was (Author: amashenkov): As a node may participate in several groups, let's consider the possible roles and their combinations. Node roles: * *CMG* is a subset of cluster nodes hosting a Raft group; the *CMG* leader is responsible for orchestrating the node join process. * *MetaStorage* is a subset of cluster nodes hosting a Raft group responsible for storing the master copy of cluster metadata. * *DataNode* is a subset of cluster nodes hosting a Raft group responsible for storing the master copy of user tables. Node types: A = [] B = [CMG] C = [MetaStorage] D = [DataNode] E = [CMG, MetaStorage] F = [CMG, DataNode] G = [MetaStorage, DataNode] H = [CMG, MetaStorage, DataNode]
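The node types A through H in the comment above are exactly the subsets of the three roles, i.e. the power set of {CMG, MSG, DNG}. A small sketch (hypothetical helper, not Ignite code) that derives them:

```python
from itertools import chain, combinations

# The three node roles defined in the comment above.
roles = ["CMG", "MSG", "DNG"]

def role_subsets(rs):
    """All role combinations a node may host, from [] (type A) up to all three (type H)."""
    return list(chain.from_iterable(combinations(rs, k) for k in range(len(rs) + 1)))

node_types = role_subsets(roles)
print(len(node_types))  # 8 types: A = () through H = ('CMG', 'MSG', 'DNG')
```

With three roles this gives 2^3 = 8 node types, matching the A-H enumeration.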
[jira] [Comment Edited] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634803#comment-17634803 ] Andrey Mashenkov edited comment on IGNITE-18171 at 11/16/22 12:16 PM: -- As a node may participate in several groups, let's consider the possible roles and their combinations. Node roles: * *CMG* is a subset of cluster nodes hosting a Raft group; the *CMG* leader is responsible for orchestrating the node join process. * *MSG* is a subset of cluster nodes hosting a Raft group responsible for storing the master copy of cluster metadata. * *DNG* is a subset of cluster nodes hosting a Raft group responsible for storing the master copy of user tables. Node types: A = [] B = [CMG] C = [MSG] D = [DNG] E = [CMG, MSG] F = [CMG, DNG] G = [MSG, DNG] H = [CMG, MSG, DNG] was (Author: amashenkov): As a node may participate in several groups, let's consider the possible roles and their combinations. Node roles: * *CMG* is a subset of cluster nodes hosting a Raft group; the *CMG* leader is responsible for orchestrating the node join process. * *MetaStorage* is a subset of cluster nodes hosting a Raft group responsible for storing the master copy of cluster metadata. * *DataNode* is a subset of cluster nodes hosting a Raft group responsible for storing the master copy of user tables. Node types: A = [] B = [CMG] C = [MSG] D = [DNG] E = [CMG, MSG] F = [CMG, DNG] G = [MSG, DNG] H = [CMG, MSG, DNG]
[jira] [Updated] (IGNITE-18171) Descibe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Mashenkov updated IGNITE-18171: -- Description: h2. Definitions. We can distinguish next cluster node groups, see below. Each node may be part of one or more groups. 1. Cluster Management Group (CMG), that control new nodes join process. 2. MetaStorage group (MSG), that hosts meta storage. 3. Data node group (DNG), that just hosts tables partitions. The components (CMG, meta storage, tables components) are depends on each other, but may resides on different (even disjoint) node subsets. So, some components may become temporary unavailable, and dependant components must be aware of such issues and handle them (wait, retry, throw exception or whatever) in expected way, which has to be documented also. [See IEP for details|https://cwiki.apache.org/confluence/display/IGNITE/IEP-77%3A+Node+Join+Protocol+and+Initialization+for+Ignite+3] h2. Motivation. As of now, the correct way to start the grid (after it was stopped) is: start CMG nodes, then Meta Storage nodes, then Data nodes. And in backward order for correct stop. Other scenarios are not tested and may lead to unexpected behaviour. Let's describe all possible scenarios, expected behaviour for each of them and extend test coverage. was: h2. Definitions. We can distinguish next cluster node groups, see below. Each node may be part of one or more groups. 1. Cluster Management Group (CMG), that control new nodes join process. 2. MetaStorage group, that hosts meta storage. 3. DataNode, that just hosts tables partitions. The components (CMG, meta storage, tables components) are depends on each other, but may resides on different (even disjoint) node subsets. So, some components may become temporary unavailable, and dependant components must be aware of such issues and handle them (wait, retry, throw exception or whatever) in expected way, which has to be documented also. 
[See IEP for details|https://cwiki.apache.org/confluence/display/IGNITE/IEP-77%3A+Node+Join+Protocol+and+Initialization+for+Ignite+3] h2. Motivation. As of now, the correct way to start the grid (after it was stopped) is: start CMG nodes, then Meta Storage nodes, then Data nodes; stop them in the reverse order. Other scenarios are not tested and may lead to unexpected behaviour. Let's describe all possible scenarios and the expected behaviour for each of them, and extend test coverage. > Describe nodes start/stop scenarios > -- > > Key: IGNITE-18171 > URL: https://issues.apache.org/jira/browse/IGNITE-18171 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Andrey Mashenkov >Assignee: Andrey Mashenkov >Priority: Major > Labels: ignite-3 > > h2. Definitions. > We can distinguish the following cluster node groups; each node may be part > of one or more groups. > 1. Cluster Management Group (CMG), which controls the process of joining new nodes. > 2. MetaStorage group (MSG), which hosts the meta storage. > 3. Data node group (DNG), which hosts table partitions. > The components (CMG, meta storage, table components) depend on each > other, but may reside on different (even disjoint) node subsets. So some > components may become temporarily unavailable, and dependent components must be > aware of such issues and handle them (wait, retry, throw an exception, or > whatever) in the expected way, which also has to be documented. > [See IEP for > details|https://cwiki.apache.org/confluence/display/IGNITE/IEP-77%3A+Node+Join+Protocol+and+Initialization+for+Ignite+3] > h2. Motivation. > As of now, the correct way to start the grid (after it was stopped) is: start > CMG nodes, then Meta Storage nodes, then Data nodes; stop them in the reverse order. > Other scenarios are not tested and may lead to unexpected > behaviour. > Let's describe all possible scenarios and the expected behaviour for each of them, > and extend test coverage. 
-- This message was sent by Atlassian Jira (v8.20.10#820010)
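The start/stop ordering described in IGNITE-18171 (start CMG nodes, then Meta Storage nodes, then Data nodes, and stop in reverse) can be sketched as a tiny ordering helper. This is a hypothetical illustration only, not Ignite API; the group names are taken from the issue text.

```python
# Hypothetical sketch of the ordering constraint from IGNITE-18171:
# CMG first, then MetaStorage, then Data nodes; stop in reverse order.
START_ORDER = ["CMG", "MetaStorage", "DataNode"]

def start_sequence(groups=START_ORDER):
    """Return group names in the documented start order."""
    return list(groups)

def stop_sequence(groups=START_ORDER):
    """A correct stop happens in the reverse of the start order."""
    return list(reversed(groups))

print(start_sequence())  # ['CMG', 'MetaStorage', 'DataNode']
print(stop_sequence())   # ['DataNode', 'MetaStorage', 'CMG']
```

Any other ordering is, per the issue, untested and may lead to unexpected behaviour, which is exactly what the ticket proposes to document and cover with tests.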
[jira] [Commented] (IGNITE-18171) Describe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634809#comment-17634809 ] Andrey Mashenkov commented on IGNITE-18171: --- The scenarios we would like to cover are the Cartesian product of: 1. Initialized/non-initialized nodes (check non-initialized for the (re)start scenario only?). 2. Combinations of nodes' roles. 3. Group start/stop/restart order. 4. User actions? > Describe nodes start/stop scenarios > -- > > Key: IGNITE-18171 > URL: https://issues.apache.org/jira/browse/IGNITE-18171 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Andrey Mashenkov >Assignee: Andrey Mashenkov >Priority: Major > Labels: ignite-3 > > h2. Definitions. > We can distinguish the following cluster node groups; each node may be part > of one or more groups. > 1. Cluster Management Group (CMG), which controls the process of joining new nodes. > 2. MetaStorage group, which hosts the meta storage. > 3. DataNode, which hosts table partitions. > The components (CMG, meta storage, table components) depend on each > other, but may reside on different (even disjoint) node subsets. So some > components may become temporarily unavailable, and dependent components must be > aware of such issues and handle them (wait, retry, throw an exception, or > whatever) in the expected way, which also has to be documented. > [See IEP for details|https://cwiki.apache.org/confluence/display/IGNITE/IEP-77%3A+Node+Join+Protocol+and+Initialization+for+Ignite+3] > h2. Motivation. > As of now, the correct way to start the grid (after it was stopped) is: start > CMG nodes, then Meta Storage nodes, then Data nodes; stop them in the reverse order. > Other scenarios are not tested and may lead to unexpected > behaviour. > Let's describe all possible scenarios and the expected behaviour for each of them, > and extend test coverage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-18171) Describe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634803#comment-17634803 ] Andrey Mashenkov commented on IGNITE-18171: --- As a node may participate in several groups, let's consider the following possible roles and their combinations. Node roles: * *CMG* ** is a subset of cluster nodes hosting a Raft group. ** The *CMG* leader is responsible for orchestrating the node join process. * *MetaStorage* is a subset of cluster nodes hosting a Raft group responsible for storing the master copy of cluster metadata. * *DataNode* is a subset of cluster nodes hosting a Raft group responsible for storing a master copy of user tables. Node types: A = [] B = [CMG] C = [MetaStorage] D = [DataNode] E = [CMG, MetaStorage] F = [CMG, DataNode] G = [MetaStorage, DataNode] H = [CMG, MetaStorage, DataNode] > Describe nodes start/stop scenarios > -- > > Key: IGNITE-18171 > URL: https://issues.apache.org/jira/browse/IGNITE-18171 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Andrey Mashenkov >Assignee: Andrey Mashenkov >Priority: Major > Labels: ignite-3 > > h2. Definitions. > We can distinguish the following cluster node groups; each node may be part > of one or more groups. > 1. Cluster Management Group (CMG), which controls the process of joining new nodes. > 2. MetaStorage group, which hosts the meta storage. > 3. DataNode, which hosts table partitions. > The components (CMG, meta storage, table components) depend on each > other, but may reside on different (even disjoint) node subsets. So some > components may become temporarily unavailable, and dependent components must be > aware of such issues and handle them (wait, retry, throw an exception, or > whatever) in the expected way, which also has to be documented. > [See IEP for details|https://cwiki.apache.org/confluence/display/IGNITE/IEP-77%3A+Node+Join+Protocol+and+Initialization+for+Ignite+3] > h2. Motivation. 
> As of now, the correct way to start the grid (after it was stopped) is: start > CMG nodes, then Meta Storage nodes, then Data nodes; stop them in the reverse order. > Other scenarios are not tested and may lead to unexpected > behaviour. > Let's describe all possible scenarios and the expected behaviour for each of them, > and extend test coverage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
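The eight node types A..H listed in the comment above are exactly the subsets (power set) of the three roles. A small sketch can enumerate them, which is also a natural way to generate test parameters for the Cartesian-product coverage the previous comment proposes. This is an illustration of the enumeration, not Ignite code.

```python
# Enumerate all role subsets {CMG, MetaStorage, DataNode}: 2^3 = 8 node
# types, matching A = [] through H = [CMG, MetaStorage, DataNode] above.
from itertools import combinations

ROLES = ("CMG", "MetaStorage", "DataNode")

def role_combinations(roles=ROLES):
    """All subsets of the roles, from the empty set to the full set."""
    out = []
    for r in range(len(roles) + 1):
        out.extend(list(c) for c in combinations(roles, r))
    return out

combos = role_combinations()
print(len(combos))  # 8
```

Crossing `combos` with initialization states and start/stop/restart orders yields the full scenario matrix the ticket wants to describe and test.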
[jira] [Updated] (IGNITE-14369) Node.js: incorrect Hash Code calculation
[ https://issues.apache.org/jira/browse/IGNITE-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bojidar Marinov updated IGNITE-14369: - Labels: (was: node) > Node.js: incorrect Hash Code calculation > - > > Key: IGNITE-14369 > URL: https://issues.apache.org/jira/browse/IGNITE-14369 > Project: Ignite > Issue Type: Bug > Components: thin client >Affects Versions: 2.10 >Reporter: Bojidar Marinov >Priority: Major > > The Node.js thin client calculates wrong hash codes, possibly leading to > duplicated rows and the inability to read rows with complex key types written > from other languages: > # BinaryUtils.contentHashCode is called with a wrong end parameter at > [BinaryObject.ts:397|https://github.com/apache/ignite-nodejs-thin-client/blob/76c7d7eb2b1856295f877434ef358beaa7155d91/src/BinaryObject.ts#L397]. > The second parameter is the end position, not the content length, and thus > should be relative to this._startOffset. > Experimentally confirmed that changing it to this._startOffset + > this._schemaOffset - 1 works. > # BinaryUtils.contentHashCode uses unsigned bytes at > [BinaryUtils.ts:632|https://github.com/apache/ignite-nodejs-thin-client/blob/76c7d7eb2b1856295f877434ef358beaa7155d91/src/internal/BinaryUtils.ts#L632]. > [buffer[idx]|https://nodejs.org/api/buffer.html#buffer_buf_index] returns a > number between 0..255, while Java's byte is -128..127. > Experimentally confirmed that switching to > [Buffer.readInt8|https://nodejs.org/api/buffer.html#buffer_buf_readint8_offset] > works. -- This message was sent by Atlassian Jira (v8.20.10#820010)
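The second bug in IGNITE-14369 boils down to byte signedness: Node's `buffer[idx]` yields 0..255 while Java's `byte` is -128..127, so a 31-based rolling hash over the same bytes diverges as soon as any byte exceeds 127. The sketch below is not the thin client's code; it merely demonstrates the divergence with a Java-style hash (`h = 31*h + b`, starting from 1).

```python
# Demonstrate why unsigned byte reads (Node's buffer[idx]) break hash
# compatibility with Java's signed bytes. Illustrative only.

def to_signed(b):
    """Reinterpret an unsigned byte (0..255) as Java's signed byte (-128..127)."""
    return b - 256 if b > 127 else b

def content_hash(byte_values, signed=True):
    """Java-style 31-based rolling hash over a byte sequence."""
    h = 1
    for b in byte_values:
        v = to_signed(b) if signed else b
        h = 31 * h + v
    return h

data = [0x01, 0xC8, 0x7F]  # 0xC8 = 200 > 127, so the interpretations diverge
print(content_hash(data, signed=True) != content_hash(data, signed=False))  # True
```

Hashes agree only while every byte is at most 127, which is why the bug slips past simple test data; the reported fix (`Buffer.readInt8`) makes Node read signed bytes like Java does.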
[jira] [Updated] (IGNITE-18171) Describe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Mashenkov updated IGNITE-18171: -- Description: h2. Definitions. We can distinguish the following cluster node groups; each node may be part of one or more groups. 1. Cluster Management Group (CMG), which controls the process of joining new nodes. 2. MetaStorage group, which hosts the meta storage. 3. DataNode, which hosts table partitions. The components (CMG, meta storage, table components) depend on each other, but may reside on different (even disjoint) node subsets. So some components may become temporarily unavailable, and dependent components must be aware of such issues and handle them (wait, retry, throw an exception, or whatever) in the expected way, which also has to be documented. [See IEP for details|https://cwiki.apache.org/confluence/display/IGNITE/IEP-77%3A+Node+Join+Protocol+and+Initialization+for+Ignite+3] h2. Motivation. As of now, the correct way to start the grid (after it was stopped) is: start CMG nodes, then Meta Storage nodes, then Data nodes; stop them in the reverse order. Other scenarios are not tested and may lead to unexpected behaviour. Let's describe all possible scenarios and the expected behaviour for each of them, and extend test coverage. was:TBD > Describe nodes start/stop scenarios > -- > > Key: IGNITE-18171 > URL: https://issues.apache.org/jira/browse/IGNITE-18171 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Andrey Mashenkov >Assignee: Andrey Mashenkov >Priority: Major > Labels: ignite-3 > > h2. Definitions. > We can distinguish the following cluster node groups; each node may be part > of one or more groups. > 1. Cluster Management Group (CMG), which controls the process of joining new nodes. > 2. MetaStorage group, which hosts the meta storage. > 3. DataNode, which hosts table partitions. 
> The components (CMG, meta storage, table components) depend on each > other, but may reside on different (even disjoint) node subsets. So some > components may become temporarily unavailable, and dependent components must be > aware of such issues and handle them (wait, retry, throw an exception, or > whatever) in the expected way, which also has to be documented. > [See IEP for details|https://cwiki.apache.org/confluence/display/IGNITE/IEP-77%3A+Node+Join+Protocol+and+Initialization+for+Ignite+3] > h2. Motivation. > As of now, the correct way to start the grid (after it was stopped) is: start > CMG nodes, then Meta Storage nodes, then Data nodes; stop them in the reverse order. > Other scenarios are not tested and may lead to unexpected > behaviour. > Let's describe all possible scenarios and the expected behaviour for each of them, > and extend test coverage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-14369) Node.js: incorrect Hash Code calculation
[ https://issues.apache.org/jira/browse/IGNITE-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bojidar Marinov updated IGNITE-14369: - Labels: node (was: ) > Node.js: incorrect Hash Code calculation > - > > Key: IGNITE-14369 > URL: https://issues.apache.org/jira/browse/IGNITE-14369 > Project: Ignite > Issue Type: Bug > Components: thin client >Affects Versions: 2.10 >Reporter: Bojidar Marinov >Priority: Major > Labels: node > > The Node.js thin client calculates wrong hash codes, possibly leading to > duplicated rows and the inability to read rows with complex key types written > from other languages: > # BinaryUtils.contentHashCode is called with a wrong end parameter at > [BinaryObject.ts:397|https://github.com/apache/ignite-nodejs-thin-client/blob/76c7d7eb2b1856295f877434ef358beaa7155d91/src/BinaryObject.ts#L397]. > The second parameter is the end position, not the content length, and thus > should be relative to this._startOffset. > Experimentally confirmed that changing it to this._startOffset + > this._schemaOffset - 1 works. > # BinaryUtils.contentHashCode uses unsigned bytes at > [BinaryUtils.ts:632|https://github.com/apache/ignite-nodejs-thin-client/blob/76c7d7eb2b1856295f877434ef358beaa7155d91/src/internal/BinaryUtils.ts#L632]. > [buffer[idx]|https://nodejs.org/api/buffer.html#buffer_buf_index] returns a > number between 0..255, while Java's byte is -128..127. > Experimentally confirmed that switching to > [Buffer.readInt8|https://nodejs.org/api/buffer.html#buffer_buf_readint8_offset] > works. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17499) Service method invocation exception is not propagated to thin client side
[ https://issues.apache.org/jira/browse/IGNITE-17499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-17499: Summary: Service method invocation exception is not propagated to thin client side (was: Service method invocation excepition is not propagated to thin client side.) > Service method invocation exception is not propagated to thin client side > - > > Key: IGNITE-17499 > URL: https://issues.apache.org/jira/browse/IGNITE-17499 > Project: Ignite > Issue Type: Bug >Reporter: Mikhail Petrov >Assignee: Mikhail Petrov >Priority: Minor > Labels: ise > Fix For: 2.14 > > Time Spent: 0.5h > Remaining Estimate: 0h > > https://issues.apache.org/jira/browse/IGNITE-13389 introduced a dedicated flag > that makes it possible to propagate the server-side stack trace to the thin client > side. The above-mentioned propagation does not work for exceptions that > arise during Ignite Service invocation. > Steps to reproduce: > 1. Start a .NET Ignite node > 2. Deploy a service whose invocation throws an arbitrary uncaught exception > 3. Invoke the previously deployed service via the Java thin client > As a result, information about the custom code exception is not present in > the exception stack trace that is thrown after the service call. > The main reason for such behaviour is that > ClientServiceInvokeRequest.java:198 does not propagate the initial exception, so > ClientRequestHandler#handleException cannot handle the exception properly even > if ThinClientConfiguration#sendServerExceptionStackTraceToClient() is enabled. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-18171) Describe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Mashenkov reassigned IGNITE-18171: - Assignee: Andrey Mashenkov > Describe nodes start/stop scenarios > -- > > Key: IGNITE-18171 > URL: https://issues.apache.org/jira/browse/IGNITE-18171 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Andrey Mashenkov >Assignee: Andrey Mashenkov >Priority: Major > Labels: ignite-3 > > TBD -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-16753) ControlUtility and Zookeeper tests don't run due to updated curator-test dep
[ https://issues.apache.org/jira/browse/IGNITE-16753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitry Pavlov updated IGNITE-16753: --- Labels: ise (was: ) > ControlUtility and Zookeeper tests don't run due to updated curator-test dep > > > Key: IGNITE-16753 > URL: https://issues.apache.org/jira/browse/IGNITE-16753 > Project: Ignite > Issue Type: Bug >Reporter: Maksim Timonin >Assignee: Maksim Timonin >Priority: Major > Labels: ise > Fix For: 2.13 > > Time Spent: 20m > Remaining Estimate: 0h > > Commit [1] updated the curator-test dependency, and now it fetches a junit5 > dependency. But Ignite runs tests only with junit4, so we should exclude > junit from curator-test to run the tests again. > It affects the control-utility and zookeeper modules. > [1][https://github.com/apache/ignite/commit/fe95954c5072534b52c62a6f643f3cb96f92628b] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18081) .NET: Thin 3.0: LINQ: Basic select queries
[ https://issues.apache.org/jira/browse/IGNITE-18081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-18081: Labels: .NET LINQ ignite-3 (was: .NET ignite-3) > .NET: Thin 3.0: LINQ: Basic select queries > -- > > Key: IGNITE-18081 > URL: https://issues.apache.org/jira/browse/IGNITE-18081 > Project: Ignite > Issue Type: Improvement > Components: platforms, thin client >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: .NET, LINQ, ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Implement basic LINQ provider with simple SELECT query support. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (IGNITE-18011) Avoid obtaining LogManager to stream RAFT snapshots
[ https://issues.apache.org/jira/browse/IGNITE-18011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy resolved IGNITE-18011. Resolution: Invalid Irrelevant: in IGNITE-17935 the code mentioned by this issue was replaced by another implementation, which in turn was replaced in IGNITE-18122. > Avoid obtaining LogManager to stream RAFT snapshots > --- > > Key: IGNITE-18011 > URL: https://issues.apache.org/jira/browse/IGNITE-18011 > Project: Ignite > Issue Type: Improvement > Components: persistence >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > When freezing an outgoing snapshot scope (see > {{{}OutgoingSnapshot#freezeScope(){}}}), we need to get snapshot metadata > corresponding to the contents of our storages (MV+TX) at that precise moment. > The snapshot metadata includes the last applied index (which we have), as well as the term > and the lists of followers and learners, which we need to obtain. In JRaft, we can > take them from the LogManager. > The problem is that JRaft does not provide a way to get the corresponding > {{LogManager}} from this code. The 'right' way to fix this would be to change > JRaft internals so that the LogManager instance is made available to snapshot > readers. But we should not touch the JRaft core when we can avoid it (because, > in the future, we might need to merge a new version of JRaft into our > codebase). So the current implementation adds a way to obtain the current JRaft > Node and then take the LogManager instance from it. > A facility to get peers (followers/learners) is planned for the future; it > could be used instead of a LogManager. But it's not clear how a term could be > obtained without it. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-18122) Track last applied term and group config in storages
[ https://issues.apache.org/jira/browse/IGNITE-18122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634770#comment-17634770 ] Roman Puchkovskiy commented on IGNITE-18122: Thanks guys! > Track last applied term and group config in storages > > > Key: IGNITE-18122 > URL: https://issues.apache.org/jira/browse/IGNITE-18122 > Project: Ignite > Issue Type: Improvement > Components: persistence >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 6h 40m > Remaining Estimate: 0h > > We need the last applied index, term, and group config to build a snapshot meta. > In the current implementation, only the index is stored in our storages (MV and > TX), but the term and config are taken from JRaft's {{{}LogManager{}}}. This is > unreliable as the log might be truncated. > We must store the term and config in our storages as well (the term in both of them, > as it is effectively a required attribute of a RAFT index, and the group config only in the MV > storage). > Also, we must make sure that on ANY command processed by > {{PartitionListener}} (and on the configuration committed event, too), we update > lastAppliedIndex+term in one of the storages. Otherwise, a resulting gap > might cause {{AppendEntries}} calls to followers to require installing a > snapshot in an infinite loop. -- This message was sent by Atlassian Jira (v8.20.10#820010)
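The invariant described in IGNITE-18122 (the storage itself persists the last applied index, term, and group config, and advances them on every applied command so no gap forms behind the Raft log) can be sketched as follows. This is a hypothetical illustration of the idea, not Ignite's actual storage API; all class and method names are invented.

```python
# Hypothetical sketch of the IGNITE-18122 invariant: keep last applied
# index, term, and group config inside the storage, so snapshot meta no
# longer depends on JRaft's LogManager (whose log may be truncated).

class PartitionStorage:
    def __init__(self):
        self.last_applied_index = 0
        self.last_applied_term = 0
        self.group_config = None

    def apply(self, index, term, config=None):
        # Advance on EVERY applied command (and on config-committed events):
        # a skipped update leaves a gap that forces followers into repeated
        # snapshot installation.
        assert index > self.last_applied_index, "applied index must advance"
        self.last_applied_index = index
        self.last_applied_term = term
        if config is not None:
            self.group_config = config

    def snapshot_meta(self):
        # Everything a snapshot meta needs now comes from the storage itself.
        return (self.last_applied_index, self.last_applied_term, self.group_config)

s = PartitionStorage()
s.apply(1, 1)
s.apply(2, 1, config=["peerA", "peerB"])
print(s.snapshot_meta())  # (2, 1, ['peerA', 'peerB'])
```

The key design point mirrored here is that index and term are written together, so a snapshot taken at any moment sees a mutually consistent (index, term, config) triple.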
[jira] [Created] (IGNITE-18176) Update documentation for Apache Ignite 3 introduction
Igor Gusev created IGNITE-18176: --- Summary: Update documentation for Apache Ignite 3 introduction Key: IGNITE-18176 URL: https://issues.apache.org/jira/browse/IGNITE-18176 Project: Ignite Issue Type: Task Reporter: Igor Gusev Currently our introduction page is designed for the alpha. We should update it for the beta, providing a new list of features and limitations. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17953) NPE and closed connection on some malformed SQL requests using third-party SQL clients
[ https://issues.apache.org/jira/browse/IGNITE-17953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky updated IGNITE-17953: Ignite Flags: (was: Docs Required,Release Notes Required) > NPE and closed connection on some malformed SQL requests using third-party > SQL clients > -- > > Key: IGNITE-17953 > URL: https://issues.apache.org/jira/browse/IGNITE-17953 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 3.0.0-beta1 >Reporter: Andrey Khitrin >Priority: Major > Labels: ignite-3 > > I tried running different SQL queries in AI3 using the > [SqlLine|https://github.com/julianhyde/sqlline] tool and a fresh ignite-client > JAR downloaded from CI. I tried both correct and some incorrect SQL queries, > and it looks like some incorrect SQL queries lead to an irrecoverable error on > the client side. The stack trace is the following: > {code:java} > Oct 21, 2022 4:57:02 PM io.netty.channel.DefaultChannelPipeline > onUnhandledInboundException > WARNING: An exceptionCaught() event was fired, and it reached at the tail of > the pipeline. It usually means the last handler in the pipeline did not > handle the exception. 
> java.lang.NullPointerException > at org.apache.ignite.lang.ErrorGroup.errorMessage(ErrorGroup.java:193) > at > org.apache.ignite.lang.IgniteException.(IgniteException.java:190) > at > org.apache.ignite.internal.client.TcpClientChannel.readError(TcpClientChannel.java:336) > at > org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:301) > at > org.apache.ignite.internal.client.TcpClientChannel.onMessage(TcpClientChannel.java:160) > at > org.apache.ignite.internal.client.io.netty.NettyClientConnection.onMessage(NettyClientConnection.java:94) > at > org.apache.ignite.internal.client.io.netty.NettyClientMessageHandler.channelRead(NettyClientMessageHandler.java:34) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) > at > io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327) > at > io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:299) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) > at > io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) > at > io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) > at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) > at > io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) > at > io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) > at > io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) > at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) > at > io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995) > at > io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) > at > io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) > at java.base/java.lang.Thread.run(Thread.java:829) > Oct 21, 2022 4:58:07 PM io.netty.channel.DefaultChannelPipeline > onUnhandledInboundException > WARNING: An exceptionCaught() event was fired, and it reached at the tail of > the pipeline. It usually means the last handler in the pipeline did not > handle the exception. > java.io.IOException: Connection reset by peer > at java.base/sun.nio.ch.FileDispatcherImpl.read0(Native Method) > at > java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) > at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276) > at j
[jira] [Assigned] (IGNITE-17953) NPE and closed connection on some malformed SQL requests using third-party SQL clients
[ https://issues.apache.org/jira/browse/IGNITE-17953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky reassigned IGNITE-17953: --- Assignee: Evgeny Stanilovsky > NPE and closed connection on some malformed SQL requests using third-party > SQL clients > -- > > Key: IGNITE-17953 > URL: https://issues.apache.org/jira/browse/IGNITE-17953 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 3.0.0-beta1 >Reporter: Andrey Khitrin >Assignee: Evgeny Stanilovsky >Priority: Major > Labels: calcite3-required, ignite-3 > > I tried running different SQL queries in AI3 using the > [SqlLine|https://github.com/julianhyde/sqlline] tool and a fresh ignite-client > JAR downloaded from CI. I tried both correct and some incorrect SQL queries, > and it looks like some incorrect SQL queries lead to an irrecoverable error on > the client side. The stack trace is the following: > {code:java} > Oct 21, 2022 4:57:02 PM io.netty.channel.DefaultChannelPipeline > onUnhandledInboundException > WARNING: An exceptionCaught() event was fired, and it reached at the tail of > the pipeline. It usually means the last handler in the pipeline did not > handle the exception. 
> java.lang.NullPointerException > at org.apache.ignite.lang.ErrorGroup.errorMessage(ErrorGroup.java:193) > at > org.apache.ignite.lang.IgniteException.(IgniteException.java:190) > at > org.apache.ignite.internal.client.TcpClientChannel.readError(TcpClientChannel.java:336) > at > org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:301) > at > org.apache.ignite.internal.client.TcpClientChannel.onMessage(TcpClientChannel.java:160) > at > org.apache.ignite.internal.client.io.netty.NettyClientConnection.onMessage(NettyClientConnection.java:94) > at > org.apache.ignite.internal.client.io.netty.NettyClientMessageHandler.channelRead(NettyClientMessageHandler.java:34) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) > at > io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327) > at > io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:299) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) > at > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) > at > io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) > at > io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) > at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) > at > io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) > at > io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) > at > io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) > at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) > at > io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995) > at > io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) > at > io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) > at java.base/java.lang.Thread.run(Thread.java:829) > Oct 21, 2022 4:58:07 PM io.netty.channel.DefaultChannelPipeline > onUnhandledInboundException > WARNING: An exceptionCaught() event was fired, and it reached at the tail of > the pipeline. It usually means the last handler in the pipeline did not > handle the exception. > java.io.IOException: Connection reset by peer > at java.base/sun.nio.ch.FileDispatcherImpl.read0(Native Method) > at > java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) > at java.base/sun.nio.ch.IOUtil.readIntoNativeB
[jira] [Updated] (IGNITE-17953) NPE and closed connection on some malformed SQL requests using third-party SQL clients
[ https://issues.apache.org/jira/browse/IGNITE-17953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky updated IGNITE-17953: Labels: calcite3-required ignite-3 (was: ignite-3) > NPE and closed connection on some malformed SQL requests using third-party > SQL clients > -- > > Key: IGNITE-17953 > URL: https://issues.apache.org/jira/browse/IGNITE-17953 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 3.0.0-beta1 >Reporter: Andrey Khitrin >Priority: Major > Labels: calcite3-required, ignite-3 > > I tried running different SQL queries in AI3 using the > [SqlLine|https://github.com/julianhyde/sqlline] tool and a fresh ignite-client > JAR downloaded from CI. I tried both correct and some incorrect SQL queries, > and it looks like some incorrect SQL queries lead to an irrecoverable error on > the client side. The stack trace is the following: > {code:java} > Oct 21, 2022 4:57:02 PM io.netty.channel.DefaultChannelPipeline > onUnhandledInboundException > WARNING: An exceptionCaught() event was fired, and it reached at the tail of > the pipeline. It usually means the last handler in the pipeline did not > handle the exception. 
> java.lang.NullPointerException
>     at org.apache.ignite.lang.ErrorGroup.errorMessage(ErrorGroup.java:193)
>     at org.apache.ignite.lang.IgniteException.<init>(IgniteException.java:190)
>     at org.apache.ignite.internal.client.TcpClientChannel.readError(TcpClientChannel.java:336)
>     at org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:301)
>     at org.apache.ignite.internal.client.TcpClientChannel.onMessage(TcpClientChannel.java:160)
>     at org.apache.ignite.internal.client.io.netty.NettyClientConnection.onMessage(NettyClientConnection.java:94)
>     at org.apache.ignite.internal.client.io.netty.NettyClientMessageHandler.channelRead(NettyClientMessageHandler.java:34)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327)
>     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:299)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
>     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
>     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
>     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
>     at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995)
>     at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>     at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>     at java.base/java.lang.Thread.run(Thread.java:829)
> Oct 21, 2022 4:58:07 PM io.netty.channel.DefaultChannelPipeline onUnhandledInboundException
> WARNING: An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
> java.io.IOException: Connection reset by peer
>     at java.base/sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>     at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>     at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276) >
[jira] [Commented] (IGNITE-18122) Track last applied term and group config in storages
[ https://issues.apache.org/jira/browse/IGNITE-18122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634764#comment-17634764 ] Vladislav Pyatkov commented on IGNITE-18122: The modification connected with transferring a term to the RAFT state machine looks good to me. > Track last applied term and group config in storages > > > Key: IGNITE-18122 > URL: https://issues.apache.org/jira/browse/IGNITE-18122 > Project: Ignite > Issue Type: Improvement > Components: persistence >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 6.5h > Remaining Estimate: 0h > > We need the last applied index, term, and group config to build a snapshot meta. > In the current implementation, only the index is stored in our storages (MV and TX), while term and config are taken from JRaft's {{{}LogManager{}}}. This is unreliable, as the log might be truncated. > We must store term and config in our storages as well (the term in both of them, as it is effectively a required attribute of a RAFT index, and the group config only in the MV storage). > Also, we must make sure that on ANY command processed by {{PartitionListener}} (and on the configuration committed event, too), we update lastAppliedIndex+term in one of the storages. Otherwise, a resulting gap might force {{AppendEntries}} calls to followers to install a snapshot in an infinite loop. -- This message was sent by Atlassian Jira (v8.20.10#820010)
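The requirement in the ticket above, advancing lastAppliedIndex and term together on every applied command and capturing the group config in the MV storage, can be sketched roughly as follows. All class and method names here are hypothetical illustrations, not the actual Ignite 3 API:

```java
import java.util.concurrent.atomic.AtomicReference;

/** Snapshot meta kept by the storage itself, so it survives RAFT log truncation. */
final class AppliedState {
    final long index;         // last applied RAFT log index
    final long term;          // term of that index
    final byte[] groupConfig; // serialized RAFT group configuration

    AppliedState(long index, long term, byte[] groupConfig) {
        this.index = index;
        this.term = term;
        this.groupConfig = groupConfig;
    }
}

/** Hypothetical storage facade that advances index and term together. */
final class MvStorageMeta {
    private final AtomicReference<AppliedState> state =
            new AtomicReference<>(new AppliedState(0, 0, new byte[0]));

    /** Called for ANY applied command, so no index/term gap can appear. */
    void onCommandApplied(long index, long term) {
        AppliedState cur = state.get();
        state.set(new AppliedState(index, term, cur.groupConfig));
    }

    /** Called on the "configuration committed" event. */
    void onConfigCommitted(long index, long term, byte[] config) {
        state.set(new AppliedState(index, term, config));
    }

    /** Everything needed to build a snapshot meta, without consulting the log. */
    AppliedState snapshotMeta() {
        return state.get();
    }
}
```

The point of the sketch is that the snapshot meta is read from the storage alone, so truncating JRaft's log no longer loses the term or the config.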
[jira] [Commented] (IGNITE-18082) .NET: Thin 3.0: LINQ: Joins
[ https://issues.apache.org/jira/browse/IGNITE-18082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634759#comment-17634759 ] Igor Sapego commented on IGNITE-18082: -- [~ptupitsyn] looks good to me. > .NET: Thin 3.0: LINQ: Joins > --- > > Key: IGNITE-18082 > URL: https://issues.apache.org/jira/browse/IGNITE-18082 > Project: Ignite > Issue Type: Sub-task > Components: platforms, thin client >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: .NET, ignite-3 > Fix For: 3.0.0-beta2 > > > Support queries with joins in the LINQ provider. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-17733) Change lock manager implementation
[ https://issues.apache.org/jira/browse/IGNITE-17733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denis Chudov reassigned IGNITE-17733: - Assignee: Denis Chudov > Change lock manager implementation > -- > > Key: IGNITE-17733 > URL: https://issues.apache.org/jira/browse/IGNITE-17733 > Project: Ignite > Issue Type: Bug >Reporter: Vladislav Pyatkov >Assignee: Denis Chudov >Priority: Major > Labels: ignite-3 > > *Motivation:* > The lock manager should be based on the _Wait-Die_ deadlock resolution strategy by default. The concept is implemented in a [POC|https://github.com/ascherbakoff/ai3-txn-mvp]. > Since the current implementation uses a different resolution strategy, some tests will start failing. All those tests should be fixed as part of this ticket. > *Definition of Done:* > Replace the implementation of _HeapLockManager_ with [_Lock_ |https://github.com/ascherbakoff/ai3-txn-mvp/blob/main/src/main/java/com/ascherbakoff/ai3/lock/Lock.java] and adjust the API. > Because the lock resolution strategy changes, tests from _AbstractLockManagerTest_ and _TxAbstractTest_ will start failing. These failures have to be fixed. > The property IGNITE_ALL_LOCK_TYPES_ARE_USED should be removed. > *Workaround:* > Until this issue is fixed, we take locks only on primary keys. This behavior is toggled by the property IGNITE_ALL_LOCK_TYPES_ARE_USED. -- This message was sent by Atlassian Jira (v8.20.10#820010)
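The Wait-Die strategy mentioned in the ticket above resolves deadlocks by transaction age: on a lock conflict, an older requester is allowed to wait for the younger holder, while a younger requester is aborted ("dies") immediately, which makes wait-for cycles impossible. A minimal illustrative sketch of the decision rule (hypothetical names, not the POC's actual API):

```java
/** Wait-Die conflict decision: compare the requester's timestamp with the holder's. */
final class WaitDie {
    enum Decision { WAIT, DIE }

    /**
     * A lower timestamp means an older transaction. An older requester may
     * wait for the lock; a younger requester is aborted to break any
     * potential deadlock cycle before it can form.
     */
    static Decision onConflict(long requesterTs, long holderTs) {
        return requesterTs < holderTs ? Decision.WAIT : Decision.DIE;
    }
}
```

Because waiting is only ever permitted in one direction of the age ordering, no cycle of waiters can arise, which is why tests written against a different resolution strategy start failing once this rule is in place.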
[jira] [Updated] (IGNITE-18173) SQL: implement EVERY and SOME aggregate functions
[ https://issues.apache.org/jira/browse/IGNITE-18173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yury Gerzhedovich updated IGNITE-18173: --- Labels: calcite calcite2-required calcite3-required ignite-3 (was: ignite-3) > SQL: implement EVERY and SOME aggregate functions > - > > Key: IGNITE-18173 > URL: https://issues.apache.org/jira/browse/IGNITE-18173 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Andrey Khitrin >Priority: Major > Labels: calcite, calcite2-required, calcite3-required, ignite-3 > > Aggregate functions EVERY and SOME are part of SQL standard. Unfortunately, > they're not implemented in AI3 beta1 yet. Could you please implement them? > In AI2, they work in the following manner: > {code:sql} > create table tmp_table_age_name_wage (key_field INT PRIMARY KEY,AGE > INT,field1 VARCHAR,field2 INT); > insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (1, > 42,'John',10); > insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (2, > 43,'Jack',5); > insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (3, > 42,'Jen',3); > insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (4, > 42,'Jim',7); > insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (5, > 41,'Jess',3); > insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (6, > 50,'Joe',4); > insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (7, > 43,'Jeff',2); > insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (8, > 32,'Joel',8); > insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (9, > 33,'Joe',3); > insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (10, > 41,'Jill',9); > SELECT EVERY(AGE > 20) FROM tmp_table_age_name_wage;--> true > SELECT EVERY(AGE > 40) FROM tmp_table_age_name_wage;--> false > SELECT SOME(field2 = 9) FROM tmp_table_age_name_wage; 
--> true > SELECT SOME(field2 <> 9) FROM tmp_table_age_name_wage; --> true > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
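For reference, the expected semantics of EVERY and SOME map onto universal and existential quantification over the group (setting aside SQL's three-valued NULL handling, which sits on top). A quick Java sketch over the sample data from the ticket above:

```java
import java.util.List;

public class EverySomeDemo {
    public static void main(String[] args) {
        // AGE and field2 columns from the ticket's sample rows
        List<Integer> ages = List.of(42, 43, 42, 42, 41, 50, 43, 32, 33, 41);
        List<Integer> field2 = List.of(10, 5, 3, 7, 3, 4, 2, 8, 3, 9);

        boolean everyOver20 = ages.stream().allMatch(a -> a > 20);  // EVERY(AGE > 20)
        boolean everyOver40 = ages.stream().allMatch(a -> a > 40);  // EVERY(AGE > 40)
        boolean someEq9 = field2.stream().anyMatch(f -> f == 9);    // SOME(field2 = 9)

        System.out.println(everyOver20 + " " + everyOver40 + " " + someEq9);
        // → true false true, matching the AI2 results quoted above
    }
}
```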
[jira] [Updated] (IGNITE-18175) SQL: value out of type bounds is converted into 0 during implicit casting
[ https://issues.apache.org/jira/browse/IGNITE-18175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yury Gerzhedovich updated IGNITE-18175: --- Labels: calcite2-required calcite3-required ignite-3 (was: ignite-3) > SQL: value out of type bounds is converted into 0 during implicit casting > - > > Key: IGNITE-18175 > URL: https://issues.apache.org/jira/browse/IGNITE-18175 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 3.0.0-beta1 >Reporter: Andrey Khitrin >Priority: Major > Labels: calcite2-required, calcite3-required, ignite-3 > > A simple scenario: > {code:sql} > create table test_e011_INTEGER_from (key_field INT PRIMARY KEY, field1 INTEGER); > insert into test_e011_INTEGER_from (key_field, field1) values (1, -2147483648); > create table test_e011_SMALLINT (key_field INT PRIMARY KEY, field1_SMALLINT SMALLINT); > insert into test_e011_SMALLINT (key_field, field1_SMALLINT) values (1, (select field1 from test_e011_INTEGER_from where key_field=1)); > select * from test_e011_SMALLINT; > {code} > I expect it either to return '1, null' (as in postgres or sqlite3) or to raise an error on insert (as in GG8), since the value -2147483648 is out of bounds for the SMALLINT data type. > Instead, '1, 0' is stored in the test_e011_SMALLINT table and returned from the select. In other words, -2147483648 was converted into 0. Such behavior seems incorrect. -- This message was sent by Atlassian Jira (v8.20.10#820010)
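The observed 0 is consistent with an unchecked narrowing conversion: -2147483648 is 0x80000000, whose low 16 bits are all zero, so truncating it to a 16-bit SMALLINT yields 0. The effect can be reproduced in plain Java:

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        int v = Integer.MIN_VALUE;      // -2147483648 == 0x80000000
        short s = (short) v;            // unchecked narrowing keeps the low 16 bits
        System.out.println(s);          // → 0, the same value the ticket observes

        // A range check before the cast is what an engine would need in order
        // to raise an error instead of silently truncating:
        boolean fits = v >= Short.MIN_VALUE && v <= Short.MAX_VALUE;
        System.out.println(fits);       // → false
    }
}
```

This suggests the bug is a missing bounds check before the implicit cast, rather than a wrong arithmetic result.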
[jira] [Assigned] (IGNITE-18165) Apply short-term locks to sorted indexes
[ https://issues.apache.org/jira/browse/IGNITE-18165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladislav Pyatkov reassigned IGNITE-18165: -- Assignee: Vladislav Pyatkov > Apply short-term locks to sorted indexes > --- > > Key: IGNITE-18165 > URL: https://issues.apache.org/jira/browse/IGNITE-18165 > Project: Ignite > Issue Type: Improvement >Reporter: Vladislav Pyatkov >Assignee: Vladislav Pyatkov >Priority: Major > > *Motivation:* > Transaction isolation requires using short-term locks in insert operations over sorted indexes. This was not implemented because short-term locks were not yet supported. > According to the transaction protocol IEP [1], an insert operation in RW transactions for a sorted index looks as follows: > Unique index: > // insert > IX_short(nextKey) // released after the insertion > X_commit(currentKey) // acquired before releasing IX_short > Non-unique index: > // insert > IX_short(nextKey) > X_commit(currentKey) if nextKey was previously locked in S, X or SIX mode > IX_commit(currentKey) otherwise > *Implementation notes:* > For the code related to locks for indexes, see org.apache.ignite.internal.table.distributed.IndexLocker. We are interested in the SortedIndexLocker implementation, method locksForInsert. There is actually some draft code, but it is commented out. > [1] > https://cwiki.apache.org/confluence/display/IGNITE/IEP-91%3A+Transaction+protocol -- This message was sent by Atlassian Jira (v8.20.10#820010)
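The unique-index sequence above can be sketched as plain Java. The LockManager interface here is hypothetical; the real code belongs in SortedIndexLocker#locksForInsert:

```java
/** Sketch of the insert lock sequence for a unique sorted index (hypothetical API). */
final class UniqueIndexInsertLocks {
    interface LockManager {
        Runnable acquireShortIX(Object key); // returns the release action
        void acquireCommitX(Object key);     // lock held until commit
    }

    static void lockForInsert(LockManager lm, Object nextKey, Object currentKey) {
        // IX_short(nextKey): held only for the duration of the physical insertion.
        Runnable releaseShort = lm.acquireShortIX(nextKey);
        try {
            // X_commit(currentKey) is acquired BEFORE IX_short is released,
            // exactly as the unique-index sequence in the ticket requires.
            lm.acquireCommitX(currentKey);
            // ... perform the index insertion here ...
        } finally {
            releaseShort.run(); // IX_short released right after the insertion
        }
    }
}
```

The ordering is the whole point: a concurrent inserter on the same nextKey is blocked until the X_commit lock is safely in place, while the short-term lock itself never outlives the insertion.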
[jira] [Updated] (IGNITE-18172) Raft learners rebalance
[ https://issues.apache.org/jira/browse/IGNITE-18172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Polovtcev updated IGNITE-18172: - Description: Currently, the Rebalance protocol only applies to Raft followers. We would like to extend it to be applicable to rebalancing Raft learners as well. The following should be done: # {{partition.assignments.*}} Metastorage keys should store followers' assignments as well as learners' assignments. I propose adding a flag to the corresponding entries (i.e. {{isFollower}}). # When reacting to changes in Metastorage entries, call a new version of `changePeersAsync` (see IGNITE-18155) that will provide new followers and learners. Concrete applications of learners rebalance and actual learner assignments are subject of further design (e.g. it can be used in Metastorage and Replicated Tables). > Raft learners rebalance > --- > > Key: IGNITE-18172 > URL: https://issues.apache.org/jira/browse/IGNITE-18172 > Project: Ignite > Issue Type: Task >Reporter: Aleksandr Polovtcev >Priority: Major > Labels: ignite-3 > > Currently, the Rebalance protocol only applies to Raft followers. We would > like to extend it to be applicable to rebalancing Raft learners as well. The > following should be done: > # {{partition.assignments.*}} Metastorage keys should store followers' > assignments as well as learners' assignments. I propose adding a flag to the > corresponding entries (i.e. {{isFollower}}). > # When reacting to changes in Metastorage entries, call a new version of > `changePeersAsync` (see IGNITE-18155) that will provide new followers and > learners. > Concrete applications of learners rebalance and actual learner assignments > are subject of further design (e.g. it can be used in Metastorage and > Replicated Tables). -- This message was sent by Atlassian Jira (v8.20.10#820010)
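A minimal sketch of how a {{partition.assignments.*}} entry could carry the proposed flag, and how a combined set would be split into the followers and learners that the new changePeersAsync expects. The Assignment class and the isFollower field follow the ticket's proposal; everything else is illustrative:

```java
import java.util.Set;
import java.util.stream.Collectors;

/** One node's assignment inside a partition.assignments.* Metastorage entry. */
final class Assignment {
    final String consistentId;
    final boolean isFollower; // proposed flag: false marks the node as a Raft learner

    Assignment(String consistentId, boolean isFollower) {
        this.consistentId = consistentId;
        this.isFollower = isFollower;
    }
}

final class RebalanceSketch {
    /** Split one combined assignment set into the follower or learner subset. */
    static Set<String> select(Set<Assignment> assignments, boolean followers) {
        return assignments.stream()
                .filter(a -> a.isFollower == followers)
                .map(a -> a.consistentId)
                .collect(Collectors.toSet());
    }
}
```

Storing both kinds in one entry keeps a single Metastorage watch per partition; the split happens only at the point where the Raft membership change is issued.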
[jira] [Created] (IGNITE-18175) SQL: value out of type bounds is converted into 0 during implicit casting
Andrey Khitrin created IGNITE-18175: --- Summary: SQL: value out of type bounds is converted into 0 during implicit casting Key: IGNITE-18175 URL: https://issues.apache.org/jira/browse/IGNITE-18175 Project: Ignite Issue Type: Bug Components: sql Affects Versions: 3.0.0-beta1 Reporter: Andrey Khitrin A simple scenario: {code:sql} create table test_e011_INTEGER_from (key_field INT PRIMARY KEY, field1 INTEGER); insert into test_e011_INTEGER_from (key_field, field1) values (1, -2147483648); create table test_e011_SMALLINT (key_field INT PRIMARY KEY, field1_SMALLINT SMALLINT); insert into test_e011_SMALLINT (key_field, field1_SMALLINT) values (1, (select field1 from test_e011_INTEGER_from where key_field=1)); select * from test_e011_SMALLINT; {code} I expect it either to return '1, null' (as in postgres or sqlite3) or to raise an error on insert (as in GG8), since the value -2147483648 is out of bounds for the SMALLINT data type. Instead, '1, 0' is stored in the test_e011_SMALLINT table and returned from the select. In other words, -2147483648 was converted into 0. Such behavior seems incorrect. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-18174) SQL: implement expanded NULL predicate
Andrey Khitrin created IGNITE-18174: --- Summary: SQL: implement expanded NULL predicate Key: IGNITE-18174 URL: https://issues.apache.org/jira/browse/IGNITE-18174 Project: Ignite Issue Type: Improvement Components: sql Reporter: Andrey Khitrin The "expanded NULL predicate" is referenced in the SQL standard as feature F481. It allows something other than a column reference to be used as the row value expression. The following query works in AI2: {code:sql} create table tmp_simple_table (key_field INT PRIMARY KEY,x INT,y INT,z INT); insert into tmp_simple_table (key_field,x,y,z) values (1, 1,1,1); insert into tmp_simple_table (key_field,x,y,z) values (2, 2,2,2); insert into tmp_simple_table (key_field,x,y,z) values (3, null,3,null); insert into tmp_simple_table (key_field,x,y,z) values (4, 4,null,null); insert into tmp_simple_table (key_field,x,y,z) values (5, null,null,null); select x, y, z from tmp_simple_table t where (select x, z from tmp_simple_table where x=t.x and y=t.y and z=t.z) is not NULL; -- expanded NULL predicate {code} But in AI3 beta1 it is not implemented yet. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-18173) SQL: implement EVERY and SOME aggregate functions
Andrey Khitrin created IGNITE-18173: --- Summary: SQL: implement EVERY and SOME aggregate functions Key: IGNITE-18173 URL: https://issues.apache.org/jira/browse/IGNITE-18173 Project: Ignite Issue Type: Improvement Components: sql Reporter: Andrey Khitrin Aggregate functions EVERY and SOME are part of SQL standard. Unfortunately, they're not implemented in AI3 beta1 yet. Could you please implement them? In AI2, they work in the following manner: {code:sql} create table tmp_table_age_name_wage (key_field INT PRIMARY KEY,AGE INT,field1 VARCHAR,field2 INT); insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (1, 42,'John',10); insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (2, 43,'Jack',5); insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (3, 42,'Jen',3); insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (4, 42,'Jim',7); insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (5, 41,'Jess',3); insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (6, 50,'Joe',4); insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (7, 43,'Jeff',2); insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (8, 32,'Joel',8); insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (9, 33,'Joe',3); insert into tmp_table_age_name_wage (key_field,AGE,field1,field2) values (10, 41,'Jill',9); SELECT EVERY(AGE > 20) FROM tmp_table_age_name_wage;--> true SELECT EVERY(AGE > 40) FROM tmp_table_age_name_wage;--> false SELECT SOME(field2 = 9) FROM tmp_table_age_name_wage; --> true SELECT SOME(field2 <> 9) FROM tmp_table_age_name_wage; --> true {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-18172) Raft learners rebalance
Aleksandr Polovtcev created IGNITE-18172: Summary: Raft learners rebalance Key: IGNITE-18172 URL: https://issues.apache.org/jira/browse/IGNITE-18172 Project: Ignite Issue Type: Task Reporter: Aleksandr Polovtcev -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (IGNITE-17959) ReplicaUnavailableException: Replica is not ready.
[ https://issues.apache.org/jira/browse/IGNITE-17959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634723#comment-17634723 ] Sergey Uttsel edited comment on IGNITE-17959 at 11/16/22 8:29 AM: -- We decided to rework the current implementation to avoid creating a new executor. All the logic described below is supposed to be implemented inside ReplicaService and ReplicaManager. # We send an invoke, which can end with a ReplicaUnavailableException. # If we get a ReplicaUnavailableException, then: ## We create a future on which the creation of the replica will be awaited by all subsequent invocations to this replica. Future creation is a synchronization point: other invokes may need to wait on the same future, so the ReplicaService keeps a map into which futures are added via the compute method. ## We add a retry of the original invoke on this future. ## After creating the future, we send a new awaitReplicaRequest to the replica. ## On receiving awaitReplicaRequest, the ReplicaManager thread-safely checks whether the replica is ready, and if it is not, registers a listener on it. ## When the listener triggers, the ReplicaManager sends a response back to the ReplicaService (we could make this approach reactive, but will not do so for now); on activation, the listener is removed. ## On receiving the awaitReplicaRequest response saying that the replica is ready, the future on which the invoke was parked is completed, and the pending invokes are sent. ## We can also get a timeout; in that case, we send a request to remove the listener. was (Author: sergey uttsel): We decide to rework current implementation to avoid creating new executor. # All the logic described below is supposed to be implemented inside ReplicaService and ReplicaManager. 
# We send an invoke, which can end with a ReplicaUnavailableException # If we got a ReplicaUnavailableException, then ## We create a future on which the creation of a replica will be awaited by all subsequent invocations to this replica. Future creations are a point of synchronization, we can have other invokes that need to wait one the future. So in the replicaService we have a map into which we add futures through the compute method. ## We add a retry of the original invoke on this future. ## After creating the future, we send a new awaitReplicaRequest request to the replica. ## On receiving awaitReplicaRequest, ReplicaManager thread-safely checks if the replica is ready, and if it is not ready, it registers a listener on it. ## On the listener triggering, the ReplicaManager sends a response back to the ReplicaService. We can do this approach to reactive, but for now we will not do this. In case of activation of the listener, the listener is removed. ## On receiving of the awaitReplicaRequest response that the replica is ready the future on which the invoke hung is compliting. Invokes are sending. ## We can also get a timeout - upon the fact of the timeout, we send a request to remove the listener. > ReplicaUnavailableException: Replica is not ready. > -- > > Key: IGNITE-17959 > URL: https://issues.apache.org/jira/browse/IGNITE-17959 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 3.0.0-alpha5 >Reporter: Evgeny Stanilovsky >Assignee: Sergey Uttsel >Priority: Major > Labels: ignite-3 > Attachments: err.log, err2.log > > Time Spent: 0.5h > Remaining Estimate: 0h > > h2. 
*Motivation* > Frequently in long running tests can be observed (full msg in attach) : > {noformat} > Caused by: > org.apache.ignite.internal.replicator.exception.ReplicaUnavailableException: > IGN-REP-5 TraceId:82267e0a-aca2-47a3-806e-7922ed61d6d3 Replica is not ready > [replicationGroupId=b5b3a2e5-1342-4a90-97b3-a46e9509a1d6_part_5, > nodeName=iist_n_1] > {noformat} > check for example test: ItIndexSpoolTest.test, numerous runs or run until > failure will highlight the problem. > Additionally we can observe (err2.log attached): > {noformat} > 2022-10-24 13:23:52:308 +0300 > [WARNING][%iist_n_1%Raft-Group-Client-4][RaftGroupServiceImpl] Recoverable > error during the request type=ActionRequestImpl occurred (will be retried on > the randomly selected node): > java.util.concurrent.CompletionException: > java.util.concurrent.TimeoutException > at > java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367) > at > java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376) > at > java.base/java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:1019) > at > java.base/java.util.concurrent.CompletableFuture.postCo
[jira] [Comment Edited] (IGNITE-17959) ReplicaUnavailableException: Replica is not ready.
[ https://issues.apache.org/jira/browse/IGNITE-17959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634723#comment-17634723 ] Sergey Uttsel edited comment on IGNITE-17959 at 11/16/22 8:28 AM: -- We decided to rework the current implementation to avoid creating a new executor. # All the logic described below is supposed to be implemented inside ReplicaService and ReplicaManager. # We send an invoke, which can end with a ReplicaUnavailableException. # If we get a ReplicaUnavailableException, then: ## We create a future on which the creation of the replica will be awaited by all subsequent invocations to this replica. Future creation is a synchronization point: other invokes may need to wait on the same future, so the ReplicaService keeps a map into which futures are added via the compute method. ## We add a retry of the original invoke on this future. ## After creating the future, we send a new awaitReplicaRequest to the replica. ## On receiving awaitReplicaRequest, the ReplicaManager thread-safely checks whether the replica is ready, and if it is not, registers a listener on it. ## When the listener triggers, the ReplicaManager sends a response back to the ReplicaService (we could make this approach reactive, but will not do so for now); on activation, the listener is removed. ## On receiving the awaitReplicaRequest response saying that the replica is ready, the future on which the invoke was parked is completed, and the pending invokes are sent. ## We can also get a timeout; in that case, we send a request to remove the listener. was (Author: sergey uttsel): We decide to rework current implementation to avoid creating new executor. # All the logic described below is supposed to be implemented inside ReplicaService and ReplicaManager. 
# We send an invoke, which can end with a ReplicaUnavailableException ## If we got a ReplicaUnavailableException, then ## We create a future on which the creation of a replica will be awaited by all subsequent invocations to this replica. Future creations are a point of synchronization, we can have other invokes that need to wait one the future. So in the replicaService we have a map into which we add futures through the compute method. ## We add a retry of the original invoke on this future. ## After creating the future, we send a new awaitReplicaRequest request to the replica. ## On receiving awaitReplicaRequest, ReplicaManager thread-safely checks if the replica is ready, and if it is not ready, it registers a listener on it. ## On the listener triggering, the ReplicaManager sends a response back to the ReplicaService. We can do this approach to reactive, but for now we will not do this. In case of activation of the listener, the listener is removed. ## On receiving of the awaitReplicaRequest response that the replica is ready the future on which the invoke hung is compliting. Invokes are sending. ## We can also get a timeout - upon the fact of the timeout, we send a request to remove the listener. > ReplicaUnavailableException: Replica is not ready. > -- > > Key: IGNITE-17959 > URL: https://issues.apache.org/jira/browse/IGNITE-17959 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 3.0.0-alpha5 >Reporter: Evgeny Stanilovsky >Assignee: Sergey Uttsel >Priority: Major > Labels: ignite-3 > Attachments: err.log, err2.log > > Time Spent: 0.5h > Remaining Estimate: 0h > > h2. 
*Motivation* > Frequently in long running tests can be observed (full msg in attach) : > {noformat} > Caused by: > org.apache.ignite.internal.replicator.exception.ReplicaUnavailableException: > IGN-REP-5 TraceId:82267e0a-aca2-47a3-806e-7922ed61d6d3 Replica is not ready > [replicationGroupId=b5b3a2e5-1342-4a90-97b3-a46e9509a1d6_part_5, > nodeName=iist_n_1] > {noformat} > check for example test: ItIndexSpoolTest.test, numerous runs or run until > failure will highlight the problem. > Additionally we can observe (err2.log attached): > {noformat} > 2022-10-24 13:23:52:308 +0300 > [WARNING][%iist_n_1%Raft-Group-Client-4][RaftGroupServiceImpl] Recoverable > error during the request type=ActionRequestImpl occurred (will be retried on > the randomly selected node): > java.util.concurrent.CompletionException: > java.util.concurrent.TimeoutException > at > java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367) > at > java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376) > at > java.base/java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:1019) > at > java.base/java.util.concurrent.CompletableFuture.pos
[jira] [Commented] (IGNITE-17959) ReplicaUnavailableException: Replica is not ready.
[ https://issues.apache.org/jira/browse/IGNITE-17959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17634723#comment-17634723 ] Sergey Uttsel commented on IGNITE-17959: We decided to rework the current implementation to avoid creating a new executor. # All the logic described below is supposed to be implemented inside ReplicaService and ReplicaManager. # We send an invoke, which can end with a ReplicaUnavailableException. ## If we get a ReplicaUnavailableException, then: ## We create a future on which the creation of the replica will be awaited by all subsequent invocations to this replica. Future creation is a synchronization point: other invokes may need to wait on the same future, so the ReplicaService keeps a map into which futures are added via the compute method. ## We add a retry of the original invoke on this future. ## After creating the future, we send a new awaitReplicaRequest to the replica. ## On receiving awaitReplicaRequest, the ReplicaManager thread-safely checks whether the replica is ready, and if it is not, registers a listener on it. ## When the listener triggers, the ReplicaManager sends a response back to the ReplicaService (we could make this approach reactive, but will not do so for now); on activation, the listener is removed. ## On receiving the awaitReplicaRequest response saying that the replica is ready, the future on which the invoke was parked is completed, and the pending invokes are sent. ## We can also get a timeout; in that case, we send a request to remove the listener. > ReplicaUnavailableException: Replica is not ready. 
> -- > > Key: IGNITE-17959 > URL: https://issues.apache.org/jira/browse/IGNITE-17959 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 3.0.0-alpha5 >Reporter: Evgeny Stanilovsky >Assignee: Sergey Uttsel >Priority: Major > Labels: ignite-3 > Attachments: err.log, err2.log > > Time Spent: 0.5h > Remaining Estimate: 0h > > h2. *Motivation* > Frequently in long running tests can be observed (full msg in attach) : > {noformat} > Caused by: > org.apache.ignite.internal.replicator.exception.ReplicaUnavailableException: > IGN-REP-5 TraceId:82267e0a-aca2-47a3-806e-7922ed61d6d3 Replica is not ready > [replicationGroupId=b5b3a2e5-1342-4a90-97b3-a46e9509a1d6_part_5, > nodeName=iist_n_1] > {noformat} > check for example test: ItIndexSpoolTest.test, numerous runs or run until > failure will highlight the problem. > Additionally we can observe (err2.log attached): > {noformat} > 2022-10-24 13:23:52:308 +0300 > [WARNING][%iist_n_1%Raft-Group-Client-4][RaftGroupServiceImpl] Recoverable > error during the request type=ActionRequestImpl occurred (will be retried on > the randomly selected node): > java.util.concurrent.CompletionException: > java.util.concurrent.TimeoutException > at > java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367) > at > java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376) > at > java.base/java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:1019) > at > java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) > at > java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) > at > java.base/java.util.concurrent.CompletableFuture$Timeout.run(CompletableFuture.java:2792) > at > java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) > at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) > at > 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) > at java.base/java.lang.Thread.run(Thread.java:829) > Caused by: java.util.concurrent.TimeoutException > ... 7 more > 2022-10-24 13:24:13:437 +0300 > [WARNING][%iist_n_1%Raft-Group-Client-2][RaftGroupServiceImpl] Recoverable > error during the request type=ActionRequestImpl occurred (will be retried on > the randomly selected node): > java.util.concurrent.CompletionException: > java.util.concurrent.TimeoutException > at > java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367) > at > java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376) > at > java.base/java.util.concurrent.CompletableFuture$UniRelay.try
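The retry scheme described in the comments above can be sketched as follows. The class and message names are hypothetical; the real logic belongs to ReplicaService and ReplicaManager:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch: park invokes for a not-yet-ready replica on a shared per-replica future. */
final class ReplicaAwaitSketch {
    // One future per replica group. computeIfAbsent is the synchronization point,
    // so concurrent invokes that hit the same unavailable replica share one future
    // and only one awaitReplicaRequest is sent.
    private final Map<String, CompletableFuture<Void>> awaitFutures = new ConcurrentHashMap<>();

    /** Called when an invoke fails with a "replica is not ready" error. */
    CompletableFuture<Void> awaitReplica(String replicaGroupId) {
        return awaitFutures.computeIfAbsent(replicaGroupId, id -> {
            CompletableFuture<Void> fut = new CompletableFuture<>();
            sendAwaitReplicaRequest(id); // ask the manager to signal readiness
            return fut;
        });
    }

    /** Invoked when the awaitReplicaRequest response reports the replica is ready. */
    void onReplicaReady(String replicaGroupId) {
        CompletableFuture<Void> fut = awaitFutures.remove(replicaGroupId);
        if (fut != null) {
            fut.complete(null); // every parked invoke retries now
        }
    }

    private void sendAwaitReplicaRequest(String id) {
        // Network call to the ReplicaManager, elided in this sketch.
    }
}
```

Each parked caller would chain its retry onto the shared future, for instance `awaitReplica(id).thenCompose(v -> retryOriginalInvoke())`, and a timeout path would remove the listener on the manager side as the comment describes.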
[jira] [Updated] (IGNITE-18171) Describe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yury Gerzhedovich updated IGNITE-18171: --- Component/s: sql > Describe nodes start/stop scenarios > -- > > Key: IGNITE-18171 > URL: https://issues.apache.org/jira/browse/IGNITE-18171 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Andrey Mashenkov >Priority: Major > Labels: ignite-3 > > TBD -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18171) Describe nodes start/stop scenarios
[ https://issues.apache.org/jira/browse/IGNITE-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Mashenkov updated IGNITE-18171: -- Description: TBD > Describe nodes start/stop scenarios > -- > > Key: IGNITE-18171 > URL: https://issues.apache.org/jira/browse/IGNITE-18171 > Project: Ignite > Issue Type: Improvement >Reporter: Andrey Mashenkov >Priority: Major > Labels: ignite-3 > > TBD -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-18171) Describe nodes start/stop scenarios
Andrey Mashenkov created IGNITE-18171: - Summary: Describe nodes start/stop scenarios Key: IGNITE-18171 URL: https://issues.apache.org/jira/browse/IGNITE-18171 Project: Ignite Issue Type: Improvement Reporter: Andrey Mashenkov -- This message was sent by Atlassian Jira (v8.20.10#820010)