[jira] [Updated] (IGNITE-19877) Sql. Erroneous cast possibility Custom object to Numeric.
[ https://issues.apache.org/jira/browse/IGNITE-19877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky updated IGNITE-19877: Description: {code:java} @Test public void test0() { String query = format("SELECT CAST(? AS DECIMAL(5, 1))"); assertQuery(query).withParams(LocalTime.now()).returns(2).ok(); } {code} This throws a Numeric overflow exception, which seems to be incorrect behavior. The scope of this issue seems to be to check casting of: 1. not supported objects 2. IgniteCustomType The full casting matrix needs to be fixed in the scope of [1] [1] https://issues.apache.org/jira/browse/IGNITE-20069 was: {code:java} @Test public void test0() { String query = format("SELECT CAST(? AS DECIMAL(5, 1))"); sql(query).withParams(LocalTime.now()).returns(2).ok(); } {code} This throws a Numeric overflow exception, which seems to be incorrect behavior. > Sql. Erroneous cast possibility Custom object to Numeric. > - > > Key: IGNITE-19877 > URL: https://issues.apache.org/jira/browse/IGNITE-19877 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 3.0.0-beta1 >Reporter: Evgeny Stanilovsky >Assignee: Evgeny Stanilovsky >Priority: Major > Labels: ignite-3 > Time Spent: 1h 10m > Remaining Estimate: 0h > > {code:java} > @Test > public void test0() { > String query = format("SELECT CAST(? AS DECIMAL(5, 1))"); > assertQuery(query).withParams(LocalTime.now()).returns(2).ok(); > } > {code} > This throws a Numeric overflow exception, which seems to be incorrect behavior. > The scope of this issue seems to be to check casting of: > 1. not supported objects > 2. IgniteCustomType > The full casting matrix needs to be fixed in the scope of [1] > [1] https://issues.apache.org/jira/browse/IGNITE-20069 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19877) Sql. Erroneous cast possibility Custom object to Numeric.
[ https://issues.apache.org/jira/browse/IGNITE-19877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky updated IGNITE-19877: Description: {code:java} @Test public void test0() { String query = format("SELECT CAST(? AS DECIMAL(5, 1))"); sql(query).withParams(LocalTime.now()).returns(2).ok(); } {code} This throws a Numeric overflow exception, which seems to be incorrect behavior. was: {code:java} @Test public void test0() { String query = format("SELECT CAST(? AS DECIMAL(5, 1))"); sql(query).withParams(LocalDateTime.now()).returns(2).ok(); } {code} This throws a Numeric overflow exception, which seems to be incorrect behavior. > Sql. Erroneous cast possibility Custom object to Numeric. > - > > Key: IGNITE-19877 > URL: https://issues.apache.org/jira/browse/IGNITE-19877 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 3.0.0-beta1 >Reporter: Evgeny Stanilovsky >Assignee: Evgeny Stanilovsky >Priority: Major > Labels: ignite-3 > Time Spent: 1h 10m > Remaining Estimate: 0h > > {code:java} > @Test > public void test0() { > String query = format("SELECT CAST(? AS DECIMAL(5, 1))"); > sql(query).withParams(LocalTime.now()).returns(2).ok(); > } > {code} > This throws a Numeric overflow exception, which seems to be incorrect behavior. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20114) DistributionZoneManager should listen CatalogService events instead of configuration
[ https://issues.apache.org/jira/browse/IGNITE-20114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-20114: - Issue Type: Improvement (was: New Feature) > DistributionZoneManager should listen CatalogService events instead of > configuration > > > Key: IGNITE-20114 > URL: https://issues.apache.org/jira/browse/IGNITE-20114 > Project: Ignite > Issue Type: Improvement >Reporter: Kirill Tkalenko >Assignee: Kirill Tkalenko >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > As of now, *DistributionZoneManager* listens to configuration events to create > internal structures. > Let's make *DistributionZoneManager* listen to CatalogService events instead. > Note: Some tests may fail due to changed guarantees and incomplete related > tickets. So, let's do this in a separate feature branch. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-20129) Gradle build fails on some agents because of the invalid CMake config
[ https://issues.apache.org/jira/browse/IGNITE-20129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750091#comment-17750091 ] Pavel Tupitsyn commented on IGNITE-20129: - [~isapego] looks good to me > Gradle build fails on some agents because of the invalid CMake config > - > > Key: IGNITE-20129 > URL: https://issues.apache.org/jira/browse/IGNITE-20129 > Project: Ignite > Issue Type: Bug > Components: build, platforms >Reporter: Igor Sapego >Assignee: Igor Sapego >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 10m > Remaining Estimate: 0h > > Here is an example of a failing build: > https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_BuildApacheIgnite/7405565?hideProblemsFromDependencies=false=false=true > {noformat} > Task :platforms:cmakeConfigureOdbc FAILED > :platforms:cmakeConfigureOdbc > CMakePlugin.cmakeConfigure - ERRORS: > gtest/1.12.1: WARN: Package binary is corrupted, removing: > fad4f20dcccb91f288c5e1c3c6370be311eb3026 > gtest/1.12.1: WARN: Build folder is dirty, removing it: > /opt/buildagent/.conan/data/gtest/1.12.1/_/_/build/fad4f20dcccb91f288c5e1c3c6370be311eb3026 > gtest/1.12.1: WARN: Using the new toolchains and generators without > specifying a build profile (e.g: -pr:b=default) is discouraged and might > cause failures and unexpected behavior > ERROR: gtest/1.12.1: Error in generate() method, line 122 > tc.generate() > TemplateAssertionError: no test named 'boolean' > CMake Error at cmake/conan.cmake:651 (message): > Execution failed for task ':platforms:cmakeConfigureOdbc'. > org.gradle.api.GradleException: [cmakeConfigureOdbc]Error: CMAKE returned 1 > Conan install failed='1' > Call Stack (most recent call first): > CMakeLists.txt:75 (conan_cmake_install) > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-19922) Gradle checkstyle tasks are greedy
[ https://issues.apache.org/jira/browse/IGNITE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750086#comment-17750086 ] Pavel Tupitsyn commented on IGNITE-19922: - [~ksizov] *--max-workers* helps, thank you! > Gradle checkstyle tasks are greedy > -- > > Key: IGNITE-19922 > URL: https://issues.apache.org/jira/browse/IGNITE-19922 > Project: Ignite > Issue Type: New Feature >Reporter: Mikhail Pochatkin >Priority: Major > Labels: ignite-3 > Attachments: image-2023-07-06-11-18-40-515.png, screenshot-1.png > > > This is memory consumption during {{gradlew checkstyleMain}} execution - > goes from ~10 GB to 30. All CPU cores are also at 100%. This causes chrome > tabs to unload and overall stress on the system. > Also, RAM usage does not go down after this command unless I kill/stop Gradle > daemons > !image-2023-07-06-11-18-40-515.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-20129) Gradle build fails on some agents because of the invalid CMake config
[ https://issues.apache.org/jira/browse/IGNITE-20129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17750048#comment-17750048 ] Igor Sapego commented on IGNITE-20129: -- Runs on all agents: https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_BuildApacheIgnite?branch=pull%2F2395=builds > Gradle build fails on some agents because of the invalid CMake config > - > > Key: IGNITE-20129 > URL: https://issues.apache.org/jira/browse/IGNITE-20129 > Project: Ignite > Issue Type: Bug > Components: build, platforms >Reporter: Igor Sapego >Assignee: Igor Sapego >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 10m > Remaining Estimate: 0h > > Here is an example of a failing build: > https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_BuildApacheIgnite/7405565?hideProblemsFromDependencies=false=false=true > {noformat} > Task :platforms:cmakeConfigureOdbc FAILED > :platforms:cmakeConfigureOdbc > CMakePlugin.cmakeConfigure - ERRORS: > gtest/1.12.1: WARN: Package binary is corrupted, removing: > fad4f20dcccb91f288c5e1c3c6370be311eb3026 > gtest/1.12.1: WARN: Build folder is dirty, removing it: > /opt/buildagent/.conan/data/gtest/1.12.1/_/_/build/fad4f20dcccb91f288c5e1c3c6370be311eb3026 > gtest/1.12.1: WARN: Using the new toolchains and generators without > specifying a build profile (e.g: -pr:b=default) is discouraged and might > cause failures and unexpected behavior > ERROR: gtest/1.12.1: Error in generate() method, line 122 > tc.generate() > TemplateAssertionError: no test named 'boolean' > CMake Error at cmake/conan.cmake:651 (message): > Execution failed for task ':platforms:cmakeConfigureOdbc'. > org.gradle.api.GradleException: [cmakeConfigureOdbc]Error: CMAKE returned 1 > Conan install failed='1' > Call Stack (most recent call first): > CMakeLists.txt:75 (conan_cmake_install) > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20129) Gradle build fails on some agents because of the invalid CMake config
Igor Sapego created IGNITE-20129: Summary: Gradle build fails on some agents because of the invalid CMake config Key: IGNITE-20129 URL: https://issues.apache.org/jira/browse/IGNITE-20129 Project: Ignite Issue Type: Bug Components: build, platforms Reporter: Igor Sapego Assignee: Igor Sapego Fix For: 3.0.0-beta2 Here is an example of a failing build: https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_BuildApacheIgnite/7405565?hideProblemsFromDependencies=false=false=true {noformat} Task :platforms:cmakeConfigureOdbc FAILED :platforms:cmakeConfigureOdbc CMakePlugin.cmakeConfigure - ERRORS: gtest/1.12.1: WARN: Package binary is corrupted, removing: fad4f20dcccb91f288c5e1c3c6370be311eb3026 gtest/1.12.1: WARN: Build folder is dirty, removing it: /opt/buildagent/.conan/data/gtest/1.12.1/_/_/build/fad4f20dcccb91f288c5e1c3c6370be311eb3026 gtest/1.12.1: WARN: Using the new toolchains and generators without specifying a build profile (e.g: -pr:b=default) is discouraged and might cause failures and unexpected behavior ERROR: gtest/1.12.1: Error in generate() method, line 122 tc.generate() TemplateAssertionError: no test named 'boolean' CMake Error at cmake/conan.cmake:651 (message): Execution failed for task ':platforms:cmakeConfigureOdbc'. org.gradle.api.GradleException: [cmakeConfigureOdbc]Error: CMAKE returned 1 Conan install failed='1' Call Stack (most recent call first): CMakeLists.txt:75 (conan_cmake_install) {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-19922) Gradle checkstyle tasks are greedy
[ https://issues.apache.org/jira/browse/IGNITE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17749935#comment-17749935 ] Kirill Sizov commented on IGNITE-19922: Some low-hanging fruit for configuring Gradle to reduce the pressure on the OS. The reason why the daemons don't go away is the way Gradle was designed: ??These worker daemon processes will persist across builds and can be reused during subsequent builds. If system resources get low, however, Gradle will stop any unused worker daemons.?? [https://docs.gradle.org/current/userguide/worker_api.html#creating_a_worker_daemon] Starting the build with the {{--no-daemon}} parameter should help. There is also a way to limit the number of parallel workers to any desired number: [https://docs.gradle.org/current/userguide/build_environment.html#sec:gradle_configuration_properties] Either by passing the extra command-line argument {{--max-workers}}, or by setting the {{org.gradle.workers.max}} property in {{~/.gradle/gradle.properties}} > Gradle checkstyle tasks are greedy > -- > > Key: IGNITE-19922 > URL: https://issues.apache.org/jira/browse/IGNITE-19922 > Project: Ignite > Issue Type: New Feature >Reporter: Mikhail Pochatkin >Priority: Major > Labels: ignite-3 > Attachments: image-2023-07-06-11-18-40-515.png, screenshot-1.png > > > This is memory consumption during {{gradlew checkstyleMain}} execution - > goes from ~10 GB to 30. All CPU cores are also at 100%. This causes chrome > tabs to unload and overall stress on the system. > Also, RAM usage does not go down after this command unless I kill/stop Gradle > daemons > !image-2023-07-06-11-18-40-515.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
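The two knobs mentioned above can also be made persistent in one place; a minimal {{gradle.properties}} sketch, assuming Gradle's documented property names (the value 4 is illustrative, tune it per machine):

```properties
# ~/.gradle/gradle.properties — illustrative values, not project defaults
# Cap the number of parallel worker processes Gradle may spawn:
org.gradle.workers.max=4
# Optionally disable the long-lived daemon so memory is released after the build:
org.gradle.daemon=false
```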
[jira] [Created] (IGNITE-20128) Sql. Clean up ignored SQL tests
Yury Gerzhedovich created IGNITE-20128: -- Summary: Sql. Clean up ignored SQL tests Key: IGNITE-20128 URL: https://issues.apache.org/jira/browse/IGNITE-20128 Project: Ignite Issue Type: Improvement Components: sql Reporter: Yury Gerzhedovich Assignee: Yury Gerzhedovich We have a bunch of muted tests that actually work, and muted tests that shouldn't work at all. Let's make our tests a little less messy. Tests requiring attention: sql/sqlite/join/join1.test_ignore sql/sqlite/select2/select2_erroneous_hash_res.test_ignored sql/sqlite/select2/select2_erroneous_res.test_ignored sql/sqlite/select3/select3_erroneous_hash_res.test_ignore sql/sqlite/select3/select3_erroneous_res.test_ignore sql/subquery/table/test_aliasing.test_ignore sql/filter/test_constant_comparisons.test_ignore sql/insert/test_insert_type.test_ignore sql/filter/test_obsolete_filters.test_ignore sql/order/test_order_same_value.test_slow_ignore sql/subquery/table/test_table_subquery.test_ignore sql/subquery/any_all/test_uncorrelated_all_subquery.test_ignore sql/subquery/any_all/test_uncorrelated_any_subquery.test_ignored sql/subquery/scalar/test_uncorrelated_scalar_subquery.test_ignore Tickets requiring attention: https://issues.apache.org/jira/browse/IGNITE-14617 https://issues.apache.org/jira/browse/IGNITE-15561 https://issues.apache.org/jira/browse/IGNITE-15583 https://issues.apache.org/jira/browse/IGNITE-15586 https://issues.apache.org/jira/browse/IGNITE-15605 https://issues.apache.org/jira/browse/IGNITE-17644 https://issues.apache.org/jira/browse/IGNITE-17921 https://issues.apache.org/jira/browse/IGNITE-18365 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-20058) NPE in DistributionZoneManagerAlterFilterTest#testAlterFilter
[ https://issues.apache.org/jira/browse/IGNITE-20058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Uttsel reassigned IGNITE-20058: -- Assignee: Sergey Uttsel (was: Alexander Lapin) > NPE in DistributionZoneManagerAlterFilterTest#testAlterFilter > - > > Key: IGNITE-20058 > URL: https://issues.apache.org/jira/browse/IGNITE-20058 > Project: Ignite > Issue Type: Bug >Reporter: Mirza Aliev >Assignee: Sergey Uttsel >Priority: Major > Labels: ignite-3 > > {{DistributionZoneManagerAlterFilterTest.testAlterFilter}} is flaky and with > very low failure rate it fails with NPE (1 fail in 1500 runs) > {noformat} > 2023-07-25 16:48:30:520 +0400 > [ERROR][%test%metastorage-watch-executor-0][WatchProcessor] Error occurred > when processing a watch event > java.lang.NullPointerException > at > org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateScaleDown$18(DistributionZoneManager.java:737) > at > org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488) > at > org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136) > at > org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129) > at > org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown > Source) > {noformat} > {code:java} > 2023-08-01 15:55:40:440 +0300 > [INFO][%test%metastorage-watch-executor-1][ConfigurationRegistry] Failed to > notify configuration listener > java.lang.NullPointerException > at > org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.updateZoneConfiguration(CausalityDataNodesEngine.java:570) > at > org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.onUpdateFilter(CausalityDataNodesEngine.java:557) > at > 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateFilter$18(DistributionZoneManager.java:774) > at > org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488) > at > org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136) > at > org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129) > at > org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown > Source){code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20116) Linearize storage updates with safeTime adjustment rules
[ https://issues.apache.org/jira/browse/IGNITE-20116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20116: - Description: h3. Motivation The logic of setting safeTime explicitly prohibits setting a larger time ahead of a smaller one. In other words, all data updates within storages should be strictly ordered by the safeTime associated with such updates. Currently, this is not true: * We associate an update and its safe time during update command creation (see org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener) {code:java} UpdateCommandBuilder bldr = MSG_FACTORY.updateCommand() ... .safeTimeLong(hybridClock.nowLong()); {code} * However, neither applying a given command locally nor sending it to the raft is linearized with the associated safeTime value. In other words, it's possible that we will assign t0 to cmd0 and t1 to cmd1, but apply cmd1 before cmd0 locally. Simply speaking, we lack some sort of synchronization here. h3. Definition of Done * It's required to linearize update application to preserve the monotonicity guarantees of safeTime adjustment. was: h3. Motivation The logic of setting safeTime explicitly prohibits setting a larger time ahead of a smaller one. In other words, all data updates within storages should be strictly ordered by the safeTime associated with such updates. Currently, this is not true: * We associate an update and its safe time during update command creation (see org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener) {code:java} UpdateCommandBuilder bldr = MSG_FACTORY.updateCommand() ... .safeTimeLong(hybridClock.nowLong()); {code} * However, neither applying a given command locally nor sending it to the raft is linearized with the associated safeTime value. In other words, it's possible that we will assign t0 to cmd0 and t1 to cmd1, but apply cmd1 before cmd0 locally. 
Simply speaking, we lack some sort of synchronization here. h3. Definition of Done * It's required to linearize update application to preserve the monotonicity guarantees of safeTime adjustment. > Linearize storage updates with safeTime adjustment rules > > > Key: IGNITE-20116 > URL: https://issues.apache.org/jira/browse/IGNITE-20116 > Project: Ignite > Issue Type: Bug >Reporter: Alexander Lapin >Priority: Blocker > Labels: ignite-3, transactions > > h3. Motivation > The logic of setting safeTime explicitly prohibits setting a larger time > ahead of a smaller one. In other words, all data updates within storages > should be strictly ordered by the safeTime associated with such updates. > Currently, this is not true: > * We associate an update and its safe time during update command creation (see > org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener) > {code:java} > UpdateCommandBuilder bldr = MSG_FACTORY.updateCommand() > ... > .safeTimeLong(hybridClock.nowLong()); {code} > * However, neither applying a given command locally nor sending it to the > raft is linearized with the associated safeTime value. In other words, it's > possible that we will assign t0 to cmd0 and t1 to cmd1, but apply > cmd1 before cmd0 locally. > Simply speaking, we lack some sort of synchronization here. > h3. Definition of Done > * It's required to linearize update application to preserve the > monotonicity guarantees of safeTime adjustment. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
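The monotonicity rule described above can be sketched as a small guard that serializes updates and rejects safeTime regressions. This is an illustrative sketch only, not Ignite's actual API: the class SafeTimeGuard and its methods are hypothetical.

```java
/** Hypothetical sketch: serialize storage updates so safeTime never goes backwards. */
public class SafeTimeGuard {
    private long lastSafeTime = Long.MIN_VALUE;

    /** Runs the update only if its safeTime does not regress; returns whether it ran. */
    public synchronized boolean tryApply(long safeTime, Runnable update) {
        if (safeTime < lastSafeTime) {
            return false; // cmd with a smaller safeTime arrived late: reject to keep monotonicity
        }
        lastSafeTime = safeTime;
        update.run(); // applied under the same lock, so updates are totally ordered by safeTime
        return true;
    }

    public static void main(String[] args) {
        SafeTimeGuard guard = new SafeTimeGuard();
        System.out.println(guard.tryApply(10, () -> {})); // true
        System.out.println(guard.tryApply(5, () -> {}));  // false: would apply cmd1 before cmd0
    }
}
```

In the scenario from the description, cmd0 with t0 arriving after cmd1 with t1 (t0 < t1) would be rejected rather than applied out of order.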
[jira] [Updated] (IGNITE-20127) Implement 1rtt RW transaction await logic in pre commit
[ https://issues.apache.org/jira/browse/IGNITE-20127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20127: - Description: h3. Motivation Our transaction protocol assumes that all required request validations, lock acquisitions and similar activities are performed on a primary replica prior to command replication, meaning that it's not necessary to await replication for every request one by one; rather, it's required to await them all at once in the pre-commit phase. Most of what is required for such an all-at-once await has already been implemented. h3. Definition of Done * It's required to do the command replication in an async manner, meaning that it's necessary to return the result to the client right after replication is triggered. Currently, we return the replication result in PartitionReplicaListener#applyCmdWithExceptionHandling and await it in ReplicaManager#onReplicaMessageReceive {code:java} CompletableFuture result = replica.processRequest(request); result.handle((res, ex) -> { ... msg = prepareReplicaResponse(requestTimestamp, res); ... clusterNetSvc.messagingService().respond(senderConsistentId, msg, correlationId); {code} * And of course it's required to await the replication of all commands at once in pre-commit. We already have such logic in ReadWriteTransactionImpl#finish {code:java} protected CompletableFuture finish(boolean commit) { ... CompletableFuture mainFinishFut = CompletableFuture .allOf(enlistedResults.toArray(new CompletableFuture[0])) .thenCompose( ... return txManager.finish( ...{code} however, it should await not the result from the primary, but the replication completion. h3. Implementation Notes I believe it's possible to implement it in the following way: * ReplicaManager should await only primary-related actions like lock acquisition, and store the replication future in a sort of map. It's possible to use safeTime as the request Id. 
* Transaction should send replicationAwaitRequest in an async manner right after the replicationResponse from the primary is received. * enlistedResults should be switched to replicationAwaitResponse. was: h3. Motivation Our transaction protocol assumes that all required request validations, lock acquisitions and similar activities are performed on a primary replica prior to command replication, meaning that it's not necessary to await replication for every request one by one; rather, it's required to await them all at once in the pre-commit phase. Most of what is required for such an all-at-once await has already been implemented. h3. Definition of Done * It's required to do the command replication in an async manner, meaning that it's required to return the result to the client right after replication is triggered. Currently, we return the replication result in PartitionReplicaListener#applyCmdWithExceptionHandling and await it in ReplicaManager#onReplicaMessageReceive {code:java} CompletableFuture result = replica.processRequest(request); result.handle((res, ex) -> { ... msg = prepareReplicaResponse(requestTimestamp, res); ... clusterNetSvc.messagingService().respond(senderConsistentId, msg, correlationId); {code} > Implement 1rtt RW transaction await logic in pre commit > --- > > Key: IGNITE-20127 > URL: https://issues.apache.org/jira/browse/IGNITE-20127 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > > h3. Motivation > Our transaction protocol assumes that all required request validations, lock > acquisitions and similar activities are performed on a primary replica prior > to command replication, meaning that it's not necessary to await replication > for every request one by one; rather, it's required to await them all at once > in the pre-commit phase. Most of what is required for such an all-at-once await has > already been implemented. > h3. 
Definition of Done > * It's required to do the command replication in an async manner, meaning > that it's necessary to return the result to the client right after > replication is triggered. Currently, we return the replication result in > PartitionReplicaListener#applyCmdWithExceptionHandling and await it in > ReplicaManager#onReplicaMessageReceive > {code:java} > CompletableFuture result = replica.processRequest(request); > result.handle((res, ex) -> { > ... > msg = prepareReplicaResponse(requestTimestamp, res); > ... > clusterNetSvc.messagingService().respond(senderConsistentId, msg, > correlationId); {code} > * And of course it's required to await the replication of all commands at once in > pre-commit. We already have such
[jira] [Updated] (IGNITE-20127) Implement 1rtt RW transaction await logic in pre commit
[ https://issues.apache.org/jira/browse/IGNITE-20127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20127: - Description: h3. Motivation Our transaction protocol assumes that all required request validations, lock acquisitions and similar activities are performed on a primary replica prior to command replication, meaning that it's not necessary to await replication for every request one by one; rather, it's required to await them all at once in the pre-commit phase. Most of what is required for such an all-at-once await has already been implemented. h3. Definition of Done * It's required to do the command replication in an async manner, meaning that it's necessary to return the result to the client right after replication is triggered. Currently, we return the replication result in PartitionReplicaListener#applyCmdWithExceptionHandling and await it in ReplicaManager#onReplicaMessageReceive {code:java} CompletableFuture result = replica.processRequest(request); result.handle((res, ex) -> { ... msg = prepareReplicaResponse(requestTimestamp, res); ... clusterNetSvc.messagingService().respond(senderConsistentId, msg, correlationId); {code} * And of course it's required to await the replication of all commands at once in pre-commit. We already have such logic in ReadWriteTransactionImpl#finish {code:java} protected CompletableFuture finish(boolean commit) { ... CompletableFuture mainFinishFut = CompletableFuture .allOf(enlistedResults.toArray(new CompletableFuture[0])) .thenCompose( ... return txManager.finish( ...{code} however, it should await not the result from the primary, but the replication completion. h3. Implementation Notes I believe it's possible to implement it in the following way: * ReplicaManager should await only primary-related actions like lock acquisition, and store the replication future in a sort of map. It's possible to use safeTime as the request Id. 
* Transaction should send replicationAwaitRequest in an async manner right after the replicationResponse from the primary is received. * enlistedResults should be switched to replicationAwaitResponse. was: h3. Motivation Our transaction protocol assumes that all required request validations, lock acquisitions and similar activities are performed on a primary replica prior to command replication, meaning that it's not necessary to await replication for every request one by one; rather, it's required to await them all at once in the pre-commit phase. Most of what is required for such an all-at-once await has already been implemented. h3. Definition of Done * It's required to do the command replication in an async manner, meaning that it's necessary to return the result to the client right after replication is triggered. Currently, we return the replication result in PartitionReplicaListener#applyCmdWithExceptionHandling and await it in ReplicaManager#onReplicaMessageReceive {code:java} CompletableFuture result = replica.processRequest(request); result.handle((res, ex) -> { ... msg = prepareReplicaResponse(requestTimestamp, res); ... clusterNetSvc.messagingService().respond(senderConsistentId, msg, correlationId); {code} * And of course it's required to await the replication of all commands at once in pre-commit. We already have such logic in ReadWriteTransactionImpl#finish {code:java} protected CompletableFuture finish(boolean commit) { ... CompletableFuture mainFinishFut = CompletableFuture .allOf(enlistedResults.toArray(new CompletableFuture[0])) .thenCompose( ... return txManager.finish( ...{code} however, it should await not the result from the primary, but the replication completion. h3. Implementation Notes I believe it's possible to implement it in the following way: * ReplicaManager should await only primary-related actions like lock acquisition, and store the replication future in a sort of map. It's possible to use safeTime as the request Id. 
* Transaction should send replicationAwaitRequest in an async manner right after the replicationResponse from the primary is received. * enlistedResults should be switched to replicationAwaitResponse. > Implement 1rtt RW transaction await logic in pre commit > --- > > Key: IGNITE-20127 > URL: https://issues.apache.org/jira/browse/IGNITE-20127 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > > h3. Motivation > Our transaction protocol assumes that all required request validations, lock > acquisitions and similar activities are performed on a primary replica prior > to command
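The "await all replications at once in pre-commit" idea above is essentially CompletableFuture.allOf over the enlisted replication futures. A minimal self-contained sketch, not Ignite's actual API (the names replications, PreCommitAwaitDemo, and the "COMMITTED" result are illustrative):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class PreCommitAwaitDemo {
    public static void main(String[] args) {
        // Each future stands in for the replication acknowledgement of one update command;
        // the client got its result earlier, right after replication was triggered.
        List<CompletableFuture<Void>> replications = List.of(
                CompletableFuture.runAsync(() -> { /* replicate cmd0 */ }),
                CompletableFuture.runAsync(() -> { /* replicate cmd1 */ }));

        // Pre-commit: await every outstanding replication at once, then finish the tx.
        String outcome = CompletableFuture
                .allOf(replications.toArray(new CompletableFuture[0]))
                .thenApply(v -> "COMMITTED")
                .join();

        System.out.println(outcome); // prints COMMITTED
    }
}
```

The design point is that no individual request blocks on its own replication; only the single allOf barrier in pre-commit does.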
[jira] [Commented] (IGNITE-18875) Sql. Drop AbstractPlannerTest.TestTable.
[ https://issues.apache.org/jira/browse/IGNITE-18875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17749912#comment-17749912 ] Gael Yimen Yimga commented on IGNITE-18875: --- [~jooger] Sorry I got distracted by some other work. You can assign it to someone else. I will pick up another ticket. > Sql. Drop AbstractPlannerTest.TestTable. > > > Key: IGNITE-18875 > URL: https://issues.apache.org/jira/browse/IGNITE-18875 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Andrey Mashenkov >Assignee: Andrey Mashenkov >Priority: Major > Labels: ignite-3, newbie, tech-debt-test > Fix For: 3.0.0-beta2 > > Attachments: Screen Shot 2023-04-03 at 1.04.39 AM.png > > Time Spent: 2h 20m > Remaining Estimate: 0h > > {{org.apache.ignite.internal.sql.engine.planner.AbstractPlannerTest.TestTable}} > uses > IgniteTypeFactory.createJavaType() method to create RelDataType from java > classes. > We should create tables in tests in same way we do in product code. > Let's use test framework for schema configuration in tests and replace > {code:java} > org.apache.ignite.internal.sql.engine.planner.AbstractPlannerTest.TestTable > {code} > usage with > {code:java} > org.apache.ignite.internal.sql.engine.framework.TestTable > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20127) Implement 1rtt RW transaction await logic in pre commit
[ https://issues.apache.org/jira/browse/IGNITE-20127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20127: - Description: h3. Motivation Our transaction protocol assumes that all required request validations, lock acquisitions and similar activities are performed on a primary replica prior to command replication, meaning that it's not necessary to await replication for every request one by one rather it's required to await them all at once in pre-commit phase. Most of what is required for such all at once await has already been implemented. h3. Definition of Done * It's required to do the command replication in an async manner, meaning that it's required to return the result to the client right after replication is triggered. Currently we return replication result in PartitionReplicaListener#applyCmdWithExceptionHandling and await it in ReplicaManager#onReplicaMessageReceive {code:java} CompletableFuture result = replica.processRequest(request); result.handle((res, ex) -> { ... msg = prepareReplicaResponse(requestTimestamp, res); ... clusterNetSvc.messagingService().respond(senderConsistentId, msg, correlationId); {code} > Implement 1rtt RW transaction await logic in pre commit > --- > > Key: IGNITE-20127 > URL: https://issues.apache.org/jira/browse/IGNITE-20127 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > > h3. Motivation > Our transaction protocol assumes that all required request validations, lock > acquisitions and similar activities are performed on a primary replica prior > to command replication, meaning that it's not necessary to await replication > for every request one by one rather it's required to await them all at once > in pre-commit phase. Most of what is required for such all at once await has > already been implemented. > h3. 
Definition of Done > * It's required to do the command replication in an async manner, meaning > that it's required to return the result to the client right after replication > is triggered. Currently we return the replication result in > PartitionReplicaListener#applyCmdWithExceptionHandling and await it in > ReplicaManager#onReplicaMessageReceive > {code:java} > CompletableFuture result = replica.processRequest(request); > result.handle((res, ex) -> { > ... > msg = prepareReplicaResponse(requestTimestamp, res); > ... > clusterNetSvc.messagingService().respond(senderConsistentId, msg, > correlationId); {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
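The all-at-once await described in IGNITE-20127 can be sketched as follows. This is a hypothetical illustration, not Ignite code: `TxReplicationTracker`, `enlist` and `awaitAll` are invented names; the point is only that each request triggers replication without awaiting it, and the transaction awaits every in-flight replication future once, in the pre-commit phase.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch: per-request replication is only triggered, never
// awaited; all futures are awaited together at pre-commit.
class TxReplicationTracker {
    private final List<CompletableFuture<Void>> inFlight = new ArrayList<>();

    // Called once per request: record the replication future and return
    // control to the client immediately.
    void enlist(CompletableFuture<Void> replicationFut) {
        inFlight.add(replicationFut);
    }

    // Called once in the pre-commit phase: await all replications at once.
    CompletableFuture<Void> awaitAll() {
        return CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0]));
    }
}
```

This buys one round trip per transaction: the client observes each operation as soon as replication is triggered, and pays the replication latency only once, at commit.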
[jira] [Created] (IGNITE-20127) Implement 1rtt RW transaction await logic in pre commit
Alexander Lapin created IGNITE-20127: Summary: Implement 1rtt RW transaction await logic in pre commit Key: IGNITE-20127 URL: https://issues.apache.org/jira/browse/IGNITE-20127 Project: Ignite Issue Type: Improvement Reporter: Alexander Lapin -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20127) Implement 1rtt RW transaction await logic in pre commit
[ https://issues.apache.org/jira/browse/IGNITE-20127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20127: - Epic Link: IGNITE-19479 > Implement 1rtt RW transaction await logic in pre commit > --- > > Key: IGNITE-20127 > URL: https://issues.apache.org/jira/browse/IGNITE-20127 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20127) Implement 1rtt RW transaction await logic in pre commit
[ https://issues.apache.org/jira/browse/IGNITE-20127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20127: - Labels: ignite-3 transactions (was: ) > Implement 1rtt RW transaction await logic in pre commit > --- > > Key: IGNITE-20127 > URL: https://issues.apache.org/jira/browse/IGNITE-20127 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20127) Implement 1rtt RW transaction await logic in pre commit
[ https://issues.apache.org/jira/browse/IGNITE-20127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20127: - Ignite Flags: (was: Docs Required,Release Notes Required) > Implement 1rtt RW transaction await logic in pre commit > --- > > Key: IGNITE-20127 > URL: https://issues.apache.org/jira/browse/IGNITE-20127 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20116) Linearize storage updates with safeTime adjustment rules
[ https://issues.apache.org/jira/browse/IGNITE-20116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20116: - Description: h3. Motivation The logic of setting safeTime explicitly prohibits setting a smaller time after a larger one. In other words, all data updates within storages should be strictly ordered by the safeTime associated with such updates. Currently this is not the case: * We associate an update and its safe time during update command creation (see org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener) {code:java} UpdateCommandBuilder bldr = MSG_FACTORY.updateCommand() ... .safeTimeLong(hybridClock.nowLong()); {code} * However, neither applying a given command locally nor sending it to raft is linearized with the associated safeTime value. In other words, it's possible that we will assign t0 to cmd0 and t1 to cmd1 but will apply cmd1 prior to cmd0 locally. Simply speaking, we lack some sort of synchronization here. h3. Definition of Done * It's required to linearize the application of updates to preserve the monotonicity guarantee of safeTime adjustment. was: h3. Motivation The logic of setting safeTime explicitly prohibits setting a smaller time after a larger one. In other words, all data updates within storages should be strictly ordered by the safeTime associated with such updates. Currently this is not the case: * We associate an update and its safe time during update command creation (see org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener) {code:java} UpdateCommandBuilder bldr = MSG_FACTORY.updateCommand() ... .safeTimeLong(hybridClock.nowLong()); {code} * However, neither applying a given command locally nor sending it to raft is linearized with the associated safeTime value. In other words, it's possible that we will assign t0 to cmd0 and t1 to cmd1 but will apply cmd1 prior to cmd0 locally. 
Simply speaking, we lack some sort of synchronization here. h3. Definition of Done * It's required to add an assert that will verify that we never try to update the safeTime with a smaller or equal value. * It's required to linearize the application of updates to preserve the monotonicity guarantee of safeTime adjustment. > Linearize storage updates with safeTime adjustment rules > > > Key: IGNITE-20116 > URL: https://issues.apache.org/jira/browse/IGNITE-20116 > Project: Ignite > Issue Type: Bug >Reporter: Alexander Lapin >Priority: Blocker > Labels: ignite-3, transactions > > h3. Motivation > The logic of setting safeTime explicitly prohibits setting a smaller time > after a larger one. In other words, all data updates within storages > should be strictly ordered by the safeTime associated with such updates. > Currently this is not the case: > * We associate an update and its safe time during update command creation (see > org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener) > {code:java} > UpdateCommandBuilder bldr = MSG_FACTORY.updateCommand() > ... > .safeTimeLong(hybridClock.nowLong()); {code} > * However, neither applying a given command locally nor sending it to > raft is linearized with the associated safeTime value. In other words, it's > possible that we will assign t0 to cmd0 and t1 to cmd1 but will apply > cmd1 prior to cmd0 locally. > Simply speaking, we lack some sort of synchronization here. > h3. Definition of Done > * It's required to linearize the application of updates to preserve > the monotonicity guarantee of safeTime adjustment. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
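The missing synchronization from IGNITE-20116 can be illustrated with a minimal sketch: safeTime assignment and update application happen under one lock, so a command assigned a later safeTime can never be applied before one assigned an earlier safeTime, and the assert from the Definition of Done encodes the monotonicity rule. `SafeTimeTracker` is a hypothetical name, not an Ignite class.

```java
// Hypothetical sketch: a single monitor linearizes safeTime advancement,
// so updates are applied in strictly increasing safeTime order.
class SafeTimeTracker {
    private long safeTime = Long.MIN_VALUE;

    // Applies an update tagged with the given safeTime; the assert is the
    // "never update safeTime with a smaller or equal value" rule.
    synchronized void advance(long candidate) {
        assert candidate > safeTime : "safeTime must grow strictly monotonically";
        safeTime = candidate;
        // ... apply the storage update associated with `candidate` here ...
    }

    synchronized long current() {
        return safeTime;
    }
}
```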
[jira] [Updated] (IGNITE-20124) Prevent double storage updates within primary
[ https://issues.apache.org/jira/browse/IGNITE-20124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20124: - Description: h3. Motivation In order to preserve the guarantee that the primary replica is always up-to-date it's required to: * In case of a common RW transaction - insert the writeIntent into the storage on the primary before replication. * In case of one-phase-commit - insert the committed write after the replication. Both have already been done. However, that means that if the primary is part of the replication group, and that's true in almost all cases, we will double the insert: * In case of a common RW transaction - through the replication. * In case of one-phase-commit - either through the replication, or through the post-replication insert, if replication was fast enough. h3. Definition of Done * Prevent double storage updates within the primary. h3. Implementation Notes The easiest way to prevent a double insert is to skip the insert if the local safe time is greater than or equal to the candidate's. There are 3 places where we update partition storage: # Primary pre-replication insert. In that case, it's never possible to see already adjusted data. # Primary post-replication insert in case of 1PC. It's possible to see already inserted data if replication was already processed locally. It is expected to be already covered in https://issues.apache.org/jira/browse/IGNITE-15927 # Insert through replication. In the non-1PC case there will be a double insert on every primary. In case of 1PC it depends. was: h3. Motivation In order to preserve the guarantee that the primary replica is always up-to-date it's required to: * In case of a common RW transaction - insert the writeIntent into the storage on the primary before replication. * In case of one-phase-commit - insert the committed write after the replication. Both have already been done. 
However, that means that if the primary is part of the replication group, and that's true in almost all cases, we will double the insert: * In case of a common RW transaction - through the replication. * In case of one-phase-commit - either through the replication, or through the post-replication insert, if replication was fast enough. h3. Definition of Done * Prevent re-insertion of data on the primary > Prevent double storage updates within primary > - > > Key: IGNITE-20124 > URL: https://issues.apache.org/jira/browse/IGNITE-20124 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > > h3. Motivation > In order to preserve the guarantee that the primary replica is always > up-to-date it's required to: > * In case of a common RW transaction - insert the writeIntent into the storage > on the primary before replication. > * In case of one-phase-commit - insert the committed write after the replication. > Both have already been done. However, that means that if the primary is part of > the replication group, and that's true in almost all cases, we will double the > insert: > * In case of a common RW transaction - through the replication. > * In case of one-phase-commit - either through the replication, or through > the post-replication insert, if replication was fast enough. > h3. Definition of Done > * Prevent double storage updates within the primary. > h3. Implementation Notes > The easiest way to prevent a double insert is to skip the insert if the local safe time is > greater than or equal to the candidate's. There are 3 places where we update partition > storage: > # Primary pre-replication insert. In that case, it's never possible to see > already adjusted data. > # Primary post-replication insert in case of 1PC. It's possible to see > already inserted data if replication was already processed locally. It is > expected to be already covered in > https://issues.apache.org/jira/browse/IGNITE-15927 > # Insert through replication. 
In the non-1PC case there will be a > double insert on every primary. In case of 1PC it depends. -- This message was sent by Atlassian Jira (v8.20.10#820010)
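The "skip if the local safe time already covers the candidate" rule from the Implementation Notes can be sketched as below. All names are hypothetical; the sketch only demonstrates that an update arriving a second time through replication is recognized and dropped, because the primary's pre-replication insert has already advanced the local safe time.

```java
// Hypothetical sketch of the double-insert guard: an update is applied only
// if its safe time is strictly greater than what the storage has seen.
class PartitionStoreSketch {
    private long localSafeTime = Long.MIN_VALUE;
    private int applied;

    // Returns true if the update was applied, false if it was skipped as a
    // duplicate (already inserted on the primary before replication).
    boolean applyIfNewer(long candidateSafeTime) {
        if (candidateSafeTime <= localSafeTime) {
            return false; // Duplicate coming back through replication; skip.
        }
        localSafeTime = candidateSafeTime;
        applied++;
        return true;
    }

    int appliedCount() {
        return applied;
    }
}
```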
[jira] [Created] (IGNITE-20126) Check that the index exists before reading from it
Roman Puchkovskiy created IGNITE-20126: -- Summary: Check that the index exists before reading from it Key: IGNITE-20126 URL: https://issues.apache.org/jira/browse/IGNITE-20126 Project: Ignite Issue Type: Improvement Reporter: Roman Puchkovskiy Fix For: 3.0.0-beta2 Before each read-from-index operation (like opening a scan cursor or getting the next batch from such a cursor), we should check whether the index exists (i.e. has not been removed from the Catalog) at the moment opTs. If not, the operation must fail and the current RW tx must be aborted. -- This message was sent by Atlassian Jira (v8.20.10#820010)
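The existence check described in IGNITE-20126 can be sketched as a guard consulted before every read-from-index operation. This is an illustration only: `IndexReadGuard` and its drop-timestamp map are invented stand-ins for the Catalog lookup, and the thrown exception stands in for failing the operation and aborting the RW transaction.

```java
import java.util.Map;

// Hypothetical sketch: before reading from an index, consult the catalog
// state at opTs; a dropped index fails the read.
class IndexReadGuard {
    // Index name -> timestamp at which it was removed; absent means alive.
    private final Map<String, Long> dropTimestamps;

    IndexReadGuard(Map<String, Long> dropTimestamps) {
        this.dropTimestamps = dropTimestamps;
    }

    // Throws if the index no longer exists at opTs, which should abort the
    // current RW transaction in the real engine.
    void checkReadable(String index, long opTs) {
        Long droppedAt = dropTimestamps.get(index);
        if (droppedAt != null && droppedAt <= opTs) {
            throw new IllegalStateException("Index " + index + " does not exist at " + opTs);
        }
    }
}
```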
[jira] [Created] (IGNITE-20125) Write to write-compatible indices when writing to partition
Roman Puchkovskiy created IGNITE-20125: -- Summary: Write to write-compatible indices when writing to partition Key: IGNITE-20125 URL: https://issues.apache.org/jira/browse/IGNITE-20125 Project: Ignite Issue Type: Improvement Reporter: Roman Puchkovskiy Fix For: 3.0.0-beta2 For each writing operation, the operation’s timestamp (which moves the partition’s SafeTime forward) T~op~ is used to get the schema corresponding to the operation. When it’s obtained, all writable (STARTING, READY, STOPPING) indices that are write-compatible at T~op~ are taken, and the current operation writes to them all. An index is write-compatible at timestamp T~op~ if for each column of the index the following holds: the column was not dropped at all, or it was dropped strictly after T~op~. If an index does not exist anymore at T~op~, the write to it is simply ignored and the transaction is not aborted. -- This message was sent by Atlassian Jira (v8.20.10#820010)
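The write-compatibility rule stated above is precise enough to express directly: an index is write-compatible at T~op~ iff every one of its columns was either never dropped or dropped strictly after T~op~. The sketch below is hypothetical (the names and the drop-timestamp map are not Ignite APIs) but encodes exactly that predicate.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the write-compatibility predicate from IGNITE-20125.
class WriteCompat {
    // Returns true iff each index column is alive at opTs or was dropped
    // strictly after opTs.
    static boolean writeCompatible(List<String> indexColumns,
                                   Map<String, Long> columnDropTs,
                                   long opTs) {
        for (String col : indexColumns) {
            Long droppedAt = columnDropTs.get(col);
            if (droppedAt != null && droppedAt <= opTs) {
                return false; // Column dropped at or before T_op.
            }
        }
        return true;
    }
}
```

Note the strict inequality: a column dropped exactly at T~op~ already makes the index write-incompatible.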
[jira] [Updated] (IGNITE-20124) Exclude double storage updates
[ https://issues.apache.org/jira/browse/IGNITE-20124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20124: - Description: h3. Motivation In order to preserve the guarantee that the primary replica is always up-to-date it's required to: * In case of a common RW transaction - insert the writeIntent into the storage on the primary before replication. * In case of one-phase-commit - insert the committed write after the replication. Both have already been done. However, that means that if the primary is part of the replication group, and that's true in almost all cases, we will double the insert: * In case of a common RW transaction - through the replication. * In case of one-phase-commit - either through the replication, or through the post-replication insert, if replication was fast enough. h3. Definition of Done * Prevent re-insertion of data on the primary was: h3. Motivation > Exclude double storage updates > -- > > Key: IGNITE-20124 > URL: https://issues.apache.org/jira/browse/IGNITE-20124 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > > h3. Motivation > In order to preserve the guarantee that the primary replica is always > up-to-date it's required to: > * In case of a common RW transaction - insert the writeIntent into the storage > on the primary before replication. > * In case of one-phase-commit - insert the committed write after the replication. > Both have already been done. However, that means that if the primary is part of > the replication group, and that's true in almost all cases, we will double the > insert: > * In case of a common RW transaction - through the replication. > * In case of one-phase-commit - either through the replication, or through > the post-replication insert, if replication was fast enough. > h3. Definition of Done > * Prevent re-insertion of data on the primary > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20124) Prevent double storage updates within primary
[ https://issues.apache.org/jira/browse/IGNITE-20124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20124: - Summary: Prevent double storage updates within primary (was: Exclude double storage updates) > Prevent double storage updates within primary > - > > Key: IGNITE-20124 > URL: https://issues.apache.org/jira/browse/IGNITE-20124 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > > h3. Motivation > In order to preserve the guarantee that the primary replica is always > up-to-date it's required to: > * In case of a common RW transaction - insert the writeIntent into the storage > on the primary before replication. > * In case of one-phase-commit - insert the committed write after the replication. > Both have already been done. However, that means that if the primary is part of > the replication group, and that's true in almost all cases, we will double the > insert: > * In case of a common RW transaction - through the replication. > * In case of one-phase-commit - either through the replication, or through > the post-replication insert, if replication was fast enough. > h3. Definition of Done > * Prevent re-insertion of data on the primary > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20122) When a STARTING index is dropped, it should be removed right away
[ https://issues.apache.org/jira/browse/IGNITE-20122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-20122: --- Description: When a DROP INDEX is executed for an index that is STARTING, it should be removed right away (skipping the READY and STOPPING states) and start destruction. This should be done using a conditional schema update (IGNITE-20115) to avoid a race with switching to the READY state. If the conditional schema update fails (because the index has been switched to the READY state), we should fall back to the usual procedure (IGNITE-20119). was: When a STARTING index is dropped, it should be removed right away (skipping the READY and STOPPING states) and start destruction. This should be done using a conditional schema update (IGNITE-20115) to avoid a race with switching to the READY state. If the conditional schema update fails (because the index has been switched to the READY state), we should fall back to the usual procedure (IGNITE-20119). > When a STARTING index is dropped, it should be removed right away > - > > Key: IGNITE-20122 > URL: https://issues.apache.org/jira/browse/IGNITE-20122 > Project: Ignite > Issue Type: Improvement >Reporter: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > When a DROP INDEX is executed for an index that is STARTING, it should be > removed right away (skipping the READY and STOPPING states) and start > destruction. This should be done using a conditional schema update > (IGNITE-20115) to avoid a race with switching to the READY state. > If the conditional schema update fails (because the index has been switched > to the READY state), we should fall back to the usual procedure (IGNITE-20119). -- This message was sent by Atlassian Jira (v8.20.10#820010)
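The conditional-update-with-fallback pattern from IGNITE-20122 can be sketched with a compare-and-set: the drop succeeds atomically only if the index is still STARTING; if the index won the race and became READY, the usual READY → STOPPING drop procedure runs instead. The `State` enum and `AtomicReference` stand in for the real conditional schema update; all names are hypothetical.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: drop a STARTING index via a conditional update,
// falling back to the usual drop procedure if the index became READY.
class StartingIndexDrop {
    enum State { STARTING, READY, STOPPING, REMOVED }

    static State drop(AtomicReference<State> state) {
        // Conditional update: succeeds only if the index is still STARTING,
        // skipping READY and STOPPING and going straight to destruction.
        if (state.compareAndSet(State.STARTING, State.REMOVED)) {
            return State.REMOVED;
        }
        // Race lost: the index switched to READY; use the usual procedure.
        state.compareAndSet(State.READY, State.STOPPING);
        return state.get();
    }
}
```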
[jira] [Updated] (IGNITE-20122) When a STARTING index is dropped, it should be removed right away
[ https://issues.apache.org/jira/browse/IGNITE-20122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-20122: --- Description: When a STARTING index is dropped, it should be removed right away (skipping the READY and STOPPING states) and start destruction. This should be done using a conditional schema update (IGNITE-20115) to avoid a race with switching to the READY state. If the conditional schema update fails (because the index has been switched to the READY state), we should fall back to the usual procedure (IGNITE-20119). was:When a STARTING index is dropped, it should be removed right away (skipping the READY and STOPPING states) and start destruction. This should be done using a conditional schema update (IGNITE-20115) to avoid a race with switching to the READY state. > When a STARTING index is dropped, it should be removed right away > - > > Key: IGNITE-20122 > URL: https://issues.apache.org/jira/browse/IGNITE-20122 > Project: Ignite > Issue Type: Improvement >Reporter: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > When a STARTING index is dropped, it should be removed right away (skipping > the READY and STOPPING states) and start destruction. This should be done > using a conditional schema update (IGNITE-20115) to avoid a race with > switching to the READY state. > If the conditional schema update fails (because the index has been switched > to the READY state), we should fall back to the usual procedure (IGNITE-20119). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20124) Exclude double storage updates
[ https://issues.apache.org/jira/browse/IGNITE-20124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20124: - Epic Link: IGNITE-19479 > Exclude double storage updates > -- > > Key: IGNITE-20124 > URL: https://issues.apache.org/jira/browse/IGNITE-20124 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > > h3. Motivation > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20124) Exclude double storage updates
[ https://issues.apache.org/jira/browse/IGNITE-20124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20124: - Description: h3. Motivation > Exclude double storage updates > -- > > Key: IGNITE-20124 > URL: https://issues.apache.org/jira/browse/IGNITE-20124 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > > h3. Motivation > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20124) Exclude double storage updates
[ https://issues.apache.org/jira/browse/IGNITE-20124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20124: - Labels: ignite-3 transactions (was: ) > Exclude double storage updates > -- > > Key: IGNITE-20124 > URL: https://issues.apache.org/jira/browse/IGNITE-20124 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20124) Exclude double storage updates
Alexander Lapin created IGNITE-20124: Summary: Exclude double storage updates Key: IGNITE-20124 URL: https://issues.apache.org/jira/browse/IGNITE-20124 Project: Ignite Issue Type: Improvement Reporter: Alexander Lapin -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20124) Exclude double storage updates
[ https://issues.apache.org/jira/browse/IGNITE-20124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20124: - Ignite Flags: (was: Docs Required,Release Notes Required) > Exclude double storage updates > -- > > Key: IGNITE-20124 > URL: https://issues.apache.org/jira/browse/IGNITE-20124 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-20006) Calcite engine. Make table/index scan iterators yieldable
[ https://issues.apache.org/jira/browse/IGNITE-20006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17749849#comment-17749849 ] Ivan Daschinsky commented on IGNITE-20006: -- [~alex_pl] Looks good to me > Calcite engine. Make table/index scan iterators yieldable > -- > > Key: IGNITE-20006 > URL: https://issues.apache.org/jira/browse/IGNITE-20006 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: calcite, ise > Time Spent: 40m > Remaining Estimate: 0h > > Currently, index/table iterators can scan an unpredictable number of cache > entries during one {{hasNext()}}/{{next()}} call. These iterators contain a > filter, which is applied to each entry, and a row is produced only for entries that > satisfy the filter. If the filter contains an "always false" rule, one {{hasNext()}} > call may scan the entire table uninterruptibly, without timeouts or yields to > let other queries do their job. We should fix this behaviour. -- This message was sent by Atlassian Jira (v8.20.10#820010)
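The problem described in IGNITE-20006 can be illustrated with a minimal sketch of a filtering iterator that bounds the work done before giving other tasks a chance. This is not the Ignite implementation: `YieldingFilterIterator` is a hypothetical name, and `Thread.yield()` stands in for whatever cooperative-scheduling mechanism the query executor actually uses; the point is only that an "always false" filter no longer monopolizes a thread without interruption inside a single `hasNext()` call.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

// Hypothetical sketch: a filtering iterator that yields after scanning a
// bounded number of non-matching source entries.
class YieldingFilterIterator<T> implements Iterator<T> {
    private final Iterator<T> src;
    private final Predicate<T> filter;
    private final int batchSize;
    private T next;

    YieldingFilterIterator(Iterator<T> src, Predicate<T> filter, int batchSize) {
        this.src = src;
        this.filter = filter;
        this.batchSize = batchSize;
    }

    @Override public boolean hasNext() {
        int scanned = 0;
        while (next == null && src.hasNext()) {
            T candidate = src.next();
            if (filter.test(candidate)) {
                next = candidate;
            } else if (++scanned >= batchSize) {
                Thread.yield(); // Let other queries run before continuing the scan.
                scanned = 0;
            }
        }
        return next != null;
    }

    @Override public T next() {
        if (!hasNext()) throw new NoSuchElementException();
        T res = next;
        next = null;
        return res;
    }
}
```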
[jira] [Updated] (IGNITE-20123) IgniteTxHandler initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20123: -- Fix Version/s: 2.16 > IgniteTxHandler initial cleanup > --- > > Key: IGNITE-20123 > URL: https://issues.apache.org/jira/browse/IGNITE-20123 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20123) IgniteTxHandler initial cleanup
Anton Vinogradov created IGNITE-20123: - Summary: IgniteTxHandler initial cleanup Key: IGNITE-20123 URL: https://issues.apache.org/jira/browse/IGNITE-20123 Project: Ignite Issue Type: Sub-task Reporter: Anton Vinogradov Assignee: Anton Vinogradov -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20123) IgniteTxHandler initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20123: -- Ignite Flags: (was: Docs Required,Release Notes Required) > IgniteTxHandler initial cleanup > --- > > Key: IGNITE-20123 > URL: https://issues.apache.org/jira/browse/IGNITE-20123 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20116) Linearize storage updates with safeTime adjustment rules
[ https://issues.apache.org/jira/browse/IGNITE-20116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20116: - Issue Type: Bug (was: Improvement) > Linearize storage updates with safeTime adjustment rules > > > Key: IGNITE-20116 > URL: https://issues.apache.org/jira/browse/IGNITE-20116 > Project: Ignite > Issue Type: Bug >Reporter: Alexander Lapin >Priority: Blocker > Labels: ignite-3, transactions > > h3. Motivation > The logic of setting safeTime explicitly prohibits setting a smaller time > after a larger one. In other words, all data updates within storages > should be strictly ordered by the safeTime associated with such updates. > Currently this is not the case: > * We associate an update and its safe time during update command creation (see > org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener) > {code:java} > UpdateCommandBuilder bldr = MSG_FACTORY.updateCommand() > ... > .safeTimeLong(hybridClock.nowLong()); {code} > * However, neither applying a given command locally nor sending it to > raft is linearized with the associated safeTime value. In other words, it's > possible that we will assign t0 to cmd0 and t1 to cmd1 but will apply > cmd1 prior to cmd0 locally. > Simply speaking, we lack some sort of synchronization here. > h3. Definition of Done > * It's required to add an assert that will verify that we never try to > update the safeTime with a smaller or equal value. > * It's required to linearize the application of updates to preserve > the monotonicity guarantee of safeTime adjustment. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20116) Linearize storage updates with safeTime adjustment rules
[ https://issues.apache.org/jira/browse/IGNITE-20116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20116: - Priority: Blocker (was: Major) > Linearize storage updates with safeTime adjustment rules > > > Key: IGNITE-20116 > URL: https://issues.apache.org/jira/browse/IGNITE-20116 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Blocker > Labels: ignite-3, transactions > > h3. Motivation > The logic of setting safeTime explicitly prohibits setting a smaller time > after a larger one. In other words, all data updates within storages > should be strictly ordered by the safeTime associated with such updates. > Currently this is not the case: > * We associate an update and its safe time during update command creation (see > org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener) > {code:java} > UpdateCommandBuilder bldr = MSG_FACTORY.updateCommand() > ... > .safeTimeLong(hybridClock.nowLong()); {code} > * However, neither applying a given command locally nor sending it to > raft is linearized with the associated safeTime value. In other words, it's > possible that we will assign t0 to cmd0 and t1 to cmd1 but will apply > cmd1 prior to cmd0 locally. > Simply speaking, we lack some sort of synchronization here. > h3. Definition of Done > * It's required to add an assert that will verify that we never try to > update the safeTime with a smaller or equal value. > * It's required to linearize the application of updates to preserve > the monotonicity guarantee of safeTime adjustment. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20122) When a STARTING index is dropped, it should be removed right away
[ https://issues.apache.org/jira/browse/IGNITE-20122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-20122: --- Summary: When a STARTING index is dropped, it should be removed right away (was: When a STARTING index is dropped, it should switch to destruction) > When a STARTING index is dropped, it should be removed right away > - > > Key: IGNITE-20122 > URL: https://issues.apache.org/jira/browse/IGNITE-20122 > Project: Ignite > Issue Type: Improvement >Reporter: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > When a STARTING index is dropped, it should be removed right away (skipping > the READY and STOPPING states) and start destruction. This should be done > using a conditional schema update (IGNITE-20115) to avoid a race with > switching to the READY state. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20122) When a STARTING index is dropped, it should switch to destruction
[ https://issues.apache.org/jira/browse/IGNITE-20122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-20122: --- Description: When a STARTING index is dropped, it should be removed right away (skipping the READY and STOPPING states) and start destruction. This should be done using a conditional schema update (IGNITE-20115) to avoid a race with switching to the READY state. (was: When a STARTING index is dropped, it should be removed right away (skipping the READY and STOPPING states). This should be done using a conditional schema update (IGNITE-20115) to avoid a race with switching to the READY state.) > When a STARTING index is dropped, it should switch to destruction > - > > Key: IGNITE-20122 > URL: https://issues.apache.org/jira/browse/IGNITE-20122 > Project: Ignite > Issue Type: Improvement >Reporter: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > When a STARTING index is dropped, it should be removed right away (skipping > the READY and STOPPING states) and start destruction. This should be done > using a conditional schema update (IGNITE-20115) to avoid a race with > switching to the READY state. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20116) Linearize storage updates with safeTime adjustment rules
[ https://issues.apache.org/jira/browse/IGNITE-20116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20116: - Description: h3. Motivation The logic of setting safeTime explicitly prohibits setting a larger time ahead of a smaller one. In other words, all data updates within storages should be strictly ordered by the safeTime associated with such updates. Currently it's not true: * We associate update and safe time during update command creation (see org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener) {code:java} UpdateCommandBuilder bldr = MSG_FACTORY.updateCommand() ... .safeTimeLong(hybridClock.nowLong()); {code} * However, neither applying a given command locally nor sending it to the raft isn't linearized with associated safeTime value. In other words, it's possible that we will assign t0 to the cmd0 and t1 to the cmd1 but will apply cmd1 prior to cmd0 locally. Simply speaking, we lack some sort of synchronization here. h3. Definition of Done * It's required to add an assert that will verify that we never ever try to update a safeTime with a smaller or equal value. * It's required to linearize updates application to preserve guarantees of the monotonicity of a safeTime's adjustment. > Linearize storage updates with safeTime adjustment rules > > > Key: IGNITE-20116 > URL: https://issues.apache.org/jira/browse/IGNITE-20116 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > > h3. Motivation > The logic of setting safeTime explicitly prohibits setting a larger time > ahead of a smaller one. In other words, all data updates within storages > should be strictly ordered by the safeTime associated with such updates. 
> Currently, this is not true: > * We associate an update with a safe time during update command creation (see > org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener) > {code:java} > UpdateCommandBuilder bldr = MSG_FACTORY.updateCommand() > ... > .safeTimeLong(hybridClock.nowLong()); {code} > * However, neither applying a given command locally nor sending it to > Raft is linearized with the associated safeTime value. In other words, it's > possible that we assign t0 to cmd0 and t1 to cmd1 but apply > cmd1 before cmd0 locally. > Simply put, we lack synchronization here. > h3. Definition of Done > * It's required to add an assert verifying that we never try to > update safeTime with a smaller or equal value. > * It's required to linearize the application of updates to preserve > the monotonicity of safeTime adjustment. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
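The Definition of Done above can be sketched as a minimal safe-time holder that rejects any non-monotonic adjustment. This is a hypothetical illustration; the class and method names below are not Ignite's actual API, and the real fix also has to serialize command application itself:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: a safe-time holder that enforces strictly
// monotonic adjustment, as the Definition of Done requires.
public class MonotonicSafeTime {
    private final AtomicLong safeTime = new AtomicLong(Long.MIN_VALUE);

    /** Advances safe time; fails fast if the new value is not strictly greater. */
    public void advance(long newSafeTime) {
        // getAndAccumulate returns the previous value and keeps the max.
        long prev = safeTime.getAndAccumulate(newSafeTime, Math::max);
        if (newSafeTime <= prev) {
            throw new AssertionError(
                    "Non-monotonic safeTime update: " + newSafeTime + " <= " + prev);
        }
    }

    public long current() {
        return safeTime.get();
    }
}
```

With such a holder, applying cmd1 (carrying t1) before cmd0 (carrying t0 < t1) would trip the assertion immediately instead of silently breaking the ordering guarantee.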
[jira] [Updated] (IGNITE-20122) When a STARTING index is dropped, it should switch to destruction
[ https://issues.apache.org/jira/browse/IGNITE-20122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-20122: --- Summary: When a STARTING index is dropped, it should switch to destruction (was: When a STARTING index is dropped, it should be removed right away) > When a STARTING index is dropped, it should switch to destruction > - > > Key: IGNITE-20122 > URL: https://issues.apache.org/jira/browse/IGNITE-20122 > Project: Ignite > Issue Type: Improvement >Reporter: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > When a STARTING index is dropped, it should be removed right away (skipping > the READY and STOPPING states). This should be done using a conditional > schema update (IGNITE-20115) to avoid a race with switching to the READY > state. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20122) When a STARTING index is dropped, it should be removed right away
Roman Puchkovskiy created IGNITE-20122: -- Summary: When a STARTING index is dropped, it should be removed right away Key: IGNITE-20122 URL: https://issues.apache.org/jira/browse/IGNITE-20122 Project: Ignite Issue Type: Improvement Reporter: Roman Puchkovskiy Fix For: 3.0.0-beta2 When a STARTING index is dropped, it should be removed right away (skipping the READY and STOPPING states). This should be done using a conditional schema update (IGNITE-20115) to avoid a race with switching to the READY state. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20121) Start index destruction when it is removed from the Catalog
Roman Puchkovskiy created IGNITE-20121: -- Summary: Start index destruction when it is removed from the Catalog Key: IGNITE-20121 URL: https://issues.apache.org/jira/browse/IGNITE-20121 Project: Ignite Issue Type: Improvement Reporter: Roman Puchkovskiy Fix For: 3.0.0-beta2 Index destruction, before starting, must wait until the partition’s SafeTime becomes >= ‘Activation moment of index removal’ (aka ‘end time of STOPPING state’ for this index). This is to avoid a race between operations on the index (including writes and reads) and its destruction. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20120) Remove index from Catalog when activation of its STOPPING state is below GC LWM
Roman Puchkovskiy created IGNITE-20120: -- Summary: Remove index from Catalog when activation of its STOPPING state is below GC LWM Key: IGNITE-20120 URL: https://issues.apache.org/jira/browse/IGNITE-20120 Project: Ignite Issue Type: Improvement Reporter: Roman Puchkovskiy Fix For: 3.0.0-beta2 When the activation moment of the STOPPING state of an index is below the GC LWM (meaning that no new transaction can read from this index), the index should be removed from the Catalog. A conditional schema update (IGNITE-20115) might be used. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20119) Switch index to STOPPING state as a reaction to DROP INDEX
Roman Puchkovskiy created IGNITE-20119: -- Summary: Switch index to STOPPING state as a reaction to DROP INDEX Key: IGNITE-20119 URL: https://issues.apache.org/jira/browse/IGNITE-20119 Project: Ignite Issue Type: Improvement Reporter: Roman Puchkovskiy Fix For: 3.0.0-beta2 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20118) Create an index in STARTING state
Roman Puchkovskiy created IGNITE-20118: -- Summary: Create an index in STARTING state Key: IGNITE-20118 URL: https://issues.apache.org/jira/browse/IGNITE-20118 Project: Ignite Issue Type: Improvement Reporter: Roman Puchkovskiy Fix For: 3.0.0-beta2 A 'state' field should be added (probably instead of the 'writeOnly' flag). -- This message was sent by Atlassian Jira (v8.20.10#820010)
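A minimal sketch of what replacing the flag with a state field could look like. All names here are hypothetical, and the mapping assumes (as the surrounding tickets suggest) that 'writeOnly' currently marks an index that is still being built:

```java
// Hypothetical sketch: an explicit lifecycle state instead of a boolean
// 'writeOnly' flag. Assumption: writeOnly == true meant "still being built"
// (STARTING), writeOnly == false meant "fully available" (READY).
public class IndexStates {
    enum IndexState { STARTING, READY, STOPPING }

    static IndexState fromWriteOnlyFlag(boolean writeOnly) {
        return writeOnly ? IndexState.STARTING : IndexState.READY;
    }
}
```

The enum form leaves room for the STOPPING state, which a boolean flag cannot express.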
[jira] [Updated] (IGNITE-20117) Implement index backfill process
[ https://issues.apache.org/jira/browse/IGNITE-20117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-20117: --- Description: Currently, we have a backfill process for an index (aka 'index build'). It needs to be tuned to satisfy the following requirements: # When starting the backfill process, we must first wait till safeTime(partition)>=’STARTING state activation timestamp’ to avoid a race between starting the backfill process and executing writes that precede the index creation (as these writes should not write to the index). # If, for a row found during the backfill process, there are row versions with commitTs <= indexCreationActivationTs, then the most recent of them is written to the index # If, for a row found during the backfill process, there are row versions with commitTs > indexCreationActivationTs, then the oldest of them is added to the index; otherwise, if there are no such row versions, but there is a write intent (and the transaction to which it belongs started before indexCreationActivationTs), it is added to the index # When the backfill process is finished on all partitions, another schema update is installed that declares that the index is in the READY state. This installation should be conditional. That is, if the index is still STARTING, it should succeed; otherwise (if the index was removed by installing a concurrent ‘delete from the Catalog’ schema update due to a DROP command), nothing should be done here # The backfill process stops early as soon as it detects that the index has moved to the ‘deleted from the Catalog’ state. Each step of the process might be supplied with a timestamp (from the same clock that moves the partition’s SafeTime ahead), and that timestamp could be used to check the index's existence; this will allow us to avoid a race between index destruction and the backfill process. was: Currently, we have a backfill process for an index (aka 'index build'). 
It needs to be tuned to satisfy the following requirements: # When starting the backfill process, we must first wait till safeTime(partition)>=’STARTING state activation timestamp’ to avoid a race between starting the backfill process and executing writes that precede the index creation (as these writes should not write to the index). # If, for a row found during the backfill process, there are row versions with commitTs <= indexCreationActivationTs, then the most recent of them is written to the index # If, for a row found during the backfill process, there are row versions with commitTs > indexCreationActivationTs, then the oldest of them is added to the index; otherwise, if there are no such row versions, but there is a write intent (and the transaction to which it belongs started before indexCreationActivationTs), it is added to the index # When the backfill process is finished on all partitions, another schema update is installed that declares that the index is in the READY state. This installation should be conditional. That is, if the index is still STARTING, it should succeed; otherwise (if the index was removed by installing a concurrent ‘delete from the Catalog’ schema update due to a DROP command), nothing should be done here > Implement index backfill process > > > Key: IGNITE-20117 > URL: https://issues.apache.org/jira/browse/IGNITE-20117 > Project: Ignite > Issue Type: Improvement >Reporter: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > Currently, we have a backfill process for an index (aka 'index build'). It > needs to be tuned to satisfy the following requirements: > # When starting the backfill process, we must first wait till > safeTime(partition)>=’STARTING state activation timestamp’ to avoid a race > between starting the backfill process and executing writes that precede > the index creation (as these writes should not write to the index). 
> # If, for a row found during the backfill process, there are row versions with > commitTs <= indexCreationActivationTs, then the most recent of them is > written to the index > # If, for a row found during the backfill process, there are row versions with > commitTs > indexCreationActivationTs, then the oldest of them is added to the > index; otherwise, if there are no such row versions, but there is a write > intent (and the transaction to which it belongs started before > indexCreationActivationTs), it is added to the index > # When the backfill process is finished on all partitions, another schema > update is installed that declares that the index is in the READY state. This > installation should be conditional. That is, if the index is still STARTING, > it should succeed; otherwise (if the index was removed by installing a > concurrent
[jira] [Created] (IGNITE-20117) Implement index backfill process
Roman Puchkovskiy created IGNITE-20117: -- Summary: Implement index backfill process Key: IGNITE-20117 URL: https://issues.apache.org/jira/browse/IGNITE-20117 Project: Ignite Issue Type: Improvement Reporter: Roman Puchkovskiy Fix For: 3.0.0-beta2 Currently, we have backfill process for an index (aka 'index build'). It needs to be tuned to satisfy the following requirements: # When starting the backfill process, we must first wait till safeTime(partition)>=’STARTING state activation timestamp’ to avoid a race between starting the backfill process and executing writes that are before the index creation (as these writes should not write to the index). # If for a row found during the backfill process, there are row versions with commitTs <= indexCreationActivationTs, then the most recent of them is written to the index # If for a row found during the backfill process, there are row versions with commitTs > indexCreationActivationTs, then the oldest of them is added to the index; otherwise, if there are no such row versions, but there is a write intent (and the transaction to which it belongs started before indexCreationActivationTs), it is added to the index # When the backfill process is finished on all partitions, another schema update is installed that declares that the index is in the READY state. This installation should be conditional. That is, if the index is still STARTING, it should succeed; otherwise (if the index was removed by installing a concurrent ‘delete from the Catalog’ schema update due to a DROP command), nothing should be done here -- This message was sent by Atlassian Jira (v8.20.10#820010)
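The version-selection rules above (requirements 2 and 3) can be sketched as follows. This is an illustrative simplification, not Ignite code: it represents each row version by its commit timestamp only and ignores the write-intent case:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the backfill version-selection rules: given a row's
// committed versions (commit timestamps, ascending), index the most recent
// version at or before the index-creation activation timestamp, plus the
// oldest version after it.
public class BackfillVersionChooser {
    static List<Long> versionsToIndex(List<Long> commitTsAscending, long indexCreationActivationTs) {
        Long newestAtOrBefore = null;
        Long oldestAfter = null;
        for (long ts : commitTsAscending) {
            if (ts <= indexCreationActivationTs) {
                newestAtOrBefore = ts; // keeps being overwritten, so the most recent one wins
            } else {
                oldestAfter = ts; // first version past activation is the oldest such one
                break;
            }
        }
        List<Long> result = new ArrayList<>();
        if (newestAtOrBefore != null) {
            result.add(newestAtOrBefore);
        }
        if (oldestAfter != null) {
            result.add(oldestAfter);
        }
        return result;
    }
}
```

For example, with committed versions at timestamps 1, 2, 5, 7 and an activation timestamp of 3, the versions at 2 (most recent at or before activation) and 5 (oldest after activation) would be indexed.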
[jira] [Updated] (IGNITE-20116) Linearize storage updates with safeTime adjustment rules
[ https://issues.apache.org/jira/browse/IGNITE-20116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20116: - Labels: ignite-3 transactions (was: ) > Linearize storage updates with safeTime adjustment rules > > > Key: IGNITE-20116 > URL: https://issues.apache.org/jira/browse/IGNITE-20116 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transactions > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20116) Linearize storage updates with safeTime adjustment rules
Alexander Lapin created IGNITE-20116: Summary: Linearize storage updates with safeTime adjustment rules Key: IGNITE-20116 URL: https://issues.apache.org/jira/browse/IGNITE-20116 Project: Ignite Issue Type: Improvement Reporter: Alexander Lapin -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20115) Support conditional schema update install
Roman Puchkovskiy created IGNITE-20115: -- Summary: Support conditional schema update install Key: IGNITE-20115 URL: https://issues.apache.org/jira/browse/IGNITE-20115 Project: Ignite Issue Type: Improvement Reporter: Roman Puchkovskiy Fix For: 3.0.0-beta2 Currently, the Catalog allows installing a schema update, but only unconditionally. We'll need a way to install schema updates conditionally, like this: 'if the latest index state is X, install a schema update changing it to Y; otherwise do nothing and return false'. -- This message was sent by Atlassian Jira (v8.20.10#820010)
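The semantics described above amount to a compare-and-set on the index state. A hypothetical in-memory sketch (the real implementation would run the check and the update atomically inside the Catalog's update machinery, not on an AtomicReference):

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of a conditional schema update: change the index
// state from 'expected' to 'target' only if 'expected' is still current.
public class ConditionalStateUpdate {
    enum IndexState { STARTING, READY, STOPPING }

    /** Returns true if the update was installed, false if the condition failed. */
    static boolean tryChangeState(AtomicReference<IndexState> state,
                                  IndexState expected,
                                  IndexState target) {
        return state.compareAndSet(expected, target);
    }
}
```

This is exactly the shape needed by the backfill finalization (IGNITE-20117): 'STARTING -> READY' succeeds only if no concurrent DROP removed the index first.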
[jira] [Updated] (IGNITE-19360) Schema synchronization design: indices
[ https://issues.apache.org/jira/browse/IGNITE-19360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-19360: --- Epic Link: IGNITE-17766 (was: IGNITE-18733) > Schema synchronization design: indices > -- > > Key: IGNITE-19360 > URL: https://issues.apache.org/jira/browse/IGNITE-19360 > Project: Ignite > Issue Type: Improvement >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > > In addition to the basic Schema Synchronization mechanism, indices handling > needs to be designed. > # How we create indices (and make them available for the Query Engine) and > drop them with respect to the Schema Synchronization > # How we support full index historicity -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19360) Schema synchronization design: indices
[ https://issues.apache.org/jira/browse/IGNITE-19360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-19360: --- Description: In addition to the basic Schema Synchronization mechanism, indices handling needs to be designed. # How we create indices (and make them available for the Query Engine) and drop them with respect to the Schema Synchronization # How we support full index data/metadata versioning was: In addition to the basic Schema Synchronization mechanism, indices handling needs to be designed. # How we create indices (and make them available for the Query Engine) and drop them with respect to the Schema Synchronization # How we support full index historicity > Schema synchronization design: indices > -- > > Key: IGNITE-19360 > URL: https://issues.apache.org/jira/browse/IGNITE-19360 > Project: Ignite > Issue Type: Improvement >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > > In addition to the basic Schema Synchronization mechanism, indices handling > needs to be designed. > # How we create indices (and make them available for the Query Engine) and > drop them with respect to the Schema Synchronization > # How we support full index data/metadata versioning -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20114) DistributionZoneManager should listen CatalogService events instead of configuration
Kirill Tkalenko created IGNITE-20114: Summary: DistributionZoneManager should listen CatalogService events instead of configuration Key: IGNITE-20114 URL: https://issues.apache.org/jira/browse/IGNITE-20114 Project: Ignite Issue Type: New Feature Reporter: Kirill Tkalenko Fix For: 3.0.0-beta2 As of now, *DistributionZoneManager* listens to configuration events to create internal structures. Let's make *DistributionZoneManager* listen to CatalogService events instead. Note: Some tests may fail due to changed guarantees and incomplete related tickets. So, let's do this in a separate feature branch. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-20114) DistributionZoneManager should listen CatalogService events instead of configuration
[ https://issues.apache.org/jira/browse/IGNITE-20114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko reassigned IGNITE-20114: Assignee: Kirill Tkalenko > DistributionZoneManager should listen CatalogService events instead of > configuration > > > Key: IGNITE-20114 > URL: https://issues.apache.org/jira/browse/IGNITE-20114 > Project: Ignite > Issue Type: New Feature >Reporter: Kirill Tkalenko >Assignee: Kirill Tkalenko >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > As of now, *DistributionZoneManager* listens to configuration events to create > internal structures. > Let's make *DistributionZoneManager* listen to CatalogService events instead. > Note: Some tests may fail due to changed guarantees and incomplete related > tickets. So, let's do this in a separate feature branch. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20113) IgniteTxStateImpl initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20113: -- Ignite Flags: (was: Docs Required,Release Notes Required) > IgniteTxStateImpl initial cleanup > - > > Key: IGNITE-20113 > URL: https://issues.apache.org/jira/browse/IGNITE-20113 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20113) IgniteTxStateImpl initial cleanup
Anton Vinogradov created IGNITE-20113: - Summary: IgniteTxStateImpl initial cleanup Key: IGNITE-20113 URL: https://issues.apache.org/jira/browse/IGNITE-20113 Project: Ignite Issue Type: Sub-task Reporter: Anton Vinogradov Assignee: Anton Vinogradov -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20113) IgniteTxStateImpl initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20113: -- Fix Version/s: 2.16 > IgniteTxStateImpl initial cleanup > - > > Key: IGNITE-20113 > URL: https://issues.apache.org/jira/browse/IGNITE-20113 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20112) IgniteTxImplicitSingleStateImpl initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20112: -- Fix Version/s: 2.16 > IgniteTxImplicitSingleStateImpl initial cleanup > --- > > Key: IGNITE-20112 > URL: https://issues.apache.org/jira/browse/IGNITE-20112 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20112) IgniteTxImplicitSingleStateImpl initial cleanup
Anton Vinogradov created IGNITE-20112: - Summary: IgniteTxImplicitSingleStateImpl initial cleanup Key: IGNITE-20112 URL: https://issues.apache.org/jira/browse/IGNITE-20112 Project: Ignite Issue Type: Sub-task Reporter: Anton Vinogradov Assignee: Anton Vinogradov -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20112) IgniteTxImplicitSingleStateImpl initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20112: -- Ignite Flags: (was: Docs Required,Release Notes Required) > IgniteTxImplicitSingleStateImpl initial cleanup > --- > > Key: IGNITE-20112 > URL: https://issues.apache.org/jira/browse/IGNITE-20112 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (IGNITE-20111) IgniteTxLocalStateAdapter initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov resolved IGNITE-20111. --- Resolution: Won't Fix Nothing to cleanup > IgniteTxLocalStateAdapter initial cleanup > - > > Key: IGNITE-20111 > URL: https://issues.apache.org/jira/browse/IGNITE-20111 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20111) IgniteTxLocalStateAdapter initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20111: -- Fix Version/s: 2.16 > IgniteTxLocalStateAdapter initial cleanup > - > > Key: IGNITE-20111 > URL: https://issues.apache.org/jira/browse/IGNITE-20111 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20111) IgniteTxLocalStateAdapter initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20111: -- Ignite Flags: (was: Docs Required,Release Notes Required) > IgniteTxLocalStateAdapter initial cleanup > - > > Key: IGNITE-20111 > URL: https://issues.apache.org/jira/browse/IGNITE-20111 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20111) IgniteTxLocalStateAdapter initial cleanup
Anton Vinogradov created IGNITE-20111: - Summary: IgniteTxLocalStateAdapter initial cleanup Key: IGNITE-20111 URL: https://issues.apache.org/jira/browse/IGNITE-20111 Project: Ignite Issue Type: Sub-task Reporter: Anton Vinogradov Assignee: Anton Vinogradov -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20058) NPE in DistributionZoneManagerAlterFilterTest#testAlterFilter
[ https://issues.apache.org/jira/browse/IGNITE-20058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Uttsel updated IGNITE-20058: --- Description: {{DistributionZoneManagerAlterFilterTest.testAlterFilter}} is flaky and with very low failure rate it fails with NPE (1 fail in 1500 runs) {noformat} 2023-07-25 16:48:30:520 +0400 [ERROR][%test%metastorage-watch-executor-0][WatchProcessor] Error occurred when processing a watch event java.lang.NullPointerException at org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateScaleDown$18(DistributionZoneManager.java:737) at org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488) at org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136) at org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129) at org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown Source) {noformat} {code:java} 2023-08-01 15:55:40:440 +0300 [INFO][%test%metastorage-watch-executor-1][ConfigurationRegistry] Failed to notify configuration listener java.lang.NullPointerException at org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.updateZoneConfiguration(CausalityDataNodesEngine.java:570) at org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.onUpdateFilter(CausalityDataNodesEngine.java:557) at org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateFilter$18(DistributionZoneManager.java:774) at org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488) at 
org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136) at org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129) at org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown Source){code} was: {{DistributionZoneManagerAlterFilterTest.testAlterFilter}} is flaky and with very low failure rate it fails with NPE (1 fail in 1500 runs) {noformat} 2023-07-25 16:48:30:520 +0400 [ERROR][%test%metastorage-watch-executor-0][WatchProcessor] Error occurred when processing a watch event java.lang.NullPointerException at org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateScaleDown$18(DistributionZoneManager.java:737) at org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488) at org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136) at org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129) at org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown Source) {noformat} > NPE in DistributionZoneManagerAlterFilterTest#testAlterFilter > - > > Key: IGNITE-20058 > URL: https://issues.apache.org/jira/browse/IGNITE-20058 > Project: Ignite > Issue Type: Bug >Reporter: Mirza Aliev >Assignee: Alexander Lapin >Priority: Major > Labels: ignite-3 > > {{DistributionZoneManagerAlterFilterTest.testAlterFilter}} is flaky and with > very low failure rate it fails with NPE (1 fail in 1500 runs) > {noformat} > 2023-07-25 16:48:30:520 +0400 > [ERROR][%test%metastorage-watch-executor-0][WatchProcessor] Error occurred > when processing a watch event > java.lang.NullPointerException > at > 
org.apache.ignite.internal.distributionzones.DistributionZoneManager.lambda$onUpdateScaleDown$18(DistributionZoneManager.java:737) > at > org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier.notifyPublicListeners(ConfigurationNotifier.java:488) > at > org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:136) > at > org.apache.ignite.internal.configuration.notifications.ConfigurationNotifier$1.visitLeafNode(ConfigurationNotifier.java:129) > at > org.apache.ignite.internal.distributionzones.configuration.DistributionZoneNode.traverseChildren(Unknown > Source) > {noformat} > {code:java} > 2023-08-01 15:55:40:440 +0300 >
[jira] [Updated] (IGNITE-19746) control.sh --performance-statistics status doesn't not print actual status
[ https://issues.apache.org/jira/browse/IGNITE-19746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-19746: - Fix Version/s: 2.16 > control.sh --performance-statistics status doesn't not print actual status > -- > > Key: IGNITE-19746 > URL: https://issues.apache.org/jira/browse/IGNITE-19746 > Project: Ignite > Issue Type: Bug >Reporter: Sergey Korotkov >Assignee: Sergey Korotkov >Priority: Major > Labels: IEP-81, ise > Fix For: 2.16 > > Time Spent: 0.5h > Remaining Estimate: 0h > > The status sub-command of control.sh --performance-statistics doesn't > print the actual status to the console. > Previously it was like (note the *Disabled.* word): > {noformat} > Control utility [ver. 15.0.0-SNAPSHOT#20230422-sha1:7f80003d] > 2023 Copyright(C) Apache Software Foundation > User: ducker > Time: 2023-04-23T22:17:12.489 > Command [PERFORMANCE-STATISTICS] started > Arguments: --host x.x.x.x --performance-statistics status --user admin > --password * > > Disabled. > Command [PERFORMANCE-STATISTICS] finished with code: 0 > Control utility has completed execution at: 2023-04-23T22:17:13.271 > Execution time: 782 ms > {noformat} > > Now it's like (note the absence of the *Disabled.* word): > {noformat} > Control utility [ver. 15.0.0-SNAPSHOT#20230613-sha1:cacee58d] > 2023 Copyright(C) Apache Software Foundation > User: ducker > Time: 2023-06-15T15:46:41.586 > Command [PERFORMANCE-STATISTICS] started > Arguments: --host x.x.x.x --performance-statistics status --user admin > --password * > > Command [PERFORMANCE-STATISTICS] finished with code: 0 > Control utility has completed execution at: 2023-06-15T15:46:42.523 > Execution time: 937 ms > {noformat} > > Outputs of other sub-commands also need to be checked. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20099) IOException in CLI on node shutdown
[ https://issues.apache.org/jira/browse/IGNITE-20099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-20099: - Labels: ignite-3 (was: ) > IOException in CLI on node shutdown > --- > > Key: IGNITE-20099 > URL: https://issues.apache.org/jira/browse/IGNITE-20099 > Project: Ignite > Issue Type: Bug > Components: cli >Affects Versions: 3.0.0-beta1 >Reporter: Dmitry Baranov >Priority: Major > Labels: ignite-3 > Attachments: 2023-07-31_08-46-39.png > > > On node shutdown, an IOException is printed in the CLI > Steps to reproduce: > 1. Start Ignite 3 ./{_}bin/ignite3db start{_} > 2. Start the CLI and invoke the _connect_ command > 3. Shut down Ignite ./{_}bin/ignite3db stop{_} > Expected: a user-readable error message about the lost connection > Actual: > [defaultNode]> [java.io.IOException: unexpected end of stream on > http://localhost:10300/..., java.net.SocketException: Connection reset, > java.net.SocketException: Connection reset, java.net.SocketException: > Connection reset, java.net.SocketException: Connection reset, > java.net.ConnectException: Failed to connect to localhost/127.0.0.1:10300] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20076) Improve networking shutdown implementation
[ https://issues.apache.org/jira/browse/IGNITE-20076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-20076: - Labels: ignite-3 (was: igntie-3) > Improve networking shutdown implementation > -- > > Key: IGNITE-20076 > URL: https://issues.apache.org/jira/browse/IGNITE-20076 > Project: Ignite > Issue Type: Bug >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 1h > Remaining Estimate: 0h > > Currently, when initiating an Ignite node's shutdown, we first stop > ScaleCube's cluster (so that it sends a LEAVING message) and only when it has > completely shut down do we shut down the connection manager. As a result, there is > some interval when the node's networking thinks it's still alive (and hence > it tries to restore connections with other nodes), but other nodes think the > node has already left (as they received that LEAVING message from it), so > they don't let it establish connections. The first node sees that it is > rejected and tries to handle this as a critical failure. Currently, it just > logs a scary message, but, when we implement a proper failure handler, this > will kill the node. This is not ok for a graceful stop scenario. > The idea is to first (before stopping the ScaleCube local cluster) tell the > connection manager that it is now in the 'stopping' state. In this state, it > does not try to establish new connections (and does not attempt to reconnect) > and does not allow any incoming connections; also, it does not handle > rejections by other nodes as critical failures in this state. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20025) Command 'connect defaultNode' throws error
[ https://issues.apache.org/jira/browse/IGNITE-20025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-20025: - Labels: ignite-3 (was: ) > Command 'connect defaultNode' throws error > -- > > Key: IGNITE-20025 > URL: https://issues.apache.org/jira/browse/IGNITE-20025 > Project: Ignite > Issue Type: Bug > Components: cli >Affects Versions: 3.0.0-beta1 >Reporter: Dmitry Baranov >Priority: Major > Labels: ignite-3 > Attachments: image-2023-07-23-16-57-58-225.png > > > Steps to reproduce the issue > 1. from disconnected state execute > {code:java} > connect{code} > Connected to http://localhost:10300 > 2. > {code:java} > disconnect{code} > Disconnected from http://localhost:10300 > 3. > {code:java} > connect http://localhost:10300{code} > Connected to http://localhost:10300 > 4. [admin:defaultNode]> > {code:java} > disconnect{code} > Disconnected from http://localhost:10300 > 5. [disconnected]> > {code:java} > connect defaultNode{code} > *Actual:* > Invalid value for positional parameter at index 0 (): Node > defaultNode not found. Provide valid name or use URL > Usage: connect [-hv] > Connects to Ignite 3 node > URL or name of an Ignite node > -h, --help Show help for the specified command > -v, --verbose Show additional information: logs, REST calls > {*}Expected{*}: > Successful connection to the defaultNode > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20003) ODBC 3.0: Add support of ODBC version 3.0
[ https://issues.apache.org/jira/browse/IGNITE-20003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-20003: - Labels: ignite-3 (was: ) > ODBC 3.0: Add support of ODBC version 3.0 > - > > Key: IGNITE-20003 > URL: https://issues.apache.org/jira/browse/IGNITE-20003 > Project: Ignite > Issue Type: Improvement > Components: odbc >Reporter: Igor Sapego >Assignee: Igor Sapego >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 20m > Remaining Estimate: 0h > > It seems that a lot of libraries and tools (e.g. pyodbc and isql) use > SQL_OV_ODBC3 when creating an SQL environment handle. Currently, we only support > SQL_OV_ODBC3_80. Add support for SQL_OV_ODBC3 as well. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (IGNITE-19904) Assertion in defragmentation
[ https://issues.apache.org/jira/browse/IGNITE-19904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17749563#comment-17749563 ] Vladimir Steshin edited comment on IGNITE-19904 at 8/1/23 11:43 AM: Caused by the concurrent default checkpointer, which clears shared counters via {code:java} CheckpointProgress#clearCounters() {code} and raises a hidden NPE in {code:java} @Override public void CheckpointProgressImpl#updateEvictedPages(int delta) { A.ensure(delta > 0, "param must be positive"); if (evictedPagesCounter() != null) evictedPagesCounter().addAndGet(delta); } {code} while flushing a replaced page in `PageMemoryImpl#allocatePage(int grpId, int partId, byte flags)`. See IGNITE-20047 and 'failure_with_root_npe_cause.log'. was (Author: vladsz83): Caused by the concurrent default checkpointer, which clears shared counters via {code:java} CheckpointProgress#clearCounters() {code} and raises a hidden NPE in {code:java} @Override public void CheckpointProgressImpl#updateEvictedPages(int delta) { A.ensure(delta > 0, "param must be positive"); if (evictedPagesCounter() != null) evictedPagesCounter().addAndGet(delta); } {code} while flushing a replaced page in `PageMemoryImpl#allocatePage(int grpId, int partId, byte flags)`. See IGNITE-20047. > Assertion in defragmentation > > > Key: IGNITE-19904 > URL: https://issues.apache.org/jira/browse/IGNITE-19904 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.12 >Reporter: Vladimir Steshin >Priority: Major > Labels: ise > Attachments: default-config.xml, failure2.16_with_thread_dump.log, > failure_with_root_npe_cause.log, ignite.log, jvm.opts > > Time Spent: 20m > Remaining Estimate: 0h > > Defragmentation fails with: > {code:java} > java.lang.AssertionError: Invalid state. Type is 0! pageId = 0001000d00024cbf > at > org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.copyPageForCheckpoint(PageMemoryImpl.java:1359) > ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT] > at > org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.checkpointWritePage(PageMemoryImpl.java:1277) > ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT] > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointPagesWriter.writePages(CheckpointPagesWriter.java:208) > ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT] > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointPagesWriter.run(CheckpointPagesWriter.java:150) > ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT] > {code} > Difficult to write a test. Can't reproduce on my computers :(. Flakily > appears on a server (4 core x 4 cpu) with 100G of the test cache data and > million+ pages to checkpoint during defragmentation. More often, this occurs > with pageSize 1024 (to produce more pages). > Based on my diagnostic build, I suppose that a fresh, empty page is caught > in defragmentation. Here is a page dump with test-extended PAGE_OVERHEAD > (=64) and the same error a bit before copyPageForCheckpoint(): > {code:java} > org.apache.ignite.IgniteException: Wrong page type in checkpointWritePage1. > Page: Data region = 'defragPartitionsDataRegion'. > FullPageId [pageId=281878703760205, effectivePageId=403727049549, > grpId=-1368047378]. > PageDump = page_id: 281878703760205, rel_id: 48603, cache_id: -1368047378, > pin: 0, lock: 65536, tmp_buf: 72057594037927935, test_val: 1. data_hex: >
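The NPE described in the comment above is a classic check-then-act race: the counter accessor is called once for the null check and again for the update, so a concurrent clearCounters() between the two calls dereferences null. A minimal sketch of the race and a single-read fix, with illustrative (not actual Ignite) class and field names:

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Hypothetical sketch of the CheckpointProgressImpl#updateEvictedPages race.
 * Reading the counter reference once into a local closes the window in which
 * a concurrent clearCounters() can null it between check and use.
 */
class CheckpointProgressSketch {
    private volatile AtomicInteger evictedPagesCntr = new AtomicInteger();

    /** A concurrent checkpointer may clear counters at any time. */
    void clearCounters() {
        evictedPagesCntr = null;
    }

    void updateEvictedPages(int delta) {
        if (delta <= 0)
            throw new IllegalArgumentException("param must be positive");

        AtomicInteger cntr = evictedPagesCntr; // single volatile read

        if (cntr != null)
            cntr.addAndGet(delta);
    }

    /** Current value, or null if counters were cleared. */
    Integer evicted() {
        AtomicInteger cntr = evictedPagesCntr;
        return cntr == null ? null : cntr.get();
    }
}
```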
[jira] [Updated] (IGNITE-19904) Assertion in defragmentation
[ https://issues.apache.org/jira/browse/IGNITE-19904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Steshin updated IGNITE-19904: -- Attachment: failure_with_root_npe_cause.log > Assertion in defragmentation > > > Key: IGNITE-19904 > URL: https://issues.apache.org/jira/browse/IGNITE-19904 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.12 >Reporter: Vladimir Steshin >Priority: Major > Labels: ise > Attachments: default-config.xml, failure2.16_with_thread_dump.log, > failure_with_root_npe_cause.log, ignite.log, jvm.opts > > Time Spent: 10m > Remaining Estimate: 0h > > Defragmentation fails with: > {code:java} > java.lang.AssertionError: Invalid state. Type is 0! pageId = 0001000d00024cbf > at > org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.copyPageForCheckpoint(PageMemoryImpl.java:1359) > ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT] > at > org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.checkpointWritePage(PageMemoryImpl.java:1277) > ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT] > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointPagesWriter.writePages(CheckpointPagesWriter.java:208) > ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT] > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointPagesWriter.run(CheckpointPagesWriter.java:150) > ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT] > {code} > Difficult to write a test. Can't reproduce on my computers :(. Flakily > appears on a server (4 core x 4 cpu) with 100G of the test cache data and > million+ pages to checkpoint during defragmentation. More often, this occurs > with pageSize 1024 (to produce more pages). > Based on my diagnostic build, I suppose that a fresh, empty page is caught > in defragmentation. 
Here is a page dump with test-extended PAGE_OVERHEAD > (=64) and the same error a bit before copyPageForCheckpoint(): > {code:java} > org.apache.ignite.IgniteException: Wrong page type in checkpointWritePage1. > Page: Data region = 'defragPartitionsDataRegion'. > FullPageId [pageId=281878703760205, effectivePageId=403727049549, > grpId=-1368047378]. > PageDump = page_id: 281878703760205, rel_id: 48603, cache_id: -1368047378, > pin: 0, lock: 65536, tmp_buf: 72057594037927935, test_val: 1. data_hex: > > at > org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.checkpointWritePage(PageMemoryImpl.java:1240) > ~[ignite-core-2.16.0-SNAPSHOT.jar:2.16.0-SNAPSHOT] > at >
[jira] [Commented] (IGNITE-19888) Java client: Track observable timestamp
[ https://issues.apache.org/jira/browse/IGNITE-19888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17749629#comment-17749629 ] Pavel Tupitsyn commented on IGNITE-19888: - Merged to main: 37fe3e86f176e3ca43b605d42362dfd9acee1583 > Java client: Track observable timestamp > --- > > Key: IGNITE-19888 > URL: https://issues.apache.org/jira/browse/IGNITE-19888 > Project: Ignite > Issue Type: Improvement > Components: platforms, thin client >Affects Versions: 3.0.0-beta1 >Reporter: Vladislav Pyatkov >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 1h > Remaining Estimate: 0h > > *Motivation* > The read timestamp for an RO transaction is supposed to be determined by a > client timestamp to linearize transactions. > *Implementation notes* > * The request which starts an RO transaction (IGNITE-19887) has to provide a > timestamp. > * Requests which start SQL also provide a specific timestamp (if they start > RO internally) (IGNITE-19898: here the concrete method to retrieve the timestamp > will be implemented). > * The current server timestamp ({{clock.now()}}) should be added to the > transaction response (except in the cases above). > * If a server response does not have a timestamp, or the timestamp is less than > the one the client already has, do nothing. > * If the time is greater than the one the client has, the client timestamp should be > updated. > * The timestamp is used to start an RO transaction (IGNITE-19887) > *Definition of done* > The timestamp is passed from the server side to a client. The client just > saves the timestamp and sends it in each request to the server side. > All client-side created RO transactions should execute in the past, with the > timestamp determined by the client timestamp. -- This message was sent by Atlassian Jira (v8.20.10#820010)
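The update rules in the implementation notes above amount to keeping the maximum timestamp observed in server responses. A minimal sketch under that reading; the class and method names are illustrative, not the actual Ignite client API:

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical sketch of client-side observable timestamp tracking:
 * advance on a greater server timestamp, ignore absent or smaller ones.
 */
class ObservableTimestampSketch {
    private final AtomicLong observable = new AtomicLong(Long.MIN_VALUE);

    /** Apply a timestamp from a server response; null means none was sent. */
    void onResponse(Long serverTs) {
        if (serverTs == null)
            return; // response carried no timestamp: do nothing

        // Advance only if the server time is greater than what we hold.
        observable.accumulateAndGet(serverTs, Math::max);
    }

    /** Timestamp to attach to the next request (e.g. starting an RO tx). */
    long current() {
        return observable.get();
    }
}
```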
[jira] [Commented] (IGNITE-19888) Java client: Track observable timestamp
[ https://issues.apache.org/jira/browse/IGNITE-19888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17749626#comment-17749626 ] Denis Chudov commented on IGNITE-19888: --- [~ptupitsyn] LGTM. > Java client: Track observable timestamp > --- > > Key: IGNITE-19888 > URL: https://issues.apache.org/jira/browse/IGNITE-19888 > Project: Ignite > Issue Type: Improvement > Components: platforms, thin client >Affects Versions: 3.0.0-beta1 >Reporter: Vladislav Pyatkov >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 50m > Remaining Estimate: 0h > > *Motivation* > The read timestamp for an RO transaction is supposed to be determined by a > client timestamp to linearize transactions. > *Implementation notes* > * The request which starts an RO transaction (IGNITE-19887) has to provide a > timestamp. > * Requests which start SQL also provide a specific timestamp (if they start > RO internally) (IGNITE-19898: here the concrete method to retrieve the timestamp > will be implemented). > * The current server timestamp ({{clock.now()}}) should be added to the > transaction response (except in the cases above). > * If a server response does not have a timestamp, or the timestamp is less than > the one the client already has, do nothing. > * If the time is greater than the one the client has, the client timestamp should be > updated. > * The timestamp is used to start an RO transaction (IGNITE-19887) > *Definition of done* > The timestamp is passed from the server side to a client. The client just > saves the timestamp and sends it in each request to the server side. > All client-side created RO transactions should execute in the past, with the > timestamp determined by the client timestamp. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19845) IgniteTxLocalAdapter.sndTransformedVals field removal
[ https://issues.apache.org/jira/browse/IGNITE-19845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19845: -- Fix Version/s: 2.16 > IgniteTxLocalAdapter.sndTransformedVals field removal > - > > Key: IGNITE-19845 > URL: https://issues.apache.org/jira/browse/IGNITE-19845 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19845) IgniteTxLocalAdapter.sndTransformedVals field removal
[ https://issues.apache.org/jira/browse/IGNITE-19845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19845: -- Ignite Flags: (was: Docs Required,Release Notes Required) > IgniteTxLocalAdapter.sndTransformedVals field removal > - > > Key: IGNITE-19845 > URL: https://issues.apache.org/jira/browse/IGNITE-19845 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19873) GridNearTxLocal initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-19873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19873: -- Fix Version/s: 2.16 > GridNearTxLocal initial cleanup > --- > > Key: IGNITE-19873 > URL: https://issues.apache.org/jira/browse/IGNITE-19873 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Step-by-step cleanup. > A lot of minor fixes. Most of them are about unused or always true/false > params. > Please check the atomic commits instead of the whole change during review. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19847) IgniteTxAdapter.TxShadow removal
[ https://issues.apache.org/jira/browse/IGNITE-19847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19847: -- Ignite Flags: (was: Docs Required,Release Notes Required) > IgniteTxAdapter.TxShadow removal > > > Key: IGNITE-19847 > URL: https://issues.apache.org/jira/browse/IGNITE-19847 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > Seems it is never used. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19846) IgniteTxAdapter initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-19846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19846: -- Ignite Flags: (was: Docs Required,Release Notes Required) > IgniteTxAdapter initial cleanup > --- > > Key: IGNITE-19846 > URL: https://issues.apache.org/jira/browse/IGNITE-19846 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Time Spent: 1h 10m > Remaining Estimate: 0h > > Fields finalization > Unused fields/methods removal > Code simplification > Methods relocation > Code deduplication -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19874) GridDhtTxLocal initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-19874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19874: -- Ignite Flags: (was: Docs Required,Release Notes Required) > GridDhtTxLocal initial cleanup > -- > > Key: IGNITE-19874 > URL: https://issues.apache.org/jira/browse/IGNITE-19874 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Time Spent: 2h 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19873) GridNearTxLocal initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-19873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19873: -- Ignite Flags: (was: Docs Required,Release Notes Required) > GridNearTxLocal initial cleanup > --- > > Key: IGNITE-19873 > URL: https://issues.apache.org/jira/browse/IGNITE-19873 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > Step-by-step cleanup. > A lot of minor fixes. Most of them are about unused or always true/false > params. > Please check the atomic commits instead of the whole change during review. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19846) IgniteTxAdapter initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-19846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19846: -- Fix Version/s: 2.16 > IgniteTxAdapter initial cleanup > --- > > Key: IGNITE-19846 > URL: https://issues.apache.org/jira/browse/IGNITE-19846 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > Fields finalization > Unused fields/methods removal > Code simplification > Methods relocation > Code deduplication -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19872) IgniteTxLocalAdapter initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-19872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19872: -- Ignite Flags: (was: Docs Required,Release Notes Required) > IgniteTxLocalAdapter initial cleanup > > > Key: IGNITE-19872 > URL: https://issues.apache.org/jira/browse/IGNITE-19872 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19847) IgniteTxAdapter.TxShadow removal
[ https://issues.apache.org/jira/browse/IGNITE-19847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19847: -- Fix Version/s: 2.16 > IgniteTxAdapter.TxShadow removal > > > Key: IGNITE-19847 > URL: https://issues.apache.org/jira/browse/IGNITE-19847 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Seems it is never used. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19874) GridDhtTxLocal initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-19874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19874: -- Fix Version/s: 2.16 > GridDhtTxLocal initial cleanup > -- > > Key: IGNITE-19874 > URL: https://issues.apache.org/jira/browse/IGNITE-19874 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > Time Spent: 2h 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19872) IgniteTxLocalAdapter initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-19872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19872: -- Fix Version/s: 2.16 > IgniteTxLocalAdapter initial cleanup > > > Key: IGNITE-19872 > URL: https://issues.apache.org/jira/browse/IGNITE-19872 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19875) GridDhtTxLocalAdapter initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-19875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19875: -- Fix Version/s: 2.16 > GridDhtTxLocalAdapter initial cleanup > - > > Key: IGNITE-19875 > URL: https://issues.apache.org/jira/browse/IGNITE-19875 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20093) GridCacheSharedManagerAdapter initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20093: -- Fix Version/s: 2.16 > GridCacheSharedManagerAdapter initial cleanup > - > > Key: IGNITE-20093 > URL: https://issues.apache.org/jira/browse/IGNITE-20093 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > Time Spent: 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20094) IgniteTxManager initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20094: -- Ignite Flags: (was: Docs Required,Release Notes Required) > IgniteTxManager initial cleanup > --- > > Key: IGNITE-20094 > URL: https://issues.apache.org/jira/browse/IGNITE-20094 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19875) GridDhtTxLocalAdapter initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-19875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-19875: -- Ignite Flags: (was: Docs Required,Release Notes Required) > GridDhtTxLocalAdapter initial cleanup > - > > Key: IGNITE-19875 > URL: https://issues.apache.org/jira/browse/IGNITE-19875 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20094) IgniteTxManager initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20094: -- Fix Version/s: 2.16 > IgniteTxManager initial cleanup > --- > > Key: IGNITE-20094 > URL: https://issues.apache.org/jira/browse/IGNITE-20094 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > Time Spent: 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20093) GridCacheSharedManagerAdapter initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20093: -- Ignite Flags: (was: Docs Required,Release Notes Required) > GridCacheSharedManagerAdapter initial cleanup > - > > Key: IGNITE-20093 > URL: https://issues.apache.org/jira/browse/IGNITE-20093 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20084) GridDistributedTxRemoteAdapter initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20084: -- Fix Version/s: 2.16 > GridDistributedTxRemoteAdapter initial cleanup > -- > > Key: IGNITE-20084 > URL: https://issues.apache.org/jira/browse/IGNITE-20084 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20085) GridNearTxRemote initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20085: -- Fix Version/s: 2.16 > GridNearTxRemote initial cleanup > > > Key: IGNITE-20085 > URL: https://issues.apache.org/jira/browse/IGNITE-20085 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20084) GridDistributedTxRemoteAdapter initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20084: -- Ignite Flags: (was: Docs Required,Release Notes Required) > GridDistributedTxRemoteAdapter initial cleanup > -- > > Key: IGNITE-20084 > URL: https://issues.apache.org/jira/browse/IGNITE-20084 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20088) GridDhtTxRemote initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20088: -- Ignite Flags: (was: Docs Required,Release Notes Required) > GridDhtTxRemote initial cleanup > --- > > Key: IGNITE-20088 > URL: https://issues.apache.org/jira/browse/IGNITE-20088 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20085) GridNearTxRemote initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20085: -- Ignite Flags: (was: Docs Required,Release Notes Required) > GridNearTxRemote initial cleanup > > > Key: IGNITE-20085 > URL: https://issues.apache.org/jira/browse/IGNITE-20085 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20088) GridDhtTxRemote initial cleanup
[ https://issues.apache.org/jira/browse/IGNITE-20088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Vinogradov updated IGNITE-20088: -- Fix Version/s: 2.16 > GridDhtTxRemote initial cleanup > --- > > Key: IGNITE-20088 > URL: https://issues.apache.org/jira/browse/IGNITE-20088 > Project: Ignite > Issue Type: Sub-task >Reporter: Anton Vinogradov >Assignee: Anton Vinogradov >Priority: Major > Fix For: 2.16 > > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)