[jira] [Updated] (IGNITE-20890) Log execution summary after reading dump
[ https://issues.apache.org/jira/browse/IGNITE-20890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuri Naryshkin updated IGNITE-20890:

Description:
1. Need to log an execution summary after reading a dump, for example "OK all processed" or "Finished with errors".
2. Print {{Consumed partitions 42 of 1000}} so that the user can see how many partitions have been processed.
3. Count the total number of records and print the processing rate once every 30 seconds.

was:
1. Print {{Consumed partitions 42 of 1000}} so that the user can see how many partitions have been processed.
2. Count the total number of records and print the processing rate once every 30 seconds.

> Log execution summary after reading dump
> ----------------------------------------
>
> Key: IGNITE-20890
> URL: https://issues.apache.org/jira/browse/IGNITE-20890
> Project: Ignite
> Issue Type: Task
> Reporter: Yuri Naryshkin
> Assignee: Yuri Naryshkin
> Priority: Major
> Labels: IEP-109, ise
>
> 1. Need to log an execution summary after reading a dump, for example "OK all processed" or "Finished with errors".
> 2. Print {{Consumed partitions 42 of 1000}} so that the user can see how many partitions have been processed.
> 3. Count the total number of records and print the processing rate once every 30 seconds.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
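The stats described above (partition progress, record totals, and a periodic rate) can be sketched as a small standalone helper. This is purely illustrative: DumpReadProgress and its methods are hypothetical names, not part of the actual Ignite dump reader.

```java
// Hypothetical sketch of the requested progress reporting: track consumed
// partitions out of a known total, count records, and compute a processing
// rate that could be printed once every 30 seconds.
class DumpReadProgress {
    private final int totalPartitions;
    private int consumedPartitions;
    private long totalRecords;

    DumpReadProgress(int totalPartitions) {
        this.totalPartitions = totalPartitions;
    }

    /** Called by the reader after a partition is fully consumed. */
    void onPartitionConsumed(long recordCount) {
        consumedPartitions++;
        totalRecords += recordCount;
    }

    /** Progress line in the format proposed in the ticket. */
    String progressLine() {
        return "Consumed partitions " + consumedPartitions + " of " + totalPartitions;
    }

    /** Records per second over the given elapsed interval. */
    double rate(long elapsedMillis) {
        return totalRecords * 1000.0 / elapsedMillis;
    }

    /** Final summary line, depending on whether any partition failed. */
    String summary(boolean hadErrors) {
        return hadErrors ? "Finished with errors" : "OK all processed";
    }

    public static void main(String[] args) {
        DumpReadProgress progress = new DumpReadProgress(1000);
        for (int i = 0; i < 42; i++)
            progress.onPartitionConsumed(100);

        System.out.println(progress.progressLine()); // Consumed partitions 42 of 1000
        System.out.println(progress.rate(30_000) + " records/sec"); // 140.0 records/sec
        System.out.println(progress.summary(false)); // OK all processed
    }
}
```

In a real reader, a scheduled task would call rate(...) every 30 seconds and reset the interval counters; the sketch keeps a single cumulative counter for brevity.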
[jira] [Updated] (IGNITE-20890) Log execution summary after reading dump
[ https://issues.apache.org/jira/browse/IGNITE-20890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuri Naryshkin updated IGNITE-20890:

Summary: Log execution summary after reading dump (was: Log dump loading stats)

> Log execution summary after reading dump
> ----------------------------------------
>
> Key: IGNITE-20890
> URL: https://issues.apache.org/jira/browse/IGNITE-20890
> Project: Ignite
> Issue Type: Task
> Reporter: Yuri Naryshkin
> Assignee: Yuri Naryshkin
> Priority: Major
> Labels: IEP-109, ise
>
> 1. Print {{Consumed partitions 42 of 1000}} so that the user can see how many partitions have been processed.
> 2. Count the total number of records and print the processing rate once every 30 seconds.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (IGNITE-20890) Log dump loading stats
Yuri Naryshkin created IGNITE-20890:
------------------------------------

Summary: Log dump loading stats
Key: IGNITE-20890
URL: https://issues.apache.org/jira/browse/IGNITE-20890
Project: Ignite
Issue Type: Task
Reporter: Yuri Naryshkin
Assignee: Yuri Naryshkin

1. Print {{Consumed partitions 42 of 1000}} so that the user can see how many partitions have been processed.
2. Count the total number of records and print the processing rate once every 30 seconds.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (IGNITE-19807) Deprecate legacy authorization approach through Security Context.
[ https://issues.apache.org/jira/browse/IGNITE-19807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitry Pavlov updated IGNITE-19807:
------------------------------------

Labels: ise (was: )

> Deprecate legacy authorization approach through Security Context.
> -----------------------------------------------------------------
>
> Key: IGNITE-19807
> URL: https://issues.apache.org/jira/browse/IGNITE-19807
> Project: Ignite
> Issue Type: Task
> Reporter: Mikhail Petrov
> Assignee: Mikhail Petrov
> Priority: Major
> Labels: ise
> Fix For: 2.16
>
> Time Spent: 1h 20m
> Remaining Estimate: 0h
>
> We currently have several ways to check whether a user has permission to perform an operation:
> 1. IgniteSecurity#authorize methods that delegate the permission check to the security plugin.
> 2. SecurityContext#*OperationAllowed methods. They are currently used for just one check. This approach assumes that the set of granted permissions is returned during user authentication and remains immutable.
> Let's deprecate the second authorization approach and migrate completely to the first.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Commented] (IGNITE-20634) Sql. Indices with write-only status should not be accessible via sql schemas.
[ https://issues.apache.org/jira/browse/IGNITE-20634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17787287#comment-17787287 ] Maksim Zhuravkov commented on IGNITE-20634:
--------------------------------------------

[~jooger], [~zstan] Could you please review my PR? https://github.com/apache/ignite-3/pull/2712

> Sql. Indices with write-only status should not be accessible via sql schemas.
> -----------------------------------------------------------------------------
>
> Key: IGNITE-20634
> URL: https://issues.apache.org/jira/browse/IGNITE-20634
> Project: Ignite
> Issue Type: Bug
> Components: sql
> Reporter: Maksim Zhuravkov
> Assignee: Maksim Zhuravkov
> Priority: Major
> Labels: ignite-3
> Fix For: 3.0.0-beta2
>
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> At the moment SqlSchemaManager ignores the write-only index status and returns all indices, which may lead to scans/key lookups over an index that is not fully built.
> Update SqlSchemaManager to skip such indices.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (IGNITE-20889) Sql. Change type derivation for literals and expressions for overflowed BIGINT
[ https://issues.apache.org/jira/browse/IGNITE-20889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky updated IGNITE-20889:
-----------------------------------------

Description:
After [1] is merged, it becomes possible to control overflow in numeric operations. There are 3 places in the code (they will be noted under this issue) that have been changed, but this seems to be due to a partially incorrect (Calcite?) implementation. After spending some time on it, I found that the case of an overflowed BIGINT insertion, *ItDmlTest#testInsertValueOverflow*, can be fixed by correctly implementing *SqlNumericLiteral#createSqlType*; a quick check suggests it then works properly without additional code changes (all core manipulations made in the scope of [1] can be reverted). For the second case, insertion of an overflowed SUM of BIGINT, an additional fix is needed; check *SqlValidatorImpl#getValidatedNodeType*. Further research is required here.

[1] https://issues.apache.org/jira/browse/IGNITE-18662

Just as a note, for all the checks above I made a quick hack in IgniteSqlValidator#deriveType:

{code:java}
if (expr instanceof SqlNumericLiteral) {
    SqlNumericLiteral expr0 = (SqlNumericLiteral) expr;

    if (expr0.toValue().length() > 10 && expr0.getTypeName() == SqlTypeName.DECIMAL) {
        return typeFactory.createSqlType(
            SqlTypeName.DECIMAL,
            requireNonNull(20, "prec"),
            0);
    }
}
{code}

was:
After [1] is merged, it becomes possible to control overflow in numeric operations. There are 3 places in the code (they will be noted under this issue) that have been changed, but this seems to be due to a partially incorrect (Calcite?) implementation. After spending some time on it, I found that the case of an overflowed BIGINT insertion, *ItDmlTest#testInsertValueOverflow*, can be fixed by correctly implementing *SqlNumericLiteral#createSqlType*; a quick check suggests it then works properly without additional code changes (all core manipulations made in the scope of [1] can be reverted). For the second case, insertion of an overflowed SUM of BIGINT, an additional fix is needed; check *SqlValidatorImpl#getValidatedNodeType*. Further research is required here.

[1] https://issues.apache.org/jira/browse/IGNITE-18662

Just as a note, I made a quick hack in IgniteSqlValidator#deriveType:

{code:java}
if (expr instanceof SqlNumericLiteral) {
    SqlNumericLiteral expr0 = (SqlNumericLiteral) expr;

    if (expr0.toValue().length() > 10 && expr0.getTypeName() == SqlTypeName.DECIMAL) {
        return typeFactory.createSqlType(
            SqlTypeName.DECIMAL,
            requireNonNull(20, "prec"),
            0);
    }
}
{code}

> Sql. Change type derivation for literals and expressions for overflowed BIGINT
> ------------------------------------------------------------------------------
>
> Key: IGNITE-20889
> URL: https://issues.apache.org/jira/browse/IGNITE-20889
> Project: Ignite
> Issue Type: Improvement
> Components: sql
> Affects Versions: 3.0.0-beta1
> Reporter: Evgeny Stanilovsky
> Priority: Major
> Labels: ignite-3
>
> After [1] is merged, it becomes possible to control overflow in numeric operations. There are 3 places in the code (they will be noted under this issue) that have been changed, but this seems to be due to a partially incorrect (Calcite?) implementation. After spending some time on it, I found that the case of an overflowed BIGINT insertion, *ItDmlTest#testInsertValueOverflow*, can be fixed by correctly implementing *SqlNumericLiteral#createSqlType*; a quick check suggests it then works properly without additional code changes (all core manipulations made in the scope of [1] can be reverted). For the second case, insertion of an overflowed SUM of BIGINT, an additional fix is needed; check *SqlValidatorImpl#getValidatedNodeType*. Further research is required here.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-18662
>
> Just as a note, for all the checks above I made a quick hack in IgniteSqlValidator#deriveType:
>
> {code:java}
> if (expr instanceof SqlNumericLiteral) {
>     SqlNumericLiteral expr0 = (SqlNumericLiteral) expr;
>
>     if (expr0.toValue().length() > 10 && expr0.getTypeName() == SqlTypeName.DECIMAL) {
>         return typeFactory.createSqlType(
>             SqlTypeName.DECIMAL,
>             requireNonNull(20, "prec"),
>             0);
>     }
> }
> {code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (IGNITE-20889) Sql. Change type derivation for literals and expressions for overflowed BIGINT
[ https://issues.apache.org/jira/browse/IGNITE-20889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky updated IGNITE-20889:
-----------------------------------------

Description:
After [1] is merged, it becomes possible to control overflow in numeric operations. There are 3 places in the code (they will be noted under this issue) that have been changed, but this seems to be due to a partially incorrect (Calcite?) implementation. After spending some time on it, I found that the case of an overflowed BIGINT insertion, *ItDmlTest#testInsertValueOverflow*, can be fixed by correctly implementing *SqlNumericLiteral#createSqlType*; a quick check suggests it then works properly without additional code changes (all core manipulations made in the scope of [1] can be reverted). For the second case, insertion of an overflowed SUM of BIGINT, an additional fix is needed; check *SqlValidatorImpl#getValidatedNodeType*. Further research is required here.

[1] https://issues.apache.org/jira/browse/IGNITE-18662

Just as a note, I made a quick hack in IgniteSqlValidator#deriveType:

{code:java}
if (expr instanceof SqlNumericLiteral) {
    SqlNumericLiteral expr0 = (SqlNumericLiteral) expr;

    if (expr0.toValue().length() > 10 && expr0.getTypeName() == SqlTypeName.DECIMAL) {
        return typeFactory.createSqlType(
            SqlTypeName.DECIMAL,
            requireNonNull(20, "prec"),
            0);
    }
}
{code}

was:
After [1] is merged, it becomes possible to control overflow in numeric operations. There are 3 places in the code (they will be noted under this issue) that have been changed, but this seems to be due to a partially incorrect (Calcite?) implementation. After spending some time on it, I found that the case of an overflowed BIGINT insertion, ItDmlTest#testInsertValueOverflow, can be fixed by correctly implementing SqlNumericLiteral#createSqlType; a quick check suggests it then works properly without additional code changes (all core manipulations made in the scope of [1] can be reverted). For the second case, insertion of an overflowed SUM of BIGINT, an additional fix is needed; check SqlValidatorImpl#getValidatedNodeType. Further research is required here.

[1] https://issues.apache.org/jira/browse/IGNITE-18662

Just as a note, I made a quick hack in IgniteSqlValidator#deriveType:

{code:java}
if (expr instanceof SqlNumericLiteral) {
    SqlNumericLiteral expr0 = (SqlNumericLiteral) expr;

    if (expr0.toValue().length() > 10 && expr0.getTypeName() == SqlTypeName.DECIMAL) {
        return typeFactory.createSqlType(
            SqlTypeName.DECIMAL,
            requireNonNull(20, "prec"),
            0);
    }
}
{code}

> Sql. Change type derivation for literals and expressions for overflowed BIGINT
> ------------------------------------------------------------------------------
>
> Key: IGNITE-20889
> URL: https://issues.apache.org/jira/browse/IGNITE-20889
> Project: Ignite
> Issue Type: Improvement
> Components: sql
> Affects Versions: 3.0.0-beta1
> Reporter: Evgeny Stanilovsky
> Priority: Major
> Labels: ignite-3
>
> After [1] is merged, it becomes possible to control overflow in numeric operations. There are 3 places in the code (they will be noted under this issue) that have been changed, but this seems to be due to a partially incorrect (Calcite?) implementation. After spending some time on it, I found that the case of an overflowed BIGINT insertion, *ItDmlTest#testInsertValueOverflow*, can be fixed by correctly implementing *SqlNumericLiteral#createSqlType*; a quick check suggests it then works properly without additional code changes (all core manipulations made in the scope of [1] can be reverted). For the second case, insertion of an overflowed SUM of BIGINT, an additional fix is needed; check *SqlValidatorImpl#getValidatedNodeType*. Further research is required here.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-18662
>
> Just as a note, I made a quick hack in IgniteSqlValidator#deriveType:
>
> {code:java}
> if (expr instanceof SqlNumericLiteral) {
>     SqlNumericLiteral expr0 = (SqlNumericLiteral) expr;
>
>     if (expr0.toValue().length() > 10 && expr0.getTypeName() == SqlTypeName.DECIMAL) {
>         return typeFactory.createSqlType(
>             SqlTypeName.DECIMAL,
>             requireNonNull(20, "prec"),
>             0);
>     }
> }
> {code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (IGNITE-20874) Node cleanup procedure
[ https://issues.apache.org/jira/browse/IGNITE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-20874:
------------------------------------------

Epic Link: IGNITE-20007
Ignite Flags: (was: Docs Required,Release Notes Required)

> Node cleanup procedure
> ----------------------
>
> Key: IGNITE-20874
> URL: https://issues.apache.org/jira/browse/IGNITE-20874
> Project: Ignite
> Issue Type: Improvement
> Reporter: Vladislav Pyatkov
> Priority: Major
> Labels: ignite-3
>
> h3. Motivation
> In the final stage, an RW transaction sends cleanup messages to all replication groups enlisted in the transaction. Although several of these groups might be on the same node, that node would be notified several times. Besides, the release-lock procedure makes sense only once for a specific transaction on a node.
> h3. Definition of done
> Implement a node-wide cleanup. This procedure should be triggered by a direct message to a particular node and release all locks for a specific transaction synchronously (before the response). The request also triggers replication cleanup, which fixes all write intents for the specific transaction.
> h3. Implementation notes
> * Add a new message that ought to be named LockReleaseMessage.
> * The new message has a pure network nature (it is not a replication request).
> * The message should contain a list of replication groups (the list might be empty) and a transaction ID.
> * When the message is received, it should release all locks that are held by the transaction, then update the lock transaction state map, start the cleanup process over all replication groups in the list, and immediately reply (do not wait for cleanup on the replication groups).

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (IGNITE-20889) Sql. Change type derivation for literals and expressions for overflowed BIGINT
Evgeny Stanilovsky created IGNITE-20889:
----------------------------------------

Summary: Sql. Change type derivation for literals and expressions for overflowed BIGINT
Key: IGNITE-20889
URL: https://issues.apache.org/jira/browse/IGNITE-20889
Project: Ignite
Issue Type: Improvement
Components: sql
Affects Versions: 3.0.0-beta1
Reporter: Evgeny Stanilovsky

After [1] is merged, it becomes possible to control overflow in numeric operations. There are 3 places in the code (they will be noted under this issue) that have been changed, but this seems to be due to a partially incorrect (Calcite?) implementation. After spending some time on it, I found that the case of an overflowed BIGINT insertion, ItDmlTest#testInsertValueOverflow, can be fixed by correctly implementing SqlNumericLiteral#createSqlType; a quick check suggests it then works properly without additional code changes (all core manipulations made in the scope of [1] can be reverted). For the second case, insertion of an overflowed SUM of BIGINT, an additional fix is needed; check SqlValidatorImpl#getValidatedNodeType. Further research is required here.

[1] https://issues.apache.org/jira/browse/IGNITE-18662

Just as a note, I made a quick hack in IgniteSqlValidator#deriveType:

{code:java}
if (expr instanceof SqlNumericLiteral) {
    SqlNumericLiteral expr0 = (SqlNumericLiteral) expr;

    if (expr0.toValue().length() > 10 && expr0.getTypeName() == SqlTypeName.DECIMAL) {
        return typeFactory.createSqlType(
            SqlTypeName.DECIMAL,
            requireNonNull(20, "prec"),
            0);
    }
}
{code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (IGNITE-20830) Do not retry attempts to subscribe in TopologyAwareRaftGroupService
[ https://issues.apache.org/jira/browse/IGNITE-20830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladislav Pyatkov updated IGNITE-20830:
----------------------------------------

Description:
h3. Motivation
In IGNITE-20828, retries on UNsubscription were removed. It needs to be considered whether we should retry subscriptions or not.

h3. Implementation notes
* Replace the TopologyAwareRaftGroupService#sendWithRetry method with a simple network invoke.
* Move the specific exception handling logic to the particular method (TopologyAwareRaftGroupService#unsubscribeLeader).

was: In IGNITE-20828, retries on UNsubscription were removed. It needs to be considered whether we should retry on subscriptions or not.

> Do not retry attempts to subscribe in TopologyAwareRaftGroupService
> -------------------------------------------------------------------
>
> Key: IGNITE-20830
> URL: https://issues.apache.org/jira/browse/IGNITE-20830
> Project: Ignite
> Issue Type: Improvement
> Reporter: Roman Puchkovskiy
> Priority: Major
> Labels: ignite-3
> Fix For: 3.0.0-beta2
>
> h3. Motivation
> In IGNITE-20828, retries on UNsubscription were removed. It needs to be considered whether we should retry subscriptions or not.
> h3. Implementation notes
> * Replace the TopologyAwareRaftGroupService#sendWithRetry method with a simple network invoke.
> * Move the specific exception handling logic to the particular method (TopologyAwareRaftGroupService#unsubscribeLeader).

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (IGNITE-20685) Implement ability to trigger transaction recovery
[ https://issues.apache.org/jira/browse/IGNITE-20685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladislav Pyatkov updated IGNITE-20685:
----------------------------------------

Attachment: OrphanTxRecovery-LockConflictHandling.jpg

> Implement ability to trigger transaction recovery
> -------------------------------------------------
>
> Key: IGNITE-20685
> URL: https://issues.apache.org/jira/browse/IGNITE-20685
> Project: Ignite
> Issue Type: Improvement
> Reporter: Alexander Lapin
> Assignee: Vladislav Pyatkov
> Priority: Major
> Labels: ignite-3
> Fix For: 3.0.0-beta2
>
> Attachments: OrphanTxRecovery-LockConflictHandling.jpg
>
> Time Spent: 4.5h
> Remaining Estimate: 0h
>
> h3. Motivation
> Let's assume that the data node somehow found out that the transaction coordinator is dead, but the products of its activity, such as locks and write intents, are still present. In that case it is necessary to check whether the corresponding transaction was actually finished and, if not, finish it.
> h3. Definition of Done
> * A transaction X that detects that the coordinator is dead (the detection logic will be covered in a separate ticket) awaits the commit-partition primary replica and sends an initiateRecoveryReplicaRequest to it in a fully asynchronous manner, meaning that transaction X should behave as specified in the deadlock prevention engine and not explicitly wait for the initiateRecovery result. Actually, we do not expect any direct response from initiate recovery; initiate-recovery failover will be implemented in a different way.
> * The commit partition handles the given request. No-op handling is expected for now; a proper one will be added in IGNITE-20735. Let's consider either TransactionStateResolver or TxManagerImpl as the initiateRecovery handler. TransactionStateResolver seems like the best choice here, but it should be refactored a bit, basically because it won't be only a state resolver any longer.
> h3. Implementation Notes
> * The given ticket is trivial and should be considered a bridge between durable tx coordinator liveness detection and the corresponding initiateRecoveryReplicaRequest handling. Both items will be covered in separate tickets.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (IGNITE-20874) Node cleanup procedure
[ https://issues.apache.org/jira/browse/IGNITE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladislav Pyatkov updated IGNITE-20874:
----------------------------------------

Description:
h3. Motivation
In the final stage, an RW transaction sends cleanup messages to all replication groups enlisted in the transaction. Although several of these groups might be on the same node, that node would be notified several times. Besides, the release-lock procedure makes sense only once for a specific transaction on a node.

h3. Definition of done
Implement a node-wide cleanup. This procedure should be triggered by a direct message to a particular node and release all locks for a specific transaction synchronously (before the response). The request also triggers replication cleanup, which fixes all write intents for the specific transaction.

h3. Implementation notes
* Add a new message that ought to be named LockReleaseMessage.
* The new message has a pure network nature (it is not a replication request).
* The message should contain a list of replication groups (the list might be empty) and a transaction ID.
* When the message is received, it should release all locks that are held by the transaction, then update the lock transaction state map, start the cleanup process over all replication groups in the list, and immediately reply (do not wait for cleanup on the replication groups).

was:
h3. Motivation
In the final stage, an RW transaction sends cleanup messages to all replication groups enlisted in the transaction. Although several of these groups might be on the same node, that node would be notified several times. Besides, the release-lock procedure makes sense only once for a specific transaction on a node.

h3. Definition of done
Implement a node-wide cleanup. This procedure should be triggered by a direct message to a particular node and release all locks for a specific transaction synchronously (before the response). The request also triggers replication cleanup, which fixes all write intents for the specific transaction.

h3. Implementation notes
* Add a new message that ought to be named LockReleaseMessage.
* The new message has a pure network nature (it is not a replication request).
* The message should contain a list of replication groups (the list might be empty) and a transaction ID.
* When the message is received, it should release all locks that are held by the transaction, then update the lock transaction state map, start the cleanup process over all replication groups in the list, and immediately reply (do not wait for cleanup on the replication groups).

> Node cleanup procedure
> ----------------------
>
> Key: IGNITE-20874
> URL: https://issues.apache.org/jira/browse/IGNITE-20874
> Project: Ignite
> Issue Type: Improvement
> Reporter: Vladislav Pyatkov
> Priority: Major
> Labels: ignite-3
>
> h3. Motivation
> In the final stage, an RW transaction sends cleanup messages to all replication groups enlisted in the transaction. Although several of these groups might be on the same node, that node would be notified several times. Besides, the release-lock procedure makes sense only once for a specific transaction on a node.
> h3. Definition of done
> Implement a node-wide cleanup. This procedure should be triggered by a direct message to a particular node and release all locks for a specific transaction synchronously (before the response). The request also triggers replication cleanup, which fixes all write intents for the specific transaction.
> h3. Implementation notes
> * Add a new message that ought to be named LockReleaseMessage.
> * The new message has a pure network nature (it is not a replication request).
> * The message should contain a list of replication groups (the list might be empty) and a transaction ID.
> * When the message is received, it should release all locks that are held by the transaction, then update the lock transaction state map, start the cleanup process over all replication groups in the list, and immediately reply (do not wait for cleanup on the replication groups).

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (IGNITE-20874) Node cleanup procedure
[ https://issues.apache.org/jira/browse/IGNITE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladislav Pyatkov updated IGNITE-20874:
----------------------------------------

Description:
h3. Motivation
In the final stage, an RW transaction sends cleanup messages to all replication groups enlisted in the transaction. Although several of these groups might be on the same node, that node would be notified several times. Besides, the release-lock procedure makes sense only once for a specific transaction on a node.

h3. Definition of done
Implement a node-wide cleanup. This procedure should be triggered by a direct message to a particular node and release all locks for a specific transaction synchronously (before the response). The request also triggers replication cleanup, which fixes all write intents for the specific transaction.

h3. Implementation notes
* Add a new message that ought to be named LockReleaseMessage.
* The new message has a pure network nature (it is not a replication request).
* The message should contain a list of replication groups (the list might be empty) and a transaction ID.
* When the message is received, it should release all locks that are held by the transaction, then update the lock transaction state map, start the cleanup process over all replication groups in the list, and immediately reply (do not wait for cleanup on the replication groups).

was:
h3. Motivation
In the final stage, an RW transaction sends cleanup messages to all replication groups enlisted in the transaction. Although several of these groups might be on the same node, that node would be notified several times. Besides, the release-lock procedure makes sense only once for a specific transaction on a node.

h3. Definition of done
Implement a node-wide cleanup. This procedure should be triggered by a direct message to a particular node and release all locks for a specific transaction synchronously (before the response). The request also triggers replication cleanup, which fixes all write intents for the specific transaction.

h3. Implementation notes
* Add a new message that ought to be named LockReleaseMessage.
* The new message has a pure network nature (it is not a replication request).
* The message should contain a list of replication groups (the list might be empty) and a transaction ID.
* When the message is received, it should release all locks that are held by the transaction, then update the lock transaction state map, start the cleanup process over all replication groups in the list, and immediately reply (do not wait for cleanup on the replication groups).

> Node cleanup procedure
> ----------------------
>
> Key: IGNITE-20874
> URL: https://issues.apache.org/jira/browse/IGNITE-20874
> Project: Ignite
> Issue Type: Improvement
> Reporter: Vladislav Pyatkov
> Priority: Major
> Labels: ignite-3
>
> h3. Motivation
> In the final stage, an RW transaction sends cleanup messages to all replication groups enlisted in the transaction. Although several of these groups might be on the same node, that node would be notified several times. Besides, the release-lock procedure makes sense only once for a specific transaction on a node.
> h3. Definition of done
> Implement a node-wide cleanup. This procedure should be triggered by a direct message to a particular node and release all locks for a specific transaction synchronously (before the response). The request also triggers replication cleanup, which fixes all write intents for the specific transaction.
> h3. Implementation notes
> * Add a new message that ought to be named LockReleaseMessage.
> * The new message has a pure network nature (it is not a replication request).
> * The message should contain a list of replication groups (the list might be empty) and a transaction ID.
> * When the message is received, it should release all locks that are held by the transaction, then update the lock transaction state map, start the cleanup process over all replication groups in the list, and immediately reply (do not wait for cleanup on the replication groups).

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (IGNITE-20874) Node cleanup procedure
[ https://issues.apache.org/jira/browse/IGNITE-20874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladislav Pyatkov updated IGNITE-20874:
----------------------------------------

Description:
h3. Motivation
In the final stage, an RW transaction sends cleanup messages to all replication groups enlisted in the transaction. Although several of these groups might be on the same node, that node would be notified several times. Besides, the release-lock procedure makes sense only once for a specific transaction on a node.

h3. Definition of done
Implement a node-wide cleanup. This procedure should be triggered by a direct message to a particular node and release all locks for a specific transaction synchronously (before the response). The request also triggers replication cleanup, which fixes all write intents for the specific transaction.

h3. Implementation notes
* Add a new message that ought to be named LockReleaseMessage.
* The new message has a pure network nature (it is not a replication request).
* The message should contain a list of replication groups (the list might be empty) and a transaction ID.
* When the message is received, it should release all locks that are held by the transaction, then update the lock transaction state map, start the cleanup process over all replication groups in the list, and immediately reply (do not wait for cleanup on the replication groups).

was:
h3. Motivation
In the final stage, an RW transaction sends cleanup messages to all replication groups enlisted in the transaction. Although several of these groups might be on the same node, that node would be notified several times. Besides, the release-lock procedure makes sense only once for a specific transaction on a node.

h3. Definition of done
Implement a node-wide cleanup. This procedure should be triggered by a direct message to a particular node and release all locks for a specific transaction synchronously (before the response). The request also triggers replication cleanup, which fixes all write intents for the specific transaction.

> Node cleanup procedure
> ----------------------
>
> Key: IGNITE-20874
> URL: https://issues.apache.org/jira/browse/IGNITE-20874
> Project: Ignite
> Issue Type: Improvement
> Reporter: Vladislav Pyatkov
> Priority: Major
> Labels: ignite-3
>
> h3. Motivation
> In the final stage, an RW transaction sends cleanup messages to all replication groups enlisted in the transaction. Although several of these groups might be on the same node, that node would be notified several times. Besides, the release-lock procedure makes sense only once for a specific transaction on a node.
> h3. Definition of done
> Implement a node-wide cleanup. This procedure should be triggered by a direct message to a particular node and release all locks for a specific transaction synchronously (before the response). The request also triggers replication cleanup, which fixes all write intents for the specific transaction.
> h3. Implementation notes
> * Add a new message that ought to be named LockReleaseMessage.
> * The new message has a pure network nature (it is not a replication request).
> * The message should contain a list of replication groups (the list might be empty) and a transaction ID.
> * When the message is received, it should release all locks that are held by the transaction, then update the lock transaction state map, start the cleanup process over all replication groups in the list, and immediately reply (do not wait for cleanup on the replication groups).

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
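The handling sequence in the implementation notes above (release locks synchronously, kick off per-group cleanup, reply without waiting) can be sketched as follows. Everything here is hypothetical: the interfaces, class shapes, and the ACK reply are illustrative stand-ins, not actual Ignite 3 types.

```java
import java.util.List;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of the LockReleaseMessage handling described above.
class LockReleaseMessage {
    final UUID txId;
    final List<String> groupIds; // may be empty

    LockReleaseMessage(UUID txId, List<String> groupIds) {
        this.txId = txId;
        this.groupIds = groupIds;
    }
}

class NodeCleanupHandler {
    // Illustrative stand-ins for the node's lock manager and replication cleanup.
    interface LockManager { void releaseAll(UUID txId); }
    interface CleanupService { CompletableFuture<Void> cleanup(String groupId, UUID txId); }

    private final LockManager lockManager;
    private final CleanupService cleanupService;

    NodeCleanupHandler(LockManager lockManager, CleanupService cleanupService) {
        this.lockManager = lockManager;
        this.cleanupService = cleanupService;
    }

    /** Releases locks synchronously, starts per-group cleanup, and replies immediately. */
    String handle(LockReleaseMessage msg) {
        lockManager.releaseAll(msg.txId);              // synchronous, before the response

        for (String groupId : msg.groupIds)
            cleanupService.cleanup(groupId, msg.txId); // fire-and-forget, do not await

        return "ACK";                                  // immediate reply
    }

    public static void main(String[] args) {
        NodeCleanupHandler handler = new NodeCleanupHandler(
            txId -> System.out.println("released locks for " + txId),
            (groupId, txId) -> CompletableFuture.completedFuture(null));

        String reply = handler.handle(
            new LockReleaseMessage(UUID.randomUUID(), List.of("part_0", "part_1")));
        System.out.println("reply: " + reply); // reply: ACK
    }
}
```

The key design point from the ticket is visible in handle(): the lock release happens before the return, while the per-group cleanup futures are deliberately not awaited.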
[jira] [Updated] (IGNITE-20572) AssertionError in CDC on metadata deserialize
[ https://issues.apache.org/jira/browse/IGNITE-20572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maksim Timonin updated IGNITE-20572: Ignite Flags: (was: Docs Required,Release Notes Required) > AssertionError in CDC on metadata deserialize > - > > Key: IGNITE-20572 > URL: https://issues.apache.org/jira/browse/IGNITE-20572 > Project: Ignite > Issue Type: Bug >Reporter: Sergey Korotkov >Assignee: Maksim Timonin >Priority: Major > Labels: ise > > The CDC process sometimes terminates with the below error. > One of the cases is cache creation, which causes the creation of the .bin file in > the binary_meta directory, for example if custom indexed types are passed via > CacheConfiguration.setIndexedTypes. > Looks like a race condition: CDC processed a binary meta file which was > already created but still empty. > {noformat} > [2023-10-04T22:29:32,673][ERROR][Thread-0][] Cdc error > java.lang.AssertionError: null > at > org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:302) > ~[classes/:?] > at > org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:120) > ~[classes/:?] > at > org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:92) > ~[classes/:?] > at > org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10761) > ~[classes/:?] > at > org.apache.ignite.internal.processors.cache.binary.BinaryMetadataFileStore.restoreMetadata(BinaryMetadataFileStore.java:222) > ~[classes/:?] > at > org.apache.ignite.internal.processors.cache.binary.BinaryMetadataFileStore.restoreMetadata(BinaryMetadataFileStore.java:216) > ~[classes/:?] > at > org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.cacheMetadataLocally(CacheObjectBinaryProcessorImpl.java:1076) > ~[classes/:?] > at > org.apache.ignite.internal.cdc.CdcMain.lambda$updateTypes$4(CdcMain.java:616) > ~[classes/:?] 
> at > java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) > ~[?:?] > at > java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) > ~[?:?] > at > java.util.Spliterators$ArraySpliterator.tryAdvance(Spliterators.java:958) > ~[?:?] > at > java.util.stream.StreamSpliterators$WrappingSpliterator.lambda$initPartialTraversalState$0(StreamSpliterators.java:294) > ~[?:?] > at > java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:206) > ~[?:?] > at > java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161) > ~[?:?] > at > java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300) > ~[?:?] > at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681) ~[?:?] > at org.apache.ignite.internal.cdc.CdcMain.updateTypes(CdcMain.java:627) > ~[classes/:?] > at > org.apache.ignite.internal.cdc.CdcMain.updateMetadata(CdcMain.java:588) > ~[classes/:?] > at > org.apache.ignite.internal.cdc.CdcMain.consumeWalSegmentsUntilStopped(CdcMain.java:463) > ~[classes/:?] > at org.apache.ignite.internal.cdc.CdcMain.runX(CdcMain.java:324) > ~[classes/:?] > at org.apache.ignite.internal.cdc.CdcMain.run(CdcMain.java:266) > [classes/:?] > at java.lang.Thread.run(Thread.java:834) [?:?] > {noformat} > *** > Possible solution would be to change the > BinaryMetadataFileStore.writeMetadata to create file atomically. > The same way as in the MarshallerMappingFileStore.writeMapping using the > ATOMIC_MOVE. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-20572) AssertionError in CDC on metadata deserialize
[ https://issues.apache.org/jira/browse/IGNITE-20572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maksim Timonin reassigned IGNITE-20572: --- Assignee: Maksim Timonin > AssertionError in CDC on metadata deserialize > - > > Key: IGNITE-20572 > URL: https://issues.apache.org/jira/browse/IGNITE-20572 > Project: Ignite > Issue Type: Bug >Reporter: Sergey Korotkov >Assignee: Maksim Timonin >Priority: Major > Labels: ise > > The CDC process sometimes terminates with the below error. > One of the cases is cache creation, which causes the creation of the .bin file in > the binary_meta directory, for example if custom indexed types are passed via > CacheConfiguration.setIndexedTypes. > Looks like a race condition: CDC processed a binary meta file which was > already created but still empty. > {noformat} > [2023-10-04T22:29:32,673][ERROR][Thread-0][] Cdc error > java.lang.AssertionError: null > at > org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:302) > ~[classes/:?] > at > org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:120) > ~[classes/:?] > at > org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:92) > ~[classes/:?] > at > org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10761) > ~[classes/:?] > at > org.apache.ignite.internal.processors.cache.binary.BinaryMetadataFileStore.restoreMetadata(BinaryMetadataFileStore.java:222) > ~[classes/:?] > at > org.apache.ignite.internal.processors.cache.binary.BinaryMetadataFileStore.restoreMetadata(BinaryMetadataFileStore.java:216) > ~[classes/:?] > at > org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.cacheMetadataLocally(CacheObjectBinaryProcessorImpl.java:1076) > ~[classes/:?] > at > org.apache.ignite.internal.cdc.CdcMain.lambda$updateTypes$4(CdcMain.java:616) > ~[classes/:?] 
> at > java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) > ~[?:?] > at > java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) > ~[?:?] > at > java.util.Spliterators$ArraySpliterator.tryAdvance(Spliterators.java:958) > ~[?:?] > at > java.util.stream.StreamSpliterators$WrappingSpliterator.lambda$initPartialTraversalState$0(StreamSpliterators.java:294) > ~[?:?] > at > java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:206) > ~[?:?] > at > java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:161) > ~[?:?] > at > java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:300) > ~[?:?] > at java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681) ~[?:?] > at org.apache.ignite.internal.cdc.CdcMain.updateTypes(CdcMain.java:627) > ~[classes/:?] > at > org.apache.ignite.internal.cdc.CdcMain.updateMetadata(CdcMain.java:588) > ~[classes/:?] > at > org.apache.ignite.internal.cdc.CdcMain.consumeWalSegmentsUntilStopped(CdcMain.java:463) > ~[classes/:?] > at org.apache.ignite.internal.cdc.CdcMain.runX(CdcMain.java:324) > ~[classes/:?] > at org.apache.ignite.internal.cdc.CdcMain.run(CdcMain.java:266) > [classes/:?] > at java.lang.Thread.run(Thread.java:834) [?:?] > {noformat} > *** > Possible solution would be to change the > BinaryMetadataFileStore.writeMetadata to create file atomically. > The same way as in the MarshallerMappingFileStore.writeMapping using the > ATOMIC_MOVE. -- This message was sent by Atlassian Jira (v8.20.10#820010)
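The fix suggested in the ticket (write atomically, the same way MarshallerMappingFileStore.writeMapping does with ATOMIC_MOVE) can be sketched as follows. The class and method names here are illustrative, not the actual BinaryMetadataFileStore code; the point is the write-to-temp-then-rename pattern, which prevents a concurrent reader such as the CDC process from ever seeing a created-but-empty .bin file.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of the proposed fix: write metadata to a temporary file first,
// then atomically rename it into place. A concurrent reader either sees
// the old file, no file, or the complete new file - never a partial one.
class AtomicMetadataWriter {
    static void writeMetadata(Path target, byte[] bytes) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(tmp, bytes); // may be observed half-written, but only under the temp name
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE); // atomic rename on the same file system
    }
}
```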
[jira] [Commented] (IGNITE-20836) Support zipping of dump files
[ https://issues.apache.org/jira/browse/IGNITE-20836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17787176#comment-17787176 ] Ignite TC Bot commented on IGNITE-20836: {panel:title=Branch: [pull/11040/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} {panel:title=Branch: [pull/11040/head] Base: [master] : New Tests (32)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1} {color:#8b}Snapshots 3{color} [[tests 32|https://ci2.ignite.apache.org/viewLog.html?buildId=7618243]] * {color:#013220}IgniteSnapshotTestSuite3: IgniteCacheDumpSelfTest.testZippedCacheDump[nodes=1,backups=0,persistence=false,mode=TRANSACTIONAL,useDataStreamer=true,onlyPrimary=false] - PASSED{color} * {color:#013220}IgniteSnapshotTestSuite3: IgniteCacheDumpSelfTest.testZippedCacheDump[nodes=1,backups=0,persistence=false,mode=TRANSACTIONAL,useDataStreamer=false,onlyPrimary=false] - PASSED{color} * {color:#013220}IgniteSnapshotTestSuite3: IgniteCacheDumpSelfTest.testZippedCacheDump[nodes=1,backups=0,persistence=false,mode=ATOMIC,useDataStreamer=true,onlyPrimary=false] - PASSED{color} * {color:#013220}IgniteSnapshotTestSuite3: IgniteCacheDumpSelfTest.testZippedCacheDump[nodes=1,backups=0,persistence=false,mode=ATOMIC,useDataStreamer=false,onlyPrimary=false] - PASSED{color} * {color:#013220}IgniteSnapshotTestSuite3: IgniteCacheDumpSelfTest.testZippedCacheDump[nodes=1,backups=0,persistence=true,mode=TRANSACTIONAL,useDataStreamer=true,onlyPrimary=false] - PASSED{color} * {color:#013220}IgniteSnapshotTestSuite3: IgniteCacheDumpSelfTest.testZippedCacheDump[nodes=1,backups=0,persistence=true,mode=TRANSACTIONAL,useDataStreamer=false,onlyPrimary=false] - PASSED{color} * {color:#013220}IgniteSnapshotTestSuite3: IgniteCacheDumpSelfTest.testZippedCacheDump[nodes=1,backups=0,persistence=true,mode=ATOMIC,useDataStreamer=true,onlyPrimary=false] - PASSED{color} * {color:#013220}IgniteSnapshotTestSuite3: 
IgniteCacheDumpSelfTest.testZippedCacheDump[nodes=1,backups=0,persistence=true,mode=ATOMIC,useDataStreamer=false,onlyPrimary=false] - PASSED{color} * {color:#013220}IgniteSnapshotTestSuite3: IgniteCacheDumpSelfTest.testZippedCacheDump[nodes=3,backups=0,persistence=false,mode=TRANSACTIONAL,useDataStreamer=true,onlyPrimary=false] - PASSED{color} * {color:#013220}IgniteSnapshotTestSuite3: IgniteCacheDumpSelfTest.testZippedCacheDump[nodes=3,backups=0,persistence=false,mode=TRANSACTIONAL,useDataStreamer=false,onlyPrimary=false] - PASSED{color} * {color:#013220}IgniteSnapshotTestSuite3: IgniteCacheDumpSelfTest.testZippedCacheDump[nodes=3,backups=0,persistence=false,mode=ATOMIC,useDataStreamer=true,onlyPrimary=false] - PASSED{color} ... and 21 new tests {panel} [TeamCity *--> Run :: All* Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7618244&buildTypeId=IgniteTests24Java8_RunAll] > Support zipping of dump files > - > > Key: IGNITE-20836 > URL: https://issues.apache.org/jira/browse/IGNITE-20836 > Project: Ignite > Issue Type: Task >Reporter: Yuri Naryshkin >Assignee: Yuri Naryshkin >Priority: Major > Labels: IEP-109, ise > Time Spent: 4h 10m > Remaining Estimate: 0h > > For additional space saving, need to introduce a mode in which dump files are > compressed during the creation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
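The compression mode exercised by the testZippedCacheDump runs above can be illustrated with a minimal sketch. The class name is hypothetical and the choice of GZIP is an assumption for illustration only; the actual dump implementation may use a different codec or stream layout.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Illustrative sketch only: compressing dump data on the fly by wrapping
// the destination stream in a GZIPOutputStream, so the compressed form is
// produced during creation rather than in a separate post-processing step.
class ZippedDumpWriter {
    static byte[] compress(byte[] raw) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw); // entries would be streamed here as they are dumped
        }
        return bos.toByteArray();
    }
}
```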
[jira] [Assigned] (IGNITE-20778) Provide internal API to get a list of parameters for non-executed query
[ https://issues.apache.org/jira/browse/IGNITE-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maksim Zhuravkov reassigned IGNITE-20778: - Assignee: Maksim Zhuravkov > Provide internal API to get a list of parameters for non-executed query > --- > > Key: IGNITE-20778 > URL: https://issues.apache.org/jira/browse/IGNITE-20778 > Project: Ignite > Issue Type: New Feature > Components: sql >Reporter: Igor Sapego >Assignee: Maksim Zhuravkov >Priority: Major > Labels: ignite-3 > > To properly support ODBC metadata requirements, we should be able to provide the > user with a list of parameters and their types for a non-executed query. > Example > Given a query: > {noformat} > insert into some_table(id, val1, val2) values(?, ?, ?) > {noformat} > This API should return an array of size 3 that at least contains the types of > the parameters. -- This message was sent by Atlassian Jira (v8.20.10#820010)
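A crude version of the counting half of this API can be sketched as below. This is a naive illustration only: it merely counts `?` placeholders outside string literals, whereas the real API would derive both the count and the parameter types from the validated query plan. The class name is hypothetical.

```java
// Naive illustration: count '?' placeholders outside single-quoted string
// literals. A real implementation would take parameter types from the
// validated (but not executed) query plan, not from the raw text.
class ParameterCounter {
    static int count(String sql) {
        int n = 0;
        boolean inLiteral = false;
        for (char c : sql.toCharArray()) {
            if (c == '\'') {
                inLiteral = !inLiteral;      // toggle on quote boundaries
            } else if (c == '?' && !inLiteral) {
                n++;                          // a real parameter placeholder
            }
        }
        return n;
    }
}
```

For the query from the ticket's example, this yields 3, matching the expected array size.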
[jira] [Assigned] (IGNITE-20888) Sql. Use single row replication request when possible
[ https://issues.apache.org/jira/browse/IGNITE-20888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov reassigned IGNITE-20888: - Assignee: Konstantin Orlov > Sql. Use single row replication request when possible > - > > Key: IGNITE-20888 > URL: https://issues.apache.org/jira/browse/IGNITE-20888 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Konstantin Orlov >Assignee: Konstantin Orlov >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > Single-row replication requests show slightly better performance than their > multi-row counterparts. > Let's adjust {{UpdateableTableImpl}} in order to prefer single-row requests > when possible. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20888) Sql. Use single row replication request when possible
[ https://issues.apache.org/jira/browse/IGNITE-20888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-20888: -- Fix Version/s: 3.0.0-beta2 > Sql. Use single row replication request when possible > - > > Key: IGNITE-20888 > URL: https://issues.apache.org/jira/browse/IGNITE-20888 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Konstantin Orlov >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > Single-row replication requests show slightly better performance than their > multi-row counterparts. > Let's adjust {{UpdateableTableImpl}} in order to prefer single-row requests > when possible. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20888) Sql. Use single row replication request when possible
Konstantin Orlov created IGNITE-20888: - Summary: Sql. Use single row replication request when possible Key: IGNITE-20888 URL: https://issues.apache.org/jira/browse/IGNITE-20888 Project: Ignite Issue Type: Improvement Components: sql Reporter: Konstantin Orlov Single-row replication requests show slightly better performance than their multi-row counterparts. Let's adjust {{UpdateableTableImpl}} in order to prefer single-row requests when possible. -- This message was sent by Atlassian Jira (v8.20.10#820010)
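The optimization described in IGNITE-20888 reduces to a size check at the point where the replication request is built. A minimal sketch under stated assumptions: the request names below are illustrative placeholders, not Ignite's actual request classes.

```java
import java.util.Collection;

// Sketch of the dispatch rule: a batch of exactly one row goes through the
// cheaper single-row request, anything else through the multi-row request.
// The returned strings stand in for hypothetical request types.
class RequestChooser {
    static String choose(Collection<?> rows) {
        return rows.size() == 1 ? "SINGLE_ROW_REQUEST" : "MULTI_ROW_REQUEST";
    }
}
```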
[jira] [Assigned] (IGNITE-20884) Registering all indexes for active tables on node recovery
[ https://issues.apache.org/jira/browse/IGNITE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko reassigned IGNITE-20884: Assignee: Kirill Tkalenko > Registering all indexes for active tables on node recovery > -- > > Key: IGNITE-20884 > URL: https://issues.apache.org/jira/browse/IGNITE-20884 > Project: Ignite > Issue Type: Improvement >Reporter: Kirill Tkalenko >Assignee: Kirill Tkalenko >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > At the moment, when performing an UpdateOperation, we use only those indexes > that are current at the time of the operation, which is not correct. > We must update all available indexes since the catalog version on which the > transaction began, as well as all registered indexes since the beginning of > the transaction that have not been deleted. > In this ticket, we need to make sure that, on node recovery, we register > not only indexes from the latest version of the catalog, but also all indexes > of all current tables (which are in the latest version of the catalog) from > the earliest available version of the catalog. > In the future, we will most likely start all the tables, even those deleted > from the earliest version of the catalog, and we will need to register all > indexes for all these tables, but this is another ticket. -- This message was sent by Atlassian Jira (v8.20.10#820010)
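The recovery rule above (register indexes from every available catalog version, for tables that still exist in the latest version) can be sketched as follows. All names here are hypothetical, not Ignite's catalog API; the sketch only demonstrates the version-walking and de-duplication logic.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: walk catalog versions from the earliest available to
// the latest, collecting every index whose table is still present in the
// latest version. Each index is registered at most once.
class RecoveryIndexCollector {
    record Index(int id, int tableId) {}

    static Map<Integer, Index> collect(List<List<Index>> versions, Set<Integer> activeTables) {
        Map<Integer, Index> toRegister = new LinkedHashMap<>();
        for (List<Index> version : versions) {            // earliest -> latest
            for (Index idx : version) {
                if (activeTables.contains(idx.tableId())) {
                    toRegister.putIfAbsent(idx.id(), idx); // register only once
                }
            }
        }
        return toRegister;
    }
}
```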
[jira] [Commented] (IGNITE-20758) Fix LongDestroyDurableBackgroundTaskTest
[ https://issues.apache.org/jira/browse/IGNITE-20758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17787133#comment-17787133 ] Ilya Shishkov commented on IGNITE-20758: [~vladsz83], [~dpavlov], thank you a lot for the review. > Fix LongDestroyDurableBackgroundTaskTest > > > Key: IGNITE-20758 > URL: https://issues.apache.org/jira/browse/IGNITE-20758 > Project: Ignite > Issue Type: Test >Reporter: Ilya Shishkov >Assignee: Ilya Shishkov >Priority: Trivial > Labels: ise > Fix For: 2.17 > > Time Spent: 20m > Remaining Estimate: 0h > > Some tests fails with error: > {code} > java.lang.IllegalArgumentException: Value for '--check-first' property should > be positive. > at > org.apache.ignite.internal.management.cache.CacheValidateIndexesCommandArg.ensurePositive(CacheValidateIndexesCommandArg.java:70) > at > org.apache.ignite.internal.management.cache.CacheValidateIndexesCommandArg.checkFirst(CacheValidateIndexesCommandArg.java:165) > at > org.apache.ignite.internal.processors.cache.persistence.db.LongDestroyDurableBackgroundTaskTest.validateIndexes(LongDestroyDurableBackgroundTaskTest.java:374) > at > org.apache.ignite.internal.processors.cache.persistence.db.LongDestroyDurableBackgroundTaskTest.testLongIndexDeletion(LongDestroyDurableBackgroundTaskTest.java:339) > at > org.apache.ignite.internal.processors.cache.persistence.db.LongDestroyDurableBackgroundTaskTest.testLongIndexDeletionSimple(LongDestroyDurableBackgroundTaskTest.java:630) > ... > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-20886) Don't unregister indexes on CatalogEvent#INDEX_DROP
[ https://issues.apache.org/jira/browse/IGNITE-20886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko reassigned IGNITE-20886: Assignee: Kirill Tkalenko > Don't unregister indexes on CatalogEvent#INDEX_DROP > --- > > Key: IGNITE-20886 > URL: https://issues.apache.org/jira/browse/IGNITE-20886 > Project: Ignite > Issue Type: Improvement >Reporter: Kirill Tkalenko >Assignee: Kirill Tkalenko >Priority: Major > Labels: ignite-3 > > At the moment, when performing an UpdateOperation, we use only those indexes > that are current at the time of the operation, which is not correct. > We must update all available indexes since the catalog version on which the > transaction began, as well as all registered indexes since the beginning of > the transaction that have not been deleted. > In this ticket, we need not unregister indexes on > *org.apache.ignite.internal.catalog.events.CatalogEvent#INDEX_DROP* > (*org.apache.ignite.internal.index.IndexManager#onIndexDrop*), since they may > be needed when performing the operations described above. > Unregistration of indexes must occur in IGNITE-20121(or IGNITE-20120) before > we realize that we no longer need the index and we can safely physically > delete this index both from the catalog (from previous versions) and its > storage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20886) Don't unregister indexes on CatalogEvent#INDEX_DROP
[ https://issues.apache.org/jira/browse/IGNITE-20886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-20886: - Fix Version/s: 3.0.0-beta2 > Don't unregister indexes on CatalogEvent#INDEX_DROP > --- > > Key: IGNITE-20886 > URL: https://issues.apache.org/jira/browse/IGNITE-20886 > Project: Ignite > Issue Type: Improvement >Reporter: Kirill Tkalenko >Assignee: Kirill Tkalenko >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > At the moment, when performing an UpdateOperation, we use only those indexes > that are current at the time of the operation, which is not correct. > We must update all available indexes since the catalog version on which the > transaction began, as well as all registered indexes since the beginning of > the transaction that have not been deleted. > In this ticket, we need not unregister indexes on > *org.apache.ignite.internal.catalog.events.CatalogEvent#INDEX_DROP* > (*org.apache.ignite.internal.index.IndexManager#onIndexDrop*), since they may > be needed when performing the operations described above. > Unregistration of indexes must occur in IGNITE-20121(or IGNITE-20120) before > we realize that we no longer need the index and we can safely physically > delete this index both from the catalog (from previous versions) and its > storage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20884) Registering all indexes for active tables on node recovery
[ https://issues.apache.org/jira/browse/IGNITE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-20884: - Summary: Registering all indexes for active tables on node recovery (was: Registering indexes for active tables on node recovery) > Registering all indexes for active tables on node recovery > -- > > Key: IGNITE-20884 > URL: https://issues.apache.org/jira/browse/IGNITE-20884 > Project: Ignite > Issue Type: Improvement >Reporter: Kirill Tkalenko >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > At the moment, when performing an UpdateOperation, we use only those indexes > that are current at the time of the operation, which is not correct. > We must update all available indexes since the catalog version on which the > transaction began, as well as all registered indexes since the beginning of > the transaction that have not been deleted. > In this ticket, we need to make sure that when on node recovery, we register > not only indexes from the latest version of the catalog, but also all indexes > of all current tables (which are in the latest version of the catalog) from > the earliest available version of the catalog. > In the future, we will most likely start all the tables, even those deleted > from the earliest version of the catalog, and we will need to register all > indexes for all these tables, but this is another ticket. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-20887) Sql. Avoid using blocking api in sql threads
[ https://issues.apache.org/jira/browse/IGNITE-20887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov reassigned IGNITE-20887: - Assignee: Konstantin Orlov > Sql. Avoid using blocking api in sql threads > > > Key: IGNITE-20887 > URL: https://issues.apache.org/jira/browse/IGNITE-20887 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Konstantin Orlov >Assignee: Konstantin Orlov >Priority: Major > Labels: ignite-3 > > As of now, there are two places which block an sql thread in order to wait for > completion of an operation: > * {{join()}} on futures returned by {{UpdatableTable}} in {{ModifyNode}} > * finalisation of a transaction in {{QueryTransactionWrapper}} > Performance of the sql engine is sensitive to blocking of sql threads because > every fragment of a query is bound to a particular thread. > Let's revise and fix the aforementioned places. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20887) Sql. Avoid using blocking api in sql threads
[ https://issues.apache.org/jira/browse/IGNITE-20887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-20887: -- Fix Version/s: 3.0.0-beta2 > Sql. Avoid using blocking api in sql threads > > > Key: IGNITE-20887 > URL: https://issues.apache.org/jira/browse/IGNITE-20887 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Konstantin Orlov >Assignee: Konstantin Orlov >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > As of now, there are two places which block an sql thread in order to wait for > completion of an operation: > * {{join()}} on futures returned by {{UpdatableTable}} in {{ModifyNode}} > * finalisation of a transaction in {{QueryTransactionWrapper}} > Performance of the sql engine is sensitive to blocking of sql threads because > every fragment of a query is bound to a particular thread. > Let's revise and fix the aforementioned places. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20884) Registering indexes for active tables on node recovery
[ https://issues.apache.org/jira/browse/IGNITE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-20884: - Summary: Registering indexes for active tables on node recovery (was: Registering deleted indexes for active tables on node recovery) > Registering indexes for active tables on node recovery > -- > > Key: IGNITE-20884 > URL: https://issues.apache.org/jira/browse/IGNITE-20884 > Project: Ignite > Issue Type: Improvement >Reporter: Kirill Tkalenko >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > At the moment, when performing an UpdateOperation, we use only those indexes > that are current at the time of the operation, which is not correct. > We must update all available indexes since the catalog version on which the > transaction began, as well as all registered indexes since the beginning of > the transaction that have not been deleted. > In this ticket, we need to make sure that when on node recovery, we register > not only indexes from the latest version of the catalog, but also all indexes > of all current tables (which are in the latest version of the catalog) from > the earliest available version of the catalog. > In the future, we will most likely start all the tables, even those deleted > from the earliest version of the catalog, and we will need to register all > indexes for all these tables, but this is another ticket. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20887) Sql. Avoid using blocking api in sql threads
Konstantin Orlov created IGNITE-20887: - Summary: Sql. Avoid using blocking api in sql threads Key: IGNITE-20887 URL: https://issues.apache.org/jira/browse/IGNITE-20887 Project: Ignite Issue Type: Improvement Components: sql Reporter: Konstantin Orlov As of now, there are two places which block an sql thread in order to wait for completion of an operation: * {{join()}} on futures returned by {{UpdatableTable}} in {{ModifyNode}} * finalisation of a transaction in {{QueryTransactionWrapper}} Performance of the sql engine is sensitive to blocking of sql threads because every fragment of a query is bound to a particular thread. Let's revise and fix the aforementioned places. -- This message was sent by Atlassian Jira (v8.20.10#820010)
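The general direction of the fix for the first bullet can be sketched with plain CompletableFuture: replace the blocking `join()` with composition, so the sql thread is released while the update is in flight. The class and method names below are illustrative, not ModifyNode's actual code.

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the fix direction for the join() case in ModifyNode.
class AsyncModify {
    // Blocking variant (the current problem): the sql thread is parked
    // inside join() until the table update completes.
    static long applyBlocking(CompletableFuture<Long> updateFut) {
        return updateFut.join();
    }

    // Non-blocking variant: the continuation runs when the future completes,
    // leaving the sql thread free to process other query fragments meanwhile.
    static CompletableFuture<Long> applyAsync(CompletableFuture<Long> updateFut) {
        return updateFut.thenApply(rowCount -> rowCount); // next pipeline stage goes here
    }
}
```

For a chain of dependent async stages (e.g. the transaction finalisation in QueryTransactionWrapper), `thenCompose` would be used instead of `thenApply`.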
[jira] [Updated] (IGNITE-20884) Registering deleted indexes for active tables on node recovery
[ https://issues.apache.org/jira/browse/IGNITE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-20884: - Description: At the moment, when performing an UpdateOperation, we use only those indexes that are current at the time of the operation, which is not correct. We must update all available indexes since the catalog version on which the transaction began, as well as all registered indexes since the beginning of the transaction that have not been deleted. In this ticket, we need to make sure that when on node recovery, we register not only indexes from the latest version of the catalog, but also all indexes of all current tables (which are in the latest version of the catalog) from the earliest available version of the catalog. In the future, we will most likely start all the tables, even those deleted from the earliest version of the catalog, and we will need to register all indexes for all these tables, but this is another ticket. was: At the moment, when performing an UpdateOperation, we use only those indexes that are current at the time of the operation, which is not correct. We must update all available indexes since the catalog version on which the transaction began, as well as all registered indexes since the beginning of the transaction that have not been deleted. In this ticket, we need to make sure that when on node recovery, we register not only indexes from the latest version of the catalog, but also all available indexes of all current tables (which are in the latest version of the catalog) from the earliest available version of the catalog. In the future, we will most likely start all the tables, even those deleted from the earliest version of the catalog, and we will need to register all available indexes for all these tables, but this is another ticket. 
> Registering deleted indexes for active tables on node recovery > -- > > Key: IGNITE-20884 > URL: https://issues.apache.org/jira/browse/IGNITE-20884 > Project: Ignite > Issue Type: Improvement >Reporter: Kirill Tkalenko >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > At the moment, when performing an UpdateOperation, we use only those indexes > that are current at the time of the operation, which is not correct. > We must update all available indexes since the catalog version on which the > transaction began, as well as all registered indexes since the beginning of > the transaction that have not been deleted. > In this ticket, we need to make sure that when on node recovery, we register > not only indexes from the latest version of the catalog, but also all indexes > of all current tables (which are in the latest version of the catalog) from > the earliest available version of the catalog. > In the future, we will most likely start all the tables, even those deleted > from the earliest version of the catalog, and we will need to register all > indexes for all these tables, but this is another ticket. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20884) Registering deleted indexes for active tables on node recovery
[ https://issues.apache.org/jira/browse/IGNITE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Tkalenko updated IGNITE-20884: - Summary: Registering deleted indexes for active tables on node recovery (was: Registering deleted available indexes for active tables on node recovery) > Registering deleted indexes for active tables on node recovery > -- > > Key: IGNITE-20884 > URL: https://issues.apache.org/jira/browse/IGNITE-20884 > Project: Ignite > Issue Type: Improvement >Reporter: Kirill Tkalenko >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > At the moment, when performing an UpdateOperation, we use only those indexes > that are current at the time of the operation, which is not correct. > We must update all available indexes since the catalog version on which the > transaction began, as well as all registered indexes since the beginning of > the transaction that have not been deleted. > In this ticket, we need to make sure that when on node recovery, we register > not only indexes from the latest version of the catalog, but also all > available indexes of all current tables (which are in the latest version of > the catalog) from the earliest available version of the catalog. > In the future, we will most likely start all the tables, even those deleted > from the earliest version of the catalog, and we will need to register all > available indexes for all these tables, but this is another ticket. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20886) Don't unregister indexes on CatalogEvent#INDEX_DROP
Kirill Tkalenko created IGNITE-20886: Summary: Don't unregister indexes on CatalogEvent#INDEX_DROP Key: IGNITE-20886 URL: https://issues.apache.org/jira/browse/IGNITE-20886 Project: Ignite Issue Type: Improvement Reporter: Kirill Tkalenko At the moment, when performing an UpdateOperation, we use only those indexes that are current at the time of the operation, which is not correct. We must update all available indexes since the catalog version on which the transaction began, as well as all registered indexes since the beginning of the transaction that have not been deleted. In this ticket, we need not unregister indexes on *org.apache.ignite.internal.catalog.events.CatalogEvent#INDEX_DROP* (*org.apache.ignite.internal.index.IndexManager#onIndexDrop*), since they may be needed when performing the operations described above. Unregistration of indexes must occur in IGNITE-20121(or IGNITE-20120) before we realize that we no longer need the index and we can safely physically delete this index both from the catalog (from previous versions) and its storage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20149) Sql. Revise use INTERNAL_ERR in sql module
[ https://issues.apache.org/jira/browse/IGNITE-20149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-20149: - Fix Version/s: 3.0.0-beta2 > Sql. Revise use INTERNAL_ERR in sql module > -- > > Key: IGNITE-20149 > URL: https://issues.apache.org/jira/browse/IGNITE-20149 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Yury Gerzhedovich >Assignee: Yury Gerzhedovich >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 50m > Remaining Estimate: 0h > > The error code Common.INTERNAL_ERR should be used only for internal errors, which > can be treated as bugs requiring attention from a developer. However, we often use > this error code for normal situations as well, e.g. a node leaving during execution of > a query. > Let's revise the SQL module's use of the INTERNAL_ERR error code according to the above. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20760) Drop column error message get indexes by column name only
[ https://issues.apache.org/jira/browse/IGNITE-20760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-20760: - Fix Version/s: 3.0.0-beta2 > Drop column error message get indexes by column name only > -- > > Key: IGNITE-20760 > URL: https://issues.apache.org/jira/browse/IGNITE-20760 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 3.0 >Reporter: Alexander Belyak >Assignee: Yury Gerzhedovich >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 20m > Remaining Estimate: 0h > > If there is an index preventing a column from being dropped, then the error message > contains all indexes that use a column with the same name, even indexes from > completely different tables: > {code:java} > drop table tab1; > drop table tab2; > create table tab1(id integer not null primary key, f1 int); > create index tab1_f1 on tab1(f1); > create table tab2(id integer not null primary key, f1 int, f2 int); > create index tab2_f1 on tab2(f1); > create index tab2_f12 on tab2(f1,f2); > alter table tab2 drop column f1; > >> Fails with a wrong error message: > >> [Code: 0, SQL State: 5] Failed to validate query. Deleting column > >> 'F1' used by index(es) [TAB1_F1, TAB2_F1, TAB2_F12], it is not allowed > >> because it contains the TAB1_F1 index > drop index tab2_f12; > drop index tab2_f1; > alter table tab2 drop column f1 > >> Success, so the problem is only in the error message generation. {code} > -- This message was sent by Atlassian Jira (v8.20.10#820010)
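The likely fix is to scope the index lookup to the table being altered rather than matching by column name alone. A minimal sketch under that assumption — the TableIndex record and the method name are hypothetical, not the actual validation code:

```java
import java.util.List;
import java.util.stream.Collectors;

public class DropColumnCheck {
    /** A registered index (hypothetical stand-in for the catalog metadata). */
    public record TableIndex(String table, String name, List<String> columns) {}

    /** Returns names of indexes that block dropping the column: the lookup is
     *  scoped to the altered table first, then matched by column name. */
    public static List<String> indexesBlockingDrop(List<TableIndex> allIndexes,
                                                   String table, String column) {
        return allIndexes.stream()
                .filter(idx -> idx.table().equals(table))      // scope to the altered table
                .filter(idx -> idx.columns().contains(column)) // then match the column
                .map(TableIndex::name)
                .collect(Collectors.toList());
    }
}
```

With the TAB1/TAB2 schema from the repro above, this would report only [TAB2_F1, TAB2_F12] for `alter table tab2 drop column f1`, excluding TAB1_F1.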
[jira] [Created] (IGNITE-20885) SQL. Bump calcite version to 1.36
Evgeny Stanilovsky created IGNITE-20885: --- Summary: SQL. Bump calcite version to 1.36 Key: IGNITE-20885 URL: https://issues.apache.org/jira/browse/IGNITE-20885 Project: Ignite Issue Type: Improvement Components: sql Affects Versions: 3.0.0-beta1 Reporter: Evgeny Stanilovsky A new version [1] has been released; [3] is included in it, so we can simplify the corresponding (SqlKind.DEFAULT keyword) code in IgniteSqlToRelConvertor, implemented here [2]. [1] https://calcite.apache.org/docs/history.html#v1-36-0 [2] https://issues.apache.org/jira/browse/IGNITE-19096 [3] https://issues.apache.org/jira/browse/CALCITE-5950 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20875) Add enlistment consistency token to PrimaryReplicaRequest interface
[ https://issues.apache.org/jira/browse/IGNITE-20875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladislav Pyatkov updated IGNITE-20875: --- Description: h3. Motivation The procedure in the method PartitionReplicaListener#ensureReplicaIsPrimary is not straightforward, because we have no interface that provides the enlistment consistency id. PrimaryReplicaRequest is well-suited for the role, because it is used to determine messages targeted at the primary replica. h3. Definition of done All messages that are targeted at the primary replica should implement the PrimaryReplicaRequest interface. The interface should have the enlistment consistency token internally. was: h3. Motivation The procedure in the method PartitionReplicaListener#ensureReplicaIsPrimary looks not simple because we have no interface that provides the enlistment consistency id. But PrimaryReplicaTestRequest well-suited for the role because it is used to determine messages targeted at the primary replica. h3. Definition of done All messages that are certain to the primary replica should include the PrimaryReplicaTestRequest interface. The interface should have the enlistment consistency token internally. > Add enlistment consistency token to PrimaryReplicaRequest interface > --- > > Key: IGNITE-20875 > URL: https://issues.apache.org/jira/browse/IGNITE-20875 > Project: Ignite > Issue Type: Improvement >Reporter: Vladislav Pyatkov >Priority: Major > Labels: ignite-3 > > h3. Motivation > The procedure in the method PartitionReplicaListener#ensureReplicaIsPrimary > is not straightforward, because we have no interface that provides the enlistment > consistency id. PrimaryReplicaRequest is well-suited for the role, because it > is used to determine messages targeted at the primary replica. > h3. Definition of done > All messages that are targeted at the primary replica should implement the > PrimaryReplicaRequest interface. 
The interface should have the enlistment > consistency token internally. -- This message was sent by Atlassian Jira (v8.20.10#820010)
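A possible shape of the proposed interface, sketched with a hypothetical request type (only the interface name and the token concept come from the ticket; everything else is illustrative):

```java
public class PrimaryReplicaSketch {
    /** Messages addressed to a primary replica expose the token uniformly,
     *  so ensureReplicaIsPrimary needs no per-message-type handling. */
    public interface PrimaryReplicaRequest {
        long enlistmentConsistencyToken();
    }

    /** Example request carrying the token (hypothetical type). */
    public record ReadWriteRequest(String groupId, long enlistmentConsistencyToken)
            implements PrimaryReplicaRequest {}

    /** The primary-replica check can then compare tokens generically. */
    public static boolean tokenMatches(PrimaryReplicaRequest req, long currentToken) {
        return req.enlistmentConsistencyToken() == currentToken;
    }
}
```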
[jira] [Updated] (IGNITE-20875) Add enlistment consistency token to PrimaryReplicaRequest interface
[ https://issues.apache.org/jira/browse/IGNITE-20875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladislav Pyatkov updated IGNITE-20875: --- Summary: Add enlistment consistency token to PrimaryReplicaRequest interface (was: Add enlistment consistency token to PrimaryReplicaTestRequest interface) > Add enlistment consistency token to PrimaryReplicaRequest interface > --- > > Key: IGNITE-20875 > URL: https://issues.apache.org/jira/browse/IGNITE-20875 > Project: Ignite > Issue Type: Improvement >Reporter: Vladislav Pyatkov >Priority: Major > Labels: ignite-3 > > h3. Motivation > The procedure in the method PartitionReplicaListener#ensureReplicaIsPrimary > is not straightforward, because we have no interface that provides the enlistment > consistency id. PrimaryReplicaTestRequest is well-suited for the role > because it is used to determine messages targeted at the primary replica. > h3. Definition of done > All messages that are targeted at the primary replica should implement the > PrimaryReplicaTestRequest interface. The interface should have the enlistment > consistency token internally. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20884) Registering deleted available indexes for active tables on node recovery
Kirill Tkalenko created IGNITE-20884: Summary: Registering deleted available indexes for active tables on node recovery Key: IGNITE-20884 URL: https://issues.apache.org/jira/browse/IGNITE-20884 Project: Ignite Issue Type: Improvement Reporter: Kirill Tkalenko Fix For: 3.0.0-beta2 At the moment, when performing an UpdateOperation, we use only those indexes that are current at the time of the operation, which is not correct. We must update all available indexes since the catalog version on which the transaction began, as well as all registered indexes since the beginning of the transaction that have not been deleted. In this ticket, we need to make sure that, on node recovery, we register not only indexes from the latest version of the catalog, but also all available indexes of all current tables (which are in the latest version of the catalog) from the earliest available version of the catalog. In the future, we will most likely start all the tables, even those deleted from the earliest version of the catalog, and we will need to register all available indexes for all these tables, but this is another ticket. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20883) ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky with The query was cancelled while executing
[ https://issues.apache.org/jira/browse/IGNITE-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20883: - Labels: ignite-3 (was: ) > ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky with The query > was cancelled while executing > - > > Key: IGNITE-20883 > URL: https://issues.apache.org/jira/browse/IGNITE-20883 > Project: Ignite > Issue Type: Bug >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3 > > > {code:java} > org.apache.ignite.sql.SqlException: IGN-SQL-8 > TraceId:bdc067e0-18f5-4c17-a3c7-9777e02a9abd The query was cancelled while > executing. at > java.base@11.0.17/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) > at > app//org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:765) > at > app//org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:699) > at > app//org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:536) > at > app//org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCauseInternal(ExceptionUtils.java:634) > at > app//org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:487) > at > app//org.apache.ignite.internal.sql.AbstractSession.execute(AbstractSession.java:63) > at > app//org.apache.ignite.internal.runner.app.ItDataSchemaSyncTest.checkSchemasCorrectlyRestore(ItDataSchemaSyncTest.java:268) > at > java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) at > java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base@11.0.17/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base@11.0.17/java.lang.reflect.Method.invoke(Method.java:566) at > app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727) > at > 
app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > at > app//org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45) > at > app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156) > at > app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147) > at > app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86) > at > app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103) > at > app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93) > at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > at > app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92) > at > app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86) > at > app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217) > at > app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) > at > 
app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213) > at > app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138) > at > app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68) > at > app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151) > at > app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) > at > app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursivel
[jira] [Resolved] (IGNITE-20570) ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky possibly because of an issue with RAFT
[ https://issues.apache.org/jira/browse/IGNITE-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin resolved IGNITE-20570. -- Resolution: Won't Fix `Replication is timed out` is no longer reproducible; however, the given test may still fail with `The query was cancelled while executing`. I've created a new [ticket|https://issues.apache.org/jira/browse/IGNITE-20883] for that. > ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky possibly because > of an issue with RAFT > --- > > Key: IGNITE-20570 > URL: https://issues.apache.org/jira/browse/IGNITE-20570 > Project: Ignite > Issue Type: Bug >Reporter: Sergey Chugunov >Priority: Major > Labels: ignite-3 > > Test failed recently with the following stack trace in logs (abridged): > {code:java} > org.apache.ignite.tx.TransactionException: IGN-REP-3 > TraceId:41e00c72-74d5-4309-ba56-acdffd3a4132 > org.apache.ignite.internal.replicator.exception.ReplicationTimeoutException: > IGN-REP-3 TraceId:41e00c72-74d5-4309-ba56-acdffd3a4132 Replication is timed > out [replicaGrpId=3_part_5] > at > java.base@11.0.17/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) > at > app//org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:772) > at > app//org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:706) > at > app//org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:543) > at > app//org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCauseInternal(ExceptionUtils.java:641) > at > app//org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:494) > at > app//org.apache.ignite.internal.sql.AbstractSession.execute(AbstractSession.java:63) > at > app//org.apache.ignite.internal.runner.app.ItDataSchemaSyncTest.sql(ItDataSchemaSyncTest.java:364) > at > 
app//org.apache.ignite.internal.runner.app.ItDataSchemaSyncTest.checkSchemasCorrectlyRestore(ItDataSchemaSyncTest.java:273) > > ...{code} > Link to the failed run is > [here|https://ci.ignite.apache.org/viewLog.html?buildId=7537526&tab=buildResultsDiv&buildTypeId=ApacheIgnite3xGradle_Test_IntegrationTests_ModuleRunner]. > It was failing before for other reasons (see linked ticket) but they seem to > be fixed. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20883) ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky with The query was cancelled while executing
[ https://issues.apache.org/jira/browse/IGNITE-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-20883: - Summary: ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky with The query was cancelled while executing (was: ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky) > ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky with The query > was cancelled while executing > - > > Key: IGNITE-20883 > URL: https://issues.apache.org/jira/browse/IGNITE-20883 > Project: Ignite > Issue Type: Bug >Reporter: Alexander Lapin >Priority: Major > > > {code:java} > org.apache.ignite.sql.SqlException: IGN-SQL-8 > TraceId:bdc067e0-18f5-4c17-a3c7-9777e02a9abd The query was cancelled while > executing. at > java.base@11.0.17/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) > at > app//org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:765) > at > app//org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:699) > at > app//org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:536) > at > app//org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCauseInternal(ExceptionUtils.java:634) > at > app//org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:487) > at > app//org.apache.ignite.internal.sql.AbstractSession.execute(AbstractSession.java:63) > at > app//org.apache.ignite.internal.runner.app.ItDataSchemaSyncTest.checkSchemasCorrectlyRestore(ItDataSchemaSyncTest.java:268) > at > java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) at > java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base@11.0.17/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at 
java.base@11.0.17/java.lang.reflect.Method.invoke(Method.java:566) at > app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727) > at > app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > at > app//org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45) > at > app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156) > at > app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147) > at > app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86) > at > app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103) > at > app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93) > at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > at > app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > at > app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92) > at > app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86) > at > 
app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217) > at > app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) > at > app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213) > at > app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138) > at > app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68) > at > app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151) > at > app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute
[jira] [Updated] (IGNITE-20758) Fix LongDestroyDurableBackgroundTaskTest
[ https://issues.apache.org/jira/browse/IGNITE-20758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitry Pavlov updated IGNITE-20758: --- Fix Version/s: 2.17 > Fix LongDestroyDurableBackgroundTaskTest > > > Key: IGNITE-20758 > URL: https://issues.apache.org/jira/browse/IGNITE-20758 > Project: Ignite > Issue Type: Test >Reporter: Ilya Shishkov >Assignee: Ilya Shishkov >Priority: Trivial > Labels: ise > Fix For: 2.17 > > Time Spent: 20m > Remaining Estimate: 0h > > Some tests fail with the error: > {code} > java.lang.IllegalArgumentException: Value for '--check-first' property should > be positive. > at > org.apache.ignite.internal.management.cache.CacheValidateIndexesCommandArg.ensurePositive(CacheValidateIndexesCommandArg.java:70) > at > org.apache.ignite.internal.management.cache.CacheValidateIndexesCommandArg.checkFirst(CacheValidateIndexesCommandArg.java:165) > at > org.apache.ignite.internal.processors.cache.persistence.db.LongDestroyDurableBackgroundTaskTest.validateIndexes(LongDestroyDurableBackgroundTaskTest.java:374) > at > org.apache.ignite.internal.processors.cache.persistence.db.LongDestroyDurableBackgroundTaskTest.testLongIndexDeletion(LongDestroyDurableBackgroundTaskTest.java:339) > at > org.apache.ignite.internal.processors.cache.persistence.db.LongDestroyDurableBackgroundTaskTest.testLongIndexDeletionSimple(LongDestroyDurableBackgroundTaskTest.java:630) > ... > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
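For context, the failing check can be reproduced with a minimal re-implementation of the validation; the real CacheValidateIndexesCommandArg.ensurePositive may differ, and this sketch only mirrors the message shown in the stack trace:

```java
public class CheckFirstArg {
    /** Rejects non-positive values for a command argument; the test fails
     *  because it passes such a value for '--check-first'. */
    public static int ensurePositive(int value, String name) {
        if (value <= 0) {
            throw new IllegalArgumentException(
                "Value for '" + name + "' property should be positive.");
        }
        return value;
    }
}
```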
[jira] [Created] (IGNITE-20883) ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky
Alexander Lapin created IGNITE-20883: Summary: ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky Key: IGNITE-20883 URL: https://issues.apache.org/jira/browse/IGNITE-20883 Project: Ignite Issue Type: Bug Reporter: Alexander Lapin {code:java} org.apache.ignite.sql.SqlException: IGN-SQL-8 TraceId:bdc067e0-18f5-4c17-a3c7-9777e02a9abd The query was cancelled while executing. at java.base@11.0.17/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at app//org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:765) at app//org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:699) at app//org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:536) at app//org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCauseInternal(ExceptionUtils.java:634) at app//org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:487) at app//org.apache.ignite.internal.sql.AbstractSession.execute(AbstractSession.java:63) at app//org.apache.ignite.internal.runner.app.ItDataSchemaSyncTest.checkSchemasCorrectlyRestore(ItDataSchemaSyncTest.java:268) at java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base@11.0.17/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base@11.0.17/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base@11.0.17/java.lang.reflect.Method.invoke(Method.java:566) at app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727) at app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at 
app//org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45) at app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156) at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147) at app//org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86) at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103) at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93) at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92) at app//org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86) at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217) at app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213) at app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138) at 
app//org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68) at app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151) at app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at app//org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at app//org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at app//org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at app//org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at app//org.junit.platform.engine.support.
[jira] [Assigned] (IGNITE-20878) Basic criteria queries
[ https://issues.apache.org/jira/browse/IGNITE-20878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Novikov reassigned IGNITE-20878: --- Assignee: Andrey Novikov > Basic criteria queries > -- > > Key: IGNITE-20878 > URL: https://issues.apache.org/jira/browse/IGNITE-20878 > Project: Ignite > Issue Type: New Feature > Components: sql >Reporter: Andrey Novikov >Assignee: Andrey Novikov >Priority: Major > > Implement basic criteria queries. The only field in {{CriteriaQueryOptions}} > should be {{pageSize}}. > Criteria to implement: equals -- This message was sent by Atlassian Jira (v8.20.10#820010)
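A minimal sketch of what the ticket describes: an options record whose only field is pageSize, plus a single equality criterion. Everything beyond the CriteriaQueryOptions/pageSize names and the "equals" criterion mentioned in the ticket is illustrative, not the final Ignite 3 API:

```java
public class CriteriaSketch {
    /** Per the ticket, pageSize is the only option for now (hypothetical shape). */
    public record CriteriaQueryOptions(int pageSize) {}

    /** A predicate over a single field value (illustrative). */
    public interface Criteria {
        boolean test(Object value);
    }

    /** The one criterion the ticket asks for: equality. */
    public static Criteria equalTo(Object expected) {
        return value -> java.util.Objects.equals(value, expected);
    }
}
```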