[jira] [Updated] (IGNITE-20117) Implement index backfill process

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20117:
---
Description: 
Currently, we have a backfill process for an index (aka 'index build'). It needs 
to be tuned to satisfy the following requirements:
 # Before starting the backfill process, we must first wait until all operations of 
transactions started on schemas preceding the index's switch to BACKFILLING have 
finished (see IGNITE-21017)
 # Then, we must wait till safeTime(partition)>=’BACKFILLING state activation 
timestamp’ to avoid a race between starting the backfill process and executing 
writes that happen before the index's BACKFILLING state activates (as these writes 
might not yet write to the index themselves).
 # If, for a row found during the backfill process, there are row versions with 
commitTs <= ActivationTs(Index Backfilling state), then the most recent of them 
is written to the index (see the sketch after this list)
 # All row versions with commitTs > ActivationTs(Index Backfilling state) are 
ignored during the backfill
 # All Write Intents of transactions started at or later than 
ActivationTs(Index Registered state) are ignored
 # For each Write Intent of a transaction started before ActivationTs(Index 
Registered state), the Intent Resolution procedure is performed. If it yields a 
committed version, it's added to the index; if it yields an aborted write, it's 
skipped; if the state is unknown, the backfill freezes until the uncertainty is 
resolved.
 # When the backfill process is finished on all partitions, another schema 
update is installed that declares that the index is in the AVAILABLE state.
 # The backfill process stops early as soon as it detects that the index moved 
to the ‘deleted from the Catalog’ state. Each step of the process might be 
supplied with a timestamp (from the same clock that moves the partition’s 
SafeTime ahead), and that timestamp could be used to check the index's existence; 
this makes it possible to avoid a race between index destruction and the backfill 
process.
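
Purely illustrative sketch of the per-row rules in items 3-6 above; every type 
and method name here is hypothetical and not part of the actual Ignite 3 code base:
{code:java}
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/** Hypothetical sketch of the per-row backfill decision (not the real Ignite 3 API). */
class BackfillSketch {
    enum TxState { COMMITTED, ABORTED, UNKNOWN }

    record RowVersion(long commitTs, Object row) {}
    record WriteIntent(long txBeginTs, Object row) {}

    interface Index {
        void put(Object row);
    }

    /** Chooses what to index for a single row chain, per the rules above. */
    static void backfillRow(List<RowVersion> committedVersions,
                            Optional<WriteIntent> writeIntent,
                            long backfillingActivationTs,
                            long registeredActivationTs,
                            Index index) throws InterruptedException {
        // Items 3-4: take the latest version committed at or before the BACKFILLING
        // activation; everything committed after it is ignored (regular writes index it).
        committedVersions.stream()
                .filter(v -> v.commitTs() <= backfillingActivationTs)
                .max(Comparator.comparingLong(RowVersion::commitTs))
                .ifPresent(v -> index.put(v.row()));

        // Items 5-6: write intents of transactions started at/after REGISTERED activation
        // are skipped; older ones go through intent resolution, waiting out an unknown outcome.
        if (writeIntent.isPresent() && writeIntent.get().txBeginTs() < registeredActivationTs) {
            WriteIntent wi = writeIntent.get();
            TxState state;
            while ((state = resolveIntent(wi)) == TxState.UNKNOWN) {
                Thread.sleep(100); // freeze until the uncertainty is resolved
            }
            if (state == TxState.COMMITTED) {
                index.put(wi.row());
            }
        }
    }

    private static TxState resolveIntent(WriteIntent wi) {
        return TxState.COMMITTED; // placeholder for the intent resolution procedure
    }
}
{code}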

  was:
Currently, we have backfill process for an index (aka 'index build'). It needs 
to be tuned to satisfy the following requirements:
 # Before starting the backfill process, we must first wait for the finish of 
all operations of transactions started on schemas before the index has switched 
to BACKFILLING (see IGNITE-21017)
 # Then, we must wait till safeTime(partition)>=’BACKFILLING state activation 
timestamp’ to avoid a race between starting the backfill process and executing 
writes that are before the index backfilling activates (as these writes might 
not yet write to the index themselves).
 # If for a row found during the backfill process, there are row versions with 
commitTs <= indexCreationActivationTs, then the most recent of them is written 
to the index
 # If for a row found during the backfill process, there are row versions with 
commitTs > indexCreationActivationTs, then all of them are added to the index; 
otherwise, if there are no such row versions, but there is a write intent (and 
the transaction to which it belongs started before indexCreationActivationTs), 
it is added to the index
 # When the backfill process is finished on all partitions, another schema 
update is installed that declares that the index is in the READY state. This 
installation should be conditional. That is, if the index is still STARTING, it 
should succeed; otherwise (if the index was removed by installing a concurrent 
‘delete from the Catalog’ schema update due to a DROP command), nothing should 
be done here
 # The backfill process stops early as soon as it detects that the index moved 
to ‘deleted from the Catalog’ state. Each step of the process might be supplied 
with a timestamp (from the same clock that moves the partition’s SafeTime 
ahead) and that timestamp could be used to check the index existence; this will 
allow to avoid a race between index destruction and the backfill process.


> Implement index backfill process
> 
>
> Key: IGNITE-20117
> URL: https://issues.apache.org/jira/browse/IGNITE-20117
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Currently, we have backfill process for an index (aka 'index build'). It 
> needs to be tuned to satisfy the following requirements:
>  # Before starting the backfill process, we must first wait for the finish of 
> all operations of transactions started on schemas before the index has 
> switched to BACKFILLING (see IGNITE-21017)
>  # Then, we must wait till safeTime(partition)>=’BACKFILLING state activation 
> timestamp’ to avoid a race between starting the backfill process and 
> executing writes that are before the index backfilling activates (as these 

[jira] [Updated] (IGNITE-21034) Java thin 3.0: testClientRetriesComputeJobOnPrimaryAndDefaultNodes is broken due to notification mechanism

2023-12-18 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21034:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Java thin 3.0: testClientRetriesComputeJobOnPrimaryAndDefaultNodes is broken 
> due to notification mechanism
> --
>
> Key: IGNITE-21034
> URL: https://issues.apache.org/jira/browse/IGNITE-21034
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> *ClientComputeTest.TestClientRetriesComputeJobOnPrimaryAndDefaultNodes* is 
> broken because of the new notification mechanism introduced by IGNITE-20909



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20117) Implement index backfill process

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20117:
---
Description: 
Currently, we have a backfill process for an index (aka 'index build'). It needs 
to be tuned to satisfy the following requirements:
 # Before starting the backfill process, we must first wait for the finish of 
all operations of transactions started on schemas before the index has switched 
to BACKFILLING (see IGNITE-21017)
 # Then, we must wait till safeTime(partition)>=’BACKFILLING state activation 
timestamp’ to avoid a race between starting the backfill process and executing 
writes that are before the index backfilling activates (as these writes might 
not yet write to the index themselves).
 # If for a row found during the backfill process, there are row versions with 
commitTs <= indexCreationActivationTs, then the most recent of them is written 
to the index
 # If for a row found during the backfill process, there are row versions with 
commitTs > indexCreationActivationTs, then all of them are added to the index; 
otherwise, if there are no such row versions, but there is a write intent (and 
the transaction to which it belongs started before indexCreationActivationTs), 
it is added to the index
 # When the backfill process is finished on all partitions, another schema 
update is installed that declares that the index is in the READY state. This 
installation should be conditional. That is, if the index is still STARTING, it 
should succeed; otherwise (if the index was removed by installing a concurrent 
‘delete from the Catalog’ schema update due to a DROP command), nothing should 
be done here
 # The backfill process stops early as soon as it detects that the index moved 
to ‘deleted from the Catalog’ state. Each step of the process might be supplied 
with a timestamp (from the same clock that moves the partition’s SafeTime 
ahead) and that timestamp could be used to check the index existence; this will 
allow to avoid a race between index destruction and the backfill process.

  was:
Currently, we have backfill process for an index (aka 'index build'). It needs 
to be tuned to satisfy the following requirements:
 # When starting the backfill process, we must first wait till 
safeTime(partition)>=’BACKFILLING state activation timestamp’ to avoid a race 
between starting the backfill process and executing writes that are before the 
index backfilling activates (as these writes should not write to the index).
 # If for a row found during the backfill process, there are row versions with 
commitTs <= indexCreationActivationTs, then the most recent of them is written 
to the index
 # If for a row found during the backfill process, there are row versions with 
commitTs > indexCreationActivationTs, then all of them are added to the index; 
otherwise, if there are no such row versions, but there is a write intent (and 
the transaction to which it belongs started before indexCreationActivationTs), 
it is added to the index
 # When the backfill process is finished on all partitions, another schema 
update is installed that declares that the index is in the READY state. This 
installation should be conditional. That is, if the index is still STARTING, it 
should succeed; otherwise (if the index was removed by installing a concurrent 
‘delete from the Catalog’ schema update due to a DROP command), nothing should 
be done here
 # The backfill process stops early as soon as it detects that the index moved 
to ‘deleted from the Catalog’ state. Each step of the process might be supplied 
with a timestamp (from the same clock that moves the partition’s SafeTime 
ahead) and that timestamp could be used to check the index existence; this will 
allow to avoid a race between index destruction and the backfill process.


> Implement index backfill process
> 
>
> Key: IGNITE-20117
> URL: https://issues.apache.org/jira/browse/IGNITE-20117
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Currently, we have backfill process for an index (aka 'index build'). It 
> needs to be tuned to satisfy the following requirements:
>  # Before starting the backfill process, we must first wait for the finish of 
> all operations of transactions started on schemas before the index has 
> switched to BACKFILLING (see IGNITE-21017)
>  # Then, we must wait till safeTime(partition)>=’BACKFILLING state activation 
> timestamp’ to avoid a race between starting the backfill process and 
> executing writes that are before the index backfilling activates (as these 
> writes might not yet write to the index themselves).
>  # If for a row found during the backfill process, there are row versions 
> 

[jira] [Updated] (IGNITE-21085) Fix the update versions script fail on ignite-calcite module.

2023-12-18 Thread Aleksandr Nikolaev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Nikolaev updated IGNITE-21085:

Labels: ise  (was: )

> Fix the update versions script fail on ignite-calcite module.
> -
>
> Key: IGNITE-21085
> URL: https://issues.apache.org/jira/browse/IGNITE-21085
> Project: Ignite
>  Issue Type: Bug
>Reporter: Nikita Amelchev
>Assignee: Nikita Amelchev
>Priority: Critical
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The ignite-calcite module requires the ignite-core dependency at validate 
> phase. Example of fail: 
> [CI|https://ci2.ignite.apache.org/buildConfiguration/Releases_ApacheIgniteMain_ReleaseBuild/7656645?hideProblemsFromDependencies=false=false=true=true=7656644_1948_535=debug=flowAware]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21111) Mechanism to wait for in-flight operations started on old schemas finishing

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-21111:
---
Summary: Mechanism to wait for in-flight operations started on old schemas 
finishing  (was: Mechanism to wait for in-flight operations started on old 
schemas to finish)

> Mechanism to wait for in-flight operations started on old schemas finishing
> ---
>
> Key: IGNITE-21111
> URL: https://issues.apache.org/jira/browse/IGNITE-21111
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21111) Mechanism to wait for in-flight operations started on old schemas to finish

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-21111:
---
Summary: Mechanism to wait for in-flight operations started on old schemas 
to finish  (was: Mechanism to wait for operations executed on old schemas to 
finish)

> Mechanism to wait for in-flight operations started on old schemas to finish
> ---
>
> Key: IGNITE-21111
> URL: https://issues.apache.org/jira/browse/IGNITE-21111
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21111) Mechanism to wait for operations executed on old schemas to finish

2023-12-18 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-21111:
--

 Summary: Mechanism to wait for operations executed on old schemas 
to finish
 Key: IGNITE-21111
 URL: https://issues.apache.org/jira/browse/IGNITE-21111
 Project: Ignite
  Issue Type: Improvement
Reporter: Roman Puchkovskiy
 Fix For: 3.0.0-beta2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20117) Implement index backfill process

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20117:
---
Description: 
Currently, we have a backfill process for an index (aka 'index build'). It needs 
to be tuned to satisfy the following requirements:
 # When starting the backfill process, we must first wait till 
safeTime(partition)>=’BACKFILLING state activation timestamp’ to avoid a race 
between starting the backfill process and executing writes that are before the 
index backfilling activates (as these writes should not write to the index).
 # If for a row found during the backfill process, there are row versions with 
commitTs <= indexCreationActivationTs, then the most recent of them is written 
to the index
 # If for a row found during the backfill process, there are row versions with 
commitTs > indexCreationActivationTs, then all of them are added to the index; 
otherwise, if there are no such row versions, but there is a write intent (and 
the transaction to which it belongs started before indexCreationActivationTs), 
it is added to the index
 # When the backfill process is finished on all partitions, another schema 
update is installed that declares that the index is in the READY state. This 
installation should be conditional. That is, if the index is still STARTING, it 
should succeed; otherwise (if the index was removed by installing a concurrent 
‘delete from the Catalog’ schema update due to a DROP command), nothing should 
be done here
 # The backfill process stops early as soon as it detects that the index moved 
to ‘deleted from the Catalog’ state. Each step of the process might be supplied 
with a timestamp (from the same clock that moves the partition’s SafeTime 
ahead) and that timestamp could be used to check the index existence; this will 
allow to avoid a race between index destruction and the backfill process.

  was:
Currently, we have backfill process for an index (aka 'index build'). It needs 
to be tuned to satisfy the following requirements:
 # When starting the backfill process, we must first wait till 
safeTime(partition)>=’STARTING state activation timestamp’ to avoid a race 
between starting the backfill process and executing writes that are before the 
index creation (as these writes should not write to the index).
 # If for a row found during the backfill process, there are row versions with 
commitTs <= indexCreationActivationTs, then the most recent of them is written 
to the index
 # If for a row found during the backfill process, there are row versions with 
commitTs > indexCreationActivationTs, then all of them are added to the index; 
otherwise, if there are no such row versions, but there is a write intent (and 
the transaction to which it belongs started before indexCreationActivationTs), 
it is added to the index
 # When the backfill process is finished on all partitions, another schema 
update is installed that declares that the index is in the READY state. This 
installation should be conditional. That is, if the index is still STARTING, it 
should succeed; otherwise (if the index was removed by installing a concurrent 
‘delete from the Catalog’ schema update due to a DROP command), nothing should 
be done here
 # The backfill process stops early as soon as it detects that the index moved 
to ‘deleted from the Catalog’ state. Each step of the process might be supplied 
with a timestamp (from the same clock that moves the partition’s SafeTime 
ahead) and that timestamp could be used to check the index existence; this will 
allow to avoid a race between index destruction and the backfill process.


> Implement index backfill process
> 
>
> Key: IGNITE-20117
> URL: https://issues.apache.org/jira/browse/IGNITE-20117
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Currently, we have backfill process for an index (aka 'index build'). It 
> needs to be tuned to satisfy the following requirements:
>  # When starting the backfill process, we must first wait till 
> safeTime(partition)>=’BACKFILLING state activation timestamp’ to avoid a race 
> between starting the backfill process and executing writes that are before 
> the index backfilling activates (as these writes should not write to the 
> index).
>  # If for a row found during the backfill process, there are row versions 
> with commitTs <= indexCreationActivationTs, then the most recent of them is 
> written to the index
>  # If for a row found during the backfill process, there are row versions 
> with commitTs > indexCreationActivationTs, then all of them are added to the 
> index; otherwise, if there are no such row versions, but there is a write 
> intent (and the transaction to 

[jira] [Updated] (IGNITE-18595) Implement index build process during the full state transfer

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-18595:
---
Description: 
Before starting to accept tuples during a full state transfer, we should take 
the list of all the indices of the table in question that are in states between 
REGISTERED and READ_ONLY at the Catalog version passed with the full state 
transfer. Let’s put them in the *CurrentIndices* list.

Then, for each tuple version we accept:
 # If it’s committed, only consider indices from *CurrentIndices* that are not 
in the REGISTERED state now. We don’t need to index committed versions for 
REGISTERED indices, as they will be indexed by the backfiller (after the index 
switches to BACKFILLING). For each remaining index in {*}CurrentIndices{*}, put 
the tuple version into the index if one of the following is true:
 # The index state is not READ_ONLY at the snapshot catalog version (so it’s 
one of BACKFILLING, AVAILABLE, STOPPING) - because these tuples can still be 
read by both RW and RO transactions via the index
 # The index state is READ_ONLY at the snapshot catalog version, but at 
commitTs it either did not yet exist, or strictly preceded STOPPING (we don’t 
include tuples committed on STOPPING as, from the point of view of RO 
transactions, it’s impossible to query such tuples via the index [it is not 
queryable at those timestamps], new RW transactions don’t see the index, and 
old RW transactions [that saw it] have already finished)

 # If it’s a Write Intent, then:
 # If the index is in the REGISTERED state at the snapshot catalog version, add 
the tuple to the index if its transaction was started in the REGISTERED state 
of the index; otherwise, skip it as it will be indexed by the backfiller.
 # If the index is in any of BACKFILLING, AVAILABLE, STOPPING states at the 
snapshot catalog version, add the tuple to the index
 # If the index is in READ_ONLY state at the snapshot catalog version, add the 
tuple to the index only if the transaction had been started before the index 
switched to the STOPPING state (this is to index a write intent from a 
finished, but not yet cleaned up, transaction)
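
For illustration only (not taken from the issue itself), the rules above can be 
condensed into two boolean decisions; all names and parameters below are hypothetical:
{code:java}
/** Hypothetical sketch of the "should this tuple version go into this index?" decision. */
class FullStateTransferIndexingSketch {
    enum IndexState { REGISTERED, BACKFILLING, AVAILABLE, STOPPING, READ_ONLY }

    /** Committed tuple version: index it or not (rule 1 and its sub-items). */
    static boolean indexCommitted(IndexState stateAtSnapshot,
                                  boolean existedAtCommitTs,
                                  boolean precededStoppingAtCommitTs) {
        if (stateAtSnapshot == IndexState.REGISTERED) {
            return false; // the backfiller will index it after the switch to BACKFILLING
        }
        if (stateAtSnapshot != IndexState.READ_ONLY) {
            return true; // BACKFILLING, AVAILABLE or STOPPING: still readable via the index
        }
        // READ_ONLY at the snapshot version: only versions committed when the index did not
        // yet exist, or when it strictly preceded STOPPING, are still visible via the index.
        return !existedAtCommitTs || precededStoppingAtCommitTs;
    }

    /** Write intent: index it or not (rule 2 and its sub-items). */
    static boolean indexWriteIntent(IndexState stateAtSnapshot,
                                    boolean txStartedInRegisteredState,
                                    boolean txStartedBeforeStopping) {
        switch (stateAtSnapshot) {
            case REGISTERED:
                return txStartedInRegisteredState;
            case BACKFILLING:
            case AVAILABLE:
            case STOPPING:
                return true;
            case READ_ONLY:
                return txStartedBeforeStopping;
            default:
                return false;
        }
    }
}
{code}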

Unlike the Backfiller operation, during a full state transfer we don’t need to 
use the Write Intent resolution procedure, as races with transaction cleanup are 
not possible; we just index the Write Intent. If, after the partition replica 
goes online, it gets a cleanup request with ABORT, it will clean the index 
itself.

If the initial state of an index during the full state transfer was BACKFILLING 
and, while accepting the full state transfer, we saw that the index was dropped 
(and moved to the [deleted] pseudostate), we should stop writing to that index 
(and allow it to be destroyed on that partition).

If we start a full state transfer on a partition for which an index is being 
built (so the index is in the BACKFILLING state), we’ll index the accepted 
tuples (according to the rules above). After the full state transfer finishes, 
we’ll start getting ‘add this batch to the index’ commands from the RAFT log 
(as the Backfiller emits them during the backfilling process); we can just 
ignore or reapply them. To ignore them, we can raise a special flag in the index 
storage when finishing a full state transfer that started with the index in the 
BACKFILLING state.
h1. Old version

Here there is no source of information for schema versions, associated with 
individual inserts. The core idea of the full rebalance is that all versions of 
all rows will be sent, while indexes will be rebuilt locally on the consumer. 
This is unfortunate. Why, you may ask.

Imagine the following situation:
 * time T1: table A with index X is created
 * time T2: user uploads the data
 * time T3: user drops index X
 * time T4: “clean” node N enters topology and downloads data via full 
rebalance procedure
 * time T5: N becomes a leader and receives (already running) RO transactions 
with timestamp T2

> Implement index build process during the full state transfer
> 
>
> Key: IGNITE-18595
> URL: https://issues.apache.org/jira/browse/IGNITE-18595
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Before starting to accept tuples during a full state transfer, we should take 
> the list of all the indices of the table in question that are in states 
> between REGISTERED and READ_ONLY at the Catalog version passed with the full 
> state transfer. Let’s put them in the *CurrentIndices* list.
> Then, for each tuple version we accept:
>  # If it’s committed, only consider indices from *CurrentIndices* that are 
> not in the REGISTERED state now. We don’t need index committed versions for 

[jira] [Updated] (IGNITE-21110) Add the management annotation to all management tasks

2023-12-18 Thread Nikita Amelchev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Amelchev updated IGNITE-21110:
-
Labels: IEP-94 ise  (was: IEP-94)

> Add the management annotation to all management tasks
> -
>
> Key: IGNITE-21110
> URL: https://issues.apache.org/jira/browse/IGNITE-21110
> Project: Ignite
>  Issue Type: Task
>Reporter: Nikita Amelchev
>Assignee: Nikita Amelchev
>Priority: Minor
>  Labels: IEP-94, ise
> Fix For: 2.17
>
>
> The {{GridVisorManagementTask}} annotation is used to mark management task. 
> But not all management tasks are covered with it. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21110) Add the management annotation to all management tasks

2023-12-18 Thread Nikita Amelchev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Amelchev updated IGNITE-21110:
-
Labels: IEP-94  (was: )

> Add the management annotation to all management tasks
> -
>
> Key: IGNITE-21110
> URL: https://issues.apache.org/jira/browse/IGNITE-21110
> Project: Ignite
>  Issue Type: Task
>Reporter: Nikita Amelchev
>Assignee: Nikita Amelchev
>Priority: Minor
>  Labels: IEP-94
> Fix For: 2.17
>
>
> The {{GridVisorManagementTask}} annotation is used to mark management task. 
> But not all management tasks are covered with it. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21079) PartitionReplicaListener should not interact with storages in the messaging thread

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21079:
-
Description: 
h3. Motivation

Currently, PartitionReplicaListener does not switch its operations to another 
thread pool when handling ReplicaRequests, so, most of the time, they are 
handled in the messaging thread (the thread that is used to handle incoming 
messages). There is only one such thread per Ignite node, and its disruption 
might harm node liveness a great deal (for instance, it may make the node drop 
off the Physical Topology due to inability to ack a ping in a timely manner).
h3. Definition of Done

Calls to storages may cause I/O and block on locks, so they should be avoided 
in the messaging thread.
h3. Implementation Notes
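
Not part of the original issue text: one possible shape of the fix, sketched with 
invented names, is to hand the request off to a dedicated executor so the messaging 
thread never touches storages:
{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Hypothetical sketch: offload replica request processing from the messaging thread. */
class ReplicaRequestOffloadSketch {
    // Dedicated pool for request processing; sizing is a separate question.
    private final ExecutorService requestExecutor =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    /** Called on the messaging thread; must not block or touch storages. */
    CompletableFuture<Object> onReplicaRequest(Object request) {
        // Only cheap work (validation, routing) stays here; the storage
        // interaction runs on the dedicated pool.
        return CompletableFuture.supplyAsync(() -> processAgainstStorage(request), requestExecutor);
    }

    private Object processAgainstStorage(Object request) {
        // I/O and lock acquisition are allowed here, off the messaging thread.
        return request;
    }
}
{code}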

 

  was:
Currently, PartitionReplicaListener does not switch its operations to another 
thread pool when handling ReplicaRequests, so, most of the time, they are 
handled in the messaging thread (the thread that is used to handle incoming 
messages). There is only one such thread per Ignite node, and its disruption 
might harm node liveness a great deal (for instance, it may make the node drop 
off the Physical Topology due to inability to ack a ping in a timely manner).

Calls to storages may cause I/O and block on locks, so they should be avoided 
in the messaging thread.


> PartitionReplicaListener should not interact with storages in the messaging 
> thread
> --
>
> Key: IGNITE-21079
> URL: https://issues.apache.org/jira/browse/IGNITE-21079
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
>
> h3. Motivation
> Currently, PartitionReplicaListener does not switch its operations to another 
> thread pool when handling ReplicaRequests, so, most of the time, they are 
> handled in the messaging thread (the thread that is used to handle incoming 
> messages). There is only one such thread per Ignite node, and its disruption 
> might harm node liveness a great deal (for instance, it may make the node 
> drop off the Physical Topology due to inability to ack a ping in a timely 
> manner).
> h3. Definition of Done
> Calls to storages may cause I/O and block on locks, so they should be avoided 
> in the messaging thread.
> h3. Implementation Notes
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18595) Implement index build process during the full state transfer

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-18595:
---
Description: 
Before starting to accept tuples during a full state transfer, we should take 
the list of all the indices of the table in question that are in states between 
REGISTERED and READ_ONLY at the Catalog version passed with the full state 
transfer. Let’s put them in the *CurrentIndices* list.

Then, for each tuple version we accept:
 # If it’s committed, only consider indices from *CurrentIndices* that are not 
in the REGISTERED state now. We don’t need index committed versions for 
REGISTERED indices as they will be indexed by the backfiller (after the index 
switches to BACKFILLING). For each remaining index in {*}CurrentIndices{*}, put 
the tuple version to the index if one of the following is true:
 # The index state is not READ_ONLY at the snapshot catalog version (so it’s 
one of BACKFILLING, AVAILABLE, STOPPING) - because these tuples can still be 
read by both RW and RO transactions via the index
 # The index state is READ_ONLY at the snapshot catalog version, but at 
commitTs it either did not yet exist, or strictly preceded STOPPING (we don’t 
include tuples committed on STOPPING as, from the point of view of RO 
transactions, it’s impossible to query such tuples via the index [it is not 
queryable at those timestamps], new RW transactions don’t see the index, and 
old RW transactions [that saw it] have already finished)

 # If it’s a Write Intent, then:
 # If the index is in the REGISTERED state at the snapshot catalog version, add 
the tuple to the index if its transaction was started in the REGISTERED state 
of the index; otherwise, skip it as it will be indexed by the backfiller.
 # If the index is in any of BACKFILLING, AVAILABLE, STOPPING states at the 
snapshot catalog version, add the tuple to the index
 # If the index is in READ_ONLY state at the snapshot catalog version, add the 
tuple to the index only if the transaction had been started before the index 
switched to the STOPPING state (this is to index a write intent from a 
finished, but not yet cleaned up, transaction)

Unlike the Backfiller operation, during a full state transfer, we don’t need to 
use the Write Intent resolution procedure as races with transaction cleanup are 
not possible, we just index a Write Intent; If, after the partition replica 
goes online, it gets a cleanup request with ABORT, it will clean the index 
itself.

If the initial state of an index during the full state transfer was BACKFILLING 
and, while accepting the full state transfer, we saw that the index was dropped 
(and moved to the [deleted] pseudostate), we should stop writing to that index 
(and allow it to be destroyed on that partition).

If we start a full state transfer on a partition for which an index is being 
built (so the index is in the BACKFILLING state): we’ll index the accepted 
tuples (according to the rules above). After the full state transfer finishes, 
we’ll start getting ‘add this batch to the index’ commands from the RAFT log 
(as the Backfiller emits them during the backfilling process), we can just 
ignore or reapply them. To ignore them, we will need to raise a partition 
replica-local flag in the index storage when finishing a full state transfer 
started when the index was in BACKFILLING state.
h1. Old version

Here there is no source of information for schema versions, associated with 
individual inserts. The core idea of the full rebalance is that all versions of 
all rows will be sent, while indexes will be rebuilt locally on the consumer. 
This is unfortunate. Why, you may ask.

Imagine the following situation:
 * time T1: table A with index X is created
 * time T2: user uploads the data
 * time T3: user drops index X
 * time T4: “clean” node N enters topology and downloads data via full 
rebalance procedure
 * time T5: N becomes a leader and receives (already running) RO transactions 
with timestamp T2

> Implement index build process during the full state transfer
> 
>
> Key: IGNITE-18595
> URL: https://issues.apache.org/jira/browse/IGNITE-18595
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Before starting to accept tuples during a full state transfer, we should take 
> the list of all the indices of the table in question that are in states 
> between REGISTERED and READ_ONLY at the Catalog version passed with the full 
> state transfer. Let’s put them in the *CurrentIndices* list.
> Then, for each tuple version we accept:
>  # If it’s committed, only consider indices from *CurrentIndices* that are 
> not in the REGISTERED state now. We don’t need index 

[jira] [Updated] (IGNITE-18595) Implement index build process during the full state transfer

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-18595:
---
Description: 
Before starting to accept tuples during a full state transfer, we should take 
the list of all the indices of the table in question that are in states between 
REGISTERED and READ_ONLY at the Catalog version passed with the full state 
transfer. Let’s put them in the *CurrentIndices* list.

Then, for each tuple version we accept:
 # If it’s committed, only consider indices from *CurrentIndices* that are not 
in the REGISTERED state now. We don’t need index committed versions for 
REGISTERED indices as they will be indexed by the backfiller (after the index 
switches to BACKFILLING). For each remaining index in {*}CurrentIndices{*}, put 
the tuple version to the index if one of the following is true:
 # The index state is not READ_ONLY at the snapshot catalog version (so it’s 
one of BACKFILLING, AVAILABLE, STOPPING) - because these tuples can still be 
read by both RW and RO transactions via the index
 # The index state is READ_ONLY at the snapshot catalog version, but at 
commitTs it either did not yet exist, or strictly preceded STOPPING (we don’t 
include tuples committed on STOPPING as, from the point of view of RO 
transactions, it’s impossible to query such tuples via the index [it is not 
queryable at those timestamps], new RW transactions don’t see the index, and 
old RW transactions [that saw it] have already finished)


 # If it’s a Write Intent, then:
 # If the index is in the REGISTERED state at the snapshot catalog version, add 
the tuple to the index if its transaction was started in the REGISTERED state 
of the index; otherwise, skip it as it will be indexed by the backfiller.
 # If the index is in any of BACKFILLING, AVAILABLE, STOPPING states at the 
snapshot catalog version, add the tuple to the index
 # If the index is in READ_ONLY state at the snapshot catalog version, add the 
tuple to the index only if the transaction had been started before the index 
switched to the STOPPING state (this is to index a write intent from a 
finished, but not yet cleaned up, transaction)

Unlike the Backfiller operation, during a full state transfer, we don’t need to 
use the Write Intent resolution procedure as races with transaction cleanup are 
not possible, we just index a Write Intent; If, after the partition replica 
goes online, it gets a cleanup request with ABORT, it will clean the index 
itself.

If the initial state of an index during the full state transfer was BACKFILLING 
and, while accepting the full state transfer, we saw that the index was dropped 
(and moved to the [deleted] pseudostate), we should stop writing to that index 
(and allow it to be destroyed on that partition).

If we start a full state transfer on a partition for which an index is being 
built (so the index is in the BACKFILLING state): we’ll index the accepted 
tuples (according to the rules above). After the full state transfer finishes, 
we’ll start getting ‘index that batch’ commands from the RAFT log (as the 
Backfiller emits them during the backfilling process), we can just ignore or 
reapply them. To ignore them, we will need to raise a partition replica-local 
flag in the index storage when finishing a full state transfer started when the 
index was in BACKFILLING state.
h1. Old version

Here there is no source of information for schema versions, associated with 
individual inserts. The core idea of the full rebalance is that all versions of 
all rows will be sent, while indexes will be rebuilt locally on the consumer. 
This is unfortunate. Why, you may ask.

Imagine the following situation:
 * time T1: table A with index X is created
 * time T2: user uploads the data
 * time T3: user drops index X
 * time T4: “clean” node N enters topology and downloads data via full 
rebalance procedure
 * time T5: N becomes a leader and receives (already running) RO transactions 
with timestamp T2

> Implement index build process during the full state transfer
> 
>
> Key: IGNITE-18595
> URL: https://issues.apache.org/jira/browse/IGNITE-18595
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Before starting to accept tuples during a full state transfer, we should take 
> the list of all the indices of the table in question that are in states 
> between REGISTERED and READ_ONLY at the Catalog version passed with the full 
> state transfer. Let’s put them in the *CurrentIndices* list.
> Then, for each tuple version we accept:
>  # If it’s committed, only consider indices from *CurrentIndices* that are 
> not in the REGISTERED state now. We don’t need index committed 

[jira] [Updated] (IGNITE-21110) Add the management annotation to all management tasks

2023-12-18 Thread Nikita Amelchev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Amelchev updated IGNITE-21110:
-
Fix Version/s: 2.17

> Add the management annotation to all management tasks
> -
>
> Key: IGNITE-21110
> URL: https://issues.apache.org/jira/browse/IGNITE-21110
> Project: Ignite
>  Issue Type: Task
>Reporter: Nikita Amelchev
>Assignee: Nikita Amelchev
>Priority: Minor
> Fix For: 2.17
>
>
> The {{GridVisorManagementTask}} annotation is used to mark management task. 
> But not all management tasks are covered with it. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21110) Add the management annotation to all management tasks

2023-12-18 Thread Nikita Amelchev (Jira)
Nikita Amelchev created IGNITE-21110:


 Summary: Add the management annotation to all management tasks
 Key: IGNITE-21110
 URL: https://issues.apache.org/jira/browse/IGNITE-21110
 Project: Ignite
  Issue Type: Task
Reporter: Nikita Amelchev
Assignee: Nikita Amelchev


The {{GridVisorManagementTask}} annotation is used to mark management tasks. But 
not all management tasks are covered by it. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20120) Remove index from Catalog when it's READ_ONLY and activation of its STOPPING state is below GC LWM

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20120:
---
Summary: Remove index from Catalog when it's READ_ONLY and activation of 
its STOPPING state is below GC LWM  (was: Remove index from Catalog when 
activation of its STOPPING state is below GC LWM)

> Remove index from Catalog when it's READ_ONLY and activation of its STOPPING 
> state is below GC LWM
> --
>
> Key: IGNITE-20120
> URL: https://issues.apache.org/jira/browse/IGNITE-20120
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> When activation moment of the STOPPING state of an index is below GC LWM 
> (meaning that no new transaction can read from this index), the index should 
> be removed from the Catalog. A conditional schema update (IGNITE-20115) might 
> be used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20120) Remove index from Catalog when it's READ_ONLY and activation of its STOPPING state is below GC LWM

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20120:
---
Description: When an index is READ_ONLY and the activation moment of the 
STOPPING state of the index is below GC LWM (meaning that no new transaction 
can read from this index), the index should be removed from the Catalog. A 
conditional schema update (IGNITE-20115) might be used.  (was: When activation 
moment of the STOPPING state of an index is below GC LWM (meaning that no new 
transaction can read from this index), the index should be removed from the 
Catalog. A conditional schema update (IGNITE-20115) might be used.)
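
As a purely illustrative sketch (hypothetical helper names, timestamps shown as 
plain comparable values), the removal condition described above could look like this:
{code:java}
/** Hypothetical sketch of the removal condition for a READ_ONLY index (not real Ignite 3 code). */
class ReadOnlyIndexRemovalSketch {
    static boolean shouldRemoveFromCatalog(boolean indexIsReadOnly,
                                           long stoppingActivationTs,
                                           long gcLowWatermark) {
        // No new transaction can read from the index once the STOPPING activation
        // moment falls below the GC low watermark, so it is safe to drop it from the
        // Catalog (via a conditional schema update, see IGNITE-20115).
        return indexIsReadOnly && stoppingActivationTs < gcLowWatermark;
    }
}
{code}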

> Remove index from Catalog when it's READ_ONLY and activation of its STOPPING 
> state is below GC LWM
> --
>
> Key: IGNITE-20120
> URL: https://issues.apache.org/jira/browse/IGNITE-20120
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> When an index is READ_ONLY and activation moment of the STOPPING state of the 
> index is below GC LWM (meaning that no new transaction can read from this 
> index), the index should be removed from the Catalog. A conditional schema 
> update (IGNITE-20115) might be used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20125) Write to writable indices when writing to partition

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20125:
---
Description: 
For each writing operation, the operation’s timestamp T~op~ (which is NOT the 
same as the operation’s command’s SafeTime timestamp) is used to get the schema 
corresponding to the operation. Once it’s obtained, all indices writable at 
T~op~ are taken, and the current operation writes to them all.

An index is writable for an RW transaction started at T~begin~ if the index 
exists in a writable state (REGISTERED, BACKFILLING, AVAILABLE, STOPPING) in 
the schema corresponding to T~begin~, but writability is lost once the index 
transitions to the READ_ONLY state (in the CURRENT schema).

If an index does not exist anymore at T~op~, the write to it is just ignored, 
the transaction is not aborted.
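
A minimal, purely illustrative sketch of the writability rule above; the types and 
field names are invented and do not match the actual Ignite 3 API:
{code:java}
import java.util.List;
import java.util.stream.Collectors;

/** Hypothetical sketch: pick the indices a write operation must update. */
class WritableIndicesSketch {
    enum IndexState { REGISTERED, BACKFILLING, AVAILABLE, STOPPING, READ_ONLY }

    record IndexMeta(int id, IndexState stateAtTxBegin, IndexState currentState, boolean existsAtOpTs) {}

    static List<IndexMeta> writableIndices(List<IndexMeta> indicesAtOpSchema) {
        return indicesAtOpSchema.stream()
                // A missing index is simply skipped; the transaction is not aborted.
                .filter(IndexMeta::existsAtOpTs)
                // Writable at T_begin: any state except READ_ONLY.
                .filter(i -> i.stateAtTxBegin() != IndexState.READ_ONLY)
                // Writability is lost once the index becomes READ_ONLY in the current schema.
                .filter(i -> i.currentState() != IndexState.READ_ONLY)
                .collect(Collectors.toList());
    }
}
{code}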


  was:
For each operation that is writing, the operation’s timestamp (which moves 
partition’s SafeTime forward) T~op~ is used to get the schema corresponding to 
the operation. When it’s obtained, all writable (STARTING, READY, STOPPING) 
indices that are write-compatible at T~op~ are taken, and the current operation 
writes to them all.

An index is write-compatible at timestamp T~op~ if for each column of the index 
the following holds: the column was not dropped at all, or it was dropped 
strictly after T~op~.

If an index does not exist anymore at T~op~, the write to it is just ignored, 
the transaction is not aborted.



> Write to writable indices when writing to partition
> ---
>
> Key: IGNITE-20125
> URL: https://issues.apache.org/jira/browse/IGNITE-20125
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> For each operation that is writing, the operation’s timestamp (which is NOT 
> the same as the operation’s command’s SafeTime timestamp) T~op~ is used to 
> get the schema corresponding to the operation. When it’s obtained, all 
> writable indices at T~op~ are taken, and the current operation writes to them 
> all.
> An index is writable for an RW transaction started at T~begin~ if the index 
> exists in a writable state (REGISTERED, BACKFILLING, AVAILABLE, STOPPING) in 
> the schema corresponding to T~begin~, but writability is lost once the index 
> transitions to the READ_ONLY state (in the CURRENT schema).
> If an index does not exist anymore at T~op~, the write to it is just ignored, 
> the transaction is not aborted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20125) Write to writable indices when writing to partition

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20125:
---
Summary: Write to writable indices when writing to partition  (was: Write 
to write-compatible indices when writing to partition)

> Write to writable indices when writing to partition
> ---
>
> Key: IGNITE-20125
> URL: https://issues.apache.org/jira/browse/IGNITE-20125
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> For each operation that is writing, the operation’s timestamp (which moves 
> partition’s SafeTime forward) T~op~ is used to get the schema corresponding 
> to the operation. When it’s obtained, all writable (STARTING, READY, 
> STOPPING) indices that are write-compatible at T~op~ are taken, and the 
> current operation writes to them all.
> An index is write-compatible at timestamp T~op~ if for each column of the 
> index the following holds: the column was not dropped at all, or it was 
> dropped strictly after T~op~.
> If an index does not exist anymore at T~op~, the write to it is just ignored, 
> the transaction is not aborted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20122) When a REGISTERED/BACKFILLING index is dropped, it should be removed right away

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20122:
---
Description: 
When a DROP INDEX is executed for an index that is REGISTERED or BACKFILLING, the 
index should be removed right away (skipping the AVAILABLE and subsequent states) 
and its destruction should start. This should be done using a conditional schema 
update (IGNITE-20115) to avoid a race with switching to the BACKFILLING/AVAILABLE 
state.

If the conditional schema update fails (because the index has been switched to 
the BACKFILLING/AVAILABLE state), we should retry at the new state: if it's 
BACKFILLING, we should try to destroy the index again; if it's AVAILABLE, we fall 
back to the usual procedure (IGNITE-20119).
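
For illustration only, the conditional update plus retry described above could be 
sketched roughly as follows (the catalog API here is invented):
{code:java}
/** Hypothetical sketch of dropping a not-yet-available index with a conditional schema update. */
class DropEarlyIndexSketch {
    enum IndexState { REGISTERED, BACKFILLING, AVAILABLE }

    interface Catalog {
        /** Removes the index only if it is still in the expected state; false if the state changed. */
        boolean removeIndexIf(int indexId, IndexState expectedState);

        IndexState indexState(int indexId);
    }

    static void dropIndexEarly(Catalog catalog, int indexId) {
        IndexState state = catalog.indexState(indexId);
        while (state == IndexState.REGISTERED || state == IndexState.BACKFILLING) {
            if (catalog.removeIndexIf(indexId, state)) {
                return; // removed right away, destruction can start
            }
            state = catalog.indexState(indexId); // lost the race, retry at the new state
        }
        // AVAILABLE: fall back to the usual drop procedure (IGNITE-20119).
    }
}
{code}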

  was:
When a DROP INDEX is executed for an index that is STARTING, it should be 
removed right away (skipping the READY and STOPPING states) and start 
destruction. This should be done using a conditional schema update 
(IGNITE-20115) to avoid a race with switching to the READY state.

If the conditional schema update fails (because the index has been switched to 
the READY state), we should fallback to the usual procedure (IGNITE-20119).


> When a REGISTERED/BACKFILLING index is dropped, it should be removed right 
> away
> ---
>
> Key: IGNITE-20122
> URL: https://issues.apache.org/jira/browse/IGNITE-20122
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> When a DROP INDEX is executed for an index that is STARTING or BACKFILLING, 
> it should be removed right away (skipping the AVAILABLE and subsequent 
> states) and start destruction. This should be done using a conditional schema 
> update (IGNITE-20115) to avoid a race with switching to the 
> BACKFILLING/AVAILABLE state.
> If the conditional schema update fails (because the index has been switched 
> to the BACKFILLING/AVAILABLE state), we should retry at the new state. If 
> it's BACKFILLING, we should try to destroy the index again; if it's 
> AVAILABLE, then fallback to the usual procedure (IGNITE-20119).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20122) When a REGISTERED/BACKFILLING index is dropped, it should be removed right away

2023-12-18 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20122:
---
Summary: When a REGISTERED/BACKFILLING index is dropped, it should be 
removed right away  (was: When a STARTING index is dropped, it should be 
removed right away)

> When a REGISTERED/BACKFILLING index is dropped, it should be removed right 
> away
> ---
>
> Key: IGNITE-20122
> URL: https://issues.apache.org/jira/browse/IGNITE-20122
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> When a DROP INDEX is executed for an index that is STARTING, it should be 
> removed right away (skipping the READY and STOPPING states) and start 
> destruction. This should be done using a conditional schema update 
> (IGNITE-20115) to avoid a race with switching to the READY state.
> If the conditional schema update fails (because the index has been switched 
> to the READY state), we should fallback to the usual procedure (IGNITE-20119).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-20904) MQ API caused the systemview to be unavailable.

2023-12-18 Thread yafengshi (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17795056#comment-17795056
 ] 

yafengshi edited comment on IGNITE-20904 at 12/19/23 2:43 AM:
--

[~NIzhikov] , Please review my changes.
 


was (Author: JIRAUSER302440):
 [~aonikolaev] , Please review my changes.
 

> MQ API caused the systemview to be unavailable.
> ---
>
> Key: IGNITE-20904
> URL: https://issues.apache.org/jira/browse/IGNITE-20904
> Project: Ignite
>  Issue Type: Bug
>  Components: messaging
>Affects Versions: 2.15
>Reporter: yafengshi
>Assignee: yafengshi
>Priority: Critical
>  Labels: newbie
> Fix For: 3.0, 2.17
>
>   Original Estimate: 1h
>  Time Spent: 10m
>  Remaining Estimate: 50m
>
>  
> After I execute the following code, I want to query system views.
> {code:java}
> ignite.message(ignite.cluster().forServers()).remoteListen("A1", (nodeId, 
> msg) -> {
>              System.out.println(msg);
>              return true;
> });
>   
>  for (int i = 0; i < 10; i++)
>    ignite.message().sendOrdered("A1", Integer.toString(i),0); {code}
>  
> then it threw an IllegalStateException.
> {code:java}
> jdbc:ignite:thin://127.0.0.1/sys> select count(1) from CONTINUOUS_QUERIES; 
> Error: General error: "java.lang.IllegalStateException";   {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21109) Move LogicalTopologyServiceImplTest to cluster-management-module

2023-12-18 Thread Aleksandr (Jira)
Aleksandr created IGNITE-21109:
--

 Summary: Move LogicalTopologyServiceImplTest to 
cluster-management-module
 Key: IGNITE-21109
 URL: https://issues.apache.org/jira/browse/IGNITE-21109
 Project: Ignite
  Issue Type: Improvement
  Components: compute
Reporter: Aleksandr


LogicalTopologyServiceImplTest was moved into the compute module because it is 
located in the compute package. We have to move it back and change the package 
name.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20735) Implement initiate recovery handling logic

2023-12-18 Thread Denis Chudov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798323#comment-17798323
 ] 

Denis Chudov commented on IGNITE-20735:
---

[~ksizov] LGTM.

> Implement initiate recovery handling logic
> --
>
> Key: IGNITE-20735
> URL: https://issues.apache.org/jira/browse/IGNITE-20735
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> IGNITE-20685 will send an initiate-recovery replica request that should be 
> properly handled in order to detect whether the transaction is finished, and 
> to roll it back if it is abandoned. Abandoned means that the transaction is in 
> the pending state while its tx coordinator is dead.
> h3. Definition of Done
>  * If the transaction state is either finished or aborted, then a cleanup 
> request is sent in a common durable manner to the partition that has initiated 
> recovery.
>  * If the transaction state is pending, then the transaction should be rolled 
> back, meaning that the state is changed to aborted and a corresponding cleanup 
> request is sent in a common durable manner to the partition that has initiated 
> recovery.
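
As an editorial illustration of the quoted Definition of Done (all names below are 
made up and this is not the actual implementation):
{code:java}
/** Hypothetical sketch of handling an initiate-recovery request for a possibly abandoned tx. */
class RecoveryHandlingSketch {
    enum TxState { PENDING, FINISHED, ABORTED }

    interface TxManager {
        TxState stateOf(Object txId);

        void markAborted(Object txId);

        /** Durably sends a cleanup request to the partition that initiated recovery. */
        void sendCleanup(Object txId, Object initiatorPartition);
    }

    static void onInitiateRecovery(TxManager txManager, Object txId, Object initiatorPartition) {
        TxState state = txManager.stateOf(txId);
        if (state == TxState.PENDING) {
            // The coordinator is dead, so the transaction is abandoned: roll it back first.
            txManager.markAborted(txId);
        }
        // In all cases a cleanup request goes back to the initiating partition.
        txManager.sendCleanup(txId, initiatorPartition);
    }
}
{code}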



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21108) NPE in ReplicaManager

2023-12-18 Thread Denis Chudov (Jira)
Denis Chudov created IGNITE-21108:
-

 Summary: NPE in ReplicaManager
 Key: IGNITE-21108
 URL: https://issues.apache.org/jira/browse/IGNITE-21108
 Project: Ignite
  Issue Type: Improvement
Reporter: Denis Chudov


java.lang.NullPointerException: null
  at 
org.apache.ignite.internal.replicator.ReplicaManager.onReplicaMessageReceived(ReplicaManager.java:291)
 ~[ignite-replicator-3.0.0-SNAPSHOT.jar:?]
  at 
org.apache.ignite.network.DefaultMessagingService.onMessage(DefaultMessagingService.java:372)
 ~[ignite-network-3.0.0-SNAPSHOT.jar:?]
  at 
org.apache.ignite.network.DefaultMessagingService.lambda$onMessage$3(DefaultMessagingService.java:332)
 ~[ignite-network-3.0.0-SNAPSHOT.jar:?]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21108) NPE in ReplicaManager

2023-12-18 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-21108:
--
Description: 
{code:java}
java.lang.NullPointerException: null
at 
org.apache.ignite.internal.replicator.ReplicaManager.onReplicaMessageReceived(ReplicaManager.java:291)
 ~[ignite-replicator-3.0.0-SNAPSHOT.jar:?]
at 
org.apache.ignite.network.DefaultMessagingService.onMessage(DefaultMessagingService.java:372)
 ~[ignite-network-3.0.0-SNAPSHOT.jar:?]
at 
org.apache.ignite.network.DefaultMessagingService.lambda$onMessage$3(DefaultMessagingService.java:332)
 ~[ignite-network-3.0.0-SNAPSHOT.jar:?]{code}

  was:
java.lang.NullPointerException: null
  at 
org.apache.ignite.internal.replicator.ReplicaManager.onReplicaMessageReceived(ReplicaManager.java:291)
 ~[ignite-replicator-3.0.0-SNAPSHOT.jar:?]
  at 
org.apache.ignite.network.DefaultMessagingService.onMessage(DefaultMessagingService.java:372)
 ~[ignite-network-3.0.0-SNAPSHOT.jar:?]
  at 
org.apache.ignite.network.DefaultMessagingService.lambda$onMessage$3(DefaultMessagingService.java:332)
 ~[ignite-network-3.0.0-SNAPSHOT.jar:?]


> NPE in ReplicaManager
> -
>
> Key: IGNITE-21108
> URL: https://issues.apache.org/jira/browse/IGNITE-21108
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> {code:java}
> java.lang.NullPointerException: null
> at 
> org.apache.ignite.internal.replicator.ReplicaManager.onReplicaMessageReceived(ReplicaManager.java:291)
>  ~[ignite-replicator-3.0.0-SNAPSHOT.jar:?]
> at 
> org.apache.ignite.network.DefaultMessagingService.onMessage(DefaultMessagingService.java:372)
>  ~[ignite-network-3.0.0-SNAPSHOT.jar:?]
> at 
> org.apache.ignite.network.DefaultMessagingService.lambda$onMessage$3(DefaultMessagingService.java:332)
>  ~[ignite-network-3.0.0-SNAPSHOT.jar:?]{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21108) NPE in ReplicaManager

2023-12-18 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov reassigned IGNITE-21108:
-

Assignee: Denis Chudov

> NPE in ReplicaManager
> -
>
> Key: IGNITE-21108
> URL: https://issues.apache.org/jira/browse/IGNITE-21108
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> java.lang.NullPointerException: null
>   at 
> org.apache.ignite.internal.replicator.ReplicaManager.onReplicaMessageReceived(ReplicaManager.java:291)
>  ~[ignite-replicator-3.0.0-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.network.DefaultMessagingService.onMessage(DefaultMessagingService.java:372)
>  ~[ignite-network-3.0.0-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.network.DefaultMessagingService.lambda$onMessage$3(DefaultMessagingService.java:332)
>  ~[ignite-network-3.0.0-SNAPSHOT.jar:?]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21040) Placement driver logging enhancement

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21040:
-
Fix Version/s: 3.0.0-beta2

> Placement driver logging enhancement
> 
>
> Key: IGNITE-21040
> URL: https://issues.apache.org/jira/browse/IGNITE-21040
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> h3. Motivation
>  # The lease grant request has a field {{{}leaseExpirationTime{}}}, but the 
> field actually means the timestamp until which we are waiting for a response 
> to this message.
>  # For some reason (external to the placement driver), metastorage event 
> handling can become slower and slower. As a result, we may not receive the 
> event about the lease being prolonged for a given period while the lease 
> interval is still valid.
> h3. Definition of done
>  # Remove {{leaseExpirationTime}} from the log message on the replica side on 
> lease acceptance (see {{Replica#acceptLease}}).
>  # Add a {{toString}} method to {{{}Leases{}}}.
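> 
> A minimal sketch of what such a {{toString}} could look like, assuming 
> {{Leases}} essentially wraps a map from replication group id to a lease 
> (the field and type names here are made up for the example):
> {code:java}
> import java.util.Map;
> import java.util.stream.Collectors;
> 
> /** Illustrative only: a possible toString for a Leases-like holder. */
> class LeasesToStringExample {
>     private final Map<String, String> leaseByGroupId; // groupId -> lease summary
> 
>     LeasesToStringExample(Map<String, String> leaseByGroupId) {
>         this.leaseByGroupId = Map.copyOf(leaseByGroupId);
>     }
> 
>     @Override
>     public String toString() {
>         // One compact entry per replication group keeps log lines readable.
>         return leaseByGroupId.entrySet().stream()
>                 .map(e -> e.getKey() + "=" + e.getValue())
>                 .collect(Collectors.joining(", ", "Leases [", "]"));
>     }
> }{code}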



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21040) Placement driver logging enhancement

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21040:
-
Reviewer: Alexander Lapin

> Placement driver logging enhancement
> 
>
> Key: IGNITE-21040
> URL: https://issues.apache.org/jira/browse/IGNITE-21040
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> h3. Motivation
>  # The lease grant request has a field {{{}leaseExpirationTime{}}}, but the 
> field actually means the timestamp until which we are waiting for a response 
> to this message.
>  # For some reason (external to the placement driver), metastorage event 
> handling can become slower and slower. As a result, we may not receive the 
> event about the lease being prolonged for a given period while the lease 
> interval is still valid.
> h3. Definition of done
>  # Remove {{leaseExpirationTime}} from the log message on the replica side on 
> lease acceptance (see {{Replica#acceptLease}}).
>  # Add a {{toString}} method to {{{}Leases{}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21040) Placement driver logging enhancement

2023-12-18 Thread Alexander Lapin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798265#comment-17798265
 ] 

Alexander Lapin commented on IGNITE-21040:
--

[~Denis Chudov] LGTM!

> Placement driver logging enhancement
> 
>
> Key: IGNITE-21040
> URL: https://issues.apache.org/jira/browse/IGNITE-21040
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> h3. Motivation
>  # The lease grant request has a field {{{}leaseExpirationTime{}}}, but the 
> field actually means the timestamp until which we are waiting for a response 
> to this message.
>  # For some reason (external to the placement driver), metastorage event 
> handling can become slower and slower. As a result, we may not receive the 
> event about the lease being prolonged for a given period while the lease 
> interval is still valid.
> h3. Definition of done
>  # Remove {{leaseExpirationTime}} from the log message on the replica side on 
> lease acceptance (see {{Replica#acceptLease}}).
>  # Add a {{toString}} method to {{{}Leases{}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20916) Remove API future related code from TableManager

2023-12-18 Thread Mirza Aliev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798260#comment-17798260
 ] 

Mirza Aliev commented on IGNITE-20916:
--

Looks fine, thanks [~Denis Chudov] 

> Remove API future related code from TableManager
> 
>
> Key: IGNITE-20916
> URL: https://issues.apache.org/jira/browse/IGNITE-20916
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Chudov
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After some changes, the map {{TableManager#tableCreateFuts}} is used in a 
> single place only: {{TableManager#completeApiCreateFuture}}, which makes no 
> sense. It should be removed along with 
> {{TableManager#completeApiCreateFuture}}.
> -Basically, there should be a rework of partition start and moving 
> responsibility for partitions from tables to zones, and the table start 
> process will have to look completely different and it will be split between 
> different meta storage revision (unlike now: everything related to the table 
> start happens within single revision updates).-
> -For example:-
>  * -some distribution zone "zone0" is created on meta storage revision 
> {_}5{_};-
>  * -zone creation process writes assignment changes to meta storage with 
> revision {_}5+x{_}, so the zone partitions will be started on revision _5+x_-
>  * -some table "table0" is created within zone0 on revision {_}y{_}. This 
> revision y can be either less or greater than _5+x_ (but still greater than 5 
> and never equal to {_}5+x{_}). API future should wait for completion of all 
> listeners for the revision _max(5+x, y).-
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20916) Remove API future related code from TableManager

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-20916:
-
Reviewer: Mirza Aliev

> Remove API future related code from TableManager
> 
>
> Key: IGNITE-20916
> URL: https://issues.apache.org/jira/browse/IGNITE-20916
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Chudov
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After some changes, the map {{TableManager#tableCreateFuts}} is used in a 
> single place only: {{TableManager#completeApiCreateFuture}}, which makes no 
> sense. It should be removed along with 
> {{TableManager#completeApiCreateFuture}}.
> -Basically, there should be a rework of partition start and moving 
> responsibility for partitions from tables to zones, and the table start 
> process will have to look completely different and it will be split between 
> different meta storage revision (unlike now: everything related to the table 
> start happens within single revision updates).-
> -For example:-
>  * -some distribution zone "zone0" is created on meta storage revision 
> {_}5{_};-
>  * -zone creation process writes assignment changes to meta storage with 
> revision {_}5+x{_}, so the zone partitions will be started on revision _5+x_-
>  * -some table "table0" is created within zone0 on revision {_}y{_}. This 
> revision y can be either less or greater than _5+x_ (but still greater than 5 
> and never equal to {_}5+x{_}). API future should wait for completion of all 
> listeners for the revision _max(5+x, y).-
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21107) Reduce amount of lease update requests

2023-12-18 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-21107:


 Summary: Reduce amount of lease update requests
 Key: IGNITE-21107
 URL: https://issues.apache.org/jira/browse/IGNITE-21107
 Project: Ignite
  Issue Type: New Feature
Reporter: Alexander Lapin






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21107) Reduce amount of lease update requests

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin reassigned IGNITE-21107:


Assignee: Alexander Lapin

> Reduce amount of lease update requests
> --
>
> Key: IGNITE-21107
> URL: https://issues.apache.org/jira/browse/IGNITE-21107
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21107) Reduce amount of lease update requests

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21107:
-
Labels: ignite-3  (was: )

> Reduce amount of lease update requests
> --
>
> Key: IGNITE-21107
> URL: https://issues.apache.org/jira/browse/IGNITE-21107
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21107) Reduce amount of lease update requests

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21107:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Reduce amount of lease update requests
> --
>
> Key: IGNITE-21107
> URL: https://issues.apache.org/jira/browse/IGNITE-21107
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Alexander Lapin
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21106) Remove obsolete leases

2023-12-18 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798244#comment-17798244
 ] 

Vladislav Pyatkov commented on IGNITE-21106:


LGTM

> Remove obsolete leases
> --
>
> Key: IGNITE-21106
> URL: https://issues.apache.org/jira/browse/IGNITE-21106
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> h3. Motivation
> It turned out that obsolete leases (expired leases for removed replication 
> groups) are never dropped. Besides being a leak and thus generally incorrect, 
> this also affects some tests such as ItSqlLogicTest.
> h3. Definition of Done
> Obsolete leases are removed from the meta storage and thus no longer 
> processed.
> h3. Implementation Notes
> I believe that
> {code:java}
> // Remove all expired leases that are no longer present in assignments.
> renewedLeases.entrySet().removeIf(e -> 
> e.getValue().getExpirationTime().before(now)
>   && !currentAssignmentsReplicationGroupIds.contains(e.getKey()));{code}
> in 
> {code:java}
> org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal{code}
> should do the trick.
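> 
> A self-contained illustration of that filtering over plain collections 
> ({{FakeLease}} and the timestamps are made up for the example; this is not 
> the placement driver code):
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> import java.util.Set;
> 
> public class RemoveObsoleteLeasesExample {
>     /** Simplified stand-in for a lease: just an expiration time in millis. */
>     record FakeLease(long expirationTime) {}
> 
>     public static void main(String[] args) {
>         long now = 1_000;
> 
>         Map<String, FakeLease> renewedLeases = new HashMap<>();
>         renewedLeases.put("group-1", new FakeLease(500));   // expired, group removed
>         renewedLeases.put("group-2", new FakeLease(500));   // expired, group still assigned
>         renewedLeases.put("group-3", new FakeLease(2_000)); // not expired yet
> 
>         Set<String> currentAssignmentsReplicationGroupIds = Set.of("group-2", "group-3");
> 
>         // Same shape as the snippet above: drop leases that are both expired
>         // and no longer present in the current assignments.
>         renewedLeases.entrySet().removeIf(e ->
>                 e.getValue().expirationTime() < now
>                         && !currentAssignmentsReplicationGroupIds.contains(e.getKey()));
> 
>         System.out.println(renewedLeases.keySet()); // [group-2, group-3], order may vary
>     }
> }{code}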



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21106) Remove obsolete leases

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21106:
-
Reviewer: Vladislav Pyatkov

> Remove obsolete leases
> --
>
> Key: IGNITE-21106
> URL: https://issues.apache.org/jira/browse/IGNITE-21106
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> h3. Motivation
> It turned out that obsolete leases (expired leases for removed replication 
> groups) are never dropped. Besides being a leak and thus generally incorrect, 
> this also affects some tests such as ItSqlLogicTest.
> h3. Definition of Done
> Obsolete leases are removed from the meta storage and thus no longer 
> processed.
> h3. Implementation Notes
> I believe that
> {code:java}
> // Remove all expired leases that are no longer present in assignments.
> renewedLeases.entrySet().removeIf(e -> 
> e.getValue().getExpirationTime().before(now)
>   && !currentAssignmentsReplicationGroupIds.contains(e.getKey()));{code}
> in 
> {code:java}
> org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal{code}
> should do the trick.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21106) Remove obsolete leases

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21106:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Remove obsolete leases
> --
>
> Key: IGNITE-21106
> URL: https://issues.apache.org/jira/browse/IGNITE-21106
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> It turned out that obsolete leases (expired leases for removed replication 
> groups) are never dropped. Besides being a leak and thus generally incorrect, 
> this also affects some tests such as ItSqlLogicTest.
> h3. Definition of Done
> Obsolete leases are removed from the meta storage and thus no longer 
> processed.
> h3. Implementation Notes
> I believe that
> {code:java}
> // Remove all expired leases that are no longer present in assignments.
> renewedLeases.entrySet().removeIf(e -> 
> e.getValue().getExpirationTime().before(now)
>   && !currentAssignmentsReplicationGroupIds.contains(e.getKey()));{code}
> in 
> {code:java}
> org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal{code}
> should do the trick.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21106) Remove obsolete leases

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21106:
-
Description: 
h3. Motivation

It turned out that obsolete leases (expired leases for removed replication 
groups) are never dropped. Besides being a leak and thus generally incorrect, 
this also affects some tests such as ItSqlLogicTest.
h3. Definition of Done

Obsolete leases are removed from the meta storage and thus no longer processed.
h3. Implementation Notes

I believe that
{code:java}
// Remove all expired leases that are no longer present in assignments.
renewedLeases.entrySet().removeIf(e -> 
e.getValue().getExpirationTime().before(now)
  && !currentAssignmentsReplicationGroupIds.contains(e.getKey()));{code}
in 
{code:java}
org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal{code}
should do the trick.

  was:
h3. Motivation

It turned out that obsolete leases (expired leases for removed replication 
groups) are never dropped. Besides being a leak and thus generally incorrect, 
this also affects some tests such as ItSqlLogicTest.
h3. Definition of Done

Obsolete leases are removed from the meta storage and thus no longer processed.
h3. Implementation Notes

I believe that
{code:java}
// Remove all expired leases that are no longer present in assignments.
            renewedLeases.entrySet().removeIf(e -> 
e.getValue().getExpirationTime().before(now)
                    && 
!currentAssignmentsReplicationGroupIds.contains(e.getKey()));{code}
in 
{code:java}
org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal{code}
should do the trick.


> Remove obsolete leases
> --
>
> Key: IGNITE-21106
> URL: https://issues.apache.org/jira/browse/IGNITE-21106
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> It turned out that obsolete leases (expired leases for removed replication 
> groups) are never dropped. Besides being a leak and thus generally incorrect, 
> this also affects some tests such as ItSqlLogicTest.
> h3. Definition of Done
> Obsolete leases are removed from the meta storage and thus no longer 
> processed.
> h3. Implementation Notes
> I believe that
> {code:java}
> // Remove all expired leases that are no longer present in assignments.
> renewedLeases.entrySet().removeIf(e -> 
> e.getValue().getExpirationTime().before(now)
>   && !currentAssignmentsReplicationGroupIds.contains(e.getKey()));{code}
> in 
> {code:java}
> org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal{code}
> should do the trick.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21106) Remove obsolete leases

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin reassigned IGNITE-21106:


Assignee: Alexander Lapin

> Remove obsolete leases
> --
>
> Key: IGNITE-21106
> URL: https://issues.apache.org/jira/browse/IGNITE-21106
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> It turned out that obsolete leases (expired leases for removed replication 
> groups) are never dropped. Besides being a leak and thus generally incorrect, 
> this also affects some tests such as ItSqlLogicTest.
> h3. Definition of Done
> Obsolete leases are removed from the meta storage and thus no longer 
> processed.
> h3. Implementation Notes
> I believe that
> {code:java}
> // Remove all expired leases that are no longer present in assignments.
> renewedLeases.entrySet().removeIf(e -> 
> e.getValue().getExpirationTime().before(now)
>   && !currentAssignmentsReplicationGroupIds.contains(e.getKey()));{code}
> in 
> {code:java}
> org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal{code}
> should do the trick.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21106) Remove obsolete leases

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21106:
-
Issue Type: Improvement  (was: Bug)

> Remove obsolete leases
> --
>
> Key: IGNITE-21106
> URL: https://issues.apache.org/jira/browse/IGNITE-21106
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> It turned out that obsolete leases (expired leases for removed replication 
> groups) are never dropped. Besides being a leak and thus generally incorrect, 
> this also affects some tests such as ItSqlLogicTest.
> h3. Definition of Done
> Obsolete leases are removed from the meta storage and thus no longer 
> processed.
> h3. Implementation Notes
> I believe that
> {code:java}
> // Remove all expired leases that are no longer present in assignments.
> renewedLeases.entrySet().removeIf(e -> 
> e.getValue().getExpirationTime().before(now)
>   && !currentAssignmentsReplicationGroupIds.contains(e.getKey()));{code}
> in 
> {code:java}
> org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal{code}
> should do the trick.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21106) Remove obsolete leases

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21106:
-
Description: 
h3. Motivation

It turned out that obsolete leases (expired leases for removed replication 
groups) are never dropped. Besides being a leak and thus generally incorrect, 
this also affects some tests such as ItSqlLogicTest.
h3. Definition of Done

Obsolete leases are removed from the meta storage and thus no longer processed.
h3. Implementation Notes

I believe that
{code:java}
// Remove all expired leases that are no longer present in assignments.
            renewedLeases.entrySet().removeIf(e -> 
e.getValue().getExpirationTime().before(now)
                    && 
!currentAssignmentsReplicationGroupIds.contains(e.getKey()));{code}
in 
{code:java}
org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal{code}
should do the trick.

  was:
h3. Motivation

It turned out that obsolete leases (expired leases for removed replication 
groups) are never dropped. Besides being a leak and thus generally incorrect, 
this also affects some tests such as ItSqlLogicTest.
h3. Definition of Done

Obsolete leases are removed from the meta storage and thus no longer processed.
h3. Implementation Notes

I believe that

```

            // Remove all expired leases that are no longer present in 
assignments.
            renewedLeases.entrySet().removeIf(e -> 
e.getValue().getExpirationTime().before(now)
                    && 
!currentAssignmentsReplicationGroupIds.contains(e.getKey()));

```

in 
`org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal`

should do the trick.


> Remove obsolete leases
> --
>
> Key: IGNITE-21106
> URL: https://issues.apache.org/jira/browse/IGNITE-21106
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> It turned out that obsolete leases (expired leases for removed replication 
> groups) are never dropped. Besides being a leak and thus generally incorrect, 
> this also affects some tests such as ItSqlLogicTest.
> h3. Definition of Done
> Obsolete leases are removed from the meta storage and thus no longer 
> processed.
> h3. Implementation Notes
> I believe that
> {code:java}
> // Remove all expired leases that are no longer present in assignments.
>             renewedLeases.entrySet().removeIf(e -> 
> e.getValue().getExpirationTime().before(now)
>                     && 
> !currentAssignmentsReplicationGroupIds.contains(e.getKey()));{code}
> in 
> {code:java}
> org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal{code}
> should do the trick.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21106) Remove obsolete leases

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21106:
-
Labels: ignite-3  (was: )

> Remove obsolete leases
> --
>
> Key: IGNITE-21106
> URL: https://issues.apache.org/jira/browse/IGNITE-21106
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> It turned out that obsolete leases (expired leases for removed replication 
> groups) are never dropped. Besides being a leak and thus generally incorrect, 
> this also affects some tests such as ItSqlLogicTest.
> h3. Definition of Done
> Obsolete leases are removed from the meta storage and thus no longer 
> processed.
> h3. Implementation Notes
> I believe that
> {code:java}
> // Remove all expired leases that are no longer present in assignments.
> renewedLeases.entrySet().removeIf(e -> 
> e.getValue().getExpirationTime().before(now)
>   && !currentAssignmentsReplicationGroupIds.contains(e.getKey()));{code}
> in 
> {code:java}
> org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal{code}
> should do the trick.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21106) Remove obsolete leases

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21106:
-
Description: 
h3. Motivation

It turned out that obsolete leases (expired leases for removed replication 
groups) are never dropped. Besides being a leak and thus generally incorrect, 
this also affects some tests such as ItSqlLogicTest.
h3. Definition of Done

Obsolete leases are removed from the meta storage and thus no longer processed.
h3. Implementation Notes

I believe that

```

            // Remove all expired leases that are no longer present in 
assignments.
            renewedLeases.entrySet().removeIf(e -> 
e.getValue().getExpirationTime().before(now)
                    && 
!currentAssignmentsReplicationGroupIds.contains(e.getKey()));

```

in 
`org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal`

should do the trick.

> Remove obsolete leases
> --
>
> Key: IGNITE-21106
> URL: https://issues.apache.org/jira/browse/IGNITE-21106
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> It turned out that obsolete leases (expired leases for removed replication 
> groups) are never dropped. Besides being a leak and thus generally incorrect, 
> this also affects some tests such as ItSqlLogicTest.
> h3. Definition of Done
> Obsolete leases are removed from the meta storage and thus no longer 
> processed.
> h3. Implementation Notes
> I believe that
> ```
>             // Remove all expired leases that are no longer present in 
> assignments.
>             renewedLeases.entrySet().removeIf(e -> 
> e.getValue().getExpirationTime().before(now)
>                     && 
> !currentAssignmentsReplicationGroupIds.contains(e.getKey()));
> ```
> in 
> `org.apache.ignite.internal.placementdriver.LeaseUpdater.Updater#updateLeaseBatchInternal`
> should do the trick.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21106) Remove obsolete leases

2023-12-18 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21106:
-
Issue Type: Bug  (was: New Feature)

> Remove obsolete leases
> --
>
> Key: IGNITE-21106
> URL: https://issues.apache.org/jira/browse/IGNITE-21106
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21106) Remove obsolete leases

2023-12-18 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-21106:


 Summary: Remove obsolete leases
 Key: IGNITE-21106
 URL: https://issues.apache.org/jira/browse/IGNITE-21106
 Project: Ignite
  Issue Type: New Feature
Reporter: Alexander Lapin






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21040) Placement driver logging enhancement

2023-12-18 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov reassigned IGNITE-21040:
-

Assignee: Denis Chudov

> Placement driver logging enhancement
> 
>
> Key: IGNITE-21040
> URL: https://issues.apache.org/jira/browse/IGNITE-21040
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> h3. Motivation
>  # The lease grant request has a field {{{}leaseExpirationTime{}}}, but the 
> field actually means the timestamp until which we are waiting for a response 
> to this message.
>  # For some reason (external to the placement driver), metastorage event 
> handling can become slower and slower. As a result, we may not receive the 
> event about the lease being prolonged for a given period while the lease 
> interval is still valid.
> h3. Definition of done
>  # Remove {{leaseExpirationTime}} from the log message on the replica side on 
> lease acceptance (see {{Replica#acceptLease}}).
>  # Add a {{toString}} method to {{{}Leases{}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20916) Remove API future related code from TableManager

2023-12-18 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov reassigned IGNITE-20916:
-

Assignee: Denis Chudov

> Remove API future related code from TableManager
> 
>
> Key: IGNITE-20916
> URL: https://issues.apache.org/jira/browse/IGNITE-20916
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Chudov
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> After some changes, the map {{TableManager#tableCreateFuts}} is used in a 
> single place only: {{TableManager#completeApiCreateFuture}}, which makes no 
> sense. It should be removed along with 
> {{TableManager#completeApiCreateFuture}}.
> -Basically, there should be a rework of partition start and moving 
> responsibility for partitions from tables to zones, and the table start 
> process will have to look completely different and it will be split between 
> different meta storage revision (unlike now: everything related to the table 
> start happens within single revision updates).-
> -For example:-
>  * -some distribution zone "zone0" is created on meta storage revision 
> {_}5{_};-
>  * -zone creation process writes assignment changes to meta storage with 
> revision {_}5+x{_}, so the zone partitions will be started on revision _5+x_-
>  * -some table "table0" is created within zone0 on revision {_}y{_}. This 
> revision y can be either less or greater than _5+x_ (but still greater than 5 
> and never equal to {_}5+x{_}). API future should wait for completion of all 
> listeners for the revision _max(5+x, y).-
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21051) Fix javadocs for IndexQuery

2023-12-18 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-21051:

Fix Version/s: 2.17

> Fix javadocs for IndexQuery
> ---
>
> Key: IGNITE-21051
> URL: https://issues.apache.org/jira/browse/IGNITE-21051
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Timonin
>Assignee: Oleg Valuyskiy
>Priority: Major
>  Labels: ise, newbie
> Fix For: 2.17
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> It's required to fix the javadoc formatting in the `IndexQuery` class. 
> Currently it renders the algorithm list as a single line. The "ul" and "li" 
> tags should be used for correct rendering.
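> 
> A minimal example of the intended structure (the item texts below are 
> placeholders, not the actual IndexQuery documentation wording):
> {code:java}
> /**
>  * Shows the {@code <ul>}/{@code <li>} structure that renders as a proper list:
>  * <ul>
>  *     <li>First rule of the matching algorithm.</li>
>  *     <li>Second rule of the matching algorithm.</li>
>  * </ul>
>  */
> public class IndexQueryJavadocFormatExample {
> }{code}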



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21105) Add histogram bounds to the metric command output

2023-12-18 Thread Anastasia Iakimova (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Iakimova updated IGNITE-21105:

Labels: ise  (was: )

> Add histogram bounds to the metric command output
> -
>
> Key: IGNITE-21105
> URL: https://issues.apache.org/jira/browse/IGNITE-21105
> Project: Ignite
>  Issue Type: Task
>Reporter: Nikita Amelchev
>Assignee: Anastasia Iakimova
>Priority: Major
>  Labels: ise
>
> The metric command outputs only the histogram values. But the bounds can be 
> configured to arbitrary values. I suggest outputting the bounds as the JMX 
> and OpenCensus exporters do.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20850) Worker node shutdown failover

2023-12-18 Thread Aleksandr (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr reassigned IGNITE-20850:
--

Assignee: Aleksandr

> Worker node shutdown failover
> -
>
> Key: IGNITE-20850
> URL: https://issues.apache.org/jira/browse/IGNITE-20850
> Project: Ignite
>  Issue Type: Improvement
>  Components: compute
>Reporter: Mikhail Pochatkin
>Assignee: Aleksandr
>Priority: Major
>  Labels: ignite-3
>
> When a worker node shuts down, the job execution on it stops and needs to be 
> restarted. The coordinator will see that the worker node has gone down, and 
> all the tasks that this coordinator sent to it for execution must be 
> redistributed to other nodes. In this context, it does not matter what state 
> the tasks were in (queued or already executing): we do not offer a safepoint 
> mechanism, but all job states can be written to the cache, so when a task is 
> relaunched on another worker node, it will be able to read from the cache the 
> state that the job wrote down last time.
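> 
> A rough sketch of that failover flow with purely hypothetical types (this is 
> not the actual compute API):
> {code:java}
> import java.util.Map;
> import java.util.Queue;
> import java.util.UUID;
> 
> /** Illustrative only: reschedules jobs of a failed worker node. */
> final class ComputeFailoverSketch {
>     /** jobId -> last state the job persisted to the cache. */
>     private final Map<UUID, byte[]> jobStateCache;
> 
>     /** Jobs waiting to be (re)scheduled on some worker. */
>     private final Queue<UUID> pendingJobs;
> 
>     ComputeFailoverSketch(Map<UUID, byte[]> jobStateCache, Queue<UUID> pendingJobs) {
>         this.jobStateCache = jobStateCache;
>         this.pendingJobs = pendingJobs;
>     }
> 
>     /** Called on the coordinator when it detects that a worker node left. */
>     void onWorkerLeft(Iterable<UUID> jobsOnFailedWorker) {
>         for (UUID jobId : jobsOnFailedWorker) {
>             // Queued or already executing makes no difference: the job is
>             // simply rescheduled; on the new worker it resumes from the
>             // state it last wrote to the cache (if any).
>             pendingJobs.add(jobId);
>         }
>     }
> 
>     /** Called on the new worker before (re)starting the job. */
>     byte[] lastSavedState(UUID jobId) {
>         return jobStateCache.get(jobId);
>     }
> }{code}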



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20598) Java Thin 3.0: incorrect error for query on closed transaction through Client API

2023-12-18 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20598:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Java Thin 3.0: incorrect error for query on closed transaction through Client 
> API
> -
>
> Key: IGNITE-20598
> URL: https://issues.apache.org/jira/browse/IGNITE-20598
> Project: Ignite
>  Issue Type: Bug
>  Components: sql, thin client
>Reporter: Yury Gerzhedovich
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> When an already closed transaction is used for query execution, we get an 
> SqlException with the code Transactions.TX_FAILED_READ_WRITE_OPERATION_ERR. 
> But we get this exception only for the embedded API; for the client API we 
> get an IgniteException with the INTERNAL_ERROR code.
> Let's investigate and fix the issue.
> A test with a reproducer is 
> org.apache.ignite.internal.sql.api.ItSqlApiBaseTest#checkTransactionsWithDml; 
> also please find the mention of the ticket there.
> {code:java}
> Caused by: org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:1615f3e8-e576-411e-b2ce-19ae7edd0f7f 
> org.apache.ignite.internal.lang.IgniteInternalException: Failed to find 
> resource with id: 1
>   at 
> java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:772)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:706)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:543)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCauseInternal(ExceptionUtils.java:641)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:494)
>   at 
> org.apache.ignite.internal.sql.AbstractSession.execute(AbstractSession.java:63)
>   at 
> org.apache.ignite.internal.sql.api.ItSqlSynchronousApiTest.checkDml(ItSqlSynchronousApiTest.java:78)
>   at 
> org.apache.ignite.internal.sql.api.ItSqlApiBaseTest.lambda$checkTransactionsWithDml$1(ItSqlApiBaseTest.java:285)
>   at org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:53)
>   ... 75 more
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:1615f3e8-e576-411e-b2ce-19ae7edd0f7f 
> org.apache.ignite.internal.lang.IgniteInternalException: Failed to find 
> resource with id: 1
>   at 
> java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:426)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.onMessage(TcpClientChannel.java:231)
>   at 
> org.apache.ignite.internal.client.io.netty.NettyClientConnection.onMessage(NettyClientConnection.java:111)
>   at 
> org.apache.ignite.internal.client.io.netty.NettyClientMessageHandler.channelRead(NettyClientMessageHandler.java:33)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>   at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   at 
> 

[jira] [Assigned] (IGNITE-21105) Add histogram bounds to the metric command output

2023-12-18 Thread Anastasia Iakimova (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Iakimova reassigned IGNITE-21105:
---

Assignee: Anastasia Iakimova  (was: Nikita Amelchev)

> Add histogram bounds to the metric command output
> -
>
> Key: IGNITE-21105
> URL: https://issues.apache.org/jira/browse/IGNITE-21105
> Project: Ignite
>  Issue Type: Task
>Reporter: Nikita Amelchev
>Assignee: Anastasia Iakimova
>Priority: Major
>
> The metric command outputs only the histogram values. But the bounds can be 
> configured to arbitrary values. I suggest outputting the bounds as the JMX 
> and OpenCensus exporters do.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20742) Thin client cannot find the resource when it is trying to execute an operation on the finalized transaction

2023-12-18 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20742:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Thin client cannot find the resource when it is trying to execute an 
> operation on the finalized transaction
> ---
>
> Key: IGNITE-20742
> URL: https://issues.apache.org/jira/browse/IGNITE-20742
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Reporter: Vladislav Pyatkov
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> The exception is thrown in tests marked with this ticket. Although we cannot 
> execute an operation on an aborted transaction, we should provide a clear 
> message and a proper exception type:
> {noformat}
> Caused by: org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:ee36d822-9c0d-48ab-8485-107db3656f23 
> org.apache.ignite.internal.lang.IgniteInternalException: Failed to find 
> resource with id: 1
>   at 
> java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:754)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:688)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:525)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCauseInternal(ExceptionUtils.java:623)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:476)
>   at 
> org.apache.ignite.internal.sql.AbstractSession.execute(AbstractSession.java:63)
>   at 
> org.apache.ignite.internal.sql.api.ItSqlApiBaseTest.lambda$testLockIsNotReleasedAfterTxRollback$22(ItSqlApiBaseTest.java:734)
>   at org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:53)
>   ... 73 more
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:ee36d822-9c0d-48ab-8485-107db3656f23 
> org.apache.ignite.internal.lang.IgniteInternalException: Failed to find 
> resource with id: 1
>   at 
> java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:426)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.onMessage(TcpClientChannel.java:231)
>   at 
> org.apache.ignite.internal.client.io.netty.NettyClientConnection.onMessage(NettyClientConnection.java:111)
>   at 
> org.apache.ignite.internal.client.io.netty.NettyClientMessageHandler.channelRead(NettyClientMessageHandler.java:33)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>   at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   at 
> 

[jira] [Updated] (IGNITE-20742) Thin client cannot find the resource when it is trying to execute an operation on the finalized transaction

2023-12-18 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20742:

Fix Version/s: 3.0.0-beta2

> Thin client cannot find the resource when it is trying to execute an 
> operation on the finalized transaction
> ---
>
> Key: IGNITE-20742
> URL: https://issues.apache.org/jira/browse/IGNITE-20742
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Reporter: Vladislav Pyatkov
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> The exception is thrown in tests marked with this ticket. Although we cannot 
> execute an operation on an aborted transaction, we should provide a clear 
> message and a proper exception type:
> {noformat}
> Caused by: org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:ee36d822-9c0d-48ab-8485-107db3656f23 
> org.apache.ignite.internal.lang.IgniteInternalException: Failed to find 
> resource with id: 1
>   at 
> java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:754)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:688)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:525)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCauseInternal(ExceptionUtils.java:623)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:476)
>   at 
> org.apache.ignite.internal.sql.AbstractSession.execute(AbstractSession.java:63)
>   at 
> org.apache.ignite.internal.sql.api.ItSqlApiBaseTest.lambda$testLockIsNotReleasedAfterTxRollback$22(ItSqlApiBaseTest.java:734)
>   at org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:53)
>   ... 73 more
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:ee36d822-9c0d-48ab-8485-107db3656f23 
> org.apache.ignite.internal.lang.IgniteInternalException: Failed to find 
> resource with id: 1
>   at 
> java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:426)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.onMessage(TcpClientChannel.java:231)
>   at 
> org.apache.ignite.internal.client.io.netty.NettyClientConnection.onMessage(NettyClientConnection.java:111)
>   at 
> org.apache.ignite.internal.client.io.netty.NettyClientMessageHandler.channelRead(NettyClientMessageHandler.java:33)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>   at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>   at 
> 

[jira] [Assigned] (IGNITE-20742) Thin client cannot find the resource when it is trying to execute an operation on the finalized transaction

2023-12-18 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn reassigned IGNITE-20742:
---

Assignee: Pavel Tupitsyn

> Thin client cannot find the resource when it is trying to execute an 
> operation on the finalized transaction
> ---
>
> Key: IGNITE-20742
> URL: https://issues.apache.org/jira/browse/IGNITE-20742
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>
> The exception is thrown in tests marked with this ticket. Although we cannot 
> execute an operation on an aborted transaction, we should provide a clear 
> message and a proper exception type:
> {noformat}
> Caused by: org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:ee36d822-9c0d-48ab-8485-107db3656f23 
> org.apache.ignite.internal.lang.IgniteInternalException: Failed to find 
> resource with id: 1
>   at 
> java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:754)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:688)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:525)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCauseInternal(ExceptionUtils.java:623)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:476)
>   at 
> org.apache.ignite.internal.sql.AbstractSession.execute(AbstractSession.java:63)
>   at 
> org.apache.ignite.internal.sql.api.ItSqlApiBaseTest.lambda$testLockIsNotReleasedAfterTxRollback$22(ItSqlApiBaseTest.java:734)
>   at org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:53)
>   ... 73 more
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:ee36d822-9c0d-48ab-8485-107db3656f23 
> org.apache.ignite.internal.lang.IgniteInternalException: Failed to find 
> resource with id: 1
>   at 
> java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:426)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.onMessage(TcpClientChannel.java:231)
>   at 
> org.apache.ignite.internal.client.io.netty.NettyClientConnection.onMessage(NettyClientConnection.java:111)
>   at 
> org.apache.ignite.internal.client.io.netty.NettyClientMessageHandler.channelRead(NettyClientMessageHandler.java:33)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>   at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>   at 
> 

[jira] [Updated] (IGNITE-20742) Thin client cannot find the resource when it is trying to execute an operation on the finalized transaction

2023-12-18 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-20742:

Component/s: thin client

> Thin client cannot find the resource when it is trying to execute an 
> operation on the finalized transaction
> ---
>
> Key: IGNITE-20742
> URL: https://issues.apache.org/jira/browse/IGNITE-20742
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Reporter: Vladislav Pyatkov
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>
> The exception is thrown in tests marked with this ticket. Although we cannot 
> execute an operation on an aborted transaction, we should provide a clear 
> message and a proper exception type:
> {noformat}
> Caused by: org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:ee36d822-9c0d-48ab-8485-107db3656f23 
> org.apache.ignite.internal.lang.IgniteInternalException: Failed to find 
> resource with id: 1
>   at 
> java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$1.copy(ExceptionUtils.java:754)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils$ExceptionFactory.createCopy(ExceptionUtils.java:688)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:525)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCauseInternal(ExceptionUtils.java:623)
>   at 
> org.apache.ignite.internal.util.ExceptionUtils.copyExceptionWithCause(ExceptionUtils.java:476)
>   at 
> org.apache.ignite.internal.sql.AbstractSession.execute(AbstractSession.java:63)
>   at 
> org.apache.ignite.internal.sql.api.ItSqlApiBaseTest.lambda$testLockIsNotReleasedAfterTxRollback$22(ItSqlApiBaseTest.java:734)
>   at org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:53)
>   ... 73 more
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
> TraceId:ee36d822-9c0d-48ab-8485-107db3656f23 
> org.apache.ignite.internal.lang.IgniteInternalException: Failed to find 
> resource with id: 1
>   at 
> java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.processNextMessage(TcpClientChannel.java:426)
>   at 
> org.apache.ignite.internal.client.TcpClientChannel.onMessage(TcpClientChannel.java:231)
>   at 
> org.apache.ignite.internal.client.io.netty.NettyClientConnection.onMessage(NettyClientConnection.java:111)
>   at 
> org.apache.ignite.internal.client.io.netty.NettyClientMessageHandler.channelRead(NettyClientMessageHandler.java:33)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>   at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>   at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>   at 
> 
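A sketch of the behavior the ticket asks for (the exception type and the message 
text are assumptions about what an "obvious" outcome could look like, not an 
agreed fix):
{code:java}
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.apache.ignite.sql.Session;
import org.apache.ignite.tx.Transaction;
import org.apache.ignite.tx.TransactionException;

class FinalizedTxExpectationSketch {
    /**
     * Hypothetical test expectation: an operation on an already finalized transaction
     * should fail with a clear, transaction-specific exception instead of
     * "Failed to find resource with id: N".
     */
    static void assertFinalizedTxIsRejected(Session session, Transaction rolledBackTx) {
        TransactionException ex = assertThrows(
                TransactionException.class,
                () -> session.execute(rolledBackTx, "SELECT 1"));

        assertTrue(ex.getMessage().contains("finished"),
                "Message should explain that the transaction is already finalized");
    }
}
{code}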

[jira] [Created] (IGNITE-21105) Add histogram bounds to the metric command output

2023-12-18 Thread Nikita Amelchev (Jira)
Nikita Amelchev created IGNITE-21105:


 Summary: Add histogram bounds to the metric command output
 Key: IGNITE-21105
 URL: https://issues.apache.org/jira/browse/IGNITE-21105
 Project: Ignite
  Issue Type: Task
Reporter: Nikita Amelchev
Assignee: Nikita Amelchev


The metric command outputs only the histogram values, while the bounds can be 
configured to arbitrary values. I suggest outputting the bounds as the JMX and 
OpenCensus exporters do.
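
For illustration, a minimal sketch of the proposed formatting (not the actual 
command code; it assumes the command handler has access to 
org.apache.ignite.spi.metric.HistogramMetric#bounds() and #value(), and roughly 
mirrors the range naming used by the JMX exporter):
{code:java}
import org.apache.ignite.spi.metric.HistogramMetric;

/** Sketch: renders a histogram as "from_to: count" lines, similar to the JMX/OpenCensus exporters. */
final class HistogramFormatter {
    static String format(HistogramMetric metric) {
        long[] bounds = metric.bounds();
        long[] counts = metric.value();

        StringBuilder sb = new StringBuilder();

        for (int i = 0; i < counts.length; i++) {
            // First bucket starts at 0, the last bucket is open-ended ("inf").
            String from = i == 0 ? "0" : String.valueOf(bounds[i - 1]);
            String to = i < bounds.length ? String.valueOf(bounds[i]) : "inf";

            sb.append(from).append('_').append(to).append(": ").append(counts[i]).append('\n');
        }

        return sb.toString();
    }
}
{code}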



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21104) Tests involving Storage functionality fail on Volatile Storage Engine

2023-12-18 Thread Sergey Chugunov (Jira)
Sergey Chugunov created IGNITE-21104:


 Summary: Tests involving Storage functionality fail on Volatile 
Storage Engine
 Key: IGNITE-21104
 URL: https://issues.apache.org/jira/browse/IGNITE-21104
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Reporter: Sergey Chugunov
 Fix For: 3.0.0-beta2


When IGNITE-21048 was implemented, two more PRs (one for [Volatile 
Storage|https://github.com/apache/ignite-3/pull/2953], one for 
[RocksDB-based|https://github.com/apache/ignite-3/pull/2952]) were opened to run 
the existing tests against different Storage Engines.

Both of them hung.

In the first run for Volatile Storage, at least two tests were identified that 
hang with an AssertionError:
 * ItBuildIndexTest;
 * ItInternalTableTest.

Both tests show this assertion in their logs:
{code:java}
    Caused by: java.lang.AssertionError
    at 
org.apache.ignite.internal.pagememory.tree.BplusTree$InitRoot.run(BplusTree.java:916)
 ~[main/:?]
    at 
org.apache.ignite.internal.pagememory.tree.BplusTree$InitRoot.run(BplusTree.java:896)
 ~[main/:?]
    at 
org.apache.ignite.internal.pagememory.util.PageHandler.writePage(PageHandler.java:298)
 ~[main/:?]
    at 
org.apache.ignite.internal.pagememory.datastructure.DataStructure.write(DataStructure.java:369)
 ~[main/:?]
    at 
org.apache.ignite.internal.pagememory.tree.BplusTree.initTree(BplusTree.java:1045)
 ~[main/:?]
    at 
org.apache.ignite.internal.storage.pagememory.mv.VersionChainTree.(VersionChainTree.java:76)
 ~[main/:?]
    at 
org.apache.ignite.internal.storage.pagememory.VolatilePageMemoryTableStorage.createVersionChainTree(VolatilePageMemoryTableStorage.java:156)
 ~[main/:?]
    at 
org.apache.ignite.internal.storage.pagememory.VolatilePageMemoryTableStorage.createMvPartitionStorage(VolatilePageMemoryTableStorage.java:72)
 ~[main/:?]
    at 
org.apache.ignite.internal.storage.pagememory.VolatilePageMemoryTableStorage.createMvPartitionStorage(VolatilePageMemoryTableStorage.java:40)
 ~[main/:?]
    at 
org.apache.ignite.internal.storage.pagememory.AbstractPageMemoryTableStorage.lambda$createMvPartition$4(AbstractPageMemoryTableStorage.java:164)
 ~[main/:?]
    at 
org.apache.ignite.internal.storage.util.MvPartitionStorages.lambda$create$1(MvPartitionStorages.java:121)
 ~[main/:?]
    at 
java.util.concurrent.CompletableFuture.uniApplyNow(CompletableFuture.java:680) 
~[?:?]
    ... 39 more {code}
This behavior can be reproduced locally with a 100% failure rate.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21103) Thin 3.0: Rename DataStreamerOptions.batchSize to pageSize

2023-12-18 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21103:

Description: 
Rename *DataStreamerOptions.batchSize* to *pageSize* to be consistent with 
cursors and other APIs.
Update the Java and .NET clients.

  was:Rename *DataStreamerOptions.batchSize* to *pageSize* to be consistent 
with cursors and other APIs.


> Thin 3.0: Rename DataStreamerOptions.batchSize to pageSize
> --
>
> Key: IGNITE-21103
> URL: https://issues.apache.org/jira/browse/IGNITE-21103
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>
> Rename *DataStreamerOptions.batchSize* to *pageSize* to be consistent with 
> cursors and other APIs.
> Update Java and .NET client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21103) Thin 3.0: Rename DataStreamerOptions.batchSize to pageSize

2023-12-18 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21103:

Description: Rename *DataStreamerOptions.batchSize* to *pageSize* to be 
consistent with cursors and other APIs.

> Thin 3.0: Rename DataStreamerOptions.batchSize to pageSize
> --
>
> Key: IGNITE-21103
> URL: https://issues.apache.org/jira/browse/IGNITE-21103
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>
> Rename *DataStreamerOptions.batchSize* to *pageSize* to be consistent with 
> cursors and other APIs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21080) Improve of calculation cache sizes for SQL engine

2023-12-18 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich updated IGNITE-21080:
---
Description: 
As of now we use hardcoded constants for our internal caches, which looks wrong. 
At first glance these constants shouldn't be configurable by the user, but we 
could use a calculated estimated size instead.
Let's make the improvement:
||Constant||Proposed calculation||
|SqlQueryProcessor#PARSED_RESULT_CACHE_SIZE|SqlQueryProcessor#PLAN_CACHE_SIZE * 3|
|ExecutionServiceImpl#CACHE_SIZE|SqlQueryProcessor#PLAN_CACHE_SIZE * 3|
|SqlQueryProcessor#TABLE_CACHE_SIZE|number of tables * 2 (needs to be discussed)|
|SqlQueryProcessor#SCHEMA_CACHE_SIZE|currently used for two caches; use 
TABLE_CACHE_SIZE for the table cache and keep as is for catalog versions.|

We could also recreate the caches when the number of tables in the cluster 
changes significantly.
 

  was:
As of now we use hardcoded constants for our internal caches, it looks wrong. 
These constants for first glance shouldn't be configured by user, but we could 
use calculated estimated size.
Let't make the improvement:
||Constant||Proposed calculation||
|SqlQueryProcessor#PARSED_RESULT_CACHE_SIZE |SqlQueryProcessor#PLAN_CACHE_SIZE 
*3|
|ExecutionServiceImpl#CACHE_SIZE|SqlQueryProcessor#PLAN_CACHE_SIZE * 3|
|SqlQueryProcessor#TABLE_CACHE_SIZE|number of tables * 2|
|SqlQueryProcessor#SCHEMA_CACHE_SIZE|as of now used for two caches, need to use 
TABLE_CACHE_SIZE for table cache, and keep as is for catalog versions.|
| | |

We could introduce recreate caches for significant change amount of tables in a 
cluster.
 


> Improve of calculation cache sizes for SQL engine
> -
>
> Key: IGNITE-21080
> URL: https://issues.apache.org/jira/browse/IGNITE-21080
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Yury Gerzhedovich
>Priority: Major
>  Labels: ignite-3
>
> As of now we use hardcoded constants for our internal caches, which looks 
> wrong. At first glance these constants shouldn't be configurable by the user, 
> but we could use a calculated estimated size instead.
> Let's make the improvement:
> ||Constant||Proposed calculation||
> |SqlQueryProcessor#PARSED_RESULT_CACHE_SIZE|SqlQueryProcessor#PLAN_CACHE_SIZE * 3|
> |ExecutionServiceImpl#CACHE_SIZE|SqlQueryProcessor#PLAN_CACHE_SIZE * 3|
> |SqlQueryProcessor#TABLE_CACHE_SIZE|number of tables * 2 (needs to be discussed)|
> |SqlQueryProcessor#SCHEMA_CACHE_SIZE|currently used for two caches; use 
> TABLE_CACHE_SIZE for the table cache and keep as is for catalog versions.|
> We could also recreate the caches when the number of tables in the cluster 
> changes significantly.
>  
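
For illustration only, a sketch of the proposed derivation (the factors come 
from the table above and are still to be discussed; the class and method names 
are made up for the example):
{code:java}
/** Sketch: derives SQL engine cache sizes instead of using hardcoded constants. */
final class SqlCacheSizes {
    static int parsedResultCacheSize(int planCacheSize) {
        return planCacheSize * 3;
    }

    static int executionCacheSize(int planCacheSize) {
        return planCacheSize * 3;
    }

    static int tableCacheSize(int tableCount) {
        return tableCount * 2; // factor still to be discussed
    }
}
{code}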



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21103) Thin 3.0: Rename DataStreamerOptions.batchSize to pageSize

2023-12-18 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-21103:
---

 Summary: Thin 3.0: Rename DataStreamerOptions.batchSize to pageSize
 Key: IGNITE-21103
 URL: https://issues.apache.org/jira/browse/IGNITE-21103
 Project: Ignite
  Issue Type: Improvement
  Components: platforms, thin client
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 3.0.0-beta2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21102) Incorrect cluster state output for ACTIVE_READ_ONLY in --baseline

2023-12-18 Thread Julia Bakulina (Jira)
Julia Bakulina created IGNITE-21102:
---

 Summary: Incorrect cluster state output for ACTIVE_READ_ONLY in 
--baseline
 Key: IGNITE-21102
 URL: https://issues.apache.org/jira/browse/IGNITE-21102
 Project: Ignite
  Issue Type: Bug
Reporter: Julia Bakulina
Assignee: Julia Bakulina


Incorrect cluster state output for ACTIVE_READ_ONLY in --baseline.

org.apache.ignite.internal.commandline.BaselineCommand#baselinePrint0
{code:java}
logger.info("Cluster state: " + (res.isActive() ? "active" : 
"inactive"));
{code}
org.apache.ignite.cluster.ClusterState#ACTIVE_READ_ONLY

 

An example of changing the cluster state:
{code:java}
Command [SET-STATE] started
Arguments: ... --set-state ACTIVE_READ_ONLY

Cluster state changed to ACTIVE_READ_ONLY
Command [SET-STATE] finished with code: 0 {code}
Cluster state as reported by control.sh --baseline:
{code:java}
Command [BASELINE] started
Arguments: ... --baseline

Cluster state: active
Current topology version: 1
Baseline auto adjustment disabled:...
Current topology version: 1 (...)
Baseline nodes:
    ...

Number of baseline nodes: 1
Other nodes not found.
Command [BASELINE] finished with code: 0 {code}
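
A possible direction for the fix, as a sketch only (it assumes the baseline task 
result can expose the full ClusterState, e.g. via a hypothetical res.state(), 
instead of only the isActive() flag):
{code:java}
// Sketch: print the actual cluster state instead of collapsing it to active/inactive.
// res.state() is an assumed accessor; the current result object only exposes isActive().
logger.info("Cluster state: " + res.state());
{code}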



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21101) Renaming a node in named list disrupts update functionality

2023-12-18 Thread Ivan Gagarkin (Jira)
Ivan Gagarkin created IGNITE-21101:
--

 Summary: Renaming a node in named list disrupts update 
functionality
 Key: IGNITE-21101
 URL: https://issues.apache.org/jira/browse/IGNITE-21101
 Project: Ignite
  Issue Type: Bug
Reporter: Ivan Gagarkin


 

To reproduce:
 # Remove the NPE check on line 137 of ConfigurationNotifier
 # Run 
org.apache.ignite.internal.configuration.notifications.ConfigurationListenerTest#namedListNodeOnRenameAndThenUpdateSubElement

{code:java}
{
    "elemetns": [
        {
            "element1": {
                "entries": [
                    {
                        "entry1": {
                            "name": "entry1",
                            "value": "value1"
                        }
                    },
                    {
                        "entry2": {
                            "name": "entry2",
                            "value": "value2"
                        }
                    }
                ]
            }
        }
    ]
} {code}
 
 * Subscribe to changes in "entry1".
 * Rename "element1" to "element2".
 * After the renaming, update "entry1".
 * As a result, an NPE occurs.

 
{code:java}
java.util.concurrent.ExecutionException: 
org.apache.ignite.configuration.ConfigurationChangeException: Failed to change 
configuration ​ at 
java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
 at 
java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2022)
 at 
org.apache.ignite.internal.configuration.notifications.ConfigurationListenerTest.namedListNodeOnRenameAndThenUpdateSubElement(ConfigurationListenerTest.java:707)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
 at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
 at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217)
 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138)
 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
 at 

[jira] [Assigned] (IGNITE-20844) Introduce JobExecution interface

2023-12-18 Thread Vadim Pakhnushev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Pakhnushev reassigned IGNITE-20844:
-

Assignee: Vadim Pakhnushev

> Introduce JobExecution interface
> 
>
> Key: IGNITE-20844
> URL: https://issues.apache.org/jira/browse/IGNITE-20844
> Project: Ignite
>  Issue Type: Improvement
>  Components: compute
>Reporter: Mikhail Pochatkin
>Assignee: Vadim Pakhnushev
>Priority: Major
>  Labels: ignite-3
>
> Currently *org.apache.ignite.compute.IgniteCompute* has the following API:
> {code:java}
>  CompletableFuture executeAsync(...); {code}
> In this task we need to introduce the JobExecution interface:
> {code:java}
> public interface JobExecution {
>     CompletionStage resultAsync();
> 
>     CompletionStage statusAsync();
> 
>     default CompletionStage idAsync() {
>         return statusAsync().thenApply(status -> status.id());
>     }
> 
>     CompletionStage cancelAsync();
> 
>     CompletionStage changePriority(long newPriority);
> } {code}
>  and modify the public API:
> {code:java}
>   JobExecution executeAsync(...); {code}
> +*Important note*+
> Implementation of the JobExecution interface on the client side can be done in 
> follow-up tickets, but the changes made in this ticket should be backward 
> compatible. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21100) [Python Thin Client] Key class binary metadata registration is skipped

2023-12-18 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-21100:
-
Affects Version/s: python-0.6.1

> [Python Thin Client] Key class binary metadata registration is skipped
> --
>
> Key: IGNITE-21100
> URL: https://issues.apache.org/jira/browse/IGNITE-21100
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: python-0.6.1
>Reporter: Mikhail Petrov
>Priority: Major
>
> Python thin client does not register binary metadata for user classes that 
> are used as a key for cache operations. 
> It seems that it happens because 
> 1. When we calculate hash code of a key object, we serialize it. The result 
> is cached in `_buffer` attribute of the object. (see 
>   a) pyignite.datatypes.complex.BinaryObject.hashcode
>   b) pyignite/binary.py#write_footer:203 ). 
> The hash code of a key object is used to determine which node this key 
> belongs to (PA).  
> 2. But when we are about to send key object to the server side - we skip 
> cache object serialization as the result is already saved in the buffer 
> mentioned above AND skip its binary type registration.
> (see pyignite.datatypes.complex.BinaryObject.from_python_not_null)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21100) [Python Thin Client] Key class binary metadata registration is skipped

2023-12-18 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-21100:
-
Component/s: python

> [Python Thin Client] Key class binary metadata registration is skipped
> --
>
> Key: IGNITE-21100
> URL: https://issues.apache.org/jira/browse/IGNITE-21100
> Project: Ignite
>  Issue Type: Bug
>  Components: python
>Affects Versions: python-0.6.1
>Reporter: Mikhail Petrov
>Priority: Major
>
> Python thin client does not register binary metadata for user classes that 
> are used as a key for cache operations. 
> It seems that it happens because 
> 1. When we calculate hash code of a key object, we serialize it. The result 
> is cached in `_buffer` attribute of the object. (see 
>   a) pyignite.datatypes.complex.BinaryObject.hashcode
>   b) pyignite/binary.py#write_footer:203 ). 
> The hash code of a key object is used to determine which node this key 
> belongs to (PA).  
> 2. But when we are about to send key object to the server side - we skip 
> cache object serialization as the result is already saved in the buffer 
> mentioned above AND skip its binary type registration.
> (see pyignite.datatypes.complex.BinaryObject.from_python_not_null)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21100) [Python Thin Client] Key class binary metadata registration is skipped

2023-12-18 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-21100:
-
Issue Type: Bug  (was: Task)

> [Python Thin Client] Key class binary metadata registration is skipped
> --
>
> Key: IGNITE-21100
> URL: https://issues.apache.org/jira/browse/IGNITE-21100
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Petrov
>Priority: Minor
>
> Python thin client does not register binary metadata for user classes that 
> are used as a key for cache operations. 
> It seems that it happens because 
> 1. When we calculate hash code of a key object, we serialize it. The result 
> is cached in `_buffer` attribute of the object. (see 
>   a) pyignite.datatypes.complex.BinaryObject.hashcode
>   b) pyignite/binary.py#write_footer:203 ). 
> The hash code of a key object is used to determine which node this key 
> belongs to (PA).  
> 2. But when we are about to send key object to the server side - we skip 
> cache object serialization as the result is already saved in the buffer 
> mentioned above AND skip its binary type registration.
> (see pyignite.datatypes.complex.BinaryObject.from_python_not_null)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21100) [Python Thin Client] Key class binary metadata registration is skipped

2023-12-18 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-21100:
-
Priority: Major  (was: Minor)

> [Python Thin Client] Key class binary metadata registration is skipped
> --
>
> Key: IGNITE-21100
> URL: https://issues.apache.org/jira/browse/IGNITE-21100
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Petrov
>Priority: Major
>
> Python thin client does not register binary metadata for user classes that 
> are used as a key for cache operations. 
> It seems that it happens because 
> 1. When we calculate hash code of a key object, we serialize it. The result 
> is cached in `_buffer` attribute of the object. (see 
>   a) pyignite.datatypes.complex.BinaryObject.hashcode
>   b) pyignite/binary.py#write_footer:203 ). 
> The hash code of a key object is used to determine which node this key 
> belongs to (PA).  
> 2. But when we are about to send key object to the server side - we skip 
> cache object serialization as the result is already saved in the buffer 
> mentioned above AND skip its binary type registration.
> (see pyignite.datatypes.complex.BinaryObject.from_python_not_null)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21100) [Python Thin Client] Key class binary metadata registration is skipped

2023-12-18 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-21100:

Description: 
Python thin client does not register binary metadata for user classes that are 
used as a key for cache operations. 

It seems that it happens because 
1. When we calculate hash code of a key object, we serialize it. The result is 
cached in `_buffer` attribute of the object. (see 
  a) pyignite.datatypes.complex.BinaryObject.hashcode
  b) pyignite/binary.py#write_footer:203 ). 
The hash code of a key object is used to determine which node this key belongs 
to (PA).  
2. But when we are about to send key object to the server side - we skip cache 
object serialization as the result is already saved in the buffer mentioned 
above AND skip its binary type registration.
(see pyignite.datatypes.complex.BinaryObject.from_python_not_null)

  was:
Python thin client does not register binary metadata for user classes that are 
used as a key for cache operations. 

It seems that it happens because 
1. When we calculate hash code of a key object, we serialize it. The result is 
cached in `_buffer` attribute of the object. (see 
  a) pyignite.datatypes.complex.BinaryObject.hashcode
  b) pyignite/binary.py#write_footer:203 ). 
The hash code of a key object is used to determine which node this key belongs 
to (PA).  
2. But when we are about to send key object to the server side - we skip cache 
object serialization as the serialization result is already saved in the buffer 
mentioned above AND skip its binary type registration.
(see pyignite.datatypes.complex.BinaryObject.from_python_not_null)


> [Python Thin Client] Key class binary metadata registration is skipped
> --
>
> Key: IGNITE-21100
> URL: https://issues.apache.org/jira/browse/IGNITE-21100
> Project: Ignite
>  Issue Type: Task
>Reporter: Mikhail Petrov
>Priority: Minor
>
> Python thin client does not register binary metadata for user classes that 
> are used as a key for cache operations. 
> It seems that it happens because 
> 1. When we calculate hash code of a key object, we serialize it. The result 
> is cached in `_buffer` attribute of the object. (see 
>   a) pyignite.datatypes.complex.BinaryObject.hashcode
>   b) pyignite/binary.py#write_footer:203 ). 
> The hash code of a key object is used to determine which node this key 
> belongs to (PA).  
> 2. But when we are about to send key object to the server side - we skip 
> cache object serialization as the result is already saved in the buffer 
> mentioned above AND skip its binary type registration.
> (see pyignite.datatypes.complex.BinaryObject.from_python_not_null)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21100) [Python Thin Client] Key class binary metadata registration is skipped

2023-12-18 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-21100:

Description: 
Python thin client does not register binary metadata for user classes that are 
used as a key for cache operations. 

It seems that it happens because 
1. When we calculate hash code of a key object, we serialize it. The result is 
cached in `_buffer` attribute of the object. (see 
  a) pyignite.datatypes.complex.BinaryObject.hashcode
  b) pyignite/binary.py#write_footer:203 ). 
The hash code of a key object is used to determine which node this key belongs 
to (PA).  
2. But when we are about to send key object to the server side - we skip cache 
object serialization as the serialization result is already saved in the buffer 
mentioned above AND skip its binary type registration.
(see pyignite.datatypes.complex.BinaryObject.from_python_not_null)

  was:
Python thin client does not register binary metadata for user classes that are 
used as a key for cache operations. 

It seems that it happens because 
1. When we calculate hash code of a key object, we serialize it. The result is 
cached in `_buffer` attribute of the object. (see 
  a) pyignite.datatypes.complex.BinaryObject.hashcode
  b) pyignite/binary.py#write_footer:203 
). 
The hash code of a key object is used to determine which node this key belongs 
to (PA).  
2. But when we are about to send key object to the server side - we skip cache 
object serialization as the serialization result is already saved in the buffer 
mentioned above AND skip its binary type registration.
(see pyignite.datatypes.complex.BinaryObject.from_python_not_null)


> [Python Thin Client] Key class binary metadata registration is skipped
> --
>
> Key: IGNITE-21100
> URL: https://issues.apache.org/jira/browse/IGNITE-21100
> Project: Ignite
>  Issue Type: Task
>Reporter: Mikhail Petrov
>Priority: Minor
>
> Python thin client does not register binary metadata for user classes that 
> are used as a key for cache operations. 
> It seems that it happens because 
> 1. When we calculate hash code of a key object, we serialize it. The result 
> is cached in `_buffer` attribute of the object. (see 
>   a) pyignite.datatypes.complex.BinaryObject.hashcode
>   b) pyignite/binary.py#write_footer:203 ). 
> The hash code of a key object is used to determine which node this key 
> belongs to (PA).  
> 2. But when we are about to send key object to the server side - we skip 
> cache object serialization as the serialization result is already saved in 
> the buffer mentioned above AND skip its binary type registration.
> (see pyignite.datatypes.complex.BinaryObject.from_python_not_null)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21100) [Python Thin Client] Key class binary metadata registration is skipped

2023-12-18 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-21100:

Description: 
Python thin client does not register binary metadata for user classes that are 
used as a key for cache operations. 

It seems that it happens because 
1. When we calculate hash code of a key object, we serialize it. The result is 
cached in `_buffer` attribute of the object. (see 
  a) pyignite.datatypes.complex.BinaryObject.hashcode
  b) pyignite/binary.py#write_footer:203 
). 
The hash code of a key object is used to determine which node this key belongs 
to (PA).  
2. But when we are about to send key object to the server side - we skip cache 
object serialization as the serialization result is already saved in the buffer 
mentioned above AND skip its binary type registration.
(see pyignite.datatypes.complex.BinaryObject.from_python_not_null)

  was:
Python thin client does not register binary metadata for user classes that are 
used as a key for cache operations. 

It seems that it happens because 
1. When we calculate hash code of a key object, we serialize it. The result is 
cached in `_buffer` attribute of the object. (see 
pyignite.datatypes.complex.BinaryObject.hashcode, 
pyignite/binary.py#write_footer:203 ). The hash code of a key object is used to 
determine which node this key belongs to (PA).  
2. But when we are about to send key object to the server side - we skip cache 
object serialization as the result is already saved in the buffer mentioned 
above AND skip its binary type registration.
(see pyignite.datatypes.complex.BinaryObject.from_python_not_null)


> [Python Thin Client] Key class binary metadata registration is skipped
> --
>
> Key: IGNITE-21100
> URL: https://issues.apache.org/jira/browse/IGNITE-21100
> Project: Ignite
>  Issue Type: Task
>Reporter: Mikhail Petrov
>Priority: Minor
>
> Python thin client does not register binary metadata for user classes that 
> are used as a key for cache operations. 
> It seems that it happens because 
> 1. When we calculate hash code of a key object, we serialize it. The result 
> is cached in `_buffer` attribute of the object. (see 
>   a) pyignite.datatypes.complex.BinaryObject.hashcode
>   b) pyignite/binary.py#write_footer:203 
> ). 
> The hash code of a key object is used to determine which node this key 
> belongs to (PA).  
> 2. But when we are about to send key object to the server side - we skip 
> cache object serialization as the serialization result is already saved in 
> the buffer mentioned above AND skip its binary type registration.
> (see pyignite.datatypes.complex.BinaryObject.from_python_not_null)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21100) [Python Thin Client] Key class binary metadata registration is skipped

2023-12-18 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-21100:

Summary: [Python Thin Client] Key class binary metadata registration is 
skipped  (was: [Python Thin Client] Key class binary metadata is not registered)

> [Python Thin Client] Key class binary metadata registration is skipped
> --
>
> Key: IGNITE-21100
> URL: https://issues.apache.org/jira/browse/IGNITE-21100
> Project: Ignite
>  Issue Type: Task
>Reporter: Mikhail Petrov
>Priority: Minor
>
> Python thin client does not register binary metadata for user classes that 
> are used as a key for cache operations. 
> It seems that it happens because 
> 1. When we calculate hash code of a key object, we serialize it. The result 
> is cached in `_buffer` attribute of the object. (see 
> pyignite.datatypes.complex.BinaryObject.hashcode, 
> pyignite/binary.py#write_footer:203 ). The hash code of a key object is used 
> to determine which node this key belongs to (PA).  
> 2. But when we are about to send key object to the server side - we skip 
> cache object serialization as the result is already saved in the buffer 
> mentioned above AND skip its binary type registration.
> (see pyignite.datatypes.complex.BinaryObject.from_python_not_null)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21100) [Python Thin Client] Key class binary metadata is not registered

2023-12-18 Thread Mikhail Petrov (Jira)
Mikhail Petrov created IGNITE-21100:
---

 Summary: [Python Thin Client] Key class binary metadata is not 
registered
 Key: IGNITE-21100
 URL: https://issues.apache.org/jira/browse/IGNITE-21100
 Project: Ignite
  Issue Type: Task
Reporter: Mikhail Petrov


Python thin client does not register binary metadata for user classes that are 
used as a key for cache operations. 

It seems that it happens because 
1. When we calculate hash code of a key object, we serialize it. The result is 
cached in `_buffer` attribute of the object. (see 
pyignite.datatypes.complex.BinaryObject.hashcode, 
pyignite/binary.py#write_footer:203 ). The hash code of a key object is used to 
determine which node this key belongs to (PA).  
2. But when we are about to send key object to the server side - we skip cache 
object serialization as the result is already saved in the buffer mentioned 
above AND skip its binary type registration.
(see pyignite.datatypes.complex.BinaryObject.from_python_not_null)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-21036) Add possibility to obtain parameters of already existing zone.

2023-12-18 Thread Mirza Aliev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798041#comment-17798041
 ] 

Mirza Aliev edited comment on IGNITE-21036 at 12/18/23 8:06 AM:


It seems quite easy to add a system view in 
{{org.apache.ignite.internal.catalog.CatalogManagerImpl}} that provides a view of 
all zones available in the current catalog version. We just need to follow the 
guide from {{modules/system-view-api/README.md}}


was (Author: maliev):
Seems that this is quite easy to add system view in the 
`org.apache.ignite.internal.catalog.CatalogManagerImpl` where we can provide 
view with all available zones by the current catalog version. Just need to 
follow the guide from `modules/system-view-api/README.md`  

> Add possibility to obtain parameters of already existing zone.
> --
>
> Key: IGNITE-21036
> URL: https://issues.apache.org/jira/browse/IGNITE-21036
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Priority: Major
>  Labels: ignite-3
>
> For now it's possible to create a distribution zone with different configurable 
> parameters, like:
> "CREATE ZONE name_of_the_zone WITH partitions=7, DATA_NODES_AUTO_ADJUST=100"
> but there is no way to obtain these zone parameters through a public API (views, 
> JMX?). It seems we need such functionality.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21036) Add possibility to obtain parameters of already existing zone.

2023-12-18 Thread Mirza Aliev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798041#comment-17798041
 ] 

Mirza Aliev commented on IGNITE-21036:
--

Seems that this is quite easy to add system view in the 
`org.apache.ignite.internal.catalog.CatalogManagerImpl` where we can provide 
view with all available zones by the current catalog version. Just need to 
follow the guide from `modules/system-view-api/README.md`  

> Add possibility to obtain parameters of already existing zone.
> --
>
> Key: IGNITE-21036
> URL: https://issues.apache.org/jira/browse/IGNITE-21036
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Priority: Major
>  Labels: ignite-3
>
> For now it's possible to create a distribution zone with different configurable 
> parameters, like:
> "CREATE ZONE name_of_the_zone WITH partitions=7, DATA_NODES_AUTO_ADJUST=100"
> but there is no way to obtain these zone parameters through a public API (views, 
> JMX?). It seems we need such functionality.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)