[jira] [Updated] (IGNITE-16472) Incorporate the CMG manager into the node lifecycle

2022-04-19 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-16472:
-
Reviewer:   (was: Alexander Lapin)

> Incorporate the CMG manager into the node lifecycle
> ---
>
> Key: IGNITE-16472
> URL: https://issues.apache.org/jira/browse/IGNITE-16472
> Project: Ignite
>  Issue Type: Task
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After the CMG manager is introduced, the code around it should also be 
> updated:
>  # The node startup order should be altered so that the CMG manager works 
> correctly.
>  # All components that come after the CMG manager should be started only 
> after the join procedure has finished successfully.
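A minimal sketch of the intended gating (illustrative only: the component names and the {{joinFuture()}} method are assumptions, not the actual Ignite 3 API):
{code:java}
// Start the components the CMG manager depends on, then the CMG manager itself.
vaultManager.start();
networkManager.start();
cmgManager.start();

// Everything that comes after the CMG manager waits for a successful join.
cmgManager.joinFuture()
    .thenRun(() -> {
        metaStorageManager.start();
        tableManager.start();
    })
    .join();
{code}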



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16472) Incorporate the CMG manager into the node lifecycle

2022-04-19 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-16472:
-
Reviewer: Alexander Lapin  (was: Alexander Lapin)

> Incorporate the CMG manager into the node lifecycle
> ---
>
> Key: IGNITE-16472
> URL: https://issues.apache.org/jira/browse/IGNITE-16472
> Project: Ignite
>  Issue Type: Task
>Reporter: Aleksandr Polovtcev
>Assignee: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After the CMG manager is introduced, the code around it should also be 
> updated:
>  # The node startup order should be altered so that the CMG manager works 
> correctly.
>  # All components that come after the CMG manager should be started only 
> after the join procedure has finished successfully.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16668) Raft group reconfiguration on node failure

2022-04-19 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-16668:
-
Description: 
If a node storing a partition of an in-memory table fails and leaves the 
cluster, all data it had is lost. From the point of view of the partition it 
looks as if the node has left forever.

Although the Raft protocol tolerates losing some of the nodes composing a Raft 
group (partition), for in-memory tables we cannot restore the replication 
factor because of the in-memory nature of the data.

It means that we need to detect the failure of each node owning a partition and 
recalculate the assignments for the table without preserving the replication 
factor.
h4. Upd 1:
h4. Problem

By design, Raft has several persisted segments, e.g. the Raft meta 
(term/committedIndex) and the stable Raft log. So, by converting common Raft 
into an in-memory one, it's possible to break some of its invariants. For 
example, Node C could vote for Candidate A before a self-restart and then vote 
for Candidate B after it. As a result, two leaders would be elected, which is 
illegal.
 
!Screenshot from 2022-04-19 11-11-05.png!
 
h4. Solution

In order to solve the problem mentioned above, it's possible to remove the 
restarting node from the peers of the corresponding Raft group and then return 
it back. The peer removal must be finished before the corresponding Raft server 
node is restarted.
 
  !Screenshot from 2022-04-19 11-12-55.png!
 
The process of removing and then returning the restarting node is, however, 
itself tricky. To explain why it's a non-trivial action, it's necessary to 
outline the main ideas of the rebalance protocol.

Reconfiguration of a Raft group is a process driven by a change of the 
assignments. Each partition has three corresponding sets of assignments 
stored in the metastore:
 # assignments.stable - the current distribution

 # assignments.pending - the partition distribution for an ongoing rebalance, if any

 # assignments.planned - in some cases it's not possible to cancel or merge a 
pending rebalance with a new one. In that case the newly calculated assignments 
are stored explicitly under the assignments.planned key. It's worth noting that 
it doesn't make sense to keep more than one planned rebalance: any newly 
scheduled one overwrites the existing entry.

However, this idea of overwriting the assignments.planned key won't work in the 
context of an in-memory Raft restart, because it's not valid to overwrite the 
reduction of assignments. Let's illustrate the problem with the following 
example.
 # In-memory partition p1 is hosted on nodes A, B and C, meaning that 
p1.assignments.stable=[A,B,C]

 # Let's say that the baseline was changed, resulting in a rebalance to 
assignments.pending=[A,B,C,D]

 # During the non-cancelable phase of [A,B,C]->[A,B,C,D], node C fails and 
comes back, meaning that we should plan both the [A,B,D] and the [A,B,C,D] 
assignments. Both would have to be recorded in the single assignments.planned 
key, meaning that [A,B,C,D] would overwrite the reduction [A,B,D], so no actual 
Raft reconfiguration would take place, which is not acceptable.

In order to overcome this issue, let's introduce a new key, 
_assignments.switch_, that holds the nodes which should be removed and then 
returned, and run the following actions:
h5. On in-memory partition restart (or on partition start with cleaned-up PDS):
h6. as-is:

N/A
h6. to-be:
{code:java}
metastoreInvoke*: // atomic metastore call through the multi-invoke api
    if empty(partition.assignments.change.trigger.revision) ||
            partition.assignments.change.trigger.revision < event.revision:
        // add the restarting node to the switch set
        // (the second union operand is assumed to be the restarting node)
        var assignmentsSwitch = union(partition.assignments.switch, restartingNode)

        if empty(partition.assignments.pending):
            partition.assignments.pending =
                subtract(partition.assignments.stable, assignmentsSwitch)
            partition.assignments.switch = assignmentsSwitch
            partition.assignments.change.trigger.revision = event.revision
        else:
            partition.assignments.switch = assignmentsSwitch
            partition.assignments.change.trigger.revision = event.revision
    else:
        skip
{code}
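The same restart step expressed with plain java.util.Set operations, for readability (a sketch; the method and parameter names are assumed, not taken from the Ignite code base):
{code:java}
import java.util.HashSet;
import java.util.Set;

class SwitchCalculator {
    /** Pending assignments computed on an in-memory partition restart. */
    static Set<String> pendingOnRestart(Set<String> stable, Set<String> switchSet, String restartingNode) {
        Set<String> newSwitch = new HashSet<>(switchSet);
        newSwitch.add(restartingNode);        // union(assignments.switch, restartingNode)

        Set<String> pending = new HashSet<>(stable);
        pending.removeAll(newSwitch);         // subtract(assignments.stable, assignmentsSwitch)
        return pending;
    }
}
{code}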
h5. On rebalance done
h6. as-is:
{code:java}
metastoreInvoke: // atomic
    partition.assignments.stable = appliedPeers
    if empty(partition.assignments.planned):
        partition.assignments.pending = empty
    else:
        partition.assignments.pending = partition.assignments.planned
{code}
h6. to-be:
{code:java}
metastoreInvoke: // atomic
    partition.assignments.stable = appliedPeers

    if !empty(partition.assignments.switch):
        // nodes from the switch set that are not yet back in the stable set
        var switchMinusStable = subtract(partition.assignments.switch,
            partition.assignments.stable)
        if !empty(switchMinusStable): // return the node(s) that came back
            partition.assignments.pending = union(partition.assignments.stable,
                switchMinusStable)
  

[jira] [Updated] (IGNITE-16875) KeyValueStorage for Meta storage is stopped twice on node stop

2022-04-19 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-16875:
--
Description: 
The KeyValueStorage that is created for MetaStorageManager is also stored as a 
field of MetaStorageListener. On node stop, it is stopped twice:
- in MetaStorageManager#stop
- in MetaStorageListener#onShutdown, which is called when the RAFT manager stops.

The latter call is excessive.

  was:
The KeyValueStorage that is created for MetaStorageManager is also stored as a 
field of MetaStorageListener. On node stop, it is stopped twice:
- in MetaStorageManager#stop
- in MetaStorageListener#onShutdown, which is called when the RAFT manager stops.
The latter call is excessive.


> KeyValueStorage for Meta storage is stopped twice on node stop
> --
>
> Key: IGNITE-16875
> URL: https://issues.apache.org/jira/browse/IGNITE-16875
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Chudov
>Priority: Minor
>  Labels: ignite-3
>
> The KeyValueStorage that is created for MetaStorageManager is also stored as a 
> field of MetaStorageListener. On node stop, it is stopped twice:
> - in MetaStorageManager#stop
> - in MetaStorageListener#onShutdown, which is called when the RAFT manager 
> stops.
> The latter call is excessive.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-16858) Check the possibility of rocksdb instances leak

2022-04-19 Thread Denis Chudov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17524335#comment-17524335
 ] 

Denis Chudov commented on IGNITE-16858:
---

It seems that there is no leak of RocksDB instances, but while researching I 
found a bug, described in https://issues.apache.org/jira/browse/IGNITE-16875 .

> Check the possibility of rocksdb instances leak
> ---
>
> Key: IGNITE-16858
> URL: https://issues.apache.org/jira/browse/IGNITE-16858
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Chudov
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> Check the possibility of rocksdb instances leak



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-16875) KeyValueStorage for Meta storage is stopped twice on node stop

2022-04-19 Thread Denis Chudov (Jira)
Denis Chudov created IGNITE-16875:
-

 Summary: KeyValueStorage for Meta storage is stopped twice on node 
stop
 Key: IGNITE-16875
 URL: https://issues.apache.org/jira/browse/IGNITE-16875
 Project: Ignite
  Issue Type: Bug
Reporter: Denis Chudov


The KeyValueStorage that is created for MetaStorageManager is also stored as a 
field of MetaStorageListener. On node stop, it is stopped twice:
- in MetaStorageManager#stop
- in MetaStorageListener#onShutdown, which is called when the RAFT manager stops.
The latter call is excessive.
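One way to avoid the double stop is to make the storage stop idempotent (a minimal sketch with assumed names; the actual fix may simply drop the extra call in MetaStorageListener#onShutdown):
{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

class IdempotentKeyValueStorage {
    private final AtomicBoolean stopped = new AtomicBoolean();

    void stop() {
        // Only the first caller (MetaStorageManager#stop or
        // MetaStorageListener#onShutdown) actually releases the resources.
        if (stopped.compareAndSet(false, true)) {
            closeUnderlyingStorage();
        }
    }

    private void closeUnderlyingStorage() {
        // close RocksDB handles, thread pools, etc.
    }
}
{code}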



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16668) Raft group reconfiguration on node failure

2022-04-19 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-16668:
-
Description: 
If a node storing a partition of an in-memory table fails and leaves the 
cluster, all data it had is lost. From the point of view of the partition it 
looks as if the node has left forever.

Although the Raft protocol tolerates losing some of the nodes composing a Raft 
group (partition), for in-memory tables we cannot restore the replication 
factor because of the in-memory nature of the data.

It means that we need to detect the failure of each node owning a partition and 
recalculate the assignments for the table without preserving the replication 
factor.
h4. Upd 1:
h4. Problem

By design, Raft has several persisted segments, e.g. the Raft meta 
(term/committedIndex) and the stable Raft log. So, by converting common Raft 
into an in-memory one, it's possible to break some of its invariants. For 
example, Node C could vote for Candidate A before a self-restart and then vote 
for Candidate B after it. As a result, two leaders would be elected, which is 
illegal.
 
!Screenshot from 2022-04-19 11-11-05.png!
 
h4. Solution

In order to solve the problem mentioned above, it's possible to remove the 
restarting node from the peers of the corresponding Raft group and then return 
it back. The peer removal must be finished before the corresponding Raft server 
node is restarted.
 
  !Screenshot from 2022-04-19 11-12-55.png!
 
The process of removing and then returning the restarting node is, however, 
itself tricky. To explain why it's a non-trivial action, it's necessary to 
outline the main ideas of the rebalance protocol.

Reconfiguration of a Raft group is a process driven by a change of the 
assignments. Each partition has three corresponding sets of assignments 
stored in the metastore:
 # assignments.stable - the current distribution

 # assignments.pending - the partition distribution for an ongoing rebalance, if any

 # assignments.planned - in some cases it's not possible to cancel or merge a 
pending rebalance with a new one. In that case the newly calculated assignments 
are stored explicitly under the assignments.planned key. It's worth noting that 
it doesn't make sense to keep more than one planned rebalance: any newly 
scheduled one overwrites the existing entry.

However, this idea of overwriting the assignments.planned key won't work in the 
context of an in-memory Raft restart, because it's not valid to overwrite the 
reduction of assignments. Let's illustrate the problem with the following 
example.
 # In-memory partition p1 is hosted on nodes A, B and C, meaning that 
p1.assignments.stable=[A,B,C]

 # Let's say that the baseline was changed, resulting in a rebalance to 
assignments.pending=[A,B,C,D]

 # During the non-cancelable phase of [A,B,C]->[A,B,C,D], node C fails and 
comes back, meaning that we should plan both the [A,B,D] and the [A,B,C,D] 
assignments. Both would have to be recorded in the single assignments.planned 
key, meaning that [A,B,C,D] would overwrite the reduction [A,B,D], so no actual 
Raft reconfiguration would take place, which is not acceptable.

In order to overcome this issue, let's introduce a new key, 
_assignments.switch_, that holds the nodes which should be removed and then 
returned, and run the following actions:
h5. On in-memory partition restart (or on partition start with cleaned-up PDS):
h6. as-is:

N/A
h6. to-be:
{code:java}
metastoreInvoke*: // atomic metastore call through the multi-invoke api
    if empty(partition.assignments.change.trigger.revision) ||
            partition.assignments.change.trigger.revision < event.revision:
        // add the restarting node to the switch set
        // (the second union operand is assumed to be the restarting node)
        var assignmentsSwitch = union(partition.assignments.switch, restartingNode)

        if empty(partition.assignments.pending):
            partition.assignments.pending =
                subtract(partition.assignments.stable, assignmentsSwitch)
            partition.assignments.switch = assignmentsSwitch
            partition.assignments.change.trigger.revision = event.revision
        else:
            partition.assignments.switch = assignmentsSwitch
            partition.assignments.change.trigger.revision = event.revision
    else:
        skip
{code}
h5. On rebalance done
h6. as-is:
{code:java}
metastoreInvoke: // atomic
    partition.assignments.stable = appliedPeers
    if empty(partition.assignments.planned):
        partition.assignments.pending = empty
    else:
        partition.assignments.pending = partition.assignments.planned
{code}
h6. to-be:
{code:java}
metastoreInvoke: // atomic
    partition.assignments.stable = appliedPeers

    if !empty(partition.assignments.switch):
        // nodes from the switch set that are not yet back in the stable set
        var switchMinusStable = subtract(partition.assignments.switch,
            partition.assignments.stable)
        if !empty(switchMinusStable): // return the node(s) that came back
            partition.assignments.pending = union(partition.assignments.stable,
                switchMinusStable)
   

[jira] [Updated] (IGNITE-16668) Raft group reconfiguration on node failure

2022-04-19 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-16668:
-
Description: 
If a node storing a partition of an in-memory table fails and leaves the 
cluster, all data it had is lost. From the point of view of the partition it 
looks as if the node has left forever.

Although the Raft protocol tolerates losing some of the nodes composing a Raft 
group (partition), for in-memory tables we cannot restore the replication 
factor because of the in-memory nature of the data.

It means that we need to detect the failure of each node owning a partition and 
recalculate the assignments for the table without preserving the replication 
factor.
h4. Upd 1:
h4. Problem

By design, Raft has several persisted segments, e.g. the Raft meta 
(term/committedIndex) and the stable Raft log. So, by converting common Raft 
into an in-memory one, it's possible to break some of its invariants. For 
example, Node C could vote for Candidate A before a self-restart and then vote 
for Candidate B after it. As a result, two leaders would be elected, which is 
illegal.
 
!Screenshot from 2022-04-19 11-11-05.png!
 
h4. Solution

In order to solve the problem mentioned above, it's possible to remove the 
restarting node from the peers of the corresponding Raft group and then return 
it back. The peer removal must be finished before the corresponding Raft server 
node is restarted.
 
 
 
The process of removing and then returning the restarting node is, however, 
itself tricky. To explain why it's a non-trivial action, it's necessary to 
outline the main ideas of the rebalance protocol.

Reconfiguration of a Raft group is a process driven by a change of the 
assignments. Each partition has three corresponding sets of assignments 
stored in the metastore:
 # assignments.stable - the current distribution

 # assignments.pending - the partition distribution for an ongoing rebalance, if any

 # assignments.planned - in some cases it's not possible to cancel or merge a 
pending rebalance with a new one. In that case the newly calculated assignments 
are stored explicitly under the assignments.planned key. It's worth noting that 
it doesn't make sense to keep more than one planned rebalance: any newly 
scheduled one overwrites the existing entry.

However, this idea of overwriting the assignments.planned key won't work in the 
context of an in-memory Raft restart, because it's not valid to overwrite the 
reduction of assignments. Let's illustrate the problem with the following 
example.
 # In-memory partition p1 is hosted on nodes A, B and C, meaning that 
p1.assignments.stable=[A,B,C]

 # Let's say that the baseline was changed, resulting in a rebalance to 
assignments.pending=[A,B,C,D]

 # During the non-cancelable phase of [A,B,C]->[A,B,C,D], node C fails and 
comes back, meaning that we should plan both the [A,B,D] and the [A,B,C,D] 
assignments. Both would have to be recorded in the single assignments.planned 
key, meaning that [A,B,C,D] would overwrite the reduction [A,B,D], so no actual 
Raft reconfiguration would take place, which is not acceptable.

In order to overcome this issue, let's introduce a new key, 
_assignments.switch_, that holds the nodes which should be removed and then 
returned, and run the following actions:
h5. On in-memory partition restart (or on partition start with cleaned-up PDS):
h6. as-is:

N/A
h6. to-be:

 
{code:java}
metastoreInvoke*: // atomic metastore call through the multi-invoke api
    if empty(partition.assignments.change.trigger.revision) ||
            partition.assignments.change.trigger.revision < event.revision:
        // add the restarting node to the switch set
        // (the second union operand is assumed to be the restarting node)
        var assignmentsSwitch = union(partition.assignments.switch, restartingNode)

        if empty(partition.assignments.pending):
            partition.assignments.pending =
                subtract(partition.assignments.stable, assignmentsSwitch)
            partition.assignments.switch = assignmentsSwitch
            partition.assignments.change.trigger.revision = event.revision
        else:
            partition.assignments.switch = assignmentsSwitch
            partition.assignments.change.trigger.revision = event.revision
    else:
        skip
{code}
 
h5. On rebalance done
h6. as-is:

 
{code:java}
metastoreInvoke: // atomic
    partition.assignments.stable = appliedPeers
    if empty(partition.assignments.planned):
        partition.assignments.pending = empty
    else:
        partition.assignments.pending = partition.assignments.planned
{code}
 

 
h6. to-be:

 
{code:java}
metastoreInvoke: // atomic
    partition.assignments.stable = appliedPeers

    if !empty(partition.assignments.switch):
        // nodes from the switch set that are not yet back in the stable set
        var switchMinusStable = subtract(partition.assignments.switch,
            partition.assignments.stable)
        if !empty(switchMinusStable): // return the node(s) that came back
            partition.assignments.pending = union(partition.assignments.stable,
                switchMinusStable)

[jira] [Updated] (IGNITE-16668) Raft group reconfiguration on node failure

2022-04-19 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-16668:
-
Attachment: Screenshot from 2022-04-19 11-12-55-1.png

> Raft group reconfiguration on node failure
> --
>
> Key: IGNITE-16668
> URL: https://issues.apache.org/jira/browse/IGNITE-16668
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Chugunov
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
> Attachments: Screenshot from 2022-04-19 11-11-05.png, Screenshot from 
> 2022-04-19 11-12-55-1.png, Screenshot from 2022-04-19 11-12-55.png
>
>
> If a node storing a partition of an in-memory table fails and leaves the 
> cluster, all data it had is lost. From the point of view of the partition it 
> looks as if the node has left forever.
> Although the Raft protocol tolerates losing some of the nodes composing a Raft 
> group (partition), for in-memory tables we cannot restore the replication 
> factor because of the in-memory nature of the data.
> It means that we need to detect the failure of each node owning a partition 
> and recalculate the assignments for the table without preserving the 
> replication factor.
> h4. Upd 1:
> h4. Problem
> By design, Raft has several persisted segments, e.g. the Raft meta 
> (term/committedIndex) and the stable Raft log. So, by converting common Raft 
> into an in-memory one, it's possible to break some of its invariants. For 
> example, Node C could vote for Candidate A before a self-restart and then 
> vote for Candidate B after it. As a result, two leaders would be elected, 
> which is illegal.
>  
> !Screenshot from 2022-04-19 11-11-05.png!
>  
> h4. Solution
> In order to solve the problem mentioned above, it's possible to remove the 
> restarting node from the peers of the corresponding Raft group and then 
> return it back. The peer removal must be finished before the corresponding 
> Raft server node is restarted.
>  
>  
>  
> The process of removing and then returning the restarting node is, however, 
> itself tricky. To explain why it's a non-trivial action, it's necessary to 
> outline the main ideas of the rebalance protocol.
> Reconfiguration of a Raft group is a process driven by a change of the 
> assignments. Each partition has three corresponding sets of assignments 
> stored in the metastore:
>  # assignments.stable - the current distribution
>  # assignments.pending - the partition distribution for an ongoing rebalance, 
> if any
>  # assignments.planned - in some cases it's not possible to cancel or merge a 
> pending rebalance with a new one. In that case the newly calculated 
> assignments are stored explicitly under the assignments.planned key. It's 
> worth noting that it doesn't make sense to keep more than one planned 
> rebalance: any newly scheduled one overwrites the existing entry.
> However, this idea of overwriting the assignments.planned key won't work in 
> the context of an in-memory Raft restart, because it's not valid to overwrite 
> the reduction of assignments. Let's illustrate the problem with the following 
> example.
>  # In-memory partition p1 is hosted on nodes A, B and C, meaning that 
> p1.assignments.stable=[A,B,C]
>  # Let's say that the baseline was changed, resulting in a rebalance to 
> assignments.pending=[A,B,C,D]
>  # During the non-cancelable phase of [A,B,C]->[A,B,C,D], node C fails and 
> comes back, meaning that we should plan both the [A,B,D] and the [A,B,C,D] 
> assignments. Both would have to be recorded in the single assignments.planned 
> key, meaning that [A,B,C,D] would overwrite the reduction [A,B,D], so no 
> actual Raft reconfiguration would take place, which is not acceptable.
> In order to overcome this issue, let's introduce a new key, 
> _assignments.switch_, that holds the nodes which should be removed and then 
> returned, and run the following actions:
> h5. On in-memory partition restart (or on partition start with cleaned-up 
> PDS):
> h6. as-is:
> N/A
> h6. to-be:
>  
> {code:java}
> metastoreInvoke*: // atomic metastore call through the multi-invoke api
>     if empty(partition.assignments.change.trigger.revision) ||
>             partition.assignments.change.trigger.revision < event.revision:
>         // add the restarting node to the switch set
>         // (the second union operand is assumed to be the restarting node)
>         var assignmentsSwitch = union(partition.assignments.switch, restartingNode)
> 
>         if empty(partition.assignments.pending):
>             partition.assignments.pending =
>                 subtract(partition.assignments.stable, assignmentsSwitch)
>             partition.assignments.switch = assignmentsSwitch
>             partition.assignments.change.trigger.revision = event.revision
>         else:
>             partition.assignments.switch = assignmentsSwitch
>             partition.assignments.change.trigger.revision = event.revision
>     else:
>         skip
> {code}
>  
> h5. On 

[jira] [Updated] (IGNITE-16668) Raft group reconfiguration on node failure

2022-04-19 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-16668:
-
Attachment: Screenshot from 2022-04-19 11-11-05.png

> Raft group reconfiguration on node failure
> --
>
> Key: IGNITE-16668
> URL: https://issues.apache.org/jira/browse/IGNITE-16668
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Chugunov
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
> Attachments: Screenshot from 2022-04-19 11-11-05.png, Screenshot from 
> 2022-04-19 11-12-55.png
>
>
> If a node storing a partition of an in-memory table fails and leaves the 
> cluster, all data it had is lost. From the point of view of the partition it 
> looks as if the node has left forever.
> Although the Raft protocol tolerates losing some of the nodes composing a Raft 
> group (partition), for in-memory tables we cannot restore the replication 
> factor because of the in-memory nature of the data.
> It means that we need to detect the failure of each node owning a partition 
> and recalculate the assignments for the table without preserving the 
> replication factor.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16668) Raft group reconfiguration on node failure

2022-04-19 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-16668:
-
Attachment: Screenshot from 2022-04-19 11-12-55.png

> Raft group reconfiguration on node failure
> --
>
> Key: IGNITE-16668
> URL: https://issues.apache.org/jira/browse/IGNITE-16668
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Chugunov
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
> Attachments: Screenshot from 2022-04-19 11-11-05.png, Screenshot from 
> 2022-04-19 11-12-55.png
>
>
> If a node storing a partition of an in-memory table fails and leaves the 
> cluster, all data it had is lost. From the point of view of the partition it 
> looks as if the node has left forever.
> Although the Raft protocol tolerates losing some of the nodes composing a Raft 
> group (partition), for in-memory tables we cannot restore the replication 
> factor because of the in-memory nature of the data.
> It means that we need to detect the failure of each node owning a partition 
> and recalculate the assignments for the table without preserving the 
> replication factor.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16838) Fix TcpCommunicationSpiFreezingClientTest#testFreezingClient

2022-04-19 Thread Amelchev Nikita (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amelchev Nikita updated IGNITE-16838:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Fix TcpCommunicationSpiFreezingClientTest#testFreezingClient
> 
>
> Key: IGNITE-16838
> URL: https://issues.apache.org/jira/browse/IGNITE-16838
> Project: Ignite
>  Issue Type: Bug
>Reporter: Nikolay Izhikov
>Assignee: Amelchev Nikita
>Priority: Minor
>  Labels: ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Test fails every run after 
> https://github.com/apache/ignite/commit/ea52fa47190f330c98c83347aa6e5547ac9b



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-16874) Sql. Bump calcite version up to 1.30

2022-04-19 Thread Evgeny Stanilovsky (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17524273#comment-17524273
 ] 

Evgeny Stanilovsky commented on IGNITE-16874:
-

[~tledkov-gridgain] can you approve, please?

> Sql. Bump calcite version up to 1.30
> 
>
> Key: IGNITE-16874
> URL: https://issues.apache.org/jira/browse/IGNITE-16874
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite, ignite-3
> Fix For: 3.0.0-alpha5
>
>
> It's time to update the dependent Apache Calcite version.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (IGNITE-16838) Fix TcpCommunicationSpiFreezingClientTest#testFreezingClient

2022-04-19 Thread Amelchev Nikita (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17524260#comment-17524260
 ] 

Amelchev Nikita commented on IGNITE-16838:
--

[~AldoRaine], [~nizhikov], the commit is not related to the test.

I suggest simulating STW on the client by stopping the process ({{kill -STOP}}).
See the [PR|https://github.com/apache/ignite/pull/9986].
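A minimal sketch of that approach (assuming a POSIX environment and a known client PID; the actual PR may implement it differently):
{code:java}
// Suspend and later resume the client JVM to emulate a long stop-the-world pause.
static void freezeProcess(long pid, long pauseMillis) throws Exception {
    new ProcessBuilder("kill", "-STOP", Long.toString(pid)).start().waitFor();
    Thread.sleep(pauseMillis); // keep the client frozen long enough to hit the timeouts
    new ProcessBuilder("kill", "-CONT", Long.toString(pid)).start().waitFor();
}
{code}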

> Fix TcpCommunicationSpiFreezingClientTest#testFreezingClient
> 
>
> Key: IGNITE-16838
> URL: https://issues.apache.org/jira/browse/IGNITE-16838
> Project: Ignite
>  Issue Type: Bug
>Reporter: Nikolay Izhikov
>Assignee: Amelchev Nikita
>Priority: Minor
>  Labels: ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Test fails every run after 
> https://github.com/apache/ignite/commit/ea52fa47190f330c98c83347aa6e5547ac9b



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (IGNITE-16838) Fix TcpCommunicationSpiFreezingClientTest#testFreezingClient

2022-04-19 Thread Amelchev Nikita (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amelchev Nikita reassigned IGNITE-16838:


Assignee: Amelchev Nikita

> Fix TcpCommunicationSpiFreezingClientTest#testFreezingClient
> 
>
> Key: IGNITE-16838
> URL: https://issues.apache.org/jira/browse/IGNITE-16838
> Project: Ignite
>  Issue Type: Bug
>Reporter: Nikolay Izhikov
>Assignee: Amelchev Nikita
>Priority: Minor
>  Labels: ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Test fails every run after 
> https://github.com/apache/ignite/commit/ea52fa47190f330c98c83347aa6e5547ac9b



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (IGNITE-16874) Sql. Bump calcite version up to 1.30

2022-04-19 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky updated IGNITE-16874:

Labels: calcite ignite-3  (was: ignite-3)

> Sql. Bump calcite version up to 1.30
> 
>
> Key: IGNITE-16874
> URL: https://issues.apache.org/jira/browse/IGNITE-16874
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite, ignite-3
> Fix For: 3.0.0-alpha5
>
>
> It's time to update the dependent Apache Calcite version.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (IGNITE-16874) Sql. Bump calcite version up to 1.30

2022-04-19 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky updated IGNITE-16874:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Sql. Bump calcite version up to 1.30
> 
>
> Key: IGNITE-16874
> URL: https://issues.apache.org/jira/browse/IGNITE-16874
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha5
>
>
> It's time to update the dependent Apache Calcite version.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (IGNITE-16874) Sql. Bump calcite version up to 1.30

2022-04-19 Thread Evgeny Stanilovsky (Jira)
Evgeny Stanilovsky created IGNITE-16874:
---

 Summary: Sql. Bump calcite version up to 1.30
 Key: IGNITE-16874
 URL: https://issues.apache.org/jira/browse/IGNITE-16874
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Evgeny Stanilovsky
Assignee: Evgeny Stanilovsky
 Fix For: 3.0.0-alpha5


It's time to update the dependent Apache Calcite version.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (IGNITE-16787) Sql calcite. Bump calcite version up to 1.30

2022-04-19 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky updated IGNITE-16787:

Labels: calcite  (was: calcite3-required)

> Sql calcite. Bump calcite version up to 1.30
> 
>
> Key: IGNITE-16787
> URL: https://issues.apache.org/jira/browse/IGNITE-16787
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Seems it would helpless to bump calcite dependency up to 
> [1.30|https://calcite.apache.org/docs/history.html#v1-30-0] ver.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (IGNITE-16787) Sql calcite. Bump calcite version up to 1.30

2022-04-19 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky updated IGNITE-16787:

Description: Seems it would be helpful to bump the calcite dependency up to 
[1.30|https://calcite.apache.org/docs/history.html#v1-30-0] ver.  (was: Seems 
it would helpless to bump calcite dependency up to 
[1.30|https://calcite.apache.org/docs/history.html#v1-30-0] ver.)

> Sql calcite. Bump calcite version up to 1.30
> 
>
> Key: IGNITE-16787
> URL: https://issues.apache.org/jira/browse/IGNITE-16787
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Seems it would be helpful to bump the calcite dependency up to 
> [1.30|https://calcite.apache.org/docs/history.html#v1-30-0] ver.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (IGNITE-16787) Sql calcite. Bump calcite version up to 1.30

2022-04-19 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky updated IGNITE-16787:

Labels: calcite3-required  (was: calcite2-required calcite3-required)

> Sql calcite. Bump calcite version up to 1.30
> 
>
> Key: IGNITE-16787
> URL: https://issues.apache.org/jira/browse/IGNITE-16787
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite3-required
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Seems it would helpless to bump calcite dependency up to 
> [1.30|https://calcite.apache.org/docs/history.html#v1-30-0] ver.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (IGNITE-16787) Sql calcite. Bump calcite version up to 1.30

2022-04-19 Thread Taras Ledkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17524223#comment-17524223
 ] 

Taras Ledkov commented on IGNITE-16787:
---

[~zstan], OK with me.

> Sql calcite. Bump calcite version up to 1.30
> 
>
> Key: IGNITE-16787
> URL: https://issues.apache.org/jira/browse/IGNITE-16787
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite2-required, calcite3-required
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Seems it would helpless to bump calcite dependency up to 
> [1.30|https://calcite.apache.org/docs/history.html#v1-30-0] ver.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (IGNITE-16694) The primary key of the primary table as a condition for a multi-table join query

2022-04-19 Thread Yury Gerzhedovich (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17524197#comment-17524197
 ] 

Yury Gerzhedovich commented on IGNITE-16694:


[~liwen.cui], your data is not colocated. Please check the documentation: 
[https://ignite.apache.org/docs/latest/SQL/distributed-joins].
You should either colocate the data or use the distributedJoins flag.
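For illustration, a sketch of the second option through the Java API (assuming the tables from the ticket and the default {{SQL_PUBLIC_STUDENT}} cache created for the STUDENT table):
{code:java}
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.FieldsQueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

static void runJoin(Ignite ignite) {
    // Enabling distributed joins lets the query span non-colocated partitions.
    SqlFieldsQuery qry = new SqlFieldsQuery(
        "SELECT COURSE.NAME, STUDENT.NAME, STUDENT.ID FROM STUDENT " +
        "LEFT JOIN STUDENT_COURSE ON STUDENT.ID = STUDENT_COURSE.STUDENT_ID " +
        "LEFT JOIN COURSE ON COURSE.ID = STUDENT_COURSE.COURSE_ID " +
        "WHERE STUDENT.ID IN (10001, 10002)")
        .setDistributedJoins(true);

    try (FieldsQueryCursor<List<?>> cur = ignite.cache("SQL_PUBLIC_STUDENT").query(qry)) {
        cur.forEach(System.out::println);
    }
}
{code}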

> The primary key of the primary table as a condition for a multi-table join 
> query
> 
>
> Key: IGNITE-16694
> URL: https://issues.apache.org/jira/browse/IGNITE-16694
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8.1, 2.12
>Reporter: Livia
>Priority: Major
>
> When I use Ignite as a SQL database, I encounter an issue in version 2.8.1 
> and in the latest version, 2.12.0.
> There is a multi-table join query: when I use the +primary table primary key+ 
> with an '=' or 'IN' condition, the result is unexpected. But if I 
> use a +primary table non-primary key+ with '=' or 'IN', the result is ok, and 
> if I use the +primary table primary key+ with '!=' or 'NOT IN' as the 
> condition, everything is normal. The issue did not happen in version 2.7.5.
>  
> I create three tables.
>  
> {code:java}
> CREATE TABLE STUDENT(
>     ID BIGINT PRIMARY KEY,
>     NAME VARCHAR,
>     EMAIL VARCHAR,
> ) WITH "ATOMICITY=TRANSACTIONAL_SNAPSHOT";
> INSERT INTO STUDENT (ID, NAME, EMAIL) VALUES(10001, 'Tom', 't...@123.com');
> INSERT INTO STUDENT (ID, NAME, EMAIL) VALUES(10002, 'Lily', 'l...@123.com');
> INSERT INTO STUDENT (ID, NAME, EMAIL) VALUES(10003, 'Sherry', 
> 'she...@123.com');
> INSERT INTO STUDENT (ID, NAME, EMAIL) VALUES(10004, 'Petter', 
> 'pet...@123.com');
> INSERT INTO STUDENT (ID, NAME, EMAIL) VALUES(10005, 'Livia', 
> 'li...@123.com'); 
> CREATE TABLE STUDENT_COURSE(
>     ID BIGINT PRIMARY KEY,
>     STUDENT_ID BIGINT NOT NULL,
>     COURSE_ID  BIGINT NOT NULL,
> ) WITH "ATOMICITY=TRANSACTIONAL_SNAPSHOT";
> INSERT INTO STUDENT_COURSE (ID, STUDENT_ID, COURSE_ID) VALUES(1, 10001, 1);
> INSERT INTO STUDENT_COURSE (ID, STUDENT_ID, COURSE_ID) VALUES(2, 10002, 2);
> INSERT INTO STUDENT_COURSE (ID, STUDENT_ID, COURSE_ID) VALUES(3, 10003, 3);
> INSERT INTO STUDENT_COURSE (ID, STUDENT_ID, COURSE_ID) VALUES(4, 10004, 2);
> INSERT INTO STUDENT_COURSE (ID, STUDENT_ID, COURSE_ID) VALUES(5, 10005, 3);
> CREATE TABLE COURSE(
>     ID BIGINT PRIMARY KEY,
>     NAME VARCHAR,
>     CREDIT_RATING INT,
> ) WITH "ATOMICITY=TRANSACTIONAL_SNAPSHOT";
> INSERT INTO COURSE (ID, NAME, CREDIT_RATING) VALUES(1, 'Criminal Evidence', 
> 20);
> INSERT INTO COURSE (ID, NAME, CREDIT_RATING) VALUES(2, 'Employment Law', 10);
> INSERT INTO COURSE (ID, NAME, CREDIT_RATING) VALUES(3, 'Jurisprudence', 
> 30);{code}
>  
> And when I run this SQL, I get different results.
> {code:java}
> SELECT COURSE.NAME AS COURSE_NAME, STUDENT.NAME AS STUDENT_NAME, STUDENT.ID 
> AS STUDENT_ID FROM STUDENT
> LEFT JOIN STUDENT_COURSE
> ON STUDENT.ID = STUDENT_COURSE.STUDENT_ID
> LEFT JOIN COURSE
> ON COURSE.ID = STUDENT_COURSE.COURSE_ID
> WHERE 1=1
> -- AND STUDENT.ID IN (10001,10002)   -- All values in column COURSE_NAME are 
> null
> -- AND STUDENT.ID = 10001 or STUDENT.ID = 10002  -- All values in column 
> COURSE_NAME are null
> -- AND STUDENT.ID != 10003 and STUDENT.ID != 10004 and STUDENT.ID != 10005  
> -- OK
> -- AND STUDENT.ID NOT IN (10003, 10004, 10005)  -- OK
> -- AND STUDENT.NAME IN ('Tom','Lily')   -- OK
> -- AND STUDENT.NAME = 'Tom' or STUDENT.NAME = 'Lily'  -- OK {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (IGNITE-16694) The primary key of the primary table as a condition for a multi-table join query

2022-04-19 Thread Yury Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Gerzhedovich resolved IGNITE-16694.

Resolution: Invalid

> The primary key of the primary table as a condition for a multi-table join 
> query
> 
>
> Key: IGNITE-16694
> URL: https://issues.apache.org/jira/browse/IGNITE-16694
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8.1, 2.12
>Reporter: Livia
>Priority: Major
>
> When I use Ignite as a SQL database, I encounter an issue in version 2.8.1 
> and in the latest version, 2.12.0.
> There is a multi-table join query: when I use the +primary table primary key+ 
> with an '=' or 'IN' condition, the result is unexpected. But if I 
> use a +primary table non-primary key+ with '=' or 'IN', the result is ok, and 
> if I use the +primary table primary key+ with '!=' or 'NOT IN' as the 
> condition, everything is normal. The issue did not happen in version 2.7.5.
>  
> I create three tables.
>  
> {code:java}
> CREATE TABLE STUDENT(
>     ID BIGINT PRIMARY KEY,
>     NAME VARCHAR,
>     EMAIL VARCHAR,
> ) WITH "ATOMICITY=TRANSACTIONAL_SNAPSHOT";
> INSERT INTO STUDENT (ID, NAME, EMAIL) VALUES(10001, 'Tom', 't...@123.com');
> INSERT INTO STUDENT (ID, NAME, EMAIL) VALUES(10002, 'Lily', 'l...@123.com');
> INSERT INTO STUDENT (ID, NAME, EMAIL) VALUES(10003, 'Sherry', 
> 'she...@123.com');
> INSERT INTO STUDENT (ID, NAME, EMAIL) VALUES(10004, 'Petter', 
> 'pet...@123.com');
> INSERT INTO STUDENT (ID, NAME, EMAIL) VALUES(10005, 'Livia', 
> 'li...@123.com'); 
> CREATE TABLE STUDENT_COURSE(
>     ID BIGINT PRIMARY KEY,
>     STUDENT_ID BIGINT NOT NULL,
>     COURSE_ID  BIGINT NOT NULL,
> ) WITH "ATOMICITY=TRANSACTIONAL_SNAPSHOT";
> INSERT INTO STUDENT_COURSE (ID, STUDENT_ID, COURSE_ID) VALUES(1, 10001, 1);
> INSERT INTO STUDENT_COURSE (ID, STUDENT_ID, COURSE_ID) VALUES(2, 10002, 2);
> INSERT INTO STUDENT_COURSE (ID, STUDENT_ID, COURSE_ID) VALUES(3, 10003, 3);
> INSERT INTO STUDENT_COURSE (ID, STUDENT_ID, COURSE_ID) VALUES(4, 10004, 2);
> INSERT INTO STUDENT_COURSE (ID, STUDENT_ID, COURSE_ID) VALUES(5, 10005, 3);
> CREATE TABLE COURSE(
>     ID BIGINT PRIMARY KEY,
>     NAME VARCHAR,
>     CREDIT_RATING INT,
> ) WITH "ATOMICITY=TRANSACTIONAL_SNAPSHOT";
> INSERT INTO COURSE (ID, NAME, CREDIT_RATING) VALUES(1, 'Criminal Evidence', 
> 20);
> INSERT INTO COURSE (ID, NAME, CREDIT_RATING) VALUES(2, 'Employment Law', 10);
> INSERT INTO COURSE (ID, NAME, CREDIT_RATING) VALUES(3, 'Jurisprudence', 
> 30);{code}
>  
> And when I run this SQL, I get different results.
> {code:java}
> SELECT COURSE.NAME AS COURSE_NAME, STUDENT.NAME AS STUDENT_NAME, STUDENT.ID 
> AS STUDENT_ID FROM STUDENT
> LEFT JOIN STUDENT_COURSE
> ON STUDENT.ID = STUDENT_COURSE.STUDENT_ID
> LEFT JOIN COURSE
> ON COURSE.ID = STUDENT_COURSE.COURSE_ID
> WHERE 1=1
> -- AND STUDENT.ID IN (10001,10002)   -- All values in column COURSE_NAME are 
> null
> -- AND STUDENT.ID = 10001 or STUDENT.ID = 10002  -- All values in column 
> COURSE_NAME are null
> -- AND STUDENT.ID != 10003 and STUDENT.ID != 10004 and STUDENT.ID != 10005  
> -- OK
> -- AND STUDENT.ID NOT IN (10003, 10004, 10005)  -- OK
> -- AND STUDENT.NAME IN ('Tom','Lily')   -- OK
> -- AND STUDENT.NAME = 'Tom' or STUDENT.NAME = 'Lily'  -- OK {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (IGNITE-15734) Erroneous string formatting while changing cluster tag.

2022-04-19 Thread Ivan Bessonov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-15734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17524157#comment-17524157
 ] 

Ivan Bessonov commented on IGNITE-15734:


[~zstan] done, thank you for the fix!

> Erroneous string formatting while changing cluster tag.
> ---
>
> Key: IGNITE-15734
> URL: https://issues.apache.org/jira/browse/IGNITE-15734
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.11
>Reporter: Evgeny Stanilovsky
>Assignee: Evgeny Stanilovsky
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {noformat}
> org.apache.ignite.internal.processors.cluster.ClusterProcessor#onReadyForRead
> ...
> log.info(
> "Cluster tag will be set to new value: " +
> newVal != null ? newVal.tag() : "null" +
> ", previous value was: " +
> oldVal != null ? oldVal.tag() : "null");
> {noformat}
> without parentheses, 
> {noformat}
> "Cluster tag will be set to new value: " + newVal
> {noformat}
> is never null, so the condition is always true.
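For reference, the parenthesized form the ticket implies (a sketch, not necessarily the committed patch):
{code:java}
log.info(
    "Cluster tag will be set to new value: " +
    (newVal != null ? newVal.tag() : "null") +
    ", previous value was: " +
    (oldVal != null ? oldVal.tag() : "null"));
{code}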



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (IGNITE-16838) Fix TcpCommunicationSpiFreezingClientTest#testFreezingClient

2022-04-19 Thread Luchnikov Alexander (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17524114#comment-17524114
 ] 

Luchnikov Alexander commented on IGNITE-16838:
--

[~nizhikov], [~timonin.maksim] The test uses a hack to generate STW pauses; 
this hack works on some JVM versions but not on all of them.
Maybe we should delete this test, or is it good practice to emulate an STW 
pause in a test?

> Fix TcpCommunicationSpiFreezingClientTest#testFreezingClient
> 
>
> Key: IGNITE-16838
> URL: https://issues.apache.org/jira/browse/IGNITE-16838
> Project: Ignite
>  Issue Type: Bug
>Reporter: Nikolay Izhikov
>Priority: Minor
>  Labels: ise
>
> Test fails every run after 
> https://github.com/apache/ignite/commit/ea52fa47190f330c98c83347aa6e5547ac9b



--
This message was sent by Atlassian Jira
(v8.20.1#820001)