[jira] [Created] (IGNITE-20698) Fix compatibility tests for JDK 11 and JDK 17

2023-10-19 Thread Ivan Daschinsky (Jira)
Ivan Daschinsky created IGNITE-20698:


 Summary: Fix compatibility tests for JDK 11 and JDK 17
 Key: IGNITE-20698
 URL: https://issues.apache.org/jira/browse/IGNITE-20698
 Project: Ignite
  Issue Type: Improvement
Reporter: Ivan Daschinsky
Assignee: Ivan Daschinsky






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20698) Fix compatibility tests for JDK 11 and JDK 17

2023-10-19 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-20698:
-
Labels: ise  (was: )

> Fix compatibility tests for JDK 11 and JDK 17
> 
>
> Key: IGNITE-20698
> URL: https://issues.apache.org/jira/browse/IGNITE-20698
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Daschinsky
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: ise
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20698) Fix compatibility tests for JDK 11 and JDK 17

2023-10-19 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-20698:
-
Fix Version/s: 2.16

> Fix compatibility tests for JDK 11 and JDK 17
> 
>
> Key: IGNITE-20698
> URL: https://issues.apache.org/jira/browse/IGNITE-20698
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Daschinsky
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20528) CDC doesn't work if the "Cache objects transformation" is applied

2023-10-19 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1371#comment-1371
 ] 

Ignite TC Bot commented on IGNITE-20528:


{panel:title=Branch: [pull/11001/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/11001/head] Base: [master] : New Tests 
(276)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}PDS 2{color} [[tests 
138|https://ci2.ignite.apache.org/viewLog.html?buildId=7568601]]
* {color:#013220}IgnitePdsTestSuite2: 
TransformedCdcSelfTest.testReadAllKeysWithoutCommit[consistentId=false, 
wal=FSYNC, persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsTestSuite2: 
TransformedCdcSelfTest.testReadAllKeysCommitAll[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsTestSuite2: 
TransformedCdcSelfTest.testReadFromNextEntry[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsTestSuite2: 
TransformedCdcSelfTest.testMultiNodeConsumption[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsTestSuite2: 
TransformedCdcSelfTest.testDisable[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsTestSuite2: 
TransformedCdcSelfTest.testReReadWhenStateWasNotStored[consistentId=false, 
wal=FSYNC, persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsTestSuite2: 
TransformedCdcSelfTest.testCdcDirectoryMaxSize[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsTestSuite2: 
TransformedCdcSelfTest.testReadExpireTime[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsTestSuite2: 
TransformedCdcSelfTest.testReadAllKeysCommitEachEvent[consistentId=false, 
wal=FSYNC, persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsTestSuite2: 
TransformedCdcSelfTest.testCdcSingleton[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsTestSuite2: 
TransformedCdcSelfTest.testCdcDirectoryMaxSize[consistentId=false, wal=FSYNC, 
persistence=false] - PASSED{color}
... and 127 new tests

{color:#8b}Disk Page Compressions 2{color} [[tests 
138|https://ci2.ignite.apache.org/viewLog.html?buildId=7568645]]
* {color:#013220}IgnitePdsCompressionTestSuite2: 
TransformedCdcSelfTest.testReadAllKeysWithoutCommit[consistentId=false, 
wal=FSYNC, persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsCompressionTestSuite2: 
TransformedCdcSelfTest.testReadAllKeysCommitAll[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsCompressionTestSuite2: 
TransformedCdcSelfTest.testReadFromNextEntry[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsCompressionTestSuite2: 
TransformedCdcSelfTest.testMultiNodeConsumption[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsCompressionTestSuite2: 
TransformedCdcSelfTest.testDisable[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsCompressionTestSuite2: 
TransformedCdcSelfTest.testReReadWhenStateWasNotStored[consistentId=false, 
wal=FSYNC, persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsCompressionTestSuite2: 
TransformedCdcSelfTest.testCdcDirectoryMaxSize[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsCompressionTestSuite2: 
TransformedCdcSelfTest.testReadExpireTime[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsCompressionTestSuite2: 
TransformedCdcSelfTest.testReadAllKeysCommitEachEvent[consistentId=false, 
wal=FSYNC, persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsCompressionTestSuite2: 
TransformedCdcSelfTest.testCdcSingleton[consistentId=false, wal=FSYNC, 
persistence=true] - PASSED{color}
* {color:#013220}IgnitePdsCompressionTestSuite2: 
TransformedCdcSelfTest.testCdcDirectoryMaxSize[consistentId=false, wal=FSYNC, 
persistence=false] - PASSED{color}
... and 127 new tests

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7568649buildTypeId=IgniteTests24Java8_RunAll]

> CDC doesn't work if the "Cache objects transformation" is applied
> -
>
> Key: IGNITE-20528
> URL: https://issues.apache.org/jira/browse/IGNITE-20528
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Korotkov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-97, ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CDC doesn't work if some cache objects transformation is applied (see the 
> 

[jira] [Updated] (IGNITE-20694) Make lock table size configurable

2023-10-19 Thread Alexey Scherbakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Scherbakov updated IGNITE-20694:
---
Labels: ignite-3  (was: )

> Make lock table size configurable
> 
>
> Key: IGNITE-20694
> URL: https://issues.apache.org/jira/browse/IGNITE-20694
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Scherbakov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0
>
>
> Currently, the lock table size implemented under IGNITE-17811 is hardcoded.
> It should be configurable instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20697) Move physical records from WAL to another storage

2023-10-19 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-20697:
---
Description: 
Currently, physical records take up most of the WAL size. However, physical records 
in WAL files are required only for crash recovery, and they are useful only for a 
short period of time (since the last checkpoint).
The size of physical records written between checkpoints is larger than the size of 
all modified pages, since we need to store a page snapshot record for each modified 
page plus page delta records if a page is modified more than once between 
checkpoints.
We process a WAL file several times in a stable workflow (without crashes and 
rebalances):
 # We write records to WAL files
 # We copy WAL files to the archive
 # We compact WAL files (remove physical records + compress)

So, in total we write all physical records twice and read them at least twice.

To reduce disk workload we can move physical records to another storage and not 
write them to WAL files. To provide the same crash recovery guarantees we can write 
modified pages twice during a checkpoint: first to a delta file and then to the page 
storage. In this case, if we crash during a write to the page storage, we can 
recover any page from the delta file (instead of from the WAL, as we do now).

This proposal has pros and cons.
Pros:
 - Smaller size of stored data (we don't store page delta files, only the final 
state of the page)
 - Reduced disk workload (we additionally write all modified pages once, instead of 
2 writes and 2 reads of a larger amount of data)
 - Potentially reduced latency (instead of writing physical records synchronously 
during data modification, we write only logical records to the WAL, and physical 
pages are written by checkpointer threads)

Cons:
 - Increased checkpoint duration (we have to write a doubled amount of data during 
the checkpoint)

Let's try to implement it and benchmark.
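
For illustration, here is a minimal sketch of the write-twice-on-checkpoint idea 
described above. All type and method names (DeltaFile, PageStore, writePage, and so 
on) are hypothetical placeholders, not Ignite's actual checkpointer API:

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Map;

// Hypothetical interfaces, for illustration only.
interface DeltaFile {
    void writePage(long pageId, ByteBuffer page) throws IOException;
    void sync() throws IOException;
    Map<Long, ByteBuffer> pages() throws IOException;
}

interface PageStore {
    void writePage(long pageId, ByteBuffer page) throws IOException;
}

class CheckpointSketch {
    /** Writes a dirty page twice: first to the per-checkpoint delta file, then to the page storage. */
    static void checkpointPage(long pageId, ByteBuffer page, DeltaFile delta, PageStore store)
            throws IOException {
        // After this write and sync, the page content is durable even if the next write fails.
        delta.writePage(pageId, page);
        delta.sync();

        // If we crash during this write, recovery restores the page from the delta file
        // instead of replaying physical WAL records.
        store.writePage(pageId, page);
    }

    /** On restart, pages found in the delta file are copied back into the page storage. */
    static void recover(DeltaFile delta, PageStore store) throws IOException {
        for (Map.Entry<Long, ByteBuffer> e : delta.pages().entrySet())
            store.writePage(e.getKey(), e.getValue());
    }
}
{code}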

  was:
Currently, physical records take up most of the WAL size. However, physical records 
in WAL files are required only for crash recovery, and they are useful only for a 
short period of time (since the last checkpoint).
The size of physical records written between checkpoints is larger than the size of 
all modified pages, since we need to store a page snapshot record for each modified 
page plus page delta records if a page is modified more than once between 
checkpoints.
We process a WAL file several times in a normal workflow (without crashes):
1) We write records to WAL files
2) We copy WAL files to the archive
3) We compact WAL files (remove physical records + compress)
So, in total we write all physical records twice and read them twice.
To reduce disk workload we can move physical records to another storage and not 
write them to WAL files.
To provide the same crash recovery guarantees we can write modified pages twice 
during a checkpoint: first to a delta file and then to the page storage. In this 
case, if we crash during a write to the page storage, we can recover any page from 
the delta file (instead of from the WAL, as we do now).
This proposal has pros and cons.
Pros:
- Smaller size of stored data (we don't store page delta files, only the final state 
of the page)
- Reduced disk workload (we additionally write all modified pages once, instead of 2 
writes and 2 reads of a larger amount of data)
- Potentially reduced latency (instead of writing physical records synchronously 
during data modification, we write only logical records to the WAL, and physical 
pages are written by checkpointer threads)
Cons:
- Increased checkpoint duration (we have to write a doubled amount of data during 
the checkpoint)
Let's try it and benchmark.


> Move physical records from WAL to another storage 
> --
>
> Key: IGNITE-20697
> URL: https://issues.apache.org/jira/browse/IGNITE-20697
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>
> Currently, physical records take up most of the WAL size. However, physical 
> records in WAL files are required only for crash recovery, and they are useful 
> only for a short period of time (since the last checkpoint).
> The size of physical records written between checkpoints is larger than the size 
> of all modified pages, since we need to store a page snapshot record for 
> each modified page plus page delta records if a page is modified more than once 
> between checkpoints.
> We process a WAL file several times in a stable workflow (without crashes and 
> rebalances):
>  # We write records to WAL files
>  # We copy WAL files to the archive
>  # We compact WAL files (remove physical records + compress)
> So, in total we write all physical records twice and read them at 
> least twice.
> To reduce disk 

[jira] [Created] (IGNITE-20697) Move physical records from WAL to another storage

2023-10-19 Thread Aleksey Plekhanov (Jira)
Aleksey Plekhanov created IGNITE-20697:
--

 Summary: Move physical records from WAL to another storage 
 Key: IGNITE-20697
 URL: https://issues.apache.org/jira/browse/IGNITE-20697
 Project: Ignite
  Issue Type: Improvement
Reporter: Aleksey Plekhanov
Assignee: Aleksey Plekhanov


Currently, physical records take up most of the WAL size. However, physical records 
in WAL files are required only for crash recovery, and they are useful only for a 
short period of time (since the last checkpoint).
The size of physical records written between checkpoints is larger than the size of 
all modified pages, since we need to store a page snapshot record for each modified 
page plus page delta records if a page is modified more than once between 
checkpoints.
We process a WAL file several times in a normal workflow (without crashes):
1) We write records to WAL files
2) We copy WAL files to the archive
3) We compact WAL files (remove physical records + compress)
So, in total we write all physical records twice and read them twice.
To reduce disk workload we can move physical records to another storage and not 
write them to WAL files.
To provide the same crash recovery guarantees we can write modified pages twice 
during a checkpoint: first to a delta file and then to the page storage. In this 
case, if we crash during a write to the page storage, we can recover any page from 
the delta file (instead of from the WAL, as we do now).
This proposal has pros and cons.
Pros:
- Smaller size of stored data (we don't store page delta files, only the final state 
of the page)
- Reduced disk workload (we additionally write all modified pages once, instead of 2 
writes and 2 reads of a larger amount of data)
- Potentially reduced latency (instead of writing physical records synchronously 
during data modification, we write only logical records to the WAL, and physical 
pages are written by checkpointer threads)
Cons:
- Increased checkpoint duration (we have to write a doubled amount of data during 
the checkpoint)
Let's try it and benchmark.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20696) Investigate tests slowdown on TC

2023-10-19 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-20696:
--

 Summary: Investigate tests slowdown on TC
 Key: IGNITE-20696
 URL: https://issues.apache.org/jira/browse/IGNITE-20696
 Project: Ignite
  Issue Type: Bug
Reporter: Roman Puchkovskiy
Assignee: Roman Puchkovskiy
 Fix For: 3.0.0-beta2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20695) Cleanup resource

2023-10-19 Thread Vadim Pakhnushev (Jira)
Vadim Pakhnushev created IGNITE-20695:
-

 Summary: Cleanup resource
 Key: IGNITE-20695
 URL: https://issues.apache.org/jira/browse/IGNITE-20695
 Project: Ignite
  Issue Type: Bug
  Components: security
Reporter: Vadim Pakhnushev
Assignee: Vadim Pakhnushev


In IGNITE-20522, clusterInitializer was not cleaned up in the 
ClusterManagementRestFactory, which leads to OOMs in tests.
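
For illustration only, a minimal sketch of the missing cleanup; the field and method 
names here are hypothetical, not the actual ClusterManagementRestFactory code:

{code:java}
// Hypothetical sketch: releasing a held reference so the factory does not retain
// the whole component graph between tests.
class ClusterManagementRestFactorySketch {
    private ClusterInitializer clusterInitializer;

    ClusterManagementRestFactorySketch(ClusterInitializer clusterInitializer) {
        this.clusterInitializer = clusterInitializer;
    }

    /** Releases held references when the REST factory is no longer needed. */
    void cleanResources() {
        // Dropping the reference lets the initializer (and everything it transitively holds)
        // be garbage-collected, avoiding OOMs when many test clusters are started and stopped.
        clusterInitializer = null;
    }
}

/** Placeholder for the real component. */
interface ClusterInitializer {
}
{code}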



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20592) Parameter '--node-ids' for command 'schedule_indexes_rebuild'.

2023-10-19 Thread Nikita Amelchev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Amelchev updated IGNITE-20592:
-
Fix Version/s: 2.16

> Parameter '--node-ids' for command 'schedule_indexes_rebuild'.
> --
>
> Key: IGNITE-20592
> URL: https://issues.apache.org/jira/browse/IGNITE-20592
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Minor
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since we added '--nodes-ids' in IGNITE-20418, 'schedule_indexes_rebuild' 
> should have it too.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20592) Parameter '--node-ids' for command 'schedule_indexes_rebuild'.

2023-10-19 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1253#comment-1253
 ] 

Ignite TC Bot commented on IGNITE-20592:


{panel:title=Branch: [pull/10982/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10982/head] Base: [master] : New Tests 
(6)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Control Utility 2{color} [[tests 
6|https://ci2.ignite.apache.org/viewLog.html?buildId=7568204]]
* {color:#013220}IgniteControlUtilityTestSuite2: 
GridCommandHandlerScheduleIndexRebuildTest.testRebuildOnAllNodes[cmdHnd=cli] - 
PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite2: 
GridCommandHandlerScheduleIndexRebuildTest.testErrorsAllNodes[cmdHnd=cli] - 
PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite2: 
GridCommandHandlerScheduleIndexRebuildTest.testRebuildOnSpecifiedNodes[cmdHnd=cli]
 - PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite2: 
GridCommandHandlerScheduleIndexRebuildTest.testRebuildOnAllNodes[cmdHnd=jmx] - 
PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite2: 
GridCommandHandlerScheduleIndexRebuildTest.testErrorsAllNodes[cmdHnd=jmx] - 
PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite2: 
GridCommandHandlerScheduleIndexRebuildTest.testRebuildOnSpecifiedNodes[cmdHnd=jmx]
 - PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7568210buildTypeId=IgniteTests24Java8_RunAll]

> Parameter '--node-ids' for command 'schedule_indexes_rebuild'.
> --
>
> Key: IGNITE-20592
> URL: https://issues.apache.org/jira/browse/IGNITE-20592
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Minor
>  Labels: ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since we added '--nodes-ids' in IGNITE-20418, 'schedule_indexes_rebuild' 
> should have it too.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20693) NPE in placement driver actor on deactivation

2023-10-19 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1247#comment-1247
 ] 

Vladislav Pyatkov commented on IGNITE-20693:


Merged 1883697a1d63b7b05a914c710bac21dcfce0ff04

> NPE in placement driver actor on deactivation
> -
>
> Key: IGNITE-20693
> URL: https://issues.apache.org/jira/browse/IGNITE-20693
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Blocker
>  Labels: ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The issue happens when the placement driver is still stuck in active behavior 
> during the deactivation process.
> {noformat}
> [2023-10-19T08:56:53,665][INFO ][Test worker][AssignmentsTracker] Assignment 
> cache initialized for placement driver 
> [groupAssignments={1_part_0=[Assignment [consistentId=test-node, 
> isPeer=true]]}]
> [2023-10-19T08:56:53,703][INFO ][Test worker][LeaseUpdaterTest] >>> Stopping 
> test: LeaseUpdaterTest#testActiveDeactivateMultiThread, displayName: 
> repetition 18 of 20, cost: 43ms.
> Exception in thread "%test-node%lease-updater-45" 
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.updateLeaseBatchInternal(LeaseUpdater.java:313)
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.run(LeaseUpdater.java:286)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> [2023-10-19T08:56:53,691][WARN ][%test-node%lease-updater-46][LeaseUpdater] 
> Lease updater is interrupted
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20694) Make lock table size configurable

2023-10-19 Thread Alexey Scherbakov (Jira)
Alexey Scherbakov created IGNITE-20694:
--

 Summary: Make lock table size configurable
 Key: IGNITE-20694
 URL: https://issues.apache.org/jira/browse/IGNITE-20694
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexey Scherbakov
 Fix For: 3.0


Currently, the lock table size implemented under IGNITE-17811 is hardcoded.

It should be configurable instead.
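
As a rough illustration only (the property name and default below are assumptions, 
not the actual Ignite 3 configuration schema), replacing the hardcoded size with a 
configurable one could look like this:

{code:java}
// Hypothetical sketch of making a hardcoded lock table size configurable.
class LockTableSketch {
    /** Previously a hardcoded constant; now only a default. */
    static final int DEFAULT_LOCK_TABLE_SIZE = 131_072;

    private final Object[] slots;

    LockTableSketch(int size) {
        this.slots = new Object[size];
    }

    /** Resolves the size from configuration, falling back to the default. */
    static LockTableSketch create() {
        int size = Integer.getInteger("lockTableSize", DEFAULT_LOCK_TABLE_SIZE);
        return new LockTableSketch(size);
    }
}
{code}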



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20679) org.apache.ignite.jdbc.thin.JdbcThinCacheToJdbcDataTypesCoverageTest#testSQLTimestampDataType doesn't work on JDK 11 and later

2023-10-19 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-20679:
-
Labels: ise  (was: )

> org.apache.ignite.jdbc.thin.JdbcThinCacheToJdbcDataTypesCoverageTest#testSQLTimestampDataType
>  doesn't work on JDK 11 and later
> --
>
> Key: IGNITE-20679
> URL: https://issues.apache.org/jira/browse/IGNITE-20679
> Project: Ignite
>  Issue Type: Test
>Affects Versions: 2.15
>Reporter: Ivan Daschinsky
>Assignee: Ivan Daschinsky
>Priority: Minor
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20002) Implement durable unlock on primary partition re-election

2023-10-19 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1193#comment-1193
 ] 

Vladislav Pyatkov commented on IGNITE-20002:


Merged 3e8cad9947c38ea05429599ad99d18c7bb59ac1f

> Implement durable unlock on primary partition re-election
> -
>
> Key: IGNITE-20002
> URL: https://issues.apache.org/jira/browse/IGNITE-20002
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3, transaction, transaction3_recovery
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> h3. Motivation
> It's required to release all acquired locks on transaction finish in a 
> durable way. Such durability consists of two parts:
>  * Durable unlock within the same primary.
>  * Durable unlock on primary change.
> This ticket is about the second part only. There's a counterpart ticket, 
> IGNITE-20004, for the first part.
> h3. Definition of Done
>  * All unreleased locks for transactions that were finished are released 
> in case of primary re-election, including old primary failure and cluster 
> restart.
> h3. Implementation Notes
>  * We may start with adding an onPrimaryElected callback.
>  * Within this callback, it's required to scan the local TxStateStorage 
> (`org.apache.ignite.internal.tx.storage.state.TxStateStorage#scan`) and call 
> `org.apache.ignite.internal.tx.TxManager#cleanup` for all transactions that 
> have false in TxMeta.locksReleased. TxManager#cleanup is an idempotent 
> operation, thus it's safe to run it multiple times, even from different 
> nodes, e.g. the old primary and the new primary.
>  * It's required to add a locksReleased field to TxMeta with default value 
> false.
>  * It's required to set locksReleased to true when all cleanup 
> txCleanupReplicaRequest calls return successfully. That extra 
> "updateTxnState(locksReleased == true)" should be asynchronous.
>  * Tests will be non-trivial here, because it'll be required to kill the old 
> primary after txnStateChanged but before sending the cleanup request.
>  
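
For illustration, a simplified sketch of the onPrimaryElected handling outlined in 
the implementation notes above. The interfaces below are hypothetical stand-ins for 
TxStateStorage and TxManager; the real signatures differ:

{code:java}
import java.util.List;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical, simplified stand-in for the local tx state storage of a partition.
interface TxStateStorageSketch {
    /** All transaction states stored locally for the partition. */
    Stream<TxMetaSketch> scan();
}

record TxMetaSketch(UUID txId, boolean finished, boolean locksReleased) {
}

// Hypothetical, simplified stand-in for the transaction manager.
interface TxManagerSketch {
    /** Idempotent cleanup; safe to call repeatedly, even from different nodes. */
    CompletableFuture<Void> cleanup(UUID txId);
}

class OnPrimaryElectedSketch {
    /** Called when the local node becomes primary for a partition. */
    static CompletableFuture<Void> onPrimaryElected(TxStateStorageSketch storage, TxManagerSketch txManager) {
        // Only transactions that already finished but still hold locks need cleanup.
        List<CompletableFuture<Void>> cleanups = storage.scan()
            .filter(meta -> meta.finished() && !meta.locksReleased())
            .map(meta -> txManager.cleanup(meta.txId()))
            .collect(Collectors.toList());

        // Once all cleanups succeed, locksReleased can be flipped to true asynchronously.
        return CompletableFuture.allOf(cleanups.toArray(new CompletableFuture[0]));
    }
}
{code}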



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20454) Sql. Extend SQL cursor with ability to check if first page is ready

2023-10-19 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-20454:
--
Fix Version/s: 3.0.0-beta2

> Sql. Extend SQL cursor with ability to check if first page is ready
> ---
>
> Key: IGNITE-20454
> URL: https://issues.apache.org/jira/browse/IGNITE-20454
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> For multi-statement queries, in order to advance to the next statement we 
> have to make sure that the first page of the result for the current statement 
> is ready to be served. This allows the script to finish without depending on 
> a user, even if no one consumes the results.
> Definition of done: there is an API available from within 
> {{SqlQueryProcessor}} that allows being notified about completion of the 
> prefetch ({{AsyncRootNode#prefetch}}).
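
A hypothetical shape of such an API, for illustration only; the interface and method 
names below are assumptions, not the final SqlQueryProcessor contract:

{code:java}
import java.util.concurrent.CompletableFuture;

// Hypothetical cursor extension exposing first-page readiness.
interface PrefetchAwareCursor {
    /** Completes when the first page of results has been prefetched (AsyncRootNode#prefetch). */
    CompletableFuture<Void> onFirstPageReady();
}

class ScriptRunnerSketch {
    /** Advances to the next statement as soon as the current statement's first page is ready. */
    CompletableFuture<Void> runNext(PrefetchAwareCursor cursor) {
        return cursor.onFirstPageReady()
            .thenCompose(ignored -> executeNextStatement());
    }

    private CompletableFuture<Void> executeNextStatement() {
        // Placeholder: submit the next statement of the script for execution.
        return CompletableFuture.completedFuture(null);
    }
}
{code}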



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20693) NPE in placement driver actor on deactivation

2023-10-19 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20693:
-
Reviewer: Denis Chudov

> NPE in placement driver actor on deactivation
> -
>
> Key: IGNITE-20693
> URL: https://issues.apache.org/jira/browse/IGNITE-20693
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Blocker
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The issue happens when the placement driver is still stuck in active behavior 
> during the deactivation process.
> {noformat}
> [2023-10-19T08:56:53,665][INFO ][Test worker][AssignmentsTracker] Assignment 
> cache initialized for placement driver 
> [groupAssignments={1_part_0=[Assignment [consistentId=test-node, 
> isPeer=true]]}]
> [2023-10-19T08:56:53,703][INFO ][Test worker][LeaseUpdaterTest] >>> Stopping 
> test: LeaseUpdaterTest#testActiveDeactivateMultiThread, displayName: 
> repetition 18 of 20, cost: 43ms.
> Exception in thread "%test-node%lease-updater-45" 
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.updateLeaseBatchInternal(LeaseUpdater.java:313)
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.run(LeaseUpdater.java:286)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> [2023-10-19T08:56:53,691][WARN ][%test-node%lease-updater-46][LeaseUpdater] 
> Lease updater is interrupted
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20693) NPE in placement driver actor on deactivation

2023-10-19 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20693:
-
Priority: Blocker  (was: Major)

> NPE in placement driver actor on deactivation
> -
>
> Key: IGNITE-20693
> URL: https://issues.apache.org/jira/browse/IGNITE-20693
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Blocker
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The issue happens when the placement driver is still stuck in active behavior 
> during the deactivation process.
> {noformat}
> [2023-10-19T08:56:53,665][INFO ][Test worker][AssignmentsTracker] Assignment 
> cache initialized for placement driver 
> [groupAssignments={1_part_0=[Assignment [consistentId=test-node, 
> isPeer=true]]}]
> [2023-10-19T08:56:53,703][INFO ][Test worker][LeaseUpdaterTest] >>> Stopping 
> test: LeaseUpdaterTest#testActiveDeactivateMultiThread, displayName: 
> repetition 18 of 20, cost: 43ms.
> Exception in thread "%test-node%lease-updater-45" 
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.updateLeaseBatchInternal(LeaseUpdater.java:313)
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.run(LeaseUpdater.java:286)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> [2023-10-19T08:56:53,691][WARN ][%test-node%lease-updater-46][LeaseUpdater] 
> Lease updater is interrupted
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20693) NPE in placement driver actor on deactivation

2023-10-19 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-20693:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> NPE in placement driver actor on deactivation
> -
>
> Key: IGNITE-20693
> URL: https://issues.apache.org/jira/browse/IGNITE-20693
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Blocker
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The issue happens when the placement driver is still stuck in active behavior 
> during the deactivation process.
> {noformat}
> [2023-10-19T08:56:53,665][INFO ][Test worker][AssignmentsTracker] Assignment 
> cache initialized for placement driver 
> [groupAssignments={1_part_0=[Assignment [consistentId=test-node, 
> isPeer=true]]}]
> [2023-10-19T08:56:53,703][INFO ][Test worker][LeaseUpdaterTest] >>> Stopping 
> test: LeaseUpdaterTest#testActiveDeactivateMultiThread, displayName: 
> repetition 18 of 20, cost: 43ms.
> Exception in thread "%test-node%lease-updater-45" 
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.updateLeaseBatchInternal(LeaseUpdater.java:313)
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.run(LeaseUpdater.java:286)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> [2023-10-19T08:56:53,691][WARN ][%test-node%lease-updater-46][LeaseUpdater] 
> Lease updater is interrupted
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20688) Java Thin Client - Error while deserializing Collection

2023-10-19 Thread Rahul Mohan (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rahul Mohan updated IGNITE-20688:
-
Environment: (was: !image001.png!)

> Java Thin Client - Error while deserializing Collection
> ---
>
> Key: IGNITE-20688
> URL: https://issues.apache.org/jira/browse/IGNITE-20688
> Project: Ignite
>  Issue Type: Bug
>  Components: binary, thin client
>Affects Versions: 2.9, 2.10, 2.12, 2.13, 2.14, 2.15
>Reporter: Rahul Mohan
>Assignee: Mikhail Petrov
>Priority: Major
> Attachments: image001.png
>
>
> I have encountered an issue in deserializing cache values which are of 
> Collection type.
> The issue occurs if a field in different objects within the collection 
> points to the same reference.
> *Versions:*
> org.apache.ignite:ignite-core:2.9.0 to org.apache.ignite:ignite-core:2.15.0
>  
> {code:java}
> // Person.java
> public class Person implements Serializable {
>     private String id;
>     private String firstName;
>     private String lastName;
>     private double salary;
>     private String country;
>     private String deleted;
>     private Set<String> accounts;
> }
>
> // Client
> ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration()
>     .setName(cacheName)
>     .setCacheMode(CacheMode.REPLICATED)
>     .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>
> cache = client.getOrCreateCache(cacheCfg);
>
> Set<String> set = new HashSet<>();
> set.add("1");
>
> List<Person> persons = new ArrayList<>();
> persons.add(new Person("105286a4", "Jack", "Smith", 1f, "USA", "false", set));
> persons.add(new Person("98545b0fd3af", "John", "Doe", 50f, "Australia", "false", null));
> persons.add(new Person("98545b0fd3afd", "Hari", "M", 40f, "India", null, null));
> persons.add(new Person("985488b0fd3ae", "Bugs", "Bunny", 30f, "Wabbit Land ", null, set));
>
> cache.put("group1", persons); // Write collection to cache
>
> List<Person> fromCache = (List<Person>) cache.get("group1"); // Get from cache, Exception here
> {code}
> 
> *Exception:*
> {code:java}
> class org.apache.ignite.binary.BinaryObjectException: Failed to deserialize 
> object [typeName=com.ignite.example.model.Person]
>     at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:927)
>     at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
>     at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
>     at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:316)
>     at 
> org.apache.ignite.internal.client.thin.ClientBinaryMarshaller.deserialize(ClientBinaryMarshaller.java:74)
>     at 
> org.apache.ignite.internal.client.thin.ClientUtils.unwrapBinary(ClientUtils.java:557)
>     at 
> org.apache.ignite.internal.client.thin.ClientUtils.unwrapCollection(ClientUtils.java:578)
>     at 
> org.apache.ignite.internal.client.thin.ClientUtils.unwrapBinary(ClientUtils.java:562)
>     at 
> org.apache.ignite.internal.client.thin.ClientUtils.readObject(ClientUtils.java:546)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientCache.readObject(TcpClientCache.java:556)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientCache.readObject(TcpClientCache.java:561)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientCache$$Lambda$395/1950117092.apply(Unknown
>  Source)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:284)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientChannel.service(TcpClientChannel.java:219)
>     at 
> org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:198)
>     at 
> org.apache.ignite.internal.client.thin.ReliableChannel.affinityService(ReliableChannel.java:261)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientCache.cacheSingleKeyOperation(TcpClientCache.java:508)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientCache.get(TcpClientCache.java:111)
>     at 
> com.ignite.example.service.ApacheIgniteService.printAllKeys(ApacheIgniteService.java:117)
>     at 
> com.ignite.example.service.ApacheIgniteService.init(ApacheIgniteService.java:103)
>     at 
> com.ignite.example.IgniteCacheExampleApplication.run(IgniteCacheExampleApplication.java:22)

[jira] [Updated] (IGNITE-20688) Java Thin Client - Error while deserializing Collection

2023-10-19 Thread Rahul Mohan (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rahul Mohan updated IGNITE-20688:
-
 Attachment: image001.png
Description: 
I have encountered an issue in deserializing cache values which are of 
Collection type.

The issue occurs if a field in different objects within the collection points 
to the same reference.

*Versions:*

org.apache.ignite:ignite-core:2.9.0 to org.apache.ignite:ignite-core:2.15.0

 
{code:java}
// Person.java
public class Person implements Serializable {
    private String id;
    private String firstName;
    private String lastName;
    private double salary;
    private String country;
    private String deleted;
    private Set<String> accounts;
}

// Client
ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration()
    .setName(cacheName)
    .setCacheMode(CacheMode.REPLICATED)
    .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

cache = client.getOrCreateCache(cacheCfg);

Set<String> set = new HashSet<>();
set.add("1");

List<Person> persons = new ArrayList<>();
persons.add(new Person("105286a4", "Jack", "Smith", 1f, "USA", "false", set));
persons.add(new Person("98545b0fd3af", "John", "Doe", 50f, "Australia", "false", null));
persons.add(new Person("98545b0fd3afd", "Hari", "M", 40f, "India", null, null));
persons.add(new Person("985488b0fd3ae", "Bugs", "Bunny", 30f, "Wabbit Land ", null, set));

cache.put("group1", persons); // Write collection to cache

List<Person> fromCache = (List<Person>) cache.get("group1"); // Get from cache, Exception here
{code}


*Exception:*
{code:java}
class org.apache.ignite.binary.BinaryObjectException: Failed to deserialize 
object [typeName=com.ignite.example.model.Person]
    at 
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:927)
    at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
    at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
    at 
org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:316)
    at 
org.apache.ignite.internal.client.thin.ClientBinaryMarshaller.deserialize(ClientBinaryMarshaller.java:74)
    at 
org.apache.ignite.internal.client.thin.ClientUtils.unwrapBinary(ClientUtils.java:557)
    at 
org.apache.ignite.internal.client.thin.ClientUtils.unwrapCollection(ClientUtils.java:578)
    at 
org.apache.ignite.internal.client.thin.ClientUtils.unwrapBinary(ClientUtils.java:562)
    at 
org.apache.ignite.internal.client.thin.ClientUtils.readObject(ClientUtils.java:546)
    at 
org.apache.ignite.internal.client.thin.TcpClientCache.readObject(TcpClientCache.java:556)
    at 
org.apache.ignite.internal.client.thin.TcpClientCache.readObject(TcpClientCache.java:561)
    at 
org.apache.ignite.internal.client.thin.TcpClientCache$$Lambda$395/1950117092.apply(Unknown
 Source)
    at 
org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:284)
    at 
org.apache.ignite.internal.client.thin.TcpClientChannel.service(TcpClientChannel.java:219)
    at 
org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:198)
    at 
org.apache.ignite.internal.client.thin.ReliableChannel.affinityService(ReliableChannel.java:261)
    at 
org.apache.ignite.internal.client.thin.TcpClientCache.cacheSingleKeyOperation(TcpClientCache.java:508)
    at 
org.apache.ignite.internal.client.thin.TcpClientCache.get(TcpClientCache.java:111)
    at 
com.ignite.example.service.ApacheIgniteService.printAllKeys(ApacheIgniteService.java:117)
    at 
com.ignite.example.service.ApacheIgniteService.init(ApacheIgniteService.java:103)
    at 
com.ignite.example.IgniteCacheExampleApplication.run(IgniteCacheExampleApplication.java:22)
    at 
org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:768)
    at 
org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:752)
    at 
org.springframework.boot.SpringApplication.run(SpringApplication.java:314)
    at 
com.ignite.example.IgniteCacheExampleApplication.main(IgniteCacheExampleApplication.java:17)
Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to read 
field [name=accounts]
    at 
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:192)
    at 

[jira] [Updated] (IGNITE-20693) NPE in placement driver actor on deactivation

2023-10-19 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-20693:
---
Labels: ignite-3  (was: )

> NPE in placement driver actor on deactivation
> -
>
> Key: IGNITE-20693
> URL: https://issues.apache.org/jira/browse/IGNITE-20693
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The issue happens when the placement driver is still stuck in active behavior 
> during the deactivation process.
> {noformat}
> [2023-10-19T08:56:53,665][INFO ][Test worker][AssignmentsTracker] Assignment 
> cache initialized for placement driver 
> [groupAssignments={1_part_0=[Assignment [consistentId=test-node, 
> isPeer=true]]}]
> [2023-10-19T08:56:53,703][INFO ][Test worker][LeaseUpdaterTest] >>> 
> Stopping test: LeaseUpdaterTest#testActiveDeactivateMultiThread, displayName: 
> repetition 18 of 20, cost: 43ms.
> Exception in thread "%test-node%lease-updater-45" 
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.updateLeaseBatchInternal(LeaseUpdater.java:313)
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.run(LeaseUpdater.java:286)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> [2023-10-19T08:56:53,691][WARN 
> ][%test-node%lease-updater-46][LeaseUpdater] Lease updater is interrupted
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20693) NPE in placement driver actor on deactivation

2023-10-19 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-20693:
---
Description: 
The issue happens when the placement driver is still stuck in active behavior 
during the deactivation process.

{noformat}
[2023-10-19T08:56:53,665][INFO ][Test worker][AssignmentsTracker] Assignment 
cache initialized for placement driver [groupAssignments={1_part_0=[Assignment 
[consistentId=test-node, isPeer=true]]}]
[2023-10-19T08:56:53,703][INFO ][Test worker][LeaseUpdaterTest] >>> Stopping 
test: LeaseUpdaterTest#testActiveDeactivateMultiThread, displayName: repetition 
18 of 20, cost: 43ms.
Exception in thread "%test-node%lease-updater-45" java.lang.NullPointerException
  at 
org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.updateLeaseBatchInternal(LeaseUpdater.java:313)
  at 
org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.run(LeaseUpdater.java:286)
  at java.base/java.lang.Thread.run(Thread.java:834)
[2023-10-19T08:56:53,691][WARN ][%test-node%lease-updater-46][LeaseUpdater] 
Lease updater is interrupted
{noformat}
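
For illustration, a generic sketch of the kind of shutdown race described above; all 
names here are hypothetical, and this is neither the actual LeaseUpdater code nor 
the eventual fix. The updater thread may still run an iteration while deactivation 
clears shared state, so the iteration has to tolerate that:

{code:java}
// Hypothetical illustration of the race between a periodic updater thread and deactivation.
class LeaseUpdaterRaceSketch {
    private volatile LeaseState state; // shared state cleared by deactivate()

    /** Deactivation may run while the updater thread is still between iterations. */
    void deactivate() {
        state = null;
    }

    /** One iteration of the lease-updater thread. */
    void updateLeaseBatch() {
        LeaseState local = state;

        if (local == null)
            return; // Deactivated concurrently; skip the iteration instead of throwing an NPE.

        local.renewLeases();
    }
}

/** Placeholder for the state the updater works with. */
interface LeaseState {
    void renewLeases();
}
{code}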

  was:
The issue happens when the placement driver is still stuck in active behavior 
during the deactivation process.

{noformat}
[2023-10-19T08:56:53,665][INFO ][Test worker][AssignmentsTracker] Assignment 
cache initialized for placement driver [groupAssignments={1_part_0=[Assignment 
[consistentId=test-node, isPeer=true]]}]
[2023-10-19T08:56:53,703][INFO ][Test worker][LeaseUpdaterTest] >>> 
Stopping test: LeaseUpdaterTest#testActiveDeactivateMultiThread, displayName: 
repetition 18 of 20, cost: 43ms.
Exception in thread "%test-node%lease-updater-45" 
java.lang.NullPointerException
  at 
org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.updateLeaseBatchInternal(LeaseUpdater.java:313)
  at 
org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.run(LeaseUpdater.java:286)
  at java.base/java.lang.Thread.run(Thread.java:834)
[2023-10-19T08:56:53,691][WARN ][%test-node%lease-updater-46][LeaseUpdater] 
Lease updater is interrupted
{noformat}


> NPE in placement driver actor on deactivation
> -
>
> Key: IGNITE-20693
> URL: https://issues.apache.org/jira/browse/IGNITE-20693
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The issue happens when the placement driver is still stuck in active behavior 
> during the deactivation process.
> {noformat}
> [2023-10-19T08:56:53,665][INFO ][Test worker][AssignmentsTracker] Assignment 
> cache initialized for placement driver 
> [groupAssignments={1_part_0=[Assignment [consistentId=test-node, 
> isPeer=true]]}]
> [2023-10-19T08:56:53,703][INFO ][Test worker][LeaseUpdaterTest] >>> Stopping 
> test: LeaseUpdaterTest#testActiveDeactivateMultiThread, displayName: 
> repetition 18 of 20, cost: 43ms.
> Exception in thread "%test-node%lease-updater-45" 
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.updateLeaseBatchInternal(LeaseUpdater.java:313)
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.run(LeaseUpdater.java:286)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> [2023-10-19T08:56:53,691][WARN ][%test-node%lease-updater-46][LeaseUpdater] 
> Lease updater is interrupted
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20693) NPE in placement driver actor on deactivation

2023-10-19 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-20693:
---
Summary: NPE in placement driver actor on deactivation  (was: NPE in 
placement driver actore on deactivation)

> NPE in placement driver actor on deactivation
> -
>
> Key: IGNITE-20693
> URL: https://issues.apache.org/jira/browse/IGNITE-20693
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>
> The issue happens when the placement driver is still stuck in active behavior 
> during the deactivation process.
> {noformat}
> [2023-10-19T08:56:53,665][INFO ][Test worker][AssignmentsTracker] Assignment 
> cache initialized for placement driver 
> [groupAssignments={1_part_0=[Assignment [consistentId=test-node, 
> isPeer=true]]}]
> [2023-10-19T08:56:53,703][INFO ][Test worker][LeaseUpdaterTest] >>> 
> Stopping test: LeaseUpdaterTest#testActiveDeactivateMultiThread, displayName: 
> repetition 18 of 20, cost: 43ms.
> Exception in thread "%test-node%lease-updater-45" 
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.updateLeaseBatchInternal(LeaseUpdater.java:313)
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.run(LeaseUpdater.java:286)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> [2023-10-19T08:56:53,691][WARN 
> ][%test-node%lease-updater-46][LeaseUpdater] Lease updater is interrupted
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20693) NPE in placement driver actore on deactivation

2023-10-19 Thread Vladislav Pyatkov (Jira)
Vladislav Pyatkov created IGNITE-20693:
--

 Summary: NPE in placement driver actore on deactivation
 Key: IGNITE-20693
 URL: https://issues.apache.org/jira/browse/IGNITE-20693
 Project: Ignite
  Issue Type: Bug
Reporter: Vladislav Pyatkov


The issue happens when the placement driver is still stuck in active behavior 
during the deactivation process.

{noformat}
[2023-10-19T08:56:53,665][INFO ][Test worker][AssignmentsTracker] Assignment 
cache initialized for placement driver [groupAssignments={1_part_0=[Assignment 
[consistentId=test-node, isPeer=true]]}]
[2023-10-19T08:56:53,703][INFO ][Test worker][LeaseUpdaterTest] >>> 
Stopping test: LeaseUpdaterTest#testActiveDeactivateMultiThread, displayName: 
repetition 18 of 20, cost: 43ms.
Exception in thread "%test-node%lease-updater-45" 
java.lang.NullPointerException
  at 
org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.updateLeaseBatchInternal(LeaseUpdater.java:313)
  at 
org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.run(LeaseUpdater.java:286)
  at java.base/java.lang.Thread.run(Thread.java:834)
[2023-10-19T08:56:53,691][WARN ][%test-node%lease-updater-46][LeaseUpdater] 
Lease updater is interrupted
{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20693) NPE in placement driver actore on deactivation

2023-10-19 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov reassigned IGNITE-20693:
--

Assignee: Vladislav Pyatkov

> NPE in placement driver actore on deactivation
> --
>
> Key: IGNITE-20693
> URL: https://issues.apache.org/jira/browse/IGNITE-20693
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>
> The issue happens when the placement driver is still stuck in active behavior 
> during the deactivation process.
> {noformat}
> [2023-10-19T08:56:53,665][INFO ][Test worker][AssignmentsTracker] Assignment 
> cache initialized for placement driver 
> [groupAssignments={1_part_0=[Assignment [consistentId=test-node, 
> isPeer=true]]}]
> [2023-10-19T08:56:53,703][INFO ][Test worker][LeaseUpdaterTest] >>> 
> Stopping test: LeaseUpdaterTest#testActiveDeactivateMultiThread, displayName: 
> repetition 18 of 20, cost: 43ms.
> Exception in thread "%test-node%lease-updater-45" 
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.updateLeaseBatchInternal(LeaseUpdater.java:313)
>   at 
> org.apache.ignite.internal.placementdriver.LeaseUpdater$Updater.run(LeaseUpdater.java:286)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> [2023-10-19T08:56:53,691][WARN 
> ][%test-node%lease-updater-46][LeaseUpdater] Lease updater is interrupted
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20692) Introduce Partition lifecycle events

2023-10-19 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-20692:
-
Description: 
I propose to introduce a mechanism for producing and consuming events related 
to the lifecycle of Partitions (a.k.a. Replicas). This mechanism can be useful 
for other components, such as the Index Manager, to track which indices should 
be created or dropped because the corresponding partition has been moved.

I imagine this mechanism as follows:

{code:java}
interface ReplicaLifecycleListener {
/**
 * Called after a replica has been started on the local node.
 */
CompletableFuture afterReplicaStarted(ReplicationGroupId 
replicaGrpId);

/**
 * Called before a replica has been stopped on the local node.
 */
CompletableFuture beforeReplicaStopped(ReplicationGroupId 
replicaGrpId);
}
{code}

This listener should be notified of the events by the Replica Manager (I 
believe the correct places would be {{ReplicaManager#startReplica}} and 
{{ReplicaManager#stopReplica}}). Replica Manager should also provide API to 
register/deregister such listeners.

Also note that notification methods return CompletableFutures. These futures 
should block the corresponding operation (adding the new Replica to the 
ReplicaManager#replicas map or stopping a Replica). This will allow obtaining a 
happens-before relationship between the events and their listeners.
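
For illustration, a possible usage sketch from a consumer such as the Index Manager. 
The registration method name and the CompletableFuture<Void> type argument are 
assumptions made here; the ticket only states that the Replica Manager should expose 
register/deregister methods:

{code:java}
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch mirroring the proposed listener; ReplicationGroupId is the
// type used in the proposal above.
interface ReplicaLifecycleListenerSketch {
    CompletableFuture<Void> afterReplicaStarted(ReplicationGroupId replicaGrpId);

    CompletableFuture<Void> beforeReplicaStopped(ReplicationGroupId replicaGrpId);
}

// Hypothetical registration API on the Replica Manager.
interface ReplicaManagerSketch {
    void registerReplicaLifecycleListener(ReplicaLifecycleListenerSketch listener);
}

class IndexManagerSketch {
    void start(ReplicaManagerSketch replicaManager) {
        replicaManager.registerReplicaLifecycleListener(new ReplicaLifecycleListenerSketch() {
            @Override public CompletableFuture<Void> afterReplicaStarted(ReplicationGroupId grpId) {
                // Create indexes for the newly started partition; the replica becomes
                // visible only after this future completes (happens-before guarantee).
                return CompletableFuture.completedFuture(null);
            }

            @Override public CompletableFuture<Void> beforeReplicaStopped(ReplicationGroupId grpId) {
                // Drop local index data before the partition is stopped on this node.
                return CompletableFuture.completedFuture(null);
            }
        });
    }
}
{code}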


  was:
I propose to introduce a mechanism for producing and consuming events related 
to the lifecycle of Partitions (a.k.a. Replicas). This mechanism can be useful 
for other components, such as the Index Manager, to track which indices should 
be created or dropped because the corresponding partition has been moved.

I imagine this mechanism as follows:

{code:java}
interface ReplicaLifecycleListener {
/**
 * Called after a replica has been started on the local node.
 */
CompletableFuture afterReplicaStarted(ReplicationGroupId 
replicaGrpId);

/**
 * Called before a replica has been stopped on the local node.
 */
CompletableFuture beforeReplicaStopped(ReplicationGroupId 
replicaGrpId);
}
{code}

This listener should be notified of the events by the Replica Manager. Replica 
Manager should also provide API to register/deregister such listeners.

Also note that notification methods return CompletableFutures. These futures 
should block the corresponding operation (adding the new Replica to the 
ReplicaManager#replicas map or stopping a Replica). This will allow obtaining a 
happens-before relationship between the events and their listeners.



> Introduce Partition lifecycle events
> 
>
> Key: IGNITE-20692
> URL: https://issues.apache.org/jira/browse/IGNITE-20692
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>
> I propose to introduce a mechanism for producing and consuming events related 
> to the lifecycle of Partitions (a.k.a. Replicas). This mechanism can be useful 
> for other components, such as the Index Manager, to track which indices should 
> be created or dropped because the corresponding partition has been moved.
> I imagine this mechanism as follows:
> {code:java}
> interface ReplicaLifecycleListener {
> /**
>  * Called after a replica has been started on the local node.
>  */
> CompletableFuture afterReplicaStarted(ReplicationGroupId 
> replicaGrpId);
> /**
>  * Called before a replica has been stopped on the local node.
>  */
> CompletableFuture beforeReplicaStopped(ReplicationGroupId 
> replicaGrpId);
> }
> {code}
> This listener should be notified of the events by the Replica Manager (I 
> believe the correct places would be {{ReplicaManager#startReplica}} and 
> {{ReplicaManager#stopReplica}}). Replica Manager should also provide API to 
> register/deregister such listeners.
> Also note that notification methods return CompletableFutures. These futures 
> should block the corresponding operation (adding the new Replica to the 
> ReplicaManager#replicas map or stopping a Replica). This will allow obtaining 
> a happens-before relationship between the events and their listeners.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20692) Introduce Partition lifecycle events

2023-10-19 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-20692:
-
Description: 
I propose to introduce a mechanism for producing and consuming events related 
to the lifecycle of Partitions (a.k.a. Replicas). This mechanism can be useful 
for other components, such as the Index Manager, to track which indices should 
be created or dropped because the corresponding partition has been moved.

I imagine this mechanism as follows:

{code:java}
interface ReplicaLifecycleListener {
/**
 * Called after a replica has been started on the local node.
 */
CompletableFuture afterReplicaStarted(ReplicationGroupId 
replicaGrpId);

/**
 * Called before a replica has been stopped on the local node.
 */
CompletableFuture beforeReplicaStopped(ReplicationGroupId 
replicaGrpId);
}
{code}

This listener should be notified of the events by the Replica Manager. Replica 
Manager should also provide API to register/deregister such listeners.

Also note that notification methods return CompletableFutures. These futures 
should block the corresponding operation (adding the new Replica to the 
ReplicaManager#replicas map or stopping a Replica). This will allow obtaining a 
happens-before relationship between the events and their listeners.


> Introduce Partition lifecycle events
> 
>
> Key: IGNITE-20692
> URL: https://issues.apache.org/jira/browse/IGNITE-20692
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>
> I propose to introduce a mechanism for producing and consuming events related 
> to the lifecycle of Partitions (a.k.a. Replicas). This mechanism can be useful 
> for other components, such as the Index Manager, to track which indices should 
> be created or dropped because the corresponding partition has been moved.
> I imagine this mechanism as follows:
> {code:java}
> interface ReplicaLifecycleListener {
> /**
>  * Called after a replica has been started on the local node.
>  */
> CompletableFuture afterReplicaStarted(ReplicationGroupId 
> replicaGrpId);
> /**
>  * Called before a replica has been stopped on the local node.
>  */
> CompletableFuture beforeReplicaStopped(ReplicationGroupId 
> replicaGrpId);
> }
> {code}
> This listener should be notified of the events by the Replica Manager. 
> Replica Manager should also provide API to register/deregister such listeners.
> Also note that notification methods return CompletableFutures. These futures 
> should block the corresponding operation (adding the new Replica to the 
> ReplicaManager#replicas map or stopping a Replica). This will allow obtaining 
> a happens-before relationship between the events and their listeners.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19988) Add index creation (population) status to index view

2023-10-19 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov reassigned IGNITE-19988:
---

Assignee: (was: Mikhail Petrov)

> Add index creation (population) status to index view
> 
>
> Key: IGNITE-19988
> URL: https://issues.apache.org/jira/browse/IGNITE-19988
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.15
>Reporter: Ivan Daschinsky
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>
> Sometimes index creation can take quite long. A user might start queries 
> without waiting for the index creation process to finish and then see slow 
> queries. It is necessary to provide index status information to users by 
> exposing it in the index system view.
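
For illustration, a user could then check the build status with a query against 
the SYS.INDEXES view (a sketch only, assuming the JDBC thin driver on the default 
port; the STATUS column is hypothetical, it is exactly what this ticket proposes 
to add):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IndexStatusCheck {
    public static void main(String[] args) throws Exception {
        // "STATUS" is the proposed (not yet existing) column exposing index build state.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT TABLE_NAME, INDEX_NAME, STATUS FROM SYS.INDEXES")) {
            while (rs.next()) {
                System.out.printf("%s.%s -> %s%n",
                        rs.getString(1), rs.getString(2), rs.getString(3));
            }
        }
    }
}
{code}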



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20688) Java Thin Client - Error while deserializing Collection

2023-10-19 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov reassigned IGNITE-20688:
---

Assignee: Mikhail Petrov

> Java Thin Client - Error while deserializing Collection
> ---
>
> Key: IGNITE-20688
> URL: https://issues.apache.org/jira/browse/IGNITE-20688
> Project: Ignite
>  Issue Type: Bug
>  Components: binary, thin client
>Affects Versions: 2.9, 2.10, 2.12, 2.13, 2.14, 2.15
>Reporter: Rahul Mohan
>Assignee: Mikhail Petrov
>Priority: Major
>
> I have encountered an issue when deserializing cache values of Collection 
> type.
> The issue occurs if a field in different objects within the collection 
> points to the same reference.
> *Versions:*
> org.apache.ignite:ignite-core:2.9.0 to org.apache.ignite:ignite-core:2.15.0
>  
> {code:java}
> // Person.java
> public class Person implements Serializable {
>     private String id;
>     private String firstName;
>     private String lastName;
>     private double salary;
>     private String country;
>     private String deleted;
>     private Set<String> accounts;
> }
>
> // Client
> ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration()
>     .setName(cacheName)
>     .setCacheMode(CacheMode.REPLICATED)
>     .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>
> cache = client.getOrCreateCache(cacheCfg);
>
> Set<String> set = new HashSet<>();
> set.add("1");
>
> List<Person> persons = new ArrayList<>();
> persons.add(new Person("105286a4", "Jack", "Smith", 1f, "USA", "false", set));
> persons.add(new Person("98545b0fd3af", "John", "Doe", 50f, "Australia", "false", null));
> persons.add(new Person("98545b0fd3afd", "Hari", "M", 40f, "India", null, null));
> persons.add(new Person("985488b0fd3ae", "Bugs", "Bunny", 30f, "Wabbit Land ", null, set));
>
> cache.put("group1", persons); // Write collection to cache
>
> List<Person> fromCache = (List<Person>) cache.get("group1"); // Get from cache, exception here {code}
> 
> *Exception:*
> {code:java}
> class org.apache.ignite.binary.BinaryObjectException: Failed to deserialize 
> object [typeName=com.ignite.example.model.Person]
>     at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:927)
>     at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
>     at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
>     at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:316)
>     at 
> org.apache.ignite.internal.client.thin.ClientBinaryMarshaller.deserialize(ClientBinaryMarshaller.java:74)
>     at 
> org.apache.ignite.internal.client.thin.ClientUtils.unwrapBinary(ClientUtils.java:557)
>     at 
> org.apache.ignite.internal.client.thin.ClientUtils.unwrapCollection(ClientUtils.java:578)
>     at 
> org.apache.ignite.internal.client.thin.ClientUtils.unwrapBinary(ClientUtils.java:562)
>     at 
> org.apache.ignite.internal.client.thin.ClientUtils.readObject(ClientUtils.java:546)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientCache.readObject(TcpClientCache.java:556)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientCache.readObject(TcpClientCache.java:561)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientCache$$Lambda$395/1950117092.apply(Unknown
>  Source)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:284)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientChannel.service(TcpClientChannel.java:219)
>     at 
> org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:198)
>     at 
> org.apache.ignite.internal.client.thin.ReliableChannel.affinityService(ReliableChannel.java:261)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientCache.cacheSingleKeyOperation(TcpClientCache.java:508)
>     at 
> org.apache.ignite.internal.client.thin.TcpClientCache.get(TcpClientCache.java:111)
>     at 
> com.ignite.example.service.ApacheIgniteService.printAllKeys(ApacheIgniteService.java:117)
>     at 
> com.ignite.example.service.ApacheIgniteService.init(ApacheIgniteService.java:103)
>     at 
> com.ignite.example.IgniteCacheExampleApplication.run(IgniteCacheExampleApplication.java:22)
>     at 
> 
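
For reference, a condensed, self-contained version of the reproducer (a sketch 
only - host, port, cache name and the Person constructor are filled in here for 
completeness and are not taken verbatim from the report):

{code:java}
import java.io.Serializable;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class SharedReferenceRepro {
    static class Person implements Serializable {
        String id, firstName, lastName, country, deleted;
        double salary;
        Set<String> accounts;

        Person(String id, String firstName, String lastName, double salary,
                String country, String deleted, Set<String> accounts) {
            this.id = id; this.firstName = firstName; this.lastName = lastName;
            this.salary = salary; this.country = country; this.deleted = deleted;
            this.accounts = accounts;
        }
    }

    public static void main(String[] args) {
        try (IgniteClient client = Ignition.startClient(
                new ClientConfiguration().setAddresses("127.0.0.1:10800"))) {
            ClientCache<String, List<Person>> cache = client.getOrCreateCache("persons");

            // The same Set instance is referenced from two different Person objects.
            Set<String> shared = new HashSet<>();
            shared.add("1");

            List<Person> persons = new ArrayList<>();
            persons.add(new Person("105286a4", "Jack", "Smith", 1, "USA", "false", shared));
            persons.add(new Person("98545b0fd3af", "John", "Doe", 50, "Australia", "false", null));
            persons.add(new Person("985488b0fd3ae", "Bugs", "Bunny", 30, "Wabbit Land", null, shared));

            cache.put("group1", persons);

            // Reported to fail here with BinaryObjectException while deserializing Person.
            List<Person> fromCache = cache.get("group1");
            System.out.println(fromCache.size());
        }
    }
}
{code}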

[jira] [Updated] (IGNITE-20692) Introduce Partition lifecycle events

2023-10-19 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-20692:
-
Labels: ignite-3  (was: )

> Introduce Partition lifecycle events
> 
>
> Key: IGNITE-20692
> URL: https://issues.apache.org/jira/browse/IGNITE-20692
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20692) Introduce Partition lifecycle events

2023-10-19 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev updated IGNITE-20692:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Introduce Partition lifecycle events
> 
>
> Key: IGNITE-20692
> URL: https://issues.apache.org/jira/browse/IGNITE-20692
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtcev
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-20692) Introduce Partition lifecycle events

2023-10-19 Thread Aleksandr Polovtcev (Jira)
Aleksandr Polovtcev created IGNITE-20692:


 Summary: Introduce Partition lifecycle events
 Key: IGNITE-20692
 URL: https://issues.apache.org/jira/browse/IGNITE-20692
 Project: Ignite
  Issue Type: Improvement
Reporter: Aleksandr Polovtcev






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20691) Excessive heap utilization under TPC-H benchmark

2023-10-19 Thread Ivan Artiukhov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Artiukhov updated IGNITE-20691:

Description: 
Ignite 3, rev. c962aa1bbb94e73a4d8ce2403aad3d629dd55666

Ignite 3 fails under [TPC-H benchmark|https://www.tpc.org/tpch/] which sends 
requests via JDBC. The code of the benchmark: 
[https://github.com/cmu-db/benchbase/tree/main/src/main/resources/benchmarks/tpch]
 

Steps:
 * Start an Ignite 3 node with the attached bootstrap config 
[^ignite-config.json]
 * Start a single instance of the benchmark with the following config: 
[^tpch_2023-10-17_08-51-49.config.xml]

*Actual result:*

The node fails with exceptions like the following:
{noformat}
2023-10-17 08:36:48:269 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.80-id-0%metastorage-watch-executor-3][ReplicaManager]
 Failed to process replica request [request=ReadWriteMultiRowReplicaRequestImpl 
[binaryRowMessages=ArrayList [BinaryRowMessageImpl 
[binaryTuple=java.nio.HeapByteBuffer[pos=0 lim=99 cap=99], schemaVersion=1]], 
commitPartitionId=TablePartitionIdMessageImpl [partitionId=20, tableId=26], 
full=false, groupId=26_part_21, requestType=RW_INSERT_ALL, skipDelayedAck=true, 
term=111248573589356783, timestampLong=111248730707264072, 
transactionId=018b3c21-8952-00a1--91e0d952]]
java.util.concurrent.CompletionException: 
org.apache.ignite.internal.replicator.exception.PrimaryReplicaMissException: 
IGN-REP-6 TraceId:fe81fc3b-bf42-4433-94f3-b460e1542523 The primary replica has 
changed [expectedLeaseholder=poc-tester-SERVER-192.168.1.80-id-0, 
currentLeaseholder=null]
    at 
java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367)
    at 
java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376)
    at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1074)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
    at 
org.apache.ignite.internal.util.PendingComparableValuesTracker.lambda$completeWaitersOnUpdate$0(PendingComparableValuesTracker.java:169)
    at 
java.base/java.util.concurrent.ConcurrentMap.forEach(ConcurrentMap.java:122)
    at 
org.apache.ignite.internal.util.PendingComparableValuesTracker.completeWaitersOnUpdate(PendingComparableValuesTracker.java:169)
    at 
org.apache.ignite.internal.util.PendingComparableValuesTracker.update(PendingComparableValuesTracker.java:103)
    at 
org.apache.ignite.internal.metastorage.server.time.ClusterTimeImpl.updateSafeTime(ClusterTimeImpl.java:146)
    at 
org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl.onSafeTimeAdvanced(MetaStorageManagerImpl.java:849)
    at 
org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl$1.onSafeTimeAdvanced(MetaStorageManagerImpl.java:456)
    at 
org.apache.ignite.internal.metastorage.server.WatchProcessor.lambda$advanceSafeTime$7(WatchProcessor.java:281)
    at 
java.base/java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
    at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: 
org.apache.ignite.internal.replicator.exception.PrimaryReplicaMissException: 
IGN-REP-6 TraceId:fe81fc3b-bf42-4433-94f3-b460e1542523 The primary replica has 
changed [expectedLeaseholder=poc-tester-SERVER-192.168.1.80-id-0, 
currentLeaseholder=null]
    at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$ensureReplicaIsPrimary$182(PartitionReplicaListener.java:2666)
    at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
    ... 15 more
{noformat}
The benchmark shows 0 completed requests. See 
[^tpch_2023-10-17_08-51-49.summary.json]

[https://gceasy.io/] reports that ~20% of the time the node was in GC pause. 
See [^GCeasy-report-gc-poc-tester-SERVER-192.pdf]

Logs from the node (including GC log): [^node_logs.zip]

  was:
Ignite 3, rev. 

c962aa1bbb94e73a4d8ce2403aad3d629dd55666

 

Ignite 3 fails under [TPC-H benchmark|https://www.tpc.org/tpch/] which sends 
requests via JDBC. The code of the benchmark: 
[https://github.com/cmu-db/benchbase/tree/main/src/main/resources/benchmarks/tpch]
 

 

Steps:
 * Start an Ignite 3 node with the attached bootstrap config 
[^ignite-config.json]
 * Start a single instance of the benchmark with the following config: 
[^tpch_2023-10-17_08-51-49.config.xml]

*Actual result:*

The node fails with exceptions like the following:
{noformat}

[jira] [Created] (IGNITE-20691) Excessive heap utilization under TPC-H benchmark

2023-10-19 Thread Ivan Artiukhov (Jira)
Ivan Artiukhov created IGNITE-20691:
---

 Summary: Excessive heap utilization under TPC-H benchmark
 Key: IGNITE-20691
 URL: https://issues.apache.org/jira/browse/IGNITE-20691
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Ivan Artiukhov
 Attachments: GCeasy-report-gc-poc-tester-SERVER-192.pdf, 
ignite-config.json, node_logs.zip, tpch_2023-10-17_08-51-49.config.xml, 
tpch_2023-10-17_08-51-49.summary.json

Ignite 3, rev. c962aa1bbb94e73a4d8ce2403aad3d629dd55666

Ignite 3 fails under [TPC-H benchmark|https://www.tpc.org/tpch/] which sends 
requests via JDBC. The code of the benchmark: 
[https://github.com/cmu-db/benchbase/tree/main/src/main/resources/benchmarks/tpch]

Steps:
 * Start an Ignite 3 node with the attached bootstrap config 
[^ignite-config.json]
 * Start a single instance of the benchmark with the following config: 
[^tpch_2023-10-17_08-51-49.config.xml]

*Actual result:*

The node fails with exceptions like the following:
{noformat}
2023-10-17 08:36:48:269 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.80-id-0%metastorage-watch-executor-3][ReplicaManager]
 Failed to process replica request [request=ReadWriteMultiRowReplicaRequestImpl 
[binaryRowMessages=ArrayList [BinaryRowMessageImpl 
[binaryTuple=java.nio.HeapByteBuffer[pos=0 lim=99 cap=99], schemaVersion=1]], 
commitPartitionId=TablePartitionIdMessageImpl [partitionId=20, tableId=26], 
full=false, groupId=26_part_21, requestType=RW_INSERT_ALL, skipDelayedAck=true, 
term=111248573589356783, timestampLong=111248730707264072, 
transactionId=018b3c21-8952-00a1--91e0d952]]
java.util.concurrent.CompletionException: 
org.apache.ignite.internal.replicator.exception.PrimaryReplicaMissException: 
IGN-REP-6 TraceId:fe81fc3b-bf42-4433-94f3-b460e1542523 The primary replica has 
changed [expectedLeaseholder=poc-tester-SERVER-192.168.1.80-id-0, 
currentLeaseholder=null]
    at 
java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367)
    at 
java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376)
    at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1074)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
    at 
org.apache.ignite.internal.util.PendingComparableValuesTracker.lambda$completeWaitersOnUpdate$0(PendingComparableValuesTracker.java:169)
    at 
java.base/java.util.concurrent.ConcurrentMap.forEach(ConcurrentMap.java:122)
    at 
org.apache.ignite.internal.util.PendingComparableValuesTracker.completeWaitersOnUpdate(PendingComparableValuesTracker.java:169)
    at 
org.apache.ignite.internal.util.PendingComparableValuesTracker.update(PendingComparableValuesTracker.java:103)
    at 
org.apache.ignite.internal.metastorage.server.time.ClusterTimeImpl.updateSafeTime(ClusterTimeImpl.java:146)
    at 
org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl.onSafeTimeAdvanced(MetaStorageManagerImpl.java:849)
    at 
org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl$1.onSafeTimeAdvanced(MetaStorageManagerImpl.java:456)
    at 
org.apache.ignite.internal.metastorage.server.WatchProcessor.lambda$advanceSafeTime$7(WatchProcessor.java:281)
    at 
java.base/java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
    at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: 
org.apache.ignite.internal.replicator.exception.PrimaryReplicaMissException: 
IGN-REP-6 TraceId:fe81fc3b-bf42-4433-94f3-b460e1542523 The primary replica has 
changed [expectedLeaseholder=poc-tester-SERVER-192.168.1.80-id-0, 
currentLeaseholder=null]
    at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$ensureReplicaIsPrimary$182(PartitionReplicaListener.java:2666)
    at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
    ... 15 more
{noformat}
 The benchmark shows 0 completed requests. See 
[^tpch_2023-10-17_08-51-49.summary.json]

[https://gceasy.io/] reports that ~20% of the time the node was in GC pause. 
See [^GCeasy-report-gc-poc-tester-SERVER-192.pdf]

Logs from the node (including GC log): [^node_logs.zip]
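
For context, the benchmark drives the node purely over JDBC; a minimal sketch of 
such a client (the URL assumes the default client connector port 10800, and the 
table and column names are illustrative, not taken from the TPC-H schema):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class JdbcLoadSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO sample (id, val) VALUES (?, ?)")) {
            // A tight insert loop similar in spirit to the benchmark's load phase.
            for (int i = 0; i < 1_000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "value-" + i);
                ps.executeUpdate();
            }
        }
    }
}
{code}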



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20664) Resolve compatibility issue with SnakeYAML versions in Micronaut tests

2023-10-19 Thread Mikhail Pochatkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pochatkin reassigned IGNITE-20664:
--

Assignee: Mikhail Pochatkin

> Resolve compatibility issue with SnakeYAML versions in Micronaut tests
> --
>
> Key: IGNITE-20664
> URL: https://issues.apache.org/jira/browse/IGNITE-20664
> Project: Ignite
>  Issue Type: Bug
>  Components: cli
>Reporter: Ivan Gagarkin
>Assignee: Mikhail Pochatkin
>Priority: Major
>  Labels: ignite-3
>
>  
> We are encountering an issue with Micronaut tests involving the 'jarhell' 
> component. Specifically, we are using two different versions of the SnakeYAML 
> library - version 1.33 and version 2.0. When running the tests from IntelliJ 
> IDEA, we observe the following error in the 
> 'org.apache.ignite.internal.rest.ItGeneratedRestClientTest' class
> {code:java}
> java.lang.NoSuchMethodError: org.yaml.snakeyaml.constructor.SafeConstructor: 
> method 'void ()' not found
>   at 
> io.micronaut.context.env.yaml.CustomSafeConstructor.(CustomSafeConstructor.java:36)
> at 
> io.micronaut.context.env.yaml.YamlPropertySourceLoader.processInput(YamlPropertySourceLoader.java:56)
> at 
> io.micronaut.context.env.AbstractPropertySourceLoader.read(AbstractPropertySourceLoader.java:117)
> at 
> io.micronaut.context.env.AbstractPropertySourceLoader.loadProperties(AbstractPropertySourceLoader.java:102)
>   at 
> io.micronaut.context.env.AbstractPropertySourceLoader.load(AbstractPropertySourceLoader.java:68)
>  at 
> io.micronaut.context.env.AbstractPropertySourceLoader.load(AbstractPropertySourceLoader.java:55)
>  at 
> io.micronaut.context.env.DefaultEnvironment.loadPropertySourceFromLoader(DefaultEnvironment.java:607)
> at 
> io.micronaut.context.env.DefaultEnvironment.readPropertySourceList(DefaultEnvironment.java:541)
>   at 
> io.micronaut.context.env.DefaultEnvironment.readPropertySourceList(DefaultEnvironment.java:527)
>   at 
> io.micronaut.context.DefaultApplicationContext$RuntimeConfiguredEnvironment.readPropertySourceList(DefaultApplicationContext.java:794)
>at 
> io.micronaut.context.env.DefaultEnvironment.readPropertySources(DefaultEnvironment.java:412)
>  at 
> io.micronaut.context.env.DefaultEnvironment.start(DefaultEnvironment.java:270)
>at 
> io.micronaut.context.DefaultApplicationContext$RuntimeConfiguredEnvironment.start(DefaultApplicationContext.java:769)
> at 
> io.micronaut.context.DefaultApplicationContext$RuntimeConfiguredEnvironment.start(DefaultApplicationContext.java:738)
> at 
> io.micronaut.context.DefaultApplicationContext.startEnvironment(DefaultApplicationContext.java:242)
>   at 
> io.micronaut.context.DefaultApplicationContext.start(DefaultApplicationContext.java:193)
>  at 
> io.micronaut.test.extensions.AbstractMicronautExtension.startApplicationContext(AbstractMicronautExtension.java:433)
>  at 
> io.micronaut.test.extensions.AbstractMicronautExtension.beforeClass(AbstractMicronautExtension.java:314)
>  at 
> io.micronaut.test.extensions.junit5.MicronautJunit5Extension.beforeAll(MicronautJunit5Extension.java:84)
>  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeBeforeAllCallbacks$12(ClassBasedTestDescriptor.java:395)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeBeforeAllCallbacks(ClassBasedTestDescriptor.java:395)
>  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:211)
>at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:84)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:148)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
>at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
> at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
> at 
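
The error above points to a binary-incompatible change between SnakeYAML 1.x and 
2.x: the no-arg SafeConstructor constructor is gone in 2.0, so classes compiled 
against 1.x fail at runtime when the 2.x jar wins the classpath. A sketch of the 
difference (illustrative only):

{code:java}
import org.yaml.snakeyaml.LoaderOptions;
import org.yaml.snakeyaml.constructor.SafeConstructor;

public class SnakeYamlCompat {
    public static void main(String[] args) {
        // Compiles against SnakeYAML 1.x but throws NoSuchMethodError at runtime
        // with SnakeYAML 2.x on the classpath, because the no-arg constructor was removed:
        // SafeConstructor legacy = new SafeConstructor();

        // Form accepted by SnakeYAML 2.x (and late 1.x releases such as 1.33):
        SafeConstructor current = new SafeConstructor(new LoaderOptions());
        System.out.println(current.getClass().getName());
    }
}
{code}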

[jira] [Comment Edited] (IGNITE-20689) [ducktests] Fix a flaky perf_stat_test

2023-10-19 Thread Ivan Daschinsky (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1102#comment-1102
 ] 

Ivan Daschinsky edited comment on IGNITE-20689 at 10/19/23 7:43 AM:


[~serge.korotkov] PR is ok, thanks, merged to master


was (Author: ivandasch):
[~serge.korotkov] PR is ok, merged

> [ducktests] Fix a flaky perf_stat_test
> -
>
> Key: IGNITE-20689
> URL: https://issues.apache.org/jira/browse/IGNITE-20689
> Project: Ignite
>  Issue Type: Task
>Reporter: Sergey Korotkov
>Assignee: Sergey Korotkov
>Priority: Minor
>  Labels: ducktests
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ignitetest.tests.control_utility.perf_stat_test fails sometimes.
> This happens because ContinuousDataLoadApplication sometimes fails to warm up 
> within the default 60-second timeout (the warmup loads 100,000 entries). 
> Since the application postpones the IGNITE_APPLICATION_INITIALIZED event until 
> the warmup completes, the test decides that the application cannot start and fails.
> As a solution, the number of warmup entries can be safely reduced for this 
> test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20689) [ducktests] Fix a flaky perf_stat_test

2023-10-19 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-20689:
-
Fix Version/s: 2.16

> [ducktests] Fix a flaky perf_stat_test
> -
>
> Key: IGNITE-20689
> URL: https://issues.apache.org/jira/browse/IGNITE-20689
> Project: Ignite
>  Issue Type: Task
>Reporter: Sergey Korotkov
>Assignee: Sergey Korotkov
>Priority: Minor
>  Labels: ducktests
> Fix For: 2.16
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ignitetest.tests.control_utility.perf_stat_test fails sometimes.
> This happens because ContinuousDataLoadApplication sometimes fails to warm up 
> within the default 60-second timeout (the warmup loads 100,000 entries). 
> Since the application postpones the IGNITE_APPLICATION_INITIALIZED event until 
> the warmup completes, the test decides that the application cannot start and fails.
> As a solution, the number of warmup entries can be safely reduced for this 
> test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20430) Get rid of useless code in LeaseTracker.UpdateListener#onUpdate

2023-10-19 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1096#comment-1096
 ] 

Vladislav Pyatkov commented on IGNITE-20430:


Merged 154c99c28a6166307c89d07fd380eda639b1cdd3

> Get rid of useless code in LeaseTracker.UpdateListener#onUpdate
> ---
>
> Key: IGNITE-20430
> URL: https://issues.apache.org/jira/browse/IGNITE-20430
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Useless code was discovered in 
> *org.apache.ignite.internal.placementdriver.leases.LeaseTracker.UpdateListener#onUpdate*,
>  in particular the second loop in which the predicate is never executed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20545) Test IgniteRpcTest.testDisconnect is flaky on TC

2023-10-19 Thread Aleksandr Polovtcev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1065#comment-1065
 ] 

Aleksandr Polovtcev commented on IGNITE-20545:
--

I wasn't able to reproduce this problem locally; I think it may be related to 
some stale Ignite instances running on TC. I'm going to add some logging and 
check TC during the next failure.

> Test IgniteRpcTest.testDisconnect is flaky on TC
> 
>
> Key: IGNITE-20545
> URL: https://issues.apache.org/jira/browse/IGNITE-20545
> Project: Ignite
>  Issue Type: Bug
>Reporter: Sergey Chugunov
>Assignee: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>
> Test failed recently on main branch ([failed 
> run|https://ci.ignite.apache.org/viewLog.html?buildId=7536503=ApacheIgnite3xGradle_Test_RunAllTests]),
>  there is an assertion in test logs:
> {code:java}
> org.opentest4j.AssertionFailedError: expected: <true> but was: <false>
> at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:31)
> at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:180)
> at 
> app//org.apache.ignite.raft.jraft.rpc.AbstractRpcTest.testDisconnect(AbstractRpcTest.java:128){code}
> Test history shows that it fails occasionally in different branches with the 
> same error in the logs.
> It looks like there is some kind of race between events in the test logic.
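
If it is indeed a race, one common remedy is to poll for the expected state 
instead of asserting it immediately (a sketch only - the helper and the polled 
condition below are hypothetical, not the existing test utilities):

{code:java}
import java.util.function.BooleanSupplier;

final class WaitUtils {
    /** Polls the condition until it holds or the timeout elapses; returns the final result. */
    static boolean waitForCondition(BooleanSupplier cond, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (cond.getAsBoolean()) {
                return true;
            }
            Thread.sleep(50);
        }
        return cond.getAsBoolean();
    }
}

// In the test, instead of asserting the disconnect flag right away:
// assertTrue(WaitUtils.waitForCondition(() -> clientDisconnected.get(), 10_000));
{code}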



--
This message was sent by Atlassian Jira
(v8.20.10#820010)