[jira] [Resolved] (IGNITE-21225) Redundant lambda object allocation in ClockPageReplacementFlags#setFlag
[ https://issues.apache.org/jira/browse/IGNITE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov resolved IGNITE-21225.
Fix Version/s: 2.17
Release Note: Fixed redundant lambda object allocation in ClockPageReplacementFlags#setFlag
Resolution: Fixed
[~timonin.maksim], thanks for the review! Merged to master.
> Redundant lambda object allocation in ClockPageReplacementFlags#setFlag
> ---
>
> Key: IGNITE-21225
> URL: https://issues.apache.org/jira/browse/IGNITE-21225
> Project: Ignite
> Issue Type: Improvement
> Reporter: Aleksey Plekhanov
> Assignee: Aleksey Plekhanov
> Priority: Major
> Labels: ise
> Fix For: 2.17
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> Every time we call the {{ClockPageReplacementFlags#setFlag/clearFlag}} methods, a new lambda object is created, since the lambda accesses a variable in the enclosing scope. {{ClockPageReplacementFlags#setFlag}} is called every time a page is modified, so it's a relatively hot method and we should avoid new object allocation here.
> Here is a test that shows the redundant allocations:
>
> {code:java}
> /** */
> @Test
> public void testAllocation() {
>     clockFlags = new ClockPageReplacementFlags(MAX_PAGES_CNT, region.address());
>
>     int cnt = 1_000_000;
>
>     ThreadMXBean bean = (ThreadMXBean)ManagementFactory.getThreadMXBean();
>
>     // Warmup.
>     clockFlags.setFlag(0);
>
>     long allocated0 = bean.getThreadAllocatedBytes(Thread.currentThread().getId());
>
>     for (int i = 0; i < cnt; i++)
>         clockFlags.setFlag(i % MAX_PAGES_CNT);
>
>     long allocated1 = bean.getThreadAllocatedBytes(Thread.currentThread().getId());
>
>     assertTrue("Too many bytes allocated: " + (allocated1 - allocated0), allocated1 - allocated0 < cnt);
> }
> {code}
-- This message was sent by Atlassian Jira (v8.20.10#820010)
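The allocation described above can be reproduced outside Ignite. Below is a minimal sketch (not Ignite's actual code; class and method names are illustrative) of why a lambda that captures a local variable allocates a fresh object on every call, next to an allocation-free compare-and-swap loop of the kind such hot paths typically switch to:

```java
import java.util.concurrent.atomic.AtomicLongArray;

public class FlagAllocationSketch {
    /** One bit per page, packed 64 per long (on-heap stand-in for the off-heap flags region). */
    private static final AtomicLongArray flags = new AtomicLongArray(16);

    /** Capturing variant: 'mask' comes from the enclosing scope, so every call allocates a new lambda object. */
    static void setFlagCapturing(int pageIdx) {
        long mask = 1L << (pageIdx & 63);
        flags.updateAndGet(pageIdx >>> 6, cur -> cur | mask); // new object per call
    }

    /** Allocation-free variant: an explicit CAS loop needs no lambda at all. */
    static void setFlagNoAlloc(int pageIdx) {
        int slot = pageIdx >>> 6;
        long mask = 1L << (pageIdx & 63);
        long cur;
        do {
            cur = flags.get(slot);
            if ((cur & mask) != 0)
                return; // bit already set
        } while (!flags.compareAndSet(slot, cur, cur | mask));
    }

    public static void main(String[] args) {
        setFlagCapturing(0);  // sets bit 0 of slot 0
        setFlagNoAlloc(65);   // sets bit 1 of slot 1
        System.out.println(flags.get(0)); // 1
        System.out.println(flags.get(1)); // 2
    }
}
```

A non-capturing lambda (one that references no enclosing locals) is cached by the JVM and allocated only once, which is why hoisting the captured state out of the lambda, or replacing it with an explicit loop, removes the per-call allocation.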
[jira] [Updated] (IGNITE-21823) fix log message pageSize
[ https://issues.apache.org/jira/browse/IGNITE-21823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21823:
---
Ignite Flags: Release Notes Required (was: Docs Required,Release Notes Required)
> fix log message pageSize
>
>
> Key: IGNITE-21823
> URL: https://issues.apache.org/jira/browse/IGNITE-21823
> Project: Ignite
> Issue Type: Improvement
> Reporter: Aleksandr Nikolaev
> Assignee: Andrei Nadyktov
> Priority: Minor
> Labels: ise, newbie
> Fix For: 2.17
>
> Time Spent: 40m
> Remaining Estimate: 0h
>
> If pageSize is not set in the configuration, the log reports pageSize = 0, which is not the page size the node actually uses.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
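The misleading message comes from logging the raw configured value: an unset {{pageSize}} stays {{0}} (meaning "use the default") until the storage engine resolves it. A minimal sketch of the intended behavior, logging the effective value instead (method names are illustrative; {{DFLT_PAGE_SIZE}} mirrors Ignite's 4 KB default):

```java
public class PageSizeLogSketch {
    /** Default page size applied when the configured value is 0, i.e. "not set". */
    static final int DFLT_PAGE_SIZE = 4096;

    /** Resolves the page size the node will actually use. */
    static int effectivePageSize(int configured) {
        return configured == 0 ? DFLT_PAGE_SIZE : configured;
    }

    public static void main(String[] args) {
        // Log the effective value rather than the raw (possibly 0) configured one.
        System.out.println("pageSize=" + effectivePageSize(0));    // pageSize=4096
        System.out.println("pageSize=" + effectivePageSize(8192)); // pageSize=8192
    }
}
```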
[jira] [Updated] (IGNITE-21225) Redundant lambda object allocation in ClockPageReplacementFlags#setFlag
[ https://issues.apache.org/jira/browse/IGNITE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21225:
---
Labels: ise (was: )
> Redundant lambda object allocation in ClockPageReplacementFlags#setFlag
> ---
>
> Key: IGNITE-21225
> URL: https://issues.apache.org/jira/browse/IGNITE-21225
> Project: Ignite
> Issue Type: Improvement
> Reporter: Aleksey Plekhanov
> Assignee: Aleksey Plekhanov
> Priority: Major
> Labels: ise
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Every time we call the {{ClockPageReplacementFlags#setFlag/clearFlag}} methods, a new lambda object is created, since the lambda accesses a variable in the enclosing scope. {{ClockPageReplacementFlags#setFlag}} is called every time a page is modified, so it's a relatively hot method and we should avoid new object allocation here.
> Here is a test that shows the redundant allocations:
>
> {code:java}
> /** */
> @Test
> public void testAllocation() {
>     clockFlags = new ClockPageReplacementFlags(MAX_PAGES_CNT, region.address());
>
>     int cnt = 1_000_000;
>
>     ThreadMXBean bean = (ThreadMXBean)ManagementFactory.getThreadMXBean();
>
>     // Warmup.
>     clockFlags.setFlag(0);
>
>     long allocated0 = bean.getThreadAllocatedBytes(Thread.currentThread().getId());
>
>     for (int i = 0; i < cnt; i++)
>         clockFlags.setFlag(i % MAX_PAGES_CNT);
>
>     long allocated1 = bean.getThreadAllocatedBytes(Thread.currentThread().getId());
>
>     assertTrue("Too many bytes allocated: " + (allocated1 - allocated0), allocated1 - allocated0 < cnt);
> }
> {code}
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-22101) Performance drop for thin client requests
[ https://issues.apache.org/jira/browse/IGNITE-22101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-22101:
---
Ignite Flags: (was: Release Notes Required)
> Performance drop for thin client requests
> -
>
> Key: IGNITE-22101
> URL: https://issues.apache.org/jira/browse/IGNITE-22101
> Project: Ignite
> Issue Type: Bug
> Reporter: Aleksey Plekhanov
> Assignee: Aleksey Plekhanov
> Priority: Major
> Labels: ise
> Fix For: 2.17
>
> Attachments: perf_drop.png
>
> Time Spent: 1h
> Remaining Estimate: 0h
>
> After IGNITE-21183 there is a performance drop of up to 38% for thin client transactional operations:
> !perf_drop.png|width=1083,height=168!
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-22028) Thin client: Implement invoke/invokeAll operations
[ https://issues.apache.org/jira/browse/IGNITE-22028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-22028:
---
Labels: IEP-122 ise (was: ise)
> Thin client: Implement invoke/invokeAll operations
> --
>
> Key: IGNITE-22028
> URL: https://issues.apache.org/jira/browse/IGNITE-22028
> Project: Ignite
> Issue Type: New Feature
> Reporter: Aleksey Plekhanov
> Assignee: Aleksey Plekhanov
> Priority: Major
> Labels: IEP-122, ise
>
> We must implement the invoke/invokeAll methods for the thin client.
> A dev-list thread and an IEP should be started to discuss protocol and implementation details.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (IGNITE-22101) Performance drop for thin client requests
[ https://issues.apache.org/jira/browse/IGNITE-22101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844343#comment-17844343 ] Aleksey Plekhanov edited comment on IGNITE-22101 at 5/7/24 3:30 PM:
Benchmark results (latency, ms) after fix:
| | | |ignite-20240405-IGNITE-13114-fbdd3deb70| | |ignite-20240426-IGNITE-22069-0947a8b9fb| | | |
|storage|client_type|benchmark|value|error%|runs#|value|error%|runs#|dif%|
|MEM|Thin|PutAllBenchmark|5.21|±4.70%|3|5.24|±5.12%|3|+0.66%|
|MEM|Thin|PutAllSerializableTxBenchmark|38.17|±1.34%|3|38.40|±2.66%|3|+0.58%|
|MEM|Thin|PutAllTxBenchmark|53.43|±2.30%|3|53.72|±0.57%|3|+0.55%|
|MEM|Thin|PutGetTxBenchmark ("PESSIMISTIC", "REPEATABLE_READ")|3.43|±3.52%|3|3.44|±4.35%|3|+0.53%|
|MEM|Thin|PutGetTxBenchmark ("OPTIMISTIC", "REPEATABLE_READ")|2.22|±1.52%|3|2.23|±3.02%|3|+0.31%|
|MEM|Thin|PutBenchmark|0.46|±0.89%|3|0.46|±1.49%|3|+0.30%|
|MEM|Thin|GetAllPutAllTxBenchmark ("OPTIMISTIC")|57.18|±1.64%|3|57.23|±1.24%|3|+0.10%|
|MEM|Thin|PutGetBatchBenchmark|8.32|±1.48%|3|8.32|±2.40%|3|-0.06%|
|MEM|Thin|PutGetBenchmark|0.85|±2.38%|3|0.85|±2.90%|3|-0.43%|
|MEM|Thin|GetAllPutAllTxBenchmark ("OPTIMISTIC", "SERIALIZABLE")|49.89|±2.02%|3|49.64|±1.51%|3|-0.49%|
|MEM|Thin|PutGetTxBenchmark ("OPTIMISTIC", "SERIALIZABLE")|3.06|±5.78%|3|3.03|±2.85%|3|-0.83%|
|MEM|Thin|PutTxImplicitBenchmark|0.70|±6.08%|3|0.68|±3.07%|3|-2.70%|
> Performance drop for thin client requests
> -
>
> Key: IGNITE-22101
> URL: https://issues.apache.org/jira/browse/IGNITE-22101
> Project: Ignite
> Issue Type: Bug
> Reporter: Aleksey Plekhanov
> Assignee: Aleksey Plekhanov
> Priority: Major
> Labels: ise
> Attachments: perf_drop.png
>
> Time Spent: 50m
> Remaining Estimate: 0h
>
> After IGNITE-21183 there is a performance drop of up to 38% for thin client transactional operations:
> !perf_drop.png|width=1083,height=168!
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-22101) Performance drop for thin client requests
[ https://issues.apache.org/jira/browse/IGNITE-22101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-22101:
---
Description:
After IGNITE-21183 there is a performance drop of up to 38% for thin client transactional operations:
!perf_drop.png|width=1083,height=168!
was:
After IGNITE-21183 there is a performance drop of up to 38% for thin client transactional operations:
!image-2024-04-24-18-40-52-125.png|width=884,height=137!
> Performance drop for thin client requests
> -
>
> Key: IGNITE-22101
> URL: https://issues.apache.org/jira/browse/IGNITE-22101
> Project: Ignite
> Issue Type: Bug
> Reporter: Aleksey Plekhanov
> Assignee: Aleksey Plekhanov
> Priority: Major
> Labels: ise
> Attachments: perf_drop.png
>
> After IGNITE-21183 there is a performance drop of up to 38% for thin client transactional operations:
> !perf_drop.png|width=1083,height=168!
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-22101) Performance drop for thin client requests
[ https://issues.apache.org/jira/browse/IGNITE-22101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-22101:
---
Attachment: (was: image-2024-04-24-18-40-52-125.png)
> Performance drop for thin client requests
> -
>
> Key: IGNITE-22101
> URL: https://issues.apache.org/jira/browse/IGNITE-22101
> Project: Ignite
> Issue Type: Bug
> Reporter: Aleksey Plekhanov
> Assignee: Aleksey Plekhanov
> Priority: Major
> Labels: ise
> Attachments: perf_drop.png
>
> After IGNITE-21183 there is a performance drop of up to 38% for thin client transactional operations:
> !image-2024-04-24-18-40-52-125.png|width=884,height=137!
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-22101) Performance drop for thin client requests
Aleksey Plekhanov created IGNITE-22101:
--
Summary: Performance drop for thin client requests
Key: IGNITE-22101
URL: https://issues.apache.org/jira/browse/IGNITE-22101
Project: Ignite
Issue Type: Bug
Reporter: Aleksey Plekhanov
Assignee: Aleksey Plekhanov
Attachments: perf_drop.png

After IGNITE-21183 there is a performance drop of up to 38% for thin client transactional operations:
!image-2024-04-24-18-40-52-125.png|width=884,height=137!
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-22069) Fix contention on expiration for persistent caches
[ https://issues.apache.org/jira/browse/IGNITE-22069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838695#comment-17838695 ] Aleksey Plekhanov commented on IGNITE-22069:
Benchmark results ({{JmhCacheExpireBenchmark}}) on my laptop:
Before fix:
{noformat}
Benchmark                              (persistence)   Mode  Cnt   Score    Error   Units
JmhCacheExpireBenchmark.putWithExpire           TRUE  thrpt    3  29,968 ± 15,287  ops/ms
{noformat}
After fix:
{noformat}
Benchmark                              (persistence)   Mode  Cnt    Score    Error   Units
JmhCacheExpireBenchmark.putWithExpire           TRUE  thrpt    3  172,777 ± 22,737  ops/ms
{noformat}
> Fix contention on expiration for persistent caches
> --
>
> Key: IGNITE-22069
> URL: https://issues.apache.org/jira/browse/IGNITE-22069
> Project: Ignite
> Issue Type: Improvement
> Reporter: Aleksey Plekhanov
> Assignee: Aleksey Plekhanov
> Priority: Major
> Labels: ise
> Time Spent: 10m
> Remaining Estimate: 0h
>
> We've fixed contention on expiration for in-memory caches in the IGNITE-14341 and IGNITE-21929 tickets, but persistent caches use another method to expire entries, and this method should be fixed too. Moreover, there are some other optimizations related to expiration we can make:
> # Use batch pending tree entries removal for persistent caches (already implemented for in-memory)
> # Randomize iteration over cache data stores during expiration to reduce contention
> # For each transaction, we try to expire entries for every cache in the cluster. At least we can limit the list of caches to the caches related to the transaction.
> # On cache destroy, batch removal from the pending entries tree can be used (instead of one-by-one deletion).
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-22069) Fix contention on expiration for persistent caches
Aleksey Plekhanov created IGNITE-22069:
--
Summary: Fix contention on expiration for persistent caches
Key: IGNITE-22069
URL: https://issues.apache.org/jira/browse/IGNITE-22069
Project: Ignite
Issue Type: Improvement
Reporter: Aleksey Plekhanov
Assignee: Aleksey Plekhanov

We've fixed contention on expiration for in-memory caches in the IGNITE-14341 and IGNITE-21929 tickets, but persistent caches use another method to expire entries, and this method should be fixed too. Moreover, there are some other optimizations related to expiration we can make:
# Use batch pending tree entries removal for persistent caches (already implemented for in-memory)
# Randomize iteration over cache data stores during expiration to reduce contention
# For each transaction, we try to expire entries for every cache in the cluster. At least we can limit the list of caches to the caches related to the transaction.
# On cache destroy, batch removal from the pending entries tree can be used (instead of one-by-one deletion).
-- This message was sent by Atlassian Jira (v8.20.10#820010)
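Optimization (1) above, batch removal from the pending tree, can be sketched with a sorted map standing in for {{PendingEntriesTree}} (illustrative only, not Ignite's actual API): instead of locating and deleting each expired entry individually under the tree lock, all entries up to the current timestamp are taken and cleared as one range operation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

public class BatchExpireSketch {
    /** Expire time -> key; a simplified on-heap stand-in for the pending entries tree. */
    static final NavigableMap<Long, String> pending = new TreeMap<>();

    /** Removes every entry expired at 'now' as one range instead of one-by-one lookups. */
    static List<String> expireBatch(long now) {
        NavigableMap<Long, String> expired = pending.headMap(now, true);
        List<String> keys = new ArrayList<>(expired.values());
        expired.clear(); // single bulk removal over the whole expired range
        return keys;
    }

    public static void main(String[] args) {
        pending.put(10L, "a");
        pending.put(20L, "b");
        pending.put(30L, "c");
        System.out.println(expireBatch(20L)); // [a, b]
        System.out.println(pending.size());   // 1
    }
}
```

The same shape applies to optimization (4): on cache destroy, one range removal over the cache's pending entries replaces n individual deletions.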
[jira] [Created] (IGNITE-22028) Thin client: Implement invoke/invokeAll operations
Aleksey Plekhanov created IGNITE-22028:
--
Summary: Thin client: Implement invoke/invokeAll operations
Key: IGNITE-22028
URL: https://issues.apache.org/jira/browse/IGNITE-22028
Project: Ignite
Issue Type: New Feature
Reporter: Aleksey Plekhanov
Assignee: Aleksey Plekhanov

We must implement the invoke/invokeAll methods for the thin client.
A dev-list thread and an IEP should be started to discuss protocol and implementation details.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-21990) [PerfStat] Report can skip properties/rows/reads records from remote nodes
[ https://issues.apache.org/jira/browse/IGNITE-21990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21990:
---
Labels: ise (was: )
> [PerfStat] Report can skip properties/rows/reads records from remote nodes
> --
>
> Key: IGNITE-21990
> URL: https://issues.apache.org/jira/browse/IGNITE-21990
> Project: Ignite
> Issue Type: Bug
> Reporter: Aleksey Plekhanov
> Assignee: Aleksey Plekhanov
> Priority: Major
> Labels: ise
>
> In IGNITE-21863, after processing a "query" record, we store the aggregated result, assuming that after this point there are no more records related to this query. But such records may exist in other files (from other nodes).
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-21990) [PerfStat] Report can skip properties/rows/reads records from remote nodes
[ https://issues.apache.org/jira/browse/IGNITE-21990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21990:
---
Ignite Flags: (was: Docs Required,Release Notes Required)
> [PerfStat] Report can skip properties/rows/reads records from remote nodes
> --
>
> Key: IGNITE-21990
> URL: https://issues.apache.org/jira/browse/IGNITE-21990
> Project: Ignite
> Issue Type: Bug
> Reporter: Aleksey Plekhanov
> Assignee: Aleksey Plekhanov
> Priority: Major
>
> In IGNITE-21863, after processing a "query" record, we store the aggregated result, assuming that after this point there are no more records related to this query. But such records may exist in other files (from other nodes).
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (IGNITE-21929) Skip pending list extra cleanup in TTL Manager
[ https://issues.apache.org/jira/browse/IGNITE-21929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov resolved IGNITE-21929.
Release Note: Fixed redundant pending tree cleanup on entries expiration
Resolution: Fixed
[~yuri.naryshkin], looks good to me. Merged to master. Thanks for the contribution!
> Skip pending list extra cleanup in TTL Manager
> --
>
> Key: IGNITE-21929
> URL: https://issues.apache.org/jira/browse/IGNITE-21929
> Project: Ignite
> Issue Type: Improvement
> Reporter: Yuri Naryshkin
> Assignee: Yuri Naryshkin
> Priority: Major
> Labels: ise
> Fix For: 2.17
>
> Time Spent: 50m
> Remaining Estimate: 0h
>
> Currently, when records expire at a high rate, several threads (sys-stripe, client-connector) try to clean up those records and get stuck waiting to acquire a lock on the main page of PendingEntriesTree. This is unnecessary contention.
> After the fix for IGNITE-14341, expired records are cleaned from the PendingEntriesTree using a range. After that, each record is deleted from the dataTree. And after that, another attempt is made to remove each record from the PendingEntriesTree once again, which is not necessary as the record is already removed.
> This ticket is to improve the cleanup of expired entries by skipping the second attempt to remove each record from the PendingEntriesTree.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-21990) [PerfStat] Report can skip properties/rows/reads records from remote nodes
Aleksey Plekhanov created IGNITE-21990:
--
Summary: [PerfStat] Report can skip properties/rows/reads records from remote nodes
Key: IGNITE-21990
URL: https://issues.apache.org/jira/browse/IGNITE-21990
Project: Ignite
Issue Type: Bug
Reporter: Aleksey Plekhanov
Assignee: Aleksey Plekhanov

In IGNITE-21863, after processing a "query" record, we store the aggregated result, assuming that after this point there are no more records related to this query. But such records may exist in other files (from other nodes).
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-21961) Don't remove entries one-by-one for in-memory node on shutdown
Aleksey Plekhanov created IGNITE-21961:
--
Summary: Don't remove entries one-by-one for in-memory node on shutdown
Key: IGNITE-21961
URL: https://issues.apache.org/jira/browse/IGNITE-21961
Project: Ignite
Issue Type: Improvement
Reporter: Aleksey Plekhanov
Assignee: Aleksey Plekhanov

Currently, for an in-memory node, we remove each entry one-by-one on cluster deactivation or on node shutdown. If there are a lot of entries in a cache, this can take a long time. But it's a redundant action, since all page memory will be released after deactivation/shutdown anyway.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
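The idea can be illustrated with an on-heap stand-in (sketch only; Ignite's page memory is off-heap, so the real change skips per-entry work and releases the memory region as a whole): per-entry removal does O(n) work on state that is about to be discarded wholesale, while dropping the whole structure reclaims it in one step.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ShutdownSketch {
    static ConcurrentMap<Integer, byte[]> entries = new ConcurrentHashMap<>();

    /** Current behavior: remove every entry one by one before shutdown (O(n) pointless work). */
    static void clearOneByOne() {
        for (Integer key : entries.keySet())
            entries.remove(key);
    }

    /** Proposed behavior: skip per-entry removal; the whole region is discarded as a unit. */
    static void dropRegion() {
        entries = new ConcurrentHashMap<>(); // old map becomes unreachable in one step
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1_000; i++)
            entries.put(i, new byte[64]);
        dropRegion();
        System.out.println(entries.size()); // 0
    }
}
```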
[jira] [Created] (IGNITE-21886) Refactor CompressionProcessorImpl, move code partially to ignite-core module
Aleksey Plekhanov created IGNITE-21886:
--
Summary: Refactor CompressionProcessorImpl, move code partially to ignite-core module
Key: IGNITE-21886
URL: https://issues.apache.org/jira/browse/IGNITE-21886
Project: Ignite
Issue Type: Improvement
Reporter: Aleksey Plekhanov
Assignee: Aleksey Plekhanov

CompressionProcessorImpl contains some logic which can be used without extra dependencies. For example, all page compaction logic is implemented in ignite-core, but we still can't enable the SKIP_GARBAGE compression mode without the ignite-compress module. SKIP_GARBAGE mode without extra dependencies can be helpful for WAL page snapshot compression or for IGNITE-20697.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-21863) [PerfStat] OOM when using build-report.sh from performance statistics
[ https://issues.apache.org/jira/browse/IGNITE-21863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov reassigned IGNITE-21863: -- Assignee: Aleksey Plekhanov > [PerfStat] OOM when using build-report.sh from performance statistics > -- > > Key: IGNITE-21863 > URL: https://issues.apache.org/jira/browse/IGNITE-21863 > Project: Ignite > Issue Type: Improvement >Affects Versions: 2.16 >Reporter: Luchnikov Alexander >Assignee: Aleksey Plekhanov >Priority: Minor > Labels: ise > > The problem is reproduced on a large volume collected using > {code:java} > ./control.sh --performance-statistics > {code} > statistics, in our cases the total volume was 50GB. > Increasing xmx to 64gb did not solve the problem. > {code:java} > Exception in thread "main" java.lang.OutOfMemoryError: Java heap space > at java.base/java.util.HashMap.resize(HashMap.java:700) > at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1112) > at > org.apache.ignite.internal.performancestatistics.handlers.QueryHandler.queryProperty(QueryHandler.java:160) > at > org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.deserialize(FilePerformanceStatisticsReader.java:345) > at > org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.read(FilePerformanceStatisticsReader.java:169) > at > org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.createReport(PerformanceStatisticsReportBuilder.java:124) > at > org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.main(PerformanceStatisticsReportBuilder.java:69) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-21866) Calcite engine. Add memory quotas control for Cursor.getAll() method
Aleksey Plekhanov created IGNITE-21866: -- Summary: Calcite engine. Add memory quotas control for Cursor.getAll() method Key: IGNITE-21866 URL: https://issues.apache.org/jira/browse/IGNITE-21866 Project: Ignite Issue Type: Improvement Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Cursor.getAll() method can collect a lot of rows before return result to the user and can cause OOM errors. We should control memory consumption for Cursor.getAll() the same way as we do for execution nodes. -- This message was sent by Atlassian Jira (v8.20.10#820010)
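A sketch of the proposed control for the message above, assuming a per-query byte quota charged per row while the cursor is drained (names and the quota-accounting strategy are illustrative; the Calcite engine tracks row sizes through its own memory-tracker machinery):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class QuotaGetAllSketch {
    /** Thrown when a query's materialized result exceeds its memory quota. */
    static class MemoryQuotaExceededException extends RuntimeException {
        MemoryQuotaExceededException(long quota) {
            super("Memory quota exceeded: " + quota + " bytes");
        }
    }

    /** Drains the cursor into a list, charging an estimated size per row against a byte quota. */
    static <T> List<T> getAllWithQuota(Iterator<T> cur, long quotaBytes, long rowSizeEstimate) {
        List<T> res = new ArrayList<>();
        long used = 0;
        while (cur.hasNext()) {
            used += rowSizeEstimate;
            if (used > quotaBytes)
                throw new MemoryQuotaExceededException(quotaBytes);
            res.add(cur.next());
        }
        return res;
    }

    public static void main(String[] args) {
        List<Integer> rows = Arrays.asList(1, 2, 3, 4);
        // Quota large enough for all rows: getAll succeeds.
        System.out.println(getAllWithQuota(rows.iterator(), 1024, 100).size()); // 4
        // Quota for two rows only: the third row trips the limit, as execution nodes already do.
        try {
            getAllWithQuota(rows.iterator(), 250, 100);
        } catch (MemoryQuotaExceededException e) {
            System.out.println("quota hit");
        }
    }
}
```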
[jira] [Resolved] (IGNITE-21478) OOM crash with unstable topology
[ https://issues.apache.org/jira/browse/IGNITE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov resolved IGNITE-21478.
Fix Version/s: 2.17
Release Note: Fixed OOM crash on unstable topology
Resolution: Fixed
[~yuri.naryshkin], looks good to me. Merged to master. Thanks for the contribution!
> OOM crash with unstable topology
>
>
> Key: IGNITE-21478
> URL: https://issues.apache.org/jira/browse/IGNITE-21478
> Project: Ignite
> Issue Type: Bug
> Reporter: Luchnikov Alexander
> Assignee: Yuri Naryshkin
> Priority: Minor
> Labels: ise
> Fix For: 2.17
>
> Attachments: HistoMinorTop.png, histo.png
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> User cases:
> 1) Frequent entry/exit of a thick client into the topology leads to a crash of the server node due to OOM.
> 2) Frequent creation and destruction of caches leads to a server node crash due to OOM.
> *Real case*
> Part of the log before the OOM crash, pay attention to *topVer=20098*
> {code:java}
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>     ^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
>     ^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, minorTopVer=6]
>     ^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, commPort=47100]
>     ^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
>     ^-- Heap [used=867MB, free=15.29%, comm=1024MB]
>     ^-- Outbound messages queue [size=0]
>     ^-- Public thread pool [active=0, idle=7, qSize=0]
>     ^-- System thread pool [active=0, idle=8, qSize=0]
>     ^-- Striped thread pool [active=0, idle=8, qSize=0]
> {code}
> Histogram from heap-dump after node failed
> !histo.png!
> *MinorTop example* > {code:java} > @Test > public void testMinorVer() throws Exception { > Ignite server = startGrids(1); > IgniteEx client = startClientGrid(); > String cacheName = "cacheName"; > for (int i = 0; i < 500; i++) { > client.getOrCreateCache(cacheName); > client.destroyCache(cacheName); > } > System.err.println("Heap dump time"); > Thread.sleep(100); > } > {code} > {code:java} > [INFO > ][exchange-worker-#149%internal.IgniteOomTest%][GridCachePartitionExchangeManager] > AffinityTopologyVersion [topVer=2, minorTopVer=1000], > evt=DISCOVERY_CUSTOM_EVT, evtNode=52b4c130-1a01-4858-813a-ebc8a5dabf1e, > client=true] > {code} > !HistoMinorTop.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-21769) [ducktests] ignitetest/tests/dns_failure_test.py doesn't start with JDK11 and JDK17
[ https://issues.apache.org/jira/browse/IGNITE-21769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21769: --- Ignite Flags: (was: Docs Required,Release Notes Required) > [ducktests] ignitetest/tests/dns_failure_test.py doesn't start with JDK11 and > JDK17 > --- > > Key: IGNITE-21769 > URL: https://issues.apache.org/jira/browse/IGNITE-21769 > Project: Ignite > Issue Type: Task >Reporter: Sergey Korotkov >Assignee: Sergey Korotkov >Priority: Minor > Labels: ise > Fix For: 2.17 > > Attachments: dns_failure_test_jdk11.zip, dns_failure_test_jdk17.zip > > Time Spent: 20m > Remaining Estimate: 0h > > ignitetest/tests/dns_failure_test.py fails on JDK11 and JDK17 > Ignite node can not start with the following exception: > {noformat} > Class not found: java.net.BlockingDnsInet4AddressImpl: > check impl.prefix property in your properties file. > java.lang.Error: System property impl.prefix incorrect > at java.base/java.net.InetAddress.loadImpl(InetAddress.java:1734) > at > java.base/java.net.InetAddressImplFactory.create(InetAddress.java:1807) > at java.base/java.net.InetAddress.(InetAddress.java:1141) > at > org.apache.logging.log4j.core.util.NetUtils.getLocalHostname(NetUtils.java:56) > at > org.apache.logging.log4j.core.LoggerContext.lambda$setConfiguration$0(LoggerContext.java:625) > at > java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1705) > at > org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:625) > at > org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:713) > at > org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:735) > at > org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:260) > at > org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:154) > at > 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:46) > at org.apache.logging.log4j.LogManager.getContext(LogManager.java:197) > at > org.apache.commons.logging.LogAdapter$Log4jLog.(LogAdapter.java:155) > at > org.apache.commons.logging.LogAdapter$Log4jAdapter.createLog(LogAdapter.java:122) > at org.apache.commons.logging.LogAdapter.createLog(LogAdapter.java:89) > at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:67) > at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:59) > at > org.springframework.context.support.AbstractApplicationContext.(AbstractApplicationContext.java:164) > at > org.springframework.context.support.GenericApplicationContext.(GenericApplicationContext.java:111) > at > org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.prepareSpringContext(IgniteSpringHelperImpl.java:458) > at > org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:386) > {noformat} > Logs attached obtained with the below jdk versions: > OpenJDK Runtime Environment 11.0.19+7 Eclipse Adoptium OpenJDK 64-Bit Server > VM 11.0.19+7 > OpenJDK Runtime Environment 17.0.7+7 Eclipse Adoptium OpenJDK 64-Bit Server > VM 17.0.7+7 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-21630) Cluster falls apart on topology change when DNS service is unavailable
Aleksey Plekhanov created IGNITE-21630: -- Summary: Cluster falls apart on topology change when DNS service is unavailable Key: IGNITE-21630 URL: https://issues.apache.org/jira/browse/IGNITE-21630 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Requests to the DNS service are performed synchronously by some critical discovery threads. The timeout for such requests can't be controlled by Java code (see [https://bugs.openjdk.org/browse/JDK-6450279]). This leads to node segmentation and the cluster falling apart. For example, the stack of the {{tcp-disco-msg-worker}} thread with a request to the DNS service: {noformat} at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1330) at java.net.InetAddress.getAllByName0(InetAddress.java:1283) at java.net.InetAddress.getAllByName(InetAddress.java:1199) at java.net.InetAddress.getAllByName(InetAddress.java:1127) at java.net.InetAddress.getByName(InetAddress.java:1077) at java.net.InetSocketAddress.<init>(InetSocketAddress.java:220) at org.apache.ignite.internal.util.IgniteUtils.createResolved(IgniteUtils.java:9829) at org.apache.ignite.internal.util.IgniteUtils.toSocketAddresses(IgniteUtils.java:9792) at org.apache.ignite.internal.util.IgniteUtils.toSocketAddresses(IgniteUtils.java:9770) at org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode.socketAddresses(TcpDiscoveryNode.java:392) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.getNodeAddresses(TcpDiscoverySpi.java:1267) at org.apache.ignite.spi.discovery.tcp.ServerImpl.interruptPing(ServerImpl.java:985) at org.apache.ignite.spi.discovery.tcp.ServerImpl.access$6800(ServerImpl.java:206) at org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeLeftMessage(ServerImpl.java:5433) at org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:3221) at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2894) {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
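Since JDK-6450279 leaves the resolver timeout outside the application's control, one common mitigation pattern is to run the blocking call in a separate thread and bound the wait externally. The sketch below uses a simulated blocking task instead of a real `InetAddress.getAllByName` call so it is deterministic; it illustrates the general pattern, not Ignite's actual fix, and the method names are ours.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedLookup {
    /** Runs a blocking task and gives up after the timeout instead of hanging the caller. */
    public static <T> T callWithTimeout(Callable<T> task, long timeoutMs, T fallback) {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        try {
            Future<T> fut = exec.submit(task);
            return fut.get(timeoutMs, TimeUnit.MILLISECONDS);
        }
        catch (TimeoutException | InterruptedException | ExecutionException e) {
            return fallback; // Treat an unresponsive resolver as "no result".
        }
        finally {
            exec.shutdownNow(); // The stuck worker thread is abandoned, not joined.
        }
    }

    public static void main(String[] args) {
        // A fast task completes within the bound.
        System.out.println(callWithTimeout(() -> "resolved", 1000, "timeout"));
        // A slow task (simulating a hung DNS query) hits the fallback.
        System.out.println(callWithTimeout(() -> { Thread.sleep(2000); return "resolved"; }, 100, "timeout"));
    }
}
```

The trade-off is that the abandoned lookup thread still blocks until the OS-level resolver gives up, so the pool must be sized for that leakage.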
[jira] [Created] (IGNITE-21587) Calcite engine. Add operations authorization
Aleksey Plekhanov created IGNITE-21587: -- Summary: Calcite engine. Add operations authorization Key: IGNITE-21587 URL: https://issues.apache.org/jira/browse/IGNITE-21587 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Currently, the Calcite engine does not check authorization for the SELECT operation. For INSERT/UPDATE/DELETE/MERGE operations, authorization is checked internally by {{cache.invoke}}, but the security context is not initialized, so the server node's security context is checked instead. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-14317) IgniteCache.removeAsync(key,val) fails inside an optimistic transaction
[ https://issues.apache.org/jira/browse/IGNITE-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815247#comment-17815247 ] Aleksey Plekhanov commented on IGNITE-14317: Related sync op fix: https://github.com/apache/ignite/commit/126ab60fe6fa0f47e19a26dafecc7feb7c57b60b > IgniteCache.removeAsync(key,val) fails inside an optimistic transaction > --- > > Key: IGNITE-14317 > URL: https://issues.apache.org/jira/browse/IGNITE-14317 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.9.1 >Reporter: Denis Garus >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > > [reproducer|https://github.com/apache/ignite/pull/8841/files] > IgniteCache.removeAsync(key,val) fails inside an optimistic tx with the > exception: > {code:java} > [17:39:28] (err) Failed to notify listener: > o.a.i.i.processors.cache.distributed.near.GridNearTxLocal$6...@19c520dbjava.lang.AssertionError[17:39:28] > (err) Failed to notify listener: > o.a.i.i.processors.cache.distributed.near.GridNearTxLocal$6...@19c520dbjava.lang.AssertionError > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$17.apply(GridNearTxLocal.java:2955) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$17.apply(GridNearTxLocal.java:2937) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.processLoaded(GridNearTxLocal.java:3475) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$21.apply(GridNearTxLocal.java:3217) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$21.apply(GridNearTxLocal.java:3212) > at > org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78) > at > org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:70) > at > org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:30) > 
at > org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:347) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:335) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:511) > at > org.apache.ignite.internal.processors.cache.GridCacheFutureAdapter.onDone(GridCacheFutureAdapter.java:55) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:490) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.onDone(GridPartitionedSingleGetFuture.java:935) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:467) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.setSkipValueResult(GridPartitionedSingleGetFuture.java:759) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.onResult(GridPartitionedSingleGetFuture.java:636) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processNearSingleGetResponse(GridDhtCacheAdapter.java:368) > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.access$100(GridDhtColocatedCache.java:88) > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache$2.apply(GridDhtColocatedCache.java:133) > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache$2.apply(GridDhtColocatedCache.java:131) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1143) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:592) > at > 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:393) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:319) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:309) > at > org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1908) > at > org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1529) > at > org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1422) > at >
[jira] [Assigned] (IGNITE-14317) IgniteCache.removeAsync(key,val) fails inside an optimistic transaction
[ https://issues.apache.org/jira/browse/IGNITE-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov reassigned IGNITE-14317: -- Assignee: Aleksey Plekhanov > IgniteCache.removeAsync(key,val) fails inside an optimistic transaction > --- > > Key: IGNITE-14317 > URL: https://issues.apache.org/jira/browse/IGNITE-14317 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.9.1 >Reporter: Denis Garus >Assignee: Aleksey Plekhanov >Priority: Major > > [reproducer|https://github.com/apache/ignite/pull/8841/files] > IgniteCache.removeAsync(key,val) fails inside an optimistic tx with the > exception: > {code:java} > [17:39:28] (err) Failed to notify listener: > o.a.i.i.processors.cache.distributed.near.GridNearTxLocal$6...@19c520dbjava.lang.AssertionError[17:39:28] > (err) Failed to notify listener: > o.a.i.i.processors.cache.distributed.near.GridNearTxLocal$6...@19c520dbjava.lang.AssertionError > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$17.apply(GridNearTxLocal.java:2955) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$17.apply(GridNearTxLocal.java:2937) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.processLoaded(GridNearTxLocal.java:3475) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$21.apply(GridNearTxLocal.java:3217) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$21.apply(GridNearTxLocal.java:3212) > at > org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78) > at > org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:70) > at > org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:30) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399) > at > 
org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:347) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:335) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:511) > at > org.apache.ignite.internal.processors.cache.GridCacheFutureAdapter.onDone(GridCacheFutureAdapter.java:55) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:490) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.onDone(GridPartitionedSingleGetFuture.java:935) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:467) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.setSkipValueResult(GridPartitionedSingleGetFuture.java:759) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.onResult(GridPartitionedSingleGetFuture.java:636) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processNearSingleGetResponse(GridDhtCacheAdapter.java:368) > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.access$100(GridDhtColocatedCache.java:88) > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache$2.apply(GridDhtColocatedCache.java:133) > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache$2.apply(GridDhtColocatedCache.java:131) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1143) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:592) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:393) > at > 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:319) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:309) > at > org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1908) > at > org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1529) > at > org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1422) > at > org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55) > at > org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:569) > at
[jira] [Updated] (IGNITE-14317) IgniteCache.removeAsync(key,val) fails inside an optimistic transaction
[ https://issues.apache.org/jira/browse/IGNITE-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-14317: --- Labels: ise (was: ) > IgniteCache.removeAsync(key,val) fails inside an optimistic transaction > --- > > Key: IGNITE-14317 > URL: https://issues.apache.org/jira/browse/IGNITE-14317 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.9.1 >Reporter: Denis Garus >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > > [reproducer|https://github.com/apache/ignite/pull/8841/files] > IgniteCache.removeAsync(key,val) fails inside an optimistic tx with the > exception: > {code:java} > [17:39:28] (err) Failed to notify listener: > o.a.i.i.processors.cache.distributed.near.GridNearTxLocal$6...@19c520dbjava.lang.AssertionError[17:39:28] > (err) Failed to notify listener: > o.a.i.i.processors.cache.distributed.near.GridNearTxLocal$6...@19c520dbjava.lang.AssertionError > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$17.apply(GridNearTxLocal.java:2955) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$17.apply(GridNearTxLocal.java:2937) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.processLoaded(GridNearTxLocal.java:3475) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$21.apply(GridNearTxLocal.java:3217) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$21.apply(GridNearTxLocal.java:3212) > at > org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78) > at > org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:70) > at > org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:30) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399) > at > 
org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:347) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:335) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:511) > at > org.apache.ignite.internal.processors.cache.GridCacheFutureAdapter.onDone(GridCacheFutureAdapter.java:55) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:490) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.onDone(GridPartitionedSingleGetFuture.java:935) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:467) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.setSkipValueResult(GridPartitionedSingleGetFuture.java:759) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.onResult(GridPartitionedSingleGetFuture.java:636) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processNearSingleGetResponse(GridDhtCacheAdapter.java:368) > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.access$100(GridDhtColocatedCache.java:88) > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache$2.apply(GridDhtColocatedCache.java:133) > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache$2.apply(GridDhtColocatedCache.java:131) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1143) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:592) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:393) > at > 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:319) > at > org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:309) > at > org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1908) > at > org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1529) > at > org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1422) > at > org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55) > at >
[jira] [Updated] (IGNITE-21478) OOM crash with unstable topology
[ https://issues.apache.org/jira/browse/IGNITE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21478: --- Labels: ise (was: ) > OOM crash with unstable topology > > > Key: IGNITE-21478 > URL: https://issues.apache.org/jira/browse/IGNITE-21478 > Project: Ignite > Issue Type: Bug >Reporter: Luchnikov Alexander >Priority: Minor > Labels: ise > Attachments: histo.png > > > Use cases: > 1) Frequent entry/exit of a thick client into the topology leads to a crash > of the server node due to OOM. > 2) Frequent creation and destruction of caches leads to a server node crash due > to OOM. > Part of the log before the OOM crash, pay attention to *topVer=20098* > {code:java} > Metrics for local node (to disable set 'metricsLogFrequency' to 0) > ^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274] > ^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, > minorTopVer=6] > ^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, > commPort=47100] > ^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%] > ^-- Heap [used=867MB, free=15.29%, comm=1024MB] > ^-- Outbound messages queue [size=0] > ^-- Public thread pool [active=0, idle=7, qSize=0] > ^-- System thread pool [active=0, idle=8, qSize=0] > ^-- Striped thread pool [active=0, idle=8, qSize=0] > {code} > Histogram from heap-dump after node failed > !histo.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-21478) OOM crash with unstable topology
[ https://issues.apache.org/jira/browse/IGNITE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21478: --- Ignite Flags: Release Notes Required (was: Docs Required,Release Notes Required) > OOM crash with unstable topology > > > Key: IGNITE-21478 > URL: https://issues.apache.org/jira/browse/IGNITE-21478 > Project: Ignite > Issue Type: Bug >Reporter: Luchnikov Alexander >Priority: Minor > Labels: ise > Attachments: histo.png > > > Use cases: > 1) Frequent entry/exit of a thick client into the topology leads to a crash > of the server node due to OOM. > 2) Frequent creation and destruction of caches leads to a server node crash due > to OOM. > Part of the log before the OOM crash, pay attention to *topVer=20098* > {code:java} > Metrics for local node (to disable set 'metricsLogFrequency' to 0) > ^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274] > ^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, > minorTopVer=6] > ^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, > commPort=47100] > ^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%] > ^-- Heap [used=867MB, free=15.29%, comm=1024MB] > ^-- Outbound messages queue [size=0] > ^-- Public thread pool [active=0, idle=7, qSize=0] > ^-- System thread pool [active=0, idle=8, qSize=0] > ^-- Striped thread pool [active=0, idle=8, qSize=0] > {code} > Histogram from heap-dump after node failed > !histo.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-21366) AssertionError during the execution of the request
[ https://issues.apache.org/jira/browse/IGNITE-21366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21366: --- Ignite Flags: Release Notes Required (was: Docs Required,Release Notes Required) > AssertionError during the execution of the request > --- > > Key: IGNITE-21366 > URL: https://issues.apache.org/jira/browse/IGNITE-21366 > Project: Ignite > Issue Type: Bug >Reporter: Aleksandr Nikolaev >Assignee: Aleksandr Nikolaev >Priority: Major > Labels: ise > Fix For: 2.17 > > Time Spent: 20m > Remaining Estimate: 0h > > If the GridH2Table#cache size exceeds the int range, we get an AssertionError: > {code} > -26T19:32:35,247][ERROR][main][] Test failed > [test=RowCountTableStatisticsUsageTest#compareJoinsWithConditionsOnBothTables[cacheMode=REPLICATED], > duration=10] > java.lang.AssertionError: totalRowCnt=-4294967096, localRowCount=-2147483548 > at > org.apache.ignite.internal.processors.query.h2.opt.TableStatistics.<init>(TableStatistics.java:34) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.refreshStatsIfNeeded(GridH2Table.java:1055) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.getRowCountApproximation(GridH2Table.java:1013) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.h2.opt.GridH2IndexBase.getRowCountApproximation(GridH2IndexBase.java:226) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.h2.opt.H2ScanIndex.getRowCountApproximation(H2ScanIndex.java:158) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.h2.opt.H2ScanIndex.getCost(H2ScanIndex.java:289) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.h2.opt.H2TableScanIndex.getCost(H2TableScanIndex.java:74) > ~[classes/:?] 
> at org.h2.table.Table.getBestPlanItem(Table.java:714) > ~[h2-1.4.197.jar:1.4.197] > at org.h2.table.TableFilter.getBestPlanItem(TableFilter.java:224) > ~[h2-1.4.197.jar:1.4.197] > at org.h2.table.Plan.calculateCost(Plan.java:121) > ~[h2-1.4.197.jar:1.4.197] > at org.h2.command.dml.Optimizer.testPlan(Optimizer.java:180) > ~[h2-1.4.197.jar:1.4.197] > at org.h2.command.dml.Optimizer.calculateBestPlan(Optimizer.java:81) > ~[h2-1.4.197.jar:1.4.197] > at org.h2.command.dml.Optimizer.optimize(Optimizer.java:239) > ~[h2-1.4.197.jar:1.4.197] > at org.h2.command.dml.Select.preparePlan(Select.java:1018) > ~[h2-1.4.197.jar:1.4.197] > at org.h2.command.dml.Select.prepare(Select.java:884) > ~[h2-1.4.197.jar:1.4.197] > at org.h2.command.dml.Explain.prepare(Explain.java:49) > ~[h2-1.4.197.jar:1.4.197] > at org.h2.command.Parser.prepareCommand(Parser.java:283) > ~[h2-1.4.197.jar:1.4.197] > at org.h2.engine.Session.prepareLocal(Session.java:611) > ~[h2-1.4.197.jar:1.4.197] > at org.h2.engine.Session.prepareCommand(Session.java:549) > ~[h2-1.4.197.jar:1.4.197] > at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1247) > ~[h2-1.4.197.jar:1.4.197] > at > org.h2.jdbc.JdbcPreparedStatement.(JdbcPreparedStatement.java:76) > ~[h2-1.4.197.jar:1.4.197] > at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:694) > ~[h2-1.4.197.jar:1.4.197] > at > org.apache.ignite.internal.processors.query.h2.H2Connection.prepareStatementNoCache(H2Connection.java:191) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.h2.H2PooledConnection.prepareStatementNoCache(H2PooledConnection.java:109) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.h2.QueryParser.parseH2(QueryParser.java:341) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.h2.QueryParser.parse0(QueryParser.java:225) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.h2.QueryParser.parse(QueryParser.java:138) > ~[classes/:?] 
> at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1011) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.GridQueryProcessor$2.applyx(GridQueryProcessor.java:3115) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.GridQueryProcessor$2.applyx(GridQueryProcessor.java:3086) > ~[classes/:?] > at > org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:3821) > ~[classes/:?] > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$3(GridQueryProcessor.java:3132) >
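The negative totalRowCnt=-4294967096 in the trace above is the signature of int overflow: a per-node row count just above Integer.MAX_VALUE wraps to a negative int, and two such wrapped counts sum to exactly that value. A self-contained illustration (the method and variable names are ours, not from GridH2Table):

```java
public class RowCountOverflow {
    /** Buggy shape: per-node counts are truncated to int before being summed into a long. */
    public static long totalWithIntCounts(long perNodeCnt, int nodes) {
        long total = 0;
        for (int i = 0; i < nodes; i++)
            total += (int)perNodeCnt; // Wraps negative when the count exceeds Integer.MAX_VALUE.
        return total;
    }

    /** Fixed shape: a long accumulator (and long per-node counts) keeps the true value. */
    public static long totalAsLong(long perNodeCnt, int nodes) {
        long total = 0;
        for (int i = 0; i < nodes; i++)
            total += perNodeCnt;
        return total;
    }

    public static void main(String[] args) {
        long perNode = Integer.MAX_VALUE + 101L; // 2147483748 rows on one node.
        System.out.println(totalWithIntCounts(perNode, 2)); // -4294967096, the value from the trace.
        System.out.println(totalAsLong(perNode, 2));        // 4294967496, the real count.
    }
}
```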
[jira] [Created] (IGNITE-21421) Calcite engine. Tuple (row) comparison is not working
Aleksey Plekhanov created IGNITE-21421: -- Summary: Calcite engine. Tuple (row) comparison is not working Key: IGNITE-21421 URL: https://issues.apache.org/jira/browse/IGNITE-21421 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Currently, row comparison fails with an error and doesn't use an index. Reproducer: {code:java} sql("CREATE TABLE test (id INTEGER, val INTEGER)"); sql("CREATE INDEX test_idx ON test (id, val)"); sql("INSERT INTO test VALUES (0, 0), (0, 1), (1, 0), (1, 1)"); assertQuery("SELECT * FROM test WHERE (id, val) >= (?, ?)") .withParams(0, 1) //.matches(QueryChecker.containsIndexScan("PUBLIC", "TEST", "TEST_IDX")) .returns(0, 1) .returns(1, 0) .returns(1, 1) .check(); {code} Exception: {noformat} Caused by: java.lang.RuntimeException: while resolving method 'ge[class [Ljava.lang.Object;, class [Ljava.lang.Object;]' in class class org.apache.calcite.runtime.SqlFunctions at org.apache.calcite.linq4j.tree.Types.lookupMethod(Types.java:318) at org.apache.calcite.linq4j.tree.Expressions.call(Expressions.java:449) at org.apache.ignite.internal.processors.query.calcite.exec.exp.RexImpTable$BinaryImplementor.implementSafe(RexImpTable.java:1233) at org.apache.ignite.internal.processors.query.calcite.exec.exp.RexImpTable$AbstractRexCallImplementor.genValueStatement(RexImpTable.java:1950) at org.apache.ignite.internal.processors.query.calcite.exec.exp.RexImpTable$AbstractRexCallImplementor.implement(RexImpTable.java:1911) {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
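For reference, SQL row-value comparison such as (id, val) >= (?, ?) is lexicographic: compare the first components and fall through to the next only on a tie. A minimal standalone implementation of that semantics (independent of Calcite; the names are ours):

```java
public class RowCompare {
    /** Lexicographic comparison of two equal-length integer tuples. */
    public static int compareRows(int[] a, int[] b) {
        for (int i = 0; i < a.length; i++) {
            int c = Integer.compare(a[i], b[i]);
            if (c != 0)
                return c; // First differing component decides.
        }
        return 0; // All components equal.
    }

    public static void main(String[] args) {
        // The reproducer expects (id, val) >= (0, 1) to keep (0,1), (1,0) and (1,1).
        int[][] rows = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        for (int[] row : rows)
            if (compareRows(row, new int[] {0, 1}) >= 0)
                System.out.println(row[0] + "," + row[1]);
    }
}
```

Note this matches the expected rows in the reproducer: (0,0) is excluded because its second component loses the tie-break.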
[jira] [Updated] (IGNITE-21351) NPE on metric "TransactionsHoldingLockNumber" if tx is not initialized
[ https://issues.apache.org/jira/browse/IGNITE-21351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21351: --- Labels: ise (was: ) > NPE on metric "TransactionsHoldingLockNumber" if tx is not initialized > -- > > Key: IGNITE-21351 > URL: https://issues.apache.org/jira/browse/IGNITE-21351 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > > An attempt to get the metric "TransactionsHoldingLockNumber" via JMX for a > not-yet-initialized transaction fails with: > {noformat} > java.lang.NullPointerException > at > org.apache.ignite.internal.processors.cache.transactions.IgniteTxStateImpl.empty(IgniteTxStateImpl.java:448) > at > org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.empty(IgniteTxLocalAdapter.java:244) > at > org.apache.ignite.internal.processors.cache.transactions.TransactionMetricsAdapter.txHoldingLockNum(TransactionMetricsAdapter.java:368) > at > org.apache.ignite.internal.processors.cache.transactions.TransactionMetricsAdapter.getTransactionsHoldingLockNumber(TransactionMetricsAdapter.java:188) > at > org.apache.ignite.internal.processors.cache.transactions.TransactionMetricsAdapter$TransactionMetricsSnapshot.getTransactionsHoldingLockNumber(TransactionMetricsAdapter.java:468) > at > org.apache.ignite.internal.TransactionMetricsMxBeanImpl.getTransactionsHoldingLockNumber(TransactionMetricsMxBeanImpl.java:102) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275) > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-21351) NPE on metric "TransactionsHoldingLockNumber" if tx is not initialized
Aleksey Plekhanov created IGNITE-21351: -- Summary: NPE on metric "TransactionsHoldingLockNumber" if tx is not initialized Key: IGNITE-21351 URL: https://issues.apache.org/jira/browse/IGNITE-21351 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Attempt to get metric "TransactionsHoldingLockNumber" via JMX for not initialized transaction failed with: {noformat} java.lang.NullPointerException at org.apache.ignite.internal.processors.cache.transactions.IgniteTxStateImpl.empty(IgniteTxStateImpl.java:448) at org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.empty(IgniteTxLocalAdapter.java:244) at org.apache.ignite.internal.processors.cache.transactions.TransactionMetricsAdapter.txHoldingLockNum(TransactionMetricsAdapter.java:368) at org.apache.ignite.internal.processors.cache.transactions.TransactionMetricsAdapter.getTransactionsHoldingLockNumber(TransactionMetricsAdapter.java:188) at org.apache.ignite.internal.processors.cache.transactions.TransactionMetricsAdapter$TransactionMetricsSnapshot.getTransactionsHoldingLockNumber(TransactionMetricsAdapter.java:468) at org.apache.ignite.internal.TransactionMetricsMxBeanImpl.getTransactionsHoldingLockNumber(TransactionMetricsMxBeanImpl.java:102) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275) {noformat} -- This message was sent by 
Atlassian Jira (v8.20.10#820010)
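A fix for this kind of metric-path NPE typically guards against transactions whose internal state object has not been initialized yet. Below is a minimal, hedged sketch of that idea; the class and field names are hypothetical stand-ins, not Ignite's actual {{IgniteTxStateImpl}} internals:

```java
import java.util.Arrays;
import java.util.List;

// Sketch: count transactions holding locks while tolerating
// transactions whose state object has not been initialized yet.
public class TxMetricSketch {
    /** Minimal stand-in for a transaction adapter; txState is null before init. */
    static class Tx {
        final Object txState; // null until the transaction is initialized

        Tx(Object txState) { this.txState = txState; }

        /** Safe check: a not-yet-initialized transaction is treated as empty. */
        boolean empty() {
            return txState == null || ((List<?>)txState).isEmpty();
        }
    }

    /** Counts transactions that hold at least one lock entry. */
    public static long txHoldingLockNum(List<Tx> txs) {
        return txs.stream().filter(tx -> !tx.empty()).count();
    }

    /** Sample mix: uninitialized, lock-holding, and initialized-but-empty. */
    public static List<Tx> sample() {
        return Arrays.asList(
            new Tx(null),                   // not initialized yet
            new Tx(Arrays.asList("entry")), // holds a lock entry
            new Tx(Arrays.asList()));       // initialized, empty
    }
}
```

The point of the sketch is only the null guard in {{empty()}}: without it, the metric path dereferences the uninitialized state and throws, exactly as in the stack trace above.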
[jira] [Created] (IGNITE-21349) Calcite engine. Failure on DDL when INCLUDE_SENSITIVE is false and DDL statement contains literals
Aleksey Plekhanov created IGNITE-21349: -- Summary: Calcite engine. Failure on DDL when INCLUDE_SENSITIVE is false and DDL statement contains literals Key: IGNITE-21349 URL: https://issues.apache.org/jira/browse/IGNITE-21349 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Queries like {noformat} CREATE INDEX ON test(val) INLINE_SIZE 10{noformat} can't be executed when the property IGNITE_TO_STRING_INCLUDE_SENSITIVE=false. Error stack: {noformat} java.lang.UnsupportedOperationException: class org.apache.calcite.sql.SqlSyntax$7: SPECIAL at org.apache.calcite.util.Util.needToImplement(Util.java:) at org.apache.calcite.sql.SqlSyntax$7.unparse(SqlSyntax.java:129) at org.apache.calcite.sql.SqlOperator.unparse(SqlOperator.java:385) at org.apache.calcite.sql.SqlDialect.unparseCall(SqlDialect.java:466) at org.apache.calcite.sql.SqlCall.unparse(SqlCall.java:126) at org.apache.calcite.sql.SqlNode.toSqlString(SqlNode.java:156) at org.apache.calcite.sql.SqlNode.toString(SqlNode.java:131) at org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.removeSensitive(CalciteQueryProcessor.java:555) at org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.parseAndProcessQuery(CalciteQueryProcessor.java:520) {noformat} A SqlCall is created when cloning custom DDL commands, and Calcite can't unparse it. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-21171) Calcite engine. Field nullability flag lost for data types with precession or scale
[ https://issues.apache.org/jira/browse/IGNITE-21171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21171: --- Release Note: SQL Calcite: Fixed column nullability check for data types with precession or scale (was: Fixed column nullability check for data types with precession or scale) > Calcite engine. Field nullability flag lost for data types with precession or > scale > --- > > Key: IGNITE-21171 > URL: https://issues.apache.org/jira/browse/IGNITE-21171 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > Fix For: 2.17 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Reproducer: > {code:java} > CREATE TABLE test(id INT PRIMARY KEY, val DECIMAL(10,2)); > INSERT INTO test(id, val) VALUES (0, NULL); {code} > Fail with: {{Column 'VAL' has no default value and does not allow NULLs}} > But it works if {{val}} data type is {{DECIMAL}} or {{INT}} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-21315) Node can't join the cluster when create index in progress and caches have the same deploymentId
[ https://issues.apache.org/jira/browse/IGNITE-21315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21315: --- Summary: Node can't join the cluster when create index in progress and caches have the same deploymentId (was: Node can't join then cluster when create index in progress and caches have the same deploymentId) > Node can't join the cluster when create index in progress and caches have the > same deploymentId > --- > > Key: IGNITE-21315 > URL: https://issues.apache.org/jira/browse/IGNITE-21315 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > > Reproducer: > {code:java} > public class DynamicIndexCreateAfterClusterRestartTest extends > GridCommonAbstractTest { > /** {@inheritDoc} */ > @Override protected IgniteConfiguration getConfiguration(String > igniteInstanceName) throws Exception { > IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName) > .setDataStorageConfiguration( > new > DataStorageConfiguration().setDefaultDataRegionConfiguration( > new > DataRegionConfiguration().setPersistenceEnabled(true))); > cfg.setConsistentId(igniteInstanceName); > return cfg; > } > /** */ > @Test > public void testNodeJoinOnCreateIndex() throws Exception { > IgniteEx grid = startGrids(2); > grid.cluster().state(ClusterState.ACTIVE); > grid.getOrCreateCache(new > CacheConfiguration<>("CACHE1").setSqlSchema("PUBLIC") > .setIndexedTypes(Integer.class, Integer.class)); > grid.getOrCreateCache(new > CacheConfiguration<>("CACHE2").setSqlSchema("PUBLIC") > .setIndexedTypes(Integer.class, TestValue.class)); > stopAllGrids(); > startGrids(2); > try (IgniteDataStreamer ds = > grid(0).dataStreamer("CACHE2")) { > for (int i = 0; i < 1_500_000; i++) > ds.addData(i, new TestValue(i)); > } > GridTestUtils.runAsync(() -> { > grid(1).cache("CACHE2").query(new SqlFieldsQuery("CREATE INDEX ON > TestValue(val)")).getAll(); > }); > doSleep(100); > 
stopGrid(0, true); > cleanPersistenceDir(getTestIgniteInstanceName(0)); > startGrid(0); > } > /** */ > private static class TestValue { > /** */ > @QuerySqlField > private final int val; > /** */ > private TestValue(int val) { > this.val = val; > } > } > } > {code} > Fails on last node join with an exception: > {noformat} > java.lang.AssertionError > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart0(GridQueryProcessor.java:1124) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart(GridQueryProcessor.java:1257) > at > org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$ff7b936b$1(GridCacheProcessor.java:1869) > at > org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$16(GridCacheProcessor.java:1754) > at > org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1863) > at > org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1753) > at > org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCachesOnLocalJoin(GridCacheProcessor.java:1699) > at > org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.initCachesOnLocalJoin(GridDhtPartitionsExchangeFuture.java:1162) > at > org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:1007) > at > org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3336) > at > org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3170) > at > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125) > at java.lang.Thread.run(Thread.java:748){noformat} -- This message was sent by 
Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-21315) Node can't join then cluster when create index in progress and caches have the same deploymentId
[ https://issues.apache.org/jira/browse/IGNITE-21315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21315: --- Labels: ise (was: ) > Node can't join then cluster when create index in progress and caches have > the same deploymentId > > > Key: IGNITE-21315 > URL: https://issues.apache.org/jira/browse/IGNITE-21315 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > > Reproducer: > {code:java} > public class DynamicIndexCreateAfterClusterRestartTest extends > GridCommonAbstractTest { > /** {@inheritDoc} */ > @Override protected IgniteConfiguration getConfiguration(String > igniteInstanceName) throws Exception { > IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName) > .setDataStorageConfiguration( > new > DataStorageConfiguration().setDefaultDataRegionConfiguration( > new > DataRegionConfiguration().setPersistenceEnabled(true))); > cfg.setConsistentId(igniteInstanceName); > return cfg; > } > /** */ > @Test > public void testNodeJoinOnCreateIndex() throws Exception { > IgniteEx grid = startGrids(2); > grid.cluster().state(ClusterState.ACTIVE); > grid.getOrCreateCache(new > CacheConfiguration<>("CACHE1").setSqlSchema("PUBLIC") > .setIndexedTypes(Integer.class, Integer.class)); > grid.getOrCreateCache(new > CacheConfiguration<>("CACHE2").setSqlSchema("PUBLIC") > .setIndexedTypes(Integer.class, TestValue.class)); > stopAllGrids(); > startGrids(2); > try (IgniteDataStreamer ds = > grid(0).dataStreamer("CACHE2")) { > for (int i = 0; i < 1_500_000; i++) > ds.addData(i, new TestValue(i)); > } > GridTestUtils.runAsync(() -> { > grid(1).cache("CACHE2").query(new SqlFieldsQuery("CREATE INDEX ON > TestValue(val)")).getAll(); > }); > doSleep(100); > stopGrid(0, true); > cleanPersistenceDir(getTestIgniteInstanceName(0)); > startGrid(0); > } > /** */ > private static class TestValue { > /** */ > @QuerySqlField > private final int val; > /** 
*/ > private TestValue(int val) { > this.val = val; > } > } > } > {code} > Fails on last node join with an exception: > {noformat} > java.lang.AssertionError > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart0(GridQueryProcessor.java:1124) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart(GridQueryProcessor.java:1257) > at > org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$ff7b936b$1(GridCacheProcessor.java:1869) > at > org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$16(GridCacheProcessor.java:1754) > at > org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1863) > at > org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1753) > at > org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCachesOnLocalJoin(GridCacheProcessor.java:1699) > at > org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.initCachesOnLocalJoin(GridDhtPartitionsExchangeFuture.java:1162) > at > org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:1007) > at > org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3336) > at > org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3170) > at > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125) > at java.lang.Thread.run(Thread.java:748){noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-21315) Node can't join then cluster when create index in progress and caches have the same deploymentId
Aleksey Plekhanov created IGNITE-21315: -- Summary: Node can't join then cluster when create index in progress and caches have the same deploymentId Key: IGNITE-21315 URL: https://issues.apache.org/jira/browse/IGNITE-21315 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Reproducer: {code:java} public class DynamicIndexCreateAfterClusterRestartTest extends GridCommonAbstractTest { /** {@inheritDoc} */ @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception { IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName) .setDataStorageConfiguration( new DataStorageConfiguration().setDefaultDataRegionConfiguration( new DataRegionConfiguration().setPersistenceEnabled(true))); cfg.setConsistentId(igniteInstanceName); return cfg; } /** */ @Test public void testNodeJoinOnCreateIndex() throws Exception { IgniteEx grid = startGrids(2); grid.cluster().state(ClusterState.ACTIVE); grid.getOrCreateCache(new CacheConfiguration<>("CACHE1").setSqlSchema("PUBLIC") .setIndexedTypes(Integer.class, Integer.class)); grid.getOrCreateCache(new CacheConfiguration<>("CACHE2").setSqlSchema("PUBLIC") .setIndexedTypes(Integer.class, TestValue.class)); stopAllGrids(); startGrids(2); try (IgniteDataStreamer ds = grid(0).dataStreamer("CACHE2")) { for (int i = 0; i < 1_500_000; i++) ds.addData(i, new TestValue(i)); } GridTestUtils.runAsync(() -> { grid(1).cache("CACHE2").query(new SqlFieldsQuery("CREATE INDEX ON TestValue(val)")).getAll(); }); doSleep(100); stopGrid(0, true); cleanPersistenceDir(getTestIgniteInstanceName(0)); startGrid(0); } /** */ private static class TestValue { /** */ @QuerySqlField private final int val; /** */ private TestValue(int val) { this.val = val; } } } {code} Fails on last node join with an exception: {noformat} java.lang.AssertionError at org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart0(GridQueryProcessor.java:1124) at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart(GridQueryProcessor.java:1257) at org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$ff7b936b$1(GridCacheProcessor.java:1869) at org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$16(GridCacheProcessor.java:1754) at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1863) at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1753) at org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCachesOnLocalJoin(GridCacheProcessor.java:1699) at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.initCachesOnLocalJoin(GridDhtPartitionsExchangeFuture.java:1162) at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:1007) at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3336) at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3170) at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125) at java.lang.Thread.run(Thread.java:748){noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-21171) Calcite engine. Field nullability flag lost for data types with precession or scale
[ https://issues.apache.org/jira/browse/IGNITE-21171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807199#comment-17807199 ] Aleksey Plekhanov commented on IGNITE-21171: [~jooger] it should allow null values, but it isn't allowed now for types with scale. > Calcite engine. Field nullability flag lost for data types with precession or > scale > --- > > Key: IGNITE-21171 > URL: https://issues.apache.org/jira/browse/IGNITE-21171 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > Time Spent: 1h > Remaining Estimate: 0h > > Reproducer: > {code:java} > CREATE TABLE test(id INT PRIMARY KEY, val DECIMAL(10,2)); > INSERT INTO test(id, val) VALUES (0, NULL); {code} > Fails with: {{Column 'VAL' has no default value and does not allow NULLs}} > But it works if {{val}} data type is {{DECIMAL}} or {{INT}} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-21239) Take into account SUSPENDED transaction state for some operations
[ https://issues.apache.org/jira/browse/IGNITE-21239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21239: --- Description: There are some operations in Ignite that check only the {{ACTIVE}} transaction state, but it looks like the {{SUSPENDED}} state should also match the condition for these operations. Transaction state {{SUSPENDED}} is almost the same as {{ACTIVE}}, but detached from a thread. Examples of methods where the {{SUSPENDED}} state should be treated the same way as {{ACTIVE}}: * {{GridCachePartitionExchangeManager#dumpLongRunningOperations0}} * {{IncrementalSnapshotMarkWalFuture#init}} * {{TransactionMetricsAdapter#txHoldingLockNum}} (here the condition is strange, perhaps it should be rewritten) * {{IgniteTxManager#salvageTx}} (not sure about this method, further analysis required) was: There are some operations in Ignite, which check only \{{ACTIVE}} transaction state, but looks like {{SUSPENDED}} state should also match the condition for these operations. Transaction state {{SUSPENDED}} is almost the same as \{{ACTIVE}}, but detached from a thread. Examples of methods where {{SUSPENDED}} state should be threated the same way as {{{}ACTIVE{}}}: * {\{GridCachePartitionExchangeManager#dumpLongRunningOperations0}} * {{IncrementalSnapshotMarkWalFuture#init}} * {{TransactionMetricsAdapter#txHoldingLockNum}} (here condition is strange, perhaps should be rewriten) * {{IgniteTxManager#salvageTx}} (not sure about this method, further analysis required) > Take into account SUSPENDED transaction state for some operations > - > > Key: IGNITE-21239 > URL: https://issues.apache.org/jira/browse/IGNITE-21239 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Priority: Major > Labels: ise > > There are some operations in Ignite that check only the {{ACTIVE}} transaction > state, but it looks like the {{SUSPENDED}} state should also match the condition for > these operations.
> Transaction state {{SUSPENDED}} is almost the same as {{ACTIVE}}, but > detached from a thread. > Examples of methods where the {{SUSPENDED}} state should be treated the same way > as {{ACTIVE}}: > * {{GridCachePartitionExchangeManager#dumpLongRunningOperations0}} > * {{IncrementalSnapshotMarkWalFuture#init}} > * {{TransactionMetricsAdapter#txHoldingLockNum}} (here the condition is strange, > perhaps it should be rewritten) > * {{IgniteTxManager#salvageTx}} (not sure about this method, further > analysis required) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-21239) Take into account SUSPENDED transaction state for some operations
Aleksey Plekhanov created IGNITE-21239: -- Summary: Take into account SUSPENDED transaction state for some operations Key: IGNITE-21239 URL: https://issues.apache.org/jira/browse/IGNITE-21239 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov There are some operations in Ignite that check only the {{ACTIVE}} transaction state, but it looks like the {{SUSPENDED}} state should also match the condition for these operations. Transaction state {{SUSPENDED}} is almost the same as {{ACTIVE}}, but detached from a thread. Examples of methods where the {{SUSPENDED}} state should be treated the same way as {{ACTIVE}}: * {{GridCachePartitionExchangeManager#dumpLongRunningOperations0}} * {{IncrementalSnapshotMarkWalFuture#init}} * {{TransactionMetricsAdapter#txHoldingLockNum}} (here the condition is strange, perhaps it should be rewritten) * {{IgniteTxManager#salvageTx}} (not sure about this method, further analysis required) -- This message was sent by Atlassian Jira (v8.20.10#820010)
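The proposed change boils down to widening a state predicate. A hedged sketch with a stand-alone enum (illustrative only, not Ignite's actual {{TransactionState}} or call sites):

```java
public class TxStateSketch {
    /** Simplified transaction lifecycle states. */
    enum TransactionState { ACTIVE, SUSPENDED, COMMITTED, ROLLED_BACK }

    /** Current check: misses transactions detached from a thread. */
    public static boolean isAliveOld(TransactionState s) {
        return s == TransactionState.ACTIVE;
    }

    /** Proposed check: SUSPENDED is almost the same as ACTIVE, just detached. */
    public static boolean isAlive(TransactionState s) {
        return s == TransactionState.ACTIVE || s == TransactionState.SUSPENDED;
    }
}
```

Operations such as long-running-operation dumps or lock counters would then use {{isAlive}}-style conditions so suspended transactions are not silently skipped.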
[jira] [Commented] (IGNITE-21185) In DistributionFunction strings are compared with == instead of equals()
[ https://issues.apache.org/jira/browse/IGNITE-21185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17805032#comment-17805032 ] Aleksey Plekhanov commented on IGNITE-21185: [~dkryukov] {{name()}} method returns interned string (see [https://github.com/apache/ignite/blob/master/modules/calcite/src/main/java/org/apache/ignite/internal/processors/query/calcite/trait/DistributionFunction.java#L59]), so it's safe to use {{==}} operator here. > In DistributionFunction strings are compared with == instead of equals() > > > Key: IGNITE-21185 > URL: https://issues.apache.org/jira/browse/IGNITE-21185 > Project: Ignite > Issue Type: Bug >Reporter: Dmitrii Kriukov >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Line 157: > {color:#cc7832}if {color}(f0 == f1 || f0.name() == f1.name()) -- This message was sent by Atlassian Jira (v8.20.10#820010)
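The interning argument in the comment above can be checked against plain JDK behavior: {{String.intern()}} returns one canonical instance per distinct contents, so reference comparison is safe for interned strings. A small self-contained demonstration:

```java
public class InternSketch {
    /** Returns a freshly allocated String with the same contents,
     *  guaranteed to be a different instance from the argument. */
    public static String copyOf(String s) {
        return new String(s);
    }

    /** After intern(), equal contents collapse to one canonical instance,
     *  so == comparison becomes valid. */
    public static boolean sameAfterIntern(String a, String b) {
        return a.intern() == b.intern();
    }
}
```

This is why a {{==}} comparison on values the code has interned (as the linked {{DistributionFunction}} source does) is not the bug it appears to be at first glance.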
[jira] [Created] (IGNITE-21225) Redundant lambda object allocation in ClockPageReplacementFlags#setFlag
Aleksey Plekhanov created IGNITE-21225: -- Summary: Redundant lambda object allocation in ClockPageReplacementFlags#setFlag Key: IGNITE-21225 URL: https://issues.apache.org/jira/browse/IGNITE-21225 Project: Ignite Issue Type: Improvement Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Every time we call the {{ClockPageReplacementFlags#setFlag/clearFlag}} methods, a new lambda object is created, since the lambda accesses a variable in the enclosing scope. The {{ClockPageReplacementFlags#setFlag}} method is called every time a page is modified, so it's a relatively hot method and we should avoid new object allocations here. Here is a test that shows the redundant allocations: {code:java} /** */ @Test public void testAllocation() { clockFlags = new ClockPageReplacementFlags(MAX_PAGES_CNT, region.address()); int cnt = 1_000_000; ThreadMXBean bean = (ThreadMXBean)ManagementFactory.getThreadMXBean(); // Warmup. clockFlags.setFlag(0); long allocated0 = bean.getThreadAllocatedBytes(Thread.currentThread().getId()); for (int i = 0; i < cnt; i++) clockFlags.setFlag(i % MAX_PAGES_CNT); long allocated1 = bean.getThreadAllocatedBytes(Thread.currentThread().getId()); assertTrue("Too many bytes allocated: " + (allocated1 - allocated0), allocated1 - allocated0 < cnt); } {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
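The allocation pattern the issue targets can be shown with a capturing lambda in an atomic-update call versus a manual compareAndSet loop that captures nothing. This is a sketch with {{AtomicLong}} standing in for the off-heap flag word; the method names are illustrative, not the actual patch:

```java
import java.util.concurrent.atomic.AtomicLong;

public class FlagSketch {
    /** Allocating version: the lambda captures 'mask' from the enclosing
     *  scope, so a new lambda object is created on every call on typical JVMs. */
    public static long setFlagLambda(AtomicLong word, long mask) {
        return word.updateAndGet(v -> v | mask);
    }

    /** Allocation-free version: a plain compareAndSet loop with local
     *  variables only, nothing is captured and nothing is allocated. */
    public static long setFlag(AtomicLong word, long mask) {
        long prev, next;
        do {
            prev = word.get();
            next = prev | mask;
        } while (next != prev && !word.compareAndSet(prev, next));
        return next;
    }
}
```

Both versions set the same bits; the second simply inlines the retry loop so nothing escapes to the heap on the hot path.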
[jira] [Created] (IGNITE-21183) Thin client: Avoid blocking of client-connector threads by transactional operations
Aleksey Plekhanov created IGNITE-21183: -- Summary: Thin client: Avoid blocking of client-connector threads by transactional operations Key: IGNITE-21183 URL: https://issues.apache.org/jira/browse/IGNITE-21183 Project: Ignite Issue Type: Improvement Components: thin client Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Currently, client-connector threads (workers for thin-client operations) can be blocked for a long time by a cache operation within a transaction. If not enough threads are configured, this can lead to deadlocks. For example, if we have {{n}} threads and {{n+1}} clients that start a pessimistic transaction and try to modify the same key, the first client locks the key, while the other {{n}} clients wait on the locked key and occupy the whole thread pool with blocking operations. The commit/rollback from the first client can never proceed, since all threads are occupied, and the threads can't be released, since they are waiting for the commit/rollback from the first client. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-21171) Calcite engine. Field nullability flag lost for data types with precession or scale
Aleksey Plekhanov created IGNITE-21171: -- Summary: Calcite engine. Field nullability flag lost for data types with precession or scale Key: IGNITE-21171 URL: https://issues.apache.org/jira/browse/IGNITE-21171 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Reproducer: {code:java} CREATE TABLE test(id INT PRIMARY KEY, val DECIMAL(10,2)); INSERT INTO test(id, val) VALUES (0, NULL); {code} Fail with: {{Column 'VAL' has no default value and does not allow NULLs}} But it works if {{val}} data type is {{DECIMAL}} or {{INT}} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-21161) Node failure on timeout objects intersection
Aleksey Plekhanov created IGNITE-21161: -- Summary: Node failure on timeout objects intersection Key: IGNITE-21161 URL: https://issues.apache.org/jira/browse/IGNITE-21161 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Timeout objects (see the {{GridTimeoutObject}} class) can intersect by timeout timestamp and id across different subsystems (for example, compute and atomic near-cache update), producing the following error: {noformat} [11:32:10,554][SEVERE][sys-stripe-3-#4%timeout.TimeoutObjectsIntersectionTest1%][] Critical system error detected. Will be handled accordingly to configured handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=CRITICAL_ERROR, err=java.lang.AssertionError: Duplicate timeout object found: o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$DeferredUpdateTimeout@11da2a92]] java.lang.AssertionError: Duplicate timeout object found: org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$DeferredUpdateTimeout@11da2a92 at org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor.addTimeoutObject(GridTimeoutProcessor.java:114) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.sendDeferredUpdateResponse(GridDhtAtomicCache.java:3480) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processDhtAtomicUpdateRequest(GridDhtAtomicCache.java:3427) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$400(GridDhtAtomicCache.java:147) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:310) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:305) at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1164) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:605) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:406) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:324) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:112) at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:314) at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1906) at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1527) at org.apache.ignite.internal.managers.communication.GridIoManager.access$5300(GridIoManager.java:242) at org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1420) at org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55) at org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:637) at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125) at java.lang.Thread.run(Thread.java:748) {noformat} It can happen when one subsystem uses an IgniteUuid with the local node id and another subsystem uses an IgniteUuid with the remote node id. -- This message was sent by Atlassian Jira (v8.20.10#820010)
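The collision can be modeled as a set keyed by (end time, id): two timeout objects from different subsystems that happen to produce the same pair are rejected as duplicates, which is what the assertion above reports. A hedged sketch of the keying idea only; the real {{GridTimeoutProcessor}} data structures differ in detail:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class TimeoutKeySketch {
    /** Composite key a timeout processor might sort and deduplicate by. */
    static final class Key {
        final long endTime;
        final String id;

        Key(long endTime, String id) {
            this.endTime = endTime;
            this.id = id;
        }

        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key)o).endTime == endTime && ((Key)o).id.equals(id);
        }

        @Override public int hashCode() {
            return Objects.hash(endTime, id);
        }
    }

    /** Returns true if the object was registered, false on a duplicate key. */
    public static boolean add(Set<Key> objs, long endTime, String id) {
        return objs.add(new Key(endTime, id));
    }
}
```

If two subsystems derive ids from IgniteUuids that can coincide (one built from the local node id, one from a remote node id), the second registration fails exactly like this.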
[jira] [Created] (IGNITE-21131) Calcite engine. OR operator with dynamic parameters can't be used for index scans
Aleksey Plekhanov created IGNITE-21131: -- Summary: Calcite engine. OR operator with dynamic parameters can't be used for index scans Key: IGNITE-21131 URL: https://issues.apache.org/jira/browse/IGNITE-21131 Project: Ignite Issue Type: Improvement Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Calcite can compose an OR operator with literals into SEARCH/SARG, and SEARCH/SARG can be used for index scans, but we can't do this for an OR operator with dynamic parameters. For example, the expression {{a IN (1, 2, 3)}} can be converted to {{a = 1 OR a = 2 OR a = 3}} and after that to {{SEARCH(a, SARG(1, 2, 3))}}, but the expression {{a IN (?, ?, ?)}} can be converted only to {{a = ? OR a = ? OR a = ?}} and can't be used for an index scan. To fix this issue, we can create ranges from the dynamic parameters during planning, then sort these ranges and remove intersections at runtime. -- This message was sent by Atlassian Jira (v8.20.10#820010)
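The runtime half of the proposed fix, sorting parameter-derived ranges and removing intersections once parameter values are known, can be sketched over simple integer ranges. This is illustrative only and does not use the actual Calcite SARG API:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class RangeSketch {
    /** Inclusive [lo, hi] scan range derived from one dynamic parameter. */
    static final class Range {
        final int lo, hi;

        Range(int lo, int hi) {
            this.lo = lo;
            this.hi = hi;
        }
    }

    /** Sorts ranges by lower bound and merges overlapping or adjacent ones,
     *  yielding the minimal set of index scan bounds. */
    public static List<int[]> merge(List<Range> ranges) {
        List<Range> sorted = new ArrayList<>(ranges);
        sorted.sort(Comparator.comparingInt(r -> r.lo));

        List<int[]> out = new ArrayList<>();
        for (Range r : sorted) {
            int[] last = out.isEmpty() ? null : out.get(out.size() - 1);
            if (last != null && r.lo <= last[1] + 1)   // overlaps or touches
                last[1] = Math.max(last[1], r.hi);
            else
                out.add(new int[] {r.lo, r.hi});
        }
        return out;
    }
}
```

With ranges built during planning as placeholders, this merge step can run after binding, so {{a IN (?, ?, ?)}} becomes a small set of ordered, non-overlapping scan bounds just like the literal case.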
[jira] [Updated] (IGNITE-21082) Ignite Extensions: Excessive memory usage by performance statistics QueryHandler
[ https://issues.apache.org/jira/browse/IGNITE-21082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21082: --- Summary: Ignite Extensions: Excessive memory usage by performance statistics QueryHandler (was: Ignite Extensions: Exessive memory usage by performance statistics QueryHandler) > Ignite Extensions: Excessive memory usage by performance statistics > QueryHandler > > > Key: IGNITE-21082 > URL: https://issues.apache.org/jira/browse/IGNITE-21082 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > > When processing queryProperty or queryRows events, new strings are generated > and written to the maps (as keys or values). Most of the strings are not unique > and are already contained in other maps as keys or values, but as different > instances. The GC can't collect the duplicated strings, and this leads to OOM in some > cases. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-21082) Ignite Extensions: Exessive memory usage by performance statistics QueryHandler
Aleksey Plekhanov created IGNITE-21082: -- Summary: Ignite Extensions: Exessive memory usage by performance statistics QueryHandler Key: IGNITE-21082 URL: https://issues.apache.org/jira/browse/IGNITE-21082 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov When processing queryProperty or queryRows events, new strings are generated and written to the maps (as keys or values). Most of the strings are not unique and are already contained in other maps as keys or values, but as different instances. The GC can't collect the duplicated strings, and this leads to OOM in some cases. -- This message was sent by Atlassian Jira (v8.20.10#820010)
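A common remedy for this kind of duplication is canonicalizing strings through a single pool before storing them in any map, so equal contents share one instance and the copies become collectable. A hedged sketch of the technique; this is not the actual extension code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class DedupSketch {
    /** One canonical instance per distinct string value. */
    private static final ConcurrentMap<String, String> POOL = new ConcurrentHashMap<>();

    /** Returns the pooled instance for equal contents; stores s if absent.
     *  Callers keep only the returned reference, so duplicates stay collectable. */
    public static String canonical(String s) {
        String prev = POOL.putIfAbsent(s, s);
        return prev != null ? prev : s;
    }
}
```

In a handler like the one described, every string used as a map key or value would pass through {{canonical()}} first; a bounded or weak-reference pool would be needed in practice to keep the pool itself from growing without limit.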
[jira] [Updated] (IGNITE-21078) .NET: Platform cache is not updated on topology change when cache id is negative
[ https://issues.apache.org/jira/browse/IGNITE-21078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21078: --- Description: Reproducer: {code:java} public void TestPlatformCacheWithNegativeId() { InitNodes(1); var cacheName = "negative_cache_id"; var cacheConfiguration = new CacheConfiguration(cacheName) { PlatformCacheConfiguration = new PlatformCacheConfiguration() }; var cache = _ignite[0].GetOrCreateCache(cacheConfiguration); var key = 0; var val = new Foo(-1); cache[key] = val; InitNode(1); Assert.AreEqual(val, cache[key]); } {code} Fails with: {noformat} Apache.Ignite.Core.Common.IgniteException : Java exception occurred [class=java.lang.AssertionError, message=Affinity partition is out of range [part=-1, partitions=1024]] > Apache.Ignite.Core.Common.JavaException : java.lang.AssertionError: Affinity partition is out of range [part=-1, partitions=1024] at org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignmentImpl.get(HistoryAffinityAssignmentImpl.java:244) at org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.primaryChanged(GridAffinityAssignmentCache.java:865) at org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryChanged(GridCacheAffinityManager.java:403) at org.apache.ignite.internal.processors.platform.cache.affinity.PlatformAffinityManager.processInStreamOutLong(PlatformAffinityManager.java:62) at org.apache.ignite.internal.processors.platform.PlatformAbstractTarget.processInStreamOutLong(PlatformAbstractTarget.java:87) at org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutLong(PlatformTargetProxyImpl.java:67){noformat} was: Reproducer: {code:java} public void TestPlatformCacheWithNegativeId() { InitNodes(1); var cacheName = "negative_cache_id"; var cacheConfiguration = new CacheConfiguration(cacheName) { PlatformCacheConfiguration = new PlatformCacheConfiguration() }; var cache = 
_ignite[0].GetOrCreateCache(cacheConfiguration); var key = 0; var val = new Foo(-1); cache[key] = val; InitNode(1); Assert.AreEqual(val, cache[key]); } {code} > .NET: Platform cache is not updated on topology change when cache id is > negative > > > Key: IGNITE-21078 > URL: https://issues.apache.org/jira/browse/IGNITE-21078 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > > Reproducer: > {code:java} > public void TestPlatformCacheWithNegativeId() > { > InitNodes(1); > var cacheName = "negative_cache_id"; > var cacheConfiguration = new CacheConfiguration(cacheName) > { > PlatformCacheConfiguration = new PlatformCacheConfiguration() > }; > > var cache = _ignite[0].GetOrCreateCache(cacheConfiguration); > var key = 0; > var val = new Foo(-1); > cache[key] = val; > InitNode(1); > Assert.AreEqual(val, cache[key]); > } {code} > Fails with: > {noformat} > Apache.Ignite.Core.Common.IgniteException : Java exception occurred > [class=java.lang.AssertionError, message=Affinity partition is out of range > [part=-1, partitions=1024]] > > Apache.Ignite.Core.Common.JavaException : java.lang.AssertionError: > Affinity partition is out of range [part=-1, partitions=1024] > at > org.apache.ignite.internal.processors.affinity.HistoryAffinityAssignmentImpl.get(HistoryAffinityAssignmentImpl.java:244) > at > org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.primaryChanged(GridAffinityAssignmentCache.java:865) > at > org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryChanged(GridCacheAffinityManager.java:403) > at > org.apache.ignite.internal.processors.platform.cache.affinity.PlatformAffinityManager.processInStreamOutLong(PlatformAffinityManager.java:62) > at > org.apache.ignite.internal.processors.platform.PlatformAbstractTarget.processInStreamOutLong(PlatformAbstractTarget.java:87) > at > 
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutLong(PlatformTargetProxyImpl.java:67){noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-21078) .NET: Platform cache is not updated on topology change when cache id is negative
[ https://issues.apache.org/jira/browse/IGNITE-21078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-21078: --- Labels: ise (was: ) > .NET: Platform cache is not updated on topology change when cache id is > negative > > > Key: IGNITE-21078 > URL: https://issues.apache.org/jira/browse/IGNITE-21078 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > > Reproducer: > {code:java} > public void TestPlatformCacheWithNegativeId() > { > InitNodes(1); > var cacheName = "negative_cache_id"; > var cacheConfiguration = new CacheConfiguration(cacheName) > { > PlatformCacheConfiguration = new PlatformCacheConfiguration() > }; > > var cache = _ignite[0].GetOrCreateCache(cacheConfiguration); > var key = 0; > var val = new Foo(-1); > cache[key] = val; > InitNode(1); > Assert.AreEqual(val, cache[key]); > } {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-21078) .NET: Platform cache is not updated on topology change when cache id is negative
Aleksey Plekhanov created IGNITE-21078: -- Summary: .NET: Platform cache is not updated on topology change when cache id is negative Key: IGNITE-21078 URL: https://issues.apache.org/jira/browse/IGNITE-21078 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Reproducer: {code:java} public void TestPlatformCacheWithNegativeId() { InitNodes(1); var cacheName = "negative_cache_id"; var cacheConfiguration = new CacheConfiguration(cacheName) { PlatformCacheConfiguration = new PlatformCacheConfiguration() }; var cache = _ignite[0].GetOrCreateCache(cacheConfiguration); var key = 0; var val = new Foo(-1); cache[key] = val; InitNode(1); Assert.AreEqual(val, cache[key]); } {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
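The {{Affinity partition is out of range [part=-1, partitions=1024]}} assertion is characteristic of mapping a negative id to a partition with plain {{%}}, whose result in Java keeps the sign of the dividend. The sketch below is a hypothetical illustration of that class of bug, not the actual Ignite affinity code:

```java
/** Hypothetical sketch: why a negative cache id breaks a naive partition mapping. */
public class SafePartition {
    /** Naive mapping: Java's % keeps the dividend's sign, so negative ids yield negative partitions. */
    static int naivePartition(int cacheId, int partitions) {
        return cacheId % partitions;
    }

    /** Sign-safe mapping: floorMod always returns a value in [0, partitions). */
    static int safePartition(int cacheId, int partitions) {
        return Math.floorMod(cacheId, partitions);
    }

    public static void main(String[] args) {
        System.out.println(naivePartition(-1, 1024)); // -1: out of range, trips the assertion
        System.out.println(safePartition(-1, 1024));  // 1023: valid partition index
    }
}
```

Masking the hash (e.g. {{(hash & Integer.MAX_VALUE) % partitions}}) is another common sign-safe variant seen in hash-to-bucket code.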
[jira] [Created] (IGNITE-21031) Calcite engine. Query fails on performance statistics in case of nested scans
Aleksey Plekhanov created IGNITE-21031: -- Summary: Calcite engine. Query fails on performance statistics in case of nested scans Key: IGNITE-21031 URL: https://issues.apache.org/jira/browse/IGNITE-21031 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov A nested scan can be performed by the Calcite engine, for example in the case of UNION ALL: when the first table scan is completed, the {{downstream().end()}} method is invoked and the UNION ALL operator proceeds to the next table scan. Reproducer: {code:java} public void testPerformanceStatisticsNestedScan() throws Exception { sql(grid(0), "CREATE TABLE test_perf_stat_nested (a INT) WITH template=REPLICATED"); sql(grid(0), "INSERT INTO test_perf_stat_nested VALUES (0), (1), (2), (3), (4)"); startCollectStatistics(); sql(grid(0), "SELECT * FROM test_perf_stat_nested UNION ALL SELECT * FROM test_perf_stat_nested"); }{code} Fails on: {noformat} at org.apache.ignite.internal.metric.IoStatisticsQueryHelper.startGatheringQueryStatistics(IoStatisticsQueryHelper.java:35) at org.apache.ignite.internal.processors.query.calcite.exec.tracker.PerformanceStatisticsIoTracker.startTracking(PerformanceStatisticsIoTracker.java:65) at org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanStorageNode.processNextBatch(ScanStorageNode.java:68) at org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanNode.push(ScanNode.java:145) at org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanNode.request(ScanNode.java:95) at org.apache.ignite.internal.processors.query.calcite.exec.rel.UnionAllNode.end(UnionAllNode.java:79) at org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanNode.processNextBatch(ScanNode.java:185) at org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanStorageNode.processNextBatch(ScanStorageNode.java:70) at org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanNode.push(ScanNode.java:145) at 
org.apache.ignite.internal.processors.query.calcite.exec.rel.ScanNode.request(ScanNode.java:95) at org.apache.ignite.internal.processors.query.calcite.exec.rel.UnionAllNode.request(UnionAllNode.java:56) {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
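The stack trace shows the tracker being re-entered on the same thread: the first scan's {{push}} triggers {{UnionAllNode.end}}, which immediately requests the second scan, so a second {{startTracking}} fires before the first one has finished. A minimal sketch of that failure mode (hypothetical names, not Ignite code):

```java
/** Hypothetical sketch: a per-thread tracker that rejects nested starts, as in the reported stack trace. */
public class NonReentrantTracker {
    private static final ThreadLocal<Boolean> GATHERING = ThreadLocal.withInitial(() -> false);

    static void startTracking() {
        if (GATHERING.get())
            throw new IllegalStateException("Nested statistics gathering is not supported");

        GATHERING.set(true);
    }

    static void stopTracking() {
        GATHERING.set(false);
    }

    public static void main(String[] args) {
        startTracking(); // first table scan starts gathering

        try {
            // downstream end() kicks off the next scan before the first one stopped tracking
            startTracking();
        }
        catch (IllegalStateException e) {
            System.out.println("nested start rejected: " + e.getMessage());
        }

        stopTracking();
    }
}
```

Typical fixes for this shape of bug are either making the tracker re-entrant (a counter instead of a flag) or deferring the next scan's request until the current one has fully unwound.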
[jira] [Updated] (IGNITE-20950) Calcite engine. NPE when performance statistics is enabled after query already cached
[ https://issues.apache.org/jira/browse/IGNITE-20950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-20950: --- Description: Reproducer: {code:java} public void testPerformanceStatisticsEnableAfterQuery() throws Exception { cleanPerformanceStatisticsDir(); String qry = "SELECT * FROM table(system_range(1, 1000))"; sql(grid(0), qry); startCollectStatistics(); sql(grid(0), qry); } {code} Throws an exception: {noformat} java.lang.NullPointerException at org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsWriter.cacheIfPossible(FilePerformanceStatisticsWriter.java:540) at org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsWriter.queryProperty(FilePerformanceStatisticsWriter.java:320) at org.apache.ignite.internal.processors.performancestatistics.PerformanceStatisticsProcessor.lambda$queryProperty$11(PerformanceStatisticsProcessor.java:207) at org.apache.ignite.internal.processors.performancestatistics.PerformanceStatisticsProcessor.write(PerformanceStatisticsProcessor.java:428) at org.apache.ignite.internal.processors.performancestatistics.PerformanceStatisticsProcessor.queryProperty(PerformanceStatisticsProcessor.java:207) at org.apache.ignite.internal.processors.query.calcite.exec.ExecutionServiceImpl.mapAndExecutePlan(ExecutionServiceImpl.java:668) at org.apache.ignite.internal.processors.query.calcite.exec.ExecutionServiceImpl.executePlan(ExecutionServiceImpl.java:505) at org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.lambda$parseAndProcessQuery$4(CalciteQueryProcessor.java:495) at org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.processQuery(CalciteQueryProcessor.java:616) at org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.parseAndProcessQuery(CalciteQueryProcessor.java:495) at 
org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.query(CalciteQueryProcessor.java:389){noformat} was: Reproducer: {code:java} public void testPerformanceStatisticsEnableAfterQuery() throws Exception { cleanPerformanceStatisticsDir(); String qry = "SELECT * FROM table(system_range(1, 1000))"; sql(grid(0), qry); startCollectStatistics(); sql(grid(0), qry); } {code} Throws an exception: {noformat} java.lang.NullPointerException at org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsWriter.cacheIfPossible(FilePerformanceStatisticsWriter.java:540) at org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsWriter.queryProperty(FilePerformanceStatisticsWriter.java:320) at org.apache.ignite.internal.processors.performancestatistics.PerformanceStatisticsProcessor.lambda$queryProperty$11(PerformanceStatisticsProcessor.java:207) at org.apache.ignite.internal.processors.performancestatistics.PerformanceStatisticsProcessor.write(PerformanceStatisticsProcessor.java:428) at org.apache.ignite.internal.processors.performancestatistics.PerformanceStatisticsProcessor.queryProperty(PerformanceStatisticsProcessor.java:207) at org.apache.ignite.internal.processors.query.calcite.exec.ExecutionServiceImpl.mapAndExecutePlan(ExecutionServiceImpl.java:668) at org.apache.ignite.internal.processors.query.calcite.exec.ExecutionServiceImpl.executePlan(ExecutionServiceImpl.java:505) at org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.lambda$parseAndProcessQuery$4(CalciteQueryProcessor.java:495) at org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.processQuery(CalciteQueryProcessor.java:616) at org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.parseAndProcessQuery(CalciteQueryProcessor.java:495) at org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.query(CalciteQueryProcessor.java:389){noformat} > Calcite engine. 
NPE when performance statistics is enabled after query > already cached > - > > Key: IGNITE-20950 > URL: https://issues.apache.org/jira/browse/IGNITE-20950 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: calcite, ise > Time Spent: 10m > Remaining Estimate: 0h > > Reproducer: > {code:java} > public void testPerformanceStatisticsEnableAfterQuery() throws Exception { > cleanPerformanceStatisticsDir(); > String qry = "SELECT * FROM table(system_range(1, 1000))"; > sql(grid(0), qry); > startCollectStatistics(); > sql(grid(0), qry); > } {code} > Throws an exception: > {noformat} >
[jira] [Created] (IGNITE-20950) Calcite engine. NPE when performance statistics is enabled after query already cached
Aleksey Plekhanov created IGNITE-20950: -- Summary: Calcite engine. NPE when performance statistics is enabled after query already cached Key: IGNITE-20950 URL: https://issues.apache.org/jira/browse/IGNITE-20950 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Reproducer: {code:java} public void testPerformanceStatisticsEnableAfterQuery() throws Exception { cleanPerformanceStatisticsDir(); String qry = "SELECT * FROM table(system_range(1, 1000))"; sql(grid(0), qry); startCollectStatistics(); sql(grid(0), qry); } {code} Throws an exception: {noformat} java.lang.NullPointerException at org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsWriter.cacheIfPossible(FilePerformanceStatisticsWriter.java:540) at org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsWriter.queryProperty(FilePerformanceStatisticsWriter.java:320) at org.apache.ignite.internal.processors.performancestatistics.PerformanceStatisticsProcessor.lambda$queryProperty$11(PerformanceStatisticsProcessor.java:207) at org.apache.ignite.internal.processors.performancestatistics.PerformanceStatisticsProcessor.write(PerformanceStatisticsProcessor.java:428) at org.apache.ignite.internal.processors.performancestatistics.PerformanceStatisticsProcessor.queryProperty(PerformanceStatisticsProcessor.java:207) at org.apache.ignite.internal.processors.query.calcite.exec.ExecutionServiceImpl.mapAndExecutePlan(ExecutionServiceImpl.java:668) at org.apache.ignite.internal.processors.query.calcite.exec.ExecutionServiceImpl.executePlan(ExecutionServiceImpl.java:505) at org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.lambda$parseAndProcessQuery$4(CalciteQueryProcessor.java:495) at org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.processQuery(CalciteQueryProcessor.java:616) at 
org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.parseAndProcessQuery(CalciteQueryProcessor.java:495) at org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.query(CalciteQueryProcessor.java:389){noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20697) Move physical records from WAL to another storage
[ https://issues.apache.org/jira/browse/IGNITE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-20697: --- Description: Currently, physical records take most of the WAL size. But physical records in WAL files are required only for crash recovery, and these records are useful only for a short period of time (since the last checkpoint). The size of physical records written during a checkpoint interval is larger than the size of all modified pages between checkpoints, since we need to store a page snapshot record for each modified page and page delta records if a page is modified more than once between checkpoints. We process a WAL file several times in a stable workflow (without crashes and rebalances): # We write records to WAL files # We copy WAL files to the archive # We compact WAL files (remove physical records + compress) So, in total, we write all physical records twice and read physical records at least twice. To reduce disk workload we can move physical records to another storage and not write them to WAL files. To provide the same crash recovery guarantees we can write modified pages twice during a checkpoint: first to a delta file and second to the page storage. In this case, if we crash during a write to the page storage, we can recover any page from the delta file (instead of from the WAL, as we do now). This proposal has pros and cons. Pros: - Smaller size of stored data (we don't store page delta files, only the final state of the page) - Reduced disk workload (we write all modified pages once instead of 2 writes and 2 reads of a larger amount of data) - Potentially reduced latency (instead of writing physical records synchronously during data modification, we write only logical records to the WAL, and physical pages are written by checkpointer threads) Cons: - Increased checkpoint duration (we should write a doubled amount of data during a checkpoint) Let's try to implement it and benchmark. was: Currentrly, physycal records take most of the WAL size. 
But physical records in WAL files required only for crash recovery and these records are useful only for a short period of time (since last checkpoint). Size of physical records during checkpoint is more than size of all modified pages between checkpoints, since we need to store page snapshot record for each modified page and page delta records, if page is modified more than once between checkpoints. We process WAL file several times in stable workflow (without crashes and rebalances): # We write records to WAL files # We copy WAL files to archive # We compact WAL files (remove phisical records + compress) So, totally we write all physical records twice and read physical records at least twice. To reduce disc workload we can move physical records to another storage and don't write them to WAL files. To provide the same crash recovery guarantees we can write modified pages twice during checkpoint. First time to some delta file and second time to the page storage. In this case we can recover any page if we crash during write to page storage from delta file (instead of WAL, as we do now). This proposal has pros and cons. Pros: - Less size of stored data (we don't store page delta files, only final state of the page) - Reduced disc workload (we store additionally write once all modified pages instead of 2 writes and 2 reads of larger amount of data) - Potentially reduced latency (instead of writing physical records synchronously during data modification we write to WAL only logical records and physical pages will be written by checkpointer threads) Cons: - Increased checkpoint duration (we should write doubled amount of data during checkpoint) Let's try to implement it and benchmark. 
> Move physical records from WAL to another storage > -- > > Key: IGNITE-20697 > URL: https://issues.apache.org/jira/browse/IGNITE-20697 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: iep-113, ise > Time Spent: 10m > Remaining Estimate: 0h > > Currently, physical records take most of the WAL size. But physical records > in WAL files are required only for crash recovery, and these records are useful > only for a short period of time (since the last checkpoint). > The size of physical records written during a checkpoint interval is larger than the size of all modified > pages between checkpoints, since we need to store a page snapshot record for > each modified page and page delta records if a page is modified more than once > between checkpoints. > We process a WAL file several times in a stable workflow (without crashes and > rebalances): > # We write records to WAL files > # We copy WAL files to the archive > # We compact WAL files (remove physical records + compress) >
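The proposed double-write during checkpoint can be sketched as follows. This is a simplified, hypothetical illustration (whole-file writes instead of page-granular I/O, no fsync ordering or checksums), not the actual implementation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

/** Hypothetical sketch of the proposed checkpoint double-write: delta file first, page store second. */
public class CheckpointDoubleWrite {
    /** Writes the page twice: the delta copy makes a torn write to the page store recoverable. */
    static void checkpointPage(byte[] page, Path deltaFile, Path pageStore) throws IOException {
        Files.write(deltaFile, page); // 1. durable recovery copy (replaces the WAL physical record)
        Files.write(pageStore, page); // 2. final location; a crash here is recoverable from the delta file
    }

    /** On restart, the page is restored from the delta copy instead of replaying WAL physical records. */
    static byte[] recover(Path deltaFile) throws IOException {
        return Files.readAllBytes(deltaFile);
    }

    public static void main(String[] args) throws IOException {
        Path delta = Files.createTempFile("checkpoint-delta", ".bin");
        Path store = Files.createTempFile("page-store", ".bin");

        byte[] page = new byte[] {1, 2, 3, 4};

        checkpointPage(page, delta, store);

        System.out.println(Arrays.equals(recover(delta), page)); // true
    }
}
```

The sketch makes the trade-off in the description concrete: each modified page is written twice during the checkpoint (the "cons" side), but the synchronous WAL path no longer carries page snapshots and deltas at all (the "pros" side).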
[jira] [Updated] (IGNITE-20697) Move physical records from WAL to another storage
[ https://issues.apache.org/jira/browse/IGNITE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-20697: --- Labels: iep-113 ise (was: ise) > Move physical records from WAL to another storage > -- > > Key: IGNITE-20697 > URL: https://issues.apache.org/jira/browse/IGNITE-20697 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: iep-113, ise > Time Spent: 10m > Remaining Estimate: 0h > > Currentrly, physycal records take most of the WAL size. But physical records > in WAL files required only for crash recovery and these records are useful > only for a short period of time (since last checkpoint). > Size of physical records during checkpoint is more than size of all modified > pages between checkpoints, since we need to store page snapshot record for > each modified page and page delta records, if page is modified more than once > between checkpoints. > We process WAL file several times in stable workflow (without crashes and > rebalances): > # We write records to WAL files > # We copy WAL files to archive > # We compact WAL files (remove phisical records + compress) > So, totally we write all physical records twice and read physical records at > least twice. > To reduce disc workload we can move physical records to another storage and > don't write them to WAL files. To provide the same crash recovery guarantees > we can write modified pages twice during checkpoint. First time to some delta > file and second time to the page storage. In this case we can recover any > page if we crash during write to page storage from delta file (instead of > WAL, as we do now). > This proposal has pros and cons. 
> Pros: > - Less size of stored data (we don't store page delta files, only final > state of the page) > - Reduced disc workload (we store additionally write once all modified pages > instead of 2 writes and 2 reads of larger amount of data) > - Potentially reduced latency (instead of writing physical records > synchronously during data modification we write to WAL only logical records > and physical pages will be written by checkpointer threads) > Cons: > - Increased checkpoint duration (we should write doubled amount of data > during checkpoint) > Let's try to implement it and benchmark. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20697) Move physical records from WAL to another storage
[ https://issues.apache.org/jira/browse/IGNITE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-20697: --- Labels: ise (was: ) > Move physical records from WAL to another storage > -- > > Key: IGNITE-20697 > URL: https://issues.apache.org/jira/browse/IGNITE-20697 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > > Currentrly, physycal records take most of the WAL size. But physical records > in WAL files required only for crash recovery and these records are useful > only for a short period of time (since last checkpoint). > Size of physical records during checkpoint is more than size of all modified > pages between checkpoints, since we need to store page snapshot record for > each modified page and page delta records, if page is modified more than once > between checkpoints. > We process WAL file several times in stable workflow (without crashes and > rebalances): > # We write records to WAL files > # We copy WAL files to archive > # We compact WAL files (remove phisical records + compress) > So, totally we write all physical records twice and read physical records at > least twice. > To reduce disc workload we can move physical records to another storage and > don't write them to WAL files. To provide the same crash recovery guarantees > we can write modified pages twice during checkpoint. First time to some delta > file and second time to the page storage. In this case we can recover any > page if we crash during write to page storage from delta file (instead of > WAL, as we do now). > This proposal has pros and cons. 
> Pros: > - Less size of stored data (we don't store page delta files, only final > state of the page) > - Reduced disc workload (we store additionally write once all modified pages > instead of 2 writes and 2 reads of larger amount of data) > - Potentially reduced latency (instead of writing physical records > synchronously during data modification we write to WAL only logical records > and physical pages will be written by checkpointer threads) > Cons: > - Increased checkpoint duration (we should write doubled amount of data > during checkpoint) > Let's try to implement it and benchmark. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (IGNITE-20501) Calcite engine. Memory leak in MailboxRegistryImpl#remotes on JOINs
[ https://issues.apache.org/jira/browse/IGNITE-20501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov resolved IGNITE-20501. Fix Version/s: 2.16 Release Note: SQL Calcite: Fixed memory leak in MailboxRegistryImpl#remotes Resolution: Fixed [~ivandasch], thanks for the review! Merged to master. > Calcite engine. Memory leak in MailboxRegistryImpl#remotes on JOINs > --- > > Key: IGNITE-20501 > URL: https://issues.apache.org/jira/browse/IGNITE-20501 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: calcite, ise > Fix For: 2.16 > > Time Spent: 20m > Remaining Estimate: 0h > > When the JOIN relational operator is executed, the downstream of the JOIN can be closed > when only one side of the JOIN is already drained (see the last lines of > {{MergeJoinNode.InnerJoin#join}}, for example). In this case the query can be > prematurely closed, and after this, a message for the other side of the join can > arrive and register a new {{Inbox}} (see {{ExchangeServiceImpl#onMessage(UUID, > QueryBatchMessage)}}) that will never be unregistered. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (IGNITE-20697) Move physical records from WAL to another storage
[ https://issues.apache.org/jira/browse/IGNITE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1674#comment-1674 ] Aleksey Plekhanov edited comment on IGNITE-20697 at 10/20/23 9:48 AM: -- [~ktkale...@gridgain.com], sure, I have plans to create IEP and write to the dev list, but first I want to create POC. {quote}It also turns out that if users do not gracefully shut down the cluster before switching to a new version of the ignite, they may experience problems starting nodes since there will be a new data recovery mechanism. {quote} I suppose we will provide both mechanisms and allow the user to configure it. In the next release we can use physical records by default; in the following release we can switch the default to the checkpoint delta file. On recovery Ignite can decide what to do by analyzing files for the current checkpoint. was (Author: alex_pl): [~ktkale...@gridgain.com], sure, I have plans to create IEP and write to the dev list, but first I want to create POC. {quote}It also turns out that if users do not gracefully shut down the cluster before switching to a new version of the ignite, they may experience problems starting nodes since there will be a new data recovery mechanism. {quote} I suppose we will provide both mechanisms and allow user to configure it. In the next release we can use physical records by defaul, in the following release we can swith default to checkpoint delta file. On recovery Ignite can decide what to do by analyzing files for corrent checkpoint. > Move physical records from WAL to another storage > -- > > Key: IGNITE-20697 > URL: https://issues.apache.org/jira/browse/IGNITE-20697 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > > Currentrly, physycal records take most of the WAL size. But physical records > in WAL files required only for crush recovery and these records are useful > only for a short period of time (since last checkpoint). 
> Size of physical records during checkpoint is more than size of all modified > pages between checkpoints, since we need to store page snapshot record for > each modified page and page delta records, if page is modified more than once > between checkpoints. > We process WAL file several times in stable workflow (without crashes and > rebalances): > # We write records to WAL files > # We copy WAL files to archive > # We compact WAL files (remove phisical records + compress) > So, totally we write all physical records twice and read physical records at > least twice. > To reduce disc workload we can move physical records to another storage and > don't write them to WAL files. To provide the same crush recovery guarantees > we can write modified pages twice during checkpoint. First time to some delta > file and second time to the page storage. In this case we can recover any > page if we crash during write to page storage from delta file (instead of > WAL, as we do now). > This proposal has pros and cons. > Pros: > - Less size of stored data (we don't store page delta files, only final > state of the page) > - Reduced disc workload (we store additionally write once all modified pages > instead of 2 writes and 2 reads of larger amount of data) > - Potentially reduced latancy (instead of writing physical records > synchronously during data modification we write to WAL only logical records > and physical pages will be written by checkpointer threads) > Cons: > - Increased checkpoint duration (we should write doubled amount of data > during checkpoint) > Let's try to implement it and benchmark. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-20697) Move physical records from WAL to another storage
[ https://issues.apache.org/jira/browse/IGNITE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1674#comment-1674 ] Aleksey Plekhanov commented on IGNITE-20697: [~ktkale...@gridgain.com], sure, I have plans to create IEP and write to the dev list, but first I want to create POC. {quote}It also turns out that if users do not gracefully shut down the cluster before switching to a new version of the ignite, they may experience problems starting nodes since there will be a new data recovery mechanism. {quote} I suppose we will provide both mechanisms and allow user to configure it. In the next release we can use physical records by defaul, in the following release we can swith default to checkpoint delta file. On recovery Ignite can decide what to do by analyzing files for corrent checkpoint. > Move physical records from WAL to another storage > -- > > Key: IGNITE-20697 > URL: https://issues.apache.org/jira/browse/IGNITE-20697 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > > Currentrly, physycal records take most of the WAL size. But physical records > in WAL files required only for crush recovery and these records are useful > only for a short period of time (since last checkpoint). > Size of physical records during checkpoint is more than size of all modified > pages between checkpoints, since we need to store page snapshot record for > each modified page and page delta records, if page is modified more than once > between checkpoints. > We process WAL file several times in stable workflow (without crashes and > rebalances): > # We write records to WAL files > # We copy WAL files to archive > # We compact WAL files (remove phisical records + compress) > So, totally we write all physical records twice and read physical records at > least twice. > To reduce disc workload we can move physical records to another storage and > don't write them to WAL files. 
To provide the same crash recovery guarantees > we can write modified pages twice during a checkpoint: first to a delta > file and then to the page storage. In this case, if we crash during a write to the page storage, we can recover any > page from the delta file (instead of from WAL, as we do now). > This proposal has pros and cons. > Pros: > - Smaller size of stored data (we don't store page deltas, only the final > state of the page) > - Reduced disk workload (we additionally write all modified pages once > instead of 2 writes and 2 reads of a larger amount of data) > - Potentially reduced latency (instead of writing physical records > synchronously during data modification, we write only logical records to WAL, > and physical pages are written by checkpointer threads) > Cons: > - Increased checkpoint duration (we should write a doubled amount of data > during a checkpoint) > Let's try to implement it and benchmark. -- This message was sent by Atlassian Jira (v8.20.10#820010)
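The double-write recovery scheme described in this proposal can be sketched in a few lines. This is a toy model, not Ignite's actual checkpoint code, and all names (`deltaFile`, `pageStore`, `checkpointPage`, `recover`) are illustrative: each dirty page is written first to a delta file and then to the page store, so a page torn by a crash during the second write can be restored from the delta file instead of from WAL physical records.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

/** Toy sketch of checkpoint delta-file recovery (illustrative names, not Ignite's API). */
public class DeltaFileRecoverySketch {
    /** Simulated delta file: pageId -> full page contents captured during checkpoint. */
    static final Map<Long, byte[]> deltaFile = new HashMap<>();

    /** Simulated page store: pageId -> page contents on disk. */
    static final Map<Long, byte[]> pageStore = new HashMap<>();

    /** Checkpoint write path: delta file first, page store second. */
    static void checkpointPage(long pageId, byte[] page) {
        deltaFile.put(pageId, page.clone());  // 1st write: recovery copy.
        pageStore.put(pageId, page.clone());  // 2nd write: may be torn by a crash.
    }

    /** Crash recovery: every page in the delta file overwrites a possibly torn page. */
    static void recover() {
        for (Map.Entry<Long, byte[]> e : deltaFile.entrySet())
            pageStore.put(e.getKey(), e.getValue().clone());
    }

    public static void main(String[] args) {
        byte[] page = "page-v2".getBytes(StandardCharsets.UTF_8);

        deltaFile.put(1L, page.clone());      // Delta write completed...
        pageStore.put(1L, new byte[] {0x00}); // ...but the page store write was torn.

        recover();

        assert new String(pageStore.get(1L), StandardCharsets.UTF_8).equals("page-v2");
    }
}
```

This is exactly the "write twice during checkpoint" trade-off from the cons list: the delta write doubles checkpoint I/O, but removes physical records from the WAL path entirely.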
[jira] [Updated] (IGNITE-20697) Move physical records from WAL to another storage
[ https://issues.apache.org/jira/browse/IGNITE-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-20697: --- Description: Currently, physical records take most of the WAL size. But physical records in WAL files are required only for crash recovery, and these records are useful only for a short period of time (since the last checkpoint). The size of physical records during a checkpoint is larger than the size of all modified pages between checkpoints, since we need to store a page snapshot record for each modified page, plus page delta records if a page is modified more than once between checkpoints. We process WAL files several times in a stable workflow (without crashes and rebalances): # We write records to WAL files # We copy WAL files to the archive # We compact WAL files (remove physical records + compress) So, in total we write all physical records twice and read physical records at least twice. To reduce disk workload we can move physical records to another storage and not write them to WAL files. To provide the same crash recovery guarantees we can write modified pages twice during a checkpoint: first to a delta file and then to the page storage. In this case, if we crash during a write to the page storage, we can recover any page from the delta file (instead of from WAL, as we do now). This proposal has pros and cons. Pros: - Smaller size of stored data (we don't store page deltas, only the final state of the page) - Reduced disk workload (we additionally write all modified pages once instead of 2 writes and 2 reads of a larger amount of data) - Potentially reduced latency (instead of writing physical records synchronously during data modification, we write only logical records to WAL, and physical pages are written by checkpointer threads) Cons: - Increased checkpoint duration (we should write a doubled amount of data during a checkpoint) Let's try to implement it and benchmark. 
was: Currently, physical records take most of the WAL size. But physical records in WAL files are required only for crash recovery, and these records are useful only for a short period of time (since the last checkpoint). The size of physical records during a checkpoint is larger than the size of all modified pages between checkpoints, since we need to store a page snapshot record for each modified page, plus page delta records if a page is modified more than once between checkpoints. We process WAL files several times in a normal workflow (without crashes): 1) We write records to WAL files 2) We copy WAL files to the archive 3) We compact WAL files (remove physical records + compress) So, in total we write all physical records twice and read physical records twice. To reduce disk workload we can move physical records to another storage and not write them to WAL files. To provide the same crash recovery guarantees we can write modified pages twice during a checkpoint: first to a delta file and then to the page storage. In this case, if we crash during a write to the page storage, we can recover any page from the delta file (instead of from WAL, as we do now). This proposal has pros and cons. Pros: - Smaller size of stored data (we don't store page deltas, only the final state of the page) - Reduced disk workload (we additionally write all modified pages once instead of 2 writes and 2 reads of a larger amount of data) - Potentially reduced latency (instead of writing physical records synchronously during data modification, we write only logical records to WAL, and physical pages are written by checkpointer threads) Cons: - Increased checkpoint duration (we should write a doubled amount of data during a checkpoint) Let's try it and benchmark. 
> Move physical records from WAL to another storage > -- > > Key: IGNITE-20697 > URL: https://issues.apache.org/jira/browse/IGNITE-20697 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > > Currently, physical records take most of the WAL size. But physical records > in WAL files are required only for crash recovery, and these records are useful > only for a short period of time (since the last checkpoint). > The size of physical records during a checkpoint is larger than the size of all modified > pages between checkpoints, since we need to store a page snapshot record for > each modified page, plus page delta records if a page is modified more than once > between checkpoints. > We process WAL files several times in a stable workflow (without crashes and > rebalances): > # We write records to WAL files > # We copy WAL files to the archive > # We compact WAL files (remove physical records + compress) > So, in total we write all physical records twice and read physical records at > least twice. > To reduce disk
[jira] [Created] (IGNITE-20697) Move physical records from WAL to another storage
Aleksey Plekhanov created IGNITE-20697: -- Summary: Move physical records from WAL to another storage Key: IGNITE-20697 URL: https://issues.apache.org/jira/browse/IGNITE-20697 Project: Ignite Issue Type: Improvement Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Currently, physical records take most of the WAL size. But physical records in WAL files are required only for crash recovery, and these records are useful only for a short period of time (since the last checkpoint). The size of physical records during a checkpoint is larger than the size of all modified pages between checkpoints, since we need to store a page snapshot record for each modified page, plus page delta records if a page is modified more than once between checkpoints. We process WAL files several times in a normal workflow (without crashes): 1) We write records to WAL files 2) We copy WAL files to the archive 3) We compact WAL files (remove physical records + compress) So, in total we write all physical records twice and read physical records twice. To reduce disk workload we can move physical records to another storage and not write them to WAL files. To provide the same crash recovery guarantees we can write modified pages twice during a checkpoint: first to a delta file and then to the page storage. In this case, if we crash during a write to the page storage, we can recover any page from the delta file (instead of from WAL, as we do now). This proposal has pros and cons. 
Pros: - Smaller size of stored data (we don't store page deltas, only the final state of the page) - Reduced disk workload (we additionally write all modified pages once instead of 2 writes and 2 reads of a larger amount of data) - Potentially reduced latency (instead of writing physical records synchronously during data modification, we write only logical records to WAL, and physical pages are written by checkpointer threads) Cons: - Increased checkpoint duration (we should write a doubled amount of data during a checkpoint) Let's try it and benchmark. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20674) Move ignite-yardstick module to ignite-extensions
Aleksey Plekhanov created IGNITE-20674: -- Summary: Move ignite-yardstick module to ignite-extensions Key: IGNITE-20674 URL: https://issues.apache.org/jira/browse/IGNITE-20674 Project: Ignite Issue Type: Improvement Components: yardstick Reporter: Aleksey Plekhanov Currently, we include the ignite-yardstick module in the binary release, but: - It has dependencies that are never used in other Ignite modules. Sometimes it requires dependency version updates to avoid CVEs in dependencies. - It increases the size of the binary release by 100 MB. - It is used only by release engineers. - It has a lot of bugs in the org.yardstickframework.yardstick artifact, which can't be fixed in our repository (for example, out of the box it can be run only on Java 8). We should exclude it from the binary release and move it to the ignite-extensions repository. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20587) SQL hints documentation. INDEX, NO_INDEX
[ https://issues.apache.org/jira/browse/IGNITE-20587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-20587: --- Description: Add documentation for SQL hints > SQL hints documentation. INDEX, NO_INDEX > > > Key: IGNITE-20587 > URL: https://issues.apache.org/jira/browse/IGNITE-20587 > Project: Ignite > Issue Type: Task > Components: documentation >Reporter: Vladimir Steshin >Assignee: Vladimir Steshin >Priority: Major > Labels: ise > Fix For: 2.16 > > Time Spent: 5h 50m > Remaining Estimate: 0h > > Add documentation for SQL hints -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20587) SQL hints documentation. INDEX, NO_INDEX
[ https://issues.apache.org/jira/browse/IGNITE-20587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-20587: --- Component/s: documentation > SQL hints documentation. INDEX, NO_INDEX > > > Key: IGNITE-20587 > URL: https://issues.apache.org/jira/browse/IGNITE-20587 > Project: Ignite > Issue Type: Task > Components: documentation >Reporter: Vladimir Steshin >Assignee: Vladimir Steshin >Priority: Major > Labels: ise > Fix For: 2.16 > > Time Spent: 5h 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20587) SQL hints documentation. INDEX, NO_INDEX
[ https://issues.apache.org/jira/browse/IGNITE-20587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-20587: --- Ignite Flags: (was: Docs Required,Release Notes Required) > SQL hints documentation. INDEX, NO_INDEX > > > Key: IGNITE-20587 > URL: https://issues.apache.org/jira/browse/IGNITE-20587 > Project: Ignite > Issue Type: Task >Reporter: Vladimir Steshin >Assignee: Vladimir Steshin >Priority: Major > Labels: ise > Fix For: 2.16 > > Time Spent: 5h 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20501) Calcite engine. Memory leak in MailboxRegistryImpl#remotes on JOINs
Aleksey Plekhanov created IGNITE-20501: -- Summary: Calcite engine. Memory leak in MailboxRegistryImpl#remotes on JOINs Key: IGNITE-20501 URL: https://issues.apache.org/jira/browse/IGNITE-20501 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov When a JOIN relational operator is executed, the downstream of the JOIN can be closed when only one side of the JOIN is already drained (see the last lines of {{MergeJoinNode.InnerJoin#join}}, for example). In this case the query can be closed prematurely, and after this a message for the other side of the join can arrive and register a new {{Inbox}} (see {{ExchangeServiceImpl#onMessage(UUID, QueryBatchMessage)}}) that will never be unregistered. -- This message was sent by Atlassian Jira (v8.20.10#820010)
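The leak pattern can be modeled in miniature. This is an illustrative sketch, not Ignite's actual {{MailboxRegistryImpl}} API: a registry that unconditionally creates an inbox on any incoming batch message will leak when the message arrives after the query is closed, while a registry that consults the set of live queries will not.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal model of the inbox leak (illustrative names, not Ignite's API). */
public class MailboxLeakSketch {
    /** queryId -> registered inbox (stands in for MailboxRegistryImpl#remotes). */
    static final Map<Long, String> remotes = new ConcurrentHashMap<>();

    /** Queries that are still running (i.e. not yet closed). */
    static final Set<Long> runningQueries = ConcurrentHashMap.newKeySet();

    /** Buggy variant: registers an inbox even for an already-closed query. */
    static void onMessageLeaky(long qryId) {
        remotes.computeIfAbsent(qryId, id -> "inbox-" + id);
    }

    /** Fixed variant: register an inbox only while the query is still running. */
    static void onMessageSafe(long qryId) {
        if (runningQueries.contains(qryId))
            remotes.computeIfAbsent(qryId, id -> "inbox-" + id);
    }

    public static void main(String[] args) {
        onMessageLeaky(1L); // Query 1 is closed, but an inbox gets registered anyway.
        assert remotes.containsKey(1L); // Leak: no one will ever unregister it.

        onMessageSafe(2L); // Query 2 is closed: the late message is dropped.
        assert !remotes.containsKey(2L);
    }
}
```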
[jira] [Created] (IGNITE-20488) Add metrics for count of page merges/splits in BPlusTree
Aleksey Plekhanov created IGNITE-20488: -- Summary: Add metrics for count of page merges/splits in BPlusTree Key: IGNITE-20488 URL: https://issues.apache.org/jira/browse/IGNITE-20488 Project: Ignite Issue Type: Improvement Reporter: Aleksey Plekhanov It would be helpful to have metrics such as the count of page merges/splits in BPlusTree -- This message was sent by Atlassian Jira (v8.20.10#820010)
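A cheap way to implement such counters is striped adders on the structural-operation paths of the tree. The hook and field names below are hypothetical, just a sketch of the proposed metric, not Ignite's metrics framework:

```java
import java.util.concurrent.atomic.LongAdder;

/** Sketch of page merge/split counters for a B+ tree (hypothetical names). */
public class BPlusTreeMetricsSketch {
    /** Striped counters: cheap under contention, read as monotonic metrics. */
    static final LongAdder pageSplits = new LongAdder();
    static final LongAdder pageMerges = new LongAdder();

    /** Called from the tree code whenever a leaf/inner page is split. */
    static void onPageSplit() { pageSplits.increment(); }

    /** Called from the tree code whenever two pages are merged. */
    static void onPageMerge() { pageMerges.increment(); }

    public static void main(String[] args) {
        onPageSplit();
        onPageSplit();
        onPageMerge();
        assert pageSplits.sum() == 2 && pageMerges.sum() == 1;
    }
}
```

`LongAdder` is a good fit here because the hot path only increments, and exactness of a point-in-time read is not required for a monitoring metric.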
[jira] [Created] (IGNITE-20487) Add system views for IgniteStripedThreadPoolExecutor
Aleksey Plekhanov created IGNITE-20487: -- Summary: Add system views for IgniteStripedThreadPoolExecutor Key: IGNITE-20487 URL: https://issues.apache.org/jira/browse/IGNITE-20487 Project: Ignite Issue Type: Improvement Reporter: Aleksey Plekhanov Currently, we have two views for {{StripedExecutor}}s ({{stripedExecSvc}}, {{dataStreamerExecSvc}}), but we have another type of striped executor: {{IgniteStripedThreadPoolExecutor}} ({{rebalanceStripedExecSvc}}, {{callbackExecSvc}}, the Calcite query task executor); it would be helpful to have system views for these executors too. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20462) Idle_verify prints partitions hash conflicts when entries are expiring concurrently
Aleksey Plekhanov created IGNITE-20462: -- Summary: Idle_verify prints partitions hash conflicts when entries are expiring concurrently Key: IGNITE-20462 URL: https://issues.apache.org/jira/browse/IGNITE-20462 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov The background entry expiration process (ttl-cleaner-worker) is always running on an activated cluster, so during idle_verify execution entries can still expire even without any workload on the cluster. -- This message was sent by Atlassian Jira (v8.20.10#820010)
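The race can be illustrated without any Ignite code. This toy model (the hash function is illustrative, not Ignite's actual partition hashing): two copies of the same partition are hashed at slightly different moments, a TTL worker expires an entry in between, and the hashes conflict even though the cluster carried no workload.

```java
import java.util.Map;
import java.util.TreeMap;

/** Toy model of a partition-hash conflict caused by concurrent TTL expiry. */
public class IdleVerifyTtlSketch {
    /** Order-independent hash over partition entries (illustrative, not Ignite's). */
    static int partitionHash(TreeMap<Integer, String> part) {
        int h = 0;
        for (Map.Entry<Integer, String> e : part.entrySet())
            h += e.getKey().hashCode() ^ e.getValue().hashCode();
        return h;
    }

    public static void main(String[] args) {
        TreeMap<Integer, String> primary = new TreeMap<>();
        TreeMap<Integer, String> backup = new TreeMap<>();
        primary.put(1, "a"); backup.put(1, "a");
        primary.put(2, "b"); backup.put(2, "b");

        int primaryHash = partitionHash(primary); // Primary copy hashed first.

        backup.remove(2); // ttl-cleaner-worker expires entry 2 on the backup...

        int backupHash = partitionHash(backup);   // ...before the backup is hashed.

        assert primaryHash != backupHash; // idle_verify reports a spurious conflict.
    }
}
```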
[jira] [Updated] (IGNITE-19981) Calcite engine. Optimize mapping sending with query start request
[ https://issues.apache.org/jira/browse/IGNITE-19981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-19981: --- Ignite Flags: (was: Release Notes Required) > Calcite engine. Optimize mapping sending with query start request > - > > Key: IGNITE-19981 > URL: https://issues.apache.org/jira/browse/IGNITE-19981 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > Time Spent: 50m > Remaining Estimate: 0h > > Currently we send the whole fragment mapping with the query start request to each > node, but on the node we need only the local mapping (only the set of > partitions to be processed by the current node). If there are a lot of nodes and a lot of > partitions, the mapping can take a lot of space -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (IGNITE-19981) Calcite engine. Optimize mapping sending with query start request
[ https://issues.apache.org/jira/browse/IGNITE-19981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov resolved IGNITE-19981. Resolution: Won't Fix Looks like there is no performance boost after implementation. Benchmarks show reduced network usage, but increased CPU usage and lower overall throughput. > Calcite engine. Optimize mapping sending with query start request > - > > Key: IGNITE-19981 > URL: https://issues.apache.org/jira/browse/IGNITE-19981 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > Time Spent: 50m > Remaining Estimate: 0h > > Currently we send the whole fragment mapping with the query start request to each > node, but on the node we need only the local mapping (only the set of > partitions to be processed by the current node). If there are a lot of nodes and a lot of > partitions, the mapping can take a lot of space -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (IGNITE-18330) Fix javadoc in Transaction#resume(), Transaction#suspend
[ https://issues.apache.org/jira/browse/IGNITE-18330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov resolved IGNITE-18330. Fix Version/s: 2.16 Resolution: Fixed [~__zz250], looks good to me. Merged to master. Thanks for the contribution! > Fix javadoc in Transaction#resume(), Transaction#suspend > > > Key: IGNITE-18330 > URL: https://issues.apache.org/jira/browse/IGNITE-18330 > Project: Ignite > Issue Type: Improvement >Reporter: Luchnikov Alexander >Assignee: bin.yin >Priority: Trivial > Labels: ise, newbie > Fix For: 2.16 > > Time Spent: 2h 10m > Remaining Estimate: 0h > > After implementation IGNITE-5714, this api can be used with pessimistic > transactions. > Now in javadoc - Supported only for optimistic transactions.. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20383) Calcite engine. Convert one input of a join to the broadcast distribution
[ https://issues.apache.org/jira/browse/IGNITE-20383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-20383: --- Description: Sometimes, if join inputs are not collocated, it's worth broadcasting one of the inputs; for example, the query: {code:sql} SELECT * FROM emps WHERE emps.salary = (SELECT AVG(emps.salary) FROM emps){code} currently has the plan: {noformat} IgniteProject(ID=[$0], NAME=[$1], SALARY=[$2]) IgniteNestedLoopJoin(condition=[=($2, $3)], joinType=[inner]) IgniteExchange(distribution=[single]) IgniteTableScan(table=[[PUBLIC, EMPS]]) IgniteReduceHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) IgniteExchange(distribution=[single]) IgniteMapHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) IgniteIndexScan(table=[[PUBLIC, EMPS]], index=[TST], requiredColumns=[{2}], collation=[[2 ASC-nulls-first]]) {noformat} But this plan is not optimal, since we send the entire table EMPS from all nodes to a single node. For such a query it's better to broadcast the result of the aggregation; in this case the plan will be something like: {noformat} IgniteExchange(distribution=[single]) IgniteProject(...) IgniteCorrelatedNestedLoopJoin(...) IgniteExchange(distribution=[broadcast]) IgniteReduceHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) IgniteExchange(distribution=[single]) IgniteMapHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) IgniteIndexScan(table=[[PUBLIC, EMPS]], index=[SALARY_IDX]) IgniteIndexScan(table=[[PUBLIC, EMPS]], index=[SALARY_IDX]) {noformat} But currently we don't try to convert any of the join inputs to the broadcast distribution. We should try to do this. 
was: Sometimes, if join inputs are not collocated it's worth to broadcast one of the inputs, for example, query: {code:sql} SELECT * FROM emps WHERE emps.salary = (SELECT AVG(emps.salary) FROM emps){code} Currently has plan: {noformat} IgniteProject(ID=[$0], NAME=[$1], SALARY=[$2]) IgniteNestedLoopJoin(condition=[=($2, $3)], joinType=[inner]) IgniteExchange(distribution=[single]) IgniteTableScan(table=[[PUBLIC, EMPS]]) IgniteReduceHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) IgniteExchange(distribution=[single]) IgniteMapHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) IgniteIndexScan(table=[[PUBLIC, EMPS]], index=[TST], requiredColumns=[{2}], collation=[[2 ASC-nulls-first]]) {noformat} But this plan is not optimal, since we should send entire table EMP from all nodes to the single node. For such a query it's better to broadcast result of the aggregation, in this case plan will be something like: {noformat} IgniteExchange(distribution=[single]) IgniteProject(ID=[$0], NAME=[$1], SALARY=[$2]) IgniteCorrelatedNestedLoopJoin(...) IgniteExchange(distribution=[broadcast]) IgniteReduceHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) IgniteExchange(distribution=[single]) IgniteMapHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) IgniteIndexScan(table=[[PUBLIC, EMPS]], index=[SALARY_IDX]) IgniteIndexScan(table=[[PUBLIC, EMPS]], index=[SALARY_IDX]) {noformat} But currently we don't try to convert any of the join inputs to the broadcast distribution. We should try to do this. > Calcite engine. 
Convert one input of a join to the broadcast distribution > - > > Key: IGNITE-20383 > URL: https://issues.apache.org/jira/browse/IGNITE-20383 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: calcite, ise > > Sometimes, if join inputs are not collocated, it's worth broadcasting one of > the inputs; for example, the query: > {code:sql} > SELECT * FROM emps WHERE emps.salary = (SELECT AVG(emps.salary) FROM > emps){code} > currently has the plan: > {noformat} > IgniteProject(ID=[$0], NAME=[$1], SALARY=[$2]) > IgniteNestedLoopJoin(condition=[=($2, $3)], joinType=[inner]) > IgniteExchange(distribution=[single]) > IgniteTableScan(table=[[PUBLIC, EMPS]]) > IgniteReduceHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) > IgniteExchange(distribution=[single]) > IgniteMapHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) > IgniteIndexScan(table=[[PUBLIC, EMPS]], index=[TST], > requiredColumns=[{2}], collation=[[2 ASC-nulls-first]]) > {noformat} > But this plan is not optimal, since we send the entire table EMPS from all > nodes to a single node. For such a query it's better to broadcast the result of > the aggregation; in this case the plan will be something like: > {noformat} > IgniteExchange(distribution=[single]) > IgniteProject(...)
[jira] [Created] (IGNITE-20383) Calcite engine. Convert one input of a join to the broadcast distribution
Aleksey Plekhanov created IGNITE-20383: -- Summary: Calcite engine. Convert one input of a join to the broadcast distribution Key: IGNITE-20383 URL: https://issues.apache.org/jira/browse/IGNITE-20383 Project: Ignite Issue Type: Improvement Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Sometimes, if join inputs are not collocated, it's worth broadcasting one of the inputs; for example, the query: {code:sql} SELECT * FROM emps WHERE emps.salary = (SELECT AVG(emps.salary) FROM emps){code} currently has the plan: {noformat} IgniteProject(ID=[$0], NAME=[$1], SALARY=[$2]) IgniteNestedLoopJoin(condition=[=($2, $3)], joinType=[inner]) IgniteExchange(distribution=[single]) IgniteTableScan(table=[[PUBLIC, EMPS]]) IgniteReduceHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) IgniteExchange(distribution=[single]) IgniteMapHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) IgniteIndexScan(table=[[PUBLIC, EMPS]], index=[TST], requiredColumns=[{2}], collation=[[2 ASC-nulls-first]]) {noformat} But this plan is not optimal, since we send the entire table EMPS from all nodes to a single node. For such a query it's better to broadcast the result of the aggregation; in this case the plan will be something like: {noformat} IgniteExchange(distribution=[single]) IgniteProject(ID=[$0], NAME=[$1], SALARY=[$2]) IgniteCorrelatedNestedLoopJoin(...) IgniteExchange(distribution=[broadcast]) IgniteReduceHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) IgniteExchange(distribution=[single]) IgniteMapHashAggregate(group=[{}], AVG(EMPS.SALARY)=[AVG($0)]) IgniteIndexScan(table=[[PUBLIC, EMPS]], index=[SALARY_IDX]) IgniteIndexScan(table=[[PUBLIC, EMPS]], index=[SALARY_IDX]) {noformat} But currently we don't try to convert any of the join inputs to the broadcast distribution. We should try to do this. -- This message was sent by Atlassian Jira (v8.20.10#820010)
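The intuition behind preferring the broadcast plan can be sketched as a back-of-the-envelope row-shipping comparison. The formulas below are illustrative, not Calcite's actual cost model: collecting the big input on one node ships every one of its rows, while broadcasting a tiny input (here, a one-row aggregate) ships only one row per node.

```java
/** Toy row-shipping cost comparison for the two plans above (not Calcite's cost model). */
public class BroadcastJoinCostSketch {
    /** Rows shipped when the big input is collected on a single node. */
    static long singleExchangeCost(long bigRows) {
        return bigRows; // Every row of the big input crosses the network.
    }

    /** Rows shipped when the small input is broadcast to all nodes. */
    static long broadcastCost(long smallRows, long nodes) {
        return smallRows * nodes; // The small input is copied to each node.
    }

    public static void main(String[] args) {
        long empsRows = 1_000_000; // EMPS table size.
        long aggRows = 1;          // AVG(...) result: a single row.
        long nodes = 10;

        // Broadcasting the aggregate ships 10 rows; collecting EMPS ships 1,000,000.
        assert broadcastCost(aggRows, nodes) < singleExchangeCost(empsRows);
    }
}
```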
[jira] [Created] (IGNITE-20382) Calcite engine. Add metrics for CalciteQueryExecutor thread pool
Aleksey Plekhanov created IGNITE-20382: -- Summary: Calcite engine. Add metrics for CalciteQueryExecutor thread pool Key: IGNITE-20382 URL: https://issues.apache.org/jira/browse/IGNITE-20382 Project: Ignite Issue Type: Improvement Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Currently, all thread pools except CalciteQueryExecutor can be monitored via metrics; we should add this ability for CalciteQueryExecutor too. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20353) Calcite engine. Clause 'WITH affinity_key=...' silently ignored when wrong column is specified
Aleksey Plekhanov created IGNITE-20353: -- Summary: Calcite engine. Clause 'WITH affinity_key=...' silently ignored when wrong column is specified Key: IGNITE-20353 URL: https://issues.apache.org/jira/browse/IGNITE-20353 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov The Calcite-based SQL engine silently ignores the clause WITH affinity_key=... when a wrong column is specified. The H2-based engine in this case throws an error: {noformat} org.apache.ignite.internal.processors.query.IgniteSQLException: Affinity key column with given name not found: test {noformat} Reproducer: {code:sql} CREATE TABLE order_items (id varchar, orderId int, sku varchar, PRIMARY KEY (id, orderId)) WITH "affinity_key=test"; {code} Also, there is some problem with case sensitivity, for example: {code:sql} CREATE TABLE order_items (id varchar, orderId int, sku varchar, PRIMARY KEY (id, orderId)) WITH "affinity_key=orderId"; {code} works well for the H2-based engine ({{orderId}} in {{affinity_key}} is converted to {{ORDERID}} and matches the {{orderId}} column alias), but is silently ignored by the Calcite-based engine ({{orderId}} in {{affinity_key}} remains without case change). But: {code:sql} CREATE TABLE order_items (id varchar, orderId int, sku varchar, PRIMARY KEY (id, orderId)) WITH "affinity_key=ORDERID"; {code} works well for both engines. -- This message was sent by Atlassian Jira (v8.20.10#820010)
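The case-sensitivity difference boils down to whether the unquoted identifier in the `affinity_key=...` parameter is normalized before matching column names. This sketch is illustrative (the helper name is hypothetical, and neither engine's real code looks like this); it only models the matching behavior described above:

```java
import java.util.List;
import java.util.Locale;

/** Toy model of affinity_key column matching with/without case normalization. */
public class AffinityKeyMatchSketch {
    /**
     * Matches an affinity_key parameter against stored column names (which SQL
     * keeps upper-cased for unquoted identifiers, e.g. orderId -> ORDERID).
     */
    static boolean matchesColumn(String affinityKey, List<String> columns, boolean normalize) {
        String key = normalize ? affinityKey.toUpperCase(Locale.ROOT) : affinityKey;
        return columns.contains(key);
    }

    public static void main(String[] args) {
        List<String> cols = List.of("ID", "ORDERID", "SKU");

        assert matchesColumn("orderId", cols, true);   // H2-like path: normalized, matches.
        assert !matchesColumn("orderId", cols, false); // Calcite path described above: no match.
        assert matchesColumn("ORDERID", cols, false);  // Upper-cased key works in both engines.
    }
}
```

The bug report asks for two fixes: the mismatch case should raise an error instead of being silently ignored, and unquoted identifiers should be normalized consistently with the rest of the SQL layer.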
[jira] [Created] (IGNITE-20194) Calcite engine. Dependency common-codec required for some functions
Aleksey Plekhanov created IGNITE-20194: -- Summary: Calcite engine. Dependency common-codec required for some functions Key: IGNITE-20194 URL: https://issues.apache.org/jira/browse/IGNITE-20194 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Some functions (md5, soundex) require the common-codec dependency, but this dependency is not explicitly included in the dependencies list and is not added to the Calcite library folder on build. Tests run with transitive dependencies and don't trigger the problem. Queries fail only when run on Ignite started from the binary release package. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20006) Calcite engine. Make table/index scan iterators yieldable
[ https://issues.apache.org/jira/browse/IGNITE-20006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-20006: --- Ignite Flags: (was: Release Notes Required) > Calcite engine. Make table/index scan iterators yieldable > -- > > Key: IGNITE-20006 > URL: https://issues.apache.org/jira/browse/IGNITE-20006 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: calcite, ise > Fix For: 2.16 > > Time Spent: 50m > Remaining Estimate: 0h > > Currently, index/table iterators can scan an unpredictable count of cache > entries during one {{hasNext()}}/{{next()}} call. These iterators contain a > filter, which is applied to each entry, and a row is produced only for entries that > satisfy the filter. If the filter contains an "always false" rule, one {{hasNext()}} > call may scan the entire table uninterruptibly, without timeouts and yields to > let other queries do their job. We should fix this behaviour. -- This message was sent by Atlassian Jira (v8.20.10#820010)
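One way to make such a filtering iterator "yieldable" is to count the underlying rows examined inside a single `hasNext()` call and periodically give up the CPU. The class below is an illustrative sketch under that assumption, not Ignite's actual scan implementation; a real fix could also check timeouts or reschedule the fragment at the same point.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

/** Filtering iterator that yields periodically during long filtered scans (sketch). */
public class YieldingFilterIterator<T> implements Iterator<T> {
    /** Underlying rows examined between yield checks inside one hasNext() call. */
    static final int CHECK_INTERVAL = 1_000;

    private final Iterator<T> delegate;
    private final Predicate<T> filter;
    private T next; // Next matching row, prefetched by hasNext().

    public YieldingFilterIterator(Iterator<T> delegate, Predicate<T> filter) {
        this.delegate = delegate;
        this.filter = filter;
    }

    @Override public boolean hasNext() {
        int scanned = 0;

        // With an "always false" filter this loop can cover the whole table,
        // so it periodically yields instead of running uninterruptibly.
        while (next == null && delegate.hasNext()) {
            T row = delegate.next();

            if (filter.test(row))
                next = row;
            else if (++scanned % CHECK_INTERVAL == 0)
                Thread.yield(); // Let other queries in the pool make progress.
        }

        return next != null;
    }

    @Override public T next() {
        if (!hasNext())
            throw new NoSuchElementException();

        T res = next;
        next = null;
        return res;
    }
}
```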
[jira] [Created] (IGNITE-20079) Calcite engine. Write additional performance statistics info for queries
Aleksey Plekhanov created IGNITE-20079: -- Summary: Calcite engine. Write additional performance statistics info for queries Key: IGNITE-20079 URL: https://issues.apache.org/jira/browse/IGNITE-20079 Project: Ignite Issue Type: Improvement Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Currently, we write query SQL/query time/query page reads to performance statistics. But it will also be useful to write actual plans and the count of entries scanned from caches to detect problems (the plan can change over time, so running explain plan later can show a different value) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-20038) [Thin Client] Cache operations with PA enabled can fail with BufferUnderflowException
[ https://issues.apache.org/jira/browse/IGNITE-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-20038: --- Affects Version/s: 2.15 2.14 > [Thin Client] Cache operations with PA enabled can fail with > BufferUnderflowException > --- > > Key: IGNITE-20038 > URL: https://issues.apache.org/jira/browse/IGNITE-20038 > Project: Ignite > Issue Type: Task >Affects Versions: 2.14, 2.15 > Environment: >Reporter: Mikhail Petrov >Assignee: Mikhail Petrov >Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > > Cache operations with PA enabled can fail on thin clients with > BufferUnderflowException due to broken ClientCachePartitionAwarenessGroup > serialization. > Exception: > {code:java} > java.nio.BufferUnderflowException > at java.nio.Buffer.nextGetIndex(Buffer.java:532) > at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:366) > at > org.apache.ignite.internal.binary.streams.BinaryByteBufferInputStream.readInt(BinaryByteBufferInputStream.java:111) > at > org.apache.ignite.internal.binary.BinaryReaderExImpl.readInt(BinaryReaderExImpl.java:746) > at > org.apache.ignite.internal.client.thin.ClientCacheAffinityMapping.readCacheKeyConfiguration(ClientCacheAffinityMapping.java:240) > at > org.apache.ignite.internal.client.thin.ClientCacheAffinityMapping.readResponse(ClientCacheAffinityMapping.java:197) > at > org.apache.ignite.internal.client.thin.ClientCacheAffinityContext.readPartitionsUpdateResponse(ClientCacheAffinityContext.java:154) > at > org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:412) > at > org.apache.ignite.internal.client.thin.TcpClientChannel.service(TcpClientChannel.java:311) > at > org.apache.ignite.internal.client.thin.ThinClientAbstractPartitionAwarenessTest$TestTcpClientChannel.service(ThinClientAbstractPartitionAwarenessTest.java:345) > at > org.apache.ignite.internal.client.thin.ReliableChannel.lambda$affinityInfoIsUpToDate$6(ReliableChannel.java:423) 
> at > org.apache.ignite.internal.client.thin.ReliableChannel.applyOnNodeChannel(ReliableChannel.java:746) > at > org.apache.ignite.internal.client.thin.ReliableChannel.affinityInfoIsUpToDate(ReliableChannel.java:422) > at > org.apache.ignite.internal.client.thin.ReliableChannel.affinityService(ReliableChannel.java:316) > at > org.apache.ignite.internal.client.thin.TcpClientCache.txAwareService(TcpClientCache.java:1139) > at > org.apache.ignite.internal.client.thin.TcpClientCache.cacheSingleKeyOperation(TcpClientCache.java:1198) > at > org.apache.ignite.internal.client.thin.TcpClientCache.get(TcpClientCache.java:146) > at > org.apache.ignite.internal.client.thin.ThinClientPartitionAwarenessStableTopologyTest.lambda$testMultipleCacheGroupPartitionsRequest$8(ThinClientPartitionAwarenessStableTopologyTest.java:250) > at > org.apache.ignite.testframework.GridTestUtils.lambda$runAsync$4(GridTestUtils.java:1229) > at > org.apache.ignite.testframework.GridTestUtils$7.call(GridTestUtils.java:1570) > at > org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:88) > {code} > Reproducer: > {code:java} > /** */ > @Test > public void test() throws Exception { > Ignite ignite = startGrid(0); > ignite.createCache(new > CacheConfiguration<>("test-cache-0").setCacheMode(REPLICATED)); > ignite.createCache(new > CacheConfiguration<>("test-cache-1").setCacheMode(PARTITIONED)); > try (IgniteClient cli = Ignition.startClient(new > ClientConfiguration().setAddresses("127.0.0.1:10800"))) { > ClientCacheAffinityContext affCtx = > ((TcpIgniteClient)cli).reliableChannel().affinityContext(); > IgniteInternalFuture replCacheOpFut; > IgniteInternalFuture partCacheOpFut; > synchronized (affCtx.cacheKeyMapperFactoryMap) { > partCacheOpFut = GridTestUtils.runAsync(() -> > cli.cache("test-cache-0").get(0)); > replCacheOpFut = GridTestUtils.runAsync(() -> > cli.cache("test-cache-1").get(0)); > GridTestUtils.waitForCondition( > () -> > 
affCtx.pendingCacheIds.containsAll(F.transform(asList("test-cache-0", > "test-cache-1"), CU::cacheId)), > getTestTimeout() > ); > } > partCacheOpFut.get(); > replCacheOpFut.get(); > } > } > {code} > Explanation: > Take a look at the ClientCachePartitionAwarenessGroup#write method. During > its serialization we write the "dfltAffinity" variable to the buffer. Then take a
[jira] [Created] (IGNITE-20010) Calcite engine. Query leaks on remote fragment initialization phase failure
Aleksey Plekhanov created IGNITE-20010: -- Summary: Calcite engine. Query leaks on remote fragment initialization phase failure Key: IGNITE-20010 URL: https://issues.apache.org/jira/browse/IGNITE-20010 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov If any error occurs on the remote fragment initialization phase (for example, LogicalRelImplementor throws an exception), the query is not removed from the running query manager -- This message was sent by Atlassian Jira (v8.20.10#820010)
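The usual guard against this class of leak is to unregister the query on the error path as well as on normal completion. Below is a minimal plain-Java sketch of the pattern, with illustrative names (RunningQueryRegistry, execute, finish are hypothetical, not Ignite's actual API):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the leak and its fix: if fragment initialization throws
// before the query is unregistered, the entry stays in the registry forever.
// All names here are illustrative, not Ignite's actual classes.
class RunningQueryRegistry {
    private final Map<UUID, String> running = new ConcurrentHashMap<>();

    void execute(UUID qryId, String sql, Runnable initFragment) {
        running.put(qryId, sql);
        try {
            initFragment.run(); // may throw during remote fragment initialization
        }
        catch (RuntimeException e) {
            running.remove(qryId); // without this cleanup the query leaks
            throw e;
        }
    }

    void finish(UUID qryId) {
        running.remove(qryId); // normal-completion path
    }

    int runningCount() {
        return running.size();
    }
}
```

The key point is that the failing initialization path must also pass through the same unregistration as the success path.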
[jira] [Updated] (IGNITE-20006) Calcite engine. Make table/index scan iterators yieldable
[ https://issues.apache.org/jira/browse/IGNITE-20006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-20006: --- Labels: calcite ise (was: ) > Calcite engine. Make table/index scan iterators yieldable > -- > > Key: IGNITE-20006 > URL: https://issues.apache.org/jira/browse/IGNITE-20006 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: calcite, ise > > Currently, index/table iterators can scan an unpredictable number of cache > entries during one {{hasNext()}}/{{next()}} call. These iterators contain a > filter, which is applied to each entry, and a row is produced only for entries > that satisfy the filter. If the filter contains an "always false" rule, one > {{hasNext()}} call may scan the entire table uninterruptibly, without timeouts > or yields to let other queries do their job. We should fix this behaviour. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-20006) Calcite engine. Make table/index scan iterators yieldable
Aleksey Plekhanov created IGNITE-20006: -- Summary: Calcite engine. Make table/index scan iterators yieldable Key: IGNITE-20006 URL: https://issues.apache.org/jira/browse/IGNITE-20006 Project: Ignite Issue Type: Improvement Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Currently, index/table iterators can scan an unpredictable number of cache entries during one {{hasNext()}}/{{next()}} call. These iterators contain a filter, which is applied to each entry, and a row is produced only for entries that satisfy the filter. If the filter contains an "always false" rule, one {{hasNext()}} call may scan the entire table uninterruptibly, without timeouts or yields to let other queries do their job. We should fix this behaviour. -- This message was sent by Atlassian Jira (v8.20.10#820010)
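The behaviour described above can be reproduced outside Ignite with an ordinary filtering iterator: with an "always false" predicate, a single hasNext() call walks the entire underlying collection. A minimal sketch (the class name and the scan counter are illustrative, not Ignite's code):

```java
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

// A filtering iterator whose hasNext() can examine an unbounded number of
// underlying entries in one call. A "yieldable" variant would cap the entries
// examined per call and return control to the caller.
class FilteringIterator<T> implements Iterator<T> {
    private final Iterator<T> delegate;
    private final Predicate<T> filter;
    private T next;
    long entriesScanned; // exposed to show how much work one hasNext() can do

    FilteringIterator(Iterator<T> delegate, Predicate<T> filter) {
        this.delegate = delegate;
        this.filter = filter;
    }

    @Override public boolean hasNext() {
        // With an "always false" filter this loop visits every remaining entry
        // before returning - the uninterruptible scan the issue describes.
        while (next == null && delegate.hasNext()) {
            T candidate = delegate.next();
            entriesScanned++;
            if (filter.test(candidate))
                next = candidate;
        }
        return next != null;
    }

    @Override public T next() {
        if (!hasNext())
            throw new NoSuchElementException();
        T res = next;
        next = null;
        return res;
    }
}
```

A yieldable version could, for instance, stop after a fixed per-call budget of examined entries and report "not done yet" to the execution engine, so other queries get a chance to run.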
[jira] [Updated] (IGNITE-19814) Calcite engine uses 0 as an inlineSize for index created by INT column
[ https://issues.apache.org/jira/browse/IGNITE-19814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-19814: --- Labels: calcite ise (was: calcite) > Calcite engine uses 0 as an inlineSize for index created by INT column > -- > > Key: IGNITE-19814 > URL: https://issues.apache.org/jira/browse/IGNITE-19814 > Project: Ignite > Issue Type: Bug >Reporter: Sergey Korotkov >Assignee: Aleksey Plekhanov >Priority: Minor > Labels: calcite, ise > Fix For: 2.16 > > Attachments: > 0001-IGNITE-19814-Add-calcite-test-for-integer-column-ind.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > Using the Calcite query engine, the following SQL statement > {code:sql} > CREATE TABLE TEST (id INT, name VARCHAR, PRIMARY KEY(id)) > {code} > creates an index for the INT id column with inlineSize = 0. > If the _key_type_ is specified in the WITH clause, the index is created correctly > with inlineSize = 5: > {code:sql} > CREATE TABLE TEST (id INT, name VARCHAR, PRIMARY KEY(id)) > WITH "key_type=Integer" > {code} > For the H2 engine, inlineSize is 5 in both cases. > A reproducer is attached as a unit test. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19981) Calcite engine. Optimize mapping sending with query start request
Aleksey Plekhanov created IGNITE-19981: -- Summary: Calcite engine. Optimize mapping sending with query start request Key: IGNITE-19981 URL: https://issues.apache.org/jira/browse/IGNITE-19981 Project: Ignite Issue Type: Improvement Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov Currently we send the whole fragment mapping with the query start request to each node, but on each node we need only the local mapping (the set of partitions to be processed by the current node). If there are a lot of nodes and a lot of partitions, the mapping can take a lot of space -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19759) Calcite engine. Review list of reserved keywords
[ https://issues.apache.org/jira/browse/IGNITE-19759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-19759: --- Description: For the Calcite engine we have too strict a list of reserved keywords. For example, lexemes such as "TYPE" and "OPTIONS" are reserved keywords and can't be used as column or table names. But "TYPE" is frequently used by users as a column name and we should exclude it from the list of reserved keywords (add it to the non-reserved keywords, see the {{nonReservedKeywords}} section of the {{config.fmpp}} file). Other vendors allow using "TYPE" as a column name. On the other hand, the Calcite-based SQL engine in Ignite now allows using some keywords which should not be allowed as table or column names; for example, such a query executes without any problem: {noformat} sql("create table true (like varchar, and int, as int)"); sql("insert into true values ('1', 1, 1)"); sql("select as as as from true where like like '%' and and between and and and"); {noformat} The current list of reserved keywords was copied from the "Babel" dialect of Calcite. Calcite has a "default" dialect with a default list of reserved keywords (see [1]); this list is close to the SQL standard, but looks quite strict too. Other vendors' lists are less restrictive. For example, in the SQL standard built-in functions and all built-in types are reserved keywords; in MySQL built-in functions are not reserved, but built-in types are reserved; in PostgreSQL only the minimal set of keywords required for correct parsing is reserved (built-in functions are not reserved, built-in types are not reserved). See the comparison table [2]. Our old SQL engine is based on the H2 database and H2 reserved keywords (see [3]). The H2 approach is close to the PostgreSQL approach (a minimal set of keywords is reserved). I propose to use such an approach for Ignite too, to maximize compatibility between our SQL engines. 
[1] https://calcite.apache.org/docs/reference.html#keywords [2] https://en.wikipedia.org/wiki/List_of_SQL_reserved_words [3] https://www.h2database.com/html/advanced.html#keywords was: For the Calcite engine we have too strict a list of reserved keywords. For example, lexemes such as "TYPE" and "OPTIONS" are reserved keywords and can't be used as column or table names. But "TYPE" is frequently used by users as a column name and we should exclude it from the list of reserved keywords (add it to the non-reserved keywords, see the {{nonReservedKeywords}} section of the {{config.fmpp}} file). Other vendors allow using "TYPE" as a column name. We should also review the whole list of reserved keywords (see the generated {{Parser.jj}}); perhaps some other keywords should be excluded from the reserved list too. > Calcite engine. Review list of reserved keywords > > > Key: IGNITE-19759 > URL: https://issues.apache.org/jira/browse/IGNITE-19759 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > Time Spent: 20m > Remaining Estimate: 0h > > For the Calcite engine we have too strict a list of reserved keywords. For > example, lexemes such as "TYPE" and "OPTIONS" are reserved keywords and can't > be used as column or table names. But "TYPE" is frequently used by users as a > column name and we should exclude it from the list of reserved keywords (add > it to the non-reserved keywords, see the {{nonReservedKeywords}} section of the > {{config.fmpp}} file). Other vendors allow using "TYPE" as a column name. 
> On the other hand, the Calcite-based SQL engine in Ignite now allows using some > keywords which should not be allowed as table or column names; for example, > such a query executes without any problem: > {noformat} > sql("create table true (like varchar, and int, as int)"); > sql("insert into true values ('1', 1, 1)"); > sql("select as as as from true where like like '%' and and between > and and and"); > {noformat} > The current list of reserved keywords was copied from the "Babel" dialect of Calcite. > Calcite has a "default" dialect with a default list of reserved keywords (see > [1]); this list is close to the SQL standard, but looks quite strict too. > Other vendors' lists are less restrictive. For example, in the SQL standard > built-in functions and all built-in types are reserved keywords; in MySQL > built-in functions are not reserved, but built-in types are reserved; in > PostgreSQL only the minimal set of keywords required for correct parsing is > reserved (built-in functions are not reserved, built-in types are not > reserved). See the comparison table [2]. Our old SQL engine is based on the H2 database > and H2 reserved keywords (see [3]). The H2 approach is close to the PostgreSQL > approach (a minimal set of keywords is
[jira] [Assigned] (IGNITE-19814) Calcite engine uses 0 as an inlineSize for index created by INT column
[ https://issues.apache.org/jira/browse/IGNITE-19814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov reassigned IGNITE-19814: -- Assignee: Aleksey Plekhanov > Calcite engine uses 0 as an inlineSize for index created by INT column > -- > > Key: IGNITE-19814 > URL: https://issues.apache.org/jira/browse/IGNITE-19814 > Project: Ignite > Issue Type: Bug >Reporter: Sergey Korotkov >Assignee: Aleksey Plekhanov >Priority: Minor > Labels: calcite > Attachments: > 0001-IGNITE-19814-Add-calcite-test-for-integer-column-ind.patch > > > Using the Calcite query engine, the following SQL statement > {code:sql} > CREATE TABLE TEST (id INT, name VARCHAR, PRIMARY KEY(id)) > {code} > creates an index for the INT id column with inlineSize = 0. > If the _key_type_ is specified in the WITH clause, the index is created correctly > with inlineSize = 5: > {code:sql} > CREATE TABLE TEST (id INT, name VARCHAR, PRIMARY KEY(id)) > WITH "key_type=Integer" > {code} > For the H2 engine, inlineSize is 5 in both cases. > A reproducer is attached as a unit test. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19814) Calcite engine uses 0 as an inlineSize for index created by INT column
[ https://issues.apache.org/jira/browse/IGNITE-19814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-19814: --- Ignite Flags: Release Notes Required (was: Docs Required,Release Notes Required) > Calcite engine uses 0 as an inlineSize for index created by INT column > -- > > Key: IGNITE-19814 > URL: https://issues.apache.org/jira/browse/IGNITE-19814 > Project: Ignite > Issue Type: Bug >Reporter: Sergey Korotkov >Assignee: Aleksey Plekhanov >Priority: Minor > Labels: calcite > Attachments: > 0001-IGNITE-19814-Add-calcite-test-for-integer-column-ind.patch > > > Using the Calcite query engine, the following SQL statement > {code:sql} > CREATE TABLE TEST (id INT, name VARCHAR, PRIMARY KEY(id)) > {code} > creates an index for the INT id column with inlineSize = 0. > If the _key_type_ is specified in the WITH clause, the index is created correctly > with inlineSize = 5: > {code:sql} > CREATE TABLE TEST (id INT, name VARCHAR, PRIMARY KEY(id)) > WITH "key_type=Integer" > {code} > For the H2 engine, inlineSize is 5 in both cases. > A reproducer is attached as a unit test. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19818) Calcite engine. Query planning failed when cache size is too big
[ https://issues.apache.org/jira/browse/IGNITE-19818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-19818: --- Labels: ise (was: ) > Calcite engine. Query planning failed when cache size is too big > > > Key: IGNITE-19818 > URL: https://issues.apache.org/jira/browse/IGNITE-19818 > Project: Ignite > Issue Type: Bug >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > > We use the cache size as an estimation of the row count during planning, but we > use the {{cache.size()}} method, which returns an int value. If the cache size > is more than {{Integer.MAX_VALUE}}, we get a wrong or even negative size, > which causes assertion errors during planning. > We should switch to {{cache.sizeLong()}}. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19818) Calcite engine. Query planning failed when cache size is too big
Aleksey Plekhanov created IGNITE-19818: -- Summary: Calcite engine. Query planning failed when cache size is too big Key: IGNITE-19818 URL: https://issues.apache.org/jira/browse/IGNITE-19818 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov We use the cache size as an estimation of the row count during planning, but we use the {{cache.size()}} method, which returns an int value. If the cache size is more than {{Integer.MAX_VALUE}}, we get a wrong or even negative size, which causes assertion errors during planning. We should switch to {{cache.sizeLong()}}. -- This message was sent by Atlassian Jira (v8.20.10#820010)
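The underlying failure mode is ordinary Java narrowing: truncating a long entry count to int keeps only the low 32 bits, producing a wrong and possibly negative value. A minimal illustration (the helper class and method names are hypothetical, not Ignite's API):

```java
// Demonstrates why an int-returning size() breaks for caches above
// Integer.MAX_VALUE entries while a long-returning sizeLong() would not:
// the narrowing cast drops the high bits of the count and can go negative.
class SizeOverflowDemo {
    /** What an int-returning size() effectively does to a large count. */
    static int sizeAsInt(long actualEntries) {
        return (int)actualEntries;
    }

    /** What a long-returning sizeLong() returns: the exact count. */
    static long sizeAsLong(long actualEntries) {
        return actualEntries;
    }
}
```

A negative row-count estimate is exactly the kind of value that trips assertions in the planner, which is why switching the estimation to the long-returning variant fixes the issue.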
[jira] [Created] (IGNITE-19811) Continuous queries backup acknowledge message sending fails for expired entries
Aleksey Plekhanov created IGNITE-19811: -- Summary: Continuous queries backup acknowledge message sending fails for expired entries Key: IGNITE-19811 URL: https://issues.apache.org/jira/browse/IGNITE-19811 Project: Ignite Issue Type: Bug Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov An expired-entry event has {{null}} in the topology version field (see the {{CacheContinuousQueryEntry}} constructor in the {{CacheContinuousQueryManager#onEntryExpired}} method). When a backup acknowledge is sent for such a message, it silently (without warnings in the log) fails with an NPE on {{GridDiscoveryManager#cacheGroupAffinityNodes}} -> {{GridDiscoveryManager#resolveDiscoCache}} for a {{null}} topology version (see {{CacheContinuousQueryHandler#sendBackupAcknowledge}}). This can lead to leaks in the query entries buffer ({{CacheContinuousQueryEventBuffer#backupQ}}). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19767) Update Ignite dependency: Jetty
[ https://issues.apache.org/jira/browse/IGNITE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-19767: --- Ignite Flags: Release Notes Required (was: Docs Required,Release Notes Required) > Update Ignite dependency: Jetty > --- > > Key: IGNITE-19767 > URL: https://issues.apache.org/jira/browse/IGNITE-19767 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksandr Nikolaev >Assignee: Aleksandr Nikolaev >Priority: Major > Labels: ise > Fix For: 2.16 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Update Jetty dependency 9.4.43.v20210629 to 9.4.51.v20230217 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19759) Calcite engine. Review list of reserved keywords
[ https://issues.apache.org/jira/browse/IGNITE-19759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-19759: --- Labels: ise (was: ) > Calcite engine. Review list of reserved keywords > > > Key: IGNITE-19759 > URL: https://issues.apache.org/jira/browse/IGNITE-19759 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise > > For the Calcite engine we have too strict a list of reserved keywords. For > example, lexemes such as "TYPE" and "OPTIONS" are reserved keywords and can't > be used as column or table names. But "TYPE" is frequently used by users as a > column name and we should exclude it from the list of reserved keywords (add > it to the non-reserved keywords, see the {{nonReservedKeywords}} section of the > {{config.fmpp}} file). Other vendors allow using "TYPE" as a column name. > We should also review the whole list of reserved keywords (see the generated > {{Parser.jj}}); perhaps some other keywords should be excluded from the > reserved list too. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19759) Calcite engine. Review list of reserved keywords
Aleksey Plekhanov created IGNITE-19759: -- Summary: Calcite engine. Review list of reserved keywords Key: IGNITE-19759 URL: https://issues.apache.org/jira/browse/IGNITE-19759 Project: Ignite Issue Type: Improvement Reporter: Aleksey Plekhanov Assignee: Aleksey Plekhanov For the Calcite engine we have too strict a list of reserved keywords. For example, lexemes such as "TYPE" and "OPTIONS" are reserved keywords and can't be used as column or table names. But "TYPE" is frequently used by users as a column name and we should exclude it from the list of reserved keywords (add it to the non-reserved keywords, see the {{nonReservedKeywords}} section of the {{config.fmpp}} file). Other vendors allow using "TYPE" as a column name. We should also review the whole list of reserved keywords (see the generated {{Parser.jj}}); perhaps some other keywords should be excluded from the reserved list too. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19748) Calcite engine. Support queries timeout
[ https://issues.apache.org/jira/browse/IGNITE-19748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov updated IGNITE-19748: --- Description: We can set a timeout for SQL queries: * Using the {{SqlFieldsQuery.timeout}} property (for a certain query) * Using the "sql.defaultQueryTimeout" distributed property (the default for all queries) But the Calcite-based SQL query engine ignores these timeouts. The only timeout supported by the new engine is the planning timeout. We should support execution timeouts too. was: We can set a timeout for SQL queries: * Using the {{SqlFieldsQuery.timeout}} property (for a certain query) * Using the "sql.defaultQueryTimeout" distributed property (the default for all queries) But the Calcite-based SQL query engine ignores these timeouts. The only timeout supported by the new engine is the planning timeout. We should support execution timeouts too. > Calcite engine. Support queries timeout > --- > > Key: IGNITE-19748 > URL: https://issues.apache.org/jira/browse/IGNITE-19748 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: calcite > > We can set a timeout for SQL queries: > * Using the {{SqlFieldsQuery.timeout}} property (for a certain query) > * Using the "sql.defaultQueryTimeout" distributed property (the default for all > queries) > But the Calcite-based SQL query engine ignores these timeouts. The only > timeout supported by the new engine is the planning timeout. We should support > execution timeouts too. -- This message was sent by Atlassian Jira (v8.20.10#820010)