[jira] [Assigned] (IGNITE-21830) Add logging of connection check for each address

2024-04-09 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander reassigned IGNITE-21830:


Assignee: Luchnikov Alexander

> Add logging of connection check for each address
> 
>
> Key: IGNITE-21830
> URL: https://issues.apache.org/jira/browse/IGNITE-21830
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ilya Shishkov
>Assignee: Luchnikov Alexander
>Priority: Trivial
>  Labels: ise, newbie
>
> Currently, exception thrown during checking of address is ignored [1]. It 
> would be useful to print message with connection check summary including each 
> address checking state and error message (if any).
> # 
> https://github.com/apache/ignite/blob/7cd0c7a7d1150bbf6be6aae5efe80627a73757c0/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/ServerImpl.java#L7293



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21865) [PerfStat] Add metadata about tables, columns, indexes

2024-03-28 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-21865:
-
Labels: ise  (was: )

> [PerfStat] Add metadata about tables, columns, indexes
> --
>
> Key: IGNITE-21865
> URL: https://issues.apache.org/jira/browse/IGNITE-21865
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
>
> The PerfStat report contains information about which SQL queries were 
> executed and what their execution plan was. But there is no information about 
> what tables are in the cluster, what columns are in the tables, what indexes 
> are created.
> In our case, we had to request additional information about tables and 
> indexes to understand the cause of the problematic query plan. Used:
> {code:java}
> ./control.sh --system-view TABLES
> ./control.sh --system-view TABLES_COLUMNS
> ./control.sh --system-view INDEXES
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21865) [PerfStat] Add metadata about tables, columns, indexes

2024-03-28 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-21865:


 Summary: [PerfStat] Add metadata about tables, columns, indexes
 Key: IGNITE-21865
 URL: https://issues.apache.org/jira/browse/IGNITE-21865
 Project: Ignite
  Issue Type: Improvement
Reporter: Luchnikov Alexander


The PerfStat report contains information about which SQL queries were executed 
and what their execution plan was. But there is no information about what 
tables are in the cluster, what columns are in the tables, what indexes are 
created.

In our case, we had to request additional information about tables and indexes 
to understand the cause of the problematic query plan. Used:
{code:java}
./control.sh --system-view TABLES
./control.sh --system-view TABLES_COLUMNS
./control.sh --system-view INDEXES
{code}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21863) [PerfStat] OOM when using build-report.sh from performance statistics

2024-03-28 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-21863:
-
Summary:  [PerfStat] OOM when using build-report.sh from performance 
statistics  (was:  OOM when using build-report.sh from performance statistics)

>  [PerfStat] OOM when using build-report.sh from performance statistics
> --
>
> Key: IGNITE-21863
> URL: https://issues.apache.org/jira/browse/IGNITE-21863
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.16
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
>
> The problem is reproduced on a large volume collected using 
> {code:java}
> ./control.sh --performance-statistics
> {code}
> statistics, in our cases the total volume was 50GB.
> Increasing xmx to 64gb did not solve the problem.
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.base/java.util.HashMap.resize(HashMap.java:700)
> at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1112)
> at 
> org.apache.ignite.internal.performancestatistics.handlers.QueryHandler.queryProperty(QueryHandler.java:160)
> at 
> org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.deserialize(FilePerformanceStatisticsReader.java:345)
> at 
> org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.read(FilePerformanceStatisticsReader.java:169)
> at 
> org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.createReport(PerformanceStatisticsReportBuilder.java:124)
> at 
> org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.main(PerformanceStatisticsReportBuilder.java:69)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21863) OOM when using build-report.sh from performance statistics

2024-03-28 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-21863:
-
Labels: ise  (was: )

>  OOM when using build-report.sh from performance statistics
> ---
>
> Key: IGNITE-21863
> URL: https://issues.apache.org/jira/browse/IGNITE-21863
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.16
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
>
> The problem is reproduced on a large volume collected using 
> {code:java}
> ./control.sh --performance-statistics
> {code}
> statistics, in our cases the total volume was 50GB.
> Increasing xmx to 64gb did not solve the problem.
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.base/java.util.HashMap.resize(HashMap.java:700)
> at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1112)
> at 
> org.apache.ignite.internal.performancestatistics.handlers.QueryHandler.queryProperty(QueryHandler.java:160)
> at 
> org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.deserialize(FilePerformanceStatisticsReader.java:345)
> at 
> org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.read(FilePerformanceStatisticsReader.java:169)
> at 
> org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.createReport(PerformanceStatisticsReportBuilder.java:124)
> at 
> org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.main(PerformanceStatisticsReportBuilder.java:69)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21863) OOM when using build-report.sh from performance statistics

2024-03-28 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-21863:
-
Description: 
The problem is reproduced on a large volume collected using 
{code:java}
./control.sh --performance-statistics
{code}
statistics, in our cases the total volume was 50GB.

Increasing xmx to 64gb did not solve the problem.

{code:java}
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.HashMap.resize(HashMap.java:700)
at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1112)
at 
org.apache.ignite.internal.performancestatistics.handlers.QueryHandler.queryProperty(QueryHandler.java:160)
at 
org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.deserialize(FilePerformanceStatisticsReader.java:345)
at 
org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.read(FilePerformanceStatisticsReader.java:169)
at 
org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.createReport(PerformanceStatisticsReportBuilder.java:124)
at 
org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.main(PerformanceStatisticsReportBuilder.java:69)
{code}


  was:
The problem is reproduced on a large volume collected using 
{code:java}
control.sh --performance-statistics
{code}
statistics, in our cases the total volume was 50GB.

Increasing xmx to 64gb did not solve the problem.

{code:java}
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.HashMap.resize(HashMap.java:700)
at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1112)
at 
org.apache.ignite.internal.performancestatistics.handlers.QueryHandler.queryProperty(QueryHandler.java:160)
at 
org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.deserialize(FilePerformanceStatisticsReader.java:345)
at 
org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.read(FilePerformanceStatisticsReader.java:169)
at 
org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.createReport(PerformanceStatisticsReportBuilder.java:124)
at 
org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.main(PerformanceStatisticsReportBuilder.java:69)
{code}



>  OOM when using build-report.sh from performance statistics
> ---
>
> Key: IGNITE-21863
> URL: https://issues.apache.org/jira/browse/IGNITE-21863
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.16
>Reporter: Luchnikov Alexander
>Priority: Minor
>
> The problem is reproduced on a large volume collected using 
> {code:java}
> ./control.sh --performance-statistics
> {code}
> statistics, in our cases the total volume was 50GB.
> Increasing xmx to 64gb did not solve the problem.
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.base/java.util.HashMap.resize(HashMap.java:700)
> at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1112)
> at 
> org.apache.ignite.internal.performancestatistics.handlers.QueryHandler.queryProperty(QueryHandler.java:160)
> at 
> org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.deserialize(FilePerformanceStatisticsReader.java:345)
> at 
> org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.read(FilePerformanceStatisticsReader.java:169)
> at 
> org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.createReport(PerformanceStatisticsReportBuilder.java:124)
> at 
> org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.main(PerformanceStatisticsReportBuilder.java:69)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21863) OOM when using build-report.sh from performance statistics

2024-03-28 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-21863:


 Summary:  OOM when using build-report.sh from performance 
statistics
 Key: IGNITE-21863
 URL: https://issues.apache.org/jira/browse/IGNITE-21863
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.16
Reporter: Luchnikov Alexander


The problem is reproduced on a large volume collected using 
{code:java}
control.sh --performance-statistics
{code}
statistics, in our cases the total volume was 50GB.

Increasing xmx to 64gb did not solve the problem.

{code:java}
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.HashMap.resize(HashMap.java:700)
at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1112)
at 
org.apache.ignite.internal.performancestatistics.handlers.QueryHandler.queryProperty(QueryHandler.java:160)
at 
org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.deserialize(FilePerformanceStatisticsReader.java:345)
at 
org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.read(FilePerformanceStatisticsReader.java:169)
at 
org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.createReport(PerformanceStatisticsReportBuilder.java:124)
at 
org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.main(PerformanceStatisticsReportBuilder.java:69)
{code}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21863) OOM when using build-report.sh from performance statistics

2024-03-28 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-21863:
-
Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

>  OOM when using build-report.sh from performance statistics
> ---
>
> Key: IGNITE-21863
> URL: https://issues.apache.org/jira/browse/IGNITE-21863
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.16
>Reporter: Luchnikov Alexander
>Priority: Minor
>
> The problem is reproduced on a large volume collected using 
> {code:java}
> control.sh --performance-statistics
> {code}
> statistics, in our cases the total volume was 50GB.
> Increasing xmx to 64gb did not solve the problem.
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.base/java.util.HashMap.resize(HashMap.java:700)
> at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1112)
> at 
> org.apache.ignite.internal.performancestatistics.handlers.QueryHandler.queryProperty(QueryHandler.java:160)
> at 
> org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.deserialize(FilePerformanceStatisticsReader.java:345)
> at 
> org.apache.ignite.internal.processors.performancestatistics.FilePerformanceStatisticsReader.read(FilePerformanceStatisticsReader.java:169)
> at 
> org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.createReport(PerformanceStatisticsReportBuilder.java:124)
> at 
> org.apache.ignite.internal.performancestatistics.PerformanceStatisticsReportBuilder.main(PerformanceStatisticsReportBuilder.java:69)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21478) OOM crash with unstable topology

2024-02-07 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-21478:
-
Description: 
User cases:
1) Frequent entry/exit of a thick client into the topology leads to a crash of 
the server node due to OMM.
2) Frequent creation and destroy of caches leads to a server node crash due to 
OOM.
 topVer=20098

*Real case*
Part of the log before the OOM crash, pay attention to *topVer=20098*
{code:java}
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
minorTopVer=6]
^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
commPort=47100]
^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
^-- Heap [used=867MB, free=15.29%, comm=1024MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=7, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
{code}

Histogram from heap-dump after node failed
 !histo.png! 

*MinorTop example*
{code:java}
@Test
public void testMinorVer() throws Exception {
Ignite server = startGrids(1);
IgniteEx client = startClientGrid();
String cacheName = "cacheName";
for (int i = 0; i < 500; i++) {
client.getOrCreateCache(cacheName);
client.destroyCache(cacheName);
}
System.err.println("Heap dump time");
Thread.sleep(100);
}
{code}

{code:java}
[INFO 
][exchange-worker-#149%internal.IgniteOomTest%][GridCachePartitionExchangeManager]
 AffinityTopologyVersion [topVer=2, minorTopVer=1000], 
evt=DISCOVERY_CUSTOM_EVT, evtNode=52b4c130-1a01-4858-813a-ebc8a5dabf1e, 
client=true]
{code}

 !HistoMinorTop.png! 









  was:
User cases:
1) Frequent entry/exit of a thick client into the topology leads to a crash of 
the server node due to OMM.
2) Frequent creation and destroy of caches leads to a server node crash due to 
OOM.
 topVer=20098

Part of the log before the OOM crash, pay attention to *topVer=20098*
{code:java}
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
minorTopVer=6]
^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
commPort=47100]
^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
^-- Heap [used=867MB, free=15.29%, comm=1024MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=7, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
{code}

Histogram from heap-dump after node failed
 !histo.png! 

*MinorTop example
*
{code:java}
@Test
public void testMinorVer() throws Exception {
Ignite server = startGrids(1);
IgniteEx client = startClientGrid();
String cacheName = "cacheName";
for (int i = 0; i < 500; i++) {
client.getOrCreateCache(cacheName);
client.destroyCache(cacheName);
}
System.err.println("Heap dump time");
Thread.sleep(100);
}
{code}

{code:java}
[INFO 
][exchange-worker-#149%internal.IgniteOomTest%][GridCachePartitionExchangeManager]
 AffinityTopologyVersion [topVer=2, minorTopVer=1000], 
evt=DISCOVERY_CUSTOM_EVT, evtNode=52b4c130-1a01-4858-813a-ebc8a5dabf1e, 
client=true]
{code}

 !HistoMinorTop.png! 










> OOM crash with unstable topology
> 
>
> Key: IGNITE-21478
> URL: https://issues.apache.org/jira/browse/IGNITE-21478
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: HistoMinorTop.png, histo.png
>
>
> User cases:
> 1) Frequent entry/exit of a thick client into the topology leads to a crash 
> of the server node due to OMM.
> 2) Frequent creation and destroy of caches leads to a server node crash due 
> to OOM.
>  topVer=20098
> *Real case*
> Part of the log before the OOM crash, pay attention to *topVer=20098*
> {code:java}
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
> ^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
> minorTopVer=6]
> ^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
> commPort=47100]
> ^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
> ^-- Heap [used=867MB, free=15.29%, comm=1024MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=7, qSize=0]
> ^-- System thread pool [active=0, idle=8, qSize=0]
> ^-- 

[jira] [Updated] (IGNITE-21478) OOM crash with unstable topology

2024-02-07 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-21478:
-
Attachment: HistoMinorTop.png

> OOM crash with unstable topology
> 
>
> Key: IGNITE-21478
> URL: https://issues.apache.org/jira/browse/IGNITE-21478
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: HistoMinorTop.png, histo.png
>
>
> User cases:
> 1) Frequent entry/exit of a thick client into the topology leads to a crash 
> of the server node due to OMM.
> 2) Frequent creation and destroy of caches leads to a server node crash due 
> to OOM.
>  topVer=20098
> Part of the log before the OOM crash, pay attention to *topVer=20098*
> {code:java}
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
> ^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
> minorTopVer=6]
> ^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
> commPort=47100]
> ^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
> ^-- Heap [used=867MB, free=15.29%, comm=1024MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=7, qSize=0]
> ^-- System thread pool [active=0, idle=8, qSize=0]
> ^-- Striped thread pool [active=0, idle=8, qSize=0]
> {code}
> Histogram from heap-dump after node failed
>  !histo.png! 
> *MinorTop example
> *
> {code:java}
> @Test
> public void testMinorVer() throws Exception {
> Ignite server = startGrids(1);
> IgniteEx client = startClientGrid();
> String cacheName = "cacheName";
> for (int i = 0; i < 500; i++) {
> client.getOrCreateCache(cacheName);
> client.destroyCache(cacheName);
> }
> System.err.println("Heap dump time");
> Thread.sleep(100);
> }
> {code}
> {code:java}
> [INFO 
> ][exchange-worker-#149%internal.IgniteOomTest%][GridCachePartitionExchangeManager]
>  AffinityTopologyVersion [topVer=2, minorTopVer=1000], 
> evt=DISCOVERY_CUSTOM_EVT, evtNode=52b4c130-1a01-4858-813a-ebc8a5dabf1e, 
> client=true]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21478) OOM crash with unstable topology

2024-02-07 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-21478:
-
Description: 
User cases:
1) Frequent entry/exit of a thick client into the topology leads to a crash of 
the server node due to OMM.
2) Frequent creation and destroy of caches leads to a server node crash due to 
OOM.
 topVer=20098

Part of the log before the OOM crash, pay attention to *topVer=20098*
{code:java}
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
minorTopVer=6]
^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
commPort=47100]
^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
^-- Heap [used=867MB, free=15.29%, comm=1024MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=7, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
{code}

Histogram from heap-dump after node failed
 !histo.png! 

*MinorTop example
*
{code:java}
@Test
public void testMinorVer() throws Exception {
Ignite server = startGrids(1);
IgniteEx client = startClientGrid();
String cacheName = "cacheName";
for (int i = 0; i < 500; i++) {
client.getOrCreateCache(cacheName);
client.destroyCache(cacheName);
}
System.err.println("Heap dump time");
Thread.sleep(100);
}
{code}

{code:java}
[INFO 
][exchange-worker-#149%internal.IgniteOomTest%][GridCachePartitionExchangeManager]
 AffinityTopologyVersion [topVer=2, minorTopVer=1000], 
evt=DISCOVERY_CUSTOM_EVT, evtNode=52b4c130-1a01-4858-813a-ebc8a5dabf1e, 
client=true]
{code}











  was:
User cases:
1) Frequent entry/exit of a thick client into the topology leads to a crash of 
the server node due to OMM.
2) Frequent creation and destroy of caches leads to a server node crash due to 
OOM.
 topVer=20098

Part of the log before the OOM crash, pay attention to *topVer=20098*
{code:java}
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
minorTopVer=6]
^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
commPort=47100]
^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
^-- Heap [used=867MB, free=15.29%, comm=1024MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=7, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
{code}

Histogram from heap-dump after node failed
 !histo.png! 

MinorTop example

{code:java}
@Test
public void testMinorVer() throws Exception {
Ignite server = startGrids(1);
IgniteEx client = startClientGrid();
String cacheName = "cacheName";
for (int i = 0; i < 500; i++) {
client.getOrCreateCache(cacheName);
client.destroyCache(cacheName);
}
System.err.println("Heap dump time");
Thread.sleep(100);
}
{code}

{code:java}
[INFO 
][exchange-worker-#149%internal.IgniteOomTest%][GridCachePartitionExchangeManager]
 AffinityTopologyVersion [topVer=2, minorTopVer=1000], 
evt=DISCOVERY_CUSTOM_EVT, evtNode=52b4c130-1a01-4858-813a-ebc8a5dabf1e, 
client=true]
{code}










> OOM crash with unstable topology
> 
>
> Key: IGNITE-21478
> URL: https://issues.apache.org/jira/browse/IGNITE-21478
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: HistoMinorTop.png, histo.png
>
>
> User cases:
> 1) Frequent entry/exit of a thick client into the topology leads to a crash 
> of the server node due to OMM.
> 2) Frequent creation and destroy of caches leads to a server node crash due 
> to OOM.
>  topVer=20098
> Part of the log before the OOM crash, pay attention to *topVer=20098*
> {code:java}
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
> ^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
> minorTopVer=6]
> ^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
> commPort=47100]
> ^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
> ^-- Heap [used=867MB, free=15.29%, comm=1024MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=7, qSize=0]
> ^-- System thread pool [active=0, idle=8, qSize=0]
> ^-- Striped thread pool [active=0, idle=8, qSize=0]
> {code}
> Histogram from 

[jira] [Updated] (IGNITE-21478) OOM crash with unstable topology

2024-02-07 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-21478:
-
Description: 
User cases:
1) Frequent entry/exit of a thick client into the topology leads to a crash of 
the server node due to OMM.
2) Frequent creation and destroy of caches leads to a server node crash due to 
OOM.
 topVer=20098

Part of the log before the OOM crash, pay attention to *topVer=20098*
{code:java}
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
minorTopVer=6]
^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
commPort=47100]
^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
^-- Heap [used=867MB, free=15.29%, comm=1024MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=7, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
{code}

Histogram from heap-dump after node failed
 !histo.png! 

MinorTop example

{code:java}
@Test
public void testMinorVer() throws Exception {
Ignite server = startGrids(1);
IgniteEx client = startClientGrid();
String cacheName = "cacheName";
for (int i = 0; i < 500; i++) {
client.getOrCreateCache(cacheName);
client.destroyCache(cacheName);
}
System.err.println("Heap dump time");
Thread.sleep(100);
}
{code}

{code:java}
[INFO 
][exchange-worker-#149%internal.IgniteOomTest%][GridCachePartitionExchangeManager]
 AffinityTopologyVersion [topVer=2, minorTopVer=1000], 
evt=DISCOVERY_CUSTOM_EVT, evtNode=52b4c130-1a01-4858-813a-ebc8a5dabf1e, 
client=true]
{code}









  was:
User cases:
1) Frequent entry/exit of a thick client into the topology leads to a crash of 
the server node due to OMM.
2) Frequent creation and destroy of caches leads to a server node crash due to 
OOM.
 topVer=20098

Part of the log before the OOM crash, pay attention to *topVer=20098*
{code:java}
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
minorTopVer=6]
^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
commPort=47100]
^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
^-- Heap [used=867MB, free=15.29%, comm=1024MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=7, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
{code}

Histogram from heap-dump after node failed
 !histo.png! 




> OOM crash with unstable topology
> 
>
> Key: IGNITE-21478
> URL: https://issues.apache.org/jira/browse/IGNITE-21478
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: HistoMinorTop.png, histo.png
>
>
> User cases:
> 1) Frequent entry/exit of a thick client into the topology leads to a crash 
> of the server node due to OMM.
> 2) Frequent creation and destroy of caches leads to a server node crash due 
> to OOM.
>  topVer=20098
> Part of the log before the OOM crash, pay attention to *topVer=20098*
> {code:java}
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
> ^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
> minorTopVer=6]
> ^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
> commPort=47100]
> ^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
> ^-- Heap [used=867MB, free=15.29%, comm=1024MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=7, qSize=0]
> ^-- System thread pool [active=0, idle=8, qSize=0]
> ^-- Striped thread pool [active=0, idle=8, qSize=0]
> {code}
> Histogram from heap-dump after node failed
>  !histo.png! 
> MinorTop example
> {code:java}
> @Test
> public void testMinorVer() throws Exception {
> Ignite server = startGrids(1);
> IgniteEx client = startClientGrid();
> String cacheName = "cacheName";
> for (int i = 0; i < 500; i++) {
> client.getOrCreateCache(cacheName);
> client.destroyCache(cacheName);
> }
> System.err.println("Heap dump time");
> Thread.sleep(100);
> }
> {code}
> {code:java}
> [INFO 
> ][exchange-worker-#149%internal.IgniteOomTest%][GridCachePartitionExchangeManager]
>  AffinityTopologyVersion [topVer=2, minorTopVer=1000], 
> evt=DISCOVERY_CUSTOM_EVT, 

[jira] [Updated] (IGNITE-21478) OOM crash with unstable topology

2024-02-07 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-21478:
-
Description: 
User cases:
1) Frequent entry/exit of a thick client into the topology leads to a crash of 
the server node due to OMM.
2) Frequent creation and destroy of caches leads to a server node crash due to 
OOM.
 topVer=20098

Part of the log before the OOM crash, pay attention to *topVer=20098*
{code:java}
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
minorTopVer=6]
^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
commPort=47100]
^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
^-- Heap [used=867MB, free=15.29%, comm=1024MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=7, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
{code}

Histogram from heap-dump after node failed
 !histo.png! 

*MinorTop example
*
{code:java}
@Test
public void testMinorVer() throws Exception {
Ignite server = startGrids(1);
IgniteEx client = startClientGrid();
String cacheName = "cacheName";
for (int i = 0; i < 500; i++) {
client.getOrCreateCache(cacheName);
client.destroyCache(cacheName);
}
System.err.println("Heap dump time");
Thread.sleep(100);
}
{code}

{code:java}
[INFO 
][exchange-worker-#149%internal.IgniteOomTest%][GridCachePartitionExchangeManager]
 AffinityTopologyVersion [topVer=2, minorTopVer=1000], 
evt=DISCOVERY_CUSTOM_EVT, evtNode=52b4c130-1a01-4858-813a-ebc8a5dabf1e, 
client=true]
{code}

 !HistoMinorTop.png! 









  was:
User cases:
1) Frequent entry/exit of a thick client into the topology leads to a crash of 
the server node due to OMM.
2) Frequent creation and destroy of caches leads to a server node crash due to 
OOM.
 topVer=20098

Part of the log before the OOM crash, pay attention to *topVer=20098*
{code:java}
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
minorTopVer=6]
^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
commPort=47100]
^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
^-- Heap [used=867MB, free=15.29%, comm=1024MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=7, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
{code}

Histogram from heap-dump after node failed
 !histo.png! 

*MinorTop example
*
{code:java}
@Test
public void testMinorVer() throws Exception {
Ignite server = startGrids(1);
IgniteEx client = startClientGrid();
String cacheName = "cacheName";
for (int i = 0; i < 500; i++) {
client.getOrCreateCache(cacheName);
client.destroyCache(cacheName);
}
System.err.println("Heap dump time");
Thread.sleep(100);
}
{code}

{code:java}
[INFO 
][exchange-worker-#149%internal.IgniteOomTest%][GridCachePartitionExchangeManager]
 AffinityTopologyVersion [topVer=2, minorTopVer=1000], 
evt=DISCOVERY_CUSTOM_EVT, evtNode=52b4c130-1a01-4858-813a-ebc8a5dabf1e, 
client=true]
{code}












> OOM crash with unstable topology
> 
>
> Key: IGNITE-21478
> URL: https://issues.apache.org/jira/browse/IGNITE-21478
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: HistoMinorTop.png, histo.png
>
>
> User cases:
> 1) Frequent entry/exit of a thick client into the topology leads to a crash 
> of the server node due to OMM.
> 2) Frequent creation and destroy of caches leads to a server node crash due 
> to OOM.
>  topVer=20098
> Part of the log before the OOM crash, pay attention to *topVer=20098*
> {code:java}
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
> ^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
> minorTopVer=6]
> ^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
> commPort=47100]
> ^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
> ^-- Heap [used=867MB, free=15.29%, comm=1024MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=7, qSize=0]
> ^-- System thread pool [active=0, idle=8, qSize=0]
> ^-- Striped thread pool [active=0, idle=8, qSize=0]
> 

[jira] [Updated] (IGNITE-21478) OOM crash with unstable topology

2024-02-07 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-21478:
-
Description: 
User cases:
1) Frequent entry/exit of a thick client into the topology leads to a crash of 
the server node due to OMM.
2) Frequent creation and destroy of caches leads to a server node crash due to 
OOM.
 topVer=20098

Part of the log before the OOM crash, pay attention to *topVer=20098*
{code:java}
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
minorTopVer=6]
^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
commPort=47100]
^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
^-- Heap [used=867MB, free=15.29%, comm=1024MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=7, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
{code}

Histogram from heap-dump after node failed



  was:
User cases:
1) Frequent entry/exit of a thick client into the topology leads to a crash of 
the server node due to OMM.
2) Frequent creation and destroy of caches leads to a server node crash due to 
OOM.
 topVer=20098

Part of the log before the OOM crash, pay attention to *topVer=20098*
{code:java}
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
minorTopVer=6]
^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
commPort=47100]
^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
^-- Heap [used=867MB, free=15.29%, comm=1024MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=7, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
{code}

Histogram from heap-dump after node failed



> OOM crash with unstable topology
> 
>
> Key: IGNITE-21478
> URL: https://issues.apache.org/jira/browse/IGNITE-21478
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
> Attachments: histo.png
>
>
> User cases:
> 1) Frequent entry/exit of a thick client into the topology leads to a crash 
> of the server node due to OMM.
> 2) Frequent creation and destroy of caches leads to a server node crash due 
> to OOM.
>  topVer=20098
> Part of the log before the OOM crash, pay attention to *topVer=20098*
> {code:java}
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
> ^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
> minorTopVer=6]
> ^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
> commPort=47100]
> ^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
> ^-- Heap [used=867MB, free=15.29%, comm=1024MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=7, qSize=0]
> ^-- System thread pool [active=0, idle=8, qSize=0]
> ^-- Striped thread pool [active=0, idle=8, qSize=0]
> {code}
> Histogram from heap-dump after node failed



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21478) OOM crash with unstable topology

2024-02-07 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-21478:
-
Description: 
User cases:
1) Frequent entry/exit of a thick client into the topology leads to a crash of 
the server node due to OMM.
2) Frequent creation and destroy of caches leads to a server node crash due to 
OOM.
 topVer=20098

Part of the log before the OOM crash, pay attention to *topVer=20098*
{code:java}
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
minorTopVer=6]
^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
commPort=47100]
^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
^-- Heap [used=867MB, free=15.29%, comm=1024MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=7, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
{code}

Histogram from heap-dump after node failed
 !histo.png! 



  was:
User cases:
1) Frequent entry/exit of a thick client into the topology leads to a crash of 
the server node due to OMM.
2) Frequent creation and destroy of caches leads to a server node crash due to 
OOM.
 topVer=20098

Part of the log before the OOM crash, pay attention to *topVer=20098*
{code:java}
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
minorTopVer=6]
^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
commPort=47100]
^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
^-- Heap [used=867MB, free=15.29%, comm=1024MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=7, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
{code}

Histogram from heap-dump after node failed




> OOM crash with unstable topology
> 
>
> Key: IGNITE-21478
> URL: https://issues.apache.org/jira/browse/IGNITE-21478
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
> Attachments: histo.png
>
>
> User cases:
> 1) Frequent entry/exit of a thick client into the topology leads to a crash 
> of the server node due to OMM.
> 2) Frequent creation and destroy of caches leads to a server node crash due 
> to OOM.
>  topVer=20098
> Part of the log before the OOM crash, pay attention to *topVer=20098*
> {code:java}
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
> ^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
> minorTopVer=6]
> ^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
> commPort=47100]
> ^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
> ^-- Heap [used=867MB, free=15.29%, comm=1024MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=7, qSize=0]
> ^-- System thread pool [active=0, idle=8, qSize=0]
> ^-- Striped thread pool [active=0, idle=8, qSize=0]
> {code}
> Histogram from heap-dump after node failed
>  !histo.png! 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21478) OOM crash with unstable topology

2024-02-07 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-21478:
-
Attachment: histo.png

> OOM crash with unstable topology
> 
>
> Key: IGNITE-21478
> URL: https://issues.apache.org/jira/browse/IGNITE-21478
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
> Attachments: histo.png
>
>
> User cases:
> 1) Frequent entry/exit of a thick client into the topology leads to a crash 
> of the server node due to OMM.
> 2) Frequent creation and destroy of caches leads to a server node crash due 
> to OOM.
>  topVer=20098
> Part of the log before the OOM crash, pay attention to *topVer=20098*
> {code:java}
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
> ^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
> minorTopVer=6]
> ^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
> commPort=47100]
> ^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
> ^-- Heap [used=867MB, free=15.29%, comm=1024MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=7, qSize=0]
> ^-- System thread pool [active=0, idle=8, qSize=0]
> ^-- Striped thread pool [active=0, idle=8, qSize=0]
> {code}
> Histogram from heap-dump after node failed



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21478) OOM crash with unstable topology

2024-02-07 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-21478:


 Summary: OOM crash with unstable topology
 Key: IGNITE-21478
 URL: https://issues.apache.org/jira/browse/IGNITE-21478
 Project: Ignite
  Issue Type: Bug
Reporter: Luchnikov Alexander


User cases:
1) Frequent entry/exit of a thick client into the topology leads to a crash of 
the server node due to OMM.
2) Frequent creation and destroy of caches leads to a server node crash due to 
OOM.
 topVer=20098

Part of the log before the OOM crash, pay attention to *topVer=20098*
{code:java}
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=f080abcd, uptime=3 days, 09:00:55.274]
^-- Cluster [hosts=4, CPUs=6, servers=2, clients=2, topVer=20098, 
minorTopVer=6]
^-- Network [addrs=[192.168.1.2, 127.0.0.1], discoPort=47500, 
commPort=47100]
^-- CPU [CPUs=2, curLoad=86.83%, avgLoad=21.9%, GC=23.9%]
^-- Heap [used=867MB, free=15.29%, comm=1024MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=7, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
{code}

Histogram from heap-dump after node failed




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-15326) IgniteClientException in server node

2023-04-27 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander resolved IGNITE-15326.
--
Resolution: Cannot Reproduce

> IgniteClientException in server node
> 
>
> Key: IGNITE-15326
> URL: https://issues.apache.org/jira/browse/IGNITE-15326
> Project: Ignite
>  Issue Type: Task
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
>
> In cases of exception on a thin client like
> {code:java}
> class 
> org.apache.ignite.internal.processors.platform.client.IgniteClientException: 
> Cache transaction timed out: 
> GridNearTxLocal[xid=e38a9145b71--0e58-8a37--0001, 
> xidVersion=GridCacheVersion [topVer=240683575, order=1629203572798, 
> nodeOrder=1], nearXidVersion=GridCacheVersion [topVer=240683575, 
> order=1629203572798, nodeOrder=1], concurrency=OPTIMISTIC, 
> isolation=SERIALIZABLE, state=ROLLED_BACK, invalidate=false, 
> rollbackOnly=true, nodeId=12f44683-043f-4bfb-bf6d-7dc6f6348327, timeout=1000, 
> startTime=1629203894182, duration=5021, label=null]
> at 
> org.apache.ignite.internal.processors.platform.client.tx.ClientTxEndRequest.process(ClientTxEndRequest.java:72)
> at 
> org.apache.ignite.internal.processors.platform.client.ClientRequestHandler.handle(ClientRequestHandler.java:99)
> at 
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:202)
> at 
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:56)
> at 
> org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
> at 
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
> at 
> org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at 
> org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: class 
> org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException: 
> Cache transaction timed out: 
> GridNearTxLocal[xid=e38a9145b71--0e58-8a37--0001, 
> xidVersion=GridCacheVersion [topVer=240683575, order=1629203572798, 
> nodeOrder=1], nearXidVersion=GridCacheVersion [topVer=240683575, 
> order=1629203572798, nodeOrder=1], concurrency=OPTIMISTIC, 
> isolation=SERIALIZABLE, state=ROLLED_BACK, invalidate=false, 
> rollbackOnly=true, nodeId=12f44683-043f-4bfb-bf6d-7dc6f6348327, timeout=1000, 
> startTime=1629203894182, duration=5021, label=null]
> at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.checkValid(IgniteTxLocalAdapter.java:1389)
> at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.resume(GridNearTxLocal.java:3698)
> at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.resume(GridNearTxLocal.java:3683)
> at 
> org.apache.ignite.internal.processors.platform.client.tx.ClientTxContext.acquire(ClientTxContext.java:58)
> at 
> org.apache.ignite.internal.processors.platform.client.tx.ClientTxEndRequest.process(ClientTxEndRequest.java:62)
> ... 11 more
> {code}
> In the log of the server node, we have the ERROR message level
> {code:java}
> [15:43:45,494][SEVERE][client-connector-#73][ClientListenerNioListener] 
> Failed to process client request 
> [req=o.a.i.i.processors.platform.client.tx.ClientTxEndRequest@63f02b7f]
> class 
> org.apache.ignite.internal.processors.platform.client.IgniteClientException: 
> Cache transaction timed out: 
> GridNearTxLocal[xid=a4633245b71--0e58-8ca9--0001, 
> xidVersion=GridCacheVersion [topVer=240684201, order=1629204198986, 
> nodeOrder=1], nearXidVersion=GridCacheVersion [topVer=240684201, 
> order=1629204198986, nodeOrder=1], concurrency=OPTIMISTIC, 
> isolation=SERIALIZABLE, state=ROLLED_BACK, invalidate=false, 
> rollbackOnly=true, nodeId=73114d40-c975-4410-9fc1-910e72f45c16, timeout=1000, 
> startTime=1629204220475, duration=5014, label=null]
> at 
> org.apache.ignite.internal.processors.platform.client.tx.ClientTxEndRequest.process(ClientTxEndRequest.java:72)
> at 
> org.apache.ignite.internal.processors.platform.client.ClientRequestHandler.handle(ClientRequestHandler.java:99)
>  

[jira] [Commented] (IGNITE-17345) [IEP-35] Metric to track PA enabled request on ThinClient

2023-04-13 Thread Luchnikov Alexander (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711732#comment-17711732
 ] 

Luchnikov Alexander commented on IGNITE-17345:
--

Unable to complete the task, you can take it to work.

> [IEP-35] Metric to track PA enabled request on ThinClient
> -
>
> Key: IGNITE-17345
> URL: https://issues.apache.org/jira/browse/IGNITE-17345
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-35, ise
> Fix For: 2.15
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> The crucial point to understand ThinClient performance is to know - Partition 
> Awareness enabled or not.
> For now, it's impossible to understand how many request goes to node that is 
> primary for key.
> It seems useful metrics to analyze PA behavior - two counters to track amount 
> of requests for each server node 
> - one counter for keys current node is primary.
> - another counter for keys which require extra network hop between server 
> nodes to serve the request.
> For environment with optimal performance second counter should be close to 
> zero.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-17345) [IEP-35] Metric to track PA enabled request on ThinClient

2023-04-13 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander reassigned IGNITE-17345:


Assignee: (was: Luchnikov Alexander)

> [IEP-35] Metric to track PA enabled request on ThinClient
> -
>
> Key: IGNITE-17345
> URL: https://issues.apache.org/jira/browse/IGNITE-17345
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-35, ise
> Fix For: 2.15
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> The crucial point to understand ThinClient performance is to know - Partition 
> Awareness enabled or not.
> For now, it's impossible to understand how many request goes to node that is 
> primary for key.
> It seems useful metrics to analyze PA behavior - two counters to track amount 
> of requests for each server node 
> - one counter for keys current node is primary.
> - another counter for keys which require extra network hop between server 
> nodes to serve the request.
> For environment with optimal performance second counter should be close to 
> zero.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-18534) The WalWritingRate metric is not calculated when walMode is LOG_ONLY or BACKGROUND

2023-02-01 Thread Luchnikov Alexander (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17683002#comment-17683002
 ] 

Luchnikov Alexander commented on IGNITE-18534:
--

[~zstan] could you please take a look?

> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND
> --
>
> Key: IGNITE-18534
> URL: https://issues.apache.org/jira/browse/IGNITE-18534
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Assignee: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: IoDatastorageMetricsTest.java, patch.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND.
> Reproducer  [^IoDatastorageMetricsTest.java].
> Quick fix patch  [^patch.patch].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-18534) The WalWritingRate metric is not calculated when walMode is LOG_ONLY or BACKGROUND

2023-02-01 Thread Luchnikov Alexander (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17682994#comment-17682994
 ] 

Luchnikov Alexander edited comment on IGNITE-18534 at 2/1/23 12:26 PM:
---

Results of reproducer before fix
{code:java}
[WARN ][main][] >>> REPORT: {walMode=LOG_ONLY=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0], walMode=FSYNC=[1049108, 2098258, 3145859, 4194373, 
5243456, 6291815, 7339121, 8388446, 9436414, 10484015, 11532977, 12581012, 
13629532, 14681087, 15726022, 16773290, 17822132, 18871023, 19918624, 
20967597], walMode=BACKGROUND=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0]}
{code}
after fix
{code:java}
[WARN ][main][] >>> REPORT: {walMode=LOG_ONLY=[1049506, 2096889, 3144710, 
4194337, 5242085, 6289895, 7339524, 8387920, 9435820, 10483503, 11532449, 
12580499, 13628020, 14676257, 15724492, 16772470, 17820948, 18868506, 19914180, 
20961781], walMode=FSYNC=[1049563, 2096946, 3144826, 4193340, 5242378, 6290782, 
7339017, 8387531, 9435499, 10488430, 11531195, 12580086, 13627675, 14675985, 
15724801, 16772772, 17821276, 18869238, 19918129, 20971125], 
walMode=BACKGROUND=[1049565, 2097066, 3144887, 4194514, 5242262, 6290993, 
7339346, 8386745, 9435701, 10483384, 11531866, 12579549, 13632820, 14675929, 
15723864, 16772386, 17821417, 18870092, 19917693, 20966584]}
{code}
in a table view

 
||LOG_ONLY||LOG_ONLY+fix||FSYNC||FSYNC+fix||BACKGROUND||BACKGROUND+fix||
|0|1049506|1049108|1049563|0|1049565|
|0|2096889|2098258|2096946|0|2097066|
|0|3144710|3145859|3144826|0|3144887|
|0|4194337|4194373|4193340|0|4194514|
|0|5242085|5243456|5242378|0|5242262|
|0|6289895|6291815|6290782|0|6290993|
|0|7339524|7339121|7339017|0|7339346|
|0|8387920|8388446|8387531|0|8386745|
|0|9435820|9436414|9435499|0|9435701|
|0|10483503|10484015|10488430|0|10483384|
|0|11532449|11532977|11531195|0|11531866|
|0|12580499|12581012|12580086|0|12579549|
|0|13628020|13629532|13627675|0|13632820|
|0|14676257|14681087|14675985|0|14675929|
|0|15724492|15726022|15724801|0|15723864|
|0|16772470|16773290|16772772|0|16772386|
|0|17820948|17822132|17821276|0|17821417|
|0|18868506|18871023|18869238|0|18870092|
|0|19914180|19918624|19918129|0|19917693|
|0|20961781|20967597|20971125|0|20966584|

 

 


was (Author: aldoraine):
Results of reproducer befor fix
{code:java}
[WARN ][main][] >>> REPORT: {walMode=LOG_ONLY=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0], walMode=FSYNC=[1049108, 2098258, 3145859, 4194373, 
5243456, 6291815, 7339121, 8388446, 9436414, 10484015, 11532977, 12581012, 
13629532, 14681087, 15726022, 16773290, 17822132, 18871023, 19918624, 
20967597], walMode=BACKGROUND=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0]}
{code}
after fix
{code:java}
[WARN ][main][] >>> REPORT: {walMode=LOG_ONLY=[1049506, 2096889, 3144710, 
4194337, 5242085, 6289895, 7339524, 8387920, 9435820, 10483503, 11532449, 
12580499, 13628020, 14676257, 15724492, 16772470, 17820948, 18868506, 19914180, 
20961781], walMode=FSYNC=[1049563, 2096946, 3144826, 4193340, 5242378, 6290782, 
7339017, 8387531, 9435499, 10488430, 11531195, 12580086, 13627675, 14675985, 
15724801, 16772772, 17821276, 18869238, 19918129, 20971125], 
walMode=BACKGROUND=[1049565, 2097066, 3144887, 4194514, 5242262, 6290993, 
7339346, 8386745, 9435701, 10483384, 11531866, 12579549, 13632820, 14675929, 
15723864, 16772386, 17821417, 18870092, 19917693, 20966584]}
{code}
in a table view

 
||LOG_ONLY||LOG_ONLY+fix||FSYNC||FSYNC+fix||BACKGROUND||BACKGROUND+fix||
|0|1049506|1049108|1049563|0|1049565|
|0|2096889|2098258|2096946|0|2097066|
|0|3144710|3145859|3144826|0|3144887|
|0|4194337|4194373|4193340|0|4194514|
|0|5242085|5243456|5242378|0|5242262|
|0|6289895|6291815|6290782|0|6290993|
|0|7339524|7339121|7339017|0|7339346|
|0|8387920|8388446|8387531|0|8386745|
|0|9435820|9436414|9435499|0|9435701|
|0|10483503|10484015|10488430|0|10483384|
|0|11532449|11532977|11531195|0|11531866|
|0|12580499|12581012|12580086|0|12579549|
|0|13628020|13629532|13627675|0|13632820|
|0|14676257|14681087|14675985|0|14675929|
|0|15724492|15726022|15724801|0|15723864|
|0|16772470|16773290|16772772|0|16772386|
|0|17820948|17822132|17821276|0|17821417|
|0|18868506|18871023|18869238|0|18870092|
|0|19914180|19918624|19918129|0|19917693|
|0|20961781|20967597|20971125|0|20966584|

 

 

> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND
> --
>
> Key: IGNITE-18534
> URL: https://issues.apache.org/jira/browse/IGNITE-18534
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Assignee: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: IoDatastorageMetricsTest.java, patch.patch

[jira] [Commented] (IGNITE-18534) The WalWritingRate metric is not calculated when walMode is LOG_ONLY or BACKGROUND

2023-02-01 Thread Luchnikov Alexander (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17682994#comment-17682994
 ] 

Luchnikov Alexander commented on IGNITE-18534:
--

Results of reproducer before fix
{code:java}
[WARN ][main][] >>> REPORT: {walMode=LOG_ONLY=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0], walMode=FSYNC=[1049108, 2098258, 3145859, 4194373, 
5243456, 6291815, 7339121, 8388446, 9436414, 10484015, 11532977, 12581012, 
13629532, 14681087, 15726022, 16773290, 17822132, 18871023, 19918624, 
20967597], walMode=BACKGROUND=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0]}
{code}
after fix
{code:java}
[WARN ][main][] >>> REPORT: {walMode=LOG_ONLY=[1049506, 2096889, 3144710, 
4194337, 5242085, 6289895, 7339524, 8387920, 9435820, 10483503, 11532449, 
12580499, 13628020, 14676257, 15724492, 16772470, 17820948, 18868506, 19914180, 
20961781], walMode=FSYNC=[1049563, 2096946, 3144826, 4193340, 5242378, 6290782, 
7339017, 8387531, 9435499, 10488430, 11531195, 12580086, 13627675, 14675985, 
15724801, 16772772, 17821276, 18869238, 19918129, 20971125], 
walMode=BACKGROUND=[1049565, 2097066, 3144887, 4194514, 5242262, 6290993, 
7339346, 8386745, 9435701, 10483384, 11531866, 12579549, 13632820, 14675929, 
15723864, 16772386, 17821417, 18870092, 19917693, 20966584]}
{code}
in a table view

 
||LOG_ONLY||LOG_ONLY+fix||FSYNC||FSYNC+fix||BACKGROUND||BACKGROUND+fix||
|0|1049506|1049108|1049563|0|1049565|
|0|2096889|2098258|2096946|0|2097066|
|0|3144710|3145859|3144826|0|3144887|
|0|4194337|4194373|4193340|0|4194514|
|0|5242085|5243456|5242378|0|5242262|
|0|6289895|6291815|6290782|0|6290993|
|0|7339524|7339121|7339017|0|7339346|
|0|8387920|8388446|8387531|0|8386745|
|0|9435820|9436414|9435499|0|9435701|
|0|10483503|10484015|10488430|0|10483384|
|0|11532449|11532977|11531195|0|11531866|
|0|12580499|12581012|12580086|0|12579549|
|0|13628020|13629532|13627675|0|13632820|
|0|14676257|14681087|14675985|0|14675929|
|0|15724492|15726022|15724801|0|15723864|
|0|16772470|16773290|16772772|0|16772386|
|0|17820948|17822132|17821276|0|17821417|
|0|18868506|18871023|18869238|0|18870092|
|0|19914180|19918624|19918129|0|19917693|
|0|20961781|20967597|20971125|0|20966584|

 

 

> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND
> --
>
> Key: IGNITE-18534
> URL: https://issues.apache.org/jira/browse/IGNITE-18534
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Assignee: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: IoDatastorageMetricsTest.java, patch.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND.
> Reproducer  [^IoDatastorageMetricsTest.java].
> Quick fix patch  [^patch.patch].
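
For context, a minimal sketch of reading the metric through the public API (an
illustration against assumed 2.x API names, not the attached reproducer):
{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

public class WalWritingRateSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(new DataStorageConfiguration()
                .setWalMode(WALMode.LOG_ONLY) // one of the modes affected by this bug
                .setMetricsEnabled(true)
                .setDefaultDataRegionConfiguration(
                    new DataRegionConfiguration().setPersistenceEnabled(true)));

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cluster().state(ClusterState.ACTIVE);

            IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache("test");

            for (int i = 0; i < 10_000; i++)
                cache.put(i, i);

            // Before the fix this prints 0 for LOG_ONLY and BACKGROUND.
            System.out.println("WalWritingRate: " + ignite.dataStorageMetrics().getWalWritingRate());
        }
    }
}
{code}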



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-18534) The WalWritingRate metric is not calculated when walMode is LOG_ONLY or BACKGROUND

2023-01-31 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander reassigned IGNITE-18534:


Assignee: Luchnikov Alexander

> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND
> --
>
> Key: IGNITE-18534
> URL: https://issues.apache.org/jira/browse/IGNITE-18534
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Assignee: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: IoDatastorageMetricsTest.java, patch.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND.
> Reproducer  [^IoDatastorageMetricsTest.java].
> Quick fix patch  [^patch.patch].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18538) Calcite engine does not allow creating tables with column names type, options

2023-01-12 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18538:
-
Affects Version/s: 2.14

>  Calcite engine does not allow creating tables with column names type, options
> --
>
> Key: IGNITE-18538
> URL: https://issues.apache.org/jira/browse/IGNITE-18538
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: ColumnNameKeywordsTest.java
>
>
> Calcite engine does not allow creating tables with the column names "type" or "options".
> Reproducer  [^ColumnNameKeywordsTest.java] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-18538) Calcite engine does not allow creating tables with column names type, options

2023-01-12 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-18538:


 Summary:  Calcite engine does not allow creating tables with 
column names type, options
 Key: IGNITE-18538
 URL: https://issues.apache.org/jira/browse/IGNITE-18538
 Project: Ignite
  Issue Type: Bug
Reporter: Luchnikov Alexander
 Attachments: ColumnNameKeywordsTest.java

Calcite engine does not allow creating tables with the column names "type" or "options".

Reproducer  [^ColumnNameKeywordsTest.java] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18534) The WalWritingRate metric is not calculated when walMode is LOG_ONLY or BACKGROUND

2023-01-12 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18534:
-
Description: 
The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
BACKGROUND.
Reproducer  [^IoDatastorageMetricsTest.java].
Quick fix patch 


  was:
The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
BACKGROUND.
Reproducer  [^IoDatastorageMetricsTest.java].


> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND
> --
>
> Key: IGNITE-18534
> URL: https://issues.apache.org/jira/browse/IGNITE-18534
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: IoDatastorageMetricsTest.java, patch.patch
>
>
> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND.
> Reproducer  [^IoDatastorageMetricsTest.java].
> Quick fix patch 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18534) The WalWritingRate metric is not calculated when walMode is LOG_ONLY or BACKGROUND

2023-01-12 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18534:
-
Attachment: patch.patch

> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND
> --
>
> Key: IGNITE-18534
> URL: https://issues.apache.org/jira/browse/IGNITE-18534
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: IoDatastorageMetricsTest.java, patch.patch
>
>
> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND.
> Reproducer  [^IoDatastorageMetricsTest.java].
> Quick fix patch 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18534) The WalWritingRate metric is not calculated when walMode is LOG_ONLY or BACKGROUND

2023-01-12 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18534:
-
Description: 
The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
BACKGROUND.
Reproducer  [^IoDatastorageMetricsTest.java].
Quick fix patch  [^patch.patch].


  was:
The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
BACKGROUND.
Reproducer  [^IoDatastorageMetricsTest.java].
Quick fix patch  [^patch.patch] 



> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND
> --
>
> Key: IGNITE-18534
> URL: https://issues.apache.org/jira/browse/IGNITE-18534
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: IoDatastorageMetricsTest.java, patch.patch
>
>
> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND.
> Reproducer  [^IoDatastorageMetricsTest.java].
> Quick fix patch  [^patch.patch].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18534) The WalWritingRate metric is not calculated when walMode is LOG_ONLY or BACKGROUND

2023-01-12 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18534:
-
Description: 
The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
BACKGROUND.
Reproducer  [^IoDatastorageMetricsTest.java].
Quick fix patch  [^patch.patch] 


  was:
The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
BACKGROUND.
Reproducer  [^IoDatastorageMetricsTest.java].
Quick fix patch 



> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND
> --
>
> Key: IGNITE-18534
> URL: https://issues.apache.org/jira/browse/IGNITE-18534
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: IoDatastorageMetricsTest.java, patch.patch
>
>
> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND.
> Reproducer  [^IoDatastorageMetricsTest.java].
> Quick fix patch  [^patch.patch] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18534) The WalWritingRate metric is not calculated when walMode is LOG_ONLY or BACKGROUND

2023-01-12 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18534:
-
Affects Version/s: 2.14

> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND
> --
>
> Key: IGNITE-18534
> URL: https://issues.apache.org/jira/browse/IGNITE-18534
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: IoDatastorageMetricsTest.java
>
>
> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND.
> Reproducer  [^IoDatastorageMetricsTest.java].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18534) The WalWritingRate metric is not calculated when walMode is LOG_ONLY or BACKGROUND

2023-01-12 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18534:
-
Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND
> --
>
> Key: IGNITE-18534
> URL: https://issues.apache.org/jira/browse/IGNITE-18534
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: IoDatastorageMetricsTest.java
>
>
> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND.
> Reproducer  [^IoDatastorageMetricsTest.java].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18534) The WalWritingRate metric is not calculated when walMode is LOG_ONLY or BACKGROUND

2023-01-12 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18534:
-
Labels: ise  (was: )

> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND
> --
>
> Key: IGNITE-18534
> URL: https://issues.apache.org/jira/browse/IGNITE-18534
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: IoDatastorageMetricsTest.java
>
>
> The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
> BACKGROUND.
> Reproducer  [^IoDatastorageMetricsTest.java].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-18534) The WalWritingRate metric is not calculated when walMode is LOG_ONLY or BACKGROUND

2023-01-12 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-18534:


 Summary: The WalWritingRate metric is not calculated when walMode 
is LOG_ONLY or BACKGROUND
 Key: IGNITE-18534
 URL: https://issues.apache.org/jira/browse/IGNITE-18534
 Project: Ignite
  Issue Type: Bug
Reporter: Luchnikov Alexander
 Attachments: IoDatastorageMetricsTest.java

The WalWritingRate metric is not calculated when walMode is LOG_ONLY or 
BACKGROUND.
Reproducer  [^IoDatastorageMetricsTest.java].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18494) Near cache not created with getCache

2023-01-10 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18494:
-
Description: 
The documentation 
(https://ignite.apache.org/docs/latest/configuring-caches/near-cache#configuring-near-cache)
 says:
"Once configured in this way, the near cache is created on any node that 
requests data from the underlying cache, including both server nodes and client 
nodes."
We expect the Near cache to be created for the cache proxy obtained with 
getCache, and getOrCreateNearCache does not need to be called because the 
NearConfiguration was initialized when the cache was created.
Reproducers show that this is not so.
Java reproducer [^Issue.java] without platformcache. 

  was:
The documentation 
(https://ignite.apache.org/docs/latest/configuring-caches/near-cache#configuring-near-cache)
 says:
"Once configured in this way, the near cache is created on any node that 
requests data from the underlying cache, including both server nodes and client 
nodes."
We expect the Near cache to be created for the cache proxy obtained with 
getCache, and getOrCreateNearCache does not need to be called because the 
NearConfiguration was initialized when the cache was created.
Reproducers show that this is not so.
 [^Issue.java] 


> Near cache not created with getCache
> 
>
> Key: IGNITE-18494
> URL: https://issues.apache.org/jira/browse/IGNITE-18494
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: Issue.java, NearCacheTest.cs, NearCacheTest.java
>
>
> The documentation 
> (https://ignite.apache.org/docs/latest/configuring-caches/near-cache#configuring-near-cache)
>  says:
> "Once configured in this way, the near cache is created on any node that 
> requests data from the underlying cache, including both server nodes and 
> client nodes."
> We expect the Near cache to be created for the cache proxy obtained with 
> getCache, and getOrCreateNearCache does not need to be called because the 
> NearConfiguration was initialized when the cache was created.
> Reproducers show that this is not so.
> Java reproducer [^Issue.java] without platformcache. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18494) Near cache not created with getCache

2023-01-10 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18494:
-
Description: 
The documentation 
(https://ignite.apache.org/docs/latest/configuring-caches/near-cache#configuring-near-cache)
 says:
"Once configured in this way, the near cache is created on any node that 
requests data from the underlying cache, including both server nodes and client 
nodes."
We expect the Near cache to be created for the cache proxy obtained with 
getCache, and getOrCreateNearCache does not need to be called because the 
NearConfiguration was initialized when the cache was created.
Reproducers show that this is not so.
 [^Issue.java] 

  was:
The documentation 
(https://ignite.apache.org/docs/latest/configuring-caches/near-cache#configuring-near-cache)
 says:
"Once configured in this way, the near cache is created on any node that 
requests data from the underlying cache, including both server nodes and client 
nodes."
We expect the Near cache to be created for the cache proxy obtained with 
getCache, and getOrCreateNearCache does not need to be called because the 
NearConfiguration was initialized when the cache was created.
Reproducers show that this is not so.


> Near cache not created with getCache
> 
>
> Key: IGNITE-18494
> URL: https://issues.apache.org/jira/browse/IGNITE-18494
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: Issue.java, NearCacheTest.cs, NearCacheTest.java
>
>
> The documentation 
> (https://ignite.apache.org/docs/latest/configuring-caches/near-cache#configuring-near-cache)
>  says:
> "Once configured in this way, the near cache is created on any node that 
> requests data from the underlying cache, including both server nodes and 
> client nodes."
> We expect the Near cache to be created for the cache proxy obtained with 
> getCache, and getOrCreateNearCache does not need to be called because the 
> NearConfiguration was initialized when the cache was created.
> Reproducers show that this is not so.
>  [^Issue.java] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18494) Near cache not created with getCache

2023-01-10 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18494:
-
Attachment: Issue.java

> Near cache not created with getCache
> 
>
> Key: IGNITE-18494
> URL: https://issues.apache.org/jira/browse/IGNITE-18494
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: Issue.java, NearCacheTest.cs, NearCacheTest.java
>
>
> The documentation 
> (https://ignite.apache.org/docs/latest/configuring-caches/near-cache#configuring-near-cache)
>  says:
> "Once configured in this way, the near cache is created on any node that 
> requests data from the underlying cache, including both server nodes and 
> client nodes."
> We expect the Near cache to be created for the cache proxy obtained with 
> getCache, and getOrCreateNearCache does not need to be called because the 
> NearConfiguration was initialized when the cache was created.
> Reproducers show that this is not so.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18511) Incomprehensible error when using a reserved word in ddl via jdbc

2023-01-09 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18511:
-
Labels: ise  (was: )

>  Incomprehensible error when using a reserved word in ddl via jdbc
> --
>
> Key: IGNITE-18511
> URL: https://issues.apache.org/jira/browse/IGNITE-18511
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
>
> When creating a table via sqlline on the Calcite engine, where a column name 
> is a reserved word (in our case, "type"), the statement fails with an error 
> from which the cause cannot be determined.
> When using the H2 engine, the table is created successfully.
> {code:java}
> ./sqlline.sh --verbose=true -u 
> jdbc:ignite:thin://127.0.0.1:10800?queryEngine=calcite
> 0: jdbc:ignite:thin://127.0.0.1:10800> CREATE TABLE test (
> . . . . . . . . . . . . . . . . . . )>  id BIGINT,
> . . . . . . . . . . . . . . . . . . )>  name VARCHAR,
> . . . . . . . . . . . . . . . . . . )>  type VARCHAR,
> . . . . . . . . . . . . . . . . . . )>  PRIMARY KEY (id)
> . . . . . . . . . . . . . . . . . . )>  );
> Error: Failed to parse query. (state=42000,code=1001)
> java.sql.SQLException: Failed to parse query.
>   at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:1010)
>   at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:234)
>   at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:560)
>   at sqlline.Commands.executeSingleQuery(Commands.java:1054)
>   at sqlline.Commands.execute(Commands.java:1003)
>   at sqlline.Commands.sql(Commands.java:967)
>   at sqlline.SqlLine.dispatch(SqlLine.java:734)
>   at sqlline.SqlLine.begin(SqlLine.java:541)
>   at sqlline.SqlLine.start(SqlLine.java:267)
>   at sqlline.SqlLine.main(SqlLine.java:206)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18511) Incomprehensible error when using a reserved word in ddl via jdbc

2023-01-09 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18511:
-
Affects Version/s: 2.14

>  Incomprehensible error when using a reserved word in ddl via jdbc
> --
>
> Key: IGNITE-18511
> URL: https://issues.apache.org/jira/browse/IGNITE-18511
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Priority: Minor
>
> When creating a table via sqlline on the Calcite engine, where a column name 
> is a reserved word (in our case, "type"), the statement fails with an error 
> from which the cause cannot be determined.
> When using the H2 engine, the table is created successfully.
> {code:java}
> ./sqlline.sh --verbose=true -u 
> jdbc:ignite:thin://127.0.0.1:10800?queryEngine=calcite
> 0: jdbc:ignite:thin://127.0.0.1:10800> CREATE TABLE test (
> . . . . . . . . . . . . . . . . . . )>  id BIGINT,
> . . . . . . . . . . . . . . . . . . )>  name VARCHAR,
> . . . . . . . . . . . . . . . . . . )>  type VARCHAR,
> . . . . . . . . . . . . . . . . . . )>  PRIMARY KEY (id)
> . . . . . . . . . . . . . . . . . . )>  );
> Error: Failed to parse query. (state=42000,code=1001)
> java.sql.SQLException: Failed to parse query.
>   at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:1010)
>   at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:234)
>   at 
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:560)
>   at sqlline.Commands.executeSingleQuery(Commands.java:1054)
>   at sqlline.Commands.execute(Commands.java:1003)
>   at sqlline.Commands.sql(Commands.java:967)
>   at sqlline.SqlLine.dispatch(SqlLine.java:734)
>   at sqlline.SqlLine.begin(SqlLine.java:541)
>   at sqlline.SqlLine.start(SqlLine.java:267)
>   at sqlline.SqlLine.main(SqlLine.java:206)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-18511) Incomprehensible error when using a reserved word in ddl via jdbc

2023-01-09 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-18511:


 Summary:  Incomprehensible error when using a reserved word in ddl 
via jdbc
 Key: IGNITE-18511
 URL: https://issues.apache.org/jira/browse/IGNITE-18511
 Project: Ignite
  Issue Type: Improvement
Reporter: Luchnikov Alexander


When creating a table via sqlline on the Calcite engine, where a column name 
is a reserved word (in our case, "type"), the statement fails with an error 
from which the cause cannot be determined.
When using the H2 engine, the table is created successfully.
{code:java}
./sqlline.sh --verbose=true -u 
jdbc:ignite:thin://127.0.0.1:10800?queryEngine=calcite
0: jdbc:ignite:thin://127.0.0.1:10800> CREATE TABLE test (
. . . . . . . . . . . . . . . . . . )>  id BIGINT,
. . . . . . . . . . . . . . . . . . )>  name VARCHAR,
. . . . . . . . . . . . . . . . . . )>  type VARCHAR,
. . . . . . . . . . . . . . . . . . )>  PRIMARY KEY (id)
. . . . . . . . . . . . . . . . . . )>  );
Error: Failed to parse query. (state=42000,code=1001)
java.sql.SQLException: Failed to parse query.
at 
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:1010)
at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:234)
at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:560)
at sqlline.Commands.executeSingleQuery(Commands.java:1054)
at sqlline.Commands.execute(Commands.java:1003)
at sqlline.Commands.sql(Commands.java:967)
at sqlline.SqlLine.dispatch(SqlLine.java:734)
at sqlline.SqlLine.begin(SqlLine.java:541)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206)
{code}
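
As an aside, a hedged workaround sketch (assuming standard SQL double-quote
semantics in the Calcite parser; not verified against this reproducer) is to
quote the reserved identifier:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class QuotedColumnSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:thin://127.0.0.1:10800?queryEngine=calcite");
             Statement stmt = conn.createStatement()) {
            // Quoting should make the parser treat TYPE as a plain identifier.
            stmt.execute("CREATE TABLE test (" +
                "id BIGINT, " +
                "name VARCHAR, " +
                "\"TYPE\" VARCHAR, " +
                "PRIMARY KEY (id))");
        }
    }
}
{code}
Regardless of any workaround, the parser should report which token it could not parse.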




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-12652) Add example of failure handling

2023-01-08 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander reassigned IGNITE-12652:


Assignee: (was: Luchnikov Alexander)

> Add example of failure handling
> ---
>
> Key: IGNITE-12652
> URL: https://issues.apache.org/jira/browse/IGNITE-12652
> Project: Ignite
>  Issue Type: Task
>  Components: examples
>Reporter: Anton Kalashnikov
>Priority: Major
>  Labels: newbie
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ignite has the following feature - 
> https://apacheignite.readme.io/docs/critical-failures-handling, but there is 
> no example of how to use it correctly, so it would be good to add some examples.
> Also, Ignite has DiagnosticProcessor, which is invoked when the failure handler 
> is triggered. Maybe it is a good idea to add to this example some samples of 
> its diagnostic work.
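
A hedged sketch of where such an example could start (configuration only;
demonstrating the handler actually firing and the DiagnosticProcessor output is
the harder part this ticket asks for):
{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.failure.StopNodeOrHaltFailureHandler;

public class FailureHandlingSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            // On a critical failure, try to stop the node gracefully for up
            // to 5 seconds, then halt the JVM.
            .setFailureHandler(new StopNodeOrHaltFailureHandler(true, 5_000));

        Ignite ignite = Ignition.start(cfg);
    }
}
{code}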



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-12652) Add example of failure handling

2023-01-08 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-12652:
-
Labels: newbie  (was: ise newbie)

> Add example of failure handling
> ---
>
> Key: IGNITE-12652
> URL: https://issues.apache.org/jira/browse/IGNITE-12652
> Project: Ignite
>  Issue Type: Task
>  Components: examples
>Reporter: Anton Kalashnikov
>Assignee: Luchnikov Alexander
>Priority: Major
>  Labels: newbie
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ignite has the following feature - 
> https://apacheignite.readme.io/docs/critical-failures-handling, but there is 
> no example of how to use it correctly, so it would be good to add some examples.
> Also, Ignite has DiagnosticProcessor, which is invoked when the failure handler 
> is triggered. Maybe it is a good idea to add to this example some samples of 
> its diagnostic work.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-18494) Near cache not created with getCache

2022-12-30 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-18494:


 Summary: Near cache not created with getCache
 Key: IGNITE-18494
 URL: https://issues.apache.org/jira/browse/IGNITE-18494
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.14
Reporter: Luchnikov Alexander
 Attachments: NearCacheTest.cs, NearCacheTest.java

The documentation 
(https://ignite.apache.org/docs/latest/configuring-caches/near-cache#configuring-near-cache)
 says:
"Once configured in this way, the near cache is created on any node that 
requests data from the underlying cache, including both server nodes and client 
nodes."
We expect the Near cache to be created for the cache proxy obtained with 
getCache, and getOrCreateNearCache does not need to be called because the 
NearConfiguration was initialized when the cache was created.
Reproducers show that this is not so.
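
A minimal sketch of the expectation (mirroring the attached reproducers under
an assumed two-node setup; this illustrates the report, it is not a fix):
{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class NearCacheSketch {
    public static void main(String[] args) {
        try (Ignite server = Ignition.start(new IgniteConfiguration()
                 .setIgniteInstanceName("server"));
             Ignite client = Ignition.start(new IgniteConfiguration()
                 .setIgniteInstanceName("client")
                 .setClientMode(true))) {

            // The near cache is configured up front, at cache creation time.
            server.createCache(new CacheConfiguration<Integer, String>("test")
                .setNearConfiguration(new NearCacheConfiguration<>()));

            // Per the documentation, this proxy should already use a near cache:
            IgniteCache<Integer, String> viaGetCache = client.cache("test");
            viaGetCache.get(1);

            // What the reproducers show must be called explicitly today:
            IgniteCache<Integer, String> viaExplicitNear =
                client.getOrCreateNearCache("test", new NearCacheConfiguration<>());
            viaExplicitNear.get(1);
        }
    }
}
{code}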



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18438) .NET: NullReferenceException when serializing composite type

2022-12-30 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18438:
-
Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> .NET: NullReferenceException when serializing composite type
> 
>
> Key: IGNITE-18438
> URL: https://issues.apache.org/jira/browse/IGNITE-18438
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: .NET, ise
> Attachments: Issue.Reproducer.csproj, Test.cs
>
>
> The scenario is described in the attached file [^Test.cs].
> {code:java}
> [xUnit.net 00:00:07.28] 
> Issue.Reproducer.Test.IgniteCanSerializeCompositeStructure [FAIL]
>   Failed Issue.Reproducer.Test.IgniteCanSerializeCompositeStructure [6 s]
>   Error Message:
>System.NullReferenceException : Object reference not set to an instance of 
> an object.
>   Stack Trace:
>  at 
> Apache.Ignite.Core.Impl.Binary.Metadata.BinaryType.UpdateFields(IDictionary`2 
> fields)
>at 
> Apache.Ignite.Core.Impl.Binary.BinaryWriter.SaveMetadata(IBinaryTypeDescriptor
>  desc, IDictionary`2 fields)
>at 
> Apache.Ignite.Core.Impl.Binary.Structure.BinaryStructureTracker.UpdateWriterStructure(BinaryWriter
>  writer, Boolean isNewSchema)
>at Apache.Ignite.Core.Impl.Binary.BinaryWriter.Write[T](T obj)
>at Apache.Ignite.Core.Impl.Binary.Binary.ToBinary[T](Object obj)
>at Issue.Reproducer.Test.IgniteCanSerializeCompositeStructure()
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18438) .NET: NullReferenceException when serializing composite type

2022-12-21 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18438:
-
Description: 
The scenario is described in the attached file [^Test.cs].

{code:java}
[xUnit.net 00:00:07.28] 
Issue.Reproducer.Test.IgniteCanSerializeCompositeStructure [FAIL]
  Failed Issue.Reproducer.Test.IgniteCanSerializeCompositeStructure [6 s]
  Error Message:
   System.NullReferenceException : Object reference not set to an instance of 
an object.
  Stack Trace:
 at 
Apache.Ignite.Core.Impl.Binary.Metadata.BinaryType.UpdateFields(IDictionary`2 
fields)
   at 
Apache.Ignite.Core.Impl.Binary.BinaryWriter.SaveMetadata(IBinaryTypeDescriptor 
desc, IDictionary`2 fields)
   at 
Apache.Ignite.Core.Impl.Binary.Structure.BinaryStructureTracker.UpdateWriterStructure(BinaryWriter
 writer, Boolean isNewSchema)
   at Apache.Ignite.Core.Impl.Binary.BinaryWriter.Write[T](T obj)
   at Apache.Ignite.Core.Impl.Binary.Binary.ToBinary[T](Object obj)
   at Issue.Reproducer.Test.IgniteCanSerializeCompositeStructure()
{code}


  was:The scenario is described in the attached file [^Test.cs].


> .NET: NullReferenceException when serializing composite type
> 
>
> Key: IGNITE-18438
> URL: https://issues.apache.org/jira/browse/IGNITE-18438
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 2.14
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: .NET, ise
> Attachments: Issue.Reproducer.csproj, Test.cs
>
>
> The scenario is described in the attached file [^Test.cs].
> {code:java}
> [xUnit.net 00:00:07.28] 
> Issue.Reproducer.Test.IgniteCanSerializeCompositeStructure [FAIL]
>   Failed Issue.Reproducer.Test.IgniteCanSerializeCompositeStructure [6 s]
>   Error Message:
>System.NullReferenceException : Object reference not set to an instance of 
> an object.
>   Stack Trace:
>  at 
> Apache.Ignite.Core.Impl.Binary.Metadata.BinaryType.UpdateFields(IDictionary`2 
> fields)
>at 
> Apache.Ignite.Core.Impl.Binary.BinaryWriter.SaveMetadata(IBinaryTypeDescriptor
>  desc, IDictionary`2 fields)
>at 
> Apache.Ignite.Core.Impl.Binary.Structure.BinaryStructureTracker.UpdateWriterStructure(BinaryWriter
>  writer, Boolean isNewSchema)
>at Apache.Ignite.Core.Impl.Binary.BinaryWriter.Write[T](T obj)
>at Apache.Ignite.Core.Impl.Binary.Binary.ToBinary[T](Object obj)
>at Issue.Reproducer.Test.IgniteCanSerializeCompositeStructure()
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-18438) .NET: NullReferenceException when serializing composite type

2022-12-21 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-18438:


 Summary: .NET: NullReferenceException when serializing composite 
type
 Key: IGNITE-18438
 URL: https://issues.apache.org/jira/browse/IGNITE-18438
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Affects Versions: 2.14
Reporter: Luchnikov Alexander
 Attachments: Issue.Reproducer.csproj, Test.cs

The scenario is described in the attached file [^Test.cs].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18427) .NET: Platform cache is not updated when ReadFromBackup is true

2022-12-16 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18427:
-
Labels: .NET ise  (was: .NET)

> .NET: Platform cache is not updated when ReadFromBackup is true
> ---
>
> Key: IGNITE-18427
> URL: https://issues.apache.org/jira/browse/IGNITE-18427
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 2.14
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ise
> Fix For: 2.15
>
> Attachments: IgniteReproducer.csproj, Test.cs
>
>
> See attached reproducer. Client 2 has stale value when *ReadFromBackup = 
> true*.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18330) Fix javadoc in Transaction#resume(), Transaction#suspend

2022-12-04 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18330:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Fix javadoc in Transaction#resume(), Transaction#suspend
> 
>
> Key: IGNITE-18330
> URL: https://issues.apache.org/jira/browse/IGNITE-18330
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Luchnikov Alexander
>Priority: Trivial
>  Labels: ise
>
> After the implementation of IGNITE-5714, this API can be used with pessimistic 
> transactions.
> The javadoc, however, still says: "Supported only for optimistic transactions."



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18330) Fix javadoc in Transaction#resume(), Transaction#suspend

2022-12-04 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-18330:
-
Labels: ise  (was: )

> Fix javadoc in Transaction#resume(), Transaction#suspend
> 
>
> Key: IGNITE-18330
> URL: https://issues.apache.org/jira/browse/IGNITE-18330
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Luchnikov Alexander
>Priority: Trivial
>  Labels: ise
>
> After the implementation of IGNITE-5714, this API can be used with pessimistic 
> transactions.
> The javadoc, however, still says: "Supported only for optimistic transactions."



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-18330) Fix javadoc in Transaction#resume(), Transaction#suspend

2022-12-04 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-18330:


 Summary: Fix javadoc in Transaction#resume(), Transaction#suspend
 Key: IGNITE-18330
 URL: https://issues.apache.org/jira/browse/IGNITE-18330
 Project: Ignite
  Issue Type: Improvement
Reporter: Luchnikov Alexander


After the implementation of IGNITE-5714, this API can be used with pessimistic 
transactions.
The javadoc, however, still says: "Supported only for optimistic transactions."
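
An illustrative sketch of the now-supported pessimistic case (the exact javadoc
wording is the subject of this ticket; the calls below are public 2.x API):
{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class SuspendResumeSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(
                new CacheConfiguration<Integer, String>("tx-cache")
                    .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));

            Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ);

            cache.put(1, "value");

            tx.suspend(); // detach the transaction from the current thread

            // ...another thread could resume it; same thread here for brevity...

            tx.resume();
            tx.commit();
        }
    }
}
{code}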



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-17321) Document which api can work with partition awareness

2022-10-17 Thread Luchnikov Alexander (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17618891#comment-17618891
 ] 

Luchnikov Alexander commented on IGNITE-17321:
--

[~timonin.maksim]
After I made a PR, I thought that it would be more correct to describe this 
information in:
# https://cwiki.apache.org/confluence/display/IGNITE/Thin+clients+features
# 
https://ignite.apache.org/docs/latest/thin-clients/java-thin-client#partition-awareness
 for all platform
# javadoc

Make a subtask for my PR?

> Document which api can work with partition awareness
> 
>
> Key: IGNITE-17321
> URL: https://issues.apache.org/jira/browse/IGNITE-17321
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Reporter: Luchnikov Alexander
>Assignee: Luchnikov Alexander
>Priority: Minor
>  Labels: docuentation, ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the javadoc of 
> org.apache.ignite.configuration.ClientConfiguration#partitionAwarenessEnabled 
> and in the description of the functionality
> (https://ignite.apache.org/docs/latest/thin-clients/java-thin-client#partition-awareness)
> it is not described with which APIs this functionality works and in which 
> cases. For example, will it work with getAll, or inside a transaction?
> Describe in the documentation and in the javadoc in which cases it works and 
> with which APIs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-17321) Document which api can work with partition awareness

2022-10-17 Thread Luchnikov Alexander (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17618886#comment-17618886
 ] 

Luchnikov Alexander commented on IGNITE-17321:
--

[~timonin.maksim] Could you please take a look?

> Document which api can work with partition awareness
> 
>
> Key: IGNITE-17321
> URL: https://issues.apache.org/jira/browse/IGNITE-17321
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Reporter: Luchnikov Alexander
>Assignee: Luchnikov Alexander
>Priority: Minor
>  Labels: docuentation, ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the javadoc of 
> org.apache.ignite.configuration.ClientConfiguration#partitionAwarenessEnabled 
> and in the description of the functionality
> (https://ignite.apache.org/docs/latest/thin-clients/java-thin-client#partition-awareness)
> it is not described with which APIs this functionality works and in which 
> cases. For example, will it work with getAll, or inside a transaction?
> Describe in the documentation and in the javadoc in which cases it works and 
> with which APIs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-17321) Document which api can work with partition awareness

2022-10-17 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander reassigned IGNITE-17321:


Assignee: Luchnikov Alexander

> Document which api can work with partition awareness
> 
>
> Key: IGNITE-17321
> URL: https://issues.apache.org/jira/browse/IGNITE-17321
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Reporter: Luchnikov Alexander
>Assignee: Luchnikov Alexander
>Priority: Minor
>  Labels: docuentation, ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the javadoc of 
> org.apache.ignite.configuration.ClientConfiguration#partitionAwarenessEnabled 
> and in the description of the functionality
> (https://ignite.apache.org/docs/latest/thin-clients/java-thin-client#partition-awareness)
> it is not described with which APIs this functionality works and in which 
> cases. For example, will it work with getAll, or inside a transaction?
> Describe in the documentation and in the javadoc in which cases it works and 
> with which APIs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-17717) Logging cdc in ignite2ignite mode

2022-09-29 Thread Luchnikov Alexander (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17610820#comment-17610820
 ] 

Luchnikov Alexander edited comment on IGNITE-17717 at 9/29/22 6:20 AM:
---

[~nizhikov]
I'll try to reproduce the problem on the current master.
The problem was related to ${sys:appId}: in the logger configuration the file 
name was set to fileName="${sys:IGNITE_HOME}/work/log/${sys:appId}.log", and when 
running ./ignite-cdc.sh, ${sys:appId} was set to "ignite-cdc", so all messages 
went to ignite-cdc.log. But after the client node started, appId was set to 
"ignite" and messages started going to ignite.log.


was (Author: aldoraine):
I'll try to reproduce the problem on the current master.
The problem was related to ${sys:appId}: in the logger configuration the file 
name was set to fileName="${sys:IGNITE_HOME}/work/log/${sys:appId}.log", and when 
running ./ignite-cdc.sh, ${sys:appId} was set to "ignite-cdc", so all messages 
went to ignite-cdc.log. But after the client node started, appId was set to 
"ignite" and messages started going to ignite.log.

> Logging cdc in ignite2ignite mode
> -
>
> Key: IGNITE-17717
> URL: https://issues.apache.org/jira/browse/IGNITE-17717
> Project: Ignite
>  Issue Type: Task
>Reporter: Luchnikov Alexander
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: ise
> Attachments: 3b799724-998a-434b-8ca3-eb9877490ce9.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When using cdc in ignite2ignite mode, there is a problem with logging.
> When running ignite-cdc.sh, the log is written to ignite-cdc.log until the 
> Ignite client starts; after that, logging continues in ignite.log.
> The problem is probably related to the replacement of appId at the start of 
> the client node.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-17717) Logging cdc in ignite2ignite mode

2022-09-29 Thread Luchnikov Alexander (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17610820#comment-17610820
 ] 

Luchnikov Alexander commented on IGNITE-17717:
--

I'll try to reproduce the problem on the current master.
The problem was related to ${sys:appId}: in the logger configuration the file 
name was set to fileName="${sys:IGNITE_HOME}/work/log/${sys:appId}.log", and when 
running ./ignite-cdc.sh, ${sys:appId} was set to "ignite-cdc", so all messages 
went to ignite-cdc.log. But after the client node started, appId was set to 
"ignite" and messages started going to ignite.log.
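
For readers hitting the same symptom, a reconstructed sketch of the appender in
question (an assumption pieced together from the comment above, not the actual
configuration from this report):
{code:xml}
<!-- The file name depends on the appId system property, which the node start
     overwrites from "ignite-cdc" back to "ignite". -->
<RollingFile name="FILE"
             fileName="${sys:IGNITE_HOME}/work/log/${sys:appId}.log"
             filePattern="${sys:IGNITE_HOME}/work/log/${sys:appId}-%i.log">
    <PatternLayout pattern="[%d{ISO8601}][%-5p][%t][%c{1}] %m%n"/>
    <Policies>
        <SizeBasedTriggeringPolicy size="10 MB"/>
    </Policies>
</RollingFile>
{code}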

> Logging cdc in ignite2ignite mode
> -
>
> Key: IGNITE-17717
> URL: https://issues.apache.org/jira/browse/IGNITE-17717
> Project: Ignite
>  Issue Type: Task
>Reporter: Luchnikov Alexander
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: ise
> Attachments: 3b799724-998a-434b-8ca3-eb9877490ce9.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When using cdc in ignite2ignite mode, there is a problem with logging.
> When running ignite-cdc.sh, the log is written to ignite-cdc.log until the 
> Ignite client starts; after that, logging continues in ignite.log.
> The problem is probably related to the replacement of appId at the start of 
> the client node.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-17717) Logging cdc in ignite2ignite mode

2022-09-19 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-17717:


 Summary: Logging cdc in ignite2ignite mode
 Key: IGNITE-17717
 URL: https://issues.apache.org/jira/browse/IGNITE-17717
 Project: Ignite
  Issue Type: Task
Reporter: Luchnikov Alexander


When using cdc in ignite2ignite mode, there is a problem with logging.
When running ignite-cdc.sh, the log is written to ignite-cdc.log until the 
Ignite client starts; after that, logging continues in ignite.log.
The problem is probably related to the replacement of appId at the start of the 
client node.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-17459) Different output from !desc "SYS".TABLES and !desc TABLE

2022-08-03 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-17459:


 Summary: Different output from !desc "SYS".TABLES and !desc TABLE
 Key: IGNITE-17459
 URL: https://issues.apache.org/jira/browse/IGNITE-17459
 Project: Ignite
  Issue Type: Wish
Reporter: Luchnikov Alexander


When running commands in sqlline, we get different results:
* !desc "SYS".TABLES lists view columns, their names, types, and so on
* !desc TABLE outputs the contents of the view as if !tables were executed

When running !desc "SYS".INDEXES and !desc INDEXES the result is the same.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17369) Snapshot is inconsistent under streamed loading.

2022-07-14 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17369:
-
Labels: ise ise.lts  (was: )

> Snapshot is inconsistent under streamed loading.
> 
>
> Key: IGNITE-17369
> URL: https://issues.apache.org/jira/browse/IGNITE-17369
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Priority: Major
>  Labels: ise, ise.lts
> Attachments: IgniteClusterShanpshotStreamerTest.java
>
>
> Ignite fails to restore snapshot created under streamed load:
> {code:java}
> Conflict partition: PartitionKeyV2 [grpId=109386747, 
> grpName=SQL_PUBLIC_TEST_TBL1, partId=148]
> Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
> consistentId=snapshot.IgniteClusterShanpshotStreamerTest0, updateCntr=29, 
> partitionState=OWNING, size=29, partHash=827765854], PartitionHashRecordV2 
> [isPrimary=false, consistentId=snapshot.IgniteClusterShanpshotStreamerTest1, 
> updateCntr=9, partitionState=OWNING, size=9, partHash=-1515069105]]
> Conflict partition: PartitionKeyV2 [grpId=109386747, 
> grpName=SQL_PUBLIC_TEST_TBL1, partId=146]
> Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
> consistentId=snapshot.IgniteClusterShanpshotStreamerTest0, updateCntr=28, 
> partitionState=OWNING, size=28, partHash=1497908810], PartitionHashRecordV2 
> [isPrimary=false, consistentId=snapshot.IgniteClusterShanpshotStreamerTest1, 
> updateCntr=5, partitionState=OWNING, size=5, partHash=821195757]]
> {code}
> Test (attached):
> {code:java}
> public void testClusterSnapshotConsistencyWithStreamer() throws Exception 
> {
> int grids = 2;
> CountDownLatch loadNumberBeforeSnapshot = new CountDownLatch(60_000);
> AtomicBoolean stopLoading = new AtomicBoolean(false);
> dfltCacheCfg = null;
> Class.forName("org.apache.ignite.IgniteJdbcDriver");
> String tableName = "TEST_TBL1";
> startGrids(grids);
> grid(0).cluster().state(ACTIVE);
> IgniteInternalFuture load1 = runLoad(tableName, false, 1, true, 
> stopLoading, loadNumberBeforeSnapshot);
> loadNumberBeforeSnapshot.await();
> grid(0).snapshot().createSnapshot(SNAPSHOT_NAME).get();
> stopLoading.set(true);
> load1.get();
> grid(0).cache("SQL_PUBLIC_" + tableName).destroy();
> grid(0).snapshot().restoreSnapshot(SNAPSHOT_NAME, 
> F.asList("SQL_PUBLIC_TEST_TBL1")).get();
> }
> /** */
> private IgniteInternalFuture runLoad(String tblName, boolean useCache, 
> int backups, boolean streaming, AtomicBoolean stop,
> CountDownLatch startSnp) {
> return GridTestUtils.runMultiThreadedAsync(() -> {
> if(useCache) {
> String cacheName = "SQL_PUBLIC_" + tblName.toUpperCase();
> IgniteCache<Integer, Object> cache = grid(0)
> .createCache(new CacheConfiguration<Integer, Object>(cacheName).setBackups(backups)
> .setCacheMode(CacheMode.REPLICATED));
> try (IgniteDataStreamer<Integer, Object> ds = 
> grid(0).dataStreamer(cacheName)) {
> for (int i = 0; !stop.get(); ++i) {
> if (streaming)
> ds.addData(i, new Account(i, i - 1));
> else
> cache.put(i, new Account(i, i - 1));
> if (startSnp.getCount() > 0)
> startSnp.countDown();
> Thread.yield();
> }
> }
> } else {
> try (Connection conn = 
> DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/")) {
> createTable(conn, tblName, backups);
> try (PreparedStatement stmt = 
> conn.prepareStatement("INSERT INTO " + tblName +
> "(id, name, orgid, dep) VALUES(?, ?, ?, ?)")) {
> if (streaming)
> conn.prepareStatement("SET STREAMING 
> ON;").execute();
> int leftLimit = 97; // letter 'a'
> int rightLimit = 122; // letter'z'
> int targetStringLength = 15;
> Random rand = new Random();
> //
> for (int i = 0; !stop.get(); ++i) {
> int orgid = rand.ints(1, 0, 
> 5).findFirst().getAsInt();
> String val = rand.ints(leftLimit, rightLimit + 
> 1).limit(targetStringLength)
> .collect(StringBuilder::new, 
> StringBuilder::appendCodePoint, StringBuilder::append)
> .toString();
> stmt.setInt(1, i);
>   

[jira] [Assigned] (IGNITE-17345) [IEP-35] Metric to track PA enabled request on ThinClient

2022-07-11 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander reassigned IGNITE-17345:


Assignee: Luchnikov Alexander

> [IEP-35] Metric to track PA enabled request on ThinClient
> -
>
> Key: IGNITE-17345
> URL: https://issues.apache.org/jira/browse/IGNITE-17345
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Luchnikov Alexander
>Priority: Major
>  Labels: IEP-35, ise
>
> The crucial point in understanding ThinClient performance is to know whether 
> Partition Awareness is enabled or not.
> For now, it is impossible to tell how many requests go to the node that is 
> primary for the key.
> Two counters tracking the number of requests for each server node seem useful 
> for analyzing PA behavior:
> - one counter for keys for which the current node is primary;
> - another counter for keys that require an extra network hop between server 
> nodes to serve the request.
> For an environment with optimal performance, the second counter should be 
> close to zero (see the conceptual sketch below).
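
A conceptual sketch of the proposal (plain Java counters with hypothetical
names; wiring into the IEP-35 metric registry is deliberately left out):
{code:java}
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

class PartitionAwarenessCounters {
    final AtomicLong primaryNodeRequests = new AtomicLong(); // served node is the key's primary
    final AtomicLong extraHopRequests = new AtomicLong();    // request needed another hop

    void onKeyRequest(UUID servedNodeId, UUID primaryNodeId) {
        if (servedNodeId.equals(primaryNodeId))
            primaryNodeRequests.incrementAndGet();
        else
            extraHopRequests.incrementAndGet();
    }
}
{code}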



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-17321) Document which api can work with partition awareness

2022-07-06 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-17321:


 Summary: Document which api can work with partition awareness
 Key: IGNITE-17321
 URL: https://issues.apache.org/jira/browse/IGNITE-17321
 Project: Ignite
  Issue Type: Improvement
  Components: thin client
Reporter: Luchnikov Alexander


In the javadoc of 
org.apache.ignite.configuration.ClientConfiguration#partitionAwarenessEnabled 
and in the description of the functionality
(https://ignite.apache.org/docs/latest/thin-clients/java-thin-client#partition-awareness)
it is not described with which APIs this functionality works and in which cases. 
For example, will it work with getAll, or inside a transaction?

Describe in the documentation and in the javadoc in which cases it works and 
with which APIs.
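
For illustration, a minimal sketch of enabling the flag (the flag itself is
public API; which operations actually honor it is exactly what this ticket asks
to document, so the comment below is deliberately limited):
{code:java}
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class PartitionAwarenessSketch {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800", "127.0.0.1:10801")
            .setPartitionAwarenessEnabled(true);

        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, String> cache = client.getOrCreateCache("test");
            cache.put(1, "v"); // a single-key op that can be routed to the primary node
        }
    }
}
{code}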



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-17111) Remove the ability to set the lazy flag in SqlFieldsQuery

2022-06-29 Thread Luchnikov Alexander (Jira)


Luchnikov Alexander commented on IGNITE-17111:
--

Evgeny Stanilovsky If in LAZY mode it is possible to set the page size equal to Long.MAX_VALUE, then are tests a mandatory requirement?



--
This message was sent by Atlassian Jira
(v8.20.10#820010-sha1:ace47f9)


[jira] [Updated] (IGNITE-12117) Historical rebalance should NOT be processed in striped way

2022-06-17 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-12117:
-
Labels: iep-16 ise  (was: iep-16)

> Historical rebalance should NOT be processed in striped way
> ---
>
> Key: IGNITE-12117
> URL: https://issues.apache.org/jira/browse/IGNITE-12117
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov (Obsolete, actual is "av")
>Assignee: Alexey Scherbakov
>Priority: Major
>  Labels: iep-16, ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Test 
> {{org.apache.ignite.internal.processors.cache.transactions.TxPartitionCounterStateConsistencyTest#testPartitionConsistencyWithBackupsRestart}}
> fails on an attempt to handle historical rebalance using the un-striped pool.
> You can reproduce it by replacing
> {noformat}
>  if (historical) // Can not be reordered.
> 
> ctx.kernalContext().getStripedRebalanceExecutorService().execute(r, 
> Math.abs(nodeId.hashCode()));
> {noformat}
> with
> {noformat}
>  if (historical) // Can be reordered?
> ctx.kernalContext().getRebalanceExecutorService().execute(r);
> {noformat}
> and you will gain the following
> {noformat}
> java.lang.AssertionError: idle_verify failed on 1 node.
> idle_verify check has finished, found 7 conflict partitions: 
> [counterConflicts=0, hashConflicts=7]
> Hash conflicts:
> Conflict partition: PartitionKeyV2 [grpId=1544803905, grpName=default, 
> partId=23]
> Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
> consistentId=nodetransactions.TxPartitionCounterStateConsistencyHistoryRebalanceTest1,
>  updateCntr=707143, partitionState=OWNING, size=495, partHash=-1503789370], 
> PartitionHashRecordV2 [isPrimary=false, 
> consistentId=nodetransactions.TxPartitionCounterStateConsistencyHistoryRebalanceTest2,
>  updateCntr=707143, partitionState=OWNING, size=494, partHash=-1538739200]]
> Conflict partition: PartitionKeyV2 [grpId=1544803905, grpName=default, 
> partId=8]
> 
> {noformat}
> So, we need to investigate reasons and provide proper historical rebalance 
> refactoring to use the unstriped pool, if possible.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-13777) idle_verify should report real size of the partitions

2022-06-17 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-13777:
-
Labels: ise  (was: )

> idle_verify should report real size of the partitions
> -
>
> Key: IGNITE-13777
> URL: https://issues.apache.org/jira/browse/IGNITE-13777
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Stanislav Lukyanov
>Priority: Major
>  Labels: ise
>
> Currently, idle_verify checks the content of partitions (through hash) and 
> the partition size that is stored in the partition meta. It would be better if 
> idle_verify also counted the entries inside the partition and returned both 
> the size from the meta AND the real size.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-8874) Blinking node in cluster may cause data corruption

2022-06-17 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-8874:

Labels: ise  (was: )

> Blinking node in cluster may cause data corruption
> --
>
> Key: IGNITE-8874
> URL: https://issues.apache.org/jira/browse/IGNITE-8874
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: Dmitry Sherstobitov
>Priority: Critical
>  Labels: ise
>
> All caches with 2 backups
> 4 nodes in cluster
>  # Start cluster, load data
>  # Start transactional loading (8 threads, 100 ops/second put/get in each op)
>  # Repeat 10 times: kill one node, clean LFS, start node again, wait for 
> rebalance
>  # Check idle_verify, check data corruption
> Here is idle_verify report:
> node2 is the node that was blinking during the test. Update counters are 
> equal between partitions, but the data is different.
> {code:java}
> Conflict partition: PartitionKey [grpId=374280886, grpName=cache_group_3, 
> partId=41]
> Partition instances: [PartitionHashRecord [isPrimary=true, 
> partHash=885018783, updateCntr=16, size=15, consistentId=node4], 
> PartitionHashRecord [isPrimary=false, partHash=885018783, updateCntr=16, 
> size=15, consistentId=node3], PartitionHashRecord [isPrimary=false, 
> partHash=-357162793, updateCntr=16, size=15, consistentId=node2]]
> Conflict partition: PartitionKey [grpId=1586135625, 
> grpName=cache_group_1_015, partId=15]
> Partition instances: [PartitionHashRecord [isPrimary=true, 
> partHash=-562597978, updateCntr=22, size=16, consistentId=node3], 
> PartitionHashRecord [isPrimary=false, partHash=-562597978, updateCntr=22, 
> size=16, consistentId=node1], PartitionHashRecord [isPrimary=false, 
> partHash=780813725, updateCntr=22, size=16, consistentId=node2]]
> Conflict partition: PartitionKey [grpId=374280885, grpName=cache_group_2, 
> partId=75]
> Partition instances: [PartitionHashRecord [isPrimary=true, 
> partHash=-1500797699, updateCntr=21, size=16, consistentId=node3], 
> PartitionHashRecord [isPrimary=false, partHash=-1500797699, updateCntr=21, 
> size=16, consistentId=node1], PartitionHashRecord [isPrimary=false, 
> partHash=-1592034435, updateCntr=21, size=16, consistentId=node2]]
> Conflict partition: PartitionKey [grpId=374280884, grpName=cache_group_1, 
> partId=713]
> Partition instances: [PartitionHashRecord [isPrimary=false, 
> partHash=-63058826, updateCntr=4, size=2, consistentId=node3], 
> PartitionHashRecord [isPrimary=true, partHash=-63058826, updateCntr=4, 
> size=2, consistentId=node1], PartitionHashRecord [isPrimary=false, 
> partHash=670869467, updateCntr=4, size=2, consistentId=node2]]
> Conflict partition: PartitionKey [grpId=374280886, grpName=cache_group_3, 
> partId=11]
> Partition instances: [PartitionHashRecord [isPrimary=false, 
> partHash=-224572810, updateCntr=17, size=16, consistentId=node3], 
> PartitionHashRecord [isPrimary=true, partHash=-224572810, updateCntr=17, 
> size=16, consistentId=node1], PartitionHashRecord [isPrimary=false, 
> partHash=176419075, updateCntr=17, size=16, consistentId=node2]]{code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-9905) After transaction load cluster inconsistent

2022-06-17 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-9905:

Labels: ise  (was: )

> After transaction load cluster inconsistent
> ---
>
> Key: IGNITE-9905
> URL: https://issues.apache.org/jira/browse/IGNITE-9905
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.7
>Reporter: ARomantsov
>Assignee: Ilya Lantukh
>Priority: Critical
>  Labels: ise
>
> Loaded data into the cluster using transactions consisting of two gets / two 
> puts.
> Test env: one server host, two server nodes, one client
> {code:java}
> idle_verify check has finished, found 60 conflict partitions: 
> [counterConflicts=45, hashConflicts=15]
> Update counter conflicts:
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=98]
> Partition instances: [PartitionHashRecordV2 [isPrimary=true, 
> consistentId=node2, updateCntr=1519, size=596, partHash=-1167688484], 
> PartitionHashRecordV2 [isPrimary=false, consistentId=node1, updateCntr=1520, 
> size=596, partHash=-1167688484]]
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=34]
> Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
> consistentId=node2, updateCntr=1539, size=596, partHash=-99631005], 
> PartitionHashRecordV2 [isPrimary=true, consistentId=node1, updateCntr=1537, 
> size=596, partHash=-1284437377]]
> Conflict partition: PartitionKeyV2 [grpId=770187303, 
> grpName=CACHEGROUP_PARTICLE_1, partId=31]
> Partition instances: [PartitionHashRecordV2 [isPrimary=true, 
> consistentId=node2, updateCntr=15, size=4, partHash=-1125172674], 
> PartitionHashRecordV2 [isPrimary=false, consistentId=node1, updateCntr=16, 
> size=4, partHash=-1125172674]]
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=39]
> Partition instances: [PartitionHashRecordV2 [isPrimary=true, 
> consistentId=node2, updateCntr=1555, size=596, partHash=-40303136], 
> PartitionHashRecordV2 [isPrimary=false, consistentId=node1, updateCntr=1556, 
> size=596, partHash=-40303136]]
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=90]
> Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
> consistentId=node2, updateCntr=1557, size=596, partHash=-1295145299], 
> PartitionHashRecordV2 [isPrimary=true, consistentId=node1, updateCntr=1556, 
> size=596, partHash=-1221175703]]
> ...
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-10825) After node restart and a new node added to BLT under load - some partitions inconsistent

2022-06-17 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-10825:
-
Labels: ise  (was: )

> After node restart and a new node added to BLT under load - some partitions 
> inconsistent
> -
>
> Key: IGNITE-10825
> URL: https://issues.apache.org/jira/browse/IGNITE-10825
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.8
>Reporter: ARomantsov
>Priority: Critical
>  Labels: ise
>
> {code:java}
> 14:12:20 [14:12:20][:573 :252] idle_verify check has finished, found 2 
> conflict partitions: [counterConflicts=1, hashConflicts=1]
> 14:12:20 [14:12:20][:573 :252] Update counter conflicts:
> 14:12:20 [14:12:20][:573 :252] Conflict partition: PartitionKeyV2 
> [grpId=374280887, grpName=cache_group_4, partId=115]
> 14:12:20 [14:12:20][:573 :252] Partition instances: 
>   
>   [PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_5, 
> updateCntr=10, size=2, partHash=-979021948], 
>   
>PartitionHashRecordV2 [isPrimary=true, consistentId=node_1_2, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_1, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_3, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_6, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_4, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_10001, 
> updateCntr=11, size=2, partHash=-731597536]]
> 14:12:20 [14:12:20][:573 :252] Hash conflicts:
> 14:12:20 [14:12:20][:573 :252] Conflict partition: PartitionKeyV2 
> [grpId=374280887, grpName=cache_group_4, partId=115]
> 14:12:20 [14:12:20][:573 :252] Partition instances: 
>   
> [PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_5, 
> updateCntr=10, size=2, partHash=-979021948], 
>   
> PartitionHashRecordV2 [isPrimary=true, consistentId=node_1_2, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_1, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_3, 
> updateCntr=11, size=2, partHash=-731597536],
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_6, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_4, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_10001, 
> updateCntr=11, size=2, partHash=-731597536]]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-10979) Add documentation for control.sh idle_verify --check-crc

2022-06-17 Thread Luchnikov Alexander (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-10979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1700#comment-1700
 ] 

Luchnikov Alexander commented on IGNITE-10979:
--

[~Artem Budnikov] 
Could you describe in the documentation:
* When should this flag be used?
* Is there any additional overhead when using it?
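
For context, the flag in question is passed to the idle_verify subcommand, e.g. 
(host is illustrative):
{noformat}
./control.sh --host 127.0.0.1 --cache idle_verify --check-crc
{noformat}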


> Add documentation for control.sh idle_verify --check-crc
> 
>
> Key: IGNITE-10979
> URL: https://issues.apache.org/jira/browse/IGNITE-10979
> Project: Ignite
>  Issue Type: New Feature
>  Components: control.sh, documentation
>Reporter: Sergey Antonov
>Assignee: Artem Budnikov
>Priority: Major
> Fix For: 2.14
>
>
> We should document new option --check-crc in control.sh idle_verify command.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-10979) Add documentation for control.sh idle_verify --check-crc

2022-06-17 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-10979:
-
Labels: ise  (was: )

> Add documentation for control.sh idle_verify --check-crc
> 
>
> Key: IGNITE-10979
> URL: https://issues.apache.org/jira/browse/IGNITE-10979
> Project: Ignite
>  Issue Type: New Feature
>  Components: control.sh, documentation
>Reporter: Sergey Antonov
>Assignee: Artem Budnikov
>Priority: Major
>  Labels: ise
> Fix For: 2.14
>
>
> We should document new option --check-crc in control.sh idle_verify command.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-11076) Add documentation for control.sh idle_verify --exclude-caches and --cache-filter

2022-06-17 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-11076:
-
Labels: ise  (was: )

> Add documentation for control.sh idle_verify --exclude-caches and 
> --cache-filter
> 
>
> Key: IGNITE-11076
> URL: https://issues.apache.org/jira/browse/IGNITE-11076
> Project: Ignite
>  Issue Type: Task
>  Components: control.sh, documentation
>Reporter: Sergey Antonov
>Assignee: Artem Budnikov
>Priority: Major
>  Labels: ise
> Fix For: 2.14
>
>
> control.sh cache --help output 
> {noformat}
> The '--cache subcommand' is used to get information about and perform actions 
> with caches. The command has the following syntax:
> control.sh [--host HOST_OR_IP] [--port PORT] [--user USER] [--password 
> PASSWORD] [--ping-interval PING_INTERVAL] [--ping-timeout PING_TIMEOUT] 
> [--ssl-protocol SSL_PROTOCOL[, SSL_PROTOCOL_2, ..., SSL_PROTOCOL_N]] 
> [--ssl-cipher-suites SSL_CIPHER_1[, SSL_CIPHER_2, ..., SSL_CIPHER_N]] 
> [--ssl-key-algorithm SSL_KEY_ALGORITHM] [--keystore-type KEYSTORE_TYPE] 
> [--keystore KEYSTORE_PATH] [--keystore-password KEYSTORE_PASSWORD] 
> [--truststore-type TRUSTSTORE_TYPE] [--truststore TRUSTSTORE_PATH] 
> [--truststore-password TRUSTSTORE_PASSWORD] --cache [subcommand] 
> 
> The subcommands that take [nodeId] as an argument ('list', 'contention' and 
> 'validate_indexes') will be executed on the given node or on all server nodes 
> if the option is not specified. Other commands will run on a random server 
> node.
> Subcommands:
> 
> --cache list regexPattern [--groups|--seq] [nodeId] [--config] 
> [--output-format multi-line]
> Show information about caches, groups or sequences that match a regular 
> expression. When executed without parameters, this subcommand prints the list 
> of caches.
> Parameters:
> --config - print all configuration parameters for each cache.
> --output-format multi-line - print configuration parameters per line. This 
> option has effect only when used with --config and without [--groups|--seq].
> --groups - print information about groups.
> --seq - print information about sequences.
> 
> --cache contention minQueueSize [nodeId] [maxPrint]
> Show the keys that are point of contention for multiple transactions.
> 
> --cache idle_verify [--dump] [--skip-zeros] [--check-crc] [(--exclude-caches 
> cacheName1,...,cacheNameN)|(--cache-filter 
> ALL|SYSTEM|PERSISTENT|NOT_PERSISTENT)|cacheName1,...,cacheNameN]
> Verify counters and hash sums of primary and backup partitions for the 
> specified caches on an idle cluster and print out the differences, if any.
> Parameters:
> --check-crc - check the CRC-sum of pages stored on disk before verifying data 
> consistency in partitions between primary and backup nodes.
> 
> --cache validate_indexes [cacheName1,...,cacheNameN] [nodeId] [--check-first 
> N|--check-through K]
> Validate indexes on an idle cluster and print out the keys that are missing 
> in the indexes.
> Parameters:
> --check-first N - validate only the first N keys
> --check-through K - validate every Kth key
> 
> --cache distribution nodeId|null [cacheName1,...,cacheNameN] 
> [--user-attributes attrName1,...,attrNameN]
> Prints the information about partition distribution.
> 
> --cache reset_lost_partitions cacheName1,...,cacheNameN
> Reset the state of lost partitions for the specified caches.{noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-13371) Sporadic partition inconsistency after historical rebalancing of updates with same key put-remove pattern

2022-06-17 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-13371:
-
Labels: ise  (was: )

> Sporadic partition inconsistency after historical rebalancing of updates with 
> same key put-remove pattern
> -
>
> Key: IGNITE-13371
> URL: https://issues.apache.org/jira/browse/IGNITE-13371
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Rakov
>Assignee: Ivan Rakov
>Priority: Major
>  Labels: ise
>
> h4. scenario
> # start 3 servers 3 clients, create caches
> # clients start combined put + 1% remove of data in transactions 
> PESSIMISTIC/REPEATABLE_READ
> ## kill one node
> ## restart one node
> # ensure all transactions completed
> # run idle_verify
> Expected: no conflicts found
> Actual:
> {noformat}
> [12:03:18][:55 :230] Control utility --cache idle_verify --skip-zeros 
> --cache-filter PERSISTENT
> [12:03:20][:55 :230] Control utility [ver. 8.7.13#20200228-sha1:7b016d63]
> [12:03:20][:55 :230] 2020 Copyright(C) GridGain Systems, Inc. and Contributors
> [12:03:20][:55 :230] User: prtagent
> [12:03:20][:55 :230] Time: 2020-03-03T12:03:19.836
> [12:03:20][:55 :230] Command [CACHE] started
> [12:03:20][:55 :230] Arguments: --host 172.25.1.11 --port 11211 --cache 
> idle_verify --skip-zeros --cache-filter PERSISTENT 
> [12:03:20][:55 :230] 
> 
> [12:03:20][:55 :230] idle_verify task was executed with the following args: 
> caches=[], excluded=[], cacheFilter=[PERSISTENT]
> [12:03:20][:55 :230] idle_verify check has finished, found 1 conflict 
> partitions: [counterConflicts=0, hashConflicts=1]
> [12:03:20][:55 :230] Hash conflicts:
> [12:03:20][:55 :230] Conflict partition: PartitionKeyV2 [grpId=1338167321, 
> grpName=cache_group_3_088_1, partId=24]
> [12:03:20][:55 :230] Partition instances: [PartitionHashRecordV2 
> [isPrimary=false, consistentId=node_1_2, updateCntr=172349, 
> partitionState=OWNING, size=6299, partHash=157875238], PartitionHashRecordV2 
> [isPrimary=true, consistentId=node_1_1, updateCntr=172349, 
> partitionState=OWNING, size=6299, partHash=157875238], PartitionHashRecordV2 
> [isPrimary=false, consistentId=node_1_4, updateCntr=172349, 
> partitionState=OWNING, size=6300, partHash=-944532882]]
> [12:03:20][:55 :230] Command [CACHE] finished with code: 0
> [12:03:20][:55 :230] Control utility has completed execution at: 
> 2020-03-03T12:03:20.593
> [12:03:20][:55 :230] Execution time: 757 ms
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-15167) Control.sh should be able to fix cache inconsistency using Read Repair

2022-06-17 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-15167:
-
Labels: iep-31 ise  (was: iep-31)

> Control.sh should be able to fix cache inconsistency using Read Repair
> --
>
> Key: IGNITE-15167
> URL: https://issues.apache.org/jira/browse/IGNITE-15167
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Critical
>  Labels: iep-31, ise
>
> Inconsistent caches can be found using idle_verify 
> (https://ignite.apache.org/docs/latest/tools/control-script#verifying-partition-checksums).
> Additional commands to find/fix inconsistent entries should be added to 
> control.sh.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-15327) Idle_verify fails on cluster check when nodesFilter is used

2022-06-17 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-15327:
-
Labels: ise  (was: )

> Idle_verify fails on cluster check when nodesFilter is used
> ---
>
> Key: IGNITE-15327
> URL: https://issues.apache.org/jira/browse/IGNITE-15327
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Priority: Major
>  Labels: ise
>
> Start cluster and create a cache with a filter
> {noformat}
> cfg.setNodeFilter(node -> !node.consistentId().equals(filteredId)); 
> {noformat}
> and idle_verify will return you
> {noformat}
> The check procedure task was executed with the following args: caches=[], 
> excluded=[], cacheFilter=[DEFAULT]
> The check procedure failed.
> There are no caches matching given filter options.
> The check procedure failed on nodes:
> Node ID: 157c034a-4dfa-428f-a671-569fbad2 [127.0.0.1]
> Consistent ID: gridCommandHandlerTest2
> See log for additional information. 
> /Users/user/IdeaProjects/ignite/work/idle_verify-2021-08-17T16-56-15_283.txt
> Control utility [ver. 2.12.0-SNAPSHOT#20210817-sha1:DEV]
> 2021 Copyright(C) Apache Software Foundation
> User: user
> Time: 2021-08-17T16:56:09.858
> Command [CACHE] started
> Arguments: --cache idle_verify --yes 
> 
> Command [CACHE] finished with code: 0
> Control utility has completed execution at: 2021-08-17T16:56:15.298
> Execution time: 5440 ms
> {noformat}
> because of empty caches list on a filtered node
> {noformat}
> class 
> org.apache.ignite.internal.processors.cache.verify.NoMatchingCachesException: 
> null
>   at 
> org.apache.ignite.internal.processors.cache.verify.VerifyBackupPartitionsTaskV2$VerifyBackupPartitionsJobV2.getGroupIds(VerifyBackupPartitionsTaskV2.java:335)
>   at 
> org.apache.ignite.internal.processors.cache.verify.VerifyBackupPartitionsTaskV2$VerifyBackupPartitionsJobV2.execute(VerifyBackupPartitionsTaskV2.java:206)
>   at 
> org.apache.ignite.internal.processors.cache.verify.VerifyBackupPartitionsTaskV2$VerifyBackupPartitionsJobV2.execute(VerifyBackupPartitionsTaskV2.java:171)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:601)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:7253)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:595)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:522)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
>   at 
> org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1305)
>   at 
> org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:2155)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1908)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1529)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$5300(GridIoManager.java:242)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1422)
>   at 
> org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55)
> {noformat}
> BTW, don't forget to remove the following
> {noformat}
> // Another cache without nodeFilter required to perform idle_verify check.
> // See https://issues.apache.org/jira/browse/IGNITE-15327 for details.
> ignite.getOrCreateCache(cacheConfiguration(true)).getName();
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-15835) Add control.sh utility feature to detect partition reserve counter(HWM) inconsistency.

2022-06-17 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-15835:
-
Labels: ise  (was: )

> Add control.sh utility feature to detect partition reserve counter(HWM) 
> inconsistency.
> --
>
> Key: IGNITE-15835
> URL: https://issues.apache.org/jira/browse/IGNITE-15835
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Eduard Rakhmankulov
>Assignee: Eduard Rakhmankulov
>Priority: Major
>  Labels: ise
>
> Check during idle_verify that update and reserve counters are consistent (HWM 
> >= LWM).
> The current transaction protocol implementation allows the HWM to lag behind 
> the LWM on *backup* partitions. Therefore, idle_verify should check only 
> primary partitions.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17112) Consistency check must fix counter after the consistency fix

2022-06-09 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17112:
-
Labels: ise  (was: )

> Consistency check must fix counter after the consistency fix
> 
>
> Key: IGNITE-17112
> URL: https://issues.apache.org/jira/browse/IGNITE-17112
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: ise
> Fix For: 2.14
>
>
> Consistency repair repairs the consistency for the data committed on at least 
> a single node.
> But partition counter may have gaps for prepared, but not committed data, and 
> such gaps will cause exception on cluster activation: 
> {noformat}
> 2022-06-03 22:01:59.695 
> [ERROR][sys-#322][org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager]
>  Failed to update partition counter. Most probably a node with most actual 
> data is out of topology or data streamer is used in preload mode 
> (allowOverride=false) concurrently with cache transactions [grpName=XXX, 
> partId=9099]
> org.apache.ignite.IgniteCheckedException: Failed to update the counter 
> [newVal=4854911, curState=Counter [lwm=4854911, holes={4854912=Item 
> [start=4854912, delta=1]}, maxApplied=4854913, hwm=4854911]]
>  ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣at 
> org.apache.ignite.internal.processors.cache.PartitionUpdateCounterTrackingImpl.update(PartitionUpdateCounterTrackingImpl.java:153)
>  ~[ignite-core-2.11.0-p5.jar:2.11.0-p5]
>  ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣at 
> org.apache.ignite.internal.processors.cache.PartitionUpdateCounterErrorWrapper.update(PartitionUpdateCounterErrorWrapper.java:97)
>  ~[ignite-core-2.11.0-p5.jar:2.11.0-p5]
>  ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.updateCounter(IgniteCacheOffheapManagerImpl.java:1687)
>  ~[ignite-core-2.11.0-p5.jar:2.11.0-p5]
>  ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.updateCounter(GridCacheOffheapManager.java:2530)
>  ~[ignite-core-2.11.0-p5.jar:2.11.0-p5]
>  ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣at 
> org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition.updateCounter(GridDhtLocalPartition.java:913)
>  ~[ignite-core-2.11.0-p5.jar:2.11.0-p5]
>  ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣at 
> org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopologyImpl.update(GridDhtPartitionTopologyImpl.java:1491)
>  ~[ignite-core-2.11.0-p5.jar:2.11.0-p5]
>  ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.lambda$updatePartitionFullMap$81bdb8e8$1(GridDhtPartitionsExchangeFuture.java:4817)
>  ~[ignite-core-2.11.0-p5.jar:2.11.0-p5]
>  ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣at 
> org.apache.ignite.internal.util.IgniteUtils.lambda$null$1(IgniteUtils.java:11358)
>  ~[ignite-core-2.11.0-p5.jar:2.11.0-p5]
>  ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_322]
>  ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_322]
>  ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_322]
>  ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣ ⁣at java.lang.Thread.run(Thread.java:750)
> {noformat}
> Consistency check via cli must close these gaps on a successful consistency 
> repair.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-17111) Remove the ability to set the lazy flag in SqlFieldsQuery

2022-06-08 Thread Luchnikov Alexander (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551428#comment-17551428
 ] 

Luchnikov Alexander commented on IGNITE-17111:
--

[~jooger] I understand this; I filed the task so as not to forget.
For years now we have had to solve problems caused by lazy=false.
In the first iteration I would prefer not to remove setLazy from the public API, 
so as not to force everyone to update their application code. It is enough to 
mark this method as deprecated, with a detailed comment.
When the method is used, write to the WARN log, but throttle it so as not to 
spam the log - for example, at most once an hour.
I'll discuss it on the dev list.
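
A minimal sketch of the kind of throttled warning I mean (names are made up; 
this is not a patch):
{code:java}
private static final long WARN_THROTTLE_MS = 60 * 60 * 1000L; // at most once an hour

private final AtomicLong lastLazyWarnTs = new AtomicLong();

private void warnOnExplicitLazy(IgniteLogger log) {
    long now = System.currentTimeMillis();
    long last = lastLazyWarnTs.get();

    // CAS keeps concurrent callers from duplicating the message within the window.
    if (now - last >= WARN_THROTTLE_MS && lastLazyWarnTs.compareAndSet(last, now))
        log.warning("SqlFieldsQuery#setLazy is deprecated; queries are always executed lazily.");
}
{code}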

> Remove the ability to set the lazy flag in SqlFieldsQuery
> -
>
> Key: IGNITE-17111
> URL: https://issues.apache.org/jira/browse/IGNITE-17111
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
>
> Remove the ability to set the lazy flag in SqlFieldsQuery. SqlFieldsQuery 
> must always be executed with lazy=true. 
> This property 
> (org.apache.ignite.IgniteSystemProperties#IGNITE_SQL_FORCE_LAZY_RESULT_SET) 
> refers to the same functionality, but is not used in the code.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-15167) Control.sh should be able to fix cache inconsistency using Read Repair

2022-06-07 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-15167:
-
Labels: idle_verify iep-31  (was: iep-31)

> Control.sh should be able to fix cache inconsistency using Read Repair
> --
>
> Key: IGNITE-15167
> URL: https://issues.apache.org/jira/browse/IGNITE-15167
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Critical
>  Labels: idle_verify, iep-31
>
> Inconsistent caches can be found using idle_verify 
> (https://ignite.apache.org/docs/latest/tools/control-script#verifying-partition-checksums).
> Additional commands to find/fix inconsistent entries should be added to 
> control.sh.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-15167) Control.sh should be able to fix cache inconsistency using Read Repair

2022-06-07 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-15167:
-
Labels: iep-31  (was: idle_verify iep-31)

> Control.sh should be able to fix cache inconsistency using Read Repair
> --
>
> Key: IGNITE-15167
> URL: https://issues.apache.org/jira/browse/IGNITE-15167
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Critical
>  Labels: iep-31
>
> Inconsistent caches can be found using idle_verify 
> (https://ignite.apache.org/docs/latest/tools/control-script#verifying-partition-checksums).
> Additional commands to find/fix inconsistent entries should be added to 
> control.sh.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17114) Idle_verify must print and compare full partition counter state instead of just LWM

2022-06-06 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17114:
-
Labels: ise  (was: )

> Idle_verify must print and compare full partition counter state instead of 
> just LWM
> ---
>
> Key: IGNITE-17114
> URL: https://issues.apache.org/jira/browse/IGNITE-17114
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: ise
> Fix For: 2.14
>
>
> Gaps also should be printed/compared.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17111) Remove the ability to set the lazy flag in SqlFieldsQuery

2022-06-06 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17111:
-
Labels: ise  (was: )

> Remove the ability to set the lazy flag in SqlFieldsQuery
> -
>
> Key: IGNITE-17111
> URL: https://issues.apache.org/jira/browse/IGNITE-17111
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
>
> Remove the ability to set the lazy flag in SqlFieldsQuery. SqlFieldsQuery 
> must always be executed with lazy=true. 
> This property 
> (org.apache.ignite.IgniteSystemProperties#IGNITE_SQL_FORCE_LAZY_RESULT_SET) 
> refers to the same functionality, but is not used in the code.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17111) Remove the ability to set the lazy flag in SqlFieldsQuery

2022-06-06 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-17111:


 Summary: Remove the ability to set the lazy flag in SqlFieldsQuery
 Key: IGNITE-17111
 URL: https://issues.apache.org/jira/browse/IGNITE-17111
 Project: Ignite
  Issue Type: Improvement
Reporter: Luchnikov Alexander


Remove the ability to set the lazy flag in SqlFieldsQuery. SqlFieldsQuery must 
always be executed with lazy=true. 
This property 
(org.apache.ignite.IgniteSystemProperties#IGNITE_SQL_FORCE_LAZY_RESULT_SET) 
refers to the same functionality, but is not used in the code.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-16103) Failed to create index for table "table" with some options

2022-05-27 Thread Luchnikov Alexander (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542811#comment-17542811
 ] 

Luchnikov Alexander commented on IGNITE-16103:
--

[~timonin.maksim] As agreed earlier, I am assigning the ticket to you to 
evaluate its implementation as suggested in 
https://github.com/apache/ignite/pull/9837, taking into account the switch to 
Calcite, the defect priority, and the fix cost.

> Failed to create index for table "table" with some options
> --
>
> Key: IGNITE-16103
> URL: https://issues.apache.org/jira/browse/IGNITE-16103
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Minor
>  Labels: good-first-issue, newbie
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> 1. How to reproduce - all of the options CACHE_NAME, VALUE_TYPE, INLINE_SIZE 
> must be present in the queries to reproduce the failure:
> ```
> create table table(id int PRIMARY KEY, fld1 int, fld2 int) with 
> "CACHE_NAME=TEST_CACHE_NAME,VALUE_TYPE=TEST_VALUE_TYPE";
> create index idx_0 on table(fld1, fld2) INLINE_SIZE 0;
> ```
> Creation of the index fails with an exception:
> Syntax error in SQL statement "CREATE INDEX IDX_0 ON TABLE(FLD1, FLD2) 
> INLINE_SIZE[*] 0 "; SQL statement:
> create index IDX_0 on table(fld1, fld2) INLINE_SIZE 0 [42000-197]
> at
>  
> 2. How to fix: need to debug why the parameters matter; it looks like the 
> presence of these options triggers checks that do not run when no options are 
> specified. These checks should then be triggered independently of the 
> specified options, or removed (as appropriate).
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (IGNITE-16103) Failed to create index for table "table" with some options

2022-05-27 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander reassigned IGNITE-16103:


Assignee: Maksim Timonin  (was: Luchnikov Alexander)

> Failed to create index for table "table" with some options
> --
>
> Key: IGNITE-16103
> URL: https://issues.apache.org/jira/browse/IGNITE-16103
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Minor
>  Labels: good-first-issue, newbie
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> 1. How to reproduce - all of the options CACHE_NAME, VALUE_TYPE, INLINE_SIZE 
> must be present in the queries to reproduce the failure:
> ```
> create table table(id int PRIMARY KEY, fld1 int, fld2 int) with 
> "CACHE_NAME=TEST_CACHE_NAME,VALUE_TYPE=TEST_VALUE_TYPE";
> create index idx_0 on table(fld1, fld2) INLINE_SIZE 0;
> ```
> Creation of the index fails with an exception:
> Syntax error in SQL statement "CREATE INDEX IDX_0 ON TABLE(FLD1, FLD2) 
> INLINE_SIZE[*] 0 "; SQL statement:
> create index IDX_0 on table(fld1, fld2) INLINE_SIZE 0 [42000-197]
> at
>  
> 2. How to fix: need to debug why the parameters matter; it looks like the 
> presence of these options triggers checks that do not run when no options are 
> specified. These checks should then be triggered independently of the 
> specified options, or removed (as appropriate).
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (IGNITE-16798) Problem with registers of field names when rolling a patch (QueryEntity)

2022-05-27 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander resolved IGNITE-16798.
--
Resolution: Duplicate

>  Problem with registers of field names when rolling a patch (QueryEntity)
> -
>
> Key: IGNITE-16798
> URL: https://issues.apache.org/jira/browse/IGNITE-16798
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.12
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: ignite16798.patch
>
>
> reproducer  [^ignite16798.patch] 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Closed] (IGNITE-16798) Problem with registers of field names when rolling a patch (QueryEntity)

2022-05-27 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander closed IGNITE-16798.


>  Problem with registers of field names when rolling a patch (QueryEntity)
> -
>
> Key: IGNITE-16798
> URL: https://issues.apache.org/jira/browse/IGNITE-16798
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.12
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: ignite16798.patch
>
>
> reproducer  [^ignite16798.patch] 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Reopened] (IGNITE-16798) Problem with registers of field names when rolling a patch (QueryEntity)

2022-05-27 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander reopened IGNITE-16798:
--

>  Problem with registers of field names when rolling a patch (QueryEntity)
> -
>
> Key: IGNITE-16798
> URL: https://issues.apache.org/jira/browse/IGNITE-16798
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.12
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: ignite16798.patch
>
>
> reproducer  [^ignite16798.patch] 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (IGNITE-16798) Problem with registers of field names when rolling a patch (QueryEntity)

2022-05-27 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander resolved IGNITE-16798.
--
Resolution: Duplicate

>  Problem with registers of field names when rolling a patch (QueryEntity)
> -
>
> Key: IGNITE-16798
> URL: https://issues.apache.org/jira/browse/IGNITE-16798
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.12
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: ignite16798.patch
>
>
> reproducer  [^ignite16798.patch] 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17041) Normalize query entity after it is modified during merge process.

2022-05-27 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17041:
-
Labels: ise  (was: )

> Normalize query entity after it is modified during merge process.
> -
>
> Key: IGNITE-17041
> URL: https://issues.apache.org/jira/browse/IGNITE-17041
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Petrov
>Priority: Major
>  Labels: ise
>
> The query entity needs to be normalized after it is modified during the MERGE 
> process, as is done during the initial cache configuration processing. 
> Currently, new table columns created from Query Entity fields added during 
> the MERGE process are named differently from columns created from the initial 
> Query Entity fields.
> For example, if the CacheConfiguration#isSqlEscapeAll flag is disabled, all 
> QueryEntity fields are converted to upper case and used as such to name 
> columns. But this does not happen if a Query Entity field was added during 
> the MERGE process. This confuses users and leads to situations where column 
> conflicts cannot be detected because the column names differ.
> Reproducer:
> {code:java}
> public class TestClass extends GridCommonAbstractTest {
> /**
>  * Start cluster nodes.
>  */
> public static final int NODES_CNT = 2;
> /**
>  * Count of backup partitions.
>  */
> public static final int BACKUPS = 2;
> @Override
> protected IgniteConfiguration getConfiguration(String igniteInstanceName) 
> throws Exception {
> QueryEntity queryEntity = new QueryEntity(String.class, Person.class)
> .setTableName("PERSON")
> .addQueryField("id", Boolean.class.getName(), null)
> .addQueryField("name", String.class.getName(), null);
> CacheConfiguration configuration = new 
> CacheConfiguration<>(GridAbstractTest.DEFAULT_CACHE_NAME)
> .setBackups(BACKUPS)
> .setQueryEntities(Collections.singletonList(queryEntity));
> if (igniteInstanceName.endsWith("1"))
> queryEntity.addQueryField("age", Boolean.class.getName(), null);
> IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName)
> .setConsistentId(igniteInstanceName)
> .setDataStorageConfiguration(new DataStorageConfiguration()
> .setDefaultDataRegionConfiguration(new 
> DataRegionConfiguration()))
> .setCacheConfiguration(
> configuration);
> return cfg;
> }
> /**
>  * {@inheritDoc}
>  */
> @Override
> protected void afterTest() throws Exception {
> stopAllGrids();
> }
> /**
>  *
>  */
> @Test
> public void testIssue() throws Exception {
> startGrid(0);
> grid(0);
> grid(0).cache(GridAbstractTest.DEFAULT_CACHE_NAME).query(new 
> SqlFieldsQuery("ALTER TABLE PERSON ADD age INTEGER")).getAll();
> GridTestUtils.assertThrows(log, () -> startGrid(1), Exception.class, 
> "");
> grid(0).cluster().state(ClusterState.INACTIVE);
> startGrid(1);
> grid(0).cluster().state(ClusterState.ACTIVE);
> System.out.println(grid(0).cache(DEFAULT_CACHE_NAME).query(new 
> SqlFieldsQuery("select * from \"SYS\".TABLE_COLUMNS"))
> .getAll());
> }
> class Person {
> private int id;
> private String name;
> private boolean age;
> }
> }
> {code}
> As a result, the "age" column is duplicated in upper and camel case, and no 
> conflicts are reported.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-16936) Incorrect DML syntax error message contains sensitive information

2022-05-24 Thread Luchnikov Alexander (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541466#comment-17541466
 ] 

Luchnikov Alexander commented on IGNITE-16936:
--

[~jooger] 
IGNITE-7001 fixed a similar issue, but for the duplicate-key-during-INSERT 
event.
In my case, the failure happens at the query parsing phase, and everything that 
was submitted for execution is displayed - Syntax error in SQL statement 
"INSERT TEST[*] (ID, VAL) VALUES (3, 'SENSITIVE_DATA') "; expected "INTO"; SQL 
statement:


> Incorrect DML syntax error message contains sensitive information
> -
>
> Key: IGNITE-16936
> URL: https://issues.apache.org/jira/browse/IGNITE-16936
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Major
>  Labels: ise
> Attachments: 
> IGNITE-16936_Ignore_IGNITE_TO_STRING_INCLUDE_SENSITIVE_in_wrong_syntax_DML_error_message_-.patch
>
>
> An incorrect DML syntax error message contains sensitive information, 
> regardless of the value of IGNITE_TO_STRING_INCLUDE_SENSITIVE.
> The reproducer  
> [^IGNITE-16936_Ignore_IGNITE_TO_STRING_INCLUDE_SENSITIVE_in_wrong_syntax_DML_error_message_-.patch]
>  shows that the SENSITIVE data is contained in the message.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17027) Incorrect result of the DML delete operation, in some environment

2022-05-24 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17027:
-
Description: 
In some environments, a DML operation does not delete all data (the DML delete 
operation here is an example). To reproduce the problem, you must:
# Create a table with a varchar field.
# Create an index on that field.
# Fill the table with data, using int as the value.
# Delete the data by passing the indexed value as the condition of the DML 
operation, without String.valueOf(intValue).

The reproducer ( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in the 
indexOnAutocastOff() test.

The result of all tests should be the same; specifically in this example (DML 
delete), the number of entries in the cache after each test should be equal to 
zero.

  was:
Description of the case, in some environment, the DML operation does not delete 
all data (this DML delete operation is an example). To reproduce the problem, 
you must:
# Create a table with a varchar field.
# Create an index on the given field.
# Fill the table with data, use int as the value.
# Delete data by specifying in the DML operation, as a condition, an indexed 
value, without String.valueOf(intValue)

The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
indexOnAutocastOff() test.


>  Incorrect result of the DML delete operation, in some environment
> --
>
> Key: IGNITE-17027
> URL: https://issues.apache.org/jira/browse/IGNITE-17027
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: IndexSetArgsAndCastTest.patch
>
>
> In some environments, a DML operation does not delete all data (the DML 
> delete operation here is an example). To reproduce the problem, you must:
> # Create a table with a varchar field.
> # Create an index on that field.
> # Fill the table with data, using int as the value.
> # Delete the data by passing the indexed value as the condition of the DML 
> operation, without String.valueOf(intValue).
> The reproducer ( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
> the indexOnAutocastOff() test.
> The result of all tests should be the same; specifically in this example (DML 
> delete), the number of entries in the cache after each test should be equal 
> to zero.
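
A minimal sketch of the scenario in SQL terms (table and index names are 
illustrative; the attached reproducer is the authoritative version):
{code:java}
create table T(id int primary key, val varchar);
create index t_val_idx on T(val);

insert into T(id, val) values (1, '1');

-- The delete condition is bound as an int, i.e. stmt.setObject(1, 1)
-- instead of stmt.setString(1, String.valueOf(1)):
delete from T where val = ?;

-- Expected in every environment: select count(*) from T returns 0.
{code}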



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17027) Incorrect result of the DML delete operation, in some environment

2022-05-24 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17027:
-
Description: 
Description of the case, in some environment, the DML operation does not delete 
all data (this DML delete operation is an example). To reproduce the problem, 
you must:
# Create a table with a varchar field.
# Create an index on the given field.
# Fill the table with data, use int as the value.
# Delete data by specifying in the DML operation, as a condition, an indexed 
value, without String.valueOf(intValue)

The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
indexOnAutocastOff() test.

  was:
Description of the case, in some environment, the DML operation does not delete 
all data (this DML delete operation is an example). To reproduce the problem, 
you must:
# Create a table with a varchar field.
# Create an index on the given field.
# Fill the table with data.
# Delete data by specifying in the DML operation, as a condition, an indexed 
value.

The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
indexOnAutocastOff() test.


>  Incorrect result of the DML delete operation, in some environment
> --
>
> Key: IGNITE-17027
> URL: https://issues.apache.org/jira/browse/IGNITE-17027
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: IndexSetArgsAndCastTest.patch
>
>
> Description of the case, in some environment, the DML operation does not 
> delete all data (this DML delete operation is an example). To reproduce the 
> problem, you must:
> # Create a table with a varchar field.
> # Create an index on the given field.
> # Fill the table with data, use int as the value.
> # Delete data by specifying in the DML operation, as a condition, an indexed 
> value, without String.valueOf(intValue)
> The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
> indexOnAutocastOff() test.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17027) Incorrect result of the DML delete operation, in some environment

2022-05-24 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17027:
-
Description: 
Description of the case, in some environment, the DML operation does not delete 
all data (this DML delete operation is an example). To reproduce the problem, 
you must:
# Create a table with a varchar field.
# Create an index on the given field.
# Fill the table with data.
# Delete data by specifying in the DML operation, as a condition, an indexed 
value.

The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
indexOnAutocastOff() test.

  was:
Description of the case, in some environment, the DML operation does not delete 
all data (this DML delete operation is an example). To reproduce the problem, 
you must:
# Create a table with a varchar field.
# Create an index on the given field.
# Fill the table with data.
# Delete data by specifying in the DML operation, as a condition, an indexed 
value.
The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
indexOnAutocastOff() test.


>  Incorrect result of the DML delete operation, in some environment
> --
>
> Key: IGNITE-17027
> URL: https://issues.apache.org/jira/browse/IGNITE-17027
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: IndexSetArgsAndCastTest.patch
>
>
> Description of the case, in some environment, the DML operation does not 
> delete all data (this DML delete operation is an example). To reproduce the 
> problem, you must:
> # Create a table with a varchar field.
> # Create an index on the given field.
> # Fill the table with data.
> # Delete data by specifying in the DML operation, as a condition, an indexed 
> value.
> The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
> indexOnAutocastOff() test.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17027) Incorrect result of the DML delete operation, in some environment

2022-05-24 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17027:
-
Description: 
Description of the case, in some environment, the DML operation does not delete 
all data (this DML delete operation is an example). To reproduce the problem, 
you must:
# Create a table with a varchar field.
# Create an index on the given field.
# Fill the table with data.
# Delete data by specifying in the DML operation, as a condition, an indexed 
value.
The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
indexOnAutocastOff() test.

  was:
Case descriptions, in some environment,

The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
indexOnAutocastOff() test.


>  Incorrect result of the DML delete operation, in some environment
> --
>
> Key: IGNITE-17027
> URL: https://issues.apache.org/jira/browse/IGNITE-17027
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: IndexSetArgsAndCastTest.patch
>
>
> Description of the case, in some environment, the DML operation does not 
> delete all data (this DML delete operation is an example). To reproduce the 
> problem, you must:
> # Create a table with a varchar field.
> # Create an index on the given field.
> # Fill the table with data.
> # Delete data by specifying in the DML operation, as a condition, an indexed 
> value.
> The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
> indexOnAutocastOff() test.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17027) Incorrect result of the DML delete operation, in some environment

2022-05-24 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17027:
-
Description: 
Case descriptions, in some environment,

The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
indexOnAutocastOff() test.

  was:
Case descriptions, in some

The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
indexOnAutocastOff() test.


>  Incorrect result of the DML delete operation, in some environment
> --
>
> Key: IGNITE-17027
> URL: https://issues.apache.org/jira/browse/IGNITE-17027
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: IndexSetArgsAndCastTest.patch
>
>
> Case descriptions, in some environment,
> The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
> indexOnAutocastOff() test.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17027) Incorrect result of the DML delete operation, in some environment

2022-05-24 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17027:
-
Description: 
Case descriptions, in some

The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
indexOnAutocastOff() test.

  was:
Case descriptions:
# 

The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
indexOnAutocastOff() test.


>  Incorrect result of the DML delete operation, in some environment
> --
>
> Key: IGNITE-17027
> URL: https://issues.apache.org/jira/browse/IGNITE-17027
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: IndexSetArgsAndCastTest.patch
>
>
> Case descriptions, in some
> The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
> indexOnAutocastOff() test.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17027) Incorrect result of the DML delete operation, in some environment

2022-05-24 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17027:
-
Description: 
Case descriptions:
# 

The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
indexOnAutocastOff() test.

  was:
Case descriptions:
# 

The reproducer( [^IndexSetArgsAndCastTest.patch] ) shows this behavior.


>  Incorrect result of the DML delete operation, in some environment
> --
>
> Key: IGNITE-17027
> URL: https://issues.apache.org/jira/browse/IGNITE-17027
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: IndexSetArgsAndCastTest.patch
>
>
> Case descriptions:
> # 
> The reproducer ( [^IndexSetArgsAndCastTest.patch] ) shows this behavior in 
> the indexOnAutocastOff() test.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17027) Incorrect result of the DML delete operation, in some environment

2022-05-24 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17027:
-
Description: 
Case descriptions:
# 

The reproducer ( [^IndexSetArgsAndCastTest.patch] ) shows this behavior.

  was:
Case descriptions:
# 
The reproducer() shows this behavior.


>  Incorrect result of the DML delete operation, in some environment
> --
>
> Key: IGNITE-17027
> URL: https://issues.apache.org/jira/browse/IGNITE-17027
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: IndexSetArgsAndCastTest.patch
>
>
> Case descriptions:
> # 
> The reproducer ( [^IndexSetArgsAndCastTest.patch] ) shows this behavior.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17027) Incorrect result of the DML delete operation, in some environment

2022-05-24 Thread Luchnikov Alexander (Jira)
Luchnikov Alexander created IGNITE-17027:


 Summary:  Incorrect result of the DML delete operation, in some 
environment
 Key: IGNITE-17027
 URL: https://issues.apache.org/jira/browse/IGNITE-17027
 Project: Ignite
  Issue Type: Bug
Reporter: Luchnikov Alexander
 Attachments: IndexSetArgsAndCastTest.patch

Case descriptions:
# 
The reproducer() shows this behavior.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17027) Incorrect result of the DML delete operation, in some environment

2022-05-24 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17027:
-
Labels: ise  (was: )

>  Incorrect result of the DML delete operation, in some environment
> --
>
> Key: IGNITE-17027
> URL: https://issues.apache.org/jira/browse/IGNITE-17027
> Project: Ignite
>  Issue Type: Bug
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: IndexSetArgsAndCastTest.patch
>
>
> Case descriptions:
> # 
> The reproducer() shows this behavior.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17025) Remove the ability to manually set INLINE_SIZE for types with a fixed length

2022-05-24 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17025:
-
Description: 
The reproducer ( [^InlineIndexTest1.patch] ) shows index.bin size growing when 
INLINE_SIZE increases when creating indexes on fixed length fields.The negative 
point is that a place is reserved that does not carry any profit.

As a solution. When trying to build an index on a field with a fixed length 
type, do not allow this. The value of INLINE_SIZE for such types is calculated 
automatically. And display a WARN level message about it.

To see the size of index.bin, run the
{code:java}
du -sh IGNITE_HOME/db/node*/*INLINE*/* | grep index.bin" 
{code}
after running the reproducer.

  was:
The reproducer ( [^InlineIndexTest1.patch] ) shows index.bin growing in size as 
INLINE_SIZE increases when creating indexes on fixed-length fields. The 
downside is that space is reserved that brings no benefit.

As a solution: when an attempt is made to set INLINE_SIZE manually for an index 
on a field with a fixed-length type, disallow it, since the value of 
INLINE_SIZE for such types is calculated automatically, and log a WARN-level 
message about it.

To see the size of index.bin, run "du -sh IGNITE_HOME/db/node*/*INLINE*/* | 
grep index.bin" after running the reproducer.


> Remove the ability to manually set INLINE_SIZE for types with a fixed length
> 
>
> Key: IGNITE-17025
> URL: https://issues.apache.org/jira/browse/IGNITE-17025
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: InlineIndexTest1.patch
>
>
> The reproducer ( [^InlineIndexTest1.patch] ) shows index.bin growing in size 
> as INLINE_SIZE increases when creating indexes on fixed-length fields. The 
> downside is that space is reserved that brings no benefit.
> As a solution: when an attempt is made to set INLINE_SIZE manually for an 
> index on a field with a fixed-length type, disallow it, since the value of 
> INLINE_SIZE for such types is calculated automatically, and log a WARN-level 
> message about it.
> To see the size of index.bin, run
> {code:java}
> du -sh IGNITE_HOME/db/node*/*INLINE*/* | grep index.bin
> {code}
> after running the reproducer.
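The reproducer patch itself is not attached in this thread, so below is a 
minimal sketch of what such a reproducer presumably looks like, based on the 
table-per-type directory names seen in the du output elsewhere in this issue 
(BIGINT_INLINE10, BIGINT_INLINE100, and so on; NUMBER is omitted for brevity). 
The table names, row count, and dummy cache used to run SQL are assumptions; 
persistence is enabled so that index.bin files are materialized on disk.
{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class InlineSizeRepro {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Native persistence is required for index.bin files to appear on disk.
        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storage);

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cluster().state(ClusterState.ACTIVE);

            for (String type : new String[] {"BIGINT", "INT", "VARCHAR"}) {
                for (int inlineSize : new int[] {10, 100}) {
                    String tbl = type + "_INLINE" + inlineSize;

                    // One table per type/INLINE_SIZE combination, so each cache
                    // directory (and its index.bin) can be measured separately.
                    execute(ignite, "CREATE TABLE " + tbl
                        + " (ID INT PRIMARY KEY, V " + type + ")"
                        + " WITH \"CACHE_NAME=" + tbl + "\"");

                    // Explicit INLINE_SIZE, even though the type length is fixed.
                    execute(ignite, "CREATE INDEX " + tbl + "_IDX ON " + tbl
                        + " (V) INLINE_SIZE " + inlineSize);

                    for (int i = 0; i < 10_000; i++)
                        execute(ignite, "INSERT INTO " + tbl + " VALUES (?, ?)",
                            i, "VARCHAR".equals(type) ? String.valueOf(i) : i);
                }
            }
        }
    }

    private static void execute(Ignite ignite, String sql, Object... args) {
        ignite.getOrCreateCache("tmp").query(new SqlFieldsQuery(sql).setArgs(args)).getAll();
    }
}
{code}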



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17025) Remove the ability to manually set INLINE_SIZE for types with a fixed length

2022-05-24 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander updated IGNITE-17025:
-
Description: 
The reproducer ( [^InlineIndexTest1.patch] ) shows index.bin size growing when 
INLINE_SIZE increases when creating indexes on fixed length fields.The negative 
point is that a place is reserved that does not carry any profit.

As a solution. When trying to build an index on a field with a fixed length 
type, do not allow this. The value of INLINE_SIZE for such types is calculated 
automatically. And display a WARN level message about it.

To see the size of index.bin, run the
{code:java}
du -sh IGNITE_HOME/db/node*/*INLINE*/* | grep index.bin" 
{code}
after running the reproducer.
{code:java}
36K BIGINT_INLINE10/index.bin
320K BIGINT_INLINE100/index.bin
128K INT_INLINE10/index.bin
324K INT_INLINE100/index.bin
 64K NUMBER_INLINE10/index.bin
 64K NUMBER_INLINE100/index.bin
128K VARCHAR_INLINE10/index.bin
256K VARCHAR_INLINE100/index.bin
{code}


  was:
The reproducer ( [^InlineIndexTest1.patch] ) shows index.bin growing in size as 
INLINE_SIZE increases when creating indexes on fixed-length fields. The 
downside is that space is reserved that brings no benefit.

As a solution: when an attempt is made to set INLINE_SIZE manually for an index 
on a field with a fixed-length type, disallow it, since the value of 
INLINE_SIZE for such types is calculated automatically, and log a WARN-level 
message about it.

To see the size of index.bin, run
{code:java}
du -sh IGNITE_HOME/db/node*/*INLINE*/* | grep index.bin
{code}
after running the reproducer.


> Remove the ability to manually set INLINE_SIZE for types with a fixed length
> 
>
> Key: IGNITE-17025
> URL: https://issues.apache.org/jira/browse/IGNITE-17025
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Luchnikov Alexander
>Priority: Minor
>  Labels: ise
> Attachments: InlineIndexTest1.patch
>
>
> The reproducer ( [^InlineIndexTest1.patch] ) shows index.bin growing in size 
> as INLINE_SIZE increases when creating indexes on fixed-length fields. The 
> downside is that space is reserved that brings no benefit.
> As a solution: when an attempt is made to set INLINE_SIZE manually for an 
> index on a field with a fixed-length type, disallow it, since the value of 
> INLINE_SIZE for such types is calculated automatically, and log a WARN-level 
> message about it.
> To see the size of index.bin, run
> {code:java}
> du -sh IGNITE_HOME/db/node*/*INLINE*/* | grep index.bin
> {code}
> after running the reproducer.
> {code:java}
> 36K BIGINT_INLINE10/index.bin
> 320K BIGINT_INLINE100/index.bin
> 128K INT_INLINE10/index.bin
> 324K INT_INLINE100/index.bin
>  64K NUMBER_INLINE10/index.bin
>  64K NUMBER_INLINE100/index.bin
> 128K VARCHAR_INLINE10/index.bin
> 256K VARCHAR_INLINE100/index.bin
> {code}
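As a hypothetical sketch of the proposed behavior (this is not actual Ignite 
internals; the class and method names are invented and the per-type byte sizes 
are illustrative only), the validation could look like this: compute the inline 
size for fixed-length types up front, ignore any user-supplied INLINE_SIZE for 
them, and report the discrepancy at WARN level.
{code:java}
import org.apache.ignite.IgniteLogger;

public class FixedLengthInlineSizeCheck {
    /**
     * Returns the inline size to use for a single-column index. For
     * fixed-length types the size is computed and a user-supplied value is
     * ignored with a WARN; variable-length types keep the requested value.
     */
    public static int effectiveInlineSize(String sqlType, int requestedSize, IgniteLogger log) {
        int fixedSize;

        if ("INT".equals(sqlType))
            fixedSize = 5;        // 1-byte type tag + 4-byte value (illustrative).
        else if ("BIGINT".equals(sqlType))
            fixedSize = 9;        // 1-byte type tag + 8-byte value (illustrative).
        else
            return requestedSize; // Variable-length type: honor the request.

        if (requestedSize > 0 && requestedSize != fixedSize)
            log.warning("Explicit INLINE_SIZE " + requestedSize
                + " is ignored for fixed-length type " + sqlType
                + "; the computed size " + fixedSize + " is used instead.");

        return fixedSize;
    }
}
{code}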



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

