[jira] [Updated] (IGNITE-17349) Ignite3 CLI output formatting

2022-07-11 Thread Aleksandr (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr updated IGNITE-17349:
---
Description: 
The Ignite3 CLI is currently not consistent from a formatting/styles perspective. 
Messages about what went wrong differ from each other. In some places 'Done!' 
marks a successful operation ({{ignite bootstrap}}); in others it is just a 
sentence noting that something finished ({{ignite connect}}). Tables are 
rendered with different borders for the {{ignite bootstrap}}, {{ignite node 
list}} and {{ignite topology}} commands.

The goal of this ticket is to develop user-facing interface components and use 
them in the CLI code. Compiling the full list of components is also part of this 
ticket, but here are some of them:
- problem JSON renderer
- table renderer
- success action renderer
- suggestion renderer.
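To make the look and feel uniform, these components could share a common rendering interface. A minimal sketch, with hypothetical names rather than the actual Ignite 3 CLI API:

```java
// Hypothetical sketch of a shared interface for CLI output components so that
// success messages and tables look the same across all commands.
// All names here are illustrative, not the actual Ignite 3 CLI API.
import java.util.List;

interface UiElement {
    String render();
}

class SuccessElement implements UiElement {
    private final String message;

    SuccessElement(String message) {
        this.message = message;
    }

    // Uniform success marker: every command reports completion the same way.
    @Override public String render() {
        return "Done! " + message;
    }
}

class TableElement implements UiElement {
    private final List<String> header;
    private final List<List<String>> rows;

    TableElement(List<String> header, List<List<String>> rows) {
        this.header = header;
        this.rows = rows;
    }

    // Uniform layout shared by all table-producing commands.
    @Override public String render() {
        StringBuilder sb = new StringBuilder(String.join(" | ", header)).append('\n');
        for (List<String> row : rows) {
            sb.append(String.join(" | ", row)).append('\n');
        }
        return sb.toString();
    }
}
```

Each command would then build `UiElement` instances instead of printing ad-hoc strings, so borders and markers stay consistent.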

  was:
Ignite3 CLI now is not consistent from formatting/styles perspective. Messages 
about what went wrong differ from each other. Somewhere 'Done!' is a marker of 
successful operation (ignite bootstrap), somewhere it is just a sentence 
notifying that something is done (ignite connect). Tables are rendered with 
different borders for ignite bootstrap, ignite node list and for ignite 
topology.

The goal of this ticket is to develop user-facing interface components and use 
them in the CLI code. The list of the components is also a part of this ticket 
but here are some of them:
- problem json render
- table render
- success action render
- suggestion render 


> Ignite3 CLI output formatting
> -
>
> Key: IGNITE-17349
> URL: https://issues.apache.org/jira/browse/IGNITE-17349
> Project: Ignite
>  Issue Type: Task
>  Components: cli
>Reporter: Aleksandr
>Priority: Major
>  Labels: ignite-3, ignite-3-cli-tool
>
> The Ignite3 CLI is currently not consistent from a formatting/styles 
> perspective. Messages about what went wrong differ from each other. In some 
> places 'Done!' marks a successful operation ({{ignite bootstrap}}); in others 
> it is just a sentence noting that something finished ({{ignite connect}}). 
> Tables are rendered with different borders for the {{ignite bootstrap}}, 
> {{ignite node list}} and {{ignite topology}} commands.
> The goal of this ticket is to develop user-facing interface components and 
> use them in the CLI code. Compiling the full list of components is also part 
> of this ticket, but here are some of them:
> - problem JSON renderer
> - table renderer
> - success action renderer
> - suggestion renderer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17349) Ignite3 CLI output formatting

2022-07-11 Thread Aleksandr (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr updated IGNITE-17349:
---
Description: 
The Ignite3 CLI is currently not consistent from a formatting/styles perspective. 
Messages about what went wrong differ from each other. In some places 'Done!' 
marks a successful operation ({{ignite bootstrap}}); in others it is just a 
sentence noting that something finished ({{ignite connect}}). Tables are 
rendered with different borders for the {{ignite bootstrap}}, {{ignite node 
list}} and {{ignite topology}} commands.

The goal of this ticket is to develop user-facing interface components and use 
them in the CLI code. Compiling the full list of components is also part of this 
ticket, but here are some of them:
- problem JSON renderer
- table renderer
- success action renderer
- suggestion renderer.

  was:
Ignite3 CLI now is not consistent from formatting/styles perspective. Messages 
about what went wrong differ from each other. Somewhere 'Done!' is a marker of 
successful operation ({{ignite bootstrap}}), somewhere it is just a sentence 
notifying that something is done ({{ignite connect}}). Tables are rendered with 
different borders for {{ignite bootstrap}}, {{ignite node list}} and for 
{{ignite topology}} commands.

The goal of this ticket is to develop user-facing interface components and use 
them in the CLI code. The list of the components is also a part of this ticket 
but here are some of them:
- problem json render
- table render
- success action render
- suggestion render.


> Ignite3 CLI output formatting
> -
>
> Key: IGNITE-17349
> URL: https://issues.apache.org/jira/browse/IGNITE-17349
> Project: Ignite
>  Issue Type: Task
>  Components: cli
>Reporter: Aleksandr
>Priority: Major
>  Labels: ignite-3, ignite-3-cli-tool
>
> The Ignite3 CLI is currently not consistent from a formatting/styles 
> perspective. Messages about what went wrong differ from each other. In some 
> places 'Done!' marks a successful operation ({{ignite bootstrap}}); in others 
> it is just a sentence noting that something finished ({{ignite connect}}). 
> Tables are rendered with different borders for the {{ignite bootstrap}}, 
> {{ignite node list}} and {{ignite topology}} commands.
> The goal of this ticket is to develop user-facing interface components and 
> use them in the CLI code. Compiling the full list of components is also part 
> of this ticket, but here are some of them:
> - problem JSON renderer
> - table renderer
> - success action renderer
> - suggestion renderer.





[jira] [Created] (IGNITE-17349) Ignite3 CLI output formatting

2022-07-11 Thread Aleksandr (Jira)
Aleksandr created IGNITE-17349:
--

 Summary: Ignite3 CLI output formatting
 Key: IGNITE-17349
 URL: https://issues.apache.org/jira/browse/IGNITE-17349
 Project: Ignite
  Issue Type: Task
  Components: cli
Reporter: Aleksandr


The Ignite3 CLI is currently not consistent from a formatting/styles perspective. 
Messages about what went wrong differ from each other. In some places 'Done!' 
marks a successful operation (ignite bootstrap); in others it is just a sentence 
noting that something finished (ignite connect). Tables are rendered with 
different borders for the ignite bootstrap, ignite node list and ignite 
topology commands.

The goal of this ticket is to develop user-facing interface components and use 
them in the CLI code. Compiling the full list of components is also part of this 
ticket, but here are some of them:
- problem JSON renderer
- table renderer
- success action renderer
- suggestion renderer.





[jira] [Commented] (IGNITE-12852) Comma in field is not supported by COPY command

2022-07-11 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17565166#comment-17565166
 ] 

Ignite TC Bot commented on IGNITE-12852:


{panel:title=Branch: [pull/10141/head] Base: [master] : Possible Blockers 
(1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Basic 1{color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=6673569]]

{panel}
{panel:title=Branch: [pull/10141/head] Base: [master] : New Tests 
(90)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}JDBC Driver{color} [[tests 
90|https://ci.ignite.apache.org/viewLog.html?buildId=6671694]]
* {color:#013220}IgniteJdbcDriverTestSuite: 
JdbcThinBulkLoadSelfTest.testCsvLoadWithCommaDelimiter[0] - PASSED{color}
* {color:#013220}IgniteJdbcDriverTestSuite: 
JdbcThinBulkLoadSelfTest.testCsvLoadWithPipeDelimiter[0] - PASSED{color}
* {color:#013220}IgniteJdbcDriverTestSuite: 
JdbcThinBulkLoadSelfTest.testCsvLoadWithCommaDelimiter[1] - PASSED{color}
* {color:#013220}IgniteJdbcDriverTestSuite: 
JdbcThinBulkLoadSelfTest.testCsvLoadWithPipeDelimiter[1] - PASSED{color}
* {color:#013220}IgniteJdbcDriverTestSuite: 
JdbcThinBulkLoadSelfTest.testOneLineFileForSingleEndQuote[0] - PASSED{color}
* {color:#013220}IgniteJdbcDriverTestSuite: 
JdbcThinBulkLoadSelfTest.testCsvLoadWithDefaultDelimiter[0] - PASSED{color}
* {color:#013220}IgniteJdbcDriverTestSuite: 
JdbcThinBulkLoadSelfTest.testOneLineFileForQuoteInContent[0] - PASSED{color}
* {color:#013220}IgniteJdbcDriverTestSuite: 
JdbcThinBulkLoadSelfTest.testOneLineFileForQuoteInQuotedContent[0] - 
PASSED{color}
* {color:#013220}IgniteJdbcDriverTestSuite: 
JdbcThinBulkLoadSelfTest.testOneLineFileForUnmatchedEndQuote[0] - PASSED{color}
* {color:#013220}IgniteJdbcDriverTestSuite: 
JdbcThinBulkLoadSelfTest.testOneLineFileForUnmatchedStartQuote[0] - 
PASSED{color}
* {color:#013220}IgniteJdbcDriverTestSuite: 
JdbcThinBulkLoadSelfTest.testCsvLoadWithCommaDelimiter[4] - PASSED{color}
... and 79 new tests

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=6671748&buildTypeId=IgniteTests24Java8_RunAll]

> Comma in field is not supported by COPY command
> ---
>
> Key: IGNITE-12852
> URL: https://issues.apache.org/jira/browse/IGNITE-12852
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.8
>Reporter: YuJue Li
>Assignee: Anton Kurbanov
>Priority: Critical
> Fix For: 2.14
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> CREATE TABLE test(a int,b varchar(100),c int,PRIMARY key(a)); 
>  
> a.csv: 
> 1,"a,b",2 
>  
> COPY FROM '/data/a.csv' INTO test (a,b,c) FORMAT CSV; 
>  
> The COPY command fails because there is a comma in the second field, but this 
> is a fully legal and compliant CSV format.
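To illustrate the expected behavior, a minimal quote-aware splitter treats the quoted comma as field content (a sketch only; a real loader should implement RFC 4180 fully, including escaped double quotes):

```java
// Minimal quote-aware CSV splitter illustrating why 1,"a,b",2 is three
// fields, not four. Sketch only: does not handle escaped "" quotes.
import java.util.ArrayList;
import java.util.List;

public class CsvSplitSketch {
    static List<String> split(String line) {
        List<String> fields = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuotes = false;
        for (char ch : line.toCharArray()) {
            if (ch == '"') {
                inQuotes = !inQuotes;          // toggle quoted state
            } else if (ch == ',' && !inQuotes) {
                fields.add(cur.toString());    // delimiter counts only outside quotes
                cur.setLength(0);
            } else {
                cur.append(ch);
            }
        }
        fields.add(cur.toString());            // last field
        return fields;
    }
}
```

With this rule, `1,"a,b",2` splits into `1`, `a,b`, `2`, which is what COPY should accept.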





[jira] [Created] (IGNITE-17348) Add quit/exit command to Ignite3 CLI sql REPL

2022-07-11 Thread Yury Yudin (Jira)
Yury Yudin created IGNITE-17348:
---

 Summary: Add quit/exit command to Ignite3 CLI sql REPL
 Key: IGNITE-17348
 URL: https://issues.apache.org/jira/browse/IGNITE-17348
 Project: Ignite
  Issue Type: Task
  Components: cli
Reporter: Yury Yudin


It turns out there is no way to exit the SQL REPL in the Ignite3 CLI other than 
pressing Ctrl+D. While this works, a separate command to exit to the top-level 
REPL would be nice to have.
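The desired behavior could be sketched as follows (names are illustrative, not the actual ignite3 CLI code):

```java
// Sketch of REPL input handling that treats "exit"/"quit" as explicit leave
// commands alongside Ctrl+D (EOF). Names are illustrative, not the actual
// ignite3 CLI implementation.
import java.util.Iterator;

public class ReplSketch {
    static boolean isExitCommand(String line) {
        String cmd = line == null ? "" : line.trim().toLowerCase();
        return cmd.equals("exit") || cmd.equals("quit");
    }

    // Runs the loop over an input source; returns the number of statements
    // executed before the user left the REPL.
    static int runLoop(Iterator<String> input) {
        int executed = 0;
        while (input.hasNext()) {              // EOF (Ctrl+D) still ends the loop
            String line = input.next();
            if (isExitCommand(line)) break;    // new explicit exit command
            executed++;                        // placeholder for executing SQL
        }
        return executed;
    }
}
```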





[jira] [Created] (IGNITE-17347) Add port parameters to Ignite3 CLI node start command

2022-07-11 Thread Yury Yudin (Jira)
Yury Yudin created IGNITE-17347:
---

 Summary: Add port parameters to Ignite3 CLI node start command
 Key: IGNITE-17347
 URL: https://issues.apache.org/jira/browse/IGNITE-17347
 Project: Ignite
  Issue Type: Task
  Components: cli
Reporter: Yury Yudin


Currently, the Ignite3 CLI node start command only provides a way to set 
different port parameters by supplying a different configuration file. This 
makes it somewhat cumbersome to start multiple nodes on the same machine.

Let's provide --port and --rest-port parameters for the node start command to 
make this easier.
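A minimal sketch of the proposed flag handling (the default values and override mechanics are assumptions, not the actual node start implementation):

```java
// Sketch: parse --port/--rest-port flags and fall back to defaults otherwise.
// The flag names come from the ticket; defaults and map keys are assumptions.
import java.util.HashMap;
import java.util.Map;

public class PortArgsSketch {
    static Map<String, Integer> parsePorts(String[] args) {
        Map<String, Integer> ports = new HashMap<>();
        ports.put("port", 3344);         // assumed default network port
        ports.put("rest-port", 10300);   // assumed default REST port
        for (int i = 0; i + 1 < args.length; i++) {
            if (args[i].equals("--port")) {
                ports.put("port", Integer.parseInt(args[i + 1]));
            }
            if (args[i].equals("--rest-port")) {
                ports.put("rest-port", Integer.parseInt(args[i + 1]));
            }
        }
        return ports;
    }
}
```

Starting a second node on the same machine would then just be a matter of passing different values, e.g. `node start --port 3345 --rest-port 10301 node2`.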





[jira] [Updated] (IGNITE-17346) Ignite3 CLI long output not readable in windows terminal

2022-07-11 Thread Yury Yudin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yury Yudin updated IGNITE-17346:

Component/s: cli

> Ignite3 CLI long output not readable in windows terminal
> 
>
> Key: IGNITE-17346
> URL: https://issues.apache.org/jira/browse/IGNITE-17346
> Project: Ignite
>  Issue Type: Task
>  Components: cli
>Reporter: Yury Yudin
>Priority: Major
>  Labels: ignite-3
>
> When the output of an ignite3 command does not fit the window, it scrolls and 
> its history is lost in the Windows terminal. Scrolling up shows previous 
> commands but not the output...
> {noformat}
>  [disconnected]>
> 
>       "memoryAllocator" : {
>         "type" : "unsafe"
>       },
>       "replacementMode" : "CLOCK",
>       "size" : 268435456
>     },
>     "pageSize" : 16384,
>     "regions" : [ ]
>   },
>   "rocksDb" : {
>     "defaultRegion" : {
>       "cache" : "lru",
>       "numShardBits" : -1,
>       "size" : 268435456,
>       "writeBufferSize" : 67108864
>     },
>     "regions" : [ ]
>   },
>   "table" : {
>     "defaultDataStorage" : "rocksdb",
>     "tables" : [ ]
>   }
> }
> [test1]>{noformat}





[jira] [Created] (IGNITE-17346) Ignite3 CLI long output not readable in windows terminal

2022-07-11 Thread Yury Yudin (Jira)
Yury Yudin created IGNITE-17346:
---

 Summary: Ignite3 CLI long output not readable in windows terminal
 Key: IGNITE-17346
 URL: https://issues.apache.org/jira/browse/IGNITE-17346
 Project: Ignite
  Issue Type: Task
Reporter: Yury Yudin


When the output of an ignite3 command does not fit the window, it scrolls and 
its history is lost in the Windows terminal. Scrolling up shows previous 
commands but not the output...
{noformat}
 [disconnected]>

      "memoryAllocator" : {
        "type" : "unsafe"
      },
      "replacementMode" : "CLOCK",
      "size" : 268435456
    },
    "pageSize" : 16384,
    "regions" : [ ]
  },
  "rocksDb" : {
    "defaultRegion" : {
      "cache" : "lru",
      "numShardBits" : -1,
      "size" : 268435456,
      "writeBufferSize" : 67108864
    },
    "regions" : [ ]
  },
  "table" : {
    "defaultDataStorage" : "rocksdb",
    "tables" : [ ]
  }
}
[test1]>{noformat}





[jira] [Updated] (IGNITE-17291) Implement metastorage cursor batching

2022-07-11 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-17291:
-
Fix Version/s: 3.0.0-alpha6

> Implement metastorage cursor batching
> -
>
> Key: IGNITE-17291
> URL: https://issues.apache.org/jira/browse/IGNITE-17291
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha6
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> To improve the performance of metaStorage.range(), which currently retrieves 
> entries one by one, it's possible to implement simple batching. As an initial 
> solution we might hardcode the batch size.
> Basically speaking, it's required to update CursorNextCommand.
> Instead of
> {code:java}
> Entry e = (Entry) cursorDesc.cursor().next();
> clo.result(new SingleEntryResponse(e.key(), e.value(), e.revision(), 
> e.updateCounter())); {code}
> we might use something similar to
> {code:java}
> List<SingleEntryResponse> batch = new ArrayList<>(RANGE_CURSOR_BATCH_SIZE);
> for (int i = 0; i < RANGE_CURSOR_BATCH_SIZE; i++) {
>     Entry e = (Entry) cursorDesc.cursor().next();
>     batch.add(new SingleEntryResponse(e.key(), e.value(), e.revision(), e.updateCounter()));
>     if (!cursorDesc.cursor().hasNext()) {
>         break;
>     }
> }
> clo.result(new MultipleEntryResponse(batch));{code}
> It's not trivial to reimplement rocks cursors to also use batching, however 
> it's not that important because rocks cursors are always local.
>  
> Besides that, it's required to update 
> org.apache.ignite.internal.metastorage.client.CursorImpl with client-side 
> iteration over the batched data, requesting a new portion when nothing is 
> left.
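The client-side iteration mentioned above could be sketched as a cursor that drains a local batch and fetches the next one lazily (class and method names are illustrative, not the actual CursorImpl code):

```java
// Sketch of client-side iteration over batched responses: drain the local
// batch first, and request the next batch only when the buffer is empty.
// Names are illustrative, not the actual CursorImpl implementation.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Supplier;

class BatchedCursor<T> implements Iterator<T> {
    private final Supplier<List<T>> nextBatch; // stands in for the remote CursorNextCommand call
    private final Deque<T> buffer = new ArrayDeque<>();
    private boolean exhausted;

    BatchedCursor(Supplier<List<T>> nextBatch) {
        this.nextBatch = nextBatch;
    }

    @Override public boolean hasNext() {
        if (buffer.isEmpty() && !exhausted) {
            List<T> batch = nextBatch.get();   // one network round trip per batch
            if (batch.isEmpty()) {
                exhausted = true;              // server has nothing left
            } else {
                buffer.addAll(batch);
            }
        }
        return !buffer.isEmpty();
    }

    @Override public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        return buffer.poll();
    }
}
```

The batch size only affects how often `nextBatch` is invoked, so hardcoding it initially (as the ticket suggests) keeps the client logic unchanged.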





[jira] [Commented] (IGNITE-17291) Implement metastorage cursor batching

2022-07-11 Thread Alexander Lapin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17565065#comment-17565065
 ] 

Alexander Lapin commented on IGNITE-17291:
--

[~Denis Chudov] LGTM

> Implement metastorage cursor batching
> -
>
> Key: IGNITE-17291
> URL: https://issues.apache.org/jira/browse/IGNITE-17291
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> To improve the performance of metaStorage.range(), which currently retrieves 
> entries one by one, it's possible to implement simple batching. As an initial 
> solution we might hardcode the batch size.
> Basically speaking, it's required to update CursorNextCommand.
> Instead of
> {code:java}
> Entry e = (Entry) cursorDesc.cursor().next();
> clo.result(new SingleEntryResponse(e.key(), e.value(), e.revision(), 
> e.updateCounter())); {code}
> we might use something similar to
> {code:java}
> List<SingleEntryResponse> batch = new ArrayList<>(RANGE_CURSOR_BATCH_SIZE);
> for (int i = 0; i < RANGE_CURSOR_BATCH_SIZE; i++) {
>     Entry e = (Entry) cursorDesc.cursor().next();
>     batch.add(new SingleEntryResponse(e.key(), e.value(), e.revision(), e.updateCounter()));
>     if (!cursorDesc.cursor().hasNext()) {
>         break;
>     }
> }
> clo.result(new MultipleEntryResponse(batch));{code}
> It's not trivial to reimplement rocks cursors to also use batching, however 
> it's not that important because rocks cursors are always local.
>  
> Besides that, it's required to update 
> org.apache.ignite.internal.metastorage.client.CursorImpl with client-side 
> iteration over the batched data, requesting a new portion when nothing is 
> left.





[jira] [Updated] (IGNITE-17220) YCSB benchmark run for ignite2 vs ignite3

2022-07-11 Thread Alexander Belyak (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Belyak updated IGNITE-17220:
--
Attachment: results.zip

> YCSB benchmark run for ignite2 vs ignite3
> -
>
> Key: IGNITE-17220
> URL: https://issues.apache.org/jira/browse/IGNITE-17220
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Alexander Belyak
>Priority: Major
>  Labels: ignite-3
> Attachments: results.zip
>
>
> For further investigation of ignite3 performance issues, we need to run the 
> following benchmarks to compare ignite2 vs ignite3 performance:
>  * Usual ycsb benchmark with mixed load patterns
>  * Insert-only ycsb benchmark
> For ignite2 and ignite3 in the following configurations:
>  * 3 ignite nodes setup (so, table must have 1 partition and 3 replicas)
>  * 1 ignite node setup (so, table must have 1 partition and 1 replica)
> Also, please provide:
>  * Hardware configuration of the environment, where benchmark was executed
>  * JFRs for every node in every run.





[jira] [Commented] (IGNITE-17220) YCSB benchmark run for ignite2 vs ignite3

2022-07-11 Thread Alexander Belyak (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17565056#comment-17565056
 ] 

Alexander Belyak commented on IGNITE-17220:
---

I ran our tests on Ignite3 master (Java 11 OpenJDK in Docker, CentOS) and got 
the following results:
|3 node|all ops|1|13|
| | |2|13|
| |insert|1|234|
| | |2|173|
|1 node|all ops|1|8|
| | |2|8|
| |insert|1|202|
| | |2|181|

Used 4 servers (one for each server node and an additional one for the benchbase 
client): 2x Xeon E5-2609v4, 16 vCPU, 96 GB RAM, 2.4 TB SSD RAID0.

I started 3 nodes with the default config (only with all 3 node IPs specified in 
the config), activated the cluster with --meta-storage-node= 
and ran all 4 benchbase tests (via the first node in the JDBC string). Then I 
started a clean single-node cluster and repeated the tests for it.

All logs and JFRs are in the attachment.

> YCSB benchmark run for ignite2 vs ignite3
> -
>
> Key: IGNITE-17220
> URL: https://issues.apache.org/jira/browse/IGNITE-17220
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Alexander Belyak
>Priority: Major
>  Labels: ignite-3
>
> For further investigation of ignite3 performance issues, we need to run the 
> following benchmarks to compare ignite2 vs ignite3 performance:
>  * Usual ycsb benchmark with mixed load patterns
>  * Insert-only ycsb benchmark
> For ignite2 and ignite3 in the following configurations:
>  * 3 ignite nodes setup (so, table must have 1 partition and 3 replicas)
>  * 1 ignite node setup (so, table must have 1 partition and 1 replica)
> Also, please provide:
>  * Hardware configuration of the environment, where benchmark was executed
>  * JFRs for every node in every run.





[jira] [Updated] (IGNITE-17130) Profiles support for CLI configuration

2022-07-11 Thread Mikhail Pochatkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pochatkin updated IGNITE-17130:
---
Description: 
h3. Config file side:

The Ignite CLI currently has only one set of properties. We need to support 
multiple profiles, which means the config must have a separate section for 
each user profile. The INI format completely covers everything we need, so the 
config file format should be migrated from properties to INI.

The default profile name is [default].
h3. CLI command side:

Create a profile:
{code:java}
ignite cli config profile create
--name profile_name (REQUIRED) 
--copy-from profile-name (OPTIONAL) 
--activate true/false (OPTIONAL, DEFAULT VALUE = false){code}
Activate a profile as the default profile:
{code:java}
ignite cli config profile --select-current profile_name {code}
Read/write a property with a profile:
{code:java}
ignite cli config --profile profile_name get/set propertyKey=propertyValue 
{code}
Show the current profile:
{code:java}
ignite cli config profile show{code}
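After migration, a config file with per-profile INI sections might look like this (the file name, keys, and values are illustrative assumptions, not the actual CLI config schema):

```ini
; Illustrative example only; the actual file location and keys may differ.
[default]
cluster-url = http://localhost:10300

[staging]
cluster-url = http://staging-host:10300
```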

  was:
h3. Config file site:

The Ignite CLI currently only has one set of properties. Need to support 
multiple profiles, this means that the config must have multiple sections for 
each user profile. INI format completely covers everything we need, so need 
migrate config file format from properties to ini.

Default profile name [default].
h3. CLI command site:

Create profile command
{code:java}
ignite cli config profile create
--name profile_name (REQUIRED) 
--copy-from profile-name (OPTIONAL) 
--activate true\false (OPTIONAL,  DEFAULT VALUE = false){code}
Activate profile command as default profile
{code:java}
ignite cli config profile use profile_name {code}
Read\write property with profile 
{code:java}
ignite cli config --profile profile_name get\set propertyKey=propertyValue 
{code}
Read current profile 
{code:java}
ignite cli config profile show{code}


> Profiles support for CLI configuration
> ---
>
> Key: IGNITE-17130
> URL: https://issues.apache.org/jira/browse/IGNITE-17130
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Mikhail Pochatkin
>Assignee: Mikhail Pochatkin
>Priority: Major
>  Labels: ignite-3, ignite-3-cli-tool
> Fix For: 3.0.0-alpha6
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> h3. Config file side:
> The Ignite CLI currently has only one set of properties. We need to support 
> multiple profiles, which means the config must have a separate section for 
> each user profile. The INI format completely covers everything we need, so 
> the config file format should be migrated from properties to INI.
> The default profile name is [default].
> h3. CLI command side:
> Create profile command
> {code:java}
> ignite cli config profile create
> --name profile_name (REQUIRED) 
> --copy-from profile-name (OPTIONAL) 
> --activate true\false (OPTIONAL,  DEFAULT VALUE = false){code}
> Activate profile command as default profile
> {code:java}
> ignite cli config profile --select-current profile_name {code}
> Read\write property with profile 
> {code:java}
> ignite cli config --profile profile_name get\set propertyKey=propertyValue 
> {code}
> Read current profile 
> {code:java}
> ignite cli config profile show{code}





[jira] [Commented] (IGNITE-17344) Move "metrics" SQL system view to GridSystemViewManager

2022-07-11 Thread Aleksey Plekhanov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17565051#comment-17565051
 ] 

Aleksey Plekhanov commented on IGNITE-17344:


[~NIzhikov], thanks for the review! Merged to master

> Move "metrics" SQL system view to GridSystemViewManager
> ---
>
> Key: IGNITE-17344
> URL: https://issues.apache.org/jira/browse/IGNITE-17344
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently we have a dedicated adapter for the "metrics" system view 
> (\{{MetricRegistryLocalSystemView}}); this view isn't registered in the common 
> view registry (via \{{GridSystemViewManager}}) and can't be used by any view 
> exporter except the H2 SQL engine (for example, by the Calcite SQL engine). 





[jira] [Commented] (IGNITE-17344) Move "metrics" SQL system view to GridSystemViewManager

2022-07-11 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17565048#comment-17565048
 ] 

Ignite TC Bot commented on IGNITE-17344:


{panel:title=Branch: [pull/10150/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10150/head] Base: [master] : New Tests 
(1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Calcite SQL{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=6676883]]
* {color:#013220}IgniteCalciteTestSuite: 
SystemViewsIntegrationTest.testMetricsView - PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=6677101&buildTypeId=IgniteTests24Java8_RunAll]

> Move "metrics" SQL system view to GridSystemViewManager
> ---
>
> Key: IGNITE-17344
> URL: https://issues.apache.org/jira/browse/IGNITE-17344
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently we have a dedicated adapter for the "metrics" system view 
> (\{{MetricRegistryLocalSystemView}}); this view isn't registered in the common 
> view registry (via \{{GridSystemViewManager}}) and can't be used by any view 
> exporter except the H2 SQL engine (for example, by the Calcite SQL engine). 





[jira] [Closed] (IGNITE-17130) Profiles support for CLI configuration

2022-07-11 Thread Mikhail Pochatkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pochatkin closed IGNITE-17130.
--

> Profiles support for CLI configuration
> ---
>
> Key: IGNITE-17130
> URL: https://issues.apache.org/jira/browse/IGNITE-17130
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Mikhail Pochatkin
>Assignee: Mikhail Pochatkin
>Priority: Major
>  Labels: ignite-3, ignite-3-cli-tool
> Fix For: 3.0.0-alpha6
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> h3. Config file side:
> The Ignite CLI currently has only one set of properties. We need to support 
> multiple profiles, which means the config must have a separate section for 
> each user profile. The INI format completely covers everything we need, so 
> the config file format should be migrated from properties to INI.
> The default profile name is [default].
> h3. CLI command side:
> Create profile command
> {code:java}
> ignite cli config profile create
> --name profile_name (REQUIRED) 
> --copy-from profile-name (OPTIONAL) 
> --activate true\false (OPTIONAL,  DEFAULT VALUE = false){code}
> Activate profile command as default profile
> {code:java}
> ignite cli config profile use profile_name {code}
> Read\write property with profile 
> {code:java}
> ignite cli config --profile profile_name get\set propertyKey=propertyValue 
> {code}
> Read current profile 
> {code:java}
> ignite cli config profile show{code}





[jira] [Assigned] (IGNITE-17345) [IEP-35] Metric to track PA enabled request on ThinClient

2022-07-11 Thread Luchnikov Alexander (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luchnikov Alexander reassigned IGNITE-17345:


Assignee: Luchnikov Alexander

> [IEP-35] Metric to track PA enabled request on ThinClient
> -
>
> Key: IGNITE-17345
> URL: https://issues.apache.org/jira/browse/IGNITE-17345
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Luchnikov Alexander
>Priority: Major
>  Labels: IEP-35, ise
>
> A crucial point in understanding ThinClient performance is knowing whether 
> Partition Awareness is enabled or not.
> For now, it's impossible to tell how many requests go to the node that is 
> primary for the key.
> Two counters tracking the number of requests on each server node seem like 
> useful metrics for analyzing PA behavior:
> - one counter for keys for which the current node is primary;
> - another counter for keys that require an extra network hop between server 
> nodes to serve the request.
> In an environment with optimal performance, the second counter should be 
> close to zero.
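The two counters proposed above could be sketched like this (class and field names are hypothetical, not the actual IEP-35 metric names):

```java
// Sketch of the two proposed per-node counters: one for requests whose key is
// primary on this node, one for requests that needed an extra hop to reach
// the primary. Names are hypothetical, not the actual IEP-35 metrics.
import java.util.concurrent.atomic.LongAdder;

class PartitionAwarenessMetrics {
    final LongAdder primaryKeyRequests = new LongAdder();
    final LongAdder extraHopRequests = new LongAdder();

    // Called on each incoming thin-client request once key ownership is known.
    void onRequest(boolean keyIsPrimaryHere) {
        if (keyIsPrimaryHere) {
            primaryKeyRequests.increment();
        } else {
            extraHopRequests.increment();
        }
    }
}
```

With Partition Awareness working well, `extraHopRequests` stays near zero while `primaryKeyRequests` grows with load.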





[jira] [Commented] (IGNITE-14986) Re-work error handling in meta storage component in accordance with error groups

2022-07-11 Thread Vyacheslav Koptilin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17565010#comment-17565010
 ] 

Vyacheslav Koptilin commented on IGNITE-14986:
--

Hello [~ibessonov],

Could you please take a look at the following PR 
https://github.com/apache/ignite-3/pull/928?

> Re-work error handling in meta storage component in accordance with error 
> groups
> 
>
> Key: IGNITE-14986
> URL: https://issues.apache.org/jira/browse/IGNITE-14986
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
>  Labels: iep-84, ignite-3
>
> Need to introduce a new error group related to Meta Storage Service and add 
> all needed error codes.
> Also, the implementation should use _IgniteInternalException_ and 
> _IgniteInternalCheckedException_ with specific error codes.
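As a sketch of what the description asks for, an error group with codes might be declared like this (the group id, constant names, and bit layout are purely illustrative assumptions, not the actual IEP-84 registry):

```java
// Sketch of a meta storage error group with codes. The group id, constant
// names, and code layout here are illustrative assumptions only.
public final class MetaStorageErrorGroups {
    private MetaStorageErrorGroups() {
        // Constants holder; no instances.
    }

    public static final short GROUP_CODE = 5;   // assumed group id

    // Full code = group id in the high 16 bits, error number in the low 16 bits.
    public static final int CURSOR_CLOSED_ERR = (GROUP_CODE << 16) | 1;
    public static final int OPERATION_TIMEOUT_ERR = (GROUP_CODE << 16) | 2;
}
```

Exceptions thrown by the component would then carry one of these codes, so callers can dispatch on the group and error number instead of parsing messages.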





[jira] [Comment Edited] (IGNITE-14986) Re-work error handling in meta storage component in accordance with error groups

2022-07-11 Thread Vyacheslav Koptilin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17565010#comment-17565010
 ] 

Vyacheslav Koptilin edited comment on IGNITE-14986 at 7/11/22 1:33 PM:
---

Hello [~ibessonov],

Could you please take a look at the following PR 
https://github.com/apache/ignite-3/pull/928 ?


was (Author: slava.koptilin):
Hello [~ibessonov],

Could you please take a look at the following PR 
https://github.com/apache/ignite-3/pull/928?

> Re-work error handling in meta storage component in accordance with error 
> groups
> 
>
> Key: IGNITE-14986
> URL: https://issues.apache.org/jira/browse/IGNITE-14986
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
>  Labels: iep-84, ignite-3
>
> Need to introduce a new error group related to Meta Storage Service and add 
> all needed error codes.
> Also, the implementation should using _IgniteInternalException_ and 
> _IgniteInternalCheckedException_ with specific error codes.





[jira] [Updated] (IGNITE-14524) Historical rebalance doesn't work if cache has configured rebalanceDelay

2022-07-11 Thread Dmitry Pavlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pavlov updated IGNITE-14524:
---
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Historical rebalance doesn't work if cache has configured rebalanceDelay
> 
>
> Key: IGNITE-14524
> URL: https://issues.apache.org/jira/browse/IGNITE-14524
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.10
>Reporter: Dmitry Lazurkin
>Priority: Major
>  Labels: ise
>
> I have big cache with configured rebalanceMode = ASYNC, rebalanceDelay = 
> 10_000ms. Persistence is enabled, maxWalArchiveSize = 10GB. And I passed
> -DIGNITE_PREFER_WAL_REBALANCE=true and -DIGNITE_PDS_WAL_REBALANCE_THRESHOLD=1 
> to Ignite. So node should use historical rebalance if there is enough WAL. 
> But it doesn't. After investigation I found that 
> GridDhtPreloader#generateAssignments always get called with exchFut = null, 
> and this method can't set histPartitions without exchFut. I think, that 
> problem in GridCachePartitionExchangeManager
> (https://github.com/apache/ignite/blob/bc24f6baf3e9b4f98cf98cc5df67fb5deb5ceb6c/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePartitionExchangeManager.java#L3486).
>  It doesn't call generateAssignments without forcePreload if rebalanceDelay 
> is configured.
> Historical rebalance works after removing rebalanceDelay.





[jira] [Resolved] (IGNITE-14524) Historical rebalance doesn't work if cache has configured rebalanceDelay

2022-07-11 Thread Dmitry Pavlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pavlov resolved IGNITE-14524.

Release Note: 
Rebalance delay is about to be removed from the project as a part of 
[IGNITE-12662]

Closing the issue as Won't Fix; feel free to reopen if the issue is still relevant.
  Resolution: Won't Fix

> Historical rebalance doesn't work if cache has configured rebalanceDelay
> 
>
> Key: IGNITE-14524
> URL: https://issues.apache.org/jira/browse/IGNITE-14524
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.10
>Reporter: Dmitry Lazurkin
>Priority: Major
>  Labels: ise
>
> I have big cache with configured rebalanceMode = ASYNC, rebalanceDelay = 
> 10_000ms. Persistence is enabled, maxWalArchiveSize = 10GB. And I passed
> -DIGNITE_PREFER_WAL_REBALANCE=true and -DIGNITE_PDS_WAL_REBALANCE_THRESHOLD=1 
> to Ignite. So node should use historical rebalance if there is enough WAL. 
> But it doesn't. After investigation I found that 
> GridDhtPreloader#generateAssignments always get called with exchFut = null, 
> and this method can't set histPartitions without exchFut. I think, that 
> problem in GridCachePartitionExchangeManager
> (https://github.com/apache/ignite/blob/bc24f6baf3e9b4f98cf98cc5df67fb5deb5ceb6c/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePartitionExchangeManager.java#L3486).
>  It doesn't call generateAssignments without forcePreload if rebalanceDelay 
> is configured.
> Historical rebalance works after removing rebalanceDelay.





[jira] [Updated] (IGNITE-10825) After node restart and new node added to BLT under load - some partitions inconsistent

2022-07-11 Thread Dmitry Pavlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pavlov updated IGNITE-10825:
---
Ignite Flags:   (was: Docs Required)

> After node restart and new node added to BLT under load - some partitions 
> inconsistent
> -
>
> Key: IGNITE-10825
> URL: https://issues.apache.org/jira/browse/IGNITE-10825
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.8
>Reporter: ARomantsov
>Priority: Critical
>  Labels: ise
>
> {code:java}
> 14:12:20 [14:12:20][:573 :252] idle_verify check has finished, found 2 
> conflict partitions: [counterConflicts=1, hashConflicts=1]
> 14:12:20 [14:12:20][:573 :252] Update counter conflicts:
> 14:12:20 [14:12:20][:573 :252] Conflict partition: PartitionKeyV2 
> [grpId=374280887, grpName=cache_group_4, partId=115]
> 14:12:20 [14:12:20][:573 :252] Partition instances: 
>   
>   [PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_5, 
> updateCntr=10, size=2, partHash=-979021948], 
>   
>PartitionHashRecordV2 [isPrimary=true, consistentId=node_1_2, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_1, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_3, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_6, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_4, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_10001, 
> updateCntr=11, size=2, partHash=-731597536]]
> 14:12:20 [14:12:20][:573 :252] Hash conflicts:
> 14:12:20 [14:12:20][:573 :252] Conflict partition: PartitionKeyV2 
> [grpId=374280887, grpName=cache_group_4, partId=115]
> 14:12:20 [14:12:20][:573 :252] Partition instances: 
>   
> [PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_5, 
> updateCntr=10, size=2, partHash=-979021948], 
>   
> PartitionHashRecordV2 [isPrimary=true, consistentId=node_1_2, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_1, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_3, 
> updateCntr=11, size=2, partHash=-731597536],
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_6, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_4, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_10001, 
> updateCntr=11, size=2, partHash=-731597536]]
> {code}





[jira] [Resolved] (IGNITE-10825) After node restart and new node added to BLT under load - some partitions inconsistent

2022-07-11 Thread Dmitry Pavlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pavlov resolved IGNITE-10825.

Release Note: 
Presented information is not sufficient for reproducing the issue.

Most likely the issue was fixed as a part of another ticket.

Resolving the issue as Cannot Reproduce. Feel free to reopen if it is likely 
that the bug is still there.
  Resolution: Cannot Reproduce

> After node restart and new node added to BLT under load - some partitions 
> inconsistent
> -
>
> Key: IGNITE-10825
> URL: https://issues.apache.org/jira/browse/IGNITE-10825
> Project: Ignite
>  Issue Type: Bug
>  Components: data structures
>Affects Versions: 2.8
>Reporter: ARomantsov
>Priority: Critical
>  Labels: ise
>
> {code:java}
> 14:12:20 [14:12:20][:573 :252] idle_verify check has finished, found 2 
> conflict partitions: [counterConflicts=1, hashConflicts=1]
> 14:12:20 [14:12:20][:573 :252] Update counter conflicts:
> 14:12:20 [14:12:20][:573 :252] Conflict partition: PartitionKeyV2 
> [grpId=374280887, grpName=cache_group_4, partId=115]
> 14:12:20 [14:12:20][:573 :252] Partition instances: 
>   
>   [PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_5, 
> updateCntr=10, size=2, partHash=-979021948], 
>   
>PartitionHashRecordV2 [isPrimary=true, consistentId=node_1_2, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_1, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_3, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_6, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_4, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
>PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_10001, 
> updateCntr=11, size=2, partHash=-731597536]]
> 14:12:20 [14:12:20][:573 :252] Hash conflicts:
> 14:12:20 [14:12:20][:573 :252] Conflict partition: PartitionKeyV2 
> [grpId=374280887, grpName=cache_group_4, partId=115]
> 14:12:20 [14:12:20][:573 :252] Partition instances: 
>   
> [PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_5, 
> updateCntr=10, size=2, partHash=-979021948], 
>   
> PartitionHashRecordV2 [isPrimary=true, consistentId=node_1_2, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_1, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_3, 
> updateCntr=11, size=2, partHash=-731597536],
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_6, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_4, 
> updateCntr=11, size=2, partHash=-731597536], 
>   
> PartitionHashRecordV2 [isPrimary=false, consistentId=node_1_10001, 
> updateCntr=11, size=2, partHash=-731597536]]
> {code}





[jira] [Commented] (IGNITE-16756) .NET: Thin 3.0: Implement SQL API

2022-07-11 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564967#comment-17564967
 ] 

Pavel Tupitsyn commented on IGNITE-16756:
-

Merged to main: 
https://github.com/apache/ignite-3/commit/ea4d63194671177e2c29062c73538a580368c39d

> .NET: Thin 3.0: Implement SQL API
> -
>
> Key: IGNITE-16756
> URL: https://issues.apache.org/jira/browse/IGNITE-16756
> Project: Ignite
>  Issue Type: New Feature
>  Components: sql, thin client
>Affects Versions: 3.0.0-alpha4
>Reporter: Igor Sapego
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We need to implement basic SQL API for .NET thin client in 3.0. Should be 
> done after IGNITE-14972 and re-use protocol messages introduced there.





[jira] [Updated] (IGNITE-17345) [IEP-35] Metric to track PA enabled request on ThinClient

2022-07-11 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov updated IGNITE-17345:
-
Description: 
The crucial point in understanding ThinClient performance is knowing whether 
Partition Awareness is enabled or not.
For now, it's impossible to tell how many requests go to the node that is 
primary for the key.

Useful metrics for analyzing PA behavior would be two counters tracking the 
number of requests for each server node:
- one counter for keys for which the current node is primary;
- another counter for keys which require an extra network hop between server 
nodes to serve the request.


For an environment with optimal performance, the second counter should be close to zero.

  was:
The crucial point to understand ThinClient performance is to know - Partition 
Awareness
For now, it's impossible to understand how many request goes to node that is 
primary for key.

It seems usefull metric for this - two counters to track amount of requests for 
each server node 
- one counter for keys current node is primary.
- another counter for keys which require extra network hop between server nodes 
to serve the request.


For environment with optimal performance second counter should be close to zero.


> [IEP-35] Metric to track PA enabled request on ThinClient
> -
>
> Key: IGNITE-17345
> URL: https://issues.apache.org/jira/browse/IGNITE-17345
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-35, ise
>
> The crucial point to understand ThinClient performance is to know - Partition 
> Awareness enabled or not.
> For now, it's impossible to understand how many request goes to node that is 
> primary for key.
> It seems useful metrics to analyze PA behavior - two counters to track amount 
> of requests for each server node 
> - one counter for keys current node is primary.
> - another counter for keys which require extra network hop between server 
> nodes to serve the request.
> For environment with optimal performance second counter should be close to 
> zero.
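The two counters described above could be sketched in plain Java roughly like this (class and method names are assumptions for illustration, not Ignite's actual metrics code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch: two per-node counters for Partition Awareness (PA) analysis.
// A "direct" request hits the node that is primary for the key; a "hop"
// request needs an extra network hop between server nodes.
class PaMetrics {
    private final Map<String, LongAdder> direct = new ConcurrentHashMap<>();
    private final Map<String, LongAdder> hops = new ConcurrentHashMap<>();

    void onRequest(String nodeId, boolean nodeIsPrimaryForKey) {
        (nodeIsPrimaryForKey ? direct : hops)
            .computeIfAbsent(nodeId, id -> new LongAdder())
            .increment();
    }

    long directCount(String nodeId) {
        LongAdder a = direct.get(nodeId);
        return a == null ? 0 : a.sum();
    }

    long hopCount(String nodeId) {
        LongAdder a = hops.get(nodeId);
        return a == null ? 0 : a.sum();
    }
}
```

With optimal PA, the hop counter should stay close to zero for every node, matching the expectation stated in the ticket.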





[jira] [Commented] (IGNITE-17130) Profiles support for CLI configuration

2022-07-11 Thread Kirill Gusakov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564954#comment-17564954
 ] 

Kirill Gusakov commented on IGNITE-17130:
-

LGTM too

> Profiles support for CLI configuration
> ---
>
> Key: IGNITE-17130
> URL: https://issues.apache.org/jira/browse/IGNITE-17130
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Mikhail Pochatkin
>Assignee: Mikhail Pochatkin
>Priority: Major
>  Labels: ignite-3, ignite-3-cli-tool
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> h3. Config file side:
> The Ignite CLI currently only has one set of properties. We need to support 
> multiple profiles, which means that the config must have multiple sections, 
> one per user profile. The INI format completely covers everything we need, so 
> we need to migrate the config file format from properties to INI.
> The default profile name is [default].
> h3. CLI command side:
> Create profile command
> {code:java}
> ignite cli config profile create
> --name profile_name (REQUIRED) 
> --copy-from profile-name (OPTIONAL) 
> --activate true\false (OPTIONAL,  DEFAULT VALUE = false){code}
> Activate a profile as the default profile
> {code:java}
> ignite cli config profile use profile_name {code}
> Read/write a property with a profile 
> {code:java}
> ignite cli config --profile profile_name get\set propertyKey=propertyValue 
> {code}
> Read current profile 
> {code:java}
> ignite cli config profile show{code}
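Per the migration described above, a multi-profile config file could look like this (a sketch; the section layout and property keys are assumptions, not the actual Ignite CLI format):

```ini
; Sketch of a possible multi-profile CLI config (keys are assumptions)
[default]
cluster-url=http://localhost:10300

[staging]
cluster-url=http://staging-host:10300
```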





[jira] [Updated] (IGNITE-17345) [IEP-35] Metric to track PA enabled request on ThinClient

2022-07-11 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov updated IGNITE-17345:
-
Labels: IEP-35 ise  (was: IEP-35)

> [IEP-35] Metric to track PA enabled request on ThinClient
> -
>
> Key: IGNITE-17345
> URL: https://issues.apache.org/jira/browse/IGNITE-17345
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-35, ise
>
> The crucial point to understand ThinClient performance is to know - Partition 
> Awareness
> For now, it's impossible to understand how many request goes to node that is 
> primary for key.
> It seems usefull metric for this - two counters to track amount of requests 
> for each server node 
> - one counter for keys current node is primary.
> - another counter for keys which require extra network hop between server 
> nodes to serve the request.
> For environment with optimal performance second counter should be close to 
> zero.





[jira] [Resolved] (IGNITE-9905) After transaction load cluster inconsistent

2022-07-11 Thread Dmitry Pavlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pavlov resolved IGNITE-9905.
---
Resolution: Cannot Reproduce

Presented information is not sufficient for reproducing the issue.

Most likely the issue was fixed as a part of another ticket.

Resolving the issue as Cannot Reproduce. Feel free to reopen if it is likely 
that the bug is still there.

> After transaction load cluster inconsistent
> ---
>
> Key: IGNITE-9905
> URL: https://issues.apache.org/jira/browse/IGNITE-9905
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.7
>Reporter: ARomantsov
>Priority: Critical
>  Labels: ise
>
> Loaded data into the cluster using transactions consisting of two gets / two 
> puts.
> Test env: one server, two server nodes, one client
> {code:java}
> idle_verify check has finished, found 60 conflict partitions: 
> [counterConflicts=45, hashConflicts=15]
> Update counter conflicts:
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=98]
> Partition instances: [PartitionHashRecordV2 [isPrimary=true, 
> consistentId=node2, updateCntr=1519, size=596, partHash=-1167688484], 
> PartitionHashRecordV2 [isPrimary=false, consistentId=node1, updateCntr=1520, 
> size=596, partHash=-1167688484]]
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=34]
> Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
> consistentId=node2, updateCntr=1539, size=596, partHash=-99631005], 
> PartitionHashRecordV2 [isPrimary=true, consistentId=node1, updateCntr=1537, 
> size=596, partHash=-1284437377]]
> Conflict partition: PartitionKeyV2 [grpId=770187303, 
> grpName=CACHEGROUP_PARTICLE_1, partId=31]
> Partition instances: [PartitionHashRecordV2 [isPrimary=true, 
> consistentId=node2, updateCntr=15, size=4, partHash=-1125172674], 
> PartitionHashRecordV2 [isPrimary=false, consistentId=node1, updateCntr=16, 
> size=4, partHash=-1125172674]]
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=39]
> Partition instances: [PartitionHashRecordV2 [isPrimary=true, 
> consistentId=node2, updateCntr=1555, size=596, partHash=-40303136], 
> PartitionHashRecordV2 [isPrimary=false, consistentId=node1, updateCntr=1556, 
> size=596, partHash=-40303136]]
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=90]
> Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
> consistentId=node2, updateCntr=1557, size=596, partHash=-1295145299], 
> PartitionHashRecordV2 [isPrimary=true, consistentId=node1, updateCntr=1556, 
> size=596, partHash=-1221175703]]
> ...
> {code}





[jira] [Updated] (IGNITE-9905) After transaction load cluster inconsistent

2022-07-11 Thread Dmitry Pavlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pavlov updated IGNITE-9905:
--
Ignite Flags:   (was: Docs Required)

> After transaction load cluster inconsistent
> ---
>
> Key: IGNITE-9905
> URL: https://issues.apache.org/jira/browse/IGNITE-9905
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.7
>Reporter: ARomantsov
>Priority: Critical
>  Labels: ise
>
> Loaded data into the cluster using transactions consisting of two gets / two 
> puts.
> Test env: one server, two server nodes, one client
> {code:java}
> idle_verify check has finished, found 60 conflict partitions: 
> [counterConflicts=45, hashConflicts=15]
> Update counter conflicts:
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=98]
> Partition instances: [PartitionHashRecordV2 [isPrimary=true, 
> consistentId=node2, updateCntr=1519, size=596, partHash=-1167688484], 
> PartitionHashRecordV2 [isPrimary=false, consistentId=node1, updateCntr=1520, 
> size=596, partHash=-1167688484]]
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=34]
> Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
> consistentId=node2, updateCntr=1539, size=596, partHash=-99631005], 
> PartitionHashRecordV2 [isPrimary=true, consistentId=node1, updateCntr=1537, 
> size=596, partHash=-1284437377]]
> Conflict partition: PartitionKeyV2 [grpId=770187303, 
> grpName=CACHEGROUP_PARTICLE_1, partId=31]
> Partition instances: [PartitionHashRecordV2 [isPrimary=true, 
> consistentId=node2, updateCntr=15, size=4, partHash=-1125172674], 
> PartitionHashRecordV2 [isPrimary=false, consistentId=node1, updateCntr=16, 
> size=4, partHash=-1125172674]]
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=39]
> Partition instances: [PartitionHashRecordV2 [isPrimary=true, 
> consistentId=node2, updateCntr=1555, size=596, partHash=-40303136], 
> PartitionHashRecordV2 [isPrimary=false, consistentId=node1, updateCntr=1556, 
> size=596, partHash=-40303136]]
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=90]
> Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
> consistentId=node2, updateCntr=1557, size=596, partHash=-1295145299], 
> PartitionHashRecordV2 [isPrimary=true, consistentId=node1, updateCntr=1556, 
> size=596, partHash=-1221175703]]
> ...
> {code}





[jira] [Assigned] (IGNITE-9905) After transaction load cluster inconsistent

2022-07-11 Thread Dmitry Pavlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Pavlov reassigned IGNITE-9905:
-

Assignee: (was: Ilya Lantukh)

> After transaction load cluster inconsistent
> ---
>
> Key: IGNITE-9905
> URL: https://issues.apache.org/jira/browse/IGNITE-9905
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.7
>Reporter: ARomantsov
>Priority: Critical
>  Labels: ise
>
> Loaded data into the cluster using transactions consisting of two gets / two 
> puts.
> Test env: one server, two server nodes, one client
> {code:java}
> idle_verify check has finished, found 60 conflict partitions: 
> [counterConflicts=45, hashConflicts=15]
> Update counter conflicts:
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=98]
> Partition instances: [PartitionHashRecordV2 [isPrimary=true, 
> consistentId=node2, updateCntr=1519, size=596, partHash=-1167688484], 
> PartitionHashRecordV2 [isPrimary=false, consistentId=node1, updateCntr=1520, 
> size=596, partHash=-1167688484]]
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=34]
> Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
> consistentId=node2, updateCntr=1539, size=596, partHash=-99631005], 
> PartitionHashRecordV2 [isPrimary=true, consistentId=node1, updateCntr=1537, 
> size=596, partHash=-1284437377]]
> Conflict partition: PartitionKeyV2 [grpId=770187303, 
> grpName=CACHEGROUP_PARTICLE_1, partId=31]
> Partition instances: [PartitionHashRecordV2 [isPrimary=true, 
> consistentId=node2, updateCntr=15, size=4, partHash=-1125172674], 
> PartitionHashRecordV2 [isPrimary=false, consistentId=node1, updateCntr=16, 
> size=4, partHash=-1125172674]]
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=39]
> Partition instances: [PartitionHashRecordV2 [isPrimary=true, 
> consistentId=node2, updateCntr=1555, size=596, partHash=-40303136], 
> PartitionHashRecordV2 [isPrimary=false, consistentId=node1, updateCntr=1556, 
> size=596, partHash=-40303136]]
> Conflict partition: PartitionKeyV2 [grpId=-1903385190, 
> grpName=CACHEGROUP_PARTICLE_1, partId=90]
> Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
> consistentId=node2, updateCntr=1557, size=596, partHash=-1295145299], 
> PartitionHashRecordV2 [isPrimary=true, consistentId=node1, updateCntr=1556, 
> size=596, partHash=-1221175703]]
> ...
> {code}





[jira] [Created] (IGNITE-17345) [IEP-35] Metric to track PA enabled request on ThinClient

2022-07-11 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created IGNITE-17345:


 Summary: [IEP-35] Metric to track PA enabled request on ThinClient
 Key: IGNITE-17345
 URL: https://issues.apache.org/jira/browse/IGNITE-17345
 Project: Ignite
  Issue Type: Improvement
Reporter: Nikolay Izhikov


The crucial point to understand ThinClient performance is to know whether 
Partition Awareness is enabled or not.
For now, it's impossible to understand how many request goes to node that is 
primary for key.

It seems a useful metric for this - two counters to track the amount of requests for 
each server node 
- one counter for keys current node is primary.
- another counter for keys which require extra network hop between server nodes 
to serve the request.


For environment with optimal performance second counter should be close to zero.





[jira] [Commented] (IGNITE-16756) .NET: Thin 3.0: Implement SQL API

2022-07-11 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564942#comment-17564942
 ] 

Igor Sapego commented on IGNITE-16756:
--

[~ptupitsyn] looks good to me.

> .NET: Thin 3.0: Implement SQL API
> -
>
> Key: IGNITE-16756
> URL: https://issues.apache.org/jira/browse/IGNITE-16756
> Project: Ignite
>  Issue Type: New Feature
>  Components: sql, thin client
>Affects Versions: 3.0.0-alpha4
>Reporter: Igor Sapego
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need to implement basic SQL API for .NET thin client in 3.0. Should be 
> done after IGNITE-14972 and re-use protocol messages introduced there.





[jira] [Updated] (IGNITE-17326) Update Ignite dependency: Spring

2022-07-11 Thread Amelchev Nikita (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amelchev Nikita updated IGNITE-17326:
-
Release Note: Updated Spring dependency to 5.2.22  (was: Update Spring 
dependency to 5.2.22)

> Update Ignite dependency: Spring
> 
>
> Key: IGNITE-17326
> URL: https://issues.apache.org/jira/browse/IGNITE-17326
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr
>Assignee: Aleksandr
>Priority: Major
> Fix For: 2.14
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Update Ignite dependency Spring 5.2.21.RELEASE to 5.2.22.RELEASE





[jira] [Updated] (IGNITE-17326) Update Ignite dependency: Spring

2022-07-11 Thread Amelchev Nikita (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amelchev Nikita updated IGNITE-17326:
-
Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> Update Ignite dependency: Spring
> 
>
> Key: IGNITE-17326
> URL: https://issues.apache.org/jira/browse/IGNITE-17326
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr
>Assignee: Aleksandr
>Priority: Major
> Fix For: 2.14
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Update Ignite dependency Spring 5.2.21.RELEASE to 5.2.22.RELEASE





[jira] [Updated] (IGNITE-17323) Update Ignite dependency: Jetty

2022-07-11 Thread Amelchev Nikita (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amelchev Nikita updated IGNITE-17323:
-
Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> Update Ignite dependency: Jetty
> ---
>
> Key: IGNITE-17323
> URL: https://issues.apache.org/jira/browse/IGNITE-17323
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr
>Assignee: Aleksandr
>Priority: Major
> Fix For: 2.14
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Update Jetty dependency 9.4.39.v20210325 to 9.4.43.v20210629





[jira] [Assigned] (IGNITE-16907) Add ability to use Raft log as storage WAL within the scope of local recovery

2022-07-11 Thread Ivan Bessonov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Bessonov reassigned IGNITE-16907:
--

Assignee: Ivan Bessonov

> Add ability to use Raft log as storage WAL within the scope of local recovery
> -
>
> Key: IGNITE-16907
> URL: https://issues.apache.org/jira/browse/IGNITE-16907
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
>
> h4. Problem
> From a bird's-eye view, the raft-to-storage flow looks like this:
>  # 
> {code:java}
> RaftGroupService#run(writeCommand());{code}
>  # Inner raft replication logic: once the command is replicated on the 
> majority, raft.commitedIndex is adjusted.
>  # Propagate the command to RaftGroupListener (the raft state machine).
> {code:java}
> RaftGroupListener#onWrite(closure(writeCommand()));{code}
>  # Within the state machine, insert the data from the writeCommand into the 
> underlying storage:
> {code:java}
> var insertRes = storage.insert(cmd.getRow(), cmd.getTimestamp());{code}
>  # Ack that the data was applied successfully:
> {code:java}
> clo.result(insertRes);{code}
>  # Move raft.appliedIndex to the corresponding value, meaning that the data 
> for this index is applied to the state machine.
> The most interesting part, especially for this ticket, relates to step 4.
> In the real world, the storage doesn't flush every mutator to disk; instead 
> it buffers some amount of such mutators and flushes them all together as a 
> part of some checkpointing process. Thus, if the node fails before 
> mutatorsBuffer.flush(), it might lose some data, because raft will apply data 
> starting from appliedIndex + 1 on recovery.
> h4. Possible solutions:
> There are several possibilities to solve this issue:
>  # In-storage WAL. A bad solution, because there's already the raft log that 
> can be used as a WAL; such duplication is redundant.
>  # Local recovery starting from appliedIndex - mutatorsBuffer.size. A bad 
> solution: it won't work for non-idempotent operations, and it exposes inner 
> storage details such as mutatorsBuffer.size.
>  # proposedIndex propagation + checkpointIndex synchronization. Seems fine. 
> More details below:
>  * First of all, in order to coordinate the raft replicator and the storage, 
> the proposedIndex should be propagated to the raftGroupListener and the storage.
>  * On every checkpoint, the storage will persist the corresponding proposed 
> index as checkpointIndex.
>  ** In case of inner storage checkpoints, the storage won't notify the raft 
> replicator about the new checkpointIndex. This kind of notification is an 
> optimization that does not affect the correctness of the protocol.
>  ** In case of an outer checkpoint intention, e.g. raft snapshotting for the 
> purposes of raft log truncation, the corresponding checkpointIndex will be 
> propagated to the raft replicator within an "onSnapshotDone" callback.
>  * During local recovery, raft will apply raft log entries from the very 
> beginning. If the checkpointIndex turns out to be bigger than the 
> proposedIndex of another raft log entry, the storage fails the proposed 
> closure with IndexMismatchException(checkpointIndex), which leads to a 
> proposedIndex shift and optional async raft log truncation.
> Let's consider following example:
> ] checkpointBuffer = 3. [P] - perisisted entities, [!P] - not perisisted/in 
> memory one.
>  # raft.put(k1,v1)
>  ## -> raftlog[cmd(k1,v1, index:1)]
>  ## -> storage[(k1,v1), index:1]
>  ## -> appliedIndex:1
>  # raft.put(k2,v2)
>  ## -> raftlog[cmd(k1,v1, index:1), \\{*}cmd(k2,v2, index:2)\\{*}]
>  ## -> storage[(k1,v1), \\{*}(k2,v2)\\{*}, ** index:\\{*}2\\{*}]
>  ## -> appliedIndex:{*}2{*}
>  # raft.put(k3,v3)
>  ## -> raftlog[cmd(k1,v1, index:1), cmd(k2,v2, index:2),  \\{*}cmd(k3,v3, 
> index:3)\\{*}]
>  ## -> storage[(k1,v1), (k2,v2), \\{*}(k3,v3)\\{*}, index:\\{*}3\\{*}]
>  ## -> appliedIndex:{*}3{*}
>  ## *inner storage checkpoint*
>  ### raftlog[cmd(k1,v1, index:1), cmd(k2,v2, index:2),  cmd(k3,v3, index:3)]
>  ### storage[(k1,v1, proposedIndex:1), (k2,v2, proposedIndex:2), (k3,v3, 
> proposedIndex:3)]
>  ### checkpointedData[(k1,v1), (k2,v2), (k3,v3), checkpointIndex:3]
>  # raft.put(k4,v4)
>  ## -> raftlog[cmd(k1,v1, index:1), cmd(k2,v2, index:2), cmd(k3,v3, index:3), cmd(k4,v4, index:4)]
>  ## -> storage[(k1,v1), (k2,v2), (k3,v3), (k4,v4), index:4]
>  ## -> checkpointedData[(k1,v1), (k2,v2), (k3,v3), checkpointIndex:3]
>  ## -> appliedIndex:4
>  # Node failure
>  # Node restart
>  ## StorageRecovery: storage.apply(checkpointedData)
>  ## raft-to-storage data application starting from index: 1 // raft doesn't 
> know checkpointIndex at this point.
>  ### -> storageResponse::IndexMismatchException(3)
>    raft-to-storage data application starting 
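The recovery-time check walked through above can be sketched as follows. This is a minimal illustration only; Storage, apply, and IndexMismatchException here are simplified stand-ins, not the actual Ignite 3 API:

```java
// Sketch: during local recovery, replayed raft log entries whose index is
// already covered by the persisted checkpoint are rejected, letting raft
// shift its proposedIndex past the checkpoint.
class IndexMismatchException extends RuntimeException {
    final long checkpointIndex;

    IndexMismatchException(long checkpointIndex) {
        super("Entry already covered by checkpoint " + checkpointIndex);
        this.checkpointIndex = checkpointIndex;
    }
}

class Storage {
    private final long checkpointIndex; // recovered from checkpointedData

    Storage(long recoveredCheckpointIndex) {
        this.checkpointIndex = recoveredCheckpointIndex;
    }

    /** Applies a replayed raft log entry during local recovery. */
    void apply(long proposedIndex, String key, String value) {
        if (proposedIndex <= checkpointIndex)
            throw new IndexMismatchException(checkpointIndex);
        // ... actually mutate the storage ...
    }
}
```

In the worked example, recovery replays from index 1; the first IndexMismatchException(3) tells raft to continue from index 4 and optionally truncate the log asynchronously.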

[jira] [Assigned] (IGNITE-12984) Distributed join incorrectly processed when batched:unicast on primary key is used

2022-07-11 Thread Taras Ledkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov reassigned IGNITE-12984:
-

Assignee: (was: Taras Ledkov)

> Distributed join incorrectly processed when batched:unicast on primary key is 
> used
> --
>
> Key: IGNITE-12984
> URL: https://issues.apache.org/jira/browse/IGNITE-12984
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Ilya Kasnacheev
>Priority: Major
> Attachments: Issue_with_Distributed_joins.pdf, forDistributedJoins.sql
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Please see attached SQL script and userlist discussion.
> Summary :
> CASE-1 Results: Correct and as expected
> {code}
> SELECT
> __Z0.ID AS __C0_0,
> __Z0.NAME AS __C0_1,
> __Z1.BLOOD_GROUP AS __C0_2,
> __Z2.UNIVERSAL_DONOR AS __C0_3
> FROM PUBLIC.PERSON __Z0
> /* PUBLIC.PERSON_NAME_ASC_IDX_proxy */
> LEFT OUTER JOIN PUBLIC.MEDICAL_INFO __Z1
> /* batched:broadcast PUBLIC.MEDICAL_INFO_NAME_ASC_IDX: NAME = __Z0.NAME */
> ON __Z0.NAME = __Z1.NAME
> LEFT OUTER JOIN PUBLIC.BLOOD_GROUP_INFO_PJ __Z2
> /* batched:broadcast PUBLIC.BLOOD_GROUP_INFO_PJ_BLOOD_GROUP_ASC_IDX: 
> BLOOD_GROUP =
> __Z1.BLOOD_GROUP */
> ON __Z1.BLOOD_GROUP = __Z2.BLOOD_GROUP
> {code}
> Summary :
> CASE-2 Results: Incorrect
> {code}
> SELECT
> __Z0.ID AS __C0_0,
> __Z0.NAME AS __C0_1,
> __Z1.BLOOD_GROUP AS __C0_2,
> __Z2.UNIVERSAL_DONOR AS __C0_3
> FROM PUBLIC.PERSON __Z0
> /* PUBLIC.PERSON_ID_ASC_IDX_proxy */
> LEFT OUTER JOIN PUBLIC.MEDICAL_INFO __Z1
> /* batched:broadcast PUBLIC.MEDICAL_INFO_NAME_ASC_IDX: NAME = __Z0.NAME */
> ON __Z0.NAME = __Z1.NAME
> LEFT OUTER JOIN PUBLIC.BLOOD_GROUP_INFO_P __Z2
> /* batched:unicast PUBLIC._key_PK_proxy: BLOOD_GROUP = __Z1.BLOOD_GROUP */
> ON __Z1.BLOOD_GROUP = __Z2.BLOOD_GROUP
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-14548) BinaryThreadLocalContext must be cleaned when client is closed

2022-07-11 Thread Taras Ledkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov reassigned IGNITE-14548:
-

Assignee: Taras Ledkov  (was: Taras Ledkov)

> BinaryThreadLocalContext must be cleaned when client is closed
> --
>
> Key: IGNITE-14548
> URL: https://issues.apache.org/jira/browse/IGNITE-14548
> Project: Ignite
>  Issue Type: Bug
>  Components: binary
>Affects Versions: 2.10
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
>
> ThreadLocal {{BinaryThreadLocalContext#CTX}} must be cleaned when thin client 
> and JDBC thin client are closed.
> Fail case: [Stack 
> overflow|https://stackoverflow.com/questions/67086723/unable-to-remove-org-apache-ignite-internal-binary-binarythreadlocal-error-in-we?noredirect=1#comment118615654_67086723]
> Previous discussion: IGNITE-967
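A minimal sketch of the kind of cleanup the fix requires. BinaryContextHolder and its members are illustrative stand-ins for the real BinaryThreadLocalContext and the client close path:

```java
// Sketch: a per-thread context that must be removed when the client closes.
// Otherwise the thread keeps a strong reference and, in a web application,
// pins the deploying classloader, as in the linked Stack Overflow report.
class BinaryContextHolder {
    private static final ThreadLocal<StringBuilder> CTX =
        ThreadLocal.withInitial(StringBuilder::new);

    static StringBuilder context() {
        return CTX.get();
    }

    /** To be invoked from thin/JDBC client close() so the thread drops its reference. */
    static void onClientClose() {
        CTX.remove();
    }
}
```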





[jira] [Assigned] (IGNITE-15487) Calcite integration. Implements DDL command KILL query

2022-07-11 Thread Taras Ledkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov reassigned IGNITE-15487:
-

Assignee: Taras Ledkov  (was: Taras Ledkov)

> Calcite integration. Implements DDL command KILL query
> --
>
> Key: IGNITE-15487
> URL: https://issues.apache.org/jira/browse/IGNITE-15487
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
>  Labels: calcite2-required, calcite3-required
>
> Implements DDL command
> {{KILL }}
> The command will broadcast query cancel message.
> Depends on query cancel refactoring: IGNITE-12991





[jira] [Assigned] (IGNITE-16098) Key type and schema must be validated on a data insertion

2022-07-11 Thread Taras Ledkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov reassigned IGNITE-16098:
-

Assignee: Taras Ledkov  (was: Taras Ledkov)

> Key type and schema must be validated on a data insertion
> -
>
> Key: IGNITE-16098
> URL: https://issues.apache.org/jira/browse/IGNITE-16098
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.11
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are two cases that break the consistency between indexes and data and 
> can corrupt the index tree.
> *1. Put different entities that are logically same for SQL*
> {{CREATE TABLE TEST(ID0 INT, ID1 INT, VAL int, PRIMARY KEY (ID0, ID1)) WITH 
> "KEY_TYPE=MyType,CACHE_NAME=test"}};
> Then create two keys with a hidden field and put them into cache "test":
> {code}
> BinaryObjectBuilder bobKey0 = grid(0).binary().builder("MyType");
> bobKey0.setField("ID0", 0);
> bobKey0.setField("ID1", 0);
> bobKey0.setField("hidden", 0);
>
> BinaryObjectBuilder bobKey1 = grid(0).binary().builder("MyType");
> bobKey1.setField("ID0", 0);
> bobKey1.setField("ID1", 0);
> bobKey1.setField("hidden", 1);
> {code}
> These key objects differ by the hidden field, so the cache will contain two 
> entries. But the (ID0, ID1) fields are the same, so the sorted PK index will 
> contain only one record.
> Currently this case applies only to SQL CREATE TABLE and cannot be reproduced 
> via the cache API, because PK fields are unwrapped only for SQL tables.
> The divergence between the cache API and the SQL API should be fixed by 
> IGNITE-11402: it should be possible to create identical tables via SQL and 
> the cache API.
> *2. Objects with different key types*
> Currently the value type specifies the table. If the value type doesn't equal 
> the type specified by {{QueryEntity#valueType}}, the entity is not indexed 
> (not applied to the table).
> But {{QueryEntity#keyType}} isn't used to validate an entry on insertion; only 
> the fields of the inserted entry are validated.
> Only {{QueryBinaryProperty#field}} prevents insertion of keys with different 
> types, and such an insertion produces a critical error.
> Moreover, this property is set up locally on the node on the first insertion.
> So, the steps to fail:
> 1. Put key with type "Key0" on the one node - OK;
> 2. Put key with type "Key1" on the other node - OK;
> 3. Re-balance the cluster so that both keys are owned by one node - FAIL (on 
> build index);
> *Suggested fix:*
> 1. Validate the key type at {{QueryTypeDescriptorImpl#validateKeyAndValue}} 
> (fixes the second case);
> 2. *Before* table creation, register {{BinaryMetadata}} for the specified key type;
> 3. If the type corresponding to KEY is already registered and the schemas differ, 
> fail the table creation;
> 4. Save the proper {{schemaId}} for the KEY at {{QueryEntityEx}} for 
> cluster-wide propagation and store it in the persistence configuration;
> 5. Validate the key schema at {{QueryTypeDescriptorImpl#validateKeyAndValue}} 
> (fixes the first case);
> 6. Introduce a distributed property to disable this validation for backward 
> compatibility.
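The proposed checks could look roughly like this. A sketch only: the real logic belongs in QueryTypeDescriptorImpl#validateKeyAndValue, and the class and field names here are assumptions:

```java
// Sketch of validating both the key type (case 2) and the key schema
// (case 1, e.g. extra hidden fields) on data insertion.
class KeyValidator {
    private final String expectedKeyType;
    private final int expectedSchemaId;

    KeyValidator(String expectedKeyType, int expectedSchemaId) {
        this.expectedKeyType = expectedKeyType;
        this.expectedSchemaId = expectedSchemaId;
    }

    /** Rejects keys whose binary type or schema differs from the one registered for the table. */
    void validate(String actualType, int actualSchemaId) {
        if (!expectedKeyType.equals(actualType))
            throw new IllegalArgumentException(
                "Key type mismatch: expected " + expectedKeyType + ", got " + actualType);
        if (expectedSchemaId != actualSchemaId)
            throw new IllegalArgumentException(
                "Key schema mismatch: extra or missing fields in the key");
    }
}
```

With such a check, the "hidden" field in the example above would change the key schema and the second put would fail fast instead of silently breaking the PK index.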





[jira] [Assigned] (IGNITE-15487) Calcite integration. Implements DDL command KILL query

2022-07-11 Thread Taras Ledkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov reassigned IGNITE-15487:
-

Assignee: (was: Taras Ledkov)

> Calcite integration. Implements DDL command KILL query
> --
>
> Key: IGNITE-15487
> URL: https://issues.apache.org/jira/browse/IGNITE-15487
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Taras Ledkov
>Priority: Major
>  Labels: calcite2-required, calcite3-required
>
> Implements DDL command
> {{KILL }}
> The command will broadcast query cancel message.
> Depends on query cancel refactoring: IGNITE-12991





[jira] [Assigned] (IGNITE-12984) Distributed join incorrectly processed when batched:unicast on primary key is used

2022-07-11 Thread Taras Ledkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov reassigned IGNITE-12984:
-

Assignee: Taras Ledkov  (was: Taras Ledkov)

> Distributed join incorrectly processed when batched:unicast on primary key is 
> used
> --
>
> Key: IGNITE-12984
> URL: https://issues.apache.org/jira/browse/IGNITE-12984
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Ilya Kasnacheev
>Assignee: Taras Ledkov
>Priority: Major
> Attachments: Issue_with_Distributed_joins.pdf, forDistributedJoins.sql
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Please see attached SQL script and userlist discussion.
> Summary :
> CASE-1 Results: Correct and as expected
> {code}
> SELECT
> __Z0.ID AS __C0_0,
> __Z0.NAME AS __C0_1,
> __Z1.BLOOD_GROUP AS __C0_2,
> __Z2.UNIVERSAL_DONOR AS __C0_3
> FROM PUBLIC.PERSON __Z0
> /* PUBLIC.PERSON_NAME_ASC_IDX_proxy */
> LEFT OUTER JOIN PUBLIC.MEDICAL_INFO __Z1
> /* batched:broadcast PUBLIC.MEDICAL_INFO_NAME_ASC_IDX: NAME = __Z0.NAME */
> ON __Z0.NAME = __Z1.NAME
> LEFT OUTER JOIN PUBLIC.BLOOD_GROUP_INFO_PJ __Z2
> /* batched:broadcast PUBLIC.BLOOD_GROUP_INFO_PJ_BLOOD_GROUP_ASC_IDX: 
> BLOOD_GROUP =
> __Z1.BLOOD_GROUP */
> ON __Z1.BLOOD_GROUP = __Z2.BLOOD_GROUP
> {code}
> Summary :
> CASE-2 Results: Incorrect
> {code}
> SELECT
> __Z0.ID AS __C0_0,
> __Z0.NAME AS __C0_1,
> __Z1.BLOOD_GROUP AS __C0_2,
> __Z2.UNIVERSAL_DONOR AS __C0_3
> FROM PUBLIC.PERSON __Z0
> /* PUBLIC.PERSON_ID_ASC_IDX_proxy */
> LEFT OUTER JOIN PUBLIC.MEDICAL_INFO __Z1
> /* batched:broadcast PUBLIC.MEDICAL_INFO_NAME_ASC_IDX: NAME = __Z0.NAME */
> ON __Z0.NAME = __Z1.NAME
> LEFT OUTER JOIN PUBLIC.BLOOD_GROUP_INFO_P __Z2
> /* batched:unicast PUBLIC._key_PK_proxy: BLOOD_GROUP = __Z1.BLOOD_GROUP */
> ON __Z1.BLOOD_GROUP = __Z2.BLOOD_GROUP
> {code}





[jira] [Assigned] (IGNITE-17175) Table isn't created on client node when CREATE TABLE command is called on existing cache

2022-07-11 Thread Taras Ledkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov reassigned IGNITE-17175:
-

Assignee: Taras Ledkov  (was: Taras Ledkov)

> Table isn't created on client node when CREATE TABLE command is called on 
> existing cache
> 
>
> Key: IGNITE-17175
> URL: https://issues.apache.org/jira/browse/IGNITE-17175
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The test {{DynamicEnableIndexingBasicSelfTest.testEnableDynamicIndexing}} 
> fails sometimes with error `Table not found`.
> *Root cause:*
> Looks like the CREATE TABLE command sometimes doesn't take effect on the 
> client node when the table is created on an existing cache and that cache 
> isn't started on the client node.





[jira] [Created] (IGNITE-17344) Move "metrics" SQL system view to GridSystemViewManager

2022-07-11 Thread Aleksey Plekhanov (Jira)
Aleksey Plekhanov created IGNITE-17344:
--

 Summary: Move "metrics" SQL system view to GridSystemViewManager
 Key: IGNITE-17344
 URL: https://issues.apache.org/jira/browse/IGNITE-17344
 Project: Ignite
  Issue Type: Improvement
Reporter: Aleksey Plekhanov
Assignee: Aleksey Plekhanov


Currently we have a dedicated adapter for the "metrics" system view 
({{MetricRegistryLocalSystemView}}). This view isn't registered in the common 
view registry (via {{GridSystemViewManager}}) and can't be used by any view 
exporter except the H2 SQL engine (for example, by the Calcite SQL engine).
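Conceptually, moving the view behind the common manager means every exporter can enumerate it. A simplified sketch, where SystemViewRegistry is a stand-in for GridSystemViewManager (whose actual API differs):

```java
import java.util.*;
import java.util.function.Supplier;

// Sketch: a common registry of system views. Once "metrics" is registered
// here, any exporter (H2, Calcite, JMX, ...) can discover and read it,
// instead of the view being wired into the H2 engine only.
class SystemViewRegistry {
    private final Map<String, Supplier<? extends Collection<?>>> views = new LinkedHashMap<>();

    void registerView(String name, Supplier<? extends Collection<?>> rowSupplier) {
        views.put(name, rowSupplier);
    }

    Collection<?> rows(String name) {
        return views.get(name).get();
    }

    Set<String> viewNames() {
        return views.keySet();
    }
}
```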


