[jira] [Commented] (IGNITE-13007) JDBC: Introduce feature flags for JDBC thin

2020-05-14 Thread Konstantin Orlov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107997#comment-17107997
 ] 

Konstantin Orlov commented on IGNITE-13007:
---

[~tledkov-gridgain], please do a review. Here is the PR: 
[https://github.com/apache/ignite/pull/7797] (I have no idea why it is not linked 
to the ticket automatically)

> JDBC: Introduce feature flags for JDBC thin
> ---
>
> Key: IGNITE-13007
> URL: https://issues.apache.org/jira/browse/IGNITE-13007
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>
> Motivation is the same as for https://issues.apache.org/jira/browse/IGNITE-12853
> The thin client, thin JDBC and ODBC clients have different protocol specifics and 
> may require implementing different features.
> Each client (thin client, thin JDBC, ODBC) should have its own set of feature flags.
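
To illustrate what a per-client feature flag set means in practice, here is a minimal, 
self-contained sketch (all names are illustrative, not the code from the PR above): each 
thin protocol keeps its own feature enum, and the set of supported features is exchanged 
as a bit mask during the handshake.

{code:java}
import java.util.EnumSet;

/** Illustrative JDBC-thin feature set; the real enum lives in the Ignite code base. */
enum JdbcThinFeatureExample {
    RESERVED(0),
    SOME_NEW_FEATURE(1);

    private final int bit;

    JdbcThinFeatureExample(int bit) { this.bit = bit; }

    /** Encodes a feature set as a byte array to be sent in the handshake message. */
    static byte[] toBytes(EnumSet<JdbcThinFeatureExample> set) {
        byte[] res = new byte[(values().length + 7) / 8];

        for (JdbcThinFeatureExample f : set)
            res[f.bit / 8] |= 1 << (f.bit % 8);

        return res;
    }

    /** Decodes the peer's byte array back into a feature set. */
    static EnumSet<JdbcThinFeatureExample> fromBytes(byte[] in) {
        EnumSet<JdbcThinFeatureExample> res = EnumSet.noneOf(JdbcThinFeatureExample.class);

        for (JdbcThinFeatureExample f : values()) {
            if (f.bit / 8 < in.length && (in[f.bit / 8] & (1 << (f.bit % 8))) != 0)
                res.add(f);
        }

        return res;
    }
}
{code}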



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13007) JDBC: Introduce feature flags for JDBC thin

2020-05-14 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107989#comment-17107989
 ] 

Ignite TC Bot commented on IGNITE-13007:


{panel:title=Branch: [pull/7797/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=5303896&buildTypeId=IgniteTests24Java8_RunAll]

> JDBC: Introduce feature flags for JDBC thin
> ---
>
> Key: IGNITE-13007
> URL: https://issues.apache.org/jira/browse/IGNITE-13007
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>
> Motivation is the same as for https://issues.apache.org/jira/browse/IGNITE-12853
> The thin client, thin JDBC and ODBC clients have different protocol specifics and 
> may require implementing different features.
> Each client (thin client, thin JDBC, ODBC) should have its own set of feature flags.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12438) Extend communication protocol to establish client-server connection

2020-05-14 Thread Ivan Bessonov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107988#comment-17107988
 ] 

Ivan Bessonov commented on IGNITE-12438:


Hi [~dmagda],

I'll start the discussion soon. The feature is in an experimental state in the code 
because it's still not clear how well this concept will work.

BTW, IGNITE-13013 isn't entirely clear to me - is it any different from the current 
task? Both require the same thing - providing the ability to have a one-way 
connection from clients to servers, right?

> Extend communication protocol to establish client-server connection
> ---
>
> Key: IGNITE-12438
> URL: https://issues.apache.org/jira/browse/IGNITE-12438
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Assignee: Ivan Bessonov
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Recently there were quite a lot of questions related to thick client 
> connectivity issues when the clients are deployed in a k8s pod [1]. The 
> general issue here is clients reporting network addresses which are not 
> reachable from server nodes. At the same time, the clients can connect to 
> the server nodes.
> An idea of how to fix this is as follows:
>  * Make sure that the thick client's discovery SPI always maintains a 
> connection to a server node (this should be already implemented)
>  * (Optionally) detect when a client has only one-way connectivity with the 
> server nodes. This part should be investigated. We need this so that server 
> nodes do not waste time attempting to connect to a client and can deliver a 
> communication request to the client node faster
>  * When a server attempts to establish a connection with a client, check if 
> the client is unreachable or the previous connection attempt failed. If so, 
> send a discovery message to the client to force a client-server connection. 
> In this case, the server will be able to send the original message via the 
> newly established connection (see the sketch below).
> [1] 
> https://stackoverflow.com/questions/59192075/ignite-communicationspi-questions-in-paas-environment/59232504
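
To make the last bullet concrete, below is a rough sketch of the server-side decision. 
All helper types (ClientSession, Connection) and method names are purely hypothetical; 
the actual Ignite internals and message types are what this ticket defines.

{code:java}
import java.util.concurrent.CompletableFuture;

/** Hypothetical placeholders, not Ignite internals. */
interface Connection {
    void send(Object msg);
}

interface ClientSession {
    boolean isReachableFromServer();
    boolean previousConnectAttemptFailed();
    Connection openOutboundConnection();
    void sendDiscoveryConnectionRequest();            // ask the client over discovery to connect back
    CompletableFuture<Connection> awaitInboundConnection();
}

class InverseConnectionSketch {
    /** Sends a message to a (possibly unreachable) client node. */
    static void sendToClient(ClientSession client, Object msg) throws Exception {
        if (client.isReachableFromServer() && !client.previousConnectAttemptFailed()) {
            // Normal case: the server opens the communication connection itself.
            client.openOutboundConnection().send(msg);

            return;
        }

        // Client sits behind NAT / a k8s pod network: ask it via a discovery message to
        // connect to us, then deliver the original message over the inbound connection.
        client.sendDiscoveryConnectionRequest();
        client.awaitInboundConnection().get().send(msg);
    }
}
{code}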



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (IGNITE-12438) Extend communication protocol to establish client-server connection

2020-05-14 Thread Denis A. Magda (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107941#comment-17107941
 ] 

Denis A. Magda edited comment on IGNITE-12438 at 5/15/20, 5:04 AM:
---

[~ibessonov], could you please start a discussion about this new API on the dev 
list? I'm not sure that 'VIRTUALIZED' is the best name for the feature. Let's 
quickly brainstorm with the broader community. The final name can also be 
influenced by [IGNITE-13013|https://issues.apache.org/jira/browse/IGNITE-13013].


{code:java}
public enum EnvironmentType {
 /** Default value. */
 STANDALONE,

 /** */
 VIRTUALIZED;
 }
{code}

Also, I see that the feature is planned to be released in the experimental 
mode. What needs to be done to make it available in the GA state?


was (Author: dmagda):
[~ibessonov], could you please start a discussion about this new API on the dev 
list? I'm not sure that 'VIRTUALIZED' is the best name for the feature. Let's 
quickly brainstorm with the broader community.


{code:java}
public enum EnvironmentType {
 /** Default value. */
 STANDALONE,

 /** */
 VIRTUALIZED;
 }
{code}

Also, I see that the feature is planned to be released in the experimental 
mode. What needs to be done to make it available in the GA state?

> Extend communication protocol to establish client-server connection
> ---
>
> Key: IGNITE-12438
> URL: https://issues.apache.org/jira/browse/IGNITE-12438
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Assignee: Ivan Bessonov
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Recently there were quite a lot of questions related to thick client 
> connectivity issues when the clients are deployed in a k8s pod [1]. The 
> general issue here is clients reporting network addresses which are not 
> reachable from server nodes. At the same time, the clients can connect to 
> the server nodes.
> An idea of how to fix this is as follows:
>  * Make sure that the thick client's discovery SPI always maintains a 
> connection to a server node (this should be already implemented)
>  * (Optionally) detect when a client has only one-way connectivity with the 
> server nodes. This part should be investigated. We need this so that server 
> nodes do not waste time attempting to connect to a client and can deliver a 
> communication request to the client node faster
>  * When a server attempts to establish a connection with a client, check if 
> the client is unreachable or the previous connection attempt failed. If so, 
> send a discovery message to the client to force a client-server connection. 
> In this case, the server will be able to send the original message via the 
> newly established connection.
> [1] 
> https://stackoverflow.com/questions/59192075/ignite-communicationspi-questions-in-paas-environment/59232504



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13013) Thick client must not open server sockets when used by serverless functions

2020-05-14 Thread Denis A. Magda (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis A. Magda updated IGNITE-13013:

Description: 
A thick client fails to start if used inside a serverless function such as AWS 
Lambda or Azure Functions. Cloud providers prohibit opening network ports to 
accept connections on the function's end. In short, the function can only connect 
to a remote address.

To reproduce, you can follow this tutorial and swap the thin client (used in 
the tutorial) with the thick one: 
https://www.gridgain.com/docs/tutorials/serverless/azure_functions_tutorial

The thick client needs to support a mode in which the communication SPI doesn't 
create a server socket when the client is used for serverless computing. This 
improvement looks like an additional task of this initiative: 
https://issues.apache.org/jira/browse/IGNITE-12438
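
For illustration only, the requested mode could boil down to a single client-side switch 
that keeps the communication SPI from binding a listening socket. The setter name below 
(setForceClientToServerConnections) is a hypothetical placeholder, not a confirmed Ignite 
API; the real switch, if any, is what this ticket and IGNITE-12438 are meant to define.

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class ServerlessClientSketch {
    public static void main(String[] args) {
        // 'setForceClientToServerConnections' is a hypothetical flag used for illustration.
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientMode(true) // thick client running inside the serverless function
            .setCommunicationSpi(new TcpCommunicationSpi()
                .setForceClientToServerConnections(true)); // never bind a server socket

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.compute().broadcast(() -> System.out.println("hello from the function"));
        }
    }
}
{code}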

  was:
A thick client fails to start if used inside a serverless function such as AWS 
Lambda or Azure Functions. Cloud providers prohibit opening network ports to 
accept connections on the function's end. In short, the function can only connect 
to a remote address.

The thick client needs to support a mode in which the communication SPI doesn't 
create a server socket when the client is used for serverless computing. This 
improvement looks like an additional task of this initiative: 
https://issues.apache.org/jira/browse/IGNITE-12438


> Thick client must not open server sockets when used by serverless functions
> ---
>
> Key: IGNITE-13013
> URL: https://issues.apache.org/jira/browse/IGNITE-13013
> Project: Ignite
>  Issue Type: Improvement
>  Components: networking
>Affects Versions: 2.8
>Reporter: Denis A. Magda
>Priority: Critical
> Fix For: 2.9
>
>
> A thick client fails to start if used inside a serverless function 
> such as AWS Lambda or Azure Functions. Cloud providers prohibit opening 
> network ports to accept connections on the function's end. In short, the 
> function can only connect to a remote address.
> To reproduce, you can follow this tutorial and swap the thin client (used in 
> the tutorial) with the thick one: 
> https://www.gridgain.com/docs/tutorials/serverless/azure_functions_tutorial
> The thick client needs to support a mode in which the communication SPI doesn't 
> create a server socket when the client is used for serverless computing. This 
> improvement looks like an additional task of this initiative: 
> https://issues.apache.org/jira/browse/IGNITE-12438



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13013) Thick client must not open server sockets when used by serverless functions

2020-05-14 Thread Denis A. Magda (Jira)
Denis A. Magda created IGNITE-13013:
---

 Summary: Thick client must not open server sockets when used by 
serverless functions
 Key: IGNITE-13013
 URL: https://issues.apache.org/jira/browse/IGNITE-13013
 Project: Ignite
  Issue Type: Improvement
  Components: networking
Affects Versions: 2.8
Reporter: Denis A. Magda
 Fix For: 2.9


A thick client fails to start if used inside a serverless function such as AWS 
Lambda or Azure Functions. Cloud providers prohibit opening network ports to 
accept connections on the function's end. In short, the function can only connect 
to a remote address.

The thick client needs to support a mode in which the communication SPI doesn't 
create a server socket when the client is used for serverless computing. This 
improvement looks like an additional task of this initiative: 
https://issues.apache.org/jira/browse/IGNITE-12438



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12438) Extend communication protocol to establish client-server connection

2020-05-14 Thread Denis A. Magda (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107941#comment-17107941
 ] 

Denis A. Magda commented on IGNITE-12438:
-

[~ibessonov], could you please start a discussion about this new API on the dev 
list? I'm not sure that 'VIRTUALIZED' is the best name for the feature. Let's 
quickly brainstorm with the broader community.


{code:java}
public enum EnvironmentType {
 /** Default value. */
 STANDALONE,

 /** */
 VIRTUALIZED;
 }
{code}

Also, I see that the feature is planned to be released in the experimental 
mode. What needs to be done to make it available in the GA state?

> Extend communication protocol to establish client-server connection
> ---
>
> Key: IGNITE-12438
> URL: https://issues.apache.org/jira/browse/IGNITE-12438
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexey Goncharuk
>Assignee: Ivan Bessonov
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Recently there were quite a lot of questions related to thick client 
> connectivity issues when the clients are deployed in a k8s pod [1]. The 
> general issue here is clients reporting network addresses which are not 
> reachable from server nodes. At the same time, the clients can connect to 
> the server nodes.
> An idea of how to fix this is as follows:
>  * Make sure that the thick client's discovery SPI always maintains a 
> connection to a server node (this should be already implemented)
>  * (Optionally) detect when a client has only one-way connectivity with the 
> server nodes. This part should be investigated. We need this so that server 
> nodes do not waste time attempting to connect to a client and can deliver a 
> communication request to the client node faster
>  * When a server attempts to establish a connection with a client, check if 
> the client is unreachable or the previous connection attempt failed. If so, 
> send a discovery message to the client to force a client-server connection. 
> In this case, the server will be able to send the original message via the 
> newly established connection.
> [1] 
> https://stackoverflow.com/questions/59192075/ignite-communicationspi-questions-in-paas-environment/59232504



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13012) Make node connection checking rely on the configuration. Simplify node ping routine.

2020-05-14 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13012:
--
Description: 
Current node-to-node connection checking has several drawbacks:

1) The minimal connection checking interval is not bound to the failure detection 
parameters: 
static int ServerImpl.CON_CHECK_INTERVAL = 500;

2) Connection checking is implemented as periodic message sending 
(TcpDiscoveryConnectionCheckMessage). It is bound to its own timestamp 
(ServerImpl.RingMessageWorker.lastTimeConnCheckMsgSent), not to the common time of 
the last sent message. This is odd because any discovery message actually checks 
the connection; TcpDiscoveryConnectionCheckMessage is only an addition for the case 
when the message queue is empty for a long time.

3) The period of node-to-node connection checking can sometimes be shortened for a 
strange reason: when no message has been sent or received within 
failureDetectionTimeout. Here, despite having a minimal connection checking period 
(ServerImpl.CON_CHECK_INTERVAL), we can also send TcpDiscoveryConnectionCheckMessage 
before this period has elapsed. Moreover, this premature node ping also relies on 
the time of the last received message. Imagine: if node 2 receives no message from 
node 1 within some time, it decides to ping node 3 early instead of waiting for the 
regular ping interval. Such behavior is confusing and gives no additional guarantees.

4) If #3 happens, the node logs at INFO level: “Local node seems to be disconnected 
from topology …” whereas it is not actually disconnected. A user can see this 
message if failureDetectionTimeout is set below 500 ms. An INFO message saying the 
node might be disconnected should not appear when everything is in fact OK; it 
reads as if there is network trouble.

Suggestions:
1) Base the connection check interval on failureDetectionTimeout or similar 
parameters (see the sketch below).
2) Make the connection check interval rely on the common time of the last sent 
message, not on a dedicated timestamp.
3) Remove the additional, random, quickened connection checking.
4) Do not alarm the user with “Node disconnected” when everything is OK.
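
A minimal sketch of suggestion 1, assuming illustrative names (the real change would 
live in ServerImpl / RingMessageWorker, which are not shown here): derive the check 
interval from the configured failure detection timeout instead of the hard-coded 500 ms.

{code:java}
/** Illustrative only; names below are not the actual patch. */
final class ConnCheckIntervalSketch {
    /** How many checks should fit into the failure detection timeout (assumed value). */
    private static final int CHECKS_PER_TIMEOUT = 3;

    /**
     * @param failureDetectionTimeout IgniteConfiguration#getFailureDetectionTimeout(), ms.
     * @return Interval between TcpDiscoveryConnectionCheckMessage sends, ms.
     */
    static long connectionCheckInterval(long failureDetectionTimeout) {
        // Check several times within the timeout so one lost message does not kill the node,
        // instead of the fixed CON_CHECK_INTERVAL = 500 that ignores the configuration.
        return Math.max(1, failureDetectionTimeout / CHECKS_PER_TIMEOUT);
    }
}
{code}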

  was:
Current node-to-node connection checking has several drawbacks:
1) The minimal connection checking interval is not bound to the failure detection 
parameters: 
static int ServerImpl.CON_CHECK_INTERVAL = 500;

2) Connection checking is implemented as periodic message sending 
(TcpDiscoveryConnectionCheckMessage). It is bound to its own timestamp 
(ServerImpl.RingMessageWorker.lastTimeConnCheckMsgSent), not to the common time of 
the last sent message. This is odd because any discovery message actually checks 
the connection; TcpDiscoveryConnectionCheckMessage is only an addition for the case 
when the message queue is empty for a long time.

3) The period of node-to-node connection checking can sometimes be shortened for a 
strange reason: when no message has been sent or received within 
failureDetectionTimeout. Here, despite having a minimal connection checking period 
(ServerImpl.CON_CHECK_INTERVAL), we can also send TcpDiscoveryConnectionCheckMessage 
before this period has elapsed. Moreover, this premature node ping also relies on 
the time of the last received message. Imagine: if node 2 receives no message from 
node 1 within some time, it decides to ping node 3 early instead of waiting for the 
regular ping interval. Such behavior is confusing and gives no additional guarantees.

4) If #3 happens, the node logs at INFO level: “Local node seems to be disconnected 
from topology …” whereas it is not actually disconnected. A user can see this 
message if failureDetectionTimeout is set below 500 ms. An INFO message saying the 
node might be disconnected should not appear when everything is in fact OK; it 
reads as if there is network trouble.

Suggestions:
1) Base the connection check interval on failureDetectionTimeout or similar 
parameters.
2) Make the connection check interval rely on the common time of the last sent 
message, not on a dedicated timestamp.
3) Remove the additional, random, quickened connection checking.
4) Do not alarm the user with “Node disconnected” when everything is OK.


> Make node connection checking rely on the configuration. Simplify node ping 
> routine.
> 
>
> Key: IGNITE-13012
> URL: https://issues.apache.org/jira/browse/IGNITE-13012
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
>
> Current node-to-node connection checking has several drawbacks:
> 1) The minimal connection checking interval is not bound to the failure detection 
> parameters: 
> static int ServerImpl.CON_CHECK_INTERVAL = 500;
> 2)Connection checking is made as abil

[jira] [Updated] (IGNITE-13012) Make node connection checking rely on the configuration. Simplify node ping routine.

2020-05-14 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13012:
--
Description: 
Current node-to-node connection checking has several drawbacks:
1) The minimal connection checking interval is not bound to the failure detection 
parameters: 
static int ServerImpl.CON_CHECK_INTERVAL = 500;

2) Connection checking is implemented as periodic message sending 
(TcpDiscoveryConnectionCheckMessage). It is bound to its own timestamp 
(ServerImpl.RingMessageWorker.lastTimeConnCheckMsgSent), not to the common time of 
the last sent message. This is odd because any discovery message actually checks 
the connection; TcpDiscoveryConnectionCheckMessage is only an addition for the case 
when the message queue is empty for a long time.

3) The period of node-to-node connection checking can sometimes be shortened for a 
strange reason: when no message has been sent or received within 
failureDetectionTimeout. Here, despite having a minimal connection checking period 
(ServerImpl.CON_CHECK_INTERVAL), we can also send TcpDiscoveryConnectionCheckMessage 
before this period has elapsed. Moreover, this premature node ping also relies on 
the time of the last received message. Imagine: if node 2 receives no message from 
node 1 within some time, it decides to ping node 3 early instead of waiting for the 
regular ping interval. Such behavior is confusing and gives no additional guarantees.

4) If #3 happens, the node logs at INFO level: “Local node seems to be disconnected 
from topology …” whereas it is not actually disconnected. A user can see this 
message if failureDetectionTimeout is set below 500 ms. An INFO message saying the 
node might be disconnected should not appear when everything is in fact OK; it 
reads as if there is network trouble.

Suggestions:
1) Base the connection check interval on failureDetectionTimeout or similar 
parameters.
2) Make the connection check interval rely on the common time of the last sent 
message, not on a dedicated timestamp.
3) Remove the additional, random, quickened connection checking.
4) Do not alarm the user with “Node disconnected” when everything is OK.

  was:
Current node-to-node connection checking has several drawbacks:
1) The minimal connection checking interval is not bound to the failure detection 
parameters: 
static int ServerImpl.CON_CHECK_INTERVAL = 500;

2) Connection checking is implemented as periodic message sending 
(TcpDiscoveryConnectionCheckMessage). It is bound to its own timestamp 
(ServerImpl.RingMessageWorker.lastTimeConnCheckMsgSent), not to the common time of 
the last sent message. This is odd because any discovery message actually checks 
the connection; TcpDiscoveryConnectionCheckMessage is only an addition for the case 
when the message queue is empty for a long time.

3) The period of node-to-node connection checking can sometimes be shortened for a 
strange reason: when no message has been sent or received within 
failureDetectionTimeout. Here, despite having a minimal connection checking period 
(ServerImpl.CON_CHECK_INTERVAL), we can also send TcpDiscoveryConnectionCheckMessage 
before this period has elapsed. Moreover, this premature node ping also relies on 
the time of the last received message. Imagine: if node 2 receives no message from 
node 1 within some time, it decides to ping node 3 early instead of waiting for the 
regular ping interval. Such behavior is confusing and gives no additional guarantees.
See `ServerImpl.connectionChecking()`

4) If #3 happens, the node logs at INFO level: “Local node seems to be disconnected 
from topology …” whereas it is not actually disconnected. A user can see this 
message if failureDetectionTimeout is set below 500 ms. An INFO message saying the 
node might be disconnected should not appear when everything is in fact OK; it 
reads as if there is network trouble.

Suggestions:
1) Base the connection check interval on failureDetectionTimeout or similar 
parameters.
2) Make the connection check interval rely on the common time of the last sent 
message, not on a dedicated timestamp.
3) Remove the additional, random, quickened connection checking.
4) Do not alarm the user with “Node disconnected” when everything is OK.


> Make node connection checking rely on the configuration. Simplify node ping 
> routine.
> 
>
> Key: IGNITE-13012
> URL: https://issues.apache.org/jira/browse/IGNITE-13012
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
>
> Current node-to-node connection checking has several drawbacks:
> 1) The minimal connection checking interval is not bound to the failure detection 
> parameters: 
> static int ServerImpl.CON_CHECK_INTERVAL = 500;
> 2)  

[jira] [Updated] (IGNITE-13012) Make node connection checking rely on the configuration. Simplify node ping routine.

2020-05-14 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13012:
--
Description: 
Current node-to-node connection checking has several drawbacks:
1) The minimal connection checking interval is not bound to the failure detection 
parameters: 
static int ServerImpl.CON_CHECK_INTERVAL = 500;

2) Connection checking is implemented as periodic message sending 
(TcpDiscoveryConnectionCheckMessage). It is bound to its own timestamp 
(ServerImpl.RingMessageWorker.lastTimeConnCheckMsgSent), not to the common time of 
the last sent message. This is odd because any discovery message actually checks 
the connection; TcpDiscoveryConnectionCheckMessage is only an addition for the case 
when the message queue is empty for a long time.

3) The period of node-to-node connection checking can sometimes be shortened for a 
strange reason: when no message has been sent or received within 
failureDetectionTimeout. Here, despite having a minimal connection checking period 
(ServerImpl.CON_CHECK_INTERVAL), we can also send TcpDiscoveryConnectionCheckMessage 
before this period has elapsed. Moreover, this premature node ping also relies on 
the time of the last received message. Imagine: if node 2 receives no message from 
node 1 within some time, it decides to ping node 3 early instead of waiting for the 
regular ping interval. Such behavior is confusing and gives no additional guarantees.
See `ServerImpl.connectionChecking()`

4) If #3 happens, the node logs at INFO level: “Local node seems to be disconnected 
from topology …” whereas it is not actually disconnected. A user can see this 
message if failureDetectionTimeout is set below 500 ms. An INFO message saying the 
node might be disconnected should not appear when everything is in fact OK; it 
reads as if there is network trouble.

Suggestions:
1) Base the connection check interval on failureDetectionTimeout or similar 
parameters.
2) Make the connection check interval rely on the common time of the last sent 
message, not on a dedicated timestamp.
3) Remove the additional, random, quickened connection checking.
4) Do not alarm the user with “Node disconnected” when everything is OK.

  was:

Current node-to-node connection checking has several drawbacks:
1) The minimal connection checking interval is not bound to the failure detection 
parameters: 
static int ServerImpl.CON_CHECK_INTERVAL = 500;
2) Connection checking is implemented as periodic message sending 
(TcpDiscoveryConnectionCheckMessage). It is bound to its own timestamp 
(ServerImpl.RingMessageWorker.lastTimeConnCheckMsgSent), not to the common time of 
the last sent message. This is odd because any discovery message actually checks 
the connection; TcpDiscoveryConnectionCheckMessage is only an addition for the case 
when the message queue is empty for a long time.
3) The period of node-to-node connection checking can sometimes be shortened for a 
strange reason: when no message has been sent or received within 
failureDetectionTimeout. Here, despite having a minimal connection checking period 
(ServerImpl.CON_CHECK_INTERVAL), we can also send TcpDiscoveryConnectionCheckMessage 
before this period has elapsed. Moreover, this premature node ping also relies on 
the time of the last received message. Imagine: if node 2 receives no message from 
node 1 within some time, it decides to ping node 3 early instead of waiting for the 
regular ping interval. Such behavior is confusing and gives no additional guarantees.
4) If #3 happens, the node logs at INFO level: “Local node seems to be disconnected 
from topology …” whereas it is not actually disconnected. A user can see this 
message if failureDetectionTimeout is set below 500 ms. An INFO message saying the 
node might be disconnected should not appear when everything is in fact OK; it 
reads as if there is network trouble.

Suggestions:
1) Base the connection check interval on failureDetectionTimeout or similar 
parameters.
2) Make the connection check interval rely on the common time of the last sent 
message, not on a dedicated timestamp.
3) Remove the additional, random, quickened connection checking.
4) Do not alarm the user with “Node disconnected” when everything is OK.


> Make node connection checking rely on the configuration. Simplify node ping 
> routine.
> 
>
> Key: IGNITE-13012
> URL: https://issues.apache.org/jira/browse/IGNITE-13012
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
>
> Current node-to-node connection checking has several drawbacks:
> 1) The minimal connection checking interval is not bound to the failure detection 
> parameters: 
> static int ServerImpl.CON_CHECK_INTERVAL = 500;
> 2)

[jira] [Updated] (IGNITE-13012) Make node connection checking rely on the configuration. Simplify node ping routine.

2020-05-14 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13012:
--
Labels: iep-45  (was: )

> Make node connection checking rely on the configuration. Simplify node ping 
> routine.
> 
>
> Key: IGNITE-13012
> URL: https://issues.apache.org/jira/browse/IGNITE-13012
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
>
> Current node-to-node connection checking has several drawbacks:
> 1) The minimal connection checking interval is not bound to the failure detection 
> parameters: 
> static int ServerImpl.CON_CHECK_INTERVAL = 500;
> 2) Connection checking is implemented as periodic message sending 
> (TcpDiscoveryConnectionCheckMessage). It is bound to its own timestamp 
> (ServerImpl.RingMessageWorker.lastTimeConnCheckMsgSent), not to the common time 
> of the last sent message. This is odd because any discovery message actually 
> checks the connection; TcpDiscoveryConnectionCheckMessage is only an addition for 
> the case when the message queue is empty for a long time.
> 3) The period of node-to-node connection checking can sometimes be shortened for 
> a strange reason: when no message has been sent or received within 
> failureDetectionTimeout. Here, despite having a minimal connection checking 
> period (ServerImpl.CON_CHECK_INTERVAL), we can also send 
> TcpDiscoveryConnectionCheckMessage before this period has elapsed. Moreover, 
> this premature node ping also relies on the time of the last received message. 
> Imagine: if node 2 receives no message from node 1 within some time, it decides 
> to ping node 3 early instead of waiting for the regular ping interval. Such 
> behavior is confusing and gives no additional guarantees.
> 4) If #3 happens, the node logs at INFO level: “Local node seems to be 
> disconnected from topology …” whereas it is not actually disconnected. A user 
> can see this message if failureDetectionTimeout is set below 500 ms. An INFO 
> message saying the node might be disconnected should not appear when everything 
> is in fact OK; it reads as if there is network trouble.
> Suggestions:
> 1) Base the connection check interval on failureDetectionTimeout or similar 
> parameters.
> 2) Make the connection check interval rely on the common time of the last sent 
> message, not on a dedicated timestamp.
> 3) Remove the additional, random, quickened connection checking.
> 4) Do not alarm the user with “Node disconnected” when everything is OK.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13012) Make node connection checking rely on the configuration. Simplify node ping routine.

2020-05-14 Thread Vladimir Steshin (Jira)
Vladimir Steshin created IGNITE-13012:
-

 Summary: Make node connection checking rely on the configuration. 
Simplify node ping routine.
 Key: IGNITE-13012
 URL: https://issues.apache.org/jira/browse/IGNITE-13012
 Project: Ignite
  Issue Type: Improvement
Reporter: Vladimir Steshin
Assignee: Vladimir Steshin



Current node-to-node connection checking has several drawbacks:
1) The minimal connection checking interval is not bound to the failure detection 
parameters: 
static int ServerImpl.CON_CHECK_INTERVAL = 500;
2) Connection checking is implemented as periodic message sending 
(TcpDiscoveryConnectionCheckMessage). It is bound to its own timestamp 
(ServerImpl.RingMessageWorker.lastTimeConnCheckMsgSent), not to the common time of 
the last sent message. This is odd because any discovery message actually checks 
the connection; TcpDiscoveryConnectionCheckMessage is only an addition for the case 
when the message queue is empty for a long time.
3) The period of node-to-node connection checking can sometimes be shortened for a 
strange reason: when no message has been sent or received within 
failureDetectionTimeout. Here, despite having a minimal connection checking period 
(ServerImpl.CON_CHECK_INTERVAL), we can also send TcpDiscoveryConnectionCheckMessage 
before this period has elapsed. Moreover, this premature node ping also relies on 
the time of the last received message. Imagine: if node 2 receives no message from 
node 1 within some time, it decides to ping node 3 early instead of waiting for the 
regular ping interval. Such behavior is confusing and gives no additional guarantees.
4) If #3 happens, the node logs at INFO level: “Local node seems to be disconnected 
from topology …” whereas it is not actually disconnected. A user can see this 
message if failureDetectionTimeout is set below 500 ms. An INFO message saying the 
node might be disconnected should not appear when everything is in fact OK; it 
reads as if there is network trouble.

Suggestions:
1) Base the connection check interval on failureDetectionTimeout or similar 
parameters.
2) Make the connection check interval rely on the common time of the last sent 
message, not on a dedicated timestamp.
3) Remove the additional, random, quickened connection checking.
4) Do not alarm the user with “Node disconnected” when everything is OK.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-10100) Add public Java API to call Ignite.NET services

2020-05-14 Thread Ivan Daschinskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-10100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107354#comment-17107354
 ] 

Ivan Daschinskiy commented on IGNITE-10100:
---

[~daradurvs] Could you please make a review, if you have time? 

> Add public Java API to call Ignite.NET services
> ---
>
> Key: IGNITE-10100
> URL: https://issues.apache.org/jira/browse/IGNITE-10100
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Affects Versions: 2.6
>Reporter: Alexey Kukushkin
>Assignee: Ivan Daschinskiy
>Priority: Major
>  Labels: .NET, sbcf
> Fix For: 2.9
>
> Attachments: ignite-10100-vs-2.8.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Ignite wraps .NET services in PlatformDotNetServiceImpl, which implements the 
> PlatformService interface.
> PlatformService is defined in the internal Ignite package 
> org.apache.ignite.internal.processors.platform.services. It exposes
> {{ invokeMethod(methodName, Object[] params): Object}}
> to call any service method dynamically. Right now there is no public Ignite 
> API to call a PlatformService using static typing.
> We need to develop a public API to call PlatformDotNetServiceImpl using 
> static typing in Java.
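
One possible shape for such an API, sketched with a JDK dynamic proxy that delegates to 
the existing invokeMethod call. Only the invokeMethod signature comes from the ticket 
text above; every other name here is illustrative, not the eventual public API.

{code:java}
import java.lang.reflect.Proxy;

/** Stand-in for the internal service wrapper; signature taken from the ticket. */
interface PlatformServiceLike {
    Object invokeMethod(String mtdName, Object[] params) throws Exception;
}

final class PlatformServiceProxies {
    /**
     * Wraps a dynamic PlatformService-like object into a statically typed Java interface,
     * so callers use ordinary method calls instead of invokeMethod("name", args).
     */
    @SuppressWarnings("unchecked")
    static <T> T statically(Class<T> svcItf, PlatformServiceLike svc) {
        return (T)Proxy.newProxyInstance(
            svcItf.getClassLoader(),
            new Class<?>[] {svcItf},
            (proxy, mtd, args) -> svc.invokeMethod(mtd.getName(), args == null ? new Object[0] : args));
    }
}

// Usage sketch (MyDotNetService and rawPlatformSvc are hypothetical):
// MyDotNetService svc = PlatformServiceProxies.statically(MyDotNetService.class, rawPlatformSvc);
// svc.doWork("abc");
{code}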



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (IGNITE-12898) Server node with CacheStore fails to re-join the cluster: Cannot enable read-through (loader or store is not provided) for cache

2020-05-14 Thread Ivan Daschinskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinskiy updated IGNITE-12898:
--
Comment: was deleted

(was: [~daradurvs] Could you please make a review? )

> Server node with CacheStore fails to re-join the cluster: Cannot enable 
> read-through (loader or store is not provided) for cache
> 
>
> Key: IGNITE-12898
> URL: https://issues.apache.org/jira/browse/IGNITE-12898
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Alexey Kukushkin
>Assignee: Ivan Daschinskiy
>Priority: Major
>  Labels: sbcf
> Fix For: 2.9
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> If a cache with external persistence is dynamically created on a non-affinity 
> node then the cache affinity node cannot join the cluster after restart.
> h2. Repro Steps
>  # Run an "empty" Ignite node where no cache is going to be started
>  # Run a cache affinity node having the "ROLE" attribute set to "DATA"
>  # Create the cache from the "empty" node and use a Node Filter to limit the 
> cache to the "data" node. External persistence is configured for the cache.
>  # Restart the "data" node
> h3. Actual Result
> {{IgniteCheckedException: Cannot enable read-through (loader or store is not 
> provided) for cache}}
> h2. Reproducer
> h3. Reproducer.java
> {code:java}
> public class Reproducer {
> @Test
> public void test() throws Exception {
> final String DB_URL = "jdbc:h2:mem:test";
> final String ENTITY_NAME = "Person";
> Function<String, IgniteConfiguration> igniteCfgFactory = instanceName 
> ->
> new IgniteConfiguration()
> .setIgniteInstanceName(instanceName)
> .setDiscoverySpi(new TcpDiscoverySpi()
> .setIpFinder(new 
> TcpDiscoveryVmIpFinder().setAddresses(Collections.singleton("127.0.0.1:47500")))
> );
> // 1. Run an "empty" Ignite node where no cache is going to be started
> try (Connection dbConn = DriverManager.getConnection(DB_URL, "sa", 
> "");
>  Statement dbStmt = dbConn.createStatement();
>  Ignite emptyNode = 
> Ignition.start(igniteCfgFactory.apply("emptyyNode"))) {
> // 2. Run a "Person" cache affinity node having the "ROLE" 
> attribute set to "DATA"
> Map<String, String> dataNodeAttrs = new HashMap<>(1);
> dataNodeAttrs.put(DataNodeFilter.ATTR_NAME, 
> DataNodeFilter.ATTR_VAL);
> Ignite dataNode = 
> Ignition.start(igniteCfgFactory.apply("dataNode").setUserAttributes(dataNodeAttrs));
> // 3. Create the "Person" cache from the "empty" node and use a 
> Node Filter to limit the cache to the
> // "data" node. External persistence to the "Person" table in H2 
> DB is configured for the cache.
> dbStmt.execute("CREATE TABLE " + ENTITY_NAME + " (id int PRIMARY 
> KEY, name varchar)");
> CacheJdbcPojoStoreFactory 
> igniteStoreFactory = new CacheJdbcPojoStoreFactory<>();
> igniteStoreFactory.setDataSourceFactory(() -> 
> JdbcConnectionPool.create(DB_URL, "sa", ""))
> .setTypes(
> new JdbcType()
> .setCacheName(ENTITY_NAME)
> .setDatabaseTable(ENTITY_NAME)
> .setKeyType(Integer.class)
> .setValueType(ENTITY_NAME)
> .setKeyFields(new 
> JdbcTypeField(java.sql.Types.INTEGER, "id", Integer.class, "id"))
> .setValueFields(
> new JdbcTypeField(java.sql.Types.INTEGER, "id", 
> Integer.class, "id"),
> new JdbcTypeField(java.sql.Types.VARCHAR, "name", 
> String.class, "name")
> )
> );
> CacheConfiguration cacheCfg =
> new CacheConfiguration(ENTITY_NAME)
> .setCacheMode(CacheMode.REPLICATED)
> .setCacheStoreFactory(igniteStoreFactory)
> .setWriteThrough(true)
> .setReadThrough(true)
> .setNodeFilter(new DataNodeFilter());
> emptyNode.createCache(cacheCfg).withKeepBinary();
> // 4. Restart the "data" node
> dataNode.close();
> dataNode = 
> Ignition.start(igniteCfgFactory.apply("node2").setUserAttributes(dataNodeAttrs));
> dataNode.close();
> }
> }
> private static class DataNodeFilter implements 
> IgnitePredicate<ClusterNode> {
> public static final String ATTR_NAME = "ROLE";
> public static final String ATTR_VAL = "DATA";
> @Override public boolea

[jira] [Updated] (IGNITE-13011) .NET: Thin client Kubernetes discovery

2020-05-14 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-13011:

Remaining Estimate: 48h
 Original Estimate: 48h

> .NET: Thin client Kubernetes discovery
> --
>
> Key: IGNITE-13011
> URL: https://issues.apache.org/jira/browse/IGNITE-13011
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Thin clients should be able to discover servers from within a Kubernetes pod 
> through the k8s API, without specifying any IP addresses.
> E.g. we can retrieve the pod list from within the pod like this:
> {code}
> curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H 
> "Authorization: Bearer $(cat 
> /var/run/secrets/kubernetes.io/serviceaccount/token)" 
> https://kubernetes.default.svc/api/v1/namespaces/MY_NAMESPACE/pods
> {code}
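
The ticket targets the .NET thin client, but for illustration here is a self-contained 
Java sketch that performs the same in-cluster REST call as the curl command above (the 
MY_NAMESPACE placeholder is kept as-is); a real discovery implementation would parse 
the pod IPs out of the returned JSON to build the server endpoint list.

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class KubernetesPodsSketch {
    public static void main(String[] args) throws Exception {
        Path sa = Path.of("/var/run/secrets/kubernetes.io/serviceaccount");
        String token = Files.readString(sa.resolve("token")).trim();

        // Trust the in-cluster CA that signs the API server certificate (ca.crt above).
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null);
        try (var in = Files.newInputStream(sa.resolve("ca.crt"))) {
            ks.setCertificateEntry("k8s-ca", cf.generateCertificate(in));
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ks);
        SSLContext ssl = SSLContext.getInstance("TLS");
        ssl.init(null, tmf.getTrustManagers(), null);

        // Same request as the curl example; the response body is printed as raw JSON here.
        HttpRequest req = HttpRequest.newBuilder()
            .uri(URI.create("https://kubernetes.default.svc/api/v1/namespaces/MY_NAMESPACE/pods"))
            .header("Authorization", "Bearer " + token)
            .build();

        HttpResponse<String> res = HttpClient.newBuilder().sslContext(ssl).build()
            .send(req, HttpResponse.BodyHandlers.ofString());

        System.out.println(res.body());
    }
}
{code}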



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13011) .NET: Thin client Kubernetes discovery

2020-05-14 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-13011:

Description: 
Thin clients should be able to discover servers from within a Kubernetes pod 
through the k8s API, without specifying any IP addresses.

E.g. we can retrieve the pod list from within the pod like this:
{code}
curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H 
"Authorization: Bearer $(cat 
/var/run/secrets/kubernetes.io/serviceaccount/token)" 
https://kubernetes.default.svc/api/v1/namespaces/MY_NAMESPACE/pods
{code}

  was:Thin clients should be able to discover servers from within a Kubernetes 
pod through the k8s API, without specifying any IP addresses.


> .NET: Thin client Kubernetes discovery
> --
>
> Key: IGNITE-13011
> URL: https://issues.apache.org/jira/browse/IGNITE-13011
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET
>
> Thin clients should be able to discover servers from within a Kubernetes pod 
> through the k8s API, without specifying any IP addresses.
> E.g. we can retrieve the pod list from within the pod like this:
> {code}
> curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H 
> "Authorization: Bearer $(cat 
> /var/run/secrets/kubernetes.io/serviceaccount/token)" 
> https://kubernetes.default.svc/api/v1/namespaces/MY_NAMESPACE/pods
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13011) .NET: Thin client Kubernetes discovery

2020-05-14 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-13011:
---

 Summary: .NET: Thin client Kubernetes discovery
 Key: IGNITE-13011
 URL: https://issues.apache.org/jira/browse/IGNITE-13011
 Project: Ignite
  Issue Type: New Feature
  Components: platforms
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn


Thin clients should be able to discover servers from within a Kubernetes pod 
through the k8s API, without specifying any IP addresses.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12898) Server node with CacheStore fails to re-join the cluster: Cannot enable read-through (loader or store is not provided) for cache

2020-05-14 Thread Ivan Daschinskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107304#comment-17107304
 ] 

Ivan Daschinskiy commented on IGNITE-12898:
---

[~daradurvs] Could you please make a review? 

> Server node with CacheStore fails to re-join the cluster: Cannot enable 
> read-through (loader or store is not provided) for cache
> 
>
> Key: IGNITE-12898
> URL: https://issues.apache.org/jira/browse/IGNITE-12898
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Alexey Kukushkin
>Assignee: Ivan Daschinskiy
>Priority: Major
>  Labels: sbcf
> Fix For: 2.9
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> If a cache with external persistence is dynamically created on a non-affinity 
> node then the cache affinity node cannot join the cluster after restart.
> h2. Repro Steps
>  # Run an "empty" Ignite node where no cache is going to be started
>  # Run a cache affinity node having the "ROLE" attribute set to "DATA"
>  # Create the cache from the "empty" node and use a Node Filter to limit the 
> cache to the "data" node. External persistence is configured for the cache.
>  # Restart the "data" node
> h3. Actual Result
> {{IgniteCheckedException: Cannot enable read-through (loader or store is not 
> provided) for cache}}
> h2. Reproducer
> h3. Reproducer.java
> {code:java}
> public class Reproducer {
> @Test
> public void test() throws Exception {
> final String DB_URL = "jdbc:h2:mem:test";
> final String ENTITY_NAME = "Person";
> Function<String, IgniteConfiguration> igniteCfgFactory = instanceName 
> ->
> new IgniteConfiguration()
> .setIgniteInstanceName(instanceName)
> .setDiscoverySpi(new TcpDiscoverySpi()
> .setIpFinder(new 
> TcpDiscoveryVmIpFinder().setAddresses(Collections.singleton("127.0.0.1:47500")))
> );
> // 1. Run an "empty" Ignite node where no cache is going to be started
> try (Connection dbConn = DriverManager.getConnection(DB_URL, "sa", 
> "");
>  Statement dbStmt = dbConn.createStatement();
>  Ignite emptyNode = 
> Ignition.start(igniteCfgFactory.apply("emptyyNode"))) {
> // 2. Run a "Person" cache affinity node having the "ROLE" 
> attribute set to "DATA"
> Map<String, String> dataNodeAttrs = new HashMap<>(1);
> dataNodeAttrs.put(DataNodeFilter.ATTR_NAME, 
> DataNodeFilter.ATTR_VAL);
> Ignite dataNode = 
> Ignition.start(igniteCfgFactory.apply("dataNode").setUserAttributes(dataNodeAttrs));
> // 3. Create the "Person" cache from the "empty" node and use a 
> Node Filter to limit the cache to the
> // "data" node. External persistence to the "Person" table in H2 
> DB is configured for the cache.
> dbStmt.execute("CREATE TABLE " + ENTITY_NAME + " (id int PRIMARY 
> KEY, name varchar)");
> CacheJdbcPojoStoreFactory 
> igniteStoreFactory = new CacheJdbcPojoStoreFactory<>();
> igniteStoreFactory.setDataSourceFactory(() -> 
> JdbcConnectionPool.create(DB_URL, "sa", ""))
> .setTypes(
> new JdbcType()
> .setCacheName(ENTITY_NAME)
> .setDatabaseTable(ENTITY_NAME)
> .setKeyType(Integer.class)
> .setValueType(ENTITY_NAME)
> .setKeyFields(new 
> JdbcTypeField(java.sql.Types.INTEGER, "id", Integer.class, "id"))
> .setValueFields(
> new JdbcTypeField(java.sql.Types.INTEGER, "id", 
> Integer.class, "id"),
> new JdbcTypeField(java.sql.Types.VARCHAR, "name", 
> String.class, "name")
> )
> );
> CacheConfiguration cacheCfg =
> new CacheConfiguration(ENTITY_NAME)
> .setCacheMode(CacheMode.REPLICATED)
> .setCacheStoreFactory(igniteStoreFactory)
> .setWriteThrough(true)
> .setReadThrough(true)
> .setNodeFilter(new DataNodeFilter());
> emptyNode.createCache(cacheCfg).withKeepBinary();
> // 4. Restart the "data" node
> dataNode.close();
> dataNode = 
> Ignition.start(igniteCfgFactory.apply("node2").setUserAttributes(dataNodeAttrs));
> dataNode.close();
> }
> }
> private static class DataNodeFilter implements 
> IgnitePredicate<ClusterNode> {
> public static final String ATTR_NAME = "ROLE";
> public static final String ATTR_VAL = "DATA";
>   

[jira] [Commented] (IGNITE-12399) Java thin client: add cache expiry policies

2020-05-14 Thread Aleksey Plekhanov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107256#comment-17107256
 ] 

Aleksey Plekhanov commented on IGNITE-12399:


[~isapego], I agree.

> Java thin client: add cache expiry policies
> ---
>
> Key: IGNITE-12399
> URL: https://issues.apache.org/jira/browse/IGNITE-12399
> Project: Ignite
>  Issue Type: New Feature
>  Components: thin client
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: thin
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement:
>  * Dynamic cache creation with expire policy.
>  * Put data into the cache with expire policy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (IGNITE-12399) Java thin client: add cache expiry policies

2020-05-14 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107247#comment-17107247
 ] 

Igor Sapego edited comment on IGNITE-12399 at 5/14/20, 12:14 PM:
-

[~alex_pl], there is a mistake in the public API method name: *withExpirePolicy* 
must be *withExpiryPolicy*. I propose to introduce the right version and 
deprecate the invalid one. WDYT?


was (Author: isapego):
[~alex_pl], there is a mistake in the public API method name: *withExpir+e+Policy* 
must be *withExpir+y+Policy*. I propose to introduce the right version and 
deprecate the invalid one. WDYT?

> Java thin client: add cache expiry policies
> ---
>
> Key: IGNITE-12399
> URL: https://issues.apache.org/jira/browse/IGNITE-12399
> Project: Ignite
>  Issue Type: New Feature
>  Components: thin client
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: thin
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement:
>  * Dynamic cache creation with expire policy.
>  * Put data into the cache with expire policy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (IGNITE-12399) Java thin client: add cache expiry policies

2020-05-14 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107247#comment-17107247
 ] 

Igor Sapego edited comment on IGNITE-12399 at 5/14/20, 12:14 PM:
-

[~alex_pl], there is a mistake in the public API method name: *withExpir+e+Policy* 
must be *withExpir+y+Policy*. I propose to introduce the right version and 
deprecate the invalid one. WDYT?


was (Author: isapego):
[~alex_pl], there is a mistake in the public API method name: withExpirePolicy must 
be withExpiryPolicy. I propose to introduce the right version and deprecate the 
invalid one. WDYT?

> Java thin client: add cache expiry policies
> ---
>
> Key: IGNITE-12399
> URL: https://issues.apache.org/jira/browse/IGNITE-12399
> Project: Ignite
>  Issue Type: New Feature
>  Components: thin client
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: thin
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement:
>  * Dynamic cache creation with expire policy.
>  * Put data into the cache with expire policy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (IGNITE-12399) Java thin client: add cache expiry policies

2020-05-14 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107247#comment-17107247
 ] 

Igor Sapego edited comment on IGNITE-12399 at 5/14/20, 12:14 PM:
-

[~alex_pl], there is a mistake in the public API method name: withExpirePolicy must 
be withExpiryPolicy. I propose to introduce the right version and deprecate the 
invalid one. WDYT?


was (Author: isapego):
[~alex_pl], there is a mistake in the public API method name: withExpir*e*Policy 
must be withExpir*y*Policy. I propose to introduce the right version and 
deprecate the invalid one. WDYT?

> Java thin client: add cache expiry policies
> ---
>
> Key: IGNITE-12399
> URL: https://issues.apache.org/jira/browse/IGNITE-12399
> Project: Ignite
>  Issue Type: New Feature
>  Components: thin client
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: thin
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement:
>  * Dynamic cache creation with expire policy.
>  * Put data into the cache with expire policy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12399) Java thin client: add cache expiry policies

2020-05-14 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107247#comment-17107247
 ] 

Igor Sapego commented on IGNITE-12399:
--

[~alex_pl], there is a mistake in the public API method name: withExpir*e*Policy 
must be withExpir*y*Policy. I propose to introduce the right version and 
deprecate the invalid one. WDYT?
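
A sketch of the proposed change (the interface below is a trimmed stand-in for the thin 
client cache API, for illustration only, not the actual Ignite source): introduce the 
correctly spelled method and keep the old one as a deprecated delegate.

{code:java}
import javax.cache.expiry.ExpiryPolicy;

/** Trimmed stand-in for the thin client cache interface. */
interface ClientCacheSketch<K, V> {
    /** Correctly spelled variant to be introduced. */
    ClientCacheSketch<K, V> withExpiryPolicy(ExpiryPolicy expirePlc);

    /**
     * Misspelled variant kept for compatibility.
     * @deprecated Use {@link #withExpiryPolicy(ExpiryPolicy)} instead.
     */
    @Deprecated
    default ClientCacheSketch<K, V> withExpirePolicy(ExpiryPolicy expirePlc) {
        return withExpiryPolicy(expirePlc);
    }
}
{code}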

> Java thin client: add cache expiry policies
> ---
>
> Key: IGNITE-12399
> URL: https://issues.apache.org/jira/browse/IGNITE-12399
> Project: Ignite
>  Issue Type: New Feature
>  Components: thin client
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: thin
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement:
>  * Dynamic cache creation with expire policy.
>  * Put data into the cache with expire policy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (IGNITE-12898) Server node with CacheStore fails to re-join the cluster: Cannot enable read-through (loader or store is not provided) for cache

2020-05-14 Thread Ivan Daschinskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107237#comment-17107237
 ] 

Ivan Daschinskiy edited comment on IGNITE-12898 at 5/14/20, 11:55 AM:
--

The main reason for the failure: if the coordinator is not an affinity node, it saves 
the descriptor with a partially deserialized CacheConfiguration plus the remaining 
serialized part. Before the fix, when an affinity node joined, the coordinator sent a 
CacheConfiguration with a missing CacheStore, because it built it from the reduced 
CacheConfiguration rather than from the saved binary data. So validation of the 
CacheConfiguration failed on the joining node and the node failed to start. 

I propose sending the previously saved binary data, which is correct, to the joining node. 

The workaround for the current version is the same as [~kukushal] suggests.


was (Author: ivandasch):
The main reason for the failure: if the coordinator is not an affinity node, it saves 
the descriptor with a partially deserialized CacheConfiguration plus the remaining 
serialized part. Before the fix, when an affinity node joined, the coordinator sent a 
CacheConfiguration with a missing CacheStore, because it built it from the reduced 
CacheConfiguration rather than from the saved binary data. So validation of the 
CacheConfiguration failed on the joining node and the node failed to start. 

I propose sending the previously saved binary data, which is correct, to the joining node. 

> Server node with CacheStore fails to re-join the cluster: Cannot enable 
> read-through (loader or store is not provided) for cache
> 
>
> Key: IGNITE-12898
> URL: https://issues.apache.org/jira/browse/IGNITE-12898
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Alexey Kukushkin
>Assignee: Ivan Daschinskiy
>Priority: Major
>  Labels: sbcf
> Fix For: 2.9
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> If a cache with external persistence is dynamically created on a non-affinity 
> node then the cache affinity node cannot join the cluster after restart.
> h2. Repro Steps
>  # Run an "empty" Ignite node where no cache is going to be started
>  # Run a cache affinity node having the "ROLE" attribute set to "DATA"
>  # Create the cache from the "empty" node and use a Node Filter to limit the 
> cache to the "data" node. External persistence is configured for the cache.
>  # Restart the "data" node
> h3. Actual Result
> {{IgniteCheckedException: Cannot enable read-through (loader or store is not 
> provided) for cache}}
> h2. Reproducer
> h3. Reproducer.java
> {code:java}
> public class Reproducer {
> @Test
> public void test() throws Exception {
> final String DB_URL = "jdbc:h2:mem:test";
> final String ENTITY_NAME = "Person";
> Function<String, IgniteConfiguration> igniteCfgFactory = instanceName 
> ->
> new IgniteConfiguration()
> .setIgniteInstanceName(instanceName)
> .setDiscoverySpi(new TcpDiscoverySpi()
> .setIpFinder(new 
> TcpDiscoveryVmIpFinder().setAddresses(Collections.singleton("127.0.0.1:47500")))
> );
> // 1. Run an "empty" Ignite node where no cache is going to be started
> try (Connection dbConn = DriverManager.getConnection(DB_URL, "sa", 
> "");
>  Statement dbStmt = dbConn.createStatement();
>  Ignite emptyNode = 
> Ignition.start(igniteCfgFactory.apply("emptyyNode"))) {
> // 2. Run a "Person" cache affinity node having the "ROLE" 
> attribute set to "DATA"
> Map<String, String> dataNodeAttrs = new HashMap<>(1);
> dataNodeAttrs.put(DataNodeFilter.ATTR_NAME, 
> DataNodeFilter.ATTR_VAL);
> Ignite dataNode = 
> Ignition.start(igniteCfgFactory.apply("dataNode").setUserAttributes(dataNodeAttrs));
> // 3. Create the "Person" cache from the "empty" node and use a 
> Node Filter to limit the cache to the
> // "data" node. External persistence to the "Person" table in H2 
> DB is configured for the cache.
> dbStmt.execute("CREATE TABLE " + ENTITY_NAME + " (id int PRIMARY 
> KEY, name varchar)");
> CacheJdbcPojoStoreFactory 
> igniteStoreFactory = new CacheJdbcPojoStoreFactory<>();
> igniteStoreFactory.setDataSourceFactory(() -> 
> JdbcConnectionPool.create(DB_URL, "sa", ""))
> .setTypes(
> new JdbcType()
> .setCacheName(ENTITY_NAME)
> .setDatabaseTable(ENTITY_NAME)
> .setKeyType(Integer.class)
> .setValueType(ENTITY_NAME)
> .setKeyFields(new 
> JdbcTypeField(java.sql.Types.INTEGER, "id", Integer.class, "id"))
> .setValueFields(
>

[jira] [Commented] (IGNITE-12898) Server node with CacheStore fails to re-join the cluster: Cannot enable read-through (loader or store is not provided) for cache

2020-05-14 Thread Ivan Daschinskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107237#comment-17107237
 ] 

Ivan Daschinskiy commented on IGNITE-12898:
---

The main reason for the failure: if the coordinator is not an affinity node, it 
saves a descriptor with a partially deserialized CacheConfiguration and only a 
small serialized part. Before the fix, when an affinity node joined, the 
coordinator sent a CacheConfiguration with a missing CacheStore, because it 
obtained it not from the saved binary data but from the reduced 
CacheConfiguration. As a result, validation of the CacheConfiguration failed on 
the joining node and the node failed to join.

I proposed sending the previously saved binary data to the joining node, which 
is correct.
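
For reference, the node filter used in the reproducer below is not attached to 
the ticket; a minimal sketch of what it might look like is shown here. The class 
shape is an assumption; only the "ROLE"/"DATA" attribute and the 
{{ATTR_NAME}}/{{ATTR_VAL}} constants come from the reproducer itself.

{code:java}
// Hypothetical sketch of the node filter referenced in the reproducer.
// Only nodes carrying the "ROLE"="DATA" attribute host the cache.
public class DataNodeFilter implements IgnitePredicate<ClusterNode> {
    public static final String ATTR_NAME = "ROLE";
    public static final String ATTR_VAL = "DATA";

    @Override public boolean apply(ClusterNode node) {
        return ATTR_VAL.equals(node.attribute(ATTR_NAME));
    }
}
{code}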

> Server node with CacheStore fails to re-join the cluster: Cannot enable 
> read-through (loader or store is not provided) for cache
> 
>
> Key: IGNITE-12898
> URL: https://issues.apache.org/jira/browse/IGNITE-12898
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Alexey Kukushkin
>Assignee: Ivan Daschinskiy
>Priority: Major
>  Labels: sbcf
> Fix For: 2.9
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> If a cache with external persistence is dynamically created on a non-affinity 
> node then the cache affinity node cannot join the cluster after restart.
> h2. Repro Steps
>  # Run an "empty" Ignite node where no cache is going to be started
>  # Run a cache affinity node having the "ROLE" attribute set to "DATA"
>  # Create the cache from the "empty" node and use a Node Filter to limit the 
> cache to the "data" node. External persistence is configured for the cache.
>  # Restart the "data" node
> h3. Actual Result
> {{IgniteCheckedException: Cannot enable read-through (loader or store is not 
> provided) for cache}}
> h2. Reproducer
> h3. Reproducer.java
> {code:java}
> public class Reproducer {
> @Test
> public void test() throws Exception {
> final String DB_URL = "jdbc:h2:mem:test";
> final String ENTITY_NAME = "Person";
> Function igniteCfgFactory = instanceName 
> ->
> new IgniteConfiguration()
> .setIgniteInstanceName(instanceName)
> .setDiscoverySpi(new TcpDiscoverySpi()
> .setIpFinder(new 
> TcpDiscoveryVmIpFinder().setAddresses(Collections.singleton("127.0.0.1:47500")))
> );
> // 1. Run an "empty" Ignite node where no cache is going to be started
> try (Connection dbConn = DriverManager.getConnection(DB_URL, "sa", 
> "");
>  Statement dbStmt = dbConn.createStatement();
>  Ignite emptyNode = 
> Ignition.start(igniteCfgFactory.apply("emptyyNode"))) {
> // 2. Run a "Person" cache affinity node having the "ROLE" 
> attribute set to "DATA"
> Map dataNodeAttrs = new HashMap<>(1);
> dataNodeAttrs.put(DataNodeFilter.ATTR_NAME, 
> DataNodeFilter.ATTR_VAL);
> Ignite dataNode = 
> Ignition.start(igniteCfgFactory.apply("dataNode").setUserAttributes(dataNodeAttrs));
> // 3. Create the "Person" cache from the "empty" node and use a 
> Node Filter to limit the cache to the
> // "data" node. External persistence to the "Person" table in H2 
> DB is configured for the cache.
> dbStmt.execute("CREATE TABLE " + ENTITY_NAME + " (id int PRIMARY 
> KEY, name varchar)");
> CacheJdbcPojoStoreFactory 
> igniteStoreFactory = new CacheJdbcPojoStoreFactory<>();
> igniteStoreFactory.setDataSourceFactory(() -> 
> JdbcConnectionPool.create(DB_URL, "sa", ""))
> .setTypes(
> new JdbcType()
> .setCacheName(ENTITY_NAME)
> .setDatabaseTable(ENTITY_NAME)
> .setKeyType(Integer.class)
> .setValueType(ENTITY_NAME)
> .setKeyFields(new 
> JdbcTypeField(java.sql.Types.INTEGER, "id", Integer.class, "id"))
> .setValueFields(
> new JdbcTypeField(java.sql.Types.INTEGER, "id", 
> Integer.class, "id"),
> new JdbcTypeField(java.sql.Types.VARCHAR, "name", 
> String.class, "name")
> )
> );
> CacheConfiguration cacheCfg =
> new CacheConfiguration(ENTITY_NAME)
> .setCacheMode(CacheMode.REPLICATED)
> .setCacheStoreFactory(igniteStoreFactory)
> .setWriteThrough(true)
> .setReadThrough(true)
> .setNodeFilter(new DataNodeFilter());
> emptyNode.createCache(cacheC

[jira] [Commented] (IGNITE-12886) Introduce separate SQL configuration

2020-05-14 Thread Taras Ledkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107229#comment-17107229
 ] 

Taras Ledkov commented on IGNITE-12886:
---

[~amashenkov], I've fixed the minor javadoc issues and replied about the thread 
pool property.
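
For illustration, usage of the proposed class might look roughly like the sketch 
below. The {{SqlConfiguration}} setter names here are assumptions that simply 
mirror existing SQL-related {{IgniteConfiguration}} properties; the final API is 
defined by the patch.

{code:java}
// Hedged sketch: SqlConfiguration and its setters are assumed here, mirroring
// SQL-related properties that currently sit on IgniteConfiguration.
IgniteConfiguration cfg = new IgniteConfiguration()
    .setSqlConfiguration(new SqlConfiguration()
        .setSqlQueryHistorySize(100)
        .setLongQueryWarningTimeout(5_000));

Ignite ignite = Ignition.start(cfg);
{code}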

> Introduce separate SQL configuration
> 
>
> Key: IGNITE-12886
> URL: https://issues.apache.org/jira/browse/IGNITE-12886
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> A lot of SQL-related configuration parameters are placed at the root of the 
> {{IgniteConfiguration}}.
> It would be better to move them to a separate configuration class, e.g. 
> {{SqlConfiguration}}.
> Thread on [Ignite developers 
> list|http://apache-ignite-developers.2346864.n4.nabble.com/Introduce-separate-SQL-configuration-td46636.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12938) control.sh utility commands: IdleVerify and ValidateIndexes use eventual payload check.

2020-05-14 Thread Stanilovsky Evgeny (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanilovsky Evgeny updated IGNITE-12938:

Description: 
{noformat}
"--cache idle_verify" and "--cache validate_indexes"
{noformat}
 commands of the *control.sh* utility use an eventual payload check during 
execution. This can lead to them running concurrently with an active payload, and 
no errors like "Checkpoint with dirty pages started! Cluster not idle" will be 
triggered. Additionally, the current functionality misses the check on caches 
without persistence. Remove the old PageMemory-based functionality and move the 
check to update counters. Running these checks with an active rebalance or an 
active payload may give erroneous results when out-of-order update messages are 
processed or some gaps eventually arise, more info in [1]. This fix covers such 
problems.

[1] https://cwiki.apache.org/confluence/display/IGNITE/Data+consistency 
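
For context, the checks in question are invoked as shown below (base command 
forms only, taken from the command names above; additional flags are not shown):
{noformat}
control.sh --cache idle_verify
control.sh --cache validate_indexes
{noformat}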

  was:"--cache idle_verify" and "--cache validate_indexes" commands of 
*control.sh*  utility use eventual payload check during  execution. This can 
lead to execution concurrently with active payload and no errors like : 
"Checkpoint with dirty pages started! Cluster not idle"  will be triggered. 
Additionally current functional miss check on caches without persistence.  
Remove old functionality from PageMemory and move it into update counters usage.


> control.sh utility commands: IdleVerify and ValidateIndexes use eventual 
> payload check.
> ---
>
> Key: IGNITE-12938
> URL: https://issues.apache.org/jira/browse/IGNITE-12938
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.8
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {noformat}
> "--cache idle_verify" and "--cache validate_indexes"
> {noformat}
>  commands of the *control.sh* utility use an eventual payload check during 
> execution. This can lead to them running concurrently with an active payload, 
> and no errors like "Checkpoint with dirty pages started! Cluster not idle" will 
> be triggered. Additionally, the current functionality misses the check on caches 
> without persistence. Remove the old PageMemory-based functionality and move the 
> check to update counters. Running these checks with an active rebalance or an 
> active payload may give erroneous results when out-of-order update messages are 
> processed or some gaps eventually arise, more info in [1]. This fix covers such 
> problems.
> [1] https://cwiki.apache.org/confluence/display/IGNITE/Data+consistency 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12891) Add userAttributes map to all GridClient messages

2020-05-14 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107185#comment-17107185
 ] 

Ignite TC Bot commented on IGNITE-12891:


{panel:title=Branch: [pull/7671/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=5303780&buildTypeId=IgniteTests24Java8_RunAll]

> Add userAttributes map to all GridClient messages
> -
>
> Key: IGNITE-12891
> URL: https://issues.apache.org/jira/browse/IGNITE-12891
> Project: Ignite
>  Issue Type: Bug
>Reporter: Oleg Ostanin
>Assignee: Oleg Ostanin
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Currently we send the userAttributes map only in the GridClient TOPOLOGY 
> message. In some circumstances this can lead to an authentication 
> failure.
> Reproducer:
> https://github.com/oleg-ostanin/ignite/blob/gridclient-fail-reproducer/modules/core/src/test/java/org/apache/ignite/internal/processors/security/client/AdditionalSecurityCheckGridClientTest.java



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12964) Java thin client: implement cluster group API

2020-05-14 Thread Ivan Daschinskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107171#comment-17107171
 ] 

Ivan Daschinskiy commented on IGNITE-12964:
---

[~alex_pl] Great job! After review, this contribution looks good to me. Thank 
you for the contribution!
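
For illustration, client-side usage might look roughly like the sketch below. 
The {{cluster()}}/{{forServers()}}/{{nodes()}} calls are assumptions based on 
the ticket's goal of mirroring the thick-client {{ClusterGroup}} API, not the 
final API.

{code:java}
// Sketch only: the cluster-group calls below are assumed to mirror
// the thick-client ClusterGroup API described in the ticket.
ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");

try (IgniteClient client = Ignition.startClient(cfg)) {
    Collection<ClusterNode> srvNodes = client.cluster().forServers().nodes();

    for (ClusterNode node : srvNodes)
        System.out.println(node.id());
}
{code}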

> Java thin client: implement cluster group API
> -
>
> Key: IGNITE-12964
> URL: https://issues.apache.org/jira/browse/IGNITE-12964
> Project: Ignite
>  Issue Type: New Feature
>  Components: thin client
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Implement an API for node filtering on the Java thin client side. The thin-client 
> API should match the thick-client API ({{ClusterGroup}}, {{ClusterNode}} classes) 
> as much as possible. 
> The already implemented server-side thin-client operations 
> {{OP_CLUSTER_GROUP_GET_NODE_IDS}} and {{OP_CLUSTER_GROUP_GET_NODE_INFO}} should 
> be used. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12886) Introduce separate SQL configuration

2020-05-14 Thread Andrey Mashenkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107163#comment-17107163
 ] 

Andrey Mashenkov commented on IGNITE-12886:
---

[~tledkov-gridgain], I've left a few comments on the PR.

> Introduce separate SQL configuration
> 
>
> Key: IGNITE-12886
> URL: https://issues.apache.org/jira/browse/IGNITE-12886
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> A lot of SQL-related configuration parameters are placed at the root of the 
> {{IgniteConfiguration}}.
> It would be better to move them to a separate configuration class, e.g. 
> {{SqlConfiguration}}.
> Thread on [Ignite developers 
> list|http://apache-ignite-developers.2346864.n4.nabble.com/Introduce-separate-SQL-configuration-td46636.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13010) A local listener for cache events with type EVT_CACHE_STOPPED does not get a cache event from a remote node.

2020-05-14 Thread Denis Garus (Jira)
Denis Garus created IGNITE-13010:


 Summary: A local listener for cache events with type 
EVT_CACHE_STOPPED does not get a cache event from a remote node.
 Key: IGNITE-13010
 URL: https://issues.apache.org/jira/browse/IGNITE-13010
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.8
Reporter: Denis Garus


A local listener for cache events with type EVT_CACHE_STOPPED does not get a 
cache event from a remote node. 
That occurs due to an NPE on the remote node:
{code:java}
[2020-05-14 
12:07:25,623][ERROR][sys-#206%security.NpeGridEventConsumeHandlerReproducer2%][GridEventConsumeHandler]
 Failed to send event notification to node: 55671ec1-dad9-452b-8ab2-4b7916c0
java.lang.NullPointerException
	at org.apache.ignite.internal.GridEventConsumeHandler$2$1.run(GridEventConsumeHandler.java:238)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
{code}
The reproducer:


{code:java}
public class NpeGridEventConsumeHandlerReproducer extends GridCommonAbstractTest {
    private static AtomicInteger rmtCounter = new AtomicInteger();
    private static AtomicInteger locCounter = new AtomicInteger();

    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
        return super.getConfiguration(igniteInstanceName).setIncludeEventTypes(EVT_CACHE_STOPPED);
    }

    @Test
    public void test() throws Exception {
        startGrids(3);

        grid(1).createCache(new CacheConfiguration<>("test_cache"));

        grid(0).events().remoteListen((uuid, evt) -> {
            locCounter.incrementAndGet();
            return true;
        }, evt -> {
            rmtCounter.incrementAndGet();
            return true;
        }, EVT_CACHE_STOPPED);

        grid(1).destroyCache("test_cache");

        TimeUnit.SECONDS.sleep(10);

        assertEquals(rmtCounter.get(), locCounter.get());
    }
}
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12886) Introduce separate SQL configuration

2020-05-14 Thread Taras Ledkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107104#comment-17107104
 ] 

Taras Ledkov commented on IGNITE-12886:
---

[~korlov], [~amashenkov], please review the patch.

> Introduce separate SQL configuration
> 
>
> Key: IGNITE-12886
> URL: https://issues.apache.org/jira/browse/IGNITE-12886
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A lot of SQL-related configuration parameters are placed at the root of the 
> {{IgniteConfiguration}}.
> It would be better to move them to a separate configuration class, e.g. 
> {{SqlConfiguration}}.
> Thread on [Ignite developers 
> list|http://apache-ignite-developers.2346864.n4.nabble.com/Introduce-separate-SQL-configuration-td46636.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12886) Introduce separate SQL configuration

2020-05-14 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107102#comment-17107102
 ] 

Ignite TC Bot commented on IGNITE-12886:


{panel:title=Branch: [pull/7745/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=5304188&buildTypeId=IgniteTests24Java8_RunAll]

> Introduce separate SQL configuration
> 
>
> Key: IGNITE-12886
> URL: https://issues.apache.org/jira/browse/IGNITE-12886
> Project: Ignite
>  Issue Type: Task
>  Components: sql
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A lot of SQL-related configuration parameters are placed at the root of the 
> {{IgniteConfiguration}}.
> It would be better to move them to a separate configuration class, e.g. 
> {{SqlConfiguration}}.
> Thread on [Ignite developers 
> list|http://apache-ignite-developers.2346864.n4.nabble.com/Introduce-separate-SQL-configuration-td46636.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-11147) Re-balance cancellation occur by non-affected event

2020-05-14 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107086#comment-17107086
 ] 

Ignite TC Bot commented on IGNITE-11147:


{panel:title=Branch: [pull/7428/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=5304072&buildTypeId=IgniteTests24Java8_RunAll]
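
For reference, a minimal sketch of switching the optimization off via the system 
property mentioned in the description below (the property name is taken from the 
description; it is false by default):

{code:java}
// Disable the rebalance-cancellation optimization before the node starts.
// The property name is taken from the ticket description; default is false.
System.setProperty("IGNITE_DISABLE_REBALANCING_CANCELLATION_OPTIMIZATION", "true");

Ignite ignite = Ignition.start(new IgniteConfiguration());
{code}

The same effect can be achieved by passing 
{{-DIGNITE_DISABLE_REBALANCING_CANCELLATION_OPTIMIZATION=true}} to the JVM.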

> Re-balance cancellation occur by non-affected event
> ---
>
> Key: IGNITE-11147
> URL: https://issues.apache.org/jira/browse/IGNITE-11147
> Project: Ignite
>  Issue Type: Test
>  Components: cache
>Affects Versions: 2.7
>Reporter: Sergey Antonov
>Assignee: Vladislav Pyatkov
>Priority: Critical
> Fix For: 2.9
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Rebalance is cancelled by non-affected events, for example:
> 1) joining of a non-affinity node
> 2) starting a snapshot
> 3) starting/stopping another cache
> Try to skip as many events as possible instead of cancelling.
> After solving several issues that appeared during this testing, I decided to add 
> a specific property allowing the rebalance optimization to be switched on/off (see 
> {{IgniteSystemProperties#IGNITE_DISABLE_REBALANCING_CANCELLATION_OPTIMIZATION}}, 
>  false by default).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13005) Spring Data 2 - JPA Improvements and working with multiple Ignite instances on same JVM

2020-05-14 Thread Ilya Kasnacheev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107072#comment-17107072
 ] 

Ilya Kasnacheev commented on IGNITE-13005:
--

Thank you for the effort. Unfortunately, I'm afraid the trouble of finding 
somebody to drive it will fall on you, since the feature is not under active 
development at the moment.

> Spring Data 2 - JPA Improvements and working with multiple Ignite instances 
> on same JVM
> ---
>
> Key: IGNITE-13005
> URL: https://issues.apache.org/jira/browse/IGNITE-13005
> Project: Ignite
>  Issue Type: Improvement
>  Components: spring
>Affects Versions: 2.7.6
>Reporter: Manuel Núñez
>Assignee: Ilya Kasnacheev
>Priority: Major
>
> I have it working for the Spring Data 2 (2.7.6) module with some interesting 
> improvements, but right now I don't have enough time to give it the attention it 
> requires (full unit/integration tests...), sorry. Maybe one of you has the 
> time? Thanks, community!
> Code is 100% compatible with previous versions. 
> [https://github.com/hawkore/ignite-hk/tree/master/modules/spring-data-2.0]
>  * Supports multiple ignite instances on same JVM (@RepositoryConfig).
>  * Supports query tuning parameters in {{@Query}} annotation
>  * Supports projections
>  * Supports {{Page}} and {{Stream}} responses
>  * Supports Sql Fields Query resultset transformation into the domain entity
>  * Supports named parameters ({{:myParam}}) into SQL queries, declared using 
> {{@Param("myParam")}}
>  * Supports advanced parameter binding and SpEL expressions into SQL queries:
>  ** *Template variables*:
>  *** {{#entityName}} - the simple class name of the domain entity
>  ** *Method parameter expressions*: Parameters are exposed for indexed access 
> ({{[0]}} is the first query method's param) or via the name declared using 
> {{@Param}}. The actual SpEL expression binding is triggered by {{?#}}. 
> Example: {{?#\{[0]\}} or {{?#\{#myParamName\}}}
>  ** *Advanced SpEL expressions*: While advanced parameter binding is a very 
> useful feature, the real power of SpEL stems from the fact, that the 
> expressions can refer to framework abstractions or other application 
> components through SpEL EvaluationContext extension model.
>  * Supports SpEL expressions into Text queries ({{TextQuery}}). 
> Some examples:
> {code:java}
> // Spring Data Repositories using different ignite instances on same JVM
> @RepositoryConfig(igniteInstance = "FLIGHTS_BBDD", cacheName = "ROUTES")
> public interface FlightRouteRepository extends IgniteRepository String> {
> ...
> }
> @RepositoryConfig(igniteInstance = "GEO_BBDD", cacheName = "POIS")
> public interface PoiRepository extends IgniteRepository {
> ...
> }
> {code}
> {code:java}
> // named parameter
> @Query(value = "SELECT * from #{#entityName} where email = :email")
> User searchUserByEmail(@Param("email") String email);
> {code}
> {code:java}
> // indexed parameters
> @Query(value = "SELECT * from #{#entityName} where country = ?#{[0] and city 
> = ?#{[1]}")
> List searchUsersByCity(@Param("country") String country, @Param("city") 
> String city, Pageable pageable);
> {code}
> {code:java}
> // ordered method parameters
> @Query(value = "SELECT * from #{#entityName} where email = ?")
> User searchUserByEmail(String email);
> {code}
> {code:java}
> // Advanced SpEL expressions
> @Query(value = "SELECT * from #{#entityName} where uuidCity = 
> ?#{mySpELFunctionsBean.cityNameToUUID(#city)}")
> List searchUsersByCity(@Param("city") String city, Pageable pageable);
> {code}
> {code:java}
> // textQuery - evaluated SpEL named parameter
> @Query(textQuery = true, value = "email: #{#email}")
> User searchUserByEmail(@Param("email") String email);
> {code}
> {code:java}
> // textQuery - evaluated SpEL named parameter
> @Query(textQuery = true, value = "#{#textToSearch}")
> List searchUsersByText(@Param("textToSearch") String text, Pageable 
> pageable);
> {code}
> {code:java}
> // textQuery - evaluated SpEL indexed parameter
> @Query(textQuery = true, value = "#{[0]}")
> List searchUsersByText(String textToSearch, Pageable pageable);
> {code}
> {code:java}
> // Projection
> @Query(value =
>"SELECT DISTINCT m.id, m.name, m.logos FROM #{#entityName} e 
> USE INDEX (ORIGIN_IDX) INNER JOIN \"flightMerchants\".Merchant m ON m"
>+ "._key=e"
>+ ".merchant WHERE e.origin = :origin and e.disabled = 
> :disabled GROUP BY m.id, m.name, m.logos ORDER BY m.name")
>  List searchMerchantsByOrigin(Class projection, @Param("origin") 
> String origin, @Param("disabled") boolean disabled);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12617) PME-free switch should wait for recovery only at affected nodes.

2020-05-14 Thread Anton Vinogradov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107034#comment-17107034
 ] 

Anton Vinogradov commented on IGNITE-12617:
---

[~ascherbakov],
Thank you for the review!

> Double latch waiting if replicated caches are in topology.
Healthy cells wait on a single latch.
Only the broken cell will wait for the partitioned-recovery latch.

> 2. It degrades to be a no-op if backups are spread by grid nodes (this is a 
> default behavior with rendezvous affinity).
Sure, but this fix targets real production cases, where the baseline set should 
be configured as well.
So it will not fix every case, but it allows us to speed up production deployments.
A regular deployment may still get ... a PME on node left.

> I would like to propose an algorithm, which should provide the same latency 
> decrease ...
In addition to counters, we should also wait for recovery to finish, so that 
partitions are consistent before any operations are allowed on them.
As we discussed privately, it seems possible to perform a 
recovery-await-free switch by just acquiring locks on the prepared keys before 
finishing the exchange future, but this case requires additional research.

> PME-free switch should wait for recovery only at affected nodes.
> 
>
> Key: IGNITE-12617
> URL: https://issues.apache.org/jira/browse/IGNITE-12617
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since IGNITE-9913, new-topology operations are allowed immediately after the 
> cluster-wide recovery has finished.
> But is there any reason to wait for a cluster-wide recovery if only one node 
> failed?
> In this case, we should recover only the failed node's backups.
> Unfortunately, {{RendezvousAffinityFunction}} tends to spread the node's 
> backup partitions to the whole cluster. In this case, we, obviously, have to 
> wait for cluster-wide recovery on switch.
> But what if only some nodes will be the backups for every primary?
> In case nodes are combined into virtual cells where, for each partition, backups 
> are located in the same cell as the primaries, it's possible to finish the switch 
> outside the affected cell before tx recovery finishes.
> This optimization will allow us to start and even finish new operations 
> outside the failed cell without waiting for the cluster-wide switch to finish 
> (broken cell recovery).
> In other words, switch (when left/fail + baseline + rebalanced) will have 
> little effect on the operation's (not related to failed cell) latency.
> In other words
> - We should wait for tx recovery before finishing the switch only on a broken 
> cell.
> - We should wait for replicated caches tx recovery everywhere since every 
> node is a backup of a failed one.
> - Upcoming operations related to the broken cell (including all replicated 
> caches operations) will require a cluster-wide switch finish to be processed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12579) JDBC SQL INSERT operation hangs with security enabled.

2020-05-14 Thread Mikhail Petrov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17106992#comment-17106992
 ] 

Mikhail Petrov commented on IGNITE-12579:
-

[~alex_pl] , [~garus.d.g] Thank you for the review.

> JDBC SQL INSERT operation hangs with security enabled.
> --
>
> Key: IGNITE-12579
> URL: https://issues.apache.org/jira/browse/IGNITE-12579
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Petrov
>Assignee: Mikhail Petrov
>Priority: Major
>  Labels: iep-41
> Fix For: 2.9
>
> Attachments: JdbcRemoteKeyInsertTest.java
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
>  
> The SQL INSERT operation hangs in case the INSERT key belongs to a remote node 
> (a node different from the one the JDBC connection was established to) and 
> security is enabled, with the following exception in the log:
> {code:java}
> [2020-01-24 
> 14:59:42,189][ERROR][sys-stripe-4-#48%jdbc.JdbcRemoteKeyInsertTest1%][IgniteTestResources]
>  Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
> [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, 
> SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.NullPointerException]]
> java.lang.NullPointerException at 
> org.apache.ignite.internal.processors.security.SecurityUtils.nodeSecurityContext(SecurityUtils.java:132)
>  at 
> org.apache.ignite.internal.processors.security.IgniteSecurityProcessor.lambda$withContext$0(IgniteSecurityProcessor.java:106)
>  at 
> java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
>  at 
> org.apache.ignite.internal.processors.security.IgniteSecurityProcessor.withContext(IgniteSecurityProcessor.java:105)
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1844)
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1470)
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$5200(GridIoManager.java:229)
>  at 
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1365)
>  at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:565)
>  at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) at 
> java.lang.Thread.run(Thread.java:748)[2020-01-24 14:59:42,198][WARN 
> ][sys-stripe-4-#48%jdbc.JdbcRemoteKeyInsertTest1%][CacheDiagnosticManager] 
> Page locks dump:
> Thread=[name=auth-#83%jdbc.JdbcRemoteKeyInsertTest1%, id=116], 
> state=WAITINGLocked pages = []Locked pages log: 
> name=auth-#83%jdbc.JdbcRemoteKeyInsertTest1% time=(1579867182194, 2020-01-24 
> 14:59:42.194)Thread=[name=db-checkpoint-thread-#101%jdbc.JdbcRemoteKeyInsertTest1%,
>  id=135], state=TIMED_WAITINGLocked pages = []Locked pages log: 
> name=db-checkpoint-thread-#101%jdbc.JdbcRemoteKeyInsertTest1% 
> time=(1579867182194, 2020-01-24 
> 14:59:42.194)Thread=[name=dms-writer-thread-#92%jdbc.JdbcRemoteKeyInsertTest1%,
>  id=126], state=WAITINGLocked pages = []Locked pages log: 
> name=dms-writer-thread-#92%jdbc.JdbcRemoteKeyInsertTest1% 
> time=(1579867182194, 2020-01-24 
> 14:59:42.194)Thread=[name=exchange-worker-#84%jdbc.JdbcRemoteKeyInsertTest1%, 
> id=117], state=TIMED_WAITINGLocked pages = []Locked pages log: 
> name=exchange-worker-#84%jdbc.JdbcRemoteKeyInsertTest1% time=(1579867182194, 
> 2020-01-24 14:59:42.194)Thread=[name=main, id=1], state=TIMED_WAITINGLocked 
> pages = []Locked pages log: name=main time=(1579867182193, 2020-01-24 
> 14:59:42.193)
> [2020-01-24 
> 14:59:42,198][ERROR][sys-stripe-4-#48%jdbc.JdbcRemoteKeyInsertTest1%][G] 
> Failed to execute runnable.java.lang.NullPointerException at 
> org.apache.ignite.internal.processors.security.SecurityUtils.nodeSecurityContext(SecurityUtils.java:132)
>  at 
> org.apache.ignite.internal.processors.security.IgniteSecurityProcessor.lambda$withContext$0(IgniteSecurityProcessor.java:106)
>  at 
> java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
>  at 
> org.apache.ignite.internal.processors.security.IgniteSecurityProcessor.withContext(IgniteSecurityProcessor.java:1