Re: Thin client failover mechanism (+ODBC, JDBC)

2018-02-02 Thread Pavel Tupitsyn
Alexey, let's keep it simple for now. On Fri, Feb 2, 2018 at 11:47 AM, Alexey Kuznetsov wrote: > Pavel, > > Does it make sense, in the "connection lost" case, to ping the available addresses > in parallel? > For example, using a thread pool of 4 threads? > This may speed up detecting the next alive node under Windows…

Re: Thin client failover mechanism (+ODBC, JDBC)

2018-02-02 Thread Alexey Kuznetsov
Pavel, Does it make sense, in the "connection lost" case, to ping the available addresses in parallel? For example, using a thread pool of 4 threads? This may speed up detecting the next alive node under Windows if several addresses become unavailable at once. On Fri, Feb 2, 2018 at 2:09 PM, Pavel Tupitsyn wrote…
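The parallel-ping idea above can be sketched as follows. This is an illustrative sketch only, not Ignite code: `probe` and `first_alive` are hypothetical names, and the "ping" is modeled as a plain TCP connection attempt with a small thread pool, as Alexey suggests.

```python
# Probe all known addresses concurrently and return the first one that
# accepts a TCP connection. probe()/first_alive() are illustrative
# names, not part of any Ignite API.
import socket
from concurrent.futures import ThreadPoolExecutor, as_completed

def probe(addr, timeout=1.0):
    """Return addr if a TCP connection succeeds, else None."""
    host, port = addr
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return addr
    except OSError:
        return None

def first_alive(addresses, pool_size=4):
    """Ping all addresses with a pool of `pool_size` threads; return the
    first alive one, or None if every address is unreachable."""
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        futures = [pool.submit(probe, a) for a in addresses]
        for fut in as_completed(futures):
            if fut.result() is not None:
                return fut.result()
    return None
```

Because the futures are consumed in completion order, one alive node is detected as soon as its connection succeeds, without waiting out the timeouts of the dead addresses.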

Re: Thin client failover mechanism (+ODBC, JDBC)

2018-02-01 Thread Pavel Tupitsyn
Dmitriy, yes, that's what I'm implementing as part of IGNITE-7282: * List of hosts in config * Pick a random index (basic load balancing), connect * When the connection is lost, all current requests throw an exception * The next request causes a reconnect attempt to the next host * If all hosts fail, throw an exc…
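The reconnect policy Pavel outlines can be sketched roughly as below. This is a minimal model, not the IGNITE-7282 implementation: class and method names (`FailoverClient`, `ensure_connected`, `on_connection_lost`) and the `AllHostsFailedError` exception are assumptions made for illustration.

```python
# Sketch of the proposed policy: random starting host (basic load
# balancing); a lost connection fails in-flight requests; the *next*
# request advances to the next host, wrapping once around the list
# before giving up.
import random

class AllHostsFailedError(Exception):
    """Raised when every configured host has been tried without success."""

class FailoverClient:
    def __init__(self, hosts, connect):
        self.hosts = list(hosts)      # [(host, port), ...] from config
        self.connect = connect        # callable((host, port)) -> connection
        self.index = random.randrange(len(self.hosts))  # random start
        self.conn = None

    def ensure_connected(self):
        """Called at the start of each request; reconnects lazily."""
        if self.conn is not None:
            return self.conn
        # Try every host once, starting from the current index.
        for attempt in range(len(self.hosts)):
            pos = (self.index + attempt) % len(self.hosts)
            try:
                self.conn = self.connect(self.hosts[pos])
                self.index = pos
                return self.conn
            except OSError:
                continue
        raise AllHostsFailedError("no host reachable")

    def on_connection_lost(self):
        """Current requests should throw; the next request reconnects
        to the next host in the list."""
        self.conn = None
        self.index = (self.index + 1) % len(self.hosts)
```

Note that no background reconnect thread is needed: failover happens lazily on the next request, which matches "next request causes reconnect attempt".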

Re: Thin client failover mechanism (+ODBC, JDBC)

2018-02-01 Thread Dmitriy Setrakyan
On Thu, Feb 1, 2018 at 5:55 AM, Pavel Tupitsyn wrote: > Ok, let's add simple reconnect logic and see what comes of it. > Just to clarify: a list of IP addresses a client should connect to needs to be provided on startup. Once a connection is lost, the client needs to try to connect to all the othe…

Re: Thin client failover mechanism (+ODBC, JDBC)

2018-02-01 Thread Pavel Tupitsyn
Ok, let's add simple reconnect logic and see what comes of it. On Thu, Feb 1, 2018 at 12:49 AM, Dmitriy Setrakyan wrote: > Pavel, > > I disagree. I think automatic reconnect is a very useful feature. For > example, all client-side operations can throw exceptions anyway, so if you > throw an ex…

Re: Thin client failover mechanism (+ODBC, JDBC)

2018-01-31 Thread Dmitriy Setrakyan
Pavel, I disagree. I think automatic reconnect is a very useful feature. For example, all client-side operations can throw exceptions anyway, so if you throw an exception due to a client reconnect, it will not require any additional exception-handling logic. On the other hand, after a few failed op…

Re: Thin client failover mechanism (+ODBC, JDBC)

2018-01-31 Thread Alexey Kuznetsov
Pavel, I completely agree with you. No need for "black magic"; just throw an appropriate exception, for example IgniteThinClientConnectionLostException. But it would be very useful to "reconnect to the next alive node", just so every user app doesn't have to reinvent the wheel. On Wed, Jan 31, 2018 at 7:21 PM, Ig…
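The user-facing contract Alexey describes could look like the sketch below. The exception name is taken from his message; everything else (the `with_retry` helper, the retry count) is illustrative and not part of any Ignite API. The point is that because the client reconnects to the next alive node internally, user code needs only one catch-and-retry site rather than its own failover logic.

```python
# Proposed contract: a dedicated "connection lost" exception that user
# code can catch and retry, while the client reconnects internally.
class IgniteThinClientConnectionLostException(Exception):
    """Thrown when the connection to the current node is lost."""

def with_retry(op, retries=3):
    """Re-run `op` on connection loss; the client itself picks the next
    alive node on the following call. Helper name is hypothetical."""
    for attempt in range(retries):
        try:
            return op()
        except IgniteThinClientConnectionLostException:
            if attempt == retries - 1:
                raise
```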

Re: Thin client failover mechanism (+ODBC, JDBC)

2018-01-31 Thread Igor Sapego
Well, I agree with Pavel here. To me it looks like this feature gives little to the user, as they need to write the same amount of code as they would if this feature did not exist. It will also produce some new issues with operations "hanging" while the thin client tries and fails to re…

Re: Thin client failover mechanism (+ODBC, JDBC)

2018-01-31 Thread Pavel Tupitsyn
Alexey, retrieving addresses from the topology makes sense, but in this thread I'm trying to understand whether any kind of built-in failover makes sense at all at the Ignite API level. I mean, on the business-logic level failover certainly makes sense: if the Web Agent has failed to execute some operatio…

Re: Thin client failover mechanism (+ODBC, JDBC)

2018-01-31 Thread Alexey Kuznetsov
Pavel, I hope that at some point the Web agent (connector to Web Console) will be refactored from REST to the thin client. It would be nice if the thin client supported the following modes: 1) Specify several addresses in the thin client connection config. The thin client will use ONLY these addresses (hardcoded list)…
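The two address modes discussed in this thread (a hardcoded list only, versus additionally learning addresses from the cluster topology, as raised in Pavel's reply above) might be modeled as a small config object. All names here — `ThinClientConfig`, `AddressMode`, the field names — are assumptions for illustration, not actual Ignite configuration.

```python
# Hypothetical connection config distinguishing the two modes:
# STATIC_LIST uses ONLY the configured addresses; FROM_TOPOLOGY starts
# from the list and could then refresh node addresses from the topology.
from dataclasses import dataclass
from enum import Enum

class AddressMode(Enum):
    STATIC_LIST = 1    # use ONLY the hardcoded address list
    FROM_TOPOLOGY = 2  # seed from the list, then learn nodes from topology

@dataclass
class ThinClientConfig:
    addresses: list                               # e.g. ["host1:10800", "host2:10800"]
    mode: AddressMode = AddressMode.STATIC_LIST   # safe default: no magic
```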

Thin client failover mechanism (+ODBC, JDBC)

2018-01-31 Thread Pavel Tupitsyn
Igniters, I'm working on client-side failover logic for the .NET Thin Client. This will probably apply to the ODBC and JDBC thin clients as well in the future. Currently all thin clients connect to a single specified Ignite node. The idea is to have multiple known nodes (host:port pairs) and reconnect to ano…