Hello! I doubt that you will see a speed-up here compared to just using JDBC (possibly with a randomized endpoint list).
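For instance, here is a minimal sketch of that approach, assuming the standard Ignite thin JDBC driver is on the classpath; the host addresses and the default thin port 10800 below are placeholders, not values from this thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class RandomizedEndpoints {
    public static void main(String[] args) throws Exception {
        // Placeholder server addresses; replace with your own hosts/ports.
        List<String> endpoints = Arrays.asList(
            "192.168.0.50:10800", "192.168.5.40:10800", "192.168.10.230:10800");

        // Shuffle per client so different clients don't all favor the same
        // first address (how the driver itself picks an address may vary by
        // Ignite version).
        Collections.shuffle(endpoints);

        String url = "jdbc:ignite:thin://" + String.join(",", endpoints);

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next())
                System.out.println(rs.getLong(1));
        }
    }
}

Whether that is enough depends on how evenly the queries are spread, but it needs no Ignite-specific plumbing on the client side.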
Regards,
--
Ilya Kasnacheev


On Wed, 27 Feb 2019 at 14:52, 李玉珏@163 <[email protected]> wrote:

> The main consideration is that with the JDBC interface the amount of
> modification to the existing code is small.
>
> On 2019/2/27 5:31 PM, Stephen Darlington wrote:
>
> If you’re already using Ignite-specific APIs (IgniteCallable), why not use
> the other Ignite-native APIs for reading/writing/processing data? That way
> you can use affinity functions for load balancing where it makes sense and
> Ignite’s normal load balancing processing for general compute tasks.
>
> Regards,
> Stephen
>
> On 27 Feb 2019, at 06:00, 李玉珏@163 <[email protected]> wrote:
>
> Hi,
>
> Since JDBC can't achieve multi-endpoint load balancing, we want to use the
> affinityCall(...) mechanism to achieve load balancing, that is, to obtain
> and use a JDBC Connection inside an IgniteCallable implementation.
> How can we efficiently obtain and use the JDBC Connection?
>
> -------- Forwarded Message --------
> Subject: Re: On Multiple Endpoints Mode of JDBC Driver
> Date: Tue, 26 Feb 2019 14:53:17 -0800
> From: Denis Magda <[email protected]>
> Reply-To: [email protected]
> To: dev <[email protected]>
>
> Hello,
>
> You provide a list of IP addresses for the sake of high availability: if
> one of the servers goes down, the client will reconnect to the next IP
> automatically. There is no load balancing in place presently. But! In the
> next Ignite version, we're planning to roll out partition-awareness
> support - the client will send a request to the nodes that hold the data
> needed for the request.
>
> -
> Denis
>
>
> On Tue, Feb 26, 2019 at 2:48 PM 李玉珏 <[email protected]> wrote:
>
> Hi,
>
> Does the Multiple Endpoints mode of the JDBC driver have a load-balancing
> function? For example:
> "jdbc:ignite:thin://192.168.0.50:101,192.188.5.40:101,192.168.10.230:101"
> If not, will one node become the bottleneck of the whole system?
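For reference, a minimal sketch of the affinityCall approach described in the quoted question, assuming a cache named "SQL_PUBLIC_PERSON" backing a Person table keyed by id and a thin JDBC endpoint on every server node (all of these names are placeholders, not from the thread):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteCallable;

public class AffinityJdbcSketch {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start()) {
            final int personId = 42; // placeholder affinity key

            // Run the callable on the node that owns this key, then open a
            // thin JDBC connection to that same node (localhost from the
            // job's point of view), so the query runs where the data lives.
            String name = ignite.compute().affinityCall(
                "SQL_PUBLIC_PERSON",   // placeholder cache backing the Person table
                personId,
                (IgniteCallable<String>) () -> {
                    String url = "jdbc:ignite:thin://127.0.0.1:10800";
                    try (Connection conn = DriverManager.getConnection(url);
                         PreparedStatement ps = conn.prepareStatement(
                             "SELECT name FROM Person WHERE id = ?")) {
                        ps.setInt(1, personId);
                        try (ResultSet rs = ps.executeQuery()) {
                            return rs.next() ? rs.getString(1) : null;
                        }
                    }
                });

            System.out.println(name);
        }
    }
}

Note that each call pays the compute round trip plus a fresh local JDBC connection, so some form of connection reuse inside the job would likely be needed in practice.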

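And, for comparison, a minimal sketch of the Ignite-native route Stephen suggests, assuming a placeholder cache named "Person"; key-based cache operations are routed to the primary node for each key, and compute tasks go through Ignite's own load balancing:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteRunnable;

public class NativeApiSketch {
    public static void main(String[] args) {
        // Starts a node with default configuration; in a real application
        // this would typically be a client node joining an existing cluster.
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("Person");

            cache.put(42, "Alice");            // routed to the key's primary node
            System.out.println(cache.get(42)); // read is routed the same way

            // General compute tasks use Ignite's own load balancing.
            ignite.compute().broadcast(
                (IgniteRunnable) () -> System.out.println("task executed"));
        }
    }
}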