Re: Ignite JDBC connection pooling mechanism

2020-11-17 Thread Sanjaya Kumar Sahoo
We solved the problem by removing the Hikari connection pooling
mechanism entirely.

Instead we use IgniteJdbcThinDataSource
(https://apacheignite-sql.readme.io/docs/jdbc-driver)
with an appropriate client connector configuration
(https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html#setClientConnectorConfiguration-org.apache.ignite.configuration.ClientConnectorConfiguration-).

After some trial and error, we concluded that Ignite does not
require client-side connection pooling (as we do with an RDBMS);
instead, let the Ignite server handle the SQL queries, given
appropriate client connector details.
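A minimal sketch of what this looks like in code, assuming Spring's JdbcTemplate as used later in this thread; the host name is a placeholder, and the setUrl call follows the linked IgniteJdbcThinDataSource docs:

```java
import org.apache.ignite.IgniteJdbcThinDataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class IgniteThinDataSourceExample {
    // JdbcTemplate backed directly by the thin-driver DataSource,
    // with no client-side pool in between. Host/port are placeholders.
    public static JdbcTemplate igniteJdbcTemplate() throws Exception {
        IgniteJdbcThinDataSource ds = new IgniteJdbcThinDataSource();
        ds.setUrl("jdbc:ignite:thin://ignite-service.example:10800");
        return new JdbcTemplate(ds);
    }
}
```

Concurrency is then governed on the server side by the ClientConnectorConfiguration (for example its thread pool size) rather than by a client pool.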




On Fri, Nov 6, 2020 at 7:01 PM Vladimir Pligin  wrote:

> In general it should be ok to use connection pooling with Ignite. Is your
> network ok? It looks like a connection is being closed because of network
> issues.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite JDBC connection pooling mechanism

2020-11-06 Thread Vladimir Pligin
In general it should be ok to use connection pooling with Ignite. Is your
network ok? It looks like a connection is being closed because of network
issues.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite JDBC connection pooling mechanism

2020-11-04 Thread Sanjaya Kumar Sahoo
The above idea did not work. Our Ignite setup is as follows:

Ignite 2.8.1
Hikari 3.4.5
Java 1.8
Spring JdbcTemplate
Apache Ignite runs in an Azure K8S cluster, and the service is exposed
through an Azure internal load balancer.

The API works well for some time (until about 1 hour after a restart);
after that we get the error below. If we restart, it works for
approximately another hour.

Is it not advisable to use a connection pooling mechanism with Ignite?
If so, what is the best way to serve concurrent requests? Should we
create a connection per user request and close it once the job is done?

Please help with this; we are completely stuck on this use case in
production.
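One setting worth checking, though not confirmed as the fix in this thread: if an idle timeout on the Azure load balancer is silently dropping pooled connections, capping Hikari's connection lifetime below that timeout lets the pool retire connections before the network does. A hypothetical sketch (host and all values are placeholders):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class IgniteHikariPool {
    public static HikariDataSource pool() {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:ignite:thin://ignite-service.example:10800"); // placeholder host
        cfg.setDriverClassName("org.apache.ignite.IgniteJdbcThinDriver");
        cfg.setMaximumPoolSize(10);
        cfg.setIdleTimeout(120_000);  // retire idle connections after 2 minutes
        cfg.setMaxLifetime(240_000);  // keep below the load balancer's idle timeout
        return new HikariDataSource(cfg);
    }
}
```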


LOGS IN IGNITE
=
^-- System thread pool [active=0, idle=6, qSize=0]
[06:51:33,191][SEVERE][grid-nio-worker-client-listener-2-#32][ClientListenerProcessor]
Failed to process selector key [ses=GridSelectorNioSessionImpl
[worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
lim=8192 cap=8192], super=AbstractNioClientWorker [idx=2, bytesRcvd=0,
bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
[name=grid-nio-worker-client-listener-2, igniteInstanceName=null,
finished=false, heartbeatTs=1604472690975, hashCode=1771197860,
interrupted=false, runner=grid-nio-worker-client-listener-2-#32]]],
writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null,
closeSocket=true, outboundMessagesQueueSizeMetric=null,
super=GridNioSessionImpl [locAddr=/10.188.0.115:10800, rmtAddr=/
10.189.3.42:46262, createTime=1604464791433, closeTime=0, bytesSent=46,
bytesRcvd=51, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1604464791514,
lastSndTime=1604464791514, lastRcvTime=1604464791433, readsPaused=false,
filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
GridNioCodecFilter [parser=ClientListenerBufferedParser,
directMode=false]], accepted=true, markedForClose=false]]]

java.io.IOException: Operation timed out
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:197)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1162)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2449)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2216)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1857)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
    at java.lang.Thread.run(Thread.java:748)
[06:51:33,191][WARNING][grid-nio-worker-client-listener-2-#32][ClientListenerProcessor]
Client disconnected abruptly due to network connection loss or because the
connection was left open on application shutdown. [cls=class
o.a.i.i.util.nio.GridNioException, msg=Operation timed out]
[06:52:25,552][INFO][db-checkpoint-thread-#68][GridCacheDatabaseSharedManager]
Skipping checkpoint (no pages were modified)
[checkpointBeforeLockTime=17ms, checkpointLockWait=0ms,
checkpointListenersExecuteTime=21ms, checkpointLockHoldTime=23ms,
reason='timeout']
[06:52:25,716][INFO][grid-timeout-worker-#23][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=747a4939, uptime=41 days, 13:40:47.769]


LOGS IN APPLICATION SIDE
===
03-11-2020 23:00:44.027 [http-nio-8080-exec-4] WARN
com.zaxxer.hikari.pool.ProxyConnection.157 cache-query-service prod v1
cache-query-service-v1-5c5d8cd74d-jgnbb - HikariPool-1 - Connection
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection@62708a92 marked as
broken because of SQLSTATE(08006), ErrorCode(0)

java.sql.SQLException: Failed to communicate with Ignite cluster.
    at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:760)
    at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:212)
    at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.executeQuery(JdbcThinStatement.java:123)
    at com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:111)
    at com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
    at org.springframework.jdbc.core.JdbcTemplate$1QueryStatementCallback.doInStatement(JdbcTemplate.java:439)
    at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:376)
    at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:452)
    at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:462)
    at org.springframework.jdbc.core.JdbcTemplate.queryForObject(JdbcTemplate.java:473)
    at org.springframework.jdbc.core.JdbcTemplate.queryForObject(JdbcTemplate.java:480)
    at

Re: Ignite JDBC connection pooling mechanism

2020-11-03 Thread Sanjaya Kumar Sahoo
Hi,

I truly appreciate the support we are getting from the community.

As of now we don't have a reproducer; the above issue only comes up once
in a while.

The server is up and running. Note: the Ignite cluster is installed in an
Azure Kubernetes cluster as StatefulSet pods.

We have other application pods that frequently talk to Ignite.

While analyzing, we found that the application pod causing the problem is
running Ignite 2.6.0, whereas our Ignite server is 2.8.1.

We followed the steps below and deployed to production:

1- Changed the Hikari version to 3.4.5
2- Changed ignite-core to 2.8.1
3- Spring Boot was auto-configuring JdbcTemplate (with Hikari); we
disabled the auto-configuration and configured it manually.

We have deployed the application, are monitoring it, and will publish the
result.
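A sketch of step 3, disabling Spring Boot's auto-configured DataSource and wiring the JdbcTemplate by hand; the class name, bean names, and host are made up for illustration:

```java
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

@Configuration
public class IgniteJdbcConfig {

    // Manually built pool, replacing Boot's auto-configuration (excluded
    // via @SpringBootApplication(exclude = DataSourceAutoConfiguration.class)).
    @Bean
    public DataSource igniteDataSource() {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:ignite:thin://ignite-service.example:10800"); // placeholder
        cfg.setDriverClassName("org.apache.ignite.IgniteJdbcThinDriver");
        return new HikariDataSource(cfg);
    }

    @Bean
    public JdbcTemplate igniteJdbcTemplate(DataSource igniteDataSource) {
        return new JdbcTemplate(igniteDataSource);
    }
}
```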


Thanks,
Sanjaya





On Tue, Nov 3, 2020 at 8:28 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Are you sure that the Ignite cluster is in fact up? :)
>
> If it is, maybe your usage patterns of this pool somehow assign the
> connection to two different threads, which try to do queries in parallel.
> In theory, this is what connection pools are explicitly created to avoid,
> but maybe there's some knob you have to turn to actually make them
> thread-exclusive.
>
> Also, does it happen every time? How soon would it happen?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> пн, 2 нояб. 2020 г. в 12:31, Sanjaya :
>
>> Hi All,
>>
>> we are trying to use HIkari connection pooling with ignite JdbcThinDriver.
>> we are facing issue as
>>
>>
>> Any idea what is the supported connection pooling mechanism work with
>> IgniteThinDriver
>>
>>
>> ERROR LOG
>> ==
>>
>> WARN  com.zaxxer.hikari.pool.ProxyConnection.157 sm-event-consumer prod
>> sm-event-consumer-v1-55f4db767d-2kskt - HikariPool-1 - Connection
>> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection@68f0e2a1 marked
>> as
>> broken because of SQLSTATE(08006), ErrorCode(0)
>>
>> java.sql.SQLException: Failed to communicate with Ignite cluster.
>>
>> at
>>
>> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:760)
>>
>> at
>>
>> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.executeBatch(JdbcThinStatement.java:651)
>>
>> at
>>
>> com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:128)
>>
>> at
>>
>> com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
>>
>> at
>>
>> org.springframework.jdbc.core.JdbcTemplate.lambda$batchUpdate$2(JdbcTemplate.java:950)
>>
>> at
>> org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:617)
>>
>> at
>> org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:647)
>>
>> at
>>
>> org.springframework.jdbc.core.JdbcTemplate.batchUpdate(JdbcTemplate.java:936)
>>
>> at
>>
>> org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.batchUpdate(NamedParameterJdbcTemplate.java:366)
>>
>> at
>>
>> com.ecoenergy.cortix.sm.event.cache.SMIgniteCacheManager.updateObjectStates(SMIgniteCacheManager.java:118)
>>
>> at
>>
>> com.ecoenergy.cortix.sm.event.notifcator.SMIgniteNotificator.notify(SMIgniteNotificator.java:69)
>>
>> at
>>
>> com.ecoenergy.cortix.sm.event.eventhandler.ObjectEventHandler.notify(ObjectEventHandler.java:100)
>>
>> at
>>
>> com.ecoenergy.cortix.sm.event.eventhandler.ObjectEventHandler.receiveEvents(ObjectEventHandler.java:86)
>>
>> at
>>
>> com.ecoenergy.cortix.sm.event.consumer.ObjectEventConsumer.processObjectEvents(ObjectEventConsumer.java:60)
>>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Ignite JDBC connection pooling mechanism

2020-11-03 Thread Ilya Kasnacheev
Hello!

Are you sure that the Ignite cluster is in fact up? :)

If it is, maybe your usage patterns of this pool somehow assign the
connection to two different threads, which try to do queries in parallel.
In theory, this is what connection pools are explicitly created to avoid,
but maybe there's some knob you have to turn to actually make them
thread-exclusive.

Also, does it happen every time? How soon would it happen?

Regards,
-- 
Ilya Kasnacheev


пн, 2 нояб. 2020 г. в 12:31, Sanjaya :

> Hi All,
>
> we are trying to use HIkari connection pooling with ignite JdbcThinDriver.
> we are facing issue as
>
>
> Any idea what is the supported connection pooling mechanism work with
> IgniteThinDriver
>
>
> ERROR LOG
> ==
>
> WARN  com.zaxxer.hikari.pool.ProxyConnection.157 sm-event-consumer prod
> sm-event-consumer-v1-55f4db767d-2kskt - HikariPool-1 - Connection
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection@68f0e2a1 marked as
> broken because of SQLSTATE(08006), ErrorCode(0)
>
> java.sql.SQLException: Failed to communicate with Ignite cluster.
>
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:760)
>
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.executeBatch(JdbcThinStatement.java:651)
>
> at
> com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:128)
>
> at
>
> com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
>
> at
>
> org.springframework.jdbc.core.JdbcTemplate.lambda$batchUpdate$2(JdbcTemplate.java:950)
>
> at
> org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:617)
>
> at
> org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:647)
>
> at
>
> org.springframework.jdbc.core.JdbcTemplate.batchUpdate(JdbcTemplate.java:936)
>
> at
>
> org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.batchUpdate(NamedParameterJdbcTemplate.java:366)
>
> at
>
> com.ecoenergy.cortix.sm.event.cache.SMIgniteCacheManager.updateObjectStates(SMIgniteCacheManager.java:118)
>
> at
>
> com.ecoenergy.cortix.sm.event.notifcator.SMIgniteNotificator.notify(SMIgniteNotificator.java:69)
>
> at
>
> com.ecoenergy.cortix.sm.event.eventhandler.ObjectEventHandler.notify(ObjectEventHandler.java:100)
>
> at
>
> com.ecoenergy.cortix.sm.event.eventhandler.ObjectEventHandler.receiveEvents(ObjectEventHandler.java:86)
>
> at
>
> com.ecoenergy.cortix.sm.event.consumer.ObjectEventConsumer.processObjectEvents(ObjectEventConsumer.java:60)
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite JDBC connection pooling mechanism

2020-11-03 Thread Vladimir Pligin
I wasn't able to reproduce that on a quick attempt. What Hikari settings do
you have? Do you perhaps have a reproducer?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite JDBC connection pooling mechanism

2020-11-02 Thread Sanjaya Kumar Sahoo
Please find details below

Java - 1.8
Hikari - 3.4.1
Ignite - 2.6.0
SSL Enabled - false

Thanks,
Sanjaya







On Mon, Nov 2, 2020 at 8:11 PM Vladimir Pligin  wrote:

> Hi,
>
> What java version do you use? What about Hikari & Ignite versions? Do you
> have SSL enabled?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite JDBC connection pooling mechanism

2020-11-02 Thread Vladimir Pligin
Hi,

What java version do you use? What about Hikari & Ignite versions? Do you
have SSL enabled?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite JDBC connection pooling mechanism

2020-11-02 Thread Sanjaya
Hi All,

We are trying to use Hikari connection pooling with the Ignite
JdbcThinDriver, and we are facing the issue below.

Any idea what connection pooling mechanism is supported to work with
the Ignite thin driver?


ERROR LOG
==

WARN  com.zaxxer.hikari.pool.ProxyConnection.157 sm-event-consumer prod
sm-event-consumer-v1-55f4db767d-2kskt - HikariPool-1 - Connection
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection@68f0e2a1 marked as
broken because of SQLSTATE(08006), ErrorCode(0)

java.sql.SQLException: Failed to communicate with Ignite cluster.

at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:760)

at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.executeBatch(JdbcThinStatement.java:651)

at
com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:128)

at
com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)

at
org.springframework.jdbc.core.JdbcTemplate.lambda$batchUpdate$2(JdbcTemplate.java:950)

at
org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:617)

at
org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:647)

at
org.springframework.jdbc.core.JdbcTemplate.batchUpdate(JdbcTemplate.java:936)

at
org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.batchUpdate(NamedParameterJdbcTemplate.java:366)

at
com.ecoenergy.cortix.sm.event.cache.SMIgniteCacheManager.updateObjectStates(SMIgniteCacheManager.java:118)

at
com.ecoenergy.cortix.sm.event.notifcator.SMIgniteNotificator.notify(SMIgniteNotificator.java:69)

at
com.ecoenergy.cortix.sm.event.eventhandler.ObjectEventHandler.notify(ObjectEventHandler.java:100)

at
com.ecoenergy.cortix.sm.event.eventhandler.ObjectEventHandler.receiveEvents(ObjectEventHandler.java:86)

at
com.ecoenergy.cortix.sm.event.consumer.ObjectEventConsumer.processObjectEvents(ObjectEventConsumer.java:60)





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Jdbc connection

2016-11-11 Thread Anil
Hi Andrey,

Thanks for your response. #2 was answered by the other answers.

You are right. I created only one connection and it looks good. Thanks.

On 11 November 2016 at 16:59, Andrey Gura <ag...@apache.org> wrote:

> Hi,
>
>
> 1. Ignite client node is thread-safe and you can create multiple
> statements in order to query execution. So, from my point of view, you
> should close connection when finish all your queries.
> 2. Could you please clarify your question?
> 3. I don't think that pooling is required.
> 4. Ignite client will try to reconnect to the Ignite cluster in case of
> server node fails. All you need is proper IP finder configuration.
>
>
> On Thu, Nov 10, 2016 at 5:01 PM, Anil <anilk...@gmail.com> wrote:
>
>> Any help in understanding below ?
>>
>> On 10 November 2016 at 16:31, Anil <anilk...@gmail.com> wrote:
>>
>>> I have couple of questions on ignite jdbc connection. Could you please
>>> clarify ?
>>>
>>> 1. Should connection be closed like other jdbc db connection ? - I see
>>> connection close is shutdown of ignite client node.
>>> 2. Connection objects are not getting released and all connections are
>>> busy ?
>>> 3. Connection pool is really required for ignite client ? i hope one
>>> ignite connection can handle number of queries in parallel.
>>> 4. What is the recommended configuration for ignite client to support
>>> failover ?
>>>
>>> Thanks.
>>>
>>
>>
>


Re: Ignite Jdbc connection

2016-11-11 Thread Andrey Gura
Hi,


1. The Ignite client node is thread-safe and you can create multiple
statements in order to execute queries. So, from my point of view, you
should close the connection when you have finished all your queries.
2. Could you please clarify your question?
3. I don't think that pooling is required.
4. The Ignite client will try to reconnect to the Ignite cluster in case a
server node fails. All you need is a proper IP finder configuration.
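Following answers 1 and 3 above, a minimal sketch: one connection opened via the (2016-era) client-node JDBC driver, used for the queries, and closed only when the work is done. The config file path, cache name, and table are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IgniteJdbcExample {
    public static void main(String[] args) throws Exception {
        // Opening the connection starts an Ignite client node, so it is
        // relatively expensive; share it across queries instead of pooling.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:cfg://cache=default@file:ignite-client.xml");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM Person")) {
            while (rs.next())
                System.out.println(rs.getLong(1));
        } // closing the connection shuts the client node down
    }
}
```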


On Thu, Nov 10, 2016 at 5:01 PM, Anil <anilk...@gmail.com> wrote:

> Any help in understanding below ?
>
> On 10 November 2016 at 16:31, Anil <anilk...@gmail.com> wrote:
>
>> I have couple of questions on ignite jdbc connection. Could you please
>> clarify ?
>>
>> 1. Should connection be closed like other jdbc db connection ? - I see
>> connection close is shutdown of ignite client node.
>> 2. Connection objects are not getting released and all connections are
>> busy ?
>> 3. Connection pool is really required for ignite client ? i hope one
>> ignite connection can handle number of queries in parallel.
>> 4. What is the recommended configuration for ignite client to support
>> failover ?
>>
>> Thanks.
>>
>
>


Re: Ignite Jdbc connection

2016-11-10 Thread Anil
Any help in understanding below ?

On 10 November 2016 at 16:31, Anil <anilk...@gmail.com> wrote:

> I have couple of questions on ignite jdbc connection. Could you please
> clarify ?
>
> 1. Should connection be closed like other jdbc db connection ? - I see
> connection close is shutdown of ignite client node.
> 2. Connection objects are not getting released and all connections are
> busy ?
> 3. Connection pool is really required for ignite client ? i hope one
> ignite connection can handle number of queries in parallel.
> 4. What is the recommended configuration for ignite client to support
> failover ?
>
> Thanks.
>


Re: Ignite Jdbc connection

2016-11-10 Thread Anil
I have a couple of questions on the Ignite JDBC connection. Could you
please clarify?

1. Should the connection be closed like other JDBC DB connections? I see
that closing the connection shuts down the Ignite client node.
2. Connection objects are not getting released and all connections are
busy?
3. Is a connection pool really required for the Ignite client? I hope one
Ignite connection can handle a number of queries in parallel.
4. What is the recommended configuration for the Ignite client to support
failover?

Thanks.


Re: Ignite Jdbc connection

2016-11-06 Thread Anil
I see it was because of a client timeout, and it has been resolved. Thanks.

On 6 November 2016 at 00:36, Anil <anilk...@gmail.com> wrote:

> Hi Manu and All,
>
> Ignite jdbc connection is very slow for very first time even with data
> source.
>
> Consecutive queries are very fast. queries with 1 mins time duration
> becoming slow.
>
> PoolProperties p = new PoolProperties();
>  p.setUrl("jdbc:ignite:cfg://cache=TEST_CACHE@file:" +
> System.getProperties().getProperty("ignite.config.file"));
>  p.setDriverClassName("org.apache.ignite.IgniteJdbcDriver");
>  p.setMaxActive(20);
>  p.setInitialSize(5);
>  p.setMaxWait(5000);
>  p.setMinIdle(5);
>  p.setMaxIdle(10);
>  p.setTestOnBorrow(true);
>  p.setTestWhileIdle(true);
>  p.setTestOnReturn(true);
>  p.setTimeBetweenEvictionRunsMillis(6);
>  p.setMinEvictableIdleTimeMillis(12);
>  p.setMaxAge(150);
>  p.setRemoveAbandoned(true);
>  p.setRemoveAbandonedTimeout(300);
>  p.setLogAbandoned(true);
>  p.setFairQueue(true);
>  p.setValidationQuery("select count(*) from \"TEST_CACHE\".Person
> limit 1");
>  p.setValidationInterval(3000);
>  ds = new DataSource();
>  ds.setPoolProperties(p);
>  anyone facing the similar problem ?
>
> Thanks.
>


Re: Ignite Jdbc connection

2016-11-05 Thread Anil
Hi Manu and All,

The Ignite JDBC connection is very slow the very first time, even with a
data source.

Consecutive queries are very fast, but queries separated by about a
one-minute gap become slow again.

PoolProperties p = new PoolProperties();
p.setUrl("jdbc:ignite:cfg://cache=TEST_CACHE@file:" +
    System.getProperties().getProperty("ignite.config.file"));
p.setDriverClassName("org.apache.ignite.IgniteJdbcDriver");
p.setMaxActive(20);
p.setInitialSize(5);
p.setMaxWait(5000);
p.setMinIdle(5);
p.setMaxIdle(10);
p.setTestOnBorrow(true);
p.setTestWhileIdle(true);
p.setTestOnReturn(true);
p.setTimeBetweenEvictionRunsMillis(6);
p.setMinEvictableIdleTimeMillis(12);
p.setMaxAge(150);
p.setRemoveAbandoned(true);
p.setRemoveAbandonedTimeout(300);
p.setLogAbandoned(true);
p.setFairQueue(true);
p.setValidationQuery("select count(*) from \"TEST_CACHE\".Person limit 1");
p.setValidationInterval(3000);
ds = new DataSource();
ds.setPoolProperties(p);

Is anyone facing a similar problem?

Thanks.


Re: Ignite Jdbc connection

2016-10-25 Thread Anil
Thank you Manu. This is really helpful.

On 24 October 2016 at 20:07, Manu <maxn...@hotmail.com> wrote:

> If you use ignite jdbc driver, to ensure that you always get a valid ignite
> instance before call a ignite operation I recommend to use a datasource
> implementation that validates connection before calls and create new ones
> otherwise.
>
> For common operations with a ignite instance, I use this method to ensure a
> *good* ignite instance and don´t waits or control reconnection... maybe
> there are some other mechanisms... but who cares? ;)
>
> public Ignite getIgnite() {
> if (this.ignite!=null){
> try{
> //ensure this ignite instance is STARTED
> and connected
> this.ignite.getOrCreateCache("default");
> }catch (IllegalStateException e){
> this.ignite=null;
> }catch (IgniteClientDisconnectedException cause) {
> this.ignite=null;
> }catch (CacheException e) {
> if (e.getCause() instanceof
> IgniteClientDisconnectedException) {
> this.ignite=null;
> }else if (e.getCause() instanceof
> IgniteClientDisconnectedCheckedException) {
> this.ignite=null;
> }else{
> throw e;
> }
> }
> }
> if (this.ignite==null){
> this.createIgniteInstance();
> }
> return ignite;
> }
>
> also you can wait for reconnection using this catch block instead of
> above... but as I said... who cares?... sometimes reconnection waits are
> not
> desirable...
> [...]
>try{
> //ensure this ignite instance is STARTED
> and connected
> this.ignite.getOrCreateCache("default");
> }catch (IllegalStateException e){
> this.ignite=null;
> }catch (IgniteClientDisconnectedException cause) {
> LOG.warn("Client disconnected from cluster.
> Waiting for reconnect...");
> cause.reconnectFuture().get(); // Wait for
> reconnect.
> }catch (CacheException e) {
> if (e.getCause() instanceof
> IgniteClientDisconnectedException) {
> LOG.warn("Client disconnected from
> cluster. Waiting for reconnect...");
> IgniteClientDisconnectedException cause =
> (IgniteClientDisconnectedException)e.getCause();
> cause.reconnectFuture().get(); // Wait for
> reconnect.
> }else if (e.getCause() instanceof
> IgniteClientDisconnectedCheckedException) {
> LOG.warn("Client disconnected from
> cluster. Waiting for reconnect...");
> IgniteClientDisconnectedCheckedException
> cause =
> (IgniteClientDisconnectedCheckedException)e.getCause();
> cause.reconnectFuture().get(); // Wait for
> reconnect.
> }else{
> throw e;
> }
> }
> [...]
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-Jdbc-connection-tp8431p8441.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Jdbc connection

2016-10-24 Thread Manu
If you use the Ignite JDBC driver, then to ensure that you always get a
valid Ignite instance before calling an Ignite operation, I recommend
using a DataSource implementation that validates the connection before
calls and creates new ones otherwise.

For common operations with an Ignite instance, I use this method to ensure
a *good* Ignite instance and don't wait for or control reconnection...
maybe there are some other mechanisms... but who cares? ;)

public Ignite getIgnite() {
    if (this.ignite != null) {
        try {
            // ensure this ignite instance is STARTED and connected
            this.ignite.getOrCreateCache("default");
        } catch (IllegalStateException e) {
            this.ignite = null;
        } catch (IgniteClientDisconnectedException cause) {
            this.ignite = null;
        } catch (CacheException e) {
            if (e.getCause() instanceof IgniteClientDisconnectedException) {
                this.ignite = null;
            } else if (e.getCause() instanceof IgniteClientDisconnectedCheckedException) {
                this.ignite = null;
            } else {
                throw e;
            }
        }
    }
    if (this.ignite == null) {
        this.createIgniteInstance();
    }
    return ignite;
}

You can also wait for reconnection using this catch block instead of the
one above... but as I said... who cares? Sometimes reconnection waits are
not desirable...
[...]
try {
    // ensure this ignite instance is STARTED and connected
    this.ignite.getOrCreateCache("default");
} catch (IllegalStateException e) {
    this.ignite = null;
} catch (IgniteClientDisconnectedException cause) {
    LOG.warn("Client disconnected from cluster. Waiting for reconnect...");
    cause.reconnectFuture().get(); // Wait for reconnect.
} catch (CacheException e) {
    if (e.getCause() instanceof IgniteClientDisconnectedException) {
        LOG.warn("Client disconnected from cluster. Waiting for reconnect...");
        IgniteClientDisconnectedException cause =
            (IgniteClientDisconnectedException) e.getCause();
        cause.reconnectFuture().get(); // Wait for reconnect.
    } else if (e.getCause() instanceof IgniteClientDisconnectedCheckedException) {
        LOG.warn("Client disconnected from cluster. Waiting for reconnect...");
        IgniteClientDisconnectedCheckedException cause =
            (IgniteClientDisconnectedCheckedException) e.getCause();
        cause.reconnectFuture().get(); // Wait for reconnect.
    } else {
        throw e;
    }
}
[...]



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Jdbc-connection-tp8431p8441.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Jdbc connection

2016-10-24 Thread Manu
You are right: if the connection is closed due to cluster *client* node
disconnection, the client will automatically recreate the connection using
the discovery configuration. A pool is also supported, but N pooled
instances of org.apache.ignite.internal.jdbc2.JdbcConnection for the same
URL on the same Java VM will use the same single Ignite instance...



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Jdbc-connection-tp8431p8440.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Jdbc connection

2016-10-24 Thread Anil
typo correction.

Thanks Manu.
>
> if i understand it correctly, if connection is closed due to cluster node
> failure, client will automatically recreate connection using discovery
> configuration.
>
> and *jdbc connection does support connection pool*.
>
> thanks for your help.
>
>
>
>
>
> On 24 October 2016 at 18:12, Manu <maxn...@hotmail.com> wrote:
>
>> Hi,
>>
>> as you know, org.apache.ignite.internal.jdbc2.JdbcConnection is an
>> implementation of java.sql.Connection, works always on client mode (this
>> flag is hardcoded to true when load xml configuration passed on connection
>> url) and works on read mode (only select). On same java VM instance,
>> connection (ignite instance) is cached internally in JdbcConnection by
>> url,
>> so for same connection (type, path, collocation...) you only have (and
>> need)
>> one ignite instance. For more info check this
>> https://apacheignite.readme.io/docs/jdbc-driver
>> <https://apacheignite.readme.io/docs/jdbc-driver>
>>
>> As a java.sql.Connection, you could use a javax.sql.DataSource
>> implementation to manage it and checks connection status (validation
>> query)
>> etc, but you don't need a pool, for example:
>>
>> > destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource">
>> > value="org.apache.ignite.IgniteJdbcDriver"/>
>> > value="jdbc:ignite:cfg://cache=default:collocated=true:local=false@ignite
>> /data_grid/ignite-client.xml"/>
>> 
>> 
>> > value="300"/>
>> 
>> 
>>
>>
>> [...]
>> This is client ignite configuration with default cache (dummy, without
>> data,
>> only used to validate client connection) used on url of
>> collocatedDbcpIgniteDataGridDataSource
>>
>> > class="org.apache.ignite.configuration.IgniteConfiguration">
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> > value="default*" />
>>         > value="PARTITIONED" />
>> 
>> 
>>
>> java.lang.String
>>
>> java.lang.String
>> 
>> 
>> 
>> [...]
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Ignite-Jdbc-connection-tp8431p8436.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: Ignite Jdbc connection

2016-10-24 Thread Anil
Thanks Manu.

if i understand it correctly, if connection is closed due to cluster node
failure, client will automatically recreate connection using discovery
configuration.

and jdbc connection does not support connection pool.

thanks for your help.





On 24 October 2016 at 18:12, Manu <maxn...@hotmail.com> wrote:

> Hi,
>
> as you know, org.apache.ignite.internal.jdbc2.JdbcConnection is an
> implementation of java.sql.Connection, works always on client mode (this
> flag is hardcoded to true when load xml configuration passed on connection
> url) and works on read mode (only select). On same java VM instance,
> connection (ignite instance) is cached internally in JdbcConnection by url,
> so for same connection (type, path, collocation...) you only have (and
> need)
> one ignite instance. For more info check this
> https://apacheignite.readme.io/docs/jdbc-driver
> <https://apacheignite.readme.io/docs/jdbc-driver>
>
> As a java.sql.Connection, you could use a javax.sql.DataSource
> implementation to manage it and checks connection status (validation query)
> etc, but you don't need a pool, for example:
>
>  destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource">
>  value="org.apache.ignite.IgniteJdbcDriver"/>
>  value="jdbc:ignite:cfg://cache=default:collocated=true:local=false@ignite
> /data_grid/ignite-client.xml"/>
> 
> 
>  value="300"/>
> 
> 
>
>
> [...]
> This is client ignite configuration with default cache (dummy, without
> data,
> only used to validate client connection) used on url of
> collocatedDbcpIgniteDataGridDataSource
>
>  class="org.apache.ignite.configuration.IgniteConfiguration">
> 
> 
> 
> 
> 
> 
> 
>  value="default*" />
>  value="PARTITIONED" />
> 
> 
>
> java.lang.String
>
> java.lang.String
> 
> 
> 
> [...]
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-Jdbc-connection-tp8431p8436.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Jdbc connection

2016-10-24 Thread Manu
Hi,

As you know, org.apache.ignite.internal.jdbc2.JdbcConnection is an
implementation of java.sql.Connection. It always works in client mode
(this flag is hardcoded to true when loading the XML configuration passed
on the connection URL) and works in read mode (SELECT only). On the same
Java VM instance, the connection (Ignite instance) is cached internally in
JdbcConnection by URL, so for the same connection (type, path,
collocation...) you only have (and need) one Ignite instance. For more
info check https://apacheignite.readme.io/docs/jdbc-driver

As a java.sql.Connection, you could use a javax.sql.DataSource
implementation to manage it and check the connection status (validation
query) etc., but you don't need a pool, for example:

<bean id="collocatedDbcpIgniteDataGridDataSource" destroy-method="close"
      class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="org.apache.ignite.IgniteJdbcDriver"/>
    <property name="url"
        value="jdbc:ignite:cfg://cache=default:collocated=true:local=false@ignite/data_grid/ignite-client.xml"/>
    ...
</bean>
[...]
This is the client Ignite configuration with the default cache (a dummy,
without data, only used to validate the client connection) referenced in
the URL of collocatedDbcpIgniteDataGridDataSource:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    ...
    <property name="cacheConfiguration">
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="default*"/>
            <property name="cacheMode" value="PARTITIONED"/>
            <property name="indexedTypes">
                <list>
                    <value>java.lang.String</value>
                    <value>java.lang.String</value>
                </list>
            </property>
        </bean>
    </property>
    ...
</bean>
[...]



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Jdbc-connection-tp8431p8436.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite Jdbc connection

2016-10-23 Thread Anil
HI,

Is the Ignite JDBC connection fault tolerant? It is a distributed
cluster, and any node can go down at any point in time.

And does it support a connection pool? Each connection starts Ignite
in client mode (clientMode = true).

Thanks for your clarifications.