Re: Artemis does not reconnect to MySQL after connection timeout

2019-08-30 Thread mk666aim
Bumping this one.
I know that the JDBC persistence is under development, but basic
connection maintenance should be considered an essential feature.
If Artemis can't stay up while connected to a MySQL server, then the feature
is not just experimental, but unusable...




mk666aim wrote
> We're trying to use a MySQL 5.7 backend for the JDBC persistent store.
> This works fine until the connection goes stale due to the server timeout.
> [...]







Re: replicated static master/slave - which is the correct URI for an artemis-jms-client?

2019-08-21 Thread mk666aim
Thank you Justin.
I can indeed see in your source code that this flag is being used all over
the place.

I am now using it and also using reconnectAttempts=-1 and things behave ok.
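
Concretely, the client now builds its connection factory from a URL along
these lines. This is only a minimal sketch: the Spring bean wrapper, the bean
id and the host names are placeholders; only the URL parameters (ha=true and
reconnectAttempts=-1) come from this thread:

<bean id="connectionFactory"
      class="org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory">
   <!-- ha=true plus unlimited reconnect attempts, as discussed above -->
   <constructor-arg value="(tcp://master:61616,tcp://slave:61616)?ha=true&amp;reconnectAttempts=-1"/>
</bean>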

I ran into the issue in one of the client environments where clients did not
want to fail over, as if they were locked to the same node.
I found out that the clocks on the different servers were 1 hour apart (NTP
was not active).
I wonder if this was causing the failover issues, as the problems went away
once all boxes were using the same time (we have 2 boxes where the Artemis
brokers run and 2 boxes where the micro-services run).





Artemis does not reconnect to MySQL after connection timeout

2019-08-19 Thread mk666aim
We're trying to use a MySQL 5.7 backend for the JDBC persistent store.
This works fine until the connection goes stale due to the server timeout.
This is our current setup in broker.xml:


   
 
<store>
   <database-store>
      <jdbc-connection-url>jdbc:mysql://xxx:3306/artemis_datasource?create=true&amp;user=mq_admin&amp;password=abcd&amp;useSSL=false&amp;autoReconnect=true&amp;tcpKeepAlive=true&amp;autoReconnectForPools=true</jdbc-connection-url>
      <bindings-table-name>BINDINGS_TABLE</bindings-table-name>
      <message-table-name>MESSAGE_TABLE</message-table-name>
      <page-store-table-name>P_MSG_TBL</page-store-table-name>
      <large-messages-table-name>LARGE_MESSAGES_TABLE</large-messages-table-name>
      <node-manager-store-table-name>NODE_MANAGER_TABLE</node-manager-store-table-name>
      <jdbc-driver-class-name>com.mysql.cj.jdbc.Driver</jdbc-driver-class-name>
   </database-store>
</store>


The MySQL default for the *wait_timeout* parameter is 8 hours.
We have not been able to get Artemis to reconnect after this timeout is
reached, and we were forced to set up a cron-triggered restart to mitigate
this.

The error that we get is as follows:

*The last packet successfully received from the server was 43417 seconds
ago. The last packet sent successfully to the server was 43417 seconds ago,
which is longer than the server configured value of 'wait_timeout'. You
should consider either expiring and/or testing connection validity before
use in your application, increasing the server configured values for client
timeouts, or using the Connector/J connection property 'autoReconnect=true'
to avoid this problem.*

This issue is easily rectifiable in the old ActiveMQ, as it uses a
Spring-configured datasource, e.g.:

<bean ... destroy-method="close">
   <property name="url" value="jdbc:mysql://xxx:3306/mq_datasource?useSSL=false"/>
   ...
</bean>


The *autoReconnect* / *autoReconnectForPools* parameters to the driver URL
did not make any difference, and in addition they are not actually
recommended by MySQL maintainers.
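
What the Connector/J error message itself suggests instead is validating
connections before use. On a DBCP-style pool that is roughly the following;
this is only a sketch using commons-dbcp property names, not the exact bean
from our old activemq.xml (only the URL comes from the snippet above):

<bean id="mysql-ds" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
   <property name="url" value="jdbc:mysql://xxx:3306/mq_datasource?useSSL=false"/>
   <!-- validate each connection before it is handed out, so connections
        that sat idle past wait_timeout are dropped and replaced -->
   <property name="testOnBorrow" value="true"/>
   <property name="validationQuery" value="SELECT 1"/>
</bean>

With testOnBorrow enabled, the pool replaces stale connections instead of
handing them to the broker, which is exactly what the JDBC store is missing
here.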







Re: JDBC HA failover, is this supported?

2019-08-14 Thread mk666aim
This turned out to be a typo in my configuration - underscore instead of a
dash, so the configuration parser was unable to find the connector.
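
To illustrate the kind of mistake: the broken and the working configuration
differed only in the separator. The element and connector name below are
just an example, not the exact line from my file:

<static-connectors>
   <!-- broken: underscore, so the connector reference is never resolved -->
   <connector_ref>node2-connector</connector_ref>
   <!-- fixed: dash -->
   <connector-ref>node2-connector</connector-ref>
</static-connectors>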





Re: ActiveMq Artemis Master/Slave

2019-08-14 Thread mk666aim
This turned out to be a typo in my configuration - underscore instead of a
dash, so the configuration parser was unable to find the connector.





Re: replicated static master/slave - which is the correct URI for an artemis-jms-client?

2019-08-13 Thread mk666aim
So what exactly is the ha parameter for?
Does it treat servers somehow differently? E.g. *non-ha* pool of servers,
vs. *ha* pool of servers?

From what I am seeing, even without it, the client fails over to the slave
after retrying the master 5 times... So the failover is somehow triggered
anyway...






Re: JDBC HA failover, is this supported?

2019-08-13 Thread mk666aim
I realised that my original post had some error traces missing, so I have now
corrected it.
It seems that quoted stack traces do not get posted correctly and they
result in empty space.
I have now instead just pasted the text normally and marked it bold.





Re: replicated static master/slave - which is the correct URI for an artemis-jms-client?

2019-08-01 Thread mk666aim
Also, in a scenario where the master server is started again and the backup
server should fail back to the master from that moment onwards: should the
client also reconnect to the master? It does not seem to be happening, as my
client is still locked to the backup server.
When I shut down the backup server, the client then reconnects to the master,
and then I can start the backup server back up.

Is this the expected behaviour?





Re: replicated static master/slave - which is the correct URI for an artemis-jms-client?

2019-08-01 Thread mk666aim
The below URL does not seem to work for me.
The client basically never switches to the backup node. Isn't that what
reconnectAttempts=-1 is supposed to cause?

I used the following URL before and the switching did happen:

*(tcp://master:61616,tcp://slave:61616)?reconnectAttempts=5*

What am I missing?

And what is the *ha* flag about?


jbertram wrote
> Thanks for following up, Frank. I would expect a URL like this to work:
> 
>   (tcp://master:61616,tcp://slave:61616)?ha=true&reconnectAttempts=-1







Re: ActiveMq Artemis Master/Slave

2019-08-01 Thread mk666aim
I read the linked issue too, and as I understand it, that was to do with NFS.
However, I am getting a very similar issue when using a shared JDBC store
(MySQL).

The error is slightly different, though:



I have traced it to this call in the FailbackChecker:



It seems the default connection is empty?

Attaching the configurations of master and slave:
broker_node1.xml
broker_node2.xml

Any help appreciated. 





Re: ActiveMq Artemis Master/Slave

2019-08-01 Thread mk666aim
I read the linked issue too, and as I understand it, that was to do with NFS.
However, I am getting a very similar issue when using a shared JDBC store
(MySQL).

The error is slightly different, though:


Any help appreciated.





JDBC HA failover, is this supported?

2019-07-30 Thread mk666aim
With Artemis 2.9.0, I am trying to use a shared JDBC store between 2 nodes.

I have configured high availability with failover, but it does not quite
work: I am getting various error messages during failover, even though the
failover does mostly happen.

I am attaching the configuration of both brokers.

broker_node1.xml
broker_node2.xml

When both brokers start up, I can correctly see one registering as a Live
server and the other one as a Backup.
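
For context, the HA part of the two configs boils down to a shared-store
master/slave policy, roughly like the following. This is only a sketch of the
standard ha-policy elements, not the exact content of the attached files:

<!-- node1 (live/master) -->
<ha-policy>
   <shared-store>
      <master>
         <failover-on-shutdown>true</failover-on-shutdown>
      </master>
   </shared-store>
</ha-policy>

<!-- node2 (backup/slave) -->
<ha-policy>
   <shared-store>
      <slave>
         <allow-failback>true</allow-failback>
      </slave>
   </shared-store>
</ha-policy>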

When I kill the broker on node1 (i.e. the live one), after a while the
backup server goes live, but throws an error like this:


When I then start the broker again on node 1, it stops at this point:


Then after a while it throws this error:


Meanwhile, on node 2, the broker announces the following:



Please note that we found out that the time is about 90 seconds off between
the 2 nodes; we will get it synced up soon, although I don't feel this is
related to the above errors...

Any help greatly appreciated.



