Re: replicated static master/slave - which is the correct URI for an artemis-jms-client?

2019-08-13 Thread mk666aim
So what exactly is the ha parameter for? Does it treat servers somehow differently, e.g. a *non-ha* pool of servers vs. an *ha* pool of servers? From what I am seeing, even without it the client retries the master 5 times and then fails over to the slave... So the fail over is
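For context, a sketch of the two client URL styles being contrasted in this thread (host names and ports are placeholders, not taken from the poster's setup):

```
# Without ha - the list is treated as a plain pool of initial connectors;
# reconnect behaviour is driven only by reconnectAttempts.
(tcp://master:61616,tcp://slave:61616)?reconnectAttempts=5

# With ha=true - the client expects the servers to form a live/backup pair
# and follows topology updates when failover occurs.
(tcp://master:61616,tcp://slave:61616)?ha=true&reconnectAttempts=-1
```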

Re: JDBC HA failover, is this supported?

2019-08-13 Thread mk666aim
I realised that my original post had some omitted error traces, so I have now corrected it. It seems that quoted stack traces do not get posted correctly; they result in empty space. I have instead pasted the text normally and marked it bold. -- Sent from:

Re: ActiveMq Artemis Master/Slave

2019-08-14 Thread mk666aim
This turned out to be a typo in my configuration - an underscore instead of a dash, so the configuration parser was unable to find the connector.

Re: JDBC HA failover, is this supported?

2019-08-14 Thread mk666aim
This turned out to be a typo in my configuration - an underscore instead of a dash, so the configuration parser was unable to find the connector.

JDBC HA failover, is this supported?

2019-07-30 Thread mk666aim
With Artemis 2.9.0, I am trying to use a shared JDBC store between 2 nodes. I have configured high availability with failover, but it does not quite work. I am getting various error messages during failover, even though failover does mostly happen. I am attaching the
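For reference, a minimal sketch of what a shared JDBC store with an HA shared-store policy looks like in broker.xml. This is not the poster's actual configuration; the MySQL URL and table names are illustrative, and a real setup needs the matching <slave> policy on the backup node:

```xml
<store>
  <database-store>
    <jdbc-connection-url>jdbc:mysql://dbhost:3306/artemis</jdbc-connection-url>
    <jdbc-driver-class-name>com.mysql.cj.jdbc.Driver</jdbc-driver-class-name>
    <bindings-table-name>BINDINGS</bindings-table-name>
    <message-table-name>MESSAGES</message-table-name>
    <large-message-table-name>LARGE_MESSAGES</large-message-table-name>
    <node-manager-store-table-name>NODE_MANAGER</node-manager-store-table-name>
  </database-store>
</store>

<ha-policy>
  <shared-store>
    <master>
      <!-- let the backup take over when this broker shuts down -->
      <failover-on-shutdown>true</failover-on-shutdown>
    </master>
  </shared-store>
</ha-policy>
```

Both nodes must point at the same database so the JDBC lock in the shared tables can arbitrate which broker is live.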

Re: ActiveMq Artemis Master/Slave

2019-08-01 Thread mk666aim
I read the linked issue too, and as I understand it that was to do with NFS. However, I am getting a very similar issue when using a shared JDBC store (MySQL). The error is slightly different though: Any help appreciated.

Re: ActiveMq Artemis Master/Slave

2019-08-01 Thread mk666aim
I read the linked issue too, and as I understand it that was to do with NFS. However, I am getting a very similar issue when using a shared JDBC store (MySQL). The error is slightly different though: I have traced it to this call here in the FailbackChecker: Seems the default connection is empty or

Re: replicated static master/slave - which is the correct URI for an artemis-jms-client?

2019-08-01 Thread mk666aim
Also, in a scenario where the master server is started again, and the backup server should fail back to the master from that moment onwards: should the client also reconnect to the master? It does not seem to be happening, as my client is still locked to the backup server. When I shut down the backup

Re: replicated static master/slave - which is the correct URI for an artemis-jms-client?

2019-08-01 Thread mk666aim
The below URL does not seem to work for me. The client basically never switches to the backup node. Isn't that what reconnectAttempts=-1 should cause? I used the following URL before and the switching happened: *(tcp://master:61616,tcp://slave:61616)?reconnectAttempts=5* What am I missing? And what is

Artemis does not reconnect to MySQL after connection timeout

2019-08-19 Thread mk666aim
We're trying to use a MySQL 5.7 backend for the JDBC persistent store. This works fine until the connection goes stale due to the server timeout. This is our current set up in broker.xml:
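The broker.xml snippet was lost in posting. As a hypothetical sketch only (not the poster's actual config), a MySQL database-store typically looks like the following; the jdbc-network-timeout value is illustrative and is the knob most directly related to stale connections:

```xml
<store>
  <database-store>
    <jdbc-connection-url>jdbc:mysql://dbhost:3306/artemis</jdbc-connection-url>
    <jdbc-driver-class-name>com.mysql.cj.jdbc.Driver</jdbc-driver-class-name>
    <!-- timeout (ms) applied to JDBC network operations -->
    <jdbc-network-timeout>20000</jdbc-network-timeout>
    <bindings-table-name>BINDINGS</bindings-table-name>
    <message-table-name>MESSAGES</message-table-name>
  </database-store>
</store>
```

MySQL's wait_timeout on the server side is the usual cause of idle connections being dropped, which matches the symptom described here.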

Re: Artemis does not reconnect to MySQL after connection timeout

2019-08-30 Thread mk666aim
Bumping this one. I know that the JDBC persistence is under development, but simple connection maintenance should be considered an essential feature. If Artemis can't stay up while connected to a MySQL server, then the feature is not just experimental, but unusable... mk666aim wrote > We

Re: replicated static master/slave - which is the correct URI for an artemis-jms-client?

2019-08-21 Thread mk666aim
Thank you Justin. I can indeed see in the source code that this flag is used all over the place. I am now using it together with reconnectAttempts=-1 and things behave OK. I ran into the issue in one of the client environments where clients did not want to fail over, as if locked in to
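Putting the thread's resolution together: the client URL that finally behaved correctly combines the ha flag with unlimited reconnect attempts (host names are placeholders):

```
# ha=true tells the client the servers form a live/backup pair;
# reconnectAttempts=-1 means retry forever rather than giving up.
(tcp://master:61616,tcp://slave:61616)?ha=true&reconnectAttempts=-1
```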