Re: replicated static master/slave - which is the correct URI for an artemis-jms-client?

2019-08-13 Thread mk666aim
So what exactly is the ha parameter for?
Does it treat servers somehow differently? E.g. a *non-ha* pool of servers
vs. an *ha* pool of servers?

From what I am seeing, even without it the client fails over to the slave after
retrying the master 5 times... So the failover is somehow triggered anyway...






Re: JDBC HA failover, is this supported?

2019-08-13 Thread mk666aim
I realised that my original post had some error traces missing, so I have now
corrected it.
It seems that quoted stack traces do not get posted correctly and end up as
empty space, so I have instead pasted the text in normally and marked it in
bold.





Re: ActiveMQ clustering using NFSv4

2019-08-13 Thread Tim Bain
Phu,

You can use any shared filesystem for storing the data, but to act as the
shared lock, the filesystem must support specific lock operations. NFSv4
supports those operations; NFSv3 and earlier do not. Other filesystems may or
may not, though your experiment makes it sound like ESXi may not.

However, via the pluggable storage locker feature (
https://activemq.apache.org/pluggable-storage-lockers), you can use a
different technology for the shared lock than you use for the data. So you
could store your data on ESXi but use a JDBC database for the lock, for
example.
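
As a rough sketch of how that could look in activemq.xml, assuming a
lockDataSource bean is defined elsewhere in the broker context (exact locker
attributes vary by version, so treat this as a sketch rather than a drop-in
config):

<persistenceAdapter>
  <kahaDB directory="/mnt/shared/activemq-data">
    <locker>
      <lease-database-locker dataSource="#lockDataSource" lockAcquireSleepInterval="5000"/>
    </locker>
  </kahaDB>
</persistenceAdapter>

Here the KahaDB data stays on the shared filesystem while the master-election
lock is held in the database.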

Tim

On Mon, Aug 12, 2019, 6:13 PM Arshkr  wrote:

> Hi Tim,
>
> Thank you for your reply.
> Yes, I have read this document before, and the master/slave concept is clear to
> me. My big question is about the NFS shared file system that ActiveMQ writes
> its data to.
> The only option I have for this setup is to use an NFS shared file system for
> the data directory. I don't know whether NFSv4 is the only option, or whether
> there is an alternative such as local file share storage vs. network file
> share.
>
>
> Regards,
> Phu Nguyen.
>
>
>
>
>


Re: VirtualTopic issue with virtualSelectorCacheBrokerPlugin

2019-08-13 Thread Tim Bain
The next time this happens, could you please use JVisualVM to capture a
minute or so of CPU sampling to give us an understanding of what code paths
are/aren't being executed? Please make sure that messages are coming into
the virtual topic in question during the time you do the sampling. Note
that you'll need to have the JMX port open on the broker process for this
to work.
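
If JMX isn't already enabled on the broker, JVM arguments along these lines
(e.g. added to ACTIVEMQ_OPTS) are the usual way to open the port; the port
number is only an example, and you shouldn't leave authentication and SSL
disabled outside a test environment:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=1099
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false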

Once you have that, please create a bug in JIRA and attach the capture to
it, and hopefully someone can figure out how to fix the bug based on the
information in the capture.

Thanks,
Tim

On Thu, Aug 8, 2019, 12:38 PM The4Summers 
wrote:

> Hi!
>
> We recently started to use the VirtualTopic concept in ActiveMQ with the
> selectorAware attribute set to true, and with virtualSelectorCacheBrokerPlugin
> to be able to continue receiving selected (filtered) messages in the
> destination queue even if a consumer is down.
>
> We have producer and consumer services defined with Camel endpoints and
> running as Spring Boot applications.
>
> *In activemq.xml:*
> ...
> <plugins>
>   <virtualSelectorCacheBrokerPlugin persistFile="${activemq.data}/selectorcache.data"/>
> </plugins>
>
> <destinationInterceptors>
>   <virtualDestinationInterceptor>
>     <virtualDestinations>
>       <virtualTopic name="VirtualTopic.>" prefix="Consumer.*." selectorAware="true"/>
>     </virtualDestinations>
>   </virtualDestinationInterceptor>
> </destinationInterceptors>
> ...
>
>
> - *In the producer, the virtual topic is defined with the following Camel URI*:
>
>
> jms.producer.uri=activemq:topic:VirtualTopic.DeliveryCompletion?connectionFactory=xaJmsConnectionFactory=#transactionManager
>
>
>
> - *In consumer, the listening queue is defined with the following Camel
> URI*:
>
>
> jms.consumer.uri=activemq:queue:Consumer.QUEUE_A.VirtualTopic.DeliveryCompletion?connectionFactory=connectionFactory=CLIENT_ACKNOWLEDGE=DESTINATION_APP_ID='1234567890'
>
>
> In our producer test, we send the same set of messages in a loop to the
> destination virtual topic, with a few distinct application IDs in the header
> (DESTINATION_APP_ID). The selector configured in the consumers is used to
> filter messages based on this header containing the application ID. So we
> expect that when the producer sends a bunch of notification messages to the
> virtual topic (VirtualTopic.DeliveryCompletion), they will be forwarded
> to the listener queues according to the configured selector expression. The
> role of 'virtualSelectorCacheBrokerPlugin' is to cache the selector
> expressions for each consumer listening on the virtual topic queue.
>
> Most of the time (I would say 99% of the time), all this works very well. All
> messages are properly forwarded to the proper queues according to the
> selector, and the others are discarded. However, after some time, some
> messages are not filtered properly and are sent to all listening queues, where
> they stay (since the consumers do not accept them because of the selector).
>
> When we disable the plugin in ActiveMQ, we do not have this problem; however,
> if one consumer goes down for a few minutes, for example, it of course loses
> some messages. To avoid losing messages when consumers are down, we want to
> use the plugin. But because we encountered that issue with it, we cannot
> continue with this approach (for the moment we can reproduce the expected
> behaviour using the Composite Destination concept, as sketched below, but it
> is not as flexible as the VirtualTopic).
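>
> As an illustration, the Composite Destination workaround looks roughly like
> this in activemq.xml (the queue name and selector value are just the examples
> from above):
>
> <destinationInterceptors>
>   <virtualDestinationInterceptor>
>     <virtualDestinations>
>       <compositeTopic name="VirtualTopic.DeliveryCompletion" forwardOnly="true">
>         <forwardTo>
>           <filteredDestination selector="DESTINATION_APP_ID = '1234567890'"
>                                queue="Consumer.QUEUE_A.VirtualTopic.DeliveryCompletion"/>
>         </forwardTo>
>       </compositeTopic>
>     </virtualDestinations>
>   </virtualDestinationInterceptor>
> </destinationInterceptors>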
>
> If anyone knows what is going on with this setup (with the
> "virtualSelectorCacheBrokerPlugin" plugin), please let us know.
>
>
>
>
>
>