RE: MQTT Link Stealing

2015-09-10 Thread Shobhana
I am using version 5.11.1 with the Eclipse Paho MQTT client and see the same
issue in this version too. By default, allowLinkStealing is set to true for
MQTT. But I see hundreds of the following messages in the AMQ logs:

2015-09-10 13:21:47,544 | WARN  | Failed to register MBean
org.apache.activemq:type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=mqtt,connectionViewType=clientId,connectionName=qrserverchat
| org.apache.activemq.broker.jmx.ManagedTransportConnection | ActiveMQ
Transport: tcp:///52.74.19.21:56042@1883
2015-09-10 13:21:47,543 | WARN  | Failed to register MBean
org.apache.activemq:type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=mqtt,connectionViewType=clientId,connectionName=qrserverchat
| org.apache.activemq.broker.jmx.ManagedTransportConnection | ActiveMQ
Transport: tcp:///52.74.19.21:56044@1883
2015-09-10 13:21:47,543 | WARN  | Failed to register MBean
org.apache.activemq:type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=mqtt,connectionViewType=clientId,connectionName=qrserverchat
| org.apache.activemq.broker.jmx.ManagedTransportConnection | ActiveMQ
Transport: tcp:///52.74.19.21:56008@1883
2015-09-10 13:21:47,543 | WARN  | Failed to register MBean
org.apache.activemq:type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=mqtt,connectionViewType=clientId,connectionName=qrserverchat
| org.apache.activemq.broker.jmx.ManagedTransportConnection | ActiveMQ
Transport: tcp:///52.74.19.21:56013@1883
2015-09-10 13:21:47,543 | WARN  | Failed to register MBean
org.apache.activemq:type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=mqtt,connectionViewType=clientId,connectionName=qrserverchat
| org.apache.activemq.broker.jmx.ManagedTransportConnection | ActiveMQ
Transport: tcp:///52.74.19.21:56009@1883
2015-09-10 13:21:47,528 | WARN  | Stealing link for clientId qrserverchat
From Connection Transport Connection to: tcp://52.74.19.21:56241 |
org.apache.activemq.broker.region.RegionBroker | ActiveMQ Transport:
tcp:///52.74.19.21:56229@1883
2015-09-10 13:21:47,526 | WARN  | Stealing link for clientId qrserverchat
From Connection Transport Connection to: tcp://52.74.19.21:56235 |
org.apache.activemq.broker.region.RegionBroker | ActiveMQ Transport:
tcp:///52.74.19.21:56241@1883
2015-09-10 13:21:47,526 | WARN  | Stealing link for clientId qrserverchat
From Connection Transport Connection to: tcp://52.74.19.21:56237 |
org.apache.activemq.broker.region.RegionBroker | ActiveMQ Transport:
tcp:///52.74.19.21:56235@1883
2015-09-10 13:21:47,525 | WARN  | Stealing link for clientId qrserverchat
From Connection Transport Connection to: tcp://52.74.19.21:56203 |
org.apache.activemq.broker.region.RegionBroker | ActiveMQ Transport:
tcp:///52.74.19.21:56237@1883

Strangely, this problem starts after running AMQ for about 6 hours, and AMQ
does not recover from it; the only solution is to restart the broker!
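For reference, link stealing is a per-connector toggle; a minimal sketch of turning it off on an MQTT connector, where the connector name, bind address and transport are illustrative assumptions rather than the actual config:

    <!-- allowLinkStealing defaults to true for MQTT; false makes the broker
         reject the new connection instead of closing the old one when a
         clientId is reused -->
    <transportConnector name="mqtt" uri="mqtt+nio://0.0.0.0:1883" allowLinkStealing="false"/>

Note that MQTT 3.1.x expects the broker to drop the older connection when a client ID is reused, so disabling link stealing may only trade these warnings for rejected reconnects.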

Does anyone know of a solution or workaround to get rid of this problem?

Thanks,
Shobhana





Re: ActiveMQ with KahaDB as persistent store becomes very slow (almost unresponsive) after creating large no (25000+) of Topics

2016-03-30 Thread Shobhana
Hi Tim & Christopher,

I tried version 5.13.2 but, as you suspected, it did not solve my
problem.

We don't have any wildcard subscriptions. Most of the topics have a maximum
of 8 subscriptions (the number ranges between 2 and 8), and a few topics
(~25-30 so far) have more than 8 (this is not fixed; it depends on the number
of users interested in these specific topics; the max I have seen is 40).

Btw, I just realized that I have set a very low value for destination
inactivity (30 secs), and hence many destinations are getting removed very
early. Later, when a message is published to the same destination, the
destination gets created again. I will correct this by increasing the timeout
to an appropriate value for each destination
(varying from 1 hour to 1 day).
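For reference, per-destination inactivity is usually expressed via the destination policy; a minimal sketch, where the wildcard and the timeout value (3600000 ms = 1 hour) are illustrative:

    <!-- requires schedulePeriodForDestinationPurge > 0 on the broker element;
         note the attribute really is spelled inactiveTimoutBeforeGC in ActiveMQ -->
    <policyEntry topic=">" gcInactiveDestinations="true" inactiveTimoutBeforeGC="3600000"/>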

Today, after upgrading to version 5.13.2 in my test env, I tried
different configurations to see if there was any improvement. In particular,
I disabled journal disk sync (since many threads were waiting on KahaDB-level
operations) and also disabled metadata update. With these changes, the
contention moved to a different level (KahaDB update index; see the attached
thread dumps).
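For reference, the KahaDB knobs involved look roughly like this; a sketch with illustrative values, not the configuration that was actually deployed:

    <persistenceAdapter>
      <!-- enableJournalDiskSyncs="false" skips the fsync per journal write;
           enableIndexWriteAsync is assumed here to be the "metadata update" knob;
           indexCacheSize defaults to 10000 and is raised illustratively -->
      <kahaDB directory="${activemq.data}/kahadb"
              enableJournalDiskSyncs="false"
              enableIndexWriteAsync="true"
              indexCacheSize="100000"/>
    </persistenceAdapter>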

ThreadDump1.txt
<http://activemq.2283324.n4.nabble.com/file/n4710055/ThreadDump1.txt>  
ThreadDump2.txt
<http://activemq.2283324.n4.nabble.com/file/n4710055/ThreadDump2.txt>  

I will test again by increasing the index cache size (current value is set
to the default of 10000) to 100000 and see if it makes any improvement.

Also, the histo report showed a huge number (1393177) of
org.apache.activemq.management.CountStatisticImpl instances and 1951637
instances of java.util.concurrent.locks.ReentrantLock$NonfairSync. See the
attached histo for the complete report.

histo.txt <http://activemq.2283324.n4.nabble.com/file/n4710055/histo.txt>  

What are these org.apache.activemq.management.CountStatisticImpl instances?
Is there any way to avoid them?

Thanks,
Shobhana








Re: ActiveMQ with KahaDB as persistent store becomes very slow (almost unresponsive) after creating large no (25000+) of Topics

2016-04-11 Thread Shobhana
I looked at only the thread dumps. I will profile the broker with a profiler
first and see where it is spending most of its time.





Re: ActiveMQ with KahaDB as persistent store becomes very slow (almost unresponsive) after creating large no (25000+) of Topics

2016-04-08 Thread Shobhana
Hi Tim,

I said indexing was the point of contention after seeing that thread
"ActiveMQ NIO Worker 169" was still working on
org.apache.activemq.store.kahadb.MessageDatabase.updateIndex even after more
than 3.5 minutes.

These are full thread dumps. I guess the read lock is held by threads
"ActiveMQ NIO Worker 169" and "ActiveMQ NIO Worker 171". Since the read lock
is already held by those threads, the thread "ActiveMQ Broker[localhost]
Scheduler" is waiting to acquire the write lock. And since a thread is already
waiting to acquire the write lock, the other threads waiting to acquire the
read lock remain blocked.

What could be the reason for updateIndex not completing even after 3.5
minutes?





Re: ActiveMQ with KahaDB as persistent store becomes very slow (almost unresponsive) after creating large no (25000+) of Topics

2016-03-28 Thread Shobhana
I enabled debug logs to see what was happening when AMQ could not send an ACK
within 30 secs. I started the AMQ broker on 28-Mar around 17:40 and, to my
surprise, I find that the server startup is still going on even now, at
29-Mar 09:30 (~16 hours!!). I see a lot of the following messages in the logs:

2016-03-28 18:36:25,340 | DEBUG | PrimaryBroker adding destination:
topic://33ConcurrentTopicCreator410 |
org.apache.activemq.broker.region.AbstractRegion | main
2016-03-28 18:36:25,347 | DEBUG | Restoring durable subscription:
SubscriptionInfo {subscribedDestination =
topic://33ConcurrentTopicCreator410, destination =
topic://33ConcurrentTopicCreator410, clientId = 3ConcurrentTopicCreator,
subscriptionName = AT_LEAST_ONCE:33ConcurrentTopicCreator410, selector =
null, noLocal = false} | org.apache.activemq.broker.region.TopicRegion |
main
2016-03-28 18:36:25,350 | DEBUG | PrimaryBroker adding destination:
topic://33ConcurrentTopicCreator409 |
org.apache.activemq.broker.region.AbstractRegion | main
2016-03-28 18:36:25,357 | DEBUG | Restoring durable subscription:
SubscriptionInfo {subscribedDestination =
topic://33ConcurrentTopicCreator409, destination =
topic://33ConcurrentTopicCreator409, clientId = 3ConcurrentTopicCreator,
subscriptionName = AT_LEAST_ONCE:33ConcurrentTopicCreator409, selector =
null, noLocal = false} | org.apache.activemq.broker.region.TopicRegion |
main
2016-03-28 18:36:25,360 | DEBUG | PrimaryBroker adding destination:
topic://33ConcurrentTopicCreator408 |
org.apache.activemq.broker.region.AbstractRegion | main
2016-03-28 18:36:25,367 | DEBUG | Restoring durable subscription:
SubscriptionInfo {subscribedDestination =
topic://33ConcurrentTopicCreator408, destination =
topic://33ConcurrentTopicCreator408, clientId = 3ConcurrentTopicCreator,
subscriptionName = AT_LEAST_ONCE:33ConcurrentTopicCreator408, selector =
null, noLocal = false} | org.apache.activemq.broker.region.TopicRegion |
main
2016-03-28 18:36:25,370 | DEBUG | PrimaryBroker adding destination:
topic://33ConcurrentTopicCreator403 |
org.apache.activemq.broker.region.AbstractRegion | main
2016-03-28 18:36:25,377 | DEBUG | Restoring durable subscription:
SubscriptionInfo {subscribedDestination =
topic://33ConcurrentTopicCreator403, destination =
topic://33ConcurrentTopicCreator403, clientId = 3ConcurrentTopicCreator,
subscriptionName = AT_LEAST_ONCE:33ConcurrentTopicCreator403, selector =
null, noLocal = false} | org.apache.activemq.broker.region.TopicRegion |
main
2016-03-28 18:36:25,380 | DEBUG | PrimaryBroker adding destination:
topic://33ConcurrentTopicCreator402 |
org.apache.activemq.broker.region.AbstractRegion | main
2016-03-28 18:36:25,387 | DEBUG | Restoring durable subscription:
SubscriptionInfo {subscribedDestination =
topic://33ConcurrentTopicCreator402, destination =
topic://33ConcurrentTopicCreator402, clientId = 3ConcurrentTopicCreator,
subscriptionName = AT_LEAST_ONCE:33ConcurrentTopicCreator402, selector =
null, noLocal = false} | org.apache.activemq.broker.region.TopicRegion |
main

There are 89000+ instances of "PrimaryBroker adding destination:" and
"Restoring durable subscription:" messages.

If I enable only the INFO log level, the broker starts up within a few minutes;
if DEBUG is enabled, why does it take so long? Can writing debug messages to
the log really take so much more time?







ActiveMQ with KahaDB as persistent store becomes very slow (almost unresponsive) after creating large no (25000+) of Topics

2016-03-28 Thread Shobhana
a:120)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.mqtt.MQTTNIOTransport.access$000(MQTTNIOTransport.java:43)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.mqtt.MQTTNIOTransport$1.onSelect(MQTTNIOTransport.java:72)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:98)[activemq-client-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:118)[activemq-client-5.13.1.jar:5.13.1]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_95]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_95]
at java.lang.Thread.run(Thread.java:745)[:1.7.0_95]

These exceptions make sense because I had killed the test client abruptly.

But after this, AMQ becomes virtually unusable! Further attempts to connect
to the broker and/or subscribe/unsubscribe to/from topics always
time out.

I took thread dumps of the AMQ process and made the following observations:
a) Most of the threads are waiting to acquire the read lock on
org.apache.activemq.broker.region.AbstractRegion's destinationsLock
b) The method
org.apache.activemq.broker.region.AbstractRegion.addDestination, which had
acquired the write lock, did not finish even after 5 minutes! (Check
"ActiveMQ NIO Worker 1323" in ThreadDump1.log and ThreadDump2.log)
c) The method org.apache.activemq.broker.region.TopicRegion.addConsumer,
which internally calls the addDestination method mentioned above, did not
complete even after 30 minutes!! (Check the same thread in ThreadDump2.log and
ThreadDump3.log)

ThreadDump1.log
<http://activemq.2283324.n4.nabble.com/file/n4709985/ThreadDump1.log>  
ThreadDump2.log
<http://activemq.2283324.n4.nabble.com/file/n4709985/ThreadDump2.log>  
ThreadDump3.log
<http://activemq.2283324.n4.nabble.com/file/n4709985/ThreadDump3.log>  

PS:
a) We have disabled the dedicated task runner by setting
-Dorg.apache.activemq.UseDedicatedTaskRunner=false
b) We use KahaDB as the persistent store, and all messages are sent with QoS 1.

I feel I am not using the right AMQ configurations, but I am not able to
figure out which config is missing/wrong.

Any help / pointers would be greatly appreciated!

Thanks,
Shobhana





Re: Is there a way to be notified when a durable subscriber receives a MQTT message?

2016-04-04 Thread Shobhana
Thank you Tim!

In our production env, we have disabled advisories since the number of
destinations is already high and we have this issue
(http://activemq.2283324.n4.nabble.com/ActiveMQ-with-KahaDB-as-persistent-store-becomes-very-slow-almost-unresponsive-after-creating-large-s-td4709985.html).
So if I enable advisories, it will further add to the number of destinations
and worsen our problem.

Is there any alternative, or any way to selectively enable advisories
for only those destinations where they are necessary?





Re: Is there a way to be notified when a durable subscriber receives a MQTT message?

2016-04-04 Thread Shobhana
If you can enable advisories, you can use the solution suggested above by
Tim.





Re: Is there a way to be notified when a durable subscriber receives a MQTT message?

2016-04-04 Thread Shobhana
That's great! I'll give it a try. Many thanks, Christopher!


> On 05-Apr-2016, at 2:58 AM, christopher.l.shannon [via ActiveMQ] 
> <ml-node+s2283324n4710346...@n4.nabble.com> wrote:
> 
> Advisory messages are not persisted so you should not have any issues with 
> KahaDB by using them. 
> 
> On Mon, Apr 4, 2016 at 5:01 PM, Shobhana <[hidden email]> wrote: 
> 
> > If you can enable advisories, you can use the solution suggested above by 
> > Tim. 





Re: ActiveMQ with KahaDB as persistent store becomes very slow (almost unresponsive) after creating large no (25000+) of Topics

2016-03-29 Thread Shobhana
Thank you Christopher for your suggestion. I'll check this with 5.13.2.





Re: Support for setting default message expiry for MQTT messages

2016-03-29 Thread Shobhana
Thank you Tim for the super-quick response :-)
I will check this option.





Support for setting default message expiry for MQTT messages

2016-03-29 Thread Shobhana
Hi,

We use AMQ 5.13.1 and connect to it using Eclipse Paho's MQTT V3 client lib
to exchange MQTT messages between message publishers and durable
subscribers. Some of our durable subscribers (running in an Android app) may
go offline for extended periods (say, when the mobile data connection is
disabled for the app), and this results in messages accumulating for such
subscribers on the broker.

One way to overcome this problem is to set message expiry so that AMQ can
delete all expired messages. But the MQTT specification does not support
message expiry, and hence I cannot set a "TTL" for individual messages. Is
there any hook on the AMQ side to achieve the same effect (either at the
destination level or the broker level)?
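For reference, one broker-level hook is the timestamping plugin, which can stamp an expiry onto messages that arrive without one; a minimal sketch, where the 24-hour value (86400000 ms) is illustrative:

    <plugins>
      <!-- zeroExpirationOverride stamps messages arriving with no expiration (0);
           ttlCeiling caps any client-supplied TTL; both are in milliseconds -->
      <timeStampingBrokerPlugin zeroExpirationOverride="86400000" ttlCeiling="86400000"/>
    </plugins>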

I saw a configuration, offlineDurableSubscriberTimeout, that can help to
achieve a similar result. My questions are:
a) Will offlineDurableSubscriberTimeout clean up offline subscribers even if
there are unconsumed messages?
b) Is there any better mechanism to achieve the desired result?

Appreciate any help.

Thanks,
Shobhana





Is there a way to be notified when a durable subscriber receives a MQTT message?

2016-04-01 Thread Shobhana
A publisher sends a persistent message to a topic which has more than one
durable subscriber. One or more of these durable subscribers may be offline
when the message is sent. Is there any way to get notified when the message
has been delivered to all subscribers?





Re: ClassCastException while subscribing to a topic

2016-04-21 Thread Shobhana
Sorry Tim, I don't know how to reproduce this issue.

After this issue happened, I restarted the AMQ broker after deleting the whole
data directory. I have been observing the AMQ logs, but haven't seen this
issue again.

Another point to note: our app will try to subscribe again if a subscription
fails for any reason other than connection issues. It keeps retrying until
it succeeds or the connection is down. This issue was observed with one
specific topic, "profile/specific_id", where specific_id is the ID of one
specific user. From the backed-up logs, I could see that this had failed
25 times (it could have been more, but I had configured only 50 backup
logs). After I restarted the AMQ broker, the subscription was successful.

I will keep observing and update if I find anything new.





Re: ClassCastException while subscribing to a topic

2016-04-26 Thread Shobhana
No Tim, there are no more stack traces. Maybe the implementation of
TopicMessageStore.recoverNextMessages() just throws a new
ClassCastException?





Re: AMQ 5.13.2 : Kaha DB logs cleanup issue

2016-05-17 Thread Shobhana
Thanks Tim, I did not know that the stats are reset when the broker is
restarted. I'll check how to use JConsole to view the current status of
consumers.

Yes, all topics have durable subscribers.

I used the logic mentioned in the example given at
http://activemq.apache.org/why-do-kahadb-log-files-remain-after-cleanup.html
to identify the topic that prevents a log file from being GCed. I was also
confused by the example: I expected "dest:0:J" to be holding messages in
#86, but since the example said "dest:0:I", I assumed the trace log was
meant for "before" and not "after". Maybe this example can be corrected on
the AMQ page.
However, since I was unsure whether the trace log was wrong or the example
was wrong, I cross-checked for any pending messages in both
riderride.chat.209 and passengerride.user.1234567890 and found that both
were 0. Hence I raised this question.





Re: AMQ 5.13.2 : Kaha DB logs cleanup issue

2016-05-19 Thread Shobhana
When checked using the Jolokia REST APIs, I get the same result: the enqueued
message count on these topics shows 0. So I guess the Jolokia REST APIs also
give only stats since the broker was last started.





AMQ 5.13.2 : Kaha DB logs cleanup issue

2016-05-17 Thread Shobhana
We use AMQ to exchange MQTT messages. I observed that the log files in KahaDB
don't get deleted even after many days.

I used the steps mentioned in
http://activemq.apache.org/why-do-kahadb-log-files-remain-after-cleanup.html
to locate which destination contains unacked messages or a slow consumer.
The following is the trace log output:

2016-05-17 18:05:32,077 | TRACE | Last update: 7:1109948, full gc candidates
set: [1, 2, 3, 4, 5, 6, 7] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,078 | TRACE | gc candidates after
producerSequenceIdTrackerLocation:7, [1, 2, 3, 4, 5, 6] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,078 | TRACE | gc candidates after
ackMessageFileMapLocation:7, [1, 2, 3, 4, 5, 6] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,079 | TRACE | gc candidates after tx range:[null, null],
[1, 2, 3, 4, 5, 6] | org.apache.activemq.store.kahadb.MessageDatabase |
ActiveMQ Journal Checkpoint Worker
2016-05-17 18:05:32,079 | TRACE | gc candidates after
dest:1:riderride.chat.200, [1, 3, 4, 5, 6] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,081 | TRACE | gc candidates after
dest:1:riderride.chat.203, [1, 3, 4, 5, 6] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,081 | TRACE | gc candidates after
dest:1:riderride.chat.206, [1, 3, 5, 6] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,083 | TRACE | gc candidates after
dest:1:riderride.chat.209, [1, 3, 5, 6] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,083 | TRACE | gc candidates after
dest:1:passengerride.user.1234567890, [1, 3, 5] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,083 | TRACE | gc candidates after
dest:1:passengerride.user.1234567891, [1, 3, 5] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,085 | TRACE | gc candidates after
dest:1:passengerride.status.160, [1, 3, 5] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,089 | TRACE | gc candidates after
dest:1:riderride.chat.218, [1, 3, 5] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,090 | TRACE | gc candidates after
dest:1:invitationStatus.1234567891, [1, 3, 5] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,090 | TRACE | gc candidates after
dest:1:chat.9206705762, [3, 5] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,090 | TRACE | gc candidates after
dest:1:invitationStatus.1234567890, [3] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker
2016-05-17 18:05:32,091 | TRACE | gc candidates after
dest:1:riderride.chat.220, [3] |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
Checkpoint Worker

As per the description given in the link mentioned above, there must be some
unacked messages or a slow consumer on topic riderride.chat.209 which is
preventing log file #6 from being GCed. However, when I check in the admin
console to see how many messages were exchanged on this topic, I see that
the enqueued and dequeued messages are 0. So there is no chance of any
unacked messages. There is 1 inactive subscriber though. Will this inactive
subscriber prevent the GCing of this log file?

The same issue is observed for all the other log files too. Any idea what's
going wrong here?





Re: AMQ 5.13.2 : Kaha DB logs cleanup issue

2016-05-18 Thread Shobhana
Sure, I'll check when I have some time.

Btw, I tried to use JConsole to view the current status of consumers. I
tried hard to get JConsole to connect to my broker, which runs on an EC2
instance, but couldn't succeed. Can I use the Jolokia REST APIs instead? Do
these APIs give the same results as a JMX viewer (cumulative stats), or only
stats from the time the broker was last restarted?

Are you talking about AMQ-6203? Currently we use
ActiveMQ 5.13.2 and still see this problem. If the enhancement you are
talking about is AMQ-6203, we will upgrade to version 5.13.3, in which it
is implemented.





Failed to remove inactive destination

2016-05-15 Thread Shobhana
In our production environment, we use the AMQ broker with KahaDB persistence.
We have set the following in the broker configuration:

schedulePeriodForDestinationPurge="60"
offlineDurableSubscriberTimeout="259200000"
offlineDurableSubscriberTaskSchedule="3600000"


Sometimes, I have observed that when the inactivity monitor runs, it's not
able to purge an inactive destination. The following is a log snippet:

2016-05-15 22:29:27,742 | INFO  | profile. Inactive for longer than
3600000 ms - removing ... | org.apache.activemq.broker.region.Topic |
ActiveMQ Broker[PrimaryBroker] Scheduler
2016-05-15 22:29:27,744 | ERROR | Failed to remove inactive destination
Topic: destination=profile., subscriptions=0 |
org.apache.activemq.broker.region.RegionBroker | ActiveMQ
Broker[PrimaryBroker] Scheduler
javax.jms.JMSException: Destination still has an active subscription:
topic://profile.
at
org.apache.activemq.broker.region.AbstractRegion.removeDestination(AbstractRegion.java:270)[activemq-broker-5.13.2.jar:5.13.2]
at
org.apache.activemq.broker.region.RegionBroker.removeDestination(RegionBroker.java:363)[activemq-broker-5.13.2.jar:5.13.2]
at
org.apache.activemq.broker.BrokerFilter.removeDestination(BrokerFilter.java:178)[activemq-broker-5.13.2.jar:5.13.2]
at
org.apache.activemq.broker.BrokerFilter.removeDestination(BrokerFilter.java:178)[activemq-broker-5.13.2.jar:5.13.2]
at
org.apache.activemq.broker.MutableBrokerFilter.removeDestination(MutableBrokerFilter.java:183)[activemq-broker-5.13.2.jar:5.13.2]
at
org.apache.activemq.broker.region.RegionBroker.purgeInactiveDestinations(RegionBroker.java:918)[activemq-broker-5.13.2.jar:5.13.2]
at
org.apache.activemq.broker.region.RegionBroker$1.run(RegionBroker.java:118)[activemq-broker-5.13.2.jar:5.13.2]
at
org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33)[activemq-client-5.13.2.jar:5.13.2]
at java.util.TimerThread.mainLoop(Timer.java:555)[:1.7.0_95]
at java.util.TimerThread.run(Timer.java:505)[:1.7.0_95]

One part of the log message says "subscriptions=0", but another
part says "Destination still has an active subscription".

Why this inconsistency?







ClassCastException while subscribing to a topic

2016-04-18 Thread Shobhana
]
at
org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)[activemq-client-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.mqtt.MQTTCodec$1.onFrame(MQTTCodec.java:65)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.mqtt.MQTTCodec.processCommand(MQTTCodec.java:90)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.mqtt.MQTTCodec.access$400(MQTTCodec.java:26)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.mqtt.MQTTCodec$4.parse(MQTTCodec.java:213)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.mqtt.MQTTCodec$3.parse(MQTTCodec.java:179)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.mqtt.MQTTCodec$2.parse(MQTTCodec.java:138)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.mqtt.MQTTCodec.parse(MQTTCodec.java:76)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.mqtt.MQTTNIOTransport.processBuffer(MQTTNIOTransport.java:132)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.mqtt.MQTTNIOTransport.serviceRead(MQTTNIOTransport.java:120)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.mqtt.MQTTNIOTransport.access$000(MQTTNIOTransport.java:43)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.mqtt.MQTTNIOTransport$1.onSelect(MQTTNIOTransport.java:72)[activemq-mqtt-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:98)[activemq-client-5.13.1.jar:5.13.1]
at
org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:118)[activemq-client-5.13.1.jar:5.13.1]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_95]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_95]
at java.lang.Thread.run(Thread.java:745)[:1.7.0_95]

a) What is the reason behind this error? Under what conditions is this
triggered?
b) Why is this surfacing now after AMQ has run fine for 3+ weeks?
c) Is this fixed in version 5.13.2?

Any inputs will be appreciated.

Thanks,
Shobhana





KahaDB journal logs cleanup issue

2017-02-19 Thread Shobhana
I have read
http://activemq.apache.org/why-do-kahadb-log-files-remain-after-cleanup.html
and I broadly understand some of the reasons why some journal logs may not
get deleted:
(a) It contains a pending message for a destination or durable topic
subscription
(b) It contains an ack for a message which is in an in-use data file - the
ack cannot be removed as a recovery would then mark the message for
redelivery
(c) The journal references a pending transaction
(d) It is a journal file, and there may be a pending write to it

Is the above list a complete list of reasons why journal files may not get
deleted? Or are there any more possible reasons?

Since I have no control over offline subscribers (these are apps used by end
users), I try to overcome the above scenarios with certain configurations.

To avoid the issues due to (a), I have enabled the following configurations
in my broker XML.

To ensure offline durable subscribers don't cause these log files to pile up,
I have set the timeout for offline durable subscribers to 24 hours:

<broker xmlns="http://activemq.apache.org/schema/core" useJmx="false"
brokerName="PrimaryBroker" deleteAllMessagesOnStartup="false"
advisorySupport="false" schedulePeriodForDestinationPurge="60"
offlineDurableSubscriberTimeout="86400000"
offlineDurableSubscriberTaskSchedule="3600000"
dataDirectory="${activemq.data}">

I have also set message expiry to 12 hours:
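The plugin XML itself was stripped by the archive; a plausible reconstruction, assuming the standard timestamping plugin (43200000 ms = 12 hours):

    <plugins>
      <!-- reconstructed sketch; attribute choice assumed -->
      <timeStampingBrokerPlugin zeroExpirationOverride="43200000" ttlCeiling="43200000"/>
    </plugins>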






I think the above configurations only help to overcome scenario (a). How do
I overcome scenarios (b), (c) and (d)? Are there any configurations to:
a) delete old ack messages?
b) time out pending transactions?

In the last 3 days' run, I see that there are some journal logs which
were created early in the morning of 17-Feb. Why are these files still not
getting deleted even after more than 72 hours? What could be the probable
reasons? Since this is a production environment, I cannot enable debug logs
to see which destination is holding up which journal file. Any help will be
greatly appreciated.

P.S.: We use AMQ 5.14.1 and mostly exchange MQTT messages (thousands of
topics are created on the fly, and both persistent and non-persistent
messages are exchanged over these topics), plus a few JMS messages to a queue.

TIA,
Regards,
Shobhana





Sometimes, the messages are not delivered immediately to durable subscribers

2016-10-05 Thread Shobhana
We are using AMQ version 5.14.0 and have observed that sometimes the
messages don't get delivered from topics to active durable subscribers
(an Android app connected via MQTT). I observed the following log entry when
this happened on our production server:

Transport Connection to: tcp://x.y.z.a:2686 failed: java.io.IOException:
Connection timed out |
org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ NIO
Worker

Unfortunately, since this is a production server, I can't stop it to
change the log level so that we can get more information about this. We have
disabled JMX in production, so I can't use the admin console to check what
the problem could be.

The durable subscriber is configured with a keep-alive of 2 minutes and, on
the AMQ broker, the mqtt+nio transport connector is configured with
wireFormat.maxInactivityDuration=180000. Essentially, the MQTT connection to
the transport connector should never become inactive.
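For reference, that option rides on the connector URI; a sketch assuming a typical mqtt+nio connector, with the bind address being illustrative:

    <!-- 180000 ms = 3 minutes, comfortably above a 2-minute keep-alive -->
    <transportConnector name="mqtt+nio"
        uri="mqtt+nio://0.0.0.0:1883?wireFormat.maxInactivityDuration=180000"/>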

If the connection is broken for some reason, the connectionLost() method of
the durable subscriber should be invoked, but logs indicate that the
connection is still fine.

If I re-open the app (which results in re-establishing the connection), the
durable subscribers get the message immediately.

Any idea why this could be happening?





Re: AMQ 5.13.2 : Kaha DB logs cleanup issue

2016-10-16 Thread Shobhana
Hi Tim,

I tried 5.13.3 and all further versions up to 5.14.0, but this problem is
still not gone. I still see a lot of KahaDB log files which consume huge
amounts of disk space, and eventually the broker stops functioning!

I made some more changes to check if I could get rid of this problem :
a) I enabled the following plug-in to set an expiry of 1 day for every message
(we use only MQTT messages):
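The plugin XML was stripped by the archive; presumably the timestamping plugin, roughly (86400000 ms = 1 day):

    <plugins>
      <timeStampingBrokerPlugin zeroExpirationOverride="86400000" ttlCeiling="86400000"/>
    </plugins>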

  



b) I configured the DLQ to drop all expired messages for all topics and queues:
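Again the XML was stripped; a plausible reconstruction using the shared dead-letter strategy, which discards expired messages instead of routing them to a DLQ:

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <!-- processExpired="false" drops expired messages rather than DLQ-ing them -->
          <policyEntry topic=">">
            <deadLetterStrategy>
              <sharedDeadLetterStrategy processExpired="false"/>
            </deadLetterStrategy>
          </policyEntry>
          <policyEntry queue=">">
            <deadLetterStrategy>
              <sharedDeadLetterStrategy processExpired="false"/>
            </deadLetterStrategy>
          </policyEntry>
        </policyEntries>
      </policyMap>
    </destinationPolicy>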

  




c) I enabled offline durable subscribers to time out after 1 day:
offlineDurableSubscriberTimeout="86400000"
offlineDurableSubscriberTaskSchedule="3600000"

After these changes, I could see that a few log files got deleted, but the
majority of the log files remain in the KahaDB folder and consume a lot of
disk space.

I observed this on our production server, where I cannot enable trace logging
or JMX. Is there any other way to identify what is causing this issue?

TIA,
Shobhana








Re: KAHADB clean up old log files

2016-10-16 Thread Shobhana
Sorry about adding to somebody else's thread; I am posting here since I see
the same issue on our production server.

We use version 5.14.0, and I have already configured the DLQ to discard all
expired messages, set an expiry of 1 day for all (MQTT) messages, and set a
timeout for offline durable consumers, using the following configurations:

a) I enabled the following plug-in to set an expiry of 1 day for every message
(we use only MQTT messages):

  



b) I configured the DLQ to drop all expired messages for all topics and queues:

  




c) I enabled offline durable subscribers to time out after 1 day:
offlineDurableSubscriberTimeout="86400000"
offlineDurableSubscriberTaskSchedule="3600000"

After these changes, I could see that a few log files got deleted, but the
majority of the log files remain in the KahaDB folder and consume a lot of
disk space.

I observed this on our production server, where I cannot enable trace logging
or JMX. Is there any other way to identify what is causing this issue?

TIA,
Shobhana





ActiveMQ broker becomes unresponsive after sometime

2017-04-27 Thread Shobhana
We use ActiveMQ 5.14.1 with KahaDB for persistence.
For the last 2 days, we have observed that the broker becomes unresponsive
after running for around 4 hours! None of the operations work after this,
including establishing new connections, publishing messages to topics,
subscribing to topics, unsubscribing, etc.

We have disabled JMX for performance reasons, so I cannot check the
status/health of the broker via JMX. I tried to take a thread dump to see
what's happening, but it fails with the message "Unable to open socket file:
target process not responding or HotSpot VM not loaded"!

I get a similar error when I try to take a heap dump! But I can see that the
broker process is running, using ps -ef | grep java.

I tried to take the thread dump forcefully using the -F option, but this also
fails, with "java.lang.RuntimeException: Unable to deduce type of thread from
address 0x7f9288012800 (expected type JavaThread, CompilerThread,
ServiceThread, JvmtiAgentThread, or SurrogateLockerThread)".

A forceful heap dump fails with "Expecting GenCollectedHeap, G1CollectedHeap,
or ParallelScavengeHeap, but got sun.jvm.hotspot.gc_interface.CollectedHeap".

We have just one broker, running on an AWS EC2 Ubuntu instance. The broker is
started with an Xmx of 12 GB. Our server and Android applications together
create thousands of topics and exchange MQTT messages (both persistent and
non-persistent). Within 4 hours, around 20 GB of journal files were created
in the last run before the broker became unresponsive! The only way to
overcome this problem is to stop the broker, delete all files in KahaDB and
restart the broker!

Any hints to what could be going wrong is highly appreciated!

The broker configuration is given below for reference:

<broker xmlns="http://activemq.apache.org/schema/core" useJmx="false"
brokerName="PrimaryBroker" deleteAllMessagesOnStartup="false"
advisorySupport="false" schedulePeriodForDestinationPurge="60"
offlineDurableSubscriberTimeout="54000000"
offlineDurableSubscriberTaskSchedule="3600000"
dataDirectory="${activemq.data}">

[the rest of the broker configuration was stripped by the mailing-list archive]

TIA,
Shobhana





Re: ActiveMQ broker becomes unresponsive after sometime

2017-05-18 Thread Shobhana
I observed closely for the last 3 days, but couldn't identify anything wrong
with CPU, memory, network I/O, disk I/O, etc.:
a) CPU usage on this EC2 instance (which has 8 vCPUs) has never crossed 35%
b) Memory usage varies between 1 GB and 18 GB (the total available is 32 GB,
and the Xmx assigned to this broker process is 26 GB)
c) Thread dumps don't show any blocked threads
d) Logs (enabled at INFO level) don't show any errors, except for an
occasional "Failed to remove inactive destination. Destination still has an
active subscription". Is there any log message that indicates producer flow
control has kicked in?

In this morning's run, there was 1 full GC (we use the CMS GC) before the
issue popped up and another full GC just 1 second after the issue. The first
full GC took about 8.17 secs. Does this indicate any trouble?

I enabled JMX to check the number of messages pending. The JMX console just
shows message statistics for each destination; however, in my setup there
are thousands of topics created and thousands of durable subscribers. The
JMX console couldn't even load all of them, so I couldn't get how many
messages were pending. Is there any other way to get the total number of
messages pending delivery?

Is our usage of ActiveMQ following any known anti-patterns? We create
thousands of connections and hundreds of thousands of topics and durable
subscribers to exchange MQTT messages. Is this usage not recommended?






Re: ActiveMQ broker becomes unresponsive after sometime

2017-05-18 Thread Shobhana
From the last 3 days' GC logs, I observe something consistent:
when a full GC happens, which takes about 8-odd seconds, 15 minutes later
our servers lose their connection to the broker, and all further attempts to
reconnect fail with a "Timeout" exception. New connection attempts also fail
with the same exception!

Why is the broker unable to recover after a full GC?






Re: ActiveMQ broker becomes unresponsive after sometime

2017-05-19 Thread Shobhana
Tim, a full GC takes 8 seconds, not 8 minutes.

Also, after a full GC, a large amount of memory is reclaimed (13 G to <2 G):

2017-05-17T14:01:46.179+0530: 34205.488: [Full
GC2017-05-17T14:01:46.179+0530: 34205.488: [CMS:
13039360K->1895795K(26214400K), 8.5260340 secs]
13578056K->1895795K(27158144K), [CMS Perm : 33865K->33826K(131072K)],
8.5261390 secs] [Times: user=8.87 sys=0.00, real=8.52 secs] 

Considering so much memory is freed up, do you think fragmentation would
still be a cause for concern? I checked the sizes of a few objects in the
histo report but couldn't find any object greater than 1.5 KB.

I haven't used JConsole, but from the GC logs I see just one or two full GCs
before the broker becomes unresponsive.

I'll also try switching to G1GC and see if it makes any difference.





Is creating thousands of connections to a single AMQ broker node and keeping them open an anti-pattern?

2017-05-19 Thread Shobhana
We use a single AMQ broker node (version 5.14.1) to exchange MQTT
messages between our server and apps running on mobile devices (both Android
and iOS).
The app establishes a connection with the AMQ broker using the Eclipse Paho
client lib. To be able to receive messages as soon as they are published, the
app keeps this connection open all the time. Even when the app is closed, a
background service gets started (in the case of Android) which establishes a
connection with the broker.
In effect, if there are 100000 users using the app, there would be 100000
connections to the AMQ broker. Is there any known issue in using the AMQ
broker this way?





Re: ActiveMQ broker becomes unresponsive after sometime

2017-05-18 Thread Shobhana
Is "Fresh KahaDB" equivalent to deleting the entire contents of KahaDB
folder? If yes, we do this every early morning to prevent the issue, but it
has never helped us!





Re: ActiveMQ broker becomes unresponsive after sometime

2017-05-19 Thread Shobhana
Haha, that's okay :-)

I have never used JVisualVM; I will figure out how to use this tool and
report my observations. Thanks for all the inputs so far!





Re: Is creating thousands of connections to a single AMQ broker node and keeping them open an anti-pattern?

2017-05-19 Thread Shobhana
@clebertsuconic, ActiveMQ also supports NIO, and we have already configured
the broker to use it. How is this different from what Artemis supports?





Re: Is creating thousands of connections to a single AMQ broker node and keeping them open an anti-pattern?

2017-05-19 Thread Shobhana
Tim, each client process will open just one connection.
There can be hundreds of thousands of different client processes (as many as
the number of users using our app) connected to the broker at the same time.

In another thread
(http://activemq.2283324.n4.nabble.com/ActiveMQ-broker-becomes-unresponsive-after-sometime-td4725278.html#a4726390)
I had mentioned that the broker becomes unresponsive after some time; I
wanted to check whether our usage itself was wrong, hence this thread.





Messages stuck in broker

2017-11-13 Thread Shobhana
Hi,

I have a network of brokers with just 2 brokers in the network. The relevant
configurations are shown in the attached files:

On Node-1 :
activemq-PrimaryBorker.xml
<http://activemq.2283324.n4.nabble.com/file/t377415/activemq-PrimaryBorker.xml> 
 

On Node-2 :
activemq-SecondaryBorker.xml
<http://activemq.2283324.n4.nabble.com/file/t377415/activemq-SecondaryBorker.xml>
  

When a topic subscriber is connected to Node-1 and a publisher is connected
to Node-2, I sometimes see that messages are stuck in the broker and not
delivered to the subscriber. Statistics on Node-1 show that the consumer is
connected:
{"timestamp":1510559053,"status":200,"request":{"mbean":"org.apache.activemq:brokerName=PrimaryBroker,destinationName=topicname,destinationType=Topic,type=Broker","attribute":"ConsumerCount","type":"read"},"value":1}

I can also see some messages are enqueued :
{"timestamp":1510559073,"status":200,"request":{"mbean":"org.apache.activemq:brokerName=PrimaryBroker,destinationName=topicname,destinationType=Topic,type=Broker","attribute":"EnqueueCount","type":"read"},"value":12}

However, dequeue count is 0 :
{"timestamp":1510559078,"status":200,"request":{"mbean":"org.apache.activemq:brokerName=PrimaryBroker,destinationName=topicname,destinationType=Topic,type=Broker","attribute":"DequeueCount","type":"read"},"value":0}

I had used the same configuration in my test environment before rolling into
production, and in my test env my subscriber would always receive the
messages sent to either of the brokers. The main difference between my test
env and the production env is that the production env has a high number of
connections (several tens of thousands) and topics (again, several tens of
thousands)!

Any hints on why messages are not delivered? How can I analyse further?

Thanks & Regards,
Shobhana





Re: Messages stuck in broker

2017-11-13 Thread Shobhana
I see a bunch of these messages in the logs of Node-2 :

2017-11-13 15:30:06,506 | INFO  | Establishing network connection from
vm://SecondaryBroker?create=false=false to tcp://x.x.x.x:61616 |
org.apache.activemq.network.DiscoveryNetworkConnector | ActiveMQ Task-4
2017-11-13 15:37:49,129 | INFO  | ActiveMQ.Advisory.MasterBroker Inactive
for longer than 1800000 ms - removing ... |
org.apache.activemq.broker.region.Topic | ActiveMQ Broker[SecondaryBroker]
Scheduler
2017-11-13 15:38:37,095 | WARN  | Network connection between
vm://SecondaryBroker#16 and tcp://x.x.x.x/x.x.x.x:61616@46009 shutdown due
to a remote error: java.util.concurrent.TimeoutException |
org.apache.activemq.network.DemandForwardingBridgeSupport |
triggerStartAsyncNetworkBridgeCreation:
remoteBroker=tcp://x.x.x.x/x.x.x.x:61616@46009, localBroker=
vm://SecondaryBroker#16
2017-11-13 15:38:37,101 | INFO  | Connector vm://SecondaryBroker stopped |
org.apache.activemq.broker.TransportConnector | ActiveMQ
BrokerService[SecondaryBroker] Task-21
2017-11-13 15:38:37,102 | INFO  | SecondaryBroker bridge to Unknown stopped
| org.apache.activemq.network.DemandForwardingBridgeSupport | ActiveMQ
BrokerService[SecondaryBroker] Task-21
2017-11-13 15:38:38,101 | INFO  | Establishing network connection from
vm://SecondaryBroker?create=false=false to tcp://x.x.x.x:61616 |
org.apache.activemq.network.DiscoveryNetworkConnector | ActiveMQ Task-5
2017-11-13 15:38:38,102 | INFO  | Connector vm://SecondaryBroker started |
org.apache.activemq.broker.TransportConnector | ActiveMQ Task-5
2017-11-13 15:39:08,169 | WARN  | Network connection between
vm://SecondaryBroker#20 and tcp://x.x.x.x/x.x.x.x:61616@46010 shutdown due
to a remote error: java.util.concurrent.TimeoutException |
org.apache.activemq.network.DemandForwardingBridgeSupport |
triggerStartAsyncNetworkBridgeCreation:
remoteBroker=tcp://x.x.x.x/x.x.x.x:61616@46010, localBroker=
vm://SecondaryBroker#20
2017-11-13 15:39:08,171 | INFO  | Connector vm://SecondaryBroker stopped |
org.apache.activemq.broker.TransportConnector | ActiveMQ
BrokerService[SecondaryBroker] Task-25
2017-11-13 15:39:08,171 | INFO  | SecondaryBroker bridge to Unknown stopped
| org.apache.activemq.network.DemandForwardingBridgeSupport | ActiveMQ
BrokerService[SecondaryBroker] Task-25

I could only understand the second statement in the logs above. How do I
indicate to the broker that the network connector should not be timed out?
Can I increase the maxInactivityDuration of the transport connector to a
large value on both brokers?
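For reference, maxInactivityDuration is a wireFormat option on the connection URI, and 0 disables the inactivity monitor entirely; a sketch against a static network connector, with the address being illustrative:

    <!-- 0 = never consider the bridge connection inactive -->
    <networkConnector name="bridge"
        uri="static:(tcp://x.x.x.x:61616?wireFormat.maxInactivityDuration=0)"/>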

What is the reason for the other log entries? I couldn't see anything
relevant in the logs of Node-1 during this time.





Re: Disable producer flow control in ActiveMQ v5.14.3

2017-11-06 Thread Shobhana
Hi Abhishek,

Did you figure out why flow control got triggered even though flow control
was disabled?





Some questions about ActiveMQ JMX MBeans

2017-11-07 Thread Shobhana
I want to monitor my broker using JMX MBeans; useJmx is set to true. I use
AMQ v5.14.5. I have a few questions:

a) How can I get the current connection count? I see TotalConnectionsCount,
but I think this gives a count of all connections since the broker was
started, even if a connection is no longer active.
b) Why is TotalDequeueCount higher than TotalEnqueueCount?
c) Why is TotalMessageCount always 0? MemoryPercentUsage and
StorePercentUsage are also always 0.
d) Can I use a wildcard entry for destinationName to get statistics for all
the destinations I am interested in? We have tens of thousands of
destinations, and hence querying each destination individually is
unreasonable.






Re: MQTT Subscriber gets disconnected frequently when there are large number of MQTT clients connected

2017-10-29 Thread Shobhana
Hi Tim,

The log content quoted in my post was from the broker log.

I'll try to reproduce this in a test environment. Meanwhile, if you can
think of any common (generic) reasons why a broker may become unresponsive,
please share.

Thanks,
Shobhana


