Re: Configuring vm with non-vm on the same broker

2016-04-29 Thread Tim Bain
The excludedDestinations attribute on the networkConnector sounds like what
you want.  http://activemq.apache.org/networks-of-brokers.html
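
A minimal sketch of what that looks like (the remote URI and queue name here
are placeholders, not values from this thread):

<networkConnectors>
  <networkConnector uri="static:(tcp://remote-broker:61616)">
    <excludedDestinations>
      <queue physicalName="local.only.queue"/>
    </excludedDestinations>
  </networkConnector>
</networkConnectors>

Destinations matching excludedDestinations are never forwarded over that
network connector.
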
On Apr 29, 2016 1:49 PM, "Allan Wax"  wrote:

> Thanks.  Would you provide some pointers or documentation references that
> tell me how to do it.
>
>
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/Configuring-vm-with-non-vm-on-the-same-broker-tp4711350p4711395.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>


Re: Artemis slave goes down after master comes back up

2016-04-29 Thread abhijith
Further looking into it, I found that <allow-failback>true</allow-failback>
is causing it.  I thought allow-failback would relinquish control, not shut
itself off.
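
For reference, allow-failback lives under the ha-policy section of broker.xml;
a minimal replicated-backup sketch showing just the relevant element:

<ha-policy>
   <replication>
      <slave>
         <allow-failback>true</allow-failback>
      </slave>
   </replication>
</ha-policy>

With it set to true the backup shuts itself down once the restarted master has
taken the data back, which matches the behavior described in this thread.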



--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Artemis-slave-goes-down-after-master-comes-back-up-tp4711403p4711404.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


Artemis slave goes down after master comes back up

2016-04-29 Thread abhijith
Hi,

I am seeing a behavior in Artemis HA which I can't really explain why. 

I ran the example successfully provided here: 
https://github.com/apache/activemq-artemis/tree/master/examples/features/ha/replicated-failback-static
 

Then brought up both master and slave manually, which are present under the
target/server0 and target/server1 folders respectively.  Both of them came up
successfully and I was able to send/receive messages.

I brought server0 (master) down and server1 (slave) took over servicing as
expected.  Then I got server0 (master) back up, which made server1 (slave) shut
down after syncing information to server0 (master).

I am not sure if this is expected behavior.  I was expecting the slave to go
back to slave mode rather than shut itself down. Is there any way this behavior
can be changed?

Thanks
Abhi



--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Artemis-slave-goes-down-after-master-comes-back-up-tp4711403.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


Re: Configuring vm with non-vm on the same broker

2016-04-29 Thread Allan Wax
Thanks.  Would you provide some pointers or documentation references that
tell me how to do it.



--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Configuring-vm-with-non-vm-on-the-same-broker-tp4711350p4711395.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


Re: Artemis JMX not showing queues

2016-04-29 Thread abhijith
Thanks for the prompt response.  Will watch on the Jira.



--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Artemis-JMX-not-showing-queues-tp4711375p4711396.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


Re: Artemis JMX not showing queues

2016-04-29 Thread Justin Bertram
To be clear, calling 
org.apache.activemq.artemis.api.jms.ActiveMQJMSClient#createQueue simply 
creates a client-side representation of a JMS Queue (as the JavaDoc 
indicates).  Nothing happens on the server when that method is invoked.

I opened https://issues.apache.org/jira/browse/ARTEMIS-510 to deal with this 
specific issue.  You can find the technical details there.  In short, when the 
producer is created the server-side core queue is created but the JMS related 
MBean is not.  This will be fixed shortly.


Justin

- Original Message -
From: "abhijith" 
To: users@activemq.apache.org
Sent: Friday, April 29, 2016 10:05:46 AM
Subject: Re: Artemis JMX not showing queues

First I am creating a connection factory using 

ActiveMQConnectionFactory cf =
ActiveMQJMSClient.createConnectionFactoryWithHA(JMSFactoryType.CF, server1,
server2);

Then I am creating client representation of queue

Queue queue = ActiveMQJMSClient.createQueue("x");

after that 

Connection conn = cf.createConnection();
Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
TextMessage message = session.createTextMessage(messageToSend);
MessageProducer producer = session.createProducer(queue);
producer.send(message);

I think the queue got created because address-settings has
auto-create-jms-queues set to true by default.

Please let me know if you need any further info

Thanks
Abhi




--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Artemis-JMX-not-showing-queues-tp4711375p4711387.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


Re: Artemis create topic programatically

2016-04-29 Thread Justin Bertram
Automatically creating JMS server-side topics is not currently supported.  See 
https://issues.apache.org/jira/browse/ARTEMIS-322 for more details.
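
Until then the topic can be defined up front on the server; a sketch using the
jms section of a 1.x broker.xml (topic name taken from the code below):

<jms xmlns="urn:activemq:jms">
   <topic name="TestTopic"/>
</jms>

With the topic pre-defined, session.createTopic("TestTopic") on the client
should resolve against it rather than failing.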


Justin

- Original Message -
From: "abhijith" 
To: users@activemq.apache.org
Sent: Friday, April 29, 2016 10:13:32 AM
Subject: Artemis create topic programatically

Hi,

I am trying to create a topic programmatically.  Please find below my code

@Test
   public void testCreateTopic() throws Exception {
ConnectionFactory cf =
ActiveMQJMSClient.createConnectionFactoryWithHA(JMSFactoryType.CF, server1,
server2);
Connection conn = cf.createConnection();

Session session = conn.createSession(false,
Session.AUTO_ACKNOWLEDGE);

Topic topic = session.createTopic("TestTopic");

String topicName = topic.getTopicName();

   
assertFalse(topicName.startsWith(ActiveMQDestination.JMS_TOPIC_ADDRESS_PREFIX));

Topic replyTopic = session.createTopic(topicName);

MessageConsumer consumer = session.createConsumer(replyTopic);

conn.start();

MessageProducer producer = session.createProducer(replyTopic);

producer.send(session.createMessage());

assertNotNull(consumer.receive(1));
}

I also have the right permissions 










But I keep getting the error 

"javax.jms.JMSException: There is no topic with name TestTopic".  Similar
code works for queues.  Please let me know if I am doing something wrong.

Thanks
Abhi





--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Artemis-create-topic-programatically-tp4711388.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


Artemis create topic programatically

2016-04-29 Thread abhijith
Hi,

I am trying to create a topic programmatically.  Please find below my code

@Test
   public void testCreateTopic() throws Exception {
ConnectionFactory cf =
ActiveMQJMSClient.createConnectionFactoryWithHA(JMSFactoryType.CF, server1,
server2);
Connection conn = cf.createConnection();

Session session = conn.createSession(false,
Session.AUTO_ACKNOWLEDGE);

Topic topic = session.createTopic("TestTopic");

String topicName = topic.getTopicName();

   
assertFalse(topicName.startsWith(ActiveMQDestination.JMS_TOPIC_ADDRESS_PREFIX));

Topic replyTopic = session.createTopic(topicName);

MessageConsumer consumer = session.createConsumer(replyTopic);

conn.start();

MessageProducer producer = session.createProducer(replyTopic);

producer.send(session.createMessage());

assertNotNull(consumer.receive(1));
}

I also have the right permissions 
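
For reference, a broker.xml security-settings block granting the permissions
relevant to creating and using destinations looks roughly like this (the match
and role name are placeholders):

<security-settings>
   <security-setting match="#">
      <permission type="createDurableQueue" roles="myrole"/>
      <permission type="deleteDurableQueue" roles="myrole"/>
      <permission type="createNonDurableQueue" roles="myrole"/>
      <permission type="deleteNonDurableQueue" roles="myrole"/>
      <permission type="send" roles="myrole"/>
      <permission type="consume" roles="myrole"/>
   </security-setting>
</security-settings>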










But I keep getting the error 

"javax.jms.JMSException: There is no topic with name TestTopic".  Similar
code works for queues.  Please let me know if I am doing something wrong.

Thanks
Abhi





--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Artemis-create-topic-programatically-tp4711388.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


Re: Artemis JMX not showing queues

2016-04-29 Thread abhijith
First I am creating a connection factory using 

ActiveMQConnectionFactory cf =
ActiveMQJMSClient.createConnectionFactoryWithHA(JMSFactoryType.CF, server1,
server2);

Then I am creating client representation of queue

Queue queue = ActiveMQJMSClient.createQueue("x");

after that 

Connection conn = cf.createConnection();
Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
TextMessage message = session.createTextMessage(messageToSend);
MessageProducer producer = session.createProducer(queue);
producer.send(message);

I think the queue got created because address-settings has
auto-create-jms-queues set to true by default.
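
For reference, that default corresponds to an address-setting along these lines
in broker.xml (the match value shown is just the catch-all):

<address-settings>
   <address-setting match="#">
      <auto-create-jms-queues>true</auto-create-jms-queues>
   </address-setting>
</address-settings>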

Please let me know if you need any further info

Thanks
Abhi




--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Artemis-JMX-not-showing-queues-tp4711375p4711387.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


Re: Separate kahadb for queues and dead letter queues?

2016-04-29 Thread Quinn Stevenson
Wildcards aren’t quite that flexible - the docs say “don’t specify any string 
after ‘>’ on your wildcard expression, it will be ignored” 
http://activemq.apache.org/wildcards.html 


For what it’s worth - I got around this issue with DLQs by changing the naming 
pattern - I put DLQ at the front, rather than the end.
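
With that naming convention the split becomes a simple prefix match; a sketch
of the mKahaDB wiring (directory and filter values are illustrative only):

<persistenceAdapter>
  <mKahaDB directory="${activemq.data}/kahadb">
    <filteredPersistenceAdapters>
      <!-- everything named DLQ.* goes to its own journal -->
      <filteredKahaDB queue="DLQ.>">
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- catch-all for the regular destinations -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>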

> On Apr 29, 2016, at 9:25 AM, Christian Schneider  
> wrote:
> 
> I just tested by sending messages into a queue test. Unfortunately the filter 
> ">.DLQ" seems to include "test". So it seems everything after ">" is ignored.
> Is there any way to express that I want all queues that end in DLQ?
> 
> Christian
> 
> On 29.04.2016 17:17, Christian Schneider wrote:
>> I took a stab at creating a config with a separate kahadb for the DLQs:
>> 
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> 
>> Does that make sense?
>> 
>> I am unsure about the ">" wildcard as I am not sure if it can be used at the 
>> start. All examples I found till now only seem to use it at the end.
>> I would be happy about a confirmation that what I do is correct or a better 
>> solution.
>> 
>> Christian
>> 
>> On 29.04.2016 14:39, Christian Schneider wrote:
>>> We have the issue that our journals at a customer are very large. We 
>>> analyzed the messages in the system and found that most of the time
>>> the normal queues stay at a very low number of messages near to 0. The only 
>>> queues that hold messages for longer periods of time are the dead letter 
>>> queues.
>>> 
>>> The journals are much bigger than the dead letter queue contents.
>>> 
>>> My assumption is that most of the journals only contain very few still 
>>> needed messages and the ones they contain are the messages that now are in 
>>> the dead letter queues.
>>> Still the journals can not be cleaned up of course.
>>> 
>>> So what I would like to do is to use one kahadb for the regular queues and 
>>> a different one for the dead letter queues. Does that make sense?
>>> I will try to do this using "Multi(m) kahaDB persistence adapter": See 
>>> http://activemq.apache.org/kahadb.html
>>> 
>>> I also wonder if it would make sense that ActiveMQ cleans up its journals 
>>> regularly. It could simply move all the oldest messages to the newest journal 
>>> to kind of pack it denser.
>>> Would that work?
>>> 
>>> Christian
>>> 
>> 
>> 
> 
> 
> -- 
> Christian Schneider
> http://www.liquid-reality.de
> 
> Open Source Architect
> http://www.talend.com
> 



Re: Separate kahadb for queues and dead letter queues?

2016-04-29 Thread Christian Schneider
I just tested by sending messages into a queue named "test". Unfortunately the 
filter ">.DLQ" seems to include "test". So it seems everything after ">" 
is ignored.

Is there any way to express that I want all queues that end in DLQ?

Christian

On 29.04.2016 17:17, Christian Schneider wrote:

I took a stab at creating a config with a separate kahadb for the DLQs:
















Does that make sense?

I am unsure about the ">" wildcard as I am not sure if it can be used 
at the start. All examples I found till now only seem to use it at the 
end.
I would be happy about a confirmation that what I do is correct or a 
better solution.


Christian

On 29.04.2016 14:39, Christian Schneider wrote:
We have the issue that our journals at a customer are very large. We 
analyzed the messages in the system and found that most of the time
the normal queues stay at a very low number of messages near to 0. 
The only queues that hold messages for longer periods of time are the 
dead letter queues.


The journals are much bigger than the dead letter queue contents.

My assumption is that most of the journals only contain very few 
still needed messages and the ones they contain are the messages that 
now are in the dead letter queues.

Still the journals can not be cleaned up of course.

So what I would like to do is to use one kahadb for the regular 
queues and a different one for the dead letter queues. Does that make 
sense?
I will try to do this using "Multi(m) kahaDB persistence adapter": 
See http://activemq.apache.org/kahadb.html


I also wonder if it would make sense that ActiveMQ cleans up its 
journals regularly. It could simply move all the oldest messages to the 
newest journal to kind of pack it denser.

Would that work?

Christian







--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Testing Master Slave on Shared File System

2016-04-29 Thread Quinn Stevenson
Chris -

Most of the implementations I’ve seen have used NFS, but it needs to be NFSv4 
to get the right locking behavior.

I’ve run this type of setup in a VM environment as well, but we didn’t use a VM 
for the NFS server; the client had an enterprise NFS solution, so we used that.  
Also, I know some SANs will export NFS natively.
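
The broker side is the same either way - both brokers simply point their
persistence adapter at the shared mount (the path below is only an example):

<persistenceAdapter>
  <kahaDB directory="/mnt/nfs4/activemq/kahadb"/>
</persistenceAdapter>

It's the filesystem's locking semantics, not the broker configuration, that
make the master/slave election work.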

> On Apr 29, 2016, at 8:54 AM, Christopher Fogarty 
>  wrote:
> 
> Quinn
> 
> Thanks. I could use NFS, but the master slave shared docs (very Spartan) 
> indicated SAN. If ext4 is not the proper filesystem, what should I use? I 
> could use NFS, but I was concerned over performance. Also, NFS in a VM 
> environment would require a third server?
> 
> Node 1
> Node 2
> NFS server (I would not think it would be prudent to use nfs from one of the 
> two nodes?)
> 
> Chris Fogarty
> 
> VP, System Engineering
> Versiant Corporation
> 3700 Arco Corporate Drive
> Suite 350
> Charlotte, NC 28273
> Office: (704) 831-3905 | Mobile: (704) 763-
> 
> chris.foga...@versiant.com
> 
> -Original Message-
> From: Quinn Stevenson [mailto:qu...@pronoia-solutions.com] 
> Sent: Friday, April 29, 2016 10:45 AM
> To: users@activemq.apache.org
> Subject: Re: Testing Master Slave on Shared File System
> 
> I've done quite a bit of master/slave setups - they've never been an issue as 
> long as I had a filesystem that supported locking.  I've used NFSv4 and GFSv2 
> for these setups.
> 
> If I'm reading this correctly, you've setup a volume on a SAN and mounted it 
> on both systems as an ext4 filesystem.  If that's the case, I think that is 
> your issue - ext4 is not a shared filesystem and it isn't cluster aware.  
> 
> Can you try using NFSv4?
> 
>> On Apr 29, 2016, at 8:37 AM, Matt Pavlovich  wrote:
>> 
>> Are both volumes mounted with ext4? Does EXT4 have support for distributed 
>> lock sharing?  Sounds like one server would mount rw and the other would be 
>> mounted ro and there aren't any shared locks.
>> 
>> I'm not as current on the latest EXT4 features, but do know a cluster-aware 
>> filesystem such as GFSv2 is designed for this type of setup.
>> 
>> -Matt
>> 
>> On 4/28/16 3:14 PM, Christopher Fogarty wrote:
>>> I have the disk a part of its on vggroup and an lv carved out of that with 
>>> ext 4 file system on it. This is mounted on both systems and I am able to 
>>> start active mq fine. But would feel a lot better validating that only one 
>>> of the two nodes actually has a lock. I would love even more to verify that 
>>> both nodes when started are doing what they should, which is one has a 
>>> locked access and the other is in a sort of stand by until the lock is 
>>> released.
>>> 
>>> Hope this makes sense.
>>> 
>>> Chris Fogarty
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Thu, Apr 28, 2016 at 12:37 PM -0700, "Matt Pavlovich" 
>>> mailto:mattr...@gmail.com>> wrote:
>>> 
>>> Chris-
>>> 
>>> What file system are you using to share the mount?  The filesystem 
>>> would need to support distributed locking (many "shareable 
>>> filesystems" don't do this properly.
>>> 
>>> The other approach is to use the shared filesystem for KahaDB and a 
>>> database lease-locker to work around the 
>>> most-shared-filesystems-don't-do-locking-properly problem.
>>> 
>>> -Matt
>>> 
>>> On 4/28/16 12:34 PM, Christopher Fogarty wrote:
 I have set up two servers:
 
 Both CENTOS with a shared SAN disk mounted and active on both nodes.
 
 I have set up ActiveMQ 5.6
 
 I am able to start each with the following configuration
 
 
   
 
 
 Each node can and does start, but how can I test, or what do I look for to 
 make sure that file locking is actually working as described in the 
 http://activemq.apache.org/shared-file-system-master-slave.html document.  
 Before putting this into production, I would feel a lot better knowing 
 that only one of the two nodes is capable of accessing the kahadb.
 
 
 Chris Fogarty
 
 VP, System Engineering
 Versiant Corporation
 3700 Arco Corporate Drive
 Suite 350
 Charlotte, NC 28273
 Office: (704) 831-3905 | Mobile: (704) 763-
 
 chris.foga...@versiant.com
 
 -Original Message-
 From: Christopher Fogarty
 Sent: Wednesday, April 27, 2016 9:02 AM
 To: 'users@activemq.apache.org' 
 Subject: RE: Running ActiveMQ Broker as different username unable to 
 connect via web admin console
 
 What Platform? Do you have a firewall running
 
 Chris Fogarty
 
 VP, System Engineering
 Versiant Corporation
 3700 Arco Corporate Drive
 Suite 350
 Charlotte, NC 28273
 Office: (704) 831-3905 | Mobile: (704) 763-
 
 chris.foga...@versiant.com
 
 -Original Message-
 From: jboss [mailto:jb...@bcidaho.com]
 Sent: Wednesday, April 27, 2016 8:16 AM
 To: users@activemq.apache.org
 Subject: Re: Running ActiveMQ Broker as different username unable to 
 con

Re: Separate kahadb for queues and dead letter queues?

2016-04-29 Thread Christian Schneider

I took a stab at creating a config with a separate kahadb for the DLQs:
















Does that make sense?

I am unsure about the ">" wildcard as I am not sure if it can be used at 
the start. All examples I found till now only seem to use it at the end.
I would be happy about a confirmation that what I do is correct or a 
better solution.


Christian

On 29.04.2016 14:39, Christian Schneider wrote:
We have the issue that our journals at a customer are very large. We 
analyzed the messages in the system and found that most of the time
the normal queues stay at a very low number of messages near to 0. The 
only queues that hold messages for longer periods of time are the dead 
letter queues.


The journals are much bigger than the dead letter queue contents.

My assumption is that most of the journals only contain very few still 
needed messages and the ones they contain are the messages that now 
are in the dead letter queues.

Still the journals can not be cleaned up of course.

So what I would like to do is to use one kahadb for the regular queues 
and a different one for the dead letter queues. Does that make sense?
I will try to do this using "Multi(m) kahaDB persistence adapter": See 
http://activemq.apache.org/kahadb.html


I also wonder if it would make sense that ActiveMQ cleans up its 
journals regularly. It could simply move all the oldest messages to the 
newest journal to kind of pack it denser.

Would that work?

Christian




--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



RE: Testing Master Slave on Shared File System

2016-04-29 Thread Christopher Fogarty
Matt,

I will switch to GFS and let you know. I really appreciate.

Chris Fogarty

VP, System Engineering
Versiant Corporation
3700 Arco Corporate Drive
Suite 350
Charlotte, NC 28273
Office: (704) 831-3905 | Mobile: (704) 763-

chris.foga...@versiant.com

-Original Message-
From: Matt Pavlovich [mailto:mattr...@gmail.com] 
Sent: Friday, April 29, 2016 10:38 AM
To: users@activemq.apache.org
Subject: Re: Testing Master Slave on Shared File System

Are both volumes mounted with ext4? Does EXT4 have support for distributed lock 
sharing?  Sounds like one server would mount rw and the other would be mounted 
ro and there aren't any shared locks.

I'm not as current on the latest EXT4 features, but do know a cluster-aware 
filesystem such as GFSv2 is designed for this type of setup.

-Matt

On 4/28/16 3:14 PM, Christopher Fogarty wrote:
> I have the disk a part of its on vggroup and an lv carved out of that with 
> ext 4 file system on it. This is mounted on both systems and I am able to 
> start active mq fine. But would feel a lot better validating that only one of 
> the two nodes actually has a lock. I would love even more to verify that both 
> nodes when started are doing what they should, which is one has a locked 
> access and the other is in a sort of stand by until the lock is released.
>
> Hope this makes sense.
>
> Chris Fogarty
>
>
>
>
>
> On Thu, Apr 28, 2016 at 12:37 PM -0700, "Matt Pavlovich" 
> mailto:mattr...@gmail.com>> wrote:
>
> Chris-
>
> What file system are you using to share the mount?  The filesystem 
> would need to support distributed locking (many "shareable 
> filesystems" don't do this properly.
>
> The other approach is to use the shared filesystem for KahaDB and a 
> database lease-locker to work around the 
> most-shared-filesystems-don't-do-locking-properly problem.
>
> -Matt
>
> On 4/28/16 12:34 PM, Christopher Fogarty wrote:
>> I have set up two servers:
>>
>> Both CENTOS with a shared SAN disk mounted and active on both nodes.
>>
>> I have set up ActiveMQ 5.6
>>
>> I am able to start each with the following configuration
>>
>> 
>> 
>> 
>>
>> Each node can and does start, but how can I test, or what do I look for to 
>> make sure that file locking is actually working as described in the 
>> http://activemq.apache.org/shared-file-system-master-slave.html document.  
>> Before putting this into production, I would feel a lot better knowing that 
>> only one of the two nodes is capable of accessing the kahadb.
>>
>>
>> Chris Fogarty
>>
>> VP, System Engineering
>> Versiant Corporation
>> 3700 Arco Corporate Drive
>> Suite 350
>> Charlotte, NC 28273
>> Office: (704) 831-3905 | Mobile: (704) 763-
>>
>> chris.foga...@versiant.com
>>
>> -Original Message-
>> From: Christopher Fogarty
>> Sent: Wednesday, April 27, 2016 9:02 AM
>> To: 'users@activemq.apache.org' 
>> Subject: RE: Running ActiveMQ Broker as different username unable to 
>> connect via web admin console
>>
>> What Platform? Do you have a firewall running
>>
>> Chris Fogarty
>>
>> VP, System Engineering
>> Versiant Corporation
>> 3700 Arco Corporate Drive
>> Suite 350
>> Charlotte, NC 28273
>> Office: (704) 831-3905 | Mobile: (704) 763-
>>
>> chris.foga...@versiant.com
>>
>> -Original Message-
>> From: jboss [mailto:jb...@bcidaho.com]
>> Sent: Wednesday, April 27, 2016 8:16 AM
>> To: users@activemq.apache.org
>> Subject: Re: Running ActiveMQ Broker as different username unable to 
>> connect via web admin console
>>
>> The web console does not come up at all.   The error that the Chrome gives is
>> "Connection Refused".  Does not even get to the point of asking for 
>> username/password.
>>
>> Thanks,
>>
>>
>>
>>
>> --
>> View this message in context: 
>> http://activemq.2283324.n4.nabble.com/Running-ActiveMQ-Broker-as-diff
>> erent-username-unable-to-connect-via-web-admin-console-tp4711175p4711
>> 280.html Sent from the ActiveMQ - User mailing list archive at 
>> Nabble.com.
>



Re: Artemis JMX not showing queues

2016-04-29 Thread Justin Bertram
What exactly are you doing to create "different JMS queues from client 
programatically"?


Justin

- Original Message -
From: "abhijith" 
To: users@activemq.apache.org
Sent: Friday, April 29, 2016 8:40:46 AM
Subject: Artemis JMX not showing queues

Hi,

I am trying out artemis and creating different JMS queues from client
programmatically.  Code itself works, but when I log into JMX, I see that a few
queues are listed under "Core/Queue" with jms.queue.# prefix, but they don't
appear under "JMS/Queue".  I don't understand how this could happen.  

Attaching the image showing the same.  Any insight into why this could
happen, would be helpful.  Thanks.


 



--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Artemis-JMX-not-showing-queues-tp4711375.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


RE: Testing Master Slave on Shared File System

2016-04-29 Thread Christopher Fogarty
Quinn

Thanks. I could use NFS, but the master slave shared docs (very Spartan) 
indicated SAN. If ext4 is not the proper filesystem, what should I use? I could 
use NFS, but I was concerned over performance. Also, NFS in a VM environment 
would require a third server?

Node 1
Node 2
NFS server (I would not think it would be prudent to use nfs from one of the 
two nodes?)

Chris Fogarty

VP, System Engineering
Versiant Corporation
3700 Arco Corporate Drive
Suite 350
Charlotte, NC 28273
Office: (704) 831-3905 | Mobile: (704) 763-

chris.foga...@versiant.com

-Original Message-
From: Quinn Stevenson [mailto:qu...@pronoia-solutions.com] 
Sent: Friday, April 29, 2016 10:45 AM
To: users@activemq.apache.org
Subject: Re: Testing Master Slave on Shared File System

I've done quite a bit of master/slave setups - they've never been an issue as 
long as I had a filesystem that supported locking.  I've used NFSv4 and GFSv2 
for these setups.

If I'm reading this correctly, you've setup a volume on a SAN and mounted it on 
both systems as an ext4 filesystem.  If that's the case, I think that is your 
issue - ext4 is not a shared filesystem and it isn't cluster aware.  

Can you try using NFSv4?

> On Apr 29, 2016, at 8:37 AM, Matt Pavlovich  wrote:
> 
> Are both volumes mounted with ext4? Does EXT4 have support for distributed 
> lock sharing?  Sounds like one server would mount rw and the other would be 
> mounted ro and there aren't any shared locks.
> 
> I'm not as current on the latest EXT4 features, but do know a cluster-aware 
> filesystem such as GFSv2 is designed for this type of setup.
> 
> -Matt
> 
> On 4/28/16 3:14 PM, Christopher Fogarty wrote:
>> I have the disk a part of its on vggroup and an lv carved out of that with 
>> ext 4 file system on it. This is mounted on both systems and I am able to 
>> start active mq fine. But would feel a lot better validating that only one 
>> of the two nodes actually has a lock. I would love even more to verify that 
>> both nodes when started are doing what they should, which is one has a 
>> locked access and the other is in a sort of stand by until the lock is 
>> released.
>> 
>> Hope this makes sense.
>> 
>> Chris Fogarty
>> 
>> 
>> 
>> 
>> 
>> On Thu, Apr 28, 2016 at 12:37 PM -0700, "Matt Pavlovich" 
>> mailto:mattr...@gmail.com>> wrote:
>> 
>> Chris-
>> 
>> What file system are you using to share the mount?  The filesystem 
>> would need to support distributed locking (many "shareable 
>> filesystems" don't do this properly.
>> 
>> The other approach is to use the shared filesystem for KahaDB and a 
>> database lease-locker to work around the 
>> most-shared-filesystems-don't-do-locking-properly problem.
>> 
>> -Matt
>> 
>> On 4/28/16 12:34 PM, Christopher Fogarty wrote:
>>> I have set up two servers:
>>> 
>>> Both CENTOS with a shared SAN disk mounted and active on both nodes.
>>> 
>>> I have set up ActiveMQ 5.6
>>> 
>>> I am able to start each with the following configuration
>>> 
>>> 
>>>
>>> 
>>> 
>>> Each node can and does start, but how can I test, or what do I look for to 
>>> make sure that file locking is actually working as described in the 
>>> http://activemq.apache.org/shared-file-system-master-slave.html document.  
>>> Before putting this into production, I would feel a lot better knowing that 
>>> only one of the two nodes is capable of accessing the kahadb.
>>> 
>>> 
>>> Chris Fogarty
>>> 
>>> VP, System Engineering
>>> Versiant Corporation
>>> 3700 Arco Corporate Drive
>>> Suite 350
>>> Charlotte, NC 28273
>>> Office: (704) 831-3905 | Mobile: (704) 763-
>>> 
>>> chris.foga...@versiant.com
>>> 
>>> -Original Message-
>>> From: Christopher Fogarty
>>> Sent: Wednesday, April 27, 2016 9:02 AM
>>> To: 'users@activemq.apache.org' 
>>> Subject: RE: Running ActiveMQ Broker as different username unable to 
>>> connect via web admin console
>>> 
>>> What Platform? Do you have a firewall running
>>> 
>>> Chris Fogarty
>>> 
>>> VP, System Engineering
>>> Versiant Corporation
>>> 3700 Arco Corporate Drive
>>> Suite 350
>>> Charlotte, NC 28273
>>> Office: (704) 831-3905 | Mobile: (704) 763-
>>> 
>>> chris.foga...@versiant.com
>>> 
>>> -Original Message-
>>> From: jboss [mailto:jb...@bcidaho.com]
>>> Sent: Wednesday, April 27, 2016 8:16 AM
>>> To: users@activemq.apache.org
>>> Subject: Re: Running ActiveMQ Broker as different username unable to 
>>> connect via web admin console
>>> 
>>> The web console does not come up at all.   The error that the Chrome gives 
>>> is
>>> "Connection Refused".  Does not even get to the point of asking for 
>>> username/password.
>>> 
>>> Thanks,
>>> 
>>> 
>>> 
>>> 
>>> --
>>> View this message in context: 
>>> http://activemq.2283324.n4.nabble.com/Running-ActiveMQ-Broker-as-dif
>>> ferent-username-unable-to-connect-via-web-admin-console-tp4711175p47
>>> 11280.html Sent from the ActiveMQ - User mailing list archive at 
>>> Nabble.com.
>> 
> 



Artemis JMX not showing queues

2016-04-29 Thread abhijith
Hi,

I am trying out artemis and creating different JMS queues from client
programmatically.  Code itself works, but when I log into JMX, I see that a few
queues are listed under "Core/Queue" with jms.queue.# prefix, but they don't
appear under "JMS/Queue".  I don't understand how this could happen.  

Attaching the image showing the same.  Any insight into why this could
happen, would be helpful.  Thanks.


 



--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Artemis-JMX-not-showing-queues-tp4711375.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


Re: Testing Master Slave on Shared File System

2016-04-29 Thread Quinn Stevenson
I’ve done quite a bit of master/slave setups - they’ve never been an issue as 
long as I had a filesystem that supported locking.  I’ve used NFSv4 and GFSv2 
for these setups.

If I’m reading this correctly, you’ve setup a volume on a SAN and mounted it on 
both systems as an ext4 filesystem.  If that’s the case, I think that is your 
issue - ext4 is not a shared filesystem and it isn’t cluster aware.  

Can you try using NFSv4?

> On Apr 29, 2016, at 8:37 AM, Matt Pavlovich  wrote:
> 
> Are both volumes mounted with ext4? Does EXT4 have support for distributed 
> lock sharing?  Sounds like one server would mount rw and the other would be 
> mounted ro and there aren't any shared locks.
> 
> I'm not as current on the latest EXT4 features, but do know a cluster-aware 
> filesystem such as GFSv2 is designed for this type of setup.
> 
> -Matt
> 
> On 4/28/16 3:14 PM, Christopher Fogarty wrote:
>> I have the disk a part of its on vggroup and an lv carved out of that with 
>> ext 4 file system on it. This is mounted on both systems and I am able to 
>> start active mq fine. But would feel a lot better validating that only one 
>> of the two nodes actually has a lock. I would love even more to verify that 
>> both nodes when started are doing what they should, which is one has a 
>> locked access and the other is in a sort of stand by until the lock is 
>> released.
>> 
>> Hope this makes sense.
>> 
>> Chris Fogarty
>> 
>> 
>> 
>> 
>> 
>> On Thu, Apr 28, 2016 at 12:37 PM -0700, "Matt Pavlovich" 
>> mailto:mattr...@gmail.com>> wrote:
>> 
>> Chris-
>> 
>> What file system are you using to share the mount?  The filesystem would
>> need to support distributed locking (many "shareable filesystems" don't
>> do this properly.
>> 
>> The other approach is to use the shared filesystem for KahaDB and a
>> database lease-locker to work around the
>> most-shared-filesystems-don't-do-locking-properly problem.
>> 
>> -Matt
>> 
>> On 4/28/16 12:34 PM, Christopher Fogarty wrote:
>>> I have set up two servers:
>>> 
>>> Both CENTOS with a shared SAN disk mounted and active on both nodes.
>>> 
>>> I have set up ActiveMQ 5.6
>>> 
>>> I am able to start each with the following configuration
>>> 
>>> 
>>>
>>> 
>>> 
>>> Each node can and does start, but how can I test, or what do I look for to 
>>> make sure that file locking is actually working as described in the 
>>> http://activemq.apache.org/shared-file-system-master-slave.html document.  
>>> Before putting this into production, I would feel a lot better knowing that 
>>> only one of the two nodes is capable of accessing the kahadb.
>>> 
>>> 
>>> Chris Fogarty
>>> 
>>> VP, System Engineering
>>> Versiant Corporation
>>> 3700 Arco Corporate Drive
>>> Suite 350
>>> Charlotte, NC 28273
>>> Office: (704) 831-3905 | Mobile: (704) 763-
>>> 
>>> chris.foga...@versiant.com
>>> 
>>> -Original Message-
>>> From: Christopher Fogarty
>>> Sent: Wednesday, April 27, 2016 9:02 AM
>>> To: 'users@activemq.apache.org' 
>>> Subject: RE: Running ActiveMQ Broker as different username unable to 
>>> connect via web admin console
>>> 
>>> What Platform? Do you have a firewall running
>>> 
>>> Chris Fogarty
>>> 
>>> VP, System Engineering
>>> Versiant Corporation
>>> 3700 Arco Corporate Drive
>>> Suite 350
>>> Charlotte, NC 28273
>>> Office: (704) 831-3905 | Mobile: (704) 763-
>>> 
>>> chris.foga...@versiant.com
>>> 
>>> -Original Message-
>>> From: jboss [mailto:jb...@bcidaho.com]
>>> Sent: Wednesday, April 27, 2016 8:16 AM
>>> To: users@activemq.apache.org
>>> Subject: Re: Running ActiveMQ Broker as different username unable to 
>>> connect via web admin console
>>> 
>>> The web console does not come up at all.   The error that the Chrome gives 
>>> is
>>> "Connection Refused".  Does not even get to the point of asking for 
>>> username/password.
>>> 
>>> Thanks,
>>> 
>>> 
>>> 
>>> 
>>> --
>>> View this message in context: 
>>> http://activemq.2283324.n4.nabble.com/Running-ActiveMQ-Broker-as-different-username-unable-to-connect-via-web-admin-console-tp4711175p4711280.html
>>> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>> 
> 



Re: Testing Master Slave on Shared File System

2016-04-29 Thread Matt Pavlovich
Are both volumes mounted with ext4? Does EXT4 have support for 
distributed lock sharing?  Sounds like one server would mount rw and the 
other would be mounted ro and there aren't any shared locks.


I'm not as current on the latest EXT4 features, but do know a 
cluster-aware filesystem such as GFSv2 is designed for this type of setup.


-Matt
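
If a cluster-aware filesystem isn't an option, the database lease-locker
mentioned further down keeps KahaDB on the share but moves the exclusive lock
into a database. A rough sketch - it assumes a dataSource bean named mysql-ds
defined elsewhere in the XML, and the interval is arbitrary:

<persistenceAdapter>
  <kahaDB directory="/mnt/shared/kahadb">
    <locker>
      <lease-database-locker dataSource="#mysql-ds" lockAcquireSleepInterval="10000"/>
    </locker>
  </kahaDB>
</persistenceAdapter>
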

On 4/28/16 3:14 PM, Christopher Fogarty wrote:

I have the disk as part of its own vggroup and an lv carved out of that with an 
ext4 file system on it. This is mounted on both systems and I am able to start 
ActiveMQ fine. But I would feel a lot better validating that only one of the two 
nodes actually has a lock. I would love even more to verify that both nodes, 
when started, are doing what they should, which is one has locked access and 
the other is in a sort of standby until the lock is released.

Hope this makes sense.

Chris Fogarty





On Thu, Apr 28, 2016 at 12:37 PM -0700, "Matt Pavlovich" 
mailto:mattr...@gmail.com>> wrote:

Chris-

What file system are you using to share the mount?  The filesystem would
need to support distributed locking (many "shareable filesystems" don't
do this properly).

The other approach is to use the shared filesystem for KahaDB and a
database lease-locker to work around the
most-shared-filesystems-don't-do-locking-properly problem.

-Matt

On 4/28/16 12:34 PM, Christopher Fogarty wrote:

I have set up two servers:

Both CENTOS with a shared SAN disk mounted and active on both nodes.

I have set up ActiveMQ 5.6

I am able to start each with the following configuration





Each node can and does start, but how can I test, or what do I look for to make 
sure that file locking is actually working as described in the 
http://activemq.apache.org/shared-file-system-master-slave.html document.  
Before putting this into production, I would feel a lot better knowing that 
only one of the two nodes is capable of accessing the kahadb.


Chris Fogarty

VP, System Engineering
Versiant Corporation
3700 Arco Corporate Drive
Suite 350
Charlotte, NC 28273
Office: (704) 831-3905 | Mobile: (704) 763-

chris.foga...@versiant.com

-Original Message-
From: Christopher Fogarty
Sent: Wednesday, April 27, 2016 9:02 AM
To: 'users@activemq.apache.org' 
Subject: RE: Running ActiveMQ Broker as different username unable to connect 
via web admin console

What Platform? Do you have a firewall running

Chris Fogarty

VP, System Engineering
Versiant Corporation
3700 Arco Corporate Drive
Suite 350
Charlotte, NC 28273
Office: (704) 831-3905 | Mobile: (704) 763-

chris.foga...@versiant.com

-Original Message-
From: jboss [mailto:jb...@bcidaho.com]
Sent: Wednesday, April 27, 2016 8:16 AM
To: users@activemq.apache.org
Subject: Re: Running ActiveMQ Broker as different username unable to connect 
via web admin console

The web console does not come up at all.   The error that the Chrome gives is
"Connection Refused".  Does not even get to the point of asking for 
username/password.

Thanks,




--
View this message in context: 
http://activemq.2283324.n4.nabble.com/Running-ActiveMQ-Broker-as-different-username-unable-to-connect-via-web-admin-console-tp4711175p4711280.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.






Re: Consumer not receiving anything

2016-04-29 Thread Tim Bain
Where is the "there" that it hangs?  Where specifically is the thread hung?

Also, why are you setting your message listener after you start the
connection?

Tim
On Apr 29, 2016 7:51 AM, "opacut"  wrote:

Hello,

I am writing a sort of performance test on AMQ 5.13. The producer is working
perfectly, but the consumer connects and does not receive any messages, just
hangs in there. I have trouble finding the point where it breaks. The
program spawns a number of worker threads that connect to the broker
instance. I will copy the configuration and the entire program. I have found
a similar question but it was unanswered.

I am in need of assistance, thank You in advance.

Oliver Pacut



The critical section I think is this:

@Override
public void run() {
try{
ConnectionFactory connectionFactory = new
ActiveMQConnectionFactory(brokerAddress);
connection = connectionFactory.createConnection();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Topic topic = session.createTopic(topicName);
messageConsumer = session.createConsumer(topic);
} catch (JMSException e) {
e.printStackTrace();
}

MessageListener messageListener = new MessageListener() {
@Override
public void onMessage(Message message) {
msgs++;
}
};

setUp = true;

try{
while(!ready){ Thread.sleep(1);}
} catch (InterruptedException ex) {
System.out.println("You done goofed in da worka.");
}

long startMs;

try {
connection.start();
messageConsumer.setMessageListener(messageListener);

while (msgs == 0){
Thread.sleep(1);
}

if(!first){
first = true;
}
startMs = System.currentTimeMillis();

while ((System.currentTimeMillis() <= (startMs + runfor))) {
Thread.sleep(1);
}


} catch (JMSException e) {
e.printStackTrace();
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
try {
connection.stop();
messageConsumer.close();
session.close();
connection.close();
} catch (JMSException e) {
e.printStackTrace();
}
}

ConsumerPerformance.parent.updateMessages(msgs);

}


The whole configuration activemq.xml:



http://www.springframework.org/schema/beans";
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
  xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core
http://activemq.apache.org/schema/core/activemq-core.xsd";>




file:${activemq.conf}/credentials.properties








http://activemq.apache.org/schema/core";
brokerName="localhost" dataDirectory="${activemq.data}">



  


  

  

  











   



  





















http://www.springframework.org/schema/beans";
class="org.apache.activemq.hooks.SpringContextHook" />










And the consumer code:

/**
 * Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE
 * file distributed with this work for additional information regarding
copyright ownership. The ASF licenses this file
 * to You under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the
 * License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on
 * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
express or implied. See the License for the
 * specific language governing permissions and limitations under the
License.
 */

import net.sourceforge.argparse4j.ArgumentParsers;
import net.sourceforge.argparse4j.inf.ArgumentParser;
import net.sourceforge.argparse4j.inf.ArgumentParserException;
import net.sourceforge.argparse4j.inf.Namespace;
import org.apache.activemq.ActiveMQConnectionFactory;

import javax.jms.*;
import java.util.concurrent.atomic.AtomicLong;

import static net.sourceforge.argparse4j.impl.Arguments.store;

public class ConsumerPerformance {
static ConsumerPerformance parent;
protected static AtomicLong totalMessages = new AtomicLong(0);
protected static boolean setUp, ready, first;
private final Obje

Consumer not receiving anything

2016-04-29 Thread opacut
Hello,

I am writing a sort of performance test on AMQ 5.13. The producer is working
perfectly, but the consumer connects and does not receive any messages, just
hangs in there. I have trouble finding the point where it breaks. The
program spawns a number of worker threads that connect to the broker
instance. I will copy the configuration and the entire program. I have found
a similar question but it was unanswered.

I am in need of assistance, thank You in advance.

Oliver Pacut



The critical section I think is this:

@Override
public void run() {
try{
ConnectionFactory connectionFactory = new
ActiveMQConnectionFactory(brokerAddress);
connection = connectionFactory.createConnection();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Topic topic = session.createTopic(topicName);
messageConsumer = session.createConsumer(topic);
} catch (JMSException e) {
e.printStackTrace();
}

MessageListener messageListener = new MessageListener() {
@Override
public void onMessage(Message message) {
msgs++;
}
};

setUp = true;

try{
while(!ready){ Thread.sleep(1);}
} catch (InterruptedException ex) {
System.out.println("You done goofed in da worka.");
}

long startMs;

try {
connection.start();
messageConsumer.setMessageListener(messageListener);

while (msgs == 0){
Thread.sleep(1);
}

if(!first){
first = true;
}
startMs = System.currentTimeMillis();

while ((System.currentTimeMillis() <= (startMs + runfor))) {
Thread.sleep(1);
}


} catch (JMSException e) {
e.printStackTrace();
} catch (InterruptedException e) {
e.printStackTrace();
} finally {
try {
connection.stop();
messageConsumer.close();
session.close();
connection.close();
} catch (JMSException e) {
e.printStackTrace();
}
}

ConsumerPerformance.parent.updateMessages(msgs);

}


The whole configuration activemq.xml:



http://www.springframework.org/schema/beans";
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
  xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core
http://activemq.apache.org/schema/core/activemq-core.xsd";>




file:${activemq.conf}/credentials.properties



   




http://activemq.apache.org/schema/core";
brokerName="localhost" dataDirectory="${activemq.data}">



  


  

  

  











   


  
  

















   




http://www.springframework.org/schema/beans";
class="org.apache.activemq.hooks.SpringContextHook" />










And the consumer code:

/**
 * Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE
 * file distributed with this work for additional information regarding
copyright ownership. The ASF licenses this file
 * to You under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the
 * License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on
 * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
express or implied. See the License for the
 * specific language governing permissions and limitations under the
License.
 */

import net.sourceforge.argparse4j.ArgumentParsers;
import net.sourceforge.argparse4j.inf.ArgumentParser;
import net.sourceforge.argparse4j.inf.ArgumentParserException;
import net.sourceforge.argparse4j.inf.Namespace;
import org.apache.activemq.ActiveMQConnectionFactory;

import javax.jms.*;
import java.util.concurrent.atomic.AtomicLong;

import static net.sourceforge.argparse4j.impl.Arguments.store;

public class ConsumerPerformance {
static ConsumerPerformance parent;
protected static AtomicLong totalMessages = new AtomicLong(0);
protected static boolean setUp, ready, first;
private final Object lock = new Object();

public void updateMessages(long msgs){
synchronized (lock){
   

Re: Failover very slow with kahadb while restart of master is fast

2016-04-29 Thread Tim Bain
Sure, YOU know it was a clean shutdown, but how does your slave broker know
which type it was?

Methods to signal that fact could be added, but none currently exists in
the code.
On Apr 29, 2016 7:14 AM, "Christian Schneider" 
wrote:

> I do not assume that the restart will be clean when the master dies.
>
> In our case the brokers are shutdown orderly during a regular maintenance
> where some of the customer software is updated on the system.
> So in the case where we experienced the slow start it was a clean shutdown.
>
> Christian
>
> On 29.04.2016 15:03, Tim Bain wrote:
>
>> Also, your question makes the assumption that files your broker stumbles
>> upon when it becomes master were written by a previous master that
>> shutdown
>> cleanly, which is a rather optimistic assumption to make.
>> On Apr 29, 2016 7:01 AM, "Tim Bain"  wrote:
>>
>>
> --
> Christian Schneider
> http://www.liquid-reality.de
>
> Open Source Architect
> http://www.talend.com
>
>


Composite Destinations and originalDestination

2016-04-29 Thread Quinn Stevenson
I’m trying to use a composite destination to set up a simple wiretap of messages 
going to a set of queues - the configuration I’m using looks like this:
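
A typical shape for that kind of config, with placeholder destination names,
is roughly:

<destinationInterceptors>
  <virtualDestinationInterceptor>
    <virtualDestinations>
      <compositeQueue name="orders.in" forwardOnly="false">
        <forwardTo>
          <queue physicalName="audit.orders.in"/>
        </forwardTo>
      </compositeQueue>
    </virtualDestinations>
  </virtualDestinationInterceptor>
</destinationInterceptors>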






This works great except for one problem - I can’t tell where the original 
message came from so I can’t build my audit trail.

Is there a way to configure the composite destination so it can set the 
originalDestination header?




Re: Failover very slow with kahadb while restart of master is fast

2016-04-29 Thread Christian Schneider

I do not assume that the restart will be clean when the master dies.

In our case the brokers are shut down in an orderly fashion during a regular 
maintenance where some of the customer software is updated on the system.

So in the case where we experienced the slow start it was a clean shutdown.

Christian

On 29.04.2016 15:03, Tim Bain wrote:

Also, your question makes the assumption that files your broker stumbles
upon when it becomes master were written by a previous master that shutdown
cleanly, which is a rather optimistic assumption to make.
On Apr 29, 2016 7:01 AM, "Tim Bain"  wrote:



--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: ActiveMQ Broker Camel Component and wildcards

2016-04-29 Thread Tim Bain
Sorry, I've never used them nor looked for examples, but maybe someone else
has.
On Apr 29, 2016 7:02 AM, "Quinn Stevenson" 
wrote:

> Thank You Tim -
>
> Is there an example of an interceptor somewhere in the code base I can use
> as a reference to get started on this?
>
> BTW - I did get it working with the Camel Broker Component by switching from
>  to .  I’m not sure why that made a difference,
> but it did.
>
> > On Apr 28, 2016, at 10:53 PM, Tim Bain  wrote:
> >
> > I think you want to be using interceptors (
> > http://activemq.apache.org/interceptors.html), not consuming from a
> queue
> > and then publishing back to the same queue.  I've always believed that
> > embedded Camel routes couldn't be inserted into the middle of accepting a
> > message (which is what you really want), but interceptors should be able
> to
> > do that.
> >
> > Tim
> >
> > On Thu, Apr 28, 2016 at 12:49 PM, Quinn Stevenson <
> > qu...@pronoia-solutions.com> wrote:
> >
> >> I’m trying to use the ActiveMQ Broker Camel Component to add some JMS
> user
> >> properties to messages as they arrive at the broker.  I’m using a
> wildcard
> >> on the from so I can apply the same logic to a set of topics or queues,
> but
> >> I can’t seem to send the message back to the original queue.
> >>
> >> Without wildcards, it works fine - something like this:
> >> http://www.springframework.org/schema/beans";
> >>   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
> >>   xsi:schemaLocation="
> >>http://camel.apache.org/schema/spring
> >> http://camel.apache.org/schema/spring/camel-spring.xsd
> >>http://www.springframework.org/schema/beans
> >> http://www.springframework.org/schema/beans/spring-beans.xsd";>
> >>
> >>http://camel.apache.org/schema/spring";>
> >>
> >>
> >>
> >>MyHeaderValue
> >>
> >>
> >>
> >>
> >>
> >>
> >> 
> >>
> >> However when I put in the wildcards, I get and IllegalStateException.
> The
> >> configuration I’m trying looks like this
> >> http://www.springframework.org/schema/beans";
> >>   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
> >>   xsi:schemaLocation="
> >>http://camel.apache.org/schema/spring
> >> http://camel.apache.org/schema/spring/camel-spring.xsd
> >>http://www.springframework.org/schema/beans
> >> http://www.springframework.org/schema/beans/spring-beans.xsd";>
> >>
> >>http://camel.apache.org/schema/spring";>
> >>
> >>
> >>
> >>MyHeaderValue
> >>
> >>
> >>broker://${header[JMSDestination]}
> >>
> >>
> >>
> >>
> >>
> >> 
> >>
> >> And here’s the exception I get
> >>
> >> 2016-04-28 12:47:43,721 | ERROR | Failed delivery for (MessageId:
> >> ID-macpro-local-53588-1461869253207-0-2 on ExchangeId:
> >> ID-macpro-local-53588-1461869253207-0-3). Exhausted after delivery
> attempt:
> >> 1 caught: java.lang.IllegalStateException: Not the original message from
> >> the broker Message: Enter some text here for the message body...
> >>
> >> Message History
> >>
> >>
> ---
> >> RouteId  ProcessorId  Processor
> >>Elapsed (ms)
> >> [in-audit-to-file  ] [in-audit-to-file  ] [broker://queue:in.adt.*
> >>   ] [23]
> >> [in-audit-to-file  ] [setHeader1] [setHeader[MyCustomHeader]
> >>   ] [ 3]
> >> [in-audit-to-file  ] [recipientList1]
> >> [recipientList[simple{broker://${header[JMSDestination]}}]
> >>   ] [18]
> >>
> >> Exchange
> >>
> >>
> ---
> >> Exchange[
> >>Id  ID-macpro-local-53588-1461869253207-0-3
> >>ExchangePattern InOnly
> >>Headers
> >> {breadcrumbId=ID:macpro.local-53587-1461869252118-4:1:1:1:1,
> >> CamelRedelivered=false, CamelRedeliveryCounter=0, JMSCorrelationID=,
> >> JMSCorrelationIDAsBytes=, JMSDeliveryMode=1,
> >> JMSDestination=queue://in.adt.epic, JMSExpiration=0,
> >> JMSMessageID=ID:macpro.local-53587-1461869252118-4:1:1:1:1,
> JMSPriority=0,
> >> JMSRedelivered=false, JMSReplyTo=null, JMSTimestamp=1461869263684,
> >> JMSType=, JMSXGroupID=null, JMSXUserID=null,
> MyCustomHeader=MyHeaderValue}
> >>BodyTypeString
> >>BodyEnter some text here for the message body...
> >> ]
> >>
> >> Stacktrace
> >>
> ---
> >> | org.apache.camel.processor.DefaultErrorHandler | Ac

Re: Separate kahadb for queues and dead letter queues?

2016-04-29 Thread Tim Bain
Using mKahaDB with a separate KahaDB instance for the DLQ is the best idea
I know of for dealing with the problem you describe.

A JIRA enhancement request exists for compacting KahaDB journals (
https://issues.apache.org/jira/plugins/servlet/mobile#issue/AMQ-3978).  You
should vote for it, and if you felt like implementing a fix and submitting
a pull request, that would help make sure it gets implemented sooner rather
than later.
On Apr 29, 2016 6:39 AM, "Christian Schneider" 
wrote:

> We have the issue that our journals at a customer are very large. We
> analyzed the messages in the system and found that most of the time
> the normal queues stay at a very low number of messages near to 0. The
> only queues that hold messages for longer periods of time are the dead
> letter queues.
>
> The journals are much bigger than the dead letter queue contents.
>
> My assumption is that most of the journals only contain very few still
> needed messages and the ones they contain are the messages that now are in
> the dead letter queues.
> Still the journals can not be cleaned up of course.
>
> So what I would like to do is to use one kahadb for the regular queues and
> a different one for the dead letter queues. Does that make sense?
> I will try to do this using "Multi(m) kahaDB persistence adapter": See
> http://activemq.apache.org/kahadb.html
>
> I also wonder if it would make sense that ActiveMQ cleans up its journals
> regularly. It could simply move all the oldest messages to the newest journal
> to kind of pack it denser.
> Would that work?
>
> Christian
>
> --
> Christian Schneider
> http://www.liquid-reality.de
>
> Open Source Architect
> http://www.talend.com
>
>


Re: Failover very slow with kahadb while restart of master is fast

2016-04-29 Thread Tim Bain
Also, your question makes the assumption that files your broker stumbles
upon when it becomes master were written by a previous master that shut down
cleanly, which is a rather optimistic assumption to make.
On Apr 29, 2016 7:01 AM, "Tim Bain"  wrote:

> By reading the code (
> http://www.grepcode.com/file/repo1.maven.org/maven2/org.apache.activemq/activemq-kahadb-store/5.11.1/org/apache/activemq/store/kahadb/disk/journal/Journal.java#Journal.recoveryCheck%28org.apache.activemq.store.kahadb.disk.journal.DataFile%29),
> I found that we're reading the data files to confirm that they're not
> corrupted.  That's the beauty of an open-source project: all of the source
> code is available, so you can read it yourself and see what's being done.
>
> You should confirm that your OS will properly invalidate its cache entry
> for a given file when that file changes; having KahaDB read stale cache
> entries would be a very bad thing.
>
> Tim
> We have found the main cause for this issue. Current Master and new Master
> seem to do the exact same things on startup.
> They read all the journals.
>
> It seems that the OS is caching the files in memory in some cases.
> The master seems to have all the journal files in the cache. So when it
> restarts it is fast.
> The new master seems to not have the journals in cache so they have to be
> transferred over the network (~10MB/s) in our case. So this
> causes the difference in startup time.
>
> One big question remains:
> Why are the whole journals read? Shouldn't a master that is shut down in a
> controlled way leave a clean state, so on startup only the meta data index
> is read?
>
> Christian
>
> On 15.02.2016 13:32, Christian Schneider wrote:
>
>> At a customer we are experiencing a very strange behaviour of activemq.
>>
>> We have two brokers on separate machines A and B. Storage is on a third
>> machine and imported using nfsv4.
>> Kahadb contains about 18 GB of journal files.
>>
>> Scenario 1:
>> - Stop B
>> - Stop A (master now)
>> - Start A
>>
>> The start of A takes about 50 seconds.
>>
>> Scnario 2:
>> - Stop A
>> - Stop B (master now)
>> - Start A
>>
>> The start of A takes about 5 minutes.
>>
>> In the slow case the log at debug level shows a big gap between adding
>> the last queue (12:04) and running the ActiveMQ Journal Checkpoint Worker
>> again after the queue creation (12:09).
>> I have no idea what activemq is doing in the mean time but it seems to
>> take several minutes.
>>
>> Can anyone explain this behaviour or explain what activemq does in the
>> mean time? I would also be happy about pointer to the code.
>> What I do not understand is why it makes a difference if the master is
>> started again or if the slave is becoming master. As they both share the
>> same kahadb over nfs I would expect the same startup
>> behaviour.
>>
>> Christian
>>
>> -
>>
>> Slow start
>> 2016-02-15 12:03:01,943 | INFO  | Refreshing
>> org.apache.activemq.xbean.XBeanBrokerFactory$1@17b14695: startup date
>> [Mon Feb 15 12:03:01 CET 2016]; root of context hierarchy |
>> org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
>> ...
>> 2016-02-15 12:03:03,018 | INFO  |
>> PListStore:[/path/activemq/data/A/tmp_storage] started |
>> org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
>> 2016-02-15 12:03:03,033 | INFO  | ignoring zero length, partially
>> initialised journal data file: db-1.log number = 1 , length = 0 |
>> org.apache.activemq.store.kahadb.disk.journal.Journal | main
>> 2016-02-15 12:03:03,177 | INFO  | JobSchedulerStore:path/scheduler
>> started | org.apache.activemq.store.kahadb.scheduler.JobSchedulerStoreImpl
>> | main
>> 2016-02-15 12:03:03,180 | INFO  | JobScheduler using directory:
>> path/scheduler | org.apache.activemq.broker.BrokerService | main
>> 2016-02-15 12:03:03,335 | INFO  | Using Persistence Adapter:
>> KahaDBPersistenceAdapter[/path] | org.apache.activemq.broker.BrokerService
>> | main
>> 2016-02-15 12:03:03,350 | INFO  | JMX consoles can connect to
>> service:jmx:rmi:///jndi/rmi://localhost:1617/jmxrmi |
>> org.apache.activemq.broker.jmx.ManagementContext | JMX connector
>> 2016-02-15 12:03:05,198 | INFO  | Corrupt journal records found in
>> 'path/db-1122.log' between offsets: 17893102..17893732 |
>> org.apache.activemq.store.kahadb.disk.journal.Journal | main
>> 2016-02-15 12:03:47,925 | INFO  | KahaDB is version 5 |
>> org.apache.activemq.store.kahadb.MessageDatabase | main
>> 
>> 2016-02-15 12:03:49,233 | INFO  | Recovering from the journal ... |
>> org.apache.activemq.store.kahadb.MessageDatabase | main
>> 2016-02-15 12:03:49,237 | INFO  | Recovery replayed 20 operations from
>> the journal in 0.094 seconds. |
>> org.apache.activemq.store.kahadb.MessageDatabase | main
>> 2016-02-15 12:04:26,863 | DEBUG | Checkpoint started. |
>> org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
>> Checkpoint Worker
>> 2016-02-15 12:04:26,932 | DEBUG | Checkpoint done. |
>> org.apache.activemq.store.kahadb.MessageDatabase | Acti

Re: ActiveMQ Broker Camel Component and wildcards

2016-04-29 Thread Quinn Stevenson
Thank You Tim -

Is there an example of an interceptor somewhere in the code base I can use as a 
reference to get started on this?

BTW - I did get it working with the Camel Broker Component by switching from 
 to .  I’m not sure why that made a difference, but 
it did.
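
For reference, a minimal broker interceptor along the lines Tim suggests below
might look something like this Java sketch (the class name, destination prefix,
and property values are illustrative, not from this thread; the plugin would be
registered under the broker's <plugins> element in activemq.xml):

import org.apache.activemq.broker.Broker;
import org.apache.activemq.broker.BrokerFilter;
import org.apache.activemq.broker.BrokerPlugin;
import org.apache.activemq.broker.ProducerBrokerExchange;
import org.apache.activemq.command.Message;

// Hypothetical plugin: stamps a user property on every message sent to a
// matching destination before the broker stores and dispatches it.
public class StampHeaderPlugin implements BrokerPlugin {

    @Override
    public Broker installPlugin(Broker next) throws Exception {
        return new BrokerFilter(next) {
            @Override
            public void send(ProducerBrokerExchange exchange, Message message) throws Exception {
                String physicalName = message.getDestination().getPhysicalName();
                if (physicalName.startsWith("in.adt.")) {   // crude stand-in for a wildcard match
                    message.setProperty("MyCustomHeader", "MyHeaderValue");
                }
                super.send(exchange, message);
            }
        };
    }
}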

> On Apr 28, 2016, at 10:53 PM, Tim Bain  wrote:
> 
> I think you want to be using interceptors (
> http://activemq.apache.org/interceptors.html), not consuming from a queue
> and then publishing back to the same queue.  I've always believed that
> embedded Camel routes couldn't be inserted into the middle of accepting a
> message (which is what you really want), but interceptors should be able to
> do that.
> 
> Tim
> 
> On Thu, Apr 28, 2016 at 12:49 PM, Quinn Stevenson <
> qu...@pronoia-solutions.com> wrote:
> 
>> I’m trying to use the ActiveMQ Broker Camel Component to add some JMS user
>> properties to messages as they arrive at the broker.  I’m using a wildcard
>> on the from so I can apply the same logic to a set of topics or queues, but
>> I can’t seem to send the message back to the original queue.
>> 
>> Without wildcards, it works fine - something like this:
>> http://www.springframework.org/schema/beans";
>>   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
>>   xsi:schemaLocation="
>>http://camel.apache.org/schema/spring
>> http://camel.apache.org/schema/spring/camel-spring.xsd
>>http://www.springframework.org/schema/beans
>> http://www.springframework.org/schema/beans/spring-beans.xsd";>
>> 
>>http://camel.apache.org/schema/spring";>
>>
>>
>>
>>MyHeaderValue
>>
>>
>>
>> 
>>
>> 
>> 
>> 
>> However, when I put in the wildcards, I get an IllegalStateException.  The
>> configuration I’m trying looks like this
>> http://www.springframework.org/schema/beans";
>>   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
>>   xsi:schemaLocation="
>>http://camel.apache.org/schema/spring
>> http://camel.apache.org/schema/spring/camel-spring.xsd
>>http://www.springframework.org/schema/beans
>> http://www.springframework.org/schema/beans/spring-beans.xsd";>
>> 
>>http://camel.apache.org/schema/spring";>
>>
>>
>>
>>MyHeaderValue
>>
>>
>>broker://${header[JMSDestination]}
>>
>>
>> 
>>
>> 
>> 
>> 
>> And here’s the exception I get
>> 
>> 2016-04-28 12:47:43,721 | ERROR | Failed delivery for (MessageId:
>> ID-macpro-local-53588-1461869253207-0-2 on ExchangeId:
>> ID-macpro-local-53588-1461869253207-0-3). Exhausted after delivery attempt:
>> 1 caught: java.lang.IllegalStateException: Not the original message from
>> the broker Message: Enter some text here for the message body...
>> 
>> Message History
>> 
>> ---
>> RouteId  ProcessorId  Processor
>>Elapsed (ms)
>> [in-audit-to-file  ] [in-audit-to-file  ] [broker://queue:in.adt.*
>>   ] [23]
>> [in-audit-to-file  ] [setHeader1] [setHeader[MyCustomHeader]
>>   ] [ 3]
>> [in-audit-to-file  ] [recipientList1]
>> [recipientList[simple{broker://${header[JMSDestination]}}]
>>   ] [18]
>> 
>> Exchange
>> 
>> ---
>> Exchange[
>>Id  ID-macpro-local-53588-1461869253207-0-3
>>ExchangePattern InOnly
>>Headers
>> {breadcrumbId=ID:macpro.local-53587-1461869252118-4:1:1:1:1,
>> CamelRedelivered=false, CamelRedeliveryCounter=0, JMSCorrelationID=,
>> JMSCorrelationIDAsBytes=, JMSDeliveryMode=1,
>> JMSDestination=queue://in.adt.epic, JMSExpiration=0,
>> JMSMessageID=ID:macpro.local-53587-1461869252118-4:1:1:1:1, JMSPriority=0,
>> JMSRedelivered=false, JMSReplyTo=null, JMSTimestamp=1461869263684,
>> JMSType=, JMSXGroupID=null, JMSXUserID=null, MyCustomHeader=MyHeaderValue}
>>BodyTypeString
>>BodyEnter some text here for the message body...
>> ]
>> 
>> Stacktrace
>> ---
>> | org.apache.camel.processor.DefaultErrorHandler | ActiveMQ VMTransport:
>> vm://localhost#1
>> java.lang.IllegalStateException: Not the original message from the broker
>> Message: Enter some text here for the message body...
>>at
>> org.apache.activemq.camel.component.broker.BrokerProducer.checkOriginalMessage(BrokerProducer.java:95)[activemq-camel-5.13.2.jar:5.13.2]
>>at
>> org.apache.activemq.ca

Re: Failover very slow with kahadb while restart of master is fast

2016-04-29 Thread Tim Bain
By reading the code (
http://www.grepcode.com/file/repo1.maven.org/maven2/org.apache.activemq/activemq-kahadb-store/5.11.1/org/apache/activemq/store/kahadb/disk/journal/Journal.java#Journal.recoveryCheck%28org.apache.activemq.store.kahadb.disk.journal.DataFile%29),
I found that we're reading the data files to confirm that they're not
corrupted.  That's the beauty of an open-source project: all of the source
code is available, so you can read it yourself and see what's being done.

You should confirm that your OS will properly invalidate its cache entry
for a given file when that file changes; having KahaDB read stale cache
entries would be a very bad thing.
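
For what it's worth, that verification pass appears to be tied to KahaDB's
journal-check settings; a sketch of the relevant attributes in activemq.xml
(names from the kahadb documentation page; defaults vary by version, so verify
against your release rather than copying this blindly):

<persistenceAdapter>
  <kahaDB directory="/path/to/kahadb"
          checkForCorruptJournalFiles="true"
          checksumJournalFiles="true"/>
</persistenceAdapter>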

Tim
We have found the main cause for this issue. Current Master and new Master
seem to do the exact same things on startup.
They read all the journals.

It seems that the OS is caching the files in memory in some cases.
The master seems to have all the journal files in the cache, so when it
restarts it is fast.
The new master seems not to have the journals in cache, so they have to be
transferred over the network (~10 MB/s in our case). This
causes the difference in startup time.

One big question remains:
Why are the whole journals read? Shouldn't a master that is shut down in a
controlled way leave a clean state, so on startup only the meta data index
is read?

Christian

On 15.02.2016 13:32, Christian Schneider wrote:

> At a customer we are experiencing a very strange behaviour of activemq.
>
> We have two brokers on separate machines A and B. Storage is on a third
> machine and imported using nfsv4.
> Kahadb contains about 18 GB of journal files.
>
> Scenario 1:
> - Stop B
> - Stop A (master now)
> - Start A
>
> The start of A takes about 50 seconds.
>
> Scenario 2:
> - Stop A
> - Stop B (master now)
> - Start A
>
> The start of A takes about 5 minutes.
>
> In the slow case the log at debug level shows a big gap between adding the
> last queue (12:04) and running the ActiveMQ Journal Checkpoint Worker again
> after the queue creation (12:09).
> I have no idea what activemq is doing in the mean time but it seems to
> take several minutes.
>
> Can anyone explain this behaviour or explain what activemq does in the
> meantime? I would also be happy about pointers to the code.
> What I do not understand is why it makes a difference if the master is
> started again or if the slave is becoming master. As they both share the
> same kahadb over nfs I would expect the same startup
> behaviour.
>
> Christian
>
> -
>
> Slow start
> 2016-02-15 12:03:01,943 | INFO  | Refreshing
> org.apache.activemq.xbean.XBeanBrokerFactory$1@17b14695: startup date
> [Mon Feb 15 12:03:01 CET 2016]; root of context hierarchy |
> org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
> ...
> 2016-02-15 12:03:03,018 | INFO  |
> PListStore:[/path/activemq/data/A/tmp_storage] started |
> org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
> 2016-02-15 12:03:03,033 | INFO  | ignoring zero length, partially
> initialised journal data file: db-1.log number = 1 , length = 0 |
> org.apache.activemq.store.kahadb.disk.journal.Journal | main
> 2016-02-15 12:03:03,177 | INFO  | JobSchedulerStore:path/scheduler started
> | org.apache.activemq.store.kahadb.scheduler.JobSchedulerStoreImpl | main
> 2016-02-15 12:03:03,180 | INFO  | JobScheduler using directory:
> path/scheduler | org.apache.activemq.broker.BrokerService | main
> 2016-02-15 12:03:03,335 | INFO  | Using Persistence Adapter:
> KahaDBPersistenceAdapter[/path] | org.apache.activemq.broker.BrokerService
> | main
> 2016-02-15 12:03:03,350 | INFO  | JMX consoles can connect to
> service:jmx:rmi:///jndi/rmi://localhost:1617/jmxrmi |
> org.apache.activemq.broker.jmx.ManagementContext | JMX connector
> 2016-02-15 12:03:05,198 | INFO  | Corrupt journal records found in
> 'path/db-1122.log' between offsets: 17893102..17893732 |
> org.apache.activemq.store.kahadb.disk.journal.Journal | main
> 2016-02-15 12:03:47,925 | INFO  | KahaDB is version 5 |
> org.apache.activemq.store.kahadb.MessageDatabase | main
> 
> 2016-02-15 12:03:49,233 | INFO  | Recovering from the journal ... |
> org.apache.activemq.store.kahadb.MessageDatabase | main
> 2016-02-15 12:03:49,237 | INFO  | Recovery replayed 20 operations from the
> journal in 0.094 seconds. |
> org.apache.activemq.store.kahadb.MessageDatabase | main
> 2016-02-15 12:04:26,863 | DEBUG | Checkpoint started. |
> org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
> Checkpoint Worker
> 2016-02-15 12:04:26,932 | DEBUG | Checkpoint done. |
> org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal
> Checkpoint Worker
> 2016-02-15 12:04:26,940 | INFO  | Apache ActiveMQ 5.10.2 (A,
> ID:A-59167-1455534183070-1:1) is starting |
> org.apache.activemq.broker.BrokerService | main
> ...
> 2016-02-15 12:04:30,289 | DEBUG | A adding destination: queue://q1.DLQ |
> org.apache.activemq.broker.region.AbstractRegion | main
> 2016-02-15 12:04:57,026 | DEBUG | queue://q1.DLQ exp

RE: Testing Master Slave on Shared File System

2016-04-29 Thread Christopher Fogarty
Has anyone done master slave with shared filesystem?

The documentation is sparse.

Tim, you have been a great help. I have started to see this

INFO | Database /data/kahadb/lock is locked... waiting 10 seconds for the 
database to be unlocked. Reason: java.io.FileNotFoundException: 
/data/kahadb/lock (Read-only file system)

However, when I shut down the copy of ActiveMQ that seems to have the lock, the above 
message does not clear.
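
For reference, the setup under test boils down to pointing both brokers'
KahaDB at the same shared directory; if the shared filesystem's locking turns
out to be unreliable (as Matt notes below), a database lease locker can be
layered on top. A rough sketch only: the directory, dataSource bean id, and
interval are illustrative, and the locker element should be checked against
the pluggable storage lockers documentation for your version.

<persistenceAdapter>
  <kahaDB directory="/data/kahadb">
    <locker>
      <!-- optional: hold the master lock via a database lease instead of a file lock -->
      <lease-database-locker lockAcquireSleepInterval="10000" dataSource="#lock-ds"/>
    </locker>
  </kahaDB>
</persistenceAdapter>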

Chris Fogarty

VP, System Engineering
Versiant Corporation
3700 Arco Corporate Drive
Suite 350
Charlotte, NC 28273
Office: (704) 831-3905 | Mobile: (704) 763-

chris.foga...@versiant.com

-Original Message-
From: tbai...@gmail.com [mailto:tbai...@gmail.com] On Behalf Of Tim Bain
Sent: Friday, April 29, 2016 12:56 AM
To: ActiveMQ Users 
Subject: Re: Testing Master Slave on Shared File System

Browsing the queue via the web console for the slave broker should be a good 
test; if you're able to do that, the master failed to lock the slave out of the 
KahaDB data.

Tim

On Thu, Apr 28, 2016 at 2:14 PM, Christopher Fogarty < 
christopher.foga...@versiant.com> wrote:

> I have the disk as part of its own volume group, with an LV carved out of it
> and an ext4 filesystem on top. This is mounted on both systems and I am
> able to start ActiveMQ fine. But I would feel a lot better validating
> that only one of the two nodes actually holds the lock. I would love even
> more to verify that both nodes, when started, are doing what they
> should: one holds the lock and the other stays in a sort of
> standby until the lock is released.
>
> Hope this makes sense.
>
> Chris Fogarty
>
>
>
>
>
> On Thu, Apr 28, 2016 at 12:37 PM -0700, "Matt Pavlovich" < 
> mattr...@gmail.com> wrote:
>
> Chris-
>
> What file system are you using to share the mount?  The filesystem 
> would need to support distributed locking (many "shareable 
> filesystems" don't do this properly.
>
> The other approach is to use the shared filesystem for KahaDB and a 
> database lease-locker to work around the 
> most-shared-filesystems-don't-do-locking-properly problem.
>
> -Matt
>
> On 4/28/16 12:34 PM, Christopher Fogarty wrote:
> > I have set up two servers:
> >
> > Both CENTOS with a shared SAN disk mounted and active on both nodes.
> >
> > I have set up ActiveMQ 5.6
> >
> > I am able to start each with the following configuration
> >
> > 
> >
> > 
> >
> > Each node can and does start, but how can I test, or what do I look 
> > for
> to make sure that file locking is actually working as described in the 
> http://activemq.apache.org/shared-file-system-master-slave.html
> document.  Before putting this into production, I would feel a lot 
> better knowing that only one of the two nodes is capable of accessing the 
> kahadb.
> >
> >
> > Chris Fogarty
> >
> > VP, System Engineering
> > Versiant Corporation
> > 3700 Arco Corporate Drive
> > Suite 350
> > Charlotte, NC 28273
> > Office: (704) 831-3905 | Mobile: (704) 763-
> >
> > chris.foga...@versiant.com
> >
> > -Original Message-
> > From: Christopher Fogarty
> > Sent: Wednesday, April 27, 2016 9:02 AM
> > To: 'users@activemq.apache.org' 
> > Subject: RE: Running ActiveMQ Broker as different username unable to
> connect via web admin console
> >
> > What Platform? Do you have a firewall running
> >
> > Chris Fogarty
> >
> > VP, System Engineering
> > Versiant Corporation
> > 3700 Arco Corporate Drive
> > Suite 350
> > Charlotte, NC 28273
> > Office: (704) 831-3905 | Mobile: (704) 763-
> >
> > chris.foga...@versiant.com
> >
> > -Original Message-
> > From: jboss [mailto:jb...@bcidaho.com]
> > Sent: Wednesday, April 27, 2016 8:16 AM
> > To: users@activemq.apache.org
> > Subject: Re: Running ActiveMQ Broker as different username unable to
> connect via web admin console
> >
> > The web console does not come up at all.  The error that Chrome
> gives is
> > "Connection Refused".  It does not even get to the point of asking for a
> username/password.
> >
> > Thanks,
> >
> >
> >
> >
> > --
> > View this message in context:
> http://activemq.2283324.n4.nabble.com/Running-ActiveMQ-Broker-as-diffe
> rent-username-unable-to-connect-via-web-admin-console-tp4711175p471128
> 0.html
> > Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>
>


Separate kahadb for queues and dead letter queues?

2016-04-29 Thread Christian Schneider
We have the issue that our journals at a customer are very large. We 
analyzed the messages in the system and found that, most of the time,
the normal queues hold very few messages, close to 0. The 
only queues that hold messages for longer periods of time are the dead 
letter queues.


The journals are much bigger than the dead letter queue contents.

My assumption is that most of the journals contain only a few still-needed 
messages, and that those are the messages that are now sitting in the dead 
letter queues.

Still, those journals cannot be cleaned up, of course.

So what I would like to do is to use one kahadb for the regular queues 
and a different one for the dead letter queues. Does that make sense?
I will try to do this using "Multi(m) kahaDB persistence adapter": See 
http://activemq.apache.org/kahadb.html
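
A rough sketch of that split with mKahaDB, assuming an
individualDeadLetterStrategy that prefixes dead letter queues with "DLQ."
(the prefix and journal size are illustrative):

<persistenceAdapter>
  <mKahaDB directory="${activemq.data}/kahadb">
    <filteredPersistenceAdapters>
      <!-- dead letter queues go to their own store -->
      <filteredKahaDB queue="DLQ.>">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb"/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- everything else -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>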


I also wonder if it would make sense for ActiveMQ to clean up its 
journals regularly. It could simply move all of the oldest messages to the 
newest journal to pack them more densely.

Would that work?

Christian

--
Christian Schneider
http://www.liquid-reality.de

Open Source Architect
http://www.talend.com



Re: Failover very slow with kahadb while restart of master is fast

2016-04-29 Thread Christian Schneider
We have found the main cause for this issue. Current Master and new 
Master seem to do the exact same things on startup.

They read all the journals.

It seems that the OS is caching the files in memory in some cases.
The master seems to have all the journal files in the cache, so when it 
restarts it is fast.
The new master seems not to have the journals in cache, so they have to 
be transferred over the network (~10 MB/s in our case). This
causes the difference in startup time.

One big question remains:
Why are the whole journals read? Shouldn't a master that is shut down in 
a controlled way leave a clean state, so on startup only the meta data 
index is read?


Christian

On 15.02.2016 13:32, Christian Schneider wrote:

At a customer we are experiencing a very strange behaviour of activemq.

We have two brokers on separate machines A and B. Storage is on a 
third machine and imported using nfsv4.

Kahadb contains about 18 GB of journal files.

Scenario 1:
- Stop B
- Stop A (master now)
- Start A

The start of A takes about 50 seconds.

Scenario 2:
- Stop A
- Stop B (master now)
- Start A

The start of A takes about 5 minutes.

In the slow case the log at debug level shows a big gap between adding 
the last queue (12:04) and running the ActiveMQ Journal Checkpoint 
Worker again after the queue creation (12:09).
I have no idea what activemq is doing in the mean time but it seems to 
take several minutes.


Can anyone explain this behaviour or explain what activemq does in the 
meantime? I would also be happy about pointers to the code.
What I do not understand is why it makes a difference if the master is 
started again or if the slave is becoming master. As they both share 
the same kahadb over nfs I would expect the same startup

behaviour.

Christian

-

Slow start
2016-02-15 12:03:01,943 | INFO  | Refreshing 
org.apache.activemq.xbean.XBeanBrokerFactory$1@17b14695: startup date 
[Mon Feb 15 12:03:01 CET 2016]; root of context hierarchy | 
org.apache.activemq.xbean.XBeanBrokerFactory$1 | main

...
2016-02-15 12:03:03,018 | INFO  | 
PListStore:[/path/activemq/data/A/tmp_storage] started | 
org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
2016-02-15 12:03:03,033 | INFO  | ignoring zero length, partially 
initialised journal data file: db-1.log number = 1 , length = 0 | 
org.apache.activemq.store.kahadb.disk.journal.Journal | main
2016-02-15 12:03:03,177 | INFO  | JobSchedulerStore:path/scheduler 
started | 
org.apache.activemq.store.kahadb.scheduler.JobSchedulerStoreImpl | main
2016-02-15 12:03:03,180 | INFO  | JobScheduler using directory: 
path/scheduler | org.apache.activemq.broker.BrokerService | main
2016-02-15 12:03:03,335 | INFO  | Using Persistence Adapter: 
KahaDBPersistenceAdapter[/path] | 
org.apache.activemq.broker.BrokerService | main
2016-02-15 12:03:03,350 | INFO  | JMX consoles can connect to 
service:jmx:rmi:///jndi/rmi://localhost:1617/jmxrmi | 
org.apache.activemq.broker.jmx.ManagementContext | JMX connector
2016-02-15 12:03:05,198 | INFO  | Corrupt journal records found in 
'path/db-1122.log' between offsets: 17893102..17893732 | 
org.apache.activemq.store.kahadb.disk.journal.Journal | main
2016-02-15 12:03:47,925 | INFO  | KahaDB is version 5 | 
org.apache.activemq.store.kahadb.MessageDatabase | main


2016-02-15 12:03:49,233 | INFO  | Recovering from the journal ... | 
org.apache.activemq.store.kahadb.MessageDatabase | main
2016-02-15 12:03:49,237 | INFO  | Recovery replayed 20 operations from 
the journal in 0.094 seconds. | 
org.apache.activemq.store.kahadb.MessageDatabase | main
2016-02-15 12:04:26,863 | DEBUG | Checkpoint started. | 
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal 
Checkpoint Worker
2016-02-15 12:04:26,932 | DEBUG | Checkpoint done. | 
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal 
Checkpoint Worker
2016-02-15 12:04:26,940 | INFO  | Apache ActiveMQ 5.10.2 (A, 
ID:A-59167-1455534183070-1:1) is starting | 
org.apache.activemq.broker.BrokerService | main

...
2016-02-15 12:04:30,289 | DEBUG | A adding destination: queue://q1.DLQ 
| org.apache.activemq.broker.region.AbstractRegion | main
2016-02-15 12:04:57,026 | DEBUG | queue://q1.DLQ expiring messages .. 
| org.apache.activemq.broker.region.Queue | ActiveMQ Broker[A] Scheduler
2016-02-15 12:04:57,029 | DEBUG | q1.DLQ toPageIn: 5, Inflight: 0, 
pagedInMessages.size 0, pagedInPendingDispatch.size 0, enqueueCount: 
0, dequeueCount: 0, memUsage:0 | 
org.apache.activemq.broker.region.Queue | ActiveMQ Broker[A] Scheduler
2016-02-15 12:09:20,440 | DEBUG | Checkpoint started. | 
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal 
Checkpoint Worker
2016-02-15 12:09:20,476 | DEBUG | Checkpoint done. | 
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal 
Checkpoint Worker
2016-02-15 12:09:20,488 | DEBUG | queue://q1.DLQ expiring messages 
done. | org.apache.activemq.broker.region.Queue | ActiveMQ Broker[A] 
Scheduler






--
Ch