Client Connections Still Active After Stopping/Removing TransportConnector

2016-01-26 Thread Mark Anderson
With ActiveMQ 5.13.0 I'm programmatically adding and removing a
TcpTransportConnector as follows. The connectorUri is tcp://0.0.0.0:24726

Add

connector = brokerService.addConnector(connectorUri);
brokerService.startTransportConnector(connector);

Remove

brokerService.removeConnector(connector);
connector.stop();
connector.setBrokerService(null);

However, I'm finding that after I remove the connector any client
connections that were made to that transport are still established and
functioning (i.e. messages are still being sent and received).

Is this the expected behaviour? I would have thought that removing and
stopping a connector would also automatically close any client connections.

Thanks,
Mark


Re: Client Connections Still Active After Stopping/Removing TransportConnector

2016-01-26 Thread Mark Anderson
To follow up, I figured out that brokerService.startTransportConnector was
actually returning a managed transport connector, and that was the one I
should have been stopping. Once I stopped that one, the client connections
all failed as expected.
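
In code, what ended up working for me looks roughly like this (a sketch of my own setup, variable names just for illustration):

TransportConnector connector = brokerService.addConnector(connectorUri);
// keep the connector returned here -- this is the managed one
TransportConnector managed = brokerService.startTransportConnector(connector);

// on removal, stop the managed connector; that is what actually drops the client connections
brokerService.removeConnector(connector);
managed.stop();
// removeConnector still gave me trouble, hence the JIRA below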

The Javadocs on BrokerService could definitely be better in this area.

However, I did then run into an issue with removeConnector which I filed at
https://issues.apache.org/jira/browse/AMQ-6144

On Tue, 26 Jan 2016 at 09:55 Mark Anderson <manderso...@gmail.com> wrote:

> With ActiveMQ 5.13.0 I'm programmatically adding and removing a
> TcpTransportConnector as follows. The connectorUri is tcp://0.0.0.0:24726
>
> Add
>
> connector = brokerService.addConnector(connectorUri);
> brokerService.startTransportConnector(connector);
>
> Remove
>
> brokerService.removeConnector(connector);
> connector.stop();
> connector.setBrokerService(null);
>
> However, I'm finding that after I remove the connector any client
> connections that were made to that transport are still established and
> functioning (i.e. messages are still being sent and received).
>
> Is this the expected behaviour? I would have thought that removing and
> stopping a connector would also automatically close any client connections.
>
> Thanks,
> Mark
>


Re: Shared File System Master/Slave with DRBD

2014-05-01 Thread Mark Anderson
Greg,

Thanks for the reply. I just wanted to confirm a couple of things.

1) The NFS requirement is so that DRBD can be used in Primary/Primary mode
as the file system needs to be writeable from both nodes?
2) With NFS I would also need to use Pacemaker to make the NFS server HA?
3) Is the NFS Tech Guide listed at
http://www.linbit.com/en/downloads/tech-guides what you are referring to?

Thanks,
Mark


On 1 May 2014 00:34, Greg Eckert g...@linbit.com wrote:

 Hey Mark,

 You should be able to use DRBD with something like NFS without a problem.
 The LINBIT engineers are in the process of publishing an NFS Tech Guide... For
 current official documentation, feel free to contact me directly.

 Best Regards,

 Greg

 --
 : Greg Eckert
 : g...@linbit.com
 : Not an Engineer
 : LINBIT | Your Way to High Availability
 : Web: http://www.linbit.com




Shared File System Master/Slave with DRBD

2014-04-30 Thread Mark Anderson
Is it possible to use DRBD with shared file system master/slave instead of
a SAN?

Slide 30 at http://www.slideshare.net/ceposta/activemq-59x-new-features
suggests this is the case but I haven't been able to find very much
information online.

Before spending time trying it I wanted to check if it was possible and if
there are any known issues. Any pointers to documentation or guides online
would also be appreciated.
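
For context, my understanding of shared file system master/slave is just that both brokers point their persistence store at the same (DRBD/NFS-backed) mount and rely on the lock file, so roughly something like this (the path and the use of KahaDB here are only my assumptions, not taken from any guide):

BrokerService broker = new BrokerService();
KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
kahaDB.setDirectory(new File("/mnt/drbd/activemq-data"));  // illustrative path on the shared mount
broker.setPersistenceAdapter(kahaDB);
broker.start();
// whichever broker acquires the lock file in that directory becomes master;
// the other blocks on the lock and takes over if the master goes away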

Thanks,
Mark


Re: Memory and Temp Usage Questions

2012-11-27 Thread Mark Anderson
Thanks for the information.

I have a default policy entry with no memory limit configured. However, JMX
shows that all topics are using the 128Mb system usage limit. So my memory
limits appear to be correct to me, and I would expect messages to spool to disk.

Given the above do you think it is likely I am triggering AMQ-3643? Is
there a workaround? Do you have any idea when this is likely to be fixed?
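
For what it's worth, if I do end up needing an explicit per-topic limit rather than relying on the system usage limit, I assume it would look something like this (the 128Mb just mirrors my system usage limit and the wildcard is only an example):

PolicyMap policyMap = new PolicyMap();
PolicyEntry topicPolicy = new PolicyEntry();
topicPolicy.setProducerFlowControl(false);
topicPolicy.setMemoryLimit(128 * 1024 * 1024);       // bytes
policyMap.put(new ActiveMQTopic(">"), topicPolicy);  // ">" matches all topics
broker.setDestinationPolicy(policyMap);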


On 26 November 2012 12:21, Gary Tully gary.tu...@gmail.com wrote:

 you have a slow or blocked consumer that is blocking the send due to the
 pending message cursor being full. The blocked send will stop the
 connection from being terminated.

 To have the cursor spool to disk (temp store) you need to reduce the system
 usage memory limit b/c spooling to disk is based on that shared limit. It
 is independent of the destination limit.
 However, I think it is the destination limit that is visible in the log,
 hence the 211%.

 Start by increasing the destination limit to the same value as your system
 usage memory limit. Do this via a destination policy for your (or all)
 topic(s).

 There is a known problem with the checking of memory limits in the
 non-persistent case (to avoid the block on the cursor and respect the
 sendFailIfNoSpace flag), but that needs some work.
 Though the cursor is different in the JIRA, the symptom is related to
 https://issues.apache.org/jira/browse/AMQ-3643




 On 23 November 2012 15:36, Mark Anderson manderso...@gmail.com wrote:

  I have ActiveMQ 5.6.0 configured as follows:
 
  Producer Flow Control = false
  Send Fail If No Space = true
  Memory Usage Limit = 128Mb
  Temp Usage Limit = 1Gb
 
  All my messages are non-persistent. The temp usage is configured to handle
  spikes/slow consumers when processing messages.
 
  I continually see the following in the logs:
 
  WARN  Nov 20 20:55:47 (13748874 [InactivityMonitor Async Task:
  java.util.concurrent.ThreadPoolExecutor$Worker@7ea0e15b[State = 0, empty queue]]
  org.apache.activemq.broker.TransportConnection.Transport) Transport Connection to:
  tcp://192.168.2.103:35186 failed: java.net.SocketException: Broken pipe
  INFO  Nov 20 20:55:51 (13752162 [ActiveMQ Transport: tcp:///192.168.2.103:35168]
  org.apache.activemq.broker.TransportConnection) The connection to
  'tcp://192.168.2.103:35166' is taking a long time to shutdown.
  INFO  Nov 20 20:55:56 (13757162 [ActiveMQ Transport: tcp:///192.168.2.103:35168]
  org.apache.activemq.broker.TransportConnection) The connection to
  'tcp://192.168.2.103:35166' is taking a long time to shutdown.
  INFO  Nov 20 20:56:01 (13762162 [ActiveMQ Transport: tcp:///192.168.2.103:35168]
  org.apache.activemq.broker.TransportConnection) The connection to
  'tcp://192.168.2.103:35166' is taking a long time to shutdown.
 
  I'm not sure why the connection will never shut down.
 
  I then see the following message:
 
  org.apache.activemq.broker.region.TopicSubscription) TopicSubscription:
  consumer=ID:linux-5ks2-57958-1353426643811-3:1:378:1, destinations=1,
  dispatched=32766, delivered=0, matched=0, discarded=0: Pending message cursor
  [org.apache.activemq.broker.region.cursors.FilePendingMessageCursor@4c41cfa2]
  is full, temp usage (0%) or memory usage (211%) limit reached, blocking
  message add() pending the release of resources.
 
  This leads me to the following questions:
 
  1) Why would the memory usage be 211% while temp usage is 0%.
  2) The thread dump shows that send calls on producers are blocking. Why
  would they not throw exceptions when send fail if no space = true?
  3) Would the issue with connection shutdown contribute to the memory
 usage?
 
  Thanks,
  Mark
 



 --
 http://redhat.com
 http://blog.garytully.com



Re: Memory and Temp Usage Questions

2012-11-27 Thread Mark Anderson
The server is configured in code as follows:

BrokerService broker = new BrokerService();
broker.setBrokerName(brokerName);
// enable persistence so that temp storage can be used
broker.setPersistent(true);
broker.setDataDirectory(dataDirectory);
broker.setSchedulerSupport(false);

broker.addConnector("tcp://0.0.0.0:24726");  // 24726 peer to peer
broker.addConnector("tcp://0.0.0.0:24727");  // 24727 client to server

// set the default policy to be used by all queues and topics
// disable producer flow control so that producers don't block and
// temp storage is used if memory limit is reached
PolicyMap policyMap = new PolicyMap();
PolicyEntry defaultPolicy = new PolicyEntry();
defaultPolicy.setProducerFlowControl(false);
policyMap.setDefaultEntry(defaultPolicy);
// we want to limit the live server domain events topic to 20k pending messages
// to prevent messaging to the historical server locking up
PolicyEntry liveServerDomainEventsPolicy = new PolicyEntry();
ConstantPendingMessageLimitStrategy constantPendingMessageLimitStrategy =
    new ConstantPendingMessageLimitStrategy();
constantPendingMessageLimitStrategy.setLimit(20000);
liveServerDomainEventsPolicy.setPendingMessageLimitStrategy(constantPendingMessageLimitStrategy);
policyMap.put(new ActiveMQTopic(liveServerDomainEvents), liveServerDomainEventsPolicy);
broker.setDestinationPolicy(policyMap);

// adjust default memory settings
SystemUsage systemUsage = broker.getSystemUsage();
// enable failure exception as last resort if memory fills, otherwise producers will block
systemUsage.setSendFailIfNoSpace(true);
// set in memory limit to 128Mb
// set temp storage to 1Gb
// store is not used as all messages are non-persistent
systemUsage.getMemoryUsage().setLimit(128 * 1024 * 1024);
systemUsage.getTempUsage().setLimit(1024L * 1024 * 1024);

// optionally connect to a peer broker
// this will be set on spoke nodes
if (peerAddress != null)
{
  URI uri = new URI("static:(tcp://" + peerAddress + ":" + jmsPortPeer
      + ")?initialReconnectDelay=5000&useExponentialBackOff=false&jms.prefetchPolicy.topicPrefetch=32766");

  NetworkConnector networkConnector = new PeerNetworkConnector(peerAddress, uri, this);
  networkConnector.setName("peerConnector-" + peerAddress);
  networkConnector.setDuplex(true);
  networkConnector.setNetworkTTL(networkTTL);
  networkConnector.setPrefetchSize(32766);
  broker.addNetworkConnector(networkConnector);
}

ManagementContext managementContext = new ManagementContext();
managementContext.setCreateConnector(false);
broker.setManagementContext(managementContext);
broker.setUseShutdownHook(false);
broker.start();

// broker.start() is asynchronous so wait
// don't want to accidentally create an embedded broker
broker.waitUntilStarted();

The error happened on a customer system so creating a JUnit test case could
be difficult, as we have not yet been able to reproduce it in our test
environment.


On 27 November 2012 11:11, Gary Tully gary.tu...@gmail.com wrote:

 can you post your xml configuration to clarify. Even better, if you can
 produce a junit test case that reproduces it, that would help get to the
 bottom of this.


 w.r.t. AMQ-3643, it is a long way down the priority list, but it is
 something that is on the radar.

 On 27 November 2012 09:22, Mark Anderson manderso...@gmail.com wrote:

  AMQ-3643




 --
 http://redhat.com
 http://blog.garytully.com



Re: Memory and Temp Usage Questions

2012-11-27 Thread Mark Anderson
The prefetch size was set on the network connector as we were getting
messages about slow consumers across the network bridge.

As far as I can see the network bridge had not failed. The connector
entries in the log are for a client subscription that will also have the
topic prefetch set to 32766. I am trying to get logs from the client.

The broker on the other end of the bridge uses the same configuration.
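
For reference, if the large value turns out not to be needed, dropping back toward the default Christian mentions below is just a one-liner on the connector, and the clients' topic prefetch can be tuned separately on their connection URI (the values below are only examples):

networkConnector.setPrefetchSize(1000);  // back toward the bridge default
// client side, e.g.: tcp://broker-host:24727?jms.prefetchPolicy.topicPrefetch=1000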


On 27 November 2012 13:41, Christian Posta christian.po...@gmail.com wrote:

 Answers to your questions:

 1) Not sure yet
 2) Because at the moment, send fail if no space is only triggered when
 producer flow control is on (at least for this case, topics)
 3) Like gtully said, connections cannot be shut down if they are blocked
 somehow

 I noticed in your config you explicitly set the prefetch on the network
 connector to 32766. The default for network connectors is 1000 and the
 default for regular topics is Short.MAX_VALUE (which is 32767). Since the
 bridge doesn't have a prefetch buffer like normal clients do, setting the
 prefetch to 32766 could end up flooding it. Any reason why you have it set
 to 32766?

 So TopicSubscriptions should always use the broker's main memory usage. If
 one has the destination's memory limit, then something went wrong. Like Gary
 said, the pending message cursor's messages would be spooled to disk when
 the main memory limit reaches its high water mark (70% by default), but
 that appears not to have happened in this case.
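
 To make that concrete, the high water mark is tunable per destination via the
 policy entry -- something along these lines (70 is already the default, so this
 only spells it out; the wildcard is an example):

 PolicyMap policyMap = new PolicyMap();
 PolicyEntry topicPolicy = new PolicyEntry();
 topicPolicy.setCursorMemoryHighWaterMark(70);  // percent of the limit at which the cursor starts spooling to temp store
 policyMap.put(new ActiveMQTopic(">"), topicPolicy);
 broker.setDestinationPolicy(policyMap);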

 Are there any indications that the TopicSubscription is for the network
 bridge? Or maybe that the network bridge failed somehow? I see that the
 dispatched count is the same as what you've set for your prefetch on the
 bridge, but if anything else can point to that it might be helpful. For
 example, are those port numbers on the transport connector logs for the
 network bridge?

 How is the broker on the other end of the bridge configured? Same?




 --
 *Christian Posta*
 http://www.christianposta.com/blog
 twitter: @christianposta



Re: Memory and Temp Usage Questions

2012-11-27 Thread Mark Anderson
PeerNetworkConnector extends DiscoveryNetworkConnector so I can fire
listeners for onServiceAdd and onServiceRemove.


On 27 November 2012 14:16, Christian Posta christian.po...@gmail.com wrote:

 Okay, good to know. I suppose this error happened just once randomly and you
 cannot reproduce it?

 BTW... what is PeerNetworkConnector in your config:

 NetworkConnector networkConnector = new PeerNetworkConnector(peerAddress, uri, this);



Re: Can non-persistent messages sent via topics use temp storage?

2012-03-13 Thread Mark Anderson
Thanks for the pointer. It seems that what I was missing in my
configuration was setting the topic prefetch size on the connection factory.
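
For anyone searching later, the factory side of it ends up looking roughly like this (the URL and numbers are only examples, not the exact values I used):

ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
// keep the in-memory topic prefetch small so pending messages go to the cursor (and so to temp storage)
factory.getPrefetchPolicy().setTopicPrefetch(10);
// or equivalently on the broker URL:
// tcp://localhost:61616?jms.prefetchPolicy.topicPrefetch=10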

I think the online documentation could really do with a clear example of
how to configure ActiveMQ without producer flow control. I had to spend a
good few hours jumping between various parts of the documentation, googling,
and eventually finding the unit test that triggered my question here.

Can you submit bugs against the documentation? I would consider submitting
a patch/new documentation if that was possible.

Mark


On 9 March 2012 20:50, Gary Tully gary.tu...@gmail.com wrote:

 look at the other test in there that uses a topic for inspiration,
 those consumers are using the temp store
 for their pending messages.

 org.apache.activemq.bugs.TempStorageBlockedBrokerTest#testRunProducerWithHungConsumer

 the testFillTempAndConsume does not have a consumer, so the topic
 variant just discards messages.





 --
 http://fusesource.com
 http://blog.garytully.com



Can non-persistent messages sent via topics use temp storage?

2012-03-09 Thread Mark Anderson
I am having problems trying to get non-persistent messages sent via a topic
to use temp storage when the memory usage limit is reached.

In my search for a solution I found a reference to the following unit test
as an example of how to configure ActiveMQ to use temp storage for
non-persistent messages:

http://svn.apache.org/viewvc/activemq/trunk/activemq-core/src/test/java/org/apache/activemq/bugs/TempStorageBlockedBrokerTest.java?view=markup&pathrev=1233367

When I edit the test case testFillTempAndConsume to use a topic instead of
a queue it fails and no data is written to the temp storage.

Is the usage of temp storage with topics supported in this way?

I found issue AMQ-3738 (Cursors not dealing with topic memoryLimit
overflow) which sounds like the same issue as I am seeing. Is this a bug in
ActiveMQ and if so is it likely to be resolved in the immediate future?

Thanks,
Mark