Thanks & Regards,
> Nawaz Ali Shaik
> Integration Support | Digital & Technology Service Operations
> Sainsbury's Supermarkets Ltd | Walsgrave,Coventry
> nawazali.sh...@sainsburys.co.uk | Mobile: +44-7405734657
>
> www.sainsburys.co.uk
>
>
> -Original Message-
This isn't something I've seen reported for non-NMS clients, and it's my
understanding that the NMS client gets a lot less use than the JMS client
so it's entirely possible that there's a bug in the NMS client that no one
has detected till now.
Are you able to reproduce this reliably? If so, can you give more details
about the timeline and observed behavior? Did the
broker declare the store to be full while the NFS server was offline or
after it came back? If after, how long after? How much data was in the
persistent store before the NFS server dropped? What's the approximate rate
of
files
from that directory to reference SqlServerJDBCAdapter rather than
TransactJDBCAdapter? I'd guess only the 6.1 file needs a change, but maybe
try them one by one to determine the minimum set?
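For reference, forcing a specific adapter instead of relying on auto-detection looks roughly like this in activemq.xml (the dataSource id is a placeholder; this is a sketch, not a tested config):

```xml
<persistenceAdapter>
  <jdbcPersistenceAdapter dataSource="#mssql-ds">
    <!-- Override adapter auto-detection; TransactJDBCAdapter is the
         SQL Server/Sybase adapter shipped with ActiveMQ 5.x -->
    <adapter>
      <bean xmlns="http://www.springframework.org/schema/beans"
            class="org.apache.activemq.store.jdbc.adapter.TransactJDBCAdapter"/>
    </adapter>
  </jdbcPersistenceAdapter>
</persistenceAdapter>
```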
Also, what SQL Server version are you using?
Tim
On Mon, Mar 14, 2022, 6:42 AM Tim Bain wrote:
Support for SQL Server (including the use of varbinary) was added in
5.15.11 and 5.16.0 under AMQ-6904 (
https://issues.apache.org/jira/browse/AMQ-6904), but the broker is using
DefaultJDBCAdapter instead. Can you show your config?
Tim
On Mon, Mar 14, 2022, 4:52 AM wrote:
> Hello Community,
>
ation 'xx.yy.zz' - trying to recover. Cause:
> Could
> >> not create JMS transaction; nested exception is javax.jms.JMSException:
> >> Could not create Transport. Reason: java.lang.IllegalArgumentException:
> >> Invalid connect parameters: {jms.prefetchPolicy.queuePrefetch=1
I don't know this code nor the history behind the design decision, but this
behavior is what I would have expected.
What you're doing (if server side, then no client side) is probably the
typical case, and in that case it would be beneficial to have it get
defaulted automatically. But I'm not
Your first and third URIs should work, so maybe this is something specific
to DefaultJmsListenerContainerFactory()? I don't have experience using it,
but an answer on
https://stackoverflow.com/questions/9224/spring-jms-listener-container-concurrency-attribute-not-working
lists 3 ways to set
with the behavior
of non-composite topics. Would you be willing to create a feature request
in JIRA asking for that to be added?
Tim
On Fri, Feb 11, 2022, 2:38 AM Simon Lundström wrote:
> On Thu, 2022-02-10 at 14:05:42 +0100, Tim Bain wrote:
> > It's been a while since I've looked a
It's been a while since I've looked at the JMX beans, but I believe that
each topic consumer has its own MBean somewhere in the tree, with
individual per-consumer stats. Are those increasing even though the topic
is not?
And for that matter, do you have at least one consumer connected and
It's probably also worth considering whether you're giving the JVM (and the
machine) enough RAM for the amount of data you're asking it to hold in
memory.
Tim
On Mon, Feb 7, 2022, 7:47 PM Matt Pavlovich wrote:
> Hello Karl-
>
> Those error messages are usually indicative of a garbage
need for maintenance
> > > - it would reduce the communication necessary to coordinate a release;
> a
> > > release manager could simply step up and perform the release when the
> time
> > > comes
> > >
> > > Obviously we can still have "ad hoc" rel
You've not given much detail about the setup and the pattern of life for
this broker and its clients, and without more information it'll be hard to
help. In general yes clients should reconnect and the broker should detect
disconnected clients though I don't remember if it logs; from long-ago
Would it be worth the effort to create and then maintain a page that lists
the planned timeline of upcoming releases for both 5.x and Artemis? There
have been a lot of questions about upcoming plans in the wake of the Log4J
CVE, but even during normal times we get occasional questions here about
If you want to prevent messages from being read by people other than the
intended recipient, whether on the web console or elsewhere, the standard
way to do that is to encrypt the message when sending and then have the
intended recipient decrypt it upon receipt.
Or you can limit access to the web
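As a sketch of that encrypt-before-send pattern using only JDK classes (key distribution and the actual JMS send/receive calls are out of scope here; all names are illustrative):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class PayloadCrypto {
    // Encrypt a message body before handing it to the producer as a
    // BytesMessage; the random IV is prepended so the consumer can decrypt.
    static byte[] encrypt(SecretKey key, String body) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(body.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    // Consumer side: split off the 12-byte IV, then decrypt the rest.
    static String decrypt(SecretKey key, byte[] payload) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, payload, 0, 12));
        byte[] pt = c.doFinal(payload, 12, payload.length - 12);
        return new String(pt, StandardCharsets.UTF_8);
    }
}
```

With this approach the broker (and the web console) only ever sees ciphertext; only holders of the shared key can read the body.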
Some of the answers at the bottom of the list of answers for
https://stackoverflow.com/questions/5737923/how-do-i-limit-the-number-of-connections-jetty-will-accept
provide ways to limit the number of concurrent *connections*, but I didn't
see anything to limit the number of sessions.
Tim
On Wed,
But thank you for taking the time to attempt the fix even though Justin got
there first. As an open source project, it's great when members of the
community are willing to help with fixes, so thanks even though it didn't
work out this time!
Tim
On Thu, Jan 20, 2022, 8:12 AM Justin Bertram
A hub-and-spoke topology is one of the topologies often suggested for
ActiveMQ 5.x networks of brokers. See for example
https://access.redhat.com/documentation/en-us/red_hat_amq/6.1/html/using_networks_of_brokers/fmqnetworkstopologies#FMQNetworksHubSpoke
and
on
> > > DuckDuckGo, Google, or Bing provide the relevant information in the
> > first few results.
> > > In my opinion if folks aren't finding the information it's because
> > > they aren't looking. There's always going to be folks like that
> > unfortunately.
hrottling will take place in not
> graceful way when the amount of free space on the filesystem volume on
> which the data file lives is exhausted. Is that right?
>
>
> Best regards,
> Daniel
>
> -----Original Message-----
> From: Tim Bain
> Sent: Friday, Janua
JB, should we put that link somewhere prominent on
https://activemq.apache.org/contact for a few months? I believe all the
users who posted questions about the CVE were first-time posters who likely
went to that page before posting questions, so we might be able to save
everyone the time and
The sentence is fully accurate if we interpret "capacity" in that sentence
as "the amount of free space on the filesystem volume on which the data
file lives" rather than "the value you put in the config file." Under that
definition, a producer that produces beyond the broker's capacity will
Great, I'm glad you were able to figure it out, and thanks for sharing the
root cause once you found it.
Tim
On Mon, Dec 6, 2021, 5:24 AM David Martin wrote:
> Domenico, Tim,
>
> I've figured it out.
>
> On further investigation, the kubernetes command params included the
> following :
>
>
>
To take the K8s networking out of the equation, maybe kubectl exec a shell
session into the container and invoke the curl command against localhost?
And while you're in the container, you can check that your sed command
produced the expected output.
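A sketch of that in-container check (pod, container, port, and file paths are all placeholders):

```shell
# Open a shell inside the broker container
kubectl exec -it activemq-0 -c broker -- /bin/sh

# Inside the container: hit the endpoint on localhost,
# bypassing Services and Ingress entirely
curl -v http://localhost:8161/admin/

# And confirm the sed-edited config looks the way you expect
grep -n 'transportConnector' conf/activemq.xml
```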
Tim
On Fri, Dec 3, 2021, 9:45 AM Domenico
Please post questions to the mailing list, rather than emailing people
directly.
I've never used that policy, but the Javadoc[1] says its constructor takes
a 'SubscriptionRetentionPolicy wrapped' object, and the schema docs[2] show
that the other policies listed on that page are available
Just FYI, networkConnectors and the masterslave transport are for making
networks of brokers, which might be networks of failover pairs. If you just
have a single active-passive failover pair, you don't need those things.
Tim
On Wed, Dec 1, 2021, 1:49 AM Simon Lundström wrote:
> On Tue,
Persistent messages are written to persistent store and are controlled by
that limit, while non-persistent messages are written to the memory store
and are controlled by that limit. So in order to see the memory store limit
applied, you'll need to be sending non-persistent messages.
Tim
On Wed,
If you decide to try rebuilding the KahaDB index without the corrupted
file(s), https://access.redhat.com/solutions/276323 gives steps for doing
that. I'd highly recommend you back up the KahaDB data directory and that
you try the process in a development broker using a copy of the data files.
I second what Matt wrote, especially the point that you're running
7-year-old code that's had lots of bug/security fixes since then.
I hadn't caught that this memory leak was in the client code, but the same
philosophy applies: if you're leaking threads in a JVM (whether broker or
client), they
If we're leaking threads, you should be able to see a large number of
threads with this stack trace if you trigger a thread dump on the broker
process. Could you do that, to fact-check the output from the tool?
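One way to capture that thread dump (the pid is a placeholder):

```shell
# Find the broker JVM's pid, then dump all thread stacks
jps -l                        # lists running JVM pids
jstack <pid> > threads.txt    # or: kill -3 <pid> (dump goes to the broker's stdout)

# Count threads parked in the suspect stack frame
grep -c 'ActiveMQ Transport' threads.txt
```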
Tim
On Sun, Nov 21, 2021, 5:54 PM Matt Pavlovich wrote:
> Hello-
>
> What version
Only one ActiveMQ broker accesses the data store at a time. The passive
node is simply waiting to acquire the lock but will not access the data
store until the lock is acquired.
Yes, it's possible to do a rolling upgrade as you described. The downtime
incurred will be minimal (just a standard
K is 1 second.
> Can this settings be the cause of our problems?
>
> Regards
>
> Guillaume Cripiau
>
> Le 02-11-21 à 12:45, Tim Bain a écrit :
> > These are broker logs, or client logs? Whichever it is, what's in the
> other
> > process's logs at the same time?
atch logs. Seems we were exceeded
> maximum connections.
>
>
>
>
>
> Thanks,
>
> Sai
>
> -Original Message-
> From: Tim Bain
> Sent: Friday, November 12, 2021 4:34 AM
> To: ActiveMQ Users
> Subject: Re: Help - Authentication failed
>
>
Typically when you see an error message like this, the relevant information
(if any) is in the other process's logs. So if you're seeing this in the
client's logs, look at the broker's logs for the same time.
Tim
On Thu, Nov 11, 2021, 10:49 AM JB Onofré wrote:
> Can you please provide some
roperty 'uri' threw
> exception; nested exception is java.io.IOException: DiscoveryAgent scheme
> NOT recognized: [failover]
>
>
> As said before, all help is very much appreciated.
>
>
>
> From: Tim Bain
> Sent: 02 November 2021 11:55
> To: ActiveMQ Users
For the second option, I think you'll also want to set the priorityBackup
option (see the Priority Backup section of
https://activemq.apache.org/failover-transport-reference.html) to get the
behavior of failing back to the fast link when it becomes available once
again.
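A hedged example of such a client URI (host names invented):

```
failover:(tcp://primary.example.com:61616,tcp://backup.example.com:61616)?randomize=false&priorityBackup=true
```

With priorityBackup enabled, the client reconnects to the first (priority) URI whenever it becomes reachable again, rather than staying on the backup.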
I'm very surprised to hear
These are broker logs, or client logs? Whichever it is, what's in the other
process's logs at the same time?
You said that reconnecting isn't possible for hours after this happens. Do
you see the same messages in the logs for that whole time?
At the time this happens, what are the client(s) and
The error message says the broker is trying to use AMQAdmin as the
username. Is that the correct username? Can you authenticate to the LDAP
with those credentials via command line tools?
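For example, with OpenLDAP's command-line tools (host and DNs here are placeholders for whatever your broker config actually uses):

```shell
ldapsearch -x \
  -H ldap://ldap.example.com:389 \
  -D "cn=AMQAdmin,ou=system,dc=example,dc=com" \
  -w 'the-password' \
  -b "ou=users,dc=example,dc=com" \
  "(objectClass=person)"
```

If this simple bind fails, the problem is the credentials or the bind DN, not ActiveMQ.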
Tim
On Wed, Oct 27, 2021, 8:49 AM Diptin Patel wrote:
> Please help!
>
> I'm setting up
r(s). No messages in or out?
>
> BR,
> - Simon
>
> On Fri, 2021-10-22 at 04:42:18 +0200, Tim Bain wrote:
> >I believe it would be possible to write a custom interceptor (see
> https://activemq.apache.org/interceptors) that rejected incoming messages
> and incoming connections. (
I believe it would be possible to write a custom interceptor (see
https://activemq.apache.org/interceptors) that rejected incoming messages
and incoming connections. (Maybe rejecting incoming connections would be
enough, because you can't send a message without an established
connection.) If you
Is your systemctl-launched ActiveMQ running as root or another user? Can
that user manually create that file from the command line?
When the file is created under systemctl, is it owned by the user the
broker is running as? Are you able to tell which process created the lock
file? Does any
The fact that pending messages > enqueued messages looks suspicious, so my
expectation is that this will turn out to be a bug in the statistics code.
But the information I requested may help us to confirm that.
Tim
On Fri, Oct 1, 2021, 6:23 AM Jean-Baptiste Onofré wrote:
> Hi,
>
> enqueued and
Can you please describe your second broker and its relationship with the
first? Is it reading from the data file of the KO'ed first broker? Or is
this a network of brokers where both are operational at the same time?
Thanks,
Tim
On Fri, Sep 24, 2021, 6:26 AM PROVENZANO Felipe [prestataire] <
It's not on our website, but
https://stackoverflow.com/questions/40756712/whats-means-of-each-part-of-jms-message-id
provides an answer that looks accurate for ActiveMQ 5.x.
I don't know if Artemis uses the same implementation or is different in
some way.
Tim
On Wed, Sep 29, 2021, 6:25 AM Simon
How are you determining the counts you referenced? I'd expect you could get
them from the web console or by doing SQL queries against the database, and
I'd encourage you to do both and compare the numbers.
I'm hoping you'll find that the stats from the web console (which are
sourced from the JMX
Same as I said in the email I just wrote, the lack of a response to any of
these Kubernetes questions from anyone but me has me convinced that the
authors of Artemis Cloud aren't on this list (or aren't monitoring it
closely), so you probably won't get a good answer here to some of your
questions
p/remote_source/app/src/yacfg/profiles/artemis/2.18.0/_modules/bootstrap_xml/*
> of the init image quay.io/artemiscloud/activemq-artemis-broker-init:0.2.6.
> I gonna look into ingress next
>
> Thai Le
>
> On Mon, Aug 23, 2021 at 12:59 AM Tim Bain wrote:
>
> > Thanks
as
> I understand, ingress is to load balance http traffic so at one point in
> time, the console of a particular broker can be accessed.
>
> Thai Le
>
>
> On Fri, Aug 20, 2021 at 12:14 AM Tim Bain wrote:
>
> > Can you port-forward directly to the individual pods successfully?
Can you port-forward directly to the individual pods successfully? If that
doesn't work, then going through the service won't, so make sure that
building block is working.
Also, if you switch the service to be a NodePort service, can you hit the
web console from outside the K8s cluster without
This change also eliminates a source of occasional confusion where Nabble
allows users to edit messages after they've been sent to the mailing list,
yet does not result in any updated emails being sent. So users occasionally
think they've provided more/better information but people who use the
While working with Kubernetes for non-ActiveMQ things I've observed
situations where a deployment starts a replacement pod before the prior pod
was fully terminated. Might that be what's going on, a race condition where
your two programs competing for the lock are the previous and current pods
Is this a race condition (i.e. the database transaction is committed if you
wait a second or two), or is the transaction ultimately failing and being
rolled back? If it's not the former, fix that problem (whatever it is)
first.
Tim
On Sat, Jul 31, 2021, 2:25 AM Ben Pirt wrote:
> We're
Maybe I'm oversimplifying this, but isn't the client required to use a
unique client ID, and we're splitting hairs over the exact undefined
behavior that occurs when something invalid is done? It seems like the real
solution is to modify the client applications to make them use unique
client IDs,
As we've mentioned previously, replicated LevelDB is not a supported
configuration and support is being removed in the next release, no one on
this mailing list knows enough about it to help you troubleshoot problems,
and if you do choose to use it despite those things you should be prepared
to
been
tried to date, please try it and see if that gets things working as
expected.
Tim
On Tue, Jul 13, 2021, 9:49 AM Tim Bain wrote:
> Vince,
>
> Thanks for explaining the use case. I'm surprised to hear that moderate
> load is enough to make the lock table inaccessible, and it makes
If you switch the configuration from LevelDB to KahaDB on local storage, do
both brokers start successfully? Without a shared filesystem they won't be
in a failover pair, but still, do they start and accept/deliver messages?
What about if you temporarily set up a database (e.g. SQLite, doesn't
From that stack trace, my best guess is that your broker is not actually
started. Maybe that's because of the start="false" snippet you quoted? Or
maybe there's something wrong with the part of your config that you didn't
post? Or maybe your HTTP request is going to a passive broker instead of
And it appears to come from an inability to
> access the lock in time causing the slave to take over. When they first
> came to us, we wondered why they wanted to do this as well.
>
> Vince
>
> On 7/12/21, 7:56 AM, "Tim Bain" wrote:
>
> To confirm, are you sayin
To confirm, are you saying that you're trying to use one Oracle database
for message storage and a different Oracle database for locking?
If so, would you mind explaining why you're not just using a single
database for both purposes? I have no idea if the configuration I think
you're describing
I am not aware of any plans to allow replication of data between brokers
for data stores that run on the same hosts as the ActiveMQ brokers. Though
you may be able to run your clustered NFS processes on the same hosts that
the broker is running on, so you could consider that possibility.
Thanks for the link. To the best of my knowledge, that particular
experiment never bore fruit and there is no current effort underway to
implement replicated KahaDB.
Regarding NFS, I agree, I wouldn't run a single NFS server for exactly that
reason. Running your own NFS cluster in clustered mode
Can you please provide the link where you saw KahaDB replication discussed?
I suspect that the content is very out of date since I'm not aware of any
current development effort to implement replicated KahaDB.
To the best of my knowledge, the out-of-the-box data store options
available to you are
Sai,
The images didn't come through on your message, but I have a potential
answer based on what you wrote.
I think you're browsing a queue from the web console. Queues are FIFO data
structures that provide access to the oldest elements from the head of the
queue, so when you consume messages,
Let me strengthen the statement made by Justin.
The decision to deprecate and soon remove LevelDB was made because there
was no developer willing/able to maintain the code and no member of the
mailing list willing/able to answer questions, even basic ones, on this
mailing list.
If you aren't
factories with different
> ClientID
> > > > > values configured and use them to create 1 connection each, or stop
> > > > > configuring an explicit ClientID for the factory and set it on each
> > > > > connection immediately after creation,
> -Original Message-
> From: Tim Bain
> Sent: 02 July 2021 12:42
> To: ActiveMQ Users
>
As JB says, you need to ensure that the messages are sent as persistent
messages and that the broker configures a persistence store whose data will
survive the restart of the container. I'll go into some detail about
various possible options, and if what I write doesn't go deep enough to
answer
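As a minimal sketch of the broker side, assuming KahaDB and a container volume mounted at /data (paths illustrative):

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
  <persistenceAdapter>
    <!-- /data must be storage that outlives the container, e.g. a Docker
         named volume or a Kubernetes PersistentVolumeClaim -->
    <kahaDB directory="/data/kahadb"/>
  </persistenceAdapter>
</broker>
```

On the client side the messages must also be sent with DeliveryMode.PERSISTENT (the JMS default) or they will never reach the store at all.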
Can you please tell us more about what you're doing and in what way it's
not working? Is this a queue or a topic? Are your threads failing to
subscribe to the destination, or failing to receive messages as expected
after subscribing? What behavior are you expecting, and what behavior are
you
Is that filesystem a local disk (i.e. exclusive to the host) or an NFS
share (i.e. the file could be locked by a process running on another host)?
If the latter, lsof wouldn't show processes from other hosts, so you'd want
to run the command from all hosts where ActiveMQ is installed and might be
b-data-in-activeMQ-5-15-0-td4757076.html
Gary Tully referenced a tool to export the messages from a set of KahaDB
files, so you might try that (and increase logging if necessary) to test
the theory that the data files are corrupted.
Tim
On Sat, Jun 5, 2021, 7:16 AM Tim Bain wrote:
> I was ref
It might be worth considering LDAP authentication rather than the simple
(file-based) authentication mechanism, since that would allow user changes
without restarting either ActiveMQ or your LDAP server. But of course then
you have to create and maintain an LDAP server, so you'll have to decide
At least some portions of the configuration can be reloaded without a
restart. https://activemq.apache.org/runtime-configuration has details
about what portions of the configuration support live configuration
reloading.
If there's something that isn't supported that you think should be, you can
Andrew,
Let me make sure I'm understanding the specific question. I think you're
saying that messages are traversing the broker as expected, but that
logging of messages larger than 64KB is now outputting the message bodies
in debug log lines in a way that is different from the behavior under
That matches my understanding: the queue browser is meant to be a way to
view small numbers of messages from among those that would be consumed
next, not a way to view every message in a huge queue.
If you need that ability, one option is to use the JDBC backing store type,
since then you can
If, after the messages expire, you connect a real consumer to the topic,
does it receive those messages or does the broker expire them at that point?
How are you setting the expiration time on these messages?
For the web console, are you saying that this particular topic's enqueue
counts (on the
, 2021, 11:06 AM Phil Ruggera wrote:
> What does "include all destinations in the 5.3 -> 5.16 direction, to force
> all messages to be transferred" entail?
>
> On Fri, Jun 4, 2021, 5:45 AM Tim Bain wrote:
>
> > One option would be to load the 5.3 data file into
OK, thanks for clarifying the root cause.
Tim
On Thu, Jun 3, 2021, 12:10 AM ヤ艾枫o.-- <1169114...@qq.com> wrote:
> Hi
>
>
>
> writing problem, the setting expiration time is very short (1 second).
>
> But in the actual verification, I used 10 seconds.
> The reason for this problem is
One option would be to load the 5.3 data file into a new temporary 5.3
broker and configure it to make a network of brokers with your 5.16.0
broker. You'll likely want to statically include all destinations in the
5.3 -> 5.16 direction, to force all messages to be transferred to the
5.16.0 broker.
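A sketch of that bridge on the temporary 5.3 broker (URI and names invented):

```xml
<networkConnectors>
  <!-- One-way bridge: forward everything to the 5.16.0 broker -->
  <networkConnector name="drain-to-516" uri="static:(tcp://new-broker:61616)">
    <staticallyIncludedDestinations>
      <queue physicalName=">"/>
      <topic physicalName=">"/>
    </staticallyIncludedDestinations>
  </networkConnector>
</networkConnectors>
```

Statically including the wildcard destinations forwards messages even when no consumer demand exists on the 5.16.0 side.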
It looks like all messages in your broker expire after max 1 second, since
the timeStampingBrokerPlugin will set the TTL to 1 second if it is absent
or >1s.
Your original question says that you're accessing the messages before they
reach their expiration times, which means within 1s of them being
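The TTL-capping behavior described would come from a plugin configuration along these lines (values illustrative):

```xml
<plugins>
  <!-- zeroExpirationOverride: TTL applied when a message arrives with none;
       ttlCeiling: maximum TTL allowed; both in milliseconds -->
  <timeStampingBrokerPlugin zeroExpirationOverride="1000" ttlCeiling="1000"/>
</plugins>
```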
JB is correct about how to do this in 5.x. However, ActiveMQ Artemis's
clustering ability is significantly better than a network of brokers
(because a NoB requires messages to be forwarded between the brokers of the
"cluster," resulting in higher load on the brokers and edge cases when
consumers
lly solve the
> reconnection-problem if the broker is "bumped",
> which makes the whole setup easier and cleaner to fix,
> and more robust, of course.
> Very nice, thanks! :)
>
> ========
> > From: Tim Bain
> > Subject: R
the timeout option as following, I can't ensure that the
> broker closed a client
> connection.
> Can you tell me how to confirm the disconnection when the timeout has
> expired?
>
> -
>
Matt and JB seem to have answered different questions, where Matt seemed to
be talking about segmenting a single config across multiple files and JB
was talking about the ability to have multiple distinct configs and choose
one of them when launching. Which of those scenarios do you mean?
Tim
On
etecting that the broker has stopped, basically shutdown all
> dependent processes,
> then have the script loop and check for DB-availability,
> when it comes back first restart the broker, then all the listeners.
> This is outside the scope of ActiveMQ, though, since it requires monitor
If instead of staying up and attempting to reconnect to the database (while
still servicing requests without a database connection, whatever that would
mean), would you be OK with having ActiveMQ restart repeatedly until the
database is available again?
Also, is this 5.x or Artemis?
Tim
On Thu,
Note that the comments on AMQ-7426 (Log4J 2) state the following:
ActiveMQ is not affected by CVE-2019-17571 directly as we don't use the
SocketServer.
The upgrade does not appear to be in 5.16.2, so expect that to remain in
your scan results, and you'll have to manually adjudicate the finding.
I think you're asking how to ensure that the broker will close a client
connection if that connection is idle for 60s or longer. If so, that
setting is the wireFormat.maxInactivityDuration element of the URIs you
provided, and you'll want to change the current value of 0 to 60000
(milliseconds)
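For example, on the broker's transport connector in activemq.xml (connector name and bind address are placeholders):

```xml
<transportConnector name="openwire"
    uri="tcp://0.0.0.0:61616?wireFormat.maxInactivityDuration=60000"/>
```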
Sounds great, thank you.
Tim
On Sun, Apr 11, 2021, 7:16 AM Prameet Patil
wrote:
> Yes definitely.
> I don't have access to the source code of the JMS clients. so i am not sure
> if i will be able to reproduce it in a clean environment.
> but will definitely give it a try and create a JIRA if i
Be sure you understand what advisory support actually gives you (i.e. what
you're giving up by turning it off) before you lock into this workaround as
a permanent solution. See https://activemq.apache.org/advisory-message for
more info.
Also, if you're able to reproduce the problem reliably,
This sounds like a bug, since the closure of the inactive connections
doesn't seem to remove the associated subscriptions nor reduce the JMX
count accordingly. Can you please submit a bug in JIRA for this behavior?
Are you able to reproduce the problem on demand, ideally on a single broker
with
I'm not aware of other options from within the broker. But you've asked
several questions about dynamic changes to the set of authorized users, so
I wonder if you'd be best served using an LDAP user store rather than the
bare-bones file-based auth store. In that case, you'd have access to all
the
By default, topic consumers will receive only messages sent after they
subscribe. However, it is possible to configure the consumer to use a
retroactive subscription[1] to receive any previously sent messages that
haven't been acked by all existing subscribers and deleted by the broker.
By
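A retroactive subscription is requested by the consumer through a destination option, e.g. (topic name invented):

```
topic://PRICES.STOCK?consumer.retroactive=true
```

How many old messages the broker actually retains for replay is governed by the broker's subscription recovery policy for that destination.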
One thing to keep in mind is that although the code may be slightly slower,
if it lets you use an API that is more favorable (which might mean better
documented, more portable, more stable across future versions, easier for
others to maintain because they're already familiar with it, or a number
as simple as setting the TTL).
Sorry for missing your point in my first response.
Tim
On Wed, Mar 24, 2021, 5:47 AM Tim Bain wrote:
> I'm saying that in the scenario where you're preventing the producer from
> sending messages because the consumer has fallen behind, the inability to
t, as the Relay component will be blocked from
> publishing if the buffer queue is backed up, this will cause problems
> upstream?
>
>
>
> Dave
>
>
>
> On Tue, 23 Mar 2021 at 11:47, Tim Bain wrote:
>
> > As an aside, while we wait for the OP to tell us wheth
As an aside, while we wait for the OP to tell us whether any of these
suggestions are relevant to his situation:
In most cases, you want producers and consumers to be decoupled, so that a
slow consumer doesn't block its producers. Flow control is typically used
to protect the broker and to
If the third party simply puts a sleep at the end of their message-handling
logic, will that meet the need? If not, hearing what doesn't work about
that approach will help us to better understand exactly what's needed.
Tim
On Fri, Mar 19, 2021, 5:01 PM Christopher Pisz
wrote:
> I am using
Can you please submit a bug in JIRA for this behavior? If you're interested
in digging in and doing the investigation and the fix, that's great, but
let's get the problem captured, along with enough information to reproduce
the problem and test a fix. If you're interested in doing that, you might
s.
>
>
> Brian
>
> > On Jan 12, 2021, at 10:20 PM, Tim Bain wrote:
> >
> > Early in my time as an ActiveMQ user, I ran into unexpectedly poor
> > performance between a network of ActiveMQ 5.x brokers across a
> high-latency
> > network link. The particular