I don't know why this content wasn't posted to the mailing list when I sent
it via email on 3/29, but since it wasn't, here it is again:
A broker should be attempting to acquire the lock on startup, so if that's
not working right, it seems to indicate problems with your NFS
configuration.
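For reference, a minimal sketch of the shared-file-system master/slave setup being described, where the lock file lives in a KahaDB store on the NFS mount (the directory path and broker name here are hypothetical, not from the original mail):

```xml
<!-- activemq.xml (sketch): both brokers point at the same NFS-mounted
     store. Whichever broker acquires the file lock on startup becomes
     master; the other blocks on the lock and waits as slave. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker-a">
  <persistenceAdapter>
    <!-- /mnt/nfs/activemq-store is a hypothetical NFS mount point -->
    <kahaDB directory="/mnt/nfs/activemq-store"/>
  </persistenceAdapter>
</broker>
```

If both brokers come up as master with a setup like this, the NFS mount's file locking (e.g. NFSv4 with working lock support) is the usual suspect, which is the point being made above.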
The …
OK, it sounds like I understood you correctly, then. Did you use the tools
and techniques outlined in the wiki page I provided to determine which
destination(s) contain the messages that are preventing the files from
being deleted?
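For anyone following along: the standard technique for finding which destination is pinning journal files (and, if I recall correctly, what the referenced wiki page describes) is to enable TRACE logging for the KahaDB `MessageDatabase` class, which logs, per data file, why it cannot be garbage-collected. A log4j fragment along these lines, assuming ActiveMQ 5.x's `conf/log4j.properties` (the appender name and log path are illustrative):

```properties
# conf/log4j.properties -- log why each KahaDB data file is retained
log4j.appender.kahadb=org.apache.log4j.RollingFileAppender
log4j.appender.kahadb.file=${activemq.base}/data/kahadb.log
log4j.appender.kahadb.maxFileSize=1024KB
log4j.appender.kahadb.layout=org.apache.log4j.PatternLayout
log4j.appender.kahadb.layout.ConversionPattern=%d [%-15.15t] %-5p %-30.30c{1} - %m%n
log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=TRACE, kahadb
```

With this in place, the broker's cleanup runs emit lines naming the destinations whose unconsumed (or unacknowledged durable-subscription) messages keep each `db-*.log` file alive.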
Tim
On Wed, Apr 4, 2018, 6:02 PM norinos
Sorry, my information is not enough.
I changed the ActiveMQ setting as follows, and restarted.
-
offlineDurableSubscriberTimeout="12"
offlineDurableSubscriberTaskSchedule="18"
-
In this case, the …
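For context, those two settings are attributes on the `<broker>` element; a sketch using the values quoted above (note that both attributes are in milliseconds, so the quoted values may well be truncated in the mail — they are far too aggressive as written):

```xml
<!-- Sketch: offline durable subscriber cleanup settings on the broker.
     offlineDurableSubscriberTimeout: how long a durable subscriber may
     stay offline before it is removed (ms).
     offlineDurableSubscriberTaskSchedule: how often the cleanup task
     runs (ms). Values copied from the mail; likely truncated. -->
<broker xmlns="http://activemq.apache.org/schema/core"
        offlineDurableSubscriberTimeout="12"
        offlineDurableSubscriberTaskSchedule="18">
  ...
</broker>
```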
Hi,
We have been using ActiveMQ 5.x (upgraded to 5.14 last year) for our product,
which has been in production for 3 years. We have been facing stability issues
with the replicated LevelDB store (it was deprecated by the community after we
went live with LevelDB; we have stuck with it as we accomplished HA through …
I tried deleting the db.data and db.redo files and starting up ActiveMQ.
This attempt succeeded (ActiveMQ started successfully and recreated db.data
and db.redo).
But when the offline durable subscriber cleanup started, the journal files
could not be deleted.
The following message was logged to the …
On Wed, Apr 4, 2018 at 9:18 AM, gbrown wrote:
> We had a short outage on the network, and once the network came back, both
> instances in our master/slave setup were up and connectable. This was
> discovered when messages on queues were not browsable or able to be …
Well, before submitting an issue, I'd like to ask whether it's already
possible to configure it that way.
OK, got your point; I will open a request.
2018-04-04 17:29 GMT+05:00 Tim Bain:
> If you think that message expiration should be checked when the publisher
> publishes the …
If you think that message expiration should be checked when the publisher
publishes the message, you can submit an enhancement request in JIRA for it.
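As an aside, until such an enhancement exists, the closest broker-side control over expiration today is (to my understanding) the `timeStampingBrokerPlugin`, which adjusts timestamps and expiration as messages arrive at the broker rather than when they are dispatched. A sketch, with illustrative values:

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
  <plugins>
    <!-- Adjust message timestamps/expiration on arrival at the broker:
         zeroExpirationOverride gives never-expiring messages a TTL,
         ttlCeiling caps any client-supplied TTL (both in ms);
         futureOnly avoids moving expiration earlier than the client set. -->
    <timeStampingBrokerPlugin zeroExpirationOverride="86400000"
                              ttlCeiling="86400000"
                              futureOnly="true"/>
  </plugins>
</broker>
```

This doesn't reject expired messages at publish time — that would be the enhancement request — but it does let the broker enforce a TTL policy on everything it accepts.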
Tim
On Tue, Apr 3, 2018, 11:15 PM Илья Шипицин wrote:
> Tim, thank you for your investigation (looks like our client is …
I'm not understanding. Are you saying that after those durable
subscriptions were deleted, there were no more unconsumed messages and so
the journal files should have been deleted but were not?
If I've understood correctly, …
Sorry - they weren’t really shuffled.
I don’t know exactly if they were moved to the back of the queue or just held
until their redelivery delay expired and then re-injected into the queue. We
didn’t test enough to make that determination - we stopped as soon as we
discovered that delayed …
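For reference, the hold-until-the-delay-expires behaviour described above matches broker-side redelivery, which is configured with the `redeliveryPlugin` (and requires the scheduler). A sketch with illustrative delay values, not the poster's actual configuration:

```xml
<!-- Sketch: broker-side redelivery. A rolled-back message is held by
     the scheduler for the redelivery delay and then re-injected into
     the queue, rather than being redelivered immediately. -->
<broker xmlns="http://activemq.apache.org/schema/core" schedulerSupport="true">
  <plugins>
    <redeliveryPlugin fallbackToDeadLetter="true"
                      sendToDlqIfMaxRetriesExceeded="true">
      <redeliveryPolicyMap>
        <redeliveryPolicyMap>
          <defaultEntry>
            <redeliveryPolicy maximumRedeliveries="4"
                              initialRedeliveryDelay="5000"
                              redeliveryDelay="10000"/>
          </defaultEntry>
        </redeliveryPolicyMap>
      </redeliveryPolicyMap>
    </redeliveryPlugin>
  </plugins>
</broker>
```

Whether the re-injected message lands at the back of the queue or near its original position is exactly the determination the poster says they stopped short of making.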
We had a short outage on the network, and once the network came back, both
instances in our master/slave setup were up and connectable. This was
discovered when messages on queues were not browsable or able to be consumed;
the instances were restarted after renaming the db.data file as other …