KahaDB journal logs cleanup issue

2017-02-19 Thread Shobhana
I have read
http://activemq.apache.org/why-do-kahadb-log-files-remain-after-cleanup.html
and I understand some of the reasons why journal logs may not get
deleted:
(a) It contains a pending message for a destination or durable topic
subscription
(b) It contains an ack for a message which is in an in-use data file - the
ack cannot be removed as a recovery would then mark the message for
redelivery
(c) The journal references a pending transaction
(d) It is a journal file, and there may be a pending write to it

Is the above a complete list of reasons why journal files may not get
deleted, or are there more possible reasons?

Since I have no control over offline subscribers (these are apps used by end
users), I try to overcome the above scenarios with certain configurations:

To avoid issues due to (a), I have enabled the following configuration in my
broker XML.

To ensure offline durable subscribers don't cause these log files to pile
up, I have set the offline durable subscriber timeout to 24 hours:

<broker xmlns="http://activemq.apache.org/schema/core" useJmx="false"
        brokerName="PrimaryBroker" deleteAllMessagesOnStartup="false"
        advisorySupport="false" schedulePeriodForDestinationPurge="60"
        offlineDurableSubscriberTimeout="8640"
        offlineDurableSubscriberTaskSchedule="360"
        dataDirectory="${activemq.data}">

I have also set message expiry to 12 hours:

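A minimal sketch of such a setting, using the timeStampingBrokerPlugin (the
plugin and attribute names are standard ActiveMQ 5.x; the values here are
only illustrative, with 43200000 ms = 12 hours):

<plugins>
  <!-- Sketch only: gives a 12-hour TTL to messages that arrive with no
       expiration, and caps any client-set TTL at the same ceiling. -->
  <timeStampingBrokerPlugin ttlCeiling="43200000"
                            zeroExpirationOverride="43200000"/>
</plugins>
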
I think the above configurations only help to overcome scenario (a). How do
I overcome scenarios (b), (c) and (d)? Is there any configuration to:
a) delete old ack messages? (see the sketch just below)
b) time out pending transactions?
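
On (a)/the old-acks question: as far as I know, the 5.14.x line added an
ack-compaction pass to KahaDB that periodically rewrites lingering acks
forward into the current journal file, so the older files they pin can be
garbage collected. A sketch of the persistence-adapter attributes (attribute
names as documented for 5.14; the values are only illustrative):

<persistenceAdapter>
  <!-- Sketch: after 10 consecutive checkpoints in which no journal file
       could be GC'd, rewrite old acks forward so the files holding them
       become reclaimable. -->
  <kahaDB directory="${activemq.data}/kahadb"
          compactAcksAfterNoGC="10"
          compactAcksIgnoresStoreGrowth="false"/>
</persistenceAdapter>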

Over the last 3 days' run, I see that some journal logs created early on
17-Feb are still present. Why are these files not getting deleted even after
more than 72 hours? What are the probable reasons? Since this is a
production environment, I cannot enable debug logs to see which destination
is holding up which journal file. Any help will be greatly appreciated.
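
One mitigation from the FAQ page linked above: the KahaDB GC diagnostics can
be routed to their own TRACE-level file, leaving the rest of the production
logging untouched. A sketch of the conf/log4j.properties addition along the
lines the FAQ suggests (file name and sizes are illustrative):

log4j.appender.kahadb=org.apache.log4j.RollingFileAppender
log4j.appender.kahadb.file=${activemq.base}/data/kahadb.log
log4j.appender.kahadb.maxFileSize=1024KB
log4j.appender.kahadb.maxBackupIndex=5
log4j.appender.kahadb.append=true
log4j.appender.kahadb.layout=org.apache.log4j.PatternLayout
log4j.appender.kahadb.layout.ConversionPattern=%d [%-15.15t] %-5p %-30.30c{1} - %m%n
log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=TRACE, kahadb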

P.S.: We use AMQ 5.14.1 and exchange mostly MQTT messages (thousands of
topics are created on the fly, and both persistent and non-persistent
messages are exchanged over these topics), plus a few JMS messages to a
queue.

TIA,
Regards,
Shobhana





Re: Replication and client transactions

2017-02-19 Thread Alec Henninger
Thank you very much for your time.

On Sat, Feb 18, 2017 at 10:16 PM Justin Bertram  wrote:

> > Given I have a broker with for example two backups, configured with
> journal
> > replication (not shared storage), is there a way to guarantee a message
> has
> > successfully replicated before completing a write?
>
> Assuming the message you send is durable then this happens automatically.
> The producer will not receive a reply back from the broker until the
> message has been replicated successfully.
>

This may be asking a lot, but is it possible to direct me a little bit to
where this is expressed in the code? I've been looking around the client
and server code trying to understand how this is enforced, and as far as I
can tell it looks like replication is done asynchronously. I obviously
don't doubt your answer; I'm just trying to understand how it works.
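
If it helps, the pattern that would reconcile "asynchronous on the wire"
with "the producer blocks until replicated" can be sketched in a few lines
of Java. This is purely illustrative (the class and method names are made
up, not Artemis APIs): the record ships to the backup asynchronously, but
the reply to the producer is parked until the backup confirms.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Toy sketch only -- not Artemis code. The send() response is held back
// until the backup acknowledges, so the producer sees a synchronous send
// even though the replication transfer itself is asynchronous.
public class DeferredReplicationReply {

    static final class PendingWrite {
        private final CountDownLatch backupConfirmed = new CountDownLatch(1);

        // Invoked by the (asynchronous) replication channel when the
        // backup acknowledges the record.
        void backupAcked() {
            backupConfirmed.countDown();
        }

        // Invoked on the reply path; blocks until the backup has acked.
        boolean awaitBackup(long timeout, TimeUnit unit) throws InterruptedException {
            return backupConfirmed.await(timeout, unit);
        }
    }

    public static void main(String[] args) throws Exception {
        PendingWrite pending = new PendingWrite();

        // Asynchronous replication: ship the record to the backup on
        // another thread and signal completion when it acks.
        new Thread(() -> {
            try {
                Thread.sleep(50); // stand-in for network + backup journal write
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            pending.backupAcked();
        }).start();

        // Reply path: only answer the producer once the backup confirmed,
        // so the producer's blocking send() returns after replication.
        if (pending.awaitBackup(5, TimeUnit.SECONDS)) {
            System.out.println("reply OK to producer (record replicated)");
        } else {
            System.out.println("timed out waiting for the backup");
        }
    }
}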


>
> > Is it possible to configure how many backups a message should be
> replicated
> > to before it is successful (such as a majority of them)?
>
> Although multiple backups can be configured, it's important to know that
> only one backup actually receives the replicated data.  In this use case,
> when the live broker fails and the backup with the replicated data takes
> over, one of the "extra" backups becomes a backup to the new live broker.
>
> > Is a message consumable before it has been replicated to multiple
> backups?
>
> No, I don't believe so.
>
>
> Justin
>
> - Original Message -
> From: "Alec Henninger" 
> To: users@activemq.apache.org
> Sent: Saturday, February 18, 2017 3:42:11 PM
> Subject: Re: Replication and client transactions
>
> Woops, thought the list was Artemis specific. Yes I'm using Artemis in this
> example.
>
> On Sat, Feb 18, 2017, 1:38 PM Justin Bertram  wrote:
>
> > Are you using ActiveMQ Artemis?
> >
> >
> > Justin
> >
> > - Original Message -
> > From: "Alec Henninger" 
> > To: users@activemq.apache.org
> > Sent: Saturday, February 18, 2017 11:50:18 AM
> > Subject: Replication and client transactions
> >
> > Hi all,
> >
> > Given I have a broker with for example two backups, configured with
> journal
> > replication (not shared storage), is there a way to guarantee a message
> has
> > successfully replicated before completing a write?
> >
> > Does this happen automatically, or is configuration necessary to achieve
> > this?
> >
> > Is it possible to configure how many backups a message should be
> replicated
> > to before it is successful (such as a majority of them)?
> >
> > Is a message consumable before it has been replicated to multiple
> backups?
> >
> > Thank you very much for your time,
> > Alec
> >
>
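
For reference, a minimal sketch of how the live/backup pairing described
above is typically expressed in each broker's broker.xml, assuming the
standard Artemis ha-policy schema (the group name is only illustrative):

<!-- On the live broker: replicate the journal to a backup in the same group. -->
<ha-policy>
  <replication>
    <master>
      <group-name>pair-a</group-name>
      <check-for-live-server>true</check-for-live-server>
    </master>
  </replication>
</ha-policy>

<!-- On each backup (including the "extra" ones): join the same group.
     Only one backup at a time actually receives the replicated data;
     the others wait and re-pair after a failover. -->
<ha-policy>
  <replication>
    <slave>
      <group-name>pair-a</group-name>
    </slave>
  </replication>
</ha-policy>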