Re: mKahaDB - no clean-up because of ACKs

2018-03-12 Thread alprausch77
Hello Tim.
Thanks for the hint about the XA transactions.
There were indeed 2 XA transactions hanging in the kahaDB files. After I
committed them via JConsole, the store was cleaned up.
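
For anyone hitting the same issue, below is a minimal sketch of doing that
commit programmatically over JMX instead of clicking through JConsole. The
JMX URL, the object-name pattern for the recovered XA transactions and the
heuristicCommit operation name are assumptions - browse the broker's JMX
tree in JConsole first to confirm what your version actually exposes.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CommitRecoveredXaTransactions {
    public static void main(String[] args) throws Exception {
        // Assumed management URL; adjust host/port to your broker's JMX settings.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Assumed object-name pattern for recovered/prepared XA transactions;
            // verify the key properties (brokerName, transactionType, xid) in
            // JConsole before running this against a production broker.
            ObjectName pattern = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                            + "transactionType=RecoveredXaTransaction,*");

            for (ObjectName txn : mbs.queryNames(pattern, null)) {
                System.out.println("Committing recovered XA transaction: " + txn);
                // Assumed operation name; heuristicRollback() may also be exposed.
                mbs.invoke(txn, "heuristicCommit", new Object[0], new String[0]);
            }
        } finally {
            connector.close();
        }
    }
}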

Joachim





Re: mKahaDB - no clean-up because of ACKs

2018-03-10 Thread Timothy Bish

On 03/09/2018 11:26 PM, Tim Bain wrote:

> Joachim,
>
> There must have been at least one file that was kept for some reason other
> than acks referring to an earlier file; presumably that would be the one
> with the lowest number. Can you provide the log output for that file?
>
> The standard first question when people say that their KahaDB files are
> being kept even though there are no messages is, "Did you check the DLQ?"
> So, did you check the DLQ?
>
> Alternatively, you could use a debugger to step through the code in
> org.apache.activemq.store.kahadb.MessageDatabase, so you can see from the
> debugger why that first file is being kept alive.
>
> Another option, if you still can't figure out which message is keeping the
> journal files alive, might be to start up a 5.14.0 broker, which has the
> ack-compaction logic, and let it squish the log files down to just a few,
> then shut down the broker and take the now-much-smaller files back to your
> 5.10 broker and pick up from there.
>
> Tim


Also check for any incomplete XA transactions that are holding up a file.


> On Fri, Mar 9, 2018 at 2:51 AM, alprausch77 wrote:
>
>> Hello.
>> Recently we had a problem on an ActiveMQ 5.10 installation (with a
>> manually applied patch for AMQ-5542).
>> The mKahaDB data store grew to ~30 GB and couldn't clean up the data
>> files anymore.
>>
>> The log always showed something like this:
>> /not removing data file: 317633 as contained ack(s) refer to referenced
>> file: [317632, 317633]/
>>
>> I'm aware that the data files can't be cleaned up if there is an
>> unconsumed message in a queue, but that's not the case here.
>> I started an ActiveMQ broker with the copied storage on my local machine
>> and checked every queue and topic via JConsole for any remaining
>> messages - but every queue/topic shows a size of 0.
>>
>> So it seems to me that the messages were processed but the ACKs are
>> somehow stuck in the store.
>>
>> Is there a way to (manually) get rid of the ACKs?
>> Or is there a way to analyze the kahaDB storage files more deeply to find
>> the reason for the stuck ACKs?
>>
>> I can provide the whole log from the KahaDB recovery if that is of any
>> help.
>>
>> Thanks.
>> Joachim


--
Tim Bish
twitter: @tabish121
blog: http://timbish.blogspot.com/



Re: mKahaDB - no clean-up because of ACKs

2018-03-09 Thread Tim Bain
Joachim,

There must have been at least one file that was kept for some reason other
than acks referring to an earlier file; presumably that would be the one
with the lowest number. Can you provide the log output for that file?

The standard first question when people say that their KahaDB files are
being kept even though there are no messages is, "Did you check the DLQ?"
So, did you check the DLQ?
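
For what it's worth, a quick way to answer that without clicking through
every destination in JConsole is to query the queue MBeans over JMX and
print their sizes; the DLQ (e.g. ActiveMQ.DLQ) shows up like any other
queue. A minimal sketch, assuming the post-5.8 JMX naming and a default
brokerName of "localhost":

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListQueueSizes {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Assumed destination MBean pattern; adjust brokerName to your broker.
            ObjectName queues = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                            + "destinationType=Queue,destinationName=*");

            for (ObjectName queue : mbs.queryNames(queues, null)) {
                // QueueSize is the number of messages still held for the queue,
                // including anything parked on the DLQ.
                Object size = mbs.getAttribute(queue, "QueueSize");
                System.out.println(
                        queue.getKeyProperty("destinationName") + " -> " + size);
            }
        } finally {
            connector.close();
        }
    }
}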

Alternatively, you could use a debugger to step through the code in
org.apache.activemq.store.kahadb.MessageDatabase, so you can see from the
debugger why that first file is being kept alive.

Another option, if you still can't figure out which message is keeping the
journal files alive, might be to start up a 5.14.0 broker, which has the
ack-compaction logic, and let it squish the log files down to just a few,
then shut down the broker and take the now-much-smaller files back to your
5.10 broker and pick up from there.
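
If anyone tries that route, here is a rough sketch of pointing a throwaway
5.14.x broker at a copy of the store with ack compaction tuned to kick in
quickly. It is only an illustration: the enableAckCompaction and
compactAcksAfterNoGC setters are my assumption of how the 5.14 options are
exposed on the plain kahaDB adapter (the same names exist as attributes in
activemq.xml), and with mKahaDB you would set them on the nested adapters
instead.

import java.io.File;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class CompactCopiedStore {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("compaction-test");

        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        // Always point at a copy of the journal, never at the live store.
        kahaDB.setDirectory(new File("/path/to/copy-of-kahadb"));
        // Assumed 5.14 options: ack compaction is on by default there, but a
        // lower compactAcksAfterNoGC makes stuck acks get rewritten forward
        // after fewer failed cleanup cycles.
        kahaDB.setEnableAckCompaction(true);
        kahaDB.setCompactAcksAfterNoGC(2);

        broker.setPersistenceAdapter(kahaDB);
        broker.start();
        // Let a few checkpoint/cleanup cycles run, stop the broker, then copy
        // the now-much-smaller data files back.
        broker.waitUntilStopped();
    }
}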

Tim

On Fri, Mar 9, 2018 at 2:51 AM, alprausch77 wrote:

> Hello.
> Recently we had a problem on an ActiveMQ 5.10 installation (with a
> manually applied patch for AMQ-5542).
> The mKahaDB data store grew to ~30 GB and couldn't clean up the data
> files anymore.
>
> The log always showed something like this:
> /not removing data file: 317633 as contained ack(s) refer to referenced
> file: [317632, 317633]/
>
> I'm aware that the data files can't be cleaned up if there is an
> unconsumed message in a queue, but that's not the case here.
> I started an ActiveMQ broker with the copied storage on my local machine
> and checked every queue and topic via JConsole for any remaining
> messages - but every queue/topic shows a size of 0.
>
> So it seems to me that the messages were processed but the ACKs are
> somehow stuck in the store.
>
> Is there a way to (manually) get rid of the ACKs?
> Or is there a way to analyze the kahaDB storage files more deeply to find
> the reason for the stuck ACKs?
>
> I can provide the whole log from the KahaDB recovery if that is of any
> help.
>
> Thanks.
> Joachim


mKahaDB - no clean-up because of ACKs

2018-03-09 Thread alprausch77
Hello.
Recently we had a problem on an ActiveMQ 5.10 installation (with a manually
applied patch for AMQ-5542).
The mKahaDB data store grew to ~30 GB and couldn't clean up the data files
anymore.

The log always showed something like this:
/not removing data file: 317633 as contained ack(s) refer to referenced
file: [317632, 317633]/

I'm aware that the data files can't be cleaned up if there is an unconsumed
message in a queue, but that's not the case here.
I started an ActiveMQ broker with the copied storage on my local machine and
checked every queue and topic via JConsole for any remaining messages - but
every queue/topic shows a size of 0.

So it seems to me that the messages were processed but the ACKs are somehow
stuck in the store.

Is there a way to (manually) get rid of the ACKs?
Or is there a way to analyze the kahaDB storage files more deeply to find
the reason for the stuck ACKs?

I can provide the whole log from the KahaDB recovery if that is of any
help.

Thanks.
Joachim


