[
https://issues.apache.org/jira/browse/AMQ-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963599#comment-15963599
]
Gary Tully commented on AMQ-6654:
---------------------------------
This may be resolved via AMQ-6652.
> Durable subscriber pendingQueueSize not flushed with KahaDB after force-kill
> of broker
> --------------------------------------------------------------------------------------
>
> Key: AMQ-6654
> URL: https://issues.apache.org/jira/browse/AMQ-6654
> Project: ActiveMQ
> Issue Type: Bug
> Components: KahaDB
> Affects Versions: 5.14.4
> Environment: Reproducible in Linux and Windows
> Reporter: Justin Reock
> Attachments: localhost___Durable_Topic_Subscribers.jpg
>
>
> This is related to AMQ-5960, which is marked as fixed, but the issue persists.
> It is very easy to reproduce:
> 1) Start up ActiveMQ
> 2) Produce load into a topic
> 3) Connect a few durable subscribers to the topic
> 4) Force-terminate the running broker instance
> 5) Restart the broker (without load)
> 6) Allow durable subscribers to reconnect, and attempt to drain the durable
> subscription queues
> In almost every case, you will see lingering "messages" in the pending queue
> sizes of the durable subscribers. I put "messages" in quotes because I have
> been able to prove that all the messages are in fact delivered to the
> clients; there is no message loss, but KahaDB still thinks that there are
> messages waiting to be dispatched.
> This prevents KahaDB from cleaning up extents and ultimately causes the
> KahaDB store to grow out of control, hence the "Major" severity despite
> no actual message loss occurring.
> The only way to recover from the situation is to delete and recreate the
> subscriber, which does allow KahaDB to clean itself up.
> I have tried several things, including disabling the new ackCompaction
> functionality, significantly reducing the time between checkpoints, and
> reducing the size of the index cache to force more frequent flushes to disk,
> but none of these completely eliminates the problem.
> This does not happen with LevelDB, but since LevelDB has been deprecated,
> switching to it is not a good solution. It does not happen with JDBC either,
> but JDBC is, as we know, significantly slower than KahaDB, so ideally we'd
> see this fixed in KahaDB.
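For reference, the mitigations the reporter describes map onto KahaDB persistence-adapter attributes in activemq.xml. A minimal sketch of the configuration that was tried (the attribute names exist in ActiveMQ 5.14.x; the directory path and numeric values here are illustrative, not the reporter's actual settings):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
  <persistenceAdapter>
    <!-- enableAckCompaction="false": disables the acknowledgement
         compaction feature introduced in 5.14.x.
         checkpointInterval: ms between checkpoints (default 5000),
         lowered here to flush more often.
         indexCacheSize: number of cached index pages (default 10000),
         reduced to force more frequent index writes to disk. -->
    <kahaDB directory="${activemq.data}/kahadb"
            enableAckCompaction="false"
            checkpointInterval="2500"
            indexCacheSize="1000"/>
  </persistenceAdapter>
</broker>
```

Per the report, none of these settings fully prevents the stale pendingQueueSize after a force-kill; they are listed only to show what was already ruled out.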
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)