[ 
https://issues.apache.org/jira/browse/AMQ-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

winking updated AMQ-6802:
-------------------------
    Description: 
Hello,
I am using ActiveMQ 5.15.0 in a scenario where 300 producers (a network of 
brokers with a static bridge) each send an event every 5 seconds to one and 
the same queue. Since upgrading to 5.15.0, all duplicates detected by the 
audit process are forwarded to the DLQ, which is not wanted. As a result, a 
lot of KahaDB journal files are never cleaned up, because the duplicates 
sitting in the DLQ block their removal.

According to the documentation, this should not happen when a dead letter 
strategy is configured:
_"The dead letter strategy has an message audit that is enabled by default. 
This prevents duplicate messages from being added to the configured DLQ"_ 
([source|http://activemq.apache.org/message-redelivery-and-dlq-handling.html])
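
For context, the duplicate check the documentation refers to is essentially a per-producer sliding window of recently seen message IDs, bounded by maxAuditDepth and maxProducersToAudit. A minimal, self-contained sketch of that idea (illustrative only; this is not the broker's actual org.apache.activemq.ActiveMQMessageAudit code, and the class/method names are made up):

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: for each producer, remember the last `auditDepth`
// message IDs and flag any ID seen again inside that window. This mirrors
// the idea behind the broker's audit, not its real implementation.
public class DuplicateAudit {
    private final int auditDepth;
    private final Map<String, Deque<String>> seenByProducer = new HashMap<>();

    public DuplicateAudit(int auditDepth) {
        this.auditDepth = auditDepth;
    }

    /** Returns true if this (producerId, messageId) pair is already in the window. */
    public boolean isDuplicate(String producerId, String messageId) {
        Deque<String> seen =
                seenByProducer.computeIfAbsent(producerId, k -> new ArrayDeque<>());
        if (seen.contains(messageId)) {
            return true;
        }
        seen.addLast(messageId);
        if (seen.size() > auditDepth) {
            seen.removeFirst(); // forget the oldest ID once the window is full
        }
        return false;
    }
}
{code}

The point of the window being bounded is why the audit settings matter here: once an ID falls out of the window, a late redelivery of it is no longer recognized as a duplicate.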

The release notes for 5.15.0 hint that this has already been fixed, but I 
cannot confirm it: 
[ticket|https://issues.apache.org/jira/browse/AMQ-6667]

In my activemq.log I see messages like:

{code}
2017-08-17 12:32:43,121 | WARN | org.apache.activemq.broker.region.cursors.QueueStorePrefetch@18e1ab6f:My.Queue,batchResetNeeded=false,size=0,cacheEnabled=true,maxBatchSize:1,hasSpace:true,pendingCachedIds.size:1,lastSyncCachedId:null,lastSyncCachedId-seq:null,lastAsyncCachedId:ID:09DC12000893-2955-636385757120625000-1:1:1:1:172,lastAsyncCachedId-seq:1626,store=permits:9999,sd=nextSeq:1629,lastRet:MessageOrderCursor:[def:0, low:0, high:0],pending:0 - cursor got duplicate send ID:09FC12000801-1834-636385719159843750-1:1:1:1:944 seq: org.apache.activemq.store.kahadb.KahaDBStore$StoreQueueTask$InnerFutureTask@3fffe287 | org.apache.activemq.broker.region.cursors.AbstractStoreCursor | ActiveMQ NIO Worker 5
2017-08-17 12:32:43,126 | WARN | duplicate message from store ID:09FC12000801-1834-636385719159843750-1:1:1:1:944, redirecting for dlq processing | org.apache.activemq.broker.region.Queue | ActiveMQ NIO Worker 5
{code}


Parts of activemq.xml:
...

{code:xml}
<policyEntry queue="My.Queue" enableAudit="true"
             maxAuditDepth="8192" maxProducersToAudit="1000">
  <deadLetterStrategy>
    <individualDeadLetterStrategy queuePrefix="DLQ."
                                  useQueueForQueueMessages="true"/>
  </deadLetterStrategy>
</policyEntry>
{code}
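
One thing that may be relevant: the enableAudit/maxAuditDepth/maxProducersToAudit attributes on the policyEntry configure the cursor audit, while the dead letter strategy carries its own audit settings. If the DLQ-side audit window is sized too small for 300 producers, duplicates could slip through it. A variant worth trying (the attribute names are assumed from AbstractDeadLetterStrategy's bean properties; I have not verified that this resolves the issue):

{code:xml}
<policyEntry queue="My.Queue" enableAudit="true"
             maxAuditDepth="8192" maxProducersToAudit="1000">
  <deadLetterStrategy>
    <!-- enableAudit/maxAuditDepth/maxProducersToAudit here configure the
         strategy's own audit; sizing them for 300 producers is an
         assumption, not a verified fix -->
    <individualDeadLetterStrategy queuePrefix="DLQ."
                                  useQueueForQueueMessages="true"
                                  enableAudit="true"
                                  maxAuditDepth="8192"
                                  maxProducersToAudit="1000"/>
  </deadLetterStrategy>
</policyEntry>
{code}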

...

{code:xml}
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb"
          journalMaxFileLength="128mb"
          compactAcksIgnoresStoreGrowth="true"
          enableAckCompaction="true"
          lockKeepAlivePeriod="10000"
          concurrentStoreAndDispatchQueues="false"/>
</persistenceAdapter>
{code}
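
To see why the journal files are being retained, TRACE logging on the KahaDB message database prints, on each cleanup run, which data files are still pinned and by which destinations. A log4j.properties fragment along the lines of the diagnostic described in the ActiveMQ KahaDB documentation (appender name and file path are my choice):

{code}
# conf/log4j.properties - log each KahaDB cleanup run and which
# destinations still reference which journal data files
log4j.appender.kahadb=org.apache.log4j.RollingFileAppender
log4j.appender.kahadb.file=${activemq.base}/data/kahadb.log
log4j.appender.kahadb.maxFileSize=1024KB
log4j.appender.kahadb.layout=org.apache.log4j.PatternLayout
log4j.appender.kahadb.layout.ConversionPattern=%d [%-15.15t] %-5p %-30.30c{1} - %m%n
log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=TRACE, kahadb
{code}

In my case this shows the files held back by DLQ.My.Queue.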

...


> Duplicates are sent to DLQ
> --------------------------
>
>                 Key: AMQ-6802
>                 URL: https://issues.apache.org/jira/browse/AMQ-6802
>             Project: ActiveMQ
>          Issue Type: Bug
>    Affects Versions: 5.15.0
>         Environment: Linux REL7 x86_64
>            Reporter: winking
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
