kahaDB producerAudit LRU cache configuration is time dependent; it should not be
----------------------------------------------------------------------------------
Key: AMQ-3569
URL: https://issues.apache.org/jira/browse/AMQ-3569
Project: ActiveMQ
Issue Type: Improvement
Components: Message Store
Affects Versions: 5.5.1, 5.5.0
Reporter: Gary Tully
Assignee: Gary Tully
Fix For: 5.6.0
The failover: reconnect logic can submit duplicate messages if a send reply is
lost. These duplicates are trapped by the producerAudit, which keeps an LRU cache
of producer ids and message sequence ids. The default cache size is 64, which is a
little small when many producers come and go. It can be configured via: {code}<kahaDB
... maxFailoverProducersToTrack="2048" />{code} The problem is picking a
value.
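For illustration, a minimal Java sketch of raising the same limit on an embedded broker; it assumes KahaDBPersistenceAdapter exposes a setter matching the maxFailoverProducersToTrack XML attribute (names and values here are examples, not a recommendation):
{code}
import java.io.File;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class BrokerWithLargerProducerAudit {
    public static void main(String[] args) throws Exception {
        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        kahaDB.setDirectory(new File("target/kahadb"));
        // Bound on the producer audit LRU cache; the default (64) can be too
        // small when many failover producers come and go.
        kahaDB.setMaxFailoverProducersToTrack(2048);

        BrokerService broker = new BrokerService();
        broker.setPersistenceAdapter(kahaDB);
        broker.start();
        broker.waitUntilStopped();
    }
}
{code}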
If a connection is down for an indeterminate time, the number of producers the
audit will see is also indeterminate. The cache can be made very large, but that
consumes memory. The audit needs to be maintained on a per-connection basis
instead. That approach raises the same question of how many connections to audit,
but with connection pools the connection count can be bounded far more predictably
than the producer count, as in the sketch below.
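As a rough sketch only (not the existing implementation), a per-connection audit could keep a bounded LRU of producer sequence ids per connection and discard it when the connection closes; the class and method names here are hypothetical:
{code}
import java.util.LinkedHashMap;
import java.util.Map;

public class PerConnectionProducerAudit {

    /** Simple LRU map that evicts the eldest entry once maxEntries is exceeded. */
    static class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;
        LruCache(int maxEntries) {
            super(16, 0.75f, true); // access order gives LRU eviction
            this.maxEntries = maxEntries;
        }
        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries;
        }
    }

    // One bounded producer cache per connection id.
    private final Map<String, LruCache<String, Long>> auditByConnection = new LinkedHashMap<>();
    private final int producersPerConnection;

    PerConnectionProducerAudit(int producersPerConnection) {
        this.producersPerConnection = producersPerConnection;
    }

    /** Returns true if the (producerId, sequenceId) pair has already been seen, i.e. a duplicate. */
    synchronized boolean isDuplicate(String connectionId, String producerId, long sequenceId) {
        LruCache<String, Long> producers = auditByConnection.computeIfAbsent(
                connectionId, id -> new LruCache<>(producersPerConnection));
        Long lastSeen = producers.get(producerId);
        producers.put(producerId, Math.max(sequenceId, lastSeen == null ? -1L : lastSeen));
        return lastSeen != null && sequenceId <= lastSeen;
    }

    /** Drop all audit state for a connection when it closes. */
    synchronized void connectionClosed(String connectionId) {
        auditByConnection.remove(connectionId);
    }
}
{code}
Scoping the cache to the connection means memory use tracks the number of live (pooled) connections rather than the open-ended number of producers seen while a link is down.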