[ https://issues.apache.org/jira/browse/AMQ-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14303181#comment-14303181 ]
Dmytro Karimov commented on AMQ-5235:
-------------------------------------
Today I faced the following:
{code}
2015-02-03 13:13:57,259 | INFO | Usage(default:temp:queue://test.queue:temp)
percentUsage=0%, usage=3145728000, limit=107374182400,
percentUsageMinDelta=1%;Parent:Usage(default:temp) percentUsage=102%,
usage=3145728000, limit=3079098368, percentUsageMinDelta=1%: Temp Store is Full
(0% of 107374182400). Stopping producer
(ID:myserver.internal-38652-1422959666113-2:20:-1:1) to prevent flooding
queue://test.queue. See http://activemq.apache.org/producer-flow-control.html
for more info (blocking for: 151s) | org.apache.activemq.broker.region.Queue |
ActiveMQ Transport: tcp:///192.168.1.84:53129@61611
{code}
> Temp Store is Full (0% of 107374182400)
:)
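Reading the figures in that entry side by side makes the contradiction clearer: the producer appears to be blocked because the parent Usage(default:temp) is over its 3079098368-byte limit, while the "Temp Store is Full (0% of 107374182400)" text echoes the per-destination percentage and limit from the same line. A quick check of the arithmetic, using only the numbers above (a sketch, not broker code):
{code}
object TempUsageNumbers extends App {
  val usage       = 3145728000L    // usage reported for Usage(default:temp)
  val parentLimit = 3079098368L    // broker-level temp limit shown in the log
  val destLimit   = 107374182400L  // per-destination limit quoted in the message

  // ~102.2% -- this is what actually triggers the blocking
  println(f"parent percentUsage: ${usage * 100.0 / parentLimit}%.1f%%")
  // ~2.9% -- the destination itself is nowhere near its own limit
  println(f"share of destination limit: ${usage * 100.0 / destLimit}%.1f%%")
}
{code}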
I also disabled producerFlowControl:
{code}
<policyEntry queue=">" producerFlowControl="false"
memoryLimit="100mb" gcInactiveDestinations="true"
inactiveTimoutBeforeGC="30000" >
<deadLetterStrategy>
<individualDeadLetterStrategy queuePrefix="DLQ."
useQueueForQueueMessages="true" />
</deadLetterStrategy>
</policyEntry>
{code}
All queues are empty, but ActiveMQ still reports: "Usage(default:temp)
percentUsage=102%, usage=3145728000"
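For what it's worth, producerFlowControl in a policyEntry governs per-destination flow control, whereas the 102% figure comes from the broker-level tempUsage limit (limit=3079098368 in the log), which is configured under <systemUsage> in activemq.xml. A minimal sketch of that setting, with an illustrative 3 gb value rather than this broker's actual one:
{code}
<systemUsage>
  <systemUsage>
    <tempUsage>
      <!-- broker-wide temp store limit; this is the "Parent:Usage(default:temp)" limit -->
      <tempUsage limit="3 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
{code}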
> erroneous temp percent used
> ---------------------------
>
> Key: AMQ-5235
> URL: https://issues.apache.org/jira/browse/AMQ-5235
> Project: ActiveMQ
> Issue Type: Bug
> Components: activemq-leveldb-store
> Affects Versions: 5.9.0
> Environment: debian (quality testing and production)
> Reporter: anselme dewavrin
>
> Dear all,
> We have an ActiveMQ 5.9 broker configured with 1GB of tempUsage allowed, purely
> as a safeguard, since we only use persistent messages (about 6000 messages per
> day). After several days of use, the temp usage increases and even shows
> values above the total amount of data on disk. Here it shows 45%
> of its 1GB limit for the following files:
> find activemq-data -ls
> 76809801 4 drwxr-xr-x 5 anselme anselme 4096 Jun 19 10:24
> activemq-data
> 76809813 4 -rw-r--r-- 1 anselme anselme 24 Jun 16 16:13
> activemq-data/store-version.txt
> 76809817 4 drwxr-xr-x 2 anselme anselme 4096 Jun 16 16:13
> activemq-data/dirty.index
> 76809811 4 -rw-r--r-- 2 anselme anselme 2437 Jun 16 12:06
> activemq-data/dirty.index/000008.sst
> 76809820 4 -rw-r--r-- 1 anselme anselme 16 Jun 16 16:13
> activemq-data/dirty.index/CURRENT
> 76809819 80 -rw-r--r-- 1 anselme anselme 80313 Jun 16 16:13
> activemq-data/dirty.index/000011.sst
> 76809822 0 -rw-r--r-- 1 anselme anselme 0 Jun 16 16:13
> activemq-data/dirty.index/LOCK
> 76809810 300 -rw-r--r-- 2 anselme anselme 305206 Jun 16 11:51
> activemq-data/dirty.index/000005.sst
> 76809821 2048 -rw-r--r-- 1 anselme anselme 2097152 Jun 19 11:30
> activemq-data/dirty.index/000012.log
> 76809818 1024 -rw-r--r-- 1 anselme anselme 1048576 Jun 16 16:13
> activemq-data/dirty.index/MANIFEST-000010
> 76809816 0 -rw-r--r-- 1 anselme anselme 0 Jun 16 16:13
> activemq-data/lock
> 76809815 102400 -rw-r--r-- 1 anselme anselme 104857600 Jun 19 11:30
> activemq-data/0000000000f0faaf.log
> 76809823 102400 -rw-r--r-- 1 anselme anselme 104857600 Jun 16 11:50
> activemq-data/0000000000385f46.log
> 76809807 4 drwxr-xr-x 2 anselme anselme 4096 Jun 16 16:13
> activemq-data/0000000000f0faaf.index
> 76809808 420 -rw-r--r-- 1 anselme anselme 429264 Jun 16 16:13
> activemq-data/0000000000f0faaf.index/000009.log
> 76809811 4 -rw-r--r-- 2 anselme anselme 2437 Jun 16 12:06
> activemq-data/0000000000f0faaf.index/000008.sst
> 76809812 4 -rw-r--r-- 1 anselme anselme 165 Jun 16 16:13
> activemq-data/0000000000f0faaf.index/MANIFEST-000007
> 76809809 4 -rw-r--r-- 1 anselme anselme 16 Jun 16 16:13
> activemq-data/0000000000f0faaf.index/CURRENT
> 76809810 300 -rw-r--r-- 2 anselme anselme 305206 Jun 16 11:51
> activemq-data/0000000000f0faaf.index/000005.sst
> 76809814 102400 -rw-r--r-- 1 anselme anselme 104857600 Jun 12 21:06
> activemq-data/0000000000000000.log
> 76809802 4 drwxr-xr-x 2 anselme anselme 4096 Jun 16 16:13
> activemq-data/plist.index
> 76809803 4 -rw-r--r-- 1 anselme anselme 16 Jun 16 16:13
> activemq-data/plist.index/CURRENT
> 76809806 0 -rw-r--r-- 1 anselme anselme 0 Jun 16 16:13
> activemq-data/plist.index/LOCK
> 76809805 1024 -rw-r--r-- 1 anselme anselme 1048576 Jun 16 16:13
> activemq-data/plist.index/000003.log
> 76809804 1024 -rw-r--r-- 1 anselme anselme 1048576 Jun 16 16:13
> activemq-data/plist.index/MANIFEST-000002
> The problem is that in our production system it once blocked producers with a
> tempUsage of 122%, even though the disk was empty.
> So we investigated, ran the broker under a debugger, and found how the
> usage is calculated. It is in the Scala LevelDB files: it is not based on
> what is on disk, but on what the store thinks is on disk. It multiplies the
> size of one log by the number of logs known to a certain hashmap.
> I think the entries of the hashmap are not removed when the log files are
> purged.
> Could you confirm?
> Thanks in advance
> Anselme
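The calculation described at the end of the quoted report can be illustrated with a short sketch. This is not the actual LevelDB store code (the real logic lives in the store's Scala sources); the class and method names below are hypothetical, and it only reproduces the suspected pattern: a usage figure derived from the size of one log multiplied by the number of entries in a map, where purging deletes the file but not the map entry.
{code}
import scala.collection.mutable

class LogTracker(logSize: Long) {
  // what the store believes is on disk, keyed by journal position
  private val knownLogs = mutable.HashMap[Long, String]()
  // stand-in for the real files on disk
  private val onDisk = mutable.Set[String]()

  def append(position: Long, file: String): Unit = {
    knownLogs(position) = file
    onDisk += file
  }

  // Buggy purge: the file disappears from disk, but the map entry survives,
  // so every purged log keeps inflating the usage estimate.
  def purge(position: Long): Unit = {
    knownLogs.get(position).foreach(f => onDisk -= f)
    // knownLogs.remove(position)   // the step the report suspects is missing
  }

  // usage = size of one log * number of logs the map still knows about,
  // not anything derived from what is actually on disk
  def tempUsage: Long = logSize * knownLogs.size
}

object UsageDrift extends App {
  val limit = 3079098368L                // broker temp limit from the log above
  val t     = new LogTracker(104857600L) // 100 MB journal logs, as in the listing
  (0 until 30).foreach(i => t.append(i * 104857600L, f"$i%016x.log"))
  (0 until 30).foreach(i => t.purge(i * 104857600L)) // all journal files deleted
  println(s"usage=${t.tempUsage} (${t.tempUsage * 100 / limit}% of $limit)")
  // prints: usage=3145728000 (102% of 3079098368) although no files remain
}
{code}
With 100 MB journal logs and 30 stale map entries this reproduces exactly the usage=3145728000 / 102% figures in the comment above, which would be consistent with the reporter's suspicion that purged logs are still being counted.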