[ https://issues.apache.org/jira/browse/AMQ-1739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gary Tully updated AMQ-1739:
----------------------------

    Comment: was deleted


> ActiveMQ 5.1.0 runs out of file descriptors with lots of 'CLOSE_WAIT' sockets
> -----------------------------------------------------------------------------
>
>                 Key: AMQ-1739
>                 URL: https://issues.apache.org/jira/browse/AMQ-1739
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: Broker
>    Affects Versions: 5.1.0
>         Environment: We have a single broker with no special network setup. 
> Our broker system has two single-core Opterons, 8GB of memory, plenty of I/O 
> and runs a recent 64-bit Debian with a 2.6.21 kernel.
> Java(TM) SE Runtime Environment (build 1.6.0_06-b02)
> Java HotSpot(TM) 64-Bit Server VM (build 10.0-b22, mixed mode)
> We left most of the activemq.xml configuration as-is and adjusted the 
> start-up script to run with a 2GB heap and the parallel garbage collector, 
> which was more or less needed for 5.0 and kept unchanged for 5.1.
>            Reporter: Arjen
>            Assignee: Rob Davies
>            Priority: Blocker
>             Fix For: 5.2.0
>
>         Attachments: stomp-overload-producer.tgz
>
>
> We have no idea why or when, but within a few days after start-up, ActiveMQ 
> suddenly runs out of file descriptors (we've raised the limit to 10240). 
> According to lsof it has lots of sockets which are in CLOSE_WAIT when that 
> happens. As soon as that happened once, it would re-occur within a few hours. 
> This behavior did not happen with ActiveMQ 5.0.
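> For reference, we count them with something like the following (a hypothetical 
> helper that just wraps lsof; the broker pid has to be filled in):
>
>     import subprocess
>
>     def count_close_wait(pid):
>         # Illustrative only: list the broker's open descriptors with lsof
>         # and count the sockets stuck in CLOSE_WAIT.
>         out = subprocess.run(["lsof", "-n", "-P", "-p", str(pid)],
>                              capture_output=True, text=True).stdout
>         return sum(1 for line in out.splitlines() if "CLOSE_WAIT" in line)
>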
> We have five queues, all with only one consumer. All consumption and 
> production is via the Stomp interface using PHP clients. Three of those 
> queues get up to 50-100 messages/second at peak moments, while the consumers 
> adjust their own consumption rate to the system's load (normally it's capped 
> at about 50-150/sec). So at high-load moments on the consumers, the queues 
> can grow to a few thousand messages; normally the queues are emptied as soon 
> as a message arrives. Those five consumers stay connected indefinitely.
> The messages are all quite small (at most 1 KB or so) and come from 5 web 
> servers. For each web page request (about 2-3M/day) a connection is made to 
> ActiveMQ via Stomp and at least one message is sent; for most requests two 
> are sent, to the two most active queues. For every request a new connection 
> is made and at most four Stomp frames are sent to ActiveMQ (connect, two 
> messages, disconnect), since Apache+PHP does not allow useful reuse of 
> sockets and similar resources.
> So the connection rate is about the same as the highest message rate on any 
> of the queues (roughly 50-100 connects/second).
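> Each page request then does roughly the following on the wire (an illustrative 
> Python sketch of the raw Stomp 1.0 frames; the real clients are PHP, and the 
> host, port 61613 and queue names here are placeholders):
>
>     import socket
>
>     def publish(host="localhost", port=61613, body="small payload"):
>         # One fresh TCP connection per web request, as described above.
>         sock = socket.create_connection((host, port))
>         try:
>             # Stomp frames are NUL-terminated; CONNECT first, then read the
>             # CONNECTED reply.
>             sock.sendall(b"CONNECT\n\n\x00")
>             sock.recv(4096)
>             # Two small messages to the two busiest queues (names made up).
>             for queue in ("/queue/example.a", "/queue/example.b"):
>                 frame = "SEND\ndestination:%s\n\n%s\x00" % (queue, body)
>                 sock.sendall(frame.encode("utf-8"))
>             # Explicit disconnect, then close the socket.
>             sock.sendall(b"DISCONNECT\n\n\x00")
>         finally:
>             sock.close()
>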
> When the high number of sockets in CLOSE_WAIT occurs, we manually disable 
> the producers and the sockets disappear gradually. After that the number of 
> open descriptors stays around 180-190 (mostly open jar files), but it seems 
> to climb again more easily than after a fresh ActiveMQ restart. I have not 
> checked whether anything special happens on the web servers or databases, 
> since their producer and consumer implementations have not changed since 5.0.
> What I did notice is that the memory consumption increases heavily prior to 
> running out of descriptors, and that it climbs again far too fast compared 
> to before 11:45:
> http://achelois.tweakers.net/~acm/tnet/activemq-5.1-memory-consumption.png



--
This message was sent by Atlassian JIRA
(v6.2#6252)
