BTW the other option is to use persistent delivery and turn on async
sending; then you get close to the performance of non-persistent queues
while still being able to handle massive queues, since messages are
spooled to disk.

http://incubator.apache.org/activemq/async-sends.html
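
For example, here's a rough sketch of that combination (the URL and queue
name are just placeholders, and the broker would need persistence enabled
for messages to actually be spooled to disk):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AsyncPersistentSender {
    public static void main(String[] args) throws JMSException {
        // the same option is available on the client URL:
        //   tcp://localhost:61616?jms.useAsyncSend=true
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
        factory.setUseAsyncSend(true); // don't block on a broker ack for every send

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
            session.createProducer(session.createQueue("TEST.QUEUE"));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT); // let the broker spool to disk

        producer.send(session.createTextMessage("hello"));
        connection.close();
    }
}

Async sends trade the per-send broker acknowledgement for throughput, so a
send can return before the broker has actually accepted the message.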

On 6/27/06, bdiesenhaus <[EMAIL PROTECTED]> wrote:

We are running a test with 1 producer and 20 consumers, with the memory limit
on the queue set to 1 GB. When the memory limit is hit, everything is wedged.
What I would expect to happen is that once a message is consumed, the
producer can send another message.

We are using amq 4.0.1.
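
Roughly, the test looks like the sketch below (class and queue names are
illustrative, and everything runs on one connection purely to keep the
sketch short):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class MemoryLimitTest {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // 20 consumers draining the queue, each on its own session
        for (int i = 0; i < 20; i++) {
            Session consumerSession =
                connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer =
                consumerSession.createConsumer(consumerSession.createQueue("TEST.QUEUE"));
            consumer.setMessageListener(new MessageListener() {
                public void onMessage(Message message) {
                    try { Thread.sleep(10); } catch (InterruptedException ignored) {} // simulate work
                }
            });
        }

        // 1 producer flooding the same queue; this send loop is what wedges
        // once the queue's memory limit is reached
        Session producerSession =
            connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
            producerSession.createProducer(producerSession.createQueue("TEST.QUEUE"));
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT); // matches the persistent="false" broker below
        for (int i = 0; ; i++) {
            producer.send(producerSession.createTextMessage("message " + i));
        }
    }
}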

Broker config is:

<beans>

  <broker persistent="false" useJmx="true"
          xmlns="http://activemq.org/config/1.0">

    <memoryManager>
        <usageManager id="memory-manager" limit="2147483648"/>
    </memoryManager>

    <destinationPolicy>
      <policyMap><policyEntries>
          <policyEntry topic=">" memoryLimit="104857600"/>
          <policyEntry queue=">" memoryLimit="104857600"/>
      </policyEntries></policyMap>
    </destinationPolicy>


    <transportConnectors>
       <transportConnector name="default" uri="tcp://localhost:61616"
                           discoveryUri="multicast://bedrock"/>
    </transportConnectors>

  </broker>

</beans>






--

James
-------
http://radio.weblogs.com/0112098/
