Thanks for the quick replies.  We are not concerned with splitting a single 
message, but with distributing overflow messages to the journal.

Is it possible to create a queue with both ring and flow-to-disk policies, backed 
by a journal larger than the in-memory queue? (Image attached).  When element 2000 
is enqueued, element 1 is "purged", all the other elements shift right, and the 
newest element is enqueued at location 1999.  The key here is that the 
in-memory queue is half the size of the journal.
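To make the intended behaviour concrete, here is a rough sketch (plain Python, 
not broker code) of the eviction we have in mind for the in-memory half; the 
class name and capacity are just placeholders taken from the attached diagram:

    from collections import deque

    class RingWindow:
        # Illustrative only: a fixed-size in-memory window that drops the
        # oldest element when a new one arrives.  The hope is that the full
        # sequence would still be retained in the larger on-disk journal.
        def __init__(self, capacity=2000):
            self.window = deque(maxlen=capacity)

        def enqueue(self, element):
            # Once the window is full, appending "purges" the element at
            # position 0 and the newest element lands at the last position.
            self.window.append(element)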

Wes

-----Original Message-----
From: Kim van der Riet [mailto:[email protected]]
Sent: Thu 6/10/2010 10:24 AM
To: [email protected]
Subject: Re: Large persistent queues
 
On Thu, 2010-06-10 at 08:10 -0500, Wes Parish wrote:
> We have a broker with a modest amount of RAM, but a very large disk capacity. 
>  Can we create a small in-memory queue, but a large associated journal (for 
> persistence)?  Basically, we want to allow a persistent queue to hold a 
> larger number of elements than the broker has physical RAM to store in the 
> in-memory queue.
> 
> Wes

Currently the broker cannot split a message while processing it; it must
hold the entire message in memory. Consequently, the machine's memory
imposes a limit on how large a message you can handle.

If a message fits into memory, the flow-to-disk policy can be used to force
messages (or any messages over a pre-set threshold of either size or
count) to disk and release them from memory; they will be reloaded when
they are needed for consumption (at some performance cost). However,
these policies are currently scoped to individual queues and cannot act
across multiple queues or on a global basis.
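As a rough illustration only (the queue name and limits are made up, and the
argument names should be checked against your broker version), a per-queue
policy of this kind can be requested from the Python client by passing
queue-declare arguments in the address string:

    from qpid.messaging import Connection, Message

    conn = Connection("localhost:5672")
    conn.open()
    try:
        session = conn.session()
        # Declare a durable queue whose limit policy pushes message content
        # to disk once a count threshold is reached (values are examples).
        sender = session.sender(
            "big-queue; {create: always, node: {durable: True, "
            "x-declare: {arguments: {'qpid.policy_type': 'flow_to_disk', "
            "'qpid.max_count': 100000}}}}")
        sender.send(Message("test"))
    finally:
        conn.close()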


---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:[email protected]



