Hi Praveen,

I can only really guess at any design decision on that front as it
would have been before my time with the project, but I'd say it's
likely just that way because there's never been a strong need / use
case that actually required doing anything else. For example, with
most of the users I liaise with, the data they are using has at least
some degree of time sensitivity to it, and having anywhere near that
volume of persistent data in the broker would represent some sort of
ongoing period of catastrophic failure in their application. I can
only really think of one group who make it into multi-million-message
backlogs at all, and that usually includes having knowingly published
things which no one will ever consume.

For a FIFO queue you are correct, it would 'just' need to load in more
as required. Things get trickier when dealing with some of the other
queue types however, such as LVQ/conflation and the recently added
sorted queue types. Making the broker able to hold partial segments of
the queue in memory is something we have discussed doing in the past
for other reasons, but message volume hasn't really been a significant
factor in those considerations until now. I will take note of it for
any future work we do in that area though.
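To illustrate what makes conflation awkward to page out, here is a toy
sketch of last-value-queue semantics (this is not the broker's actual
code, and the class/field names are made up): enqueueing a message whose
key is already present replaces the earlier entry in place, so the broker
has to be able to locate that earlier entry wherever it sits in the
queue, whereas a FIFO only ever touches the head and tail.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Toy last-value ("conflation") queue. An enqueue for an existing key
// overwrites the earlier entry in place rather than appending a new one,
// so every live entry must be reachable by key - which is why holding
// only a partial segment of the queue in memory is harder here than for
// a plain FIFO, where only the ends of the queue are ever touched.
class ToyLastValueQueue {
    private final Deque<String[]> entries = new ArrayDeque<>(); // [key, value]
    private final Map<String, String[]> latestByKey = new HashMap<>();

    void enqueue(String key, String value) {
        String[] existing = latestByKey.get(key);
        if (existing != null) {
            existing[1] = value;            // conflate: overwrite in place
        } else {
            String[] entry = {key, value};
            entries.addLast(entry);
            latestByKey.put(key, entry);
        }
    }

    String[] dequeue() {
        String[] head = entries.pollFirst();
        if (head != null) {
            latestByKey.remove(head[0]);    // entry consumed, drop the index
        }
        return head;
    }
}
```

If the segment of the queue containing an older entry for a key had been
paged out to disk, the overwrite above would first have to fault that
segment back in, which is the kind of complication I mean.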

Robbie

On 1 January 2012 17:46, Praveen M <[email protected]> wrote:
> Hi,
>
> I was digging in the code base and was trying to understand how the broker
> is implemented.
> I see that for each message enqueued, certain objects are kept in
> memory, one per message.
>
> example: MessageTransferReference, SimpleQueueEntryImpl etc.
>
> I tried computing the memory footprint of each individual message and it
> amounts to about 320 bytes/message. Because of that per-message
> footprint, if I'm limited to 4GB of memory, then I am limited to only
> about 13 million messages in the system at one point.
>
> Since I'm using a persistent store I'd have expected to go over 13 million
> messages and be limited by disk store rather than physical memory, but
> I realized this isn't the case.
>
> I am curious what the driving points were for this design decision to
> keep a reference to every message in memory. I'd have expected that in
> a FIFO queue you just need a subset of messages in memory and can pull
> in messages on demand rather than maintain a reference to every message
> in memory.
>
> Can someone please explain the reasons for this design? Also, was it
> assumed that we'd never flood the queues past 13 million messages at
> one time? Was there a bound decided upon?
>
> Thank you,
> Praveen
>
>
>
> --
> -Praveen

---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:[email protected]
