Thanks for the replies, Rafael.

> Rafael Schloming <[email protected]> wrote:
> Am I correct in thinking that the total number of subscriptions would be
> (600-800)*100, i.e. 60,000-80,000 queues? If so that may be a lot, but the
> C++ guys can probably give you more details.

No, I was talking about 600-800 clients, each with 100 or so subscriptions on
the same queue. So that still makes only 800 queues at the client side and 800
at the server. Does AMQP require multiple queues for multiple subscriptions,
or would a single HUGE subscription be the way to go?

(Name = '*jones' AND Address = '*Melbourne') OR (Name = '*joneses' AND
Address = '*Sydney') .......

> Without a fairly specialized setup I believe it would be difficult to
> precisely measure the latency at each point of conduct.

I do have a fairly specialized setup available to do precisely this, as it
would be daunting for my project to hit a throughput / latency issue down
the line. Are there any similar test notes that I can access?

Thanks again
gs

On Tue, Feb 10, 2009 at 4:06 AM, Rafael Schloming <[email protected]> wrote:

>
> GS.Chandra N wrote:
>
>> Hi,
>>
>> I'm testing the qpid distribution using the pub-sub examples that come as
>> part of the python distribution and have certain doubts.
>>
>> 1. Is ttl handled only by brokers? At which points can a message be
>> dropped? Is there a way to track how many such messages were dropped? I'm
>> trying to look at this feature as a way to reduce bandwidth requirements
>> for slow clients for data which is useless after max n secs. Is this the
>> way this feature is supposed to be used?
>>
>
> Only the broker looks at ttls, and within a broker, only the queue will
> drop messages based on ttl. If the message lingers in a queue for longer
> than the ttl, then the message will be discarded rather than being
> delivered.
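To make the drop rule concrete, here is a minimal sketch of the ttl check as
described above. This is illustrative plain Python, not broker internals; the
function name and the millisecond convention are assumptions for the sketch.

```python
import time

def expired(enqueue_time, ttl_ms, now=None):
    """True if a message has lingered on the queue longer than its ttl.

    ttl_ms is in milliseconds (the AMQP convention); 0 or None means the
    message never expires. Illustrative only -- the broker applies this
    rule itself when it considers a message for delivery.
    """
    if not ttl_ms:
        return False
    now = time.time() if now is None else now
    return (now - enqueue_time) * 1000.0 > ttl_ms

# A message enqueued 5 seconds ago with a 2-second ttl is discarded
# rather than delivered; with no ttl it stays.
print(expired(enqueue_time=0.0, ttl_ms=2000, now=5.0))  # True
print(expired(enqueue_time=0.0, ttl_ms=None, now=5.0))  # False
```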
>
> I don't believe there is currently a way to track how many messages are
> dequeued due to ttl vs dequeued due to delivery. It sounds like an excellent
> candidate for a feature request though. Feel free to file a JIRA for one. ;)
>
>  2. The original publisher sample sets the message destination to be the
>> amq.topic. I tried modifying the publisher to send the message to the
>> amq.fanout exchange (session.message_transfer(destination="amq.fanout",
>> message=....)).
>>
>> But my client which subscribes from the amq.topic exchange does not get
>> the
>> message. Why is this so? My understanding was that fanout would send a
>> copy
>> to ALL queues in the system?
>>
>
> It sends to all queues bound to the amq.fanout exchange, but you still need
> to explicitly bind your queue to amq.fanout for it to receive messages sent
> there.
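A toy model of the routing rule may help show why the message never arrived.
This is plain Python standing in for the broker's binding table, not the
qpid API; the queue name is made up.

```python
# Each exchange delivers a copy of a published message to every queue
# bound to it -- and only to those queues. A queue bound to amq.topic
# gets nothing from amq.fanout until it is explicitly bound there too.
bindings = {
    "amq.topic":  {"my-queue"},  # bound by the original pub-sub example
    "amq.fanout": set(),         # no bindings -> fanout has nowhere to send
}

def publish(exchange):
    """Return the set of queues that receive a copy of the message."""
    return set(bindings.get(exchange, set()))

print(publish("amq.fanout"))  # set() -- no binding, no delivery

# After an explicit bind (exchange.bind in the real protocol):
bindings["amq.fanout"].add("my-queue")
print(publish("amq.fanout"))  # {'my-queue'}
```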
>
>  3. How expensive are multiple subscriptions using the amq.headers exchange?
>> I'm trying to evaluate a use case where there would be hundreds (600-800)
>> of clients per broker, with each having multiple subscriptions for multiple
>> data items (100 or so). I'm not sure if it's better to create a single HUGE
>> subscription for all the data or multiple ones. Or is it better to simply
>> do all the filtering at the client end? (WAN links, so the amount of data
>> transferred is a sensitive issue.) Any guidance would be really appreciated
>> here.
>>
>
> Am I correct in thinking that the total number of subscriptions would be
> (600-800)*100, i.e. 60,000-80,000 queues? If so that may be a lot, but the
> C++ guys can probably give you more details.
>
>  4. What is the preferred way to measure latency / throughput at each point
>> of conduct? Does qpid have any special headers that can track this
>> automatically, or should this be done by the app itself?
>>
>
> I believe the cpp build includes a latencytest utility that might be of
> use, or at least a good example to work from. I don't know firsthand, but I
> believe its testing is based on round trip times at a given (fixed)
> throughput.
>
> There are two tricky issues when measuring latency: clock synchronization,
> and the latency/throughput tradeoff. The clock synchronization issue can be
> avoided by measuring the complete round trip time of a full
> request/response. This way you're comparing two timestamps from the same
> machine, and so you don't need to worry about precisely synchronizing clocks
> on two separate machines.
>
> The latency/throughput tradeoff happens because at higher throughputs
> various queues and I/O buffers start backing up, and this adversely affects
> latency. This is why we measure latency at a fixed throughput.
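The backing-up effect can be illustrated with the textbook M/M/1 queue, where
mean time in the system is 1/(mu - lambda). This is a generic queueing-theory
sketch, not a model of Qpid itself, and the service rate below is made up.

```python
def mm1_latency(throughput, service_rate):
    """Mean time a message spends in an M/M/1 queue: 1 / (mu - lambda).

    As offered throughput (lambda) approaches the service rate (mu),
    queues back up and latency grows without bound -- which is why
    latency is measured at a fixed throughput.
    """
    assert throughput < service_rate, "unstable at or above the service rate"
    return 1.0 / (service_rate - throughput)

mu = 10_000.0  # msgs/sec the broker can service (made-up number)
for lam in (1_000, 5_000, 9_000, 9_900):
    print(f"{lam:>6} msg/s -> {mm1_latency(lam, mu) * 1000:.3f} ms")
```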
>
> Without a fairly specialized setup I believe it would be difficult to
> precisely measure the latency at each point of conduct.
>
> --Rafael
>
>
> ---------------------------------------------------------------------
> Apache Qpid - AMQP Messaging Implementation
> Project:      http://qpid.apache.org
> Use/Interact: mailto:[email protected]
>
>
