On Tue, Feb 10, 2009 at 8:11 PM, Rafael Schloming <[email protected]> wrote:
> We could certainly add some API for dispatching all the incoming messages
> from distinct server queues into a single local queue or listener. Is this
> what you're thinking?

Yes. This way all the client needs to do is create the bindings to the local
exchange and then start receiving messages. The local queue need not be
exposed if special features like persistence are not required, IMHO. That's
fewer things for the code to track and for the newbie to wrap their head around.

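(For reference, here is roughly what a subscriber has to spell out today with
the Python client discussed in this thread; this is a sketch from memory, and
exact method names may differ between Qpid versions.)

    # Sketch of the explicit steps a subscriber currently manages itself.
    # Assumes the 0-10 style Python session API; details may vary by version.
    from qpid.util import connect
    from qpid.connection import Connection
    from qpid.datatypes import uuid4

    connection = Connection(sock=connect("localhost", 5672))
    connection.start()
    session = connection.session(str(uuid4()))

    session.queue_declare(queue="my-queue", exclusive=True)    # the queue to hide
    session.exchange_bind(exchange="amq.direct", queue="my-queue",
                          binding_key="my-key")                # the binding to hide
    session.message_subscribe(queue="my-queue", destination="dest")
    incoming = session.incoming("dest")
    incoming.start()                                           # issue credit
    message = incoming.get(timeout=10)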

Thanks for the replies, Rafael. Hopefully I'll now be able to complete my
tests and post some results here.

Thanks
gs

On Tue, Feb 10, 2009 at 8:11 PM, Rafael Schloming <[email protected]> wrote:

> GS.Chandra N wrote:
>
>> Thanks for the detailed reply Rafael. Might I ask for some clarification
>> on the answers provided?
>>
>> Rafael Schloming <[email protected]> wrote:
>>>
>>> One of the message properties is a free-form map you can use like this:
>>>
>>> mp = session.message_properties(application_headers={"my_field": val,
>>> ...})
>>>
>>
>> Are these the top-level application headers that the amq.match headers
>> exchange matches on when routing messages? If so, how do I read them off
>> at the other end?
>>
>
> Yes, the headers exchange operates on the application_headers map. You can
> access them this way:
>
> msg_props = message.get("message_properties")
> app_hdrs = msg_props.application_headers
>
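(Putting the two replies together, the receiving side would look something
like the sketch below, assuming a session set up as in the earlier sketch.
The queue and binding names are illustrative, and the binding arguments
follow the AMQP 0-10 x-match convention; details may vary by version.)

    # Sketch: bind a queue to amq.match on "my_field", then read the header back.
    # Illustrative names; binding arguments use the 0-10 x-match convention.
    session.queue_declare(queue="matched", exclusive=True)
    session.exchange_bind(exchange="amq.match", queue="matched", binding_key="b1",
                          arguments={"x-match": "all", "my_field": "some value"})

    session.message_subscribe(queue="matched", destination="m")
    incoming = session.incoming("m")
    incoming.start()

    message = incoming.get(timeout=10)
    msg_props = message.get("message_properties")
    app_hdrs = msg_props.application_headers    # e.g. {"my_field": "some value"}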
>>>> 4. Is there any api that creates a new queue and connects directly to the
>>>> destination without this being explicitly handled in the code?
>>>
>>> What exactly do you mean by connect? Publish? Subscribe? Bind?
>>>
>>
>> Hmmm, I was comparing the subscriber code with that of the publisher. It
>> seemed as if the publisher was a lot simpler because it was not creating
>> any queues / binding to the exchange etc. How does message transfer work
>> without queues at the publisher?
>>
>
> Actually, even in the simple publish scenario the queue is present on
> publish as well. It's specified in the routing_key, which tells the
> default exchange where to put the message.
>
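(In code that looks roughly like the following, again assuming the session
from the earlier sketch; destination "" is the default exchange, which routes
on the routing_key, and the queue name here is illustrative.)

    # Sketch: publish through the default exchange; the routing_key names the
    # server queue, so the publisher declares no queues and makes no bindings.
    from qpid.datatypes import Message

    dp = session.delivery_properties(routing_key="my-queue")   # queue name as key
    mp = session.message_properties(application_headers={"my_field": "some value"})
    session.message_transfer(destination="",                   # "" = default exchange
                             message=Message(mp, dp, "hello"))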
>> I was thinking along the same lines for clients - like having an invisible
>> default queue / session itself for all incoming data, if no explicit queue
>> name is specified in the subscription APIs. It would make for a simpler
>> model for the first-timer, especially if they do not need multiple queues.
>>
>
> We could certainly add some API for dispatching all the incoming messages
> from distinct server queues into a single local queue or listener. Is this
> what you're thinking?
>
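(Until such an API exists, an application can get much the same effect
itself; a sketch using only queue.get, threads, and the standard library.
The drain helper, the inbox, and the queue names are made up for
illustration and are not part of the Qpid client API.)

    # Sketch: funnel several server queues into one local Python queue.
    # "drain", "inbox" and the queue names are hypothetical application code.
    import threading, Queue

    inbox = Queue.Queue()                      # single place the app reads from

    def drain(incoming):
        while True:
            inbox.put(incoming.get())          # blocking get, forwarded to inbox

    for i, name in enumerate(["queue-a", "queue-b"]):
        dest = "dest-%d" % i
        session.message_subscribe(queue=name, destination=dest)
        incoming = session.incoming(dest)
        incoming.start()
        worker = threading.Thread(target=drain, args=(incoming,))
        worker.setDaemon(True)
        worker.start()

    message = inbox.get()                      # everything arrives here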
>> Will multiple queues help me with scaling? Is it possible to do queue.get
>> from multiple threads without an underlying lock synchronizing it?
>>
>
> Multiple server queues may help, depending on your app, but I wouldn't
> expect multiple client queues to be a huge factor one way or another.
>
>>> I believe there is a qpid-queue-stats command that will report things like
>>> the enqueue/dequeue statistics on the console. For the client end you'd
>>> have to track your own statistics.
>>
>> Hmmm yes - is there a plan for any future releases to address this? It
>> would have been cool if the same qpid-tool could connect to the clients too
>> and get the required details off them. It would make management simpler.
>> The project that I'm investigating qpid for would have MANY distributed
>> processes, and it would be impossible to maintain it without some tool that
>> can periodically read off the stats and alert the user. Do the client
>> libraries publish some sort of stats?
>>
>
> No. I'd be hesitant to make the clients automatically publish stats, but
> I could imagine tracking some stats in the library and making them
> available to the application to publish if/when it chooses.
>
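(In the meantime an application can keep its own counters and publish them
when it wants to; a sketch, again assuming the session from the earlier
sketch, with the counter and the "stats" routing key made up for
illustration.)

    # Sketch: count received messages in the application and publish the count
    # on demand. The counter and the "stats" routing key are illustrative only.
    from qpid.datatypes import Message

    received = 0

    def handle(message):
        global received
        received += 1
        # ... normal processing of message.body ...

    def publish_stats():
        dp = session.delivery_properties(routing_key="stats")
        mp = session.message_properties(application_headers={"received": received})
        session.message_transfer(destination="", message=Message(mp, dp, ""))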
> --Rafael
>
>
> ---------------------------------------------------------------------
> Apache Qpid - AMQP Messaging Implementation
> Project:      http://qpid.apache.org
> Use/Interact: mailto:[email protected]
>
>
