On 10/07/2011 10:48 AM, Fraser Adams wrote:
Gordon Sim wrote:
On 10/05/2011 09:32 AM, Luca Martini wrote:
However, I would like to add a binding to an already created
Receiver.
Is that possible?

Not directly through the messaging API, no.

Is that statement 100% correct, Gordon?

I guess it depends on how you interpret 'directly'. What I meant was that the API does not support modifying the bindings or other address properties for a receiver once it is created.

The approach you describe adds bindings when creating new receivers, using the auto-create feature. That approach can be used to alter the messages a previously created receiver receives, but I view it more as a side-effect.

You are quite correct though in pointing out another option.

With the headers exchange at least, I've found that if I specify an
address that creates a queue and adds a binding, say:

testqueue; {create: receiver, node: {x-declare: {arguments:
{'qpid.policy_type': ring, 'qpid.max_size': 500000000}}, x-bindings:
[{exchange: 'amq.match', queue: 'testqueue', key: 'data1', arguments:
{x-match: all, data-service: amqp-delivery, item-owner: fadams}}]}}

If I then change the binding key and arguments thus:

testqueue; {create: receiver, node: {x-declare: {arguments:
{'qpid.policy_type': ring, 'qpid.max_size': 500000000}}, x-bindings:
[{exchange: 'amq.match', queue: 'testqueue', key: 'data2', arguments:
{x-match: all, data-service: amqp-delivery, item-owner: jdadams}}]}}

I end up with two bindings to testqueue. It also does this if I don't
change the key, though in that case the broker (rightly) generates a
warning. IIRC the 0.8 broker didn't warn about this.
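For anyone who wants to reproduce this from code, here's a minimal sketch
using the qpid::messaging C++ API (broker assumed on localhost:5672; the
addresses are just the two above):

#include <qpid/messaging/Connection.h>
#include <qpid/messaging/Receiver.h>
#include <qpid/messaging/Session.h>

using namespace qpid::messaging;

int main() {
    Connection connection("localhost:5672");
    connection.open();
    Session session = connection.createSession();

    // First receiver: auto-creates testqueue, bound to amq.match with
    // binding key 'data1'.
    Receiver r1 = session.createReceiver(
        "testqueue; {create: receiver, node: {x-declare: {arguments:"
        " {'qpid.policy_type': ring, 'qpid.max_size': 500000000}},"
        " x-bindings: [{exchange: 'amq.match', queue: 'testqueue',"
        " key: 'data1', arguments: {x-match: all, data-service:"
        " amqp-delivery, item-owner: fadams}}]}}");

    // Second receiver on the same queue with key 'data2': testqueue now
    // has two bindings, so it collects messages matching either.
    Receiver r2 = session.createReceiver(
        "testqueue; {create: receiver, node: {x-declare: {arguments:"
        " {'qpid.policy_type': ring, 'qpid.max_size': 500000000}},"
        " x-bindings: [{exchange: 'amq.match', queue: 'testqueue',"
        " key: 'data2', arguments: {x-match: all, data-service:"
        " amqp-delivery, item-owner: jdadams}}]}}");

    // The bindings live on the queue, not the receiver, so closing r2
    // straight away still leaves both bindings in place for r1.
    r2.close();

    connection.close();
    return 0;
}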

TBH I sometimes wonder whether this behaviour is more of a bug than a
feature :-) It leads to counterintuitive results when people experiment
with bindings, as they tend to assume that just changing the binding in
the address is OK. Imagine their surprise when they find out they've got
ten bindings instead of just the most recent :-D I've done it myself,
and I know about the behaviour :-(

Sometimes it has proved useful though. When I discovered the
qpid::messaging AddressParser problem, where it was adding bindings with
binary values, my dirty workaround (before I came up with the
"utf8EncodeAddress()" fix I posted last week) was to connect, but not
consume, from a Java client using the same address, so the queue ended up
with two bindings that "looked" the same.

It might be different with the topic exchange though.
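For what it's worth, the equivalent sort of address against the topic
exchange would look something like this (the binding key is purely
illustrative):

testqueue; {create: receiver, node: {x-bindings: [{exchange: 'amq.topic',
queue: 'testqueue', key: 'data.#'}]}}

but I haven't actually tested whether bindings accumulate there in the
same way.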

My concern is
that, with hundreds (or thousands) of subscriptions, the broker could
become a bottleneck for our system. I know I'm being rather vague, but
any suggestions or remarks would be much appreciated.

I suspect it depends a lot on the details of the bindings, exchange
and matching patterns. If each message matches only one binding, there
is probably no great benefit to using the same queue in each binding.
However, if the bindings overlap a lot then this may change.


I suspect that if there are hundreds or thousands of bindings there
might actually be a disbenefit to using the same queue in each binding.
I'm still trying to get under the skin of the MRG whitepaper, but IIRC
it talked about using multiple queues on a multi-core box. I don't know
how qpidd's threading actually works; with my box at home I've never
been able to drive it hard enough to show more than 100% CPU. I'm
assuming that with qpid-perftest, multiple boxes and a decent network
one could utilise all the cores.
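To make that concrete, here's a sketch of spreading consumers across
several queues (one binding per queue) rather than piling every binding
onto one shared queue (the names, the number of queues and the key scheme
are all illustrative):

#include <qpid/messaging/Connection.h>
#include <qpid/messaging/Receiver.h>
#include <qpid/messaging/Session.h>

#include <sstream>
#include <vector>

using namespace qpid::messaging;

int main() {
    Connection connection("localhost:5672");
    connection.open();
    Session session = connection.createSession();

    // One queue per consumer, each with a single binding, instead of one
    // queue carrying hundreds of bindings.
    std::vector<Receiver> receivers;
    for (int i = 0; i < 4; ++i) {
        std::ostringstream addr;
        addr << "queue" << i << "; {create: receiver, node: {x-bindings:"
             << " [{exchange: 'amq.match', queue: 'queue" << i << "',"
             << " key: 'data" << i << "', arguments: {x-match: all,"
             << " data-service: amqp-delivery}}]}}";
        receivers.push_back(session.createReceiver(addr.str()));
    }

    // ... fetch from each receiver as usual ...
    connection.close();
    return 0;
}

Of course a single connection probably serialises much of that anyway;
to give the broker's worker threads something to chew on you'd presumably
want a connection (or a process) per consumer.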

That said, I do wonder about the statement "the broker could become a
bottleneck for our system". I'd be pretty impressed to see that; you're
far more likely to hit network saturation well before the broker has any
issues.





---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project: http://qpid.apache.org
Use/Interact: mailto:[email protected]


