Hi,

I've been working on a simple patch to qpidd that ports the AMQP 1.0 module to 
the new event interface provided by proton 0.8.  See 
https://issues.apache.org/jira/browse/QPID-6255 for the patch.

With the above patch I've noticed a small but consistent drop in overall 
qpidd performance as gauged by qpid-cpp-benchmark (see the comments in the 
above JIRA).  It turns out the event loop is doing a lot more work than the 
polled approach for the same message load.

Digging around a bit, I found a couple of issues that cause qpidd to do a lot 
of unnecessary work:

1) The PN_TRANSPORT event isn't needed by this driver - pending output is 
checked manually at a later point.  In my test, approximately 25% of the 
total events are PN_TRANSPORT events, which the driver simply discards.

2) A more serious issue - I get a PN_LINK_FLOW event for _every_ message 
transfer!  It turns out that PN_LINK_FLOW is issued for two different 
conditions (IMHO): when a flow frame is received (yay) and each time a 
transfer is done and credit is consumed (ugh).

Item #2 seems like a bug - these two conditions have different semantic 
meanings and would likely take different processing paths in the driver (in 
the case of qpidd, the credit-consumed case would simply be ignored; see the 
sketch below).
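
For concreteness, here's a minimal sketch of the dispatch loop this driver 
ends up with.  The pn_collector_*/pn_event_* calls are proton's C event API; 
the handle_* functions are made-up driver glue:

    #include <proton/event.h>

    static void handle_flow(pn_link_t *link);   /* hypothetical driver glue */
    static void handle_other(pn_event_t *e);    /* hypothetical driver glue */

    static void process_events(pn_collector_t *collector)
    {
        pn_event_t *e;
        while ((e = pn_collector_peek(collector)) != NULL) {
            switch (pn_event_type(e)) {
            case PN_TRANSPORT:
                /* issue #1: ~25% of all events in my test; the driver
                   checks for pending output itself later, so these are
                   simply discarded */
                break;
            case PN_LINK_FLOW:
                /* issue #2: fires on a received flow frame AND on every
                   local credit consumption, i.e. once per transfer */
                handle_flow(pn_event_link(e));
                break;
            default:
                handle_other(e);   /* opens, closes, deliveries, etc. */
                break;
            }
            pn_collector_pop(collector);
        }
    }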

I propose we fix #2 by breaking that event up into two separate events: 
something like PN_LINK_REMOTE_FLOW for when flow is granted, and 
PN_LINK_LOCAL_FLOW for when credit is consumed.  (Not in love with these 
names, btw, but they seem consistent with the endpoint states.)
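
With that split, the PN_LINK_FLOW arm in the sketch above would become 
something like the following.  (Again, these event names are just the 
proposal, not existing API):

    case PN_LINK_REMOTE_FLOW:   /* proposed: a flow frame arrived */
        handle_flow(pn_event_link(e));
        break;
    case PN_LINK_LOCAL_FLOW:    /* proposed: local credit consumed */
        break;                  /* qpidd would simply ignore these */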

Furthermore, I think the event API would benefit from a way to 'opt in' to 
specific events.  For example, qpidd would not want to receive either 
PN_TRANSPORT or PN_LINK_LOCAL_FLOW events.
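
I don't have a concrete API in mind, but as a pure strawman (neither 
pn_collector_disable() nor the PN_LINK_LOCAL_FLOW type exists in proton 
today), something along these lines at setup time:

    /* strawman only - pn_collector_disable() is NOT a real proton call */
    pn_collector_t *collector = pn_collector();
    pn_collector_disable(collector, PN_TRANSPORT);
    pn_collector_disable(collector, PN_LINK_LOCAL_FLOW);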

I've hacked my proton library to avoid generating PN_TRANSPORT and 
PN_LINK_FLOW on local credit consumption, and that brings performance back 
to parity with the existing polled approach.

Does this make sense?  Other ideas?


-- 
-K
