On Wed, Dec 10, 2014 at 11:23 AM, Ken Giusti <kgiu...@redhat.com> wrote:

> Hi,
>
> I've been working on a simple patch to qpidd that ports the AMQP 1.0
> module to the new event interface provided by proton 0.8.  See
> https://issues.apache.org/jira/browse/QPID-6255 for the patch.
>
> With the above patch, I've noticed a pretty consistent small drop in
> overall qpidd performance as gauged by qpid-cpp-benchmark (see comments in
> above jira).  Turns out the event loop is doing a lot more work when
> compared to the polled approach for the same message load.
>
> Digging around a bit, I found a couple of issues that result in qpidd
> doing a lot of unnecessary work:
>
> 1) The PN_TRANSPORT event isn't needed by this driver - pending output is
> manually checked at a later point.  In my test, approximately 25% of the
> total events are PN_TRANSPORT events, which the driver simply discards.
>
> 2) A more serious issue - I get a PN_LINK_FLOW event for _every_ message
> transfer!  Turns out that PN_LINK_FLOW is issued in two different
> circumstances (IMHO): when a flow frame is received (yay) and each time a
> transfer is done and credit is consumed (ugh).
>
> Item #2 seems like a bug - these two cases have different semantic
> meanings and would likely result in different processing paths in the
> driver (in the case of qpidd, the credit-consumed case would be ignored).
>

It's not a bug; it was added intentionally, since there are circumstances
where you get stalls if you don't have it. Each time a message is sent you
are actually updating the credit values on the link, so if you don't
generate the event at that point, you need some fairly complex/subtle code
in your handlers to compensate. (You may actually have a bug yourself now,
depending on how you've integrated the engine.)
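
To make that concrete, the pattern the credit-consumed event is meant to
support is a sender that just tops itself up whenever PN_LINK_FLOW fires.
Rough, untested sketch against the 0.8 C API (the payload and delivery-tag
handling are placeholders):

/* Rough sketch of a PN_LINK_FLOW handler for a sender link (proton 0.8 C
   API).  The payload and delivery tags here are placeholders. */
#include <proton/engine.h>
#include <stdio.h>
#include <string.h>

static void on_link_flow(pn_event_t *event)
{
    pn_link_t *link = pn_event_link(event);
    if (!pn_link_is_sender(link)) return;

    static int seq = 0;
    const char *body = "hello";     /* placeholder payload */

    /* Keep sending while the peer has granted credit.  Sending consumes
       credit, which is exactly why the engine raises PN_LINK_FLOW again:
       the handler never needs extra state to resume a stalled sender. */
    while (pn_link_credit(link) > 0) {
        char tag[32];
        int len = snprintf(tag, sizeof(tag), "d-%d", seq++);
        pn_delivery(link, pn_dtag(tag, len));
        pn_link_send(link, body, strlen(body));
        pn_link_advance(link);
    }
}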

The event shouldn't be generated for every message though, just once for
each batch of messages. In other words, if I stuff a bunch of messages into
the engine at the same time, there should be only one flow event produced
once the engine has finished writing the messages to the wire. If you're
actually observing one flow event per message, then either there is a bug
in the logic that elides duplicate events, or you're stuffing one message
into the engine at a time.
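
For reference, the kind of driver loop I mean looks roughly like this
(sketch only; do_io() and handle() are stand-ins for whatever pumps the
socket and dispatches events in qpidd):

/* Sketch of a driver loop that batches work before draining the collector
   (proton 0.8 C API). */
#include <proton/engine.h>

void do_io(pn_transport_t *transport);   /* placeholder: socket read/write */
void handle(pn_event_t *event);          /* placeholder: application dispatch */

void pump(pn_connection_t *connection, pn_transport_t *transport,
          pn_collector_t *collector)
{
    pn_connection_collect(connection, collector);   /* events land here */

    for (;;) {
        /* Push as much inbound/outbound work through the engine as possible
           before looking at events; messages stuffed in as a batch should
           yield at most one PN_LINK_FLOW per link. */
        do_io(transport);

        pn_event_t *event;
        while ((event = pn_collector_peek(collector))) {
            if (pn_event_type(event) != PN_TRANSPORT)  /* not needed here */
                handle(event);
            pn_collector_pop(collector);
        }
    }
}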


>
> I propose we fix #2 by breaking up that event into two separate events,
> something like PN_LINK_REMOTE_FLOW for when flow is granted, and
> PN_LINK_LOCAL_FLOW for when credit is consumed (not in love with these
> names btw, but they seem consistent with the endpoint states).
>
> Furthermore, I think the event API would benefit from a way to 'opt in' to
> specific events.  For example, for qpidd we would not want to receive
> PN_TRANSPORT or PN_LINK_LOCAL_FLOW events.
>
> I've hacked my proton library to avoid generating PN_TRANSPORT and
> PN_LINK_FLOW on local credit consumption, and that results in performance
> parity with the existing polled approach.
>
> Does this make sense?  Other ideas?


I don't think it makes sense to introduce a new event type. The two events
mean very much the same thing, i.e. credit levels have changed. A distinct
event type would only make sense if there were common cases where you
wanted to react to the two different events differently, and having them
be the same made that difficult or cumbersome. Of course that doesn't
preclude us from adding some configuration to the engine that lets us
control when the flow event is produced; however, as Andrew points out,
that isn't very compatible with the whole "event bus" model of engine use.

I think it's worth investigating exactly why you're getting so many of
them, since I wouldn't expect there to be one per message unless you are
somehow forcing only one message to go through the engine at a time. It
would also probably be good to look at exactly why ignored events are so
expensive, and perhaps to isolate/measure how expensive they actually are,
as I'm not entirely convinced there aren't other factors in your case.
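
A cheap way to get that data would be to tally event types as they come
off the collector, something along these lines (sketch):

/* Sketch: count events by type as they are popped off the collector, to
   see how much of the load is PN_TRANSPORT / PN_LINK_FLOW (proton 0.8 C
   API).  MAX_EVENT_TYPE just needs to exceed the enum's largest value. */
#include <proton/engine.h>
#include <stdio.h>

#define MAX_EVENT_TYPE 64
static unsigned long counts[MAX_EVENT_TYPE];

void count_event(pn_event_t *event)      /* call once per dispatched event */
{
    int type = (int)pn_event_type(event);
    if (type >= 0 && type < MAX_EVENT_TYPE) counts[type]++;
}

void dump_counts(void)                   /* call at the end of a benchmark run */
{
    for (int i = 0; i < MAX_EVENT_TYPE; i++)
        if (counts[i])
            printf("event type %d: %lu\n", i, counts[i]);
}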

I'm also curious how your usage of the engine compares to mick's examples,
as he seems to be getting pretty decent numbers now.

--Rafael
