On Mon, Dec 16, 2013 at 3:11 PM, Fraser Adams <[email protected]
> wrote:

> D'oh......
> OK so I reread your notes around the async python code and noted:
>
>
> "Also, the status is not updated when a message is settled without an
> explicit disposition. I believe this is just a bug. The workaround is to
> set a nonzero incoming window and explicitly accept/reject at the receiver."
>
> that turned out to be more significant than I had realised (slaps head
> very hard with back of hand......) adding
> pn_messenger_set_outgoing_window(messenger, 1024);
> pn_messenger_set_incoming_window(messenger, 1024);
>
> made it work .......
>
>
> would you perhaps be able to explain what's going on with respect to
> "disposition" and "window" here? Am I correct in thinking that the received
> message *should* be implicitly accepted? I've *no idea* what the optimal
> window sizes would be so I copied your values.
>

Ok, so this kind of benefits from an understanding of how acknowledgments
work in the 1.0 protocol. Unlike 0-x the 1.0 acknowledgement semantics are
factored into two independent aspects, namely delivery state, and
settlement. The delivery state encodes information about the processing of
a delivery, things like ACCEPTED, REJECTED, RELEASED, etc. Settlement, on
the other hand, deals exclusively with whether an endpoint retains state
regarding a delivery. In other words, settling a delivery is really
synonymous with forgetting about its existence. The disposition frame in
the protocol is used to communicate when changes occur around delivery
state or settlement. It's important to note that these things can happen
somewhat independently, e.g. a delivery can be settled before its delivery
state reaches any meaningful value.
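To make the independence of the two aspects concrete, here is a toy Python model (not the proton API, just an illustration of the semantics described above): a delivery carries a delivery state and a settled flag, and the two can change independently, so a delivery can be settled while its state is still unset.

```python
from dataclasses import dataclass
from enum import Enum

class DeliveryState(Enum):
    NONE = "none"          # no meaningful state assigned yet
    ACCEPTED = "accepted"
    REJECTED = "rejected"
    RELEASED = "released"

@dataclass
class Delivery:
    """Toy model: delivery state and settlement are independent aspects."""
    state: DeliveryState = DeliveryState.NONE
    settled: bool = False

# A delivery can be settled before its state reaches any meaningful value:
d = Delivery()
d.settled = True
print(d.settled, d.state)  # True DeliveryState.NONE
```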

To map this back to the incoming/outgoing windows, these windows really
control settlement, not delivery state. The delivery state is controlled by
the explicit use of accept/reject in the API. So a window of size N means
that the endpoint will remember the state of the last N deliveries. When
deliveries roll off this window, they will be settled automatically, and in
the case of the incoming window this can happen before a delivery state is
explicitly assigned via use of accept or reject. In fact by default the
incoming window is zero, and so incoming deliveries are settled as soon as
they are read off of the wire, and there is no opportunity to explicitly
accept/reject them.
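The roll-off behavior can be sketched as follows. This is a toy simulation of the incoming window as described above (again, not the proton API): the endpoint remembers the state of the last N deliveries, anything older is settled automatically with whatever state it happens to have, and a zero window settles every delivery the moment it is read.

```python
from collections import deque

class IncomingWindow:
    """Toy model: remember the state of the last `size` deliveries;
    deliveries that roll off the window are settled automatically,
    whether or not they were ever accepted/rejected."""
    def __init__(self, size):
        self.size = size
        self.tracked = deque()  # deliveries whose state we still hold
        self.settled = []       # (delivery, state_at_settlement)

    def receive(self, delivery):
        if self.size == 0:
            # Zero window: settled as soon as it is read off the wire,
            # with no opportunity to accept/reject.
            self.settled.append((delivery, None))
            return
        self.tracked.append(delivery)
        while len(self.tracked) > self.size:
            old = self.tracked.popleft()
            # Auto-settled on roll-off; delivery state was never assigned.
            self.settled.append((old, None))

w = IncomingWindow(size=2)
for m in ["d1", "d2", "d3"]:
    w.receive(m)
print(w.settled)        # [('d1', None)]  -- settled with null state
print(list(w.tracked))  # ['d2', 'd3']
```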

So to answer your question, incoming messages (or deliveries, to be more
precise) are never implicitly accepted by messenger; however, they are
implicitly settled, and can in fact be implicitly settled prior to being
accepted. This is in fact what is happening here. It results in a disposition
frame being sent indicating that the delivery is settled and that the
delivery state is null (its default value). The bug is that the sender
should notice that the receiver settled the delivery and update the status
of the delivery as visible through the messenger's API. It is in fact not
doing this, so the delivery remains in the PENDING state at the
sender.

Regarding the sizes of the windows, there isn't really an optimal size per
se. The numbers chosen have more to do with application semantics than with
performance tuning directly. These aren't credit windows and won't in and
of themselves cause any throttling to occur; it's just that deliveries
rolling off the window will be forgotten. That said, of course if your
application needs to track the status of every single delivery, then it
will need to ensure that it never sends more than whatever the outgoing
window size is, and it may need to throttle itself and/or choose a large
enough window to ensure that it doesn't lose information.
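A quick sketch of that last point (a toy model, not the proton API): if the application sends more deliveries than the outgoing window holds, the status of the oldest ones is simply forgotten, so an application that needs per-delivery status must either throttle itself or size the window to cover all outstanding sends.

```python
from collections import deque

OUTGOING_WINDOW = 3

# State is kept only for the last OUTGOING_WINDOW sends; older
# deliveries roll off and their status is no longer tracked.
window = deque(maxlen=OUTGOING_WINDOW)
for i in range(5):
    window.append(f"delivery-{i}")

# The two oldest deliveries have been forgotten:
print(list(window))  # ['delivery-2', 'delivery-3', 'delivery-4']
```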


I'm fairly familiar with the qpid::messaging setCapacity() stuff and how
> that helps tune the internal queues, but I'm not clear at all how messenger
> concepts map to the qpid::messaging/JMS stuff that I'm more familiar with.
> I can see how to get the internal queue depth, but I'm not clear if it's
> bounded or unbounded - is the window stuff the equivalent of the
> sender/receiver setCapacity() or something else entirely?
>

Hopefully this is a bit clearer now. An incoming window of size zero is
roughly analogous to JMS auto-ack mode. Setting it to a positive value is
roughly analogous to client-ack mode.

The setCapacity stuff in the messaging API is really about flow control and
so is pretty orthogonal to the incoming/outgoing window.


>
> I don't suppose that you have a "messenger for qpid::messaging
> programmers" kick start guide do you :-D
>
> As I get a bit more familiar I'd really like to understand how to get the
> best performance out of this - though I'm probably a little way off that at
> the moment - are there any plans to add "proton perftest" code as a sort of
> canonical "here's how to make proton rock" guide? That'd be *really*
> useful.
>

We've had a couple of fits and starts in this direction. I think Ken may
have done some work on this. We probably could use a bit more of a
concerted effort to resurrect/complete/tie in some of this stuff.


>
>
> Anyway thanks yet again for all of your help, I'm definitely making
> progress now.
>

Excellent, hope to hear more soon.

--Rafael
