On Wed, Mar 6, 2013 at 7:35 AM, Ted Ross <tr...@redhat.com> wrote:

> On 03/06/2013 10:09 AM, Rafael Schloming wrote:
>> On Wed, Mar 6, 2013 at 6:52 AM, Ted Ross <tr...@redhat.com> wrote:
>>> On 03/06/2013 08:30 AM, Rafael Schloming wrote:
>>>> On Wed, Mar 6, 2013 at 5:15 AM, Ted Ross <tr...@redhat.com> wrote:
>>>>> This is exactly right.  The API behaves in a surprising way and causes
>>>>> reasonable programmers to write programs that don't work. For the sake
>>>>> of adoption, we should fix this, not merely document it.
>>>> This seems like a bit of a leap to me. Have we actually seen anyone
>>>> misusing or abusing the API due to this? Mick didn't come across it
>>>> till I
>>>> pointed it out and even then he had to construct an experiment where
>>>> he's
>>>> basically observing the over-the-wire behaviour in order to detect it.
>>>> --Rafael
>>>>  The following code doesn't work:
>>> while (True) {
>>>    wait_for_and_get_next_event(&event);
>>>    pn_messenger_put(event);
>>> }
>>> If I add a send after every put, I'm going to limit my maximum message
>>> rate.  If I amortize my sends over every N puts, I may have
>>> arbitrarily/infinitely high latency on messages if the source of events
>>> goes quiet.
>>> I guess I'm questioning the mission of the Messenger API.  Which is the
>>> more important design goal:  general-purpose ease of use, or strict
>>> single-threaded asynchrony?
>> I wouldn't say it's a goal to avoid background threads, more of a really
>> nice thing to avoid if we can, and quite possibly a necessary mode of
>> operation in certain environments.
>> I don't think your example code will work though even if there is a
>> background thread. What do you want to happen when things start backing
>> up?
>> Do you want messages to be dropped? Do you want put to start blocking? Do
>> you just want memory to grow indefinitely?
> Good question.  I would want to either block so I can stop consuming
> events or get an indication that I would-block so I can take other actions.
>  I understand that this is what "send" is for, but it's not clear when, or
> how often I should call it.

I think there are really two orthogonal issues being discussed:

1) how do you get messenger to perform the appropriate amount of
outstanding work once enough has built up
2) should messenger be obligated to eventually complete any outstanding
work even if you never call into it again

The second is essentially the question of whether the API semantics mandate
a background thread, and I think (1) needs a good solution regardless of
how you answer (2).

I think we can address (1) by adding an integer argument N to
pn_messenger_send with the following semantics:

  - if N is -1, then pn_messenger_send behaves as it currently does, i.e.
block until everything is sent
  - if N is 0, then pn_messenger_send will send whatever it can/should
without blocking
  - if N is positive, then pn_messenger_send will block until N messages
are sent

I'd also suggest modifying pn_messenger_recv to match this on the recv
side, so the overall semantics of pn_messenger_recv(N) would be:

  - if N is -1, then pn_messenger_recv behaves as it currently does, i.e.
block until something is received
  - if N is 0, then pn_messenger_recv will receive whatever it can/should
without blocking
  - if N is positive, then pn_messenger_recv behaves as it currently does,
i.e. block until something is received but with a limit of N

I think the above change would introduce some symmetry between send and
recv and let you implement a fully pipelined asynchronous sender in a more
obvious way than setting timeouts to zero, e.g. Ted's example could read:

while (True) {
   pn_messenger_put(m, event);
   if (pn_messenger_outgoing(m) > threshold) {
     pn_messenger_send(m, pn_messenger_outgoing(m) - threshold);
   }
}
We can also add a pn_messenger_work(timeout) that would block until it has
done some kind of work (either sending or receiving). Calling it with a
zero timeout would be equivalent to simultaneously calling
pn_messenger_send(0) and pn_messenger_recv(0). (Note that currently send
and recv don't actually factor in any bias towards sending or receiving;
they just do whatever outstanding work needs doing. The API would, however,
admit an implementation that applied some bias in those cases.)

This would go some ways towards addressing issue (2) because the app could
create its own background thread that does work behind the scenes, and in
bindings where threads are readily available, we could have the buttercream
frosting layer supply such a thread if we wished.

