Oops, meant to send this to the list.

---------- Forwarded message ----------
From: Rafael Schloming <r...@alum.mit.edu>
Date: Tue, Mar 5, 2013 at 6:44 PM
Subject: Re: put vs. send
To: Bozo Dragojevic <bo...@digiverse.si>


On Tue, Mar 5, 2013 at 3:25 PM, Bozo Dragojevic <bo...@digiverse.si> wrote:

> Wow, by not ever calling pn_messenger_send(), but only pn_messenger_recv()
> things are, unexpectedly, better! I'll explain below what 'better' means.
>
> But first, this raises the question: what is the purpose of
> pn_messenger_send(), and where (and why) is it appropriate/required
> to call it?
>

Well, one example would be if you just want to send a bunch of messages
without ever receiving any.
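
For example, a rough pure-sender sketch (untested; the address is just a
placeholder, and this matches the 0.3-era pn_messenger_send() signature):

  #include <proton/message.h>
  #include <proton/messenger.h>

  int main(void)
  {
    pn_messenger_t *m = pn_messenger(NULL);
    pn_message_t *msg = pn_message();

    pn_messenger_start(m);
    pn_message_set_address(msg, "amqp://0.0.0.0/queue");

    for (int i = 0; i < 100; i++)
      pn_messenger_put(m, msg);  /* non-blocking: just schedules the send */

    pn_messenger_send(m);        /* blocking: wait until all messages are sent */

    pn_messenger_stop(m);
    pn_message_free(msg);
    pn_messenger_free(m);
    return 0;
  }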

I think it helps if you conceptually split the API along
blocking/non-blocking lines:

    non-blocking: put, subscribe, get, incoming, outgoing, etc.
        blocking: send, recv

Within the non-blocking portion there are things that simply access data,
like get(), incoming, outgoing, etc., and then there are things that
schedule/request asynchronous work to be done, namely put() and subscribe().
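
For instance (fragment only; assume m is a started messenger and msg an
existing message):

  pn_messenger_put(m, msg);              /* schedules asynchronous work */
  int queued = pn_messenger_outgoing(m); /* just reads state, no I/O */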

For the blocking portion, I think it helps to understand that there is
really only one fundamental operation, and that is
do_any_outstanding_work_until_condition_X_is_met. Currently the blocking
portion of the API exposes exactly two conditions, namely "all messages are
sent" via send() and "there is a message waiting to be received" via recv().

So in your example below, if you're basically just a request/response
server, then, as you've noticed, you never actually need to call send():
you don't really care about waiting for that condition, and any work you
schedule in your handlers will be performed anyway as you wait for
messages to be received.
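
In code, the shape is roughly this (a sketch only; error handling and the
actual reply composition are omitted, and the recv limit is arbitrary):

  pn_message_t *req = pn_message();
  pn_message_t *reply = pn_message();
  while (1) {
    pn_messenger_recv(m, 10);            /* block until something arrives */
    while (pn_messenger_incoming(m)) {
      pn_messenger_get(m, req);          /* non-blocking accessor */
      pn_message_set_address(reply, pn_message_get_reply_to(req));
      pn_messenger_put(m, reply);        /* schedule the reply... */
    }                                    /* ...flushed during the next recv() */
  }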

> The results are for a slightly modified 0.3 release.
> Most notably, I have a local change that exposes pn_driver_wakeup()
> through the messenger api.
>

I'm curious about why you had to make this change. Can you describe the
usage/scenario that requires it?

> Our API is threaded internally, but all proton communication is done
> via a dedicated thread that runs the following (stub) event loop:
> (The same event loop is used by both the client library and the 'broker'.)
>
>   while (1) {
>     ret = pn_messenger_recv(m, 100); // the 100 is hard to explain...
>     if (ret != PN_WAKED_UP) {        // new error code for the wakeup case
>       /*
>        * apparently there's no need to call send...
>        * pn_messenger_send(m);
>        */
>     }
>     Command cmd = cmd_queue.get();   // cmd_queue.put() from another thread
>                                      // will call pn_driver_wakeup() and
>                                      // will break out of pn_messenger_recv()
>     if (cmd)
>       handle(cmd);                   // ends up calling pn_messenger_put()
>     if (pn_messenger_incoming(m)) {
>       msg = pn_messenger_get(m);     // handle just one message;
>                                      // pn_messenger_recv() will not block
>                                      // until we're done
>       handle(msg);                   // can end up calling pn_messenger_put()
>     }
>   }
>
>
> So, before the change, a test client that produced messages needed to
> throttle a bit: about 8ms between each 'command' that resulted in
> one 'pn_messenger_put()'.
>
> If a lower delay (or no delay) was used, the client's messenger got
> confused after some fairly small number of messages sent (on the
> order of 10) and ended up sitting in pn_driver_wait() while it still
> had unsent messages.
>

This sounds like a bug to me. As I mentioned above, you shouldn't need to
call send, but it also shouldn't hurt.

> With the one-line change of commenting out the send(), it can go full speed!
>
>
> I know it's hard to comment on out-of-tree, modified pseudocode, but
> is such an event loop within the design goals of the messenger?
>

Yes, certainly. Your code looks to me like it is almost the same as the
server.py example.

> Longer term we'll most likely be switching from messenger to engine + driver
> so we can go multithreaded with the event loop.
>

You may find there is a fair amount you would need to reimplement if you
were to make that switch. One of the features I'd like to get into
messenger soon is factoring the messenger implementation and extending the
API so that you can supply your own driver and write your own event loop
against the messenger API, rather than having to go all the way down to the
engine API. If what you want is more control over the I/O and threading,
but you're happy with the protocol interface provided by messenger, then
this might be a better option for you than using the engine directly.

--Rafael
