Hey Rafael,
Many thanks again for your replies; I'll take a look at the Python code.

For info, in the branch where I'm doing my JavaScript work I "pimped" messenger.h and messenger.c slightly, adding

PN_EXTERN int pn_messenger_wait(pn_messenger_t *messenger, int timeout);

to messenger.h and

int pn_messenger_wait(pn_messenger_t *messenger, int timeout)
{
    // Simply delegate to the driver's blocking wait for socket activity.
    return pn_driver_wait(messenger->driver, timeout);
}

to messenger.c

so my notifier now looks like:

  while (1) {
    pn_messenger_wait(messenger, -1); // Block indefinitely until there has been socket activity.
    main_loop(NULL);
  }

And that works perfectly - yay :-)

Would you have any issues with that as an interim step until you're able to move forward with the fully decoupled driver?

Cheers,
Frase


On 13/12/13 12:49, Rafael Schloming wrote:

On Tue, Dec 10, 2013 at 2:33 PM, Fraser Adams <[email protected]> wrote:

    On 10/12/13 15:51, Rafael Schloming wrote:

        To clarify my comment a little, it's not the
        pn_messenger_work(..., -1) in the loop that I found confusing.
        That usage of it, i.e. using it to block/avoid busy looping,
        is quite expected. It's the additional usage of it inside the
        main_loop() body that I found surprising. As far as I can tell
        you could just remove that call and your code would work. (Or
        at least work better.)

    Funnily enough....... after my last mail I ended up thinking that
    you might come back with exactly this response :-)

    I probably ought to add some more nuance to the saga :-D

    You might have seen my post a couple of weeks ago about me getting
    a proof of concept JavaScript implementation of messenger running
    by compiling from C to JavaScript using emscripten. To do this
    I was able to keep the core proton messenger and engine code
    completely unchanged, and only had to modify send/recv to behave
    asynchronously.

    In my original send-async code I actually had

    #if EMSCRIPTEN
      emscripten_set_main_loop(main_loop, 0, 0);
    #else
      while (1) {
        main_loop(NULL);

        // Sleep for ~16.7ms (one 60Hz frame) before polling again.
        struct timeval timeout;
        timeout.tv_sec = 0;
        timeout.tv_usec = 16667;
        select(0, NULL, NULL, NULL, &timeout);
      }
    #endif

    So for emscripten there's a notifier (in this case under the hood
    it's implemented by a JavaScript timeout) and for native code as
    you can see I just had a slightly crappy select based sleep. That
    code works, but is effectively polling at 60 frames per second.

    After I got *something* working, that's when I got into the game of
    trying to be properly asynchronous, which is what precipitated
    this thread.


    For the emscripten side of things I've been adding some code that
    enables me to get notified asynchronously by the WebSocket data
    handler, and I've been trying to do essentially the same for the
    native C code in the stuff I've been describing in this thread.

    That's why I've ended up with the non-blocking
    pn_messenger_work(messenger, 0) in the main_loop() function. In
    practical terms, data arrives on the WebSocket, it gets added to
    the buffer that the mapped C socket read will read from, and then
    I trigger the callback to main_loop, which does the proton
    goodness. So I *have* to call pn_messenger_work in main_loop for
    emscripten - I'm being called from a real event handler.


    I guess that I *could* put a conditional compilation block around
    the pn_messenger_work(messenger, 0) in main_loop, calling it only
    from the emscripten-based version and using the blocking version
    in the main notification loop for the native C version, but TBH
    I'm not really keen on that; there really ought to be a way to get
    it to behave the same way however event notification has been
    triggered.

    It does feel slightly (to me at any rate) like pn_messenger_work
    is doing "too much" in this context (really I guess
    pn_messenger_tsync). TBH when I look at pn_messenger_tsync there's
    quite a bit of logic going on in there, and it's combining the
    blocking behaviour with quite a bit of other stuff (the messenger
    internal state updates).

    I guess that from my perspective it feels much cleaner to have the
    notifier loop doing no more than blocking until there's activity
    (an event), with the callback actually executing the business
    logic - surely a true notifier should just be doing the
    notification of an event (such as socket activity in this case)?
    It doesn't feel quite right to me in an async model that the
    notifier loop is also updating the internal state.

    As I say that's what I can achieve in my crazy JavaScript backed
    world, but not with native C because I can't access the basic
    blocking call without also executing the business logic that's
    updating the state.

    Clearly I *want* to update the state, I just don't think I should
    be doing it at the same time in an async model :-)


Ah, I think I follow a little bit more. I certainly agree that the driver and messenger are too closely coupled; I actually hope to support completely decoupling them at some point. Ideally you should be able to supply your own I/O with messenger rather than having to use messenger's built-in posix driver. What you see right now with pn_messenger_work is really a halfway point between messenger's original blocking behaviour and a fully decoupled driver. I simply haven't had time to take that the rest of the way yet, but once we get there I expect you would have full control over exactly where/how you want to block and where/how you want to process I/O events.





        Ah, so you want to be notified when messages are acknowledged?
        For some
        reason I got it stuck in my head that you were trying to be
        notified of
        incoming messages.

        FWIW, I don't think good things would ever come of being able
        to directly
        call pn_driver_wait or block on the socket. The
        pn_messenger_work call is
        pretty much just blocking on the socket for you and then doing
        updates of
        messenger's internal state. Blocking on the socket without
        actually doing
        those updates won't actually accomplish anything since you'd
        never see any
        changes to messenger's visible state.

    As above, it's all about cleanly separating responsibilities in
    the asynchronous case. Yes, "The pn_messenger_work call is pretty
    much just blocking on the socket for you", but the key bit is that
    it goes on "then doing updates of messenger's internal state". I
    think that the former certainly belongs in the notifier loop, but
    I believe that the latter belongs in the callback "business logic".

    I certainly agree that "Blocking on the socket without actually
    doing those updates won't actually accomplish anything since you'd
    never see any changes to messenger's visible state.", but as I
    say I think that doing the update in the callback using the
    non-blocking pn_messenger_work call is actually the correct thing
    to do in asynchronous code.


    Actually, you mention "For some reason I got it stuck in my head
    that you were trying to be notified of incoming messages.", so I
    should say I've not actually mentioned my asynchronous receiver
    yet - basically an async version of recv.c. Funnily enough, I
    think it actually backs up my argument about separating the
    responsibilities of blocking/notification from those of updating
    state in an async world. In recv-async my main loop looks like:

    void main_loop(void *arg) {
        // Receive as many messages as messenger can buffer.
        pn_messenger_recv(messenger, -1);
        check(messenger);

        while (pn_messenger_incoming(messenger)) {
            pn_messenger_get(messenger, message);
            check(messenger);

            char buffer[1024];
            size_t buffsize = sizeof(buffer);
            pn_data_t *body = pn_message_body(message);
            pn_data_format(body, buffer, &buffsize);

            printf("Address: %s\n", pn_message_get_address(message));
            const char* subject = pn_message_get_subject(message);
            printf("Subject: %s\n", subject ? subject : "(no subject)");
            printf("Content: %s\n", buffer);
        }
    }


    So again I'd really like my notifier loop to look something like

      while (1) {
        <block until some relevant socket activity event>
        main_loop(NULL);
      }

    Again I need the non-blocking pn_messenger_recv in the callback
    code as I'm truly being called asynchronously in the JavaScript case.

    All just my opinion of course.

    Hopefully this wider context of what I've been up to makes my
    mails on this subject to date seem just a little less weird?

    Again thanks for your responses.


I finally managed to finish off the async example. I did it in Python for brevity, but the same pattern should work in C. It definitely shows up a few gaps in the API, but it works. I don't know if it's particularly useful to you given what you're trying to do, but it should give you some idea of what the work stuff is currently geared towards.

Two notable gaps show up. First, there is no way to ask messenger for the trackers whose status has been updated; trackers were initially added for scenarios where the user already had the tracker and was interested in its status, so the code simply iterates over all the trackers to call the async handlers. This works but could obviously be more efficient. Second, the status is not updated when a message is settled without an explicit disposition. I believe this is just a bug; the workaround is to set a nonzero incoming window and explicitly accept/reject at the receiver. In any case, I've attached the three files. The async.py file is shared by the sender and receiver; it provides an adapter for using callbacks with messenger.


--Rafael


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
