Re: Context object type in proton-j
On 1 March 2013 20:49, Saggi Mizrahi smizr...@redhat.com wrote:
> I've been using the proton-j 0.3 and I was surprised to find out that the
> context type of a connector returned from Listener.accept() is created with
> the context object type of the parent listener. I am wondering why that is
> enforced.

I think you are referring to the Proton-J driver. It appears to me that this
is merely giving you a type-safe mechanism to retrieve an application-owned
context object, the application context being the same object you supplied to
the earlier call to Driver#createListener(). It is permissible for the context
object to be null.

Perhaps you could elaborate on why this presents a problem to your usage
pattern, illustrating your concern with code, if possible?

Kind regards, Keith.
[jira] [Created] (PROTON-255) Messenger stall in send(): frames buffered in transport not written to driver.
Ken Giusti created PROTON-255:
---------------------------------

             Summary: Messenger stall in send(): frames buffered in transport not written to driver.
                 Key: PROTON-255
                 URL: https://issues.apache.org/jira/browse/PROTON-255
             Project: Qpid Proton
          Issue Type: Bug
          Components: proton-c
    Affects Versions: 0.4
            Reporter: Ken Giusti
            Priority: Blocker

I've created a simple ping-pong test between two Messenger clients. One client
sends 10k messages to the other, which replies. The test stalls after sending
10 messages in each direction, which is the value used for link credit.

The cause of the problem is that the FLOW frame sent by one client is not
being received by the other. The failure is due to the FLOW frame remaining
buffered in the sender's transport buffer, never being written to the driver.
The driver is waiting for socket activity, and never sees the pending output
sitting in the transport.

This bug appears to have been introduced by PROTON-225.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
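For illustration, one side of the kind of ping-pong client that reproduces the
stall might look like the following. This is a hypothetical reduction, not the
actual msgr-send/msgr-recv test sources; the peer address and reply handling
are placeholders.

    #include "proton/message.h"
    #include "proton/messenger.h"

    int main(void)
    {
      pn_messenger_t *m = pn_messenger(NULL);
      pn_message_t *msg = pn_message();
      int i;

      pn_messenger_start(m);
      for (i = 0; i < 10000; i++) {
        /* illustrative peer address, not from the bug report */
        pn_message_set_address(msg, "amqp://127.0.0.1:5672");
        pn_messenger_put(m, msg);
        pn_messenger_send(m);    /* with PROTON-255 this stalls once the
                                    initial link credit (10) is used up,
                                    because the replenishing FLOW frame
                                    stays buffered in a transport */
        pn_messenger_recv(m, 1); /* block for the peer's reply */
        pn_messenger_get(m, msg);
      }
      pn_messenger_stop(m);
      pn_message_free(msg);
      pn_messenger_free(m);
      return 0;
    }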
[jira] [Commented] (PROTON-255) Messenger stall in send(): frames buffered in transport not written to driver.
[ https://issues.apache.org/jira/browse/PROTON-255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592316#comment-13592316 ]

Ken Giusti commented on PROTON-255:
-----------------------------------

The FLOW frame is missed by the driver since it is created by the call to
pn_transport_quiesced() done by the tsync call. Usually, transport output is
generated by the driver when it checks for pending output (done during the
call to pn_connector_process). Since the driver is not creating the output,
it is not aware of the need to send it, so when the pending connectors are
walked this connector is never returned.

#4  0x7f4f4c2ff749 in pn_post_flow (transport=0xe73ce0, ssn_state=0xeb4748, state=0xf40798) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/engine/engine.c:2141
#5  0x7f4f4c2ff9a3 in pn_process_flow_receiver (transport=0xe73ce0, endpoint=0xf36fd0) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/engine/engine.c:2166
#6  0x7f4f4c3009ca in pn_phase (transport=0xe73ce0, phase=0x7f4f4c2ff8a5 <pn_process_flow_receiver>) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/engine/engine.c:2512
#7  0x7f4f4c300a85 in pn_process (transport=0xe73ce0) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/engine/engine.c:2525
#8  0x7f4f4c300d40 in pn_output_write_amqp (io_layer=0xe742b0, bytes=0xe74790 "", size=16384) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/engine/engine.c:2579
#9  0x7f4f4c301cf7 in pn_io_layer_output_passthru (io_layer=0xe74278, bytes=0xe74790 "", size=16384) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/engine/engine.c:2991
#10 0x7f4f4c301cf7 in pn_io_layer_output_passthru (io_layer=0xe74240, bytes=0xe74790 "", size=16384) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/engine/engine.c:2991
#11 0x7f4f4c300f5c in transport_produce (transport=0xe73ce0) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/engine/engine.c:2619
#12 0x7f4f4c302000 in pn_transport_pending (transport=0xe73ce0) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/engine/engine.c:3070
#13 0x7f4f4c302142 in pn_transport_quiesced (transport=0xe73ce0) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/engine/engine.c:3104
#14 0x7f4f4c308603 in pn_messenger_sent (messenger=0xe6ddc0) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/messenger.c:1088
#15 0x7f4f4c306f81 in pn_messenger_tsync (messenger=0xe6ddc0, predicate=0x7f4f4c3085bb <pn_messenger_sent>, timeout=-1) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/messenger.c:621
#16 0x7f4f4c307282 in pn_messenger_sync (messenger=0xe6ddc0, predicate=0x7f4f4c3085bb <pn_messenger_sent>) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/messenger.c:687
#17 0x7f4f4c3087c2 in pn_messenger_send (messenger=0xe6ddc0) at /home/kgiusti/work/proton/qpid-proton/proton-c/src/messenger.c:1139
#18 0x00401f3f in main (argc=8, argv=0x7fff718b8b68) at /home/kgiusti/work/proton/qpid-proton/tests/traffic/clients/c/msgr-recv.c:241
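To make the failure mode concrete, here is a sketch of the shape of the loop a
driver-based application runs. This is an illustration based on the public
driver API, not the actual messenger internals.

    #include "proton/driver.h"

    /* pn_driver_wait() blocks on socket readiness only, so frames generated
     * outside pn_connector_process() -- e.g. the FLOW posted as a side effect
     * of pn_transport_quiesced() during tsync -- sit in the transport's output
     * buffer and never cause a connector to be returned here. */
    void drive(pn_driver_t *driver)
    {
      for (;;) {
        pn_driver_wait(driver, -1);           /* wakes on socket activity only */
        pn_connector_t *c;
        while ((c = pn_driver_connector(driver))) {
          pn_connector_process(c);            /* output generated here is written */
          if (pn_connector_closed(c))
            pn_connector_free(c);
        }
      }
    }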
[jira] [Commented] (PROTON-255) Messenger stall in send(): frames buffered in transport not written to driver.
[ https://issues.apache.org/jira/browse/PROTON-255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592319#comment-13592319 ]

Ken Giusti commented on PROTON-255:
-----------------------------------

Patch to work around the problem:

diff --git a/proton-c/src/messenger.c b/proton-c/src/messenger.c
index 61d4fbc..46e2e96 100644
--- a/proton-c/src/messenger.c
+++ b/proton-c/src/messenger.c
@@ -1081,8 +1081,10 @@ bool pn_messenger_sent(pn_messenger_t *messenger)
     // check if transport is done generating output
     pn_transport_t *transport = pn_connector_transport(ctor);
     if (transport) {
-      if (!pn_transport_quiesced(transport))
+      if (!pn_transport_quiesced(transport)) {
+        pn_connector_process(ctor);
         return false;
+      }
     }
     pn_connection_t *conn = pn_connector_connection(ctor);
[jira] [Created] (PROTON-256) Improve Engine API JavaDoc
Philip Harvey created PROTON-256:
------------------------------------

             Summary: Improve Engine API JavaDoc
                 Key: PROTON-256
                 URL: https://issues.apache.org/jira/browse/PROTON-256
             Project: Qpid Proton
          Issue Type: Improvement
          Components: proton-j
            Reporter: Philip Harvey
            Assignee: Philip Harvey
            Priority: Minor

The JavaDoc in the Proton API engine package is very sparse and should be improved.
[jira] [Created] (PROTON-257) Add example to illustrate use of proton-j API
Philip Harvey created PROTON-257:
------------------------------------

             Summary: Add example to illustrate use of proton-j API
                 Key: PROTON-257
                 URL: https://issues.apache.org/jira/browse/PROTON-257
             Project: Qpid Proton
          Issue Type: Improvement
          Components: proton-j
            Reporter: Philip Harvey
            Assignee: Philip Harvey
            Priority: Minor

Add an example to illustrate use of the proton-j API. This is intended to be
instructive to new developers, showing a simple, conventional use of the
lower-level Proton APIs, i.e. Engine and Message but not Messenger.
[jira] [Commented] (PROTON-255) Messenger stall in send(): frames buffered in transport not written to driver.
[ https://issues.apache.org/jira/browse/PROTON-255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592496#comment-13592496 ]

Ken Giusti commented on PROTON-255:
-----------------------------------

Pushed the workaround for now - harmless, and it prevents the stall:
http://svn.apache.org/viewvc?view=revision&revision=1452439

Leaving the bug open pending a better fix.
How about docs at top level?
I'm planning to start checking my docs into the proton tree soon. I was
assuming I would just put them at top level, i.e. qpid-proton/documentation.

Anybody care to agree, object, counter-offer, praise, complain, argue,
question, or muse?
Re: messenger credit concern
Oops, this was supposed to go to the list.

On Mon, Mar 4, 2013 at 11:56 AM, Rafael Schloming r...@alum.mit.edu wrote:

On Thu, Feb 28, 2013 at 12:55 PM, Alan Conway acon...@redhat.com wrote:
> I think this is a common case: (1a) an app wants to receive and process
> messages indefinitely, but wants the implementation to use a bounded buffer
> of N messages or B bytes to do so. AKA credit window in AMQP 0.10 or
> prefetch in JMS. I'm not familiar enough with Messenger yet to say whether
> that belongs in the messenger API or in some other configuration, but I
> think it needs to be a use case that is easy to set up. Agreed that ideally
> we would have a dynamic flow control algorithm that can figure out the
> optimal credit settings by itself, but until we do I suspect the simple
> bounded buffer model will cover most cases, and doesn't require exposing
> the complexity of the underlying flow control.

This is an interesting scenario because while it's a common case, it's
arguably not a truly general case. In fact it is an anti-pattern in some
cases.

The problem with this approach is that it depends on there being only a small
penalty from using an oversized prefetch limit. Based on this assumption you
can avoid computing the optimal prefetch amount and simply advise the user to
statically assign a number large enough to guarantee in practice that there
will always be messages sitting in the prefetch buffer. This works fine for a
simple point-to-point scenario where the only cost is wasted memory on the
client side for holding messages that didn't need to be prefetched in order
to avoid stalling.

Consider, however, what happens if you copy the point-to-point receiver and,
based on its code, try to set up a job queue that load balances messages to
competing consumers. Even a relatively small prefetch like 1024 can now
result in pathological behaviour, as your first consumer may well suck up all
the messages in the queue and prevent other consumers from processing them.

Now given that it is quite trivial to write message processing code that has
no clue whether it is running in a point-to-point vs load-balanced
environment, I think it would be unfortunate if our API forced users to make
that code become specific to the topology in which it happens to be deployed.
What this amounts to is that any number appearing in the API used by
application code to fetch messages needs to actually have semantic meaning to
the *application* and *not* the topology (e.g. the application is actually
expecting N replies and no more).

A possible consequence of this is that the messenger implementation needs to
be smarter and track both round trip time and message processing time;
however, measuring these quantities isn't actually all that difficult or
expensive, and if these numbers are known it is quite trivial to compute an
optimal prefetch limit. Given this, I don't think there is really a good
reason to make users guess at what an optimal value might be, and I certainly
wouldn't put it as part of the primary message retrieval API, rather as a
tuning parameter for the internal algorithm.

--Rafael
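To make the last point concrete, a back-of-the-envelope version of that
computation might look like the following. This is an illustrative sketch,
not anything from the Messenger implementation, and the function name is
invented.

    #include <math.h>

    /* Illustrative only: to keep a consumer busy, enough messages must be in
     * flight to cover one credit round trip.  If processing one message takes
     * proc_secs and replenishing credit takes rtt_secs, the prefetch buffer
     * only needs to hold roughly rtt_secs / proc_secs messages. */
    static int optimal_prefetch(double rtt_secs, double proc_secs)
    {
      if (rtt_secs <= 0.0 || proc_secs <= 0.0)
        return 1;                  /* degenerate inputs: fall back to minimal credit */
      int n = (int) ceil(rtt_secs / proc_secs);
      return n < 1 ? 1 : n;
    }

With, say, a 2 ms round trip and 1 ms of processing per message, this yields a
prefetch of 2, far from a static guess like 1024, which is exactly the kind of
value that starves competing consumers in the load-balanced case.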
Re: [doc] rewrite of credit doc, to remove credit
On Fri, Mar 1, 2013 at 6:40 AM, Michael Goulish mgoul...@redhat.com wrote:
> I think Rafi brought up a kind of unassailable point when he suggested that
> a document about the Messenger API should probably confine itself to
> concepts that are ... in the Messenger API. I would be happy to explain the
> use of credit sometime (after the impl settles down...) but the Messenger
> API doc is not the place for that.
>
> SO! Here is a rewrite just describing how sending and receiving works,
> without talking about links, credit, or other concepts that the Messenger
> API is specifically trying to shield the developer from.
>
> - start - - start - - - start - - - - - start - - - - - - - start
>
> === Messenger I/O ===
>
> Each messenger has a single queue of incoming messages, and a single queue
> of outgoing messages. Both input and output happen in two stages: between
> your application and the queue, and between the queue and the senders and
> receivers that your application is talking with.

I think this is already exposing too much implementation/deployment topology.
The app developer's model of the world should be much simpler than this. A
messenger deals in sending and receiving messages. The fact that it might be
receiving those messages from multiple parties or sending them to multiple
parties is not something that an app developer should be concerned with.

> Input
> -----
>
> Incoming messages are received and placed on the incoming queue with
> pn_messenger_recv(). The second argument is the upper limit on how many
> messages will be enqueued.
>
>     pn_messenger_recv ( messenger, 100 );
>
> A call to pn_messenger_recv() will return when at least one message is
> available, or when the call times out if you have set a time limit with
> pn_messenger_set_timeout(). The call to pn_messenger_recv() may enqueue
> fewer than the number of messages you allowed. To find how many messages it
> enqueued, use:
>
>     int enqueued = pn_messenger_incoming ( messenger );
>
> When you are ready to process your incoming messages, dequeue them one at a
> time into your application. (This is the second stage of message input.)
>
>     int i;
>     pn_message_t *msg = pn_message();
>     for ( i = 0; i < enqueued; ++ i )
>     {
>       pn_messenger_get ( messenger, msg );
>       /* process msg ... */
>     }

I think it's awkward to describe pn_messenger_recv as enqueuing messages or
placing them on the incoming queue. It makes it sound like it should be
called pn_messenger_enqueue, e.g. the user might use it to enqueue a message,
but that is not what is actually happening. I don't think it's necessary to
describe how the messages get onto the incoming queue. That's just
implementation magic that a messenger does for you, not something you do by
calling recv. In fact messages can arrive pretty much anytime, not just when
you call recv. The key semantic of recv is that it will block until there are
messages in the queue (or the timeout expires).

> Output
> ------
>
> Output is also accomplished in two stages: first enqueuing as many messages
> as you wish, and then transmitting them all. Enqueue your messages one at a
> time with:
>
>     pn_messenger_put ( messenger, msg );
>
> To learn how many you have on the outgoing queue waiting to be sent, use:
>
>     int outgoing = pn_messenger_outgoing ( messenger );
>
> And to transmit the messages in your output queue:
>
>     pn_messenger_send ( messenger );

The semantics of pn_messenger_put allow it to send if it can do so without
blocking. There aren't really two stages here. The put vs send distinction is
more about separating the blocking and non-blocking semantics into separate
calls.
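(Taken together, the Input fragments above assemble into a receive loop along
these lines. This is a minimal sketch; the subscription address is an assumed
placeholder, not something from the draft.)

    #include "proton/message.h"
    #include "proton/messenger.h"

    int main(void)
    {
      pn_messenger_t *messenger = pn_messenger(NULL);
      pn_message_t *msg = pn_message();

      pn_messenger_start(messenger);
      pn_messenger_subscribe(messenger, "amqp://~0.0.0.0");  /* assumed address */

      for (;;) {
        pn_messenger_recv(messenger, 100);          /* stage 1: fill the incoming queue */
        int enqueued = pn_messenger_incoming(messenger);
        int i;
        for (i = 0; i < enqueued; ++i) {
          pn_messenger_get(messenger, msg);         /* stage 2: dequeue into the app */
          /* ... process msg ... */
        }
      }
    }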
> Unspecified Limits on Input
> ---------------------------
>
> By using -1 as the second argument to pn_messenger_recv(), you let the
> Messenger implementation decide what limit to set on how many messages you
> may receive. It will give a chance to all senders that are currently
> connected to your application, but will try to set a reasonable bound on
> messages per sender.

This reads like a footnote when actually it is probably the most common case.
I also think this is exposing aspects of the implementation/deployment
topology that the app developer shouldn't know about, e.g. describing senders
and referencing the effect on the flow control algorithm. I would lead with
the recv() case and follow up with the recv(N) case as an advanced scenario.

> Performance Implications
> ------------------------
>
> Messaging system performance is always a tradeoff between throughput and
> latency. Throughput is the number of messages or number of bytes you can
> transmit and receive per unit time. Latency is the average time between the
> creation of a message by its sender, and its processing by a receiver.

I don't think this is
[jira] [Assigned] (PROTON-254) Swig generated c code for java bindings fails to compile on Windows Visual Studio 2010
[ https://issues.apache.org/jira/browse/PROTON-254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cliff Jansen reassigned PROTON-254:
-----------------------------------

    Assignee: Cliff Jansen

> Swig generated c code for java bindings fails to compile on Windows Visual Studio 2010
> ---------------------------------------------------------------------------------------
>
>                 Key: PROTON-254
>                 URL: https://issues.apache.org/jira/browse/PROTON-254
>             Project: Qpid Proton
>          Issue Type: Bug
>          Components: proton-c
>    Affects Versions: 0.4
>         Environment: Windows 7, Visual Studio 2010
>            Reporter: Keith Wall
>            Assignee: Cliff Jansen
>         Attachments: make-proton-jni-cxx-errors.txt
>
> Swig generated C code for the Java bindings fails to compile on Visual
> Studio 2010 with error message:
>
>     C1083: Cannot open include file: 'stdbool.h': No such file or directory
>
> After discussion on list [1], we tried changing bindings/java/CMakeLists.txt
> to use the CPLUSPLUS ON directive to CMake in the case where BUILD_WITH_CXX
> has been enabled earlier in the build:
>
>     if (BUILD_WITH_CXX)
>       SET_SOURCE_FILES_PROPERTIES(java.i PROPERTIES CPLUSPLUS ON)
>     endif (BUILD_WITH_CXX)
>
> Switching to CPP in this way exposed a number of errors when compiling the
> CPP code resulting from the java.i (casting issues, pointer arithmetic etc -
> see attached make-proton-jni-cxx-errors.txt). However, even after resolving
> the CPP compilation issues, we then encounter a problem with Swig's handling
> of nested unions when using CPP. The seeming inability of SWIG (when used
> with CPP) to represent the union means that SWIG does not produce
> pn_atom_t_u.java. JNIMessage.java, which references pn_atom_t_u,
> consequently fails to compile. The warning from swig is:
>
>     /home/keith/src/proton/proton-c/include/proton/codec.h:91: Warning 312: Nested union not currently supported (ignored).
>
> [1] http://mail-archives.apache.org/mod_mbox/qpid-proton/201302.mbox/%3ccamyv19mgbdvd2wextvpwywtysskadokb7wtj+u-3jkncwdx...@mail.gmail.com%3E
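For readers without codec.h to hand, the construct SWIG rejects has roughly
this shape. This is a simplified sketch with stand-in types, not the verbatim
definition, and the member names shown are illustrative.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Stand-ins for the real proton types (pn_type_t is an enum and
     * pn_bytes_t a small struct in proton/codec.h). */
    typedef int pn_type_t;
    typedef struct { size_t size; char *start; } pn_bytes_t;

    /* Simplified sketch of the pn_atom_t shape that triggers SWIG Warning 312
     * ("Nested union not currently supported") when run in C++ mode. */
    typedef struct {
      pn_type_t type;
      union {                    /* the nested union SWIG cannot wrap as C++ */
        bool       as_bool;
        int64_t    as_long;
        double     as_double;
        pn_bytes_t as_bytes;
        /* ... remaining variants elided ... */
      } u;
    } pn_atom_t;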
[jira] [Commented] (PROTON-254) Swig generated c code for java bindings fails to compile on Windows Visual Studio 2010
[ https://issues.apache.org/jira/browse/PROTON-254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593140#comment-13593140 ]

Cliff Jansen commented on PROTON-254:
--------------------------------------

This error also happens on Linux with -DBUILD_WITH_CXX.

It's a catch-22: swig fails because it worries about complexities of nested
structures that don't affect us, since proton doesn't use any C++, only C.
But the fact that JNI defines separate calls in C and C++ makes it an
unnatural act to feed a swig C wrapper to a C++ compiler.

So it looks like a custom wrapper object needs to be made for pn_atom_t_u.
Fortunately, older versions of swig would create working wrappers for the
nested structure. So, even though my swig expertise is minimal, I hope to be
able to use the wrapper from the older swig as a basis for a generic version.