[VOTE]: Release Proton 0.5 RC3 as 0.5 final

2013-08-22 Thread Rafael Schloming
Hi Everyone,

After a bunch more tests/fixes I think we're ready to go for a vote now.
I've posted 0.5 RC3 in the usual places:

Source is here:

  http://people.apache.org/~rhs/qpid-proton-0.5rc3/

Java binaries are here:

  https://repository.apache.org/content/repositories/orgapacheqpid-109/

I've attached the svn change log for everything since RC2. Please
peruse/test and register your vote:

[ ] Yes, release 0.5 RC3 as 0.5 final
[ ] No, 0.5 RC3 has the following issues...

--Rafael

r1516495 | rhs | 2013-08-22 12:04:14 -0400 (Thu, 22 Aug 2013) | 1 line

fixed java driver stall; fixed java messenger to clean up the driver after 
stopping; added useful illegal state exceptions to the java messenger impl

r1516485 | rhs | 2013-08-22 11:25:32 -0400 (Thu, 22 Aug 2013) | 1 line

made logging use consistent ids

r1516484 | rhs | 2013-08-22 11:24:14 -0400 (Thu, 22 Aug 2013) | 1 line

added -n option for running multiple iterations; useful in debugging 
intermittent failures

r1516433 | rhs | 2013-08-22 08:28:19 -0400 (Thu, 22 Aug 2013) | 1 line

PROTON-397: added example contributed by Marc Berkowitz

r1516431 | rhs | 2013-08-22 08:25:26 -0400 (Thu, 22 Aug 2013) | 1 line

added target to svn:ignore

r1516430 | rhs | 2013-08-22 08:24:43 -0400 (Thu, 22 Aug 2013) | 1 line

added target directories to svn:ignore

r1516427 | rhs | 2013-08-22 08:14:20 -0400 (Thu, 22 Aug 2013) | 1 line

Fixed hang in Messenger.stop(); added recv() to Messenger interface.

r1516194 | chug | 2013-08-21 12:03:42 -0400 (Wed, 21 Aug 2013) | 4 lines

PROTON-405: Windows install can't find jar files.
This patch adds a cmake option to skip building/installing jars.



r1516192 | chug | 2013-08-21 11:46:17 -0400 (Wed, 21 Aug 2013) | 1 line

NO-JIRA: Repair windows build after nonportable rhs commit r1516161

r1516184 | rhs | 2013-08-21 11:16:19 -0400 (Wed, 21 Aug 2013) | 1 line

don't set an error for PN_OVERFLOW, this fixes PROTON-336, also fixed error 
accessors to be consistent

r1516161 | rhs | 2013-08-21 09:53:25 -0400 (Wed, 21 Aug 2013) | 11 lines

Added simple smoke tests for the bindings. Encountered numerous issues
along the way and fixed as appropriate, including:
  - added tracker return values for ruby put/get
  - fixed ruby accept/reject to omit the tracker arg
  - fixed a deprecation warning in the perl binding
  - added disposition calls for php (PROTON-365)
  - added incoming/outgoing window properties for php
  - fixed put of messages without an address (PROTON-368)
  - added C level inspection method for messages to allow consistent
printing of messages across bindings


r1515860 | rhs | 2013-08-20 12:31:43 -0400 (Tue, 20 Aug 2013) | 1 line

PROTON-389: added all dispositions to switch statement

r1515858 | rhs | 2013-08-20 12:18:19 -0400 (Tue, 20 Aug 2013) | 1 line

removed reference to a nonexistent file

r1515795 | mcpierce | 2013-08-20 08:20:53 -0400 (Tue, 20 Aug 2013) | 6 lines

PROTON-406: Fix installing the Ruby bindings.

Previously it checked for RUBY_VENDORLIB_DIR as provided by CMake. But
this variable is not provided by older versions of CMake. Additionally,
older versions of Ruby did not provide a vendorlibdir. In those cases,
the behavior is to now get the Ruby sitearch dir instead.

r1515614 | chug | 2013-08-19 17:30:57 -0400 (Mon, 19 Aug 2013) | 2 lines

PROTON-407: [proton-c] Windows install does not install .lib nor .pdb files
This patch installs the .lib file(s) to the /bin directory.

r1515559 | chug | 2013-08-19 15:01:10 -0400 (Mon, 19 Aug 2013) | 1 line

PROTON-408: [proton-c] Windows build does not put "d" suffix on debug file names

r1515455 | philharveyonline | 2013-08-19 10:58:11 -0400 (Mon, 19 Aug 2013) | 2 
lines

PROTON-343: Removed proton-logging module and its usages (all of which are 
fun

Re: [VOTE]: Release Proton 0.5 RC3 as 0.5 final

2013-08-27 Thread Rafael Schloming
I was going to close the vote/branch/tag/etc today.

--Rafael


On Tue, Aug 27, 2013 at 6:21 AM, Phil Harvey wrote:

> Hi,
>
> Is there a rough date in the frame for when 0.5 will be released?
>
> Phil
>
>
> On 23 August 2013 16:56, Darryl L. Pierce  wrote:
>
> > On Thu, Aug 22, 2013 at 12:26:54PM -0400, Rafael Schloming wrote:
> > > Hi Everyone,
> > >
> > > After a bunch more tests/fixes I think we're ready to go for a vote
> now.
> > > I've posted 0.5 RC3 in the usual places:
> > >
> > > Source is here:
> > >
> > >   http://people.apache.org/~rhs/qpid-proton-0.5rc3/
> > >
> > > Java binaries are here:
> > >
> > >
> https://repository.apache.org/content/repositories/orgapacheqpid-109/
> > >
> > > I've attached the svn change log for everything since RC2. Please
> > > peruse/test and register your vote:
> > >
> > [X] Yes, release 0.5 RC3 as 0.5 final
> > [ ] No, 0.5 RC3 has the following issues...
> >
> > --
> > Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc.
> > Delivering value year after year.
> > Red Hat ranks #1 in value among software vendors.
> > http://www.redhat.com/promo/vendor/
> >
> >
>


Re: [VOTE]:[RESULT] Release Proton 0.5 RC3 as 0.5 final

2013-08-27 Thread Rafael Schloming
The vote carries with 4 +1's and 0 -1's. I've created the 0.5 tag and
branch in the repo, and I'll post the final artifacts shortly.

--Rafael

On Tue, Aug 27, 2013 at 6:47 AM, Rafael Schloming  wrote:

> I was going to close the vote/branch/tag/etc today.
>
> --Rafael
>
>
> On Tue, Aug 27, 2013 at 6:21 AM, Phil Harvey wrote:
>
>> Hi,
>>
>> Is there a rough date in the frame for when 0.5 will be released?
>>
>> Phil
>>
>>
>> On 23 August 2013 16:56, Darryl L. Pierce  wrote:
>>
>> > On Thu, Aug 22, 2013 at 12:26:54PM -0400, Rafael Schloming wrote:
>> > > Hi Everyone,
>> > >
>> > > After a bunch more tests/fixes I think we're ready to go for a vote
>> now.
>> > > I've posted 0.5 RC3 in the usual places:
>> > >
>> > > Source is here:
>> > >
>> > >   http://people.apache.org/~rhs/qpid-proton-0.5rc3/
>> > >
>> > > Java binaries are here:
>> > >
>> > >
>> https://repository.apache.org/content/repositories/orgapacheqpid-109/
>> > >
>> > > I've attached the svn change log for everything since RC2. Please
>> > > peruse/test and register your vote:
>> > >
>> > [X] Yes, release 0.5 RC3 as 0.5 final
>> > [ ] No, 0.5 RC3 has the following issues...
>> >
>> > --
>> > Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc.
>> > Delivering value year after year.
>> > Red Hat ranks #1 in value among software vendors.
>> > http://www.redhat.com/promo/vendor/
>> >
>> >
>>
>
>


Re: [VOTE]:[RESULT] Release Proton 0.5 RC3 as 0.5 final

2013-08-28 Thread Rafael Schloming
FYI, the artifacts were posted yesterday and should have propagated to all
the mirrors by now. I've also updated the web site.

--Rafael


On Tue, Aug 27, 2013 at 7:27 AM, Rafael Schloming  wrote:

> The vote carries with 4 +1's and 0 -1's. I've created the 0.5 tag and
> branch in the repo, and I'll post the final artifacts shortly.
>
> --Rafael
>
> On Tue, Aug 27, 2013 at 6:47 AM, Rafael Schloming wrote:
>
>> I was going to close the vote/branch/tag/etc today.
>>
>> --Rafael
>>
>>
>> On Tue, Aug 27, 2013 at 6:21 AM, Phil Harvey 
>> wrote:
>>
>>> Hi,
>>>
>>> Is there a rough date in the frame for when 0.5 will be released?
>>>
>>> Phil
>>>
>>>
>>> On 23 August 2013 16:56, Darryl L. Pierce  wrote:
>>>
>>> > On Thu, Aug 22, 2013 at 12:26:54PM -0400, Rafael Schloming wrote:
>>> > > Hi Everyone,
>>> > >
>>> > > After a bunch more tests/fixes I think we're ready to go for a vote
>>> now.
>>> > > I've posted 0.5 RC3 in the usual places:
>>> > >
>>> > > Source is here:
>>> > >
>>> > >   http://people.apache.org/~rhs/qpid-proton-0.5rc3/
>>> > >
>>> > > Java binaries are here:
>>> > >
>>> > >
>>> https://repository.apache.org/content/repositories/orgapacheqpid-109/
>>> > >
>>> > > I've attached the svn change log for everything since RC2. Please
>>> > > peruse/test and register your vote:
>>> > >
>>> > [X] Yes, release 0.5 RC3 as 0.5 final
>>> > [ ] No, 0.5 RC3 has the following issues...
>>> >
>>> > --
>>> > Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc.
>>> > Delivering value year after year.
>>> > Red Hat ranks #1 in value among software vendors.
>>> > http://www.redhat.com/promo/vendor/
>>> >
>>> >
>>>
>>
>>
>


Re: [VOTE]:[RESULT] Release Proton 0.5 RC3 as 0.5 final

2013-08-28 Thread Rafael Schloming
Oops, I forgot. Thanks for the reminder!

--Rafael


On Wed, Aug 28, 2013 at 2:20 PM, Robbie Gemmell wrote:

> Nothing showing up on search.maven.org yet, have you released the staging
> repo (and dropped the old ones) ?
>
> Robbie
>
> On 28 August 2013 15:20, Rafael Schloming  wrote:
>
> > FYI, the artifacts were posted yesterday and should have propagated to
> all
> > the mirrors by now. I've also updated the web site.
> >
> > --Rafael
> >
> >
> > On Tue, Aug 27, 2013 at 7:27 AM, Rafael Schloming 
> > wrote:
> >
> > > The vote carries with 4 +1's and 0 -1's. I've created the 0.5 tag and
> > > branch in the repo, and I'll post the final artifacts shortly.
> > >
> > > --Rafael
> > >
> > > On Tue, Aug 27, 2013 at 6:47 AM, Rafael Schloming  > >wrote:
> > >
> > >> I was going to close the vote/branch/tag/etc today.
> > >>
> > >> --Rafael
> > >>
> > >>
> > >> On Tue, Aug 27, 2013 at 6:21 AM, Phil Harvey <
> p...@philharveyonline.com
> > >wrote:
> > >>
> > >>> Hi,
> > >>>
> > >>> Is there a rough date in the frame for when 0.5 will be released?
> > >>>
> > >>> Phil
> > >>>
> > >>>
> > >>> On 23 August 2013 16:56, Darryl L. Pierce 
> wrote:
> > >>>
> > >>> > On Thu, Aug 22, 2013 at 12:26:54PM -0400, Rafael Schloming wrote:
> > >>> > > Hi Everyone,
> > >>> > >
> > >>> > > After a bunch more tests/fixes I think we're ready to go for a
> vote
> > >>> now.
> > >>> > > I've posted 0.5 RC3 in the usual places:
> > >>> > >
> > >>> > > Source is here:
> > >>> > >
> > >>> > >   http://people.apache.org/~rhs/qpid-proton-0.5rc3/
> > >>> > >
> > >>> > > Java binaries are here:
> > >>> > >
> > >>> > >
> > >>>
> https://repository.apache.org/content/repositories/orgapacheqpid-109/
> > >>> > >
> > >>> > > I've attached the svn change log for everything since RC2. Please
> > >>> > > peruse/test and register your vote:
> > >>> > >
> > >>> > [X] Yes, release 0.5 RC3 as 0.5 final
> > >>> > [ ] No, 0.5 RC3 has the following issues...
> > >>> >
> > >>> > --
> > >>> > Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc.
> > >>> > Delivering value year after year.
> > >>> > Red Hat ranks #1 in value among software vendors.
> > >>> > http://www.redhat.com/promo/vendor/
> > >>> >
> > >>> >
> > >>>
> > >>
> > >>
> > >
> >
>


Re: question about proton error philosophy

2013-09-16 Thread Rafael Schloming
FYI, as of 0.5 you should be able to use pn_messenger_error(pn_messenger_t
*) to access the underlying error object (pn_error_t *) and clear it if an
error has occurred.
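
For illustration, a minimal sketch of that usage (assuming the pn_error_*
accessors behave as described later in this thread):

  #include <stdio.h>
  #include <proton/error.h>
  #include <proton/messenger.h>

  /* Sketch: check the messenger's sticky error and clear it explicitly. */
  void check_and_clear(pn_messenger_t *messenger)
  {
    pn_error_t *err = pn_messenger_error(messenger);
    if (pn_error_code(err)) {
      fprintf(stderr, "messenger error: %s\n", pn_error_text(err));
      pn_error_clear(err);
    }
  }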

--Rafael

On Mon, Sep 16, 2013 at 12:40 PM, Michael Goulish wrote:

>
> No, you're right.
>
> "errno is never set to zero by any system call or library function"
> ( That's from Linux doco. )
>
> OK, I was just philosophically challenged.
> I think what confused me was the line in the current Proton C doc (about
> errno) that says "an error code or zero if there is no error."
> I'll just remove that line.
>
> OK, I withdraw the question.
>
>
> ( I still don't like this philosophy, but the whole world is using it, and
> the whole world is bigger than I am... )
>
>
>
>
> - Original Message -
> Do other APIs reset the errno?  I could have sworn they didn't.
>
> On Mon, Sep 16, 2013 at 12:01 PM, Michael Goulish 
> wrote:
> >
> > I was expecting errno inside the messenger to be reset to 0 at the end
> of any successful API call.
> >
> > It isn't: instead it looks like the idea is that errno preserves the
> most recent error that happened, regardless of how long ago that might be.
> >
> > Is this intentional?
> >
> > I am having a hard time understanding why we would not want errno to
> always represent the messenger state as of the completion of the most
> recent API call.
> >
> >
> > I would be happy to submit a patch to make it work this way, and see
> what people think - but not if I am merely exhibiting my own
> philosophical ignorance here.
> >
> >
>
>
>
> --
> Hiram Chirino
>
> Engineering | Red Hat, Inc.
>
> hchir...@redhat.com | fusesource.com | redhat.com
>
> skype: hiramchirino | twitter: @hiramchirino
>
> blog: Hiram Chirino's Bit Mojo
>


Re: bug in java ReceiverImpl?

2013-09-29 Thread Rafael Schloming
On Sun, Sep 29, 2013 at 7:35 AM, Bozo Dragojevic  wrote:

> Hi,
>
> I've just noticed something that doesn't seem right:
>
> ReceiverImpl.advance()  {
> ...
> getSession().incrementIncomingBytes(-current.pending());
>
> but the Delivery should always have 0 pending bytes when advance() is
> called?
>

Not necessarily; the delivery will only have zero pending bytes if you read
all the message data, but the API lets you advance over a message without
necessarily doing this.

I initially thought the same as you do and didn't have this code in there;
however, some of the tests don't actually bother looking at the message
content, and so the accounting was off by the number of bytes of data that
wasn't read.

In practice it might be a bit odd to skip an entire message like that, but I
can imagine scenarios where an application would read only part of a message,
e.g. examine the early bytes, decide that the rest of that particular message
was uninteresting, and then call advance while there was still pending data
for that delivery.
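
By way of illustration, a rough sketch of that pattern against the C engine
API (interesting() here is a hypothetical application-level check, not part
of proton):

  char head[64];
  ssize_t n = pn_link_recv(receiver, head, sizeof(head));  /* peek at the early bytes */
  if (n > 0 && !interesting(head, n)) {
    pn_link_advance(receiver);  /* skip the rest; the delivery still has pending bytes */
  }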

--Rafael


Re: [doc] I will check these in on Friday the 4th...

2013-10-01 Thread Rafael Schloming
Hey, sorry to take so long getting around to this. Thanks for the prodding.
See comments inline...

On Mon, Sep 30, 2013 at 1:46 PM, Michael Goulish wrote:

>
>
>
> I will check this stuff in this coming Friday, 4 Oct,
> ( at midnight, in the last timezone on Earth...)
> if I don't hear any objections / suggestions,
> so please take a look before then if you would like to
> provide feedback.
>
>
>
> These are expanded descriptions that I'd like to add to the C API
> documentation.  ( These are the descriptions only -- where the
> current info already explains the parameters and returns values
> I will just leave those in place. )
>
>
> These are the only ones I plan to change at this time.
>
>
>
>
> Please take a look to see
>
>   1. whether the description matches your understanding
>  of what the functions do, and how they fit together.
>
>
>   2. whether you, as a developer using this code, would
>  find the description useful, sufficient, understandable,
>  etc.
>
>
> Question 2 is still very valuable even if you have no
> idea about Question 1.
>
>
>
> This is not yet a complete list.  Some of the functions are
> clear already, and some I have no clue about as yet.
>
>
>
> Here they are:
>
>
>
>
> pn_messenger_accept
> {
>   Signal the sender that you have received and have acted on the message
>   pointed to by the tracker.  If the PN_CUMULATIVE flag is set, all
>   messages prior to the tracker will also be accepted, back to the
>   beginning of your incoming window.
> }
>

Minor quibble, but I'm not sure it adds much to say that it signals that
you've received the message. That's kind of implicit in having acted upon
the message anyways.
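
For what it's worth, a minimal sketch of the cumulative form in the C API
(assuming m is a started messenger with a nonzero incoming window):

  pn_tracker_t t = pn_messenger_incoming_tracker(m);
  /* accept this message and all prior unsettled messages in the incoming window */
  pn_messenger_accept(m, t, PN_CUMULATIVE);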


>
> pn_messenger_errno
> {
>   Return the code for the most recent error.
>   Initialized to zero at messenger creation.
>   Error numbers are "sticky" i.e. are not reset to 0
>   at the end of successful API calls.
>
>   (NOTE! This is the only description that is intentionally false.
>There *is* one API call that resets errno to 0 -- but I think
>it shouldn't, and I will complain about it Real Soon Now.)
> }
>
>
>
> pn_messenger_error
> {
>   Return a text description of the most recent error.
>   Initialized to null at messenger creation.
>   Error text is "sticky" i.e. not reset to null
>   at the end of successful API calls.
> }
>
>
This is no longer accurate. As of 0.5 pn_messenger_error returns a pointer
to a pn_error_t. There is a pn_error_* API that allows for accessing the
text, error number, and doing things like setting/clearing the error code
explicitly if you wish to do so.


>
> pn_messenger_get
> {
>   Pop the oldest message off your incoming message queue,
>   and copy it into the given message structure.
>   If the given pointer to a message structure in NULL,
>   the popped message is discarded.
>   Returns PN_EOS if there are no messages to get.
>   Returns an error code only if there is a problem in
>   decoding the message.
> }
>

That should probably read "If the given pointer to a message structure *is*
NULL, ..."


>
> pn_messenger_get_certificate
> {
>   Return the certificate path if one has been set,
>   by pn_messenger_set_certificate, or null.
> }
>

This is a little bit ambiguous, maybe "Return the certificate path. This
value may be set by pn_messenger_set_certificate. The default certificate
path is null."


>
> pn_messenger_get_incoming_window
> {
>   Returns the size of the incoming window that was
>   set with pn_messenger_set_incoming_window.  The
>   default is 0.
> }
>
>
> pn_messenger_get_outgoing_window
> {
>   Returns the size of the outgoing window that was
>   set with pn_messenger_set_outgoing_window.  The
>   default is 0.
> }
>
>
>
> pn_messenger_incoming_subscription
> {
>   Returns a pointer to the subscription of the message returned by the
>   most recent call to pn_messenger_get(), or NULL if pn_messenger_get()
>   has never been called.
> }
>
>
>
> pn_messenger_incoming_tracker
> {
>   Returns a tracker for the message most recently fetched by
>   pn_messenger_get().  The tracker allows you to accept or reject its
>   message, or its message plus all prior messages that are still within
>   your incoming window.
> }
>
>
>
> pn_messenger_interrupt
> {
>   Call this from a non-messenger thread to interrupt
>   a messenger that is blocking.
>   Return value:  0 if all is well, or -1.
>   If -1 is returned, that is not PN_EOS.  It is the return
>   value of the system call write(3), and can be printed with
>   perror(3).
> }
>
>
>
> pn_messenger_is_blocking
> {
>   Accessor for messenger blocking mode.
>   Note: this tells you only whether the messenger is in
>   blocking mode.  This will not tell you (if called from
>   a separate thread) that a messenger is currently blocking
> }
>
>
>
>
> pn_messenger_outgoing_tracker
> {
>   Returns a tracker for the outgoing message most recently given
>   to pn_messenger_put.  Use this tracker with pn_messenger_status
>   to determine the delivery status o

Re: [doc] I will check these in on Friday the 4th...

2013-10-01 Thread Rafael Schloming
Comments inline...

On Mon, Sep 30, 2013 at 2:43 PM, Ted Ross  wrote:

> Comments in-line below...
>
> A general question/comment:  Is the single-threaded nature of this API
> clearly spelled out somewhere?  There's a lot of the use of the "returns X
> associated with the most recent call to Y" pattern, which isn't
> multi-thread-friendly.  Then, there are calls like pn_messenger_interrupt
> and pn_messenger_stopped that suggest multi-threaded use.  What's the
> intent here?
>

This is a good point. We probably need some sort of API level description
that states basic assumptions like this. To answer Ted's question, in
general the API is single threaded. The only exception to that is
pn_messenger_interrupt which can be safely called from another thread to
interrupt a blocking call.
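
To make the exception concrete, here is a rough sketch (the pthread setup and
the five second delay are made-up illustration, not a recommended pattern):

  #include <unistd.h>
  #include <pthread.h>
  #include <proton/messenger.h>

  /* Sketch: the only call intended to be made from a second thread. */
  static void *interrupter(void *arg)
  {
    pn_messenger_t *m = (pn_messenger_t *)arg;
    sleep(5);                   /* give the main thread five seconds */
    pn_messenger_interrupt(m);  /* wakes a blocking call such as pn_messenger_recv() */
    return NULL;
  }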


>
> -Ted
>
>
> On 09/30/2013 01:46 PM, Michael Goulish wrote:
>
>>
>>
>> I will check this stuff in this coming Friday, 4 Oct,
>> ( at midnight, in the last timezone on Earth...)
>> if I don't hear any objections / suggestions,
>> so please take a look before then if you would like to
>> provide feedback.
>>
>>
>>
>> These are expanded descriptions that I'd like to add to the C API
>> documentation.  ( These are the descriptions only -- where the
>> current info already explains the parameters and returns values
>> I will just leave those in place. )
>>
>>
>> These are the only ones I plan to change at this time.
>>
>>
>>
>>
>> Please take a look to see
>>
>>1. whether the description matches your understanding
>>   of what the functions do, and how they fit together.
>>
>>
>>2. whether you, as a developer using this code, would
>>   find the description useful, sufficient, understandable,
>>   etc.
>>
>>
>> Question 2 is still very valuable even if you have no
>> idea about Question 1.
>>
>>
>>
>> This is not yet a complete list.  Some of the functions are
>> clear already, and some I have no clue about as yet.
>>
>>
>>
>> Here they are:
>>
>>
>> pn_messenger_interrupt
>> {
>>Call this from a non-messenger thread to interrupt
>>a messenger that is blocking.
>>Return value:  0 if all is well, or -1.
>>If -1 is returned, that is not PN_EOS.  It is the return
>>value of the system call write(3), and can be printed with
>>perror(3).
>> }
>>
>
> It appears that the error-space for this function is different from all of
> the other functions.  This call uses the posix errors whereas
> pn_messenger_error uses a Proton-specific error code.


That's not intentional, it's just passing up the driver error code. We
probably should fix this.


>
>
>  pn_messenger_receiving
>> {
>>Returns the number of messages that
>>was requested by the most recent call
>>to pn_messenger_recv.
>> }
>>
>
> I'd like to see a case where this is needed?   When would you use it?


The description isn't quite right. It is supposed to return the number of
messages that could be inbound, i.e. the credit you've given messenger to
use for incoming messages. This does correspond to what was most recently
passed to recv, however it should change as incoming messages arrive.


>
>
>  pn_messenger_reject
>> {
>>Rejects the message indicated by the tracker.  If the PN_CUMULATIVE
>>flag is used this call will also reject all prior messages that
>>have not already been settled.  The semantics of message rejection
>>are application-specific.  If messages represent work requests,
>>then rejection would leave the sender free to try another receiver,
>>without fear of having the same task done twice.
>> }
>>
>
> It is my understanding that rejected messages should never be re-sent.
>  Isn't the above description appropriate for RELEASED, not REJECTED?
>
>
>
>>
>>
>> pn_messenger_rewrite
>> {
>>Similar to pn_messenger_route(), except that the destination of
>>the message is determined before the message address is rewritten.
>>If a message has an outgoing address of "amqp://0.0.0.0:5678", and a
>>rewriting rule that changes its outgoing address to "foo", it will
>> still
>>arrive at the peer that is listening on "amqp://0.0.0.0:5678", but
>> when
>>it arrives there, its outgoing address will have been changed to "foo".
>> }
>>
>
> What is the purpose of this function?  If the to-field has been re-written
> to "foo", how will it then arrive at your intended destination?


It allows a layer of indirection between addresses in your application and
addresses you use on the wire, e.g. you can use "FOOBAR" inside your app
and then configure messenger to rewrite that into "//bar.com/foo" or
something like that. Internally the mechanism is used to rewrite
credentials out of addresses.
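
A minimal sketch of that kind of indirection (the names "FOOBAR" and
"//bar.com/foo" are just the made-up examples from above, and m is an
existing messenger):

  /* Sketch: application-level name on the left, on-the-wire form on the right. */
  pn_messenger_rewrite(m, "FOOBAR", "//bar.com/foo");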


>
>
>  pn_messenger_stopped
>> {
>>If a call to pn_messenger_stop returns a non-zero code,
>>use this to determine whether the messenger has stopped.
>> }
>>
>
> Does this function block?  Do you need to call it in a loop?  What are the
> multi-threading implications?
>

It doesn't block, it

Re: Simple guide to use it

2013-10-16 Thread Rafael Schloming
Could you provide some info about what system you're trying to install on?

--Rafael


On Wed, Oct 16, 2013 at 9:04 AM, Darryl L. Pierce wrote:

> On Wed, Oct 16, 2013 at 04:39:34PM +0800, Azmi abdul rahman wrote:
> > Hi,
> >
> > I have built Proton 0.5 and I have created send.exe and recv.exe.
> >
> > But how do I link it to Python?
> >
> > From the sample code, I can see that I need to import proton, but there's
> > no setup.py for me to install it in the first place.
> >
> > Please help me.
>
> I'll be happy to help you.
>
> Currently the way to install the Proton Python bindings is to use the
> CMake build environment and do "make install". That will get the
> proper site package directory from your installed Python to know where
> to put the libraries.
>
> There are existing Python examples for send and recv that are in the
> examples/messenger/py directory.
>
> --
> Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc.
> Delivering value year after year.
> Red Hat ranks #1 in value among software vendors.
> http://www.redhat.com/promo/vendor/
>
>


0.6 Release/0.7 Planning

2013-10-23 Thread Rafael Schloming
Hi Everyone,

There have been a bunch of good fixes and improvements on trunk since 0.5,
and we now have a dwindling number of JIRAs slated for 0.6, so it's
probably time to start thinking about another release. I believe right now
there are 7 outstanding 0.6 JIRAs. (Please give a shout if there are any
JIRAs you believe should be included in 0.6 that are not currently on the
list.)

Also, in the spirit of having a little more up front planning, I'd like to
solicit comments on where people would like to see progress in 0.7, either
specific JIRAs or general feature areas.

Thanks,

--Rafael


Re: 0.6 Release/0.7 Planning

2013-10-23 Thread Rafael Schloming
On Wed, Oct 23, 2013 at 8:53 AM, Ken Giusti  wrote:

> Hi Rafi,
>
> I'd like to get PROTON-200 finished for 0.6, which involves porting the
> algorithm in the C implementation of Messenger to the Java implementation.
>  This port is dependent on PROTON-444, so I'd like to include that in 0.6
> also.
>
> Ok?
>

Yeah, that makes sense.

--Rafael


Re: 0.6 Release/0.7 Planning

2013-10-24 Thread Rafael Schloming
They look pretty straightforward, I don't see why we can't get them into
0.6.

--Rafael


On Thu, Oct 24, 2013 at 7:26 AM, Bozo Dragojevic  wrote:

> On 23. 10. 13 13:10, Rafael Schloming wrote:
>
>> Hi Everyone,
>>
>> There have been a bunch of good fixes and improvements on trunk since 0.5,
>> and we now have a dwindling number of JIRAs slated for 0.6, so it's
>> probably time to start thinking about another release. I believe right now
>> there are 7 outstanding 0.6 JIRAs. (Please give a shout if there are any
>> JIRAs you believe should be included in 0.6 that are not currently on the
>> list.)
>>
>
> I'd like to see PROTON-428 and PROTON-290 in 0.6
>
> Bozzo
>


Re: messenger store and links

2013-10-24 Thread Rafael Schloming
Can you post the exact addresses and routing configuration you're using and
which direction messages are flowing? I'd like to try to simulate this
using the example send/recv scripts.

My guess is that the issue may not be so much related to whether the
addresses are NULL or not but whether there are multiple receivers
competing for the same messages.

--Rafael


On Thu, Oct 24, 2013 at 11:52 AM, Bozo Dragojevic wrote:

> Hi!
>
> Chasing down a weird behavior...
>
> looking at messenger's pni_pump_out() and how it's used from
> pn_messenger_endpoints()
>
>    link = pn_link_head(conn, PN_LOCAL_ACTIVE | PN_REMOTE_ACTIVE);
>    while (link) {
>      if (pn_link_is_sender(link)) {
>        pni_pump_out(messenger, pn_terminus_get_address(pn_link_target(link)),
>                     link);
>
>
> is it really fair to assume that target address is always expected to be
> non NULL?
>
>
> I've added a bit of debug code to pn_messenger_endpoints() so it reads:
>
>   link = pn_link_head(conn, PN_LOCAL_ACTIVE | PN_REMOTE_ACTIVE);
>   while (link) {
>     if (pn_link_is_sender(link)) {
>       static int addrnull, addrok;
>       const char *address = pn_terminus_get_address(pn_link_target(link));
>       if (!address) {
>         addrnull++;
>       } else {
>         addrok++;
>       }
>       fprintf(stderr, "links with null address: %d links with ok address %d\n",
>               addrnull, addrok);
>       pni_pump_out(messenger, address, link);
>
>
> and I never see 'addrok' change from 0
>
>
> when pni_pump_out is called with address==NULL:
>
> int pni_pump_out(pn_messenger_t *messenger, const char *address, pn_link_t
> *sender)
> {
>   pni_entry_t *entry = pni_store_get(messenger->outgoing, address);
>   if (!entry) return 0;
>
> pni_store_get cheerfully returns the first message on the list
>
>
> The end effect is that random clients start receiving messages not directed
> at them.
>
>
> For some inexplicable reason it mostly works out while there are just two
> clients connected to the messenger and we're not pushing it really hard.
> Still trying to come up with a simple test case.
>
> Can anyone shed some light on how the addressing at the link level is
> supposed to work in messenger?
>
> Bozzo
>


Re: proposed Python API doc changes -- will check in on All Hallow's Eve

2013-10-24 Thread Rafael Schloming
Thanks for the update. I've put some specific comments in line, but overall
this should be a big improvement.

--Rafael

On Thu, Oct 24, 2013 at 1:33 PM, Michael Goulish wrote:

>
>
>   Dear Proton Proponents --
>
>
> Here is my proposed text for Python Messenger API documentation.
>
> If you'd like to comment, please do so within the next week.
> I will incorporate feedback and check in the resulting
> changes to the codebase at the stroke of midnight, on
> All Hallows Eve.  ( Samhain. )
>
>
> I have given you the current text for each method and property,
> and then my changes.  My changes are either proposed replacements
> ( NEW_TEXT ) or proposed additions ( ADD_TEXT ).
>
> Mostly, this is highly similar to the C API text, but with
> minor changes for Pythonification.
>
>
>   -- Mick .
>
>
>
>
> Class Comments
> {
>   CURRENT_TEXT
>   {
> The Messenger class defines a high level interface for
> sending and receiving Messages. Every Messenger contains
> a single logical queue of incoming messages and a single
> logical queue of outgoing messages. These messages in these
> queues may be destined for, or originate from, a variety of
> addresses.
>   }
>
>   ADD_TEXT
>   {
> The messenger interface is single-threaded.  All methods
> except one ( interrupt ) are intended to be used from within
> the messenger thread.
>   }
> }
>
>
>
>
> Sending & Receiving Messages
> {
>   CURRENT_TEXT
>   {
> The L{Messenger} class works in conjuction with the L{Message}
> class. The L{Message} class is a mutable holder of message content.
> The L{put} method will encode the content in a given L{Message}
> object into the outgoing message queue leaving that L{Message}
> object free to be modified or discarded without having any impact
> on
> the content in the outgoing queue.
>
> Similarly, the L{get} method will decode the content in the
> incoming
> message queue into the supplied L{Message} object.
>   }
>
>
>
>   NEW_TEXT
>   {
>     The Messenger class works in conjunction with the Message class. The
> Message class is a mutable holder of message content.
>
> The put method copies its message to the outgoing queue, and may
> send queued messages if it can do so without blocking.  The send
> method blocks until it has sent the requested number of messages,
> or until a timeout interrupts the attempt.
>
> Similarly, the recv() method receives messages into the incoming
> queue, and may block until it has received the requested number of
> messages, or until timeout is reached.  The get method pops the
> eldest message off the incoming queue and copies it into the
> message
> object that you supply.  It will not block.
>   }
>

The blocking behavior of recv is actually slightly different from what you
state: it will block until it receives *up to* N messages.

It's also probably worth noting somewhere that the blocking flag turns off
blocking entirely for the whole API, with the exception of
Messenger.work()/pn_messenger_work(). With blocking turned off, calls like
send/recv will do what work they can without blocking and then return. You
can then use interrogatives like checking the number of incoming/outgoing
messages to see what outstanding work still remains.
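
A rough sketch of that non-blocking pattern, using the C API for illustration
(the Python binding mirrors it; m and msg are assumed to be an existing
messenger and message, and a real application would interleave this with its
own event loop rather than spin):

  pn_messenger_set_blocking(m, false);
  pn_messenger_put(m, msg);            /* queue the message; may or may not send it */
  while (pn_messenger_outgoing(m) > 0) {
    pn_messenger_work(m, 0);           /* do whatever work can be done without blocking */
  }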


>
>
>   NOTE
>   {
> I thought it would be better in this comment to only emphasize
> the blocking and non-blocking differences between get/put and
> recv/send.  Details about how the arg message is handled are moved
> to the comments for specific methods.
>   }
>
> }
>
>
>
>
> Method Details
> {
>   __init__
>   {
> CURRENT_TEXT
> {
>   Construct a new L{Messenger} with the given name. The name has
>   global scope. If a NULL name is supplied, a L{uuid.UUID} based
>   name will be chosen.
> }
>
> NEW_TEXT
> {
>   // no change
> }
>   }
>
>
>   __del__
>   {
> CURRENT_TEXT
> {
>   // none
> }
>
> NEW_TEXT
> {
>   Destroy the messenger.  This will close all connections that
>   are managed by the messenger.  Call the stop method before
>   destroying the messenger.
> }
>   }
>
>
>   start
>   {
> CURRENT_TEXT
> {
>   Transitions the L{Messenger} to an active state. A L{Messenger}
> is
>   initially created in an inactive state. When inactive a
>   L{Messenger} will not send or receive messages from its internal
>   queues. A L{Messenger} must be started before calling L{send} or
>   L{recv}.
> }
>
> NEW_TEX

Re: messenger store and links

2013-10-24 Thread Rafael Schloming
If this is with trunk I'm guessing you might be noticing a change in
reply-to behaviour from a recent fix:
https://issues.apache.org/jira/browse/PROTON-278

As you mention, previously an unset reply-to would get automatically filled
in with the messenger's name. This is no longer the case. That behaviour
was unintentional as there are times when you legitimately want the
reply-to to be left unset. The intended behaviour was to expand "~" at the
beginning of an address to the messenger's name. That is now how trunk
behaves, so if you set your reply-to's to "~" then your problem might go
away, although your question is still an interesting one as I believe if
you wished you could intentionally set up competing receivers using
explicit non-uuid addresses that collide.
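
In other words, with trunk you now opt in explicitly, e.g. (a minimal sketch,
assuming an existing message msg; the exact expanded form is the messenger's
name, e.g. amqp://<uuid>):

  pn_message_set_reply_to(msg, "~");  /* "~" at the start is expanded to the messenger's name */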

--Rafael


On Thu, Oct 24, 2013 at 2:49 PM, Bozo Dragojevic  wrote:

> All messengers are created with default constructor (uuid-based names).
> The 'broker' messenger does a pn_messenger_subscribe("amqp://~0.0.0.0:8194")
> All messages are constructed with address amqp://127.0.0.1:8194 and leave
> reply_to unset (so it's set to amqp://$uuid)
>
> Broker does application-level routing of messages
>   a publisher sends a special 'register' message
>   replies are constructed using stored 'reply_to' address from incoming
> message
>   forwarded messages are constructed using stored 'reply_to' address from
> incoming 'registration' messages
>
> Messenger routing facility is not used in any way.
> All Messengers are running in async mode (broker and client library share
> the same 'event loop code').
> We're using outgoing trackers, mostly for the 'buffered' check
> All incoming messages are accepted as soon as they are processed.
> All outgoing messages are settled as soon as they are not buffered anymore
>
> maybe it'd be possible to simulate the situation by commenting out the
> pni_pump_out() in pn_messenger_put(), that, or at least checking if sender
> link address really has anything to do with
> the address calculated in pn_messenger_put()
>
> Bozzo
>
>
> On 24. 10. 13 20:25, Rafael Schloming wrote:
>
>> Can you post the exact addresses and routing configuration you're using
>> and
>> which direction messages are flowing? I'd like to try to simulate this
>> using the example send/recv scripts.
>>
>> My guess is that the issue may not be so much related to whether the
>> addresses are NULL or not but whether there are multiple receivers
>> competing for the same messages.
>>
>> --Rafael
>>
>>
>> On Thu, Oct 24, 2013 at 11:52 AM, Bozo Dragojevic wrote:
>>
>>  Hi!
>>>
>>> Chasing down a weird behavior...
>>>
>>> looking at messengers pni_pump_out() and how it's used from
>>> pn_messenger_endpoints()
>>>
>>>     link = pn_link_head(conn, PN_LOCAL_ACTIVE | PN_REMOTE_ACTIVE);
>>>     while (link) {
>>>       if (pn_link_is_sender(link)) {
>>>         pni_pump_out(messenger, pn_terminus_get_address(pn_link_target(link)),
>>>                      link);
>>>
>>>
>>> is it really fair to assume that target address is always expected to be
>>> non NULL?
>>>
>>>
>>> I've added a bit of debug code to pn_messenger_endpoints() so it reads:
>>>
>>>    link = pn_link_head(conn, PN_LOCAL_ACTIVE | PN_REMOTE_ACTIVE);
>>>    while (link) {
>>>      if (pn_link_is_sender(link)) {
>>>        static int addrnull, addrok;
>>>        const char *address = pn_terminus_get_address(pn_link_target(link));
>>>        if (!address) {
>>>          addrnull++;
>>>        } else {
>>>          addrok++;
>>>        }
>>>        fprintf(stderr, "links with null address: %d links with ok address %d\n",
>>>                addrnull, addrok);
>>>        pni_pump_out(messenger, address, link);
>>>
>>>
>>> and I never see 'addrok' change from 0
>>>
>>>
>>> when pni_pump_out is called with address==NULL:
>>>
>>> int pni_pump_out(pn_messenger_t *messenger, const char *address,
>>>                  pn_link_t *sender)
>>> {
>>>   pni_entry_t *entry = pni_store_get(messenger->outgoing, address);
>>>   if (!entry) return 0;
>>>
>>> pni_store_get cheerfuly returns first message on the list
>>>
>>>
>>> end effect is that random clients start receiving messages not directed
>>> at
>>> them.
>>>
>>>
>>> For some inexplicable reason is mostly works out while there are just two
>>> clients
>>> connected to the messenger and we're not pushing it really hard. Still
>>> trying to come
>>> up with a simple test-case.
>>>
>>> Can anyone shed some light how the addressing on the link level is
>>> supposed to work in mesenger?
>>>
>>> Bozzo
>>>
>>>
>


Re: Send and receive a message through a broker.

2013-10-31 Thread Rafael Schloming
If you build the qpid cpp broker with proton enabled, you should be able to
use the qpid messaging API to send and receive messages over AMQP 1.0 the
same way it is used for 0-10.

--Rafael


On Thu, Oct 31, 2013 at 5:19 AM, Tomáš Šoltys wrote:

> Hi,
>
> I am trying to get into AMQP 1.0 and the proton library but I am a bit lost.
>
> With AMQP 0.10 you could create an exchange and a queue and bind them
> together. Then you could send a message with some routing key and read it
> from the particular queue. This seems to have changed.
>
> I have noticed that there are topics now.
>
> Is there a way to send a message and receive it in similar fashion as with
> AMQP 0.10 using proton?
> Thanks,
>
> Tomas Soltys
>


Re: [jira] [Commented] (PROTON-401) Ordering issue prevents credit drain from working properly

2013-11-05 Thread Rafael Schloming
You should be able to use pn_link_get_drain(sender) to determine the drain
mode of a sender.


On Tue, Nov 5, 2013 at 11:50 AM, Ted Ross (JIRA)  wrote:

>
> [
> https://issues.apache.org/jira/browse/PROTON-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13814034#comment-13814034]
>
> Ted Ross commented on PROTON-401:
> -
>
> If there is a way for the Engine user to determine the drain mode of a
> sender, I believe I can solve my problem.
>
> > Ordering issue prevents credit drain from working properly
> > --
> >
> > Key: PROTON-401
> > URL: https://issues.apache.org/jira/browse/PROTON-401
> > Project: Qpid Proton
> >  Issue Type: Bug
> >  Components: proton-c
> >Affects Versions: 0.5
> >Reporter: Ken Giusti
> >Assignee: Rafael H. Schloming
> > Fix For: 0.6
> >
> > Attachments: drain-error.patch, drain-hack.patch
> >
> >
> > If the sending link calls pn_link_drained() to indicate that it has send
> all pending data, and afterwards it receives a Flow frame with drain=true
> from the peer, then the drain never completes.
> > The ordering is the problem: if the flow frame w/drain=true is received
> _BEFORE_ the sender calls pn_link_drained(), then it works.
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.1#6144)
>


Re: The implicit reply_to address

2013-11-05 Thread Rafael Schloming
Hi,

I'm scratching my head a bit to understand your scenario so I'll answer the
easy question first. ;-) The address format currently used by messenger
isn't part of any official spec, and the official spec will likely look a
bit different. When the spec is ratified, we will have to change and
probably do some work to maintain backwards compatibility.

Now as for your scenario: I applied your patch and ran through the scenario
you describe. What I observed was the first reply going from window 1 to
window 3 getting dropped due to the (intentionally) aborted connection, and
the second reply ending up going to window 2, because the window 1 messenger
saw that there was no connection and was able to initiate its own connection
based on the address in the reply_to. What is perhaps a little
odd about this configuration is the fact that the messenger in window 3 has
chosen a dns registered name that collides with the ip/port used in window
2. If you imagine instead that the messenger in window 3 is actually
subscribed to  amqp://~127.0.0.1:23456/, then the behaviour could be quite
useful. The server could in fact get a reply back to the client even if the
client's original connection dies.

So assuming I understand your scenario/suggestion correctly, I'd say the
ability to choose a dns registered name for your messenger is actually a
feature, not a bug. I'll also point out that because the messenger name is
placed in the reply_to address with simple text substitution, you can
always choose your messenger name to be '/foo' instead of foo (or indeed
you could use any illegal domain character) in order to avoid the
possibility of accidental collision.
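
For instance (a made-up example of that naming trick):

  /* Sketch: a name containing '/' can never collide with a DNS-registered host name. */
  pn_messenger_t *m = pn_messenger("/my-service");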

Does this help at all or do I have the wrong end of the stick regarding
your scenario?

--Rafael

On Sun, Nov 3, 2013 at 11:47 AM, Bozo Dragojevic  wrote:

> On 3. 11. 13 17:41, Bozo Dragojevic wrote:
>
> [snip]
>
>  The attached patch 0002-... implements the former solution.
>>
>>
> I just noticed it's possible to say reply_to = "~/foo/bar". If the
> principle is sound I can supply the rest of needed changes.
>
> Bozzo
>


Proton 0.6 RC1

2013-11-08 Thread Rafael Schloming
Hi Everyone,

I've put together a 0.6 RC1. Sources/binaries are in the usual places
listed below. There are still a couple of outstanding 0.6 JIRAs, but they
should not preclude getting some good testing in on the bulk of the code.
FYI I will be travelling for the next two weeks, so please record any
issues you find in JIRA and I'll sort through them when I get back.

Source: http://people.apache.org/~rhs/qpid-proton-0.6rc1/
Java Binaries:
https://repository.apache.org/content/repositories/orgapacheqpid-104/

--Rafael


Re: Dynamic language install and CMAKE_INSTALL_PREFIX

2013-12-05 Thread Rafael Schloming
As I understand it, the following process is roughly what we're going
through for each binding to determine possible/actual install locations:

1. Query the (python/perl/ruby/php/...) interpreter to find the appropriate
directory that is in the interpreters search path by default, e.g.
site-packages for python. Let's call this the QUERIED_LOCATION.
2. Modify the QUERIED_LOCATION by substituting the CMAKE_INSTALL_PREFIX for
the interpreter's own install prefix. Let's call this the
CONSTRUCTED_LOCATION.
3. In the case where {LANG}_INSTALL_PREFIX is specified, step (2) is
modified to substitute {LANG}_INSTALL_PREFIX rather than
CMAKE_INSTALL_PREFIX. Let's call this the CUSTOM_LOCATION.

Just in case things aren't clear from above, here are some example values
for python:

  Example 1: CMAKE_INSTALL_PREFIX=/usr/local
  ===
  QUERIED_LOCATION=/usr/lib64/python2.6/site-packages
  CONSTRUCTED_LOCATION=/usr/local/lib64/python2.6/site-packages
  CUSTOM_LOCATION=*N/A*

  Example 2: CMAKE_INSTALL_PREFIX=/usr, PYTHON_INSTALL_PREFIX=/usr/local
  ===
  QUERIED_LOCATION=/usr/lib64/python2.6/site-packages
  CONSTRUCTED_LOCATION=/usr/lib64/python2.6/site-packages
  CUSTOM_LOCATION=/usr/local/lib64/python2.6/site-packages

  Example 3: CMAKE_INSTALL_PREFIX=/home/rhs/proton,
PYTHON_INSTALL_PREFIX=/home/rhs/modules
  ===
  QUERIED_LOCATION=/usr/lib64/python2.6/site-packages
  CONSTRUCTED_LOCATION=/home/rhs/proton/lib64/python2.6/site-packages
  CUSTOM_LOCATION=/home/rhs/modules/lib64/python2.6/site-packages

The existing trunk behaviour is to simply use the QUERIED_LOCATION
directly. This will place installed code where it will be found by
precisely the interpreter it was built against without any users being
required to set up custom search paths. This has a number of benefits and a
couple of drawbacks also.

The primary benefit is that the build will adapt itself to the user's
environment. If the user has some custom python/ruby interpreter in their
path, it will configure and built itself against it and get the user up and
running right away. This makes for a very simple and idiot proof README for
someone wanting to get up and running quickly from a source build, and for
much the same reason it is also very handy for testing. I depend on it
myself quite a bit since I have a number of differently configured VMs that
I use for install testing, and for each one I can simply log in and use the
same incantation and be confident I'm running/testing the code that I
expect. There is also a second order testing benefit since having a dirt
simple and robust build option lets us give a source tarball to other
people to test easily and not have to explain to them how to set up custom
search paths for each language before they can get bootstrapped into
running test code.

The drawbacks that have been pointed out are that when you do specify a
CMAKE_INSTALL_PREFIX, it is unintuitive for things to be placed outside
that prefix, and this can happen if the QUERIED_LOCATION for a given
interpreter happens to be outside the specified prefix. It's also been
pointed out that if you happen to have an rpm installed version of proton
then with the existing trunk behaviour you could end up accidentally
overwriting it since rpm installs proton code into the QUERIED_LOCATION
also.

Based on my reading, the change you've pointed to removes the ability to
install directly to the QUERIED_LOCATION and instead uses the
CONSTRUCTED_LOCATION. It also adds a consistent control interface for
providing a custom location, i.e. the {LANG}_INSTALL_PREFIX variables.
Assuming I've read this correctly, I have the following comments.

First, I'm not ok with losing the ability to install directly to the
queried location. I don't mind if it's not the default, but I want a simple
and easy way to get back that behaviour as it is of significant value in
the scenarios I've mentioned above.

Second, I think it's important to realize that the CONSTRUCTED_LOCATION is
almost guaranteed to be meaningless and quite possibly harmful if it is at
all different from the QUERIED_LOCATION. To understand this you can take a
look at the values from example 1 above. The queried interpreter is the
system interpreter installed under /usr, but the binding code is installed
under /usr/local. In the best case scenario, this code will never be found
because nothing under /usr/local is in the default python search path. In
the worst case scenario there may be other python interpreters installed
under /usr/local that will find and attempt to load the code but will fail
because the code was built against a differently configured python (the
python version could be different, or it could even be the same version but
with a different build configuration).

Third, the custom location doesn't actually give you full control over
where the module is installed because it appends a portion of the queried
location. This strikes

Re: Dynamic language install and CMAKE_INSTALL_PREFIX

2013-12-05 Thread Rafael Schloming
On Thu, Dec 5, 2013 at 2:29 PM, Darryl L. Pierce  wrote:

> On Thu, Dec 05, 2013 at 01:57:22PM -0500, Rafael Schloming wrote:
> 
> > So overall I'd say this change should have some kind of switch to control
> > whether the QUERIED_LOCATION is used directly, and I'd argue that for the
> > CUSTOM_LOCATION we should just pass through directly what the user
> supplies
> > and not attempt to merge it with the queried value. As for the
> > CONSTRUCTED_LOCATION, it's worth noting that we don't necessarily need to
> > compute that either, we could just pick an arbitrary location, e.g.
> > ${CMAKE_INSTALL_PREFIX}/lib64/proton-bindings or some such thing.
>
> I find the simplicity in this scenario to be very attractive. It also
> avoids situations like what I saw with the PHP ini directory location,
> to use project-defined directories for defaults.
>
> What I've seen, for each of the language bindings, is a need to know:
>
>  1. the directory to install platform-independent modules,
>  2. the directory to install platform-specific modules, and
>  3. the directory to install configuration (PHP only)
>
> So perhaps ${LANG}_LIBDIR, ${LANG}_ARCHDIR and ${LANG}_CONFDIR? In an
> RPM specfile we could define each one using the provide language's macro
> and it would be a fairly easy integration point. And if you don't define
> them then they work as you suggest above.
>

Sounds good to me. We could also use one for docs. The python bindings have
documentation, and hopefully the other bindings will eventually need
somewhere for docs as well.

--Rafael


>
> > Wherever we end up, we should also probably abstract the behaviour into a
> > macro so that the behaviour is easier to keep consistent between
> bindings,
> > and so that new bindings pick up the same behaviour automatically.
>
> +1
>
> --
> Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc.
> Delivering value year after year.
> Red Hat ranks #1 in value among software vendors.
> http://www.redhat.com/promo/vendor/
>
>


Re: Dynamic language install and CMAKE_INSTALL_PREFIX

2013-12-05 Thread Rafael Schloming
On Thu, Dec 5, 2013 at 3:26 PM, Justin Ross  wrote:

> On Thu, Dec 5, 2013 at 1:57 PM, Rafael Schloming  wrote:
> > The primary benefit is that the build will adapt itself to the user's
> > environment. If the user has some custom python/ruby interpreter in their
> > path, it will configure and built itself against it and get the user up
> and
> > running right away. This makes for a very simple and idiot proof README
> for
> > someone wanting to get up and running quickly from a source build, and
> for
> > much the same reason it is also very handy for testing. I depend on it
> > myself quite a bit since I have a number of differently configured VMs
> that
> > I use for install testing, and for each one I can simply log in and use
> the
> > same incantation and be confident I'm running/testing the code that I
> > expect. There is also a second order testing benefit since having a dirt
> > simple and robust build option lets us give a source tarball to other
> > people to test easily and not have to explain to them how to set up
> custom
> > search paths for each language before they can get bootstrapped into
> > running test code.
> >
> > The drawbacks that have been pointed out are that when you do specify a
> > CMAKE_INSTALL_PREFIX, it is unintuitive for things to be placed outside
> > that prefix, and this can happen if the QUERIED_LOCATION for a given
> > interpreter happens to be outside the specified prefix. It's also been
> > pointed out that if you happen to have an rpm installed version of proton
> > then with the existing trunk behaviour you could end up accidentally
> > overwriting it since rpm installs proton code into the QUERIED_LOCATION
> > also.
>
> Yes, indeed.  These are coming from me.  I want:
>
> 1. A relatively easy way to do test builds under a single prefix.
> What's there now requires a tediously long command line that repeats
> the prefix for each binding, and my scripts would surely break if
> someone added a new binding.
>
> 2. A safely isolated default prefix: It's more than "you could end up
> accidentally overwriting". It's really quite likely, because the
> out-of-the-box, no-extra-steps behavior will write to OS-reserved
> locations.  Down below you talk about doing harm.  *This* is harm.
>
> > Based on my reading, the change you've pointed to removes the ability to
> > install directly to the QUERIED_LOCATION and instead uses the
> > CONSTRUCTED_LOCATION. It also adds a consistent control interface for
> > providing a custom location, i.e. the {LANG}_INSTALL_PREFIX variables.
> > Assuming I've read this correctly, I have the following comments.
> >
> > First, I'm not ok with losing the ability to install directly to the
> > queried location. I don't mind if it's not the default, but I want a
> simple
> > and easy way to get back that behaviour as it is of significant value in
> > the scenarios I've mentioned above.
>
> As a note in passing (since I'm not really proposing you change back):
> most autotools- or cmake-based projects don't have this behavior, and
> we all get along fine.  For instance, qpid-cpp has for a long time.
> That's because in the end it really isn't difficult to explain to
> users that they need to edit their interpreter's search path to match
> the install path they choose.  All the value you get in those test
> scenarios is still easily had by doing this.  Indeed, that's why all
> those other projects haven't felt pressure to add query-based install
> paths.
>

Most projects are established enough to already be part of every
distribution and can therefore depend on rpm style install behaviours to
provide a seamless experience for the user. As such, they don't need to
think of their build system as performing any sort of installation function
at all other than the bare minimum necessary to bootstrap into distros.
Proton on the other hand is a) not in every distribution yet, and b) wants
to be able to reach environments that don't necessarily use one of the
popular distribution systems. Given that, I would argue that proton's build
system has the perhaps somewhat old fashioned requirement of also
functioning as an installer, and being able to easily install into the
usual locations fits with that role.

Even disregarding the installation experience, the bottom line is that
without this I will need to hardcode different install scripts for each
differently configured environment I currently test with, or short of that
try to implement the same functionality with my own script outside of the
build system.  This will result in a 

Re: Python wrapper - SASL and SSL class API

2013-12-11 Thread Rafael Schloming
On Wed, Dec 11, 2013 at 9:35 AM, Ken Giusti  wrote:

>
> Hi all - just wanted to get some opinions on $Subject:
>
> While I was trying to implement a fix for
> https://issues.apache.org/jira/browse/PROTON-476  I found that the
> lifecycle model for the python SASL and SSL objects differs for the C
> engine.  I think the python wrapper's impl is buggy.
>
> In the C engine, these objects are singletons with respect to their
> associated transport - there can only be one SSL and SASL object associated
> with a given transport.  This is enforced by the C API - the transport
> provides factory classes for these objects.
>
> The python wrapper doesn't enforce this.  For both objects, a "public"
> constructor is supplied (I say "public" because it is exported by the
> wrapper's __all__ list).  This makes it trivial for an application to
> construct multiple instances of SASL/SSL objects that reference the same
> underlying C object.  While this can technically be done safely using
> reference counting, I think it may lead to unanticipated behavior - not to
> mention that it differs from the object model provided by the C engine.
>
> I'd like to fix this by modifying the python wrapper to remove the SSL and
> SASL objects from the __all__ list, and provide factory methods on the
> Transport class for creating instances of these objects.
>
> This would result in a change to the public API.
>

Why exactly do you need to change the API to do this? I would expect there
should be a number of ways to keep it the same, e.g. rename the class and
use a factory function with the same name as the class, or override __new__
if you want to keep the class name the same. Am I missing something?

--Rafael


Proton 0.6 RC2

2013-12-18 Thread Rafael Schloming
Hi Everyone,

I've just posted proton 0.6 RC2. The changes since RC1 are listed in the
attached file. Please check it out and let me know if you run into any
issues.

Sources are available here:

http://people.apache.org/~rhs/qpid-proton-0.6rc2/

Java binaries are available here:

https://repository.apache.org/content/repositories/orgapacheqpid-070/

--Rafael

r1541098 | mcpierce | 2013-11-12 10:15:27 -0500 (Tue, 12 Nov 2013) | 4 lines

PROTON-450: Use random ports for Ruby Rspec tests.

The ports used are randomly chosen above 5700, to avoid interference
from a running Qpid broker.

r1542435 | kgiusti | 2013-11-15 19:52:22 -0500 (Fri, 15 Nov 2013) | 1 line

PROTON-449: skip SSL tests if no SSL libraries available.

r1543085 | kgiusti | 2013-11-18 12:06:45 -0500 (Mon, 18 Nov 2013) | 1 line

PROTON-453: add missing valgrind suppressions

r1543112 | mcpierce | 2013-11-18 14:17:42 -0500 (Mon, 18 Nov 2013) | 5 lines

PROTON-452: Expose the Messenger interrupt method in Ruby.

Exposed the method in Qpid::Proton::Messenger. Also provided the
Qpid::Proton::Error::InterruptedError type that is thrown when
necessary.

r1543292 | kgiusti | 2013-11-18 21:34:29 -0500 (Mon, 18 Nov 2013) | 1 line

PROTON-453: add valgrind suppression for older libraries

r1543481 | mcpierce | 2013-11-19 11:36:30 -0500 (Tue, 19 Nov 2013) | 1 line

PROTON-457: Exposed the interrupt method of Messenger in Perl.

r1543550 | mcpierce | 2013-11-19 14:57:11 -0500 (Tue, 19 Nov 2013) | 1 line

PROTON-454: Added the route method to Ruby's Messenger class.

r1543565 | mcpierce | 2013-11-19 15:42:26 -0500 (Tue, 19 Nov 2013) | 1 line

PROTON-455: Added rewrite method to Ruby Messenger class.

r1543595 | mcpierce | 2013-11-19 17:05:02 -0500 (Tue, 19 Nov 2013) | 1 line

PROTON-456: Added password property to Ruby Messenger class.

r1545697 | tross | 2013-11-26 10:54:20 -0500 (Tue, 26 Nov 2013) | 2 lines

PROTON-466 - driver fix.


r1547022 | rhs | 2013-12-02 09:02:16 -0500 (Mon, 02 Dec 2013) | 1 line

initial stab at PROTON-439

r1547025 | rhs | 2013-12-02 09:12:14 -0500 (Mon, 02 Dec 2013) | 1 line

fixes for PROTON-439

r1547027 | rhs | 2013-12-02 09:14:38 -0500 (Mon, 02 Dec 2013) | 1 line

PROTON-290: applied updated patch from Bozo

r1547036 | rhs | 2013-12-02 09:46:37 -0500 (Mon, 02 Dec 2013) | 1 line

PROTON-290: skip buffered tests for java

r1547038 | rhs | 2013-12-02 09:47:38 -0500 (Mon, 02 Dec 2013) | 1 line

PROTON-418: fixed data error interface to match other APIs

r1547063 | rhs | 2013-12-02 10:39:49 -0500 (Mon, 02 Dec 2013) | 1 line

PROTON-439: properly format subscription address

r1547066 | mcpierce | 2013-12-02 10:42:49 -0500 (Mon, 02 Dec 2013) | 1 line

PROTON-463: Add Tracker class to Perl bindings

r1547067 | mcpierce | 2013-12-02 10:44:22 -0500 (Mon, 02 Dec 2013) | 1 line

PROTON-463: Updated ChangeLog for Perl

r1547072 | mcpierce | 2013-12-02 10:58:11 -0500 (Mon, 02 Dec 2013) | 4 lines

PROTON-463: Fixed the Perl incoming tracker method name.

Renamed it to "get_incoming_tracker" to match the "get_outgoing_tracker"
method.

r1547089 | rhs | 2013-12-02 11:34:31 -0500 (Mon, 02 Dec 2013) | 1 line

PROTON-439: don't substitute remote addresses that are already absolute

r1547100 | rhs | 2013-12-02 11:45:59 -0500 (Mon, 02 Dec 2013) | 1 line

PROTON-439: remove spurious slash for absolute addresses

r1547155 | mcpierce | 2013-12-02 14:14:54 -0500 (Mon, 02 Dec 2013) | 4 lines

Re: Proton 0.6 RC2

2013-12-18 Thread Rafael Schloming
You're right, the address accessor is supposed to block until the address
is available. It certainly looks like it's not doing that. I'll try to
reproduce locally or, failing that, produce a debugging patch for you to try
in your environment.

--Rafael


On Wed, Dec 18, 2013 at 5:44 PM, Ted Ross  wrote:

> Digging further into this, I see that Messenger is providing the
> subscription and the address before the dynamic-attach handshake is
> completed.  It was my understanding that one or both of those calls would
> block until the name was resolved.
>
> -Ted
>
>
> On 12/18/2013 05:25 PM, Ted Ross wrote:
>
>> QPID-439 seems to have reverted in this RC.
>>
>> Here's my client code:
>>
>> self.M.route("amqp:/*", "amqp://%s/$1" % host)
>> self.subscription = self.M.subscribe("amqp:/#")
>> self.reply = self.subscription.address
>> print "REPLY:", self.reply
>>
>> The output is:
>>
>> REPLY: None
>>
>> yet the trace looks like this:
>>
>> [0x26135e0]:0 -> @attach(18) [name="receiver-xxx", handle=0,
>> role=true, snd-settle-mode=2, rcv-settle-mode=0,
>> source=@source(40) [durable=0, timeout=0, dynamic=true],
>> target=@target(41) [durable=0, timeout=0, dynamic=false],
>> initial-delivery-count=0]
>> [0x26135e0]:0 <- @attach(18) [name="receiver-xxx", handle=0,
>> role=false, snd-settle-mode=2, rcv-settle-mode=0,
>> source=@source(40) [address="amqp:/_topo/0/
>> Router.A/temp.4TQT_a",
>> durable=0, timeout=0, dynamic=true], initial-delivery-count=0]
>>
>>
>>
>> On 12/18/2013 04:53 PM, Rafael Schloming wrote:
>>
>>> Hi Everyone,
>>>
>>> I've just posted proton 0.6 RC2. The changes since RC1 are listed in the
>>> attached file. Please check it out and let me know if you run into any
>>> issues.
>>>
>>> Sources are available here:
>>>
>>> http://people.apache.org/~rhs/qpid-proton-0.6rc2/ <
>>> http://people.apache.org/%7Erhs/qpid-proton-0.6rc2/>
>>>
>>> Java binaries are available here:
>>>
>>> https://repository.apache.org/content/repositories/orgapacheqpid-070/
>>>
>>> --Rafael
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>


Re: Proton 0.6 RC2

2013-12-19 Thread Rafael Schloming
I believe I've fixed this on trunk. Let me know if you still see the
problem there.

--Rafael


On Wed, Dec 18, 2013 at 5:44 PM, Ted Ross  wrote:

> Digging further into this, I see that Messenger is providing the
> subscription and the address before the dynamic-attach handshake is
> completed.  It was my understanding that one or both of those calls would
> block until the name was resolved.
>
> -Ted
>
>
> On 12/18/2013 05:25 PM, Ted Ross wrote:
>
>> QPID-439 seems to have reverted in this RC.
>>
>> Here's my client code:
>>
>> self.M.route("amqp:/*", "amqp://%s/$1" % host)
>> self.subscription = self.M.subscribe("amqp:/#")
>> self.reply = self.subscription.address
>> print "REPLY:", self.reply
>>
>> The output is:
>>
>> REPLY: None
>>
>> yet the trace looks like this:
>>
>> [0x26135e0]:0 -> @attach(18) [name="receiver-xxx", handle=0,
>> role=true, snd-settle-mode=2, rcv-settle-mode=0,
>> source=@source(40) [durable=0, timeout=0, dynamic=true],
>> target=@target(41) [durable=0, timeout=0, dynamic=false],
>> initial-delivery-count=0]
>> [0x26135e0]:0 <- @attach(18) [name="receiver-xxx", handle=0,
>> role=false, snd-settle-mode=2, rcv-settle-mode=0,
>> source=@source(40) [address="amqp:/_topo/0/
>> Router.A/temp.4TQT_a",
>> durable=0, timeout=0, dynamic=true], initial-delivery-count=0]
>>
>>
>>
>> On 12/18/2013 04:53 PM, Rafael Schloming wrote:
>>
>>> Hi Everyone,
>>>
>>> I've just posted proton 0.6 RC2. The changes since RC1 are listed in the
>>> attached file. Please check it out and let me know if you run into any
>>> issues.
>>>
>>> Sources are available here:
>>>
>>> http://people.apache.org/~rhs/qpid-proton-0.6rc2/ <
>>> http://people.apache.org/%7Erhs/qpid-proton-0.6rc2/>
>>>
>>> Java binaries are available here:
>>>
>>> https://repository.apache.org/content/repositories/orgapacheqpid-070/
>>>
>>> --Rafael
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>


Re: Proton 0.6 RC2

2013-12-19 Thread Rafael Schloming
On Wed, Dec 18, 2013 at 6:43 PM, Frank Quinn  wrote:

> Hi Rafael,
>
> Could we get PROTON-420 in there too? I attached a patch to the ticket
> which fixes it; it's just compiler warning prevention when compiling
> against proton with strict flags.
>

I have applied a modified version of the PROTON-420 patch. I really don't
like adding a space to examples. I feel like it is pretty inevitable that
people will cut and paste example snippets and if there is an extra space
in there then it will lead to lots of problems. I do recall that there is
an alternative syntax for detailed comment descriptions in doxygen that
makes use of // style comments rather than /* style comments. I've switched
over to this form and removed the -Wno-comment from the build. This appears
to work for me, so hopefully it will fix your warnings also. SVN appears to
be down now, but I will check it in as soon as it is back up.
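
For illustration only (the doc text and route pattern below are made up, not
the actual patch), the alternative form looks roughly like this; because the
detailed description uses /// line comments, the embedded example can contain
/* without prematurely closing the doc block:

/// Route messages matching a pattern to the given address, e.g.:
///
///   pn_messenger_route(messenger, "amqp:/*", "amqp://broker/$1");  /* wildcard */
///
PN_EXTERN int pn_messenger_route(pn_messenger_t *messenger, const char *pattern,
                                 const char *address);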

--Rafael


improving cross language maintainability

2013-12-19 Thread Rafael Schloming
Hi Everyone,

I've been struggling with some of the cross language maintenance aspects of
proton for a while, and I believe we need to take some steps to improve the
situation. I'm one of a tiny number of people (two possibly) who regularly
commit changes to both the Java and C codebase and attempt to keep the two
in sync and at feature parity. Because of this, not everyone is necessarily
aware of the process, but to summarize the issue, currently there are far
too many moving parts and layers of indirection involved. This is not only
a significant drag on my personal productivity, but perhaps more
importantly it is a significant barrier to growing the number of proton
contributors in general as all those moving parts and layers of indirection
need to be understood before being able to make complete contributions.

To get an idea of what's going on here I think it helps to look at what's
involved in a simple change. Recently I noticed an edge case around status
updates in the messenger interface. The fix involved adding another value
to messenger's status enum and making a trivial logic change to make use of
that value under the appropriate circumstances. You can look at the full
commit here[1] if you like, but it breaks down in the following way:

Changes to C implementation (.h and .c file): 8 LOC
Changes to the python test shim for C: 3 LOC
Changes to the Java API: 2 LOC
Changes to the pure Java implementation: 6 LOC
Changes to the JNI binding: 4 LOC
Changes to the python test shim for Java: 3 LOC
Changes to the python test suite: 2 LOC

Now obviously from a personal productivity perspective it is at a minimum
annoying to have to touch so many different parts in order to make even a
trivial change, but setting that aside for the moment, what is really
sobering about this is that each one of the above parts involves a non-trivial
learning curve of its own, and while it's true that only a few
lines of code are needed in each area, it is necessary to understand each
piece before being able to write the correct few lines of code. This
presents a pretty daunting hurdle for bringing new contributors to the
codebase up to the level needed to make even a trivial change like the one
above.

The JNI binding and the python test shim for Java both exist to serve
similar purposes, namely to facilitate running a common test suite against
both implementations in order to ensure common behaviour. The python test
shims allow the python test suite to run against both proton-c and proton-j
(via jython), and the JNI binding allows pure Java tests to be run against
proton-c as well as proton-j.

Currently by my count there are about 19 such tests in Java. By comparison
there are about 266 tests in the python test suite. Also, judging by
commits, the python test suite is growing/actively maintained, and the java
system tests (the subset of java tests that are run against both
implementations) are neither. On top of this the JNI binding itself has
suffered significantly from bit rot. As far as I know it is not made use of
outside of our own test infrastructure, and currently about 50% of the
tests run against it are skipped because it is only minimally maintained
when necessary to get the tests to pass.

Because of all this I'm proposing that we remove the JNI binding and roll
back the Java API/impl split that was (hastily) done to facilitate the test
infrastructure. This should significantly simplify the development process
relative to what it is now, and while there is still more learning curve
than desired with the python shims, I believe this will put us in a
position to improve the shims, remove duplication, and minimize the
overhead associated with them, ultimately allowing the codebase to
communicate its design more transparently and hopefully lessening the
learning curve for new contributors.

I would love to hear thoughts and/or alternative ideas on how to improve
things. I would like to start addressing this early in the 0.7 development
cycle.

[1] http://svn.apache.org/r1551950


Proton 0.6 RC3

2013-12-19 Thread Rafael Schloming
Hi Everyone,

I've put out an RC3 with fixes to the issues people have noted so far with
RC2. You can find the source tarballs here:

http://people.apache.org/~rhs/qpid-proton-0.6rc3/

The java binaries are here:

https://repository.apache.org/content/repositories/orgapacheqpid-003/

I've attached the list of changes since RC2.

--Rafael

r1552390 | rhs | 2013-12-19 13:22:51 -0500 (Thu, 19 Dec 2013) | 1 line

PROTON-420: modified pn_messenger_route comment to use alternative detailed 
block syntax to avoid comment warnings due to embedded examples containing /\*

r1552341 | mcpierce | 2013-12-19 11:29:19 -0500 (Thu, 19 Dec 2013) | 3 lines

PROTON-482: Fix the Ruby install directory.

Use vendorarchdir rather than vendorlibdir.

r1552223 | rhs | 2013-12-18 22:48:51 -0500 (Wed, 18 Dec 2013) | 1 line

added released status

r1552221 | rhs | 2013-12-18 22:41:15 -0500 (Wed, 18 Dec 2013) | 1 line

PROTON-420: added error.h portion of patch

r1552218 | rhs | 2013-12-18 22:16:12 -0500 (Wed, 18 Dec 2013) | 1 line

fixed braino in PROTON-439



Re: improving cross language maintainability

2013-12-19 Thread Rafael Schloming
On Thu, Dec 19, 2013 at 9:01 AM, Darryl L. Pierce wrote:

> On Thu, Dec 19, 2013 at 08:49:59AM -0500, Rafael Schloming wrote:
> 
> > I would love to hear thoughts and/or alternative ideas on how to improve
> > things. I would like to start addressing this early in the 0.7
> development
> > cycle.
>
> In a similar way, I'm trying to keep our Ruby and Perl bindings in
> parity as best I can with what's going on in the C and Python code. Can
> we use JIRA to create umbrella tasks for when new features are added,
> with subtasks that are binding specific? Or if there's a change to the C
> code that would require a change in the bindings, have the C code be the
> top JIRA and the language bindings be subtasks to that? That way I
> wouldn't need to look through commits to see what's changed in C to know
> what should be added to the other languages.
>
> Also, could we add a component for each of the language bindings? It
> doesn't feel right to add a JIRA for something in Ruby that's at the
> Ruby level but have its component be "proton-c".
>

It certainly makes sense to make it as easy as possible to track what is
going on, and I can see how that would help keep bindings up to date where
there is interest and resources to do so. However we do this, though, I
don't want to just brainlessly duplicate each C jira across every binding
(not that you're necessarily suggesting this).

The problem I have with that approach is that there isn't equivalent
interest/resources associated with each binding, so e.g. if we were to make
every JIRA a full umbrella that depends on sub tasks for each binding we
would continually accumulate php jiras that never end up getting closed off
because we don't keep php as up to date as the other bindings, and this in
turn would cause the umbrella JIRAs to never get closed off. Jira is really
a task oriented tool, and I think JIRAs should really only be created when
there is intention/interest to actually complete the task they represent,
otherwise they usually end up being noise/clutter that will eventually be
irrelevant and out of date. I'd suggest that perhaps a more document
oriented description of those features for which we are trying to keep
parity would possibly be helpful.

All that said, I'm sure we can improve our usage of JIRA, and
I've gone ahead and added ruby-binding, python-binding, perl-binding, and
php-binding components as you suggest.

--Rafael


Re: improving cross language maintainability

2013-12-19 Thread Rafael Schloming
On Thu, Dec 19, 2013 at 2:44 PM, Ken Giusti  wrote:

> Sorry for top-posting.  I'm trying to understand the consequences of what
> you are proposing.
>
> First, as I understand it, there are two separate test suites in the
> proton tree: one written in Java - containing 19 tests as you point out -
> and a much larger one written in python.  Each test suite exercises both
> the Java and C implementations.  By testing both implementations using the
> same tests, we ensure consistency across the two implementations (to some
> extent, as a lot of the python tests are skipped)
>
> What you're proposing would remove the ability for the Java test suite to
> exercise the C implementation, right?
>

Yes, it would remove the ability for Java tests to exercise C code by
calling it through JNI. We do have the interop test suite which checks that
there is common behaviour without using JNI, i.e. by comparing binary
output of codec and such across languages and I can imagine a number of
other ways we could test for common behaviour that would not involve Java
code calling into C code, e.g. comparing protocol traces, or running
interop scenarios over the wire.


>
> This means that only the python test suite would be used to ensure cross
> implementation consistency (Java v. C), right?
>

Yes


>
> What doesn't change is the two python wrapper implementations - one that
> wraps the Java API, the other wraps the C API - that are used by the python
> test suite to test both implementations.  We'd still have to keep both of
> those sync'ed.
>

Correct, although I believe there are some ways that we can significantly
improve the commonality between the two python wrapper implementations
(shims), and I suspect this will improve the consistency of the APIs as well.


>
> If all my assumptions above are correct, then I can live with this.  It
> still ensures (some) consistency checking between the two implementations.
>   Drift between the two python wrappers could be caught by the tests
> themselves, so I'm not too concerned about that.
>
> I'd like to see that time saved maintaining two test beds invested in
> bringing both implementations to parity - IMHO we're skipping far too many
> tests due to feature disparity.
>

Agreed.

--Rafael


Re: improving cross language maintainability

2013-12-20 Thread Rafael Schloming
On Fri, Dec 20, 2013 at 5:46 AM, Rob Godfrey wrote:

> Since I have pretty much given up on trying to work on proton, I'm not sure
> that my opinion counts for much anymore... however I suppose I'm a little
> curious how the JNI binding differs in "extra work" vs. PHP or Ruby or
> such. Similarly why the API distinction is "hasty" when again it should
> (modulo some language idiom changes) reflect the API that the C is
> presenting to Ruby/PHP/Python/etc.
>

That's a really good question. One obvious factor is that the JNI bindings
cover the full surface of the engine API, whereas the PHP and Ruby bindings
only cover the messenger API, the latter being much smaller. The other
factor is that we have a different workflow with how we maintain the
scripting language bindings. From a design perspective each of those
bindings is more tightly coupled to the C code since that is the only
implementation they use, but from a workflow perspective each one is
actually more independent than the Java binding. When we add a feature to
the C messenger implementation we don't try to add it simultaneously across
all the bindings. It generally goes into python because that is how we
test, and it goes into perl and ruby pretty quickly after that because
Darryl is good about keeping those up to date, but, e.g., in the case of
php, we haven't tried to keep the binding at full feature parity. Finally,
the JNI binding is a bit of a different animal from the others in that it
is not just performing a simple wrapping of the C API and directly exposing
that to Java. It is adapting between the C API underneath it and the Java
API above it.

As for the hastiness, perhaps that was a poor choice of words. The problem
with the Java API/impl split is that the API was produced from an existing
implementation after the fact in order to facilitate testing. As such it
isn't well suited to be implemented by two independent implementations,
particularly one that is JNI based.


>
> To me the fact that APIs aren't defined but that instead the only way of
> knowing what a particular call should do seems to be 1) look at the code in
> C and, if it's not clear, then 2) ask Rafi; is why I no longer feel it
> productive to try to contribute.  As such, it seems that this change is
> more about vertical scaling (making it easier for Rafi, and those sitting
> very near him, to work more productively), than horizontal scaling
> (allowing people who are not Rafi or with easy access to him to work on
> it).
>

Really there are two separate issues here. You're talking about conceptual
barriers to contribution, and I'm talking about mechanical barriers. Both
are barriers to contribution, and both prevent horizontal scaling. Fixing
some of the mechanical barriers to contribution does happen to also improve
vertical scaling, but as I said in my original post, I think the more
important factor is making it easier/possible for people to make complete
contributions.

FWIW, I agree we need lots of additional work around API definition and
documentation. Having more time for that is also one of the reasons I would
like to reduce some of the overhead involved in the current development
process.


>
> Overall I don't care very much about Messenger in Java (either pure Java or
> JNI), but I do care about the Engine.  When the Proton Engine was first
> discussed it was always said that the C code should be able to be called
> from within Java as an alternative to a pure Java impl. - has this changed?
>

I'm fine with that as a goal, however I can't say it's a priority for me.
More importantly, as I've said before, I don't believe the use of JNI in
production is at all compatible with the use of JNI for testing purposes.
The current JNI binding exposes each part of the engine API in very fine
detail. This is what you want for testing, so you can exercise every part of
the C API from Java. This is probably exactly the opposite of what you'd
want for production use though. I expect you'd want to minimize the number
of times you cross the C/Java boundary and just tie into the engine API at
key points that are shifting a lot of bytes around.


>
> Going back to the horizontal vs. vertical scaling... If the project were
> structured differently I would be very happy to contribute to the engine
> side of things in Java.  However I think that would require us to be
> organised differently with a recognition that the Engine and Messenger are
> conceptually separate (with Messenger depending on the Engine API, and
> having no more authority over defining/testing the engine API than any
> other client - such as the C++ broker, ActiveMQ, or the upcoming JMS client,
> etc.). As such I would rather see us fix the definition of the API (if you
> believe it to be "hasty") rather than simply try to remove any notion of
> there being an API which is distinct from the implementation.
>

I'm not sure exactly what you're asking for here with respect to Messenger
and Engine. I believe they are currently layered precisely the way you describe.

Re: improving cross language maintainability

2013-12-20 Thread Rafael Schloming
On Fri, Dec 20, 2013 at 11:43 AM, Fraser Adams <
fraser.ad...@blueyonder.co.uk> wrote:

> I've been following this thread with interest and I guess that Rob's
> comment
>
>
> "
>
> However I think that would require us to be
> organised differently with a recognition that the Engine and Messenger are
> conceptually separate (with Messenger depending on the Engine API, and
> having no more authority over defining/testing the engine API than any
> other client
> "
>
> Struck some resonance.
>
> I think perhaps the layering is OK if you're familiar with the code base
> and perhaps it's more about "packaging" than layering, but FWIW coming into
> proton quite "cold" it seems like messenger and engine are essentially
> "combined". Now I know that's not the case now from the responses to my
> "how does it all hang together" question of a few weeks ago, but in terms
> of packaging that's still kind of the case.
>
> So what I mean by that is that qpid::messaging and proton messenger are as
> far as has been discussed with me "peer" APIs - both high level that can
> achieve similar things albeit with different semantics, whereas engine is a
> lower level API.
>
> Why is it in that case that the proton library is built that contains both
> messenger and engine? OK, so it's convenient, but as far as I'm aware
> neither qpid::messaging nor Ted's dispatch router actually use messenger at
> all, they both use engine? That would suggest to me that engine and
> messenger probably ought to be exposed as separate libraries? (and
> presumably there's a similar position in Java where the JMS implementation
> is engine not messenger based - though I've not looked at the jar
> packaging).
>
> I guess (perhaps related) it just might be better if messenger was in a
> separate source tree, which I'd agree might be a faff, but would clarify
> the hierarchical rather than peer relationship between messenger and engine.
>
> So I guess that my take is that Rafael is perfectly accurate when he says
> that "I believe they are currently layered precisely the way you
> describe" that certainly seems to be the case at the logical level, but at
> the "physical" level it's an awful lot less clear of the layering -
> certainly from the perspective of a novice trying to navigate the code base.
>

There is certainly a pragmatic element in keeping messenger and engine
bundled the way they are. From a development perspective, having messenger
in the same code base makes testing significantly easier. Writing a simple
command line program to fire off a few messages in order to reproduce some
scenario is quite trivial with messenger, but would involve lots of boiler
plate with just the engine directly. That's something I would like to work
on improving about the engine API, but it is very slow and difficult to
make changes given the current setup I described in my original post. Given
that, realistically I think if we were to pull messenger into a separate
code base, there would be multiple sources of duplicate effort, not only
having to duplicate/produce another non trivial multi lingual build system,
but we would also need to build up a test harness that duplicates a good
chunk of what messenger already does. Granted a chunk of that is
boilerplate that should be somehow refactored into the engine API proper so
that it is easier to use in general, and if this were to happen such a
split would probably be more feasible, but that kind of refactoring is
really prohibitively expensive given all the different shims and bindings
that would be impacted.

I'd also point out that from a user perspective I think there is some
synergy in bundling a higher level tool together with the engine. It is
very rare that you are going to want to embed the engine in something and
not want some convenient high level way to send that thing a message. Case
in point, Ted's dispatch router proper doesn't actually use messenger as
you say; however, he ships it with several command line scripts that do use
messenger. Splitting them up would result in both him and his users having
to deal with an extra dependency.


> I'd have to agree also with Rob about the horizontal scaling aspects of
> lack of comments and documentation, particularly with respect to engine.
> I'm currently trying to get a JavaScript_messenger_  working because that
> seems tractable given the examples and documentation around messenger
> (though as you know even there I've had a fair few complications) my
> personal preference/familiarity is however around the JMS/qpid::messaging
> API and I'd ultimately quite like to have a JavaScript "binding" for that,
> but at this stage I wouldn't know where to begin and I'd probably end up
> "reverse engineering" the work Gordon has done with qpid::messaging and/or
> Rob and co. are doing with the new JMS implementation.
>
> I'd be interested to hear from Gordon and Ted about how they went about
> building capability on top of the engine API.
>
> I hope I'm not comin

[VOTE]: Release Proton 0.6 RC3 as 0.6 final

2014-01-02 Thread Rafael Schloming
Hi Everyone,

It looks like there haven't been any major issues reported so far with 0.6
RC3, so I guess it's about time to call for a formal vote.

Source is here:

http://people.apache.org/~rhs/qpid-proton-0.6rc3/

Java binaries are here:

https://repository.apache.org/content/repositories/orgapacheqpid-003/

Please peruse/test and register your vote:

[ ] Yes, release 0.6 RC3 as 0.6 final
[ ] No, 0.6 RC3 has the following issues...

--Rafael


Re: improving cross language maintainability

2014-01-07 Thread Rafael Schloming
So far there has been some good discussion on this thread. I can certainly
appreciate the frustration that has been expressed, specifically around
under defined APIs, and confusion regarding the various components of
proton. I believe there are steps we can take to improve both of those
situations. I've created additional components in JIRA for tracking changes
across bindings, and I'm currently working on some content describing
proton's overall architecture and how messenger and engine fit within it
and relate to each other.

That said, no amount of documentation or JIRA usage will help with the
issues I raised in my original post. In particular making even small code
changes requires touching too many different parts: (C code, Java impl,
Java API, JNI binding, C shim, Java shim, and python tests). As a first
step towards improving this I'd like to reiterate my proposal of removing
the JNI binding for the following reasons:

  - It is experiencing significant bit rot.
  - It is not currently used outside of testing.
  - It only provides a minimal amount of additional coverage (some 19 or so
tests as compared to the 266 tests already in the shared python test suite).
  - It is a significant maintenance burden as it covers the entire engine
API rather than just messenger.
  - It is unlikely to ever be of production value, because a JNI layer
designed for testing an existing C library has fundamentally different
requirements than a JNI layer designed to improve the performance of an
existing Java library.

--Rafael


Re: improving cross language maintainability

2014-01-08 Thread Rafael Schloming
On Tue, Jan 7, 2014 at 3:24 PM, Rob Godfrey  wrote:

> On 20 December 2013 19:49, Rafael Schloming  wrote:
>
> > On Fri, Dec 20, 2013 at 11:43 AM, Fraser Adams <
> > fraser.ad...@blueyonder.co.uk> wrote:
> >
> > > I've been following this thread with interest and I guess that Rob's
> > > comment
> > >
> > >
> > > "
> > >
> > > However I think that would require us to be
> > > organised differently with a recognition that the Engine and Messenger
> > are
> > > conceptually separate (with Messenger depending on the Engine API, and
> > > having no more authority over defining/testing the engine API than any
> > > other client
> > > "
> > >
> > > Struck some resonance.
> > >
> > > I think perhaps the layering is OK if you're familiar with the code
> base
> > > and perhaps it's more about "packaging" than layering, but FWIW coming
> > into
> > > proton quite "cold" it seems like messenger and engine are essentially
> > > "combined". Now I know that's not the case now from the responses to my
> > > "how does it all hang together" question of a few weeks ago, but in
> terms
> > > of packaging that's still kind of the case.
> > >
> > > So what I mean by that is that qpid::messaging and proton messenger are
> > as
> > > far as has been discussed with me "peer" APIs - both high level that
> can
> > > achieve similar things albeit with different semantics, whereas engine
> > is a
> > > lower level API.
> > >
> > > Why is it in that case that the proton library is built that contains
> > both
> > > messenger and engine? OK, so it's convenient, but as far as I'm aware
> > > neither qpid::messaging nor Ted's dispatch router actually use
> messenger
> > at
> > > all, they both use engine? That would suggest to me that engine and
> > > messenger probably ought to be exposed as separate libraries? (and
> > > presumably there's a similar position in Java where the JMS
> > implementation
> > > is engine not messenger based - though I've not looked at the jar
> > > packaging).
> > >
> > > I guess (perhaps related) it just might be better if messenger was in a
> > > separate source tree, which I'd agree might be a faff, but would
> clarify
> > > the hierarchical rather than peer relationship between messenger and
> > engine.
> > >
> > > So I guess that my take is that Rafael is perfectly accurate when he
> says
> > > that "I believe they are currently layered precisely the way you
> > > describe" that certainly seems to be the case at the logical level, but
> > at
> > > the "physical" level it's an awful lot less clear of the layering -
> > > certainly from the perspective of a novice trying to navigate the code
> > base.
> > >
> >
> > There is certainly a pragmatic element in keeping messenger and engine
> > bundled the way they are. From a development perspective, having
> messenger
> > in the same code base makes testing significantly easier.
>
>
> It might make it easier for you (since you have a complete view of every
> piece of the code and design) - for myself I found that it made things
> *much* harder, as tests were running through multiple layers rather than
> being isolated to a single component.  When a test "failed" it was entirely
> unclear which layer the failure resided in or even why the outcome should
> be as the test expected.  Personally I'd say the testing is a strong
> argument for separation (although it means more work upfront it makes it
> easier for everyone in the long run).
>

I'm not suggesting messenger tests are in any way a substitute for API
tests against the engine itself, rather that there are other important categories
of testing, e.g. soak testing, that would require building something that
is pretty much equivalent to what messenger already does.


>
>
> > Writing a simple
> > command line program to fire off a few messages in order to reproduce
> some
> > scenario is quite trivial with messenger, but would involve lots of
> boiler
> > plate with just the engine directly. That's something I would like to
> work
> > on improving about the engine API, but it is very slow and difficult to
> > make changes given the current setup I described in my original post.
>
>
>
> > Given
> > th

Re: Getting -9 when setting blocking to 0 in Perl

2014-01-09 Thread Rafael Schloming
Well, proton/error.h defines -9 to be PN_INPROGRESS, however given that
this is the source code of pn_messenger_set_blocking:

int pn_messenger_set_blocking(pn_messenger_t *messenger, bool blocking)
{
  messenger->blocking = blocking;
  return 0;
}

I don't see how it's possible for it to return anything other than 0.

--Rafael



On Wed, Jan 8, 2014 at 3:18 PM, Darryl L. Pierce  wrote:

> Since, in Perl, we don't have a true/false value, I try to turn off
> blocking in qpid::perl::Messenger with:
>
> my $msgr = qpid::perl::Messenger->new();
>
> $msgr->set_blocking(0); // I just added this as a passthrough to
> pn_messenger_set_blocking
>
> If I pass in 0, "0" or undef to signify it's non-blocking, I get a -9
> error.
> If I pass in 1 or some other value, things work correctly.
>
> Any ideas?
>
> --
> Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc.
> Delivering value year after year.
> Red Hat ranks #1 in value among software vendors.
> http://www.redhat.com/promo/vendor/
>
>


Re: Proton-C: How to set application properties?

2014-01-09 Thread Rafael Schloming
On Thu, Jan 9, 2014 at 11:12 AM, Andreas Mueller  wrote:

> Hi,
>
> I don't find a way to set the application properties of a message in
> Proton-C. Can someone give me a hint?
>

You should be able to access the application properties like this:

pn_data_t *properties = pn_message_properties(message);

You can then use the data API to access/modify the values, e.g.:

pn_data_put_map(properties);
pn_data_enter(properties);
pn_data_put_string(properties, ...); // key 1
pn_data_put_xxx(properties, ...);  // value 1
pn_data_put_string(properties, ...); // key 2
pn_data_put_yyy(properties, ...); // value 2
pn_data_exit(properties);
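
For instance, the placeholders above might be filled in like this (the
property name and value are made up for illustration):

pn_data_put_map(properties);
pn_data_enter(properties);
pn_data_put_string(properties, pn_bytes(strlen("colour"), "colour")); /* key 1 */
pn_data_put_string(properties, pn_bytes(strlen("red"), "red"));       /* value 1 */
pn_data_exit(properties);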


>
> Is there something like a user guide or getting started docs about
> Messenger?
>

What we have is currently available here:

  http://qpid.apache.org/components/messenger/index.html

It is probably a bit more developed for the python binding than for the C
API itself. Certainly accessing/manipulating data is properly
under-documented in C as all of that is done automatically in the python
binding and so requires much less documentation. Let me know if there are
particular parts you'd like to see filled out and I'll see about improving
them.

--Rafael


Re: Getting -9 when setting blocking to 0 in Perl

2014-01-09 Thread Rafael Schloming
On Thu, Jan 9, 2014 at 10:29 AM, Darryl L. Pierce wrote:

> On Thu, Jan 09, 2014 at 10:19:36AM -0500, Rafael Schloming wrote:
> > Well, proton/error.h defines -9 to be PN_INPROGRESS, however given that
> > this is the source code of pn_messenger_set_blocking:
> >
> > int pn_messenger_set_blocking(pn_messenger_t *messenger, bool blocking)
> > {
> >   messenger->blocking = blocking;
> >   return 0;
> > }
> >
> > I don't see how it's possible for it to return anything other than 0.
>
> Sorry, I meant that I'm getting a -9 error condition on calling
> pn_messenger_receive when blocking is set to 0.
>
> I added to perl.i a conversion for the bool type so that it Perl->C
> would use true/false and for C->Perl would use 1/undef. With that done
> the Perl bindings stopped getting a -9 on calling receive.
>
> I'll post it on Review Board once I get it working (I'm writing a Perl
> version of {send,recv}_async to test this new feature).
>

In the case of the C API, getting a PN_INPROGRESS error (-9) from recv is
expected behaviour if you're in non-blocking mode. That just means that
there is blocking work that was deferred.
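
Roughly speaking, a non-blocking receive loop treats PN_INPROGRESS as "try
again later" rather than as a failure. This is only an illustrative sketch;
the "running" flag and the pn_message_t *message are assumed to be managed by
the application:

pn_messenger_set_blocking(messenger, false);
while (running) {
  int err = pn_messenger_recv(messenger, -1);
  if (err != 0 && err != PN_INPROGRESS) {
    fprintf(stderr, "recv: %s\n", pn_error_text(pn_messenger_error(messenger)));
    break;
  }
  /* PN_INPROGRESS just means the blocking part of the work was deferred;
     there may still be messages ready to be drained */
  while (pn_messenger_incoming(messenger) > 0) {
    pn_messenger_get(messenger, message);
    /* process the message ... */
  }
  /* do other work, then poll again */
}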

--Rafael


Re: pn_messenger_send return code

2014-01-10 Thread Rafael Schloming
Hi,

Welcome to the list.

The put and send return codes are used for more basic errors, e.g. coding
errors. Tracking the status of outgoing messages has its own API similar
to the one you've already used in your receiver. Below is a rough example.
I've omitted the return code checking for brevity:

pn_messenger_set_outgoing_window(messenger, N); // track the status of the
last N outgoing messages
pn_messenger_put(messenger, m1);
pn_tracker_t t1 = pn_messenger_outgoing_tracker(messenger); // Grab the
tracker for the last outgoing message (in this case m1)
pn_messenger_put(messenger, m2);
pn_tracker_t t2 = pn_messenger_outgoing_tracker(messenger); // Grab the
tracker for the last outgoing message (in this case m2)
// ...
// ... if you "put" more than N messages without calling send, messenger
won't keep the status for the older trackers
// ...
pn_messenger_send(messenger, -1);

// now check the status for whatever messages I care about:
printf("Status off M1: %d\n", pn_messenger_status(messenger, t1));
// ...

Hopefully this will get you going, but please follow up if you have any more
questions.

--Rafael


On Fri, Jan 10, 2014 at 2:34 PM, serega  wrote:

> Hello. I am evaluating Qpid Proton AMQP Messenger  to communicate with
> SwiftMQ broker. I've had issues with receiving messages, but I used
> the suggestion here, and it
> solved the problem.  The use case I am testing is when the broker is down. Modified
> send.c code:
>
>  int res = pn_messenger_put(messenger, message);
>  printf("Error %d\n", res);
>  check(messenger);
>  res = pn_messenger_send(messenger, -1);
>  printf("Error %d\n", res);
>  check(messenger);
>
>
> The output:
> __
> Error 0
> read: Connection refused
> [0x101009c00:0] ERROR[-2] SASL header mismatch: ''
> send: Broken pipe
> CONNECTION ERROR connection aborted
> Error 0
> __
>
>
> Shouldn't there be a non zero error code as documented in messenger.h ?
> * @return an error code or zero on success
>  * @see error.h
>  */
> PN_EXTERN int pn_messenger_send(pn_messenger_t *messenger, int n);
>
>
> Thanks,
> Sergey.
>
>
>
> --
> View this message in context:
> http://qpid.2158936.n2.nabble.com/pn-messenger-send-return-code-tp7602570.html
> Sent from the Apache Qpid Proton mailing list archive at Nabble.com.
>


Re: SASL / proton-c questions

2014-01-14 Thread Rafael Schloming
Hi,

Messenger will simply use anonymous if you don't specify a username and
password, e.g. amqp://broker/node will result in the client forcing
ANONYMOUS. If you do specify a username and password, e.g.
amqp://user:pass@broker/node, then messenger will force PLAIN. There is no
way to disable SASL entirely. If you're interested in stronger security
than PLAIN offers, your best bet is probably to use SSL with client side
certificates.
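
For example (illustrative addresses only), the mechanism is selected purely by
the form of the address passed in:

pn_messenger_subscribe(messenger, "amqp://broker/node");            /* ANONYMOUS */
pn_messenger_subscribe(messenger, "amqp://user:pass@broker/node");  /* PLAIN */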

--Rafael

On Tue, Jan 14, 2014 at 9:05 AM, Andreas Mueller  wrote:

> Hi,
>
> I've looked through the proton-c Messenger API docs but can't find it. Can
> someone please tell me:
>
> 1) How to disable SASL at all to start direct with AMQP?
>
> 2) How to force a specific SASL mechanism like PLAIN?
>
> Thanks,
> Andreas
>
> --
> Andreas Mueller
> IIT Software GmbH, Bremen/Germany
> http://www.swiftmq.com
>
>
>
>
>
> IIT Software GmbH
> Fahrenheitstr. 13, D28359 Bremen, Germany
> Tel: +49 421 330 46 088, Fax: +49 421 330 46 090
> Amtsgericht Bremen, HRB 18624, Geschaeftsfuehrer: Andreas Mueller
> Steuernummer: 460/118/06404 VAT: DE199945912
>
>


Re: SASL / proton-c questions

2014-01-14 Thread Rafael Schloming
On Tue, Jan 14, 2014 at 9:44 AM, Andreas Mueller  wrote:

> Disabling SASL should be an option in proton to connect to simple services
> that do not provide SASL or just to skip this additional step if SASL is
> not required.
>

Agreed, feel free to file a JIRA if we don't already have one.


>
> The other problem was already reported from Sergey.
>
> I receive a SASL protocol header and a SASL Init message with mechanism
> PLAIN from proton.
>
> [ProtocolHeader, name=AMQP, id=3, major=1, minor=0, revision=0]
> [SaslInit mechanism=PLAIN, initialResponse=006775657374006775657374]
>
> Then occasionally this error is thrown:
>
> ./send -a amqp://guest:guest@127.0.0.1:5672/testqueue Test
> [0xbe1030:0] ERROR[-2] SASL header mismatch:
> '\x00\x00\x00\x17\x02\x01\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x00D\xc0\x03\x01P\x00AMQP\x03\x01\x00\x00\x00\x00\x00=\x02\x01\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x00@
> \xc0)\x01\xe0&\x04\xa3\x05PLAIN\x09ANONYMOUS\x0aDIGEST-MD5\x08CRAM-MD5'
>
> My first thought was that this might be caused by using an unsupported
> mechanisnm but this is not the case as PLAIN is used. I'm getting this with
> ANONYMOUS too.
>

The bytes that follow the "...header mismatch: ..." part of the error
message are what proton thinks it is reading from the wire, and it looks to
me like there is garbage in front of the SASL header. The SASL header looks
to start where you see the sequence "...AMQP\x03\x01...". This either means
that proton's internal buffers are getting messed up somehow, or there
actually is garbage on the wire. Do you have any way of determining which
is the case, e.g. running the connection through tcpdump or something like
that and verifying that the first 8 bytes of data are in fact
"AMQP\x03\x01\x00\x00"? This would give us a better idea of where to look
for the problem.

--Rafael


Re: [VOTE]: Release Proton 0.6 RC3 as 0.6 final

2014-01-14 Thread Rafael Schloming
Adding my +1 for the record.

--Rafael

On Thu, Jan 2, 2014 at 2:17 PM, Rafael Schloming  wrote:

> Hi Everyone,
>
> It looks like there haven't been any major issues reported so far with 0.6
> RC3, so I guess it's about time to call for a formal vote.
>
> Source is here:
>
> http://people.apache.org/~rhs/qpid-proton-0.6rc3/
>
> Java binaries are here:
>
> https://repository.apache.org/content/repositories/orgapacheqpid-003/
>
> Please peruse/test and register your vote:
>
> [ ] Yes, release 0.6 RC3 as 0.6 final
> [ ] No, 0.6 RC3 has the following issues...
>
> --Rafael
>


[RESULT] [VOTE]: Release Proton 0.6 RC3 as 0.6 final

2014-01-14 Thread Rafael Schloming
The vote carries with 3 binding +1's, 2 non-binding +1's, and no other
votes. I'll post the artifacts shortly.

--Rafael

On Thu, Jan 2, 2014 at 2:17 PM, Rafael Schloming  wrote:

> Hi Everyone,
>
> It looks like there haven't been any major issues reported so far with 0.6
> RC3, so I guess it's about time to call for a formal vote.
>
> Source is here:
>
> http://people.apache.org/~rhs/qpid-proton-0.6rc3/
>
> Java binaries are here:
>
> https://repository.apache.org/content/repositories/orgapacheqpid-003/
>
> Please peruse/test and register your vote:
>
> [ ] Yes, release 0.6 RC3 as 0.6 final
> [ ] No, 0.6 RC3 has the following issues...
>
> --Rafael
>


Re: pn_messenger_send return code

2014-01-14 Thread Rafael Schloming
On Sun, Jan 12, 2014 at 11:59 AM, serega  wrote:

> A related question regarding subscription API. There are several use cases.
> A single subscription to invalid queue on a running broker, the messenger
> is
> blocking
>   pn_messenger_subscribe(messenger,
> "amqp://guest:guest@127.0.0.1:5772/testqueue2");
>   pn_messenger_rewrite(messenger, "amqp://%/*", "$2");
>   pn_messenger_set_incoming_window(messenger, 200);
>   pn_messenger_set_blocking(messenger, 1);
>   for(;;) {
>   {
> int ret = pn_messenger_recv(messenger, -1);
> printf("after receive ret=%d\n", ret);
> ...
> program prints
> LINK ERROR (amqp:not-found) Neither a queue nor a topic is defined with
> that
> name: testqueue2
> and waits.
>
> If I set it non blocking program prints
> after receive ret=-9
> after receive ret=-9
> LINK ERROR (amqp:not-found) Neither a queue nor a topic is defined with
> that
> name: testqueue2
> after receive ret=-9
> after receive ret=-9
> ...
>
>
> The same code but with valid queue and no messages in it
> pn_messenger_subscribe(messenger,
> "amqp://guest:guest@127.0.0.1:5772/testqueue");
> also returns -9.
> after receive ret=-9
> after receive ret=-9
> ...
>
>
>
> Another use case is two subscriptions, one is valid the second is dead,
> messenger is blocking.
> .
> read: Connection refused
> [0x100200640]:ERROR[-2] SASL header mismatch: ''
>
> send: Broken pipe
> CONNECTION ERROR connection aborted
> ..
>
> There is an error message printed, but I don't see any way to get that
> error
> programmatically in my client code.  I also brought the broker up and sent
> messages to the queue, but the messenger didn't pick them up. I assume
> there
> is no any recovery logic in the messenger itself.  If I bring the second
> broker down the pn_messenger_recv returns immediately with error code
> PN_STATE_ERR (-5).
>
>
> I understand that intent of the messenger is simple API and hiding all
> connection management, but there has to be either recovery logic built in
> or
> a way to communicate all errors to the client so it can take appropriate
> actions. I do test edge cases which aren't frequent, but they do happen.
>

You're right, the subscription API is a little bit overly simplistic at the
moment. In particular there is no way to cancel/access the status of
individual subscriptions, and as you point out there is no internal
recovery logic. The latter would probably be a pretty straightforward patch
to provide if that would be sufficient for your case.

--Rafael


engine API improvements

2014-01-15 Thread Rafael Schloming
Hi Everyone,

In a recent thread I alluded to some improvements I have in mind for the
engine API and promised a separate post on the topic, so here it is.

There are two areas where I'm interested in extending the API. These
extensions are strictly speaking independent of each other, but at the same
time somewhat complimentary in their utility.

The first area would be adding a formal Container class to the engine API.
If you are familiar with the AMQP specification then you might have noticed
that while the engine API models connections, sessions, and links pretty
faithfully to how they are defined in the specification, one notable
difference is that the engine API doesn't explicitly model the concept of
an AMQP Container. I believe adding such a class to the engine API would
allow us to refactor some of the more generally useful stuff that Messenger
does into the engine proper, while leaving the less desirable stuff (such
as blocking behaviour and hard coded driver) in messenger proper.

The second area is providing an event oriented interface into the engine.
One of the original ideas behind the engine design is that you don't need
event callbacks because the engine itself is simply a state machine, and
changes in its state are only ever triggered by calling into the engine
API. This means that whenever you make such a call you can simply query to
see if anything relevant has changed. While this is strictly speaking true,
the whole "query to see what has changed part" is responsible for a fair
amount of biolerplate code in using the engine and makes for a steeper
learning curve than is desirable. So the idea behind an event oriented
interface is to provide a uniform way of discovering what has changed that
has a strong analog to the event callback systems most people are used to,
but at the same time doesn't go quite so far as to use callbacks.

The details of this event oriented interface are somewhat TBD, but my
thinking right now on this is to introduce a Collector object that can be
registered with an arbitrary number of Containers and/or Connections. The
Collector will work in concert with an Event class.

The Event class will contain optional references to at most one of each of
the following: Container, Connection, Session, Link, Delivery. The event
class will also have type, i.e. an enum field indicating what of interest
may have occurred.

When something of interest happens to a Container, Connection, Session,
Link, or Delivery, the library will check to see if there is a Collector
registered, and if so create/initialize an Event that points to the
relevant engine objects and indicates the appropriate type. Note that no
callback occurs here, the Collector just holds the event until the
application comes looking for it.

The Collector of course has an API for accessing and consuming any events.

There are a couple of important principles to note. As stated above there
are no callbacks in this interface; this makes it easy to swig into other
languages, and it should be only a few lines of code to write a
dispatch loop on top of the swigged version of this API, i.e.:

  for event in collector:
dispatch(event)
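
To make that concrete, here is a purely hypothetical sketch of what such a
loop might look like from C under this proposal (none of these names exist in
the codebase today; they are placeholders for the Collector/Event idea above):

pn_collector_t *collector = pn_collector();
pn_connection_collect(connection, collector);  /* register for events */

pn_event_t *event;
while ((event = pn_collector_peek(collector))) {
  switch (pn_event_type(event)) {
  case PN_DELIVERY:
    /* inspect the delivery referenced by the event, update credit, settle */
    break;
  default:
    break;
  }
  pn_collector_pop(collector);                 /* consume the event */
}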

Another important point is that Events are simple transient value objects.
They don't carry any significant state in and of themselves, rather they
are simply pointers to part of the existing engine state machine that may
be of interest. The protocol state is still entirely encapsulated in the
engine objects proper, not in the Events.

It's also worth noting that I believe both of these extensions are purely
additive, i.e. would not require changing any existing API, simply adding
new API.

Hopefully this type of event oriented interface will provide something that
people will have a bit of an easier time digesting/using, but even so, I
think it also provides some interesting avenues of future exploration. I
believe it would be possible to define a concept of
interceptors/filter/whatever that could be configured into a Collector. We
could use this to provide reusable/preconfigured behaviours, e.g. if you
wanted to use the engine API but didn't want to have to worry about credit,
you could install an interceptor that would handle all the credit events
for you. Ultimately when combining this interface with the Container
concept and introducing Container level events, I believe we could even
provide reusable chunks of logic that could handle all of the
connection/session/link establishment stuff and thereby provide a single
integrated API that would offer the full range of trade-offs between
Messenger's simplicity and engine's flexibility.

I'd like to flesh this idea out a bit more before diving in. As a first
step I'm working on documenting the existing proton design more thoroughly.
You can see what I have so far here, although it is very much in progress
still:

  https://cwiki.apache.org/confluence/display/qpid/Proton+Architecture

Once the above document is more complete I will p

Re: [RESULT] [VOTE]: Release Proton 0.6 RC3 as 0.6 final

2014-01-16 Thread Rafael Schloming
The release artifacts for Proton 0.6 are now available from the web site:
http://qpid.apache.org/releases/qpid-proton-0.6/index.html

--Rafael


On Tue, Jan 14, 2014 at 10:20 AM, Rafael Schloming  wrote:

> The vote carries with 3 binding +1's, 2 non-binding +1's, and no other
> votes. I'll post the artifacts shortly.
>
> --Rafael
>
> On Thu, Jan 2, 2014 at 2:17 PM, Rafael Schloming  wrote:
>
>> Hi Everyone,
>>
>> It looks like there haven't been any major issues reported so far with
>> 0.6 RC3, so I guess it's about time to call for a formal vote.
>>
>> Source is here:
>>
>> http://people.apache.org/~rhs/qpid-proton-0.6rc3/
>>
>> Java binaries are here:
>>
>> https://repository.apache.org/content/repositories/orgapacheqpid-003/
>>
>> Please peruse/test and register your vote:
>>
>> [ ] Yes, release 0.6 RC3 as 0.6 final
>> [ ] No, 0.6 RC3 has the following issues...
>>
>> --Rafael
>>
>
>


Re: call pn_messenger_accept from another thread

2014-01-17 Thread Rafael Schloming
You would need a mutex protecting the messenger object in order to safely
call pn_messenger_accept from another thread. You can do it, but it would
likely run into complications since the messenger thread would need to hold
the mutex while blocking.

I'm guessing if you are processing messages in a different thread then you
must have some thread-safe strategy for passing message data from the
messenger thread over to the processing thread, i.e. a thread safe queue of
messages to be processed. I would think the simplest thing to do would be
to simply set up another such queue in the reverse direction to pass
trackers from the processing thread back to the messenger thread and call
pn_messenger_accept from there.
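
A minimal sketch of that second queue, assuming pthreads and a fixed-size ring
buffer (all names here are made up for illustration): the processing thread
only ever calls queue_push with the tracker it has finished with, and the
messenger thread drains the queue and calls pn_messenger_accept itself.

#include <pthread.h>
#include <proton/messenger.h>

#define QSIZE 128

typedef struct {
  pn_tracker_t items[QSIZE];
  int head, tail, count;
  pthread_mutex_t lock;
} tracker_queue_t;

void queue_push(tracker_queue_t *q, pn_tracker_t t) {
  pthread_mutex_lock(&q->lock);
  if (q->count < QSIZE) {
    q->items[q->tail] = t;
    q->tail = (q->tail + 1) % QSIZE;
    q->count++;
  }
  pthread_mutex_unlock(&q->lock);
}

int queue_pop(tracker_queue_t *q, pn_tracker_t *t) {
  int ok = 0;
  pthread_mutex_lock(&q->lock);
  if (q->count > 0) {
    *t = q->items[q->head];
    q->head = (q->head + 1) % QSIZE;
    q->count--;
    ok = 1;
  }
  pthread_mutex_unlock(&q->lock);
  return ok;
}

/* called periodically from the messenger (receiving) thread */
void drain_accepts(pn_messenger_t *messenger, tracker_queue_t *done) {
  pn_tracker_t t;
  while (queue_pop(done, &t)) {
    pn_messenger_accept(messenger, t, 0);
  }
}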

--Rafael


On Fri, Jan 17, 2014 at 10:00 AM, serega  wrote:

> There are several discussions on this forum regarding multi-threading, and
> as
> I understand messenger isn't thread safe.  However, my use case is simple.
> A single thread receives messages, but message processing is handed over to
> another thread
>
> Is it possible to call pn_messenger_accept(messenger, tracker, 0) from
> another thread?
>
> - Sergey
>
>
>
> --
> View this message in context:
> http://qpid.2158936.n2.nabble.com/call-pn-messenger-accept-from-another-thread-tp7602918.html
> Sent from the Apache Qpid Proton mailing list archive at Nabble.com.
>


Re: Finding available channels on a pn_connection_t

2014-01-23 Thread Rafael Schloming
Hi Alan,

I think you're correct. I don't think there is any attempt to limit the
number of channels used. I'll add some API to make the max number of
channels available.

--Rafael

On Wed, Jan 22, 2014 at 4:38 PM, Alan Conway  wrote:

> I'm trying to solve a problem for HA on the Qpid C++ broker. I need to
> figure out, for a given connection, will an attempt to open another channel
> at the other end exceed the channel-max for this connection?
>
> To do that I figure I need to know the number of currently open channels
> (easy) and the negotiated channel-max for the connection. How can I find
> that?
>
> Based on a novice reading of transport.c, it looks like the channel-max
> field of the open performative is ignored. Am I reading this wrong? If not,
> what is proton assuming - that it can always open 64k channels on any
> connection? This would violate the limit imposed by the Qpid C++ broker and
> client of 32k channels (I don't know why they impose that limit but they
> do.)
>
> Any pointers much appreciated!
> Alan.
>


Re: Protocol detection

2014-01-23 Thread Rafael Schloming
On Thu, Jan 23, 2014 at 12:07 PM, Barak Azulay  wrote:

> Thanks ken,
> This looks very helpful - we will try it.
>
> In addition - who can we talk to about the java binding for qpid-proton
> (AMQP 1.0),
>

What would you like to know?

--Rafael


heads up on build changes

2014-01-29 Thread Rafael Schloming
Hi,

I just wanted to give everyone a heads up on a change I just committed.
This may end up impacting downstream java users as the artifact names have
changed a bit.

You can see PROTON-499 for some of the gory details, but long story short I
ended up moving around some code and flattening out the structure a little
bit. Part of the changes were necessary to fix the aforementioned build
issue, and some of the changes are simply me taking advantage of the
opportunity to do a little cleanup.

I believe the impact on artifact names is that if you were previously
depending on proton-j-impl and/or proton-api, you should now just depend on
proton-j.

Please let me know if you run into any issues with this.

Thanks,

--Rafael


Re: [RESULT] [VOTE]: Release Proton 0.6 RC3 as 0.6 final

2014-01-29 Thread Rafael Schloming
Yup, thanks for the reminder, they should be available shortly.

--Rafael


On Wed, Jan 29, 2014 at 5:47 PM, Robbie Gemmell wrote:

> The Java binaries for 0.6 seem to be MIA. Still sitting in a staging repo
> somewhere?
>
> Robbie
>
> On 16 January 2014 10:42, Rafael Schloming  wrote:
>
> > The release artifacts for Proton 0.6 are now available from the web site:
> > http://qpid.apache.org/releases/qpid-proton-0.6/index.html
> >
> > --Rafael
> >
> >
> > On Tue, Jan 14, 2014 at 10:20 AM, Rafael Schloming 
> > wrote:
> >
> > > The vote carries with 3 binding +1's, 2 non-binding +1's, and no other
> > > votes. I'll post the artifacts shortly.
> > >
> > > --Rafael
> > >
> > > On Thu, Jan 2, 2014 at 2:17 PM, Rafael Schloming 
> > wrote:
> > >
> > >> Hi Everyone,
> > >>
> > >> It looks like there haven't been any major issues reported so far with
> > >> 0.6 RC3, so I guess it's about time to call for a formal vote.
> > >>
> > >> Source is here:
> > >>
> > >> http://people.apache.org/~rhs/qpid-proton-0.6rc3/
> > >>
> > >> Java binaries are here:
> > >>
> > >>
> > https://repository.apache.org/content/repositories/orgapacheqpid-003/
> > >>
> > >> Please peruse/test and register your vote:
> > >>
> > >> [ ] Yes, release 0.6 RC3 as 0.6 final
> > >> [ ] No, 0.6 RC3 has the following issues...
> > >>
> > >> --Rafael
> > >>
> > >
> > >
> >
>


Re: Proton port to OpenVMS

2014-02-07 Thread Rafael Schloming
Hi,

You could start by filing a JIRA and attaching the changes as a patch:
https://issues.apache.org/jira/browse/PROTON

--Rafael


On Fri, Feb 7, 2014 at 5:34 AM, Tomáš Šoltys  wrote:

> Hi,
>
> I have made some changes to proton-c so it can be used on OpenVMS.
> http://en.wikipedia.org/wiki/Openvms
>
> I would like to make these changes public and to be included in next
> release.
>
> How should I proceed?
>
> Thanks,
>
> Tomáš Šoltys
>


Re: Heartbeat

2014-02-07 Thread Rafael Schloming
There is already a single per messenger timeout for blocking operations. It
should be a pretty trivial patch to set that same value as the idle timeout
on each connection.
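
As a rough sketch of the idea (hypothetical helper; Messenger doesn't
currently expose the transport, so the wiring below is what such a patch
would do internally):

  #include <proton/messenger.h>
  #include <proton/engine.h>

  /* reuse the messenger's blocking timeout as the per-connection idle
     timeout (heartbeat); zero means "no timeout" */
  void configure_timeouts(pn_messenger_t *m, pn_transport_t *t, int ms) {
    pn_messenger_set_timeout(m, ms);                    /* blocking operations */
    pn_transport_set_idle_timeout(t, (pn_millis_t) ms); /* AMQP idle timeout */
  }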

--Rafael


On Fri, Feb 7, 2014 at 8:26 AM, Ken Giusti  wrote:

> Thanks Tomáš - I agree with a single per-messenger timeout approach.
>
>
> - Original Message -
> > From: "Tomáš Šoltys" 
> > To: proton@qpid.apache.org
> > Sent: Friday, February 7, 2014 3:04:34 AM
> > Subject: Re: Heartbeat
> >
> > Hi Ken,
> >
> > I have just opened a new JIRA as you suggested.
> > https://issues.apache.org/jira/browse/PROTON-512
> >
> > My idea is that there would be one global idle timeout for whole
> messenger
> > and this timeout will be used for all connections.
> >
> > Regards,
> > Tomas
> >
> >
> > 2013-11-11 Ken Giusti :
> >
> > > Hi Tomas,
> > >
> > > - Original Message -
> > > > From: "Tomáš Šoltys" 
> > > > To: proton@qpid.apache.org
> > > > Sent: Monday, November 11, 2013 6:43:06 AM
> > > > Subject: Re: Heartbeat
> > > >
> > > > Hi Ken,
> > > >
> > > > thanks for your answer.
> > > > And is there a way how to set it for pn_messenger_t
> > > >
> > >
> > > No - not at present.  Messenger doesn't give the application access to
> the
> > > transport since Messenger hides all that connection-related stuff.
> > >
> > > I'd recommend opening a JIRA requesting this feature so we won't forget
> > > about it.
> > >
> > > https://issues.apache.org/jira/browse/PROTON
> > >
> > > thanks.
> > >
> > > -K
> > >
> > >
> > >
> > > > Thanks,
> > > > Tomas
> > > >
> > > >
> > > > 2013/11/8 Ken Giusti 
> > > >
> > > > > Hi Tomas,
> > > > >
> > > > > The C implementation of proton allows you to set an idle time out
> for a
> > > > > connection as described in the AMQP 1.0 spec.  This value is used
> to
> > > > > generate "null" frames (ie. frames with no bodies) on idle
> connections
> > > as
> > > > > to not expire the timeout.   The connection will be dropped if the
> > > local
> > > > > side does not receive any traffic for it's configured timeout
> period.
> > > > >
> > > > > The C api is available on the transport object - see engine.h:
> > > > >
> > > > > /* timeout of zero means "no timeout" */
> > > > > PN_EXTERN pn_millis_t pn_transport_get_idle_timeout(pn_transport_t
> > > > > *transport);
> > > > > PN_EXTERN void pn_transport_set_idle_timeout(pn_transport_t
> *transport,
> > > > > pn_millis_t timeout);
> > > > > PN_EXTERN pn_millis_t
> > > pn_transport_get_remote_idle_timeout(pn_transport_t
> > > > > *transport);
> > > > >
> > > > >
> > > > > I don't think the java implementation has this - yet.  See
> > > > > https://issues.apache.org/jira/browse/PROTON-111
> > > > >
> > > > >
> > > > >
> > > > > - Original Message -
> > > > > > From: "Tomáš Šoltys" 
> > > > > > To: proton@qpid.apache.org
> > > > > > Sent: Friday, November 8, 2013 6:06:15 AM
> > > > > > Subject: Heartbeat
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I am looking for a way how to manually specify heartbeat for a
> > > > > connection.
> > > > > >
> > > > > > Is there a way how to do it using proton?
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Tomas Soltys
> > > > > >
> > > > >
> > > > > --
> > > > > -K
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Tomáš Šoltys
> > > > tomas.sol...@gmail.com
> > > > http://www.range-software.com
> > > > (+420) 776-843-663
> > > >
> > >
> > > --
> > > -K
> > >
> >
> >
> >
> > --
> > Tomáš Šoltys
> > tomas.sol...@gmail.com
> > http://www.range-software.com
> > (+420) 776-843-663
> >
>
> --
> -K
>


Re: Heartbeat

2014-02-07 Thread Rafael Schloming
No, Messenger.setBlocking(True/False) is independent of
Messenger.setTimeout(...).

--Rafael


On Fri, Feb 7, 2014 at 10:29 AM, Ken Giusti  wrote:

> Would that force an app to use blocking operations if it desires idle
> timeout?  IOW, should the need for idle monitoring determine the choice of
> blocking vs non-blocking?
>
> I probably don't fully grasp the implications
>
> -K
>
>
> - Original Message -
> > From: "Rafael Schloming" 
> > To: proton@qpid.apache.org
> > Sent: Friday, February 7, 2014 9:23:20 AM
> > Subject: Re: Heartbeat
> >
> > There is already a single per messenger timeout for blocking operations.
> It
> > should be a pretty trivial patch to set that same value as the idle
> timeout
> > on each connection.
> >
> > --Rafael
> >
> >
> > On Fri, Feb 7, 2014 at 8:26 AM, Ken Giusti  wrote:
> >
> > > Thanks Tomáš - I agree with a single per-messenger timeout approach.
> > >
> > >
> > > - Original Message -
> > > > From: "Tomáš Šoltys" 
> > > > To: proton@qpid.apache.org
> > > > Sent: Friday, February 7, 2014 3:04:34 AM
> > > > Subject: Re: Heartbeat
> > > >
> > > > Hi Ken,
> > > >
> > > > I have just opened a new JIRA as you suggested.
> > > > https://issues.apache.org/jira/browse/PROTON-512
> > > >
> > > > My idea is that there would be one global idle timeout for whole
> > > messenger
> > > > and this timeout will be used for all connections.
> > > >
> > > > Regards,
> > > > Tomas
> > > >
> > > >
> > > > 2013-11-11 Ken Giusti :
> > > >
> > > > > Hi Tomas,
> > > > >
> > > > > - Original Message -
> > > > > > From: "Tomáš Šoltys" 
> > > > > > To: proton@qpid.apache.org
> > > > > > Sent: Monday, November 11, 2013 6:43:06 AM
> > > > > > Subject: Re: Heartbeat
> > > > > >
> > > > > > Hi Ken,
> > > > > >
> > > > > > thanks for your answer.
> > > > > > And is there a way how to set it for pn_messenger_t
> > > > > >
> > > > >
> > > > > No - not at present.  Messenger doesn't give the application
> access to
> > > the
> > > > > transport since Messenger hides all that connection-related stuff.
> > > > >
> > > > > I'd recommend opening a JIRA requesting this feature so we won't
> forget
> > > > > about it.
> > > > >
> > > > > https://issues.apache.org/jira/browse/PROTON
> > > > >
> > > > > thanks.
> > > > >
> > > > > -K
> > > > >
> > > > >
> > > > >
> > > > > > Thanks,
> > > > > > Tomas
> > > > > >
> > > > > >
> > > > > > 2013/11/8 Ken Giusti 
> > > > > >
> > > > > > > Hi Tomas,
> > > > > > >
> > > > > > > The C implementation of proton allows you to set an idle time
> out
> > > for a
> > > > > > > connection as described in the AMQP 1.0 spec.  This value is
> used
> > > to
> > > > > > > generate "null" frames (ie. frames with no bodies) on idle
> > > connections
> > > > > as
> > > > > > > to not expire the timeout.   The connection will be dropped if
> the
> > > > > local
> > > > > > > side does not receive any traffic for it's configured timeout
> > > period.
> > > > > > >
> > > > > > > The C api is available on the transport object - see engine.h:
> > > > > > >
> > > > > > > /* timeout of zero means "no timeout" */
> > > > > > > PN_EXTERN pn_millis_t
> pn_transport_get_idle_timeout(pn_transport_t
> > > > > > > *transport);
> > > > > > > PN_EXTERN void pn_transport_set_idle_timeout(pn_transport_t
> > > *transport,
> > > > > > > pn_millis_t timeout);
> > > > > > > PN_EXTERN pn_millis_t
> > > > > pn_transport_get_remote_idle_timeout(pn_transport_t
> > > > > > > *transport);
> > > > > > >
> > > > > > >
> > > > > > > I don't think the java implementation has this - yet.  See
> > > > > > > https://issues.apache.org/jira/browse/PROTON-111
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > - Original Message -
> > > > > > > > From: "Tomáš Šoltys" 
> > > > > > > > To: proton@qpid.apache.org
> > > > > > > > Sent: Friday, November 8, 2013 6:06:15 AM
> > > > > > > > Subject: Heartbeat
> > > > > > > >
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > I am looking for a way how to manually specify heartbeat for
> a
> > > > > > > connection.
> > > > > > > >
> > > > > > > > Is there a way how to do it using proton?
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > >
> > > > > > > > Tomas Soltys
> > > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > -K
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Tomáš Šoltys
> > > > > > tomas.sol...@gmail.com
> > > > > > http://www.range-software.com
> > > > > > (+420) 776-843-663
> > > > > >
> > > > >
> > > > > --
> > > > > -K
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Tomáš Šoltys
> > > > tomas.sol...@gmail.com
> > > > http://www.range-software.com
> > > > (+420) 776-843-663
> > > >
> > >
> > > --
> > > -K
> > >
> >
>
> --
> -K
>


Re: Source/Target Question

2014-02-25 Thread Rafael Schloming
On Mon, Feb 24, 2014 at 8:37 AM, Ken Giusti  wrote:

> +1 this question - I've never quite understood Messenger's current
> behavior w.r.t. setting the source and target to the same value.
>

I don't think there is a good reason, I think it's just an accident of
history.


>
> As far as interpreting the behavior of a subscription, and the interaction
> with the API, I would think the most 'natural' approach would be:
>
> sub = M.subscribe("amqp://~0.0.0.0/name") would imply:
>
> > > "I am a service called 'name'
> > > and I wish to begin receiving messages"
>
> and
>
> sub = M.subscribe("amqp://0.0.0.0/name") would imply:
>
> > > "I wish to receive messages
> > > from a node called 'name'".
>
> As the ~ causes Messenger to open a TCP listener on the address, and the
> absence of ~ causes Messenger to initiate a connection attempt to the
> address.
>
> In the two cases, I agree with Ted's point that only the Target or Source
> should be set.  My opinion: set them based on the subscription's mode:
> Target should be set in the case of ~, Source in the case of !~.
>
> Thoughts?
>

When a ~ is used, messenger is not the one establishing the link; the remote
peer establishes the link, so it's the remote peer that ends up supplying the
source and target. Currently messenger just copies whatever the remote peer
supplies and proceeds to pass through any and all incoming messages.

As for the original question, in the general case we need to think about
local terminus/remote terminus rather than source/target as messenger has
similar behaviour for outgoing links as well as incoming links. This is
even true for ~ URLs since you can send messages to them and they will get
held in the messenger's message store until someone connects and grabs them.

Given that in many respects messenger will behave like an embedded broker
rather than just a simple client, it would in some sense be consistent for
messenger to set the local terminus to the name of the messenger unless
otherwise overridden. This is consistent with what a broker would do: when a
broker sends messages from a queue, the local terminus is the name of the
queue; when a broker receives messages into a queue, the local terminus is
still the name of the queue.

Regardless of the default though, I think we probably need a mechanism for
explicitly controlling both ends, e.g. something like
M.subscribe("source->target") for when you want a non default target.

--Rafael


Proton 0.7 RC1

2014-03-14 Thread Rafael Schloming
Hi Everyone,

There's been a bunch of key improvements/fixes since Proton 0.6, so it's
probably about time for a new release. I've just posted the first RC in the
usual places. Please check it out and give a shout if you run into any
issues.

Source artifacts:

  http://people.apache.org/~rhs/qpid-proton-0.7rc1/

Java binaries:

  https://repository.apache.org/content/repositories/orgapacheqpid-1001/

--Rafael


Re: ConnectionImpl::getWorkSequence()

2014-03-18 Thread Rafael Schloming
I doubt there is a good reason, however I suspect the new events API would
probably be an easier alternative to getWorkHead() and friends.
Unfortunately there aren't docs for the Java version of the API yet, but it
shouldn't be difficult to figure out how to use it from the C API docs.

--Rafael


On Mon, Mar 17, 2014 at 12:44 PM, Clebert Suconic wrote:

> Why getWorkSequence is not exposed through Connection?
>
>
> Forcing me to use getWorkHead() would make me re-implement the exact same
> Iterator that's being implemented at ConnectionImpl(); Why not just expose
> it properly? Iterators are common practice in Java anyways.


Re: ConnectionImpl::getWorkSequence()

2014-03-18 Thread Rafael Schloming
On Tue, Mar 18, 2014 at 11:43 AM, Clebert Suconic wrote:

> On Mar 18, 2014, at 11:25 AM, Rafael Schloming  wrote:
>
> > I doubt there is a good reason, however I suspect the new events API
> would
> > probably be an easier alternative to getWorkHead() and friends.
> > Unfortunately there aren't docs for the Java version of the API yet, but
> it
> > shouldn't be difficult to figure out how to use it from the C API docs.
> >
>
> I know nothing about the new events API...
> but I think it would be a mistake to have java being an exact mirror of
> the C API. Things like Iterators are pretty common in Java.
>

I didn't say it was an exact mirror, just that it's close enough that you
should be able to figure it out from the C documentation. I would think
that would be a good thing in general.

As for iterators, in this particular case it's not really C vs. Java; it's
the fact that the link/delivery data structure is a linked list, and Java's
collection API doesn't really do linked lists. (I'm quite familiar with what
Java does offer in terms of java.util.LinkedList, but I don't really count
that, as it is entirely missing any sort of node abstraction.) Of
course that doesn't preclude offering iterators, however iterators will
never be able to fully express what the underlying data structure is
attempting to represent. That said I've no objection to making the iterator
available as a convenience, although I'd probably call it getWorkIterator
rather than getWorkSequence.


>
> Right now my implementation is forced to cast to ConnectionImpl what
> breaks the purpose of the interface. Can you guys move it?
>

I'm happy to accept a patch for it, although I'd encourage you to check out
the events stuff in any case.

--Rafael


Re: ConnectionImpl::getWorkSequence()

2014-03-18 Thread Rafael Schloming
On Tue, Mar 18, 2014 at 11:53 AM, Clebert Suconic wrote:

> >>
> >>
> >> Unfortunately there aren't docs for the Java version of the API yet,
> but it
> >> shouldn't be difficult to figure out how to use it from the C API docs.
> >>
>
> BTW I don't need the javadoc.. just point me what class are you talking
> about and I will figure out
>
>
The relevant classes would be org.apache.qpid.proton.engine.Collector, and
org.apache.qpid.proton.engine.Event.

You pretty much just create a Collector, register it with a connection by
calling Connection.collect(...) and then you can use the Collector API to
access any events that have occurred.
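
In rough outline (a sketch of the equivalent C calls, which the Java classes
mirror closely; error handling omitted and the header layout may differ
slightly between versions):

  #include <proton/engine.h>
  #include <proton/event.h>

  void drain_events(pn_connection_t *conn) {
    pn_collector_t *collector = pn_collector();
    pn_connection_collect(conn, collector);   /* register the collector */

    /* ... drive the connection/transport as usual ... */

    pn_event_t *event;
    while ((event = pn_collector_peek(collector)) != NULL) {
      switch (pn_event_type(event)) {
        /* dispatch on the event type here */
        default:
          break;
      }
      pn_collector_pop(collector);
    }

    /* in real code the collector would live as long as the connection */
    pn_collector_free(collector);
  }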

--Rafael


Re: ConnectionImpl::getWorkSequence()

2014-03-18 Thread Rafael Schloming
On Tue, Mar 18, 2014 at 2:23 PM, Clebert Suconic wrote:

> >
> >>
> >> Right now my implementation is forced to cast to ConnectionImpl what
> >> breaks the purpose of the interface. Can you guys move it?
> >>
> >
> > I'm happy to accept a patch for it, although I'd encourage you to check
> out
> > the events stuff in any case.
> >
> > --Rafael
>
>
> I sure will take a look on the events stuff..
>
>
> I thought you were objecting Iterators just because of the C / Java API
> compatibility.
>

Ah, no, not as such. If you wanted to limit the Java side to only using
iterators that would break the python tests that run against both the C and
Java codebase, but providing iterators as a convenience is certainly not an
issue.


>
>
> I would know how to provide a patch if it was git or github. I'm a bit
> rusty on SVN.. would you be ok if I provided you a git branch with a
> commit? I'm not a proton committer yet (although I'm planning to get down
> to it a lot more).


A pointer to a git commit would be excellent.

--Rafael


Re: Proton PHP message receiving code encountering fatal error

2014-03-25 Thread Rafael Schloming
This looks like a typo in proton.php. Unfortunately I can't think of a good
workaround other than changing the '.' to a '->' on line 283 of proton.php.
If you file a JIRA I will make sure a fix goes into the next RC for 0.7.
FWIW, you won't hit the codepath unless the message happens to have
delivery annotations, so another way to avoid it would be to make sure your
messages don't have delivery annotations, but I presume that they do
otherwise you wouldn't have hit the issue. ;-) I'll follow up here if I
think of any better workaround for you.

--Rafael


On Mon, Mar 24, 2014 at 4:14 PM, Tom McDonald wrote:

> I'm using QPid (qpid-cpp v0.26) with Proton v0.6 with bindings for php
> (PHP 5.3.10-1ubuntu3.10 with Suhosin-Patch (cli) (built: Feb 28 2014
> 23:14:25) and perl (This is perl 5, version 14, subversion 2 (v5.14.2)
> built for x86_64-linux-gnu-thread-multi).
>
> When I use a simple send client in php, a simple receive client in php
> correctly receives the message. When I use a simple send client in Perl,
> the receive client errors out with the following:
>
> PHP Fatal error:  Call to undefined function get_object() in
> /usr/share/php/proton.php on line 283
>
> Attached are the three source files (send.php, recv.php, send.pl).
>
> OS: Linux 3.2.0-38-generic #61-Ubuntu SMP Tue Feb 19 12:18:21 UTC 2013
> x86_64 x86_64 x86_64 GNU/Linux
>
>
>
>


Re: Race condition in the TransportImpl in Proton-J

2014-03-25 Thread Rafael Schloming
On Mon, Mar 24, 2014 at 9:37 PM, Rajith Attapattu wrote:

> I encountered an issue in Proton J which I believe is a race condition.
> If the input stream is read and passed into the transport, before the
> sasl() method of TransportImpl.java is called then the _inputProcessor
> defaults to FrameParser instead of being wrapped by the SASL frame parser.
> This causes a Frame Parsing error as it expects '0' as per the regular
> header but instead finds '3' which is the correct format if the process is
> the SASL frame parser.
>
> We should either test the incoming header and determine the right
> inputProcessor
> OR
> clearly document that the transport needs to be ready (i.e the sasl()
> method has to be called) before any incoming data is fed to the transport.
>

I'd say we should do the latter. In fact we should probably throw an
exception if you attempt to configure a sasl layer after input has occurred
since there is never anything sensible we can do if you try to configure
the sasl layer "midstream". It all needs to be set up prior to actually
processing any input bytes.
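
For reference, the ordering requirement looks roughly like this in the C API
(the Java TransportImpl is analogous; this is a sketch, not the exact
proton-j sequence):

  #include <proton/engine.h>
  #include <proton/sasl.h>

  void setup_transport(pn_connection_t *conn) {
    pn_transport_t *transport = pn_transport();
    pn_sasl_t *sasl = pn_sasl(transport); /* configure the SASL layer first */
    (void) sasl;                          /* ... select mechanisms, etc. ... */
    pn_transport_bind(transport, conn);
    /* only now start feeding network bytes into the transport; doing so
       earlier leaves the plain AMQP frame parser in place */
  }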

--Rafael


please remove your cmake cache

2014-03-27 Thread Rafael Schloming
Hi,

Just a heads up, in order to fix a few install issues I found in 0.7RC1, I
recently committed some changes to the cmake build that included changing a
few previously persistent (cached) variables to non-cached ones. As a
result if you're working with trunk you should probably blow away your
cmake cache and do a clean build just to make sure the old values aren't
hanging around to mess things up.

Apologies for the inconvenience.

--Rafael


Re: svn commit: r1583443 - in /qpid/proton/trunk: README proton-c/CMakeLists.txt proton-c/bindings/CMakeLists.txt

2014-04-01 Thread Rafael Schloming
It's good that we are checking for rubygem dependencies in cmake now, but I
think the checks should probably be mandatory when building the binding is
enabled, rather than the tests simply being omitted. Previously we could take a
source tarball and run 'cmake -DBUILD_[LANG]=ON && make test' and have
confidence that the language binding would be tested.
(relatively) silently omitted when the test dependencies are not present,
it is much easier to get a false positive and think that we've tested
something when we haven't.

--Rafael


On Mon, Mar 31, 2014 at 4:21 PM,  wrote:

> Author: mcpierce
> Date: Mon Mar 31 20:21:28 2014
> New Revision: 1583443
>
> URL: http://svn.apache.org/r1583443
> Log:
> PROTON-550: Add check for Ruby gem dependencies for tests.
>
> If these dependencies are missing then raise a warning message during
> the CMake generation process.
>
> Modified:
> qpid/proton/trunk/README
> qpid/proton/trunk/proton-c/CMakeLists.txt
> qpid/proton/trunk/proton-c/bindings/CMakeLists.txt
>
> Modified: qpid/proton/trunk/README
> URL:
> http://svn.apache.org/viewvc/qpid/proton/trunk/README?rev=1583443&r1=1583442&r2=1583443&view=diff
>
> ==
> --- qpid/proton/trunk/README (original)
> +++ qpid/proton/trunk/README Mon Mar 31 20:21:28 2014
> @@ -236,10 +236,14 @@ Testing
>
>  Additional packages required for testing:
>
> -yum install rubygem-minitest
> +yum install rubygem-minitest rubygem-rspec rubygem-simplecov
> +
> +On non-RPM based systems, you can install them using:
> +
> +gem install minitest rspec simplecov
>
>  To test Proton, use the cmake build and run 'make test'. Note that
> -this will invoke the maven tests as well, so the maven prerequisates
> +this will invoke the maven tests as well, so the maven prerequisites
>  are required in addition to the cmake prerequisites.
>
>  Running Tests
>
> Modified: qpid/proton/trunk/proton-c/CMakeLists.txt
> URL:
> http://svn.apache.org/viewvc/qpid/proton/trunk/proton-c/CMakeLists.txt?rev=1583443&r1=1583442&r2=1583443&view=diff
>
> ==
> --- qpid/proton/trunk/proton-c/CMakeLists.txt (original)
> +++ qpid/proton/trunk/proton-c/CMakeLists.txt Mon Mar 31 20:21:28 2014
> @@ -472,20 +472,25 @@ if (RUBY_EXE)
>set (rb_rubylib "${rb_root}:${rb_src}:${rb_bin}:${rb_bld}:${rb_lib}")
>
># ruby unit tests:  tests/ruby/proton-test
> -  add_test (ruby-unit-test ${PYTHON_EXECUTABLE} ${env_py}
> "PATH=${rb_path}" "RUBYLIB=${rb_rubylib}"
> -"${rb_root}/proton-test")
> +  # only enable the tests if the Ruby gem dependencies were found
> +  if (DEFAULT_RUBY_TESTING)
> +add_test (ruby-unit-test ${PYTHON_EXECUTABLE} ${env_py}
> "PATH=${rb_path}" "RUBYLIB=${rb_rubylib}"
> +  "${rb_root}/proton-test")
>
> -  # ruby spec tests
> -  find_program(RSPEC_EXE rspec)
> -  if (RSPEC_EXE)
> -add_test (NAME ruby-spec-test
> -  WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/bindings/ruby
> -  COMMAND ${PYTHON_EXECUTABLE} ${env_py} "PATH=${rb_path}"
> "RUBYLIB=${rb_rubylib}"
> -  ${RSPEC_EXE})
> +# ruby spec tests
> +find_program(RSPEC_EXE rspec)
> +if (RSPEC_EXE)
> +  add_test (NAME ruby-spec-test
> +WORKING_DIRECTORY
> ${CMAKE_CURRENT_SOURCE_DIR}/bindings/ruby
> +COMMAND ${PYTHON_EXECUTABLE} ${env_py} "PATH=${rb_path}"
> "RUBYLIB=${rb_rubylib}"
> +${RSPEC_EXE})
>
> -  else(RSPEC_EXE)
> -message (STATUS "Cannot find rspec, skipping rspec tests")
> -  endif(RSPEC_EXE)
> +else(RSPEC_EXE)
> +  message (STATUS "Cannot find rspec, skipping rspec tests")
> +endif(RSPEC_EXE)
> +  else (DEFAULT_RUBY_TESTING)
> +message(STATUS "Skipping Ruby tests: missing dependencies")
> +  endif (DEFAULT_RUBY_TESTING)
>  else (RUBY_EXE)
>message (STATUS "Cannot find ruby, skipping ruby tests")
>  endif (RUBY_EXE)
>
> Modified: qpid/proton/trunk/proton-c/bindings/CMakeLists.txt
> URL:
> http://svn.apache.org/viewvc/qpid/proton/trunk/proton-c/bindings/CMakeLists.txt?rev=1583443&r1=1583442&r2=1583443&view=diff
>
> ==
> --- qpid/proton/trunk/proton-c/bindings/CMakeLists.txt (original)
> +++ qpid/proton/trunk/proton-c/bindings/CMakeLists.txt Mon Mar 31 20:21:28
> 2014
> @@ -36,9 +36,34 @@ if (PYTHONLIBS_FOUND)
>  endif (PYTHONLIBS_FOUND)
>
>  # Prerequisites for Ruby:
> +find_program(GEM_EXE "gem")
> +macro(CheckRubyGem varname gemname)
> +  execute_process(COMMAND ${GEM_EXE} list --local ${gemname}
> +OUTPUT_VARIABLE CHECK_OUTPUT)
> +
> +  set (${varname} OFF)
> +
> +  if (CHECK_OUTPUT MATCHES "${gemname}[ ]+\(.*\)")
> +message(STATUS "Found Ruby gem: ${gemname}")
> +set (${varname} ON)
> +  else()
> +message(STATUS "Missing Ruby gem dependency: ${gemname}")
> +set (${varname}

Re: proton's new event API - feedback

2014-04-02 Thread Rafael Schloming
On Wed, Apr 2, 2014 at 9:17 AM, Ken Giusti  wrote:

> Folks,
>
> I've been playing with the new Event/Collector engine API that's been
> introduced in 0.7.  I've found this API to be a nice fit for an
> event-oriented project I've been hacking on for the past few weeks:
> https://github.com/kgiusti/dingus
>
> After working with the new API for awhile, I'd like to propose a change to
> the API as it stands now - I'd like to see a bit more granularity with
> respect to the endpoint state change events.
>
> In the current implementation, the PN_{CONNECTION,SESSION,LINK}_STATE
> events are generated only when the flags for the _remote_ endpoint state
> changes.  I see how this makes sense, as the local flags are changed by the
> application invoking 'open()' or 'close()' - and the application can be
> written to track when these calls are made.  But I'd like to avoid having
> to implement this extra state tracking.
>
> Would it be possible to issue separate events for remote and local
> endpoint state changes?  For example, instead of having just a PN_*_STATE
> event issued when the remote flags change, could the API issue:
>
>  PN_*_LOCAL_EP_STATE when the local flags are modified, and
>  PN_*_REMOTE_EP_STATE when the remote flags have changed?
>

Makes sense to me, although I'd probably drop the _EP_ from the names as we
don't use that elsewhere, e.g. the accessor for state is
pn_connection_state(), not pn_connection_ep_state().


>
> One other minor nit: the documentation describing the PN_LINK_FLOW and
> PN_DELIVERY events needs some clarity.  Specifically:
>
> 1) PN_DELIVERY is generated only when the remote disposition has changed
> (correct me if I'm wrong, but this appears to be the behavior I'm
> observing).  This event is _NOT_ generated when the delivery's writable
> state has changed (see 2). Is the event generated when the delivery becomes
> 'readable'?  I haven't checked that.
>

I don't think it is currently, however it could be in all those cases (or
distinct events could be generated).


>
> 2) PN_LINK_FLOW - the documentation should explain how to handle this type
> of event.  My understanding is that the application should use this event
> to trigger processing of outgoing deliveries that may have become writable.
>  This should be noted, as the event itself does not reference a particular
> delivery (and, as noted above, the PN_DELIVERY event won't be generated
> when it becomes writable).
>
> Thoughts?
>

I've also noted that some sort of PN_*_NEW/FREE events would be handy to
provide a consistent way of doing context setup/teardown in some of my uses.

--Rafael


Proton 0.7 RC2

2014-04-08 Thread Rafael Schloming
Hi Everyone,

I've posted 0.7 RC2 in the usual places. Source can be found here:

  http://people.apache.org/~rhs/qpid-proton-0.7rc2/

Java binaries here:

  https://repository.apache.org/content/repositories/orgapacheqpid-1002/

Note that the install procedure has changed from prior versions and the
README has had some significant updates, so any build/install testing and
README proof reading are very welcome.

Please follow up on this thread with any issues you encounter.

--Rafael


Re: Proton 0.7 RC2

2014-04-09 Thread Rafael Schloming
Thanks, that should be fixed on trunk now. I'll do another RC soon. Please
continue to report any other issues you encounter here.

--Rafael


On Wed, Apr 9, 2014 at 3:33 PM, Bozo Dragojevic  wrote:

> On 8. 04. 14 21:34, Rafael Schloming wrote:
>
>> Hi Everyone,
>>
>> I've posted 0.7 RC2 in the usual places. Source can be found here:
>>
>>http://people.apache.org/~rhs/qpid-proton-0.7rc2/
>>
>> Java binaries here:
>>
>>https://repository.apache.org/content/repositories/orgapacheqpid-1002/
>>
>> Note that the install procedure has changed from prior versions and the
>> README has had some significant updates, so any build/install testing and
>> README proof reading are very welcome.
>>
>> Please follow up on this thread with any issues you encounter.
>>
>> --Rafael
>>
>>  PROTON-559 typo prevents compilation of posix/io.c on OSX
>
> Bozzo
>


Re: Can only access ~24 selectables before blowing up...

2014-04-10 Thread Rafael Schloming
On Wed, Apr 9, 2014 at 3:58 PM, Darryl L. Pierce  wrote:

> In working with passive mode and selectables, I've hit a strange
> reproducible problem.
>
> I have a simple add-on to Messenger that lets me have it push incoming
> messages to an EventMachine channel [1]. If I send messages to the echo
> example app (it's not echoing yet), it will consistently receive around
> 24 messages before dying. I've tried putting delays in between the
> sends. I've sent the messages in batches (tried 100, tried sending in
> units of 10 and 5). Each time it blows up around the 24th message.
>
> The underlying error that comes up is always the same:
>
> ---8<[snip]---
>  *** I received: This is message 22
>  ??? read_array=[9, 7] write_array=[9, 7]
>  +++ fd=9 : rarray=[]
>  +++ fd=7 : rarray=[#]
>  +++ fd=9 : warray=[]
>  +++ fd=7 : warray=[#]
>  ### capacity=16384 pending=0
>  ??? read_array=[9, 7, 12] write_array=[9, 7, 12]
>  +++ fd=9 : rarray=[]
>  +++ fd=7 : rarray=[#]
>  +++ fd=12 : rarray=[#, #]
>  +++ fd=9 : warray=[]
>  +++ fd=7 : warray=[#]
>  +++ fd=12 : warray=[#, #]
>  ### capacity=16384 pending=0
>  ??? read_array=[9, 7, 12] write_array=[9, 7, 12]
>  +++ fd=9 : rarray=[]
>  +++ fd=7 : rarray=[#]
>  +++ fd=12 : rarray=[#, #]
>  +++ fd=9 : warray=[]
>  +++ fd=7 : warray=[#]
>  +++ fd=12 : warray=[#, #]
> recv: Bad file descriptor
> [0x7f88b80700c0]:ERROR[-2] AMQP header mismatch: '' (connection aborted)
>
> CONNECTION ERROR connection aborted (remote)
>  !!! deleting fd=12
>  ??? read_array=[9, 7] write_array=[9, 7]
>  +++ fd=9 : rarray=[]
> /home/mcpierce/Programming/eventful-qpid-proton/lib/eventful-qpid-proton/event_machine.rb:80:in
> `initialize': Bad file descriptor (Errno::EBADF)
> from
> /home/mcpierce/Programming/eventful-qpid-proton/lib/eventful-qpid-proton/event_machine.rb:80:in
> `new'
> from
> /home/mcpierce/Programming/eventful-qpid-proton/lib/eventful-qpid-proton/event_machine.rb:80:in
> `block (3 levels) in start_event_monitor'
> from
> /home/mcpierce/Programming/eventful-qpid-proton/lib/eventful-qpid-proton/event_machine.rb:78:in
> `each'
> from
> /home/mcpierce/Programming/eventful-qpid-proton/lib/eventful-qpid-proton/event_machine.rb:78:in
> `block (2 levels) in start_event_monitor'
> from
> /home/mcpierce/Programming/eventful-qpid-proton/lib/eventful-qpid-proton/event_machine.rb:39:in
> `loop'
> from
> /home/mcpierce/Programming/eventful-qpid-proton/lib/eventful-qpid-proton/event_machine.rb:39:in
> `block in start_event_monitor'
> ruby-mri:
> /home/mcpierce/Programming/Proton/proton-c/src/object/object.c:99:
> pn_free: Assertion `pn_refcount(object) == 1' failed.
> Aborted
> ---8<[snip]---
>
> At the point where the error occurs, Ruby is attempting to wrap the
> fileno returned by the Selectable in an IO object. In this case it is
> almost always the number 9 that's coming up.
>
> Anybody see anything related to this before?
>
> [1] https://github.com/mcpierce/eventful-qpid-proton
>

Based on the assertion it could be a bug in the ruby binding around
selectables, however I don't see any of that code on trunk or in the git
repo you pointed to. Are you using a patched version of proton?

--Rafael


Proton 0.7 RC3

2014-04-10 Thread Rafael Schloming
Hi Everyone,

I've put out RC3 due to the compilation issue found on OSX (PROTON-559).
The source is posted here:

  http://people.apache.org/~rhs/qpid-proton-0.7rc3/

Java binaries are here:

  https://repository.apache.org/content/repositories/orgapacheqpid-1003/

The only delta from RC2 is the fix contributed by Bozo (thanks!). Please
continue to test and post any issues you find here. Given the isolated
nature of the delta, any issues you find in RC2 should also be relevant to
RC3, so feel free to post those here as well.

--Rafael


Re: Flow Control callback...

2014-04-16 Thread Rafael Schloming
Hi Clebert,

First off, a patch is a great way to get started contributing something.
For bigger/more involved patches we often use review board (
reviews.apache.org) to share, discuss and comment on patches in detail, but
for something small like this that's not really necessary.

Now for your patch, I have a couple of thoughts. I'm not sure if you're
aware of it, but the event API is intended to provide the sort of
functionality you're going for here. I don't know offhand if it covers the
exact events you've defined, but I would think it would make sense to look
into extending it to cover your scenario if it doesn't already.

One thing to be aware of is that the event API is attempting to offer the
same functionality while avoiding the sort of deeply nested callbacks that
your patch is adding. The reason for this is some general reentrancy
issues that occur with those kinds of callbacks. This is probably easiest
to explain in terms of some specific scenarios, but still a bit difficult
without a whiteboard, so bear with me. ;-)

Let's say you've embedded the protocol engine in a concurrent application
that is attempting to bridge messages between connections. Now in this
context the engine itself is a shared data structure and needs to be
protected with a mutex whenever it is accessed. Now let's say this bridging
app is attempting to handle a credit increment callback by pulling messages
off of another connection. We now have the following logical stack traces
that are possible:

  Thread 1:
grab mutex for engine A
invoke transport's process()
...
...
engine invokes deeply nested callback
application tries to grab a message from engine B
application grabs mutex for engine B
...

So far so good, but now let's suppose that we have Thread 2 shuffling
messages in the other direction:

  Thread 2:
grab mutex for engine B
invoke transport's process()
...
...
engine invokes deeply nested callback
application tries to grab a message from engine A
application grabs mutex for engine A
...

Now here is when we run into trouble since we now have two different
threads attempting to acquire the same two locks in opposite order and of
course that results in a deadlock. While this is just a specific scenario,
this same general class of deadlock is really easy to run into if you
aren't very careful. Even in a completely single threaded app you can run
into similar issues if A and B end up being the same engine (say you're
replying to a request or something). In such a case the engine ends up
becoming reentrant in unexpected and difficult to anticipate ways.

The way the event API avoids this issue is that, rather than executing a
callback from deep inside engine code, it defers execution of the callback
by posting an event to an event queue. This way the event can be pulled off
of the queue and handled after the engine has completed its processing and
returned itself to a consistent state. This also avoids the mutex issue
since the application will have released the mutex for an engine when it
handles any events for that engine.

So as far as I can tell there are really three (potential) differences in
the way the event API works and what your patch is providing:

  1. The semantics of the events. There may be some state change you are
capturing that isn't exposed through the event API.

  2. The style of dispatch. The event API models event types as enums, so
currently you'd have to write a switch statement rather than overriding a
callback.

  3. The timing of the dispatch. With your style of callbacks the callbacks
are always executed for you. With the event API it's a little bit more
do-it-yourself. Proton doesn't have a main loop, so it can't actually
execute the callbacks for you; instead it gives you back the events so
you can dispatch them yourself from the relative safety of your own event
loop.

Given that it would be nice to have one consistent event model rather than
two disjoint ones, I'd suggest the following approach. I think (1) is
pretty easily addressed by making sure we have event types covering the
state changes you are looking for. As for (2), it should be pretty easy and
natural to add event handler callbacks so we can provide both enum-style
dispatch and callback style dispatch. I think the biggest question is
around (3). If you depend on the deeply nested nature of the dispatch and
the fact that they are invoked from inside the engine, I'd first like to
understand why, and then if you really really really need that style of
execution, I'd suggest we offer that by giving you the option of
implementing your own Collector. The Collector interface is essentially
just an abstraction over an event queue, so if you were able to implement
your own you could dispatch the event directly where it is posted rather
than queueing it for later dispatch.
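
To make the shape of that concrete, here is a sketch of the deferred-dispatch
pattern (names hypothetical, and assuming a single thread both drives and
dispatches each engine):

  #include <pthread.h>
  #include <proton/engine.h>
  #include <proton/event.h>

  void pump_engine(pthread_mutex_t *lock, pn_collector_t *collector) {
    pthread_mutex_lock(lock);
    /* ... feed bytes to / take bytes from the transport here; instead of
       invoking callbacks, the engine queues events on the collector ... */
    pthread_mutex_unlock(lock);

    /* dispatch only after the engine has finished processing and its lock
       has been released, so a handler can safely touch another engine */
    pn_event_t *event;
    while ((event = pn_collector_peek(collector)) != NULL) {
      /* switch on pn_event_type(event) and handle the event here */
      pn_collector_pop(collector);
    }
  }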

--Rafael

On Tue, Apr 15, 2014 at 10:25 PM, Clebert Suconic wrote:

> I have change

Re: new team working with AMQP and Apache Qpid Proton

2014-04-17 Thread Rafael Schloming
Hello and welcome,

First off let me just say this is great news and please don't feel shy
about engaging. The whole point of Proton is to make it easy to speak AMQP
and if there is any way we can make it easier then we welcome the feedback.

I'll do my best to answer your questions below...

On Thu, Apr 17, 2014 at 8:36 AM, Rob Nicholson wrote:

> Hello proton mailing list.
>
> I think that some folks here have noticed that within IBM we have an
> incubator project called MQ Light [1] which is using the AMQP 1.0 wire
> protocol and is making use of Apache Qpid Proton both standalone[1] and in
> our cloud PaaS incubator [2].
> Up until this point we have been largely in listen mode on the mailing
> list but now we will be engaging with the community so we thought it would
> be polite to introduce ourselves, what we are doing and how we are thinking
> of engaging with you.
>
> Currently we use the proton C messenger API in our client and the Java
> messaging API in our "server" which uses code derived from IBM's Websphere
> MQ products.
>
> We have raised some Jiras, we plan to raise more for some specific
> problems we have had, supplying patches which show how we worked around or
> addressed each problem we had.
>
> We also have some of more strategic queries the first of which are:
>
>-
>
>  - We are finding that we need to extend Messenger in order to make it
> capable of doing what we want it to. Is this valid or did you want to keep
> messenger really simple.   Should we just supply patches for these
> extensions also?
>

This is a hard question to answer in general. Simplicity is definitely a
goal, however we want it to solve real use cases as well, and sometimes
those two things work against each other. Understanding your use cases is
probably key here, so patches would be a great start even if they just
serve to illustrate the use case.


>
> -  We want to create language bindings for MQ Light which are _really_
> easy to use by programmers in that language. Specifically we find these
> bindings need to understand our messaging model. AMQP does not have a
> preconceived idea of a messaging model. We would like to put all of the
> client code into open source but do these language bindings belong in the
>  Proton project? If not, I suspect we'll create a separate project which
> has a dependency to proton.
>

At the lowest layer (i.e. the protocol engine) proton cannot make any
assumptions that would limit interop with any conforming AMQP
implementation. In other words if you are using the engine to speak AMQP
you need to be able to express and interact with the full range of protocol
capabilities. But, if layered properly, there is room for simplified models
built on top of the engine, e.g. for the sake of simplifying the user
experience Messenger makes some choices about how it uses the protocol that
limit the full range of interactions you can express purely through the
Messenger API. I'd say this kind of thing is fine so long as it is generic,
i.e. not limited to interacting with just one implementation that behaves
in an overly specific way.

There is also a contrib area in the Proton tree that we have the option to
use for the kind of thing you're describing, and this isn't necessarily an
either-or thing. We could start out with something in contrib and factor
parts of it into core and then move the rest of it to an external project
if that ever became warranted. Again, it's hard to say exactly what makes
sense without specifics, but hopefully you get a sense of the options.


>
> -  As we have consumed the Java engine API we have run up against some
> threading issues. This might be because we do not understand the threading
> model in the Engine.  At some point we'd like to have a design discussion
> with the community on the threading model in the Engine implementation.


I'm eager to hear what your issues are. I'm aware of some common problems
people have had and there are some ideas in the works to address those, so
it would be good to take into account your experiences sooner rather than
later.

--Rafael


Proton 0.7 RC4

2014-04-17 Thread Rafael Schloming
Hi Everyone,

I've posted 0.7 RC4 in the usual places. Source artifacts are here:

  - http://people.apache.org/~rhs/qpid-proton-0.7rc4/

Java binaries are here:

 - https://repository.apache.org/content/repositories/orgapacheqpid-1004/

Changes from RC3 are:

 - PROTON-550 (missing dependency checks)
 - PROTON-554 (compatibility bug with older swig versions)
 - PROTON-560 (SSL bug triggered by large messages)

I'm hoping this will be the last one, so please check it out and if I don't
hear anything bad I will call for a vote soon.

--Rafael


[VOTE]: Release Proton 0.7 RC4 as 0.7 final

2014-04-22 Thread Rafael Schloming
Hi Everyone,

I haven't heard of any issues in RC4, so I'm going to put this to a formal
vote now:

Source artifacts are here:

  - http://people.apache.org/~rhs/qpid-proton-0.7rc4/

Java binaries are here:

 - https://repository.apache.org/content/repositories/orgapacheqpid-1004/

Please review and register your vote:

  [ ] Yes, release 0.7 RC4 as 0.7 final
  [ ] No, 0.7 RC4 has the following issues...


Thanks,

--Rafael


Re: Using the messenger API to connect to a server without sending or subscribing

2014-04-22 Thread Rafael Schloming
Hi Chris,

Sorry for chiming in late, I've been in and out of meetings all day.
Comments are inline...

On Tue, Apr 22, 2014 at 2:49 AM, Chris White1 wrote:

> Hi
>
> I'm part of the IBM team developing MQ Light (
> https://www.ibmdw.net/messaging/mq-light/) and we are implementing our
> client API using the AMQP Messenger C API. Our client API has a connect
> function, which is required  to be invoked before sending or receiving
> messages. The AMQP Messager C API does not seem to have an API function to
> perform a connect, without sending a message or subscribing to receive
> messages.
>

As Fraser mentioned in his reply, part of the idea behind Messenger is to
be Message oriented as opposed to Connection oriented. One of the key
requirements behind this is the idea that you should be able to change the
topology of the physical connections in use without having any impact on
the application itself. For example, say a typical JMS application is coded
to interact with 3 or 4 different queues. If any of those queues are moved
to a different broker, more often than not you would probably need to
recode the JMS application. For a Messenger app though you simply adjust
the addresses in use and no code changes are necessary. This is just one
example of how that flexibility is useful, and there are a lot of other
possibilities, most of which aren't implemented yet, but which I don't want
to preclude. Things like:

  - automatic reconnect
  - automatically reclaiming idle connections
  - having messenger manage redundant pathways (useful for things that are
traditionally done with failover and load balancing in the broker)
  - peer to peer operation
  - server operation
  - disconnected operation
  - client side persistence

All that said, I completely buy that it would be nice to be able to fail
fast in certain scenarios (as Dominic points out in his email), and if we
can find a way to do that without necessarily surfacing connections so
directly then I'm all for it.


>
> Looking at the messenger.c source code I found that function
> pn_messenger_resolve appears to give the connect behaviour we require. So
> could the pn_messenger_resolve be added to the API please (maybe with a
> different name, say: pn_messenger_connect, which seems more intuitive)?
>
> I was thinking that the pn_messenger_start function should eventually be
> doing the connect, but that does not take an address argument, so is
> probably not appropriate.
>

I don't know if you've looked at pn_messenger_route at all, but it might be
possible for pn_messenger_start to optionally resolve any specified routes.
This might provide some of what you're looking for.
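
For example (a sketch; the hostnames are illustrative, and the fail-fast
behaviour on start is the proposal rather than what happens today):

  #include <proton/messenger.h>

  void setup_routes(pn_messenger_t *m) {
    pn_messenger_route(m, "broker1/*", "amqp://broker1.foo.com/$1");
    pn_messenger_route(m, "queueA", "amqp://broker2.bar.com/queueA");
    pn_messenger_start(m); /* could resolve the routes eagerly and fail fast */
  }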


>
> I would also be interested in others opinions about this, as it may seem
> to be a strange thing to want to do, i.e. why would you want to connect if
> you're not going to send or receive messages?  A use case for this could
> be that a server wants to be aware of active clients communicating with it
> before they are ready to send or receive messages. Also a connect function
> enables a client to determine if a server is available before exchanging
> data with it.
>

As I said above it makes sense to me from a fail fast perspective in some
scenarios, but I would think you would want to be able to explicitly
control it. For example if I've got a typo in my hostname I want to find
out about it right when I start the client as opposed to later on, but if I
want my client to be able to operate in disconnected mode then the fact
that I can't connect to a given host doesn't necessarily mean it is
invalid, and in that case the correct behaviour might be to just locally
queue the message and wait for connectivity to return.

--Rafael


Re: Using the messenger API to connect to a server without sending or subscribing

2014-04-23 Thread Rafael Schloming
On Wed, Apr 23, 2014 at 10:51 AM, Chris White1 wrote:

> Hi all
>
> Thanks for the informative and very helpful responses.
>
> We did look at qpid:Messaging but this seems to be separate from the
> qpid-proton library, and there is a concern that there is no Java API and
> some of the function we require is missing. Our server backend is built on
> the qpid-proton library so ideally we would like our client API to also be
> built using qpid-proton library function.
>
> As an aside, why is the qpid::messaging alternative API part of qpid
> rather than the qpid-proton package? Is there a specific reason why it
> wasn't built on top of the qpid-proton engine?
>

The qpid::messaging API actually predates proton. It was originally
implemented over the 0-10 version of the protocol. The 1.0 implementation
does in fact use the proton engine, however the dependencies make it
difficult to separate from the cpp broker.


>
> The  qpid-proton Messenger seems to give us the functionality that we
> require, except connect. So I can think of three options for the way
> forward:
> 1. Write our API based on the qpid-proton engine directly.
> 2. Have a qpid::messaging-like API built on the qpid-proton engine, and we
>    implement our API based on that.
> 3. See if we can't win you around to the idea of adopting the addition of a
>    pn_messenger_test_connection function at the Messenger API, as opposed to
>    the original idea of a pn_messenger_connect function. This would then
>    enable client applications to fail fast if the supplied connection details
>    were invalid.
> With the experience of the community what would you recommend?
>

My inclination would be to add some sort of policy or mode to messenger.
I'm not sure what I'd call it, but with this mode enabled, messenger (when
started) would always maintain active connections and/or links to any
declared routes. I think this is a bit more flexible than just the ability
to test a connection because a route can include the node information as
well. This would, e.g. give you the option of failing fast not only if the
broker was down, but also if the queue doesn't exist. In the python binding
it would look something like this:

  messenger.blah_mode = True
  messenger.route("broker1/*", "broker1.foo.com/$1")
  messenger.route("queueA", "broker2.bar.com/queueA")
  messenger.start() # this would now blow up if broker1 or broker2 is
inaccessible, or if queueA doesn't exist.

Does this seem like it would cover your use case?

--Rafael


Re: Using the messenger API to connect to a server without sending or subscribing

2014-04-23 Thread Rafael Schloming
On Wed, Apr 23, 2014 at 12:10 PM, Dominic Evans wrote:

> > My inclination would be to add some sort of policy or mode to messenger.
> > I'm not sure what I'd call it, but with this mode enabled, messenger
> (when
> > started) would always maintain active connections and/or links to any
> > declared routes. I think this is a bit more flexible than just the
> ability
> > to test a connection because a route can include the node information as
> > well. This would, e.g. give you the option of failing fast not only if
> the
> > broker was down, but also if the queue doesn't exist. In the python
> > binding
> > it would look something like this:
> >
> >   messenger.blah_mode = True
> >   messenger.route("broker1/*", "broker1.foo.com/$1")
> >   messenger.route("queueA", "broker2.bar.com/queueA")
> >   messenger.start() # this would now blow up if broker1 or broker2 is
> > inaccessible, or if queueA doesn't exist.
> >
> > Does this seem like it would cover your use case?
>
> That sounds like a good solution and would certainly meet our needs.
>
> Should we raise a New Feature issue in JIRA to track and discuss this
> further?
> That way you can have a think about how you'd prefer to see it implemented,
> and
> in the mean time we can put together a small patch toward this general idea
> and
> either submit that on JIRA, or allow you to come up with your own and we
> can
> rebase our API on top of that later.
>

Yeah, that would be great.

--Rafael


[RESULT] [VOTE]: Release Proton 0.7 RC4 as 0.7 final

2014-04-25 Thread Rafael Schloming
On Tue, Apr 22, 2014 at 7:12 AM, Rafael Schloming  wrote:

> Hi Everyone,
>
> I haven't heard of any issues in RC4, so I'm going to put this to a formal
> vote now:
>
> Source artifacts are here:
>
>   - http://people.apache.org/~rhs/qpid-proton-0.7rc4/
>
> Java binaries are here:
>
>  - https://repository.apache.org/content/repositories/orgapacheqpid-1004/
>
> Please review and register your vote:
>
>   [ ] Yes, release 0.7 RC4 as 0.7 final
>   [ ] No, 0.7 RC4 has the following issues...
>
>
> Thanks,
>
> --Rafael
>
>


Re: [RESULT] [VOTE]: Release Proton 0.7 RC4 as 0.7 final

2014-04-25 Thread Rafael Schloming
Oops, bumped the send key a little early there.

The vote passes with 5 (4 binding) +1 votes and zero -1 votes. I will
follow up when the release is posted.

--Rafael


On Fri, Apr 25, 2014 at 11:04 AM, Rafael Schloming  wrote:

>
>
>
> On Tue, Apr 22, 2014 at 7:12 AM, Rafael Schloming wrote:
>
>> Hi Everyone,
>>
>> I haven't heard of any issues in RC4, so I'm going to put this to a
>> formal vote now:
>>
>> Source artifacts are here:
>>
>>   - http://people.apache.org/~rhs/qpid-proton-0.7rc4/
>>
>> Java binaries are here:
>>
>>  - https://repository.apache.org/content/repositories/orgapacheqpid-1004/
>>
>> Please review and register your vote:
>>
>>   [ ] Yes, release 0.7 RC4 as 0.7 final
>>   [ ] No, 0.7 RC4 has the following issues...
>>
>>
>> Thanks,
>>
>> --Rafael
>>
>>
>


Re: Minimizing the Proton Engine/Messenger RAM Footprint for embedded devices

2014-04-25 Thread Rafael Schloming
Hello and welcome to the list.

See inline for further comments...

On Fri, Apr 25, 2014 at 11:55 AM, dcjohns41 wrote:

> Hello Proton Experts,
>
> I am new to this mailing list and use of the Proton system, so please do
> not
> get offended if I say something naïve.
>

No worries. Please don't be shy. ;-)


> My team is attempting to use the Proton-C Engine/Messenger package on a
> small footprint embedded device. We are attempting to send sensor
> information and receive data to/from the MS Azure Service Bus
> infrastructure.  We are attempting to use the QPid Proton 0.6 release.  The
> system is using C code on an ARM-xxx processor.
>
> Our configuration has very limited RAM and have been attempting to reduce
> the run-time requirement to fit in <50Kb or so.  Our data needs are not
> very
> big and normally would fit in <1024 bytes of payload.
>
> We have trimmed the Proton input/output buffers down from 16KB to smaller
> values.  Have attempted to disable logging and anything else we could to
> get
> the footprint smaller.  At the moment we have the RAM consumption near
> 43Kbytes.  However, once we start attempting to make socket connections, we
> see additional RAM allocations that push over the 50KB limit.
>
> My first question is- has this been done on other systems where the RAM is
> very limited?  If yes, is that example available for us to review?
>

I know it's been used on some embedded platforms, but I'm afraid I don't
have any details. I'm guessing either the footprint requirements were not
as small as yours or whatever tuning was required didn't make it back to
the codebase.


> Is it feasible to limit some of the functions/options in the library to at
> least get basic functionality of AMQP 1.0 to Azure/Service Bus running?
>  Our
> expectation is that we need to have at least one transmit link and one
> receive link to pre-defined topics on the Service Bus side.
>

There's probably a bunch of different things we could try, but to be honest
we haven't really taken a serious look at memory utilization. I'd start by
instrumenting malloc/free so we can identify where the memory usage is
coming from. I'd actually like to get that kind of instrumentation into the
Proton codebase so if you are inclined to do this I'd love to see the
patch. If not I hope to get around to it soon, but it's always difficult to
predict how much time I will have.


>
> Ultimately we need to have AMQPS (via TLS).  But in the short term we are
> just attempting to get basic communication running.
>

The builtin TLS support relies on openssl, so there may be limits to what
we can do in terms of memory utilization there. Of course you always have
the option of not using the builtin TLS layer and supplying your own, but
again I'd say first thing would probably be to instrument and see how close
we are to your goal.

--Rafael


Re: Messenger: pn_delivery leaking pn_disposition memory?

2014-04-25 Thread Rafael Schloming
On Fri, Apr 25, 2014 at 1:39 PM, Dominic Evans wrote:

> In one of our automated client stress tests, we've noticed that we seem to
> be
> leaking memory. We were previously seeing this on qpid-proton 0.6 and I've
> retested on 0.7 RC3 and it is still occurring
>
> ==16195== 45,326,848 bytes in 25,294 blocks are possibly lost in loss
> record
> 1,865 of 1,867
> ==16195==at 0x4C274A0: malloc (in
> /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==16195==by 0x86CC7AC: pn_data (codec.c:363)
> ==16195==by 0x86D7372: pn_disposition_init (engine.c:1066)
> ==16195==by 0x86D756B: pn_delivery (engine.c:1102)
> ==16195==by 0x86DB93E: pn_do_transfer (transport.c:738)
> ==16195==by 0x86D3A21: pn_dispatch_frame (dispatcher.c:146)
> ==16195==by 0x86D3B28: pn_dispatcher_input (dispatcher.c:169)
> ==16195==by 0x86DCB4C: pn_input_read_amqp (transport.c:1117)
> ==16195==by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
> ==16195==by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
> ==16195==by 0x86DC74E: transport_consume (transport.c:1037)
> ==16195==by 0x86DF89B: pn_transport_process (transport.c:2052)
> ==16195==
> ==16195== 45,326,848 bytes in 25,294 blocks are possibly lost in loss
> record
> 1,866 of 1,867
> ==16195==at 0x4C274A0: malloc (in
> /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==16195==by 0x86CC7AC: pn_data (codec.c:363)
> ==16195==by 0x86D4E58: pn_condition_init (engine.c:203)
> ==16195==by 0x86D738A: pn_disposition_init (engine.c:1067)
> ==16195==by 0x86D756B: pn_delivery (engine.c:1102)
> ==16195==by 0x86DB93E: pn_do_transfer (transport.c:738)
> ==16195==by 0x86D3A21: pn_dispatch_frame (dispatcher.c:146)
> ==16195==by 0x86D3B28: pn_dispatcher_input (dispatcher.c:169)
> ==16195==by 0x86DCB4C: pn_input_read_amqp (transport.c:1117)
> ==16195==by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
> ==16195==by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
> ==16195==by 0x86DC74E: transport_consume (transport.c:1037)
> ==16195==
> ==16195== 45,328,640 bytes in 25,295 blocks are possibly lost in loss
> record
> 1,867 of 1,867
> ==16195==at 0x4C274A0: malloc (in
> /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==16195==by 0x86CC7AC: pn_data (codec.c:363)
> ==16195==by 0x86D7360: pn_disposition_init (engine.c:1065)
> ==16195==by 0x86D756B: pn_delivery (engine.c:1102)
> ==16195==by 0x86DB93E: pn_do_transfer (transport.c:738)
> ==16195==by 0x86D3A21: pn_dispatch_frame (dispatcher.c:146)
> ==16195==by 0x86D3B28: pn_dispatcher_input (dispatcher.c:169)
> ==16195==by 0x86DCB4C: pn_input_read_amqp (transport.c:1117)
> ==16195==by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
> ==16195==by 0x86DF4FD: pn_io_layer_input_passthru (transport.c:1964)
> ==16195==by 0x86DC74E: transport_consume (transport.c:1037)
> ==16195==by 0x86DF89B: pn_transport_process (transport.c:2052)
>
>
> Looking at the code I can see this should get freed in
> pn_disposition_finalize once pn_decref(delivery) is called, but I haven't
> yet had a chance to determine why this isn't occurring. Has anyone else
> seen
> this before and is there anything obvious we could be doing wrong?
>

Are these the only kinds of valgrind records you are seeing? I can't see
offhand how it would be possible to leak the nodes inside a pn_data_t
without also leaking a whole bunch of other stuff. I ran the simple
messenger send/recv examples under valgrind and it was clean for me.

--Rafael


Re: Minimizing the Proton Engine/Messenger RAM Footprint for embedded devices

2014-04-28 Thread Rafael Schloming
On Mon, Apr 28, 2014 at 2:37 PM, Andrew Stitcher wrote:

> On Mon, 2014-04-28 at 13:34 -0400, Alan Conway wrote:
> > I wouldn't instrument the code for this, there are tools that can
> > measure this kind of thing without code changes.
> >
> > valgrind has cachegrind which profiles calls in general and can be used
> > to identify the sources of malloc calls - there's a nice visualization
> > tool kcachegrind for looking at the output.
> >
> > valgrind also has a memory profiler called massif which I haven't really
> > used.
> >
> > systemtap provides a general scripting language that you can use to
> > intercept malloc calls. There are a bunch of memory related scripts
> > pre-packaged
> > http://sourceware.org/systemtap/examples/keyword-index.html#MEMORY
> >
> > There are surely others...
>
> These are great tools that work really well and I've enjoyed using them
> myself on big heavy weight platforms like Linux. However on the little
> embedded platforms the OP was asking about there is no real alternative
> but to use something much lighter weight. For example some simple built
> in instrumentation. Perhaps built in to the linked-in libc runtime
> library though so it needn't necessarily require code changes to Proton.
>
> Of course if you can run the initial scenario on Linux under
> valgrind/systemtap etc. first then that will give you an idea what you
> need to go and change, but you will still need the target machine
> instrumentation in the end.
>

I'd also like to be able to track peak memory usage as well as
mallocs/frees under various load scenarios, and I suspect valgrind would
add too much overhead for that.

--Rafael


[ANNOUNCE] Qpid Proton 0.7 released

2014-04-29 Thread Rafael Schloming
Hi Everyone,

Qpid Proton 0.7 is now officially available. You can find it here:

  - http://qpid.apache.org/releases/qpid-proton-0.7/index.html

In addition to numerous bug fixes, there are several new features worth
mentioning.

== The Event API ==

One of the consistent pieces of feedback from people using the engine API
is that it is cumbersome and error prone to determine what high level state
changes have occurred in the wake of I/O. The event API aims to address
this by providing a significantly more robust interface for handling
protocol state changes. It is new in 0.7 and will hopefully improve and
evolve with feedback.

== Selectable Messenger ==

The messenger API includes a new selectables API that can be used to
integrate it into external I/O loops.

== Documentation ==

Last but not least, the 0.7 release comes with significantly improved
documentation. There's still a long way to go on this front, but the C API
is now (almost) entirely documented, and much of that documentation is at
least somewhat applicable to the non C APIs. You can check out the improved
documentation here:

  -
http://qpid.apache.org/releases/qpid-proton-0.7/protocol-engine/c/api/modules.html

--Rafael


simplified factory pattern for proton-j

2014-04-29 Thread Rafael Schloming
Hi Everyone,

I've put together a patch that makes the proton-j factory usage a bit
simpler and more consistent. You can review it here if you like:

  - https://reviews.apache.org/r/20854/

The main point of the patch is to make all the factories consistently
follow this pattern:

  package.Interface iface = package.Interface.Factory.create(...);

I like this because it is simple and easy to remember and doesn't require
importing the impl packages directly.
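
Concretely, with the engine interfaces this ends up looking something like:

  Collector collector = Collector.Factory.create();
  Connection connection = Connection.Factory.create();
  Transport transport = Transport.Factory.create();
  transport.bind(connection);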

The patch preserves the convenience constructors, e.g. Proton.connection()
and so forth, but it does remove the old factory APIs. I think this is a
reasonable thing to do because the old factories were cumbersome enough to
use that I don't think anyone actually bothered (including our own
examples).

In any case, please shout if this patch will be troublesome for you. If I
don't hear anything I'll go ahead and commit it later this week.

Thanks,

--Rafael


Re: Optimizations on Proton-j

2014-04-29 Thread Rafael Schloming
On Tue, Apr 29, 2014 at 9:27 AM, Clebert Suconic wrote:

> I have done some work last week on optimizing the Codec.. and I think i've
> gotten some interesting results.
>
>
> - The Decoder now is stateless, meaning the same instance can be used over
> and over (no more need for one instance per connection). Bozo Dragojefic
> has actually seen how heavy it is to create a Decoder and has recently
> optimized MessageImpl to always take the same instance through
> ThreadLocals. This optimization goes a step further
> - I have changed the ListDecoders so that you won't need intermediate
> objects to parse Types. For now I have only made Transfer as that effective
> type but I could do that for all the other Types at some point
> - There were a few hotspots that I found on the test and I have refactored
> accordingly, meaning no semantic changes.
>
> As a result of these optimizations, DecoderImpl won't have a setBuffer
> method any longer. Instead of that each method will take a
> read(ReadableBuffer..., old signature).
>
>
> And talking about ReadableBuffer, I have introduced the interface
> ReadableBuffer. When integrating on the broker, I had a situation where I
> won't have a ByteBuffer, and this interface will allow me to further
> optimize the Parser later as I could take the usage of Netty Buffer (aka
> ByteBuf).
>
>
> You will find these optimizations on my branch on github:
> https://github.com/clebertsuconic/qpid-proton/tree/optimizations
>
>
> Where I will have two commits:
>
> I - a micro benchmark where I added a testcase doing a direct read on the
> buffer without any framework. I've actually written a simple parser that
> will work for the byte array I have, but that's very close to reading
> directly from the bytes.
>I used that to compare raw reading and interpreting the buffer to the
> current framework we had.
>I was actually concerned about the number of intermediate objects, so I
> used that to map these differences.
>
>
> https://github.com/clebertsuconic/qpid-proton/commit/7b2b02649e5bdd35aa2e4cc487ffb91c01e75685
>
>
> II - a commit with the actual optimizations:
>
>
>
> https://github.com/clebertsuconic/qpid-proton/commit/305ecc6aaa5192fc0a1ae42b90cb4eb8ddfe046e
>
>
>
>
>
>
>
>
> Without these optimizations my MicroBenchmark, parsing 1000L instances
> of Transfer, without reallocating any buffers could complete on my laptop
> in:
>
> - 3480 milliseconds , against 750 milliseconds with raw reading
>
>
> After these optimizations:
> - 1927 milliseconds, against 750 milliseconds with raw reading
>

This sounds very promising and an excellent excuse for me to dust off some
of my somewhat rusty git skills. ;-)


>
> Notice that this will also minimize the footprint of the codec but I'm not
> measuring that here.
>

Just out of curiosity, when you say minimize the footprint, are you
referring to the in memory overhead, or do you mean the encoded size on the
wire?


> I'm looking forward to work with this group, I actually had a meeting with
> Rafi and Ted last week, and I plan to work closer to you guys on this
>

Excellent! I'm also looking forward to digging in a bit more on the Java
side of things.

--Rafael


Re: simplified factory pattern for proton-j

2014-04-30 Thread Rafael Schloming
I forgot to mention, but another part of the reasoning here is that Java 8
is (finally!!!) allowing static methods in interfaces, so the "natural"
pattern for this sort of thing would just be Interface.create(...), and
while we won't be able to use that for a while, the
Interface.Factory.create(...) option is about as idiomatically close to
that as we can get in Java 7.

--Rafael


On Tue, Apr 29, 2014 at 2:57 PM, Rafael Schloming  wrote:

> Hi Everyone,
>
> I've put together a patch that makes the proton-j factory usage a bit
> simpler and more consistent. You can review it here if you like:
>
>   - https://reviews.apache.org/r/20854/
>
> The main point of the patch is to make all the factories consistently
> follow this pattern:
>
>   package.Interface iface = package.Interface.Factory.create(...);
>
> I like this because it is simple and easy to remember and doesn't require
> importing the impl packages directly.
>
> The patch preserves the convenience constructors, e.g. Proton.connection()
> and so forth, but it does remove the old factory APIs. I think this is a
> reasonable thing to do because the old factories were cumbersome enough to
> use that I don't think anyone actually bothered (including our own
> examples).
>
> In any case, please shout if this patch will be troublesome for you. If I
> don't hear anything I'll go ahead and commit it later this week.
>
> Thanks,
>
> --Rafael
>
>


Re: simplified factory pattern for proton-j

2014-04-30 Thread Rafael Schloming
Right, I wasn't suggesting the proton codebase use Java 8 anytime soon (I
would think not until Java 7 is EOLed), just that if a Java 8 codebase uses
proton-j then the idioms will be a little bit closer to each other.

--Rafael


On Wed, Apr 30, 2014 at 8:33 AM, Clebert Suconic wrote:

> The only issue is that all the users using Proton-j would have to be at
> least Java8.
> For instance, I can have maybe my users using Java8 on the server, but
> they won't migrate all their clients.
>
>
> On Apr 30, 2014, at 6:48 AM, Rafael Schloming  wrote:
>
> > I forgot to mention, but another part of the reasoning here is that Java
> 8
> > is (finally!!!) allowing static methods in interfaces, so the "natural"
> > pattern for this sort of thing would just be Interface.create(...), and
> > while we won't be able to use that for a while, the
> > Interface.Factory.create(...) option is about as idiomatically close to
> > that as we can get in Java 7.
> >
> > --Rafael
> >
> >
> > On Tue, Apr 29, 2014 at 2:57 PM, Rafael Schloming 
> wrote:
> >
> >> Hi Everyone,
> >>
> >> I've put together a patch that makes the proton-j factory usage a bit
> >> simpler and more consistent. You can review it here if you like:
> >>
> >>  - https://reviews.apache.org/r/20854/
> >>
> >> The main point of the patch is to make all the factories consistently
> >> follow this pattern:
> >>
> >>  package.Interface iface = package.Interface.Factory.create(...);
> >>
> >> I like this because it is simple and easy to remember and doesn't
> require
> >> importing the impl packages directly.
> >>
> >> The patch preserves the convenience constructors, e.g.
> Proton.connection()
> >> and so forth, but it does remove the old factory APIs. I think this is a
> >> reasonable thing to do because the old factories were cumbersome enough
> to
> >> use that I don't think anyone actually bothered (including our own
> >> examples).
> >>
> >> In any case, please shout if this patch will be troublesome for you. If
> I
> >> don't hear anything I'll go ahead and commit it later this week.
> >>
> >> Thanks,
> >>
> >> --Rafael
> >>
> >>
>
>


Re: Optimizations on Proton-j

2014-04-30 Thread Rafael Schloming
On Wed, Apr 30, 2014 at 8:35 AM, Clebert Suconic wrote:

> @Rafi: I see there is a patch review  process within Apache (based on your
> other thread on Java8)
>
> Should we make this through the patch process at some point?
>

I'm fine looking at it on your git branch, but if you'd like to play with
the review tool then feel free.  Just let me know if you need an account
and I will try to remember how to set one up (or who to bug to get you
one). ;-)

--Rafael


Re: Messenger.receive(1) causing sluggish behavior

2014-05-01 Thread Rafael Schloming
I'm not sure this is a bug per se as opposed to a combination of
misunderstood and/or ill-conceived features.  The ~/ prefix when used in a
reply-to gets substituted with the name of the messenger. That is what is
substituting the container name into the address. The container name when
the messenger was constructed was left unspecified, and so that is why it
ends up being a UUID. That whole part is functioning as expected and that
is why the reply comes back to the same container that the request
originated from. What's going wrong here is the use of the # notation. That
is signalling messenger to create a dynamic link instead of a regular one,
and it was introduced with the expectation that it would be used for remote
subscriptions, e.g.:

  sub = messenger.subscribe("remote-broker/#")
  ...
  msg.reply_to = sub.address

Note that in the above usage, the # notation requests the dynamically
created node, and the "sub.address" expression accesses the value. The #
notation is never used directly on a message. If you were to print
sub.address it would be some random gobbledygook assigned by the remote
peer, and not #. Now messenger treats addresses symmetrically for sending
and receiving so when you send to foo/# it is also going to create a
dynamic node for sending. The problem here is that unlike the subscription
scenario, there is no way to actually access the remote address assigned by
the peer, and each time we send we end up creating a new link.

The reason this slows things down when recv(1) is used instead of recv(-1)
is that the student only has one credit at a time to share amongst all the
spurious links that are created by the use of the # notation. So basically
by the time you get to N messages, you need to wait for the credit
algorithm to poll N links for one message at a time. I'm actually quite
happy to see that it doesn't freeze, because that means the credit
algorithm is actually doing its job. This is a pretty pathological case
from a flow control perspective, and it is good news that we aren't locking
up entirely. The problem still exists if you use a larger value for credit
or even if you use -1 for credit, but it will take a while longer to become
noticeable because you have a lot more credit to poll with.

I can think of a bunch of options for changing/extending the # feature to
make it a bit more useful for sending, but regardless, the use of # in this
particular scenario makes no sense even if it were extended/improved. Using
it in a reply-to in this way is basically asking the server to ask the
client to dynamically create an address for the reply. It is much less
perverse for the client to simply create the necessary addresses itself
instead of asking the server to ask the client to create the address, e.g.
the client can just use "~/reply1", "~/reply2", ... or whatever other
scheme it might want to use for identifying replies. In fact in your case
(and many others) you don't really even need more than one address. If you
modify your example to just use "~/replies", it will work properly
regardless of what value you pass to recv.
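
For example, the client side of your scenario could look roughly like this (a
sketch using the proton-j Messenger API; treat the exact method names and the
request address as assumptions, and note that error handling is elided):

  Messenger mng = Proton.messenger();
  mng.start();
  mng.subscribe("~/replies");                  // one fixed local reply node, no '#'

  Message req = Proton.message();
  req.setAddress("amqp://guru-host/requests"); // hypothetical request address
  req.setReplyTo("~/replies");
  mng.put(req);
  mng.send();

  mng.recv(1);                                 // behaves the same for any credit value
  Message reply = mng.get();
  mng.stop();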

On the overall # issue it's probably worth a JIRA detailing the problem
with using it on send rather than subscribe, however I'd say it's probably
higher priority to document what it actually means since I don't think very
many sensible uses would run into the problem you're seeing. In fact in
general it probably never makes sense to send a literal '#' in an address
appearing in a message like you're doing. You really would only use it to
query for an address from your peer and then send the result of the query.
Unfortunately I don't think we can just make it illegal to use a '#' since
the '#' behaviour is really local to messenger and '#' might be a
meaningful address if you're speaking to a non messenger implementation.

--Rafael

On Wed, Apr 30, 2014 at 3:42 PM, Ken Giusti  wrote:

> I think this may be a bug in messenger.
>
> From tracing the wire, I see that every time guru sends a reply using the
> reply_to field in the message, a completely new session and link are
> created.  Over time this leads to a large number of sessions and links -
> one for each reply message sent.
>
> The student is setting the reply_to in its request to "~/#", which causes
> a uuid-based address to be substituted in the outgoing request messages'
> reply_to field.  For example, guru might get the following reply-to in each
> received message:  amqp://cd5ec72c-8d99-4d5f-b796-4b2447f35b6a/#
>
> What appears to be happening is messenger on guru.rb fails to re-use an
> existing 'return link' back to student, and creates a new one.   I think
> this is due to the way messenger marks the link as 'dynamic' and never sets
> the terminus address in the new link.  Thus when the next request arrives
> with the same reply-to, the link resolution logic doesn't find a link with
> a matching terminus address, and creates another one.
>
> Sounds wrong to me - though honestly I'm not 100%

Re: simplified factory pattern for proton-j

2014-05-01 Thread Rafael Schloming
Just a heads up, I've committed this to trunk. Please let me know if it
causes any problems.

--Rafael


On Tue, Apr 29, 2014 at 2:57 PM, Rafael Schloming  wrote:

> Hi Everyone,
>
> I've put together a patch that makes the proton-j factory usage a bit
> simpler and more consistent. You can review it here if you like:
>
>   - https://reviews.apache.org/r/20854/
>
> The main point of the patch is to make all the factories consistently
> follow this pattern:
>
>   package.Interface iface = package.Interface.Factory.create(...);
>
> I like this because it is simple and easy to remember and doesn't require
> importing the impl packages directly.
>
> The patch preserves the convenience constructors, e.g. Proton.connection()
> and so forth, but it does remove the old factory APIs. I think this is a
> reasonable thing to do because the old factories were cumbersome enough to
> use that I don't think anyone actually bothered (including our own
> examples).
>
> In any case, please shout if this patch will be troublesome for you. If I
> don't hear anything I'll go ahead and commit it later this week.
>
> Thanks,
>
> --Rafael
>
>


Re: Optimizations on Proton-j

2014-05-01 Thread Rafael Schloming
Hi Clebert,

I've been (amongst other things) doing a little bit of investigation on
this topic over the past couple of days. I wrote a microbenchmark that
takes two engines and directly wires their transports together. It then
pumps about 10 million 1K messages from one engine to the other. I ran this
benchmark under jprofiler and codec definitely came up as a hot spot, but
when I apply your patch, I don't see any measurable difference in results.
Either way it's taking about 40 seconds to pump all the messages through.

I'm not quite sure what is going on, but I'm guessing either the code path
you've optimized isn't coming up enough to make much of a difference, or
I've somehow messed up the measurements. I will post the benchmark shortly,
so hopefully you can check up on my measurements yourself.

On a more mundane note, Andrew pointed out that the new files you've added
in your patch use an outdated license header. You can take a look at some
existing files in the repo to get a current license header.

--Rafael



On Wed, Apr 30, 2014 at 2:15 PM, Clebert Suconic wrote:

> I just submitted it as a git PR:
>
> https://github.com/apache/qpid-proton/pull/1
>
>
>
> On Apr 30, 2014, at 10:47 AM, Robbie Gemmell 
> wrote:
>
> > I think anyone can sign up for ReviewBoard themselves. It certainly
> didn't
> > used to be linked to the ASF LDAP in the past, presumably for that
> reason.
> >
> > Its probably also worth noting you can initiate pull requests against the
> > github mirrors. If it hasn't already been done for the proton mirror, we
> > can have the emails that would generate be directed to this list (e.g.
> >
> http://mail-archives.apache.org/mod_mbox/qpid-dev/201401.mbox/%3c20140130180355.3cf9e916...@tyr.zones.apache.org%3E
> ).
> > We obviously can't merge the pull request via github, but you can use
> > the reviewing tools etc and the resultant patch can be downloaded or
> > attached to a JIRA and then applied in the usual fashion (I believe there
> > is a commit message syntax that can be used to trigger closing the pull
> > request).
> >
> > Robbie
> >
> > On 30 April 2014 15:22, Rafael Schloming  wrote:
> >
> >> On Wed, Apr 30, 2014 at 8:35 AM, Clebert Suconic  >>> wrote:
> >>
> >>> @Rafi: I see there is a patch review  process within Apache (based on
> >> your
> >>> other thread on Java8)
> >>>
> >>> Should we make this through the patch process at some point?
> >>>
> >>
> >> I'm fine looking at it on your git branch, but if you'd like to play
> with
> >> the review tool then feel free.  Just let me know if you need an account
> >> and I will try to remember how to set one up (or who to bug to get you
> >> one). ;-)
> >>
> >> --Rafael
> >>
>
>


Re: Optimizations on Proton-j

2014-05-05 Thread Rafael Schloming
our
> >> mailing list and the Pull Requests.
> >>
> >> Regarding closing the pull requests, it seems like something along the
> >> lines of "This closes # at GitHub" added to the end of
> the
> >> svn commit message should do the trick:
> >> https://help.github.com/articles/closing-issues-via-commit-messages
> >>
> >> I haven't had a chance to really look at the actual code change but when
> I
> >> was quickly scrolling down the PR, in addition to the licence headers on
> >> the new files that Rafi already mentioned (which I spotted due to the
> >> Copyright notices we wouldn't typically have) I noticed Encoder.java
> having
> >> its existing licence header corrupted a little by some wayward code.
> >>
> >> Robbie
> >> I just submitted it as a git PR:
> >>
> >> https://github.com/apache/qpid-proton/pull/1
> >>
> >>
> >>
> >> On Apr 30, 2014, at 10:47 AM, Robbie Gemmell 
> >> wrote:
> >>
> >>> I think anyone can sign up for ReviewBoard themselves. It certainly
> didn't
> >>> used to be linked to the ASF LDAP in the past, presumably for that
> reason.
> >>>
> >>> Its probably also worth noting you can initiate pull requests against
> the
> >>> github mirrors. If it hasn't already been done for the proton mirror,
> we
> >>> can have the emails that would generate be directed to this list (e.g.
> >>>
> >>
> http://mail-archives.apache.org/mod_mbox/qpid-dev/201401.mbox/%3c20140130180355.3cf9e916...@tyr.zones.apache.org%3E
> >> ).
> >>> We obviously can't merge the pull request via github, but you can use
> >>> the reviewing tools etc and the resultant patch can be downloaded or
> >>> attached to a JIRA and then applied in the usual fashion (I believe
> there
> >>> is a commit message syntax that can be used to trigger closing the pull
> >>> request).
> >>>
> >>> Robbie
> >>>
> >>> On 30 April 2014 15:22, Rafael Schloming  wrote:
> >>>
> >>>> On Wed, Apr 30, 2014 at 8:35 AM, Clebert Suconic  >>>>> wrote:
> >>>>
> >>>>> @Rafi: I see there is a patch review  process within Apache (based on
> >>>> your
> >>>>> other thread on Java8)
> >>>>>
> >>>>> Should we make this through the patch process at some point?
> >>>>>
> >>>>
> >>>> I'm fine looking at it on your git branch, but if you'd like to play
> with
> >>>> the review tool then feel free.  Just let me know if you need an
> account
> >>>> and I will try to remember how to set one up (or who to bug to get you
> >>>> one). ;-)
> >>>>
> >>>> --Rafael
> >>>>
> >
>
>
/*
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 *
 */

package org;

import org.apache.qpid.proton.engine.*;
import static java.lang.Math.*;
import java.nio.*;
import java.util.*;


/**
 * Main
 *
 */

public class Main
{

    public static final void main(String[] args)
    {
        Collector c = Collector.Factory.create();

        Connection c1 = Connection.Factory.create();
        Connection c2 = Connection.Factory.create();
        Transport t1 = Transport.Factory.create();
        Transport t2 = Transport.Factory.create();
        t1.bind(c1);
        t2.bind(c2);

        c1.collect(c);
        c2.collect(c);
        c1.open();
        c2.open();

        Session ssn1 = c1.session();
        ssn1.open();
        Sender snd = ssn1.sender("sender");
        snd.open();

        t1.setContext(t2);
        t2.setContext(t1);

        int total = 10*1024*1024;
        PingPong pp = new PingPong(total);
        pp.dispatchAll(c);

        if (pp.rcvd != total || pp

codec strategies

2014-05-07 Thread Rafael Schloming
I've put together some mock ups of a few different codec strategies both to
compare from an API/usability perspective and to get a rough idea of some
of the performance implications of the different choices. Please see the
attached code for the full details. I'll summarize the different strategies
below.

The SimpleEncoder is pretty straighforward, the only real point here is to
use basic types to represent values and thereby minimize the amount of
intermediate memory and CPU required in order to use the codec.

The DispatchingDecoder works similarly to a sax style parser. It basically
iterates over the encoded content and dispatches values to a handler.

The StreamingDecoder is similar to the DispatchingDecoder except instead of
an internal "bytecode" loop calling out to a handler, it is externally
driven by calling into the decoder. This appears to be marginally slower
than the DispatchingDecoder in the particular scenario in the mock up,
however it may have some API benefitis, e.g. conversions can be done on
demand and it is possible to skip over uninteresting data rather than
parsing it.

The mock up also includes the same data being encoded/decoded using the
existing codec (with Clebert's patch).

Fair warning, the data I chose to encode/decode is completely arbitrary and
not intended to be representative at all. That said, the numbers I'm
getting suggest to me that we can do a whole lot better than the current
codec if we start with something simple and keep it that way. Here is the
output I'm getting for a run with a hundred million iterations:

  simple encode: 4416 millis
  dispatching decode: 3049 millis
  streaming decode: 3243 millis
  existing encode: 9515 millis
  existing decode: 13931 millis

Another factor to consider is the difficulty in quantifying the impact of
generating lots of garbage. In a small benchmark like this there isn't a
lot of memory pressure, so extra garbage doesn't have a lot of impact,
however in a real application that would translate into increased GC cycles
and so might be more of a factor. What I can say from watching memory usage
under the profiler is that at any given point there are typically hundreds
of megs worth of garbage Integer and UUID instances lying around when the
existing codec is running. All of the alternative strategies I've included
don't generate any garbage.

--Rafael
/*
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 *
 */
package org;

import java.util.UUID;
import org.apache.qpid.proton.codec.*;
import java.nio.*;

/**
 * Codec
 *
 */

public class Codec
{

    public static final void main(String[] args) {
        int loop = 10*1024*1024;
        if (args.length > 0) {
            loop = Integer.parseInt(args[0]);
        }

        String test = "all";
        if (args.length > 1) {
            test = args[1];
        }

        boolean runDispatching =
            test.equals("all") || test.equals("dispatching");
        boolean runStreaming = test.equals("all") || test.equals("streaming");
        boolean runExisting =
            test.equals("all") || test.equals("existing");

        byte[] bytes = new byte[1024];
        ByteBuffer buf = ByteBuffer.wrap(bytes);

        long start, end;

        if (runDispatching || runStreaming) {
            start = System.currentTimeMillis();
            int size = simpleEncode(bytes, loop);
            end = System.currentTimeMillis();
            time("simple encode", start, end);

            if (runDispatching) {
                start = System.currentTimeMillis();
                dispatchingDecode(bytes, size, loop);
                end = System.currentTimeMillis();
                time("dispatching decode", start, end);
            }

            if (runStreaming) {
                start = System.currentTimeMillis();
                streamingDecode(bytes, size, loop);
                end = System.currentTimeMillis();
                time("streaming decode", start, end);
            }
        }

        if (runExisting) {
            start = System.currentTimeMillis();
            DecoderImpl dec = existingEncode(buf, loop);
            end = System.currentTimeMillis();
            time("existing encode", start, end);

            buf.flip();

  

Re: Optimizations on Proton-j

2014-05-07 Thread Rafael Schloming
Hi,

Comments inline...

On Mon, May 5, 2014 at 5:51 PM, Clebert Suconic  wrote:

> I have some ideas as well:
>
>
> - Calculating size prior to sending:
>
>  - We could write zeroes, write to the buffer... come back to the previous
> position.. write the size instead of calculating it.
>

Yeah, this is what I've done before. The only tricky thing here is figuring
out how much space to reserve for the size. In order to minimize the size
of the encoded data, it's better to use an encoding with a 1 byte size when
you can, but of course you don't know in advance if the size of the encoded
data will fit within a single byte.
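
A rough sketch of the back-fill trick with a plain ByteBuffer (not proton-j
code; encodeBody() is a hypothetical helper, and the size field is assumed to
be a fixed 4 bytes, which is exactly the trade-off mentioned above):

  ByteBuffer buf = ByteBuffer.allocate(1024);
  int sizePos = buf.position();        // remember where the size field lives
  buf.putInt(0);                       // reserve 4 bytes for it
  int bodyStart = buf.position();
  encodeBody(buf);                     // write the actual content
  int size = buf.position() - bodyStart;
  buf.putInt(sizePos, size);           // come back and write the real size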


>
> I have read this code a lot.. and I wouldn't rewrite the code.. just
> optimize these cases... I wouldn't optimize it for every possible case
> TBH.. just on message Delivery and Settling unless you want to also
> optimize other cases for use cases that I'm not aware at the moment.
>

I think the ability to access key data without doing a full decode is
likely to be useful at some point. I should also say that I think the
actual codec interface is not terribly useful/friendly right now for end
users. I don't particularly mind whether we iterate or replace the current
implementation, but I do think we need a solid idea of the end goal. To
that end I've put together a mock up of a few different strategies that
I've posted in another thread.


>
>
> other things that could boost performance based on the micro benchmark I
> wrote:
>
>
> - Using Integer, Long.. etc..inside of UnsignedInt, UnsignedLong would
> give you a good boost in performance. The JDK is already optimized to box
> these types... while UnsignedInt, UnsignedLong.. etc.. its not well
> optimized.
>

I haven't noticed much of a difference between Integer and UnsignedInteger
in any of the profiling I've done, but using the unboxed variants would
definitely make a difference.


>
> - Reusing buffers.. maybe adding a framework where we could reuse
> buffers.. or delegate into other frameworks (e.g. Netty).
>

Yeah, we should look at this in the context of copying as well.

--Rafael


Re: Minimizing the Proton Engine/Messenger RAM Footprint for embedded devices

2014-05-12 Thread Rafael Schloming
I've checked in a few easy tweaks that should help some of the worst
offenders in your list. I don't know close it will get you to your goal,
but it would be helpful if you could rerun your test against the latest
code on trunk.

--Rafael


On Fri, May 9, 2014 at 11:41 AM, dcjohns41 wrote:

> Thanks for the various suggestions on capturing the memory allocation.  We
> started the process and have found several areas that seem to be large RAM
> users. Proton-C_MemoryAllocation.xlsx
> <
> http://qpid.2158936.n2.nabble.com/file/n7607980/Proton-C_MemoryAllocation.xlsx
> >
>
> The attached file is a list of the main startup allocation.  In the
> rightmost column, we have highlighted any allocation that is larger in size
> in red.  As you can see there are several 16KByte chunks of RAM allocated
> (object.c /pni_map_allocate) as well as multiple 960 byte chunks(over 24
> occurrences for approx. 24Kbytes)(pn_data.c).  Also the pn_buffer and
> pn_dispatcher functions each allocate 4Kbytes.  This exceeded 75Kbytes
> prior
> to our system being able to attempt communication setup.
>
> So this drives a few questions as to being able to minimize these
> allocations. The multiple 960 byte allocations and the 32Kbyte allocations
> seem to be the big hitters.
>
> Any hints from the mailing list on reducing these structures?
>
> Dave
>
>
>
> --
> View this message in context:
> http://qpid.2158936.n2.nabble.com/Minimizing-the-Proton-Engine-Messenger-RAM-Footprint-for-embedded-devices-tp7607409p7607980.html
> Sent from the Apache Qpid Proton mailing list archive at Nabble.com.
>


Re: codec strategies

2014-05-13 Thread Rafael Schloming
On Thu, May 8, 2014 at 9:42 AM, Alan Conway  wrote:

> I vote for DispatchingDecode: it's the simplest, the fastest and is
> based on a well established parsing pattern with a good track record for
> performance (SAX). Its not so hard to ignore data in a handler.
>
> Writing a handler state machine is a bit more complex than writing a
> sequence of calls to a stream API, but I think you could encapsulate
> most of a standard state machine that given a sequence of type codes
> will fill a sequence of variables. Not sure about the right way to do
> that in Java performance-wise.
>
> Hmm. That might be worth another performance test though - if you did
> have such tools for making it easy to build handlers, would those tools
> introduce a penalty that would make the StreamingDecode look more
> attractive...
>

The biggest difference from an API perspective has to do with data
conversions/coercion. Say you're writing a piece of Java code that wants to
operate on a Java integer or a Java float and doesn't care what the
underlying wire type is so long as it can be reasonably converted to that
type. In a stream style API you would simply write:

  int i = decoder.getInt();
  float f = decoder.getFloat();

The decoder implementation itself can then be smart enough to convert
whatever underlying wire type there might be into the appropriate Java
type. The SAX API on the other hand will have a distinct callback for byte
vs ubyte vs short vs ushort, etc, and it could be quite cumbersome to
convert all the different possibilities into the type you actually want to
operate on. Put another way, the stream style API is capable of
incorporating the desired output type of the user, whereas the SAX style
API is not.
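
To make the contrast concrete (hypothetical interfaces, not the mock up code
attached to the earlier mail):

  // stream style: the caller states the type it wants and the decoder coerces
  int i = decoder.getInt();    // works whether the wire type was byte, short, int...

  // SAX style: the handler gets a distinct callback per wire type and must
  // do any coercion itself
  public interface Handler
  {
      void onByte(byte b);
      void onUbyte(short b);
      void onShort(short s);
      void onInt(int i);
      // ... and so on for every wire type
  }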

It might be possible to provide some kind of coercing handler that would
help the SAX situation. As you say though it's probably worth checking that
something like that would be usable and not introduce its own penalties.

--Rafael


Re: delivery.setPayload

2014-05-13 Thread Rafael Schloming
I'm not sure this will work as an API. One problem I see with this is that
it is circumventing the engine's flow control logic. If you notice there is
logic inside send() to update counters on the session. Unless I've missed
something your patch doesn't seem to have equivalent logic. This might just
be an oversight, but I don't see how you could easily add the same logic
since you don't know how many bytes the payload is until much much later on
in the control flow of the engine.

Can you supply some more detail as to why it got 5% faster? If it was
merely avoiding the copy, then I can think of some ways we could avoid that
copy without changing the API quite so drastically, e.g. just overload
send() to take some sort of releasable buffer reference.

FWIW, I think that a good buffer abstraction that we could use everywhere
would help a lot. I suspect having distinct abstractions for payload
buffers vs encodable buffers vs decodable buffers is just going to result
in lots of unnecessary conversions.

--Rafael

On Tue, May 13, 2014 at 11:19 AM, Clebert Suconic wrote:

> I have been playing with the API, and there is one change that would make
> the API clearer IMO.
>
>
> Right now you have to send a delivery, and then call send(bytes) to add
> Bytes to the delivery what will make it copy data to the Delivery and self
> expand the buffer.
>
>
>
> I have played with a change that made it 5% faster than the most optimal
> way to expand the payload on the Delivery (using the same buffer over and
> over)
>
>
> And 15% on a brief calculation against creating the buffer every time...
> but there are cases where this could be a bit worse.
>
>
>
> Basically I have created an interface called Payload, and added a method
> setPayload on Delivery.
>
>
>
> I'm not sure yet how I would implement framing into multiple packages..
> but I think it could be done.. this is just a prototyped idea:
>
>
>
> https://github.com/clebertsuconic/qpid-proton/commit/02abe61fc54911955ddcce77b792a153c5476aef
>
>
>
> in case you want to fetch the buffer from my git, it's this branch:
> https://github.com/clebertsuconic/qpid-proton/tree/payload
>
>
>
> In any case I liked the idea of the setPayload better than
> sender.send(bytes) to set the payload of a message.
>
>
>
> Ideas?


Re: delivery.setPayload

2014-05-14 Thread Rafael Schloming
On Tue, May 13, 2014 at 4:40 PM, Clebert Suconic wrote:

>
>
> On May 13, 2014, at 1:46 PM, Rafael Schloming  wrote:
>
> > I'm not sure this will work as an API. One problem I see with this is
> that
> > it is circumventing the engine's flow control logic. If you notice there
> is
> > logic inside send() to update counters on the session. Unless I've missed
> > something your patch doesn't seem to have equivalent logic. This might
> just
> > be an oversight, but I don't see how you could easily add the same logic
> > since you don't know how many bytes the payload is until much much later
> on
> > in the control flow of the engine.
> >
>
> as I told you  this was just a prototyped idea... it's not in fact
> updating the window yet..
>
> If this idea is a good idea, we could pursue the idea here.
>

Providing the option to pass in something more abstract than a byte[] is a
good idea. I'm just observing that the Payload/setPayload interface as it
stands is changing two fundamental aspects of the send interface. The first
aspect is that the Sender implementation no longer has any up front idea
of how much data it is being offered. The second aspect is the stream
oriented nature of the Sender interface. The interface is designed to
operate in terms of chunks of message data rather than in terms of an
entire message. This is intentional because an AMQP message might be
arbitrarily large, and the interface needs to allow for the possibility of
an intermediary that can stream through message data without buffering the
whole message. I don't see how such streaming could easily be accomplished
with the Payload/setPayload interface.


>
> > Can you supply some more detail as to why it got 5% faster? If it was
> > merely avoiding the copy, then I can think of some ways we could avoid
> that
> > copy without changing the API quite so drastically, e.g. just overload
> > send() to take some sort of releasable buffer reference.
>
> The encoding is done directly to the FrameWriter::__outputBuffer.  I've
> made a framework where I'm testing the send and it made it somewhat faster
> than copying the encoding over 1 million messages.
>
> In this case it could be a bit more if you encoded a MessageImpl on a new
> buffer every time
>
> >
> > FWIW, I think that a good buffer abstraction that we could use everywhere
> > would help a lot. I suspect having distinct abstractions for payload
> > buffers vs encodable buffers vs decodable buffers is just going to result
> > in lots of unnecessary conversions.
>
> probably.. I was just trying to improve the idea of the payloads. I don't
> like the send API right now.. I think it would make more sense to set the
> payload on the delivery than send bytes through sender.
>

 This is really a question of the generality of the API. Operating in terms
of chunks of a message is always going to be lower level than operating in
terms of entire messages, however it is also more general. If we only
permitted the latter then we would be omitting important use cases.

I think we can treat these issues separately though. We should be able to
eliminate copying in the stream oriented API, while still providing a
convenience API for sending discrete messages.

--Rafael


Re: delivery.setPayload

2014-05-14 Thread Rafael Schloming
On Wed, May 14, 2014 at 1:05 PM, Clebert Suconic wrote:

> I was just playing with possibilities and trying to find ways to improve
> the API how things are done.
>
>
>
> >  The interface is designed to
> > operate in terms of chunks of message data rather than in terms of an
> > entire message.
>
>
>
> That's one thing that I'm lacking control through the API actually.. which
> I'm looking to improve.
>
>
> My application (hornetQ) has the ability to store messages on the disk and
> send in chunks to the server.
>
> Using the current API, I would have to parse the entire message to Proton,
> and have proton dealing with the chunks.
>

I'm not sure I follow this. The byte[] that you pass to send doesn't need
to be a complete message.


> I have tests for instance on sending 1 Gigabyte messages, where the body
> stays on the disk on the server, while the client streams bytes while
> receiving, and having flow control holding them.
>
> Proton has a big buffer internally, which actually is not exposing
> flow control in such a way that I would stop feeding Proton. As a result this
> use case would easily cause OMEs from Proton.
>

The Delivery.pending() API will tell you exactly how much data is being
buffered for a given delivery, and
Session.getOutgoingBytes()/getIncomingBytes() will tell you how much is
being buffered in aggregate by the Session. Between the two of these you
should be able to limit what Proton is buffering.
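
To make that concrete, here is a rough sketch of a sender throttling itself on
Session.getOutgoingBytes(); the limit, the chunk source, and pumpTransport()
are made up for illustration:

  final int LIMIT = 1024 * 1024;               // arbitrary cap on engine-side buffering

  Delivery delivery = sender.delivery("big-msg-1".getBytes());
  while (moreChunksOnDisk())                   // hypothetical source of message chunks
  {
      if (sender.getSession().getOutgoingBytes() > LIMIT)
      {
          pumpTransport();                     // hypothetical: do I/O so the engine drains
          continue;
      }
      byte[] chunk = nextChunk();              // hypothetical
      sender.send(chunk, 0, chunk.length);     // feed the engine one chunk at a time
  }
  sender.advance();                            // finished with this delivery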


>
> as I said I was just for now exploring with possibilities. I could send a
> ReadableBuffer (the new interface from my previous patch) and use a Pool on
> this buffer to encode my Message. But I stlil lack control on my large
> messages streaming and flow control.
>
>
> The way things are done now, if i wanted to preserve my large message
> functionality at the client level (outside of Proton's API), I would need
> to bypass proton and send Delivery directly without Proton intervention.


I'm still confused by this. Can you point me to the code or post some
samples of what you're having trouble with?

--Rafael

