QQ

2015-12-21 Thread aconway
On Mon, 2015-12-14 at 11:40 -0700, Philippe Le Rohellec wrote:
> Thanks Alan,
> 
> I tried that but a recent change to the SWIG bindings broke the
> reactor
> interface in ruby. I commented on
> https://issues.apache.org/jira/browse/PROTON-949 as you already noticed.
> Even after it's fixed there are other issues popping up with the
> reactor
> interface, something with a prototype mismatch of an ssl.rb method
> (https://gist.github.com/plerohellec/3a73b71b04aaa22845c3), I couldn't
> find an easy fix for that.
> I also switched to the 0.10.x branch with the same result so I gave
> up on
> the reactor.

Funny you should say that. I also recently gave up on the reactor in
ruby - the last straw was the GVL, which blocks every ruby thread in
the process while proton is in blocking select calls.

However messenger is also problematic, so I am working on a
ConnectionEngine, a kind of mini-reactor. The user API is almost
identical to reactor apps (event-based), but the setup is different.

Take a look at:

https://github.com/BitScoutOrg/docker-fluentd/tree/master/amqp_qpid/lib
/fluent/plugin

qpid_proton_extra.rb is all stuff I plan to bring into proton when I
get a chance to clean it up. The rest is a fluent plugin, which is not
relevant, but illustrates the use of ConnectionEngine.

The handlers are the same as the reactor handlers but the IO and
threading parts are native in Ruby which avoids the GVL problems and
fits more neatly into a Ruby application.

I haven't tried anything with SSL; if you find a problem there, please
raise a JIRA. I'm on the hook for Ruby now.

> I added the missing binding (Messenger#get_link) but I haven't been
> able to
> get the expected result from passing a filter yet. I will open a PR
> if I
> manage to get it to work. By the way, this is a modified recv.rb that
> sets
> the filter: https://gist.github.com/plerohellec/55f3fde1b303f04d259d
> I'm not sure the filter are actually used when when receiving a
> message,
> could you give it a quick review?

It looks ok but I hit the get link problem also. I think the code will
work though, as I tried a similar example using the reactor and both
wireshark and qpidd debug logs confirm that the filter string is passed
correctly.

I found some other bugs in the ruby binding while doing this so I will
fix it all up and put up the example ASAP. Please feel free to raise
JIRAs for any and all problems you find, I want to beat the ruby
binding into shape and your input helps tremendously.


Re: [VOTE] Release Qpid Proton 0.11.1

2015-12-17 Thread aconway
On Tue, 2015-12-15 at 19:32 +, Robbie Gemmell wrote:
> Hi all,
> 
> I have put up an RC for 0.11.1, please test it and vote accordingly.

+1

> 
> The release archive and sig/checksums can be grabbed from:
> https://dist.apache.org/repos/dist/dev/qpid/proton/0.11.1-rc1/
> 
> Maven artifacts for the Java bits can be found in a temporary staging
> repo at:
> https://repository.apache.org/content/repositories/orgapacheqpid-1057
> 
> Other than version updates, there are only two changes since 0.11.0:
> https://issues.apache.org/jira/browse/PROTON-1059
> https://issues.apache.org/jira/browse/PROTON-1077
> 
> It was created using commit 99ee72b897395b0abb72df001709095b498edbc5
> on the 0.11.x branch, and is tagged as 0.11.1-rc1.
> 
> Regards,
> Robbie
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
> 


Re: Extending the lease duration on messages received with Proton?

2015-12-07 Thread aconway
On Mon, 2015-12-07 at 10:29 +, Robbie Gemmell wrote:
> On 4 December 2015 at 19:56, Philippe Le Rohellec 
> wrote:
> > I'm using Proton to receive messages from an Azure Service Bus
> > queue.
> > The receiver keeps a lock on each message it receives for a
> > predefined
> > amount of time, which I think matches the "Lock Duration" set on
> > the queue
> > on the Azure side. The max value in Azure is 5 minutes.
> > 
> > What if processing the message takes more than 5 minutes? Is there
> > a way to
> > renew the lock using Proton or are the 5 minutes non negotiable? I
> > didn't
> > find anything in the API.
> > 
> > 
> > 
> > --
> > View this message in context: http://qpid.2158936.n2.nabble.com/Ext
> > ending-the-lease-duration-on-messages-received-with-Proton
> > -tp7634944.html
> > Sent from the Apache Qpid Proton mailing list archive at
> > Nabble.com.
> 
> Hi Philippe,
> 
> I believe the locking you mention is handled server-side, I'm not
> sure
> that the Proton reciever is ever aware of it. I think you'll need to
> ask the Azure folks about whether there is a way to adjust/renew the
> lock.
> 

May be a conflict of terminology here: AMQP doesn't define a "lock" but
it does define "settlement" of a message. Messages can be sent "pre-
settled", which means the sender just fires-and-forgets, or "unsettled",
which means the sender is expecting some acknowledgement. I don't know
Azure but on most brokers (or any system with reliable delivery) that
does amount to a "lock" - the broker sends a message unsettled and will
not send it to any other receiver until it is acknowledged.

AMQP lets the receiver respond to a message with "accept" (I took it),
"reject" (the message is invalid) or "release" (I didn't take it but
somebody else can try). Proton lets you control what response the
receiver sends back and when.

AMQP also lets you define a message "timeout" which says "if the
message is not processed by this time, forget it." So some combination
of acks and timeouts will probably let you do what you want.

(Obviously using timeouts in a network creates the possibility of
occasional duplicate processing, but if you have pre-emptive server-side
lock breaking in Azure you probably know that already ;)
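
For what it's worth, here is a rough sketch of controlling the outcome
with the proton C API (the helper name and the surrounding handler
wiring are mine, not proton's):

    #include <proton/delivery.h>
    #include <proton/disposition.h>

    // Decide the outcome of a received delivery and settle it. PN_ACCEPTED,
    // PN_REJECTED and PN_RELEASED are the standard AMQP outcomes; settling
    // tells the peer we are finished with the message.
    void finish_delivery(pn_delivery_t *d, bool processed_ok, bool want_retry) {
        if (processed_ok)
            pn_delivery_update(d, PN_ACCEPTED);   // "I took it"
        else if (want_retry)
            pn_delivery_update(d, PN_RELEASED);   // "somebody else can try"
        else
            pn_delivery_update(d, PN_REJECTED);   // "the message is invalid"
        pn_delivery_settle(d);
    }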

Cheers,
Alan.



Re: Acknowledging received messages with Proton and an Azure Service Bus queue

2015-12-07 Thread aconway
On Fri, 2015-12-04 at 12:20 -0700, plerohel wrote:
> Thanks James, that article is extremely helpful.
> After I set the messenger incoming_window to a positive value,
> messages do
> get acknowledged as expected.

> Rejecting the message doesn't work though. It might be because I'm
> not
> setting the settle mode as described in the article. The ruby API
> doesn't
> seem to allow it. However if the receiver just skips the "accept"
> step then
> the message can be processed again by another receiver.

This may be related to

 https://issues.apache.org/jira/browse/PROTON-1067 

If it is can you add your add your comments/reproducers to it?

If it doesn't seem related can you open another JIRA for the Ruby
binding? I'm also working on the Ruby binding at the moment, so it would
be good to look at these together even if they're not the same issue,
since they are at least in related areas. Please attach code reproducers
if you have any.

Cheers,
Alan.




Re: Function pn_message_encode() returns -2

2015-12-07 Thread aconway
traverse is intended to walk the data structure from the beginning.
Node "1" is the first child node (node 0 is the "root" of the tree).
Nodes are referred to by number rather than pointer because the node
storage can be re-allocated if the pn_data object grows, which changes
the node addresses. So while your fix may solve your immediate problem,
it is not correct in general.

Can you post a small C program that illustrates the error? That would
help to figure out the problem.

On Sun, 2015-12-06 at 23:36 -0700, sanddune008 wrote:
> I see that following function pn_message_encode() returns -2 with
> send.c
> example.
> 
> My findings, Kindly review the below code:
> 
> Changes made to fix is highlighted in bold, What is the significance
> of
> passing '1' first node? 
> 
> int *pni_data_traverse*(pn_data_t *data,
>   int (*enter)(void *ctx, pn_data_t *data,
> pni_node_t
> *node),
>   int (*exit)(void *ctx, pn_data_t *data,
> pni_node_t
> *node),
>   void *ctx)
> {
>   *pni_node_t *node = data->size ? pn_data_node(data, data->current) : NULL;*
>   //pni_node_t *node = data->size ? pn_data_node(data,* 1*) : NULL;
> 
> 
> Thanks in Advance
> 
> 
> 
> 
> 
> --
> View this message in context: http://qpid.2158936.n2.nabble.com/Funct
> ion-pn-message-encode-returns-2-tp7634948.html
> Sent from the Apache Qpid Proton mailing list archive at Nabble.com.


Re: Question on the information contained in a flow event

2015-11-09 Thread aconway
On Mon, 2015-11-09 at 13:49 +, Marinov, Vladimir wrote:
> Hello all,
> 
> We are implementing AMQP support in our messaging server and for that
> purpose we use Proton-j 0.9.1. I'm currently trying to implement
> transactional acquisition and I'm stuck with the following:
> When a client tries to acquire messages transactionally from the
> server, it send a flow frame with a 'txn-id' entry in its properties
> so I need to access this information. The thing is that in the
> onLinkFlow handler method I get an object of type Event which doesn't
> seem to contain the txn-id (it actually seems pretty blank) and I
> can't find it anywhere else. On the other hand, I'm pretty certain
> this information is sent to us because we trace incoming frames in
> our trace handler and the flow frames all contain txn-id. Am I
> missing something and do you have any suggestions?
> 
> Regards,
> Vladimir

The information is there, I've had to scrape it out before. I believe
it is part of the delivery disposition state:

pn_event_delivery_t(),
pn_delivery_remote_state(),
pn_disposition_something_my_memory_isnt_what_it_used_to_be()

Give me a shout if you can't track it down, I wrote code in qpidd at
some point that did something with the TXN id, I can find it if need be.

Cheers,
Alan.


Re: pn_error_code() returning -2 error

2015-11-09 Thread aconway
On Mon, 2015-11-09 at 17:12 +0530, Sanny wrote:
> Hi,
> 
> Using Proton C library.
> 
> I am following the demo example "send.c" shared in the archive.
> Currently
> using the mqx OS.
> 
> 
> 1. I have following function in *pn_error_code*(messenger->error);
> always
> returning *-2*. while calling the following function*
> pn_messenger_put*(messenger,
> message);
> 2. In the following demo "send.c" when will the socket be open. As I
> see
> following function "*pn_messenger_start*(messenger);" doesnt have any
> significance , since
>  if (*messenger->flags | PN_FLAGS_CHECK_ROUTES*) *messenger->flags is
> never **PN_FLAGS_CHECK_ROUTES.*
> 
> 3. Is there any basic code flow document that i can refer to start of
> the
> debug would be very helpful.

[oops, just discovered a new keyboard shortcut for "send" while typing
this mail, will attempt to complete it now]

You may want to consider the event-driven approach which is becoming
more popular than messenger. Messenger has some limitations
(particularly around error handling/reporting) and right now it appears
like more development is going into the more flexible event-driven
approach. The C doc for the event approach is not good but there are 2
C examples `proton/tests/tools/apps/c/reactor-*.c`. The C++ and python
bindings have pretty good tutorials and examples and are very close
analogues to the C API so you can also work backwards from them.

On your original question: 

pn_error_text() may give you more of a clue about your error, -2 is
PN_ERR which only really tells you that proton doesn't have a specific
error code for whatever is going wrong.

The line "if (*messenger->flags | PN_FLAGS_CHECK_ROUTES*)" means that
the checks are *always* run if *any* of messenger->flags are set, so
that code might be part of your problem. 

(I believe the line is a proton bug, 
https://issues.apache.org/jira/browse/PROTON-1043)
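
A minimal standalone illustration of why that test misfires (the flag
values here are made up for the example, they are not proton's):

    #include <cassert>

    enum { CHECK_ROUTES = 1 << 0, SOME_OTHER_FLAG = 1 << 1 };

    int main() {
        int flags = SOME_OTHER_FLAG;      // CHECK_ROUTES is *not* set
        assert(flags | CHECK_ROUTES);     // buggy test: bitwise OR is non-zero anyway
        assert(!(flags & CHECK_ROUTES));  // intended test: bitwise AND isolates the flag
        return 0;
    }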

> 
> 
> Thanks in Advance
> 
> On Thu, Oct 29, 2015 at 7:21 PM, Sanny  wrote:
> 
> > Hi,
> > 
> > Using Proton C library.
> > 
> > I am following the demo example "send.c" shared in the archive.
> > Currently
> > using the mqx OS.
> > 
> > 
> > 1. I have following function in *pn_error_code*(messenger->error);
> > always
> > returning *-2*. while calling the following function*
> > pn_messenger_put*(messenger,
> > message);
> > 2. In the following demo "send.c" when will the socket be open. As
> > I see
> > following function "*pn_messenger_start*(messenger);" doesnt have
> > any
> > significance , since
> >  if (*messenger->flags | PN_FLAGS_CHECK_ROUTES*) *messenger->flags
> > is
> > never **PN_FLAGS_CHECK_ROUTES.*
> > 
> > 3. Is there any basic code flow document that i can refer to start
> > of the
> > debug would be very helpful.
> > 
> > 
> > Thanks in Advance


Re: Proton 0.11.0 release update - The release candidate

2015-11-09 Thread aconway
On Tue, 2015-11-03 at 22:15 -0500, Justin Ross wrote:
> Hi, all.  The RC is now available here:
> 
>   https://dist.apache.org/repos/dist/dev/qpid/proton/0.11.0-rc/
> 
> Maven staging repo:
> 
>   https://repository.apache.org/content/repositories/orgapacheqpid-10
> 48/
> 
> I will be away from my keyboard for the remainder of this week.  On
> Monday,
> the 9th, I will start the release vote if no blockers have been
> discovered
> in the interim.
> 
> Thank you very much to everyone who tested the alpha and beta.  If
> you've
> not yet taken a look, please test the RC and report your results.


The Go and C++ examples build and run as expected, the Go/C++ doc
appears to generate correctly. Apart from the ccache-swig problem I
noted earlier, cmake and ctest work fine for me on fedora 22. 

This is on my development box with everything installed/enabled, I
haven't tried any tests with minimized dependencies.



Re: pn_error_code() returning -2 error

2015-11-09 Thread aconway
On Mon, 2015-11-09 at 17:12 +0530, Sanny wrote:
> Hi,
> 
> Using Proton C library.
> 
> I am following the demo example "send.c" shared in the archive.
> Currently
> using the mqx OS.
> 
> 
> 1. I have following function in *pn_error_code*(messenger->error);
> always
> returning *-2*. while calling the following function*
> pn_messenger_put*(messenger,
> message);
> 2. In the following demo "send.c" when will the socket be open. As I
> see
> following function "*pn_messenger_start*(messenger);" doesnt have any
> significance , since
>  if (*messenger->flags | PN_FLAGS_CHECK_ROUTES*) *messenger->flags is
> never **PN_FLAGS_CHECK_ROUTES.*
> 
> 3. Is there any basic code flow document that i can refer to start of
> the
> debug would be very helpful.

You may want to consider the event-driven approach which is becoming
more popular than messenger. Messenger has some limitations
(particularly around error handling/reporting) and right now it appears
like more development is going into the more flexible event-driven
approach. The C doc for the event approach is limited



> 
> 
> Thanks in Advance
> 
> On Thu, Oct 29, 2015 at 7:21 PM, Sanny  wrote:
> 
> > Hi,
> > 
> > Using Proton C library.
> > 
> > I am following the demo example "send.c" shared in the archive.
> > Currently
> > using the mqx OS.
> > 
> > 
> > 1. I have following function in *pn_error_code*(messenger->error);
> > always
> > returning *-2*. while calling the following function*
> > pn_messenger_put*(messenger,
> > message);
> > 2. In the following demo "send.c" when will the socket be open. As
> > I see
> > following function "*pn_messenger_start*(messenger);" doesnt have
> > any
> > significance , since
> >  if (*messenger->flags | PN_FLAGS_CHECK_ROUTES*) *messenger->flags
> > is
> > never **PN_FLAGS_CHECK_ROUTES.*
> > 
> > 3. Is there any basic code flow document that i can refer to start
> > of the
> > debug would be very helpful.
> > 
> > 
> > Thanks in Advance
> > 


Re: Question on the information contained in a flow event

2015-11-09 Thread aconway
On Mon, 2015-11-09 at 17:17 +, Robbie Gemmell wrote:
> On 9 November 2015 at 17:09, aconway <acon...@redhat.com> wrote:
> > On Mon, 2015-11-09 at 13:49 +, Marinov, Vladimir wrote:
> > > Hello all,
> > > 
> > > We are implementing AMQP support in our messaging server and for
> > > that
> > > purpose we use Proton-j 0.9.1. I'm currently trying to implement
> > > transactional acquisition and I'm stuck with the following:
> > > When a client tries to acquire messages transactionally from the
> > > server, it send a flow frame with a 'txn-id' entry in its
> > > properties
> > > so I need to access this information. The thing is that in the
> > > onLinkFlow handler method I get an object of type Event which
> > > doesn't
> > > seem to contain the txn-id (it actually seems pretty blank) and I
> > > can't find it anywhere else. On the other hand, I'm pretty
> > > certain
> > > this information is sent to us because we trace incoming frames
> > > in
> > > our trace handler and the flow frames all contain txn-id. Am I
> > > missing something and do you have any suggestions?
> > > 
> > > Regards,
> > > Vladimir
> > 
> > The information is there, I've had to scrape it out before. I
> > believe
> > it is is part of the delivery disposition state:
> > 
> > pn_event_delivery_t(),
> > pn_delivery_remote_state(),
> > pn_disposition_something_my_memory_isnt_what_it_used_to_be()
> > 
> > Give me a shout if you can't track it down, I wrote code in qpidd
> > at
> > some point that did something with the TXN id, I can find it need
> > be.
> > 
> > Cheers,
> > Alan.
> 
> I believe you are thinking of transactional retirement, whereas
> Vladimir is talking about transactional acquisition, which uses the
> flow frame for some of the process rather than disposition frames.
> 

You are probably right, my memories are vague. Sorry if I sent anyone
barking up the wrong tree.



Re: Several queues on direct proton receiver?

2015-11-05 Thread aconway
On Wed, 2015-11-04 at 23:47 +0300, Michael Ivanov wrote:
> Hallo,
> 
> I have a question about direct message receive in proton.
> 
> I tried to receive messages directly using amqp://~1.2.3.4 url.
> It works but it seems that queue names in this case are ignored.
> I.e. when I subscribe to amqp://~1.2.3.4/q1 I also get the messages
> for amqp://~1.2.3.4/q2, q3 etc.
> 
> Is it possible either receive only message for queue specified
> in url or at least to get the target queue name from the received
> message?
> 
> Best regards,

The direct_recv example is rather simple and just accepts any message
it gets on the connection without looking at the link or message
address.

Normally the link address is used to identify a queue or topic in a
broker. You can see this address when the link is established, or you
can check the link a received message arrived on. See the broker.hpp
example's on_link_opening() and on_message().
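
Roughly, with the C API it looks like this (a sketch; the helper name
is mine and error handling is omitted):

    #include <proton/event.h>
    #include <proton/link.h>
    #include <proton/terminus.h>
    #include <string>

    // Recover the queue/topic name from the link a transfer arrived on,
    // instead of accepting everything the way direct_recv does.
    std::string queue_for_event(pn_event_t *e) {
        pn_link_t *link = pn_event_link(e);                  // link carrying the message
        pn_terminus_t *target = pn_link_remote_target(link); // address the sender targeted
        const char *addr = pn_terminus_get_address(target);
        return addr ? addr : "";
    }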

Cheers,
Alan.


Configured proton repo for reviewboard

2015-10-29 Thread aconway
Added .reviewboardrc configuration file so a simple `rbt post` on a
branch will post a review to the Apache reviewboard. See `rbt --help`
for more details.

Tips:

rbt post  # Create a new review of current branch with diff against master
rbt post --parent branchx  # Review with diff against branchx instead of master
rbt post -r12345 # Post a new patch to existing review 12345


The Go electron API: a tale of two brokers.

2015-10-22 Thread aconway
The Go binding for proton provides two alternate APIs: `proton` is an
exact analogue of the event-driven proton C API, and `electron` is a
more Go-oriented, procedural API. The differences were motivated by
the concurrency features of the Go language but there may be lessons to
learn for other languages. Take a look at 

https://github.com/alanconway/qpid-proton/tree/master/examples/go


## A tale of two brokers

The `proton` and `electron` packages provide two alternate APIs for
AMQP applications.  See [the proton Go README](https://github.com/apach
e/qpid-proton/blob/master/proton
-c/bindings/go/src/qpid.apache.org/README.md) for a discussion of why
there are two APIs.

The examples `proton/broker.go` and `electron/broker.go` both implement
the same simple broker-like functionality using each of the two APIs.
They both handle multiple connections concurrently and store messages
on bounded queues implemented by Go channels.

However the `electron/broker` is less than half as long as the
`proton/broker`, illustrating why it is better suited for most Go
applications.

`proton/broker` must explicitly handle proton events, which are
processed in a single goroutine per connection since proton is not
concurrent safe. Each connection uses channels to exchange messages
between the event-handling goroutine and the shared queues that are
accessible to all connections. Sending is particularly tricky since we
must monitor both the queue for available messages and the sending
link for available credit.


`electron/broker` takes advantage of the `electron` package, which
hides all the event handling and passing of messages between goroutines
behind straightforward interfaces for sending and receiving
messages. The electron broker can implement links as simple goroutines
that loop, popping messages from a queue and sending them, or receiving
messages and pushing them to a queue.




The un-reactor

2015-10-22 Thread aconway
The proton reactor provides a complete solution for integrating foreign
IO into a single threaded proton event loop. This is useful in
situations where proton is being used in isolation, there is no other
IO handling framework available and everything is single threaded.

However often that is not going to be the case.  There are many
existing C and C++ libraries for managing polling of file descriptors
and dispatching of work, and most server applications that might want
to embed proton already have their own frameworks.

So I'm thinking about how we can make integrating proton into an
existing (possibly multi-threaded) framework easier. I think all it
requires is a few simple functions to pull data out of a file
descriptor as events and push data back into a file descriptor, and
make this easier than directly using the collector and transport.  I've
done it in Go already but I think it could be captured in C and C++ as
well.

Anyone already done something like this?

Cheers,
Alan.


Re: PN_REACTOR_QUIESCED

2015-10-21 Thread aconway
On Wed, 2015-10-14 at 10:31 -0400, Rafael Schloming wrote:
> It wasn't actually an accidental commit. If I recall correctly I
> ended up
> using it more like a 0xDEADBEEF value. It makes it easy to
> distinguish
> between the failure mode of an actual hang (e.g. infinite loop or
> blocking
> call inside a handler) vs reaching a state where there are simply no
> more
> events to process. I guess you can think of it like a heartbeat in a
> way.
> 

It seems like an odd default. Can we make it 0 in the code and set it
non-0 in tests if that is needed? Randomly waking up a user app that
hasn't asked for heartbeats every sort-of-3-seconds is a surprising
behavior.

> --Rafael
> 
> On Tue, Oct 13, 2015 at 10:56 AM, Michael Goulish <
> mgoul...@redhat.com>
> wrote:
> 
> > 
> > But it's obvious how this constant was chosen.
> > 
> > With circular reasoning.
> > 
> > 
> > 
> > 
> > 
> > - Original Message -
> > > On Mon, 2015-10-12 at 16:05 -0400, aconway wrote:
> > > > ...
> > > > +1, that looks like the right fix. 3141 is an odd choice of
> > > > default,
> > > > even for a mathematician.
> > > > 
> > > 
> > > At this point, I'm desperately trying to find an appropriate pi
> > > joke :
> > > -)
> > > 
> > > Andrew
> > > 
> > > 
> > 


Re: ractor - sending message

2015-10-14 Thread aconway
On Tue, 2015-10-13 at 14:28 +0200, Tomáš Šoltys wrote:
> All right, I think I am finally cracking it.
> 
> Please correct me if I am wrong.
> 
> PN_LINK_FLOW is not meant for sending but for "Updates the flow state
> for
> the specified link." as stated in specification (2.7.4 Flow - OASIS
> Advanced Message Queuing Protocol (AMQP) Version 1.0).
> 
> I can send message any time I want if there is enough credit on a
> link.

You are right. In AMQP, messages flow in one direction, credit flows in
the other. In proton, receiving a FLOW event means you have received
credit but you don't have to use it right away, you can accumulate it
and use it when you want. In an application where you expect continuous
flow of messages you probably want to react to FLOW events, because it
is normal to be blocked with messages to send but no credit. But you can
also have the opposite situation, having credit but no messages to
send. In that case, when you do have a message you can send it
immediately.
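
In C terms it comes down to something like this (a rough sketch called
from your event handling; the queue and function names are mine):

    #include <proton/delivery.h>
    #include <proton/link.h>
    #include <queue>
    #include <string>

    // Drain the application's pending messages while the link has credit.
    // Call this both when a message is queued and on PN_LINK_FLOW. A real
    // application would use a unique delivery tag per message.
    void send_if_possible(pn_link_t *sender, std::queue<std::string> &pending) {
        while (pn_link_credit(sender) > 0 && !pending.empty()) {
            const std::string &encoded = pending.front();  // pre-encoded message bytes
            pn_delivery(sender, pn_dtag("1", 1));          // start a new delivery
            pn_link_send(sender, encoded.data(), encoded.size());
            pn_link_advance(sender);                       // finish the delivery
            pending.pop();
        }
    }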

> 
> Regards,
> Tomas
> 
> 2015-10-12 13:04 GMT+02:00 Tomáš Šoltys :
> 
> > Hi,
> > 
> > in reactor example, message is sent on PN_LINK_FLOW event.
> > 
> > Let's say that I have a client that needs to send messages on user
> > input.
> > Is there a way how to "force" reactor API emit PN_LINK_FLOW event?
> > 
> > Thanks,
> > Tomas
> > 
> 
> 
> 


Re: PN_REACTOR_QUIESCED

2015-10-12 Thread aconway
On Sat, 2015-10-10 at 10:57 +0200, Bozo Dragojevic wrote:
> Hi Alan, Rafael,
> 
> On 9. 10. 15 21.25, aconway wrote:
> > I'm fiddling with the C++ example broker, and when I install a
> > debug
> > handler, I see that when the broker is doing absolutely nothing
> > there
> > is a PN_REACTOR_QUIESCED event about every 3 seconds. Does anybody
> > know
> > what this is about? Why is the reactor waking up just to tell us
> > that
> > it is asleep?
> > 
> > 
> 
> On first sight seems like a debug thing accidentally committed.
> 
> I think something like this is in order:
> 
> $ git diff
> diff --git a/proton-c/src/reactor/reactor.c b/proton
> -c/src/reactor/reactor.c
> index 6b328bc..7542d4c 100644
> --- a/proton-c/src/reactor/reactor.c
> +++ b/proton-c/src/reactor/reactor.c
> @@ -484,7 +484,6 @@ void pn_reactor_stop(pn_reactor_t *reactor) {
> 
>  void pn_reactor_run(pn_reactor_t *reactor) {
>assert(reactor);
> -  pn_reactor_set_timeout(reactor, 3141);
>pn_reactor_start(reactor);
>while (pn_reactor_process(reactor)) {}
>pn_reactor_stop(reactor);
> 
> workaround is to pn_set_reactor_timeout(r, 0) in PN_REACTOR_INIT in
> your
> broker.

+1, that looks like the right fix. 3141 is an odd choice of default,
even for a mathematician.



PN_REACTOR_QUIESCED

2015-10-09 Thread aconway
I'm fiddling with the C++ example broker, and when I install a debug
handler, I see that when the broker is doing absolutely nothing there
is a PN_REACTOR_QUIESCED event about every 3 seconds. Does anybody know
what this is about? Why is the reactor waking up just to tell us that
it is asleep?

Cheers,
Alan.


Default error behavior in event-driven APIs (was Re: Error detection/handling)

2015-10-05 Thread aconway
On Wed, 2015-09-16 at 14:04 +0200, Tomáš Šoltys wrote:
> Hi,
> 
> I have a client that sends a messages to broker. It can happen that
> message
> contains incorrect subject which will trigger ACL deny on the broker.
> But on the client side everything seems to be OK.
> 
> How do I detect such errors?
> 
> Regards,
> Tomas

A different but related question: what should be the default error
handling behavior in an event-driven API?

The "silently ignore" option is just bad IMO. Telling people after
their application has malfunctioned, but continued to run with bad data,
"hey, didn't you have an on_*_error handler?" is not friendly.

Composable event-driven APIs pose a challenge here. It's not really ok
to require at compile time that event handlers *must* have on_*_error
handlers, since it would be legit design to have one handler that
deliberately doesn't handle errors but is intended to be used with
another that does.

At runtime, when an error occurs it is possible to throw, raise, panic
or abort if no user handler takes care of it. I would argue that this
is better than silently ignoring it, on the "fail fast" principle. The
user *should* have error handlers, if not their program is incorrect
and it is better to crash than close your eyes and carry on down the
wrong path.

It would be much better to analyze the user's handlers and detect the
"no error handler" problem at the start of the event loop, but in
general that is tricky.

I would advocate that all bindings should throw/raise/panic/abort on an
unhandled error as a last line of defence, and we should add
better/earlier safeguards if we can come up with them.
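
Something like this, sketched in C++ (the class and hook names are
illustrative, not the proton API):

    #include <stdexcept>
    #include <string>

    // Sketch of the "fail fast" default: specific error hooks fall through to
    // a last-resort hook that throws instead of silently ignoring the error.
    class handler_sketch {
      public:
        virtual ~handler_sketch() {}
        virtual void on_connection_error(const std::string &msg) { on_unhandled_error(msg); }
        virtual void on_link_error(const std::string &msg) { on_unhandled_error(msg); }
        // Last line of defence: crash loudly rather than carry on with bad state.
        virtual void on_unhandled_error(const std::string &msg) {
            throw std::runtime_error("unhandled error: " + msg);
        }
    };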

Cheers,
Alan.


Re: Proton-996: Surfacing SASL mechanisms into Messenger API

2015-10-05 Thread aconway
On Mon, 2015-09-21 at 11:35 -0400, Andrew Stitcher wrote:
> I worked on PROTON-996[1] last week, and added a new C API which
> surfaced the allowed SASL mechanism into the Proton Messenger API.
> 
> For simplicity I added a new API pn_messenger_sasl_allowed_mechs()
> which just mirrored the pn_sasl_allowed_mechs() API.
> 
> However When I went to create a new API for the Python binding (the
> higher level OO binding)It wasn't at all obvious where it fits.
> 
> This is largely because all the transport details were deliberately
> left out of the Messenger API because it is intended to be message
> focused. In practice all the previous transport oriented details were
> specified in the destination address URL.
> 
> So I'm now questioning my whole approach. It could be that the API is
> just, as a whole, inadequate to it's task as it can't cope with all
> the
> complexity necessary. And I think that some would argue that.
> 
> I definitely think that continually adding extra little warts here
> and
> there will make this API unusable in the long term. But even in the
> short term I can't really see how to map this transport detail into
> the
> world of the Python Messenger object.
> 
> Does anyone have any thoughts or advice (ranting and pontificating
> allowed!).
> 


Ohh ranting allowed!

I actually like messenger's idea of a message-oriented API, but I think
it is a mistake to assume you can simply hard code a trivial map of
address-to-connection, or put all the connection details in the URL.
The former is too limited, the latter is no different in practice from
a traditional connection-oriented API, but with extra layers of
abstraction in your way. 

If we really want to push a "connection agnostic" programming model,
then we need to add a separate layer where the application designer can
specify how addresses map to actual connections, including security
parameters etc.

However I'm not sure that this is the way to go. Qpid dispatch provides
that kind of flexibility *outside* the client/server process where it
is much more, um, flexible. In many/most applications (especially if
you have dispatch router available) your endpoints tend to have just
one or very few connections, so abstracting away the connection has
limited value in terms of simplifying the developer's problem.



Create new proton components: go-binding and cpp-binding

2015-09-30 Thread aconway
I'd like to create these two components; who can edit the project? I
can be owner for go-binding, cjansen should probably be owner for
cpp-binding.

Cheers
Alan.


Go binding for proton

2015-09-29 Thread aconway
I've pushed the Go binding to master, read all about it at 

https://github.com/apache/qpid-proton/blob/master/proton-c/bindings/go/
README.md

The documentation needs a lot of work but you can check the examples
and start playing with the code now.

Please ignore the packages go/amqp, go/messaging and go/event listed at
the end of the reference doc page. They are old and out of date and I
have not figured out how to get them off the godoc page yet.

Feedback most extremely welcome!

Cheers,
Alan.



Re: Bug in proton interop suite??

2015-09-09 Thread aconway
On Wed, 2015-09-09 at 08:06 -0400, Chuck Rolke wrote:
> https://issues.apache.org/jira/browse/PROTON-308
> 
> More documentation required...

That's not it. The python code that generates the message does indeed
use "hello", so I would expect a vbin containing the bytes "hello", but
what I see is a vbin containing *the AMQP string encoding* of the
string "hello" - i.e. a 7 byte binary sequence with the typecode for
AMQP string + the length 5 + the bytes "hello".
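
For reference, the two encodings the file seems to be mixing up, using
proton's pn_data API (a sketch):

    #include <proton/codec.h>

    void encode_examples() {
        pn_data_t *d = pn_data(0);
        // AMQP binary: vbin8, encodes as 0xa0 0x05 'h' 'e' 'l' 'l' 'o'
        pn_data_put_binary(d, pn_bytes(5, "hello"));
        pn_data_clear(d);
        // AMQP string: str8-utf8, encodes as 0xa1 0x05 'h' 'e' 'l' 'l' 'o'
        pn_data_put_string(d, pn_bytes(5, "hello"));
        pn_data_free(d);
    }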

> 
> ----- Original Message -
> > From: "aconway" <acon...@redhat.com>
> > To: "proton" <proton@qpid.apache.org>
> > Sent: Tuesday, September 8, 2015 5:36:39 PM
> > Subject: Bug in proton interop suite??
> > 
> > I'm doing some interop work on the go binding, and I see something
> > strange in the 'message.amqp' file in tests/interop. The message
> > body
> > is encoded as:
> > 
> > 0x77, 0xa0, 0x7, 0xa1, 0x5, 0x68, 0x65, 0x6c, 0x6c, 0x6f
> > 
> > 0x77   AMQP value section
> > 0xa0   Binary
> > 0x07   7 bytes
> > 0xa1   String
> > 0x05   5 bytes
> > 0x68 0x65 0x6c 0x6c 0x6f   "hello"
> > 
> > In other words there's an AMQP-encoded string *inside* an AMQP
> > encoded
> > binary. Looking at the python code that generated this message I
> > would
> > expect it to be an AMQP 5 byte binary value "hello". I think the
> > intent
> > was for it to be a string, but in python plain "hello" is binary
> > you
> > need to say u"hello" to get a string. However I can't see any
> > reason
> > why there would be a string *inside* a binary. Anyone have a clue
> > what's going on here?
> > 
> > Cheers,
> > Alan.
> > 
> > 


Re: Linker errors with C++ examples

2015-09-09 Thread aconway
On Wed, 2015-09-09 at 15:07 -0400, Chuck Rolke wrote:
> I get the same error with VS2012. Taking a look now...
> 
> - Original Message -
> > From: "Clemens Vasters" 
> > To: proton@qpid.apache.org
> > Sent: Wednesday, September 9, 2015 2:39:03 PM
> > Subject: Linker errors with C++ examples
> > 
> > Trying to build current qpid-proton master on Windows with Visual
> > Studio 2015
> > (MSVC 19) and I'm getting linker errors for std::auto_ptr for all
> > the
> > examples in examples/cpp
> > 
> > The errors all look largely the same and are all about auto_ptr, so
> > I just
> > give one:
> > 
> > sync_client.obj : error LNK2019: unresolved external symbol
> > "__declspec(dllimport) public: static class std::auto_ptr > proton::message> __cdecl proton::message::create(void)"
> > (__imp_?create@message@proton@@SA?AV?$auto_ptr@Vmessage@proton@@@st
> > d@@XZ)
> > referenced in function "public: __thiscall
> > proton::message_value::message_value(void)"
> > (??0message_value@proton@@QAE@XZ)
> > 
> > My C++ skills are pretty rusty, so I'm not sure how to unblock
> > this. Just me?
> > 

My bad. I am trying to cater to both C++11 and C++03, but in the
process I have created a monster. There is a mismatch in compiler flags
passed to the examples and the library, resulting in an inconsistent
view of what the return value of some functions should be.

Will do something more intelligent ASAP.



Bug in proton interop suite??

2015-09-08 Thread aconway
I'm doing some interop work on the go binding, and I see something
strange in the 'message.amqp' file in tests/interop. The message body
is encoded as: 

0x77, 0xa0, 0x7, 0xa1, 0x5, 0x68, 0x65, 0x6c, 0x6c, 0x6f

0x77   AMQP value section
0xa0   Binary
0x07   7 bytes
0xa1   String
0x05   5 bytes
0x68 0x65 0x6c 0x6c 0x6f   "hello"

In other words there's an AMQP-encoded string *inside* an AMQP encoded
binary. Looking at the python code that generated this message I would
expect it to be an AMQP 5 byte binary value "hello". I think the intent
was for it to be a string, but in python plain "hello" is binary; you
need to say u"hello" to get a string. However I can't see any reason
why there would be a string *inside* a binary. Anyone have a clue
what's going on here?

Cheers,
Alan.



C++ binding update - moving to master this week.

2015-09-01 Thread aconway
I have updated the C++ proton binding, for details see:

http://people.apache.org/~aconway/proton/c-and-cpp.html
http://people.apache.org/~aconway/proton

The highlights of the change:

- 0 overhead C++ facade classes, facade pointers point directly at C structs.
- proton::counted_ptr for automated refcounting of proton objects in any
  C++ version.
- std:: and boost:: smart pointers supported as alternative to
  proton::counted_ptr.
- APIs take/return foo& for facade foo, facade can convert to smart pointer.

This is simpler and more obvious than the proposal floated on the proton
list but has essentially the same properties.



Re: A case in favor of separate repos for language bindings

2015-08-24 Thread aconway
My 2c.

Using python to drive the C and Java tests was a good idea at the time
(because we only had C, Java and Python, and python is the most
productive language to write tests in) and has served us well, but we
have outgrown it. 

There are two things we need:

- good unit tests for each binding/component that can be built/run
easily by developers in that language without excessive dependencies.

- good interop testing between *all* the components, not just a
special relationship between C, Java and python.

There is already work underway on an AMQP-focused (rather than language
focused) qpid interop test suite to cover all Qpid components.

So I would suggest reviewing the existing python tests to identify tests
that are *really* unit tests, and tests that are really about AMQP. The
latter would be adopted into the Qpid interop test framework to benefit
all the languages/components we care about. I suspect a lot of them are
the latter.

I see no issue with using python as part of the driver for C tests
since C is such a painful language to work with, but that shouldn't
require the entire python binding.

That does mean that you need to check out, build and run tests in 2
repos for any given task: the component repo and the tests repo. On the
up side you *don't* have to check out and build every binding to work
on the core: just the core and the tests (which depend on the
bindings, which can be installed). That actually helps in the sense that
if you make core changes that break released bindings you know about it
before you ship.

Cheers,
Alan.

On Wed, 2015-08-19 at 19:05 +0100, Robbie Gemmell wrote:
 On 19 August 2015 at 13:05, Flavio Percoco fla...@redhat.com wrote:
  On 19/08/15 12:34 +0100, Robbie Gemmell wrote:
   
   I can see certain benefits to such a separation, mainly for folks
   interested only in the bindings, but if I'm honest I'm not sure 
   those
   outweigh the additional complication it seems it may bring in 
   some of
   the other areas.
  
  
  I think they actually do but probably because I don't see any
  complications being added to proton-c itself but rather a
  simplification of the current structure.
 
 Ken has covered much of what I was thinking about.
 
   It'll bring a more intuitive
  structure to the project rather than having the bindings under
  `proton-c`, which is very confusing :)
  
 
 I can see having them elsewhere might be more intuitive, especially
 for folks who dont know or care that they are using proton-c under 
 the
 covers while using them.
 
 I guess if they were kept in-tree they could be moved at to the top
 level to make them more obvious but then presumably the same might
 need done for the others, and that still wouldn't make a difference 
 to
 the various other points in your suggestion.
 
   The python bindings are slightly more interesting than the others 
   due
   to being at the heart of the python based tests that exist for
   proton-c and proton-j, so splitting them out actually creates a 
   bit of
   a circular dependency with proton / its tests (especially when 
   having
   the submodules in both repos). You mentioned thinking we wouldn't 
   need
   to track the binding within the proton tree (but could do so to
   prevent a breaking change for existing things), and instead use 
   an
   installed version of the bindings. In practice I guess it would 
   be
   about the same in terms of updating proton-c and the bindings and 
   the
   related python-based tests at the same time (adding an install, 
   or
   possibly two, in the middle).
  
  
  It won't be the same.
 
 I just meant in terms of which files people would need to update it
 would be similar, but with some install steps added (depending on 
 what
 you change), not any effect on what is or isnt being tested.
 
  The reason is that we're currently ensuring that
  proton-c doesn't break the bindings 'master' version. This is being
  done on the assumption that bindings will be released along with
  proton-c and there'll be a binding release per every proton-c 
  minor.
  
  The above is not an ideal scenarion in my opinion. Ideally, we'd 
  have
  a release of the bindings that will be able to track several minor
  releases of proton-c, but this will probably come later once proton
  -c
  stabilizes a bit more. However, I believe the current structure 
  allows
  for breaking changes to land because it tests things against master
  only rather than making sure older, still maintained, versions are 
  not
  being broken.
  
  Going back to the proposal, I think the circular dependency is not
  really necessary and we could just have the python bindings being
  installed. The installed version can either be the current master
  branch (pip knows how to install from git) or the latest stable
  version. Furthermore, we could even have a list of versions that we
  want to test against.
  
  To expand the above issue a bit more, I think it's wrong to have 
  the
  python tests as part of 

Re: Final (maybe really) proposal for C++ binding memory management.

2015-08-21 Thread aconway
 pointer takes ownership of the existing reference.

The get() function does nothing to the refcount, so the programmer must follow
proton's rules about valid scope and `delete`. It is an explicit function rather
than a conversion to avoid confusion with smart pointer constructors that take
ownership of a plain pointer.

One more trick. If we made a new shared_ptr on every conversion, that would be a
new shared_ptr *family* per call, not just an *instance*. There is one more
secret:

5. The first time someone wants a shared_ptr to a given proton object we
allocate a weak_ptr and stash it in a context slot on the object. Subsequently
we use that as a shared_ptr factory. Thus there is only one shared_ptr family
per object, and only if you ask for it. 

If you need to know more, read the source!
*/

#if (not defined PN_USE_CPP11 && (defined(__cplusplus) && __cplusplus >= 201100) || (defined(_MSC_FULL_VER) && _MSC_FULL_VER >= 150030729))
#define PN_USE_CPP11 1
#endif

#include <iostream>
#include <memory>
using namespace std;

//  Simulate private proton .h files.

// Fake pn object.
struct pn_foo_t {
    string s_;    // Dummy state
    int refs_;    // Simulate the external refcount

    // The external refcount is the number of c++ shared_ptr *groups* referencing the object.
    // This is in addition to any internal library refs - for this test we don't care about those
    // so we start refs_ at 0, meaning no external references.
    //
    // It is OK to have multiple shared_ptr groups pointing at the same pn_foo_t
    // since the end of a shared_ptr group only *decrefs* the pn_foo, it doesn't
    // delete it.
    //
    pn_foo_t(string s) : s_(s), refs_(0) { cout << s_ << " created" << endl; }
    ~pn_foo_t() { cout << s_ << " freed" << endl; }
};

//  Simulate public proton C API and proton internal refcounts.

pn_foo_t* pn_foo_create(string s) { return new pn_foo_t(s); } // 0 external refs
string pn_foo_get_s(pn_foo_t* p) { return p->s_; }
void pn_foo_set_s(pn_foo_t* p, string s) { p->s_ = s; }
void pn_foo_free(pn_foo_t* p) { delete p; }

// Simulate pn_object_incref/decref
void pn_object_incref(void* v) {
    pn_foo_t* p = reinterpret_cast<pn_foo_t*>(v);
    ++p->refs_;
}

void pn_object_decref(void* v) {
    pn_foo_t* p = reinterpret_cast<pn_foo_t*>(v);
    --p->refs_;
    if (p->refs_ == 0) delete p;
}

//  Simulate C++ binding

// Placeholder, real definition in proton/types.hpp. Generates all comparison ops from < and ==.
template <class T> class comparable {};

template <class T> class pn_shared_ptr;

/**@internal
 *
 * This is the main engine for pointer conversion, used only as a base
 * class. Derived types provide a `give()` function which returns a pointer
 * that can be deleted safely by the caller. `Derived::give()` will incref or not,
 * the derived classes know if we are in a loan or transfer situation.
 * (No virtual functions, uses the CRTP.)
 */
template <class T, class Derived> class pn_ptr : public comparable<pn_ptr<T, Derived> > {
  public:
    typedef T element_type;
    typedef typename T::pn_type pn_type;

    T* get() const { return ptr_; }
    pn_type* get_pn() const { return reinterpret_cast<pn_type*>(ptr_); }
    T* operator->() const { return ptr_; }
    operator bool() const { return ptr_; }
    bool operator!() const { return !ptr_; }

    void swap(pn_ptr& x) { std::swap(ptr_, x.ptr_); }

    operator pn_shared_ptr<T>() { return pn_shared_ptr<T>(give()); }
    operator std::auto_ptr<T>() { return std::auto_ptr<T>(give()); }
#if PN_USE_CPP11
    // FIXME aconway 2015-08-21: need weak pointer context for efficient shared_ptr
    operator std::shared_ptr<T>() { return std::shared_ptr<T>(give()); }
    operator std::unique_ptr<T>() { return std::unique_ptr<T>(give()); }
#endif
#if PN_USE_BOOST
    // FIXME aconway 2015-08-21: need weak pointer context for efficient shared_ptr
    operator boost::shared_ptr<T>() { return boost::shared_ptr<T>(give()); }
    operator boost::intrusive_ptr<T>() { return boost::intrusive_ptr<T>(give()); }
#endif

    // FIXME aconway 2015-08-20: template conversions for pointers to related types as per std:: pointers.

  protected:
    pn_ptr(pn_type* p) : ptr_(reinterpret_cast<T*>(p)) {}
    T* incref() const { pn_object_incref(reinterpret_cast<typename T::pn_type*>(ptr_)); return ptr_; }
    T* decref() const { pn_object_decref(reinterpret_cast<typename T::pn_type*>(ptr_)); return ptr_; }
    T* give() { return static_cast<Derived*>(this)->give(); }
    T* ptr_;
};

template <class T, class D> inline
bool operator==(pn_ptr<T,D> x, pn_ptr<T,D> y) { return x.get() == y.get(); }

template <class T, class D> inline
bool operator<(pn_ptr<T,D> x, pn_ptr<T,D> y) { return x.get() < y.get(); }

/**
   pn_shared_ptr is a smart pointer template that uses proton's internal
   reference counting. Proton objects hold their own reference count so there is
   no separate counter object allocated. The memory footprint is the same as a
   plain pointer.

   It is provided as a convenience for programmers that have pre-c++11 compilers
   and cannot

Re: All about proton memory management (or C++, python and Go - Oh My!)

2015-08-19 Thread aconway
On Wed, 2015-08-19 at 10:45 -0400, Ken Giusti wrote:
 Nicely done Alan! 
 
 One point - I'm a little confused about your advice regarding 
 pn_object_decref: 
 
  The proton C API has standard reference counting rules (but see [1] 
  below)
 
  * A pointer returned by a pn_ function is either borrowed by the 
  caller, or
  the caller owns a reference (the API doc says which.)
  * The owner of a reference must call pn_object_decref() exactly 
  once to
  release it.
 
 Really?  What about those proton objects that have a pn_X_free() 
 method?  I believe the corresponding pn_X() allocation methods pass 
 back an owning reference to the object.  If a pn_X_free() exists (and 
 the user never calls pn_object_incref() on the object), shouldn't the 
 user use the pn_X_free() method instead?
 
 Maybe the doc should make that distinction, e.g.:
 
 * if an object of type pn_X_t has a corresponding pn_X_free() method, 
 call the pn_X_free() method to release your reference to the object.
 * otherwise, if you have called pn_object_incref() on the object, you 
 must call pn_object_decref() on the object to release your reference.

From skimming the source, I believe pn_x_free looks at the refcount
itself and behaves like pn_object_decref without the

    if (--refcount <= 0) pn_x_free(me)

So in theory they *should* work together. The proton lib has no way of
knowing whether the last thing to release an object will be an internal
decref or a pn_x_free from the user, so I sure *hope* they work
together.

However I did want to make the point that you should EITHER use
refcounts only OR free only, not both. Even if it works today your code
or your head may explode later. IMO normal C code (including bindings
to languages without finalizers like Go) should use free only, bindings
to automated memory languages like python and soon C++ should use
refcounts only.

Of course the difference between theory and practice is that in theory 
they are the same and in practice they are different.




Re: All about proton memory management (or C++, python and Go - Oh My!)

2015-08-18 Thread aconway
On Tue, 2015-08-18 at 12:09 -0400, Andrew Stitcher wrote:
 On Tue, 2015-08-18 at 07:38 -0400, Rafael Schloming wrote:
  Nice writeup!
 
 I agree.
 
 Andrew
 
 [Did you make a pass through the doc to ensure that the claimed API 
 doc
 is actually there? that is, doc on ownership and scope?]
 

No, will add to my TODO list :) Some of it definitely is but I don't
know if it is uniform and complete. The existing doc does not refer to
refcounts, it talks of pointers becoming invalid. I think that is the
correct language - we don't want to give the impression that refcounts
are mandatory. The discussion of refcounting can clarify that "becomes
invalid" means the implied _reference_ becomes invalid. The actual
object might not be freed immediately (with or without user refcounts,
because of containment) but in any case it is no longer your business.
You must treat that pointer or the implied reference as invalid or you
are deep in "works in tests, core dumps in production" territory.

Thanks for the feedback!
Alan.


Re: Integrating C++ and proton C memory management UPDATED

2015-08-17 Thread aconway
On Mon, 2015-08-17 at 10:38 -0400, Andrew Stitcher wrote:
 I like the way you're thinking - I expect to have real time to look 
 at
 your code Tomorrow/Wednesday.
 
 One point that occurred to me over the weekend (that I think is
 probably incorporated in what you've done here). Is that C++ code 
 never
 needs to use a shared_ptr to any Proton struct because Proton ref
 counts by itself. In other words the C++ ref count could only ever by 
 0
 or 1. All the C++ code ever needs is a unique_ptr. I suspect this 
 point
 doesn't really affect your proposal here though.
 
 Andrew
 

Here's a preview of what I'm thinking now, it follows from the code I
published

/**
   The C++ proton binding can be used in a memory-safe way via
   std::shared_ptr, std::unique_ptr, boost::shared_ptr and
   boost::intrusive_ptr. If you are using an older C++ compiler and
   cannot use boost, you can optionally use proton::pn_shared_ptr to
   automate memory management.

   You can also use the binding in an *unsafe* C-like way with plain
   pointers if you must avoid the (low) overhead of reference counting,
   or if you are writing mixed C/C++ code. Caveat emptor.
*/

template <class T> class pn_shared_ptr;
template <class T> class pn_borrowed_ptr;

/** Base class for pn_ smart pointers, see pn_shared_ptr and pn_borrowed_ptr */
template <class T> class pn_base_ptr {
  public:
    typedef typename T::pn_type pn_type;

    T* get() const { return ptr_; }
    operator T*() const { return ptr_; }
    T* operator->() const { return ptr_; }

    // get_pn is only needed if you are mixing C and C++ proton code.
    pn_type* get_pn() const { return reinterpret_cast<pn_type*>(ptr_); }

#if PN_USE_CPP11
    operator std::shared_ptr<T>() const { return std::shared_ptr<T>(incref(), pn_object_decref); }
    operator std::unique_ptr<T>() const { return std::unique_ptr<T>(incref(), pn_object_decref); }
#endif
#if PN_USE_BOOST
    operator boost::shared_ptr<T>() const { return boost::shared_ptr<T>(incref(), pn_object_decref); }
    operator boost::intrusive_ptr<T>() const { TODO; }
#endif

    // FIXME aconway 2015-08-17: get_pn'ing conversions to compatible pointers

  private:
    pn_base_ptr(T* p) : ptr_(p) {}

    T* incref() { pn_object_incref(get_pn()); return ptr_; }
    void decref() { pn_object_decref(get_pn()); }

    T* ptr_;

    template <class U> friend class pn_shared_ptr;
    template <class U> friend class pn_borrowed_ptr;
};

/**
   pn_shared_ptr is a smart pointer template that uses proton's internal
   reference counting. Proton objects hold their own reference count so
   there is no separate counter object allocated. The memory footprint is
   the same as a plain pointer.

   It is provided as a convenience for programmers that have pre-c++11
   compilers and cannot use the boost libraries. If you have access to
   std::shared_ptr, std::unique_ptr, boost::shared_ptr or
   boost::intrusive_ptr you can use them instead. pn_shared_ptr converts
   automatically with correct reference counting.
*/
template <class T> class pn_shared_ptr : public pn_base_ptr<T> {
  public:
    pn_shared_ptr(pn_shared_ptr& p) : pn_base_ptr<T>(p) { incref(); }
    pn_shared_ptr(pn_borrowed_ptr<T>& p) : pn_base_ptr<T>(p) { incref(); }
    ~pn_shared_ptr() { decref(); }

  private:
    pn_shared_ptr(T* p) : pn_base_ptr<T>(p) {}
    template <class U> friend class pn_shared_ptr;
};

/**
   pn_borrowed_ptr is used as a return and parameter type in proton::
   functions to automate correct memory management. You do not normally
   need to use it in your own code.

   It behaves like (and converts to/from) a plain pointer, the only
   difference from a plain pointer is that converting it to any owning
   smart pointer type will automatically add a reference. Converting from
   (i.e. borrowing) a plain or smart pointer will not change the reference
   count.
*/
template <class T> class pn_borrowed_ptr : public pn_base_ptr<T> {
  public:
    typedef typename pn_base_ptr<T>::pn_type pn_type;

    /** This constructor is only needed if you are mixing C and C++ proton code */
    explicit pn_borrowed_ptr(pn_type* p) : pn_base_ptr<T>(dynamic_cast<T*>(p)) {}
    pn_borrowed_ptr(T* p) : pn_base_ptr<T>(p) {}
    operator T*() const { return ptr_; }
    operator pn_shared_ptr<T>() const { return pn_shared_ptr<T>(incref()); }
};


Re: Integrating C++ and proton C memory management UPDATED

2015-08-17 Thread aconway
On Mon, 2015-08-17 at 10:38 -0400, Andrew Stitcher wrote:
 I like the way you're thinking - I expect to have real time to look 
 at
 your code Tomorrow/Wednesday.
 
 One point that occurred to me over the weekend (that I think is
 probably incorporated in what you've done here). Is that C++ code 
 never
 needs to use a shared_ptr to any Proton struct because Proton ref
 counts by itself. In other words the C++ ref count could only ever by 
 0
 or 1. All the C++ code ever needs is a unique_ptr. I suspect this 
 point
 doesn't really affect your proposal here though.

I'll do you one better - C++ can use *either* (or boost
shared/intrusive pointers) seamlessly if we say that all C++ smart
pointers own a *proton reference*, not a *proton object*.

Will have another proposal out shortly to clarify.

 
 On Sat, 2015-08-15 at 06:09 -0400, aconway wrote:
  In case you spotted the bug in the previous proposal here is a much
  better one. This one doesn't have code yet but you can imagine how 
  it
  would work based on the previous code. I'll post updated code 
  shortly.
  
  Updated proposal to integrate C++ and proton C memory management.
  
  - use refcounting consistently, no pn_free. Fixes bug in the 
  previous
  proposal.
  - added pn_shared_ptr for portable refcounting in C++11 and C++03.
  - better integration with std:: and boost:: smart pointers in C++11 
  and
  C++03.
  
  The idea is that every ::pn_foo_t type has a corresponding C++
  proton::foo class with member functions so you can do `foo* p=...;
  p->something()` in C++ and it will call `::pn_foo_something()` on the
  underlying `::pn_foo_t`.
  
  The first trick: the foo class is *empty and never instantiated*. A
  foo* returned from the C++ API points directly to the raw C `struct
  pn_foo_t`. You can reinterpret_cast between the two if you want to mix
  C and C++ APIs (you don't really want to.) Doing `foo* p=...; delete p`
  will actually call pn_object_decref(reinterpret_cast<void*>(p)).
  
  The next trick: proton:: functions return proton::pn_shared_ptr<foo>,
  a smart pointer using proton refcounts directly
  (pn_object_incref/decref). It is portable and useful as-is in c++03 and
  c++11, but is not as featureful as the std:: and boost:: smart pointers.
  
  However it can be converted to any of std::unique_ptr, std::shared_ptr,
  std::auto_ptr, boost::shared_ptr and boost::intrusive_ptr safely with
  correct refcounting. Each unique_ptr, auto_ptr or shared_ptr family
  owns a proton refcount (not the actual proton object) so it is safe to
  have multiple unique/shared_ptr to the same underlying proton object.
  
  So some examples, given:
  
  class foo { pn_shared_ptr<foo> create(); ... }
  class event { pn_shared_ptr<foo> foo(); }
  event e;
  
  This works in C++11:
  
  - std::shared_ptr<foo> p = e.foo();     // shared_ptr refcounts integrated with proton
  - std::unique_ptr<foo> p = foo::create();
  
  These are all safe and portable in C++03 or C++11:
  
  - e.foo()->somefunc();                  // call direct, no refcounting.
  - pn_shared_ptr<foo> p = e.foo();       // use pn_shared_ptr directly.
  - std::auto_ptr<foo> p = foo::create(); // portable but deprecated in C++11
  - boost::intrusive_ptr<foo> p = e.foo() // use proton refcount directly.
  - boost::shared_ptr<foo> p = e.foo()    // boost refcounts integrated with proton
  
  The following are *unsafe* but legal in all C++ versions:
  
  - foo* p = e.foo();       // unsafe, p may be invalidated by later proton actions.
  - foo* p = foo::create(); // unsafe, p will not automatically be freed.
  
  There is almost no overhead compared to using the raw C interface. 
   If
  you use boost|std::shared_ptr it will allocate an internal counter 
  per
  pointer *family* (not instance) which is not much overhead, 
  otherwise
  there are 0 extra allocations. The template magic will evaporate at
  compile time.
  
  NOTE: proton:: functions will take foo& as a parameter so you can 
  always
  pass *p for any pointer type.
  
  On Fri, 2015-08-14 at 20:52 -0400, aconway wrote:
   I have a proposal to integrate C++ and proton C memory 
   management, 
   I
   need a sanity check.
   
   Attached is an executable C++ sketch and test (pn_ptr.cpp) and a 
   script
   (test.sh) that runs the combinations of g++/clang++ and 
   c++11/c++03, 
   as
   well as some tests to verify that we get compile errors to 
   prevent
   mistakes.
   
   The idea is that every pn_foo_t type has a corresponding C++ foo 
   class
   with member functions to make it easy to call pn_foo_* functions 
   in 
   C++
   (converting std::string/char* etc.)
   
   The first trick: the foo class is empty and never instantiated. A 
   foo*
   actually points to the same memory location as the pn_foo_t*. You 
   can
   reinterpret_cast between the two, and deleting the foo* will 
   actually
   call pn_foo_free(pn_foo_t*).
   
   The next trick

All about proton memory management (or C++, python and Go - Oh My!)

2015-08-17 Thread aconway
I've been doing a lot of thinking about memory management and proton
bindings in 4 languages (C, Go, python and C++) and I have Seen The
Light. Here is a write-up, I'd appreciate feedback in the form of
email, reviewboard diffs, regular diffs, or just commit improvements if
you're a commiter. I will add this to the official proton API doc after
incorporating feedback:

https://github.com/apache/qpid-proton/blob/master/docs/markdown/memory_
management.md




Re: Integrating C++ and proton C memory management UPDATED

2015-08-15 Thread aconway
In case you spotted the bug in the previous proposal here is a much
better one. This one doesn't have code yet but you can imagine how it
would work based on the previous code. I'll post updated code shortly.

Updated proposal to integrate C++ and proton C memory management.

- use refcounting consistently, no pn_free. Fixes bug in the previous
proposal.
- added pn_shared_ptr for portable refcounting in C++11 and C++03.
- better integration with std:: and boost:: smart pointers in C++11 and
C++03.

The idea is that every ::pn_foo_t type has a corresponding C++
proton::foo class with member functions so you can do `foo* p=...;
p->something()` in C++ and it will call `::pn_foo_something()` on the
underlying `::pn_foo_t`.

The first trick: the foo class is *empty and never instantiated*. A
foo* returned from the C++ API points directly to the raw C `struct
pn_foo_t`. You can reinterpret_cast between the two if you want to mix
C and C++ APIs (you don't really want to.) Doing `foo* p=...; delete p`
will actually call pn_object_decref(reinterpret_cast<void*>(p)).
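
To make the trick concrete, here is a self-contained toy. foo, pn_foo_t
and pn_foo_something are just the placeholder names from this mail, with
stub definitions standing in for the real C library:

    #include <cassert>

    struct pn_foo_t { int calls; };                        // stand-in for the C struct
    extern "C" void pn_foo_something(pn_foo_t* f) { ++f->calls; }

    class foo {                                            // empty facade, never instantiated
      public:
        void somefunc() {
            // 'this' really points at a pn_foo_t, so cast back and call the C API.
            ::pn_foo_something(reinterpret_cast<pn_foo_t*>(this));
        }
      private:
        foo();                                             // no construction allowed
    };

    inline foo* as_cpp(pn_foo_t* p) { return reinterpret_cast<foo*>(p); }

    int main() {
        pn_foo_t raw = { 0 };
        as_cpp(&raw)->somefunc();                          // a C++ member call lands on the C object
        assert(raw.calls == 1);
        return 0;
    }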

The next trick: proton:: functions return proton::pn_shared_ptr<foo>, a
smart pointer using proton refcounts directly (pn_object_incref/decref)
It is portable and useful as-is in c++03 and c++11, but is not as
featureful as the std:: and boost:: smart pointers.

However it can be converted to any of std::unique_ptr, std::shared_ptr,
std::auto_ptr, boost::shared_ptr and boost::intrusive_ptr safely with
correct refcounting. Each unique_ptr, auto_ptr or shared_ptr family
owns a proton refcount (not the actual proton object) so it is safe to
have multiple unique/shared_ptr to the same underlying proton object.

So some examples, given:

class foo { pn_shared_ptr<foo> create(); ... }
class event { pn_shared_ptr<foo> foo(); }
event e;

This works in C++11:

- std::shared_ptr<foo> p = e.foo(); // shared_ptr refcounts integrated
with proton
- std::unique_ptr<foo> p = foo::create();

These are all safe and portable in C++03 or C++11:

- e.foo()->somefunc();  // call direct, no refcounting.
- pn_shared_ptr<foo> p = e.foo();   // use pn_shared_ptr directly.
- std::auto_ptr<foo> p = foo::create(); // portable but deprecated in C++11
- boost::intrusive_ptr<foo> p = e.foo() // use proton refcount directly.
- boost::shared_ptr<foo> p = e.foo()// boost refcounts integrated with proton

The following are *unsafe* but legal in all C++ versions:

- foo* p = e.foo(); // unsafe, p may be invalidated by later
proton actions.
- foo* p = foo::create();   // unsafe, p will not automatically be
freed.

There is almost no overhead compared to using the raw C interface.  If
you use boost|std::shared_ptr it will allocate an internal counter per
pointer *family* (not instance) which is not much overhead, otherwise
there are 0 extra allocations. The template magic will evaporate at
compile time.

NOTE: proton:: functions will take foo& as a parameter so you can always
pass *p for any pointer type.
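
For the std::shared_ptr case the conversion boils down to handing the
shared_ptr family its own proton reference plus a deleter that drops it.
C++11 sketch; take_shared is an illustrative name and pn_incref/pn_decref
are assumed to be the C refcount entry points:

    #include <memory>

    extern "C" {
    void* pn_incref(void* object);
    int   pn_decref(void* object);
    }

    template <class T>
    std::shared_ptr<T> take_shared(T* p) {
        pn_incref(p);                                      // this family owns one proton reference
        return std::shared_ptr<T>(p, [](T* q) { pn_decref(q); });
    }

Copying that shared_ptr only bumps its own internal count; the single
proton reference is released when the last copy goes away, which is why
several independent pointer families over one proton object stay safe.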

On Fri, 2015-08-14 at 20:52 -0400, aconway wrote:
 I have a proposal to integrate C++ and proton C memory management, I
 need a sanity check.
 
 Attached is an executable C++ sketch and test (pn_ptr.cpp) and a 
 script
 (test.sh) that runs the combinations of g++/clang++ and c++11/c++03, 
 as
 well as some tests to verify that we get compile errors to prevent
 mistakes.
 
 The idea is that every pn_foo_t type has a corresponding C++ foo 
 class
 with member functions to make it easy to call pn_foo_* functions in 
 C++
 (converting std::string/char* etc.)
 
 The first trick: the foo class is empty and never instantiated. A 
 foo*
 actually points to the same memory location as the pn_foo_t*. You can
 reinterpret_cast between the two, and deleting the foo* will actually
 call pn_foo_free(pn_foo_t*).
 
 The next trick: proton::event accessor functions return a 
 pn_ptr<foo>,
 which is an internal class that users cannot instantiate. What they 
 can
 do is convert it by assignment or construction to any of: foo*,
 std::auto_ptr<foo>, std::unique_ptr<foo> or std::shared_ptr<foo>. In
 the shared_ptr case the conversion automatically does a pn_incref().
 
 The upshot of this is that you can use plain foo* or any of the
 std::smart pointers to point to a foo and it Just Works. If you don't
 use shared_ptr you need to understand the proton C API lifecycle 
 rules,
 but with shared_ptr it is all fully automatic.
 
 Moreover if you don't use shared_ptr there is almost no overhead over
 using pn_foo_t* directly in the C API, as the compiler should 
 optimise
 away all the inline template magic.
 
 This works with c++11 (everything works) or c++03 (just foo* and
 auto_ptr). It will be trivial to add support for boost::shared_ptr so
 nice memory management will work with c++03 and boost.


C++ API tutorial complete

2015-08-07 Thread aconway
The C++ tutorial and examples are complete, except for selector and
browsing examples which we can fill in later. Preview at: 
http://people.apache.org/~aconway/proton/ 

The code is on branch cjansen_cpp_client



Re: Semantics of proton refcounts [was Re: proton 0.10 blocker]

2015-07-16 Thread aconway
On Thu, 2015-07-16 at 15:11 +0100, Gordon Sim wrote:
 On 07/16/2015 02:40 PM, aconway wrote:
  The fix mentioned above has this, which makes no sense under 
  traditional
  refcounting:
  
   pn_incref(endpoint);
   pn_decref(endpoint);
 
 Note that this is not added as part of my fix, it is already there. 
 [snip]
 
 The reference counting logic may not match the ideal, but we can't 
 postpone a fix for the current issue pending some nicer overall 
 solution. We can avoid it, by backing out the problematic previous 
 commit, or we can adjust that commit.
 

+1, I'm just proposing that we get a clear statement of the intended
semantics of the *existing* proton refcounting scheme into the code as
it has tripped up a couple of people so far. I for one am still not
very clear on it. This is not an issue for 0.10 but for longer term
code health.

Cheers,
Alan.


Proton reactor documentation

2015-07-15 Thread aconway
I'm documenting the C++ binding but the proton C reactor.h is very
light on documentation. Is anyone working on this? I'm figuring it out
from source but it would be good to have some docs there.

Cheers,
Alan.


Re: Git repo for the proton Go binding.

2015-07-14 Thread aconway
On Mon, 2015-07-13 at 23:02 +0100, Dominic Evans wrote:
 -aconway acon...@redhat.com wrote: -
  I would like to create a separate git repo for the proton Go 
  binding. 
  
  Go provides go get to grab online go source libraries, based on
  cloning repos. The go tools assume that each go project has its own
  repo. I have tried to make this work directly from the proton repo
  but it is a mess and doesn't work properly.
 
 Is this just because the go code isn't at the top-level?
 
 Can't you just rename your current branch out of the way and then 
 subtree the binding and push it to the go1 branch?
 
 e.g., something like...
 
 # rename old full tree branch and push it to a new location
  git checkout -b proton-go origin/go1
  git push origin --set-upstream proton-go
 # split out just the go binding and push it
  git subtree split --prefix=proton-c/bindings/go -b go1
  git checkout go1
  git push -f origin go1
 
 `go get` will look for the go1 branch by default, so it should just 
 work (tm) ?
 

Hah, I missed git subtree. It's not mentioned on the main git man page
in my distro (I know, I know, what's a 'man page' grandad?) 

This sounds exactly right but doesn't work for me. When I run the
subtree split my go1 branch still appears to have the full proton tree
in it. Must be doing something wrong...



Re: Tutorial for C++ binding

2015-07-14 Thread aconway
On Tue, 2015-07-14 at 10:07 +0100, Gordon Sim wrote:
 On 07/13/2015 11:22 PM, aconway wrote:
  I've got a (very rough, very incomplete) draft of the tutorial for 
  the
  C++ binding up on http://people.apache.org/~aconway/proton/
  
  I'm interested in feedback on whether this is going in the right
  direction. Hope to have this complete in a day or two, along with 
  more
  substantial API doc for the C++ API.
 
 Looks good to me (but then I'm a bit biased!).
 

I moved the discussion of broker vs. direct up front with instructions
to run the example broker, I found that confusing. Other than that I
love the structure of the tutorial, does a nice job of building
concepts.

One other thing I think needs some explanation and consistency is the
relationship between URLs, connection addresses and queue/topic names.

As we only need then to initiate one link, the sender, we can do that
by passing in a url rather than an existing connection, and the
connection will also be automatically established for us. 

I think we need to spell out that the /path part of the URL is the
queue/topic name. I'm wary of using the term "AMQP address" since the
addressing spec is a little unclear at this point (esp. on the point as
to whether connection info is or is not part of the link
source/target.)



Re: Git repo for the proton Go binding.

2015-07-14 Thread aconway
Problem solved, thanks Dominic!!

On Mon, 2015-07-13 at 23:02 +0100, Dominic Evans wrote:
 -aconway acon...@redhat.com wrote: -
  I would like to create a separate git repo for the proton Go 
  binding. 
  
  Go provides go get to grab online go source libraries, based on
  cloning repos. The go tools assume that each go project has its own
  repo. I have tried to make this work directly from the proton repo
  but it is a mess and doesn't work properly.
 
 Is this just because the go code isn't at the top-level?
 
 Can't you just rename your current branch out of the way and then 
 subtree the binding and push it to the go1 branch?
 
 e.g., something like...
 
 # rename old full tree branch and push it to a new location
  git checkout -b proton-go origin/go1
  git push origin --set-upstream proton-go
 # split out just the go binding and push it
  git subtree split --prefix=proton-c/bindings/go -b go1
  git checkout go1
  git push -f origin go1
 
 `go get` will look for the go1 branch by default, so it should just 
 work (tm) ?
 
 


FYI: updated C++ tutorial

2015-07-14 Thread aconway
C++ tutorial is mostly done, if that interests you
 http://people.apache.org/~aconway/proton/
or check out cjansen-cpp-client.


Re: Git repo for the proton Go binding.

2015-07-13 Thread aconway
On Mon, 2015-07-13 at 18:28 +0100, Robbie Gemmell wrote:
 On 13 July 2015 at 16:23, aconway acon...@redhat.com wrote:
  On Mon, 2015-07-13 at 13:03 +0100, Robbie Gemmell wrote:
   I don't really know much about Go, so I mainly have questions 
   rather
   than answers.
   
   - What would actually be included in this 'qpid-proton-go' repo 
   vs
   the
   existing qpid-proton repo?
  
  The contents of proton-c/bindings/go on branch go1. Basically the 
  Go
  binding source code.
  
   - Have you looked into how other Apache projects are supporting 
   go
   get, if there are any, to see what do they do?
  
  Nope, good point. I've looked at a bunch of google and github 
  projects,
  but all pure Go so the tools Just Work. Anyone know of other mixed
  -language projects with a go component?
  
   I'm not sure how well it would go down with infra to be routinely
   'distributing' things directly out of the repo.
  
  I'm pretty sure they're OK with distributing source code out of the
  repo :)  Go distributes everything as pure source, so no unusual 
  use of
  the repo is implied.
 
 I mainly meant that developers and interested folks using the repo to
 get at the source for inspection and modification isn't necessarily
 considered the same as using it as a regular entry point for most
 users. For example, we use the mirror system for our actual [source]
 releases. That said, if we are discussing a fairly small binding that
 probably isn't much of an issue, and as I wondered, perhaps the 
 github
 mirror of the repo might be a useful tool there.
 
+1, we could certainly do that. That doesn't solve the structural
problems but I'll keep digging and head scratching.

  I have go get *almost* working directly out of
  the ASF repo now, but I can't work around all the glitches - in
  particular the documentation browser is a mess. If I could I would
  rather keep it all in the same repo.
  
Might be worth
   discussing with infra. Perhaps we could point folks at the GitHub
   mirror to alleviate that? Not sure if there are path issues 
   involved
   with that though. Somewhat looping back to 'what do other 
   projects
   do?' again.
  
  Good point, I'll ask around and on infra, I may be missing 
  something.
  If anyone's interested in brainstorming about the proper way to do 
  this
  (esp. anyone who's done Go work) I'd be happy to go over the issues 
  in
  painful detail.
  
   On 10 July 2015 at 16:34, aconway acon...@redhat.com wrote:
I would like to create a separate git repo for the proton Go
binding.

Go provides go get to grab online go source libraries, based 
on
cloning repos. The go tools assume that each go project has its 
own
repo. I have tried to make this work directly from the proton 
repo
but
it is a mess and doesn't work properly.

Any objections or suggestions?

Anyone got pointers to speed me thru the apache infra process?

Cheers,
Alan.


Re: Git repo for the proton Go binding.

2015-07-13 Thread aconway
On Mon, 2015-07-13 at 13:03 +0100, Robbie Gemmell wrote:
 I don't really know much about Go, so I mainly have questions rather
 than answers.
 
 - What would actually be included in this 'qpid-proton-go' repo vs 
 the
 existing qpid-proton repo?

The contents of proton-c/bindings/go on branch go1. Basically the Go
binding source code.

 - Have you looked into how other Apache projects are supporting go
 get, if there are any, to see what do they do?

Nope, good point. I've looked at a bunch of google and github projects,
but all pure Go so the tools Just Work. Anyone know of other mixed
-language projects with a go component?

 I'm not sure how well it would go down with infra to be routinely
 'distributing' things directly out of the repo.

I'm pretty sure they're OK with distributing source code out of the
repo :) Go distributes everything as pure source, so no unusual use of
the repo is implied. I have go get *almost* working directly out of
the ASF repo now, but I can't work around all the glitches - in
particular the documentation browser is a mess. If I could I would
rather keep it all in the same repo.

  Might be worth
 discussing with infra. Perhaps we could point folks at the GitHub
 mirror to alleviate that? Not sure if there are path issues involved
 with that though. Somewhat looping back to 'what do other projects
 do?' again.

Good point, I'll ask around and on infra, I may be missing something.
If anyone's interested in brainstorming about the proper way to do this
(esp. anyone who's done Go work) I'd be happy to go over the issues in
painful detail.

 On 10 July 2015 at 16:34, aconway acon...@redhat.com wrote:
  I would like to create a separate git repo for the proton Go 
  binding.
  
  Go provides go get to grab online go source libraries, based on
  cloning repos. The go tools assume that each go project has its own
  repo. I have tried to make this work directly from the proton repo 
  but
  it is a mess and doesn't work properly.
  
  Any objections or suggestions?
  
  Anyone got pointers to speed me thru the apache infra process?
  
  Cheers,
  Alan.


Tutorial for C++ binding

2015-07-13 Thread aconway
I've got a (very rough, very incomplete) draft of the tutorial for the
C++ binding up on http://people.apache.org/~aconway/proton/

I'm interested in feedback on whether this is going in the right
direction. Hope to have this complete in a day or two, along with more
substantial API doc for the C++ API.

Gory details: The site is doxygen. I used pandoc to convert gsim's rst
tutorial to markdown (which doxygen understands) and just started
hacking. 

I was hoping to find ways we could re-use tutorial text automatically
but so far the tutorial discussion is too closely intertwined with
language specifics for me to see how that could reasonably work so
although the text is mostly the same it's all manual editing.

Cheers,
Alan.


Git repo for the proton Go binding.

2015-07-10 Thread aconway
I would like to create a separate git repo for the proton Go binding. 

Go provides go get to grab online go source libraries, based on
cloning repos. The go tools assume that each go project has its own
repo. I have tried to make this work directly from the proton repo but
it is a mess and doesn't work properly.

Any objections or suggestions? 

Anyone got pointers to speed me thru the apache infra process?

Cheers,
Alan.


New python tox tests failing on fedora 22, Python 2.7.10/3.4.2

2015-07-10 Thread aconway
Anyone seeing errors like this? I think the ERROR: InterpreterNotFound
should be a warning, it seems to be testing both 2.7 and 3.4 so not
finding 2.6 and 3.3 doesn't seem like an ERROR.

The missing attribute is defined as a @staticmethod so I don't
understand that.

The 3 argument raise is definitely gone in python 3 so I don't
understand why its still there or what is the portable way to replace
it.

Cheers,
Alan. 

1/1 Test #3: python-tox-test ..***Failed  Error regular
expression found in output. Regex=[ERROR:[ ]+py[0-9]*: commands failed]
28.49 sec
GLOB sdist-make: /home/aconway/proton/proton-c/bindings/python/setup.py
py26 create: /home/aconway/proton/proton-c/bindings/python/.tox/py26
ERROR: InterpreterNotFound: python2.6
py27 inst-nodeps: /home/aconway/proton/proton
-c/bindings/python/.tox/dist/python-qpid-proton-0.10.0.zip
py27 runtests: PYTHONHASHSEED='147840795'
py27 runtests: commands[0] | /home/aconway/proton/proton
-c/bindings/python/../../../tests/python/proton-test
Traceback (most recent call last):
  File /home/aconway/proton/proton
-c/bindings/python/../../../tests/python/proton-test, line 620, in
module
m = __import__(name, None, None, [dummy])
  File /home/aconway/proton/tests/python/proton_tests/__init__.py,
line 20, in module
import proton_tests.codec
  File /home/aconway/proton/tests/python/proton_tests/codec.py, line
21, in module
from . import common
  File /home/aconway/proton/tests/python/proton_tests/common.py, line
133, in module
if SASL.extended():
AttributeError: type object 'SASL' has no attribute 'extended'
ERROR: InvocationError: '/home/aconway/proton/proton
-c/bindings/python/../../../tests/python/proton-test'
py33 create: /home/aconway/proton/proton-c/bindings/python/.tox/py33
ERROR: InterpreterNotFound: python3.3
py34 inst-nodeps: /home/aconway/proton/proton
-c/bindings/python/.tox/dist/python-qpid-proton-0.10.0.zip
py34 runtests: PYTHONHASHSEED='147840795'
py34 runtests: commands[0] | /home/aconway/proton/proton
-c/bindings/python/../../../tests/python/proton-test
Traceback (most recent call last):
  File /home/aconway/proton/proton
-c/bindings/python/../../../tests/python/proton-test, line 620, in
module
m = __import__(name, None, None, [dummy])
  File /home/aconway/proton/tests/python/proton_tests/__init__.py,
line 20, in module
import proton_tests.codec
  File /home/aconway/proton/tests/python/proton_tests/codec.py, line
21, in module
from . import common
  File /home/aconway/proton/tests/python/proton_tests/common.py, line
26, in module
from proton import Connection, Transport, SASL, Endpoint, Delivery,
SSL
  File /usr/local/lib64/proton/bindings/python/proton/__init__.py,
line 3733
raise exc, val, tb
 ^
SyntaxError: invalid syntax
ERROR: InvocationError: '/home/aconway/proton/proton
-c/bindings/python/../../../tests/python/proton-test'
___ summary
___
ERROR:   py26: InterpreterNotFound: python2.6
ERROR:   py27: commands failed
ERROR:   py33: InterpreterNotFound: python3.3
ERROR:   py34: commands failed


Re: New python tox tests failing on fedora 22, Python 2.7.10/3.4.2

2015-07-10 Thread aconway
On Fri, 2015-07-10 at 10:34 -0400, Ted Ross wrote:
 I had a similar error, not sure if it was exactly the same.  I 
 discovered that I was missing the python3-devel package.  Once 
 python3-devel was installed and I did a completely clean build, the 
 problems went away.
 

I have that - are you fedora 21 or 22?


Re: New python tox tests failing on fedora 22, Python 2.7.10/3.4.2

2015-07-10 Thread aconway
On Fri, 2015-07-10 at 11:53 -0400, Ken Giusti wrote:
 
 - Original Message -
  From: aconway acon...@redhat.com
  To: proton proton@qpid.apache.org
  Sent: Friday, July 10, 2015 10:01:14 AM
  Subject: New python tox tests failing on fedora 22, Python 
  2.7.10/3.4.2
  
  Anyone seeing errors like this? I think the ERROR: 
  InterpreterNotFound
  should be a warning, it seems to be testing both 2.7 and 3.4 so not
  finding 2.6 and 3.3 doesn't seem like an ERROR.
  

The problem was having an install of proton. If I clear out the install
prefix everything works fine, but this will fail:

make install && ctest -VV -R python-tox-test

So somewhere in the cmake or python setup scripts we are using either
the system PYTHONPATH/LD_LIBRARY_PATH or the CMAKE_INSTALL_PREFIX where
we should be using an explicit path to the .tox tree.
 
 
 
 InterpreterNotFound should be a warning.  The ctest parser will 
 ignore those, so they don't show up as an error.  I'm not sure if tox 
 gives us a way to suppress those (will check).
 
 
  The missing attribute is defined as a @staticmethod so I don't
  understand that.
  
 
 Yeah me either - I'm not hitting that on latest trunk under 
 python3.4.
 
  
  The 3 argument raise is definitely gone in python 3 so I don't
  understand why its still there or what is the portable way to 
  replace
  it.
 
 
 There isn't an alternative in py3.  I've written a raise_ method 
 that simulates the same behavior in both py2 and py3 - take a look in 
 proton-c/bindings/python/proton/_compat.py
 
 
 
  
  Cheers,
  Alan.
  
  1/1 Test #3: python-tox-test ..***Failed  Error 
  regular
  expression found in output. Regex=[ERROR:[ ]+py[0-9]*: commands 
  failed]
  28.49 sec
  GLOB sdist-make: /home/aconway/proton/proton
  -c/bindings/python/setup.py
  py26 create: /home/aconway/proton/proton
  -c/bindings/python/.tox/py26
  ERROR: InterpreterNotFound: python2.6
  py27 inst-nodeps: /home/aconway/proton/proton
  -c/bindings/python/.tox/dist/python-qpid-proton-0.10.0.zip
  py27 runtests: PYTHONHASHSEED='147840795'
  py27 runtests: commands[0] | /home/aconway/proton/proton
  -c/bindings/python/../../../tests/python/proton-test
  Traceback (most recent call last):
File /home/aconway/proton/proton
  -c/bindings/python/../../../tests/python/proton-test, line 620, in
  module
  m = __import__(name, None, None, [dummy])
File 
  /home/aconway/proton/tests/python/proton_tests/__init__.py,
  line 20, in module
  import proton_tests.codec
File /home/aconway/proton/tests/python/proton_tests/codec.py, 
  line
  21, in module
  from . import common
File /home/aconway/proton/tests/python/proton_tests/common.py, 
  line
  133, in module
  if SASL.extended():
  AttributeError: type object 'SASL' has no attribute 'extended'
  ERROR: InvocationError: '/home/aconway/proton/proton
  -c/bindings/python/../../../tests/python/proton-test'
  py33 create: /home/aconway/proton/proton
  -c/bindings/python/.tox/py33
  ERROR: InterpreterNotFound: python3.3
  py34 inst-nodeps: /home/aconway/proton/proton
  -c/bindings/python/.tox/dist/python-qpid-proton-0.10.0.zip
  py34 runtests: PYTHONHASHSEED='147840795'
  py34 runtests: commands[0] | /home/aconway/proton/proton
  -c/bindings/python/../../../tests/python/proton-test
  Traceback (most recent call last):
File /home/aconway/proton/proton
  -c/bindings/python/../../../tests/python/proton-test, line 620, in
  module
  m = __import__(name, None, None, [dummy])
File 
  /home/aconway/proton/tests/python/proton_tests/__init__.py,
  line 20, in module
  import proton_tests.codec
File /home/aconway/proton/tests/python/proton_tests/codec.py, 
  line
  21, in module
  from . import common
File /home/aconway/proton/tests/python/proton_tests/common.py, 
  line
  26, in module
  from proton import Connection, Transport, SASL, Endpoint, 
  Delivery,
  SSL
File 
  /usr/local/lib64/proton/bindings/python/proton/__init__.py,
  line 3733
  raise exc, val, tb
   ^
  SyntaxError: invalid syntax
  ERROR: InvocationError: '/home/aconway/proton/proton
  -c/bindings/python/../../../tests/python/proton-test'
  ___ summary
  ___
  ERROR:   py26: InterpreterNotFound: python2.6
  ERROR:   py27: commands failed
  ERROR:   py33: InterpreterNotFound: python3.3
  ERROR:   py34: commands failed
  
 


Re: ProtonJ compilation and test failures

2015-07-06 Thread aconway
On Mon, 2015-07-06 at 16:48 +0100, Gordon Sim wrote:
 On 07/06/2015 04:08 PM, Rafael Schloming wrote:
  Any sort of missing class really should be a compile time 
  exception, which
  I think means you must have stale class files *somewhere*. You 
  could try
  doing a find checkout -name *.class just as a sanity check.
 
 I have deleted all the .class files that are generated in the source 
 tree (and deleted the entire build directory).
 
 The class files are rebuilt for ProtonJInterop, alongside those for 
 InteropTest and JythonTest in 
 ./tests/target/test-classes/org/apache/qpid/proton/:
 
 ProtonJInterop$1.class
 ProtonJInterop.class
 ProtonJInterop$Recv.class
 ProtonJInterop$Send.class
 ProtonJInterop$SendHandler.class
 
 However the test run still reports that it cannot load these.
 
  Also, it's
  possible something in your local maven repo is somehow coming into 
  play,
  maybe blow that away and rebuild it and/or do an mvn install to be 
  sure
  that remove dependencies aren't out of sync with local code?
 
 I removed everything I could find that was proton related from the 
 mvn 
 repository, but that didn't help.
 

Have you tried `rm -rf $HOME/.m2`? Maven stuffs it with class files
from who-knows-where which can lurk for months or years before screwing
up builds in a brand-new checkout. I hate maven. (and cmake, and
automake. I'm not a bigot but you just can't trust build systems I'm
telling you!)



Re: ProtonJ compilation and test failures

2015-07-06 Thread aconway
On Mon, 2015-07-06 at 17:31 +0100, Gordon Sim wrote:
 On 07/06/2015 05:22 PM, aconway wrote:
  On Mon, 2015-07-06 at 16:48 +0100, Gordon Sim wrote:
   On 07/06/2015 04:08 PM, Rafael Schloming wrote:
Any sort of missing class really should be a compile time
exception, which
I think means you must have stale class files *somewhere*. You
could try
doing a find checkout -name *.class just as a sanity check.

Not maven's fault (I still hate maven): missing CLASSPATH in cmake
config. Fixed. 

89fca58 NO-JIRA: Add missing CLASSPATH needed to run python tests in
proton-c/CMakeLists.txt

We could improve the setup by moving all the config.sh paths into cmake
variables and using them consistently in cmake and to generate
config.sh.

Also the fact that tests in proton/test are driven from proton/proton
-c/CMakeLists.txt is odd. If the tests live at top level (which is OK
by me) then the test drivers should too. Especially when (as in this
case) they are pulling code from both proton-c and proton-j.

I'm not doing that right now because it requires thought, maybe later
if nobody else does it.


Re: ruby test failures on master?

2015-06-25 Thread aconway
On Wed, 2015-06-24 at 13:53 -0400, aconway wrote:
 On Wed, 2015-06-24 at 08:10 -0400, Rafael Schloming wrote:
  Is anyone else seeing the ruby tests fail on master?
  
  I've attached the test output.
  
  --Rafael
 
 Seeing the same thing on fedora 22. I had a quick look and it appears
 like somebody started to re-organize the namespaces in the ruby 
 binding
 but did not finish the job. E.g. Qpid::Proton::Data is actually in
 Qpid::Proton::Codec::Data. I had a quick go at fixing it but there 
 are
 a lot of broken references and I'm not sure if it is best to remove 
 the
 extra namespaces (e.g. Codec) or fix the references to them.

Talked to mcpierce and the libs are correct but the tests are out of
date. I'll have a go at sorting this out.

Cheers,
Alan.


Re: ruby test failures on master?

2015-06-25 Thread aconway

f965610 NO-JIRA: Fix broken ruby build. Swig changes may require clean
rebuild. 


On Thu, 2015-06-25 at 11:33 -0400, aconway wrote:
 On Wed, 2015-06-24 at 13:53 -0400, aconway wrote:
  On Wed, 2015-06-24 at 08:10 -0400, Rafael Schloming wrote:
   Is anyone else seeing the ruby tests fail on master?
   
   I've attached the test output.
   
   --Rafael
  
  Seeing the same thing on fedora 22. I had a quick look and it 
  appears
  like somebody started to re-organize the namespaces in the ruby 
  binding
  but did not finish the job. E.g. Qpid::Proton::Data is actually in
  Qpid::Proton::Codec::Data. I had a quick go at fixing it but there 
  are
  a lot of broken references and I'm not sure if it is best to remove 
  
  the
  extra namespaces (e.g. Codec) or fix the references to them.
 
 Talked to mcpierce and the libs are correct but the tests are out of
 date. I'll have a go at sorting this out.
 
 Cheers,
 Alan.


Re: ruby test failures on master?

2015-06-24 Thread aconway
On Wed, 2015-06-24 at 08:10 -0400, Rafael Schloming wrote:
 Is anyone else seeing the ruby tests fail on master?
 
 I've attached the test output.
 
 --Rafael

Seeing the same thing on fedora 22. I had a quick look and it appears
like somebody started to re-organize the namespaces in the ruby binding
but did not finish the job. E.g. Qpid::Proton::Data is actually in
Qpid::Proton::Codec::Data. I had a quick go at fixing it but there are
a lot of broken references and I'm not sure if it is best to remove the
extra namespaces (e.g. Codec) or fix the references to them.


Re: Can we release proton 0.10? Can we add Py3K to that release?

2015-06-22 Thread aconway
On Tue, 2015-06-16 at 23:38 -0400, Rafael Schloming wrote:
 I'd like to get the proton-j-reactor branch into 0.10 also. It should 
 be
 ready soon, so if py3k can be sorted and merged in a similar 
 timeframe we
 could target a release for the end of the month.

The C++ and Go bindings are also close to ready. I would not advocate
delaying the release just for them if there are already key features
that people are asking for, but if we can get them ready in time it
would be good to include them.

 
 --Rafael
 
 On Tue, Jun 16, 2015 at 3:32 PM, Flavio Percoco fla...@redhat.com 
 wrote:
 
  Greetings,
  
  I've been looking with great pleasure all the progress happening in
  proton lately and I was wondering whether it'd be possible to have 
  an
  0.10 release cut soon.
  
  There are some bugfixes I'm personally interested in but also some
  important changes (specifically in the python bindings) that will 
  make
  consuming proton easier for users (OpenStack among those).
  
  Is there a chance for the above to happen any time soon?
  
  Can I push my request a bit further and ask for the py3k code to be
  merged as well?
  
  All the above are key pieces to make proton more consumable and 
  allow
  for services like OpenStack to fully adopt it.
  
  Thanks,
  Flavio
  
  --
  @flaper87
  Flavio Percoco
  


Re: Proton-c Null Messages

2015-06-15 Thread aconway
On Thu, 2015-06-11 at 14:45 +0100, Gordon Sim wrote:
 On 06/11/2015 01:54 PM, aconway wrote:
  On Thu, 2015-06-11 at 13:40 +0100, Gordon Sim wrote:
   If a name field is populated with an empty string, that to me is 
   the
   same as not supplying a name. An empty string is a legal 
   encoding,
   but
   in my view it does not supply a value at all. (It is not like say 
   0
   which may be the default but is clearly a value in its own 
   right).
   
  
  It is exactly like 0, a perfectly legal value that is often abused 
  to
  mean something special.
 
 It is treated as meaning 'empty' i.e. 'there is nothing here'. I 
 don't 
 consider that abuse myself. 
 If a string is a sequence of chars, the 
 empty string is not a sequence of chars, there are no chars specified 
 in 
 sequence, therefore it clearly is 'special'. The distinction between 
 an 
 empty string and null is an artificial one and in my view anything 
 for 
 that relies on that difference to convey something logically 
 significant 
 is poorly designed.
 

0 does not mean there is no number here, even though numbers are for
counting things and 0 means there are no things.  Similarly "" does not
mean there is no string here, it means there is a string and it is
empty. 

Abuse was a poor choice of word - it is fine to assign special
meanings to 0, negative numbers, "" etc. but you have to *say* that in
the API or spec. The spec could have said a delivery tag is a non
-empty binary and I would agree with you, but it just says a delivery
tag is a binary. I also agree that "" is a terrible choice of delivery
tag and we shouldn't ever *use* it (precisely because other people may
not make the distinction) but for interop purposes I think we have to
accept it. Be strict with what you send, forgiving with what you
receive.

 In any case there is no harm in handling an empty string as a valid 
 delivery tag (as long as it is unique). I have no objection to that.
 
 The only part I personally consider important is that proton-c 
 doesn't 
 crash.

Agreed.


Re: Proton-c Null Messages

2015-06-11 Thread aconway
On Wed, 2015-06-10 at 16:32 +0100, Gordon Sim wrote:
 On 06/10/2015 04:01 PM, aconway wrote:
  On Tue, 2015-06-09 at 19:54 +0100, Gordon Sim wrote:
   On 06/09/2015 06:40 PM, logty wrote:
When I run the client I get:

[0x5351db0]:0 - @transfer(20) [handle=0, delivery-id=0, 
delivery-tag=b"",
message-format=0, settled=true, more=true] (16363)
\x00Sp\xc0\x07\x05B...
   
   My guess would be that it is the delivery tag being null (or 
   empty,
   can't tell which) that is the problem. From the spec:
   
 This field MUST be specified for the first transfer of
 a multi-transfer message and can only be omitted for
 continuation transfers. [section 2.7.5]
   
   So I think that whatever is sending that frame has a bug. Proton
   -c
   has a
   bug too of course, since it shouldn't segfault but should close 
   the
   connection with a framing-error or similar.
  
  It says the field must be specified, it does not say it must not be 
  an
  empty binary value. Is the field really missing or is proton 
  choking on
  a 0-length delivery tag?
 
 I'm not sure the distinction between null and an empty value is very 
 useful here. The intent is that the delivery is clearly identified. I 
 
 would argue that a 'zero byte identifier' doesn't meet the spirit of 
 the 
 law there.

I disagree. An empty string is a perfectly legal value for a string. If
the spec wants to assign special meaning to particular values of a
property that needs to be stated. Of course, like you, I personally
would not use an empty string as an identifier but as an implementor of
an inter-operable spec I think we have to take the large view: *any*
legal value of a parameter has to be considered equal unless the spec
clearly states otherwise.

  It shouldn't, which might explain why rabbit is OK with
  it.
 
 I don't think RabbitMQ is ever seeing that frame. I believe that 
 frame 
 is emitted by ApolloMQ to the receiving client.

Rabbit, Apollo, whoever. If somebody is using the empty string as a
delivery tag and the spec does not clearly state you must never use
the empty string as a delivery tag then we should accept it.

 I agree that proton should not choke on a zero byte delivery tag (or 
 indeed on a non-existent delivery tag). But I do think it's a bug to 
 send such a frame.

Quote me the spec, this is a matter of law not opinion ;)




Re: Proton-c Null Messages

2015-06-11 Thread aconway
On Thu, 2015-06-11 at 13:40 +0100, Gordon Sim wrote:
 On 06/11/2015 01:11 PM, aconway wrote:
  I disagree. An empty string is a perfectly legal value for a 
  string. If
  the spec wants to assign special meaning to particular values of a
  property that needs to be stated. Of course, like you, I personally
  would not use an empty string as an identifier but as an 
  implementor of
  an inter-operable spec I think we have to take the large view: 
  *any*
  legal value of a parameter has to be considered equal unless the 
  spec
  clearly states otherwise.
 
 If a name field is populated with an empty string, that to me is the 
 same as not supplying a name. An empty string is a legal encoding, 
 but 
 in my view it does not supply a value at all. (It is not like say 0 
 which may be the default but is clearly a value in its own right).
 

It is exactly like 0, a perfectly legal value that is often abused to
mean something special. "" is a legal string literal in every language
I know of. It can be used as a key in a map or hash table. It can be
compared with other strings. There is no string operation in any
language I know that will throw NotAString if you apply it to "".

 However, from the practical point of view...

This is very practical. Interoperability is about agreeing on a type
system. A type defines a range of legal values. The AMQP type system
includes empty string and 0-length binaries as legal values for those
types. We absolutely cannot treat any legal value in an exceptional way
unless that is clearly mandated by the spec.

 [...]
  Quote me the spec, this is a matter of law not opinion ;)
 
 I suspect that the sending of an empty string for a multi-frame 
 message 
 is entirely unintentional on the part of Apollo. I suspect it is a 
 bug 
 in Apollo or in the proton-j version/path it uses. That should be 
 confirmed and an appropriate JIRA raised

That's fine for Apollo but irrelevant for proton. The first law of
interoperability is be strict with what you send, be forgiving with
what you receive. To me that means that we should never *send* an
empty delivery tag, but we should accept one unless the spec clearly
states that it is illegal for anyone to ever send one. I see no such
clear statement.


 
 Proton-c should also not crash on receiving an empty (or null) 
 delivery 
 id. Beyond that I'm not overly concerned how it handles the empty 
 string 
 case.


C++ binding stream encoder/decoder - review requested.

2015-06-11 Thread aconway
The new commits are on the pull request
 https://github.com/apache/qpid-proton/pull/35/commits 

PROTON-865: Stream like Encoder/Decoder and AMQP Value type for C++
binding.

See Encoder.h, Decoder.h, Value.h for details.
Encoder/Decoder use operator << and >> to encode/decode AMQP values.
Value is a variant-like type that can hold an arbitrary AMQP value.

Simple types are implemented, complex types coming next.
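
For anyone who hasn't seen the idiom, this toy (not the proton API, just
the shape) shows what "stream like" means here: values go in with << and
come back out with >>.

    #include <cassert>
    #include <sstream>
    #include <string>

    class Encoder {
      public:
        template <class T> Encoder& operator<<(const T& v) { out_ << v << '\n'; return *this; }
        std::string data() const { return out_.str(); }
      private:
        std::ostringstream out_;
    };

    class Decoder {
      public:
        explicit Decoder(const std::string& s) : in_(s) {}
        template <class T> Decoder& operator>>(T& v) { in_ >> v; return *this; }
      private:
        std::istringstream in_;
    };

    int main() {
        Encoder e;
        e << 42 << std::string("hello");
        Decoder d(e.data());
        int i = 0; std::string s;
        d >> i >> s;
        assert(i == 42 && s == "hello");
        return 0;
    }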




Re: C++ binding naming conventions: Qpid vs. C++

2015-06-10 Thread aconway
On Tue, 2015-06-09 at 19:56 +0100, Gordon Sim wrote:
 On 06/09/2015 07:47 PM, aconway wrote:
  C++ standard library uses lowercase_and_underscores, but Qpid C++
  projects to date use JavaWobbleCaseIndentifiers. Is the C++ binding 
  the
  time to start writing C++ like C++ programmers? Or will somebody's 
  head
  explode if class names start with a lower case letter?
  
  In particular since the proton C library is written in typical
  c_style_with_underscores, I am finding the CamelCase in the C++ 
  binding
  to be an ugly clash.
 
 I agree and I would go with underscores (and I'm largely responsible 
 for 
 the poor choice in qpid-cpp, sorry!).
 

Woo-hoo! From the horse's mouth ;) Anyone know a good C++ de-camelcasing
script?




Re: C++ binding naming conventions: Qpid vs. C++

2015-06-10 Thread aconway
On Wed, 2015-06-10 at 09:41 -0400, Chuck Rolke wrote:
 The .NET binding on top of Qpid C++ Messaging library had the same 
 problem.
 cjansen suggested that the binding present a naming convention 
 consistent
 with what the binding users might expect. So that binding did not 
 simply
 copy all the C++ function and variable names but renamed them along 
 the way.
 
 If you do a one-to-one mapping it's sometimes easier to see what 
 exactly
 the function and variable mapping is. When stuff is renamed it's 
 harder.
 
 You are so early in the dev cycle that you can be consistent in 
 whatever
 form you choose.

Yup. C++ does not have such strong naming traditions as some languages
since it sort of grew by accident and misadventure out of C and
originally did not have any standard library written in C++ to provide
an example. However these days there is a large and widely used std
library with a clear naming convention so I'm strongly tempted to go that
way.

 
 - Original Message -
  From: aconway acon...@redhat.com
  To: proton proton@qpid.apache.org
  Sent: Tuesday, June 9, 2015 2:47:06 PM
  Subject: C++ binding naming conventions: Qpid vs. C++
  
  C++ standard library uses lowercase_and_underscores, but Qpid C++
  projects to date use JavaWobbleCaseIndentifiers. Is the C++ binding 
  the
  time to start writing C++ like C++ programmers? Or will somebody's 
  head
  explode if class names start with a lower case letter?
  
  In particular since the proton C library is written in typical
  c_style_with_underscores, I am finding the CamelCase in the C++ 
  binding
  to be an ugly clash.
  
  DoesAnybodyReallyThinkThis is_easier_to_read_than_this?
  
  Cheers,
  Alan.
  


Re: something rotten in the state of... something or other

2015-06-10 Thread aconway
On Tue, 2015-06-09 at 17:38 +0100, Robbie Gemmell wrote:
 I'm not seeing that currently, but I have seen similar sort of things
 a couple of times in the past.
 
 As you mention, some files get created in the source tree (presumably
 by or due to use of Jython), outwith the normal build areas they 
 would
 be (which would lead to them being cleaned up), and I think that is
 part of the problem sometimes. If the shim, binding or test files get
 updated in certain ways, some bits can become out of sync, leading to
 the type of issue you saw.
 
 CI doesn't see the issue as it blows away all unversioned files before
 each update. If I see it locally I've just used git-clean to tidy up
 my checkout. I'm not sure what we can do otherwise except put
 something together that targets all the extraneous files and removes
 them.

Better to fix dependencies so things get rebuilt properly than to
simply blow them away. Could it be broken swig dependencies leaving out
of date .py files around? I've noticed before that swig does not always
get re-run when it should be.

 Robbie
 
 On 9 June 2015 at 16:57, Gordon Sim g...@redhat.com wrote:
  I've recently started seeing errors[1] when running tests due to 
  left over
  artefacts of previous builds. This happens even for a completely 
  clean build
  directory, as some of the offending artefacts seem to be created in 
  the
  source tree.
  
  Jython seems to be trying and failing to load cproton. With a 
  completely
  clean source and build tree, everything passes, but it is kind of 
  annoying
  to have to rely on that. Is anyone else seeing anything similar? 
  Any ideas
  as to the cause (I've only seen it happening quite recently) or 
  possible
  cures?
  
  
  [1]:
  
   ---
T E S T S
   ---
   Running org.apache.qpid.proton.InteropTest
   Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
   0.119 sec
   Running org.apache.qpid.proton.JythonTest
   2015-06-09 16:49:29.705 INFO About to call Jython test script:
   '/home/gordon/projects/proton-git/tests/python/proton-test' with
   '/home/gordon/projects/proton-git/tests/python' added to Jython 
   path
   Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
   5.207 sec <<< FAILURE!
   test(org.apache.qpid.proton.JythonTest)  Time elapsed: 5.203 sec  <<< FAILURE!
   java.lang.AssertionError: Caught PyException on invocation number 
   2:
   Traceback (most recent call last):
 File /home/gordon/projects/proton-git/tests/python/proton
   -test, line
   616, in module
   m = __import__(name, None, None, [dummy])
 File
   /home/gordon/projects/proton
   -git/tests/python/proton_tests/__init__.py,
   line 20, in module
   import proton_tests.codec
 File
   /home/gordon/projects/proton
   -git/tests/python/proton_tests/codec.py, line
   20, in module
   import os, common, sys
 File
   /home/gordon/projects/proton
   -git/tests/python/proton_tests/common.py, line
   26, in module
   from proton import Connection, Transport, SASL, Endpoint, 
   Delivery,
   SSL
 File
   /home/gordon/projects/proton-git/tests/../proton
   -c/bindings/python/proton/__init__.py,
   line 33, in module
   from cproton import *
 File
   /home/gordon/projects/proton-git/tests/../proton
   -c/bindings/python/cproton.py,
   line 29, in module
   import _cproton
   ImportError: No module named _cproton
with message: null
   at org.junit.Assert.fail(Assert.java:93)
   at
   org.apache.qpid.proton.JythonTest.runTestOnce(JythonTest.java:120
   )
   at 
   org.apache.qpid.proton.JythonTest.test(JythonTest.java:95)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
   Method)
   at
   sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorI
   mpl.java:57)
   at
   sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodA
   ccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at
   org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(Frame
   workMethod.java:45)
   at
   org.junit.internal.runners.model.ReflectiveCallable.run(Reflectiv
   eCallable.java:15)
   at
   org.junit.runners.model.FrameworkMethod.invokeExplosively(Framewo
   rkMethod.java:42)
   at
   org.junit.internal.runners.statements.InvokeMethod.evaluate(Invok
   eMethod.java:20)
   at 
   org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at
   org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4Clas
   sRunner.java:68)
   at
   org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4Clas
   sRunner.java:47)
   at 
   org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at 
   org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at
   

Re: Proton-c Null Messages

2015-06-10 Thread aconway
On Tue, 2015-06-09 at 19:54 +0100, Gordon Sim wrote:
 On 06/09/2015 06:40 PM, logty wrote:
  When I run the client I get:
  
  [0x5351db0]:0 - @transfer(20) [handle=0, delivery-id=0, delivery
  -tag=b"",
  message-format=0, settled=true, more=true] (16363) 
  \x00Sp\xc0\x07\x05B...
 
 My guess would be that it is the delivery tag being null (or empty, 
 can't tell which) that is the problem. From the spec:
 
  This field MUST be specified for the first transfer of
  a multi-transfer message and can only be omitted for
  continuation transfers. [section 2.7.5]
 
 So I think that whatever is sending that frame has a bug. Proton-c 
 has a 
 bug too of course, since it shouldn't segfault but should close the 
 connection with a framing-error or similar.

It says the field must be specified, it does not say it must not be an
empty binary value. Is the field really missing or is proton choking on
a 0-length delivery tag? It shouldn't, which might explain why rabbit is OK with
it.

  And then the segfault occurs when transfering a 5 MB message, and 
  it is only
  coming through as this 16 KB message.
 


Re: [Resending] - Proton-J engine and thread safety

2015-06-10 Thread aconway
On Wed, 2015-06-10 at 14:18 +, Kritikos, Alex wrote:
 Hi Alan,
 
 thanks for your response. We also use an engine per connection 
 however there are different read and write threads interacting with 
 it and the issues only occur under load.
 I guess we should try to create a repro case.

You need to serialize the read and write threads, the engine is not
safe for concurrent use at all. My blabbering about read and write
concurrency may have misled you. 

You could simply mutex-lock the engine in your read/write threads.
Depending on what else you are doing beware contention and deadlocks.

The C++ (and I think Java) brokers handle this in the poller: we take
the FD out of the poller on a read or write event, do the relevant
proton work, then put it back to get further events. That way you can't
have concurrent read/write on an engine.
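
In C++ terms, just to show the pattern (the Engine type and its methods
here are placeholders, not proton-j or proton-c API), the serialization
looks something like this:

    #include <cstddef>
    #include <mutex>

    struct Engine {                                  // placeholder for a per-connection engine
        void process_input(const char*, std::size_t) {}
        void process_output() {}
    };

    class GuardedEngine {
      public:
        void on_read(const char* buf, std::size_t n) {
            std::lock_guard<std::mutex> l(lock_);    // read thread takes the lock
            engine_.process_input(buf, n);
        }
        void on_writable() {
            std::lock_guard<std::mutex> l(lock_);    // write thread takes the same lock
            engine_.process_output();
        }
      private:
        std::mutex lock_;                            // one lock per connection, never shared across engines
        Engine engine_;
    };

The lock is per connection, so separate connections still run in
parallel; only calls into the same engine are serialized.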



 
 Thanks,
 
 Alex Kritikos
 Software AG
 On 10 Jun 2015, at 16:50, aconway acon...@redhat.com wrote:
 
  On Wed, 2015-06-10 at 09:34 +, Kritikos, Alex wrote:
   [Resending as it ended up in the wrong thread]
   
   Hi all,
   
   is the proton-j engine meant to be thread safe?
  
  The C engine is definitely NOT meant to be thread safe. Unless you 
  have
  found an explicit written declaration that the java engine is 
  supposed
  to be AND a bunch of code to back that up I wouldn't rely on it.
  
  The way we use proton in the C++ broker and in the upcoming Go 
  binding
  is to create an engine per connection and serialize the action on 
  each
  connection. In principle you can read and write from the OS 
  connection
  concurrently but it's debatable how much you gain, you are likely 
  just
  moving OS buffers into app buffers which is not a big win.
  
  The inbound and outbound protocol state *for a single connection* 
  is
  pretty closely tied together. Proton is probably taking the right
  approach by assuming both are handled in a single concurrency 
  context.
  
  The engine state for separate connections is *completely 
  independent*
  so it's safe to run engines for separate connections in separate
  contexts.
  
  The recent reactor extensions to proton are interesting but not 
  thread
  friendly. They force the protocol handling for multiple connections
  into a single thread context, which is great for single threaded 
  apps
  but IMO the wrong way to go for concurrent apps.
  
  The go binding uses channels to pump data from connection 
  read/write
  goroutines to a proton engine event loop goroutine per connection. 
  The
  C++ broker predates the reactor and does it's own polling with
  read/write activity on an FD handled dispatched sequentially to 
  worker
  threads so the proton engine for a connection is never used
  concurrently.
  
  There may be something interesting we can do at the proton layer to
  help with this pattern or it may be better to leave concurrency 
  above
  the binding to be handled by the languages own concurrency tools, I 
  am
  not sure yet.
  
  
   We have been experiencing some sporadic issues where under load, 
   the
   engine sends callbacks to registered handlers with null as the 
   event.
   We do not have a standalone repro case yet but just wondered what
   other people’s experience is as well as what are the 
   recommendations
   around thread safety.
   
   Thanks,
   
   Alex Kritikos
   Software AG
   
 
 


C++ binding naming conventions: Qpid vs. C++

2015-06-09 Thread aconway
C++ standard library uses lowercase_and_underscores, but Qpid C++
projects to date use JavaWobbleCaseIndentifiers. Is the C++ binding the
time to start writing C++ like C++ programmers? Or will somebody's head
explode if class names start with a lower case letter?

In particular since the proton C library is written in typical
c_style_with_underscores, I am finding the CamelCase in the C++ binding
to be an ugly clash.

DoesAnybodyReallyThinkThis is_easier_to_read_than_this?

Cheers,
Alan.


Re: has someone successfully built proton-c for iOS?

2015-06-08 Thread aconway
On Sun, 2015-06-07 at 21:28 -0700, yf wrote:
 We are interested in using Proton-C library in an iOS app so that 
 data can be
 sent to Qpidd server from iOS app.
 
 However, it seems that we can hardly find relevant materials for 
 using
 Proton-C in an iOS app? We find some Proton introduction slides 
 mentioned
 that proton-c is designed with portability in mind and iOS and 
 Android are
 listed as the possible platforms to run Proton. 
 

I'm not aware of anyone having done that yet, but proton was designed
with portability in mind and should be able to run on iOS. Hopefully
the folks on this list can help you out with any problems you find, and
if you do write up the lessons you learn it would be a great addition
to the proton
documentation.

Cheers,
Alan.


Re: C++ versions for the C++ binding

2015-06-08 Thread aconway
On Mon, 2015-06-08 at 09:27 +0200, Tomáš Šoltys wrote:
 Hi,
 
 since I need to compile proton on OpenVMS which does not support 
 C++11 nor
 C++14 I would be more happy with C++03.
 
Thanks, I suspected there would be people out there :)
So I'll go for plain ole C++03 to start with, if I add anything more
recent it will be #ifdef'ed based on version.
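
Something like this is what I mean by #ifdef'ed; PN_CPP11 is a
hypothetical macro name, not anything in the tree:

    #if defined(__cplusplus) && __cplusplus >= 201103L
    #define PN_CPP11 1
    #endif

    #include <memory>

    struct foo { static foo* create() { return new foo(); } int x; };

    int main() {
    #ifdef PN_CPP11
        std::unique_ptr<foo> p(foo::create());   // C++11 and later
    #else
        std::auto_ptr<foo> p(foo::create());     // plain C++03 fallback (e.g. OpenVMS)
    #endif
        p->x = 1;
        return 0;
    }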

Cheers,
Alan.


C++ versions for the C++ binding

2015-06-05 Thread aconway
I think this has been discussed before, but is there a consensus on
what C++ version we want to target for the C++ binding? C++03 is old
and smelly. C++11 is better, C++14 better still and afaik any compiler
that does 11 will do 14. So the sensible choices would seem to be:

1. Smelly C++03 code only
2. C++03 compatible with #ifdef optional bits of C++14 niceness.
3. C++14 first, add C++03 later if somebody whines about it.

3. is the most fun for me obviously.




Re: question about proton error philosophy

2015-06-03 Thread aconway
On Mon, 2013-09-16 at 13:23 -0400, Rafael Schloming wrote:
 FYI, as of 0.5 you should be able to use 
 pn_messenger_error(pn_messenger_t
 *) to access the underlying error object (pn_error_t *) and clear it 
 if an
 error has occurred.
 

Making the application clear errno is pretty standard Unix practice
too. I think the justification is that only the application can decide
when a given error has been resolved, so only the application should
clear errno. The library should not hide an error that the
application has not explicitly indicated it is done with. If library
functions call each other etc. the risk of losing an error is too great
if they all start by assuming all OK.

It is philosophically questionable but C is not a very philosophical
language.
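
In messenger terms the pattern that follows (a sketch against the C API
as I understand it; treat the exact calls as assumptions) is: check the
error object after a call, and clear it only once the application has
dealt with it.

    #include <proton/error.h>
    #include <proton/messenger.h>
    #include <stdio.h>

    int main(void) {
        pn_messenger_t* m = pn_messenger(NULL);
        pn_messenger_start(m);
        /* ... calls that may fail ... */
        pn_error_t* err = pn_messenger_error(m);
        if (pn_error_code(err)) {
            fprintf(stderr, "messenger error: %s\n", pn_error_text(err));
            pn_error_clear(err);   /* only the application decides when the error is handled */
        }
        pn_messenger_stop(m);
        pn_messenger_free(m);
        return 0;
    }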


 --Rafael
 
 On Mon, Sep 16, 2013 at 12:40 PM, Michael Goulish 
 mgoul...@redhat.comwrote:
 
  
  No, you're right.
  
  "errno is never set to zero by any system call or library 
  function"
  ( That's from Linux doco. )
  
  OK, I was just philosophically challenged.
  I think what confused me was the line in the current Proton C doc 
  (about
  errno) that says "an error code or zero if there is no error".
  I'll just remove that line.
  
  OK, I withdraw the question.
  
  
  ( I still don't like this philosophy, but the whole world is using 
  it, and
  the whole world is bigger than I am... )
  
  
  
  
  - Original Message -
  Do other APIs reset the errno?  I could have sworn they didn't.
  
  On Mon, Sep 16, 2013 at 12:01 PM, Michael Goulish 
  mgoul...@redhat.com
  wrote:
   
   I was expecting errno inside the messenger to be reset to 0 at 
   the end
  of any successful API call.
   
   It isn't: instead it looks like the idea is that errno preserves 
   the
  most recent error that happened, regardless of how long ago that 
  might be.
   
   Is this intentional?
   
   I am having a hard time understanding why we would not want errno 
   to
  always represent the messenger state as of the completion of the 
  most
  recent API call.
   
   
   I would be happy to submit a patch to make it work this way, and 
   see
  what people think - but not if I am merely exhibiting my own
  philosophical ignorance here.
   
   
  
  
  
  --
  Hiram Chirino
  
  Engineering | Red Hat, Inc.
  
  hchir...@redhat.com | fusesource.com | redhat.com
  
  skype: hiramchirino | twitter: @hiramchirino
  
  blog: Hiram Chirino's Bit Mojo