Re: [java] Message codec improvements

2014-12-15 Thread Clebert Suconic
I will need to recover my old-branch to give you some pointers about it…

I - A lot of the codecs create plenty of intermediate objects…


Pseudo-example on the message codec:  List… decodedProperties…


Message message = new Message(list.get(0), list.get(1)… );


This could be done directly at the wire layer without requiring the intermediate object; those allocations put pressure on the GC.
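As a rough illustration of the allocation concern (everything here is hypothetical: the real Proton codec does not look like this, and the two-field Message is invented for the sketch), compare decoding through an intermediate List against reading the fields straight off the wire buffer:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical two-field message; a real codec handles far more.
final class Message {
    final int id;
    final long timestamp;
    Message(int id, long timestamp) { this.id = id; this.timestamp = timestamp; }
}

final class CodecSketch {
    // Allocation-heavy style: decode into an intermediate List first.
    // Each field is boxed (Integer/Long) and the List itself becomes garbage
    // the moment the Message is built - this is the GC pressure.
    static Message decodeViaList(ByteBuffer wire) {
        List<Object> decoded = new ArrayList<>();
        decoded.add(wire.getInt());   // boxes to Integer
        decoded.add(wire.getLong());  // boxes to Long
        return new Message((Integer) decoded.get(0), (Long) decoded.get(1));
    }

    // Direct style: read the fields straight off the wire buffer.
    // No List, no boxing, no intermediate garbage.
    static Message decodeDirect(ByteBuffer wire) {
        return new Message(wire.getInt(), wire.getLong());
    }
}
```

Both produce the same Message; the second simply skips the throwaway allocations.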




There’s also a case now where having SASL increases response time (having SASL requires an intermediate buffer, I believe)…






And there are other cases that go beyond just the codec… but I have more abstract concerns at this point, which we could probably talk over on the phone, or in a dedicated session over IRC.






I have this transient branch containing some of the micro-benchmarks and some proposed changes I had a while ago. It’s probably not in a working state… but it would give you an idea of what I was aiming for.


https://github.com/clebertsuconic/qpid-proton/tree/old-work

Re: Fixing the merge commit in the git repo?

2014-12-12 Thread Clebert Suconic

 On Dec 12, 2014, at 9:38 AM, Darryl L. Pierce dpie...@redhat.com wrote:
 
 On Fri, Dec 12, 2014 at 01:43:45PM +, Gordon Sim wrote:
 On 12/12/2014 12:16 PM, Darryl L. Pierce wrote:
 I like the idea of pull requests and explicit peer reviews for changes.
 But it's above my pay grade to do anything more than envy such a work
 flow. :D
 
 Pay grade isn't relevant on an Apache project.
 
 I was being facetious. I had mentioned pull requests a while back and
 got the response that (at least then) it wasn't being considered.


In my experience there is no way to work properly with git without PRs and peer review… but that’s up to you guys.


This has been working very well for us… it has even increased our productivity instead of being an extra step…


For instance… I just caught an error from Andy Taylor on this PR: 
https://github.com/apache/activemq-6/pull/43 … he then fixed it, and now we are ready 
to merge.


You could even set up automated builds for PRs, and it’s quite easy to do. No more 
breaking the build, for instance!

Re: Fixing the merge commit in the git repo?

2014-12-12 Thread Clebert Suconic

 On Dec 12, 2014, at 9:57 AM, Darryl L. Pierce dpie...@redhat.com wrote:
 
 On Fri, Dec 12, 2014 at 09:48:05AM -0500, Clebert Suconic wrote:
 In my experience there is no way to proper work with git without PRs and 
 peer review.. but that’s up to you guys.

 
 I feel like my comment, which was just a joke, might have been taken the
 wrong way. For that I apologize…


Not at all… I was responding to your later point that PRs had been considered and 
decided against.


Anyway… I’m just trying to help you guys and bring you our point of view, since we 
have been using git longer than you have…

You guys know where to find me if you want to have a chat about it.

Re: Fixing the merge commit in the git repo?

2014-12-12 Thread Clebert Suconic

 On Dec 12, 2014, at 11:44 AM, Gordon Sim g...@redhat.com wrote:
 
 On 12/12/2014 02:38 PM, Darryl L. Pierce wrote:
 On Fri, Dec 12, 2014 at 01:43:45PM +, Gordon Sim wrote:
 On 12/12/2014 12:16 PM, Darryl L. Pierce wrote:
 I like the idea of pull requests and explicit peer reviews for changes.
 But it's above my pay grade to do anything more than envy such a work
 flow. :D
 
 Pay grade isn't relevant on an Apache project.
 
 I was being facetious.
 
 I was being pompous.

Is it Friday yet? 

Better: is it Xmas break yet?


Re: Fixing the merge commit in the git repo?

2014-12-12 Thread Clebert Suconic

 
 I agree and disagree  with this simplistic position.
 
 If (and only if) your task branch is yours and yours alone and no one
 has ever relied on it then you can safely rebase it. Actually I find
 that rebasing is a lot more useful to get my commits in a logical
 sequence of smaller working commits (by using rebase -i).
 
+1 … Yes, as I said earlier, you can rebase your own branch as much as you want.

 On the other hand merging topic branches into master is also perfectly
 sensible to finish off work on a topic branch especially a long lived
 one.

+1. That’s why I’m suggesting the PR approach.

 
 A long lived topic branch will necessarily have some level of merging
 from master to keep it up to date.
 
 On the other hand I agree that merging from master just before merging
 to master is irritating and pointless.

The apache master can’t be rebased… period! Work as you wish in your topic 
branch, your master, or whatever you choose… but be careful when you push to 
apache/master: once it’s there, you can’t roll back unless you ask for special 
privileges from Infra. If you ever need a push -f to fix a mistake or something, 
you will need Infra to do it.

Re: Fixing the merge commit in the git repo?

2014-12-12 Thread Clebert Suconic

 On Dec 12, 2014, at 2:44 PM, Andrew Stitcher astitc...@redhat.com wrote:
 
 On Fri, 2014-12-12 at 14:21 -0500, Clebert Suconic wrote:
 ..
 On the other hand I agree that merging from master just before merging
 to master is irritating and pointless.
 
 The apache master can’t be rebased… Period!  Work as you wish in your topic 
 branch, your master or whatever you choose… but be careful when you upload 
 to apache/master.. once it’s there you can’t rollback.. unless you ask 
 special privileges from Infra. If you ever need a push -f for a mistake or 
 something.. you will need Infra to do it.
 
 I think you are misinterpreting what I said (not that it matters, but it
 is Friday) - merging from master doesn't imply any sort of rebasing. In
 some recent merges to master there is an immediately previous merge from
 master to the topic branch, that is what is unnecessary - because you
 only need to merge to your local master then push that.

Sure, of course… I was just pointing out that you can’t push -f on apache/master… 
so any rebasing is limited to before you push to the apache copy of master.

Re: Fixing the merge commit in the git repo?

2014-12-11 Thread Clebert Suconic
Rebasing and pushing is not a good option IMO. We have been using pull requests 
from GitHub and pushing them through Apache. It's working very well for us. 

Committing directly to Apache may get you these issues. 


We can provide you guys more information on how we are doing on activemq6 if 
you are interested. 


 On Dec 11, 2014, at 16:13, Darryl L. Pierce dpie...@redhat.com wrote:
 
 On Thu, Dec 11, 2014 at 02:35:05PM -0500, Rafael Schloming wrote:
 Can you provide a bit more detail? I'm not the most expert git user, so I'm
 not sure exactly what you're asking for, much less how to do it.
 
 In my case, my usual workflow is to do small changes and commit them
 locally. If a change goes with a commit a few back I'll do:
 
  $ git commit -a --fixup=[early commit's hash]
 
 Then, when I'm ready to commit I'll do:
 
  $ git rebase -i HEAD~[# of commits to process] --autosquash
 
 and git will put the commits in the right order so I can squash them
 down.
 
 Normally I just do HEAD~25 rather than counting the commits since we
 usually had a nice, linear stream of commits in our old repo that was
 cloned from Subversion. But when I did it this time that the commits
 overlapped with this merge commit:
 
 ===[snip]===
 commit e8029597b825e49ee5ab2b7c8bd5d042e57321bb
 Merge: db437cf a5cb27e
 Author: Rafael Schloming r...@alum.mit.edu
 Date:   Fri Nov 28 08:43:03 2014 -0500
 
Merge branch 'javascript'
 ===[snip]===
 
 This made git try to rebase all of those changes, which numbered up to
 about 60 or so commits that are out of sequence. So I had to abort that
 rebase, count the commits instead, and do it again.
 
 When merging branches, etc. we can avoid this if, before merging branch
 A into master we go to branch A and rebase it on master. That way we
 avoid merge commits like this.
 
 To fix it, you would need to:
 
 1. git rebase -i HEAD~[# of commits to the merge above]
 2. when it pops up the list of commits, just save (you're not going to
   re-order them at that point)
 3. check for any merge commits, dual adds or deletes with git status
 4. fix any issues that you see: fix merges, etc.
 5. git add (or del) files as needed
 6. git rebase --continue
 7. go to 3 as needed until all is fixed
 
 -- 
 Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc.
 Delivering value year after year.
 Red Hat ranks #1 in value among software vendors.
 http://www.redhat.com/promo/vendor/
 


Re: Fixing the merge commit in the git repo?

2014-12-11 Thread Clebert Suconic
We take a slightly different approach. If you guys are doing git, I really think 
you should consider it.
 1. Create a task branch: git checkout -b my-working-branch
 2. Create that voodoo that you do
 3. When done, rebase your task branch on master: git rebase -i master
 4. Push to your GitHub fork
 5. Send a GitHub pull request towards the GitHub fork of the apache project
 6. Someone else (yeah… someone who was not the original committer) should merge it after review
 7. That someone else will do: git merge --no-ff github/pr/PRNumber
    (We elected to have merge commits with our commits, so we always have two people to blame in case things go wrong.)

If you elect not to have the merge commit like we have, you can simply use git 
pull… and that should preserve the hashes of the original commits.

 8. The apache bot will close the PR, unless you rebased the commits (which you 
are not supposed to do)



Why do we do that?


I - to avoid merge errors like the one that just happened
II - it grows the community
III - we have a theory that everyone will do stupid things eventually... this 
is an extra layer of protection :)


You can look at our README for more information. We are still getting started 
with this, and we have based it on how other apache projects are using git and 
GitHub:


https://github.com/apache/activemq-6/blob/master/README.md






Re: Fixing the merge commit in the git repo?

2014-12-11 Thread Clebert Suconic

 On Dec 11, 2014, at 5:11 PM, Darryl L. Pierce dpie...@redhat.com wrote:
 
 On Thu, Dec 11, 2014 at 04:16:29PM -0500, Clebert Suconic wrote:
 Rebasing and pushing is not a good option IMO. We have been using pull 
 requests from GitHub and pushing them through Apache. It's working very well 
 for us. 
 
 Committing directly to Apachea may get you these issues. 
 
 We can provide you guys more information on how we are doing on activemq6 if 
 you are interested. 
 
 Additionally, while working on a task branch, to resynch with master do
 a rebase:
 
 $ rebase -i master
 
 rather than merging master down onto your task branch. I saw a *lot* of
 that while examining the merge commits. Rebasing is by far one of the
 most awesome features of git.

Right, but you can’t ever push -f on an apache branch. You can rebase as much as 
you like, and it’s awesome, I agree…


But others would lose their references if you rebased and pushed -f; that’s why 
it’s forbidden on the apache git.

And that’s why we are very careful when merging commits.






[jira] [Commented] (PROTON-762) Javascript tests run when emscripten is not installed (and fail)

2014-12-01 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14230092#comment-14230092
 ] 

clebert suconic commented on PROTON-762:


Couldn't you commit a compiled version and only replace it in an optional 
step?


That way the javascript files would be there for anyone to download… and 
developers could always rebuild them?


Like storing it somewhere other than target?


That's just an idea!

 Javascript tests run when emscripten is not installed (and fail)
 

 Key: PROTON-762
 URL: https://issues.apache.org/jira/browse/PROTON-762
 Project: Qpid Proton
  Issue Type: Bug
Reporter: Andrew Stitcher

 I would expect the tests to not be run at all if there was no emscripten on 
 the machine to allow compilation to javascript in the first place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: VOTE: Release Proton 0.8 RC5 as 0.8 final

2014-10-29 Thread Clebert Suconic
[X] Yes, release Proton 0.8 RC5 as 0.8 final

I tested 0.8 against the HornetQ trunk. I had to rename a few enums (SESSION_OPEN 
to SESSION_LOCAL_OPEN), but other than that everything worked.




Re: VOTE: Release Proton 0.8 RC4 as 0.8 final

2014-10-23 Thread Clebert Suconic
+1 (non-binding, I guess)


 On Oct 23, 2014, at 12:21 PM, Rafael Schloming r...@alum.mit.edu wrote:
 
 Hi Everyone,
 
 I've put together RC4. This is pretty much the same as RC3 with a number of
 fixes to disable those SSL versions that are vulnerable to attack.
 
 The sources are available here:
 
  - http://people.apache.org/~rhs/qpid-proton-0.8rc4/
 
 Java binaries are here:
 
  - https://repository.apache.org/content/repositories/orgapacheqpid-1020/
 
 Changes since RC3:
 
  - PROTON-724: make sure to pop any pending output in
 pn_transport_close_head()
  - PROTON-720: [Windows IO] fix format specifier to print string
  - added dispatch utility
  - fixed error message
  - fixed Collector.put
  - PROTON-719 : prevent ssl3 connections in Windows with schannel
  - PROTON-717: disable SSLv3
  - PROTON-717: mitigate the CRIME SSL vulnerability
  - PROTON-716: reject connections using SSLv3 - it is insecure
 
 Please check the sources out and register your vote:
 
  [   ] Yes, release Proton 0.8 RC4 as 0.8 final.
  [   ] No, because...
 
 --Rafael



Re: SASL / non SASL connections...

2014-10-15 Thread Clebert Suconic

On Oct 15, 2014, at 11:09 AM, Robbie Gemmell robbie.gemm...@gmail.com wrote:

 What Ted already implemented on the C side would enable what you are
 seeking, allowing the server to be configured (depending on your security
 configuration; some wouldnt want the without-sasl case to ever work, other
 may allow it in no-auth scenarios) to accept either the sasl or non sasl
 header as the first bytes and respond appropriately. The JIRA for the Java
 side just hasnt been implemented yet.
 
 Generating an event to say the sasl header was received might not work that
 well, as you currently need to pump the bytes into the transport for it to
 generate events, and it currently needs to know in advance how to process
 any bytes that come after the header (the client could for example pipeline
 the entire sasl 'negotiation' and well beyond, on assumption the
 authentication or lack thereof will be successfull). Making the sasl
 process interface nicer with the events stuff in general is something I
 would like to see happen, though not something I currently have time to
 look at though.

I thought about the event being generated before the SASL headers were created. The 
event would work if implemented at the proper place.


But if you already have a solution for that… it's all good… thanks Robbie



 
 Robbie
 
 On 15 October 2014 15:15, Clebert Suconic csuco...@redhat.com wrote:
 
 I think you should have an event for SASL_NEGOTATION.. or any name you
 chose.. then you could negotiate it properly. I don't think it would be too
 hard to implement.
 
 
 The clients I'm working don't know how to negotiate ANONYMOUS. All the
 Java clients I'm dealing with now will throw a bad NPE if I don't have this
 behaviour.
 
 
 Should we raise a JIRA?
 
 
 On Oct 15, 2014, at 6:07 AM, Robbie Gemmell robbie.gemm...@gmail.com
 wrote:
 
 Hi Clebert,
 
 As a little extra context for readers...with AMQP 1.0, if the client
 wishes
 to use SASL security it first establishes a SASL layer beginning with the
 AMQP%d3.1.0.0 header, and then if successfull proceed to establish the
 'regular' AMQP connection over it beginning with the AMQP%d0.1.0.0
 header.
 Details/diagrams at:
 
 http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-security-v1.0-os.html#section-sasl
 
 As you know, calling the sasl() method on the Proton transport will make
 it
 add the sasl layer and expect the relevant header for initial input, and
 also emit it when sending its initial output. You are thus concerned with
 how to know whether you should call the sasl() method. Currently you are
 correct that with proton-j you would have to sniff the traffic in order
 to
 support accepting both sasl layered connections and bare connections
 without sasl. The alternative is that you would always call sasl() and
 simply be unable to accept connections that skip the sasl layer, and if
 wishing to support unauthenticated connections for explicit no-auth
 scenarios doing so by either using the ANONYMOUS mechanism or simply
 accepting any credentials supplied via others.
 
 Ted added support to proton-c in 0.8 for a server to be able to add the
 transport sasl layer but also be able to identify the client skipped the
 sasl layer and then optionally allow it to do so by replying with the
 AMQP%d0.1.0.0 header. I stubbed the API for this in proton-j to get the
 test suite back running at all, but the functionality has yet to actually
 be implemented:
 
 https://issues.apache.org/jira/browse/PROTON-642
 https://issues.apache.org/jira/browse/PROTON-655
 
 I think the JMS clients currently expect the server to offer the SASL
 layer, and will negotiate the ANONYMOUS mechanism if that is all the
 server
 offers. I dont believe either client would currently handle the server
 replying with the bare connection header in response to them sending the
 sasl header,  and I don't believe the new client yet supports skipping
 sending the SASL header but it looks like the old one might if provided
 with a null username.
 
 Robbie
 
 
 On 14 October 2014 19:20, Clebert Suconic csuco...@redhat.com wrote:
 
 qpid JMS clients currently expect to send anonymous connection in the
 following way:
 
 
 - if the first frame has the SASL byte 3 (I'm not reading the spec now..
 I'm not sure the correct term), the server is supposed to initialize
 SASL
 on the connection, transport... etc
 In other terms, if the following frame arrives, we need to create SASL
 with the proper capabilities:
 
 414D5150  --03--01
 
 * just as a reference for general audience, 414D5150 == AMQP in ascii
 terms
 
 - if that byte is 0, then the JMS client is expecting to have the
 server's
 session being anonymous.
 
 414D5150 --00-- 01
 
 
 
 With that I have two questions:
 
 - What is the SASL Anonymous for then if clients are expected to dictate
 if they will use SASL or not? I was expecting it being a server's
 directive.. to either use SASL or not?
 
 
 - If you need that capability for sure.. there's

SASL / non SASL connections...

2014-10-14 Thread Clebert Suconic
qpid JMS clients currently expect to establish an anonymous connection in the 
following way:


- if the first frame has the SASL byte 3 (I don't have the spec in front of me… I'm 
not sure of the correct term), the server is supposed to initialize SASL on the 
connection, transport... etc.
In other terms, if the following frame arrives, we need to create SASL with the 
proper capabilities:

414D5150  --03--01  

* just as a reference for the general audience, 414D5150 == AMQP in ASCII

- if that byte is 0, then the JMS client is expecting the server's session to be 
anonymous.

414D5150 --00-- 01



With that I have two questions:

- What is SASL ANONYMOUS for, then, if clients are expected to dictate whether 
they will use SASL? I was expecting it to be a server directive… to either use 
SASL or not.


- If you need that capability for sure… there's currently no way to use Proton 
to determine whether we need SASL. The only way I could find was to inspect 
byte 4 (starting at 0) of the protocol header and initialize SASL accordingly.
  Couldn't (or shouldn't?) we have an event such as SASL_INIT from Proton, and 
then make the proper determination?

Maybe I missed the proper API if there's a way around this already!
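The byte-4 sniffing described above can be sketched like this (HeaderSniffer is an invented helper name for illustration, not a Proton API; the header layout follows the 8-byte AMQP protocol header discussed in this thread):

```java
import java.nio.ByteBuffer;

// Sketch of sniffing the protocol header to decide whether to set up
// SASL. HeaderSniffer is a hypothetical helper, not part of Proton.
final class HeaderSniffer {
    private static final byte[] AMQP = { 'A', 'M', 'Q', 'P' }; // 0x414D5150

    // Looks at the 8-byte protocol header: bytes 0-3 must be "AMQP",
    // byte 4 is the protocol id (3 = SASL layer, 0 = bare AMQP).
    static boolean isSaslHeader(ByteBuffer header) {
        for (int i = 0; i < 4; i++) {
            if (header.get(i) != AMQP[i]) {
                throw new IllegalArgumentException("not an AMQP header");
            }
        }
        return header.get(4) == 3;
    }
}
```

A server could run this on the first eight bytes it reads and only call the transport's sasl() setup when it returns true.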

Re: VOTE: Release Proton 0.8 RC2 as 0.8 final

2014-10-14 Thread Clebert Suconic
+1 (mine is a non-binding vote… but I wanted to register it since we have been 
testing it for a while)


On Oct 14, 2014, at 4:44 PM, Rafael Schloming r...@alum.mit.edu wrote:

 Hi Everyone,
 
 I've put up 0.8 RC2 at the usual places. The source code can be found here:
 
http://people.apache.org/~rhs/qpid-proton-0.8rc2/
 
 Java binaries here:
 
https://repository.apache.org/content/repositories/orgapacheqpid-1016
 
 Notable changes since RC1 are:
  - fix to cmake detection of perl libraries
  - last minute rename of some new events to avoid future API breakage
  - pulled in event dispatch code and proton-j examples from the demo as
 discussed previously on the list
  - PROTON-711
 
 Since none of these changes are particularly destabilizing, I'll be an
 optimist and call for a formal vote on RC2, so please check it out and
 register your vote:
 
 [  ] Yes, release RC2 as Proton 0.8 final
 [  ] No, because of the following issues...
 
 --Rafael



Re: how can I get proton-api version 1.0-SNAPSHOT for api-reconciliation?

2014-10-02 Thread Clebert Suconic
If you want another example you can also look at the Proton integration I'm 
doing at HornetQ:

https://github.com/hornetq/ProtonPlug


I made it pluggable with a minimal interface… I don't think I will keep it as a 
separate library… but it could suit you as an example.



On Oct 2, 2014, at 11:31 AM, Ernest Allen eal...@redhat.com wrote:

 Thanks Robbie. In order to get the proton-jni jar I naively checked out and 
 built the branch named jni-binding and copied the contents of the 
 build/proton-c/bindings/java dir.
 
 Since the api-reconciliation tool is no longer viable, are there any example 
 java apps that use the engine api?
 
 -Ernie
 
 
 - Original Message -
 From: Robbie Gemmell robbie.gemm...@gmail.com
 To: proton@qpid.apache.org
 Sent: Thursday, October 2, 2014 11:14:29 AM
 Subject: Re: how can I get proton-api version 1.0-SNAPSHOT for 
 api-reconciliation?
 
 On 2 October 2014 16:13, Robbie Gemmell robbie.gemm...@gmail.com wrote:
 
 Hi Ernie,
 
 The proton-api module no longer exists, it was merged with proton-j-impl
 to form the current proton-j module, so there are snapshots
 
 
 * no snapshots
 
 
 (which are confusingly named 1.0-SNAPSHOT all the time currently) being
 made for it now. The JNI bits were also removed around the same time.
 
 I'm afraid I never ran the tool that you are trying to use, but I would
 assume that it no longer works given the above. What version are you using
 if you managed to have a jni jar?
 
 Robbie
 
 
 On 1 October 2014 19:16, Ernest Allen eal...@redhat.com wrote:
 
 There is probably a simple solution to this, but I'm trying to run the
 api-reconciliation tool and I'm getting errors.
 
 Here is what I've done:
 
 - built proton
 - did a source config.sh from the build dir
 - switched to the design/api-reconciliation dir
 - ran ./generate-c-functions.sh
 - verified that target/cfunctions.txt exists
 - ran mvn clean install -U
 - ran mvn compile
 - ran mvn exec:java
 
 Here is the output from the exec:
 [INFO] Scanning for projects...
 [INFO]
 [INFO]
 
 [INFO] Building proton-api-reconciliation 1.0-SNAPSHOT
 [INFO]
 
 [INFO]
 [INFO]  exec-maven-plugin:1.2.1:java (default-cli) @
 proton-api-reconciliation 
 [INFO]
 [INFO]  exec-maven-plugin:1.2.1:java (default-cli) @
 proton-api-reconciliation 
 [INFO]
 [INFO] --- exec-maven-plugin:1.2.1:java (default-cli) @
 proton-api-reconciliation ---
 [WARNING] The POM for org.apache.qpid:proton-api:jar:1.0-SNAPSHOT is
 missing, no dependency information available
 [INFO]
 
 [INFO] BUILD FAILURE
 [INFO]
 
 [INFO] Total time: 1.563s
 [INFO] Finished at: Wed Oct 01 14:06:17 EDT 2014
 [INFO] Final Memory: 8M/103M
 [INFO]
 
 [ERROR] Failed to execute goal
 org.codehaus.mojo:exec-maven-plugin:1.2.1:java (default-cli) on project
 proton-api-reconciliation: Execution default-cli of goal
 org.codehaus.mojo:exec-maven-plugin:1.2.1:java failed: Plugin
 org.codehaus.mojo:exec-maven-plugin:1.2.1 or one of its dependencies could
 not be resolved: Failure to find
 org.apache.qpid:proton-api:jar:1.0-SNAPSHOT in
 http://snapshots.repository.codehaus.org was cached in the local
 repository, resolution will not be reattempted until the update interval of
 codehaus.org has elapsed or updates are forced - [Help 1]
 [ERROR]
 [ERROR] To see the full stack trace of the errors, re-run Maven with the
 -e switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR]
 [ERROR] For more information about the errors and possible solutions,
 please read the following articles:
 [ERROR] [Help 1]
 http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
 
 From this I'm assuming that I'm missing the correct proton-api.jar, but
 I'm unclear on how to build/install it. I found a pom.xml for proton-api
 version 0.6, but api-reconciliation is looking for version 1.0-SNAPSHOT.
 When I modify the api-reconciliation pom to use the 0.6 proton.api.jar, it
 crashes.
 
 Any suggestions?
 
 P.S. I should mention that I built the proton-jni jar file separately and
 copied to my build/proton-c/bindings/java directory
 
 Thanks
 -Ernie
 
 
 



Re: how can I get proton-api version 1.0-SNAPSHOT for api-reconciliation?

2014-10-02 Thread Clebert Suconic

On Oct 2, 2014, at 1:01 PM, Bozo Dragojevic bo...@digiverse.si wrote:

 On 2. 10. 14 17:31, Ernest Allen wrote:
 Since the api-reconciliation tool is no longer viable, are there any example 
 java apps that use the engine api?
 
 
 contrib/proton-hawtdispatch is also using the engine.
 

True, but let me point out that HornetQ's ProtonPlug doesn't have any 
HornetQ-specific usage. It has some Netty buffer dependencies, though, as it's 
using a buffer pool in some places. 


 Bozzo



Re: svn commit: r1626329 - in /qpid/proton/trunk: proton-c/ proton-c/bindings/perl/ proton-c/bindings/php/ proton-c/bindings/python/ proton-c/bindings/ruby/ proton-c/include/proton/ proton-c/src/ test

2014-09-25 Thread Clebert Suconic
This is also failing for me when I run the entire mvn install (after a mvn 
clean)… it's not just on the CI for me.



On Sep 25, 2014, at 2:20 PM, Robbie Gemmell robbie.gemm...@gmail.com wrote:

 Failing in a different way now, the overall run of the python tests bombs
 out. I see the same thing locally, where it was working when I made the
 commit immediately before yours :)
 
 https://builds.apache.org/view/M-R/view/Qpid/job/Qpid-proton-j/683/console
 
 Running org.apache.qpid.proton.JythonTest
 Sep 25, 2014 6:03:44 PM org.apache.qpid.proton.JythonTest test
 INFO: About to call Jython test script:
 '/home/jenkins/jenkins-slave/workspace/Qpid-proton-j/trunk/tests/python/proton-test'
 with '/home/jenkins/jenkins-slave/workspace/Qpid-proton-j/trunk/tests/python'
 added to Jython path
 Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.714
 sec  FAILURE!
 test(org.apache.qpid.proton.JythonTest)  Time elapsed: 7.675 sec   FAILURE!
 java.lang.AssertionError: Caught PyException on invocation number 2:
 Traceback (most recent call last):
  File 
 /home/jenkins/jenkins-slave/workspace/Qpid-proton-j/trunk/tests/python/proton-test,
 line 597, in module
h.scan(m)
  File 
 /home/jenkins/jenkins-slave/workspace/Qpid-proton-j/trunk/tests/python/proton-test,
 line 587, in scan
if not (child in self.scanned or child in objects):
 TypeError: object of type 'object' has no len()
 with message: null
   at org.junit.Assert.fail(Assert.java:93)
   at org.apache.qpid.proton.JythonTest.runTestOnce(JythonTest.java:120)
   at org.apache.qpid.proton.JythonTest.test(JythonTest.java:95)
 
 
 On 25 September 2014 18:59, Alan Conway acon...@redhat.com wrote:
 
 On Thu, 2014-09-25 at 15:59 +0100, Robbie Gemmell wrote:
 On 25 September 2014 15:00, Alan Conway acon...@redhat.com wrote:
 
 On Wed, 2014-09-24 at 12:19 +0100, Robbie Gemmell wrote:
 The tests are now running again, but a couple of the URL tests still
 seem
 to be failing on the CI job:
 
 
 https://builds.apache.org/view/M-R/view/Qpid/job/Qpid-proton-j/lastBuild/org.apache.qpid$proton-tests/testReport/
 
 
 They are all failing with:
 Not a valid port number or service name: 'amqps'
 
 Could this be a configuration problem on the CI machine, i.e. missing
 an
 'amqps' entry in /etc/services? Can I get access to the CI machine to
 poke around and see what's up?
 
 
 Almost certainly no, only the core maintainers are allowed shell access
 as
 far as I've seen. You can ask on bui...@apache.org for those with
 access to
 check things out and report back and see what happens.
 
 I think the CI instances will be running Ubuntu 12.04 or 14.04 LTS. For
 giggles, I dug out an old Ubuntu VM with Java 6 on it and tried the
 tests, which failed, and it indeed has no amqp[s] entry in /etc/services
 file so that could well be it.
 
 Thanks for checking that out! I have hacked the tests to skip tests for
 'amqps' if it is not recognized. Poke me if there are still failures.
 
 
 r1627577 | aconway | 2014-09-25 13:59:17 -0400 (Thu, 25 Sep 2014) | 5
 lines
 
 NO-JIRA: Fix URL test to skip 'amqps' tests if 'amqps' is not recognized
 as a service name.
 
 On some older Ubuntu with Java 6, 'amqps' is not recognized as a service
 name so
 skip those tests in that case.
 
 
 
 
 The URL code uses socket.getservbyname() to look up service names. Is
 there a more portable way to do it?
 
 
 No idea I'm afraid.
 
 
 
 Cheers,
 Alan.
 
 As mentioned in my other post about a timeline for dropping Java6
 support,
 they seem to work on Java8 (havent tried Java7).
 
 Robbie
 
 On 22 September 2014 21:14, Alan Conway acon...@redhat.com wrote:
 
 My bad, didn't run the java tests. Will fix ASAP and then give
 myself a
 flogging.
 
 On Mon, 2014-09-22 at 19:50 +0100, Robbie Gemmell wrote:
 This seems to have broken the Java test runs:
 
 
 https://builds.apache.org/view/M-R/view/Qpid/job/Qpid-proton-j/664/
 
 
 
 On 19 September 2014 22:00, acon...@apache.org wrote:
 
 Author: aconway
 Date: Fri Sep 19 21:00:50 2014
 New Revision: 1626329
 
 URL: http://svn.apache.org/r1626329
 Log:
 PROTON-693: Python Url class to wrap C function pni_parse_url
 
 It was pointed out that pni_parse_url is an internal function
 and
 the
 interface
 is not suitable for public API.
 
 Rewrote the URL parser as a proper swigable C API pn_url_*.
 This
 gets
 rid
 of the
 need for previous swig insanity and is cleaner all round.
 
 Internally still uses the pni_parse_url parser, we can clean
 that
 up
 later.
 
 Added:
qpid/proton/trunk/proton-c/include/proton/url.h
qpid/proton/trunk/proton-c/src/url.c
 Modified:
qpid/proton/trunk/proton-c/CMakeLists.txt
qpid/proton/trunk/proton-c/bindings/perl/perl.i
qpid/proton/trunk/proton-c/bindings/php/php.i
qpid/proton/trunk/proton-c/bindings/python/cproton.i

Re: Timeline to drop Java 6 support for Proton?

2014-09-24 Thread Clebert Suconic
This is just testing... can't you have a Java 7 tests folder? You would still be 
able to have Java 7-specific tests.



On Sep 24, 2014, at 7:13 AM, Robbie Gemmell robbie.gemm...@gmail.com wrote:

 Hi all,
 
 With Qpid 0.30 we have made the move to requiring Java 7+. Currently,
 proton still allows for use of Java 6, so I wonder what peoples thoughts
 are on the timing of a similar move for Proton? I'd personally like to do
 it soon since Java 6 is EOL, but if not then I think we should at least
 decide when we will.
 
 Robbie
 
 Background:
 I committed a patch yesterday which contained some Java 7 API usage in its
 tests, and subsequently broke the ASF Jenkins jobs that are still using
 Java 6 (I'm using 8). Having now noticed this I updated the test to make it
 compile and run on Java 6, unfortunately having to disable use of some of
 the input aimed at testing the defect in question. Everything now compiles
 and the test in question passes, but the overall test run is still failing
 because it turns out some other new changes in recent days mean there are
 now a couple of URL tests which fail on Java 6 (but work on Java 8).



[jira] [Commented] (PROTON-694) splitting contrib/JMSMappingOutboundTransformer's encoding and transformation

2014-09-22 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143232#comment-14143232
 ] 

clebert suconic commented on PROTON-694:



I don't need an interface method here as I'm only using this transformer within 
the context of JMS.


I will attach a patch shortly

 splitting contrib/JMSMappingOutboundTransformer's encoding and transformation
 -

 Key: PROTON-694
 URL: https://issues.apache.org/jira/browse/PROTON-694
 Project: Qpid Proton
  Issue Type: Bug
Reporter: clebert suconic

 I just need the transformation from this method, not the actual encoding.
 I need to later encode the ProtonJMessage using NettyBuffer which is pooled 
 and more efficient than the method done within JMSMappingOutboundTransformer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PROTON-694) splitting contrib/JMSMappingOutboundTransformer's encoding and transformation

2014-09-22 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143239#comment-14143239
 ] 

clebert suconic commented on PROTON-694:


this is the patch:


https://github.com/clebertsuconic/qpid-proton/tree/PROTON-694

 splitting contrib/JMSMappingOutboundTransformer's encoding and transformation
 -

 Key: PROTON-694
 URL: https://issues.apache.org/jira/browse/PROTON-694
 Project: Qpid Proton
  Issue Type: Bug
Reporter: clebert suconic

 I just need the transformation from this method, not the actual encoding.
 I need to later encode the ProtonJMessage using NettyBuffer which is pooled 
 and more efficient than the method done within JMSMappingOutboundTransformer.





Re: 0.8 release prep

2014-09-22 Thread Clebert Suconic
Can we get this one in:

https://github.com/clebertsuconic/qpid-proton/tree/PROTON-694


The current version always gets me a byte array through EncodedMessage, while I 
just need the ProtonJMessage for a later encoding when dealing with JMS 
conversions.



I have a patch linked on the issue.


On Sep 21, 2014, at 5:43 PM, Bozo Dragojevic bo...@digiverse.si wrote:

 On 18. 09. 14 16:55, Rafael Schloming wrote:
 Hi Everyone,
 
 I'd like to put out a 0.8 RC soon. I will be going through JIRA shortly in
 order to triage any remaining issues. If there are any particular issues
 that you feel strongly should be included in 0.8, please follow up on this
 thread and bring them to my attention.
 
 --Rafael
 
 I'd like to get these two of mine in
 
 PROTON-660: Fix openssl.c build on windows
 PROTON-691: allow proton to be built with add_subdirectory
 
 Plus the other half of the Windows IOCP work by Cliff
 
 PROTON-684: Restrict IOCP enlistment to pn_selector_t usage scenarios in
 Windows
 
 Bozzo



[jira] [Commented] (PROTON-694) splitting contrib/JMSMappingOutboundTransformer's encoding and transformation

2014-09-22 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143502#comment-14143502
 ] 

clebert suconic commented on PROTON-694:


I don't think it's double encoding... the previous version was already creating 
a ProtonJMessage and then generating the EncodedMessage. This is exactly 
splitting the previous behaviour.

 splitting contrib/JMSMappingOutboundTransformer's encoding and transformation
 -

 Key: PROTON-694
 URL: https://issues.apache.org/jira/browse/PROTON-694
 Project: Qpid Proton
  Issue Type: Bug
Reporter: clebert suconic

 I just need the transformation from this method, not the actual encoding.
 I need to later encode the ProtonJMessage using NettyBuffer which is pooled 
 and more efficient than the method done within JMSMappingOutboundTransformer.





[jira] [Commented] (PROTON-694) splitting contrib/JMSMappingOutboundTransformer's encoding and transformation

2014-09-22 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143508#comment-14143508
 ] 

clebert suconic commented on PROTON-694:


The previous version would do this:


ProtonJMessage amqp = (ProtonJMessage) 
org.apache.qpid.proton.message.Message.Factory.create(header, da, ma, props, 
ap, body, footer);

ByteBuffer buffer = ByteBuffer.wrap(new byte[1024 * 4]);
final DroppingWritableBuffer overflow = new DroppingWritableBuffer();
int c = amqp.encode(new CompositeWritableBuffer(
new WritableBuffer.ByteBufferWrapper(buffer), overflow));
if (overflow.position() > 0) {
    buffer = ByteBuffer.wrap(new byte[1024 * 4 + overflow.position()]);
    c = amqp.encode(new WritableBuffer.ByteBufferWrapper(buffer));
}

return new EncodedMessage(messageFormat, buffer.array(), 0, c);



After my patch I'm doing the conversion in a sub-method and then doing the 
existing encoding at the caller method.

I intend to call the version that won't be doing this final conversion.
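The split being described, transformation producing the ProtonJMessage and encoding left to the caller, can be sketched with stand-in types (the class and method names below are illustrative, not the actual contrib classes from the patch):

```java
// Illustration of the refactoring: transformation and encoding as two
// separate steps, so a caller can take the intermediate message and encode
// it with its own (e.g. pooled) buffers.
class AmqpMessage {
    final String body;
    AmqpMessage(String body) { this.body = body; }
}

public class TransformerSketch {
    // Step 1: JMS -> AMQP message object only (what the patch exposes).
    public static AmqpMessage transform(String jmsText) {
        return new AmqpMessage(jmsText);
    }

    // Step 2: encoding, now optional and done by the caller.
    public static byte[] encode(AmqpMessage m) {
        return m.body.getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }

    // The old entry point becomes a composition of the two steps.
    public static byte[] transformAndEncode(String jmsText) {
        return encode(transform(jmsText));
    }

    public static void main(String[] args) {
        AmqpMessage m = transform("hi");                      // broker keeps this
        System.out.println(encode(m).length);                 // prints 2
        System.out.println(transformAndEncode("hi").length);  // prints 2
    }
}
```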

 splitting contrib/JMSMappingOutboundTransformer's encoding and transformation
 -

 Key: PROTON-694
 URL: https://issues.apache.org/jira/browse/PROTON-694
 Project: Qpid Proton
  Issue Type: Bug
Reporter: clebert suconic

 I just need the transformation from this method, not the actual encoding.
 I need to later encode the ProtonJMessage using NettyBuffer which is pooled 
 and more efficient than the method done within JMSMappingOutboundTransformer.





[jira] [Updated] (PROTON-694) splitting contrib/JMSMappingOutboundTransformer's encoding and transformation

2014-09-22 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/PROTON-694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated PROTON-694:
---
Attachment: diff.patch

 splitting contrib/JMSMappingOutboundTransformer's encoding and transformation
 -

 Key: PROTON-694
 URL: https://issues.apache.org/jira/browse/PROTON-694
 Project: Qpid Proton
  Issue Type: Bug
Reporter: clebert suconic
 Attachments: diff.patch


 I just need the transformation from this method, not the actual encoding.
 I need to later encode the ProtonJMessage using NettyBuffer which is pooled 
 and more efficient than the method done within JMSMappingOutboundTransformer.





[jira] [Commented] (PROTON-694) splitting contrib/JMSMappingOutboundTransformer's encoding and transformation

2014-09-22 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143589#comment-14143589
 ] 

clebert suconic commented on PROTON-694:


I have squashed the change at 
https://github.com/clebertsuconic/qpid-proton/commit/06fd5f50bc66b57cb207915579eb61e99ad5f272


and I also have attached the diff here

 splitting contrib/JMSMappingOutboundTransformer's encoding and transformation
 -

 Key: PROTON-694
 URL: https://issues.apache.org/jira/browse/PROTON-694
 Project: Qpid Proton
  Issue Type: Bug
Reporter: clebert suconic
 Attachments: diff.patch


 I just need the transformation from this method, not the actual encoding.
 I need to later encode the ProtonJMessage using NettyBuffer which is pooled 
 and more efficient than the method done within JMSMappingOutboundTransformer.





Re: 0.8 release prep

2014-09-18 Thread Clebert Suconic
I have a commit that I would like to send towards a contrib component:

https://github.com/clebertsuconic/qpid-proton/commit/ab423a90f6b1e6d06c84615c5724497a3f9e8533



The current code always encodes it as bytes, and I need the ProtonJMessage as 
an output in the case of the JMS component.



On Sep 18, 2014, at 11:15 AM, Miguel da Rocha Correia Lima 
mcorreial...@landix.com.br wrote:

 Hi Rafael,
 
The PROTON-661 was useful for us. We are working with this patch 
 without any error.
We suppose that it can be useful for others.
 
 Miguel da Rocha Correia Lima
 miguel at landix dot com dot br
 
 Em 18/09/2014 11:55, Rafael Schloming escreveu:
 Hi Everyone,
 
 I'd like to put out a 0.8 RC soon. I will be going through JIRA shortly in
 order to triage any remaining issues. If there are any particular issues
 that you feel strongly should be included in 0.8, please follow up on this
 thread and bring them to my attention.
 
 --Rafael
 
 



Reusing ProtonJ Integration through ProtonPlug from HornetQ

2014-08-28 Thread Clebert Suconic



I'm the project lead on hornetq and I'm currently writing the Proton 
integration on HornetQ


Instead of keeping the integration bits internally on HornetQ, I have made the 
Proton integration as a plugin, and maybe it could be reused for other projects 
outside of RedHat scope.

This is working very nicely with Netty also. It's actually the result of my 
last few months working on AMQP.



The idea is basically to provide a simple interface for sending bytes to your 
connection (in my case it's Netty, but it could be anything really).



It's in a really early stage... I've made it as part of HornetQ but it could be 
used elsewhere... we could even move it to the contrib folder on Proton, maybe?



https://github.com/hornetq/ProtonPlug



Why am I saying this? Because other people outside of Red Hat could now 
contribute to it as I develop it. I got really good performance out of it 
(it also encapsulates the results of the performance tests I did over Proton).

This is also including some of the work Rafi did on processing events (the 
EventHandler.. etc).. 


It also handles locking in the best way possible. Proton is single-threaded, 
but on a server I need to process multiple sessions on different threads. 
Proton Plug encapsulates locking in the best way possible for a broker 
(best as in the best I could find... there's always room for improvement, 
and that's why I'm reaching out to the community here)


I can make a little video or something like that to show the best way to 
integrate this.. and maybe improve it as an external component.



I'm really interested in anyone's opinion from the community on this.. if it's 
a good idea at all or not?



Clebert

0.8 release / Beta at least?

2014-08-13 Thread Clebert Suconic
I'm looking into committing some stuff into HornetQ next week that's highly 
dependent on the current trunk.


However, I don't currently have a space/policy in place for snapshots on our 
trunk (it's hard / impossible to get the current release in history with 
snapshots).


So, is there a way you guys could upload a 0.8 release any time soon? (a 0.8 
Beta, so at least we have an uploaded release?)



Thanks,


Clebert

[jira] [Created] (PROTON-645) Implement session flow control independently of transport.capacity

2014-08-12 Thread clebert suconic (JIRA)
clebert suconic created PROTON-645:
--

 Summary: Implement session flow control independently of 
transport.capacity
 Key: PROTON-645
 URL: https://issues.apache.org/jira/browse/PROTON-645
 Project: Qpid Proton
  Issue Type: Improvement
  Components: proton-j
Reporter: clebert suconic


When integrating with any asynchronous IO we are obliged to implement flow 
control based on the capacity of the Transport.

Things get especially hard when you pop bytes in an asynchronous 
completion callback:

e.g. on Netty:

  channel.writeAndFlush(bytes).addListener(new ChannelFutureListener()
  {
     @Override
     public void operationComplete(ChannelFuture future) throws Exception
     {
        transport.pop(bufferSize);
        // perform processing, and dispatch!
        performDispatchEtc();
        if (transport.capacity() > 0)
        {
           channel.read();
        }
     }
  });


This makes things especially hard, as we eventually lose events or bytes. 
We could make things work with some hacks (like having recursive loops on the 
methods for writing and reading)...

Making Proton-J work with an asynchronous pop is definitely a black art. 
Instead of piling up recursive workarounds we should provide an easier path, 
such as a pluggable SessionFlowController where we could activate / 
deactivate producers on a server, or use Semaphores on a client, where a client 
could block until capacity is properly released.
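The SessionFlowController proposed here does not exist in Proton-J; a minimal sketch of what it might look like, assuming the interface name from this issue and a Semaphore-based client-side strategy, could be:

```java
import java.util.concurrent.Semaphore;

// Hypothetical interface: not existing Proton-J API, just a sketch of the
// proposal in this issue.
interface SessionFlowController {
    void acquire(int bytes);   // block (or deactivate a producer) until capacity exists
    void release(int bytes);   // called once the transport has popped bytes
}

// Client-side variant: block the caller until capacity is released.
class BlockingFlowController implements SessionFlowController {
    private final Semaphore capacity;

    BlockingFlowController(int initialCapacity) {
        this.capacity = new Semaphore(initialCapacity);
    }

    public void acquire(int bytes) {
        capacity.acquireUninterruptibly(bytes);  // producer waits for credit
    }

    public void release(int bytes) {
        capacity.release(bytes);                 // completion returns credit
    }

    public int available() {
        return capacity.availablePermits();
    }
}

public class FlowDemo {
    public static void main(String[] args) {
        BlockingFlowController fc = new BlockingFlowController(1024);
        fc.acquire(512);   // producer reserves capacity before writing
        fc.release(256);   // async completion callback returns some capacity
        System.out.println(fc.available()); // prints 768
    }
}
```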





[jira] [Commented] (PROTON-645) Implement session flow control independently of transport.capacity

2014-08-12 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094565#comment-14094565
 ] 

clebert suconic commented on PROTON-645:


Of course, it could be that the asynchronous pop is simply broken. There are 
zero tests exercising it asynchronously right now.

 Implement session flow control independently of transport.capacity
 --

 Key: PROTON-645
 URL: https://issues.apache.org/jira/browse/PROTON-645
 Project: Qpid Proton
  Issue Type: Improvement
  Components: proton-j
Reporter: clebert suconic

 When integrating with any asynchronous IO we are obligated to implement flow 
 control based on the capacity of the Transport.
 Things get especially harder because when you pop bytes in a asynchronous 
 method for after completion:
 e.g. on Netty:
   channel.writeAndFlush(bytes).addListener(new ChannelFutureListener()
   {
  @Override
  public void operationComplete(ChannelFuture future) throws Exception
  {
 transport.pop(bufferSize);
 /// perform process, and dispatch!
 performDispatchEtc();
 if (transport.capacity() > 0)
 {
channel.read();
 }
  }
   });
 this makes things specially harder as we are eventually losing events or 
 bytes. We could make things work with some hacks (like having recursive loops 
 on the methods for writing and reading)...
 Making Proton-J to work with asynchronous pop is definitely a black art. 
 Instead of working around with recursive workarounds we should provide an 
 easier path. such as plugging SessionFlowController where we could activate / 
 deactivate producers on a server, or use Semaphores on a client where a 
 client could block until things are properly released.





[jira] [Commented] (PROTON-645) Implement session flow control independently of transport.capacity

2014-08-12 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094563#comment-14094563
 ] 

clebert suconic commented on PROTON-645:


*Producer = I'm using this term here for anything producing bytes...

By this I mean the server would stop sending anything to the Proton 
connection while the flow controller is busy. This way data is simply not 
generated, instead of flooding the queues on the Connection.

 Implement session flow control independently of transport.capacity
 --

 Key: PROTON-645
 URL: https://issues.apache.org/jira/browse/PROTON-645
 Project: Qpid Proton
  Issue Type: Improvement
  Components: proton-j
Reporter: clebert suconic

 When integrating with any asynchronous IO we are obligated to implement flow 
 control based on the capacity of the Transport.
 Things get especially harder because when you pop bytes in a asynchronous 
 method for after completion:
 e.g. on Netty:
   channel.writeAndFlush(bytes).addListener(new ChannelFutureListener()
   {
  @Override
  public void operationComplete(ChannelFuture future) throws Exception
  {
 transport.pop(bufferSize);
 /// perform process, and dispatch!
 performDispatchEtc();
 if (transport.capacity() > 0)
 {
channel.read();
 }
  }
   });
 this makes things specially harder as we are eventually losing events or 
 bytes. We could make things work with some hacks (like having recursive loops 
 on the methods for writing and reading)...
 Making Proton-J to work with asynchronous pop is definitely a black art. 
 Instead of working around with recursive workarounds we should provide an 
 easier path. such as plugging SessionFlowController where we could activate / 
 deactivate producers on a server, or use Semaphores on a client where a 
 client could block until things are properly released.





Important!!! Please revert PROTON-597

2014-07-02 Thread Clebert Suconic
commit ccb7b03b8089c2fea329d44cea5dc06da6345ea6
Author: Robert Gemmell rob...@apache.org
Date:   Fri Jun 27 10:08:24 2014 +

PROTON-597: only release transport buffers if they are above a threshold 
capacity

git-svn-id: https://svn.apache.org/repos/asf/qpid/proton/trunk@1606012 
13f79535-47bb-0310-9956-ffa450edef68



It makes Proton pretty much useless...

This code makes Proton create a new buffer (a big buffer, actually) for every 
message written... even tiny messages.
The previous version would always use the same buffer, recycling it (making 
copies between my buffer and Proton's), and then resetting the buffer ready 
for the next usage.


With this new change in place, a benchmark that I was already struggling to 
improve went from 2500 very small messages / second down to 333 small 
messages / second...




If you want to reduce the weight of each connection, then you need a proper 
refactoring of Proton where you write to output buffers rather than internal 
buffers. As it stands, Proton needs this buffer unless you do a proper 
refactoring.
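The recycling pattern being argued for can be illustrated with plain java.nio (a standalone sketch, not Proton's actual internals): reusing one buffer across messages generates no per-message garbage, while allocating a fresh buffer per message does.

```java
import java.nio.ByteBuffer;

public class RecycleDemo {
    // One buffer, reused for every message: clear() resets position/limit
    // without reallocating, so the GC sees no new garbage per message.
    static final ByteBuffer shared = ByteBuffer.allocate(4 * 1024);

    public static int writeRecycled(byte[] payload) {
        shared.clear();            // reset for next usage, same backing array
        shared.put(payload);
        return shared.position();  // bytes written
    }

    // The anti-pattern under discussion: a fresh (big) buffer per message.
    public static int writeFresh(byte[] payload) {
        ByteBuffer b = ByteBuffer.allocate(4 * 1024); // new garbage every call
        b.put(payload);
        return b.position();
    }

    public static void main(String[] args) {
        byte[] msg = new byte[]{1, 2, 3};
        System.out.println(writeRecycled(msg)); // prints 3
        System.out.println(writeFresh(msg));    // prints 3
    }
}
```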

Re: Important!!! Please revert PROTON-597

2014-07-02 Thread Clebert Suconic
Wrong commit...


this is the culprit:

commit 8fe9a12b1ad8dc9cd35324f4ed53bea9cb37ce22
Author: Robert Gemmell rob...@apache.org
Date:   Fri Jun 27 10:07:47 2014 +

PROTON-597: update TransportOutputAdaptor and FrameParser to release 
buffers after use, reducing memory consumption when using large frame sizes

Change from Marcel Meulemans

git-svn-id: https://svn.apache.org/repos/asf/qpid/proton/trunk@1606010 
13f79535-47bb-0310-9956-ffa450edef68

On Jul 2, 2014, at 11:40 AM, Clebert Suconic csuco...@redhat.com wrote:

 commit ccb7b03b8089c2fea329d44cea5dc06da6345ea6
 Author: Robert Gemmell rob...@apache.org
 Date:   Fri Jun 27 10:08:24 2014 +
 
 PROTON-597: only release transport buffers if they are above a threshold 
 capacity
 
 git-svn-id: https://svn.apache.org/repos/asf/qpid/proton/trunk@1606012 
 13f79535-47bb-0310-9956-ffa450edef68
 
 
 
 It makes Proton pretty much useless...
 
 this code makes proton create a new buffer (a big buffer actually) every new 
 message written.. even minor messages.
 The previous version before this would always use the same buffer, recycling 
 it... (making copies between my buffer and Proton), and then reseting the 
 buffer ready for the next usage.
 
 
 with this new change in place, a benchmark that I was already struggling to 
 improve would make 2500 very small messages / second. With the new change you 
 get it to 333 small messages / second...
 
 
 
 
 If you want to diminish the weight of each connection, then you need a proper 
 refactoring to Proton where you write to output buffers rather than internal 
 buffers. Proton needs this buffer as it is now unless you do a proper 
 refactoring.



Re: Important!!! Please revert PROTON-597

2014-07-02 Thread Clebert Suconic
Could you add a proper parameter on Transport, such as 
setBufferReleaseThreshold, rather than a static attribute for the whole VM?

The default is fine... but I have no way to control such a parameter.


(I actually think the default should be 2x maxFrameSize, or something like 
that.)
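The requested change, per-transport configuration instead of a JVM-wide static, might look like this (setBufferReleaseThreshold and the 2x-maxFrameSize default are the proposal in this thread, not existing Proton-J API):

```java
// Sketch only: the setter name and default follow the proposal in this
// thread and are not real Proton-J API.
public class TransportSketch {
    private int bufferReleaseThreshold;   // per-instance, not a VM-wide static

    public TransportSketch(int maxFrameSize) {
        // Suggested default from the thread: 2x maxFrameSize.
        this.bufferReleaseThreshold = 2 * maxFrameSize;
    }

    public void setBufferReleaseThreshold(int threshold) {
        this.bufferReleaseThreshold = threshold;  // each transport tunable alone
    }

    // Release (drop) the internal buffer only when it grew past the threshold.
    public boolean shouldRelease(int bufferCapacity) {
        return bufferCapacity > bufferReleaseThreshold;
    }

    public static void main(String[] args) {
        TransportSketch t = new TransportSketch(64 * 1024);
        System.out.println(t.shouldRelease(64 * 1024));   // prints false
        System.out.println(t.shouldRelease(256 * 1024));  // prints true
    }
}
```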
On Jul 2, 2014, at 12:28 PM, Robbie Gemmell robbie.gemm...@gmail.com wrote:

 Discussed with Clebert on IRC. There is a threshold where it changes
 behaviour and that was set too low, so it has now been raised significantly
 such that most people continue seeing the prior behaviour and those who
 want it can opt in with the property.
 
 Robbie
 
 On 2 July 2014 16:42, Clebert Suconic csuco...@redhat.com wrote:
 
 Wrong commit...
 
 
 this is the culprit:
 
 commit 8fe9a12b1ad8dc9cd35324f4ed53bea9cb37ce22
 Author: Robert Gemmell rob...@apache.org
 Date:   Fri Jun 27 10:07:47 2014 +
 
PROTON-597: update TransportOutputAdaptor and FrameParser to release
 buffers after use, reducing memory consumption when using large frame sizes
 
Change from Marcel Meulemans
 
git-svn-id: https://svn.apache.org/repos/asf/qpid/proton/trunk@1606010
 13f79535-47bb-0310-9956-ffa450edef68
 
 On Jul 2, 2014, at 11:40 AM, Clebert Suconic csuco...@redhat.com wrote:
 
 commit ccb7b03b8089c2fea329d44cea5dc06da6345ea6
 Author: Robert Gemmell rob...@apache.org
 Date:   Fri Jun 27 10:08:24 2014 +
 
PROTON-597: only release transport buffers if they are above a
 threshold capacity
 
git-svn-id:
 https://svn.apache.org/repos/asf/qpid/proton/trunk@1606012
 13f79535-47bb-0310-9956-ffa450edef68
 
 
 
 It makes Proton pretty much useless...
 
 this code makes proton create a new buffer (a big buffer actually) every
 new message written.. even minor messages.
 The previous version before this would always use the same buffer,
 recycling it... (making copies between my buffer and Proton), and then
 reseting the buffer ready for the next usage.
 
 
 with this new change in place, a benchmark that I was already struggling
 to improve would make 2500 very small messages / second. With the new
 change you get it to 333 small messages / second...
 
 
 
 
 If you want to diminish the weight of each connection, then you need a
 proper refactoring to Proton where you write to output buffers rather than
 internal buffers. Proton needs this buffer as it is now unless you do a
 proper refactoring.
 
 



Re: Important!!! Please revert PROTON-597

2014-07-02 Thread Clebert Suconic

On Jul 2, 2014, at 12:28 PM, Robbie Gemmell robbie.gemm...@gmail.com wrote:

 Discussed with Clebert on IRC. There is a threshold where it changes
 behaviour and that was set too low, so it has now been raised significantly
 such that most people continue seeing the prior behaviour and those who
 want it can opt in with the property.


I really don't want to force my users to set properties on the VM for 
behaviour I would expect to control from my code. Any such property should 
have proper configuration, so we can set it on the Transport, please!

MessageImpl.decode(ByteBuffer)

2014-06-25 Thread Clebert Suconic


I have a case where I have a ByteBuffer (which could be a native buffer, 
created with ByteBuffer.allocateDirect), and using decode(byte[]) would make 
me duplicate the byte region.


Would it be OK if someone merged this simple commit on qpid-proton-j? It's a 
very simple change... the current code still works.


https://github.com/clebertsuconic/qpid-proton/commit/2515b60ee69cb4d7ee240ff54d311cb8cf763f59


You could make the method part of the interface if you like... but I would be 
happy casting to MessageImpl for now.
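The copy being avoided can be illustrated with plain java.nio (a standalone sketch; the actual decode overload is in the linked commit): a byte[]-only API forces a heap copy out of a direct buffer, while a ByteBuffer-taking API can read the region in place.

```java
import java.nio.ByteBuffer;

public class DecodeDemo {
    // A byte[]-only API forces the caller to materialize a heap copy first.
    public static byte[] toArray(ByteBuffer src) {
        byte[] copy = new byte[src.remaining()];   // duplicated byte region
        src.duplicate().get(copy);                 // duplicate(): same bytes, own position
        return copy;
    }

    // A ByteBuffer-taking decode can consume the region in place, no copy.
    public static int decodeInPlace(ByteBuffer src) {
        int sum = 0;
        ByteBuffer view = src.duplicate();         // shares the backing memory
        while (view.hasRemaining()) sum += view.get();
        return sum;
    }

    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(3); // off-heap buffer
        direct.put(new byte[]{1, 2, 3}).flip();
        System.out.println(decodeInPlace(direct));        // prints 6, no heap copy
        System.out.println(toArray(direct).length);       // prints 3, but copied
    }
}
```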

Re: simplified factory pattern for proton-j

2014-06-24 Thread Clebert Suconic
I just bumped into one issue (actually two issues) after this:


I - First, HAWT-JMS is not working with trunk because the API changed here. I 
could update it to the new API, but then we can't commit that change since 
there is no release containing this yet.

II - Putting this Factory on the interface makes it very restrictive to use any 
other factory. I mean... why use a factory at all if the only factory available 
is the one described on the interface? You could just use new MessageImpl. 
The advantage of a factory would be to plug in other message 
implementations. For instance, I just came up with a message implementation in 
my JMS parser that doesn't parse the body (since my server doesn't need to do 
it). I'm not using any of the factories here because of this. 
 I don't want to make a big deal about this... but I guess I would prefer 
the old factories back.




On May 1, 2014, at 3:39 PM, Rafael Schloming r...@alum.mit.edu wrote:

 Just a heads up, I've committed this to trunk. Please let me know if it
 causes any problems.
 
 --Rafael
 
 
 On Tue, Apr 29, 2014 at 2:57 PM, Rafael Schloming r...@alum.mit.edu wrote:
 
 Hi Everyone,
 
 I've put together a patch that makes the proton-j factory usage a bit
 simpler and more consistent. You can review it here if you like:
 
  - https://reviews.apache.org/r/20854/
 
 The main point of the patch is to make all the factories consistently
 follow this pattern:
 
  package.Interface iface = package.Interface.Factory.create(...);
 
 I like this because it is simple and easy to remember and doesn't require
 importing the impl packages directly.
 
 The patch preserves the convenience constructors, e.g. Proton.connection()
 and so forth, but it does remove the old factory APIs. I think this is a
 reasonable thing to do because the old factories were cumbersome enough to
 use that I don't think anyone actually bothered (including our own
 exmaples).
 
 In any case, please shout if this patch will be troublesome for you. If I
 don't hear anything I'll go ahead and commit it later this week.
 
 Thanks,
 
 --Rafael
 
 



Re: Proton-c on android

2014-06-03 Thread Clebert Suconic
IMO it's easier to fix Proton-j than use Proton-C on Android
On Jun 3, 2014, at 12:44 PM, Jimmy Campbell t-jic...@microsoft.com wrote:

 That was a little bit more vague than I would have liked. They did work, they 
 just didn't work with the Azure Service Bus, which is my end goal.
 
 -Jimmy
 
 -Original Message-
 From: Jimmy Campbell [mailto:t-jic...@microsoft.com] 
 Sent: Tuesday, June 3, 2014 9:42 AM
 To: proton@qpid.apache.org
 Subject: RE: Proton-c on android
 
 I started out using Proton-J 0.7 on windows 8.1 and then I tried it with 0.6 
 on linux CentOS. Neither of them worked for me.
 
 -Jimmy
 
 -Original Message-
 From: Rafael Schloming [mailto:r...@alum.mit.edu]
 Sent: Tuesday, June 3, 2014 8:32 AM
 To: proton@qpid.apache.org
 Subject: Re: Proton-c on android
 
 Can you provide me with the details of what version of proton-j you were 
 using and what kind of environment?
 
 --Rafael
 
 
 On Tue, Jun 3, 2014 at 11:29 AM, Jimmy Campbell t-jic...@microsoft.com
 wrote:
 
 I was unable to get messages through to the Azure Service Bus. It kept 
 saying remote host has dropped the connection in system error.
 
 Proton-c is able to send without any issues.
 
 -Original Message-
 From: Rafael Schloming [mailto:r...@alum.mit.edu]
 Sent: Tuesday, June 3, 2014 8:26 AM
 To: proton@qpid.apache.org
 Subject: Re: Proton-c on android
 
 What sort of problems did you run into with proton-j?
 
 --Rafael
 
 
 On Thu, May 29, 2014 at 11:21 AM, Jimmy Campbell 
 t-jic...@microsoft.com
 wrote:
 
 Proton-j was my first choice but I was unable to connect to the 
 Azure Service Bus, which is my end goal. There are tutorials and 
 support for connecting proton-c to the azure service bus.
 
 -Original Message-
 From: Clebert Suconic [mailto:csuco...@redhat.com]
 Sent: Wednesday, May 28, 2014 7:00 PM
 To: proton@qpid.apache.org
 Subject: Re: Proton-c on android
 
 Wouldn't Proton-j be a better fit?
 
 
 On May 28, 2014, at 8:45 PM, Jimmy Campbell t-jic...@microsoft.com
 wrote:
 
 I am trying to get proton-c to run on android. Does anybody have 
 any
 advice or experience on the matter that I can get in contact with?
 
 
 



Re: Proton-c on android

2014-05-28 Thread Clebert Suconic
Wouldn't Proton-j be a better fit?


On May 28, 2014, at 8:45 PM, Jimmy Campbell t-jic...@microsoft.com wrote:

 I am trying to get proton-c to run on android. Does anybody have any advice 
 or experience on the matter that I can get in contact with?



Re: delivery.setPayload

2014-05-16 Thread Clebert Suconic

On May 14, 2014, at 1:36 PM, Rafael Schloming r...@alum.mit.edu wrote:

 On Wed, May 14, 2014 at 1:05 PM, Clebert Suconic csuco...@redhat.comwrote:
 
 I was just playing with possibilities and trying to find ways to improve
 the API how things are done.
 
 
 
 The interface is designed to
 operate in terms of chunks of message data rather than in terms of an
 entire message.
 
 
 
 That's one thing that I'm lacking control through the API actually.. which
 I'm looking to improve.
 
 
 My application (hornetQ) has the ability to store messages on the disk and
 send in chunks to the server.
 
 Using the current API, I would have to parse the entire message to Proton,
 and have proton dealing with the chunks.
 
 
 I'm not sure I follow this. The byte[] that you pass to send doesn't need
 to be a complete message.


I thought the Delivery had to be complete.
If that's not the case then ignore me :) I will research a bit more 
here... something is probably missing for our large message implementation... 
but let me research a bit more and I will come back on this when it's time.



I won't be submitting setPayload to be committed, or working on getting it to a 
committable level... I will drop this one for now. I just wanted to offer some 
food for thought: having the messages encoded directly could be a good idea.

Re: delivery.setPayload

2014-05-14 Thread Clebert Suconic
I was just playing with possibilities and trying to find ways to improve the 
API how things are done.



  The interface is designed to
 operate in terms of chunks of message data rather than in terms of an
 entire message. 



That's one thing I'm lacking control over through the API, actually... which 
I'm looking to improve.


My application (hornetQ) has the ability to store messages on the disk and send 
in chunks to the server.

Using the current API, I would have to pass the entire message to Proton, and 
have Proton deal with the chunks.


I have tests, for instance, sending 1-gigabyte messages, where the body stays 
on disk on the server, while the client streams bytes as it receives, with 
flow control holding them back.

Proton has a big buffer internally, which doesn't actually expose flow 
control in a way that would let me stop feeding Proton. As a result this use 
case would easily cause OutOfMemoryErrors from Proton.

As I said, for now I was just exploring possibilities. I could send a 
ReadableBuffer (the new interface from my previous patch) and use a pool on 
this buffer to encode my Message. But I still lack control over my 
large-message streaming and flow control.


The way things are done now, if I wanted to preserve my large-message 
functionality at the client level (outside of Proton's API), I would need to 
bypass Proton and send the Delivery directly without Proton's intervention.

delivery.setPayload

2014-05-13 Thread Clebert Suconic
I have been playing with the API, and there is one change that would make the 
API clearer IMO.


Right now you have to send a delivery, and then call send(bytes) to add bytes 
to the delivery, which copies data into the Delivery and self-expands the 
buffer.



I have played with a change that made it 5% faster than the most optimal way to 
expand the payload on the Delivery (using the same buffer over and over),


and 15% faster, on a brief calculation, than creating the buffer every time... 
but there are cases where this could be a bit worse.



Basically I have created an interface called Payload, and added a method 
setPayload on Delivery. 



I'm not sure yet how I would implement framing into multiple packets... but I 
think it could be done. This is just a prototyped idea:


https://github.com/clebertsuconic/qpid-proton/commit/02abe61fc54911955ddcce77b792a153c5476aef



in case you want to fetch the buffer from my git, it's this branch:  
https://github.com/clebertsuconic/qpid-proton/tree/payload



In any case, I like the idea of setPayload better than sender.send(bytes) 
for setting the payload of a message.



Ideas?

Re: delivery.setPayload

2014-05-13 Thread Clebert Suconic


On May 13, 2014, at 1:46 PM, Rafael Schloming r...@alum.mit.edu wrote:

 I'm not sure this will work as an API. One problem I see with this is that
 it is circumventing the engine's flow control logic. If you notice there is
 logic inside send() to update counters on the session. Unless I've missed
 something your patch doesn't seem to have equivalent logic. This might just
 be an oversight, but I don't see how you could easily add the same logic
 since you don't know how many bytes the payload is until much much later on
 in the control flow of the engine.
 

As I told you, this was just a prototyped idea... it is not in fact updating 
the window yet.

If the idea is sound, we could pursue it from here.

 Can you supply some more detail as to why it got 5% faster? If it was
 merely avoiding the copy, then I can think of some ways we could avoid that
 copy without changing the API quite so drastically, e.g. just overload
 send() to take some sort of releasable buffer reference.

The encoding is done directly into FrameWriter::__outputBuffer. I've built a 
framework where I test send, and over 1 million messages it was somewhat faster 
than copying the encoding.

In that case the gain could be a bit larger if you encoded a MessageImpl into a 
new buffer every time.

 
 FWIW, I think that a good buffer abstraction that we could use everywhere
 would help a lot. I suspect having distinct abstractions for payload
 buffers vs encodable buffers vs decodable buffers is just going to result
 in lots of unnecessary conversions.

Probably. I was just trying to improve the payload idea. I don't like the 
current send API; I think it would make more sense to set the payload on the 
delivery than to send bytes through the sender.








Test: Is this list working fine?

2014-05-11 Thread Clebert Suconic
It seems the list is not working properly. Sorry for the spam, but this is just 
a test; please ignore it.

Re: Optimizations on Proton-j

2014-05-05 Thread Clebert Suconic

On May 5, 2014, at 5:02 PM, Rafael Schloming r...@alum.mit.edu wrote:

 Hi Clebert,
 
 Sorry for taking so long to follow up with the benchmark. I've been tweaking 
 it and using it to do some memory and CPU profiling.
 
 The good news is I was able to significantly reduce memory utilization by 
 making some simple changes to CollectorImpl. The benchmark triggers over 42 
 million events. The way CollectorImpl was coded initially this would create 
 two throwaway objects for each event. This was ending up somewhere north of 
 84 million throwaway objects over the course of the benchmark. One being the 
 event itself, and the other being the linked list node. I changed 
 CollectorImpl to use a simple chain instead of java.util's linked list and 
 also to pool/reuse popped events. The same benchmark now only results in 
 about 250 actual event objects being created in total in order to handle the 
 same 42 million events.
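 The chain-plus-pooling approach described above could be sketched as follows. 
 The names are illustrative, not proton-j's actual CollectorImpl: events carry 
 an intrusive link (no separate list-node allocation), and popped events are 
 recycled onto a free list instead of becoming garbage.

```java
// Intrusive singly linked event: the link field replaces a list node.
final class Event {
    int type;    // stand-in for real event state
    Event next;
}

final class EventPool {
    private Event head, tail;  // pending events, FIFO order
    private Event free;        // recycled events awaiting reuse
    int created;               // total Event objects ever allocated

    void put(int type) {
        Event e = free;
        if (e == null) { e = new Event(); created++; }  // allocate only on miss
        else { free = e.next; }                         // reuse a popped event
        e.type = type;
        e.next = null;
        if (tail == null) { head = tail = e; } else { tail.next = e; tail = e; }
    }

    Integer pop() {
        Event e = head;
        if (e == null) return null;
        head = e.next;
        if (head == null) tail = null;
        int t = e.type;
        e.next = free;  // recycle the event for the next put()
        free = e;
        return t;
    }
}
```

 With this shape, steady-state put/pop traffic allocates nothing: the pool 
 reaches the high-water mark of pending events and then cycles the same 
 objects, which matches the "250 objects for 42 million events" result.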
 
 While this reduces memory pressure a lot, surprisingly enough, the event 
 related objects were not the biggest source of garbage. At the top is 
 java.nio.HeapByteBuffer (I haven't looked into this, but I suspect we're 
 throwing away some of these that we could be reusing), and the second biggest 
 source of garbage is org.apache.qpid.proton.amqp.UnsignedInteger. The latter 
 one I find a bit more concerning as its use is fairly intertwined into the 
 design of the current codec and as a result it is likely less straightforward 
 to address.
 
 On the CPU profiling front I've noticed a couple of interesting things. First 
 off it does appear that much of the processing time is spent in codec, 
 roughly 63%, but the cost is not symmetrically divided between encode and 
 decode. Encoding accounts for about 46% of the total, and decoding about 17%. 
 I think this may be why it's hard to measure the effect of your patch on this 
 benchmark. The decoding cost just isn't all that high compared to encoding. I 
 did the same profiling again with your patch applied and decoding dropped to 
 about 10% of the total while encoding increased to about 50% of the total.
 
 Digging into the encoding side a bit, it appears that a significant amount of 
 time is being spent calculating the encoded size of a value prior to writing 
 its encoded representation to the wire. One of the optimizations I've had 
 success with in the past (both on previous Java codecs and in the C codec) is 
 to avoid calculating the size up front and instead simply reserve the 
 necessary space for it and fill it in after the encoded representation has 
 been written. In the past this has close to doubled the performance of encode 
 since calculating the encoded size is often as expensive as simply doing the 
 encoding. Unfortunately I'm guessing this kind of thing would probably 
 require a major rework of the codec.
 
 To summarize I think there are really two design related issues we will need 
 to address in order to achieve optimum performance. On the memory front, I 
 think the fact that every described type is rendered into a tree of generic 
 objects on both decode/encode is going to be problematic. The strategy you've 
 taken in your patch to special case certain frames and eliminate the 
 intermediate list objects helps with this, but I think we could do a whole 
 lot better if we were to adopt a design that did not require any intermediate 
 objects at all. On the CPU front, I think we'll get the biggest bang for our 
 buck if we look into a design that doesn't require calculating the size up 
 front.
 
 I have some ideas in mind for a new design that I hope will address both of 
 these issues. I'm going to write them up in a separate post.
 
 Regarding your patch, I'm happy to apply it, but I suspect that much of the 
 current codec layer would need to be modified and/or replaced to address the 
 above findings. Let me know how you would like to proceed.



It's all consistent with what I have seen... I have also noticed the cost of 
calculating the encoded size prior to sending.


I would prefer to evolve on top of what I did. I think my patch is one step 
further toward avoiding intermediate objects, and I have already seen other 
cases where we could avoid them. It's all an evolution, and I'm still working 
on other cases; if you take this patch now, we move a step further.

The patch is not only addressing the codec optimizations; it also avoids 
multiple instances of the codec itself, which makes it lighter.

I'm now working on a benchmark based on yours that will be closer to what I 
will be using.

Re: Optimizations on Proton-j

2014-05-05 Thread Clebert Suconic
I have some ideas as well:


- Calculating size prior to sending:

 - We could write zeroes as a placeholder, write the body to the buffer, then 
come back to the earlier position and write the size, instead of calculating it 
up front.
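The reserve-then-backpatch idea can be sketched with a plain ByteBuffer. The framing here (a 4-byte length prefix) is illustrative, not AMQP's actual encoding:

```java
import java.nio.ByteBuffer;

final class SizeBackpatch {
    // Encode body with a length prefix, without computing the size up front:
    // reserve 4 bytes, write the content, then backpatch the size.
    static ByteBuffer encode(ByteBuffer out, byte[] body) {
        int sizePos = out.position();
        out.putInt(0);                           // placeholder for the size
        out.put(body);                           // encode the actual content
        int end = out.position();
        out.putInt(sizePos, end - sizePos - 4);  // absolute put: backpatch size
        return out;
    }
}
```

This avoids walking the value twice (once to size it, once to encode it), which is the cost Rafael identified; the trade-off is that the output buffer must support absolute writes at an earlier position.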

I have read this code a lot, and I wouldn't rewrite it; I would just optimize 
these cases. I wouldn't optimize it for every possible case, TBH... just on 
message delivery and settling, unless you want to also optimize other cases for 
use cases I'm not aware of at the moment.



other things that could boost performance based on the micro benchmark I wrote:


- Using Integer, Long, etc. instead of UnsignedInteger, UnsignedLong, etc. 
would give you a good boost in performance. The JDK is already optimized to box 
those types, while UnsignedInteger, UnsignedLong, etc. are not well optimized.

- Reusing buffers: maybe add a framework where we could reuse buffers, or 
delegate to other frameworks (e.g. Netty).
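The boxing point can be made concrete: java.lang.Integer.valueOf caches small values, so boxing an int is usually allocation-free, whereas a wrapper constructed with `new` allocates every time. A toy unsigned wrapper (not proton's actual UnsignedInteger) with the same caching trick:

```java
final class UInt {
    // Cache small values, mirroring what Integer.valueOf does for -128..127.
    private static final UInt[] CACHE = new UInt[256];
    static {
        for (int i = 0; i < CACHE.length; i++) CACHE[i] = new UInt(i);
    }

    final long value;  // unsigned 32-bit value, held widened in a long
    private UInt(long v) { this.value = v; }

    static UInt valueOf(long v) {
        // Small values reuse a cached instance; large ones still allocate.
        return (v >= 0 && v < CACHE.length) ? CACHE[(int) v] : new UInt(v);
    }
}
```

For a codec that decodes millions of small integers (delivery ids, handles, counters), routing construction through a caching valueOf would eliminate most of the UnsignedInteger garbage Rafael's profiling flagged.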








On May 5, 2014, at 5:40 PM, Clebert Suconic csuco...@redhat.com wrote:

 
 On May 5, 2014, at 5:02 PM, Rafael Schloming r...@alum.mit.edu wrote:
 
 Hi Clebert,
 
 Sorry for taking so long to follow up with the benchmark. I've been tweaking 
 it and using it to do some memory and CPU profiling.
 
 The good news is I was able to significantly reduce memory utilization by 
 making some simple changes to CollectorImpl. The benchmark triggers over 42 
 million events. The way CollectorImpl was coded initially this would create 
 two throwaway objects for each event. This was ending up somewhere north of 
 84 million throwaway objects over the course of the benchmark. One being 
 the event itself, and the other being the linked list node. I changed 
 CollectorImpl to use a simple chain instead of java.util's linked list and 
 also to pool/reuse popped events. The same benchmark now only results in 
 about 250 actual event objects being created in total in order to handle the 
 same 42 million events.
 
 While this reduces memory pressure a lot, surprisingly enough, the event 
 related objects were not the biggest source of garbage. At the top is 
 java.nio.HeapByteBuffer (I haven't looked into this, but I suspect we're 
 throwing away some of these that we could be reusing), and the second 
 biggest source of garbage is org.apache.qpid.proton.amqp.UnsignedInteger. 
 The latter one I find a bit more concerning as its use is fairly 
 intertwined into the design of the current codec and as a result it is 
 likely less straightforward to address.
 
 On the CPU profiling front I've noticed a couple of interesting things. 
 First off it does appear that much of the processing time is spent in codec, 
 roughly 63%, but the cost is not symmetrically divided between encode and 
 decode. Encoding accounts for about 46% of the total, and decoding about 
 17%. I think this may be why it's hard to measure the effect of your patch on 
 this benchmark. The decoding cost just isn't all that high compared to 
 encoding. I did the same profiling again with your patch applied and 
 decoding dropped to about 10% of the total while encoding increased to about 
 50% of the total.
 
 Digging into the encoding side a bit, it appears that a significant amount 
 of time is being spent calculating the encoded size of a value prior to 
 writing its encoded representation to the wire. One of the optimizations 
 I've had success with in the past (both on previous Java codecs and in the C 
 codec) is to avoid calculating the size up front and instead simply reserve 
 the necessary space for it and fill it in after the encoded representation 
 has been written. In the past this has close to doubled the performance of 
 encode since calculating the encoded size is often as expensive as simply 
 doing the encoding. Unfortunately I'm guessing this kind of thing would 
 probably require a major rework of the codec.
 
 To summarize I think there are really two design related issues we will need 
 to address in order to achieve optimum performance. On the memory front, I 
 think the fact that every described type is rendered into a tree of generic 
 objects on both decode/encode is going to be problematic. The strategy 
 you've taken in your patch to special case certain frames and eliminate the 
 intermediate list objects helps with this, but I think we could do a whole 
 lot better if we were to adopt a design that did not require any 
 intermediate objects at all. On the CPU front, I think we'll get the biggest 
 bang for our buck if we look into a design that doesn't require calculating 
 the size up front.
 
 I have some ideas in mind for a new design that I hope will address both of 
 these issues. I'm going to write them up in a separate post.
 
 Regarding your patch, I'm happy to apply it, but I suspect that much of the 
 current codec layer would need to be modified and/or replaced to address the 
 above findings. Let me know how you would like to proceed.
 
 
 
 It's all consistent with what I have seen... I have also realized

Re: Optimizations on Proton-j

2014-05-02 Thread Clebert Suconic
These should all be cleared now.


My github branch and PR are up to date now:

https://github.com/apache/qpid-proton/pull/1


 And isn't git beautiful? It's already rebased on Rafi's last commit! 



On May 1, 2014, at 5:32 PM, Clebert Suconic csuco...@redhat.com wrote:

 I will do some cleanup on this. I already fixed the headers here on my copy, 
 and I will clean up imports that weren't supposed to be changed.
 On May 1, 2014, at 5:11 PM, Robbie Gemmell robbie.gemm...@gmail.com wrote:
 
 As no mail arrived here or qpid-dev, and none seems to have arrived at what
 used to be the default location (infra-dev) either, I had a quick look and
 it seems like they might have changed the process slightly and we will need
 to ask for the mails to be enabled at all:
 https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
 
 I particularly like the mention of the new comment syncing between our
 mailing list and the Pull Requests.
 
 Regarding closing the pull requests, it seems like something along the
 lines of "This closes #<request number> at GitHub" added to the end of the
 svn commit message should do the trick:
 https://help.github.com/articles/closing-issues-via-commit-messages
 
 I haven't had a chance to really look at the actual code change, but when I
 was quickly scrolling down the PR, in addition to the licence headers on
 the new files that Rafi already mentioned (which I spotted due to the
 Copyright notices we wouldn't typically have), I noticed Encoder.java having
 its existing licence header corrupted a little by some wayward code.
 
 Robbie
 I just submitted it as a git PR:
 
 https://github.com/apache/qpid-proton/pull/1
 
 
 
 On Apr 30, 2014, at 10:47 AM, Robbie Gemmell robbie.gemm...@gmail.com
 wrote:
 
 I think anyone can sign up for ReviewBoard themselves. It certainly didn't
 used to be linked to the ASF LDAP in the past, presumably for that reason.
 
 Its probably also worth noting you can initiate pull requests against the
 github mirrors. If it hasn't already been done for the proton mirror, we
 can have the emails that would generate be directed to this list (e.g.
 
 http://mail-archives.apache.org/mod_mbox/qpid-dev/201401.mbox/%3c20140130180355.3cf9e916...@tyr.zones.apache.org%3E
 ).
 We obviously can't merge the pull request via github, but you can use
 the reviewing tools etc and the resultant patch can be downloaded or
 attached to a JIRA and then applied in the usual fashion (I believe there
 is a commit message syntax that can be used to trigger closing the pull
 request).
 
 Robbie
 
 On 30 April 2014 15:22, Rafael Schloming r...@alum.mit.edu wrote:
 
 On Wed, Apr 30, 2014 at 8:35 AM, Clebert Suconic csuco...@redhat.com
 wrote:
 
 @Rafi: I see there is a patch review  process within Apache (based on
 your
 other thread on Java8)
 
 Should we make this through the patch process at some point?
 
 
 I'm fine looking at it on your git branch, but if you'd like to play with
 the review tool then feel free.  Just let me know if you need an account
 and I will try to remember how to set one up (or who to bug to get you
 one). ;-)
 
 --Rafael
 
 



[jira] [Commented] (PROTON-576) proton-j: codec support for UTF-8 encoding and decoding appears broken?

2014-05-02 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987747#comment-13987747
 ] 

clebert suconic commented on PROTON-576:


Why don't you add a test with your patch? It would help a lot!


 proton-j: codec support for UTF-8 encoding and decoding appears broken?
 ---

 Key: PROTON-576
 URL: https://issues.apache.org/jira/browse/PROTON-576
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-j
Affects Versions: 0.7
Reporter: Dominic Evans
 Attachments: 02_fix_stringtype_encode_decode.patch, Utf8Samples.txt, 
 benchmark-src.zip, benchmark.jar


 It seems like Proton-J has its own custom UTF-8 encoder, but relies on Java 
 String's built-in UTF-8 decoder. However, the code doesn't seem quite right 
 and complex double byte UTF-8 like emoji ('') can quite easily fail to 
 parse:
 |   |   Cause:1   :-  java.lang.IllegalArgumentException: Cannot parse 
 String
 |   |   Message:1 :-  Cannot parse String
 |   |   StackTrace:1  :-  java.lang.IllegalArgumentException: Cannot parse 
 String
 |   | at 
 org.apache.qpid.proton.codec.StringType$1.decode(StringType.java:48)
 |   | at 
 org.apache.qpid.proton.codec.StringType$1.decode(StringType.java:36)
 |   | at 
 org.apache.qpid.proton.codec.DecoderImpl.readRaw(DecoderImpl.java:945)
 |   | at 
 org.apache.qpid.proton.codec.StringType$AllStringEncoding.readValue(StringType.java:172)
 |   | at 
 org.apache.qpid.proton.codec.StringType$AllStringEncoding.readValue(StringType.java:124)
 |   | at 
 org.apache.qpid.proton.codec.DynamicTypeConstructor.readValue(DynamicTypeConstructor.java:39)
 |   | at 
 org.apache.qpid.proton.codec.DecoderImpl.readObject(DecoderImpl.java:885)
 |   | at 
 org.apache.qpid.proton.message.impl.MessageImpl.decode(MessageImpl.java:629)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PROTON-576) proton-j: codec support for UTF-8 encoding and decoding appears broken?

2014-05-01 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13986807#comment-13986807
 ] 

clebert suconic commented on PROTON-576:


Sounds like a good idea.


Speaking of microbenchmarks, we need a placeholder for them in the project. I 
recently wrote one as part of my proposed patch and placed it under tests; a 
dedicated location for them sounds like a good idea.

 proton-j: codec support for UTF-8 encoding and decoding appears broken?
 ---

 Key: PROTON-576
 URL: https://issues.apache.org/jira/browse/PROTON-576
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-j
Affects Versions: 0.7
Reporter: Dominic Evans
 Attachments: 02_fix_stringtype_encode_decode.patch


 It seems like Proton-J has its own custom UTF-8 encoder, but relies on Java 
 String's built-in UTF-8 decoder. However, the code doesn't seem quite right 
 and complex double byte UTF-8 like emoji ('') can quite easily fail to 
 parse:
 |   |   Cause:1   :-  java.lang.IllegalArgumentException: Cannot parse 
 String
 |   |   Message:1 :-  Cannot parse String
 |   |   StackTrace:1  :-  java.lang.IllegalArgumentException: Cannot parse 
 String
 |   | at 
 org.apache.qpid.proton.codec.StringType$1.decode(StringType.java:48)
 |   | at 
 org.apache.qpid.proton.codec.StringType$1.decode(StringType.java:36)
 |   | at 
 org.apache.qpid.proton.codec.DecoderImpl.readRaw(DecoderImpl.java:945)
 |   | at 
 org.apache.qpid.proton.codec.StringType$AllStringEncoding.readValue(StringType.java:172)
 |   | at 
 org.apache.qpid.proton.codec.StringType$AllStringEncoding.readValue(StringType.java:124)
 |   | at 
 org.apache.qpid.proton.codec.DynamicTypeConstructor.readValue(DynamicTypeConstructor.java:39)
 |   | at 
 org.apache.qpid.proton.codec.DecoderImpl.readObject(DecoderImpl.java:885)
 |   | at 
 org.apache.qpid.proton.message.impl.MessageImpl.decode(MessageImpl.java:629)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PROTON-576) proton-j: codec support for UTF-8 encoding and decoding appears broken?

2014-05-01 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13986911#comment-13986911
 ] 

clebert suconic commented on PROTON-576:


200 samples is a small number. You should probably run with thousands, and 
start measuring only after a few warm-up iterations (to give the JDK time to do 
just-in-time compilation, etc.).
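A minimal sketch of the warm-up pattern suggested above: run the workload untimed until the JIT has compiled the hot paths, then time many iterations. The names here are illustrative:

```java
final class WarmupBench {
    // Run 'work' warmup times untimed, then 'measured' times timed;
    // returns elapsed nanoseconds for the measured phase only.
    static long run(Runnable work, int warmup, int measured) {
        for (int i = 0; i < warmup; i++) work.run();     // not timed: JIT warm-up
        long start = System.nanoTime();
        for (int i = 0; i < measured; i++) work.run();   // timed
        return System.nanoTime() - start;
    }
}
```

Hand-rolled loops like this are still vulnerable to dead-code elimination and on-stack replacement artifacts; a harness such as JMH handles those pitfalls systematically.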

 proton-j: codec support for UTF-8 encoding and decoding appears broken?
 ---

 Key: PROTON-576
 URL: https://issues.apache.org/jira/browse/PROTON-576
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-j
Affects Versions: 0.7
Reporter: Dominic Evans
 Attachments: 02_fix_stringtype_encode_decode.patch


 It seems like Proton-J has its own custom UTF-8 encoder, but relies on Java 
 String's built-in UTF-8 decoder. However, the code doesn't seem quite right 
 and complex double byte UTF-8 like emoji ('') can quite easily fail to 
 parse:
 |   |   Cause:1   :-  java.lang.IllegalArgumentException: Cannot parse 
 String
 |   |   Message:1 :-  Cannot parse String
 |   |   StackTrace:1  :-  java.lang.IllegalArgumentException: Cannot parse 
 String
 |   | at 
 org.apache.qpid.proton.codec.StringType$1.decode(StringType.java:48)
 |   | at 
 org.apache.qpid.proton.codec.StringType$1.decode(StringType.java:36)
 |   | at 
 org.apache.qpid.proton.codec.DecoderImpl.readRaw(DecoderImpl.java:945)
 |   | at 
 org.apache.qpid.proton.codec.StringType$AllStringEncoding.readValue(StringType.java:172)
 |   | at 
 org.apache.qpid.proton.codec.StringType$AllStringEncoding.readValue(StringType.java:124)
 |   | at 
 org.apache.qpid.proton.codec.DynamicTypeConstructor.readValue(DynamicTypeConstructor.java:39)
 |   | at 
 org.apache.qpid.proton.codec.DecoderImpl.readObject(DecoderImpl.java:885)
 |   | at 
 org.apache.qpid.proton.message.impl.MessageImpl.decode(MessageImpl.java:629)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PROTON-576) proton-j: codec support for UTF-8 encoding and decoding appears broken?

2014-05-01 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/PROTON-576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13986913#comment-13986913
 ] 

clebert suconic commented on PROTON-576:


Why don't you share your test here?

 proton-j: codec support for UTF-8 encoding and decoding appears broken?
 ---

 Key: PROTON-576
 URL: https://issues.apache.org/jira/browse/PROTON-576
 Project: Qpid Proton
  Issue Type: Bug
  Components: proton-j
Affects Versions: 0.7
Reporter: Dominic Evans
 Attachments: 02_fix_stringtype_encode_decode.patch


 It seems like Proton-J has its own custom UTF-8 encoder, but relies on Java 
 String's built-in UTF-8 decoder. However, the code doesn't seem quite right 
 and complex double byte UTF-8 like emoji ('') can quite easily fail to 
 parse:
 |   |   Cause:1   :-  java.lang.IllegalArgumentException: Cannot parse 
 String
 |   |   Message:1 :-  Cannot parse String
 |   |   StackTrace:1  :-  java.lang.IllegalArgumentException: Cannot parse 
 String
 |   | at 
 org.apache.qpid.proton.codec.StringType$1.decode(StringType.java:48)
 |   | at 
 org.apache.qpid.proton.codec.StringType$1.decode(StringType.java:36)
 |   | at 
 org.apache.qpid.proton.codec.DecoderImpl.readRaw(DecoderImpl.java:945)
 |   | at 
 org.apache.qpid.proton.codec.StringType$AllStringEncoding.readValue(StringType.java:172)
 |   | at 
 org.apache.qpid.proton.codec.StringType$AllStringEncoding.readValue(StringType.java:124)
 |   | at 
 org.apache.qpid.proton.codec.DynamicTypeConstructor.readValue(DynamicTypeConstructor.java:39)
 |   | at 
 org.apache.qpid.proton.codec.DecoderImpl.readObject(DecoderImpl.java:885)
 |   | at 
 org.apache.qpid.proton.message.impl.MessageImpl.decode(MessageImpl.java:629)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Optimizations on Proton-j

2014-05-01 Thread Clebert Suconic
For now I have pretty much optimized only the Transfer type; no other types.

For instance, I see that the Disposition type needs optimization as well... 

For us, though, that is the biggest advantage of the patch I'm making.

If you send a 1K message, you won't see much of the codec optimization being 
exercised.

We could do 10 million Transfers in 3 seconds before, versus 1.5 seconds on my 
laptop now. If transferring 10 million 1K messages takes 40 seconds, that 
1.5-second improvement is spread across the delivery cost and you wouldn't be 
able to see a difference.


Why don't you try sending empty messages? That is, messages received with an 
empty body.

On May 1, 2014, at 4:44 PM, Rafael Schloming r...@alum.mit.edu wrote:

 Hi Clebert,
 
 I've been (amongst other things) doing a little bit of investigation on
 this topic over the past couple of days. I wrote a microbenchmark that
 takes two engines and directly wires their transports together. It then
 pumps about 10 million 1K messages from one engine to the other. I ran this
 benchmark under jprofiler and codec definitely came up as a hot spot, but
 when I apply your patch, I don't see any measurable difference in results.
 Either way it's taking about 40 seconds to pump all the messages through.
 
 I'm not quite sure what is going on, but I'm guessing either the code path
 you've optimized isn't coming up enough to make much of a difference, or
 I've somehow messed up the measurements. I will post the benchmark shortly,
 so hopefully you can check up on my measurements yourself.
 
 On a more mundane note, Andrew pointed out that the new files you've added
 in your patch use an outdated license header. You can take a look at some
 existing files in the repo to get a current license header.
 
 --Rafael
 
 
 
 On Wed, Apr 30, 2014 at 2:15 PM, Clebert Suconic csuco...@redhat.comwrote:
 
 I just submitted it as a git PR:
 
 https://github.com/apache/qpid-proton/pull/1
 
 
 
 On Apr 30, 2014, at 10:47 AM, Robbie Gemmell robbie.gemm...@gmail.com
 wrote:
 
 I think anyone can sign up for ReviewBoard themselves. It certainly
 didn't
 used to be linked to the ASF LDAP in the past, presumably for that
 reason.
 
 Its probably also worth noting you can initiate pull requests against the
 github mirrors. If it hasn't already been done for the proton mirror, we
 can have the emails that would generate be directed to this list (e.g.
 
 http://mail-archives.apache.org/mod_mbox/qpid-dev/201401.mbox/%3c20140130180355.3cf9e916...@tyr.zones.apache.org%3E
 ).
 We obviously can't merge the pull request via github, but you can use
 the reviewing tools etc and the resultant patch can be downloaded or
 attached to a JIRA and then applied in the usual fashion (I believe there
 is a commit message syntax that can be used to trigger closing the pull
 request).
 
 Robbie
 
 On 30 April 2014 15:22, Rafael Schloming r...@alum.mit.edu wrote:
 
 On Wed, Apr 30, 2014 at 8:35 AM, Clebert Suconic csuco...@redhat.com
 wrote:
 
 @Rafi: I see there is a patch review  process within Apache (based on
 your
 other thread on Java8)
 
 Should we make this through the patch process at some point?
 
 
 I'm fine looking at it on your git branch, but if you'd like to play
 with
 the review tool then feel free.  Just let me know if you need an account
 and I will try to remember how to set one up (or who to bug to get you
 one). ;-)
 
 --Rafael
 
 
 



Re: Optimizations on Proton-j

2014-05-01 Thread Clebert Suconic
Ah... I will fix the license headers shortly.



On May 1, 2014, at 4:51 PM, Clebert Suconic csuco...@redhat.com wrote:

 For now I have pretty much optimized only the Transfer type; no other types.
 
 For instance, I see that the Disposition type needs optimization as well... 
 
 For us, though, that is the biggest advantage of the patch I'm making.
 
 If you send a 1K message, you won't see much of the codec optimization being 
 exercised.
 
 We could do 10 million Transfers in 3 seconds before, versus 1.5 seconds on my 
 laptop now. If transferring 10 million 1K messages takes 40 seconds, that 
 1.5-second improvement is spread across the delivery cost and you wouldn't be 
 able to see a difference.
 
 
 Why don't you try sending empty messages? That is, messages received with an 
 empty body.
 
 On May 1, 2014, at 4:44 PM, Rafael Schloming r...@alum.mit.edu wrote:
 
 Hi Clebert,
 
 I've been (amongst other things) doing a little bit of investigation on
 this topic over the past couple of days. I wrote a microbenchmark that
 takes two engines and directly wires their transports together. It then
 pumps about 10 million 1K messages from one engine to the other. I ran this
 benchmark under jprofiler and codec definitely came up as a hot spot, but
 when I apply your patch, I don't see any measurable difference in results.
 Either way it's taking about 40 seconds to pump all the messages through.
 
 I'm not quite sure what is going on, but I'm guessing either the code path
 you've optimized isn't coming up enough to make much of a difference, or
 I've somehow messed up the measurements. I will post the benchmark shortly,
 so hopefully you can check up on my measurements yourself.
 
 On a more mundane note, Andrew pointed out that the new files you've added
 in your patch use an outdated license header. You can take a look at some
 existing files in the repo to get a current license header.
 
 --Rafael
 
 
 
 On Wed, Apr 30, 2014 at 2:15 PM, Clebert Suconic csuco...@redhat.comwrote:
 
 I just submitted it as a git PR:
 
 https://github.com/apache/qpid-proton/pull/1
 
 
 
 On Apr 30, 2014, at 10:47 AM, Robbie Gemmell robbie.gemm...@gmail.com
 wrote:
 
 I think anyone can sign up for ReviewBoard themselves. It certainly
 didn't
 used to be linked to the ASF LDAP in the past, presumably for that
 reason.
 
 Its probably also worth noting you can initiate pull requests against the
 github mirrors. If it hasn't already been done for the proton mirror, we
 can have the emails that would generate be directed to this list (e.g.
 
 http://mail-archives.apache.org/mod_mbox/qpid-dev/201401.mbox/%3c20140130180355.3cf9e916...@tyr.zones.apache.org%3E
 ).
 We obviously can't merge the pull request via github, but you can use
 the reviewing tools etc and the resultant patch can be downloaded or
 attached to a JIRA and then applied in the usual fashion (I believe there
 is a commit message syntax that can be used to trigger closing the pull
 request).
 
 Robbie
 
 On 30 April 2014 15:22, Rafael Schloming r...@alum.mit.edu wrote:
 
 On Wed, Apr 30, 2014 at 8:35 AM, Clebert Suconic csuco...@redhat.com
 wrote:
 
 @Rafi: I see there is a patch review  process within Apache (based on
 your
 other thread on Java8)
 
 Should we make this through the patch process at some point?
 
 
 I'm fine looking at it on your git branch, but if you'd like to play
 with
 the review tool then feel free.  Just let me know if you need an account
 and I will try to remember how to set one up (or who to bug to get you
 one). ;-)
 
 --Rafael
 
 
 
 



Re: Optimizations on Proton-j

2014-05-01 Thread Clebert Suconic
I will do some cleanup on this. I already fixed the headers here on my copy, 
and I will clean up imports that weren't supposed to be changed.
On May 1, 2014, at 5:11 PM, Robbie Gemmell robbie.gemm...@gmail.com wrote:

 As no mail arrived here or qpid-dev, and none seems to have arrived at what
 used to be the default location (infra-dev) either, I had a quick look and
 it seems like they might have changed the process slightly and we will need
 to ask for the mails to be enabled at all:
 https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
 
 I particularly like the mention of the new comment syncing between our
 mailing list and the Pull Requests.
 
 Regarding closing the pull requests, it seems like something along the
 lines of This closes #request number at GitHub added to the end of the
 svn commit message should do the trick:
 https://help.github.com/articles/closing-issues-via-commit-messages
 
 I haven't had a chance to really look at the actual code change, but when I
 was quickly scrolling down the PR, in addition to the licence headers on
 the new files that Rafi already mentioned (which I spotted due to the
 Copyright notices we wouldn't typically have), I noticed Encoder.java having
 its existing licence header corrupted a little by some wayward code.
 
 Robbie
 I just submitted it as a git PR:
 
 https://github.com/apache/qpid-proton/pull/1
 
 
 
 On Apr 30, 2014, at 10:47 AM, Robbie Gemmell robbie.gemm...@gmail.com
 wrote:
 
 I think anyone can sign up for ReviewBoard themselves. It certainly didn't
 used to be linked to the ASF LDAP in the past, presumably for that reason.
 
 It's probably also worth noting you can initiate pull requests against the
 github mirrors. If it hasn't already been done for the proton mirror, we
 can have the emails that would generate be directed to this list (e.g.
 
 http://mail-archives.apache.org/mod_mbox/qpid-dev/201401.mbox/%3c20140130180355.3cf9e916...@tyr.zones.apache.org%3E
 ).
 We obviously can't merge the pull request via github, but you can use
 the reviewing tools etc and the resultant patch can be downloaded or
 attached to a JIRA and then applied in the usual fashion (I believe there
 is a commit message syntax that can be used to trigger closing the pull
 request).
 
 Robbie
 
 On 30 April 2014 15:22, Rafael Schloming r...@alum.mit.edu wrote:
 
 On Wed, Apr 30, 2014 at 8:35 AM, Clebert Suconic csuco...@redhat.com
 wrote:
 
 @Rafi: I see there is a patch review  process within Apache (based on
 your
 other thread on Java8)
 
 Should we make this through the patch process at some point?
 
 
 I'm fine looking at it on your git branch, but if you'd like to play with
 the review tool then feel free.  Just let me know if you need an account
 and I will try to remember how to set one up (or who to bug to get you
 one). ;-)
 
 --Rafael
 



Re: simplified factory pattern for proton-j

2014-04-30 Thread Clebert Suconic
The only issue is that all the users of Proton-j would have to be on at least 
Java 8.
For instance, my users may be on Java 8 on the server, but they 
won't migrate all their clients.


On Apr 30, 2014, at 6:48 AM, Rafael Schloming r...@alum.mit.edu wrote:

 I forgot to mention, but another part of the reasoning here is that Java 8
 is (finally!!!) allowing static methods in interfaces, so the natural
 pattern for this sort of thing would just be Interface.create(...), and
 while we won't be able to use that for a while, the
 Interface.Factory.create(...) option is about as idiomatically close to
 that as we can get in Java 7.
 
 --Rafael
 
 
 On Tue, Apr 29, 2014 at 2:57 PM, Rafael Schloming r...@alum.mit.edu wrote:
 
 Hi Everyone,
 
 I've put together a patch that makes the proton-j factory usage a bit
 simpler and more consistent. You can review it here if you like:
 
  - https://reviews.apache.org/r/20854/
 
 The main point of the patch is to make all the factories consistently
 follow this pattern:
 
  package.Interface iface = package.Interface.Factory.create(...);
 
 I like this because it is simple and easy to remember and doesn't require
 importing the impl packages directly.
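 A minimal sketch of that Java 7 idiom, with hypothetical names ("Greeter",
 "GreeterImpl") standing in for proton-j interfaces like Connection and their
 impl-package classes:

```java
// "Greeter" and "GreeterImpl" are illustrative stand-ins, not proton-j API.
interface Greeter {
    String greet(String name);

    // A nested class inside an interface is legal in Java 7, and is the
    // closest idiom to Java 8's static interface methods.
    final class Factory {
        private Factory() {}
        public static Greeter create() {
            // Callers never import or reference the impl class directly.
            return new GreeterImpl();
        }
    }
}

class GreeterImpl implements Greeter {
    @Override
    public String greet(String name) { return "Hello, " + name; }
}

public class FactoryDemo {
    public static void main(String[] args) {
        // The pattern under discussion: package.Interface.Factory.create(...)
        Greeter g = Greeter.Factory.create();
        System.out.println(g.greet("proton"));
    }
}
```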
 
 The patch preserves the convenience constructors, e.g. Proton.connection()
 and so forth, but it does remove the old factory APIs. I think this is a
 reasonable thing to do because the old factories were cumbersome enough to
 use that I don't think anyone actually bothered (including our own
 examples).
 
 In any case, please shout if this patch will be troublesome for you. If I
 don't hear anything I'll go ahead and commit it later this week.
 
 Thanks,
 
 --Rafael
 
 



Re: Optimizations on Proton-j

2014-04-30 Thread Clebert Suconic
@Rafi: I see there is a patch review  process within Apache (based on your 
other thread on Java8)

Should we make this through the patch process at some point?


On Apr 29, 2014, at 4:04 PM, Clebert Suconic csuco...@redhat.com wrote:

 
 On Apr 29, 2014, at 4:00 PM, Rafael Schloming r...@alum.mit.edu wrote:
 
 Notice that this will also minimize the footprint of the codec but I'm not
 measuring that here.
 
 
 Just out of curiosity, when you say minimize the footprint, are you
 referring to the in memory overhead, or do you mean the encoded size on the
 wire?
 
 I didn't change any encoding size in this refactoring. The code should 
 produce the same bytes as it was producing before.
 
 
 I meant memory overhead.. Before my changes you had to instantiate a Decoder 
 per Connection, and a Decoder per Message (that was actually recently changed 
 to per thread). The decoder has the DefinedTypes and registered objects... 
 with this change now you can have a single instance of the Decoder and 
 Encoder per JVM since it's stateless now.
 



Re: simplified factory pattern for proton-j

2014-04-30 Thread Clebert Suconic
Ah.. I see. 

I looked at the patch and I liked it.


One thing I don't like in the current codebase is implementations as inner 
classes of interfaces (e.g. WriteableBuffer)... I would suggest making them 
separate classes.

   but in this case it makes a lot of sense.



On Apr 30, 2014, at 9:08 AM, Rafael Schloming r...@alum.mit.edu wrote:

 Right, I wasn't suggesting the proton codebase use Java 8 anytime soon (I
 would think not until Java 7 is EOLed), just that if a Java 8 codebase uses
 proton-j then the idioms will be a little bit closer to each other.
 
 --Rafael
 
 
 On Wed, Apr 30, 2014 at 8:33 AM, Clebert Suconic csuco...@redhat.comwrote:
 
 The only issue is that all the users using Proton-j would have to be at
 least Java8.
 For instance, I can have maybe my users using Java8 on the server, but
 they won't migrate all their clients.
 
 
 On Apr 30, 2014, at 6:48 AM, Rafael Schloming r...@alum.mit.edu wrote:
 
 I forgot to mention, but another part of the reasoning here is that Java
 8
 is (finally!!!) allowing static methods in interfaces, so the natural
 pattern for this sort of thing would just be Interface.create(...), and
 while we won't be able to use that for a while, the
 Interface.Factory.create(...) option is about as idiomatically close to
 that as we can get in Java 7.
 
 --Rafael
 
 
 On Tue, Apr 29, 2014 at 2:57 PM, Rafael Schloming r...@alum.mit.edu
 wrote:
 
 Hi Everyone,
 
 I've put together a patch that makes the proton-j factory usage a bit
 simpler and more consistent. You can review it here if you like:
 
 - https://reviews.apache.org/r/20854/
 
 The main point of the patch is to make all the factories consistently
 follow this pattern:
 
 package.Interface iface = package.Interface.Factory.create(...);
 
 I like this because it is simple and easy to remember and doesn't
 require
 importing the impl packages directly.
 
 The patch preserves the convenience constructors, e.g.
 Proton.connection()
 and so forth, but it does remove the old factory APIs. I think this is a
 reasonable thing to do because the old factories were cumbersome enough
 to
 use that I don't think anyone actually bothered (including our own
 examples).
 
 In any case, please shout if this patch will be troublesome for you. If
 I
 don't hear anything I'll go ahead and commit it later this week.
 
 Thanks,
 
 --Rafael
 
 
 
 



Optimizations on Proton-j

2014-04-29 Thread Clebert Suconic
I have done some work last week on optimizing the Codec.. and I think I've 
gotten some interesting results.


- The Decoder is now stateless, meaning the same instance can be used over and 
over (no more need for one instance per connection). Bozo Dragojefic has 
actually seen how heavy it is to create a Decoder and has recently optimized 
MessageImpl to always take the same instance through ThreadLocals. This 
optimization goes a step further.
- I have changed the ListDecoders so you won't need intermediate objects 
to parse types. For now I have only made Transfer work that way, but I 
could do it for all the other types at some point.
- There were a few hotspots that I found in the tests, and I have refactored 
them accordingly, with no semantic changes.

As a result of these optimizations, DecoderImpl won't have a setBuffer method 
any longer. Instead, each method will take the buffer as an argument 
(read(ReadableBuffer ..., old signature)).


And talking about ReadableBuffer, I have introduced the ReadableBuffer 
interface. When integrating with the broker, I had a situation where I won't 
have a ByteBuffer, and this interface will allow me to further optimize the 
parser later, as I could use the Netty buffer (aka ByteBuf).
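 The buffer abstraction described above can be sketched roughly as follows;
 the method names here are illustrative, not the exact API on the branch:

```java
import java.nio.ByteBuffer;

// An interface decoupling the decoder from java.nio.ByteBuffer, so other
// buffer types (e.g. Netty's ByteBuf) can be adapted behind it later.
interface ReadableBuffer {
    byte get();
    int getInt();
    boolean hasRemaining();
}

// Adapter over a plain ByteBuffer; an equivalent adapter could wrap ByteBuf.
final class ByteBufferReader implements ReadableBuffer {
    private final ByteBuffer buffer;
    ByteBufferReader(ByteBuffer buffer) { this.buffer = buffer; }
    public byte get() { return buffer.get(); }
    public int getInt() { return buffer.getInt(); }
    public boolean hasRemaining() { return buffer.hasRemaining(); }
}

public class ReadableBufferDemo {
    public static void main(String[] args) {
        ByteBuffer bb = ByteBuffer.allocate(8);
        bb.putInt(42).putInt(7).flip();
        // Decoding code only sees the interface, never the concrete buffer.
        ReadableBuffer rb = new ByteBufferReader(bb);
        System.out.println(rb.getInt() + rb.getInt());
    }
}
```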


You will find these optimizations on my branch on github: 
https://github.com/clebertsuconic/qpid-proton/tree/optimizations


Where I will have two commits:

I - a micro benchmark where I added a testcase doing a direct read on the 
buffer without any framework. I've actually written a simple parser that will 
work for the byte array I have, but that's very close to reading directly from 
the bytes.
   I used that to compare raw reading and interpreting the buffer to the 
current framework we had.
   I was actually concerned about the number of intermediate objects, so I used 
that to map these differences.

https://github.com/clebertsuconic/qpid-proton/commit/7b2b02649e5bdd35aa2e4cc487ffb91c01e75685


II - a commit with the actual optimizations:


https://github.com/clebertsuconic/qpid-proton/commit/305ecc6aaa5192fc0a1ae42b90cb4eb8ddfe046e








Without these optimizations, my micro-benchmark parsing 1000L instances of 
Transfer, without reallocating any buffers, could complete on my laptop in:

- 3480 milliseconds , against 750 milliseconds with raw reading


After these optimizations:
- 1927 milliseconds, against 750 milliseconds with raw reading
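For a rough sense of what those figures mean as ratios (the numbers below are
just arithmetic on the timings quoted above):

```java
public class SpeedupMath {
    public static void main(String[] args) {
        double before = 3480, after = 1927, raw = 750;
        // Improvement from the optimizations themselves
        System.out.println("improvement: " + round2(before / after) + "x");
        // Overhead of the framework relative to raw reading, before and after
        System.out.println("vs raw before: " + round2(before / raw) + "x");
        System.out.println("vs raw after: " + round2(after / raw) + "x");
    }
    static double round2(double v) { return Math.round(v * 100) / 100.0; }
}
```

So the changes roughly halve the codec's overhead, though decoding is still
about 2.6x slower than raw buffer reads.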



Notice that this will also minimize the footprint of the codec but I'm not 
measuring that here.





I'm looking forward to working with this group. I actually had a meeting with Rafi 
and Ted last week, and I plan to work more closely with you guys on this.



Clebert Suconic







Re: Optimizations on Proton-j

2014-04-29 Thread Clebert Suconic

On Apr 29, 2014, at 4:00 PM, Rafael Schloming r...@alum.mit.edu wrote:
 
 Notice that this will also minimize the footprint of the codec but I'm not
 measuring that here.
 
 
 Just out of curiosity, when you say minimize the footprint, are you
 referring to the in memory overhead, or do you mean the encoded size on the
 wire?

I didn't change any encoding size in this refactoring. The code should produce 
the same bytes as it was producing before.


I meant memory overhead.. Before my changes you had to instantiate a Decoder 
per Connection, and a Decoder per Message (that was actually recently changed 
to per thread). The Decoder has the DefinedTypes and registered objects... with 
this change you can have a single instance of the Decoder and Encoder per 
JVM since they're stateless now.



Re: Anonymous Producers

2014-03-24 Thread Clebert Suconic
Thanks.. that's what I needed!


On Mar 21, 2014, at 9:06 AM, Ted Ross tr...@redhat.com wrote:

 You have the option of leaving the target of a producer link empty and 
 putting the destination in the to field of each message sent on the link.
 
 -Ted
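 
 The idea Ted describes (an anonymous link: no address on the target, with each
 message's "to" field choosing the destination) can be modeled with this
 self-contained sketch; the class names are illustrative stand-ins, not the
 proton-j engine API:
 
```java
// "Message" and "SenderLink" are hypothetical stand-ins for illustration.
final class Message {
    final String to;   // AMQP "to" field in the message properties
    final String body;
    Message(String to, String body) { this.to = to; this.body = body; }
}

final class SenderLink {
    private final String targetAddress; // null => anonymous link

    SenderLink(String targetAddress) { this.targetAddress = targetAddress; }

    /** The node a message routes to: the link target wins, else the "to" field. */
    String resolveDestination(Message m) {
        return targetAddress != null ? targetAddress : m.to;
    }
}

public class AnonymousLinkDemo {
    public static void main(String[] args) {
        SenderLink anonymous = new SenderLink(null);
        // One link, different destination per message sent
        System.out.println(anonymous.resolveDestination(new Message("queue.a", "hi")));
        System.out.println(anonymous.resolveDestination(new Message("queue.b", "bye")));
    }
}
```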
 
 On 03/20/2014 11:36 PM, Clebert Suconic wrote:
 A common JMS feature is to have a Producer being anonymous and define the 
 address only when the message is sent.
 
 
 We do the same on HornetQ core API.. However from my first look on the 
 Proton API I always need to define the source and the target?
 
 Is there a way to define the destination per message sent?
 
 
 How do you handle this at the current qpid JMS Client? From what I 
 understand on the API it will then declare a new LINK when a new destination 
 is set.. is that right?
 
 
 That means there are no anonymous links or a way to alter the target during 
 the message send? no such thing as deliver.send(message, *destination*)?
 



Anonymous Producers

2014-03-20 Thread Clebert Suconic
A common JMS feature is to have an anonymous Producer and define the 
address only when the message is sent.


We do the same in the HornetQ core API.. However, from my first look at the Proton 
API, it seems I always need to define the source and the target?

Is there a way to define the destination per message sent?


How do you handle this in the current qpid JMS client? From what I understand 
of the API, it will declare a new link whenever a new destination is set.. is 
that right?


That means there are no anonymous links or a way to alter the target during the 
message send? no such thing as deliver.send(message, *destination*)?

Re: ConnectionImpl::getWorkSequence()

2014-03-18 Thread Clebert Suconic
On Mar 18, 2014, at 11:25 AM, Rafael Schloming r...@alum.mit.edu wrote:

 I doubt there is a good reason, however I suspect the new events API would
 probably be an easier alternative to getWorkHead() and friends.
 Unfortunately there aren't docs for the Java version of the API yet, but it
 shouldn't be difficult to figure out how to use it from the C API docs.
 

I know nothing about the new events API... 
but I think it would be a mistake to have Java be an exact mirror of the C 
API. Things like Iterators are pretty common in Java. 

Right now my implementation is forced to cast to ConnectionImpl, which breaks 
the purpose of the interface. Can you guys move it?

 --Rafael
 
 
 On Mon, Mar 17, 2014 at 12:44 PM, Clebert Suconic csuco...@redhat.comwrote:
 
 Why getWorkSequence is not exposed through Connection?
 
 
 Forcing me to use getWorkHead() would make me re-implement the exact same
 Iterator that's being implemented at ConnectionImpl(); Why not just expose
 it properly? Iterators are common practice in Java anyways.



Re: ConnectionImpl::getWorkSequence()

2014-03-18 Thread Clebert Suconic
 
 
 Unfortunately there aren't docs for the Java version of the API yet, but it
 shouldn't be difficult to figure out how to use it from the C API docs.
 

BTW I don't need the javadoc.. just point me to the class you are talking about 
and I will figure it out.



Re: ConnectionImpl::getWorkSequence()

2014-03-18 Thread Clebert Suconic
 
 
 Right now my implementation is forced to cast to ConnectionImpl what
 breaks the purpose of the interface. Can you guys move it?
 
 
 I'm happy to accept a patch for it, although I'd encourage you to check out
 the events stuff in any case.
 
 --Rafael


I sure will take a look on the events stuff..


I thought you were objecting to Iterators just because of the C / Java API 
compatibility.


I would know how to provide a patch if it were git or GitHub. I'm a bit rusty on 
SVN.. would you be ok if I provided you a git branch with a commit? I'm not a 
proton committer yet (although I'm planning to get a lot more involved).

ConnectionImpl::getWorkSequence()

2014-03-17 Thread Clebert Suconic
Why is getWorkSequence not exposed through Connection?


Forcing me to use getWorkHead() would make me re-implement the exact same 
Iterator that's already implemented in ConnectionImpl; why not just expose it 
properly? Iterators are common practice in Java anyway.
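
The kind of wrapper being asked about can be sketched like this: adapting a
getWorkHead()-style intrusive linked list (as proton-j exposes for Delivery)
to a standard java.util.Iterator. "WorkItem" here is a hypothetical stand-in
for Delivery, not the real class:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

final class WorkItem {
    final String tag;
    WorkItem workNext;   // intrusive "next" pointer, like Delivery.getWorkNext()
    WorkItem(String tag) { this.tag = tag; }
}

final class WorkSequence implements Iterable<WorkItem> {
    private final WorkItem head;   // like Connection.getWorkHead()
    WorkSequence(WorkItem head) { this.head = head; }

    @Override
    public Iterator<WorkItem> iterator() {
        return new Iterator<WorkItem>() {
            private WorkItem next = head;
            @Override public boolean hasNext() { return next != null; }
            @Override public WorkItem next() {
                if (next == null) throw new NoSuchElementException();
                WorkItem current = next;
                next = current.workNext;   // walk the intrusive list
                return current;
            }
            @Override public void remove() { throw new UnsupportedOperationException(); }
        };
    }
}

public class WorkSequenceDemo {
    public static void main(String[] args) {
        WorkItem a = new WorkItem("a"), b = new WorkItem("b");
        a.workNext = b;
        StringBuilder out = new StringBuilder();
        for (WorkItem w : new WorkSequence(a)) out.append(w.tag);
        System.out.println(out);
    }
}
```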