Re: VOTE: Release Proton 0.9-rc-1 as 0.9 final

2015-03-12 Thread Rafael Schloming
Hi Everyone,

FYI, I'm going to be out of contact for a few days again. So far the two
issues discovered in this RC are quite isolated, so please continue to test
RC 1 and make sure any new fixes go on the 0.9 branch. I will spin another
RC off of the branch as soon as I'm back.

Thanks,

--Rafael


On Tue, Mar 10, 2015 at 12:57 AM, Rafael Schloming r...@alum.mit.edu wrote:

 Hi Everyone,

 I've posted 0.9-rc-1 in the usual places. Please have a look and register
 your vote:

 Source code can be found here:

 http://people.apache.org/~rhs/qpid-proton-0.9-rc-1/

 Java binaries are here:

 https://repository.apache.org/content/repositories/orgapacheqpid-1025

 [   ] Yes, release Proton 0.9-rc-1 as 0.9 final
 [   ] No, because ...
 --Rafael



[GitHub] qpid-proton pull request: NO-JIRA: Measure size of encoded data.

2015-03-12 Thread astitcher
Github user astitcher commented on the pull request:

https://github.com/apache/qpid-proton/pull/11#issuecomment-78436876
  
 I'd love to hear some details. Are you suggesting we accumulate the
 encoded size every time you modify the data?

Not especially, but possibly - I observe that when you need to resize your
buffer you always end up doing at least two passes over the data. So in this
case it isn't really a big deal to ask for the size after the first attempt
has failed. That leads me to caching the required size in the pn_data_t
whenever we encode, even if (especially if) it fails.

So the flow would look like this (in Python):

size = 1024
cd, enc = pn_data_encode(self._data, size)
if cd == PN_OVERFLOW:
    size = pn_data_encoded_size(self._data)
    cd, enc = pn_data_encode(self._data, size)
    assert cd != PN_OVERFLOW

So no dance any more (at worst the two steps, not round-and-round). And if 
this is not fast enough we can cache away the size computed during 
pn_data_encode() for use in pn_data_encoded_size(), remembering to invalidate 
it if the struct is changed afterwards (not actually likely).
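As a sketch of what that caching could look like, here is a toy Python model, not the real Proton C API: the `Data` class and its method names are invented for illustration, standing in for `pn_data_t`, `pn_data_encode()`, and `pn_data_encoded_size()`. The point is that the encoder discovers the required size while walking the data anyway, so caching it on a failed encode costs nothing, and any mutation invalidates the cache:

```python
# Toy model of the caching scheme described above. Names are
# illustrative stand-ins for the real pn_data_t C API.

PN_OVERFLOW = -1  # assumed error code, mirroring Proton's convention


class Data:
    def __init__(self):
        self._items = []
        self._cached_size = None  # None means "not known"

    def put(self, item):
        self._items.append(item)
        self._cached_size = None  # any mutation invalidates the cache

    def _compute_size(self):
        # Stand-in for walking the nodes and summing their encoded sizes.
        return sum(len(repr(i).encode()) for i in self._items)

    def encode(self, size):
        # A real encoder discovers the required size as it walks the
        # data, so caching it on failure is essentially free.
        needed = self._compute_size()
        self._cached_size = needed
        if needed > size:
            return PN_OVERFLOW, None
        return needed, "".join(repr(i) for i in self._items).encode()

    def encoded_size(self):
        # Cheap when encode() just failed: the size is already cached.
        if self._cached_size is None:
            self._cached_size = self._compute_size()
        return self._cached_size
```

With this shape, the overflow path from the flow above does one failed pass (which populates the cache), one cheap cached lookup, and one successful pass.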

Just in case it isn't clear: this is better than calculating the size first 
because, with a cached size, there is no wasted work (at least compared to the 
existing scheme and your proposed one), whereas always asking for the size up 
front is always extra work when, say, 80% of the time the buffer would have 
been big enough anyway.
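The 80% figure can be turned into a quick back-of-the-envelope comparison of expected passes over the data (the probability is illustrative, taken from the argument above, not measured):

```python
# Expected passes over the data per encode call, assuming the initial
# buffer guess is large enough with probability p (p is illustrative).

def expected_passes(p):
    # Retry-on-overflow scheme: 1 pass when the guess fits, otherwise
    # the failed pass (which caches the size for free) plus a retry.
    retry = p * 1 + (1 - p) * 2
    # Always-ask-first scheme: one sizing pass plus one encode pass.
    ask_first = 2
    return retry, ask_first

retry, ask_first = expected_passes(0.8)
# With p = 0.8 the retry scheme averages about 1.2 passes vs a flat 2.
```

The retry scheme only loses when overflow is the common case, i.e. when p is small.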

This isn't so far from your proposal, I just think the API doesn't suck ;-)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] qpid-proton pull request: NO-JIRA: Measure size of encoded data.

2015-03-12 Thread alanconway
Github user alanconway commented on the pull request:

https://github.com/apache/qpid-proton/pull/11#issuecomment-78482514
  
On Thu, 2015-03-12 at 00:52 -0700, Andrew Stitcher wrote:
 [...]
 This isn't so far from your proposal, I just think the API doesn't
 suck ;-)

Agreed, same behavior, nicer API. Will rework and re-post. Thanks!



