Re: delivery.setPayload

2014-05-16 Thread Clebert Suconic

On May 14, 2014, at 1:36 PM, Rafael Schloming  wrote:

> On Wed, May 14, 2014 at 1:05 PM, Clebert Suconic wrote:
> 
>> I was just playing with possibilities and trying to find ways to improve
>> how things are done in the API.
>> 
>> 
>> 
>>> The interface is designed to
>>> operate in terms of chunks of message data rather than in terms of an
>>> entire message.
>> 
>> 
>> 
>> That's one area where I actually lack control through the API, and one that
>> I'm looking to improve.
>> 
>> 
>> My application (HornetQ) has the ability to store messages on disk and
>> send them in chunks to the server.
>> 
>> Using the current API, I would have to pass the entire message to Proton,
>> and have Proton deal with the chunks.
>> 
> 
> I'm not sure I follow this. The byte[] that you pass to send doesn't need
> to be a complete message.


I thought the Delivery had to be complete.
If that's not the case then ignore me :) I will research a bit more here...
something is probably missing for our large message implementation, but let me
research a bit more and I will come back on this when it's time.



I won't be pushing the setPayload change or working to bring it to a
committable state... I will set this one aside for now. I just wanted to offer
some food for thought: having the messages encoded directly could be a good idea.

Re: delivery.setPayload

2014-05-14 Thread Rafael Schloming
On Wed, May 14, 2014 at 1:05 PM, Clebert Suconic wrote:

> I was just playing with possibilities and trying to find ways to improve
> how things are done in the API.
>
>
>
> >  The interface is designed to
> > operate in terms of chunks of message data rather than in terms of an
> > entire message.
>
>
>
> That's one area where I actually lack control through the API, and one that
> I'm looking to improve.
>
>
> My application (HornetQ) has the ability to store messages on disk and
> send them in chunks to the server.
>
> Using the current API, I would have to pass the entire message to Proton,
> and have Proton deal with the chunks.
>

I'm not sure I follow this. The byte[] that you pass to send doesn't need
to be a complete message.
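
For example, something along these lines should work (a rough sketch, not
tested; the chunk size, tag, and the InputStream over the stored body are
placeholders for however your application reads the data off disk):

    import java.io.InputStream;
    import org.apache.qpid.proton.engine.Delivery;
    import org.apache.qpid.proton.engine.Sender;

    // Sketch: feed one large message to Proton in chunks instead of handing
    // it a single complete byte[].
    void sendInChunks(Sender sender, InputStream body) throws Exception {
        Delivery delivery = sender.delivery("large-msg".getBytes());
        byte[] chunk = new byte[64 * 1024];
        int read;
        while ((read = body.read(chunk)) != -1) {
            sender.send(chunk, 0, read);   // appends to the current delivery
        }
        sender.advance();                  // marks the delivery complete
    }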


> For instance, I have tests sending 1-gigabyte messages, where the body
> stays on disk on the server, while the client streams bytes as it
> receives, with flow control holding them back.
>
> Proton has a big buffer internally, and it does not actually expose flow
> control in a way that would let me stop feeding Proton. As a result this use
> case could easily cause OOMEs from Proton.
>

The Delivery.pending() API will tell you exactly how much data is being
buffered for a given delivery, and
Session.getOutgoingBytes()/getIncomingBytes() will tell you how much is
being buffered in aggregate by the Session. Between the two of these you
should be able to limit what Proton is buffering.
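
As a rough sketch (the 1 MB limit is arbitrary, and haveMoreData()/nextChunk()
are hypothetical stand-ins for however your application pulls blocks off disk):

    import org.apache.qpid.proton.engine.Sender;

    // Sketch: only feed Proton while the session's outgoing buffer is below a
    // limit we choose ourselves.
    void pump(Sender sender) {
        final int maxBuffered = 1024 * 1024;   // arbitrary application limit
        while (sender.getSession().getOutgoingBytes() < maxBuffered
                && haveMoreData()) {
            byte[] chunk = nextChunk();
            sender.send(chunk, 0, chunk.length);
        }
        // Once the transport writes frames and getOutgoingBytes() drops,
        // call pump() again to feed more.
    }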


>
> As I said, for now I was just exploring possibilities. I could send a
> ReadableBuffer (the new interface from my previous patch) and use a Pool on
> this buffer to encode my Message. But I still lack control over streaming
> and flow control for my large messages.
>
>
> The way things are done now, if I wanted to preserve my large message
> functionality at the client level (outside of Proton's API), I would need
> to bypass Proton and send the Delivery directly without Proton's intervention.


I'm still confused by this. Can you point me to the code or post some
samples of what you're having trouble with?

--Rafael


Re: delivery.setPayload

2014-05-14 Thread Clebert Suconic
I was just playing with possibilities and trying to find ways to improve how
things are done in the API.



>  The interface is designed to
> operate in terms of chunks of message data rather than in terms of an
> entire message. 



That's one area where I actually lack control through the API, and one that I'm
looking to improve.


My application (HornetQ) has the ability to store messages on disk and send
them in chunks to the server.

Using the current API, I would have to pass the entire message to Proton, and
have Proton deal with the chunks.


For instance, I have tests sending 1-gigabyte messages, where the body stays on
disk on the server, while the client streams bytes as it receives, with flow
control holding them back.

Proton has a big buffer internally, and it does not actually expose flow
control in a way that would let me stop feeding Proton. As a result this use
case could easily cause OOMEs from Proton.

As I said, for now I was just exploring possibilities. I could send a
ReadableBuffer (the new interface from my previous patch) and use a Pool on
this buffer to encode my Message. But I still lack control over streaming and
flow control for my large messages.


The way things are done now, if I wanted to preserve my large message
functionality at the client level (outside of Proton's API), I would need to
bypass Proton and send the Delivery directly without Proton's intervention.

Re: delivery.setPayload

2014-05-14 Thread Rafael Schloming
On Tue, May 13, 2014 at 4:40 PM, Clebert Suconic wrote:

>
>
> On May 13, 2014, at 1:46 PM, Rafael Schloming  wrote:
>
> > I'm not sure this will work as an API. One problem I see with this is
> that
> > it is circumventing the engine's flow control logic. If you notice there
> is
> > logic inside send() to update counters on the session. Unless I've missed
> > something your patch doesn't seem to have equivalent logic. This might
> just
> > be an oversight, but I don't see how you could easily add the same logic
> > since you don't know how many bytes the payload is until much much later
> on
> > in the control flow of the engine.
> >
>
> As I told you, this was just a prototype idea... it's not in fact
> updating the window yet.
>
> If this is a good idea, we could pursue it here.
>

Providing the option to pass in something more abstract than a byte[] is a
good idea. I'm just observing that the Payload/setPayload interface as it
stands is changing two fundamental aspects of the send interface. The first is
that the Sender implementation no longer has any up-front idea of how much data
it is being offered. The second is the stream-oriented nature of the Sender
interface. The interface is designed to
operate in terms of chunks of message data rather than in terms of an
entire message. This is intentional because an AMQP message might be
arbitrarily large, and the interface needs to allow for the possibility of
an intermediary that can stream through message data without buffering the
whole message. I don't see how such streaming could easily be accomplished
with the Payload/setPayload interface.
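
For instance, on the receiving side a chunk-oriented consumer can drain each
delivery incrementally rather than accumulating the whole message (a rough
sketch; 'sink' stands in for wherever the application writes the data, e.g. a
file on disk):

    import java.io.OutputStream;
    import org.apache.qpid.proton.engine.Delivery;
    import org.apache.qpid.proton.engine.Receiver;

    // Sketch: copy whatever bytes have arrived for the current delivery
    // straight through, without buffering the complete message.
    void drain(Receiver receiver, Delivery delivery, OutputStream sink) throws Exception {
        byte[] chunk = new byte[64 * 1024];
        int read;
        while ((read = receiver.recv(chunk, 0, chunk.length)) > 0) {
            sink.write(chunk, 0, read);
        }
        if (!delivery.isPartial()) {
            receiver.advance();   // all of this message has been consumed
        }
    }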


>
> > Can you supply some more detail as to why it got 5% faster? If it was
> > merely avoiding the copy, then I can think of some ways we could avoid
> that
> > copy without changing the API quite so drastically, e.g. just overload
> > send() to take some sort of releasable buffer reference.
>
> The encoding is done directly into the FrameWriter::__outputBuffer. I've
> made a framework where I'm testing the send, and it was somewhat faster
> than copying the encoding, measured over 1 million messages.
>
> In this case it could be a bit more if you encoded a MessageImpl into a new
> buffer every time.
>
> >
> > FWIW, I think that a good buffer abstraction that we could use everywhere
> > would help a lot. I suspect having distinct abstractions for payload
> > buffers vs encodable buffers vs decodable buffers is just going to result
> > in lots of unnecessary conversions.
>
> Probably. I was just trying to improve the idea of the payloads. I don't
> like the send API right now.. I think it would make more sense to set the
> payload on the delivery than to send bytes through the sender.
>

This is really a question of the generality of the API. Operating in terms
of chunks of a message is always going to be lower level than operating in
terms of entire messages; however, it is also more general. If we only
permitted the latter, we would be omitting important use cases.

I think we can treat these issues separately though. We should be able to
eliminate copying in the stream oriented API, while still providing a
convenience API to make sending discrete messages more convenient.

--Rafael


Re: delivery.setPayload

2014-05-13 Thread Clebert Suconic


On May 13, 2014, at 1:46 PM, Rafael Schloming  wrote:

> I'm not sure this will work as an API. One problem I see with this is that
> it is circumventing the engine's flow control logic. If you notice there is
> logic inside send() to update counters on the session. Unless I've missed
> something your patch doesn't seem to have equivalent logic. This might just
> be an oversight, but I don't see how you could easily add the same logic
> since you don't know how many bytes the payload is until much much later on
> in the control flow of the engine.
> 

As I told you, this was just a prototype idea... it's not in fact updating the
window yet.

If this is a good idea, we could pursue it here.

> Can you supply some more detail as to why it got 5% faster? If it was
> merely avoiding the copy, then I can think of some ways we could avoid that
> copy without changing the API quite so drastically, e.g. just overload
> send() to take some sort of releasable buffer reference.

The encoding is done directly into the FrameWriter::__outputBuffer. I've made a
framework where I'm testing the send, and it was somewhat faster than copying
the encoding, measured over 1 million messages.

In this case it could be a bit more if you encoded a MessageImpl into a new
buffer every time.

> 
> FWIW, I think that a good buffer abstraction that we could use everywhere
> would help a lot. I suspect having distinct abstractions for payload
> buffers vs encodable buffers vs decodable buffers is just going to result
> in lots of unnecessary conversions.

Probably. I was just trying to improve the idea of the payloads. I don't like
the send API right now.. I think it would make more sense to set the payload on
the delivery than to send bytes through the sender.








Re: delivery.setPayload

2014-05-13 Thread Rafael Schloming
I'm not sure this will work as an API. One problem I see with this is that
it is circumventing the engine's flow control logic. If you notice there is
logic inside send() to update counters on the session. Unless I've missed
something your patch doesn't seem to have equivalent logic. This might just
be an oversight, but I don't see how you could easily add the same logic
since you don't know how many bytes the payload is until much much later on
in the control flow of the engine.

Can you supply some more detail as to why it got 5% faster? If it was
merely avoiding the copy, then I can think of some ways we could avoid that
copy without changing the API quite so drastically, e.g. just overload
send() to take some sort of releasable buffer reference.

FWIW, I think that a good buffer abstraction that we could use everywhere
would help a lot. I suspect having distinct abstractions for payload
buffers vs encodable buffers vs decodable buffers is just going to result
in lots of unnecessary conversions.

--Rafael

On Tue, May 13, 2014 at 11:19 AM, Clebert Suconic wrote:

> I have been playing with the API, and there is one change that would make
> the API clearer IMO.
>
>
> Right now you have to create a delivery, and then call send(bytes) to add
> bytes to the delivery, which copies the data into the Delivery and
> self-expands the buffer.
>
>
>
> I have played with a change that made it 5% faster than the most optimal
> way to expand the payload on the Delivery (using the same buffer over and
> over)
>
>
> And 15% faster, on a rough calculation, versus creating the buffer every
> time... but there are cases where this could be a bit worse.
>
>
>
> Basically I have created an interface called Payload, and added a method
> setPayload on Delivery.
>
>
>
> I'm not sure yet how I would implement framing across multiple packets..
> but I think it could be done.. this is just a prototype idea:
>
>
>
> https://github.com/clebertsuconic/qpid-proton/commit/02abe61fc54911955ddcce77b792a153c5476aef
>
>
>
> In case you want to fetch the branch from my git, it's this one:
> https://github.com/clebertsuconic/qpid-proton/tree/payload
>
>
>
> In any case I like the idea of setPayload better than sender.send(bytes)
> for setting the payload of a message.
>
>
>
> Ideas?


delivery.setPayload

2014-05-13 Thread Clebert Suconic
I have been playing with the API, and there is one change that would make the 
API clearer IMO.


Right now you have to create a delivery, and then call send(bytes) to add bytes
to the delivery, which copies the data into the Delivery and self-expands the
buffer.
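
Roughly, the current pattern looks like this (a sketch; the tag and the
already-encoded message bytes are placeholders):

    import org.apache.qpid.proton.engine.Delivery;
    import org.apache.qpid.proton.engine.Sender;

    // Sketch of the current pattern: create a delivery, then push the encoded
    // bytes through send(), which copies them into Proton's internal buffer.
    void sendMessage(Sender sender, byte[] encodedMessage) {
        Delivery delivery = sender.delivery("tag-1".getBytes());
        sender.send(encodedMessage, 0, encodedMessage.length);  // data copied here
        sender.advance();
    }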



I have played with a change that made it 5% faster than the most optimal way to 
expand the payload on the Delivery (using the same buffer over and over)


And 15% faster, on a rough calculation, versus creating the buffer every
time... but there are cases where this could be a bit worse.



Basically I have created an interface called Payload, and added a method 
setPayload on Delivery. 
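
Roughly, the shape of the idea is something like this (an illustrative sketch
only -- the actual prototype is in the commit linked below, and the names here
are approximate):

    // Illustrative sketch of the prototype's shape; see the linked commit for
    // the real code.
    public interface Payload {
        // Let the engine pull the encoded bytes when it actually writes the
        // frame, instead of copying them into the delivery up front.
        int encode(java.nio.ByteBuffer target);
    }

    // Usage idea (hypothetical): attach the payload to the delivery rather
    // than pushing bytes through sender.send().
    void sendWithPayload(Sender sender, Payload myPayload) {
        Delivery delivery = sender.delivery("tag-1".getBytes());
        delivery.setPayload(myPayload);   // method added by the prototype
        sender.advance();
    }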



I'm not sure yet how I would implement framing across multiple packets.. but I
think it could be done.. this is just a prototype idea:


https://github.com/clebertsuconic/qpid-proton/commit/02abe61fc54911955ddcce77b792a153c5476aef



In case you want to fetch the branch from my git, it's this one:
https://github.com/clebertsuconic/qpid-proton/tree/payload



In any case I like the idea of setPayload better than sender.send(bytes) for
setting the payload of a message.



Ideas?