[ 
https://issues.apache.org/jira/browse/PROTON-1786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16391664#comment-16391664
 ] 

Chuck Rolke commented on PROTON-1786:
-------------------------------------

[Proton C] Dispatch will most likely have a different traffic pattern from any 
other client that uses Proton. One differentiator was added under 
https://issues.apache.org/jira/browse/DISPATCH-807, where Dispatch stalls 
incoming bytes and stops pumping data into Proton's outgoing byte buffer.

One aspect of Proton that hurt Dispatch was that a sending client could call 
pn_link_send with huge amounts of data and Proton would simply add the bytes to 
the outgoing byte buffer. When Dispatch sent a largish message to three links 
there would be three copies of the message in Proton's outgoing bytes. With 
fast producers and slow consumers, Dispatch could have tens of megabytes ready 
to send.
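A minimal sketch of that copy amplification, using illustrative names (out_buf, 
buf_send are not Proton API; pn_link_send behaves analogously per link):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical model: each link owns an outgoing byte buffer that grows
 * without bound as the caller pushes bytes, mirroring the pre-DISPATCH-807
 * behavior described above. */
typedef struct {
    char  *data;
    size_t len;
} out_buf;

static void buf_send(out_buf *b, const char *bytes, size_t n) {
    b->data = realloc(b->data, b->len + n); /* no backpressure: just grow */
    memcpy(b->data + b->len, bytes, n);
    b->len += n;
}
```

Fanning a 1 MB message out to three links leaves three independent 1 MB copies 
buffered, which is how "10s of megabytes ready to send" accumulates.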

After DISPATCH-807, Dispatch sets a threshold on the number of bytes that may 
sit in Proton's outgoing buffer. On reaching that threshold, Dispatch stops 
reading the associated incoming link. Once Proton sends some transfer data and 
the outgoing buffer shrinks, Dispatch resumes reading the incoming links and 
filling the outgoing byte buffer(s) again.
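The threshold test can be sketched as follows. This is an assumed shape, not 
the actual Dispatch code; OUT_THRESHOLD and should_read_incoming are 
illustrative names, and real code would consult the buffered byte count that 
Proton reports for the session rather than take it as a parameter:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative limit on bytes buffered in Proton's outgoing buffer. */
#define OUT_THRESHOLD (128 * 1024)

/* Read the incoming link only while the outgoing buffer is under the
 * threshold; at or above it, stall the reader until the transport drains. */
static bool should_read_incoming(size_t outgoing_buffered) {
    return outgoing_buffered < OUT_THRESHOLD;
}
```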

Compare that with a simple sender, which does not care about Proton's outgoing 
bytes: as it sends, it just pushes the bytes straight down to Proton.

When Proton gets to send data it behaves the same for Dispatch as for the 
simple sender: it sends what it can, in the largest frames and transfers it 
can. For a simple sender, Proton will likely send the whole message in large 
frames. For Dispatch there is a real chance that Proton will routinely empty 
its outgoing byte buffers while sending a large message stream. That creates 
the opportunity for Proton to emit a small frame in the middle of a larger 
transfer because the outgoing byte buffer went empty.
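The framing arithmetic behind this can be sketched directly (frames_needed and 
last_frame_size are illustrative helpers, not Proton functions): whatever is 
buffered gets cut into frames of at most the negotiated max-frame-size, so a 
full buffer yields large frames while a nearly empty one yields a small frame.

```c
#include <assert.h>
#include <stddef.h>

/* Number of frames needed to flush 'buffered' bytes at 'max_frame' bytes
 * per frame (ceiling division). */
static size_t frames_needed(size_t buffered, size_t max_frame) {
    return (buffered + max_frame - 1) / max_frame;
}

/* Size of the final frame: full buffers drain in max-size frames, but a
 * small remainder (the Dispatch case) produces a small trailing frame. */
static size_t last_frame_size(size_t buffered, size_t max_frame) {
    size_t r = buffered % max_frame;
    return r ? r : (buffered ? max_frame : 0);
}
```

With a 64 KiB max frame, a fully buffered 1 MiB message drains in sixteen full 
frames, while a buffer holding 64 KiB + 4 bytes drains as one full frame plus 
one 4-byte runt.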

> Multiframe transfer wire traffic patterns differ substantially
> --------------------------------------------------------------
>
>                 Key: PROTON-1786
>                 URL: https://issues.apache.org/jira/browse/PROTON-1786
>             Project: Qpid Proton
>          Issue Type: Task
>            Reporter: Kim van der Riet
>            Priority: Major
>         Attachments: dispatch.multiframe.07.pcapng
>
>
> This is not a bug (although it could become one), but rather an observation 
> of large message transfer patterns observed using Wireshark while running 
> Qpid Interop Test's amqp_large_content_test.
> The test sends large messages of 1MB and 10MB through a broker (in this 
> case, dispatch router).
> I hope to add other observable patterns from other clients and/or brokers if 
> they are significant. If there is consensus that this is a potential large 
> message transfer efficiency issue, then this Jira can be a placeholder for 
> this issue.
> See the attached file for an example. In this test, the receiver is using 
> TCP port 59806 and the sender port 59808. The router is using the standard 
> AMQP port.
> h2. C++ client:
> The client in this case uses the Proton C++ API and is based on the 
> SimpleSender.cpp example. To isolate the traffic from this client, use the 
> following filter in Wireshark:
> {noformat}
> amqp.performative == transfer && tcp.srcport == 59808{noformat}
> It can be seen that although there is a large message being sent, the sender 
> appears to be restricted to a single transfer of 16kB or a 64kB frame 
> containing 4 transfers (very occasionally, 2, 3 or 5 transfers) - for example:
> {noformat}
> 35    1.972620        ::1     ::1     AMQP    16470   transfer
> 1177  2.032878        ::1     ::1     AMQP    65550   transfer transfer 
> transfer transfer
> {noformat}
> h2. Dispatch Router:
> This uses Proton through its native C interface. To isolate the traffic from 
> the router to the receiver, use the following Wireshark filter:
> {noformat}
> amqp.performative == transfer && tcp.dstport == 59806{noformat}
> The traffic for this broker looks very different. Each frame contains only a 
> single transfer, and the size of the transfers varies widely, from 196 bytes 
> to the full 64kB max. I am assuming that the router is sending on message 
> content it has received as soon as it becomes available, no matter the size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
