On 10/17/2014 10:49 PM, Chuck Rolke wrote:
1. The code running is qpidd/proton trunk 2014-10-15.

2. The relative frame timings in seconds are shown between the hosts and the performatives, e.g.

   Frame 67 [::1]:47813 -> [::1]:5672 8.478287 [transfer [0,1]

   Wireshark tagged the frame at 8.478287 seconds relative to the start of the trace. (Note to self: add a legend to the web pages.)

3. ActiveMQ with transport.tcpNoDelay=true for the amqp transportConnector improves the ActiveMQ run time significantly. Repeated runs complete in 60 ms. However, with tcpNoDelay=true the number of frames from the AMQ broker goes way up: every broker-to-client transfer is a single frame; none are aggregated into a single frame. I suppose that's proof that tcpNoDelay has an effect!

4. The tcp retransmit is an artifact of a sick laptop that black-screen rebooted shortly after my post the other day. After the reboot the retransmit is not seen again.

I'll post the decoded results of AMQ with tcpNoDelay next week.
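For readers unfamiliar with what transport.tcpNoDelay=true does under the hood: it sets the standard TCP_NODELAY socket option, which disables Nagle's algorithm so small writes are sent immediately instead of being coalesced. A minimal sketch in Python (not the ActiveMQ or qpid code itself, just an illustration of the socket option):

```python
import socket

# Create a TCP socket and disable Nagle's algorithm, which is what
# ActiveMQ's transport.tcpNoDelay=true option does on its sockets.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# With TCP_NODELAY set, each small write can go out as its own segment
# instead of being coalesced -- consistent with the trace above, where
# every broker-to-client transfer appears as a single frame.
enabled = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(enabled)  # non-zero when Nagle is disabled
sock.close()
```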
Looking at the original traces again, there are a few gaps in the times of about 40 ms (which I recall being about the delay for Nagle on Linux?): frames 70/72, 105/107, 122/124 and 139/141.
It's not immediately obvious to me why it would be sensitive to tcp_nodelay. Also, there is a frame 'missing' in each of those gaps (at least based on the visible count). Is it possible that this was in fact something to do with point 4 above, rather than tcp-nodelay? Or is the effect of tcp-nodelay reproducible even now that that issue, whatever it was, has been resolved? (Does setting tcp-nodelay on the client have any effect?)
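On the client-side question: TCP_NODELAY is a per-socket option, so setting it on the client socket is independent of whatever the broker sets on its accepted socket. A small hypothetical loopback sketch (plain Python sockets, not the AMQP client) showing that the client can enable it on its own end:

```python
import socket
import threading

# Hypothetical loopback pair: a listener standing in for the broker and a
# client that enables TCP_NODELAY on its own socket only.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # ephemeral port
srv.listen(1)
port = srv.getsockname()[1]

def accept_one():
    # The accepted (broker-side) socket keeps the kernel default
    # (Nagle enabled) regardless of what the client sets.
    conn, _ = srv.accept()
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
enabled = cli.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(enabled)  # non-zero: the client's setting took effect locally

cli.close()
t.join()
srv.close()
```

Whether that changes the observed gaps depends on which side is delaying the small writes; the ~40 ms stalls above would come from whichever end still has Nagle enabled interacting with delayed ACKs on the peer.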
--------------------------------------------------------------------- To unsubscribe, e-mail: [email protected] For additional commands, e-mail: [email protected]
