On 10/15/2014 09:09 PM, Chuck Rolke wrote:
I'm working on comparing AMQP 1.0 clients with different brokers. In this case
I have ActiveMQ 5.10 and Qpidd brokers on a Fedora 19 localhost. For the client
I have a qpid::messaging client test that moves unreliable messages.

The test client program:
     For the full source please see
     http://people.apache.org/~chug/adverb_unreliable_400_50/hello_world.cpp

  0. Uses 400 total messages with a 'batch' size of 50.

  1. Creates a connection to:
     "chuck; {create:always, link: {reliability:unreliable}}"

  2. Creates a sender and receiver each with capacity of 100.

  3. Sends a batch to the broker.

  4. Loops sending and receiving batches of messages

  5. Receives the final batch.

  The idea is to prime the broker with one batch of messages, then send and
  receive batches so that the receiver should never have to block in a fetch,
  because the receiver's capacity prefetches the messages.
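
  For reference, a minimal sketch of that batching loop with the qpid::messaging
  C++ API might look like the following. This is an illustration, not the exact
  hello_world.cpp (the payload, variable names and lack of error handling are
  assumptions); the address string, capacities and counts are the ones described
  above.

    #include <qpid/messaging/Connection.h>
    #include <qpid/messaging/Session.h>
    #include <qpid/messaging/Sender.h>
    #include <qpid/messaging/Receiver.h>
    #include <qpid/messaging/Message.h>
    #include <qpid/messaging/Duration.h>
    #include <string>

    using namespace qpid::messaging;

    int main() {
        const int total = 400;   // total messages
        const int batch = 50;    // batch size

        Connection connection("localhost:5672", "{protocol: amqp1.0}");
        connection.open();
        Session session = connection.createSession();

        const std::string addr =
            "chuck; {create:always, link: {reliability:unreliable}}";
        Sender sender = session.createSender(addr);
        Receiver receiver = session.createReceiver(addr);
        sender.setCapacity(100);
        receiver.setCapacity(100);   // prefetch window for incoming messages

        // Prime the broker with one batch of messages.
        for (int i = 0; i < batch; ++i)
            sender.send(Message("payload"));

        // Alternate send/receive batches; prefetch should keep fetch() from blocking.
        for (int sent = batch; sent < total; sent += batch) {
            for (int i = 0; i < batch; ++i)
                sender.send(Message("payload"));
            for (int i = 0; i < batch; ++i)
                receiver.fetch(Duration::SECOND * 5);
        }

        // Receive the final batch.
        for (int i = 0; i < batch; ++i)
            receiver.fetch(Duration::SECOND * 5);

        connection.close();
        return 0;
    }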

The results:
  Each broker ran from a fresh boot. Only one broker ran at a time.

  With Qpidd the client gets through the 400 messages in 8 +/- 3 ms. The AMQP
  transfer-to-flow ratio is very high and efficient.

  With ActiveMQ 5.10 the client takes 260 +/- 60 ms. The AMQP transfer-to-flow
  ratio is much lower. Additionally, the client sends frames to the host whose
  payload is nothing but a dozen flow performatives (see frame 138 below).
  This is the exact same client that worked so well with Qpidd.

  To visualize the traffic please see:
    http://people.apache.org/~chug/adverb_unreliable_400_50/_unreliable_qpidd_400_batched_50.html
    http://people.apache.org/~chug/adverb_unreliable_400_50/_unreliable_amq5.10_400_batched_50.html

Discussion:

1) Is this a real issue, and if so with which module?

It looks like the qpid::messaging client spits out a flow for each message it
receives from the broker, and then some.

I don't think there should be more than the number of received messages; in fact, from a brief look I'd say it's slightly fewer.

Are you using the latest code from trunk? There, the credit is updated for every message received, but how the resulting flow frames are actually sent depends on the interleaving of the I/O thread and the application thread.

Previously the I/O thread was woken up, if needed, for every credit increase. However, the difference between that and the current, more optimal code was pretty slight for qpid::messaging against qpidd. In other words, the larger number of individual flow frames may not be the cause of the slowness.

What are the timings of the various frames? Are there any big gaps?

Is ActiveMQ configured to use tcp-nodelay?
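
On the client side, one way to rule out Nagle at that end is the qpid::messaging "tcp-nodelay" connection option, along the lines of the hypothetical helper below (not part of the test program); the ActiveMQ side would still need the equivalent socket option enabled on its transport connector.

    #include <qpid/messaging/Connection.h>
    #include <string>

    // Hypothetical helper: open a connection with Nagle disabled on the
    // client socket via the "tcp-nodelay" connection option. This only
    // covers the client end of the TCP connection.
    qpid::messaging::Connection openNoDelay(const std::string& url) {
        qpid::messaging::Connection c(url, "{protocol: amqp1.0, tcp-nodelay: true}");
        c.open();
        return c;
    }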

Have you tried comparing e.g. proton messenger clients against both brokers to see if the same difference in time taken can be seen there?
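
For that kind of cross-check, a rough Proton Messenger sender/receiver might look like the sketch below (illustrative only: it assumes the 'chuck' node already exists on the broker, e.g. from the earlier qpid::messaging run, and the address and payload are placeholders). Timing the whole run against each broker would show whether the same gap appears.

    #include <proton/messenger.h>
    #include <proton/message.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        const int total = 400;
        const char* address = "amqp://localhost:5672/chuck";  // placeholder address

        pn_messenger_t* m = pn_messenger(NULL);
        pn_messenger_start(m);
        pn_messenger_subscribe(m, address);

        pn_message_t* out = pn_message();
        pn_message_set_address(out, address);
        const char* payload = "payload";
        pn_data_put_string(pn_message_body(out),
                           pn_bytes(std::strlen(payload), payload));

        // Queue up all the messages, then block until they have been sent.
        for (int i = 0; i < total; ++i)
            pn_messenger_put(m, out);
        pn_messenger_send(m, -1);

        // Read them all back.
        pn_message_t* in = pn_message();
        int received = 0;
        while (received < total) {
            pn_messenger_recv(m, total - received);
            while (pn_messenger_incoming(m) > 0) {
                pn_messenger_get(m, in);
                ++received;
            }
        }
        std::printf("received %d messages\n", received);

        pn_message_free(out);
        pn_message_free(in);
        pn_messenger_stop(m);
        pn_messenger_free(m);
        return 0;
    }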

Issuing a flow for each received message makes sense until the back-to-back
flow frames start piling up. In one trace with 2000 messages I saw 700
consecutive flow frames from the client to the broker, but this is not
repeatable.

Both brokers wind up transferring the same payload in the end. But maybe if the
client did things a little differently the ActiveMQ broker could do it more
quickly.

2) Another late observation in the ActiveMQ broker trace is:

  Frame 67  [::1]:47813  -> [::1]:5672  8.478287 [transfer [0,1] (47) .. transfer [0,1] (99)]
  Frame 68  [::1]:47813  -> [::1]:5672  8.487341 [transfer [0,1] (47) .. transfer [0,1] (99)]

  Here the client is sending the same batch of messages (47..99) twice, with the
  second being a TCP retransmission and 10 ms lost between the two. The same
  retransmission appears in the two traces from different runs that I've saved,
  so it is somewhat repeatable.

Nothing at the AMQP level should result in TCP retransmission (at least not at this volume).
