On 10/23/2013 05:40 PM, Aaron Rosen wrote:
I'm curious if you can do the following tests to help pinpoint the
bottleneck:

Run iperf or netperf between (concrete invocations are sketched below):

- two instances on the same hypervisor - if performance is bad here, the virtualization driver is the likely culprit.
- two instances on different hypervisors.
- one instance and the namespace of the l3 agent.
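A minimal sketch of those three runs with iperf (assuming iperf is installed in the guests and on the network node, that <server-instance-ip> is reachable in each case, and that the l3 agent's namespace follows Neutron's default qrouter-<router-uuid> naming):

# on the receiving instance
iperf -s

# 1) from an instance on the same hypervisor
iperf -c <server-instance-ip> -t 30

# 2) from an instance on a different hypervisor
iperf -c <server-instance-ip> -t 30

# 3) from the l3 agent's namespace on the network node
ip netns exec qrouter-<router-uuid> iperf -c <server-instance-ip> -t 30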

If you happen to run netperf, I would suggest something like:

netperf -H <otherinstance> -t TCP_STREAM -l 30 -- -m 64K -o throughput,local_transport_retrans

If you need data flowing in the other direction, then I would suggest:

netperf -H <otherinstance> -t TCP_MAERTS -l 30 -- -m ,64K -o throughput,remote_transport_retrans

(the leading comma in "-m ,64K" applies the 64K send size to the remote side, which is the sender in a TCP_MAERTS test)


You could add ",transport_mss" to those lists after the -o option if you want.
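For example, the TCP_STREAM invocation above would then become:

netperf -H <otherinstance> -t TCP_STREAM -l 30 -- -m 64K -o throughput,local_transport_retrans,transport_mss

which also reports the MSS of the data connection alongside the throughput and retransmission count.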

What you will get is throughput (in 10^6 bits/s) and the number of TCP retransmissions for the data connection (assuming the OS running in the instances is Linux). Netperf will present 64KB of data to the transport in each send call, and will run for 30 seconds. The socket buffer sizes will be at their defaults - which under Linux means they will autotune.
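If you would rather pin the socket buffer sizes than let Linux autotune them, the test-specific -s (local) and -S (remote) options can do that - a sketch, with 256K chosen arbitrarily:

netperf -H <otherinstance> -t TCP_STREAM -l 30 -- -s 256K -S 256K -m 64K -o throughput,local_transport_retrans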

happy benchmarking,

rick jones

For extra credit :) you can run:

netperf -t TCP_RR -H <otherinstance> -l 30

if you are curious about latency.
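TCP_RR reports a transaction rate; with the default single transaction in flight, mean round-trip latency is roughly 1/rate. If your netperf build supports the omni output selectors, you can ask for latency directly - a sketch, assuming such a build:

netperf -H <otherinstance> -t TCP_RR -l 30 -- -o throughput,mean_latency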
