Hi,

I would check whether GRO is enabled in Dom0 (ethtool -k ethX). Comparing top output during the test (especially how much CPU time each context uses), and the number of interrupts per second (grep eth /proc/interrupts), would be interesting too.
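As a concrete starting point, those checks could be scripted roughly like this (a sketch: eth0 is an assumed interface name, and the interrupt counts in the embedded sample are made up so the parsing part is self-contained):

```shell
#!/bin/sh
# On Dom0, check GRO and related offload settings (eth0 is an assumption):
#   ethtool -k eth0 | grep -E 'generic-receive-offload|tcp-segmentation-offload'
# and watch the NIC's interrupt counters:
#   grep eth0 /proc/interrupts

# The per-CPU counters in /proc/interrupts can be summed like this
# (demonstrated on a made-up sample in the same line format):
sample=' 67:   1200   3400   IR-PCI-MSI-edge   eth0-TxRx-0
 68:   5600   7800   IR-PCI-MSI-edge   eth0-TxRx-1'

# Sum every numeric field after the IRQ number across all eth0 queue lines.
total=$(printf '%s\n' "$sample" | grep eth0 |
    awk '{for (i = 2; i <= NF; i++) if ($i ~ /^[0-9]+$/) s += $i} END {print s + 0}')
echo "total eth0 interrupts: $total"
```

Sampling the real /proc/interrupts twice, one second apart, and taking the difference of the two sums gives a rough interrupts-per-second figure.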

Regards,

Zoltan

On 28/11/14 02:29, zhangleiqiang wrote:

  I have done some network performance testing under Xen, and found a result 
that seems strange:

    When testing between hosts (sender and receiver servers both booted 
from kernel-default), running a single netperf process is enough to reach 
the upper limit of the NIC. However, when testing between Dom0s (sender and 
receiver servers both booted from kernel-xen), we must run several netperf 
processes to reach the upper limit of the NIC; with a single netperf 
process the throughput is low. The results using one netperf process are as 
follows:

          TCP 512    TCP 1460    TCP 4096    TCP 8500
Hosts:    9134.24    9444.84     9448.05     9447.58    (Mbit/s)
Dom0s:    2063.90    3018.95     6561.29     5008.17    (Mbit/s)

   The question is why one netperf process cannot reach the max throughput of 
the NIC under Dom0. I know that Dom0 has extra overhead when handling 
interrupts, which must be handled by the hypervisor first, but I do not 
think that explains it.

   The testing environment details are as follows:

    1. Hardware
        a. CPU: Intel(R) Xeon(R) CPU E5645 @ 2.40GHz, 2 CPUs with 6 cores 
each, Hyper-Threading enabled
        b. NIC: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network 
Connection (rev 01)
    2. Software:
        a. HostOS: SUSE SLES 11 SP3 (Kernel 3.0.76)
        b. NIC Driver: IXGBE 3.21.2
        c. OVS: 2.1.3
        d. MTU: 1600
        e. Dom0:6U6G
    3. Networking Environment:
        a. All network flows are transmitted/received through OVS
        b. The sender and receiver servers are connected directly via their 
10GE NICs
    4. Testing Tools:
        a. Sender: netperf
        b. Receiver: netserver
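For reference, the multi-stream runs could be launched along these lines (a sketch: the receiver IP, test length, and message size are assumptions, not the exact parameters used above):

```shell
#!/bin/sh
# Generate the commands for N parallel netperf TCP_STREAM tests.
# $1 = receiver host (hypothetical IP below), $2 = number of streams.
gen_netperf_cmds() {
    i=1
    while [ "$i" -le "$2" ]; do
        echo "netperf -H $1 -t TCP_STREAM -l 30 -- -m 1460 &"
        i=$((i + 1))
    done
}

# Print the commands for a 4-stream run against a hypothetical receiver;
# pipe them to sh (with a final `wait`) to actually run the test.
gen_netperf_cmds 192.168.1.2 4
```

Summing the per-stream throughputs then gives the aggregate figure to compare against the single-process numbers in the table.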


----------
zhangleiqiang (Trump)

Best Regards
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

