* Is the VM ethernet driver a para-virtual driver? Para-virtual drivers give a 
good performance boost.
I used the OpenStack default parameters, so the NIC model is virtio, which should 
perform well:
    <interface type='bridge'>
      <mac address='fa:16:3e:ca:4a:86'/>
      <source bridge='br-int'/>
      <virtualport type='openvswitch'>
        <parameters interfaceid='3213dbec-f2ea-462f-818b-e07b76a1752c'/>
      </virtualport>
      <target dev='tap3213dbec-f2'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' 
function='0x0'/>
    </interface>
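For completeness, this is how I confirmed the guest really uses the para-virtual driver (the interface name eth0 is from my setup):

```shell
# Inside the VM: check which driver the NIC is bound to.
# For a virtio NIC, ethtool reports "driver: virtio_net".
ethtool -i eth0

# The PCI device should also show up as a virtio network device.
lspci | grep -i virtio
```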


* Is TSO ON in the VM and the Hypervisor?

The VM:
Features for eth0:
rx-checksumming: off [fixed]
tx-checksumming: on
        tx-checksum-ipv4: off [fixed]
        tx-checksum-ip-generic: on
        tx-checksum-ipv6: off [fixed]
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: off [fixed]
scatter-gather: on
        tx-scatter-gather: on
        tx-scatter-gather-fraglist: on
tcp-segmentation-offload: on
        tx-tcp-segmentation: on
        tx-tcp-ecn-segmentation: on
        tx-tcp6-segmentation: on
udp-fragmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: off [fixed]
tx-vlan-offload: off [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: on
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
The hypervisor:
Offload parameters for eth4:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
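For reference, the listings above come from `ethtool -k`; if TSO were off somewhere, it could be switched on like this (eth0/eth4 are the names in my setup, and -K requires root):

```shell
# Dump offload settings (source of the feature lists above).
ethtool -k eth0    # in the VM
ethtool -k eth4    # on the hypervisor

# Enable TSO and GSO if they were off (one feature per invocation
# for compatibility with older ethtool versions).
ethtool -K eth0 tso on
ethtool -K eth0 gso on
```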

* What throughput do you get while using Linux bridge instead of OVS?
Currently I don't have a Linux bridge environment.
But I remember from earlier virtio tests that when I created a bridge by hand and 
attached an instance to it, I could always get close to the hardware-limit 
bandwidth, given enough parallel threads.
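From memory, that manual test looked roughly like this (bridge and tap names here are made up for illustration; brctl comes from bridge-utils and needs root):

```shell
# Create a Linux bridge by hand and attach the physical NIC plus
# the instance's tap device to it.
brctl addbr br0
brctl addif br0 eth4            # physical 10G NIC
brctl addif br0 tap-instance    # instance tap device (name hypothetical)
ip link set br0 up
```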

* Are you using tunnels? If you are using a tunnel like GRE, you will see a 
throughput drop.
No, I'm working under Quantum+OVS+VLAN.
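To double-check that, `ovs-vsctl show` lists every port; in a VLAN deployment there should be no GRE or VXLAN interfaces:

```shell
# On a compute node: a pure VLAN deployment shows no tunnel ports.
ovs-vsctl show | grep -i -E 'gre|vxlan' || echo "no tunnel ports"
```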

Thanks.
-chen

From: Gurucharan Shetty [mailto:[email protected]]
Sent: Tuesday, July 30, 2013 12:06 AM
To: Li, Chen
Cc: [email protected]
Subject: Re: [ovs-discuss] network bandwidth in Openstack when using OVS+VLAN

There could be multiple reasons for the low throughput. I would probably look 
at the following.

* Is the VM ethernet driver a para-virtual driver? Para-virtual drivers give a 
good performance boost.
* Is TSO ON in the VM and the Hypervisor?
* What throughput do you get while using Linux bridge instead of OVS?
* Are you using tunnels? If you are using a tunnel like GRE, you will see a 
throughput drop.


On Mon, Jul 29, 2013 at 1:48 AM, Li, Chen <[email protected]> wrote:
Hi list,

I'm a new user to OVS.

I installed OpenStack Grizzly, and I'm using Quantum + OVS + VLAN for networking.

I have two compute nodes with 10 Gb NICs; the bandwidth between them is about 
8.49 Gbits/sec (measured with iperf).

I started one instance on each compute node:
instance-a => compute1
instance-b => compute2
The bandwidth between these two virtual machines is only 1.18 Gbits/sec.

Then I started 6 instances on each compute node:
(instance-a => compute1) ----- iperf -----> (instance-b => compute2)
(instance-c => compute1) ----- iperf -----> (instance-d => compute2)
(instance-e => compute1) ----- iperf -----> (instance-f => compute2)
(instance-g => compute1) ----- iperf -----> (instance-h => compute2)
(instance-i => compute1) ----- iperf -----> (instance-j => compute2)
(instance-k => compute1) ----- iperf -----> (instance-l => compute2)
The total bandwidth is only 4.25 Gbits/sec.
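For reference, each pair ran a plain iperf test like the following (the server IP is an example); alternatively, a single multi-stream run between one pair gives a comparable aggregate:

```shell
# On the receiving instance (e.g. instance-b):
iperf -s

# On the sending instance (e.g. instance-a): 30-second test,
# -P runs 6 parallel streams whose throughput iperf sums.
iperf -c 10.0.0.12 -t 30 -P 6
```

Note that 6 streams averaging roughly 0.7 Gbits/sec each add up to about the 4.25 Gbits/sec aggregate seen here.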


Does anyone know why the performance is this low?

Thanks.
-chen

_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss
