There could be multiple reasons for the low throughput. I would look at the following:

* Is the VM ethernet driver a para-virtual driver? Para-virtual drivers give a good performance boost.
* Is TSO on in both the VM and the hypervisor?
* What throughput do you get when using a Linux bridge instead of OVS?
* Are you using tunnels? If you are using a tunnel like GRE, you will see a throughput drop.

On Mon, Jul 29, 2013 at 1:48 AM, Li, Chen <[email protected]> wrote:
> Hi list,
>
> I'm a new user of OVS.
>
> I installed OpenStack Grizzly, using Quantum + OVS + VLAN for networking.
>
> I have two compute nodes with 10 Gb NICs, and the bandwidth between them
> is about *8.49* Gbits/sec (tested with iperf).
>
> I started one instance on each compute node:
>   instance-a => compute1
>   instance-b => compute2
> The bandwidth between these two virtual machines is only *1.18* Gbits/sec.
>
> Then I started 6 instances on each compute node:
>   (instance-a => compute1) ----- iperf -----> (instance-b => compute2)
>   (instance-c => compute1) ----- iperf -----> (instance-d => compute2)
>   (instance-e => compute1) ----- iperf -----> (instance-f => compute2)
>   (instance-g => compute1) ----- iperf -----> (instance-h => compute2)
>   (instance-i => compute1) ----- iperf -----> (instance-j => compute2)
>   (instance-k => compute1) ----- iperf -----> (instance-l => compute2)
> The total bandwidth is only *4.25* Gbits/sec.
>
> Does anyone know why the performance is this low?
>
> Thanks.
> -chen
>
> _______________________________________________
> discuss mailing list
> [email protected]
> http://openvswitch.org/mailman/listinfo/discuss
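For the first two items in the checklist above, a quick way to inspect the driver and TSO state is via ethtool. This is a sketch; the interface name `eth0` and the iperf server address are placeholders for your setup:

```shell
# 1. Check whether the guest NIC uses a para-virtual driver. Inside the VM:
ethtool -i eth0 | grep '^driver:'
# A para-virtual driver typically shows up as "virtio_net" (KVM) or
# "vmxnet3" (ESXi); an emulated NIC shows e.g. "e1000".

# 2. Check whether TSO is enabled. Run both inside the VM and on the
#    hypervisor's physical NIC:
ethtool -k eth0 | grep 'tcp-segmentation-offload'
# If it reports "off", try enabling it:
# ethtool -K eth0 tso on

# 3. Re-run the iperf baseline after any change (server IP is a placeholder):
# on instance-b:  iperf -s
# on instance-a:  iperf -c <instance-b-ip> -t 30 -P 4
```

Comparing single-stream versus parallel-stream (`-P 4`) iperf numbers before and after toggling TSO usually makes it clear whether segmentation offload is the bottleneck.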
_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss
