I think your environment is misconfigured, as I don't experience these
performance issues with Havana + Ubuntu 12.04 + OVS 1.10.2 + GRE.

I have two instances running on two compute nodes connected using GRE tunnels.
The instances belong to the same tenant so traffic between them uses the GRE 
tunnel between the two compute nodes.
Only traffic destined for outside the tenant network is sent to the qrouter
running on the dedicated network node, over the GRE tunnel established
between the compute node and the network node.
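
If you want to confirm the tunnel mesh is actually in place before
benchmarking, a quick sanity check on each compute node is (assuming the
default br-tun tunnel bridge that the OVS agent creates):

root@compute1:~# ovs-vsctl show                # br-tun should have one gre port per peer,
                                               # with remote_ip set to that peer's data IP
root@compute1:~# ovs-ofctl dump-flows br-tun   # the flood/learn flows for the tunnel mesh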

The topology is "Per-Tenant Router with Private Networks"; basically, the
setup looks like this:

Network node  Public IP x.x.x.x
              Data IP (GRE) 10.0.20.1

Compute node1 Data IP (GRE) 10.0.20.2
              Instance1 tenant IP 10.0.0.2

Compute node2 Data IP (GRE) 10.0.20.3
              Instance2 tenant IP 10.0.0.4

The compute nodes as well as the network node connect to the switch
at 1 Gbps.
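
For reference, the tunneling side of my setup is stock; the relevant bits
of the OVS plugin config on compute node1 look roughly like this (standard
Havana option names; local_ip changes per node):

# /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.20.2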

I ran iperf between the two instances and got 450 Mbps:

[root@host-10-0-0-2 ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.0.0.2 port 5001 connected with 10.0.0.4 port 53122
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-120.0 sec  6.29 GBytes   450 Mbits/sec

[root@host-10-0-0-4 ~]# iperf -c 10.0.0.2 -i 10 -t 120 -w 128K
------------------------------------------------------------
Client connecting to 10.0.0.2, TCP port 5001
TCP window size:  216 KByte (WARNING: requested  128 KByte)
------------------------------------------------------------
[  3] local 10.0.0.4 port 53122 connected with 10.0.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   540 MBytes   453 Mbits/sec
[  3] 10.0-20.0 sec   537 MBytes   451 Mbits/sec
[  3] 20.0-30.0 sec   525 MBytes   440 Mbits/sec
[  3] 30.0-40.0 sec   525 MBytes   440 Mbits/sec
[  3] 40.0-50.0 sec   541 MBytes   454 Mbits/sec
[  3] 50.0-60.0 sec   539 MBytes   452 Mbits/sec
[  3] 60.0-70.0 sec   541 MBytes   454 Mbits/sec
[  3] 70.0-80.0 sec   540 MBytes   453 Mbits/sec
[  3] 80.0-90.0 sec   540 MBytes   453 Mbits/sec
[  3] 90.0-100.0 sec   535 MBytes   449 Mbits/sec
[  3] 100.0-110.0 sec   539 MBytes   452 Mbits/sec
[  3] 110.0-120.0 sec   542 MBytes   454 Mbits/sec
[  3]  0.0-120.0 sec  6.29 GBytes   450 Mbits/sec


I also ran iperf between the two compute nodes using the same physical link 
(the GRE 10.0.20.X segment) and I got close to wire speed (941 Mbps):

root@compute1:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.0.20.2 port 5001 connected with 10.0.20.3 port 58015
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-120.0 sec  13.1 GBytes   941 Mbits/sec

root@compute2:~# iperf -c 10.0.20.2 -i 10 -t 120 -w 128K
------------------------------------------------------------
Client connecting to 10.0.20.2, TCP port 5001
TCP window size:  256 KByte (WARNING: requested  128 KByte)
------------------------------------------------------------
[  3] local 10.0.20.3 port 58015 connected with 10.0.20.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
[  3] 10.0-20.0 sec  1.10 GBytes   941 Mbits/sec
[  3] 20.0-30.0 sec  1.10 GBytes   941 Mbits/sec
[  3] 30.0-40.0 sec  1.10 GBytes   941 Mbits/sec
[  3] 40.0-50.0 sec  1.10 GBytes   941 Mbits/sec
[  3] 50.0-60.0 sec  1.10 GBytes   941 Mbits/sec
[  3] 60.0-70.0 sec  1.10 GBytes   941 Mbits/sec
[  3] 70.0-80.0 sec  1.10 GBytes   941 Mbits/sec
[  3] 80.0-90.0 sec  1.10 GBytes   941 Mbits/sec
[  3] 90.0-100.0 sec  1.10 GBytes   941 Mbits/sec
[  3] 100.0-110.0 sec  1.10 GBytes   941 Mbits/sec
[  3] 110.0-120.0 sec  1.10 GBytes   941 Mbits/sec
[  3]  0.0-120.0 sec  13.1 GBytes   941 Mbits/sec
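
The drop from 941 Mbit/s raw to 450 Mbit/s between the instances is
roughly what GRE encapsulation plus the extra virtual hops cost here. If
you are seeing much less than that, the usual suspects are NIC offloads on
the data interface and the instance MTU. A quick check (the ethtool flags
are standard; eth1 stands in for whatever your data NIC is):

root@compute1:~# ethtool -k eth1     # gso/tso/gro should all report "on"

and, if encapsulated frames are being fragmented, pushing a smaller MTU to
the instances over DHCP helps, e.g.:

# /etc/neutron/dnsmasq-neutron.conf, referenced by dnsmasq_config_file
# in dhcp_agent.ini; 1454 leaves headroom for the GRE overhead on a
# 1500-byte path
dhcp-option-force=26,1454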


From instance 2, I downloaded a large ISO (4.92 MB/s):

[root@host-10-0-0-4 ~]# wget http://centos.arcticnetwork.ca/6.4/isos/x86_64/CentOS-6.4-x86_64-minimal.iso
--2013-11-20 11:31:40--  http://centos.arcticnetwork.ca/6.4/isos/x86_64/CentOS-6.4-x86_64-minimal.iso
Resolving centos.arcticnetwork.ca... 64.59.140.91
Connecting to centos.arcticnetwork.ca|64.59.140.91|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 358959104 (342M) [application/octet-stream]
Saving to: `CentOS-6.4-x86_64-minimal.iso'

100%[==========================================>] 358,959,104 5.40M/s   in 70s

2013-11-20 11:32:49 (4.92 MB/s) - `CentOS-6.4-x86_64-minimal.iso' saved [358959104/358959104]


From instance 2, I cloned the nova git repository (5.01 MiB/s):
[root@host-10-0-0-4 ~]# git clone https://github.com/openstack/nova.git
Cloning into 'nova'...
remote: Counting objects: 221982, done.
remote: Compressing objects: 100% (62488/62488), done.
remote: Total 221982 (delta 177528), reused 195110 (delta 152622)
Receiving objects: 100% (221982/221982), 128.48 MiB | 5.01 MiB/s, done.
Resolving deltas: 100% (177528/177528), done.

From instance 2, I ran a speedtest:

yum -y install python-pip
pip-python install speedtest-cli
speedtest-cli --server 2565 --share


and the speedtest results were:
Testing download speed........................................
Download: 90.15 Mbit/s
Testing upload speed..................................................
Upload: 43.87 Mbit/s

In conclusion, check your setup again and you might find the issue.
