Hi, I am trying to test OpenStack with Quantum using OVS. In my test environment, the nova-network service runs in a KVM VM on a host dedicated to virtualization. Networking on this host is built on Linux bridges.
The KVM host (hx4) is a Debian Squeeze box running, among others, a guest
(essex) consisting of an Ubuntu Precise system running OpenStack Essex
and Quantum (from packages).
On my host (hx4), I have a 'vm1' bridge to which eth0 and the interfaces
of some KVM guests are attached:
root@hx4 ~# brctl show
bridge name bridge id STP enabled interfaces
vm1 8000.002219bb08dd yes eth0
vnet12
vnet12 is the eth1 interface of my essex guest.
In that guest, I have Open vSwitch installed (from distribution packages). I
created an OVS bridge into which the eth1 interface is plugged.
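For reference, this is roughly how such a bridge can be built (a sketch of the setup, not necessarily the exact commands I used; the names match the "ovs-vsctl show" output below):

```shell
# Create the integration bridge and plug the physical interface into it
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int eth1

# Add an internal port acting as an access port on VLAN 1111
ovs-vsctl add-port br-int br-int-tag tag=1111 \
    -- set interface br-int-tag type=internal
```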
I also have 3 nova-compute nodes (hx6, hx7, hx8) on bare metal machines,
on which ovs is installed and working properly.
On essex the bridge looks like:
root@essex:~# ovs-vsctl show
8df2727c-ba5c-4b34-bd84-b332cf6dfe6a
Bridge br-int
Port br-int
Interface br-int
type: internal
Port br-int-tag
tag: 1111
Interface br-int-tag
type: internal
Port "eth1"
Interface "eth1"
ovs_version: "1.4.0+build0"
I added some static IPs to internal ports:
root@essex:~# ip a show br-int
5: br-int: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UNKNOWN
link/ether d2:66:69:f3:78:48 brd ff:ff:ff:ff:ff:ff
inet 10.10.12.1/24 scope global br-int
root@essex:~# ip a show br-int-tag
8: br-int-tag: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UNKNOWN
link/ether 2e:aa:30:a4:09:cb brd ff:ff:ff:ff:ff:ff
inet 10.10.11.1/24 scope global br-int-tag
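The addresses above were added along these lines (a sketch; the exact invocations are an assumption):

```shell
# Bring up the internal ports and assign the static addresses
ip link set br-int up
ip addr add 10.10.12.1/24 dev br-int

ip link set br-int-tag up
ip addr add 10.10.11.1/24 dev br-int-tag
```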
I did the same on one of the compute nodes:
root@hx7:~# ovs-vsctl show
5e1ac6f9-211d-4a11-b1fd-c6ac81e5de84
Bridge br-int
Port br-int-tag
tag: 1111
Interface br-int-tag
type: internal
Port br-int
Interface br-int
type: internal
Port "eth1"
Interface "eth1"
ovs_version: "1.4.0+build0"
root@hx7:~# ip a show br-int
5: br-int: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UNKNOWN
link/ether d4:ae:52:73:6f:28 brd ff:ff:ff:ff:ff:ff
inet 10.10.12.7/24 scope global br-int
root@hx7:~# ip a show br-int-tag
12: br-int-tag: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UNKNOWN
link/ether 5e:f3:43:31:1d:9a brd ff:ff:ff:ff:ff:ff
inet 10.10.11.7/24 scope global br-int-tag
My problem is that, while I can ping the untagged interfaces, it does not
work with the tagged ones.
root@essex:~# ping 10.10.12.7 -c 1
PING 10.10.12.7 (10.10.12.7) 56(84) bytes of data.
64 bytes from 10.10.12.7: icmp_req=1 ttl=64 time=1.69 ms
--- 10.10.12.7 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.693/1.693/1.693/0.000 ms
root@essex:~# ping 10.10.11.7 -c 1
PING 10.10.11.7 (10.10.11.7) 56(84) bytes of data.
From 10.10.11.1 icmp_seq=1 Destination Host Unreachable
--- 10.10.11.7 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
Note that it works properly between two bare-metal nodes (tagged or
untagged). It only fails from/to the virtualized machine, and only for
tagged networks (for which traffic goes through a Linux bridge on the
virtualization host).
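One way I could narrow this down (a diagnostic suggestion, not something I have run yet) is to watch for 802.1Q-tagged frames on both sides of the Linux bridge on hx4 while pinging the tagged address from the guest:

```shell
# On the KVM host (hx4): do frames tagged with VLAN 1111 reach the
# guest's tap interface on the vm1 bridge?
tcpdump -e -n -i vnet12 vlan 1111

# And do they make it out onto the physical interface?
tcpdump -e -n -i eth0 vlan 1111
```

If the tagged frames show up on vnet12 but never on eth0 (or vice versa), the Linux bridge on the virtualization host is the place where they get dropped.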
Does anyone have a clue about what is going wrong here?
Sorry for the long message.
_______________________________________________ discuss mailing list [email protected] http://openvswitch.org/mailman/listinfo/discuss
