Yes. You should see flow table entries similar to these:


sudo ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=82162.227s, table=0, n_packets=219, n_bytes=23483, idle_age=11936, hard_age=65534, priority=3,in_port=1,dl_vlan=120 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=82174.601s, table=0, n_packets=52271, n_bytes=7173176, idle_age=61, hard_age=65534, priority=2,in_port=1 actions=drop
 cookie=0x0, duration=82176.97s, table=0, n_packets=2110, n_bytes=188772, idle_age=1618, hard_age=65534, priority=1 actions=NORMAL
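
You can map in_port=1 to a real interface by dumping the OpenFlow port numbers on the bridge; on a standard VLAN setup it is the link towards the physical bridge (br-eth1 in your case):

# Show OpenFlow port numbers and names on the integration bridge.
# in_port=1 in the flows above should correspond to the port facing
# br-eth1 (int-br-eth1 or a veth, depending on your setup).
sudo ovs-ofctl show br-int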



dl_vlan=120 is the VLAN number that comes from your ML2 ini configuration (120 in my environment; in your case it should be 200, the segmentation_id of cbse-net). For some reason this entry has not been programmed into your flow table. The actual code that generates this flow entry is at:



/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py
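
If you want to find the exact spot, grepping that module for the mod_vlan_vid action will point you at the code that programs these flows:

# Locate the VLAN-translation flow programming in the OVS agent.
grep -n "mod_vlan_vid" \
    /opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py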



The Neutron agent log file should also contain a CLI log entry like the following:



2014-04-28 11:23:56.860 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', '/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-ofctl', 'add-flow', 'br-int', 'hard_timeout=0,idle_timeout=0,priority=3,in_port=1,dl_vlan=120,actions=mod_vlan_vid:1,normal'] from (pid=11102)
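
Stripped of the rootwrap wrapping, that is equivalent to running the following by hand (in_port and the VLAN numbers here are from my environment; yours will differ):

# Translate provider VLAN 120 arriving from the physical bridge to the
# internal VLAN 1 used on br-int, then fall through to normal switching.
sudo ovs-ofctl add-flow br-int \
    "hard_timeout=0,idle_timeout=0,priority=3,in_port=1,dl_vlan=120,actions=mod_vlan_vid:1,normal"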



HTH



Dennis Qin







-----Original Message-----
From: Erich Weiler [mailto:[email protected]]
Sent: Friday, April 25, 2014 11:23 AM
To: openstack
Subject: [Openstack] Open vSwitch not working as expected...?



Hi Y'all,



I recently began rebuilding my OpenStack installation under the latest Icehouse 
release, and everything is almost working, but I'm having issues with Open 
vSwitch, at least on the compute nodes.



I'm using the ML2 plugin and VLAN tenant isolation.  I have this in my 
/etc/neutron/plugin.ini file:



----------
[ovs]
bridge_mappings = physnet1:br-eth1

[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
# Example: mechanism_drivers = linuxbridge,brocade

[ml2_type_flat]

[ml2_type_vlan]
network_vlan_ranges = physnet1:200:209
----------
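
(One assumption behind bridge_mappings = physnet1:br-eth1 is that br-eth1 exists on each compute node with eth1 plugged into it; the usual checks are:)

# The physical bridge named in bridge_mappings must exist and carry the NIC.
sudo ovs-vsctl show          # expect a bridge br-eth1 containing port eth1
# If it is missing, it can be created like this:
sudo ovs-vsctl add-br br-eth1
sudo ovs-vsctl add-port br-eth1 eth1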



My switchports that the nodes connect to are configured as trunks, allowing 
VLANs 200-209 to flow over them.



My network that the VMs should be connecting to is:



# neutron net-show cbse-net
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 23028b15-fb12-4a9f-9fba-02f165a52d44 |
| name                      | cbse-net                             |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 200                                  |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | dd25433a-b21d-475d-91e4-156b00f25047 |
| tenant_id                 | 7c1980078e044cb08250f628cbe73d29     |
+---------------------------+--------------------------------------+
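
(For reference, those provider attributes map one-to-one onto the net-create flags; a network like this would be created with something along these lines, though the exact invocation in my setup may have differed:)

# Illustrative only: a VLAN provider network matching the attributes above.
neutron net-create cbse-net \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 200
# And a subnet so instances get addresses in 10.200.0.0/16.
neutron subnet-create cbse-net 10.200.0.0/16 --dns-nameserver 128.114.48.44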



# neutron subnet-show dd25433a-b21d-475d-91e4-156b00f25047
+------------------+--------------------------------------------------+
| Field            | Value                                            |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "10.200.0.2", "end": "10.200.255.254"} |
| cidr             | 10.200.0.0/16                                    |
| dns_nameservers  | 128.114.48.44                                    |
| enable_dhcp      | True                                             |
| gateway_ip       | 10.200.0.1                                       |
| host_routes      |                                                  |
| id               | dd25433a-b21d-475d-91e4-156b00f25047             |
| ip_version       | 4                                                |
| name             |                                                  |
| network_id       | 23028b15-fb12-4a9f-9fba-02f165a52d44             |
| tenant_id        | 7c1980078e044cb08250f628cbe73d29                 |
+------------------+--------------------------------------------------+



So those VMs on that network should send packets that would be tagged with VLAN 
200.
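
(If tagging were working, I'd expect to be able to see 802.1Q frames for VLAN 200 on the compute node's physical NIC with something like:)

# -e prints the link-level header, so the 802.1Q tag is visible;
# the "vlan 200" filter matches only frames tagged with VLAN ID 200.
sudo tcpdump -e -n -i eth1 vlan 200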



I launch an instance, then look at the compute node with the instance on it.  
It doesn't get a DHCP address, so it can't talk to the neutron node with the 
dnsmasq server running on it.  I configure the VM's interface with a static IP 
on VLAN 200: 10.200.0.30, netmask 255.255.0.0.  I have another node set up on 
VLAN 200 on my switch to test with (10.200.0.50) that is a real bare-metal server.
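
(Inside the VM the static setup is just along these lines, assuming the guest NIC is eth0:)

# DHCP never reaches the guest, so configure the address by hand.
sudo ip addr add 10.200.0.30/16 dev eth0
sudo ip link set eth0 up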



I can't ping my bare-metal server.  I see the packets getting to eth1 on my 
compute node, but stopping there.  Then I figure out that the packets are *not 
being tagged* for VLAN 200 as they leave the compute node!!  So the switch is 
dropping them.  As a test I configure the switchport with "native vlan 200", 
and voila, the ping works.



So, Open vSwitch is not getting that it needs to tag the packets for VLAN 200.  
A little diagnostics on the compute node:



  ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=966.803s, table=0, n_packets=0, n_bytes=0, idle_age=966, priority=0 actions=NORMAL



Shouldn't that show some VLAN tagging?
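
(I'm assuming this means the OVS agent never programmed its flows on this node; generic ways to double-check that would be something like the following, with the exact service name and log path depending on the install:)

# From the controller: is the Open vSwitch agent on this host registered and alive?
neutron agent-list
# On the compute node: any errors while the agent wires up ports?
# (log path varies; under devstack the agent logs to its screen session)
grep -i error /var/log/neutron/openvswitch-agent.log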



and a tcpdump on eth1 on the compute node:



# tcpdump -e -n -vv -i eth1 | grep -i arp
tcpdump: WARNING: eth1: no IPv4 address assigned
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
11:21:50.462447 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28
11:21:51.462968 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28
11:21:52.462330 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28
11:21:53.462311 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28
11:21:54.463169 fa:16:3e:94:b3:63 > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.200.0.50 tell 10.200.0.30, length 28



That tcpdump also confirms the ARP packets are not being tagged with VLAN 200 
as they leave the physical interface.
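
(As I understand it, with VLAN tenant networks the tag that should appear on the wire is applied by flows on the physical bridge, which rewrite the internal VLAN used on br-int to the provider VLAN; so these are probably worth a look too:)

# The VM's port on br-int should have an internal "tag:" set by the agent.
sudo ovs-vsctl show
# Outbound flows on the physical bridge should rewrite that internal tag to
# the provider VLAN; I'd expect a mod_vlan_vid:200 action here.
sudo ovs-ofctl dump-flows br-eth1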



This worked before when I was testing Icehouse RC1; I don't know what changed 
with Open vSwitch...  Anyone have any ideas?



Thanks as always for the help!!  This list has been very helpful.



cheers,

erich





_______________________________________________

Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Post to     : [email protected]

Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

