Hi Kevin,

Typically we noticed that the underlay switches maintain a forwarding table like this:

VLAN ID    MAC Address    Learned-Interface

In the physical underlay, with the current architecture, enabling VLAN means the same DVR Unique MAC will appear on different VLANs as packets get DVR-routed. As a result, the rows of the above table in the switch are updated very frequently, with new VLANs noted in incoming packets for the same DVR MAC address, even though they arrive on the same physical port.

We are not sure whether all switches maintain their tables this way, but at least we saw that Open vSwitch implementations did. So we consciously did not promote VLAN in the initial phase of DVR.
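To make the concern concrete, here is a small, hypothetical sketch of the two learning strategies under discussion. It is not real switch or OVS code; the table layout and names are illustrative. A table keyed by MAC alone gets its single row rewritten every time the same DVR MAC arrives on a new VLAN (the thrashing case), while a table keyed by (VLAN, MAC) simply holds one stable entry per VLAN:

```python
# Illustrative only: compare MAC-keyed vs (VLAN, MAC)-keyed learning tables
# when one DVR unique MAC shows up on several VLANs from the same port.

def learn(table, key, value, stats):
    """Record a learned entry; count how often an entry is (re)written."""
    if table.get(key) != value:
        stats["updates"] += 1
        table[key] = value

dvr_mac = "fa:16:3f:00:00:01"  # one DVR unique MAC per compute node (example value)

# Interleaved frames: same DVR MAC, same port, three different VLANs.
frames = [(vlan, dvr_mac, "eth1") for _ in range(3) for vlan in (100, 200, 300)]

mac_keyed, vlan_mac_keyed = {}, {}
stats_mac = {"updates": 0}
stats_vlan_mac = {"updates": 0}

for vlan, mac, port in frames:
    # Strategy A: MAC alone is the key, VLAN is part of the value.
    # Every VLAN change rewrites the single row -> thrashing.
    learn(mac_keyed, mac, (vlan, port), stats_mac)
    # Strategy B: MAC uniqueness is per-VLAN, as Kevin describes.
    # One stable entry per VLAN.
    learn(vlan_mac_keyed, (vlan, mac), port, stats_vlan_mac)

print(stats_mac["updates"])       # prints 9: rewritten on every VLAN change
print(stats_vlan_mac["updates"])  # prints 3: one entry per VLAN, then stable
```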




From: Kevin Benton [mailto:blak...@gmail.com]
Sent: Thursday, September 18, 2014 3:01 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] DVR Tunnel Design Question

Can you clarify what you mean with the thrashing condition? MAC addresses only 
need to be unique per-VLAN so I don't see how the same MAC on multiple VLANs 
from the same physical port would lead to any issues.

On Wed, Sep 17, 2014 at 12:41 PM, Armando M. 
<arma...@gmail.com<mailto:arma...@gmail.com>> wrote:

VLAN is on the radar; VXLAN/GRE were done to start with.

I believe Vivek mentioned the rationale in some other thread. The gist
of it is below:

In the current architecture, we use a unique DVR MAC per compute node
to forward DVR-routed traffic directly to the destination compute node.
The DVR-routed traffic from the source compute node will carry the
destination VM's underlay VLAN in the frame, but the source MAC in
that same frame will be the DVR Unique MAC. So the same DVR Unique MAC
is used for potentially a number of overlay network VMs that exist
on that same source compute node.

The underlay infrastructure switches will see the same DVR Unique MAC
being associated with different VLANs on incoming frames, and so this
would result in VLAN Thrashing on the switches in the physical cloud
infrastructure. Since tunneling protocols carry the entire DVR routed
inner frames as tunnel payloads, there is no thrashing effect on
underlay switches.
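The point about tunnel payloads can be sketched as follows. This is a deliberately simplified model, not the actual OVS datapath structures; all MAC addresses and field names are made up for illustration. The key observation is that after encapsulation, the underlay only ever learns the outer TEP MACs, so the DVR MAC never reaches the physical switches:

```python
# Simplified sketch: tunneling hides the DVR MAC from the underlay.
# Field names and addresses are illustrative, not real OVS structures.

def dvr_route(dvr_mac, dst_vm_mac):
    # DVR rewrites the inner source MAC to the node's DVR unique MAC.
    return {"src_mac": dvr_mac, "dst_mac": dst_vm_mac, "payload": "..."}

def vxlan_encap(inner_frame, src_tep_mac, dst_tep_mac, vni):
    # The whole routed frame becomes opaque payload; underlay switches
    # only see (and learn) the outer TEP MACs.
    return {"src_mac": src_tep_mac, "dst_mac": dst_tep_mac,
            "vni": vni, "inner": inner_frame}

inner = dvr_route("fa:16:3f:00:00:01", "fa:16:3e:bb:bb:bb")
outer = vxlan_encap(inner, "52:54:00:11:11:11", "52:54:00:22:22:22", vni=5001)

# What the underlay switch learns: only the outer TEP source MAC.
print(outer["src_mac"])           # 52:54:00:11:11:11 (TEP MAC, not the DVR MAC)
print(outer["inner"]["src_mac"])  # fa:16:3f:00:00:01 (hidden inside the tunnel)
```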

There will still be a thrashing effect on endpoints on the compute
nodes themselves, when they try to learn the association between the
inner frame source MAC and the TEP port on which the tunneled frame is
received. But we have addressed that in the L2 agent by adding a 'DVR
Learning Blocker' table, which ensures that learning is suppressed for
DVR-routed packets alone.
As a result, VLAN was not promoted as a supported underlay for the
initial DVR architecture.


On 16 September 2014 20:35, 龚永生 
<gong...@unitedstack.com<mailto:gong...@unitedstack.com>> wrote:
> I think the VLAN should also be supported later.  The tunnel should not be
> the prerequisite for the DVR feature.
> ------------------ Original ------------------
> From:  "Steve Wormley"<openst...@wormley.com<mailto:openst...@wormley.com>>;
> Date:  Wed, Sep 17, 2014 10:29 AM
> To:  
> "openstack-dev"<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>;
> Subject:  [openstack-dev] [neutron] DVR Tunnel Design Question
> In our environment using VXLAN/GRE would make it difficult to keep some of
> the features we currently offer our customers. So for a while now I've been
> looking at the DVR code, blueprints and Google drive docs and other than it
> being the way the code was written I can't find anything indicating why a
> Tunnel/Overlay network is required for DVR or what problem it was solving.
> Basically I'm just trying to see if I missed anything as I look into doing a
> VLAN/OVS implementation.
> Thanks,
> -Steve Wormley

> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org<mailto:OpenStack-dev@lists.openstack.org>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
