Re: [ovs-dev] Re: [PATCH] ovn-controller: Support vxlan tunnel in ovn

2017-05-09 Thread Ben Pfaff
The solution presented so far breaks OVN current and planned features.
If you have a solution that doesn't break OVN current or planned
features, let's talk about it.  Until then, it is not interesting.

On Tue, May 09, 2017 at 09:29:07AM +0800, wang.qia...@zte.com.cn wrote:
> Hi Ben and Mikey, thank you for your review and analysis.
> 
> As we described below, VXLAN is very commonly used, and the use case that we
> mentioned below is a typical architecture in telecom operators' networks.
> So we think it is necessary for OVN to support VXLAN between OVS and hardware
> switches. If OVN does not support the use case we described before, we think
> the OVN feature set may be incomplete. For this reason, we made the changes in
> this patch.
> 
> Because a VXLAN tunnel cannot carry as much information as Geneve to
> fulfill the current OVN pipeline, we made some assumptions. As Mickey and
> Ben said, these assumptions may not hold going forward, for example around
> S_SWITCH_IN_L2_LKUP and more than one chassisredirect port per datapath.
> 
> We think these incompatibilities are not intractable problems. If we have
> consensus on the necessity of supporting VXLAN, we could work out a
> blueprint together that has less impact on currently planned features and
> solves the problem we mentioned more effectively.
> 
> Waiting for your suggestions. Thanks.
> 
> 
> 
> 
> 
> 
> Mickey Spiegel <mickeys@gmail.com>
> From: ovs-dev-boun...@openvswitch.org
> 2017/05/08 04:58
> 
> To: xu.r...@zte.com.cn
> Cc: ovs dev <d...@openvswitch.org>, ovs-dev-boun...@openvswitch.org
> Subject: Re: [ovs-dev] Re: [PATCH] ovn-controller: Support vxlan tunnel in ovn
> 
> 
> There are some assumptions that you are making which need to be called 
> out.
> These assumptions may not hold going forward. In fact I refer to two
> different patches below that are currently under review, that break your
> assumptions.
> 
> On Fri, May 5, 2017 at 7:18 PM, <xu.r...@zte.com.cn> wrote:
> 
> > Hi, Russell
> >
> > We think VXLAN is the most commonly used tunnel encapsulation in
> > OpenStack overlay networks, so OVN should take it into consideration.
> >
> > As my workmate wang qianyu said, we are considering compute nodes that
> > connect with existing hardware switches, which act as VTEPs in
> > association with SR-IOV.
> >
> > After discussion, we feel that only the following changes are needed
> > for the VXLAN tunnel in table 0:
> >
> > 1. For a local switch, move MFF_TUN_ID to MFF_LOG_DATAPATH and
> > resubmit to OFTABLE_ETH_UCAST (table 29).
> >
> 
> It looks like you are overloading OFTABLE_ETH_UCAST that you define here
> with S_SWITCH_IN_L2_LKUP in ovn/northd/ovn-northd.c. Hardcoding the value
> to 29 in ovn/controller/lflow.h is not the way to do this. This pipeline
> stage does move back as new features are added. In fact it is now table 31
> due to the recent addition of 2 tables for DNS lookup.
> 
> 
> > //---In table 29, we can find out the dst port based on the dst MAC.
> >
> 
> You are assuming that outport determination is only based on
> S_SWITCH_IN_L2_LKUP with no impact from any other ingress pipeline stages.
> This may not always be true, which I think is the point of Ben's 
> complaint.
> For example the SFC patch that is currently under review (
> http://patchwork.ozlabs.org/patch/754427/) may set outport and then do
> "output" in the ingress pipeline, in a pipeline stage other than
> S_SWITCH_IN_L2_LKUP.
> 
> The alternative is to go through the entire ingress pipeline, but here you
> have a problem since you do not know the inport. The current VTEP-centric
> VXLAN code assumes that there is only one port binding per datapath from
> the VTEP chassis. For the general case that you are trying to address, 
> this
> assumption does not hold, so you cannot properly determine the inport. The
> inport may affect the ultimate decision on outport. This is certainly the
> case for the SFC patch currently under review.
> 
> You are also assuming that inport does not affect anything in the egress
> pipeline. This seems to be true at the moment, but this might not always 
> be
> true as features are added.
> 
> The existing VTEP functionality does not rely on the assumptions that you
> made, but since you changed the logic to determine inport in case of 
> VXLAN,
> you are changing existing functionality.
> 
> 
> 2. For a local chassisredirect port, move MFF_TUN_ID to MFF_LOG_DATAPATH,
> set the port's tunnel_key in MFF_LOG_OUTPORT, and then resubmit to
> OFTABLE_LOCAL_OUTPUT.
> //---In table 33, we can find out the dst local sw and sw patch port based
> on the local chassisredirect port, and then follow the existing flows.

Re: [ovs-dev] Re: [PATCH] ovn-controller: Support vxlan tunnel in ovn

2017-05-08 Thread Macauley Cheng
Hi Qianyu,
Did you try ovn-controller-vtep before?
You can use ovn-controller-vtep with a HW switch to set up the VXLAN tunnel
to hosts that run ovn-controller.


-Original Message-
From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-boun...@openvswitch.org] 
On Behalf Of wang.qia...@zte.com.cn
Sent: Tuesday, May 09, 2017 9:29 AM
To: ovs-dev-boun...@openvswitch.org; b...@ovn.org; mickeys@gmail.com
Cc: ovs dev; xu.r...@zte.com.cn; ovs-dev-boun...@openvswitch.org
Subject: [ovs-dev] Re: [PATCH] ovn-controller: Support vxlan tunnel in ovn

Hi Ben and Mikey, thank you for your review and analysis.

As we described below, VXLAN is very commonly used, and the use case that we
mentioned below is a typical architecture in telecom operators' networks.
So we think it is necessary for OVN to support VXLAN between OVS and hardware
switches. If OVN does not support the use case we described before, we think
the OVN feature set may be incomplete. For this reason, we made the changes in
this patch.

Because a VXLAN tunnel cannot carry as much information as Geneve to fulfill
the current OVN pipeline, we made some assumptions. As Mickey and Ben said,
these assumptions may not hold going forward, for example around
S_SWITCH_IN_L2_LKUP and more than one chassisredirect port per datapath.

We think these incompatibilities are not intractable problems. If we have
consensus on the necessity of supporting VXLAN, we could work out a blueprint
together that has less impact on currently planned features and solves the
problem we mentioned more effectively.

Waiting for your suggestions. Thanks.






Mickey Spiegel <mickeys@gmail.com>
From: ovs-dev-boun...@openvswitch.org
2017/05/08 04:58

To: xu.r...@zte.com.cn
Cc: ovs dev <d...@openvswitch.org>, ovs-dev-boun...@openvswitch.org
Subject: Re: [ovs-dev] Re: [PATCH] ovn-controller: Support vxlan tunnel in ovn


There are some assumptions that you are making which need to be called 
out.
These assumptions may not hold going forward. In fact I refer to two
different patches below that are currently under review, that break your
assumptions.

On Fri, May 5, 2017 at 7:18 PM, <xu.r...@zte.com.cn> wrote:

> Hi, Russell
>
> We think VXLAN is the most commonly used tunnel encapsulation in
> OpenStack overlay networks, so OVN should take it into consideration.
>
> As my workmate wang qianyu said, we are considering compute nodes that
> connect with existing hardware switches, which act as VTEPs in
> association with SR-IOV.
>
> After discussion, we feel that only the following changes are needed for
> the VXLAN tunnel in table 0:
>
> 1. For a local switch, move MFF_TUN_ID to MFF_LOG_DATAPATH and resubmit
> to OFTABLE_ETH_UCAST (table 29).
>

It looks like you are overloading OFTABLE_ETH_UCAST that you define here
with S_SWITCH_IN_L2_LKUP in ovn/northd/ovn-northd.c. Hardcoding the value
to 29 in ovn/controller/lflow.h is not the way to do this. This pipeline
stage does move back as new features are added. In fact it is now table 31
due to the recent addition of 2 tables for DNS lookup.


> //---In table 29, we can find out the dst port based on the dst MAC.
>

You are assuming that outport determination is only based on
S_SWITCH_IN_L2_LKUP with no impact from any other ingress pipeline stages.
This may not always be true, which I think is the point of Ben's 
complaint.
For example the SFC patch that is currently under review (
http://patchwork.ozlabs.org/patch/754427/) may set outport and then do
"output" in the ingress pipeline, in a pipeline stage other than
S_SWITCH_IN_L2_LKUP.

The alternative is to go through the entire ingress pipeline, but here you
have a problem since you do not know the inport. The current VTEP-centric
VXLAN code assumes that there is only one port binding per datapath from
the VTEP chassis. For the general case that you are trying to address, 
this
assumption does not hold, so you cannot properly determine the inport. The
inport may affect the ultimate decision on outport. This is certainly the
case for the SFC patch currently under review.

You are also assuming that inport does not affect anything in the egress
pipeline. This seems to be true at the moment, but this might not always 
be
true as features are added.

The existing VTEP functionality does not rely on the assumptions that you
made, but since you changed the logic to determine inport in case of 
VXLAN,
you are changing existing functionality.


> 2. For a local chassisredirect port, move MFF_TUN_ID to MFF_LOG_DATAPATH,
> set the port's tunnel_key in MFF_LOG_OUTPORT, and then resubmit to
> OFTABLE_LOCAL_OUTPUT.
> //---In table 33, we can find out the dst local sw and sw patch port based
> on the local chassisredirect port, and then follow the existing flows.
>

At the moment, the only case where a tunnel is used for a datapath
representing a logical router is when the outport is a chassisredirect port.

[ovs-dev] Re: [PATCH] ovn-controller: Support vxlan tunnel in ovn

2017-05-08 Thread wang . qianyu
Hi Ben and Mikey, thank you for your review and analysis.

As we described below, VXLAN is very commonly used, and the use case that we
mentioned below is a typical architecture in telecom operators' networks.
So we think it is necessary for OVN to support VXLAN between OVS and hardware
switches. If OVN does not support the use case we described before, we think
the OVN feature set may be incomplete. For this reason, we made the changes in
this patch.

Because a VXLAN tunnel cannot carry as much information as Geneve to fulfill
the current OVN pipeline, we made some assumptions. As Mickey and Ben said,
these assumptions may not hold going forward, for example around
S_SWITCH_IN_L2_LKUP and more than one chassisredirect port per datapath.

We think these incompatibilities are not intractable problems. If we have
consensus on the necessity of supporting VXLAN, we could work out a blueprint
together that has less impact on currently planned features and solves the
problem we mentioned more effectively.

Waiting for your suggestions. Thanks.






Mickey Spiegel <mickeys@gmail.com>
From: ovs-dev-boun...@openvswitch.org
2017/05/08 04:58

To: xu.r...@zte.com.cn
Cc: ovs dev <d...@openvswitch.org>, ovs-dev-boun...@openvswitch.org
Subject: Re: [ovs-dev] Re: [PATCH] ovn-controller: Support vxlan tunnel in ovn


There are some assumptions that you are making which need to be called 
out.
These assumptions may not hold going forward. In fact I refer to two
different patches below that are currently under review, that break your
assumptions.

On Fri, May 5, 2017 at 7:18 PM, <xu.r...@zte.com.cn> wrote:

> Hi, Russell
>
> We think VXLAN is the most commonly used tunnel encapsulation in
> OpenStack overlay networks, so OVN should take it into consideration.
>
> As my workmate wang qianyu said, we are considering compute nodes that
> connect with existing hardware switches, which act as VTEPs in
> association with SR-IOV.
>
> After discussion, we feel that only the following changes are needed for
> the VXLAN tunnel in table 0:
>
> 1. For a local switch, move MFF_TUN_ID to MFF_LOG_DATAPATH and resubmit
> to OFTABLE_ETH_UCAST (table 29).
>

It looks like you are overloading OFTABLE_ETH_UCAST that you define here
with S_SWITCH_IN_L2_LKUP in ovn/northd/ovn-northd.c. Hardcoding the value
to 29 in ovn/controller/lflow.h is not the way to do this. This pipeline
stage does move back as new features are added. In fact it is now table 31
due to the recent addition of 2 tables for DNS lookup.


> //---In table 29, we can find out the dst port based on the dst MAC.
>

You are assuming that outport determination is only based on
S_SWITCH_IN_L2_LKUP with no impact from any other ingress pipeline stages.
This may not always be true, which I think is the point of Ben's 
complaint.
For example the SFC patch that is currently under review (
http://patchwork.ozlabs.org/patch/754427/) may set outport and then do
"output" in the ingress pipeline, in a pipeline stage other than
S_SWITCH_IN_L2_LKUP.

The alternative is to go through the entire ingress pipeline, but here you
have a problem since you do not know the inport. The current VTEP-centric
VXLAN code assumes that there is only one port binding per datapath from
the VTEP chassis. For the general case that you are trying to address, 
this
assumption does not hold, so you cannot properly determine the inport. The
inport may affect the ultimate decision on outport. This is certainly the
case for the SFC patch currently under review.

You are also assuming that inport does not affect anything in the egress
pipeline. This seems to be true at the moment, but this might not always 
be
true as features are added.

The existing VTEP functionality does not rely on the assumptions that you
made, but since you changed the logic to determine inport in case of 
VXLAN,
you are changing existing functionality.


> 2. For a local chassisredirect port, move MFF_TUN_ID to MFF_LOG_DATAPATH,
> set the port's tunnel_key in MFF_LOG_OUTPORT, and then resubmit to
> OFTABLE_LOCAL_OUTPUT.
> //---In table 33, we can find out the dst local sw and sw patch port based
> on the local chassisredirect port, and then follow the existing flows.
>
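For concreteness, the two table 0 behaviors proposed above might look roughly like the following flows in ovs-ofctl syntax. This is a hypothetical sketch: <vxlan_port>, <switch_key>, <router_key>, and <cr_port_key> are placeholders, and the register assignments (metadata for MFF_LOG_DATAPATH, reg15 for MFF_LOG_OUTPORT) are assumptions about the era's field mapping:

```
# 1. Traffic for a local logical switch: recover the datapath from the
#    VNI and resubmit straight to the L2 lookup table (the hardcoded 29
#    is the fragile part discussed in this thread).
table=0, priority=100, in_port=<vxlan_port>, tun_id=<switch_key>,
  actions=move:NXM_NX_TUN_ID[0..23]->OXM_OF_METADATA[0..23], resubmit(,29)

# 2. Traffic for a local chassisredirect port: recover the datapath, load
#    the chassisredirect port's tunnel_key as the outport, and resubmit
#    to the local output table.
table=0, priority=100, in_port=<vxlan_port>, tun_id=<router_key>,
  actions=move:NXM_NX_TUN_ID[0..23]->OXM_OF_METADATA[0..23],
  load:<cr_port_key>->NXM_NX_REG15[0..15], resubmit(,33)
```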

At the moment, the only case where a tunnel is used for a datapath
representing a logical router is when the outport is a chassisredirect
port. Your code assumes that will always be the case. If we do what you 
are
suggesting, then that becomes a restriction for all logical router 
features
going forward.

This code also assumes that there can only be one chassisredirect port per
datapath per chassis. There is a patch that has not yet been reviewed (
http://patchwork.ozlabs.org/patch/732815/) that proposes multiple
distributed gateway ports (and corresponding chassisredirect ports) on
one datapath. I am not sure what the use case is, but if that feature were
added and more than one distributed gateway por