Re: [ovs-dev] Re: Re: Re: [spam suspicious mail]Re: Re: Re: [PATCH 2/2] ovn-northd: Fix ping failure of vlan networks.

2017-06-29 Thread Mickey Spiegel
On Tue, Jun 27, 2017 at 10:00 AM, Han Zhou <zhou...@gmail.com> wrote:
>
> > It is not about a limit but more about the use case. Could you explain
> > your use case: why use localnet ports here when the VMs can be connected
> > through a logical router on the overlay? Did my patch work for you? Does
> > it also work when you remove the localnet ports?
> >
> > Mickey mentioned scenarios where a combination of localnet ports and a
> > logical router is valid, which are gateway use cases, and it seems to me
> > that is not your case because you are just trying to connect two VMs.
> >
> > Han

Re: [ovs-dev] Re: Re: Re: [spam suspicious mail]Re: Re: Re: [PATCH 2/2] ovn-northd: Fix ping failure of vlan networks.

2017-06-29 Thread Han Zhou
I learned that this use case is a kind of hierarchical port binding scenario:
https://specs.openstack.org/openstack/neutron-specs/specs/kilo/ml2-hierarchical-port-binding.html

In such a scenario, the user wants to use OVN to manage vlan networks, and
the vlan networks are connected via a VTEP overlay, which is not managed by
OVN itself. The VTEP is needed to connect BM/SRIOV VMs to the same L2
segment that the OVS VIFs are connected to.

The user doesn't want OVN to manage the VTEPs, since that would flood
traffic to many VTEPs (MAC learning is not supported yet).

So in this scenario, the user wants to utilize the distributed logical
router as a way to optimize the datapath. For VM-to-VM traffic between
different vlans, instead of going to a centralized external L3 router, the
user wants the traffic to be tagged with the destination vlan directly and
go straight from the source HV to the destination HV through the
destination vlan.

In the VTEP scenario, this is a valuable optimization. Even in a normal
vlan setup without VTEP, it can be an optimization too if the src and dst
VMs are on the same HV (so that the packet doesn't need to go to the
physical switch and come back).
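
Just to make the topology concrete, here is a minimal ovn-nbctl sketch of
what I mean (the names vlan101/vlan102/phys-net1 and the tag/MAC/address
values are made up for illustration; both localnet ports are assumed to map
to the same physical bridge via ovn-bridge-mappings):

  # Two vlan-backed logical switches, each attached to the physical
  # network through a localnet port tagged with its vlan id.
  ovn-nbctl ls-add vlan101
  ovn-nbctl lsp-add vlan101 ln-vlan101
  ovn-nbctl lsp-set-type ln-vlan101 localnet
  ovn-nbctl lsp-set-addresses ln-vlan101 unknown
  ovn-nbctl lsp-set-options ln-vlan101 network_name=phys-net1
  ovn-nbctl set Logical_Switch_Port ln-vlan101 tag=101
  # (vlan102 is set up the same way with tag=102.)

  # A distributed logical router connects the two vlans, so inter-vlan
  # traffic can be routed on the source HV and sent out already tagged
  # with the destination vlan.
  ovn-nbctl lr-add lr0
  ovn-nbctl lrp-add lr0 lrp-vlan101 00:00:00:00:01:01 100.0.0.1/24
  ovn-nbctl lsp-add vlan101 vlan101-lr0
  ovn-nbctl lsp-set-type vlan101-lr0 router
  ovn-nbctl lsp-set-addresses vlan101-lr0 router
  ovn-nbctl lsp-set-options vlan101-lr0 router-port=lrp-vlan101
  # (a second lrp with 200.0.0.1/24 attaches vlan102 the same way.)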

So, I agree with Qianyu that connecting vlan networks with a logical router
is reasonable, which means the transport of a logical router can be not
only tunnels but also physical networks (localnet ports). To fulfill this
requirement, we need to solve some problems in the current OVN code:

1) Since the data path is asymmetric, we need to solve the conntrack
problem for the localnet port. I agree with the idea of the patch from
Qianyu, which bypasses the firewall for localnet ports: since a localnet
port connects real endpoints, there is probably not much value in applying
ACLs on localnet ports. I am not sure there is any use case where an ACL is
really needed on a localnet port.
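
As a rough sketch, the effect of such a bypass in the logical pipeline
could look like the following (the stage names, priority, and the
"ln-vlan101" port name are only illustrative, not the exact ones northd
would emit):

  # ls_in_pre_acl / ls_out_pre_acl: don't send localnet traffic to
  # conntrack, so the ACL stages see it as un-tracked and the asymmetric
  # reply path is not dropped by the stateful firewall.
  table=ls_in_pre_acl  priority=110 match=(ip && inport == "ln-vlan101")  action=(next;)
  table=ls_out_pre_acl priority=110 match=(ip && outport == "ln-vlan101") action=(next;)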

2) When there are ARP requests from the vlan network (e.g. from a BM) to a
logical router interface IP, the ARP request will reach every HV through
the localnet port, and the distributed logical router port will respond
from every HV. Shall we disable the ARP response from the logical router
for requests arriving from the localnet port? In this scenario, I would
expect the BM/SRIOV VMs on the same vlan to use a different gateway rather
than the logical router.
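
If we go that way, a sketch of the kind of flow that could suppress it
(again, the stage name, priority, and port name are illustrative
assumptions only):

  # ls_in_arp_rsp: ARP requests arriving from the localnet port skip the
  # local ARP responder and are forwarded like ordinary traffic, so the
  # router IP is not answered from every HV.
  table=ls_in_arp_rsp priority=100 match=(inport == "ln-vlan101") action=(next;)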

3) We need to add a restriction/validation so that the localnet connection
is used only when we are sure the two logical switches are on the same
"physical network", e.g. different vlans under the same physical bridge
group, or a virtual L2 bridge group formed by vtep overlays.
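
For example (reusing the hypothetical names from the sketch above), the
check could amount to verifying that both localnet ports reference the same
network_name, and that every chassis maps that network to a bridge:

  # On each HV: map the physical network to a local bridge.
  ovs-vsctl set open . external-ids:ovn-bridge-mappings=phys-net1:br-phys

  # In the northbound db: both localnet ports must point at the same
  # physical network for the direct vlan-to-vlan path to be valid.
  ovn-nbctl lsp-set-options ln-vlan101 network_name=phys-net1
  ovn-nbctl lsp-set-options ln-vlan102 network_name=phys-net1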

Thanks,
Han


Re: [ovs-dev] Re: Re: Re: [spam suspicious mail]Re: Re: Re: [PATCH 2/2] ovn-northd: Fix ping failure of vlan networks.

2017-06-27 Thread Han Zhou
It is not about a limit but more about the use case. Could you explain your
use case: why use localnet ports here when the VMs can be connected through
a logical router on the overlay? Did my patch work for you? Does it also
work when you remove the localnet ports?

Mickey mentioned scenarios where a combination of localnet ports and a
logical router is valid, which are gateway use cases, and it seems to me
that is not your case because you are just trying to connect two VMs.

Han

On Tue, Jun 27, 2017 at 2:32 AM, <wang.qia...@zte.com.cn> wrote:

> Hi Han Zhou,
>
> > If using localnet, it should rely on the physical network (L2 and L3)
> > to reach the destination, not the overlay, so adding the logical router
> > doesn't make sense here
>
> Why does OVN have this limit for physical networks? Does this mean that a
> vlan network cannot use the L3 function of OVN?
>
> Thanks

[ovs-dev] Re: Re: Re: [spam suspicious mail]Re: Re: Re: [PATCH 2/2] ovn-northd: Fix ping failure of vlan networks.

2017-06-27 Thread wang.qianyu
Hi Han Zhou,

> If using localnet, it should rely on the physical network (L2 and L3) to
> reach the destination, not the overlay, so adding the logical router
> doesn't make sense here

Why does OVN have this limit for physical networks? Does this mean that a
vlan network cannot use the L3 function of OVN?

Thanks





Han Zhou <zhou...@gmail.com>
2017/06/27 16:24

To: wang.qia...@zte.com.cn
Cc: Russell Bryant <russ...@ovn.org>, ovs dev <d...@openvswitch.org>,
zhou.huij...@zte.com.cn, xurong00037997 <xu.r...@zte.com.cn>
Subject: Re: [ovs-dev] Re: [spam suspicious mail]Re: Re: Re: [PATCH 2/2]
ovn-northd: Fix ping failure of vlan networks.




On Thu, Jun 15, 2017 at 1:04 AM, <wang.qia...@zte.com.cn> wrote:
>
> Hi Russell, I am sorry for the late reply.
> The route is not bound to a chassis and has no redirect-chassis. The
> dumped northbound db is as follows.
> The IP addresses 100.0.0.148 and 200.0.0.2 are located on different
> chassis. The ping between them did not succeed before this patch.
>
>
> [root@tecs159 ~]#
> [root@tecs159 ~]# ovsdb-client dump unix:/var/run/openvswitch/ovnnb_db.sock
> ACL table
> _uuid action direction external_ids log match priority
> ----- ------ --------- ------------ --- ----- --------
> ac2900f9-49fd-430a-b646-88d1f7c54ab8 allow from-lport
>   {"neutron:lport"="1ef52eb4-1f0e-416d-8dc2-e2fc7557979c"} false
>   "inport == \"1ef52eb4-1f0e-416d-8dc2-e2fc7557979c\" && ip4 && ip4.dst ==
>   {255.255.255.255, 100.0.0.0/24} && udp && udp.src == 68 && udp.dst == 67"
>   1002
> 784a55c3-05fd-4c4d-a51e-5b9ee5cc1e8e allow from-lport
>   {"neutron:lport"="6c04e45e-ad83-4cf0-ae74-84f7720a5bc4"} false
>   "inport == \"6c04e45e-ad83-4cf0-ae74-84f7720a5bc4\" && ip4 && ip4.dst ==
>   {255.255.255.255, 100.0.0.0/24} && udp && udp.src == 68 && udp.dst == 67"
>   1002
> 08be2532-f8ff-493f-83e3-085eede36e08 allow from-lport
>   {"neutron:lport"="c5ff4f7b-bd0d-4757-ac18-636f9d62b94c"} false
>   "inport == \"c5ff4f7b-bd0d-4757-ac18-636f9d62b94c\" && ip4 && ip4.dst ==
>   {255.255.255.255, 100.0.0.0/24} && udp && udp.src == 68 && udp.dst == 67"
>   1002
> bb263947-a436-4a0d-9218-5abd89546a69 allow from-lport
>   {"neutron:lport"="f8de0603-f4ec-4546-a8f3-574640f270e8"} false
>   "inport == \"f8de0603-f4ec-4546-a8f3-574640f270e8\" && ip4 && ip4.dst ==
>   {255.255.255.255, 200.0.0.0/24} && udp && udp.src == 68 && udp.dst == 67"
>   1002
> 092964cc-2ce5-4a34-b747-558006bb3de1 allow-related from-lport
>   {"neutron:lport"="1ef52eb4-1f0e-416d-8dc2-e2fc7557979c"} false
>   "inport == \"1ef52eb4-1f0e-416d-8dc2-e2fc7557979c\" && ip4" 1002
> 5f2ebb8e-edbc-40aa-ada6-2fc90fc104af allow-related from-lport
>   {"neutron:lport"="1ef52eb4-1f0e-416d-8dc2-e2fc7557979c"} false
>   "inport == \"1ef52eb4-1f0e-416d-8dc2-e2fc7557979c\" && ip6" 1002
> 13d32fab-0ed7-4472-97c2-1e3057eaca6e allow-related from-lport
>   {"neutron:lport"="6c04e45e-ad83-4cf0-ae74-84f7720a5bc4"} false
>   "inport == \"6c04e45e-ad83-4cf0-ae74-84f7720a5bc4\" && ip4" 1002
> 7fa4e0b0-ffce-436f-a20a-07b0584c3285 allow-related from-lport
>   {"neutron:lport"="6c04e45e-ad83-4cf0-ae74-84f7720a5bc4"} false
>   "inport == \"6c04e45e-ad83-4cf0-ae74-84f7720a5bc4\" && ip6" 1002
> b32cf462-a8e5-4597-9c6e-4dc02ae2e2c4 allow-related from-lport
>   {"neutron:lport"="c5ff4f7b-bd0d-4757-ac18-636f9d62b94c"} false
>   "inport == \"c5ff4f7b-bd0d-4757-ac18-636f9d62b94c\" && ip4" 1002
> 4d003f24-f546-49fa-a33c-92384e4d3549 allow-related from-lport
>   {"neutron:lport"="c5ff4f7b-bd0d-4757-ac18-636f9d62b94c"} false
>   "inport == \"c5ff4f7b-bd0d-4757-ac18-636f9d62b94c\" && ip6" 1002
> 7078873a-fa44-4c64-be7f-067d19361fb4 allow-related from-lport
>   {"neutron:lport"="f8de0603-f4ec-4546-a8f3-574640f270e8"} false
>   "inport == \"f8de0603-f4ec-4546