Hi Han

Thanks for the review.

I will revise the commit message in a new patch.

Qianyu




Han Zhou <zhou...@gmail.com>
2017/07/07 02:15
 
        To:     Mickey Spiegel <mickeys....@gmail.com>, 
        Cc:     wang.qia...@zte.com.cn, ovs dev <d...@openvswitch.org>, 
xurong00037997 <xu.r...@zte.com.cn>, zhou.huij...@zte.com.cn
        Subject:        Re: [ovs-dev] [PATCH 2/2] ovn-northd: Fix ping failure of vlan networks.


Setting aside the original motivation of this change, I found the patch valuable 
for data-plane performance.

When a localnet port is used for communication between 2 ports of the same 
lswitch (the basic provider network scenario), each flow is tracked in the 
conntrack table twice without the patch. With the patch, performance improves 
in 2 ways:

1) It eliminates 50% of the conntrack operations.

2) It halves the number of entries in the conntrack table, which also helps 
reduce conntrack cost.

I ran some TCP_CRR tests; it improves performance by 5 - 10%.
We discussed this in today's OVN meeting and agreed it is a valid optimization, 
because the localnet port is used as transport, not as a real endpoint to 
protect.

@Qianyu, would it be good to revise the patch's commit message to present it 
as an optimization for conntrack performance? The current commit message is 
not accurate, because that scenario is not supported for now, and the "fix" is 
not complete, either. The new scenario would be worth a separate discussion. 
What do you think?

On Wed, Jul 5, 2017 at 9:07 PM, Mickey Spiegel <mickeys....@gmail.com> 
wrote:

On Tue, Jul 4, 2017 at 6:01 PM, <wang.qia...@zte.com.cn> wrote:
Hi Mickey, 

Thanks for your review. 

Could we make some modifications to avoid the north/south problem you 
mentioned? Something like the following: 

When packets are sent to the localnet port, if the destination MAC is the 
router-port MAC, we change the router-port MAC to the HV's physical NIC MAC. 
And on the gateway node, we handle both the HV physical NIC MAC and the 
router-port MAC. 

In OpenStack usage, for public internet access, it is common for one 
logical switch to have many different logical routers connected to it, 
which can reside on the same hypervisor. The MAC address is used to 
identify which logical router the traffic is destined to. So you would 
need a MAC address per (logical router, chassis) pair, or you would have 
to introduce yet another type of router that front-ends the distributed 
logical routers, deciding based on destination IP address. Either way, a 
port with a different MAC address on each chassis that hosts the port 
seems different from the logical router ports that exist today.

I am still trying to understand your use case. It sounds like you have 
different regions, with different logical switches in different regions. 
If you just put localnets on each of those logical switches, wouldn't that 
do what you want?

If you need L3 between the different regions, you could use a separate 
router per region, either a gateway router in each region, or a 
distributed logical router with a distributed gateway port in each region. 
With gateway routers, traffic coming in and out of each region goes 
through one centralized chassis. With a distributed logical router, 
non-NAT traffic coming in and out of each region goes through one 
centralized chassis, while NAT traffic coming in and out of each region 
can be distributed. With this type of solution, traffic will go in and out 
of each localnet symmetrically, in both directions.

Mickey



Thanks! 






Mickey Spiegel <mickeys....@gmail.com> 
2017/06/30 12:31 
        
        To:     Han Zhou <zhou...@gmail.com>, 
        Cc:     wang.qia...@zte.com.cn, ovs dev <d...@openvswitch.org>, zhou.huij...@zte.com.cn, xurong00037997 <xu.r...@zte.com.cn> 
        Subject:        Re: [ovs-dev] [PATCH 2/2] ovn-northd: Fix ping failure of vlan networks.




On Thu, Jun 29, 2017 at 2:19 PM, Han Zhou <zhou...@gmail.com> wrote: 
I learned that this use case is a kind of hierarchical port binding scenario:
https://specs.openstack.org/openstack/neutron-specs/specs/kilo/ml2-hierarchical-port-binding.html


In such a scenario, the user wants to use OVN to manage VLAN networks, and the
VLAN networks are connected via a VTEP overlay that is not managed by OVN
itself. The VTEP is needed to connect BM/SRIOV VMs to the same L2 segment that
the OVS VIFs are connected to.

The user doesn't want OVN to manage the VTEPs, since that would flood traffic
to many VTEPs. (MAC learning is not supported yet.)

So in this scenario, the user wants to utilize the distributed logical router
as a way to optimize the datapath. For VM-to-VM traffic between different
VLANs, instead of going through a centralized external L3 router, the user
wants the traffic to be tagged with the destination VLAN directly and to go
straight from the source HV to the destination HV through the destination VLAN.

L2 and L3 have different semantics and different ways of handling packets. 

There is a big difference between: 
1) bridging between different VLANs, going through a VTEP overlay 
   that connects those VLANs, and 
2) routing between different VLANs. 
Trying to blur that boundary will lead to unexpected behavior and various 
issues. 

In the VTEP scenario, this is a valuable optimization. Even in a normal VLAN
setup without VTEP, it can be an optimization when the src and dst VMs are on
the same HV (so the packet doesn't need to go out to the physical switch and
come back).

So I agree with Qianyu that connecting VLAN networks with a logical router is
reasonable, which means the transport of the logical router can be not only
tunnels but also physical networks (localnet ports). 

Mixing routers and localnet is dangerous because of interactions with 
MAC addresses and L2 learning. The reason it is not a good idea to 
transmit packets from a distributed logical router directly to a physical 
network through localnet is that the router rewrites the source MAC 
to the router MAC. The physical network will learn about the router MAC 
based on the last location from which it saw the router send a packet to 
the physical network. That may have low correlation with the next 
packet from the physical network back to the router MAC, so north/south 
packets may end up with an almost random distribution of chassis on 
which the distributed logical router resides, independent of where the 
destination actually resides. 

You are looking only at east-west traffic, counting on the asymmetry of using 
the distributed router instance on the other side so that return traffic 
never hits the physical network with the router MAC as the destination 
MAC. However, there will be north/south traffic destined to the router 
MAC, and this approach will make that bounce all over the place. 

Perhaps you can make something work if you have per node per router 
MAC addresses, but it still scares me. 

To fulfill this requirement, we need to solve some problems in the current
OVN code:

1) Since the data path is asymmetric, we need to solve the CT problem of
the localnet port. I agree with the idea of the patch from Qianyu, which
bypasses the FW for localnet ports; since a localnet port just connects to
the real endpoints, there is probably not much value in adding ACLs on
localnet ports. I am not sure there is a use case where an ACL is really
needed on a localnet port.

2) When there are ARP requests from the VLAN network (e.g. from a BM) to the
logical router interface IP, the ARP request will reach every HV through the
localnet port, and the distributed logical router port will respond from
every HV. Shall we disable the ARP response from the logical router for
requests arriving from the localnet port? In this scenario, I would expect
the BM/SRIOV VMs on the same VLAN to use a different GW rather than the
logical router. 

How will north/south traffic work? 

Distributed gateway ports limit ARP responses so that only one instance 
of the distributed gateway port on one chassis responds. 

At the moment distributed gateway ports only allow direct traffic (without 
going through the central node) for 1:1 NAT with logical_port and 
external_mac specified. This is because of the implications of upstream 
L2 learning, and the lack of a routing protocol. If there are mechanisms that 
allow things on the outside to know which chassis to forward traffic to, 
then this can be relaxed. This is controlled by the gw_redirect table. 

Mickey 


3) We need to add a restriction/validation so that the localnet connection
is used only when we are sure the 2 logical switches are on the same "physical
network", e.g. different VLANs under the same physical bridge group, or a
virtual L2 bridge group formed by VTEP overlays.

Thanks,
Han 


On Tue, Jun 27, 2017 at 10:00 AM, Han Zhou <zhou...@gmail.com> wrote:

> It is not about limits but more about use cases. Could you explain your use
> case: why use localnet ports here when the VMs can be connected through a
> logical router on overlay? Did my patch work for you? Does it work also
> when you remove the localnet ports?
>
> Mickey mentioned scenarios where a combination of localnet ports and a
> logical router is valid, which are the gateway use cases, and it seems to
> me that is not your case, because you are just trying to connect two VMs.
>
> Han
>
>
> On Tue, Jun 27, 2017 at 2:32 AM, <wang.qia...@zte.com.cn> wrote:
>
>> Hi Han Zhou,
>>
>> > If using localnet, it should rely on physical network (L2 and L3) to
>> > reach the destination, not overlay, so adding the logical router here
>> > doesn't make sense here
>>
>> Why does OVN have this limit for physical networks? Does this mean that VLAN
>> networks cannot use the L3 function of OVN?
>>
>> Thanks
>>
>>
>>
>> Han Zhou <zhou...@gmail.com>
>>
>> 2017/06/27 16:24
>>
>>         To:     wang.qia...@zte.com.cn,
>>         Cc:     Russell Bryant <russ...@ovn.org>, ovs dev <
>> d...@openvswitch.org>, zhou.huij...@zte.com.cn, xurong00037997 <
>> xu.r...@zte.com.cn>
>>         Subject:        Re: [ovs-dev] [PATCH 2/2]
>> ovn-northd: Fix ping failure of vlan networks.
>>
>>
>>
>>
>>
>> On Thu, Jun 15, 2017 at 1:04 AM, <wang.qia...@zte.com.cn> wrote:
>> >
>> > Hi Russell, I am sorry for the late reply.
>> > The router is not bound to a chassis, and has no redirect-chassis. The
>> > dumped northbound db is as follows.
>> > IP addresses 100.0.0.148 and 200.0.0.2 are located on different chassis.
>> > The ping between them did not succeed before this patch.
>> >
>> >
>> > [root@tecs159 ~]#
>> > [root@tecs159 ~]# ovsdb-client dump
>> > unix:/var/run/openvswitch/ovnnb_db.sock
>> > ACL table
>> > _uuid                                action        direction
>>  external_ids
>> >                                             log   match        
 priority
>> > ------------------------------------ ------------- ----------
>> > -------------------------------------------------------- -----
>> > ------------------------------------------------------------
>> ------------------------------------------------------------
>> --------------------------
>> > --------
>> > ac2900f9-49fd-430a-b646-88d1f7c54ab8 allow         from-lport
>> > {"neutron:lport"="1ef52eb4-1f0e-416d-8dc2-e2fc7557979c"} false 
"inport
>> ==
>> > \"1ef52eb4-1f0e-416d-8dc2-e2fc7557979c\" && ip4 && ip4.dst ==
>> > {255.255.255.255, *100.0.0.0/24* <http://100.0.0.0/24>} && udp &&
>> udp.src == 68 && udp.dst == 67"
>> > 1002
>> > 784a55c3-05fd-4c4d-a51e-5b9ee5cc1e8e allow         from-lport
>> > {"neutron:lport"="6c04e45e-ad83-4cf0-ae74-84f7720a5bc4"} false 
"inport
>> ==
>> > \"6c04e45e-ad83-4cf0-ae74-84f7720a5bc4\" && ip4 && ip4.dst ==
>> > {255.255.255.255, *100.0.0.0/24* <http://100.0.0.0/24>} && udp &&
>> udp.src == 68 && udp.dst == 67"
>> > 1002
>> > 08be2532-f8ff-493f-83e3-085eede36e08 allow         from-lport
>> > {"neutron:lport"="c5ff4f7b-bd0d-4757-ac18-636f9d62b94c"} false 
"inport
>> ==
>> > \"c5ff4f7b-bd0d-4757-ac18-636f9d62b94c\" && ip4 && ip4.dst ==
>> > {255.255.255.255, *100.0.0.0/24* <http://100.0.0.0/24>} && udp &&
>> udp.src == 68 && udp.dst == 67"
>> > 1002
>> > bb263947-a436-4a0d-9218-5abd89546a69 allow         from-lport
>> > {"neutron:lport"="f8de0603-f4ec-4546-a8f3-574640f270e8"} false 
"inport
>> ==
>> > \"f8de0603-f4ec-4546-a8f3-574640f270e8\" && ip4 && ip4.dst ==
>> > {255.255.255.255, *200.0.0.0/24* <http://200.0.0.0/24>} && udp &&
>> udp.src == 68 && udp.dst == 67"
>> > 1002
>> > 092964cc-2ce5-4a34-b747-558006bb3de1 allow-related from-lport
>> > {"neutron:lport"="1ef52eb4-1f0e-416d-8dc2-e2fc7557979c"} false 
"inport
>> ==
>> > \"1ef52eb4-1f0e-416d-8dc2-e2fc7557979c\" && ip4"             1002
>> > 5f2ebb8e-edbc-40aa-ada6-2fc90fc104af allow-related from-lport
>> > {"neutron:lport"="1ef52eb4-1f0e-416d-8dc2-e2fc7557979c"} false 
"inport
>> ==
>> > \"1ef52eb4-1f0e-416d-8dc2-e2fc7557979c\" && ip6"             1002
>> > 13d32fab-0ed7-4472-97c2-1e3057eaca6e allow-related from-lport
>> > {"neutron:lport"="6c04e45e-ad83-4cf0-ae74-84f7720a5bc4"} false 
"inport
>> ==
>> > \"6c04e45e-ad83-4cf0-ae74-84f7720a5bc4\" && ip4"             1002
>> > 7fa4e0b0-ffce-436f-a20a-07b0584c3285 allow-related from-lport
>> > {"neutron:lport"="6c04e45e-ad83-4cf0-ae74-84f7720a5bc4"} false 
"inport
>> ==
>> > \"6c04e45e-ad83-4cf0-ae74-84f7720a5bc4\" && ip6"             1002
>> > b32cf462-a8e5-4597-9c6e-4dc02ae2e2c4 allow-related from-lport
>> > {"neutron:lport"="c5ff4f7b-bd0d-4757-ac18-636f9d62b94c"} false 
"inport
>> ==
>> > \"c5ff4f7b-bd0d-4757-ac18-636f9d62b94c\" && ip4"             1002
>> > 4d003f24-f546-49fa-a33c-92384e4d3549 allow-related from-lport
>> > {"neutron:lport"="c5ff4f7b-bd0d-4757-ac18-636f9d62b94c"} false 
"inport
>> ==
>> > \"c5ff4f7b-bd0d-4757-ac18-636f9d62b94c\" && ip6"             1002
>> > 7078873a-fa44-4c64-be7f-067d19361fb4 allow-related from-lport
>> > {"neutron:lport"="f8de0603-f4ec-4546-a8f3-574640f270e8"} false 
"inport
>> ==
>> > \"f8de0603-f4ec-4546-a8f3-574640f270e8\" && ip4"             1002
>> > a15bd032-9755-45a5-b7ea-9687b9d14560 allow-related from-lport
>> > {"neutron:lport"="f8de0603-f4ec-4546-a8f3-574640f270e8"} false 
"inport
>> ==
>> > \"f8de0603-f4ec-4546-a8f3-574640f270e8\" && ip6"             1002
>> > 4ace5b98-e6dd-467c-a7cf-af5e76a258f5 allow-related to-lport
>> > {"neutron:lport"="1ef52eb4-1f0e-416d-8dc2-e2fc7557979c"} false
>> "outport ==
>> > \"1ef52eb4-1f0e-416d-8dc2-e2fc7557979c\" && ip4 && ip4.src ==
>> > $as_ip4_539bd583_ca35_4ae7_9774_299fd56765ef" 1002
>> > 6e3453ee-a717-49fe-8160-ab304daa7bd8 allow-related to-lport
>> > {"neutron:lport"="1ef52eb4-1f0e-416d-8dc2-e2fc7557979c"} false
>> "outport ==
>> > \"1ef52eb4-1f0e-416d-8dc2-e2fc7557979c\" && ip6 && ip6.src ==
>> > $as_ip6_539bd583_ca35_4ae7_9774_299fd56765ef" 1002
>> > cc1b88c5-e9d7-42fe-8e17-deb2fbc7c7a2 allow-related to-lport
>> > {"neutron:lport"="6c04e45e-ad83-4cf0-ae74-84f7720a5bc4"} false
>> "outport ==
>> > \"6c04e45e-ad83-4cf0-ae74-84f7720a5bc4\" && ip4 && ip4.src ==
>> > $as_ip4_539bd583_ca35_4ae7_9774_299fd56765ef" 1002
>> > ecb2798f-4a87-4260-b9a8-3cdea1eca638 allow-related to-lport
>> > {"neutron:lport"="6c04e45e-ad83-4cf0-ae74-84f7720a5bc4"} false
>> "outport ==
>> > \"6c04e45e-ad83-4cf0-ae74-84f7720a5bc4\" && ip6 && ip6.src ==
>> > $as_ip6_539bd583_ca35_4ae7_9774_299fd56765ef" 1002
>> > 71c56144-3b95-454a-acb4-67cd924dff08 allow-related to-lport
>> > {"neutron:lport"="c5ff4f7b-bd0d-4757-ac18-636f9d62b94c"} false
>> "outport ==
>> > \"c5ff4f7b-bd0d-4757-ac18-636f9d62b94c\" && ip4 && ip4.src ==
>> > $as_ip4_539bd583_ca35_4ae7_9774_299fd56765ef" 1002
>> > 15766592-aa79-465b-8935-bbc916692b75 allow-related to-lport
>> > {"neutron:lport"="c5ff4f7b-bd0d-4757-ac18-636f9d62b94c"} false
>> "outport ==
>> > \"c5ff4f7b-bd0d-4757-ac18-636f9d62b94c\" && ip6 && ip6.src ==
>> > $as_ip6_539bd583_ca35_4ae7_9774_299fd56765ef" 1002
>> > 5c6a9b01-ade0-4b6c-8a1e-2fe0155bdf7d allow-related to-lport
>> > {"neutron:lport"="f8de0603-f4ec-4546-a8f3-574640f270e8"} false
>> "outport ==
>> > \"f8de0603-f4ec-4546-a8f3-574640f270e8\" && ip4 && ip4.src ==
>> > $as_ip4_539bd583_ca35_4ae7_9774_299fd56765ef" 1002
>> > 4e0d6019-7801-4537-883b-aebeae1ab136 allow-related to-lport
>> > {"neutron:lport"="f8de0603-f4ec-4546-a8f3-574640f270e8"} false
>> "outport ==
>> > \"f8de0603-f4ec-4546-a8f3-574640f270e8\" && ip6 && ip6.src ==
>> > $as_ip6_539bd583_ca35_4ae7_9774_299fd56765ef" 1002
>> > 54c72e2f-26b9-433c-968e-8e9b86379dfb drop          from-lport
>> > {"neutron:lport"="1ef52eb4-1f0e-416d-8dc2-e2fc7557979c"} false 
"inport
>> ==
>> > \"1ef52eb4-1f0e-416d-8dc2-e2fc7557979c\" && ip"           1001
>> > 2fc107d5-f809-4719-a84c-078add4844b0 drop          from-lport
>> > {"neutron:lport"="6c04e45e-ad83-4cf0-ae74-84f7720a5bc4"} false 
"inport
>> ==
>> > \"6c04e45e-ad83-4cf0-ae74-84f7720a5bc4\" && ip"           1001
>> > 32d170f3-af0e-451a-9557-4ba1ff168fab drop          from-lport
>> > {"neutron:lport"="c5ff4f7b-bd0d-4757-ac18-636f9d62b94c"} false 
"inport
>> ==
>> > \"c5ff4f7b-bd0d-4757-ac18-636f9d62b94c\" && ip"           1001
>> > d579d231-06c9-4b14-b744-760937f824a7 drop          from-lport
>> > {"neutron:lport"="f8de0603-f4ec-4546-a8f3-574640f270e8"} false 
"inport
>> ==
>> > \"f8de0603-f4ec-4546-a8f3-574640f270e8\" && ip"           1001
>> > f7eca2db-82ca-43bd-b8cb-778ecbc6e75b drop          to-lport
>> > {"neutron:lport"="1ef52eb4-1f0e-416d-8dc2-e2fc7557979c"} false
>> "outport ==
>> > \"1ef52eb4-1f0e-416d-8dc2-e2fc7557979c\" && ip"          1001
>> > 6bd4c9a2-15a0-498b-aa3d-29ed5c041427 drop          to-lport
>> > {"neutron:lport"="6c04e45e-ad83-4cf0-ae74-84f7720a5bc4"} false
>> "outport ==
>> > \"6c04e45e-ad83-4cf0-ae74-84f7720a5bc4\" && ip"          1001
>> > 9d7efb59-90f2-469d-9550-1e8a906d59e8 drop          to-lport
>> > {"neutron:lport"="c5ff4f7b-bd0d-4757-ac18-636f9d62b94c"} false
>> "outport ==
>> > \"c5ff4f7b-bd0d-4757-ac18-636f9d62b94c\" && ip"          1001
>> > 10bbc3ba-a5bb-4441-b789-d95158588d96 drop          to-lport
>> > {"neutron:lport"="f8de0603-f4ec-4546-a8f3-574640f270e8"} false
>> "outport ==
>> > \"f8de0603-f4ec-4546-a8f3-574640f270e8\" && ip"          1001
>> >
>> > Address_Set table
>> > _uuid                                addresses  external_ids  name
>> > ------------------------------------
>> > ----------------------------------------------------------
>> > ---------------------------------------
>> > ---------------------------------------------
>> > dcea33b7-7313-41bb-813c-06d8b9634a7d []
>> > {"neutron:security_group_name"=default}
>> > "as_ip4_1a91f4e0_8a0f_4c18_9122_dae1179297c3"
>> > 33f542ff-ec0c-4b79-bd9f-9bf38d3721e3 []
>> > {"neutron:security_group_name"=default}
>> > "as_ip6_1a91f4e0_8a0f_4c18_9122_dae1179297c3"
>> > 062f417a-84f0-488a-947b-eb20127be8ed []
>> > {"neutron:security_group_name"=default}
>> > "as_ip6_539bd583_ca35_4ae7_9774_299fd56765ef"
>> > 7f8e253a-a937-4557-b976-a62c7b2c62c5 ["100.0.0.147", "100.0.0.148",
>> > "100.0.0.149", "200.0.0.2"] {"neutron:security_group_name"=default}
>> > "as_ip4_539bd583_ca35_4ae7_9774_299fd56765ef"
>> >
>> > Connection table
>> > _uuid external_ids inactivity_probe is_connected max_backoff
>> other_config
>> > status target
>> > ----- ------------ ---------------- ------------ -----------
>> ------------
>> > ------ ------
>> >
>> > DHCP_Options table
>> > _uuid                                cidr           external_ids
>> > options
>> > ------------------------------------ --------------
>> > --------------------------------------------------
>> > ------------------------------------------------------------
>> -----------------------------------------------
>> > b55e7e0b-b26e-4895-a112-7ba15cfc4ebb "*100.0.0.0/24*
>> <http://100.0.0.0/24>"
>> > {subnet_id="2b218dec-7f3d-4e8b-8c3e-8761203a989f"} 
{lease_time="43200",
>> > mtu="1500", router="100.0.0.1", server_id="100.0.0.1",
>> > server_mac="fa:16:3e:86:32:cd"}
>> > 06c44867-2b2f-417d-8232-afab999eed1a "*200.0.0.0/24*
>> <http://200.0.0.0/24>"
>> > {subnet_id="8a50258b-cbbf-4099-9275-dccebfd23762"} 
{lease_time="43200",
>> > mtu="1500", router="200.0.0.1", server_id="200.0.0.1",
>> > server_mac="fa:16:3e:12:25:dc"}
>> >
>> > Load_Balancer table
>> > _uuid external_ids name protocol vips
>> > ----- ------------ ---- -------- ----
>> >
>> > Logical_Router table
>> > _uuid                                enabled external_ids  
load_balancer
>> > name                                           nat options ports
>> >       static_routes
>> > ------------------------------------ -------
>> > --------------------------------- -------------
>> > ---------------------------------------------- --- -------
>> > ------------------------------------------------------------
>> ----------------
>> > -------------
>> > c96ff734-590b-496c-8955-076c8ec524ab true
>> > {"neutron:router_name"="router1"} []
>> > "neutron-5ba0b278-d35b-40d6-85a7-1e527576b085" []  {}
>> > [5d9b823f-a9f1-4d85-bf15-da392cecebca,
>> > 97c1b867-1f0c-4865-a58b-21b8a84f3758] []
>> >
>> > Logical_Router_Port table
>> > _uuid                                enabled external_ids mac    name
>> >                   networks         options peer
>> > ------------------------------------ ------- ------------
>> > ------------------- ------------------------------------------
>> > ---------------- ------- ----
>> > 97c1b867-1f0c-4865-a58b-21b8a84f3758 []      {} "fa:16:3e:79:7b:06"
>> > "lrp-a794083d-c374-46ac-b246-23568235fea1" ["*200.0.0.1/24*
>> <http://200.0.0.1/24>"] {}      []
>> > 5d9b823f-a9f1-4d85-bf15-da392cecebca []      {} "fa:16:3e:d1:71:75"
>> > "lrp-88561050-51c2-4585-936a-05eba5dba19a" ["*100.0.0.1/24*
>> <http://100.0.0.1/24>"] {}      []
>> >
>> > Logical_Router_Static_Route table
>> > _uuid ip_prefix nexthop output_port policy
>> > ----- --------- ------- ----------- ------
>> >
>> > Logical_Switch table
>> > _uuid                                acls
>> >                                                      external_ids
>> > load_balancer name
>> other_config
>> > ports                    qos_rules
>> > ------------------------------------
>> > ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------
>> > ----------------------------------- -------------
>> > ---------------------------------------------- ------------
>> > ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------------------------------------------------
>> ------------------------------------------------
>> > ---------
>> > 616e6c26-4dda-46de-9ceb-96008db3478a
>> > [10bbc3ba-a5bb-4441-b789-d95158588d96,
>> > 4e0d6019-7801-4537-883b-aebeae1ab136,
>> > 5c6a9b01-ade0-4b6c-8a1e-2fe0155bdf7d,
>> > 7078873a-fa44-4c64-be7f-067d19361fb4,
>> > a15bd032-9755-45a5-b7ea-9687b9d14560,
>> > bb263947-a436-4a0d-9218-5abd89546a69,
>> > d579d231-06c9-4b14-b744-760937f824a7]
>> > {"neutron:network_name"="vlan-200"} []
>> > "neutron-f0e1df21-1a76-46d6-a92b-1ab17ba3ba68" {}
>> > [3a513f49-50fb-44f2-8de1-1ee063f38f52,
>> > 7692e68e-7f37-4d25-81aa-14fbd8a6eb3c,
>> > c13df760-1a00-4948-848e-00f14c79b3d4]       []
>> > 5f7e8ce5-486b-4273-86aa-971b1cbe5e93
>> > [08be2532-f8ff-493f-83e3-085eede36e08,
>> > 092964cc-2ce5-4a34-b747-558006bb3de1,
>> > 13d32fab-0ed7-4472-97c2-1e3057eaca6e,
>> > 15766592-aa79-465b-8935-bbc916692b75,
>> > 2fc107d5-f809-4719-a84c-078add4844b0,
>> > 32d170f3-af0e-451a-9557-4ba1ff168fab,
>> > 4ace5b98-e6dd-467c-a7cf-af5e76a258f5,
>> > 4d003f24-f546-49fa-a33c-92384e4d3549,
>> > 54c72e2f-26b9-433c-968e-8e9b86379dfb,
>> > 5f2ebb8e-edbc-40aa-ada6-2fc90fc104af,
>> > 6bd4c9a2-15a0-498b-aa3d-29ed5c041427,
>> > 6e3453ee-a717-49fe-8160-ab304daa7bd8,
>> > 71c56144-3b95-454a-acb4-67cd924dff08,
>> > 784a55c3-05fd-4c4d-a51e-5b9ee5cc1e8e,
>> > 7fa4e0b0-ffce-436f-a20a-07b0584c3285,
>> > 9d7efb59-90f2-469d-9550-1e8a906d59e8,
>> > ac2900f9-49fd-430a-b646-88d1f7c54ab8,
>> > b32cf462-a8e5-4597-9c6e-4dc02ae2e2c4,
>> > cc1b88c5-e9d7-42fe-8e17-deb2fbc7c7a2,
>> > ecb2798f-4a87-4260-b9a8-3cdea1eca638,
>> > f7eca2db-82ca-43bd-b8cb-778ecbc6e75b] {"neutron:network_name"="vlan1
>> 00"}
>> > []            "neutron-47b88d0c-71e8-4129-b091-faccd9665fd5" {}
>> > [1eed33cc-5599-4d11-8f33-04c3302e4719,
>> > 34282540-5c2b-4a20-9d29-9180c15e5fc2,
>> > 3ef7cbaa-b602-496d-8ecd-d17433b3d73d,
>> > 87d8a014-22d7-4992-8446-749f2b3705ef,
>> > bd2f11ba-49ef-40f8-b576-87d0b8ec1b87,
>> > faf8deeb-8f05-4af1-91de-3316f9467959] []
>> >
>> > Logical_Switch_Port table
>> > _uuid                                addresses dhcpv4_options
>> > dhcpv6_options dynamic_addresses enabled external_ids      name
>> > options                                                  parent_name
>> > port_security tag tag_request type     up
>> > ------------------------------------ 
---------------------------------
>> > ------------------------------------ -------------- -----------------
>> > ------- --------------------------------------
>> > ----------------------------------------------
>> > -------------------------------------------------------- -----------
>> > ------------- --- ----------- -------- -----
>> > 3ef7cbaa-b602-496d-8ecd-d17433b3d73d ["fa:16:3e:36:41:6e 
100.0.0.148"]
>> > b55e7e0b-b26e-4895-a112-7ba15cfc4ebb []             []
>>  true
>> >    {"neutron:port_name"=""} "c5ff4f7b-bd0d-4757-ac18-636f9d62b94c" {}
>> >                                   []          []            []  []  
""
>> > true
>> > 1eed33cc-5599-4d11-8f33-04c3302e4719 ["fa:16:3e:66:98:cd 
100.0.0.147"]
>> > b55e7e0b-b26e-4895-a112-7ba15cfc4ebb []             []
>>  true
>> >    {"neutron:port_name"="port-pci-100-1"}
>> > "6c04e45e-ad83-4cf0-ae74-84f7720a5bc4"         {}      []          []
>>   []
>> >  []          ""       false
>> > 87d8a014-22d7-4992-8446-749f2b3705ef ["fa:16:3e:73:c8:95 
100.0.0.146"]
>> []
>> >                                 []             []                true
>> > {"neutron:port_name"=""} "ed3a389a-af88-4234-a30f-749c45d8805d"
>>   {}
>> >                                                       []          []  
[]
>> > []          ""       false
>> > 7692e68e-7f37-4d25-81aa-14fbd8a6eb3c ["fa:16:3e:e7:1d:3c 200.0.0.2"]
>> > 06c44867-2b2f-417d-8232-afab999eed1a []             []
>>  true
>> >    {"neutron:port_name"=""} "f8de0603-f4ec-4546-a8f3-574640f270e8" {}
>> >                                   []          []            []  []  
""
>> > true
>> > bd2f11ba-49ef-40f8-b576-87d0b8ec1b87 ["fa:16:3e:fc:de:db 
100.0.0.149"]
>> > b55e7e0b-b26e-4895-a112-7ba15cfc4ebb []             []
>>  true
>> >    {"neutron:port_name"=""} "1ef52eb4-1f0e-416d-8dc2-e2fc7557979c" {}
>> >                                   []          []            []  []  
""
>> > true
>> > faf8deeb-8f05-4af1-91de-3316f9467959 [router]
>>  []
>> >                                 []             []                true
>> > {"neutron:port_name"=""} "88561050-51c2-4585-936a-05eba5dba19a"
>> > {router-port="lrp-88561050-51c2-4585-936a-05eba5dba19a"} []          
[]
>> >   []  []          router   false
>> > 3a513f49-50fb-44f2-8de1-1ee063f38f52 [router]
>>  []
>> >                                 []             []                true
>> > {"neutron:port_name"=""} "a794083d-c374-46ac-b246-23568235fea1"
>> > {router-port="lrp-a794083d-c374-46ac-b246-23568235fea1"} []          
[]
>> >   []  []          router   false
>> > 34282540-5c2b-4a20-9d29-9180c15e5fc2 [unknown]
>> []
>> >                                 []             []                [] 
{}
>> >                       "provnet-47b88d0c-71e8-4129-b091-faccd9665fd5"
>> > {network_name="physnet1"}                                []          
[]
>> >   100 []          localnet false
>> > c13df760-1a00-4948-848e-00f14c79b3d4 [unknown]
>> []
>> >                                 []             []                [] 
{}
>> >                       "provnet-f0e1df21-1a76-46d6-a92b-1ab17ba3ba68"
>> > {network_name="physnet1"}                                []          
[]
>> >   200 []          localnet false
>> >
>> > NAT table
>> > _uuid external_ip external_mac logical_ip logical_port type
>> > ----- ----------- ------------ ---------- ------------ ----
>> >
>> > NB_Global table
>> > _uuid                                connections external_ids hv_cfg
>> > nb_cfg sb_cfg ssl
>> > ------------------------------------ ----------- ------------ ------
>> > ------ ------ ---
>> > fcf4effb-eff7-4401-8727-03864c363477 []          {}           0      
0
>>  0
>> >    []
>> >
>> > QoS table
>> > _uuid action direction external_ids match priority
>> > ----- ------ --------- ------------ ----- --------
>> >
>> > SSL table
>> > _uuid bootstrap_ca_cert ca_cert certificate external_ids private_key
>> > ----- ----------------- ------- ----------- ------------ -----------
>> > [root@tecs159 ~]#
>> >
>> >
>> >
>> Here you have both a logical router and localnet ports, which doesn't seem
>> to make sense. The 2 VMs are on different VLANs, so they are not supposed to
>> be able to communicate through a localnet port. However, it reveals a bug
>> introduced by recursive local-datapath filling, which is fixed in [1].
>> Because of this bug, VM1 is talking to VM2 directly using net2's localnet
>> port, which tags VLAN2 directly. That is also the reason why CT didn't work.
>> Please try the patch [1] to see if it fixes the problem (it will not use the
>> localnet port but an overlay tunnel to reach the destination).
>>
>> Regarding your configuration itself, localnet is for physical/provider
>> networks. If using localnet, it should rely on the physical network (L2 and
>> L3) to reach the destination, not the overlay, so adding the logical router
>> here doesn't make sense. Since these 2 VMs are on different VLANs, the
>> expected way for them to communicate is through external routers that
>> connect these 2 VLANs.
>>
>> Alternatively, if these 2 HVs are L3 connected, you can use an overlay
>> logical router to connect the 2 logical switches, and then the localnet
>> ports should not be created.
>>
>> [1]
>> https://mail.openvswitch.org/pipermail/ovs-dev/2017-June/334633.html
>>
>> Thanks,
>> Han
>>
>>
>
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev 



