Regards, Vladislav Odintsov
On 31 Aug 2021, at 16:51, Numan Siddique <[email protected]> wrote:

On Tue, Aug 31, 2021 at 9:35 AM Odintsov Vladislav <[email protected]> wrote:

Regards, Vladislav Odintsov

On 31 Aug 2021, at 15:36, Numan Siddique <[email protected]> wrote:

On Tue, Aug 31, 2021 at 7:50 AM Odintsov Vladislav <[email protected]> wrote:

One more addition regarding the ID usage: in the ovn-ic code I just copy the route_table value from the NB to IC SB:Route and then to the NB of the other AZ. If we supply an inport for the route, how should this route be learned in another AZ? It seems to me that an abstract identifier (the route table name) is a good idea here.

Han, maybe you can help with Numan's concern about datapath disruption while IDs change? Will there be any datapath flow eviction if OF flows are reinstalled? How can this affect real traffic? In my opinion this can lead to increased latency for packets hitting the reinstalled DP flow. I consider this acceptable if the user changes their virtual infrastructure. Also, as Han pointed out, the future use of route table IDs in Router Policies looks like a good advantage to me.

Regards, Vladislav Odintsov

On 31 Aug 2021, at 10:28, Han Zhou <[email protected]> wrote:

On Mon, Aug 30, 2021 at 11:48 PM Odintsov Vladislav <[email protected]> wrote:

Hi Han,

Using Router Policies for the purpose of routing makes it impossible to use filtering at the LR level (using Router Policies, not ACLs), because in that case routing flows and filter flows would be in the same LR stage. We offer our customers the ability to configure multiple routing tables, stateless ACLs at the inter-LRP level and stateful ACLs at the VIF level, like AWS EC2 does.
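[Editor's note: a toy Python model, not OVN code, of the per-LRP route table visibility described later in this thread (routes with an empty route_table are global; an LRP bound to a table additionally sees that table's routes, but not routes of other tables). Longest-prefix-match, priorities, and directly connected routes are omitted for brevity; the function and route tuples are hypothetical.]

```python
# Toy model of per-LRP route table visibility (illustrative only).
# route = (route_table, ip_prefix, nexthop); "" means a global route.
def visible_routes(lrp_route_table, routes):
    """Return the routes a packet arriving on an LRP bound to
    lrp_route_table ("" if none) is allowed to match."""
    return [r for r in routes
            if r[0] == "" or r[0] == lrp_route_table]

routes = [("", "0.0.0.0/0", "172.16.0.1"),
          ("rtb1", "10.0.0.0/8", "192.168.0.1"),
          ("rtb2", "10.0.0.0/8", "192.168.0.2")]
```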
So currently I don't see how to implement a routing table for each LR inport plus inter-LRP traffic filtering by utilizing Logical Router Policies.

OK, so it has to be different stages, so it makes sense to support multiple routing tables in the static routes.

@Numan, by port group do you mean I should create a new field in the Port_Group table - router_ports - and somewhere in the ovn-controller lflow generation code add support for converting @group_name in the LR pipeline to the LRP names from this PG?

No. I mean this. Add a new column "inport" in the Logical_Router_Static_Route table. CMS can set this value either to a logical router port name (eg. "lr0-sw0") or to a port group name - "@pg1". I.e.:

ovn-nbctl set logical_router_static_route <STATIC_ROUTE> inport=lr0-sw0

or

ovn-nbctl set logical_router_static_route <STATIC_ROUTE> inport="@pg1"

CMS has to create a port group by the name "@pg1" and add the desired router port names.

I'm a bit confused here. Port group supports only Logical Switch Ports, doesn't it? From ovn-nb.ovsschema:

    "Port_Group": {
        "columns": {
            "name": {"type": "string"},
            "ports": {"type": {"key": {"type": "uuid",
                                       "refTable": "Logical_Switch_Port",
                                       "refType": "weak"},
                               "min": 0, "max": "unlimited"}},

Oh right. It references the switch port. Sorry I missed it.

@Numan, I tend to agree with the table-id idea instead of using inport. Here are my considerations:

1) Using a table-id, i.e. supporting multiple routing tables, is more flexible. It not only supports the use case where tables are selected according to inport, but also makes it possible to select a table based on the result of logical router policies - a new action can be added to forward to a specific routing table based on the match rules in Logical_Router_Policy.

2) When the routing table is selected according to inports, it should be clear that for a specific inport there should be at most one target routing table.
Configuring the table-id on the LRP enforces this naturally, while an inport column in routing entries can easily be misconfigured, resulting in conflicting routing entries with overlapping inports.

Ok. I won't object to it if the table-id makes more sense. However, I can think of one major drawback with this approach. Since the proposed patch makes use of reg7, we will be restricting this feature to IPv4 static routes. The register xxreg1 is used in the routing pipeline for SRC_IPV6 for NS. I could be wrong here. Can we still use reg7 for IPv6 traffic if the packet is not an NS packet?

Previously I sent a patch [1], where you and I checked the usage of xxregN and updated the documentation in the ovn-northd code. Was it changed after that patch was accepted? Anyway, there are tests for IPv4 and IPv6 traffic with route tables. Maybe I should add additional services to the test topology to check that IPv6 is not limited by this feature?

Ok. Looks like it's not a limitation for IPv6 then, since you already have tests. I was not sure earlier.

Thanks
Numan

[1] https://github.com/ovn-org/ovn/commit/d4cf6e52ffcab537161c3d7bd644f0191405b08b

If we want to make use of a table-id, maybe it's better that the CMS sets the table id as an integer? That way, ovn-northd doesn't have to generate the table id for each route table name. It would be the responsibility of the CMS to generate and manage the table ids.

This can be a trade-off to avoid inconsistent ID generation. But I still don't understand whether changing IDs are a real issue. As I said, the same logic is used for ECMP groups (the lr_in_ip_routing_ecmp stage). Can we check somehow whether it's an issue? Because if it is, ECMP groups should also be reimplemented with a more consistent ID generation approach. Right?

Thoughts?

Thanks
Numan

Thoughts?
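[Editor's note: a rough illustration, not actual ovn-northd output, of the reg7 scheme discussed above: IN_IP_ROUTING_PRE writes a per-route-table id into reg7 based on inport, and IN_IP_ROUTING adds a reg7 match to routes that belong to a named table. The function name, flow strings, and data shapes are hypothetical.]

```python
# Rough model of the two-stage reg7 scheme (illustrative only).
def build_flow_matches(lrp_tables, routes):
    """lrp_tables: {lrp_name: route_table_name};
       routes: [(route_table_name_or_empty, ip_prefix)]."""
    # Assign a unique non-zero id per route table name. Note the
    # assignment is order-dependent, which is exactly the consistency
    # concern raised in this thread.
    ids = {n: i + 1 for i, n in enumerate(sorted(set(lrp_tables.values())))}
    # Stage 1 (IN_IP_ROUTING_PRE): match on inport, set reg7.
    pre = [(f'inport == "{lrp}"', f'reg7 = {ids[tbl]}; next;')
           for lrp, tbl in sorted(lrp_tables.items())]
    # Stage 2 (IN_IP_ROUTING): table routes get an extra reg7 match,
    # global routes (empty route_table) match for every inport.
    routing = [f'ip4.dst == {pfx}' if not tbl
               else f'reg7 == {ids[tbl]} && ip4.dst == {pfx}'
               for tbl, pfx in routes]
    return pre, routing
```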
Thanks, Han

Regards, Vladislav Odintsov
Lead system engineer at Croc Cloud development team

On 31 Aug 2021, at 04:35, Han Zhou <[email protected]> wrote:

On Mon, Aug 30, 2021 at 3:55 PM Numan Siddique <[email protected]> wrote:

On Mon, Aug 30, 2021 at 5:48 PM Numan Siddique <[email protected]> wrote:

On Mon, Aug 30, 2021 at 5:25 PM Vladislav Odintsov <[email protected]> wrote:

Hi Numan, thanks for the review. While my answers are inline, I've got a counter-question: after I submitted this patch series, I added support for route tables in the OVN IC daemon. Is it okay if I submit a new version with the requested changes and support for interconnection as well? I know the soft-freeze is upcoming and, as I'm new to the project, I don't know whether that is accepted for now.

In my opinion you've submitted the patches before the soft-freeze, so it should be fine and can be considered for the upcoming release.

Regards, Vladislav Odintsov

On 30 Aug 2021, at 23:44, Numan Siddique <[email protected]> wrote:

On Mon, Aug 16, 2021 at 5:15 PM Vladislav Odintsov <[email protected]> wrote:

This patch extends the Logical Router's routing functionality. Now a user may create multiple routing tables within a Logical Router and assign them to Logical Router Ports.
Traffic coming from a Logical Router Port with an assigned route_table is checked against global routes, if any (Logical_Router_Static_Routes with an empty route_table field), next against directly connected routes, and then against Logical_Router_Static_Routes with the same route_table value as in the Logical_Router_Port's options:route_table field.

A new Logical Router ingress table #10 is added - IN_IP_ROUTING_PRE. In this table, packets which come from LRPs with a configured options:route_table field are matched against inport, and a unique non-zero value identifying the route table is written into OVS register 7. Then in table #11, IN_IP_ROUTING, routes which have a non-empty route_table field are added with an additional match on the reg7 value associated with the appropriate route_table.

Signed-off-by: Vladislav Odintsov <[email protected]>

Hi Vladislav,

Thanks for the patch. Sorry for the late review. I've a few comments. I didn't review the code completely.

1. I think you can merge patch 2 and patch 3. Without patch 3, the test cases fail.

Ack.

2. The ddlog implementation is missing. Let us know if you need some help here. It would be great if you could add the ddlog implementation.

I'll try to add the implementation by myself, but it can take a lot of time, especially if I spend time changing the approach. If I get stuck I'll need somebody's help.

3. I see a few problems in the present approach. The ID allocated for each route table entry may not be consistent across OVN DB runs. This may not be a huge problem, but in a scaled environment, adding or deleting a route table entry could cause the IDs to be shuffled, disrupting the datapath. Instead of using the route table for each static route, I'd suggest adding a new column "inports" to the Logical_Router_Static_Route table. This column can be a set of strings, and CMS can add the inports for each static route. Eg.
ovn-nbctl set logical_router_static_route <R1> inports="lrp-lr1-ls1, lrp-lr1-ls2"

And ovn-northd would add a logical flow like:

table=11 (lr_in_ip_routing), priority=65, match=(inport == {"lrp-lr1-ls1", "lrp-lr1-ls2"} && ip4.dst == 1.1.1.1/32), action=(.....)

CMS can also create a port group if desired. With this, we don't have to generate the route table id and store it in reg7. We also don't have to introduce a new stage - lr_in_ip_routing_pre. What do you think?

My first approach was almost the same as you described, with just one difference: each route could have only one inport. I decided to rewrite it to support route table names, whichever the user wants, because it was more comfortable for our CMS (non-OpenStack). The ID allocation logic I took from the ECMP groups code; it's the same. However, your approach has its advantages:

- it doesn't have inconsistent IDs for route tables
- it doesn't require an additional OVS register
- it doesn't require an additional stage (though that's not a big problem I guess)
- it doesn't disrupt the datapath while the topology changes

I've got questions here:

1. Why would IDs be inconsistent at large scale across different NB runs? Did you mean that IDs can change while adding/removing routes and assigning route tables to LRPs?

Let's say you have route tables rt1, rt2, rt3 and rt4 with IDs assigned by ovn-northd. Let's say you delete rt2; then ovn-northd would reassign the IDs for rt1, rt3 and rt4. The ID assignment totally depends on the order in which the route tables are looped through. In this case, assuming rt1 was assigned '1', the IDs of rt3 and rt4 would definitely change.

2. Why will the datapath be disrupted? When we add/remove a route table, only the IDs would change, so the relevant OF flows would be updated. How will this change datapath flows?

Since we are deleting and re-adding the OF flows with the updated register 7 value, this may disrupt traffic if the datapath flows are evicted. I don't know the internals though.
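[Editor's note: Numan's rt1-rt4 example can be reproduced with a toy allocator that, like a sequential allocation scheme, hands out ids in iteration order. This is a simplification for illustration, not ovn-northd's actual code.]

```python
# Toy allocator: ids are assigned in iteration order, so deleting rt2
# shifts the ids of rt3 and rt4, forcing their logical flows (and the
# corresponding OF flows) to be deleted and re-installed.
def assign_ids(route_tables):
    return {name: i + 1 for i, name in enumerate(route_tables)}

before = assign_ids(["rt1", "rt2", "rt3", "rt4"])
after = assign_ids(["rt1", "rt3", "rt4"])  # rt2 deleted; rt3/rt4 ids change
```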
Wouldn't the datapath be disrupted while changing the inports list?

Actually you're right. The OF flows will be deleted and re-added, as the logical flow will be deleted and re-added with the updated inports list. The only way to avoid this is if the CMS uses port groups. It seems to me that using inports is better than allocating ids to the route tables.

On second thought, I think it's better to have just "inport" as a new column. CMS can set either the name of a router port or a port group name.

Thanks
Numan

I am curious why the Logical_Router_Policy table can't achieve the same goal? It has a "match" column which is flexible enough to add inport to the match.

Thanks,
Han

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
