Hi Filip,

Another issue has come up: traffic arriving on one of VRF1's other interfaces cannot go out through VRF1's interface that holds the public IP address. How can I make this work?
I have added my CLI commands (marked "lhy:") into my previous message below. Thanks.

haiyan...@ilinkall.cn

From: Li Haiyan
Sent: 2022-03-09 11:03
To: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco); vpp-dev
Subject: Re: RE: [vpp-dev] route between two vrfs does not work

Hi Filip,

Many thanks, that worked. VRF2 can now reach the internet through VRF1's outside interface.

haiyan...@ilinkall.cn

From: Filip Varga -X (fivarga - PANTHEON TECH SRO at Cisco)
Date: 2022-03-09 05:01
To: haiyan...@ilinkall.cn; vpp-dev
Subject: RE: [vpp-dev] route between two vrfs does not work

Hi Haiyan,

VRF support for nat44-ed and nat44-ei works as follows. Scenario: two VRFs, where VRF1 can reach the internet and VRF2 cannot.

1) Enable the nat44-ed plugin.
   lhy: nat44 forwarding enable

2) VRF1: configure the public-facing interface as the outside interface for the nat44-ed plugin.
   lhy: set interface nat44 out G0 output-feature

   VRF2: configure one or all interfaces (whichever ones should be able to communicate with public IP addresses) as inside interface(s) for the nat44-ed plugin.
   lhy: set interface nat44 in tap300
   lhy: set interface ip table tap300 VRF2

3) Configure a nat44-ed address range for VRF2.
   lhy: nat44 add address <G0's ip> tenant-vrf VRF2

   The tenant-vrf parameter tells NAT which source VRF the address should be used to translate for, so in this scenario it must be VRF2.

Best regards,
Filip Varga

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of haiyan...@ilinkall.cn
Sent: Tuesday, March 8, 2022 1:06 AM
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: [vpp-dev] route between two vrfs does not work

Dear all,

My test uses two VRFs in VPP:

VRF A: has an interface (G0) with a public IP address (172.16.0.73/24). I also ran "nat44 add address xxx tenant-vrf A", "set interface nat44 out G0 output-feature", and "nat44 forwarding enable".

VRF B: has no public IP address, so it needs to reach the internet through VRF A's interface G0. I therefore ran "ip route add 172.16.0.47/32 table B via 0.0.0.0 next-hop-table A", but that does not work.
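For reference, a single /32 leak like the one above only covers the B-to-A direction; return traffic needs a matching route in table A pointing back at table B. A minimal sketch, assuming VRF B is table 0 and VRF A is table 1 (VPP's `ip route` CLI takes numeric table ids, so the symbolic names A/B are placeholders) and using 192.168.1.0/24 purely as an illustrative B-side prefix:

```
vpp# ip route add 172.16.0.47/32 table 0 via 0.0.0.0 next-hop-table 1
vpp# ip route add 192.168.1.0/24 table 1 via 0.0.0.0 next-hop-table 0
```

Even with both routes in place, NATed traffic still has to be matched back to the correct session on the return path, which is what the tenant-vrf handling in Filip's reply addresses.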
Am I missing something, or do you have any suggestions? The detailed configuration is shown below:

vpp# show version
vpp v20.01-natt~82-g061bec7 built by root on localhost.localdomain at Mon, 07 Mar 2022 16:04:34 CST

vpp# show nat
  nat  nat44  nat64  nat66

vpp# show interface addr
G0 (up):
  L3 172.16.0.73/24 ip4 table-id 1 fib-idx 1
G1 (up):
  L2 bridge bd-id 1 idx 1 shg 0
local0 (dn):
loop21 (up):
  L2 bridge bd-id 1 idx 1 shg 0 bvi
  L3 192.168.1.1/24
tap10 (up):
  L3 10.10.1.1/24 ip4 table-id 1 fib-idx 1
tap20 (up):
  L2 bridge bd-id 1 idx 1 shg 0

vpp# show ip fib
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] locks:[src:plugin-hi:2, src:adjacency:1, src:default-route:1, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[101:8484]]
    [0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
    [0] [@0]: dpo-drop ip4
172.16.0.47/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:33 buckets:1 uRPF:38 to:[2220:186480]]
    [0] [@13]: dst-address,unicast lookup in ipv4-VRF:1
192.168.1.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:27 buckets:1 uRPF:32 to:[0:0]]
    [0] [@0]: dpo-drop ip4
192.168.1.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:26 buckets:1 uRPF:31 to:[10:960]]
    [0] [@4]: ipv4-glean: loop21: mtu:9000 ffffffffffffdead000000150806
192.168.1.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:29 buckets:1 uRPF:36 to:[10:900]]
    [0] [@2]: dpo-receive: 192.168.1.1 on loop21
192.168.1.200/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:30 buckets:1 uRPF:35 to:[5:480] via:[22:1848]]
    [0] [@5]: ipv4 via 192.168.1.200 loop21: mtu:9000 8a48fe5830d9dead000000150800
192.168.1.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:28 buckets:1 uRPF:34 to:[0:0]]
    [0] [@0]: dpo-drop ip4
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:4 buckets:1 uRPF:3 to:[0:0]]
    [0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:3 buckets:1 uRPF:2 to:[0:0]]
    [0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:5 buckets:1 uRPF:4 to:[0:0]]
    [0] [@0]: dpo-drop ip4
ipv4-VRF:1, fib_index:1, flow hash:[src dst sport dport proto ] locks:[src:CLI:3, src:plugin-low:1, src:adjacency:8, src:recursive-resolution:1, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:20 to:[10:2415]]
    [0] [@5]: ipv4 via 172.16.0.1 G0: mtu:9000 8446fe747dd4a0369f75ba8a0800
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:10 buckets:1 uRPF:8 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.10.1.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:20 buckets:1 uRPF:22 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.10.1.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:19 buckets:1 uRPF:21 to:[0:0]]
    [0] [@4]: ipv4-glean: tap10: mtu:9000 ffffffffffff02fedd9ceb9b0806
10.10.1.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:22 buckets:1 uRPF:26 to:[0:0]]
    [0] [@2]: dpo-receive: 10.10.1.1 on tap10
10.10.1.100/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:29 to:[1666:315090]]
    [0] [@5]: ipv4 via 10.10.1.100 tap10: mtu:9000 963b645c5f7402fedd9ceb9b0800
10.10.1.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:21 buckets:1 uRPF:24 to:[0:0]]
    [0] [@0]: dpo-drop ip4
172.16.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:15 buckets:1 uRPF:14 to:[0:0]]
    [0] [@0]: dpo-drop ip4
172.16.0.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:18 buckets:1 uRPF:17 to:[54:18684] via:[28:2352]]
    [0] [@5]: ipv4 via 172.16.0.1 G0: mtu:9000 8446fe747dd4a0369f75ba8a0800
172.16.0.20/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:32 buckets:1 uRPF:39 to:[0:0] via:[1:84]]
    [0] [@5]: ipv4 via 172.16.0.20 G0: mtu:9000 d66a53d1f17ea0369f75ba8a0800
172.16.0.22/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:23 buckets:1 uRPF:25 to:[61:5124]]
    [0] [@5]: ipv4 via 172.16.0.22 G0: mtu:9000 80d21df5918da0369f75ba8a0800
172.16.0.25/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:35 buckets:1 uRPF:41 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.0.25 G0: mtu:9000 1063c8ed8695a0369f75ba8a0800
172.16.0.47/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:31 buckets:1 uRPF:37 to:[2239:188184]]
    [0] [@5]: ipv4 via 172.16.0.47 G0: mtu:9000 ccd39d9eada5a0369f75ba8a0800
172.16.0.65/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:34 buckets:1 uRPF:40 to:[0:0]]
    [0] [@5]: ipv4 via 172.16.0.65 G0: mtu:9000 e4aaeab4c36ba0369f75ba8a0800
172.16.0.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:14 buckets:1 uRPF:13 to:[1:96] via:[2:128]]
    [0] [@4]: ipv4-glean: G0: mtu:9000 ffffffffffffa0369f75ba8a0806
172.16.0.73/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:17 buckets:1 uRPF:18 to:[88:5368] via:[17:1428]]
    [0] [@2]: dpo-receive: 172.16.0.73 on G0
172.16.0.98/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:24 buckets:1 uRPF:27 to:[1790:329506]]
    [0] [@5]: ipv4 via 172.16.0.98 G0: mtu:9000 309c239595e7a0369f75ba8a0800
172.16.0.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:16 buckets:1 uRPF:16 to:[1017:132666]]
    [0] [@0]: dpo-drop ip4
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:12 buckets:1 uRPF:10 to:[0:0]]
    [0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:11 buckets:1 uRPF:9 to:[0:0]]
    [0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:11 to:[792:276399]]
    [0] [@0]: dpo-drop ip4

haiyan...@ilinkall.cn
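Pulling together the steps from Filip's reply quoted earlier in the thread, the complete working sequence might look like the following sketch. The choice of tap300 as the VRF2 inside interface, table id 2 for VRF2, and 172.16.0.73 as G0's public address are illustrative assumptions taken from the examples in this thread, not a verified configuration; note also that the CLI expects a numeric table id for tenant-vrf rather than a symbolic name.

```
vpp# nat44 forwarding enable
vpp# set interface nat44 out G0 output-feature
vpp# set interface nat44 in tap300
vpp# set interface ip table tap300 2
vpp# nat44 add address 172.16.0.73 tenant-vrf 2
```

The tenant-vrf argument binds the NAT pool address to the source VRF, so outbound sessions originating in table 2 get translated to 172.16.0.73 when they leave via G0.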
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21000): https://lists.fd.io/g/vpp-dev/message/21000
Mute This Topic: https://lists.fd.io/mt/89683253/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-