Thanks, Neale, for responding and for the inputs. I hope you are doing well.
I understood your point that there is no guarantee the packet reaches the
destination. But the use case demands this behaviour.
Just to give it a try, I went ahead and changed the IPIP hw-interface type to
NBMA and made some changes to pick up the stack entry from
"adj->sub_type.nbr.next_hop" instead of "t->tunnel_dst". It looks like it
works and packet forwarding happens as expected.
70.70.70.0/24
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:77 buckets:1 uRPF:87 to:[9:414]
drop:[0:0]]
[0] [@6]: ipv4 via 55.55.55.55 ipip0: l3-mtu:9000 l2-mtu:9000 next:8
450000000000000040040c8d1616161621212121
stacked-on entry:73:
[@2]: ipv4 via 12.1.1.2 TenGigabitEthernet3/0/1.60: l3-mtu:1400
l2-mtu:1508 next:6 000081bee3f8322a643982f4810000030800
I got your point about introducing a new VRF (service-chain) and changing the
tunnel VRF to be the new service-chain VRF. This sounds like a good idea to
stitch the path for packet steering between spaces.
Thanks again,
Leela sankar
From: [email protected] <[email protected]> on behalf of Neale Ranns via
lists.fd.io <[email protected]>
Date: Sunday, November 9, 2025 at 10:59 PM
To: [email protected] <[email protected]>
Subject: [**EXTERNAL**] Re: [vpp-dev] Route pointing to IPIP tunnel for
encapsulation
Hi Leela sankar,
Comments inline
From: [email protected] <[email protected]> on behalf of Gudimetla, Leela
Sankar via lists.fd.io <[email protected]>
Date: Monday, 10 November 2025 at 10:37 am
To: [email protected] <[email protected]>
Subject: [vpp-dev] Route pointing to IPIP tunnel for encapsulation
Hello All,
Good evening.
I have a use case with an IPIP tunnel. Can someone suggest a solution if this
has already been done, or share thoughts?
IPIP tunnel encapsulation requirement:
The ip-route that points to the ipip-tunnel should be sent to a neighbor IP
that is different from the destination IP of the ipip-tunnel.
In other words, the ipip-tunnel should only be responsible for adding the
outer header (i.e. encapsulation) and should not decide the forwarding.
That’s not supported.
If you add an encapsulation for 33.33.33.33 but send it on the path towards
55.55.55.55, what guarantee do you get that it reaches 55.55.55.55?
For example …
For example:
ip route add 70.70.70.0/24 via 55.55.55.55 ipip0
The neighbor-ip 55.55.55.55 is another route that is resolved.
55.55.55.55/32
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:73 buckets:1 uRPF:84 to:[0:0]
drop:[0:0]]
[0] [@5]: ipv4 via 12.1.1.2 TenGigabitEthernet3/0/1.60: l3-mtu:1400
l2-mtu:1508 next:6 000081bee3f8322a643982f4810000030800
… Why doesn’t 12.1.1.2 send the packet straight back to you …
The ipip-tunnel configuration is like below.
[0] instance 0 src 22.22.22.22 dst 33.33.33.33 table-ID 0 sw-if-idx 77 flags
[none] dscp CS0
The tunnel destination-ip is resolved like below.
33.33.33.33/32
unicast-ip4-chain
[@0]: dpo-load-balance: [proto:ip4 index:78 buckets:1 uRPF:88 to:[0:0]
drop:[0:0]]
[0] [@5]: ipv4 via 11.1.1.2 GigabitEthernet5/0/0.59: l3-mtu:1400
l2-mtu:1504 next:5 000081bee3f6322a643982f4810000020800
… for you to forward to 11.1.1.2?
As per my requirement, the traffic should be sent via 12.1.1.2
TenGigabitEthernet3/0/1.60.
But the route is getting stacked on 11.1.1.2 GigabitEthernet5/0/0.59.
Intentionally so, as I hope the above example illustrates.
In the VPP code, I see that the ipip-tunnel header is updated by
adj_nbr_midchain_update, and immediately afterwards the midchain_delegate_stack
happens with the destination IP of the tunnel.
Since the ipip-tunnel is a P2P interface (highlighted code), it looks like the
nexthop_addr is passed as zero.
static fib_path_t*
fib_path_attached_next_hop_get_adj (fib_path_t *path,
                                    vnet_link_t link,
                                    dpo_id_t *dpo)
{
    fib_node_index_t fib_path_index;
    fib_protocol_t nh_proto;
    adj_index_t ai;

    fib_path_index = fib_path_get_index(path);
    nh_proto = dpo_proto_to_fib(path->fp_nh_proto);

    if (vnet_sw_interface_is_p2p(vnet_get_main(),
                                 path->attached_next_hop.fp_interface))
    {
        /*
         * if the interface is p2p then the adj for the specific
         * neighbour on that link will never exist. on p2p links
         * the subnet address (the attached route) links to the
         * auto-adj (see below), we want that adj here too.
         */
        ai = adj_nbr_add_or_lock(nh_proto, link, &zero_addr,
                                 path->attached_next_hop.fp_interface);
    }
    else
    {
        ai = adj_nbr_add_or_lock(nh_proto, link,
                                 &path->attached_next_hop.fp_nh,
                                 path->attached_next_hop.fp_interface);
    }

    dpo_set(dpo, DPO_ADJACENCY, vnet_link_to_dpo_proto(link), ai);
    adj_unlock(ai);

    return (fib_path_get(fib_path_index));
}
Can such an adj_nbr exist, or should it be a new type?
Or can I simply make the IPIP hw-interface type NBMA instead of P2P, make
some changes in ipip_tunnel_stack to get the nh_addr from the adj, and do the
midchain_delegate_stack on that address?
Using an NBMA interface is not going to change things. The address in the
encapsulation is the one used to send the packet.
The best, ahem, workaround I can think of is to add a new VRF (called maybe
service-chain) and add a route in it reachable via 55.55.55.55 in the default
table (where I assume the route for 55.55.55.55 you showed above resides). Then
change the underlay VRF of your tunnel to be your new service-chain VRF. Or
change the tunnel’s encap.
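A rough sketch of this workaround in VPP CLI. This is a config fragment under stated assumptions: table ID 100 is an arbitrary choice, and the exact `next-hop-table` and `outer-table-id` keywords should be checked against your VPP version's CLI before use.

```
# create the service-chain VRF
ip table add 100

# in the service-chain VRF, route the tunnel destination via the
# neighbour, with the next-hop resolved in the default table (0)
ip route add 33.33.33.33/32 table 100 via 55.55.55.55 next-hop-table 0

# recreate the tunnel with its underlay in the service-chain VRF
create ipip tunnel src 22.22.22.22 dst 33.33.33.33 outer-table-id 100
```

With this, the tunnel still encapsulates towards 33.33.33.33, but the encapsulated packet is resolved in table 100, where the only path is via 55.55.55.55 in the default table.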
Regards,
neale
Any inputs are highly appreciated.
Thanks,
Leela sankar
View/Reply Online (#26508): https://lists.fd.io/g/vpp-dev/message/26508