On 01.10.2025 17:00, Stefan Hanreich wrote:
Have some additional comments I wanted to post on the ML so they don't
get lost, but they're more food for thought for further improvements of
the whole SDN / Fabrics stack wrt VRFs.
Tested the patch series and looked at the code - LGTM!
Reviewed-by: Stefan Hanreich <[email protected]>
Tested-by: Stefan Hanreich <[email protected]>
Thanks for the review!
On 9/16/25 2:41 PM, Gabriel Goller wrote:
[snip]
In this scenario we ping from Node1 to Node3, which means that Node2
needs to forward our packet. When the packet arrives at Node2, we check
the routing table again and see the same kind of entry as before:
10.0.1.3 is reachable via (e.g.) ens20 onlink, with the source address
10.0.1.2.
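For illustration, the relevant route on Node2 could look roughly like
this (the nhid is made up and the output is abbreviated with [..]):

10.0.1.3 nhid 42 via 10.0.1.3 dev ens20 [..] onlink src 10.0.1.2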
This is fine, but we still need to do an ARP request to look up the MAC
address of the neighbor attached at ens20. So we take the packet we got
from Node1, which has a source address of 10.0.1.1, and call arp_solicit
on it to send an ARP request and find the neighbor's MAC address.
arp_solicit takes the source address of the packet (10.0.1.1) and checks
whether it is configured locally. This check fails because 10.0.1.1 is
not available locally (there is a direct route to it, but it's not
configured on any local interface (RTN_LOCAL)).
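Roughly, this is the relevant part of arp_solicit (a condensed sketch
from my reading of net/ipv4/arp.c in v6.17; the other arp_announce modes
and all error handling are elided):

	/* condensed from arp_solicit() in net/ipv4/arp.c */
	__be32 saddr = 0;

	switch (IN_DEV_ARP_ANNOUNCE(in_dev)) {
	default:
	case 0: /* by default, announce any address that is local */
		if (skb && inet_addr_type_dev_table(dev_net(dev), dev,
						    ip_hdr(skb)->saddr) == RTN_LOCAL)
			saddr = ip_hdr(skb)->saddr;
		break;
	/* arp_announce modes 1 and 2 elided */
	}

	/* the forwarded packet's saddr (10.0.1.1) is not RTN_LOCAL here,
	 * so fall back to picking any scope-link address */
	if (!saddr)
		saddr = inet_select_addr(dev, target, RT_SCOPE_LINK);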
arp_solicit will thus [2] fall back to inet_select_addr, which goes
through all the IP addresses on the current interface (there are none,
because this interface is unnumbered) and then iterates through all the
other interfaces on the host, selecting the first address with 'scope
link'. This address is then used as the source address for the outgoing
ARP packet. Now, if we're lucky, this is the dummy interface on our node
and we select the correct source address (10.0.1.2) -- but we could also
be unlucky, and it selects a completely unrelated address from another
interface, e.g. 172.16.0.26. In the unlucky case, arp_solicit will send
out the following ARP packet:
Request who-has 10.0.1.3 tell 172.16.0.26
We will get a correct response, but it will arrive on another interface
(because 172.16.0.26 is not on the same interface as 10.0.1.2). This
means we will keep sending these ARP requests without ever getting an
answer, so the ping from Node1 to Node3 fails with "Destination Host
Unreachable" errors.
Interesting that the src IP address directive from the route is not
considered at all:
172.16.123.2 nhid 162 via 172.16.123.2 dev ens22 [..] src 172.16.123.1
I didn't dig further into it; there's a good chance there's a good
reason for that which I just didn't see immediately.
Hmm yeah, I'll look into this. Currently the whole neighbor system is
quite generic and limited; I'll see if there is any way we can add
"hints" to the arp_solicit function when it's called from ip_forward.
What I also found interesting while jumping down this rabbit hole is the
following comment / code section in the inet_select_addr function [1]:
/* For VRFs, the VRF device takes the place of the loopback device,
* with addresses on it being preferred. Note in such cases the
* loopback device will be among the devices that fail the master_idx
* equality check in the loop below.
*/
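The surrounding fallback logic, condensed (from inet_select_addr in
net/ipv4/devinet.c, v6.17; RCU locking and scope details elided, names
as in the kernel source):

	master_idx = l3mdev_master_ifindex_rcu(dev);

	/* if dev is enslaved to a VRF, prefer the VRF device's addresses */
	if (master_idx &&
	    (dev = dev_get_by_index_rcu(net, master_idx)) &&
	    (in_dev = __in_dev_get_rcu(dev))) {
		addr = in_dev_select_addr(in_dev, scope);
		if (addr)
			goto out_unlock;
	}

	/* otherwise walk all interfaces, but only those with the same
	 * L3 master -- inside a VRF, only VRF member devices qualify */
	for_each_netdev_rcu(net, dev) {
		if (l3mdev_master_ifindex_rcu(dev) != master_idx)
			continue;

		in_dev = __in_dev_get_rcu(dev);
		if (!in_dev)
			continue;

		addr = in_dev_select_addr(in_dev, scope);
		if (addr)
			goto out_unlock;
	}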
So, in that case (iiuc) one could side-step that problem by
compartmentalizing the fabric inside its own VRF (which further
reinforces my belief that we should implement VRF support sooner rather
than later, to avoid issues like this when running everything in one
routing table, particularly with multiple fabrics).
fabricd has no VRF support atm, though (we could potentially run it via
ip vrf exec, but that seems hacky) -- OSPF and BGP do.
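A rough sketch of what that could look like (VRF name, table number, and
the daemon invocation are made up / Debian-ish, purely illustrative):

# create a VRF for the fabric and enslave the fabric interfaces
ip link add vrf-fabric type vrf table 100
ip link set vrf-fabric up
ip link set ens20 master vrf-fabric
ip link set dummy0 master vrf-fabric

# the hacky part: run fabricd with its sockets bound to the VRF
ip vrf exec vrf-fabric /usr/lib/frr/fabricd -d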
I agree, in theory this would be very nice, but hacking VRF support into
lots of software that doesn't have it might be tricky. Another example
is WireGuard, where VRF support is also somewhat tricky.
I'll look into OpenFabric VRF support, though; this shouldn't be too
hard to implement, as IS-IS already supports it.
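For reference, FRR's isisd already exposes this in its config (instance
and VRF names below are made up); fabricd would presumably grow an
analogous knob:

router isis fabric1 vrf vrf-fabric
 net 49.0001.1921.6800.1001.00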
[1]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/ipv4/devinet.c?h=v6.17#n1359
For more information check out the simplified version of the arp_solicit
function below:
[snip]
_______________________________________________
pve-devel mailing list
[email protected]
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel