Hi Vladimir,
On Sun, Mar 22, 2026 at 4:42 PM Vladimir Medvedkin <[email protected]> wrote:
>
> This series adds multi-VRF support to both IPv4 and IPv6 FIB paths by
> allowing a single FIB instance to host multiple isolated routing domains.
>
> Currently FIB instance represents one routing instance. For workloads that
> need multiple VRFs, the only option is to create multiple FIB objects. In a
> burst oriented datapath, packets in the same batch can belong to different
> VRFs, so the application either does per-packet lookup in different FIB
> instances or regroups packets by VRF before lookup. Both approaches are
> expensive.
>
> To remove that cost, this series keeps all VRFs inside one FIB instance and
> extends lookup input with per-packet VRF IDs.
>
> The design follows the existing fast-path structure for both families. IPv4
> and IPv6 use multi-ary trees with a 2^24 associativity on a first level
> (tbl24). The first-level table scales per configured VRF. This increases
> memory usage, but keeps performance and lookup complexity on par with
> non-VRF implementation.

Thanks for the RFC. Some thoughts below.

Memory cost: the flat TBL24 replicates the entire first-level table for
every VRF (num_vrfs * 2^24 * nh_size). With 256 VRFs and 8-byte next hops,
that is 32 GB for TBL24 alone. In grout we support up to 256 VRFs allocated
on demand -- this approach forces the full cost upfront even if most VRFs
are empty.

Per-packet VRF lookup: Rx bursts come from one port, and therefore one VRF.
Mixed-VRF bulk lookups do not occur in practice. The three AVX512 code
paths add complexity for a scenario that does not exist, at least for a
classic router. Am I missing a use case?

I am not too familiar with the DPDK FIB internals, but would it be possible
to keep a separate TBL24 per VRF and share only the TBL8 pool? Something
like pre-allocating an array of max_vrfs TBL24 pointers, allocating each
TBL24 on demand at VRF add time, and having them all point into a shared
TBL8 pool.
The TBL8 index in TBL24 entries seems to already be global, so would that
work without encoding changes?

Going further: could the same idea extend to IPv6? The dir24_8 and trie
implementations seem to use the same TBL8 block format (256 entries, the
same (nh << 1) | ext_bit encoding, the same size). Would unifying the TBL8
allocator allow a single pool shared across IPv4, IPv6, and all VRFs? That
could be a bigger win for /32-heavy and /128-heavy tables, and maybe a good
first step before multi-VRF.

Regards,
Maxime Leroy

> Vladimir Medvedkin (4):
>   fib: add multi-VRF support
>   fib: add VRF functional and unit tests
>   fib6: add multi-VRF support
>   fib6: add VRF functional and unit tests
>
>  app/test-fib/main.c      | 257 ++++++++++++++++++++++--
>  app/test/test_fib.c      | 298 +++++++++++++++++++++++++++
>  app/test/test_fib6.c     | 319 ++++++++++++++++++++++++++++-
>  lib/fib/dir24_8.c        | 241 ++++++++++++++++------
>  lib/fib/dir24_8.h        | 255 ++++++++++++++++--------
>  lib/fib/dir24_8_avx512.c | 420 +++++++++++++++++++++++++++++++--------
>  lib/fib/dir24_8_avx512.h |  80 +++++++-
>  lib/fib/rte_fib.c        | 158 ++++++++++++---
>  lib/fib/rte_fib.h        |  94 ++++++++-
>  lib/fib/rte_fib6.c       | 166 +++++++++++++---
>  lib/fib/rte_fib6.h       |  88 +++++++-
>  lib/fib/trie.c           | 158 +++++++++++----
>  lib/fib/trie.h           |  51 +++--
>  lib/fib/trie_avx512.c    | 225 +++++++++++++++++++--
>  lib/fib/trie_avx512.h    |  39 +++-
>  15 files changed, 2453 insertions(+), 396 deletions(-)
>
> --
> 2.43.0

--
-------------------------------
Maxime Leroy
[email protected]

