Hi Morten,
On 3/23/2026 9:01 AM, Morten Brørup wrote:
From: Stephen Hemminger [mailto:[email protected]]
Sent: Sunday, 22 March 2026 17.44
On Sun, 22 Mar 2026 15:42:11 +0000
Vladimir Medvedkin <[email protected]> wrote:
This series adds multi-VRF support to both IPv4 and IPv6 FIB paths by allowing a single FIB instance to host multiple isolated routing domains.
Currently, a FIB instance represents one routing instance. For workloads that need multiple VRFs, the only option is to create multiple FIB objects. In a burst-oriented datapath, packets in the same batch can belong to different VRFs, so the application either does per-packet lookups in different FIB instances or regroups packets by VRF before lookup. Both approaches are expensive. To remove that cost, this series keeps all VRFs inside one FIB instance and extends the lookup input with per-packet VRF IDs.
The design follows the existing fast-path structure for both families. IPv4 and IPv6 use multi-ary trees with 2^24 associativity at the first level (tbl24). The first-level table scales per configured VRF. This increases memory usage, but keeps performance and lookup complexity on par with the non-VRF implementation.
I noticed the suggested API uses separate parameters for the VRF and IP.
How about using one parameter, a structure containing the {VRF, IP} tuple,
instead?
I'm mainly thinking about the bulk operations, where passing one array seems
more intuitive than passing two arrays.
I found this design to be more intuitive and somewhat backward compatible: many apps already create an array of addresses, so adding an extra array with the corresponding VRF IDs may not be counterintuitive IMO.
But what I find more important is performance; at least this approach is more convenient for vectorization.
Not sure at all if this is the right way to do VRF.
There are multiple ways to do VRF, the Linux way, the Cisco way, ...
I think a shared table operating on the {VRF, IP} tuple makes sense.
If a table instance per VRF is preferred, that is still supported.
Can you elaborate on what Linux and Cisco do differently from this?
This needs way more documentation and also an example.
+1
Like an option to l3fwd. And also an implementation in testpmd.
--
Regards,
Vladimir