On Sat, 30 May 2015, Lou Berger wrote:
> We actually have the code to do the per-VRF import/export that we'd
> like to submit as well.
Ok, so can we see it, so we can be more informed on the API, Zserv and UI
considerations?
I'm loath to take in a patch with major implications for future design
choices without seeing the uses of it.
> One of the challenges has been how to integrate it with a testable
> forwarding plane. (The code was originally developed as what is
> essentially an NVO3 controller, with a proprietary -- now DOA --
> NVA-to-NVE protocol; think OpenFlow-ish.)
Testing is another issue.
Linux kernel namespaces were explicitly designed to *avoid* the need to
modify user-space network code (at least, outside of things that want to
trade info between namespaces). Exploiting that fact, so the code need
not be modified at all, has a huge advantage:
* The difference in testing the normal case code and the
multi-VRF/VRF-specific code will be 0. Test the normal case, and you're
testing the "in one VRF" case automatically.
Then only the "exchange info between routing-contexts/VRFs/namespaces"
cases need explicit, separate testing. Even there, we can minimise
code-path explosion by re-using existing code paths (e.g. BGP is a pretty
natural protocol for exchanging routing info between different instances
of bgpd - and for IPC it can be made transparent and 0-conf).
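As an illustrative sketch of that "BGP between instances" idea (VRF
names, addresses and AS numbers below are invented for the example, and
in practice the session could be made transparent and 0-conf rather
than hand-configured): an unmodified bgpd runs per VRF/namespace, and
the two exchange routes over an ordinary BGP session between them:

```
! bgpd.conf for the bgpd in namespace/VRF "red" (illustrative)
router bgp 64512
 neighbor 10.255.0.2 remote-as 64513

! bgpd.conf for the bgpd in namespace/VRF "blue" (illustrative)
router bgp 64513
 neighbor 10.255.0.1 remote-as 64512
```

The IPC is just a BGP session over a link between the namespaces (e.g.
a veth pair), so the inter-VRF exchange re-uses the normal,
already-tested BGP code paths.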
> Having multiple VRF-RIB/Zebras supported in a single process will
> certainly be easier and perhaps cleaner integration -- keep in mind the
> code submitted *must* run in a single bgpd.
Why *must*?
Even so, it doesn't require one zebra, does it?
> That said, this single bgpd that does the per-VRF import/export could
> use an RPC mechanism per zebra process.
> I think both single-process multi-VRF and multi-process models have
> their places. The former is more scalable, while the latter provides
> greater isolation. I view the latter as being closer to a logical
> system/network element than to providing a simple VRF. And, as I think
> is generally understood, there is value in being able to support both
> in larger systems, while smaller systems may only support one (or
> neither) type.
If we want to have the option to support both kinds, then it might be a
good idea to review the interfacing issues now, before integration.
E.g., the Zserv interface: should it be one Zserv instance handling
multiple VRFs, with a VRF ID field in the protocol, or one Zserv
instance per VRF, with the Zserv:VRF association defined outside the
protocol (e.g. by filesystem path)?
One Zserv instance for all makes it much harder to move to multiple
processes later - it is next to impossible to have multiple processes
concurrently read messages off the same socket in a coherent manner
without losing at least some of the benefits of distinct processes,
should someone want that in the future.
A Zserv instance per VRF, with the VRF implicit in the instance, would
allow multiple processes in the future and is also easy to handle with
one process. It does mean relying on other conventions to associate the
VRF with the Zserv instance - but we already have well-known paths for
Zserv local sockets. It doesn't preclude informational messages to
announce the VRF ID (where known), though.
E.g., again: if we accept this zebra patch, then as things stand we're
going to have zebra inside /containers/ advertising VRF capability
(which may work on Linux, with its nestable namespaces, but not
elsewhere).
These things deserve thought, before we become committed to them, I feel.
regards,
--
Paul Jakma [email protected] @pjakma Key ID: 64A2FF6A
Fortune:
Mater artium necessitas.
[Necessity is the mother of invention].
_______________________________________________
Quagga-dev mailing list
[email protected]
https://lists.quagga.net/mailman/listinfo/quagga-dev