Hi Vincent,
I still have some concerns I need addressed before I can be "sold" on this.
First off, we need to discuss the end destination, as it may be very
difficult, if not impossible, to agree on the next step if we're not
agreed on the destination.
There seemed to be wide agreement that in a perfect world we'd have a RIB for
each VRF/netns, and each instance of a protocol, in a separate process.
Indeed, you might want further separation (e.g. each AFI/SAFI RIB in a
different process, as there's next to no data sharing between them).
The major obstacle to this is that our config/control code and UI are
currently too directly entwined. That needs fixing one day.
The destination then is to have a collection of processes that scales out
with VRFs and/or protocol instances, peers, etc. I would imagine there
would be /one/ process to create/destroy VRFs and their associated
daemons, and that it'd be separate from the RIB task in the VRF and the
other protocols.
Are we on the same page with that? Or is there disagreement there?
If on the same page:
The next question is how this patch set affects the distance to the desired
state. If we're at A and ultimately want to get to B, then is this getting
us directly there (great!)? Or is it perhaps at a slight angle from the
straight path, a slight detour but still generally decreasing the distance
to B? Or is it at right angles or greater - increasing the distance?
Now we have to try to make this specific, to get some kind of objective
technical handle on how to assess it:
The first specific issue is whether commands added suit the destination:
I've looked through the code, and it looks like if/when we fix the
config/control/rendezvous issues and are able to split VRF management
out of 'zebra' into a VRF management daemon, then the command
definitions should still be fine.
So that's good, that's getting us closer to the destination.
(Though, I wonder, why not just /start/ by putting the VRF code in a
dedicated VRF management daemon?)
Background: Note that it is already possible today to run zebra+protocol
daemons one-set-per-VRF/netns. You can write a script to set up the VRFs
and inter-VRF interfaces and it works. For each VRF you launch zebra and
the client daemons for that VRF with a VRF-specific Zserv path.
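As an illustration, such a wrapper script might look roughly like the
sketch below. The netns names, config paths, socket paths, and choice of
ospfd are all my own assumptions for the example; the -z switch is the
daemons' Zserv socket path option:

```shell
#!/bin/sh
# Sketch: one zebra + client-daemon set per VRF netns.
# All names and paths here are illustrative, not Quagga defaults.

DRY_RUN=${DRY_RUN:-1}   # default to printing the commands instead of running them

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "$@"
    else
        "$@"
    fi
}

# Create a netns for one VRF and start its own zebra + ospfd,
# pointed at a VRF-specific Zserv socket via -z.
launch_vrf() {
    vrf="$1"
    ns="vrf-$vrf"
    sock="/var/run/quagga/zserv-$vrf.api"
    run ip netns add "$ns"
    # ... inter-VRF interfaces and addresses would be set up here ...
    run ip netns exec "$ns" zebra -d -z "$sock" -f "/etc/quagga/zebra-$vrf.conf"
    run ip netns exec "$ns" ospfd -d -z "$sock" -f "/etc/quagga/ospfd-$vrf.conf"
}

for vrf in red blue; do
    launch_vrf "$vrf"
done
```

Each VRF's daemons rendezvous only on their own socket, so no daemon
needs to be VRF-aware.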
The next specific issue is how this is going to interact with client
daemons.
Your patch-set does this by making VRF part of ZServ. Every client must
now become VRF-aware, even if only to set the VRF-Id.
However, couldn't this also be accomplished by simply having a separate
ZServ socket for each VRF, with the socket for the default VRF keeping
the (existing) default path, so that no client mods are needed?
Doing it inside the one ZServ socket can only serve to add multi-VRF
support to client daemons. I will have major concerns about taking such
patches though.
We've not been given a use-case for single-daemon/multi-instance. Note
that protocol-daemon-per-netns _already_ works today. So what reason will
we have in the future to accept multi-VRF-single-daemon patches, and take
on that level of churn and maintenance burden, when the use-case will
already be covered?
Without multi-VRF-single-daemon, there is no need to have ZServ support
VRF Ids.
Indeed, even with multi-VRF-single-daemon, it might still be better to
have 1 ZServ protocol instance/socket per VRF. It would then be easier
later to split up the zebra side into per-VRF processes.
---
So in summary, the issues I have:
* Why not have VRF setup/management separate from zebra? A single VRF
management daemon, then dumb (zebra+client daemons) per VRF (already
somewhat possible now with scripts)?
* Doing it with one ZServ socket assumes that Quagga will want to support
single-process (certainly single-peer) multi-VRF daemons. Why is that
latter point attractive when we can already cover the VRF use-case with
existing VRF-unaware-daemons-per-VRF?
Basically, my problem is that if I sign up to this zebra multi-VRF patch,
am I signing up for immense amounts of later churn in the routing client
daemons, for a use-case that doesn't need the client daemons to be
modified?
Note: I'd sign up for a simple VRF management daemon, with VRF-specific
ZServ paths, in a jiffy.
On Wed, 27 May 2015, Vincent JARDIN wrote:
David, Paul,
Until now, I preferred to stay quiet to avoid confusion.
1- I understand that there is a consensus on this contribution (both
approaches Single Daemon and Multi Daemon make sense and can coexist),
2- my review of the code is OK too.
So, I am ready for a merge of this v3 of the patch, unless there is another
concern on the code.
Best regards,
Vincent
PS: Sorry, I cannot connect to IRC during the workday.
On 27/05/2015 10:42, Nicolas Dichtel wrote:
I also agree.
Thank you,
Nicolas
On 26/05/2015 22:33, Donald Sharp wrote:
> Jafar -
>
> Thanks for saying this so eloquently. This is exactly what we have been
> discussing internally, and we believe that this is the correct direction
> to go.
>
> thanks!
>
> donald
>
> On Tue, May 26, 2015 at 3:13 PM, Jafar Al-Gharaibeh <[email protected]>
> wrote:
>
> > Andrew,
> >
> > I will leave direct answers to your questions to Nicolas, but here
> > is my take on the matter.
> >
> > Given how substantial 6WIND's patches already are, I'd probably wait
> > for the patches to be merged in (if ever) and rebase afterward.
> >
> > Just a thought: in the bigger picture, I think your approach to VRF
> > (VRF-light as you call it) complements 6WIND's, and it is useful with
> > or without 6WIND's patches, but we need to resolve terminology issues
> > if we are going to have both. 6WIND's VRFs are tightly coupled to
> > network namespaces, while your VRFs map to different routing tables. I
> > know there is a "table" command in zebra that I've never tried and
> > don't know if it is fully functional. The documentation says:
> >
> > — Command: table tableno
> >
> > Select the primary kernel routing table to be used. This only works
> > for
> > kernels supporting multiple routing tables (like GNU/Linux 2.2.x and
> > later). After setting tableno with this command, static routes defined
> > after this are added to the specified table.
> >
> >
> > "VRF-Light" defines a mapping to different kernel routing tables. In
> > my mind (please correct me if I'm wrong), it seems as if VRF-Light is
> > a natural generalization of this command's concept. Currently the
> > "table" command has a global effect on zebra's state, and I'm not sure
> > if you can even change the table at run-time. With VRF-Light we want
> > to extend this behavior and make it dynamic, so that we can maintain
> > different tables at the same time, and also direct commands/routes to
> > a specific table. In such a world:
> >
> > VRF 3 table 2
> >
> > means network namespace 3, routing table number 2. The former is
> > 6WIND's work, the latter is yours.
> >
> > Cheers,
> > Jafar
> >
> >
> > On 5/26/2015 1:08 PM, Xiaorong (Andrew) Qu wrote:
> >
> > Hi Nicolas,
> >
> >
> > It's good to see 6WIND has also worked on VRF support.
> >
> > How about we coordinate the VRF support check-in? We can rebase onto
> > your branch and add what is missing in our patch.
> >
> > The design/implementation between 6WIND and us in this patch are very
> > similar, so there will not be much conflict in design; it is mostly
> > just add-on.
> >
> > Coordinating may resolve minor differences before the entire patch is
> > checked into Quagga, and it may serve as a good code review as well.
> >
> > What do you think?
> >
> > Thanks,
> >
> > Andrew
> >
> >
> >
> >
> > _______________________________________________
> > Quagga-dev mailing list
> > [email protected]
> > https://lists.quagga.net/mailman/listinfo/quagga-dev
> >
> >
>
--
Paul Jakma [email protected] @pjakma Key ID: 64A2FF6A
Fortune:
The big cities of America are becoming Third World countries.
	-- Nora Ephron