Paul,

On June 1, 2015 5:00:38 PM Paul Jakma <[email protected]> wrote:

> On Mon, 1 Jun 2015, Lou Berger wrote:
>
> >> One Zserv instance for all makes it much harder to do multiple-processes
> >> later
>
> > I'm not sure I see this.
>
> You have a Zserv protocol, which is an API. This API, with the proposed
> patch train, is extended to add a VRF-Id field. This is so a client can
> send a route and have zebra install it into the relevant VRF. Once we add
> support for a given way of working, it may become hard to ever change or
> remove it.
>

I agree that once VRF awareness is included in the API, it will be hard
to remove. I do think many changes will still be possible, i.e., we
won't be locked into a single way the API is supported.

> [zebra VRF 0..n]----------[client]
>

Agreed, this is the initial version.

> If you want to split zebra into multiple processes that means you have
> either:
>
> [zebra VRF0] |
> [zebra VRF1] |    ZServ
> .            |----------------------[client]
> .            |
> .            |
> [zebra VRFn] |
>
> How do you demux the messages from the client to the right zebra in a sane
> way with zebra-daemon-per-VRF, if the protocol requires that it support
> clients that expect to be able to send routes for different VRFs over *one
> message stream* on a single socket?

This is where you lose me.  In any approach that allows multiple zebras,
there will be a way to map a VRF to a zebra instance (and presumably a
socket), and the client code will need to dispatch per-VRF information
to the right socket as well as identify the VRF associated with incoming
messages. It seems to me that the current single socket can be replaced
with such a dispatch mechanism at a later date with minimal impact on
the rest of the code/system.  The only decision really being made now
that is likely to be hard to reverse is the introduction of a
quagga-instance-wide unique VRF ID.

>
> (Similarly, how do these daemons mux together their messages back into a
> single message stream to the client in a sane way, that doesn't negate
> many of the benefits of using multiple, isolated independent processes
> that might motivate this approach?)
>
> If there isn't an easy way to do this, then the above approach would seem
> to cast it into stone that we support single-daemon-for-all-VRF zebra
> forever more, no?

So I think you are really making two separable points:

Point 1: Does the current patch set preclude a separate Zserv per VRF?

I think the answer is no.  Any mechanism needed to support such a
mapping could still be introduced at a later date.

(see below for point 2)


>
> What I'm asking is that we consider the above issue, and weight it against
> the alternative of using a separate Zserv per VRF.
>
> A separate Zserv session per VRF is easy to implement in a non-problematic
> way regardless of the chosen single/multi-processing approach:
>
>                Zserv
>
>
> [zebra VRF0] |-----|-----------------------
> [zebra VRF1] |-----|                      |
> .            |  .  |[single daemon client]|
> .            |  .  |                      |
> .            |  .  |                      |
> [zebra VRFn] |-----|-----------------------
>
> or
>
>                |-----|
> [single zebra] |  .  | [single daemon client]
>                |  .  |
>                |  .  |
>                |-----|
>
>
> [zebra VRF0] |-----|[VRF0 client daemon]
> [zebra VRF1] |-----|[VRF1 client daemon]
> .            |  .  | .
> .            |  .  | .
> .            |  .  | .
> [zebra VRFn] |-----|[VRFn client daemon]
>
> or
>
> etc.
>
> I.e. no need to change Zserv. The VRF becomes implicit in the filename.

I think this is point 2: It is easier/cleaner to implement a Zserv per VRF.

My view is that there are different optimization points and tradeoffs in
the different models, and there are *valid* use cases for each.  I think
it's likely that both will be supported in the long term, and their
introduction into the code should be driven by community interest/use.
So even if point 2 is correct, someone has already done the "harder job"
and demonstrated support, in the form of contributed code, for one
model -- and I don't see a reason not to make use of it.

>
> Does my concern make more sense now?

I think so -- even if I don't completely agree, I do appreciate you
taking the time to explain it.

Lou


>
> regards,
> --
> Paul Jakma    [email protected]  @pjakma Key ID: 64A2FF6A
> Fortune:
> A dream will always triumph over reality, once it is given the chance.
>               -- Stanislaw Lem
>


_______________________________________________
Quagga-dev mailing list
[email protected]
https://lists.quagga.net/mailman/listinfo/quagga-dev
