Paul,

On 6/3/2015 12:40 PM, Paul Jakma wrote:
> On Tue, 2 Jun 2015, Lou Berger wrote:
>
>> This is where you lose me.  In any approach that allows multiple zebras
>> there will be a way to map vrf to zebra instance (and presumably
>> socket),  and the client code will need to resolve/dispatch per-vrf info
>> to the right socket as well as identify the vrf associated with incoming
>> messages.
> How will you do this when the VRF ID is in the message header, 
See below.

> and the 
> requirement is that the client be able to send commands for any VRF down 
> this message stream?

I don't see this as a requirement, but rather as the current
implementation.  More generally, and in the long term, I see that one
socket can support N VRFs, where N <= the complete set of VRFs.  It's
just the 1st/current version that sends everything down one socket.

>
>> It seems to me that the current single-socket can be replaced with such a 
>> dispatch mechanism at a later date with minimal impact to the rest of 
>> the code/system.
> You have a socket with messages intended for different VRFs. How will you 
> make message for VRF 1 go to the zebra instance for VRF 1, and the message 
> for VRF x to zebra instance x? (And similarly, for the other direction, 
> how do you assemble a coherent message stream).
I'd use a socket per instance, plus a mechanism that allows clients to
map VRF# to socket.  One way to do this would be to have multiple socket
names point/link to the same real socket (e.g., bind() to create the
base/real socket name, then create links for each associated VRF).

> I don't know of a way for multiple processes to receive messages from the 
> same socket in a clean way that doesn't add cross-process locking that 
> obviates at least some of the point of the multiple-processes.
I wasn't considering using a single socket with multiple instances.

>
> Other ways might be to have another, external method for locating the 
> right zebra (e.g., simplistically, the filename) -
Filenames could be used to map multiple VRFs to the same socket too,
e.g. (made-up inodes):

1080764 srwx------ 1 quagga quagga 0 Jun  4 10:56 /var/run/quagga/vrf123/zserv.api
1080764 srwx------ 1 quagga quagga 0 Jun  4 10:56 /var/run/quagga/vrf82/zserv.api
2608904 srwx------ 1 quagga quagga 0 Jun  4 10:56 /var/run/quagga/vrf5678/zserv.api

where VRFs 123 and 82 share the same instance/socket and 5678 has its
own instance.
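On the client side, one way to exploit that layout is to stat() the
per-VRF path and reuse an existing connection whenever the inode matches
one already held, so VRFs linked to the same zebra instance share a
single stream.  This is a minimal sketch under that assumption; the
struct and function names are invented, and the caller is assumed to
have formatted the per-VRF path (e.g. /var/run/quagga/vrfN/zserv.api)
itself.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/un.h>
#include <unistd.h>

struct zclient_conn {
    ino_t ino;  /* inode of the socket file this fd connected to */
    int fd;     /* connected stream to the owning zebra instance */
};

/* Return a connected fd for the given zserv path, connecting only
 * when no cached connection already shares the path's inode. */
static int zclient_socket_for_path(const char *path,
                                   struct zclient_conn *cache,
                                   size_t *ncached, size_t max)
{
    struct sockaddr_un sun;
    struct stat st;
    int fd;

    if (stat(path, &st) < 0)
        return -1;

    /* Already connected via another name linked to this inode? */
    for (size_t i = 0; i < *ncached; i++)
        if (cache[i].ino == st.st_ino)
            return cache[i].fd;

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&sun, 0, sizeof(sun));
    sun.sun_family = AF_UNIX;
    snprintf(sun.sun_path, sizeof(sun.sun_path), "%s", path);
    if (connect(fd, (struct sockaddr *)&sun, sizeof(sun)) < 0) {
        close(fd);
        return -1;
    }

    if (*ncached < max) {               /* remember inode -> fd */
        cache[*ncached].ino = st.st_ino;
        cache[*ncached].fd = fd;
        (*ncached)++;
    }
    return fd;
}
```

With the listing above, resolving vrf123 and vrf82 would return the same
fd, while vrf5678 would get its own connection.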

>  but then why bother 
> with the VRF inside the protocol?

To continue to allow more than one VRF to share a zebra instance.

>
>> And that the only decision that is really being made 
>> now that is likely to be hard to go back on is the introduction of a 
>> quagga-instance wide unique VRF ID.
> Right.
>
> That VRF ID implies 1 zebra process, or cross-process locking.
Again, we disagree on this point.

>  Because 
> we're adding an API which is *guaranteeing* that the zebra side will be 
> able to mux and demux messages correctly based on the VRF ID.
>
>> I think the answer is no.  Any mechanism that could be introduced to
>> support such a mapping could still be introduced at a later date.
> Then sketch it out.
Do you need more than what's above?  I'm not saying it's the only way
the problem can be solved, but it is one way that's pretty
straightforward to implement.

> Otherwise, it looks difficult to me, and I want people to be clear:
>
>    "We're committing to a single process zebra with this design".
>
> At least, we should assume that, because it probably will be true.
>
>> My view on this is that there are different optimization points and 
>> tradeoffs to be made in the different models, and there are *valid* use 
>> cases for each.  I think it's likely that both may be supported in the 
>> long term,
> No, see, that's the point.
>
> I'm saying we need to be clear that accepting this patch set - at least 
> the external ZServ API aspects - implies we may well be *ruling out* the 
> multiple way.
>
> I need people to be clear on that. :)
And I hope you see why I don't agree that it is ruling anything out.

> And note, I think we need to start being much more careful about API 
> compatibility - Zserv particularly. Constant mucking around breaking 
> things makes it harder for stuff to exist comfortably outside Quagga, 
> which I suspect damages the eco-system somewhat in the long-run.
This is a fair point, at least in principle.

...

Lou



_______________________________________________
Quagga-dev mailing list
[email protected]
https://lists.quagga.net/mailman/listinfo/quagga-dev
