On September 25, 2015 at 15:45:16, Nick Hilliard ([email protected]) wrote:
> it's easy enough to complain about tools and constructs not being there,
> but there's another fundamental problem, namely that the config semantic
> requirements of the network are complicated relative to the simplicity of
> the config atoms. Mixing in different language interpretations from
> different vendors adds to the mess.

Right, 90% of the work of building a vendor-neutral YANG model is working out 
exactly what the required functionality is, and how it should behave. For 
example, take a simple BGP configuration knob - allowing one’s own ASN in the 
AS path. Even here, there are differences in how this is implemented across 
vendors: some allow the operator to specify N permitted occurrences, some have 
only a boolean on/off value.
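The two vendor interpretations can be sketched as contrasting YANG leaves. These are illustrative fragments with invented leaf names, not the actual openconfig-bgp definitions:

```yang
// Style A: the operator specifies how many occurrences of the
// local ASN are tolerated in a received AS path (the "N
// instances" vendors).
leaf allow-own-as {
  type uint8;
  default 0;
  description
    "Number of occurrences of the local AS permitted in the
     AS path of received routes.";
}

// Style B: a simple on/off switch (the vendors that expose
// only a boolean).
leaf allow-own-as-enabled {
  type boolean;
  default false;
  description
    "Permit the local AS to appear in received AS paths.";
}
```

A vendor-neutral model has to pick one of these shapes (or something more general still) and then define how the other behaviour maps onto it.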

The work required is to figure out what is needed from an operational 
perspective - and then to work with the folks at the vendors to map their 
backend config databases to the parameters the model exposes.

If you review openconfig-bgp (which is also the proposed IETF standard), this 
is something that I think we’ve done a reasonably good job of. In some cases 
there are deviations from the model - but (having written deviations files for 
two major routing vendors’ implementations) these are in general of the form 
“knob X is not supported at the {neighbor,group} hierarchy but only at [some 
higher level]”.
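A deviations file for that kind of restriction is short. The sketch below is entirely hypothetical - the module name, namespace, and deviated path are invented for illustration, and the prefixes are only assumed to match the imported model:

```yang
module vendor-x-bgp-deviations {
  namespace "http://example.com/vendor-x-bgp-deviations";
  prefix vx-bgp-dev;

  import openconfig-bgp { prefix oc-bgp; }

  // Hypothetical: on this platform the knob is only supported
  // at a higher level of the hierarchy, so the per-neighbor
  // instance is declared unsupported.
  deviation "/oc-bgp:bgp/oc-bgp:neighbors/oc-bgp:neighbor/"
          + "oc-bgp:timers/oc-bgp:config/oc-bgp:hold-time" {
    deviate not-supported;
  }
}
```

A management system that reads the deviations alongside the base model then knows, per platform, exactly which parts of the standard tree it may touch.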

So, once one does the work of normalising the interpretations of the RFCs - 
and of features that have no RFC - one can start to reduce the complexity 
there.

Amalgamation of these functions into services is, as Neil says, the key 
requirement for moving automation of network operations forward. Moving from 
“configure atom X” to “configure me a series of atoms A, B, C .. N which make 
up a service” starts to move one towards being able to push ‘real’ things onto 
the network. Of course, there have already been systems that do this, but IME, 
their modularity and flexibility are not sufficient. A model-driven approach 
allows atoms to be defined more programmatically, such that combining them is 
easier.
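A service-level model that bundles atoms might look something like the sketch below - the container and leaf names are entirely hypothetical, and the point is only the shape: one service entry that an orchestration layer expands into several lower-level atoms:

```yang
// Hypothetical "peering service": each peer entry expands into
// an interface atom, a BGP neighbor atom, and a policy atom.
container peering-service {
  list peer {
    key "name";
    leaf name    { type string; }
    leaf peer-as { type uint32; }  // remote ASN
    leaf port    { type string; }  // physical port to configure
    leaf policy  { type string; }  // import/export policy to attach
  }
}
```

Because the underlying atoms are themselves model-driven, the mapping from a `peer` entry to per-vendor configuration can be generated rather than hand-written for each platform.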

The tools are a small part of the problem, absolutely. Starting to define the 
‘service’ components one needs, which combine usefully, is another. This 
requires some careful consideration of what you need to expose, and some view 
of what the abstraction layer you’re building needs to do - does it simply need 
to say “prefer peer X over peer Y”, or does it need to do more (prefer paths 
matching some criteria over others)?

With both service models and the lifecycle that surrounds the provisioning of 
the network, there are many organisation-specific requirements. The products 
in this space (e.g., tail-f’s NCS) are really toolkits that you build on. To 
be honest, I don’t see a single open source solution really coming together 
for a good while, since the requirements will be so diverse.

For the record, the NAPALM approach is really cool - I think it just depends on 
where you’re trying to get to. For me, I want to move away from the idea that 
the CLI is acceptable :-) The approach taken there is nice because it works 
with what’s there today.

The audience reading this thread may be interested in the fact that Juniper are 
publicly talking about supporting the OpenConfig models on their devices: 
https://vimeo.com/139447948.

Best,
r.


