There was a request for concrete use cases.  This email from before was good:

        
https://mailarchive.ietf.org/arch/msg/netmod/v5cNLcC2F_OT8t-407F3Zj6Vws4/


More below:



> On Aug 9, 2021, at 6:23 PM, Andy Bierman <[email protected]> wrote:
> 
> On Mon, Aug 9, 2021 at 2:51 PM Sterne, Jason (Nokia - CA/Ottawa) 
> <[email protected]> wrote:
> Hi guys,
> 
>  
> 
> I'm late to the game on this thread but I did read through the entire thing 
> (whew - can't promise I absorbed every nuance though). I am familiar with 
> this issue. I've been dealing with it from the server implementation side 
> (some clients *do* complain if there are references to missing "system 
> config" list entries).
> 
>  
> 
> One of the pretty fundamental issues IMO is whether we want good ol' standard 
> <running> to always be valid (including for an offline client or tool to be 
> able to validate instance data retrieved from a server against the YANG 
> models). Even without this "system config" topic, we already have a looming 
> problem with config templates & expansion.  Servers with YANG models that 
> document a lot of constraints (e.g. leafrefs, mandatory statements, etc) are 
> more likely to hit this issue of an invalid running config (offline).
> 

I think that it is already the case with JUNOS that <running> by itself cannot 
be validated by a client unaware of how JUNOS templates work…as the templates 
may supply mandatory nodes and/or values, or some validations only make sense 
to evaluate when the config is flattened (e.g., ensuring "max-elements" is not 
exceeded, ensuring ACLs always end with a terminal rule, etc.).
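
To illustrate (schematic only; the group/apply-groups syntax below is
illustrative, not exact Junos XML): a check like "the ACL must end with a
terminal rule" can only be evaluated on the expanded config:

    <!-- un-expanded <running>: the terminal "deny all" rule comes from a
         template, so the check only makes sense on the flattened config -->
    <groups>
      <name>acl-tail</name>
      <acl>
        <rule>
          <name>deny-all</name>
          <action>deny</action>
        </rule>
      </acl>
    </groups>
    <acl>
      <apply-groups>acl-tail</apply-groups>
      <rule>
        <name>permit-mgmt</name>
        <action>permit</action>
      </rule>
      <!-- after expansion, the deny-all rule is appended here -->
    </acl>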


> I'm doubtful that it is easy for a client to know how templates are expanded 
> for all given servers. Unless that expansion is fully standardized (which 
> would be challenging given multiple different shipping implementations in the 
> industry), there can be some subtle corner cases and it can be complex 
> (multiple levels of template application, exclusion rules, etc).
> 
Agreed, not easy but…

Some [smart] clients may never need to grok how device-level templates are 
expanded.  For instance, the NMS I worked on in a previous life would always 
onboard a new device by asking the device to provide its config already 
expanded, and thereafter would never send templated-config to the device (note: 
the NMS had its own template mechanism).  

On the other end of the client-spectrum, other [dumb] clients may also not need 
to grok how device-level templates are expanded, as they mostly push simple 
updates (e.g., open a port) known to pass validation and/or rely on server-side 
validation.

Of course, there MAY be clients in-between that read/write templated-config 
and, if they wish to perform off-box validation, most likely will need to grok 
how templates are expanded.


> I agree there can be dynamically added system config (e.g. create a new qos 
> policy, and some queue list entries are automatically created inside that 
> policy).
> 
I'm unsure how this relates.


> Note that config added by the system could be deletable or non-deletable. For 
> deletable items there were some previous discussions a few years ago that 
> those list entries could be simply a default config that is populated if the 
> startup datastore is empty at boot time.
> 
Yes, but I think that’s a different use-case, right?  (see the link to concrete use 
cases provided at top)


> The system config list entries could also have leafs that are modifiable and 
> leafs that are immutable. For the modifiable objects I think this is 
> basically the system merging the explicit config with the system config (with 
> the explicit taking precedence).

The “modifiable” leafs are in this use-case, assuming “modifiable” is 
synonymous with “over-writable”.  Some “immutable” leafs are more like those 
discussed in RFC 8342 Section 5, but others are like what Andy discusses below 
(note: these immutable nodes are not currently in scope, as I understand it).
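
As a sketch of the “over-writable” case (all names hypothetical), the system
supplies a value that explicit config overrides when the two are merged
(e.g., into <intended>):

    <!-- in <system>: a system-supplied, over-writable leaf -->
    <qos-policy>
      <name>best-effort</name>
      <queue-count>4</queue-count>
    </qos-policy>

    <!-- in <running>: the client over-writes just that leaf -->
    <qos-policy>
      <name>best-effort</name>
      <queue-count>8</queue-count>
    </qos-policy>

    <!-- merged view (e.g., <intended>): explicit config wins -->
    <qos-policy>
      <name>best-effort</name>
      <queue-count>8</queue-count>
    </qos-policy>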


> Perhaps a lot of this system config is a legacy hold over from human-driven 
> CLIs. For machine interfaces maybe it's reasonable to have clients explicitly 
> define their config *if* they need offline validation of <running> to succeed 
> (i.e. allow clients to explicitly configure any of the system config policies 
> they are actually using & referencing).  In many cases, users want the 
> "master" view of the config to live up in the client/OSS side and just push 
> it down to the server (without having the server touch the contents of the 
> running, i.e. a read-back should return identically what was sent).
> 
Agree that a read-back of <running> should always return identically what was 
sent, but that doesn’t mean that there isn’t system config in play.

> It might be good to set up a virtual interim or call of some sort to discuss 
> this one. It is pretty complex.
> 
Good idea, but I’d hope first to get past the “do we understand the 
requirements” part of the discussion.  Of course, if people prefer, we could 
schedule an interim to discuss the use-cases also.

> I think Balasz captured the 2 use-cases very well.

The use-cases in his 7/31 message are good, but neither of them is part of this 
work’s scope, as I understand it.  [update: his use-case ‘A’ may be in scope, 
depending on how we set the scope]


> For implementations that combine the system into <running>, undoing
> this behavior would be quite disruptive and therefore not likely to be done.

I do not support merging <system> into <running> (unless 
appropriately-annotated in a “with-system” <get-config> response).  I do 
support merging <system> into <intended>.
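
For illustration (names hypothetical), this is the kind of case where the
distinction matters: a leafref in <running> may target a list entry that
exists only in <system>, and the reference only resolves in the merged
<intended>:

    <!-- <running>: references a policy the client never configured -->
    <interface>
      <name>ge-0/0/1</name>
      <output-policy>default</output-policy>  <!-- leafref target absent here -->
    </interface>

    <!-- <system>: holds the referenced entry -->
    <qos-policy>
      <name>default</name>
    </qos-policy>

    <!-- <intended> = <running> merged with <system>: the leafref now resolves,
         while <running> alone would fail an offline validation -->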


> It would help to have some common understanding of the contents of <system>.
> Maybe start with the well-known "interface problem".  The system configures 
> almost empty physical interfaces.  The user is allowed to add, modify, and 
> delete all descendants except the "name" and "type" leaf, which are set by 
> the system.

I agree that concrete use-cases are needed.  The link provided at top provides 
some, but not all, and definitely not the specific one you just mentioned 
(which relates to Balazs’s use-case ‘A’ and Jason’s “immutable” config).
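
A minimal sketch of that specific case (the interface entry uses
ietf-interfaces; the referencing node at the end is hypothetical):

    <!-- system-created entry as seen in <running>; "name" and "type" are the
         immutable, system-set leafs -->
    <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
      <interface>
        <name>ge-0/0/0</name>
        <type xmlns:ianaift="urn:ietf:params:xml:ns:yang:iana-if-type">ianaift:ethernetCsmacd</type>
        <description>added by the client</description>
      </interface>
    </interfaces>

    <!-- any config=true leafref/XPath elsewhere can point at that name
         (the referencing node shown here is hypothetical) -->
    <lldp>
      <interface>ge-0/0/0</interface>
    </lldp>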


> In this case <running> will have the /interfaces/interface nodes in them (or 
> else the client-accessible nodes could not be represented in NETCONF).  So 
> any config=true XPath can point at name or type leafs in <running> or 
> <operational>.

Ack.


> I have heard that <system> is needed because it has nodes in it that do not 
> exist
> in <running> or <operational>, but could exist.  The premise is that the 
> server could have created these nodes but it did not for some reason.  And 
> the client can inspect these nodes and somehow force the server to change the 
> nodes it adds to <running>. (So what is the real use-case then?)

This is what the link provided at the top of this message regards.


> Retrieving the system-created nodes with origin=system is already supported 
> by NMDA.

True, but this “system config” feels different, or at least parts of it do.  
We may need to tease apart the “leafref-able system nodes” from the 
“resource-independent nodes” from the “resource-dependent nodes”.
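
For reference, NMDA retrieval of system-created nodes looks roughly like this
(a <get-data> on <operational> with <with-origin/>, per RFC 8526; origin
annotation per RFC 8342):

    <!-- excerpt of a <get-data> reply on <operational> -->
    <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces"
                xmlns:or="urn:ietf:params:xml:ns:yang:ietf-origin">
      <interface or:origin="or:system">
        <name>ge-0/0/0</name>
        <type xmlns:ianaift="urn:ietf:params:xml:ns:yang:iana-if-type">ianaift:ethernetCsmacd</type>
        <enabled or:origin="or:intended">true</enabled>
      </interface>
    </interfaces>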



K.


_______________________________________________
netmod mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/netmod
