Hi Andy,

The actual system instance data isn't in a YANG model itself; rather, that instance data sits in a schema defined by the YANG model. It is typically a list entry populated by the system. But there may also be other list entries created by users/clients, and other areas of the YANG model may reference those lists using leafrefs.
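For example (a hypothetical module, purely illustrative, not from any published model), the pattern looks roughly like this: the system populates entries in a list, clients may add their own entries to the same list, and a leafref elsewhere in the model can point at either kind of entry.

  module example-system-refs {
    yang-version 1.1;
    namespace "urn:example:system-refs";
    prefix esr;

    // Entries in this list may be populated by the system
    // (origin="system") or created by clients; the schema does
    // not distinguish the two.
    list profile {
      key "name";
      leaf name {
        type string;
      }
    }

    // Elsewhere in the model, client-created config can reference
    // any of those entries, system- or client-created.
    container policy {
      leaf active-profile {
        type leafref {
          path "/esr:profile/esr:name";
        }
      }
    }
  }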
Jason

From: netmod <[email protected]> On Behalf Of Andy Bierman
Sent: Monday, November 29, 2021 6:14 PM
To: Kent Watsen <[email protected]>
Cc: [email protected]
Subject: Re: [netmod] Should the origin="system" be required for system configurations copied/pasted into <running>?

On Mon, Nov 29, 2021 at 2:49 PM Kent Watsen <[email protected]> wrote:

Hi Andy,

> RFC 7950 rules about leafref validation are very clear.
> Adding a new datastore to these rules requires a massive change to NMDA
> and all implementations.

Not really or, rather, it seems like it would just be part of adding support for <system>, which implies adding support for <intended> (if not supported already) and doing the validation on <intended> instead.

> 2) As Juergen points out, servers populate the system config in <running> and
> the operator is free to ignore it, reference it, or possibly change it.
> The premise of <system> seems to be that this approach is not "pure enough"
> for NMDA and <running> MUST only contain operator-created configuration.
> Why? What real problems would this change solve?

Servers populating <running> (beyond copying <startup> into <running>) has never been a supported idea. We've always maintained that <running> should only contain client-provided config, right? I'm unsure what the "change" in your last question refers to.

But note that on JUNOS-based systems, there are many hundreds of system-defined nodes available to be referenced by config in <running>. Putting all of these nodes into <running> would clutter the config immensely. FWIW, JUNOS "solves" this by *hiding* these system nodes, such that clients must use a special command just to discover them. And, yes, if any of these hidden nodes is ever referenced, offline validation of <running> will fail. The <system> datastore being discussed effectively mimics this aspect of the JUNOS solution.

There is a separate thread about whether *offline* validation of <running> *alone* must succeed. The soft (fallback) solution is to have clients copy/paste the referenced subset of the system-defined nodes into <running>. The theory is that, since it's just a subset, it won't clutter things up too badly, and, since some system-defined nodes must already be copy/pasted into <running> in order for clients to configure descendant nodes (e.g., some tunable for the system-defined "lo" interface), that water is already under the bridge, so to speak, on the copy/pasting debate. The only reason for not already demanding that clients also copy/paste the subset of referenced system-defined nodes is that it is completely unnecessary, unless trying to ensure offline validation of <running> alone.

IMO the least disruptive solution possible should be used.

There is a use-case for adding "origin" support to the <running> datastore in the <get-data> operation. This allows an NMDA client to identify system config that is not being used in <operational>.

All "resource" oriented data models should have some mechanism to require the resources to be utilized or enabled somehow, within the data model. Unfortunately this cannot be standardized with a one-size-fits-all solution.

A vendor must somehow populate these data models with origin=system data nodes. The "hidden system" data uses proprietary logic to decide what nodes to add (and when). This data is probably not represented with YANG models.
Transforming the hidden system data to the appropriate YANG models in any datastore is an implementation detail, out of scope for standardization.

> Andy

Kent, as a contributor.

Andy
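To make the "resource" point above concrete, here is a hypothetical YANG sketch (module and node names invented for illustration, assuming nothing beyond what the thread describes): the model itself can require that a system-populated resource be explicitly enabled before it has any effect.

  module example-resource-enable {
    yang-version 1.1;
    namespace "urn:example:resource-enable";
    prefix ere;

    // Hypothetical sketch: the server pre-populates this list with
    // origin="system" entries, but an entry has no effect until a
    // client explicitly enables it from <running>.
    list resource {
      key "name";
      leaf name {
        type string;
      }
      leaf enabled {
        type boolean;
        default "false";
        description
          "A system-created resource is inert until a client
           sets this to 'true'.";
      }
    }
  }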
_______________________________________________
netmod mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/netmod
