Thx Kent. Please see inline.
Jason
From: Kent Watsen <[email protected]>
Sent: Monday, August 9, 2021 9:59 PM
To: Andy Bierman <[email protected]>
Cc: Sterne, Jason (Nokia - CA/Ottawa) <[email protected]>; [email protected]
Subject: Re: [netmod] system configuration sync mechanism
There was a request for concrete use cases. This email from before was good:
https://mailarchive.ietf.org/arch/msg/netmod/v5cNLcC2F_OT8t-407F3Zj6Vws4/
[>>JTS: ] I saw the taxonomy of resource-dependent vs resource-independent, and
the sub-categories of applied-immediately vs applied-after-reference. And then
the concept of "conditional" system config. But I think there may also be two
orthogonal concepts that need to be clarified:
1) deletable/removable vs non-removable
2) modifiable vs non-modifiable (i.e. can you modify elements in the list entry)
Let me give you an example use case of deletable+modifiable (a sketch of the
instance data follows the list):
- user "admin" (list entry)
- exists in <running> when the server boots up with no <startup> DS
- can be deleted by the client (and then a copy-config from <running> to
<startup> would create a <startup> with the absence of the user "admin")
- I think this particular example would be "resource-independent",
"applied-immediately", and "non-conditional" (but there could also be
deletable+modifiable objects that are applied-after-reference or
resource-dependent)
- but I also think that deletable entries like this can just be handled as
system-populated <running> config at boot time when there is no <startup> DS
present
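Here's a minimal sketch of that boot-time instance data, borrowing the user
list from ietf-system (RFC 7317) purely for illustration (the password value
is a placeholder):

  <!-- present in <running> after a boot with no <startup>;
       the client may modify or delete it -->
  <system xmlns="urn:ietf:params:xml:ns:yang:ietf-system">
    <authentication>
      <user>
        <name>admin</name>
        <password>$0$placeholder</password>
      </user>
    </authentication>
  </system>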
An example of non-deletable+modifiable (a hypothetical YANG sketch follows the
list):
- CPU protection policy number 255 (rate limit for extracted control traffic)
that can be assigned to an interface (leafref from interface->protection policy)
- not shown in <running> unless the client explicitly modifies parameters of
that protection policy
- one option: have this show up in <running> if the client explicitly creates
the list entry
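A hypothetical YANG sketch of the shape (all names invented for illustration):

  container cpu-protection {
    list policy {
      key "id";
      leaf id { type uint8; }        // entry 255 exists at boot; non-deletable
      leaf rate-limit {
        type uint32;
        units "packets-per-second";  // modifiable by the client
      }
    }
  }

  // under /interfaces/interface:
  leaf protection-policy {
    type leafref {
      path "/cpu-protection/policy/id";
    }
  }

The leafref only validates offline if policy 255 actually appears in the
datastore being validated - which is the crux of the problem.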
An example of non-deletable+non-modifiable (again sketched after the list):
- QoS policy 1 that is assigned to every interface by default
- The leafref from interface->qos policy 1 has a default value of "1" in the
YANG model
- one option: have this show up in <running> if the client explicitly creates
the list entry
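Sketched the same way (again, all names hypothetical):

  // under /interfaces/interface:
  leaf qos-policy {
    type leafref {
      path "/qos/policy/id";
    }
    default "1";   // refers to the system-created policy 1, which never
                   // appears in <running> unless the client writes it -
                   // so a strict offline validator has nothing to resolve
                   // the reference against
  }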
More below:
[>>JTS: ] Please see inline below
On Aug 9, 2021, at 6:23 PM, Andy Bierman
<[email protected]<mailto:[email protected]>> wrote:
On Mon, Aug 9, 2021 at 2:51 PM Sterne, Jason (Nokia - CA/Ottawa)
<[email protected]<mailto:[email protected]>> wrote:
Hi guys,
I'm late to the game on this thread but I did read through the entire thing
(whew - can't promise I absorbed every nuance though). I am familiar with this
issue. I've been dealing with it from the server implementation side (some
clients *do* complain if there are references to missing "system config" list
entries).
One of the pretty fundamental issues IMO is whether we want good ol' standard
<running> to always be valid (including for an offline client or tool to be
able to validate instance data retrieved from a server against the YANG
models). Even without this "system config" topic, we already have a looming
problem with config templates & expansion. Servers with YANG models that
document a lot of constraints (e.g., leafrefs, mandatory statements, etc.) are
more likely to hit this issue of an invalid running config (offline).
I think that it is already the case with JUNOS that <running> by itself cannot
be validated by a client unaware of how JUNOS templates work…as the templates
may supply mandatory nodes and/or values, or some validations only make sense
to evaluate when the config is flattened (e.g., ensuring "max-elements" is not
exceeded, ensuring ACLs always end with a terminal rule, etc.).
[>>JTS: ] Clients may not hit this as much with JUNOS. The latest models I
looked at didn't have any leafrefs and had very few mandatory statements. So
I'm not sure what parts of the templates provide data that is required to
validate running against the YANG models?
But for models with leafrefs and mandatory statements, there is high likelihood
that clients won't be able to do offline validation if templates are used.
I'm doubtful that it is easy for a client to know how templates are expanded
for all given servers. Unless that expansion is fully standardized (which would
be challenging given multiple different shipping implementations in the
industry), there can be some subtle corner cases and it can be complex
(multiple levels of template application, exclusion rules, etc).
Agreed, not easy but…
Some [smart] clients may never need to grok how device-level templates are
expanded. For instance, the NMS I worked on in a previous life would always
onboard a new device by asking the device to provide its config already
expanded, and thereafter would never send templated-config to the device (note:
the NMS had its own template mechanism).
[>>JTS: ] But that's moving the mastership/ownership (i.e. definitive view of
the <running>) down to the server. Or abandoning templates as part of the
operator-owned config.
On the other end of the client-spectrum, other [dumb] clients may also not need
to grok how device-level templates are expanded, as they mostly push simple
updates (e.g., open a port) known to pass validation and/or rely on server-side
validation.
[>>JTS: ] If we only want to solve server-side validation then this gets
easier. And maybe that is the answer. But some clients today do complain when
offline validation fails.
Of course, there MAY be clients in-between that read/write templated-config
and, if they wish to perform off-box validation, most likely will need to grok
how templates are expanded.
[>>JTS: ] Yes - I think this is the issue (and workflow) we're trying to
address here.
I agree there can be dynamically added system config (e.g. create a new qos
policy, and some queue list entries are automatically created inside that
policy).
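A hypothetical read-back sketch (invented namespace and names): the client
sent only the policy, and the system added the queue entry inside it:

  <qos xmlns="urn:example:qos">
    <policy>
      <name>p1</name>
      <queue>          <!-- auto-created by the system -->
        <id>1</id>
      </queue>
    </policy>
  </qos>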
I'm unsure how this relates.
[>>JTS: ] Qiufang was asking if conditional system config exists. It does in
some implementations.
Note that config added by the system could be deletable or non-deletable. For
deletable items there were some previous discussions a few years ago that those
list entries could be simply a default config that is populated if the startup
datastore is empty at boot time.
Yes, but that’s a different use-case, right? (see the link to concrete use
cases provided at top)
[>>JTS: ] Not really a different use-case. A different attribute of some of
the use cases we're discussing. I think it's another dimension we need to
consider (along with resource-dependent vs resource-independent,
applied-immediately vs applied-after-reference, conditional vs
non-conditional).
The system config list entries could also have leafs that are modifiable and
leafs that are immutable. For the modifiable objects I think this is basically
the system merging the explicit config with the system config (with the
explicit taking precedence).
The “modifiable” leafs are in this use-case, assuming “modifiable” is
synonymous with “over-writable”. Some “immutable” leafs are more like those
discussed in RFC 8342 Section 5, but others are like what Andy discusses below
(note: these immutable nodes are not currently in scope, as I understand it).
[>>JTS: ] Yes - I'm thinking of modifiable as over-writable. Maybe this is a
separate issue - I'm not sure yet.
Perhaps a lot of this system config is a legacy holdover from human-driven
CLIs. For machine interfaces maybe it's reasonable to have clients explicitly
define their config *if* they need offline validation of <running> to succeed
(i.e. allow clients to explicitly configure any of the system config policies
they are actually using & referencing). In many cases, users want the "master"
view of the config to live up in the client/OSS side and just push it down to
the server (without having the server touch the contents of the running, i.e. a
read-back should return identically what was sent).
Agree that a read-back of <running> should always return identically what was
sent, but that doesn’t mean that there isn’t system config in play.
[>>JTS: ] Not sure what you mean here. If the client sent config X, and in X
there is a leafref to some system list entry, and the system list entry wasn't
sent by the client, then I'm doubtful the readback should return the system
list entry (in which case the config would be considered invalid by offline
validation, e.g. analysis of the instance data against the YANG model with
yanglint).
It might be good to set up a virtual interim or call of some sort to discuss
this one. It is pretty complex.
Good idea, but I’d hope first to get past the “do we understand the
requirements” part of the discussion. Of course, if people prefer, we could
schedule an interim to discuss the use-cases also.
I think Balazs captured the 2 use-cases very well.
The use-cases in his 7/31 message are good, but neither of them is part of
this work’s scope, as I understand it. [update: his use-case ‘A’ may be in
scope, depending on how we set the scope]
[>>JTS: ] I'm confused about your and Andy's references to Balazs' use cases.
Do you mean this email?
https://mailarchive.ietf.org/arch/msg/netmod/ncgHfMzteHL3cnqCdpDoojw171s/
I see principles. But I'm not sure what you mean by use-case 'A'?
For implementations that combine the system into <running>, undoing
this behavior would be quite disruptive and therefore not likely to be done.
I do not support merging <system> into <running> (unless
appropriately-annotated in a “with-system” <get-config> response). I do
support merging <system> into <intended>.
[>>JTS: ] I'm also doubtful about merging into running (that means magic config
appearing in running). Even with annotation, a client that doesn’t understand
the annotation would just see this as config being added that they didn't add.
An explicit RPC to cause the addition might be OK, but then a client would have
to continually issue that RPC (maybe after every edit-config).
It would help to have some common understanding of the contents of <system>.
Maybe start with the well-known "interface problem". The system configures
almost empty physical interfaces. The user is allowed to add, modify, and
delete all descendants except the "name" and "type" leafs, which are set by the
system.
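For reference, NMDA can already express that split in <operational> via the
origin annotation, along the lines of the RFC 8342 examples ("eth0" and the
description are placeholder values):

  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces"
              xmlns:or="urn:ietf:params:xml:ns:yang:ietf-origin">
    <interface or:origin="or:system">
      <name>eth0</name>
      <type xmlns:ianaift="urn:ietf:params:xml:ns:yang:iana-if-type"
            >ianaift:ethernetCsmacd</type>
      <description or:origin="or:intended">uplink</description>
    </interface>
  </interfaces>

The open question is how (or whether) the same split shows up in <running>.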
I agree that concrete use-cases are needed. The link provided at top provides
some, but not all, and definitely not the specific one you just mentioned
(which relates to Balazs’s use-case ‘A’ and Jason’s “immutable” config).
In this case <running> will have the /interfaces/interface nodes in it (or
else the client-accessible nodes could not be represented in NETCONF). So any
config=true XPath can point at name or type leafs in <running> or <operational>.
Ack.
I have heard that <system> is needed because it has nodes in it that do not
exist
in <running> or <operational>, but could exist. The premise is that the server
could have created these nodes but it did not for some reason. And the client
can inspect these nodes and somehow force the server to change the nodes it
adds to <running>. (So what is the real use-case then?)
This is what the link provided at the top of this message regards.
Retrieving the system-created nodes with origin=system is already supported by
NMDA.
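E.g., a sketch of that retrieval using <get-data> from RFC 8526:

  <rpc message-id="101"
       xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
    <get-data xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-nmda"
              xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores"
              xmlns:or="urn:ietf:params:xml:ns:yang:ietf-origin">
      <datastore>ds:operational</datastore>
      <origin-filter>or:system</origin-filter>
      <with-origin/>
    </get-data>
  </rpc>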
True, but this “system config” feels different, or at least parts of it does.
We may need to tease apart the “leafref-able system nodes” from the
“resource-independent nodes” from the “resource-dependent nodes”.
K.
_______________________________________________
netmod mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/netmod