On Wed, Feb 22, 2017 at 11:22:22AM +0100, Ladislav Lhotka wrote:
> 
> > On 22 Feb 2017, at 09:31, Juergen Schoenwaelder
> > <[email protected]> wrote:
> > 
> > On Wed, Feb 22, 2017 at 08:41:55AM +0100, Ladislav Lhotka wrote:
> >> 
> >>> 
> >>> The WG needs to decide what the expectations are for templates and
> >>> whether validity of templated config means just (a), just (b) or both
> >>> (a) and (b). I actually think it should be (a) and (b) but there might
> >>> be implementations that only do (a) or only do (b).
> >> 
> >> We now have:
> >> 
> >> 1. YANG as a language for specifying schema, datatypes and constraints.
> > 
> > YANG also defines when and how constraints are expected to be checked. Are
> 
> The semantics of constraints are mostly defined in terms of a data tree,
> child nodes etc. that are pretty universal. Accessible trees for evaluating
> XPath have specific definitions, such as state data + running, but the
> XPath semantics don't really depend on this - the tree just has to be
> defined somehow.
> 
> > you saying we should remove this, i.e., have a language where I can write
> > down must constraints but leave it open when and how they are checked?
> 
> Yes. Even now, you can write a must constraint referring to a node that may
> not eventually exist in the data model (because the corresponding module
> isn't implemented). YANG modules are just building blocks, it is the data
> model that has to make sense as a whole.
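[For concreteness, a minimal sketch of the kind of cross-module must
constraint being discussed; the module, prefix, and node names are invented
for illustration:]

```yang
module example-a {
  namespace "urn:example:a";
  prefix a;

  // example-b is only imported here; a server may list it in its
  // YANG library with conformance-type "import" rather than
  // "implement", in which case its data nodes never appear in the
  // data tree.
  import example-b {
    prefix b;
  }

  leaf uplink {
    type string;
    // Valid YANG on its own, but whether this expression can ever be
    // satisfied depends on the composed data model, not on this
    // module alone: if example-b is not implemented, the subtree
    // /b:interfaces/b:interface simply does not exist.
    must "/b:interfaces/b:interface[b:name = current()]";
  }
}
```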
YANG says very clearly what it means for a configuration datastore to be
valid and we have a common understanding that the <running> datastore is
always kept valid.

> >> 2. YANG library as a means for composing YANG modules into data models.
> > 
> > YANG library reports the set of YANG modules implemented. I do not think
> > it does composition of YANG modules into data models.
> 
> So what's your definition of a data model? For me it's exactly what YANG
> Library says, including supported features etc. Schema mount could be an
> additional part of this. The point is that implementations and tools that
> want to do validation have to be able to compose the schema of the entire
> data tree, and the result is what I call the data model.

OK. Certainly one way to look at things.

> >> What's IMO needed is
> >> 
> >> 3. a formalism for binding data models to specific checkpoints in a
> >> network management workflow (such as intended or ephemeral datastore).
> >> Different use cases may have different datastores and workflows, and
> >> that's why I believe this has to be "parametrised".
> >> 
> >> RFC 6020/7950 does #3 in a relatively rigid way that really works only
> >> for the NETCONF protocol (which was of course the original aim).
> > 
> > I do not agree with the statement that the model used by YANG only
> > works for the NETCONF protocol. The question is whether
> 
> Well, yes, you can use it for any protocol as long as it has certain
> datastores and operations, or if you selectively ignore/reinterpret parts
> of RFC 7950. Sample from sec. 8.3.3:
> 
>    If the datastore is "running" or "startup", these constraints MUST be
>    enforced at the end of the <edit-config> or <copy-config> operation.
>    If the datastore is "candidate", the constraint enforcement is delayed
>    until a <commit> or <validate> operation takes place.
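[The deferred-enforcement rule quoted above can be illustrated with a plain
NETCONF exchange; the <system>/<hostname> payload is an invented example,
and the <validate> operation assumes the :validate capability:]

```xml
<!-- Edits to <candidate> are accepted even if they leave the datastore
     temporarily invalid; per RFC 7950 sec. 8.3.3 the constraints are
     not enforced at this point. -->
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target><candidate/></target>
    <config>
      <system xmlns="urn:example:system">
        <hostname>rtr-1</hostname>
      </system>
    </config>
  </edit-config>
</rpc>

<!-- Constraint enforcement (must, mandatory, leafref, ...) happens
     only here, at <validate>, or alternatively at <commit>. -->
<rpc message-id="102" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <validate>
    <source><candidate/></source>
  </validate>
</rpc>
```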
> > (a) we can agree on a common datastore model with clearly defined
> >     semantics such that it simplifies implementations of clients and
> >     servers since datastore semantics are predictable (this is what
> >     the datastore design team has been working on)
> 
> I doubt that any particular datastore model can work for everybody. What's
> in the revised-datastore draft is already way too complex for some use
> cases but, on the other hand, other use cases may need something different
> or more complicated.

Not every datastore needs to be present in every implementation or
accessible over every protocol. If this is not stated clearly enough, we
may need to improve the writing.

> > (b) or we raise the bar for clients by requiring that clients obtain
> >     sufficient information about the specific workflow supported by a
> >     server so that they can reliably map a configuration change
> >     request to the appropriate datastore the server likes to have
> >     modified.
> > 
> > My fear is that (b) significantly raises the bar and thus many clients
> > in reality will simply assume certain datastore semantics and then
> > fail to interoperate with other servers. We may get back to
> > vendor-specific silos.
> 
> I don't mean that implementations will necessarily have to dynamically
> parse and set up such a workflow - a protocol definition could simply
> specify a particular workflow, or a few related ones. In fact, NETCONF
> already covers a number of workflows that are used in the wild, including
> 
> - persistent and writable running
> 
> - persistent startup + writable ephemeral running
> 
> - persistent startup + persistent writable candidate + ephemeral
>   read-only running
> 
> I also suspect that most of the trouble the I2RS folks have had with YANG
> was due to the need to retrofit their workflow to that of NETCONF.

So you want per-protocol datastore models? How do you then deal with
implementations that have to support multiple protocols, i.e., multiple
datastore models?
How do you ensure that all the combinations you get can be implemented
meaningfully together? But then, you wrote 'different use cases' and not
'different protocols', so it did sound like you want even different use
cases within the same protocol to use different datastore semantics and
workflows. It seems there is a flexibility vs. complexity tradeoff here.

/js

-- 
Juergen Schoenwaelder           Jacobs University Bremen gGmbH
Phone: +49 421 200 3587         Campus Ring 1 | 28759 Bremen | Germany
Fax:   +49 421 200 3103         <http://www.jacobs-university.de/>

_______________________________________________
netmod mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/netmod
