On 10/02/2016 13:29, Juergen Schoenwaelder wrote:
On Tue, Feb 09, 2016 at 11:17:13AM +0000, Robert Wilton wrote:
On 08/02/2016 16:20, Juergen Schoenwaelder wrote:
On Mon, Feb 08, 2016 at 01:21:52PM +0000, Robert Wilton wrote:
So, IIRC, your concern is specifically that a generic YANG client
library cannot validate that the RPC reply is well formed against the
schema without knowledge about the request. Is that correct?
None of the existing tools that assume YANG-defined data is XML-encoded
according to RFC 6020 will be able to process data in a new encoding.
OK.
As Lou indicates, the proposed protocol schema encoding could also be
generated by tooling (e.g. a pyang plugin).
That is, a client can take the source YANG models (e.g. as defined by the
IETF) and apply a transformation that generates a set of YANG models that
are identical except that they have the extra applied-configuration nodes
added to them.
An opstate-aware client could use the generated YANG models both to manage
the manageability data internally and to validate that the messages sent
to/from the server conform to the extended schema.
I fail to see how on-the-fly schema translation makes things simpler (and I
am sure a number of little questions pop up if you really do this).
I am still waiting for the argument that convinces me that merging
datastores into a single tree makes the writing of clients orders of
magnitude simpler.
I cannot present such an argument that a protocol based solution is
orders of magnitude better than using multiple datastores.
But I wonder whether the OpenConfig operators might also ask the WG the
same question: is a datastore solution orders of magnitude better than the
OpenConfig solution?
My best guess is that, at the moment, they would regard a datastore
solution as inferior to their current working solution (for which they have
models that are further ahead than the IETF's, running code, some major
vendors committed to implementing those models, and seemingly more interest
from the large network operators in using those models).
Will vendors actually implement a datastore based solution if the
OpenConfig operators (who are raising the requirement) don't actually
want/need to use it?
Then I guess the final question I have is whether SDO-produced models will
still be relevant if they lag two years behind the OpenConfig models, and
those models have become a de facto standard?
If the only real technical argument is "I need to be able to retrieve data
from multiple datastores in a single RPC operation", then let's simply
define such an RPC (one that works for arbitrary combinations of
datastores, not just two specific ones) and move on. If the problem is in
the existing RPC primitives, then let's extend them instead of building
hacks around them using on-the-fly schema translations and whatnot.
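For illustration only, such an RPC might be sketched along the following
lines; the module, the identity names, and the use of anyxml for the result
are assumptions made for the sketch, not an agreed design:

```yang
module example-multi-datastore-get {
  namespace "urn:example:multi-datastore-get";
  prefix exmdg;

  // Hypothetical identities naming the datastores a request
  // may target; a server could derive further identities.
  identity datastore;
  identity running     { base datastore; }
  identity applied     { base datastore; }
  identity operational { base datastore; }

  rpc get-from-datastores {
    description
      "Sketch of an RPC that retrieves data from an arbitrary
       combination of datastores in a single operation.";
    input {
      leaf-list datastore {
        type identityref { base datastore; }
        min-elements 1;
      }
      anyxml filter {
        description
          "Optional subtree or XPath filter, as in NETCONF <get>.";
      }
    }
    output {
      anyxml data {
        description
          "Requested data, tagged per source datastore by the
           encoding (details deliberately left open here).";
      }
    }
  }
}
```

Because the input is a leaf-list of identityrefs rather than two fixed
targets, the same operation would cover any future datastores as well.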
I've already stated what I perceive to be the client warts with datastores.
Martin has commented on some of those. Alas, I doubt you would find them
any more compelling if I restated them again.
Rob
/js
_______________________________________________
netmod mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/netmod