Hi Jan,
Thanks for your answer.
Reading further, I have found more evidence that the “digital network twin” should 
be single/unified for a network of any size:
- I mentioned below that OpenConfig merged its configuration and operational 
data models many years ago,
- RFC 8342 asks for the same ☺,
- RFC 8969 goes a step further and asks for cross-layer convergence into a 
unified YANG model.
It looks obvious: all relationships are better kept in a single, consistent 
YANG data store.
This is slowly happening in the IETF (see the examples above).

Yet RFC 9182 has many disclaimers, such as:
“the L3NM is not defined as an augment to the L3SM, because a specific 
structure is required to meet network-oriented L3 needs”
“initial design was interpreted as if the deployment of the L3NM depends on the 
L3SM, while this is not the case”.
I see a couple of reasons for this:
- to minimize disruption for the many existing implementations,
- to trade off unification and interoperability for functionality and 
flexibility.

After the L3NM was disconnected from the L3SM, much of the mapping between 
these models has to be done in a proprietary way by coders.
Imagine that one product maps an event “A” in the L3NM to the configuration 
XpathA in the L3SM, while a different product maps the same event “A” to the 
configuration XpathB in the L3SM.
The two products would then act inconsistently.
Another example: provisioning something in the L3SM could be mapped to 
different XPaths in the L3NM by different products.
This is again a problem if L3NMs from different vendors have to support the 
same VPN.
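
Just to illustrate the kind of proprietary glue this forces, here is a purely 
hypothetical sketch (the module name and every node in it are invented for this 
mail, nothing is standardized) of how one vendor might record its own 
L3NM-to-L3SM correlation; a second vendor would ship a different table pointing 
at different L3SM paths for the very same events:

  module example-vendor-a-nm-sm-mapping {
    yang-version 1.1;
    namespace "urn:example:vendor-a:nm-sm-mapping";
    prefix va-map;

    // Hypothetical, vendor-proprietary correlation between L3NM events
    // and L3SM configuration. Another vendor's product would define a
    // different structure and point at different L3SM paths.
    list event-mapping {
      key "nm-event";
      leaf nm-event {
        type string;
        description
          "Identifier of an event reported against the L3NM
           (the event 'A' in the text above).";
      }
      leaf sm-config-xpath {
        type string;
        description
          "The L3SM configuration XPath this vendor relates the event to
           (XpathA for one product, XpathB for another).";
      }
    }
  }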

If humans (with really the best knowledge and intelligence in the respective 
IETF WGs) were not capable of defining an automatic mapping from the L3SM to 
the L3NM, then there is no hope that an algorithm developed by coders could do 
it.
This would surely break “closed loop control” across vendors.
In reality, it would create challenges even for a single-vendor solution, 
because the coders developing it may not be strong enough.
I suspect that this problem will have to be revisited after some additional 
years of unsatisfactory automation progress.
Not many would agree to a single-vendor solution where it is not a critical 
roadblock.

I agree that, for a single-vendor environment, just the L2NM and a completely 
disconnected L2SM are already big progress (teaching people how to develop big 
systems, a sort of educational value). It is much better than no 
recommendations at all.

Starting the design top-down from the L3SM was right. But losing the automatic 
mapping between the L3NM and the L3SM was a mistake.
It broke the top-down design that is very much needed here.
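
To make the point concrete, the sketch below shows (very roughly) what a 
network model kept attached to the L3SM via “augment” could have looked like. 
The module name and the leaf are invented for this mail, and this is explicitly 
not what RFC 9182 defines:

  module example-l3nm-as-augment {
    yang-version 1.1;
    namespace "urn:example:l3nm-as-augment";
    prefix ex-nm;

    import ietf-l3vpn-svc {
      prefix l3vpn-svc;  // the L3SM module from RFC 8299
    }

    // Hypothetical sketch only: network-oriented details attached
    // directly to the L3SM service instance, so the service-to-network
    // relationship stays in one tree and needs no proprietary mapping.
    augment "/l3vpn-svc:l3vpn-svc/l3vpn-svc:vpn-services/"
          + "l3vpn-svc:vpn-service" {
      leaf example-network-implementation {
        type string;
        description
          "Illustrative leaf only (not from any RFC): some network-level
           attribute kept next to the service data it belongs to.";
      }
    }
  }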

Eduard
From: Jan Lindblad [mailto:[email protected]]
Sent: Wednesday, December 7, 2022 8:35 PM
To: Vasilenko Eduard <[email protected]>
Cc: [email protected]; [email protected]; Paolo Volpato <[email protected]>; 
Xipengxiao <[email protected]>
Subject: Re: [netmod] How many "digital twins" every single network should 
have? Who would map between "twins"?

Hi Eduard,

Hi Automation Gurus,
YANG modules may be treated like a “digital twin” of the network, with 
different resolution/accuracy (depending on module details).
It looks like RFC 8969 is saying that different YANG models (for different 
layers or functions) of the same network should be refinements of the same 
“digital twin”.
Below are some excerpts from RFC 8969 that make me believe in a common Data 
Model for the same network after all the YANG module refinements.

But comparing RFC 8299 (L3SM) with RFC 9182 (L3NM), I conclude that the “Data 
Models” are different (they cannot be automatically mapped), yet they should 
describe/represent the same network.

That is right. There are multiple attempts at modeling the use cases at each 
level of the management stack. This is not unlike how standards develop in 
other areas. Initially, and sometimes even after a long time, there are often 
competing standards. Sometimes even from within the same SDO.

It is evident in this situation that a big job is needed from the vendor to 
*map* the Data Model of the L3SM to the Data Model of the L3NM.

I think you should be careful with the word "vendor" here, as we're talking 
about an entire vendor ecosystem. It is not typical that a router product 
would contain this mapping, but you are right that an NMS or OSS product from 
some vendor might. The mapping from network use cases to network device 
configuration is happening widely today, and a fair portion of all that is 
using YANG in some way.

It is not just a cost/time issue; additionally, it is a big source of 
interoperability issues. Engineers from different vendors would never map it in 
the same way.
I could give similar examples for other RFCs (like the L2SM and L2NM, and many 
more).

Of course. Just like two router vendors would not implement a given IETF 
routing YANG model the same way, NMS/OSS vendors and any service providers that 
choose to do this on their own, will have the same freedom of implementation at 
their level. This freedom does not remove the value of standardized service 
YANG models in any way.

Why is the IETF not following RFC 8969? It looks pretty evident. Why are “Data 
Models” for the same network not automatically mapped?!?

How could they be automatically mapped? Such mappings necessarily depend on use 
case, network circumstances and operator traditions/preferences, so I can't see 
any one-size-fits-all mapping here. Sure, you can make one mapping and declare 
it the one and only. But others may not agree and prefer to go with their own 
mapping.

It would have been logical to first define a top-level approximation of the 
network (the service model is probably the loosest one),
then extend that Data Model (“augment” in RFC 7950 terminology) into the 
network model, and so on (continuing to add more details).
As rightfully stated in RFC 8969, only a top-down approach permits resolving 
the challenge of “closed loop control”. I would add: “in the multivendor 
environment”.

If I understand correctly (not sure), the primary idea of OpenConfig was to 
have a common Data Model for configuration and assurance at every layer (a 
unified “digital twin” of the network).

The value of the hundreds of already-developed YANG modules looks questionable, 
because mapping by different vendors between functional and layered YANG 
modules could produce m*n^2 permutations.
This may not permit interoperability in a multi-vendor environment.

We certainly experience the concrete value of the many thousands of device 
level YANG modules out there when implementing NMS/OSS type of functionality. 
Anyone in that business should come prepared to navigate combinatorial 
explosions, but I can't say I have seen any traces of the specific m*n^2 
permutations you speak of above, relating to combinations of device level YANGs 
and service level YANGs.

I could imagine some reasons why it may not be possible in some cases, but the 
general rule should be to always use “augment” of the parent YANG model.

I'm afraid I can't decipher this statement. Feel free to elaborate.

Best Regards,
/jan
_______________________________________________
netmod mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/netmod
