All,

I have completed the Shepherd review for this draft:
https://datatracker.ietf.org/doc/draft-ietf-spring-sr-for-enhanced-vpn/

I would like to open a discussion on this draft and its importance to the SR architecture with respect to network slicing. One of the major use cases is the 5G mobile core cloud-native fabric, with a URLLC low-latency slice and a high-bandwidth slice as NRPs.

This draft represents an exciting and critical milestone for the SPRING WG related to SR technology, as it builds on our first official SR use case for resource-aware segments, the draft below, which is now in the queue for publication:

https://datatracker.ietf.org/doc/draft-ietf-spring-resource-aware-segments/

The base concept behind a resource-aware SID is that the topological SIDs, both the prefix SID and the adjacency SID, become resource-aware for forwarding packets. These new resource-aware SIDs are allocated a set of network resources, such as buffers, queues, and bandwidth, on all links that are part of the associated NRP topology. The NRP could be a Flex Algo plane or a dynamic SR Policy computed over the TED with Algo 0. Most vendors limit the maximum number of Flex Algo planes the NOS supports per platform, due to the hardware resource cost of running a cSPF per Flex Algo plane when instantiating the NRP with this IETF technology.

For the discussion, consider SRv6 and how H-QoS can be used with Flex Algo: by matching on the SRv6 locator, and with link coloring via multiple Flex Algo bit positions per link encoded in the AG/EAG of the ASLA, an H-QoS parent shaper can provide guaranteed bandwidth per locator in a Gold, Silver, and Bronze pecking order of allocation per Flex Algo NRP. This is a real-world scenario; I have deployed it as such in a production environment, and it works well.

Now take the concept of resource-aware SIDs: how do we get a more granular enhancement, in discrete buffers, queues, and bandwidth, over and above the scenario I mentioned above? Note that if you have completely isolated disjoint planes, then IETF network slicing does not come into play.
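To make the H-QoS-with-Flex-Algo scenario above concrete, here is a minimal sketch. All names and numbers are illustrative assumptions (the AG bit assignments, the Gold/Silver/Bronze shares, and the helper names are hypothetical, not from any vendor NOS): per-link Flex Algo membership is read from bit positions in the ASLA admin group (AG/EAG), and a parent shaper splits the link bandwidth among the Flex Algo NRPs present on that link.

```python
# Hypothetical sketch: Flex Algo membership per link is encoded as bit
# positions in the ASLA Admin Group (AG/EAG); an H-QoS parent shaper then
# splits link bandwidth per Flex Algo / SRv6 locator in a Gold/Silver/Bronze
# pecking order. Bit positions and shares below are assumptions.

FLEX_ALGO_BIT = {128: 0, 129: 1, 130: 2}          # assumed AG bit per algo
SHAPER_SHARE = {128: 0.60, 129: 0.30, 130: 0.10}  # Gold / Silver / Bronze

def link_in_algo(admin_group: int, algo: int) -> bool:
    """True if the link's AG/EAG advertises membership in this Flex Algo."""
    return bool(admin_group & (1 << FLEX_ALGO_BIT[algo]))

def parent_shaper_rates(link_bw_mbps: int, admin_group: int) -> dict:
    """Guaranteed bandwidth (Mbps) per Flex Algo NRP on one link."""
    members = [a for a in FLEX_ALGO_BIT if link_in_algo(admin_group, a)]
    total = sum(SHAPER_SHARE[a] for a in members)
    # Renormalize so the shares of the algos actually on this link sum to 100%.
    return {a: round(link_bw_mbps * SHAPER_SHARE[a] / total) for a in members}

# A 100G link colored for algos 128 and 129 (AG bits 0 and 1 set):
print(parent_shaper_rates(100_000, 0b011))  # {128: 66667, 129: 33333}
```

The renormalization step reflects the pecking-order idea: a link that only carries the Gold and Silver planes still has its full rate divided in their relative proportions.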
However, that is rare and does not scale; in most operator deployments the paths are shared. The industry is moving in a big way towards massive super-highways on the global Internet built by Tier-1, Tier-2, and MNO providers: with POI (CCAMP WG) bandwidth-on-demand P2P dark fiber, or lit DWDM links with coherent pluggable ZR/ZR+ optics, you can have 64 wavelengths on a single fiber. The wavelength channels in this use case can be disaggregated onto sub-interfaces, and each wavelength, or set of wavelengths, can map to a Flex Algo or Algo 0 TED plane; FlexE could also be used for the NRP paths. This could be a major use case for Enhanced VPN using resource-aware SIDs.

During a migration there could be existing provisioned non-resource-aware SIDs, for example prefix SIDs provisioned for Flex Algo 128. When new resource-aware SIDs are then deployed for that existing Flex Algo topology, alongside the prefix SIDs that are not resource-aware, you now have double-booked provisioning of resources in the per-Flex-Algo cSPF. This double booking runs up against the vendor platform maximums.

In my Shepherd's review I mentioned an alternative solution, which I will work on with Jie as co-author of a new draft I will spin up: a resource-aware extension to existing topological SIDs that avoids double-booking resources during migration, and that simplifies migration by enabling a "resource knob" on the topological SID to make it resource-aware.

I would like to discuss both options with the WG, compare and contrast them, and gather feedback on both solutions on this thread. As this is a very real-world use case, I am interested in operator feedback on the usefulness of resource-aware SIDs for the Enhanced VPN use case, and on whether it is something that will be deployed in production. Please respond to this thread with any feedback related to my review and any questions or comments on operational considerations.

Kind Regards

Gyan
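The double-booking problem during migration can be sketched numerically. This is an illustrative toy model under stated assumptions (the plane limit, the algo numbers, and the function are hypothetical, not from the draft or any platform): deploying separate resource-aware SIDs alongside legacy SIDs consumes an extra cSPF plane per migrated topology, whereas flipping a "resource knob" on the existing topological SID reuses the same plane.

```python
# Illustrative toy model of migration double booking. All numbers are
# assumptions: during migration, a Flex Algo topology that already has
# legacy (non-resource-aware) prefix SIDs gets a second, resource-aware SID
# set; counting both against the platform's supported plane maximum
# double-books resources. A "resource knob" on the existing topological SID
# instead reuses the same plane.

PLATFORM_MAX_PLANES = 8  # assumed per-platform cSPF/Flex-Algo plane limit

def planes_needed(algos, resource_aware_algos, use_resource_knob: bool) -> int:
    """Count cSPF planes consumed by legacy plus resource-aware SID sets."""
    if use_resource_knob:
        # Existing topological SID is made resource-aware in place.
        return len(set(algos) | set(resource_aware_algos))
    # Separate resource-aware SIDs double-book one plane per migrated algo.
    return len(algos) + len(resource_aware_algos)

legacy = [128, 129, 130, 131, 132]
migrating = [128, 129, 130, 131]   # same topologies, new resource-aware SIDs

print(planes_needed(legacy, migrating, use_resource_knob=False))  # 9 > limit
print(planes_needed(legacy, migrating, use_resource_knob=True))   # 5
```

In this toy case the parallel-SID approach needs 9 planes against an assumed limit of 8, while the resource-knob approach stays at the original 5, which is the motivation for the alternative draft mentioned above.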
_______________________________________________
spring mailing list -- [email protected]
To unsubscribe send an email to [email protected]
