Huaimo - Thanx for bringing this up. Resolving this before the meeting will hopefully save us time during the meeting.
"Backwards compatibility" is possible in situations where new advertisements are being introduced and the legacy nodes either:

* don't need to process the new advertisements, or
* there are existing advertisements (in a different format) that contain the same information.

Neither is the case here. The problem being discussed is how to handle cases where the deployment requires more than 255 bytes of existing (sic) advertisements about a particular object. For example, there are cases today where the total amount of information about a link (link endpoint identifiers, bandwidth, delay, affinity, adjacency-SIDs, etc.) exceeds 255 bytes. This is not because new sub-TLVs are being introduced - it is because the sum total of the existing sub-TLVs required in a given deployment exceeds 255 bytes.

All nodes in the network have to understand and process all of the sub-TLVs being advertised. There is no subset which is sufficient for "legacy nodes", and there is no existing way to advertise more than 255 bytes in a single TLV. This problem therefore cannot be addressed in a backwards-compatible way.

The solution defined in draft-pkaneria-lsr-multi-tlv is consistent with existing behavior explicitly defined for some TLVs (see https://www.ietf.org/archive/id/draft-pkaneria-lsr-multi-tlv-04.html#name-introduction ).

Could the problem be addressed by "Big-TLV"? Yes - but there are multiple reasons why this is not a good choice:

1) It introduces inconsistency - some TLVs would use multiple native TLVs to deal with the issue (as per existing RFCs) and some TLVs would use a native TLV plus Big-TLV. As there is no advantage to Big-TLV, this simply adds ambiguity with no benefits.

2) There are multiple existing implementations which have already been successfully deployed and proven interoperable using the solution discussed in draft-pkaneria-lsr-multi-tlv.
Declaring that solution as invalid would set the industry back and require all implementations to start from scratch, since no implementation today supports Big-TLV.

3) As Big-TLV is a generic container TLV, it is inherently less efficient, as it consumes additional bytes for the encapsulation.

Your main argument for Big-TLV has been the mistaken claim that it is "backwards compatible". Hopefully you now understand that this is not the case and we need not debate this further.

Les

From: Lsr <lsr-boun...@ietf.org> On Behalf Of Huaimo Chen
Sent: Sunday, November 5, 2023 11:34 AM
To: lsr@ietf.org
Subject: [Lsr] draft-chen-lsr-isis-big-tlv

Hi Les,

At the last IETF, you presented/stated that backwards compatibility is not possible. This does not seem to be true. The solution proposed in draft-chen-lsr-isis-big-tlv is backwards compatible. Can you give your definition of backwards compatibility?

At this IETF and the last IETF, the slides state that with partial deployment, behavior is unpredictable. This seems to be resolved by draft-chen-lsr-isis-big-tlv. A container TLV is used by a node to contain a piece of new information. Nodes that do not support the container TLV capability will ignore container TLVs. This resolves the unpredictable behavior with partial deployment.

Best Regards,
Huaimo
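[Editor's note on the constraint discussed in this thread: the following is a minimal, illustrative Python sketch of why a single IS-IS TLV cannot carry more than 255 bytes, and of the multi-TLV approach of repeating a TLV with its identifying key. The function names, the 8-byte key, and the sub-TLV sizes are hypothetical assumptions, not code from either draft.]

```python
def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """Encode one TLV: 1-byte type, 1-byte length, then the value.

    The length field is a single octet, which is the root of the
    255-byte limit discussed in the thread above.
    """
    if len(value) > 255:
        raise ValueError("TLV length field is one octet; value must be <= 255 bytes")
    return bytes([tlv_type, len(value)]) + value

def split_across_tlvs(tlv_type: int, key: bytes, sub_tlvs: list) -> list:
    """Multi-TLV style: repeat the same TLV, each instance carrying the
    identifying key plus as many sub-TLVs as fit within 255 bytes.
    (Hypothetical helper; the real grouping rules are in the draft.)
    """
    tlvs, current = [], b""
    for sub in sub_tlvs:
        if len(key) + len(current) + len(sub) > 255:
            tlvs.append(encode_tlv(tlv_type, key + current))
            current = b""
        current += sub
    tlvs.append(encode_tlv(tlv_type, key + current))
    return tlvs

# 300 bytes of sub-TLVs about one link: too much for a single TLV, but
# two instances of the same TLV (each repeating an assumed 8-byte key)
# carry it with no extra container encapsulation.
key = bytes(8)
sub_tlvs = [bytes(10) for _ in range(30)]   # 30 sub-TLVs, 300 bytes total
tlvs = split_across_tlvs(22, key, sub_tlvs) # 22 = Extended IS Reachability
print(len(tlvs))                            # 2 TLV instances
```

The sketch also shows the efficiency point from item 3: the repeated TLVs spend bytes only on the per-instance header and key, whereas a generic container TLV would additionally spend bytes on its own type/length encapsulation around the same payload.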