Jeffrey,

Inline with [PC2].
Cheers,
Pablo.

-----Original Message-----
From: Jeffrey (Zhaohui) Zhang <[email protected]>
Sent: Monday, June 7, 2021 14:48
To: Pablo Camarillo (pcamaril) <[email protected]>
Cc: [email protected]
Subject: RE: Additional questions/comments on draft-ietf-dmm-srv6-mobile-uplane-13

Hi Pablo,

Please see Zzh> below.

-----Original Message-----
From: Pablo Camarillo (pcamaril) <[email protected]>
Sent: Friday, June 4, 2021 12:52 PM
To: Jeffrey (Zhaohui) Zhang <[email protected]>
Cc: [email protected]
Subject: RE: Additional questions/comments on draft-ietf-dmm-srv6-mobile-uplane-13

[External Email. Be cautious of content]

Hi Jeffrey,

Thanks for your email. Inline with [PC].

Cheers,
Pablo.

-----Original Message-----
From: Jeffrey (Zhaohui) Zhang <[email protected]>
Sent: Tuesday, June 1, 2021 17:39
To: Pablo Camarillo (pcamaril) <[email protected]>; [email protected]
Subject: Additional questions/comments on draft-ietf-dmm-srv6-mobile-uplane-13

Hi Pablo,

5.2.2. Packet flow - Downlink

   The downlink packet flow is as follows:

   UPF2_in : (Z,A)                            -> UPF2 maps the flow w/ SID list <C1,S1,gNB>
   UPF2_out: (U2::1, C1)(gNB, S1; SL=2)(Z,A)  -> H.Encaps.Red
   C1_out  : (U2::1, S1)(gNB, S1; SL=1)(Z,A)
   S1_out  : (U2::1, gNB)(Z,A)                -> End with PSP
   gNB_out : (Z,A)                            -> End.DX4/End.DX6/End.DX2

   ...

   Once the packet arrives at the gNB, the IPv6 DA corresponds to an
   End.DX4, End.DX6 or End.DX2 behavior at the gNB (depending on the
   underlying traffic).

Because of the End.DX2/4/6 behavior on the gNB, the SID list and S1_out can't simply use gNB; it must be gNB::TEID.

[PC] Do you think replacing all addresses with ones carved out of 2001:db8:: would help? Based on this comment and the one below, I can see there might be a point of confusion here.

Zzh> I think gNB::TEID would be clearer, like how you say SRGW::TEID in 5.3.1.2. The gNB needs to use the TEID to do DX2/4/6.

[PC2] That is my point: the TEID does not need to be explicitly included in the SID.
[PC2] It is an IPv6 address that is associated with a particular TEID session, but the TEID itself is not present; therefore writing gNB::TEID would be misleading. See my point?

In 5.3, for uplink traffic, the GW has End.M.GTP6.D for the UPF address B, and the gNB does not need to know of the existence of the GW. For downlink traffic, the UPF knows there is a GW and puts GW::TEID in the SRH. Why not make the GW invisible to the UPF as well and just use gNB::TEID, and then have gNB/96 as End.M.GTP6.E on the SRGW? You can still put the GW in the SRH to steer traffic through the GW.

[PC] That is a valid point. I'll think about it and get back to you.

5.3.1.1. Packet flow - Uplink

   The uplink packet flow is as follows:

   UE_out  : (A,Z)
   gNB_out : (gNB, B)(GTP: TEID T)(A,Z)        -> Interface N3 unmodified (IPv6/GTP)
   SRGW_out: (SRGW, S1)(U2::1, C1; SL=2)(A,Z)  -> B is an End.M.GTP6.D SID at the SRGW
   S1_out  : (SRGW, C1)(U2::1, C1; SL=1)(A,Z)
   C1_out  : (SRGW, U2::1)(A,Z)                -> End with PSP
   UPF2_out: (A,Z)                             -> End.DT4 or End.DT6

Shouldn't U2::1 be U2::TEID? Even for the enhanced mode, the TEID is still signaled and used; it's just that multiple UEs will share the same TEID.

[PC] I don't see the difference between U2::1 and U2::TEID. It is a SID configured for a set of multiple UEs; meaning, this is an illustration. So I'm not sure I follow your point. Same as the previous question; I think 2001:db8:: might be helpful here...
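(An illustrative aside on the gNB::TEID / U2::TEID notation, not taken from the draft: a minimal Python sketch of a SID that carries a 32-bit TEID in the argument bits behind its locator, in the spirit of the SRGW::TEID notation of 5.3.1.2. The locator value and the 64-bit locator length are assumptions made for the example.)

    import ipaddress

    LOCATOR = ipaddress.IPv6Address("2001:db8:a1::")  # hypothetical locator
    LOC_BITS = 64                                     # assumed locator length
    ARG_MASK = (1 << (128 - LOC_BITS)) - 1            # low-order argument bits

    def sid_with_teid(teid: int) -> ipaddress.IPv6Address:
        """Build a SID of the form <locator>::TEID (TEID in the argument bits)."""
        assert 0 <= teid < 2**32
        return ipaddress.IPv6Address(int(LOCATOR) | teid)

    def teid_from_sid(sid: ipaddress.IPv6Address) -> int:
        """Recover the TEID from the SID, e.g., when rebuilding a GTP-U header."""
        return int(sid) & ARG_MASK & 0xFFFFFFFF

    sid = sid_with_teid(0x12345678)
    print(sid)                               # 2001:db8:a1::1234:5678
    assert teid_from_sid(sid) == 0x12345678

Under this layout the transport underlay still routes on the locator alone; only the node owning the locator ever inspects the TEID bits.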
Zzh> The AMF will signal different TEIDs for different UEs. While a local policy on the SRGW could ignore the TEID, notice the following:

Zzh> 1. The UPF won't be able to distinguish those UEs, yet the use case could be that the UPF needs to put those UEs into different VPNs based on the TEID, and now it can't anymore. While the SRGW could have different policies mapping different <B, TEID group> tuples to different U2::X SIDs, with the UPF relying on X to distinguish the associated VPNs, that imposes a lot of management burden to configure those policies on the SRGW. It is much simpler to just put the TEID in the IPv6 address behind the U2 locator. In the transport underlay you can still forward based only on the locator. On the UPF, you may have individual TEID-specific routes, or you can have routes that aggregate on the TEID part; that is no worse than having those different policies on the SRGW. In summary, it is better to simply put the TEID after the U2 locator.

[PC2] The SRGW only aggregates traffic that belongs to the same context. You do not aggregate traffic of different tenants. This is not stated explicitly in the draft; I think I can add that.

Zzh> 2. This kind of aggregation is actually native to the GTP-U method - the packets are transported based only on the UPF address - so it is not an advantage that SRv6 brings.

[PC2] Disagreed. GTP-U does not aggregate at all. It performs routing based on the IPv6 DA, but that is not TEID session aggregation. As stated in the other thread, you could change GTP-U to achieve the same thing, but GTP-U today does not allow it. :-)

BTW, since you removed UPF1 in Figure 5, it's better to rename UPF2 to UPF1 and change U2 to U1.

[PC] I'm fine either way. Changed.

For 5.3.1.3, why is the downlink considered stateless while the uplink has some state? Aren't they the same, just that one converts GTP-U to SRv6 while the other does the opposite?

[PC] On the downlink, the SLA is provided by the source node (the UPF), which has the state; hence the SRGW is quite straightforward.
[PC] On the uplink, the SLA is provided by the SRGW, which needs to hold all the state.

Zzh> What is "all the state"? I see the following:

   When the packet arrives at the SRGW, the SRGW identifies B as an
   End.M.GTP6.D Binding SID (see Section 6.3). Hence, the SRGW removes
   the IPv6, UDP, and GTP headers, and pushes an IPv6 header with its
   own SRH containing the SIDs bound to the SR policy associated with
   this BindingSID. There is one instance of the End.M.GTP6.D SID per
   PDU type.

[PC2] The SRGW needs to store the SR policy that is going to be applied after the BSID operation. On the downlink there is no such SR policy.

Zzh> B is the UPF address signaled from the AMF. So is it that for each <PDU type, UPF> tuple, there is one policy for *all* the UEs associated with that tuple? What is the granularity level of SLA you're referring to? How is it achieved if the granularity level is finer than the <PDU type, UPF> tuple?

[PC2] The operator may instantiate as many BSIDs at the SRGW as it has different SLAs.

Zzh> 5.3.1.3 says:

   For the uplink traffic, the state at the SRGW does not necessarily
   need to be unique per PDU Session; the SR policy can be shared among
   UEs. This enables more scalable SRGW deployments compared to a
   solution holding millions of states, one or more per UE.

Zzh> Since you use the words "not necessarily", the state could be unique per PDU Session. How is that done? As asked above, the state seems to be only per <PDU type, UPF>.

[PC2] You could assume that each PDU Session has a different SLA, and hence a different SID list. That is quite unrealistic, and I would be quite surprised to see it. Instead, it would be quite common to share the same policy across different PDU Sessions.
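(To pin down the state under discussion, a rough sketch, assuming the quoted End.M.GTP6.D behavior can be modeled as a table lookup keyed by the BSID. One entry exists per <PDU type, SLA>, shared by all UEs and TEIDs behind it; there is no per-UE key. All addresses and names here are hypothetical.)

    # BSID advertised toward the gNB as the "UPF address" B
    #   -> (PDU type, SID list of the SR policy bound to that BSID)
    END_M_GTP6_D = {
        "2001:db8:b::4":   ("IPv4", ["2001:db8:c1::", "2001:db8:s1::", "2001:db8:u1::1"]),
        "2001:db8:b::6":   ("IPv6", ["2001:db8:c1::", "2001:db8:s1::", "2001:db8:u1::1"]),
        # a second BSID, instantiated only if a second SLA is offered:
        "2001:db8:b:1::6": ("IPv6", ["2001:db8:c2::", "2001:db8:u1::1"]),
    }

    def srgw_uplink(outer_da: str):
        """Look up the SR policy bound to the BSID carried in the IPv6 DA.

        The table scales with the number of <PDU type, SLA> combinations,
        not with the number of UEs or PDU Sessions.
        """
        pdu_type, sid_list = END_M_GTP6_D[outer_da]
        # ... remove the IPv6/UDP/GTP headers, then push an IPv6 header
        #     with an SRH containing sid_list ...
        return pdu_type, sid_list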
5.3.2.1 has the following:

   When the packet arrives at the SRGW for UPF1, the SRGW has an Uplink
   Classifier rule for incoming traffic from the gNB, that steers the
   traffic into an SR policy by using the function H.M.GTP4.D.

The SRGW is not a 5G NF, so the "Uplink Classifier rule" does not have to be the following (draft-homma-dmm-5gs-id-loc-coexistence):

   Uplink Classifier (ULCL): An ULCL is an UPF functionality that aims
   at diverting Uplink traffic, based on filter rules provided by SMF,
   towards Data Network (DN).

So, instead of a ULCL, the SRGW could have an IPv4 route for the UPF address that steers the matching traffic into an SR policy with the function H.M.GTP4.D. If that is done, then the following is not true:

   For the uplink traffic, the state at the SRGW is dedicated on a per
   UE/session basis according to an Uplink Classifier. There is state
   for steering the different sessions in the form of an SR Policy.
   However, SR policies are shared among several UE/sessions.

This is because we don't need per-UE/session steering: we can just steer based on the UPF's address (just like in the IPv6 case).

[PC] If you steer based on the UPF address, then you cannot apply a different SLA to traffic from different UEs. Hence the need for per-UE/session steering.

Zzh> So now you care about per-UE/session steering 😊 Then what about the other situations where you care about aggregation, as in 5.2.3? 😊
Zzh> The context is "5.3.2.3. Scalability", which states that "There is state for steering the different sessions in the form of an SR Policy". For comparison, "5.3.1.3. Scalability" does not care about per-UE/session steering. It seems that you use different criteria in different places 😊

[PC2] There is quite a difference between 5.3.1 and 5.3.2. The mechanism defined for IPv6 interworking in 5.3.1 is based on remote steering (i.e. a BSID). The mechanism defined for IPv4 depends on local policy at the SRGW. So the state is completely different in those two scenarios.

[PC] Also, as a correction, the IPv6 case does NOT steer based on the UPF address. It leverages a BSID to perform remote steering.

Zzh> OK, I was not strict in my language. The BSID is a locator of the UPF address, right?

[PC2] No. The Binding SID is defined in RFC 8402.

It seems that the only reason a ULCL is used here is that we don't call an IPv4 address a SID; but that does not mean we can't use an IPv4 route to steer traffic into a policy (isn't it the same thing as using an IPv6 route for an address that we call a SID?).
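(A minimal sketch of the route-based alternative suggested above; this illustrates the suggestion, not what the draft currently specifies. A plain IPv4 route on the UPF's address steers matching traffic into an H.M.GTP4.D policy, with no per-UE/session classifier. The prefixes and SIDs are made up.)

    import ipaddress

    # IPv4 routes steering into an SR policy (SID list) for H.M.GTP4.D
    STEERING_ROUTES = {
        # hypothetical UPF N3 address
        ipaddress.IPv4Network("203.0.113.4/32"): ["2001:db8:c1::", "2001:db8:gw::"],
    }

    def steer(outer_ipv4_da: str):
        """Longest-prefix match on the outer IPv4 DA (the UPF address)."""
        da = ipaddress.IPv4Address(outer_ipv4_da)
        matches = [p for p in STEERING_ROUTES if da in p]
        if not matches:
            return None  # no policy: forward natively
        best = max(matches, key=lambda p: p.prefixlen)
        return STEERING_ROUTES[best]  # SID list handed to H.M.GTP4.D

    print(steer("203.0.113.4"))  # ['2001:db8:c1::', '2001:db8:gw::']

The state here is one route per UPF address plus the shared policies, the same order of state as the IPv6 case in 5.3.1.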
5.3.3. Extensions to the interworking mechanisms

   In this section we presented two mechanisms for interworking with
   gNBs and UPFs that do not support SRv6. These mechanisms are used to
   support GTP over IPv4 and IPv6.

Only the gNB, not the UPF, right?

[PC] Not sure I follow your point. Mainly interworking between a GTP-U gNB and an SRv6 UPF, but it could also be used with a legacy UPF.

Zzh> 5.3 only presented interworking between an SRv6 UPF and a GTP-U gNB (not between an SRv6 gNB and a GTP-U UPF), right?

[PC2] Correct.

Zzh> I was wondering if the same GW could be used next to a GTP-U UPF. If yes, then why is there a need for the "5.4 drop-in mode"?

[PC2] With subtle changes, yes, it could be used. This is defined in 5.4; hence indeed the paragraph below can be removed.

   Furthermore, although these mechanisms are designed for interworking
   with legacy RAN at the N3 interface, these methods could also be
   applied for interworking with a non-SRv6 capable UPF at the N9
   interface.

Are you referring to the following?

[PC] Yes.

   gNB (GTP-U) -- SRGW1 ----- UPF1 ------- SRGW2 ----- (GTP-U) UPF2

What's the difference between SRGW1 and SRGW2? If there is one, then the above paragraph is incorrect. If there is no difference, why do we need the drop-in mode (which does have a difference between SRGW-A and SRGW-B)?

[PC] SRGW1 and SRGW2 perform opposite operations. The drop-in mode has GTP-U on both ends and SRv6 in the middle, which is slightly different. But from a dataplane perspective, the SRGWs of both modes are pretty much the same.

Zzh> The point is that SRGW1 and SRGW2 do perform different/opposite operations (that's why the "5.4 drop-in mode" is needed). With that, the first and last paragraphs in 5.3.3 (as I quoted in the earlier email) are not correct, right?

[PC2] I see your point now. Indeed, the paragraph should be removed.

Jeffrey
