Hi Jeffrey,

Inline with PC5.

Thanks,
Pablo.

-----Original Message-----
From: Jeffrey (Zhaohui) Zhang <[email protected]> 
Sent: Wednesday, July 21, 2021 23:34
To: Pablo Camarillo (pcamaril) <[email protected]>
Cc: [email protected]
Subject: RE: Additional questions/comments on 
draft-ietf-dmm-srv6-mobile-uplane-13

Hi Pablo,

Please see zzh4> below.

-----Original Message-----
From: Pablo Camarillo (pcamaril) <[email protected]>
Sent: Wednesday, July 21, 2021 1:34 PM
To: Jeffrey (Zhaohui) Zhang <[email protected]>
Cc: [email protected]
Subject: RE: Additional questions/comments on 
draft-ietf-dmm-srv6-mobile-uplane-13

[External Email. Be cautious of content]


Hi Jeffrey,

Inline with PC4.

Thanks!
(and apologies for the delay)

-----Original Message-----
From: dmm <[email protected]> On Behalf Of Jeffrey (Zhaohui) Zhang
Sent: Tuesday, June 15, 2021 22:16
To: Pablo Camarillo (pcamaril) <[email protected]>
Cc: [email protected]
Subject: Re: [DMM] Additional questions/comments on 
draft-ietf-dmm-srv6-mobile-uplane-13

Hi Pablo,

Please see zzh3> below.

-----Original Message-----
From: Pablo Camarillo (pcamaril) <[email protected]>
Sent: Tuesday, June 15, 2021 3:03 PM
To: Jeffrey (Zhaohui) Zhang <[email protected]>
Cc: [email protected]
Subject: RE: Additional questions/comments on 
draft-ietf-dmm-srv6-mobile-uplane-13


Hi Jeffrey,

Inline with [PC3].

Cheers,
Pablo.

-----Original Message-----
From: dmm <[email protected]> On Behalf Of Jeffrey (Zhaohui) Zhang
Sent: Monday, June 14, 2021 22:10
To: Pablo Camarillo (pcamaril) <[email protected]>
Cc: [email protected]
Subject: Re: [DMM] Additional questions/comments on 
draft-ietf-dmm-srv6-mobile-uplane-13

Hi Pablo,

Please see zzh2> below.

-----Original Message-----
From: Pablo Camarillo (pcamaril) <[email protected]>
Sent: Friday, June 11, 2021 10:15 AM
To: Jeffrey (Zhaohui) Zhang <[email protected]>
Cc: [email protected]
Subject: RE: Additional questions/comments on 
draft-ietf-dmm-srv6-mobile-uplane-13


Jeffrey,

Inline with [PC2].

Cheers,
Pablo.

-----Original Message-----
From: Jeffrey (Zhaohui) Zhang <[email protected]>
Sent: Monday, June 7, 2021 14:48
To: Pablo Camarillo (pcamaril) <[email protected]>
Cc: [email protected]
Subject: RE: Additional questions/comments on 
draft-ietf-dmm-srv6-mobile-uplane-13

Hi Pablo,

Please see zzh> below.

-----Original Message-----
From: Pablo Camarillo (pcamaril) <[email protected]>
Sent: Friday, June 4, 2021 12:52 PM
To: Jeffrey (Zhaohui) Zhang <[email protected]>
Cc: [email protected]
Subject: RE: Additional questions/comments on 
draft-ietf-dmm-srv6-mobile-uplane-13


Hi Jeffrey,

Thanks for your email. Inline with [PC].

Cheers,
Pablo.

-----Original Message-----
From: Jeffrey (Zhaohui) Zhang <[email protected]>
Sent: Tuesday, June 1, 2021 17:39
To: Pablo Camarillo (pcamaril) <[email protected]>; [email protected]
Subject: Additional questions/comments on draft-ietf-dmm-srv6-mobile-uplane-13

Hi Pablo,

5.2.2.  Packet flow - Downlink

   The downlink packet flow is as follows:

   UPF2_in : (Z,A)                             ->UPF2 maps the flow w/
                                                 SID list <C1,S1, gNB>
   UPF2_out: (U2::1, C1)(gNB, S1; SL=2)(Z,A)   ->H.Encaps.Red
   C1_out  : (U2::1, S1)(gNB, S1; SL=1)(Z,A)
   S1_out  : (U2::1, gNB)(Z,A)                 ->End with PSP
   gNB_out : (Z,A)                             ->End.DX4/End.DX6/End.DX2

   ...
   Once the packet arrives at the gNB, the IPv6 DA corresponds to an
   End.DX4, End.DX6 or End.DX2 behavior at the gNB (depending on the
   underlying traffic).

Because of the End.DX2/4/6 behavior on the gNB, the SID list and S1_out can't 
simply use gNB. It must be gNB::TEID.
[PC] Do you think replacing all addresses with ones carved out of 2001:db8:: 
would help? Because based on this comment and the one below, I can see there 
might be a point of confusion here.

Zzh> I think gNB::TEID would be clearer, like how you say SRGW::TEID in 
5.3.1.2. The gNB needs to use the TEID to do DX2/4/6.
[PC2] That is my point: the TEID does not need to be explicitly included in the 
SID.
[PC2] It is an IPv6 address that is associated with a particular TEID session. 
But the TEID is not present; therefore writing gNB::TEID would be misleading. 
See my point?
Zzh2> I don't mean that the TEID is present in a GTP-U header. It is part of 
the IPv6 address (used for DX2/4/6 purposes). Writing it as gNB::TEID 
emphasizes that the TEID information is embedded in the address, just like why 
you use SRGW::TEID in some examples.
[PC3] The TEID is not present on the IPv6 address used for the End.DX2, DX4, 
DX6 SIDs.
Zzh3> Could you elaborate how DX2/4/6 is done if the TEID (or its mapped value) 
is not in the IPv6 address? For example, how does the gNB know if a packet is 
for UE1 vs. UE2? The gNB does not look up the inner (UE) IP address when 
forwarding (that's the whole point of DX2/4/6).
[PC4] The TEID value 1234 could be associated with the SID 
2001:db8:abcd:abcd::beef. But 1234 is not present, and 1234 (and therefore 
2001:db8:abcd:abcd::beef) may be shared in the enhanced mode.
Zzh4> Let's say TEID 1234 is assigned to UE1 and TEID 2345 is assigned to UE2. 
I suppose two different associated SIDs, say ::beef and ::beff, need to be used 
(otherwise how do you do DX2/4/6?). gNB::TEID is just a way to represent that, 
just like how you use SRGW::TEID to represent a SID associated with the TEID.
[PC5] That's it! The question is whether we should change the addressing in the 
draft to the IPv6 2001:db8:: block to make sure this misunderstanding does not 
occur. 
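[Editor's note] The gNB::TEID notation under discussion simply means the TEID 
is encoded somewhere in the bits of the SID. A minimal sketch, assuming a 
hypothetical encoding in which the 32-bit TEID occupies the address's low-order 
bits (the draft does not mandate any particular layout):

```python
import ipaddress

def sid_for_teid(locator: str, teid: int) -> str:
    """Embed a 32-bit TEID in the low-order bits of an SRv6 locator.

    Hypothetical encoding for illustration only: the point is merely
    that each TEID yields a distinct End.DX2/DX4/DX6 SID at the gNB.
    """
    base = int(ipaddress.IPv6Address(locator))
    return str(ipaddress.IPv6Address(base | (teid & 0xFFFFFFFF)))

# Two UEs with different TEIDs get distinct SIDs:
sid_ue1 = sid_for_teid("2001:db8:abcd:abcd::", 0x1234)
sid_ue2 = sid_for_teid("2001:db8:abcd:abcd::", 0x2345)
```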

In 5.3, for uplink traffic, the GW has End.M.GTP6.D for the UPF address B and 
the gNB does not need to know the existence of the GW. For downlink traffic, 
the UPF knows there is a GW and puts GW::TEID in the SRH. Why not make the GW 
invisible to the UPF as well and just use gNB::TEID, and then have gNB/96 as 
End.M.GTP6.E on the SRGW? You can still put the GW in the SRH to steer traffic 
through the GW.
[PC] That is a valid point. I'll think about it and get back to you.

5.3.1.1.  Packet flow - Uplink

   The uplink packet flow is as follows:

   UE_out  : (A,Z)
   gNB_out : (gNB, B)(GTP: TEID T)(A,Z)       -> Interface N3 unmodified
                                                 (IPv6/GTP)
   SRGW_out: (SRGW, S1)(U2::1, C1; SL=2)(A,Z) -> B is an End.M.GTP6.D
                                                 SID at the SRGW
   S1_out  : (SRGW, C1)(U2::1, C1; SL=1)(A,Z)
   C1_out  : (SRGW, U2::1)(A,Z)               -> End with PSP
   UPF2_out: (A,Z)                            -> End.DT4 or End.DT6

Shouldn't U2::1 be U2::TEID? Even for the enhanced mode, TEID is still signaled 
and used - just that multiple UEs will share the same TEID.
[PC] I don't see the difference between U2::1 and U2::TEID. It is a SID 
configured for a set of multiple UEs. Meaning, this is an illustration. So I'm 
not sure I follow your point. Same as the previous question. I think 
2001:db8:: might be helpful here...
Zzh> The AMF will signal different TEIDs for different UEs. While a local 
policy on the SRGW could ignore the TEID, notice the following:
Zzh> 1. The UPF won't be able to distinguish those UEs, yet the use case could 
be that the UPF may need to put those UEs into different VPNs based on the TEID 
and now it can't anymore. While the SRGW could have different policies to map 
different <B, TEID group> to different U2::X and the UPF would rely on X to 
distinguish the associated VPNs, that requires lots of management burden to 
configure those policies on the SRGW. It is much simpler to just put the TEID 
in the IPv6 address behind the U2 locator. In the transport underlay you can 
still transport based only on the locator. On the UPF, you may have individual 
TEID-specific routes, or you can have routes that aggregate on the TEID part to 
achieve aggregation - that is no worse than having those different policies on 
the SRGW. In summary, it is better to simply put the TEID after the U2 locator.
[PC2] The SRGW only aggregates traffic that belongs to the same context. You do 
not aggregate traffic of different tenants. This is not stated explicitly in 
the draft. I think I can add that.
Zzh> 2. This kind of aggregation is actually native to the GTP-U method - the 
packets are transported only based on the UPF address - so it is not an 
advantage that SRv6 brings.
[PC2] Disagreed. GTP-U does not aggregate at all. It performs routing based on 
the IPv6 DA, but that is not TEID session aggregation. As stated in the other 
thread, you can change GTP-U to achieve the same thing, but GTP-U today does 
not allow it.  :-)
Zzh2> Deferred to the other thread.

BTW, since you removed UPF1 in Figure 5, it's better to rename UPF2 to UPF1, 
and change U2 to U1.
[PC] I'm fine either way. Changed.

For 5.3.1.3, why is the downlink considered stateless while the uplink has some 
state? Aren't they the same - just one converts GTP-U to SRv6 while the other 
does the opposite?
[PC] On the downlink, the SLA is provided by the source node UPF that has the 
state. Hence the SRGW is quite straightforward.
[PC] On the uplink, the SLA is provided by the SRGW, which needs to hold all 
the state.

Zzh> What are "all the state"? I see the following:

   When the packet arrives at the SRGW, the SRGW identifies B as an
   End.M.GTP6.D Binding SID (see Section 6.3).  Hence, the SRGW removes
   the IPv6, UDP, and GTP headers, and pushes an IPv6 header with its
   own SRH containing the SIDs bound to the SR policy associated with
   this BindingSID.  There is one instance of the End.M.GTP6.D SID per
   PDU type.
[PC2] The SRGW needs to store the SR policy that is going to be applied after 
the BSID operation. On the downlink there is no such SR policy.
Zzh2> For uplink, "B is an End.M.GTP6.D SID at the SRGW" and "There is *one* 
instance of the End.M.GTP6.D SID *per PDU type*". For downlink, "SRGW/96 is 
End.M.GTP6.E". As I see it, it's just a one-to-one comparison? Are you saying 
that the SID list is the extra state? It's just a single SID list anyway - 
unless that single instance of the End.M.GTP6.D SID per PDU type actually looks 
at more traffic information to produce different SID lists (and in that case 
you need to clarify it).
[PC3] Yes, the SID list is the extra state that needs to be stored.
[PC3] The SID list pushed depends only on B (the node does not perform any 
classification or look at traffic info to produce different SID lists).
[PC3] It is expected to have one B per PDU Session Type. However, this can 
change depending on operator needs (i.e., SLA); and indeed I could see one SLA 
for best effort, another for low latency, another offering a redundancy service 
for URLLC, ...
Zzh3> Does that mean the N2 signaling will use different UPF (B) addresses for 
different SLAs (since "the SID list pushed depends only on B; the node does not 
perform any classification or look at traffic info to produce different SID 
lists")? To me that is a change in the AMF function and needs to be clarified.
[PC3] In the downlink, the SRGW does not need to store the SID list.
Zzh3> That single SID list in a policy is insignificant (O(1)) when you compare 
the uplink/downlink state. What matters more is having many different 
BSIDs/policies for different uplink SLAs. The text should reflect the latter 
point 😊
[PC4] I'll add that to the document.
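[Editor's note] A minimal sketch of the uplink-state point above (hypothetical 
addresses and table names, not from the draft): the SID list stored per 
End.M.GTP6.D BSID is exactly the extra state being discussed, while the 
downlink End.M.GTP6.E direction needs no such stored policy.

```python
# Per-BSID SR policy: this SID list is the uplink state the SRGW must hold.
SR_POLICY_BY_BSID = {
    "2001:db8:b::1": ["2001:db8:s1::", "2001:db8:c1::", "2001:db8:u2::1"],
}

def end_m_gtp6_d(bsid: str, inner_packet: bytes) -> dict:
    """Sketch of End.M.GTP6.D after the outer IPv6/UDP/GTP headers are
    removed: push an IPv6 header + SRH carrying the SIDs bound to the
    SR policy associated with this BSID (reduced encaps: the first SID
    goes into the IPv6 DA and is omitted from the SRH)."""
    sid_list = SR_POLICY_BY_BSID[bsid]
    first, *rest = sid_list
    return {
        "da": first,                  # next segment becomes the IPv6 DA
        "srh": list(reversed(rest)),  # remaining SIDs, last-to-first
        "sl": len(rest),              # Segments Left
        "payload": inner_packet,
    }
```

This reproduces the 5.3.1.1 uplink flow, where SRGW_out carries DA=S1 and SRH 
(U2::1, C1; SL=2).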

Zzh> B is the UPF address signaled from the AMF. So is it that for each <PDU 
type, UPF> tuple, there is one policy for *all* the UEs associated with that 
tuple? What is the granularity level of SLA you're referring to? How is it 
achieved if the granularity level is finer than the <PDU type, UPF> tuple?
[PC2] The operator may instantiate as many BSIDs at the SRGW as different SLAs 
it has.
Zzh2> But it says "There is *one* instance of the End.M.GTP6.D SID *per PDU 
type*"?
[PC3] I'll clarify that in the draft to say there may be more than one; the 
current text is wrong (as stated above). At minimum there is one per PDU type, 
but there may be more.

Zzh> 5.3.1.3 says:

   For the uplink traffic, the state at the SRGW does not necessarily
   need to be unique per PDU Session; the SR policy can be shared among
   UEs.  This enables more scalable SRGW deployments compared to a
   solution holding millions of states, one or more per UE.

Zzh> Since you use the words "not necessarily", it means the state could be 
unique per PDU session. How is that done? As asked above, the state seems to be 
only per <PDU type, UPF>.
[PC2] You could assume that each different PDU Session has a different SLA, and 
hence a different SID list. That is quite unrealistic, and I would be quite 
surprised to see that. Instead, it would be quite common to share the same 
policy across different PDU Sessions.
Zzh2> My real question is - since "the operator may instantiate as many BSIDs 
at the SRGW as different SLAs it has", how is it really done (whether the 
granularity is at the PDU session level or not)?
[PC3] The operator provisions at the SRGW as many BSIDs as different SR 
policies it may want to have. Each different SR policy has a different SLA. Up 
to here there is nothing specific to mobility (i.e. all is defined in the SR 
policy IETF draft).
Zzh2> How are the BSIDs assigned (with the unchanged N2/N4 interface), and how 
is that not contradicting "There is one instance of the End.M.GTP6.D SID per 
PDU type"?
[PC3] The BSID is an IPv6 address that is passed together with the TEID on the 
N2/N4. As stated before, there might be more than one instance of the 
End.M.GTP6.D SID. I will change this in the draft.

Zzh3> Basically the AMF needs to give out different addresses for different 
SLAs (unless the SRGW does ULCL). That needs to be clarified.
[PC4] Clarified.

Zzh2> Additionally, "a solution holding millions of states, one or more per UE" 
now does not seem to exist, so the comparison only adds confusion not value.
[PC3] To have per-UE SLA, you need to have state on a per-UE basis. Do you 
think there is no such network with a million UEs?
Zzh3> The context is SRGW scaling. Is there a solution where an SRGW would hold 
millions of per-UE state? That's what I meant but I can let it go now.
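[Editor's note] The scalability argument in this exchange can be sketched 
numerically (hypothetical figures): SRGW state scales with the number of SR 
policies/SLAs, not the number of UEs, because many PDU sessions share one BSID.

```python
# Hypothetical deployment: 100,000 PDU sessions, 3 SLAs.
# Each session's TEID is signaled on N2/N4 together with a BSID;
# sessions with the same SLA share the same BSID and SR policy.
NUM_SESSIONS = 100_000
BSIDS = ["2001:db8:b::1", "2001:db8:b::2", "2001:db8:b::3"]  # one per SLA

bsid_for_teid = {teid: BSIDS[teid % len(BSIDS)] for teid in range(NUM_SESSIONS)}

# State held at the SRGW: one SID list per distinct BSID, not one per UE.
srgw_state = len(set(bsid_for_teid.values()))
```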

5.3.2.1 has the following:

   When the packet arrives at the SRGW for UPF1, the SRGW has an Uplink
   Classifier rule for incoming traffic from the gNB, that steers the
   traffic into an SR policy by using the function H.M.GTP4.D.

The SRGW is not a 5G NF, so the "Uplink Classifier rule" does not have to match 
the following definition (from draft-homma-dmm-5gs-id-loc-coexistence):

    Uplink Classifier (ULCL):
       An ULCL is an UPF functionality that aims at diverting Uplink
       traffic, based on filter rules provided by SMF, towards Data
       Network (DN).

So, instead of ULCL, the SRGW could have an IPv4 route for the UPF address 
which steers the matching traffic to an SR policy with function H.M.GTP4.D. If 
that is done, then the following is not true:

   For the uplink traffic, the state at the SRGW is dedicated on a per
   UE/session basis according to an Uplink Classifier.  There is state
   for steering the different sessions in the form of an SR Policy.
   However, SR policies are shared among several UE/sessions.

Because we don't need per UE/session steering - we can just steer based on 
UPF's address (just like in IPv6 case).
[PC] If you steer based on UPF address, then you cannot apply a different SLA 
to traffic from different UEs. Hence the need to have per UE/session steering.
Zzh> So now you care about per UE/session steering 😊 Then what about other 
situations where you care about aggregation, like in 5.2.3 😊
Zzh> The context is "5.3.2.3.  Scalability", where it says "There is state for 
steering the different sessions in the form of an SR Policy". For comparison, 
"5.3.1.3.  Scalability" does not care about per UE/session steering. It seems 
that you use different criteria in different places 😊

[PC2] There is quite a difference between 5.3.1 and 5.3.2. The mechanism 
defined for IPv6 interworking in 5.3.1 is based on remote steering (i.e., 
BSID). The mechanism defined for IPv4 depends on local policy at the SRGW. So 
the state is completely different in those two scenarios.
Zzh2> In 5.3.1, the gNB sends packets to B, and you treat B as a BSID on the 
SRGW. In 5.3.2, the gNB also sends to B, and I was saying that while you can't 
call the IPv4 address B a BSID (since it is an IPv4 address), you can still 
"have an IPv4 route for the UPF address B which steers the matching traffic to 
an SR policy with function H.M.GTP4.D". To me, there is no need to be different 
between IPv4 and IPv6 (except that you can't call that IPv4 address a BSID - 
but that's just a naming thing).
[PC3] Ok. I see your point. So basically you think we could define a BindingSID 
for IPv4 using an ULCL and a bunch of loopback addresses on the UPF. I do think 
that is doable from a technical point of view; however, I guess the immediate 
feedback would be that the IPv4 address consumption could be large in 
proportion to the allocated address space. Any comment or consideration in that 
regard?

Zzh3> My point is that no ULCL is needed at all. Just have an IPv4 route for 
the UPF address and steer traffic through a corresponding policy, just like in 
IPv6 case (the only difference is that in IPv6 case you can call it a BSID).
[PC4] I think you have two options which are equivalent. Option A is to have 
different loopback addresses on the SRGW, where each loopback is associated by 
means of local policy with one SR policy. Option B is to have multiple loopback 
addresses on the UPF, and a ULCL function on the SRGW that maps each address to 
a different SR policy. Both are valid IMO. Any preference?

Zzh4> It's the "ULCL" term that is causing trouble. ULCL (in 3GPP) implies 
looking deeper into the packet, not just the destination address - we don't 
need (3GPP) ULCL even for Option B.
Zzh4> In 5.3.1.1, B is not an address on the SRGW (it is received on N2, and 
the AMF should not know about the SRGW at all - so B should be an address on 
the UPF), yet "B is an End.M.GTP6.D SID at the SRGW". Essentially, on any 
router, any address can be treated as a SID, and we just need a route for that 
address/SID to map it to a policy.
Zzh4> Therefore, even with Option B, we don't need ULCL. We just need routes 
for those loopback addresses and map to SR policies.
[PC5] The current text uses a single IPv4 address on the UPF, and the SRGW 
holds per-UE state to do the right steering. I agree both Option A and Option B 
would work as well, but the three options have tradeoffs. I'm not sure there is 
any benefit in either Option A or B given that you would need to burn extra 
IPv4 addresses as loopbacks, and you still need the policy to do the 
steering... I agree that routes for those loopback addresses mapped to SR 
policies are enough. But as said, I'm not sure it has a significant benefit, 
while it could be more complex to operate. 
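[Editor's note] The route-based alternative discussed above could look like 
this (addresses and policy names are hypothetical): instead of a 3GPP ULCL 
rule, the SRGW holds a plain IPv4 route per UPF loopback whose action is an SR 
policy applying H.M.GTP4.D.

```python
# Hypothetical IPv4 routes on the SRGW: each UPF loopback maps to an
# SR policy (which would apply H.M.GTP4.D to the matching traffic).
ROUTES = {
    "203.0.113.10": "policy-best-effort",
    "203.0.113.11": "policy-low-latency",
}

def steer(ipv4_da: str) -> str:
    """Steering based solely on the IPv4 DA (exact match stands in for
    longest-prefix match); no inspection of the GTP TEID is needed."""
    return ROUTES[ipv4_da]
```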

Zzh3> As for "a bunch of loopback addresses on the UPF", don't you need 
something similar for IPv6? The AMF would give different addresses for 
different SLAs, though on the UPF the IPv6 addresses don't need to be tied to 
loopbacks - they can be viewed just as SIDs.
[PC4] Agreed that you will need the same number of addresses in IPv6 or IPv4. 
My point was about the next one...
Zzh3> Well - ok, the IPv4 address consumption could be an issue (you may want 
to clarify in the draft that that's the reason for ULCL) - but how many 
different SLAs would you support? And if there are many, then in the IPv6 case 
would you rely on the AMF to provide different addresses, or do the same as in 
the IPv4 case and rely on ULCL?
[PC4] I would say at most on the order of one hundred. Is that an issue in 
IPv4? That I don't know. If the coauthors and WG see the value, I'll change it. 
I don't have anything against it.

Zzh4> I don't know whether it is an issue or not (I mentioned it "could be an 
issue" following your "IPv4 address consumption could be large in proportion to 
the allocated address space" comment). I am just saying that there is basically 
no difference between IPv4 and IPv6 if you want to achieve similar 
functionality.
[PC5] I do think there is quite a difference between IPv4 and IPv6. In the IPv6 
case you are using an SRv6 BindingSID. In the IPv4 case you are proposing to 
mimic that same behavior using several IPv4 loopback addresses and having a 
local policy that associates each IPv4 loopback with one SR policy. To me there 
is a difference in the management. E.g., do we have a YANG model that allows 
configuring this IPv4 behavior?
 
Zzh4> In fact, if ULCL is used, what kind of rules will be used to steer 
traffic? How does that compare to using some different loopback addresses?
[PC5] In the current text, you steer based on the incoming GTP packet 
(including the TEID field). This is mapped into an SR policy.
[PC5] In your proposal you would be relying on steering solely based on the 
IPv4 DA.
[PC5] How you implement this steering is, I believe, local behavior of the 
router and not externally visible.
 
Zzh4> Jeffrey

Zzh3> Jeffrey


[PC] Also, as a correction, the IPv6 case does NOT steer based on the UPF 
address. It leverages a BSID to perform remote steering.
Zzh> Ok, I was not strict with my language. The BSID is a locator of the UPF 
address, right?
[PC2] No. The Binding SID is defined in RFC8402.
Zzh2> Let me rephrase. In 5.3.1.1, the gNB sends to B, and "B is an 
End.M.GTP6.D SID at the SRGW". If you're saying that the policy associated with 
B actually looks at the TEID or some other parameters to produce different SID 
lists for traffic steering, you need to clarify it. Regardless, 5.3.2.1 does 
not have to be different - the gNB sends to B, and the SRGW can have an IPv4 
route for B to steer traffic through the same policy as in the IPv6 case - 
just that you can't call the IPv4 address B a BSID anymore.
Zzh2> and please see the following in my initial email:
"It seems that the only reason ULCL is used here is just that we don't call an 
IPv4 address a SID - but that does not mean we can't use an IPv4 route to steer 
traffic into a policy (isn't it the same thing that we use an IPv6 route for an 
address that we call SID)?"
[PC3] Already replied in previous block.

Zzh2> Jeffrey

5.3.3.  Extensions to the interworking mechanisms

   In this section we presented two mechanisms for interworking with
   gNBs and UPFs that do not support SRv6.  These mechanisms are used to
   support GTP over IPv4 and IPv6.

Only gNB, not UPF, right?
[PC] Not sure I follow your point. Interworking between a GTP-U gNB and an SRv6 
UPF mainly, but it could also be used with a legacy UPF.

Zzh> 5.3 only presented interworking between SRv6 UPF and GTP-U gNB (not 
between SRv6 gNB and GTP-U UPF), right?
[PC2] Correct.
Zzh> I was wondering if the same GW could be used next to a GTP-U UPF. If yes, 
then why is there a need for "5.4 drop-in mode"?
[PC2] With subtle changes, yes, it could be used. This is defined in 5.4; hence 
indeed the paragraph below can be removed.

   Furthermore, although these mechanisms are designed for interworking
   with legacy RAN at the N3 interface, these methods could also be
   applied for interworking with a non-SRv6 capable UPF at the N9
   interface.

Are you referring to the following?
[PC] Yes

 gNB (GTP-U) -- SRGW1 ----- UPF1  -------  SRGW2 ----- (GTP-U) UPF2

What's the difference between SRGW1 and SRGW2? If there is one, then the above 
paragraph is incorrect.
If there is no difference, why do we need the drop-in mode (which has a 
difference between SRGW-A and SRGW-B)?
[PC] SRGW1 and SRGW2 perform opposite operations. The drop-in mode has GTP-U on 
both ends and SRv6 in the middle, which is slightly different. But from a 
dataplane perspective the SRGWs of both modes are pretty much the same.
Zzh> The point is that SRGW1 and SRGW2 do perform different/opposite operations 
(that's why "5.4 drop-in mode" is needed). With that, the first and last 
paragraphs in 5.3.3 (as I quoted in the earlier email) are not correct, right?
[PC2] I see your point now. Indeed, the paragraph should be removed.

Jeffrey

Juniper Business Use Only

_______________________________________________
dmm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dmm