Hi, Robert:

 

What I want to say is that the proposed “Attributes” are actually “metrics” for 
the prefixes, or attributes of the stub links that connect to the prefix.

It is no different from the metric within the IGP core transport.

It should be noted that in the scenario you mention, all the interface address 
information about the stub links is already within the IGP if you configure 
them as “Passive” or “Stub Link”.  The difference is only whether to advertise 
the interface address (the current status) or the interface prefix address (as 
an attribute of the stub link).
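As a rough illustration of that difference (a hypothetical data model, not taken from any draft), the two advertisement options could be sketched as:

```python
from dataclasses import dataclass

# Option 1: today's passive-interface behaviour -- the IGP only learns
# the interface's own address on the stub link.
@dataclass
class PassiveInterface:
    address: str

# Option 2: the proposed behaviour -- the stub link's prefix is carried
# together with attributes (here just a metric, acting like any IGP metric).
@dataclass
class StubLink:
    prefix: str   # the prefix reachable over the stub link
    metric: int   # "attribute" of the stub link

# The underlying information is the same; only what is advertised differs.
passive = PassiveInterface(address="2001:db8::1")
stub = StubLink(prefix="2001:db8::/64", metric=10)
```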

 

And, as we all understand, the mentioned IGP Egress Engineering (for 
abbreviation, we can call it “IGP-EE”) will exist only for the “ANYCAST” 
address. Will you spread your “ANYCAST” address across all of the stub links? 
If not, we need not worry about the overwhelming possibilities. And I think 
your worry should be addressed in the operational deployment, not in the 
protocol itself.  For example, all the metrics within the IGP are not static 
and can be adjusted, but we adjust them only when necessary.

 

We should also note that the IGPs are considering adding more dynamic metrics 
to meet service/network requirements (for example, the link delay metric).

 

The IGP should evolve to have more flexible capabilities to meet various 
scenarios.  Operators will have their own considerations or restrictions when 
deploying these new features.

 

Best Regards

 

Aijun Wang

China Telecom

 

From: [email protected] <[email protected]> On Behalf Of Robert Raszuk
Sent: Wednesday, January 19, 2022 10:28 PM
To: lsr <[email protected]>; Aijun Wang <[email protected]>; Linda Dunbar 
<[email protected]>
Subject: Re: [Lsr] Seeking feedback to the revised 
draft-dunbar-lsr-5g-edge-compute

 

All,

 

There is a fundamental question with respect to your and Linda's drafts - how 
much service awareness should be carried by link state protocols?

 

Historically, and up to today, IGPs have provided stable underlay reachability 
within the IGP coverage area (including hierarchy). Yes, they were extended to 
carry underlay TE or SR information, but that was still within the core 
transport.

 

External and service information was carried by neither OSPF nor IS-IS. 
Neither was node liveness as an extra indicator.

 

Various services were loaded onto BGP, and BGP was and still continues to be 
the victim of those who think it is cool to make the network service-rich and 
smart.

 

I understand it is now the IGP's turn to be overloaded with lots of external 
information that is, in fact, opaque to the native IGP function. And once we 
open that gate, we will be walking on a cliff whose edge is not rock, but clay 
or sand.

 

Very honestly, I do think that information about services does not belong in 
link state protocols. It is much better handled at the application layer, in 
an end-to-end fashion between the actual application endpoints. We should aim 
to decrease the state carried and handled by IGPs, not spend time on the 
absolute reverse.

 

Take your stub link draft ... say an area may have 100 intra-area links and 
thousands of external logical links (for example, VLANs) which you propose to 
model as links. Moreover, the information advertised with those stub links may 
be dynamic, which, multiplied by that scale, has clear potential to overwhelm 
(in processing or distribution), by orders of magnitude, the important data 
coming with real transport links.
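As a back-of-envelope sketch of that scaling concern (the link counts are the ones above; the per-link update rates are purely illustrative assumptions):

```python
# Hypothetical figures: 100 real transport links vs. 5000 stub links
# (e.g. VLANs), with dynamic stub attributes refreshed far more often.
transport_links = 100
stub_links = 5000                 # "thousands of external logical links"
transport_updates_per_hour = 1    # stable underlay: rare changes per link
stub_updates_per_hour = 10        # dynamic attributes: frequent changes

transport_load = transport_links * transport_updates_per_hour  # 100/hour
stub_load = stub_links * stub_updates_per_hour                 # 50000/hour

# Stub-link flooding dominates transport flooding by orders of magnitude.
ratio = stub_load / transport_load
print(ratio)  # 500.0
```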

 

Kind regards,

Robert

 

 

On Wed, Jan 19, 2022 at 3:06 PM Aijun Wang <[email protected]> wrote:

Hi, Robert:

 

As described in the draft, “all those locations can be close in proximity. 
There might be a tiny difference in the routing distance to reach an 
application instance attached to a different edge router”

So, the “aggregated cost” or other factors associated with the stub link may 
be more important for selecting the right server.

I think it is not important whether the P router or the ingress router 
responds first. In Figure 10, you will notice the client’s traffic is first 
distributed via DNS to different “ANYCAST” addresses. Then, for the same 
“ANYCAST” address, the client will prefer one of the three servers, if these 
servers have different “aggregated costs”.

We can then adjust the “aggregated cost” to influence the traffic distribution 
among these three servers for the same “ANYCAST” address.

For example, if we set all the “aggregated costs” to the same value for one 
“ANYCAST” address (for example, aa08:4450), then the traffic to this address 
will be distributed equally among the three locations. No traffic oscillation 
will occur.
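A minimal sketch of that selection logic (the site names and cost values are illustrative, not from the draft): the network forwards toward the minimum-cost members of the anycast set, so equal aggregated costs yield an ECMP-style equal split.

```python
def anycast_next_hops(costs: dict) -> list:
    """Return the servers that receive traffic for one anycast address:
    the members advertising the minimum aggregated cost (the ECMP set)."""
    best = min(costs.values())
    return sorted(s for s, c in costs.items() if c == best)

# Different aggregated costs: all traffic shifts to the cheapest site.
print(anycast_next_hops({"site-A": 10, "site-B": 20, "site-C": 30}))
# -> ['site-A']

# Equal aggregated costs: traffic is split equally across all three sites.
print(anycast_next_hops({"site-A": 10, "site-B": 10, "site-C": 10}))
# -> ['site-A', 'site-B', 'site-C']
```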

 

The key point here is that the attributes associated with these stub 
links/prefixes should be considered or emphasized.

 

Aijun Wang

China Telecom





On Jan 19, 2022, at 18:19, Robert Raszuk <[email protected]> wrote:



Hi Linda,

 

The aggregated site cost change rate is comparable with the rate of adding or 
removing application instances at locations to adapt to the workload 
distribution changes.

 

[RR] What Les and I have been trying to highlight here is that the above model 
does not work well in the underlay layer.

 

The moment you adjust such a cost, it will not really spread the workload 
distribution, but shift it between servers - members of the given anycast 
address. The forwarding decision will happen at the first common P core 
underlay node from the server side, and not at the ingress to the network - 
which is where you would really want it.

 

Only in very specific topologies may you see some more control, but I would 
say that this is rather an exception than the rule.

 

Thx,

R.

_______________________________________________
Lsr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/lsr
