All,

There is a fundamental question with respect to your and Linda's drafts: how
much service awareness should be carried by link-state protocols?

Historically, and to this day, IGPs have provided stable underlay
reachability within the IGP coverage area (including hierarchy). Yes, they
were extended to carry underlay TE or SR information, but that still stayed
within the core transport.

External or service information was carried by neither OSPF nor IS-IS. Nor
was node liveness as an extra indicator.

Various services were loaded onto BGP, and BGP was, and still continues to
be, the victim of those who think it is cool to make the network
service-rich and smart.

I understand it is now the IGPs' turn to be overloaded with lots of
external stuff that is in fact opaque to native IGP function. And once we
open that gate, we will be walking along a cliff whose edge is not rock,
but clay or sand.

Very honestly, I do think that information about services does not belong
in link-state protocols. It is much better handled at the application layer,
in an end-to-end fashion between the actual application endpoints. We
should aim to decrease the state carried and handled by IGPs, not spend
time on the absolute reverse.
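The anycast cost tuning debated further down this thread can be illustrated
with a toy shortest-path sketch (hypothetical topology, node names, and
costs, not taken from any draft): lowering one anycast member's cost does
not spread load between members, it shifts the whole flow to the
now-cheapest member, because the decision is made by plain SPF cost
comparison.

```python
import heapq

def dijkstra(graph, src):
    # graph: {node: [(neighbor, cost), ...]}; returns shortest distances.
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def anycast_members_chosen(graph, src, members):
    # All members advertising the same anycast address; SPF picks the
    # cheapest one(s). Ties mean ECMP across the tied members.
    dist = dijkstra(graph, src)
    best = min(dist[m] for m in members)
    return sorted(m for m in members if dist[m] == best)

# Hypothetical topology: ingress -> common P core node -> two servers
# announcing the same anycast address.
topo = {"ingress": [("P", 10)], "P": [("S1", 5), ("S2", 5)]}
print(anycast_members_chosen(topo, "ingress", ["S1", "S2"]))  # ['S1', 'S2'] - ECMP split

topo["P"] = [("S1", 4), ("S2", 5)]  # lower S1's "aggregated cost" by just 1
print(anycast_members_chosen(topo, "ingress", ["S1", "S2"]))  # ['S1'] - all traffic shifts
```

Note that the shift happens at P, the first common core node from the
server side, regardless of what the ingress would prefer.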

Take your stub link draft ... say an area has 100 intra-area links and
1000s of external logical links (for example, VLANs), which you proposed to
model as links. Moreover, the information advertised with those stub links
may be dynamic, which, multiplied by scale, has clear potential to
overwhelm (in processing or distribution), by orders of magnitude, the
important data coming with real transport links.
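A back-of-envelope sketch of the scale concern above (all numbers are
hypothetical assumptions, including the churn rates, which the draft does
not specify):

```python
# Hypothetical area from the example above: 100 real intra-area links,
# thousands of stub links (e.g. VLANs) modeled as links.
TRANSPORT_LINKS = 100
STUB_LINKS = 5_000

# Assumed churn: stub-link attributes are dynamic and change more often
# than transport link state (both rates are illustrative guesses).
TRANSPORT_CHANGE_RATE = 0.01  # fraction of transport links changing per hour
STUB_CHANGE_RATE = 0.10       # fraction of stub links changing per hour

transport_updates = TRANSPORT_LINKS * TRANSPORT_CHANGE_RATE
stub_updates = STUB_LINKS * STUB_CHANGE_RATE

print(f"transport LSA updates/hour: {transport_updates:.0f}")   # 1
print(f"stub-link LSA updates/hour: {stub_updates:.0f}")        # 500
print(f"stub flooding exceeds transport flooding by "
      f"{stub_updates / transport_updates:.0f}x")               # 500x
```

Even with modest churn assumptions, flooding driven by stub links swamps
the updates for the links SPF actually depends on.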

Kind regards,
Robert


On Wed, Jan 19, 2022 at 3:06 PM Aijun Wang <[email protected]>
wrote:

> Hi, Robert:
>
> As described in the draft, “all those locations can be close in proximity.
> There might be a tiny difference in the routing distance to reach an
> application instance attached to a different edge router”
> So, the “aggregated cost” or other factors associated with the stub
> link may be more important for selecting the right server.
> I think it is not important whether the P router or the ingress router
> responds first. In Figure 10, you will notice the client’s traffic is
> first distributed via the DNS to different “ANYCAST” addresses. Then, for
> the same “ANYCAST” address, the client will prefer one of the three
> servers if these servers have different “aggregated costs”.
> We can now adjust the “aggregated cost” to influence the traffic
> distribution among these three servers for the same “ANYCAST” address.
> For example, if we set all the “aggregated costs” the same for one
> “ANYCAST” address (for example, aa08:4450), then the traffic to this
> address will be distributed equally among the three locations. No traffic
> oscillation will occur.
>
> The key point here is that the attributes associated with these stub
> links/prefixes should be considered or emphasized.
>
> Aijun Wang
> China Telecom
>
> On Jan 19, 2022, at 18:19, Robert Raszuk <[email protected]> wrote:
>
> 
> Hi Linda,
>
> *The aggregated site cost change rate is comparable with the rate of
> adding or removing application instances at locations to adapt to the
> workload distribution changes.*
>
> [RR] What Les and I have been trying to highlight here is that the
> above model does not work well in the underlay layer.
>
> The moment you adjust such a cost, you will not really spread the
> workload, but shift it between servers - members of a given anycast
> address. The forwarding decision will happen at the first common P core
> underlay node from the server side, and not at the ingress to the network -
> which is where you would really want it.
>
> Only in very specific topologies may you see some more control, but I
> would say that is the exception rather than the rule.
>
> Thx,
> R.
>
>
_______________________________________________
Lsr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/lsr