Tony -
From: Lsr On Behalf Of Tony Li
Sent: Thursday, November 18, 2021 8:07 AM
To: Les Ginsberg (ginsberg)
Cc: Gyan Mishra ; Christian Hopps ;
Aijun Wang ; lsr@ietf.org; Acee Lindem (acee)
; Tony Przygienda
Subject: Re: [Lsr] 【Responses for Comments on PUAM Draft】RE: IETF 112 LSR
Meeting
Tony,
On 18/11/2021 17:38, Tony Li wrote:
Les,
You are not retaining scalability. You are damaging it. You are
proposing flooding a prefix per router that fails. If there is a mass
failure, that would result in flooding a large number of prefixes. The
last thing you want when there is a mass failure is additional load.
A New Internet-Draft is available from the on-line Internet-Drafts directories.
This draft is a work item of the Link State Routing WG of the IETF.
Title: OSPF Transport Instance Extensions
Authors: Acee Lindem, Yingzhen Qu
Les,
> Why would we then punch holes in the summary for member routers? Just
> because we can?
> [LES:] No. We are doing it to improve convergence AND retain scalability.
You are not improving convergence. You are propagating liveness. The fact that
this relates to convergence in the
> If you want to address this problem with BGP keep alive timers, that’s
certainly an alternative as well.
Nope, that is not what I previously described in this thread.
BGP uses recursion, so you can propagate next hops in BGP and recurse your
service route next hops over those with simple config.
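Robert's recursion point can be illustrated with a toy resolver. All prefixes, the table layout, and the helper names below are illustrative assumptions, not any implementation's behavior:

```python
# Toy model of BGP recursive next-hop resolution: a service route's next hop
# resolves over a BGP-propagated next-hop route, which bottoms out on an IGP
# route. Prefixes and table layout are invented for illustration.
from ipaddress import ip_network, ip_address

# RIB: prefix -> (next_hop, source_protocol); next hop None = resolved directly
rib = {
    ip_network("203.0.113.0/24"): ("192.0.2.1", "BGP"),  # service route
    ip_network("192.0.2.1/32"):   ("10.0.0.2", "BGP"),   # PE loopback, via BGP
    ip_network("10.0.0.0/8"):     (None, "IGP"),         # IGP-provided summary
}

def resolve(dest, max_depth=8):
    """Follow next hops until an IGP-resolved prefix is reached (or give up)."""
    addr = ip_address(dest)
    for _ in range(max_depth):
        matches = [p for p in rib if addr in p]
        if not matches:
            return None  # unresolved: the service route becomes unusable
        best = max(matches, key=lambda p: p.prefixlen)  # longest match wins
        next_hop, proto = rib[best]
        if proto == "IGP":
            return best  # recursion bottoms out on the IGP route
        addr = ip_address(next_hop)
    return None

print(resolve("203.0.113.5"))  # 10.0.0.0/8: resolved via two levels of recursion
```

Withdrawing the /32 next-hop route in BGP then invalidates every service route that recurses over it, which is the liveness signal being pointed at here.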
Hi Les,
[LES2:] It is reasonable to limit the rate of pulses sent. Peter has
> already indicated in an earlier reply that we will address that in a future
> version of the event-notification draft. So, good point, and we are in
> agreement regarding mass failure.
>
In respect to the
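The rate limiting agreed on above could take many shapes; a token bucket is one common one. The sketch below is purely an assumption on my part, since the event-notification draft's actual mechanism is only promised for a future version:

```python
# Hypothetical token-bucket limiter for pulse origination; the draft's
# real rate-limiting mechanism is unspecified, so this is one possible shape.
class PulseLimiter:
    """Allow at most `rate` pulses per second, with a burst allowance."""
    def __init__(self, rate=10.0, burst=20, start=0.0):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = start

    def allow(self, now):
        # Refill tokens for elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # under mass failure, excess pulses are suppressed

limiter = PulseLimiter(rate=10.0, burst=20, start=100.0)
# 50 routers fail at the same instant: only the burst worth of pulses goes out.
sent = sum(limiter.allow(now=100.0) for _ in range(50))
print(sent)  # 20
```

Whether suppressed pulses are dropped or queued is exactly the trade-off Robert raises below: suppression bounds flooding load but delays unreachability propagation.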
Robert,
> Why would we need to deploy a full mesh of BFD or introduce a new proxy
> liveness service if BGP can do all what is needed here with just a few lines
> of additional cfg on existing and deployed operating systems ?
If you want to address this problem with BGP keep alive timers, that's certainly an alternative as well.
From: Lsr on behalf of Mike Fox
Sent: 16 November 2021 20:53
Many companies in the industry including mine are undergoing initiatives
to replace offensive terminology in IT. One of the targeted terms is
master/slave, which is used in OSPF database exchange and the terms appear
in various
Hi Tony,
> How so? Doesn’t this correspond 1:1 with overlay BGP sessions?
Not at all.
BGP is never full mesh across multiple areas. RR hops are used. BFD did
not have a concept of RRs last time I looked at it :)
Kind regards,
R.
On Thu, Nov 18, 2021 at 5:38 PM Tony Li wrote:
> Les,
>
>
Agreeing with T. Li here (i.e. BFD next-hops) and let me add that AFAIS the
confusion here is that a presence of a /32 route is used as SSAP liveliness
AFAIS and that's simply not what IGPs are here for if you consider their
main job to be fastest possible convergence in network _reachability_
Acee
We do have host routes in the hundreds of thousands across all areas, and
within an area at each layer (core, aggregation) we also have close to
hundreds of thousands of host routes that are flooded due to the E2E
seamless MPLS hierarchy.
The flooding is much worse with
Tony,
Why would we need to deploy a full mesh of BFD or introduce a new proxy
liveness service if BGP can do all that is needed here with just a few
lines of additional cfg on existing and deployed operating systems?
In respect to using BFD here - let's start with basic question - who would
be
Les,
> You are not retaining scalability. You are damaging it. You are proposing
> flooding a prefix per router that fails. If there is a mass failure, that
> would result in flooding a large number of prefixes. The last thing you want
> when there is a mass failure is additional load,
Les,
Thank you for the clarification.
Rgds
Shraddha
From: Les Ginsberg (ginsberg)
Sent: Thursday, November 18, 2021 8:36 PM
To: Shraddha Hegde ; lsr@ietf.org
Subject: RE: Clarification on RFC 7794
Shraddha -
In the case
Hi Tony
The issue relates to domain-wide flooding to support seamless MPLS E2E
LSPs, where you end up with all host routes from all areas flooded domain-wide
from the Core and Agg layers. So a solution to this domain-wide flooding
is area summarization; however, in order to make summarization
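The summarization trade-off being debated in this thread (a summary hides individual member failures unless a more specific "hole", such as a PUA/pulse, is also signalled) can be sketched like this; the prefixes are illustrative:

```python
# Illustrative sketch (not any vendor's implementation) of why summarization
# hides individual failures: the ABR floods one summary for many host routes,
# so remote areas cannot tell when a single member /32 goes down unless an
# explicit unreachable more-specific is also signalled.
from ipaddress import ip_network, ip_address

summary = ip_network("192.0.2.0/24")  # the one route flooded domain-wide
failed = ip_network("192.0.2.3/32")   # a member PE that has just gone down

def reachable_via_summary(dest):
    # Remote areas only see the summary, so the dead PE still looks reachable.
    return ip_address(dest) in summary

def reachable_with_hole(dest):
    # With an unreachable more-specific ("hole"), remote routers can react.
    if ip_address(dest) in failed:
        return False
    return ip_address(dest) in summary

print(reachable_via_summary("192.0.2.3"))  # True: failure hidden
print(reachable_with_hole("192.0.2.3"))    # False: failure signalled
```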
Hi Gyan,
> On Nov 18, 2021, at 4:13 PM, Gyan Mishra wrote:
>
> The issue exists related to domain wide flooding to support seamless MPLS E2E
> LSP which you end up with all host routes from all areas flooded domain wide
> from Core and Agg layers. So a solution to this domain wide flooding
Hi, Tony:
-----Original Message-----
From: tony1ath...@gmail.com On Behalf Of Tony Li
Sent: Friday, November 19, 2021 9:08 AM
To: Aijun Wang
Cc: Tony Przygienda ; Les Ginsberg (ginsberg)
; Gyan Mishra ; lsr@ietf.org;
Christian Hopps ; Acee Lindem (acee)
Subject: Re: [Lsr] 【Responses for
Hi, Tony:
Aijun Wang
China Telecom
> On Nov 19, 2021, at 00:46, Tony Przygienda wrote:
>
>
> Agreeing with T. Li here (i.e. BFD next-hops) and let me add that AFAIS the
> confusion here is that a presence of a /32 route is used as SSAP liveliness
> AFAIS and that's simply not what IGPs are here for if you consider their
> main job to be fastest possible convergence in network _reachability_
Hi Aijun,
At the risk of Tony confusion…
>> Agreeing with T. Li here (i.e. BFD next-hops) and let me add that AFAIS the
>> confusion here is that a presence of a /32 route is used as SSAP liveliness
>> AFAIS and that's simply not what IGPs are here for if you consider their
>> main job to be fastest possible convergence in network _reachability_
WG,
I am looking for clarification on below text in RFC 7794 sec 2.2
"Note that the Router ID advertised is always the Router ID of the
IS-IS instance that originated the advertisement. This would be true
even if the prefix had been learned from another protocol (i.e., with
the
In such cases I would rather consider implementing a pulse to be less costly
than a host route.
Suppressing them randomly may lead to even bigger disappointment, as the
delayed propagation of unreachability postpones connectivity restoration to a
backup path.
Many thx,
R.
On Thu, Nov 18, 2021 at 1:58 PM Peter Psenak
> On Nov 17, 2021, at 6:09 PM, Aijun Wang wrote:
>
> Hi, Christian:
>
> Would you like to describe how to solve the problem via using the transport
> instance? The detail interaction process within the node and the deployment
> overhead analysis?
As A WG member:
When I said in the
Les, All,
One thing to keep in mind in this entire discussion is the reality of
compute nodes becoming L3 routing nodes in many new architectures. The
protocol which is used between TORs and such compute nodes is almost always
BGP. That means that no matter what we do in the IGP we will not avoid
>
> [WAJ] In the scenarios that you mentioned, BGP nexthop reachability is
> derived from the directed interface, there is no summary action done by the
> router. Is that true?
>
Not necessarily - TORs do not always do eBGP to compute and set next hop
self. There can be an iBGP session there and
Robert,
On 18/11/2021 13:42, Robert Raszuk wrote:
[WAJ] In the scenarios that you mentioned, BGP nexthop reachability
is derived from the directed interface, there is no summary action
done by the router. Is that true?
Not necessarily - TORs do not always do eBGP to compute and
Robert,
On 18/11/2021 11:33, Robert Raszuk wrote:
Les, All,
One thing to keep in mind in this entire discussion is the reality of
compute nodes becoming L3 routing nodes in many new architectures. The
protocol which is used between TORs and such compute nodes is almost
always BGP. That
Hi, Robert:
I want to add one comments based on Peter’s comments.
Aijun Wang
China Telecom
> On Nov 18, 2021, at 19:30, Peter Psenak
> wrote:
>
> Robert,
>
>> On 18/11/2021 11:33, Robert Raszuk wrote:
>> Les, All,
>> One thing to keep in mind in this entire discussion is the reality of
>>
Next hops are part of the IGP summary when advertising outside of the area.
Within the area those are usually host routes or smaller subnets (/24s for
TOR-Compute LANs).
Thx,
R.
On Thu, Nov 18, 2021 at 3:17 PM Aijun Wang
wrote:
> Hi, Robert:
>
> If you use iBGP in your mentioned scenario, how to
On 18/11/2021 15:54, Robert Raszuk wrote:
Btw even with eBGP between TOR and compute, I have just checked and I see
cases of some deployments not setting next hop self on TORs, hence the
original next hop set by compute is used in service routes.
Link between TOR and compute is a passive IGP link/subnet.
Hi, Robert:
If you use iBGP in your mentioned scenario, how do you propagate the
reachability of the BGP nexthop? I think it is impossible to use BGP itself
again, so depending on BGP alone can't solve your problem.
If the underlay uses the IGP to propagate such reachability information, we
have a mechanism
Hi Gyan,
Are you saying you actually have hundreds of thousands of host routes in area
routing domains? I guess this is across many areas? What is the use case
(hopefully, in 40 words or less)?
Thanks,
Acee
From: Gyan Mishra
Date: Wednesday, November 17, 2021 at 9:35 PM
To: Acee Lindem
Cc:
Hi, Christian:
Aijun Wang
China Telecom
> On Nov 18, 2021, at 18:13, Christian Hopps wrote:
>
>
>
>> On Nov 17, 2021, at 6:09 PM, Aijun Wang wrote:
>>
>> Hi, Christian:
>>
>> Would you like to describe how to solve the problem via using the transport
>> instance? The detail interaction process within the node and the deployment
>> overhead analysis?
Btw even with eBGP between TOR and compute, I have just checked and I see
cases of some deployments not setting next hop self on TORs, hence the
original next hop set by compute is used in service routes.
Link between TOR and compute is a passive IGP link/subnet.
Thx,
R.
On Thu, Nov 18, 2021 at 3:17 PM
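Robert's next-hop-self observation reduces to a single policy choice on the TOR. The function below is a hypothetical model of that choice, not a vendor knob:

```python
# Minimal model of the next-hop-self decision: when the TOR does not rewrite
# the next hop, service routes keep the compute node's address, so something
# else (here, the passive IGP link/subnet) must provide reachability to it.
# Addresses are illustrative.
def advertised_next_hop(learned_nh, tor_address, next_hop_self):
    """Next hop the TOR advertises upstream for a route learned from compute."""
    return tor_address if next_hop_self else learned_nh

compute_nh = "10.1.2.10"  # compute node on the TOR-compute LAN
tor = "10.0.0.1"          # TOR loopback

print(advertised_next_hop(compute_nh, tor, next_hop_self=True))   # 10.0.0.1
print(advertised_next_hop(compute_nh, tor, next_hop_self=False))  # 10.1.2.10
```

In the second case, the liveness of the compute node itself becomes visible (or invisible) through whatever carries the 10.1.2.0/24-style LAN subnet, which is the crux of the summarization debate above.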
Shraddha -
In the case you mention (redistribution from another protocol), the advertised
router-id would be the ID of the originating router in the source protocol for
redistribution.
In your example, this ID would come from OSPF. However, the term "router-id"
means something different in