Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-15 Thread Mark Tinka


On 15/Mar/18 12:18, adamv0...@netconsultings.com wrote:

> Maybe you might start looking at some scaling techniques when you have a
> need to transport multiple paths for a prefix for load-sharing or
> primary-backup use cases, say to reduce Internet convergence times from 2
> minutes down to less than 1 ms (MX with 2M prefixes).

As I mentioned to Saku, we already do load balancing at the IGP/LDP
layer. We don't need multiple paths in BGP to achieve that. It works well.

The NEXT_HOP is abstracted for BGP destinations at the edge, so
installing an alternate path does not take 2 minutes. It's instant.
We've got a highly distributed (peering) edge, so a customer-serving
router will not lose the entire Internet in one go, unless that device
suffers a catastrophe itself.
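As a toy illustration of that next-hop indirection (the names, prefixes and
paths below are made up): the BGP entries only point at an edge loopback, and
the paths towards that loopback come from the IGP, so an IGP path change never
has to touch the BGP table itself.

bgp_rib   = {"203.0.113.0/24": "edge1-loopback", "198.51.100.0/24": "edge1-loopback"}
igp_paths = {"edge1-loopback": ["via-p1", "via-p2"]}  # ECMP at the IGP/LDP layer

def forwarding_paths(prefix):
    next_hop = bgp_rib[prefix]            # the BGP entry only points at the loopback
    return next_hop, igp_paths[next_hop]  # the actual paths come from the IGP

print(forwarding_paths("203.0.113.0/24"))
igp_paths["edge1-loopback"] = ["via-p2"]  # an IGP path fails; the BGP table is untouched
print(forwarding_paths("203.0.113.0/24"))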

Mark.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-15 Thread adamv0025
Actually, the fully-meshed RRs do "reflect" routes to each other.
They are still breaking the iBGP-to-iBGP rule; they're just exercising the
"client to non-client" rule in 2) when relaying a route from a local cluster to
RRs in other clusters, and in turn those RRs in other clusters employ the
"non-client to client" rule in 1) to relay the route to their local-cluster clients:
1) A route from a Non-Client IBGP peer:
 Reflect to all the Clients.
2) A route from a Client peer:
 Reflect to all the Non-Client peers and also to the Client
 peers.  (Hence the Client peers are not required to be fully
 meshed.)
But I see what you mean: it's not client-to-client reflection - that happens
only between clients within each individual local cluster.
        CtoC      CtoNC      NCtoC
C -----> RR --------> RR --------> C
         |
C <------+
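For illustration only, a small Python sketch of those two reflection rules (the
peer names and roles are made up, and split-horizon back to the sending peer is
ignored):

def reflect_targets(received_from_role, peers):
    """peers: dict of peer name -> role ('client' or 'non-client')."""
    if received_from_role == "non-client":
        # Rule 1: a route from a non-client iBGP peer is reflected to clients only.
        return sorted(p for p, role in peers.items() if role == "client")
    # Rule 2: a route from a client is reflected to all non-clients and all clients.
    return sorted(peers)

peers = {"pe1": "client", "pe2": "client", "rr-other-cluster": "non-client"}
print(reflect_targets("client", peers))      # ['pe1', 'pe2', 'rr-other-cluster']
print(reflect_targets("non-client", peers))  # ['pe1', 'pe2']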


Anyway, I understand your design choice and it makes sense in your environment:
your iBGP infrastructure currently carries a single copy of the DFZ routing
table, there's most likely some path hiding going on as the RRs perform
best-path selection on behalf of clients, and on top of that there is a plethora
of resources on the RRs, so I agree there's no need to deploy any scaling
tuning, as it would just be added complexity with no perceived benefit.
Maybe you might start looking at some scaling techniques when you have a
need to transport multiple paths for a prefix for load-sharing or
primary-backup use cases, say to reduce Internet convergence times from 2
minutes down to less than 1 ms (MX with 2M prefixes).


adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::

From: Mark Tinka [mailto:mark.ti...@seacom.mu] 
Sent: Tuesday, March 13, 2018 11:29 PM
To: adamv0...@netconsultings.com; 'Saku Ytti'
Cc: 'Job Snijders'; 'Cisco Network Service Providers'
Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)


On 13/Mar/18 18:47, adamv0...@netconsultings.com wrote:
OK, you're still missing the point; let me try with the following example.

Now suppose we both have: 
pe1-cluster-1 sending prefix X to rr1-cluster1 and rr2-cluster1 and these are 
then reflecting it further to RRs in cluster2

Okay, so just to be pedantic, fully-meshed RR's don't "reflect" routes to each 
other (they can, but it's redundant). 

But I get what you're trying to say...



Now in your case:
rr1-cluster2 receives prefix X from both rr1-cluster1 and rr2-cluster1 - so how
many paths will it keep? Yes, 2.
rr2-cluster2 receives prefix X from both rr1-cluster1 and rr2-cluster1 - so how
many paths will it keep? Yes, 2.

In my case:
rr1-cluster2 receives prefix X from rr1-cluster1 - so how many paths will it
keep? Yes, 1.
rr2-cluster2 receives prefix X from rr2-cluster1 - so how many paths will it
keep? Yes, 1.

Yes, understood.

So in our case, we are happy to hold several more paths this way within our RR 
infrastructure in exchange for a standard, simple configuration, i.e., a 
full-mesh amongst all RR's.

Having to design RR's such that RR1-Cluster-A only peers with RR1-Cluster-B_Z, 
rinse repeat for RR2-* is just operational complexity that requires too much 
tracking. Running the network is hard enough as it is.

We are taking full advantage of the processing and memory power we have on our 
x86 platforms to run our RR's. We don't have the typical constraints associated 
with purpose-built routers configured as RR's. It's been a long time since I 
had dedicated Juniper M120's running as RR's - I'm never going back to those 
days :-).

Mark.


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-13 Thread Mark Tinka


On 13/Mar/18 18:47, adamv0...@netconsultings.com wrote:

> OK, you're still missing the point; let me try with the following example.
>
> Now suppose we both have: 
> pe1-cluster-1 sending prefix X to rr1-cluster1 and rr2-cluster1 and these are 
> then reflecting it further to RRs in cluster2

Okay, so just to be pedantic, fully-meshed RR's don't "reflect" routes
to each other (they can, but it's redundant).

But I get what you're trying to say...

> Now in your case:
> rr1-cluster2 receives prefix X from both rr1-cluster1 and rr2-cluster1 - so how
> many paths will it keep? Yes, 2.
> rr2-cluster2 receives prefix X from both rr1-cluster1 and rr2-cluster1 - so how
> many paths will it keep? Yes, 2.
>
> In my case:
> rr1-cluster2 receives prefix X from rr1-cluster1 - so how many paths will it
> keep? Yes, 1.
> rr2-cluster2 receives prefix X from rr2-cluster1 - so how many paths will it
> keep? Yes, 1.

Yes, understood.

So in our case, we are happy to hold several more paths this way within
our RR infrastructure in exchange for a standard, simple configuration,
i.e., a full-mesh amongst all RR's.

Having to design RR's such that RR1-Cluster-A only peers with
RR1-Cluster-B_Z, rinse repeat for RR2-* is just operational complexity
that requires too much tracking. Running the network is hard enough as
it is.

We are taking full advantage of the processing and memory power we have
on our x86 platforms to run our RR's. We don't have the typical
constraints associated with purpose-built routers configured as RR's.
It's been a long time since I had dedicated Juniper M120's running as
RR's - I'm never going back to those days :-).

Mark.

Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-13 Thread adamv0025
OK, you're still missing the point; let me try with the following example.

Now suppose we both have: 
pe1-cluster-1 sending prefix X to rr1-cluster1 and rr2-cluster1 and these are 
then reflecting it further to RRs in cluster2

Now in your case:
rr1-cluster2 receives prefix X from both rr1-cluster1 and rr2-cluster1 - so how
many paths will it keep? Yes, 2.
rr2-cluster2 receives prefix X from both rr1-cluster1 and rr2-cluster1 - so how
many paths will it keep? Yes, 2.

In my case:
rr1-cluster2 receives prefix X from rr1-cluster1 - so how many paths will it
keep? Yes, 1.
rr2-cluster2 receives prefix X from rr2-cluster1 - so how many paths will it
keep? Yes, 1.
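For what it's worth, a trivial Python sketch that just restates the session
fan-in above as a path count (the topology names follow the example; the
sessions themselves are assumed):

sessions = {
    "full mesh of RR1s and RR2s": {
        "rr1-cluster2": ["rr1-cluster1", "rr2-cluster1"],
        "rr2-cluster2": ["rr1-cluster1", "rr2-cluster1"],
    },
    "paired RR1/RR2 planes": {
        "rr1-cluster2": ["rr1-cluster1"],
        "rr2-cluster2": ["rr2-cluster1"],
    },
}

for design, fan_in in sessions.items():
    for rr, senders in fan_in.items():
        print(f"{design}: {rr} keeps {len(senders)} path(s) for prefix X")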

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::

From: Mark Tinka [mailto:mark.ti...@seacom.mu] 
Sent: Tuesday, March 13, 2018 2:36 PM
To: adamv0...@netconsultings.com; 'Saku Ytti'
Cc: 'Job Snijders'; 'Cisco Network Service Providers'
Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)


On 13/Mar/18 13:04, adamv0...@netconsultings.com wrote:
Keeping RR1s separate from RR2s is all about memory efficiency,

That memory saving could now be entering the realm of "extreme", but hey, your 
network :-)...



The same rationale is behind having sessions between RRs as non-client sessions.
- In combination with separate RR1 and RR2 infrastructures each RR (RR1/RR2) in 
the network learns only one path for any single homed prefix. 
- If I’d have RR1s and RR2s all mixed in a full-mesh then each RR would get 2 
paths 
That is double the amount of state I’d need to keep on every RR in comparison 
with separate RR1 and RR2 infrastructures.

The actual routes the RR's would exchange with one another in a shared 
Cluster-ID scenario would be the routes they originate. Provided you are a 
network that does not de-aggregate, you're looking at just whatever allocations 
you obtain from your favorite RIR. Not about to break the memory bank (pun 
intended, hehe) compared to the state of the global Internet routing table.



If sessions between RRs are configured as client sessions AND 

Don't know why you'd do that.

Inter-/intra-RR sessions should not be clients, IMHO.

But like I said, your network...

Mark.


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-13 Thread Mark Tinka


On 13/Mar/18 13:04, adamv0...@netconsultings.com wrote:

> Keeping RR1s separate from RR2s is all about memory efficiency,

That memory saving could now be entering the realm of "extreme", but hey,
your network :-)...

>   
> The same rationale is behind having sessions between RRs as non-client 
> sessions.
> - In combination with separate RR1 and RR2 infrastructures each RR (RR1/RR2) 
> in the network learns only one path for any single homed prefix. 
> - If I’d have RR1s and RR2s all mixed in a full-mesh then each RR would get 2 
> paths 
> That is double the amount of state I’d need to keep on every RR in comparison 
> with separate RR1 and RR2 infrastructures.

The actual routes the RR's would exchange with one another in a shared
Cluster-ID scenario would be the routes they originate. Provided you are
a network that does not de-aggregate, you're looking at just whatever
allocations you obtain from your favorite RIR. Not about to break the
memory bank (pun intended, hehe) compared to the state of the global
Internet routing table.

> If sessions between RRs are configured as client sessions AND 

Don't know why you'd do that.

Inter-/intra-RR sessions should not be clients, IMHO.

But like I said, your network...

Mark.

Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-13 Thread adamv0025
Keeping RR1s separate from RR2s is all about memory efficiency,

I go with the premise that all I need is a full mesh between RR1s in order to 
distribute all the routing information across the whole backbone.
The exact mirror of RR1s’ infrastructure (same topology, same set of prefixes) 
but composed of RR2s is there just in case something happens to RR1s 
infrastructure (merely a backup). 
This model is *50% more efficient with memory on every RR in comparison with 
the model where RR1s and RR2s are in a full-mesh.  
*this is when Type 1 RDs are used, so there’s no state lost on RR to RR 
sessions. 

The same rationale is behind having sessions between RRs as non-client sessions.
- In combination with separate RR1 and RR2 infrastructures each RR (RR1/RR2) in 
the network learns only one path for any single homed prefix. 
- If I’d have RR1s and RR2s all mixed in a full-mesh then each RR would get 2 
paths 
That is double the amount of state I’d need to keep on every RR in comparison 
with separate RR1 and RR2 infrastructures.

If sessions between RRs are configured as client sessions AND 
- You keep RR1s separate from RR2s, then each RR would learn N-1 paths for each 
single homed prefix, where N is the number of RR1s.
- You have full-mesh between RR1s and RR2s then each RR would learn N-1 paths 
for each single homed prefix, where N is the number of all RRs (RR1s+RR2s). 
This clearly does not scale as the amount of state on each RR is proportional 
to the number of RRs in the network.
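As a rough, purely illustrative way of putting numbers on this (the RR count
and table size below are hypothetical):

def paths_per_rr(n_rrs, rr_sessions_are_client_sessions):
    # Approximate paths each RR holds for one single-homed prefix.
    if rr_sessions_are_client_sessions:
        return n_rrs - 1  # client-to-client reflection hands the prefix back from every other RR
    return 1              # non-client RR-RR sessions in separate RR1/RR2 planes

n_rr1s = 6            # hypothetical number of RR1s
prefixes = 2_000_000  # hypothetical table size

for client_sessions in (False, True):
    per_prefix = paths_per_rr(n_rr1s, client_sessions)
    print(f"RR-RR sessions as clients={client_sessions}: "
          f"{per_prefix} path(s)/prefix, ~{per_prefix * prefixes:,} paths per RR")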

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::



Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Mark Tinka


On 12/Mar/18 23:04, Saku Ytti wrote:

> Quite different thing, right? You won't get balancing for a single
> prefix to multiple edges with IGP/LDP. You just get multiple paths to
> the same edge?

That is okay for us because we have a very distributed edge, each
focusing on being the best path to an eBGP destination.

Mark.


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Saku Ytti
On 12 March 2018 at 22:58, Mark Tinka  wrote:

> We are doing ECMP at the IGP/LDP layer.

Quite different thing, right? You won't get balancing for a single
prefix to multiple edges with IGP/LDP. You just get multiple paths to
the same edge?

-- 
  ++ytti


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Mark Tinka


On 12/Mar/18 20:43, Saku Ytti wrote:

> add-path is not just about the backup path, it's also about sending ECMP paths.
>
> Some vendors do, and all vendors should, have separate add-path knobs for
> how many best paths to send and how many backup paths to send.
> You'd likely always want all ECMP paths, but at most you care about one
> backup path.

We are doing ECMP at the IGP/LDP layer.


> OFC if you run INET in VRF you can just use unique RD per router, if
> you have the DRAM.

I still bow to operators that run Internet in a VRF :-).

Mark.


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Saku Ytti
add-path is not just about the backup path, it's also about sending ECMP paths.

Some vendors do, and all vendors should, have separate add-path knobs for
how many best paths to send and how many backup paths to send.
You'd likely always want all ECMP paths, but at most you care about one
backup path.

OFC if you run INET in VRF you can just use unique RD per router, if
you have the DRAM.
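As a toy illustration of that best-paths/backup-paths split (not any vendor's
implementation; paths here are ranked on a made-up IGP metric only): advertise
every ECMP-best path plus at most one backup.

def paths_to_advertise(paths, backups=1):
    # paths: list of (next_hop, igp_metric) tuples
    best = min(metric for _, metric in paths)
    ecmp = [nh for nh, metric in paths if metric == best]
    rest = sorted([p for p in paths if p[1] != best], key=lambda p: p[1])
    return ecmp + [nh for nh, _ in rest[:backups]]

paths = [("pe1", 10), ("pe2", 10), ("pe3", 20), ("pe4", 30)]
print(paths_to_advertise(paths))  # ['pe1', 'pe2', 'pe3'] - both ECMP paths plus one backup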


On 12 March 2018 at 20:39, Mark Tinka  wrote:
>
>
> On 12/Mar/18 19:20, adamv0...@netconsultings.com wrote:
>
>> With regards to ORR, are you using add-path already, or are the RRs doing
>> all the path selection on behalf of clients, please?
>>
>
> When Add-Paths (and Diverse-Paths) came out, we did some basic
> benchmarking for re-route convergence performance in our network, and
> did not see the benefit of increasing BGP state for a return in faster
> re-routing, i.e., given our architecture, there was no actual value for
> the cost of additional state. So no, we do not use Add-Paths or
> Diverse-Paths.
>
> Our focus has been on maintaining high-speed performance of the IGP as
> the network scales, while abstracting BGP oscillations through stable
> next-hops.
>
> Mark.



-- 
  ++ytti


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Mark Tinka


On 12/Mar/18 19:20, adamv0...@netconsultings.com wrote:

> With regards to ORR, are you using add-path already, or are the RRs doing
> all the path selection on behalf of clients, please?
>

When Add-Paths (and Diverse-Paths) came out, we did some basic
benchmarking for re-route convergence performance in our network, and
did not see the benefit of increasing BGP state for a return in faster
re-routing, i.e., given our architecture, there was no actual value for
the cost of additional state. So no, we do not use Add-Paths or
Diverse-Paths.

Our focus has been on maintaining high-speed performance of the IGP as
the network scales, while abstracting BGP oscillations through stable
next-hops.

Mark.


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Mark Tinka


On 12/Mar/18 19:36, adamv0...@netconsultings.com wrote:

> Cluster-ID saves RAM only if RR1 and RR2 are connected like in your
> case, if they are not and RR1s only talk to RR1 in other POPs and RR2s
> only talk to RR2s in other POPs/Clusters then the Cluster-ID is just
> for loop prevention really.
>

Not sure why you'd want to partition RR iBGP adjacencies.

I believe in fully meshing all RR's, regardless of cluster location.


>  
>
> And on a side note,
>
> Although Cluster-ID saves some RAM in case RR1 and RR2 are connected
> and configured with the same Cluster-ID, it doesn't save CPU cycles
> (and the RAM needed for the corresponding RIB-out/RIB-in) where RR1 sends
> a couple million prefixes to RR2 only for RR2 to drop all of those (well,
> apart from the couple of odd prefixes originated by RR1), and vice
> versa in the opposite direction.
>

True, but in our case, RAM would be more precious than CPU (remember, we
are running our RR's on a VM sitting on top of a multi-core x86 platform).

The CSR1000v images come with differing levels of supported memory for
their own operations. It's not a function of the hardware; it's a function
of the state of the art of the actual VM image itself.

So you might have as much as 768GB of RAM on the hardware, but the VM is
only set up to support 4GB, or 8GB, or 16GB, or 32GB, or... The amount of
RAM the VM image will support increases with every iteration of the
build/release cycle.

Mark.

Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Mark Tinka


On 12/Mar/18 19:14, adamv0...@netconsultings.com wrote:

> Hmm well ok, I guess if you have one set of static routes on RR1 and
> one set of static routes/loopback on RR2 –then sure you might want to
> use iBGP session between RR1 and RR2 for redundancy purposes (if say
> the particular RR1 is the only place you originate the given route from)
>
> –but why not originating the same set of static routes/loopbacks out
> of both RRs in this case?
>

All RR's originate consistently.


>  
>
> I mean in your case with common Cluster-ID on both RR1 and RR2 these
> odd per-RR routes are the only thing exchanged over that iBGP session
> anyways right? 
>

Correct.

Mark.

Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread adamv0025
Cluster-ID saves RAM only if RR1 and RR2 are connected like in your case; if
they are not, and RR1s only talk to RR1s in other POPs and RR2s only talk to RR2s
in other POPs/clusters, then the Cluster-ID is really just for loop prevention.

 

And on a side note, 

Although Cluster-ID saves some RAM in case RR1 and RR2 are connected and
configured with the same Cluster-ID, it doesn't save CPU cycles (and the RAM
needed for the corresponding RIB-out/RIB-in) where RR1 sends a couple million
prefixes to RR2 only for RR2 to drop all of those (well, apart from the couple of
odd prefixes originated by RR1), and vice versa in the opposite direction.

 

adam

 

netconsultings.com

::carrier-class solutions for the telecommunications industry::

 

From: Mark Tinka [mailto:mark.ti...@seacom.mu] 
Sent: Monday, March 12, 2018 2:52 PM
To: adamv0...@netconsultings.com; 'Saku Ytti'
Cc: 'Job Snijders'; 'Cisco Network Service Providers'
Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

 

 

On 12/Mar/18 16:19, adamv0...@netconsultings.com wrote:

In the iBGP infrastructures I have used or built, common/unique cluster IDs do
not save any memory and are used solely to prevent an RR from learning its own
advertisements back from the network.


That saves RAM, otherwise with unique Cluster-ID's, RR's will exchange
client routes with each other, using up RAM.

But this only applies to client routes. Routes originated by the RR are learned 
by neighbor RR's in a shared Cluster-ID scenario, which is useful.

Mark.


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread adamv0025
Oh I see that makes sense, if all your revenue is in Internet services then of 
course it’s hard to justify building separate iBGP infrastructure to protect 
the handful of pure VPN customers.  

 

With regards to ORR, are you using add-path already, or are the RRs doing all the
path selection on behalf of clients, please?

 

adam  

 

netconsultings.com

::carrier-class solutions for the telecommunications industry::

 

From: Mark Tinka [mailto:mark.ti...@seacom.mu] 
Sent: Monday, March 12, 2018 2:41 PM
To: adamv0...@netconsultings.com; 'Job Snijders'
Cc: 'Curtis Piehler'; 'Cisco Network Service Providers'
Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

 

 

On 12/Mar/18 13:02, adamv0...@netconsultings.com wrote:

If RR1s and RR2s never talk to each other then it doesn't matter
whether they have common or unique Cluster-IDs


Agreed. But in our case, they do.





Job is right, you should at least use separate TCP sessions for different
AFs,


Which BGP Session/Policy templates lend themselves naturally to, already.





but if you have Internet prefixes carried by VPNv4 then you're still in
danger, unless you carve up a separate BGP process or iBGP infrastructure
for Internet prefixes. Yes, the BGP Attribute Filter and Enhanced Attribute
Error Handling should keep you relatively safe, but I wouldn't count on it
(it's still not an RFC and I haven't dug into it for years, so I'm not sure
where vendors are at with addressing all the requirements in the draft).


We don't do Internet in a VRF. 





 
Internet is a wild west, with universities advertising unknown attributes and
operators prepending their AS 255+ times, and you can only hope that any such
event will bring your border PE sessions down and won't actually be
relayed to your RRs, which then start dropping sessions to all clients or
restarting BGP/RPD processes...
Hence my requirement for dedicated iBGP infrastructure for Internet
prefixes. 


Our main business is regular Internet. VPNv4/VPNv6 accounts for not even a 
rounding error in % terms on our network.

That said, it is useful to run the latest code you possibly can on your edge
routers and RR's to protect yourself against dodgy attributes or unexpected
NLRI behaviour.






One fact that people usually overlook with ORR in MPLS backbones is that ORR
actually requires prefixes to be the same when they arrive at the RRs, so the RRs
can ORR on the best and second-best paths for a particular set of clients.
And my experience is that operators are usually using Type 1 RDs in their
backbones (so PE-RID:VPN-ID format), as that was the only way to get
ECMP or BGP-PIC/local-repair working ages before Add-Path got introduced.
And in the case of Type 1 RDs, ORR is useless, as the RRs in that case are merely
reflecting routes and are not performing any path selection on behalf of
clients.
So the only way to make use of ORR is to completely change your RD plan
to Type 0 RDs, VPN by VPN (maybe starting with the Internet VRF), and introduce
add-path as a prerequisite, of course.


We don't run Internet in a VRF, so shouldn't have that concern.

As soon as ORR is available on the CSR1000v, I'll provide some feedback on its 
performance in our scenario.

Mark.


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread adamv0025
Hmm, well, OK. I guess if you have one set of static routes on RR1 and one set of
static routes/loopbacks on RR2, then sure, you might want to use an iBGP session
between RR1 and RR2 for redundancy purposes (if, say, that particular RR1 is the
only place you originate the given route from)

- but why not originate the same set of static routes/loopbacks out of both
RRs in this case?

 

I mean in your case with common Cluster-ID on both RR1 and RR2 these odd per-RR 
routes are the only thing exchanged over that iBGP session anyways right?  

 

adam

 

netconsultings.com

::carrier-class solutions for the telecommunications industry::

 

From: Mark Tinka [mailto:mark.ti...@seacom.mu] 
Sent: Monday, March 12, 2018 2:31 PM
To: adamv0...@netconsultings.com; 'Job Snijders'
Cc: 'Cisco Network Service Providers'
Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

 

 

On 12/Mar/18 12:34, adamv0...@netconsultings.com wrote:

The only scenario I can think of is if your two RRs, say RR1 and RR2, in a POP
serving a set of clients (by definition a cluster btw) have an iBGP session to
each other - which is a big NONO when you are using out-of-band RRs, no
seriously.


Why is it a big no-no?

We run iBGP sessions between our RR's within and outside of a cluster. No 
issues. It's useful for the RR's to learn, from each other, which routes they 
are originating, regardless of which cluster they may belong to.

And yes, these are out-of-path RR's.

Mark.


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Mark Tinka


On 12/Mar/18 16:19, adamv0...@netconsultings.com wrote:

> In the iBGP infrastructures I have used or built, common/unique cluster IDs do
> not save any memory and are used solely to prevent an RR from learning its own
> advertisements back from the network.

That saves RAM, otherwise with unique Cluster-ID's, RR's will
exchange client routes with each other, using up RAM.

But this only applies to client routes. Routes originated by the RR are
learned by neighbor RR's in a shared Cluster-ID scenario, which is useful.

Mark.


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Mark Tinka


On 12/Mar/18 13:54, Saku Ytti wrote:

> Typical reason for RR1, RR2 to have iBGP to each other is when they
> are in forwarding path and are not dedicated RR, but also have
> external BGP to them.

Or if the RR's are originating routes themselves.


> In your case, if the cluster isn't even peering with itself, then
> there truly is no purpose for clusterID.

100% agreed.

Mark.


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Mark Tinka


On 12/Mar/18 13:02, adamv0...@netconsultings.com wrote:

> If RR1s and RR2s never talk to each other then it doesn't matter
> whether they have common or unique Cluster-IDs

Agreed. But in our case, they do.


> Job is right, you should at least use separate TCP sessions for different
> AFs,

Which BGP Session/Policy templates lend themselves naturally to, already.


> but if you have Internet prefixes carried by VPNv4 then you're still in
> danger, unless you carve up a separate BGP process or iBGP infrastructure
> for Internet prefixes. Yes, the BGP Attribute Filter and Enhanced Attribute
> Error Handling should keep you relatively safe, but I wouldn't count on it
> (it's still not an RFC and I haven't dug into it for years, so I'm not sure
> where vendors are at with addressing all the requirements in the draft).

We don't do Internet in a VRF.


> Internet is a wild west, with universities advertising unknown attributes and
> operators prepending their AS 255+ times, and you can only hope that any such
> event will bring your border PE sessions down and won't actually be
> relayed to your RRs, which then start dropping sessions to all clients or
> restarting BGP/RPD processes...
> Hence my requirement for dedicated iBGP infrastructure for Internet
> prefixes. 

Our main business is regular Internet. VPNv4/VPNv6 accounts for not even
a rounding error in % terms on our network.

That said, it is useful to run the latest code you possibly can on your
edge routers and RR's to protect yourself against dodgy attributes or
unexpected NLRI behaviour.



> One fact that people usually overlook with ORR in MPLS backbones is that ORR
> actually requires prefixes to be the same when they arrive at the RRs, so the RRs
> can ORR on the best and second-best paths for a particular set of clients.
> And my experience is that operators are usually using Type 1 RDs in their
> backbones (so PE-RID:VPN-ID format), as that was the only way to get
> ECMP or BGP-PIC/local-repair working ages before Add-Path got introduced.
> And in the case of Type 1 RDs, ORR is useless, as the RRs in that case are merely
> reflecting routes and are not performing any path selection on behalf of
> clients.
> So the only way to make use of ORR is to completely change your RD plan
> to Type 0 RDs, VPN by VPN (maybe starting with the Internet VRF), and introduce
> add-path as a prerequisite, of course.

We don't run Internet in a VRF, so shouldn't have that concern.

As soon as ORR is available on the CSR1000v, I'll provide some feedback
on its performance in our scenario.

Mark.


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Mark Tinka


On 12/Mar/18 12:34, adamv0...@netconsultings.com wrote:

> The only scenario I can think of is if your two RRs say RR1 and RR2 in
> a POP
> serving a set of clients (by definition a cluster btw) -if these two RRs
> have an iBGP session to each other - which is a big NONO when you are using
> out of band RRs, no seriously. 

Why is it a big no-no?

We run iBGP sessions between our RR's within and outside of a cluster.
No issues. It's useful for the RR's to learn, from each other, which
routes they are originating, regardless of which cluster they may belong to.

And yes, these are out-of-path RR's.

Mark.


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread adamv0025
In the iBGP infrastructures I have used or built, common/unique cluster IDs do
not save any memory and are used solely to prevent an RR from learning its own
advertisements back from the network.

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::

> -Original Message-
> From: Saku Ytti [mailto:s...@ytti.fi]
> Sent: Monday, March 12, 2018 1:06 PM
> To: adamv0...@netconsultings.com
> Cc: Job Snijders; Mark Tinka; Cisco Network Service Providers
> Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)
> 
> Routing loop to me sounds like operational problem, that things are broken.
> That will not happen. Otherwise we're saying every network has routing
> loops, because if you consider all RIB in every box, there are tons of loops. 
> I
> think we all agree most networks are loop free :>
> 
> You are saving DRAM, that's it.
> 
> In your case you're not even saving DRAM as cluster doesnt peer with itself,
> so for you it's just additional complexity with no upside.
> Harder to generate config, as you need to teach some system the relation
> which node is in which cluster. Where as cluster-id==loop0 is 0-knowledge.
> 
> 
> On 12 March 2018 at 14:42,   wrote:
> > Ok I agree if a speaker is not connect to both (all) RRs in a cluster then 
> > you
> need to make up for that by connecting RRs to each other.
> >
> > Well isn't avoiding routing loops ultimately saving DRAM?
> > I'd argue the cluster-id comparison is either about preventing acceptance
> of one's own advertisement (RRs talking in circle) or about preventing
> learning clients routes from a different RR (your diagram) - hence preventing
> routing loops and saving DRAM.
> >
> > adam
> >
> > netconsultings.com
> > ::carrier-class solutions for the telecommunications industry::
> >
> >> -Original Message-
> >> From: Saku Ytti [mailto:s...@ytti.fi]
> >> Sent: Monday, March 12, 2018 11:54 AM
> >> To: adamv0...@netconsultings.com
> >> Cc: Job Snijders; Mark Tinka; Cisco Network Service Providers
> >> Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)
> >>
> >> On 12 March 2018 at 13:41,   wrote:
> >>
> >> Typical reason for RR1, RR2 to have iBGP to each other is when they
> >> are in forwarding path and are not dedicated RR, but also have
> >> external BGP to them.
> >>
> >> And no, clusterID are not used for loop prevention, they are used to
> >> save DRAM. There will be no routing loops by using arbitrary RR
> >> topology with clusterID==loopback, BGP best path selection does not
> >> depend on non- unique clusterID to choose loop free path.
> >>
> >> In your case, if the cluster isn't even peering with itself, then
> >> there truly is no purpose for clusterID.
> >>
> >> > The point' I'm trying to make is that I don't see a reason why RR1
> >> > and RR2 in
> >> a common cluster should have a session to each other and also why RR1
> >> in one cluster should have session to RR2s in all other clusters.
> >> > (and if RR1 and RR2 share a common cluster ID then session between
> >> > them
> >> is a complete nonsense).
> >> > Then if you go and shut PE1's session to RR1 and then go a shut
> >> > PE2's
> >> session to RR2 then it's just these two PEs affected and well what
> >> can I say you better think twice next time or consider automation.
> >> > One can't possibly bend the backbone architecture out of shape
> >> > because of
> >> all the cases where someone comes in a does something stupid (this
> >> complexity has to me moved somewhere else in my opinion -for instance
> >> to a system that won't allow you to commit something stupid).
> >> >
> >> > Regarding the scale - well there are setups there with couple
> >> > millions of
> >> just customer VPN prefixes.
> >> >
> >> > Regarding the Cluster-IDs - yes these are used for loop prevention
> >> > but only
> >> among RRs relying routes to each other -if  PE is in the loop then
> >> Originator-ID should do the job just fine.
> >> >
> >> >
> >> > adam
> >> >
> >> > netconsultings.com
> >> > ::carrier-class solutions for the telecommunications industry::
> >> >
> >> >> -Original Message-
> >> >> From: Saku Ytti [mailto:s...@ytti.fi]
> >> >

Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Job Snijders
On Mon, Mar 12, 2018 at 03:06:25PM +0200, Saku Ytti wrote:
> Routing loop to me sounds like operational problem, that things are
> broken. That will not happen.

Indeed, that is what ORIGINATOR_ID is for.

Kind regards,

Job


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Saku Ytti
Routing loop to me sounds like an operational problem, that things are
broken. That will not happen. Otherwise we're saying every network has
routing loops, because if you consider all RIBs in every box, there are
tons of loops. I think we all agree most networks are loop-free :>

You are saving DRAM, that's it.

In your case you're not even saving DRAM, as the cluster doesn't peer with
itself, so for you it's just additional complexity with no upside.
Harder to generate config, as you need to teach some system the
relation of which node is in which cluster. Whereas cluster-id==loop0 is
zero-knowledge.
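A trivial sketch of that difference (the loopbacks and the node-to-cluster map
below are hypothetical):

loopbacks   = {"rr1-pop1": "10.0.0.1", "rr2-pop1": "10.0.0.2", "rr1-pop2": "10.0.0.3"}
cluster_map = {"rr1-pop1": "1.1.1.1", "rr2-pop1": "1.1.1.1", "rr1-pop2": "2.2.2.2"}

def cluster_id_zero_knowledge(router):
    return loopbacks[router]    # derived from data the config generator already has

def cluster_id_shared(router):
    return cluster_map[router]  # an extra relation someone has to maintain

for r in loopbacks:
    print(r, cluster_id_zero_knowledge(r), cluster_id_shared(r))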


On 12 March 2018 at 14:42,   wrote:
> Ok I agree if a speaker is not connect to both (all) RRs in a cluster then 
> you need to make up for that by connecting RRs to each other.
>
> Well isn't avoiding routing loops ultimately saving DRAM?
> I'd argue the cluster-id comparison is either about preventing acceptance of 
> one's own advertisement (RRs talking in circle) or about preventing learning 
> clients routes from a different RR (your diagram) - hence preventing routing 
> loops and saving DRAM.
>
> adam
>
> netconsultings.com
> ::carrier-class solutions for the telecommunications industry::
>
>> -Original Message-
>> From: Saku Ytti [mailto:s...@ytti.fi]
>> Sent: Monday, March 12, 2018 11:54 AM
>> To: adamv0...@netconsultings.com
>> Cc: Job Snijders; Mark Tinka; Cisco Network Service Providers
>> Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)
>>
>> On 12 March 2018 at 13:41,   wrote:
>>
>> Typical reason for RR1, RR2 to have iBGP to each other is when they are in
>> forwarding path and are not dedicated RR, but also have external BGP to
>> them.
>>
>> And no, clusterID are not used for loop prevention, they are used to save
>> DRAM. There will be no routing loops by using arbitrary RR topology with
>> clusterID==loopback, BGP best path selection does not depend on non-
>> unique clusterID to choose loop free path.
>>
>> In your case, if the cluster isn't even peering with itself, then there 
>> truly is no
>> purpose for clusterID.
>>
>> > The point' I'm trying to make is that I don't see a reason why RR1 and RR2 
>> > in
>> a common cluster should have a session to each other and also why RR1 in
>> one cluster should have session to RR2s in all other clusters.
>> > (and if RR1 and RR2 share a common cluster ID then session between them
>> is a complete nonsense).
>> > Then if you go and shut PE1's session to RR1 and then go a shut PE2's
>> session to RR2 then it's just these two PEs affected and well what can I say
>> you better think twice next time or consider automation.
>> > One can't possibly bend the backbone architecture out of shape because of
>> all the cases where someone comes in a does something stupid (this
>> complexity has to me moved somewhere else in my opinion -for instance to
>> a system that won't allow you to commit something stupid).
>> >
>> > Regarding the scale - well there are setups there with couple millions of
>> just customer VPN prefixes.
>> >
>> > Regarding the Cluster-IDs - yes these are used for loop prevention but only
>> among RRs relying routes to each other -if  PE is in the loop then 
>> Originator-ID
>> should do the job just fine.
>> >
>> >
>> > adam
>> >
>> > netconsultings.com
>> > ::carrier-class solutions for the telecommunications industry::
>> >
>> >> -Original Message-
>> >> From: Saku Ytti [mailto:s...@ytti.fi]
>> >> Sent: Monday, March 12, 2018 10:43 AM
>> >> To: adamv0...@netconsultings.com
>> >> Cc: Job Snijders; Mark Tinka; Cisco Network Service Providers
>> >> Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)
>> >>
>> >> Hey,
>> >>
>> >>
>> >> RR1---RR2
>> >> |   |
>> >> PE1+
>> >>
>> >>
>> >> 1) PE1 sends 1M routes to RR2, RR2
>> >>
>> >> CaseA) Same clusterID
>> >> 1) RR1 and RR2 have 1M entries
>> >>
>> >> CaseB) Unique clusterID
>> >> 1) RR1 and RR2 have 2M entries
>> >>
>> >>
>> >>
>> >> Cluster is promise that every client peers with exactly same set of
>> >> RRs, so there is no need to for RRs to share client routes inside
>> >> cluster, as they have already received it directly.
>> >>
>>

Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread adamv0025
OK, I agree: if a speaker is not connected to both (all) RRs in a cluster, then you
need to make up for that by connecting the RRs to each other.

Well isn't avoiding routing loops ultimately saving DRAM?
I'd argue the cluster-id comparison is either about preventing acceptance of
one's own advertisement (RRs talking in a circle) or about preventing learning
clients' routes from a different RR (your diagram) - hence preventing routing
loops and saving DRAM.

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::

> -Original Message-
> From: Saku Ytti [mailto:s...@ytti.fi]
> Sent: Monday, March 12, 2018 11:54 AM
> To: adamv0...@netconsultings.com
> Cc: Job Snijders; Mark Tinka; Cisco Network Service Providers
> Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)
> 
> On 12 March 2018 at 13:41,   wrote:
> 
> Typical reason for RR1, RR2 to have iBGP to each other is when they are in
> forwarding path and are not dedicated RR, but also have external BGP to
> them.
> 
> And no, clusterID are not used for loop prevention, they are used to save
> DRAM. There will be no routing loops by using arbitrary RR topology with
> clusterID==loopback, BGP best path selection does not depend on non-
> unique clusterID to choose loop free path.
> 
> In your case, if the cluster isn't even peering with itself, then there truly 
> is no
> purpose for clusterID.
> 
> > The point' I'm trying to make is that I don't see a reason why RR1 and RR2 
> > in
> a common cluster should have a session to each other and also why RR1 in
> one cluster should have session to RR2s in all other clusters.
> > (and if RR1 and RR2 share a common cluster ID then session between them
> is a complete nonsense).
> > Then if you go and shut PE1's session to RR1 and then go a shut PE2's
> session to RR2 then it's just these two PEs affected and well what can I say
> you better think twice next time or consider automation.
> > One can't possibly bend the backbone architecture out of shape because of
> all the cases where someone comes in a does something stupid (this
> complexity has to me moved somewhere else in my opinion -for instance to
> a system that won't allow you to commit something stupid).
> >
> > Regarding the scale - well there are setups there with couple millions of
> just customer VPN prefixes.
> >
> > Regarding the Cluster-IDs - yes these are used for loop prevention but only
> among RRs relying routes to each other -if  PE is in the loop then 
> Originator-ID
> should do the job just fine.
> >
> >
> > adam
> >
> > netconsultings.com
> > ::carrier-class solutions for the telecommunications industry::
> >
> >> -Original Message-
> >> From: Saku Ytti [mailto:s...@ytti.fi]
> >> Sent: Monday, March 12, 2018 10:43 AM
> >> To: adamv0...@netconsultings.com
> >> Cc: Job Snijders; Mark Tinka; Cisco Network Service Providers
> >> Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)
> >>
> >> Hey,
> >>
> >>
> >> RR1---RR2
> >> |   |
> >> PE1+
> >>
> >>
> >> 1) PE1 sends 1M routes to RR2, RR2
> >>
> >> CaseA) Same clusterID
> >> 1) RR1 and RR2 have 1M entries
> >>
> >> CaseB) Unique clusterID
> >> 1) RR1 and RR2 have 2M entries
> >>
> >>
> >>
> >> Cluster is promise that every client peers with exactly same set of
> >> RRs, so there is no need to for RRs to share client routes inside
> >> cluster, as they have already received it directly.
> >>
> >>
> >> Of course if client1 loses connection to RR2 and client2 loses
> >> connection to RR1, client<->client2 do not se each other's routes.
> >>
> >> For same reason, you're not free to choose 'my nearest two RR' with
> >> same cluster-id, as you must always peer with every box in same
> >> cluster-id. So you lose topological flexibility, increase operational
> >> complexity, increase failure- modes. But you do save that sweet sweet
> DRAM.
> >>
> >>
> >> Most blogs I read and even some vendor documents propose clusterID to
> >> avoid loops, I think this is the real reason people use them, when RR
> >> was setup, people didn't know what clusterID is for, and later stayed
> >> committed on that initial false rationale and invented new rationales
> >> to justify their position.
> >>
> >> Premature optimisation is source of great many 

Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Saku Ytti
On 12 March 2018 at 13:41,   wrote:

Typical reason for RR1, RR2 to have iBGP to each other is when they
are in forwarding path and are not dedicated RR, but also have
external BGP to them.

And no, clusterID are not used for loop prevention, they are used to
save DRAM. There will be no routing loops by using arbitrary RR
topology with clusterID==loopback, BGP best path selection does not
depend on non-unique clusterID to choose loop free path.

In your case, if the cluster isn't even peering with itself, then
there truly is no purpose for clusterID.

> The point' I'm trying to make is that I don't see a reason why RR1 and RR2 in 
> a common cluster should have a session to each other and also why RR1 in one 
> cluster should have session to RR2s in all other clusters.
> (and if RR1 and RR2 share a common cluster ID then session between them is a 
> complete nonsense).
> Then if you go and shut PE1's session to RR1 and then go a shut PE2's session 
> to RR2 then it's just these two PEs affected and well what can I say you 
> better think twice next time or consider automation.
> One can't possibly bend the backbone architecture out of shape because of all 
> the cases where someone comes in a does something stupid (this complexity has 
> to me moved somewhere else in my opinion -for instance to a system that won't 
> allow you to commit something stupid).
>
> Regarding the scale - well there are setups there with couple millions of 
> just customer VPN prefixes.
>
> Regarding the Cluster-IDs - yes these are used for loop prevention but only 
> among RRs relying routes to each other -if  PE is in the loop then 
> Originator-ID should do the job just fine.
>
>
> adam
>
> netconsultings.com
> ::carrier-class solutions for the telecommunications industry::
>
>> -Original Message-
>> From: Saku Ytti [mailto:s...@ytti.fi]
>> Sent: Monday, March 12, 2018 10:43 AM
>> To: adamv0...@netconsultings.com
>> Cc: Job Snijders; Mark Tinka; Cisco Network Service Providers
>> Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)
>>
>> Hey,
>>
>>
>> RR1---RR2
>> |   |
>> PE1+
>>
>>
>> 1) PE1 sends 1M routes to RR2, RR2
>>
>> CaseA) Same clusterID
>> 1) RR1 and RR2 have 1M entries
>>
>> CaseB) Unique clusterID
>> 1) RR1 and RR2 have 2M entries
>>
>>
>>
>> Cluster is promise that every client peers with exactly same set of RRs, so
>> there is no need to for RRs to share client routes inside cluster, as they 
>> have
>> already received it directly.
>>
>>
>> Of course if client1 loses connection to RR2 and client2 loses connection to
>> RR1, client<->client2 do not se each other's routes.
>>
>> For same reason, you're not free to choose 'my nearest two RR' with same
>> cluster-id, as you must always peer with every box in same cluster-id. So you
>> lose topological flexibility, increase operational complexity, increase 
>> failure-
>> modes. But you do save that sweet sweet DRAM.
>>
>>
>> Most blogs I read and even some vendor documents propose clusterID to
>> avoid loops, I think this is the real reason people use them, when RR was
>> setup, people didn't know what clusterID is for, and later stayed committed
>> on that initial false rationale and invented new rationales to justify their
>> position.
>>
>> Premature optimisation is source of great many evil. Optimise for simplicity
>> when you can, increase  complexity when you must.
>>
>>
>>
>> On 12 March 2018 at 12:34,   wrote:
>> >> Job Snijders
>> >> Sent: Sunday, March 11, 2018 12:21 PM
>> >>
>> >> Folks - i'm gonna cut short here: by sharing the cluster-id across
>> > multiple
>> >> devices, you lose in topology flexibility, robustness, and simplicity.
>> >>
>> >
>> > Gent's I have no idea what you're talking about.
>> > How can one save or burn RAM if using or not using shared cluster-IDs
>> > respectively???
>> > The only scenario I can think of is if your two RRs say RR1 and RR2 in
>> > a POP serving a set of clients (by definition a cluster btw) -if these
>> > two RRs have an iBGP session to each other - which is a big NONO when
>> > you are using out of band RRs, no seriously.
>> > Remember my previous example about separate iBGP infrastructures one
>> > formed out of all clients connecting to RR1 in local POP and then all
>> > RR1s in all POPs peering with each other in f

Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread adamv0025
Hi,

The point I'm trying to make is that I don't see a reason why RR1 and RR2 in a
common cluster should have a session to each other, and also why RR1 in one
cluster should have a session to RR2s in all other clusters.
(And if RR1 and RR2 share a common cluster ID, then a session between them is
complete nonsense.)
Then if you go and shut PE1's session to RR1 and then go and shut PE2's session
to RR2, then it's just these two PEs affected, and well, what can I say, you better
think twice next time or consider automation.
One can't possibly bend the backbone architecture out of shape because of all
the cases where someone comes in and does something stupid (this complexity has
to be moved somewhere else in my opinion - for instance to a system that won't
allow you to commit something stupid).

Regarding the scale - well, there are setups out there with a couple million
customer VPN prefixes alone.
  
Regarding the Cluster-IDs - yes, these are used for loop prevention, but only
among RRs relaying routes to each other - if a PE is in the loop, then
Originator-ID should do the job just fine.
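As a sketch of the two receive-side checks being referred to here (RFC 4456's
ORIGINATOR_ID and CLUSTER_LIST; the attribute values below are made up):

def accept_reflected_route(my_router_id, my_cluster_id, originator_id, cluster_list):
    if originator_id == my_router_id:
        return False  # ORIGINATOR_ID check: our own advertisement came back to us
    if my_cluster_id in cluster_list:
        return False  # CLUSTER_LIST check: the route already passed through our cluster
    return True

print(accept_reflected_route("10.0.0.1", "10.0.0.1",
                             originator_id="10.0.0.9",
                             cluster_list=["10.0.0.1", "10.0.0.5"]))  # False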
 

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::

> -Original Message-
> From: Saku Ytti [mailto:s...@ytti.fi]
> Sent: Monday, March 12, 2018 10:43 AM
> To: adamv0...@netconsultings.com
> Cc: Job Snijders; Mark Tinka; Cisco Network Service Providers
> Subject: Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)
> 
> Hey,
> 
> 
> RR1---RR2
> |   |
> PE1+
> 
> 
> 1) PE1 sends 1M routes to RR2, RR2
> 
> CaseA) Same clusterID
> 1) RR1 and RR2 have 1M entries
> 
> CaseB) Unique clusterID
> 1) RR1 and RR2 have 2M entries
> 
> 
> 
> Cluster is promise that every client peers with exactly same set of RRs, so
> there is no need to for RRs to share client routes inside cluster, as they 
> have
> already received it directly.
> 
> 
> Of course if client1 loses connection to RR2 and client2 loses connection to
> RR1, client<->client2 do not se each other's routes.
> 
> For same reason, you're not free to choose 'my nearest two RR' with same
> cluster-id, as you must always peer with every box in same cluster-id. So you
> lose topological flexibility, increase operational complexity, increase 
> failure-
> modes. But you do save that sweet sweet DRAM.
> 
> 
> Most blogs I read and even some vendor documents propose clusterID to
> avoid loops, I think this is the real reason people use them, when RR was
> setup, people didn't know what clusterID is for, and later stayed committed
> on that initial false rationale and invented new rationales to justify their
> position.
> 
> Premature optimisation is source of great many evil. Optimise for simplicity
> when you can, increase  complexity when you must.
> 
> 
> 
> On 12 March 2018 at 12:34,   wrote:
> >> Job Snijders
> >> Sent: Sunday, March 11, 2018 12:21 PM
> >>
> >> Folks - i'm gonna cut short here: by sharing the cluster-id across
> > multiple
> >> devices, you lose in topology flexibility, robustness, and simplicity.
> >>
> >
> > Gent's I have no idea what you're talking about.
> > How can one save or burn RAM if using or not using shared cluster-IDs
> > respectively???
> > The only scenario I can think of is if your two RRs say RR1 and RR2 in
> > a POP serving a set of clients (by definition a cluster btw) -if these
> > two RRs have an iBGP session to each other - which is a big NONO when
> > you are using out of band RRs, no seriously.
> > Remember my previous example about separate iBGP infrastructures one
> > formed out of all clients connecting to RR1 in local POP and then all
> > RR1s in all POPs peering with each other in full mesh and then the
> > same infrastructure involving RR2s?
> > Well these two iBGP infrastructures should work as ships in the night.
> > If one infrastructure breaks at some point you still get all your
> > prefixes to clients/RRs in affected POPs via the other infrastructure.
> > That said both of these iBGP infrastructures need to carry the same
> > set of prefixes, so the memory and cpu resources needed are
> > proportional to the amount of information carried only.
> > -but none of these need to carry the set of prefixes twice, see below.
> >
> > Yes you could argue if A loses session to RR1 and B loses session to
> > RR2 then A and B can't communicate, but the point is PEs just don't
> > lose sessions to RRs -these are iBGP sessions that can route around,
> > so the only scenario where this happens is misconfiguration and trust
> >

Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread adamv0025
> Job Snijders [mailto:j...@ntt.net]
> Sent: Sunday, March 11, 2018 10:51 AM
> 
> On Sun, Mar 11, 2018 at 12:39:13PM +0200, Mark Tinka wrote:
> > Each major PoP has been configured with its unique, global Cluster-ID.
> >
> > This has been scaling very well for us.
> >
> > I think the Multiple Cluster-ID is overkill.
> 
> Have you considered the downsides of sharing a Cluster-ID across multiple
> boxes, and do you have any arguments to support why it is overkill?
> 
If RR1s and RR2s never talk to each other then it doesn't matter
whether they have common or unique Cluster-IDs
 
> > > Also if you carry a mix of full internet prefixes and VPN prefixes
> > > across your backbone, then I suggest you carve up one iBGP
> > > infrastructure for VPN prefixes and a separate one for the Internet
> > > prefixes (for stability and security reasons -and helps with scaling
> > > too if that's a concern).
> >
> > We use the same RR for all address families. Resources are plenty with
> > a VM-based RR deployment.
> 
> Are you at least using separate BGP sessions for each address families?
> 
Job is right, you should at least use separate TCP sessions for different
AFs, but if you have Internet prefixes carried by VPNv4 then you're still in
danger, unless you carve up a separate BGP process or iBGP infrastructure
for Internet prefixes. Yes, the BGP Attribute Filter and Enhanced Attribute
Error Handling should keep you relatively safe, but I wouldn't count on it
(it's still not an RFC and I haven't dug into it for years, so I'm not sure
where vendors are at with addressing all the requirements in the draft).
Internet is a wild west, with universities advertising unknown attributes and
operators prepending their AS 255+ times, and you can only hope that any such
event will bring your border PE sessions down and won't actually be
relayed to your RRs, which then start dropping sessions to all clients or
restarting BGP/RPD processes...
Hence my requirement for a dedicated iBGP infrastructure for Internet
prefixes.

> > The RR's are out-of-path, so are not part of our IP/MPLS data plane.
> 
> Are you using optimal route reflection, or how have you mitigated negative
> effects caused by the lack of ORR?
> 
One fact that people usually overlook with ORR in MPLS backbones is that ORR
requires the prefixes to be the same (i.e. share the same RD) when they arrive
at the RRs, so the RRs can select the best and second-best path on behalf of a
particular set of clients.
And my experience is that operators are usually using Type 1 RDs in their
backbones (the PE-RID:VPN-ID format) -as that was the only way to get ECMP or
BGP-PIC/local-repair working ages before Add-Path got introduced.
And in the case of Type 1 RDs, ORR is useless, as the RRs are then merely
reflecting routes and are not performing any path selection on behalf of the
clients.
So the only way to make use of ORR is to completely change your RD plan to
Type 0 RDs, VPN by VPN (maybe starting with the Internet VRF), and introduce
Add-Path as a prerequisite, of course.
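
To make the RD point concrete, a hedged IOS-XR-style sketch of the two RD
styles on a PE (VRF name and all numbers invented for illustration). With a
per-PE Type 1 RD every PE originates a distinct VPNv4 NLRI, so the RRs never
compare the paths; with a shared Type 0 RD the NLRI is identical network-wide,
the RRs actually run best-path on it (which is what ORR hooks into), and
Add-Path is then needed to keep multiple exits visible:

  router bgp 64500
   vrf INTERNET
    ! Type 1 style: PE-router-ID:VPN-ID, unique on every PE
    rd 192.0.2.1:100
    !
    ! Type 0 style alternative: AS:VPN-ID, identical on every PE
    ! rd 64500:100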


adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread Saku Ytti
Hey,


RR1---RR2
|   |
PE1+


1) PE1 sends 1M routes to RR1 and RR2

CaseA) Same clusterID
1) RR1 and RR2 have 1M entries

CaseB) Unique clusterID
1) RR1 and RR2 have 2M entries



A cluster is a promise that every client peers with exactly the same set of
RRs, so there is no need for the RRs to share client routes inside the
cluster, as they have already received them directly.
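
A hedged sketch of the two cases in IOS-XR-style config (addresses invented):
Case A is simply RR1 and RR2 both carrying the same cluster-id statement, so
each discards the client routes reflected by the other as a cluster-list loop
and keeps only the directly learned copy:

  ! identical on RR1 and RR2
  router bgp 64500
   bgp cluster-id 10.255.0.1
   neighbor 10.0.0.100
    remote-as 64500
    update-source Loopback0
    address-family ipv4 unicast
     route-reflector-client

For Case B you'd leave "bgp cluster-id" unset (it defaults to the router-ID,
normally the loopback), and each RR then also keeps the copy of the client's
routes it hears via the other RR.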


Of course if client1 loses its connection to RR2 and client2 loses its
connection to RR1, client1 and client2 do not see each other's routes.

For the same reason, you're not free to choose 'my nearest two RRs' with a
shared cluster-id, as you must always peer with every box in the same
cluster-id. So you lose topological flexibility, increase operational
complexity, and increase failure modes. But you do save that sweet, sweet
DRAM.


Most blogs I read, and even some vendor documents, propose the cluster-ID as a
way to avoid loops. I think this is the real reason people use them: when the
RRs were set up, people didn't know what the cluster-ID was for, and later
stayed committed to that initial false rationale and invented new rationales
to justify their position.

Premature optimisation is the source of a great many evils. Optimise for
simplicity when you can, increase complexity when you must.



On 12 March 2018 at 12:34,   wrote:
>> Job Snijders
>> Sent: Sunday, March 11, 2018 12:21 PM
>>
>> Folks - i'm gonna cut short here: by sharing the cluster-id across multiple
>> devices, you lose in topology flexibility, robustness, and simplicity.
>>
>
> Gents, I have no idea what you're talking about.
> How can one save or burn RAM by using or not using shared cluster-IDs
> respectively???
> The only scenario I can think of is if your two RRs, say RR1 and RR2, in a POP
> serving a set of clients (by definition a cluster, btw) have an iBGP session
> to each other - which is a big no-no when you are using out-of-band RRs, no
> seriously.
> Remember my previous example about separate iBGP infrastructures: one formed
> out of all clients connecting to RR1 in the local POP, with all RR1s in all
> POPs peering with each other in a full mesh, and then the same infrastructure
> involving RR2s?
> Well, these two iBGP infrastructures should work as ships in the night. If
> one infrastructure breaks at some point, you still get all your prefixes to
> clients/RRs in the affected POPs via the other infrastructure.
> That said, both of these iBGP infrastructures need to carry the same set of
> prefixes, so the memory and CPU resources needed are proportional only to the
> amount of information carried
> -but neither of them needs to carry the set of prefixes twice, see below.
>
> Yes, you could argue that if A loses its session to RR1 and B loses its
> session to RR2, then A and B can't communicate, but the point is PEs just
> don't lose sessions to RRs -these are iBGP sessions that can route around
> failures, so the only scenario where this happens is misconfiguration, and
> trust me, you'll know right away that you broke something.
> Then you can argue: OK, what if I have A to RR1-pop1 to RR1-pop2 to B
> AND  A to RR2-pop1 to RR2-pop2 to B  AND say RR1-pop1 as well as RR2-pop2
> fail at the same time -then A and B can't communicate.
> Fair point, that will certainly happen, but what is the likelihood of that
> happening? Well, it's the MTBF of RR1-pop1 times the MTBF of RR2-pop2, which
> is fine for me, and I bet for most folks out there.
>
>
> adam
>
> netconsultings.com
> ::carrier-class solutions for the telecommunications industry::
>
> ___
> cisco-nsp mailing list  cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/



-- 
  ++ytti
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-12 Thread adamv0025
> Job Snijders
> Sent: Sunday, March 11, 2018 12:21 PM
> 
> Folks - i'm gonna cut short here: by sharing the cluster-id across multiple
> devices, you lose in topology flexibility, robustness, and simplicity.
> 

Gents, I have no idea what you're talking about.
How can one save or burn RAM by using or not using shared cluster-IDs
respectively???
The only scenario I can think of is if your two RRs, say RR1 and RR2, in a POP
serving a set of clients (by definition a cluster, btw) have an iBGP session to
each other - which is a big no-no when you are using out-of-band RRs, no
seriously.
Remember my previous example about separate iBGP infrastructures: one formed
out of all clients connecting to RR1 in the local POP, with all RR1s in all
POPs peering with each other in a full mesh, and then the same infrastructure
involving RR2s?
Well, these two iBGP infrastructures should work as ships in the night. If one
infrastructure breaks at some point, you still get all your prefixes to
clients/RRs in the affected POPs via the other infrastructure.
That said, both of these iBGP infrastructures need to carry the same set of
prefixes, so the memory and CPU resources needed are proportional only to the
amount of information carried
-but neither of them needs to carry the set of prefixes twice, see below.

Yes, you could argue that if A loses its session to RR1 and B loses its session
to RR2, then A and B can't communicate, but the point is PEs just don't lose
sessions to RRs -these are iBGP sessions that can route around failures, so the
only scenario where this happens is misconfiguration, and trust me, you'll know
right away that you broke something.
Then you can argue: OK, what if I have A to RR1-pop1 to RR1-pop2 to B
AND  A to RR2-pop1 to RR2-pop2 to B  AND say RR1-pop1 as well as RR2-pop2 fail
at the same time -then A and B can't communicate.
Fair point, that will certainly happen, but what is the likelihood of that
happening? Well, it's the MTBF of RR1-pop1 times the MTBF of RR2-pop2, which is
fine for me, and I bet for most folks out there.


adam

netconsultings.com
::carrier-class solutions for the telecommunications industry:: 

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-11 Thread Mark Tinka


On 11/Mar/18 15:07, Job Snijders wrote:

> "32 years and I've not been hit by a bus, so I can stop looking?"
> or perhaps, https://en.wikipedia.org/wiki/Appeal_to_tradition ? :-D

Glad I'm not the only one crossing streets :-).


> One example where a shared Cluster-ID is painful is the fact that a set
> of route-reflector-clients MUST peer with ALL the devices that share
> that specific cluster ID. If one client's IBGP session is missing
> (perhaps due to misconfiguration), and another client's session is down
> (perhaps due to an XPIC or forwarding-plane problem), the view on the
> IBGP state becomes incomplete. With unique Cluster-IDs this failure
> scenario doesn't exist. The fact that 2 down IBGP sessions can cause
> operational issues shows that shared Cluster-ID designs are fragile.

That is a valid failure scenario for shared Cluster-ID's.

Device configuration stability is a network operations function that
every operator manages. We are happy enough with how we manage ours that this
is not a concern, particularly as different teams manage different
components of the network. iBGP is a low-touch section of our backbone,
restricted to specific task groups, compared to eBGP.

For forwarding plane issues, I can only speak for our network - there is
a significant amount of data plane abstraction in our core network
design, such that a failure in the forwarding plane of a client will not
result in a loss of IGP reachability toward any RR. The only way this becomes
an issue is if the entire forwarding plane (centralized or distributed) on
the client were to fail, in which case the whole router is down anyway.

The forwarding plane issue becomes very fragile if there is a direct
relationship between a physical port/line card and the RR. We don't have
that condition, and for that reason, a shared Cluster-ID topology is of
very little to no risk for our specific architecture.

Yes, this means that if you have a linear physical connectivity
relationship between your clients and the RR's, shared Cluster-ID's are
a bad idea. Either you abstract the physical connectivity between the
clients and RR's, making them port/line card-independent, or you switch
to unique Cluster ID's.


> I didn't say that you can't make things work with shared Cluster-IDs,
> you'll just have to make sure you stay within the constraints and are
> aware of the risks. To me that's just a design choice with no
> operational upsides, I'll happily burn the RAM in exchange for
> flexibility & robustness.

Again, it really depends on how you've architected your core network in the
PoP. If it's the "classic" way, agreed, shared Cluster-ID's become an
issue. We don't have that constraint, so we can afford to enjoy the
benefits of reduced RAM utilization without the inherent risk of
port/line card-dependent relationships between clients and RR's.

Flexibility and robustness might mean different things to different
people. We've not come across any logical iBGP routing problem
that we've not been able to solve with our design. But, I'm glad to hear
that you enjoy the same pleasures :-).

Mark.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-11 Thread Job Snijders
On Sun, Mar 11, 2018 at 02:46:31PM +0200, Mark Tinka wrote:
> On 11/Mar/18 14:20, Job Snijders wrote:
> 
> > Folks - i'm gonna cut short here: by sharing the cluster-id across
> > multiple devices, you lose in topology flexibility, robustness, and
> > simplicity.
> 
> 11 years across 3 different networks in separate continents - a shared
> Cluster-ID has never been a problem for me.
> 
> You'll have to do better than that...

"32 years and I've not been hit by a bus, so I can stop looking?"
or perhaps, https://en.wikipedia.org/wiki/Appeal_to_tradition ? :-D

One example where a shared Cluster-ID is painful is the fact that a set
of route-reflector-clients MUST peer with ALL the devices that share
that specific cluster ID. If one client's IBGP session is missing
(perhaps due to misconfiguration), and another client's session is down
(perhaps due to an XPIC or forwarding-plane problem), the view on the
IBGP state becomes incomplete. With unique Cluster-IDs this failure
scenario doesn't exist. The fact that 2 down IBGP sessions can cause
operational issues shows that shared Cluster-ID designs are fragile.

I didn't say that you can't make things work with shared Cluster-IDs,
you'll just have to make sure you stay within the constraints and are
aware of the risks. To me that's just a design choice with no
operational upsides, I'll happily burn the RAM in exchange for
flexibility & robustness.

Kind regards,

Job
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-11 Thread Mark Tinka


On 11/Mar/18 14:20, Job Snijders wrote:

> Folks - i'm gonna cut short here: by sharing the cluster-id across
> multiple devices, you lose in topology flexibility, robustness, and
> simplicity.

11 years across 3 different networks in separate continents - a shared
Cluster-ID has never been a problem for me.

You'll have to do better than that...

Mark.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-11 Thread Job Snijders
On Sun, Mar 11, 2018 at 01:57:01PM +0200, Mark Tinka wrote:
> On 11/Mar/18 12:50, Job Snijders wrote:
> > Have you considered the downsides of sharing a Cluster-ID across
> > multiple boxes,
> 
> IIRC, the biggest issue with this was if the RR was in-path (as it
> used to be back in the days - and in some networks today - when core
> routers doubled as RR's), and there was no physical link between
> clients and all RR's.
> 
> In our network, RR's in the same PoP sharing a single Cluster-ID is
> fine because every client has a redundant physical link toward each
> RR, and each client has an iBGP session with all RR's. The benefit of
> using the same Cluster-ID for the RR's in the same PoP is much reduced
> overhead per RR device, as each RR only needs to hold one copy of
> NLRI.

Folks - i'm gonna cut short here: by sharing the cluster-id across
multiple devices, you lose in topology flexibility, robustness, and
simplicity.

Cluster-ID sharing _only_ exists to save some RAM. Of course most
optimisations have some kind of hidden price. In the case of MCID the
cost is that you must manoeuvre within very specific RR topology
constraints.

Perhaps a terrible analogy: Before crossing a street, you're supposed to
look both left and right for any oncoming traffic. With MCID the idea is
that you cross the street together with a friend, and you assume your
friend will be looking left, and he'll assume that you'll be looking
right. Anyone can see that it is far more robust and simpler if each
just looks out for themselves.

Just set the cluster-id to your router's IPv4 loopback IP address and
you'll be far less likely to get hit by a bus.

Kind regards,

Job
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-11 Thread Mark Tinka


On 11/Mar/18 12:50, Job Snijders wrote:

> Have you considered the downsides of sharing a Cluster-ID across
> multiple boxes,

IIRC, the biggest issue with this was if the RR was in-path (as it used
to be back in the days - and in some networks today - when core routers
doubled as RR's), and there was no physical link between clients and all
RR's.

In our network, RR's in the same PoP sharing a single Cluster-ID is fine
because every client has a redundant physical link toward each RR, and
each client has an iBGP session with all RR's. The benefit of using the
same Cluster-ID for the RR's in the same PoP is much reduced overhead
per RR device, as each RR only needs to hold one copy of NLRI.

>  and do you have any arguments to support why it is
> overkill?

Not practical ones, just theoretical ones based on the MCID text. To
badly over-simplify the objective, it would seem that the idea is to
limit the amount of routing state clients can hold for each other's NLRI. It's
likely some networks may have a use-case, but from my viewpoint, that
would create an inconsistent view of the iBGP state within a routing
domain from a per-device perspective.

On that basis alone, I have not considered any other advantages MCID
could have. But, happy to hear about them, if any, of course...


> Are you at least using separate BGP sessions for each address family?

It wasn't apparent to me that there was any other sensible way :-).

I've been using BGP Peer/Policy Session Templates on IOS and IOS XE
since they were introduced back in 2003. These lend themselves naturally
well to breaking one's BGP setup into per-session and per-policy
architectures. A bit more verbose than the classic way of doing things,
but that doesn't bother me.
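
For anyone who hasn't used them, a minimal sketch of that per-session /
per-policy split on classic IOS/IOS XE (AS number, address and template names
are placeholders, not our actual configuration):

  router bgp 64500
   template peer-session IBGP-RR
    remote-as 64500
    update-source Loopback0
   template peer-policy RR-CLIENT
    route-reflector-client
    send-community both
   neighbor 192.0.2.50 inherit peer-session IBGP-RR
   address-family ipv4
    neighbor 192.0.2.50 activate
    neighbor 192.0.2.50 inherit peer-policy RR-CLIENT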


> Are you using optimal route reflection, or how have you mitigated
> negative effects caused by the lack of ORR?

So still some love & hate going back & forth between me and Cisco on ORR
support for CSR1000v/IOS XE. They have support on IOS XR (for the
ASR9000) and XRv.
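
For reference, the general shape of ORR configuration on an IOS XR RR is
roughly the following -the group name, root address and neighbor are invented
here, and the exact syntax should be checked against the release you run:

  router bgp 64500
   address-family ipv4 unicast
    ! compute best paths from the IGP position of 192.0.2.1 (e.g. a PoP core node)
    optimal-route-reflection POP1-VIEW 192.0.2.1
   neighbor 10.0.0.100
    remote-as 64500
    address-family ipv4 unicast
     route-reflector-client
     optimal-route-reflection POP1-VIEW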

Juniper has supported ORR for some time now, as has Nokia (ALU).

For now, we are mitigating the effects of a lack of ORR on CSR1000v,
which is specific to a few "off-net" PoP's. As of May 2017, Cisco
expected ORR for CSR1000v to appear in 16.9(1), which is slated for July
2018. I'm still waiting for confirmation from them that this is still on
track.

Why don't we just move to XRv? Well, for some reason, we find the
IOS/IOS XE BGP infrastructure "simpler" than IOS XR, for RR purposes.
Yes, IOS XR is probably more modern, elaborate and flexible than classic
IOS/IOS XE, but for us, that only works okay for peering or edge
applications. For an RR use-case, it's overkill as our RR policy design
is already very intricate as it is, without all the granularity of RPL.

Why don't we just move to Juniper's vRR - well, we did consider it back
in 2014, but they were too slow and we needed a solution quickly. But
for the same reasons as with IOS XR, while more modern, elaborate and
flexible than classic IOS/IOS XE, it would have been a lot more
cumbersome to use for an RR application given the nature of our iBGP
domain. Juniper's BGP implementation is, as with IOS XR, fine for us in
peering and edge scenarios. In an RR application, we need a BGP
implementation that is simpler than our iBGP policy.

Mark.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-11 Thread Job Snijders
On Sun, Mar 11, 2018 at 12:39:13PM +0200, Mark Tinka wrote:
> Each major PoP has been configured with its unique, global Cluster-ID.
> 
> This has been scaling very well for us.
> 
> I think the Multiple Cluster-ID is overkill.

Have you considered the downsides of sharing a Cluster-ID across
multiple boxes, and do you have any arguments to support why it is
overkill?

> > Also if you carry a mix of full internet prefixes and VPN prefixes
> > across your backbone, then I suggest you carve up one iBGP
> > infrastructure for VPN prefixes and a separate one for the Internet
> > prefixes (for stability and security reasons -and helps with scaling
> > too if that's a concern). 
> 
> We use the same RR for all address families. Resources are plenty with
> a VM-based RR deployment.

Are you at least using separate BGP sessions for each address family?

> The RR's are out-of-path, so are not part of our IP/MPLS data plane.

Are you using optimal route reflection, or how have you mitigated
negative effects caused by the lack of ORR?

Kind regards,

Job
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-11 Thread Mark Tinka


On 5/Mar/18 14:22, adamv0...@netconsultings.com wrote:

> No, hierarchical RR infrastructure is a bad idea altogether. It was not
> needed way back with c7200s as RRs in Tier 1 SP backbones and it's certainly
> not needed now.
> Just keep it simple. 
> You don't need full mesh between all your regional RRs. 
>
> Think of it as two separate iBGP infrastructures: 
> 1) 
> Clients within a region peer with a particular Regional-RR-1 (to disseminate
> prefixes within a single region). 
> And all Regional-RR-1s peer with each other, i.e. full-mesh between RR-1s
> (to disseminate prefixes between all regions). 
> 2) 
> A completely separate infrastructure using the same model as the above, just
> built using Regional-RR-2s (this is for redundancy purposes). 
>
>
> If you run out of memory or CPU cycles on RRs, then I'd say migrate to a vRR
> solution.
> Or if you insist on physical boxes (or don't want to have all your eggs in one
> basket, region-wise), then you can scale out by dividing regions into
> smaller pieces (addressing the CPU/session limit per RR) and by adding planes
> to the above simple 1) and 2) infrastructure (addressing the memory/prefix
> limit per RR).

So we have plenty of major core PoP's across 2 continents. We've been
running the CSR1000v on top of ESXi on x86 boxes since 2014, quite
successfully, as our RR's, at each of those PoP's.

Each major PoP has been configured with its unique, global Cluster-ID.

This has been scaling very well for us.

I think the Multiple Cluster-ID is overkill.


> Also if you carry a mix of full internet prefixes and VPN prefixes across
> your backbone, then I suggest you carve up one iBGP infrastructure for VPN
> prefixes and a separate one for the Internet prefixes (for stability and
> security reasons -and helps with scaling too if that's a concern). 

We use the same RR for all address families. Resources are plenty with a
VM-based RR deployment.

The RR's are out-of-path, so are not part of our IP/MPLS data plane.

Mark.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-05 Thread adamv0025
> Curtis Piehler
> Sent: Saturday, March 03, 2018 3:51 AM
> 
> I presume this is supported in IOS-XR but just making sure.
> 
> A network across the country is split up into multiple regions.  Each region
> housing two RRs where the local region clients peer with them.
> Instead of full meshing all of the regional RR consider a tiered topology.
> 3-4 of the regional RR are designated Super-RR where they full mesh with
> each other on a global cluster ID.  The rest of the regional RR peer with each
> of the Super-RR and with each other for regions with no super-RR.
> 
> The Super-RRs are configured with the global cluster ID
> The Regional-RRs are configured with the regional cluster ID
> The Super-RRs also peer with the regional clients on a per-neighbor cluster
> ID basis for the local region (set to the regional cluster ID)
> The Regional-RRs peer with the Super-RRs on a per-neighbor cluster ID (set to
> the global cluster ID)
> 
> The Super-RRs end up straddling the fence between the global cluster ID and
> the regional cluster ID
> The redundant Regional-RRs also end up straddling the fence between the
> global cluster ID and the regional cluster ID, but their main purpose is to
> be a redundant RR within the region
> The local-region clients peer with the Super-RR in that region and also with
> the redundant regional RR.
> 
No, a hierarchical RR infrastructure is a bad idea altogether. It was not
needed way back with c7200s as RRs in Tier 1 SP backbones, and it's certainly
not needed now.
Just keep it simple. 
You don't need full mesh between all your regional RRs. 

Think of it as two separate iBGP infrastructures: 
1) 
Clients within a region peer with a particular Regional-RR-1 (to disseminate
prefixes within a single region). 
And all Regional-RR-1s peer with each other, i.e. full-mesh between RR-1s
(to disseminate prefixes between all regions). 
2) 
A completely separate infrastructure using the same model as the above, just
built using Regional-RR-2s (this is for redundancy purposes). 
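
Roughly sketched in IOS-XR-style config, one plane of the above on a
Regional-RR-1 looks something like this (every number invented): local-region
clients are RR clients, the other regions' RR-1s are plain non-client iBGP
peers, and the RR-2s don't appear here at all -they form the second,
independent plane:

  router bgp 64500
   neighbor-group REGION1-CLIENTS
    remote-as 64500
    update-source Loopback0
    address-family vpnv4 unicast
     route-reflector-client
   !
   neighbor-group RR1-FULL-MESH
    remote-as 64500
    update-source Loopback0
    address-family vpnv4 unicast
   !
   ! a client in the local region
   neighbor 10.1.0.11
    use neighbor-group REGION1-CLIENTS
   ! Regional-RR-1 of another region
   neighbor 10.2.255.1
    use neighbor-group RR1-FULL-MESH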


If you run out of memory or CPU cycles on RRs, then I'd say migrate to a vRR
solution.
Or if you insist on physical boxes (or don't want to have all your eggs in one
basket, region-wise), then you can scale out by dividing regions into smaller
pieces (addressing the CPU/session limit per RR) and by adding planes to the
above simple 1) and 2) infrastructure (addressing the memory/prefix limit per
RR).


Also if you carry a mix of full internet prefixes and VPN prefixes across
your backbone, then I suggest you carve up one iBGP infrastructure for VPN
prefixes and a separate one for the Internet prefixes (for stability and
security reasons -and helps with scaling too if that's a concern). 


adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-02 Thread Saku Ytti
Why are you using RR clusterID?


The ClusterID should be the loopback, unless you can extremely well justify
doing anything else.


The ClusterID is a legacy of an era when DRAM was at an extreme premium; the
DRAM cost of having a unique ClusterID is extremely marginal. The benefit of
having a unique ClusterID is to avoid a whole lot of outages due to misconfig
and misdesign, and to allow arbitrary client/RR relationships.


On 3 March 2018 at 05:51, Curtis Piehler  wrote:
> I presume this is supported in IOS-XR but just making sure.
>
> A network across the country is split up into multiple regions.  Each
> region housing two RRs where the local region clients peer with them.
> Instead of full meshing all of the regional RR consider a tiered topology.
> 3-4 of the regional RR are designated Super-RR where they full mesh with
> each other on a global cluster ID.  The rest of the regional RR peer with
> each of the Super-RR and with each other for regions with no super-RR.
>
> The Super-RR are configured for the global cluster ID
> The Regional-RR are configured for the regional cluster ID
> The Super-RR also peer with the regional clients on a per neighbor cluster
> ID basis for the local region (set for the Regional cluster ID)
> The Regional-RR peer with the Super RR on a per neighbor cluster ID (set
> for the Global cluster ID)
>
> The Super-RR end up straddling the fence between the global cluster ID and
> the regional cluster ID
> The redundant Regional RR also end up straddling the fence between the
> global cluster ID and the regional cluster ID but its main purpose is to
> be a redundant RR within the region
> The local region clients peer with the Super-RR in that region and also
> with the redundant regional RR.
>
> Is this feasible/doable with MCID?
> ___
> cisco-nsp mailing list  cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/



-- 
  ++ytti
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] IOS-XR BGP RR MCID (Multiple Cluster ID)

2018-03-02 Thread Curtis Piehler
I presume this is supported in IOS-XR but just making sure.

A network across the country is split up into multiple regions, each region
housing two RRs, where the local-region clients peer with them.
Instead of full-meshing all of the regional RRs, consider a tiered topology:
3-4 of the regional RRs are designated Super-RRs, which full-mesh with each
other on a global cluster ID.  The rest of the regional RRs peer with each of
the Super-RRs, and with each other for regions with no Super-RR.

The Super-RRs are configured with the global cluster ID
The Regional-RRs are configured with the regional cluster ID
The Super-RRs also peer with the regional clients on a per-neighbor cluster
ID basis for the local region (set to the regional cluster ID)
The Regional-RRs peer with the Super-RRs on a per-neighbor cluster ID (set
to the global cluster ID)

The Super-RRs end up straddling the fence between the global cluster ID and
the regional cluster ID
The redundant Regional-RRs also end up straddling the fence between the
global cluster ID and the regional cluster ID, but their main purpose is to
be a redundant RR within the region
The local-region clients peer with the Super-RR in that region and also
with the redundant regional RR.

Is this feasible/doable with MCID?
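
For context, the per-neighbor knob this design leans on looks roughly like
the following on IOS-XR (cluster IDs and addresses made up; the exact MCID
syntax should be verified against the platform documentation):

  router bgp 64500
   ! global cluster ID, used towards the Super-RR full mesh
   bgp cluster-id 0.0.0.99
   neighbor 10.1.0.11
    remote-as 64500
    ! regional cluster ID, used towards this local-region client
    cluster-id 0.0.0.1
    address-family ipv4 unicast
     route-reflector-client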
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/