Re: [c-nsp] Seamless MPLS interacting with flat LDP domains

2019-05-02 Thread Robert Raszuk
Radu,

MPLS in the modern DC is a non-starter purely from a technology point of view.

In modern DCs the compute nodes are your tenant PEs, all talking to the rest
of the fabric at L3. So if you want to roll out MPLS you would need to extend
it to the compute nodes. That means that, with exact-match forwarding, in
MSDCs you will see millions of FECs and millions of underlay routes which you
cannot summarize. Plus, on top of that, an overlay - say L3VPN - for
tenant/pod reachability.

Good luck operating at that scale with MPLS forwarding. Besides, while some
host NIC vendors claim support for MPLS, they do so only on PowerPoint. In
real life, take a very popular NIC vendor and you will find that MPLS packets
do not get round-robin queuing to the kernel like IPv4 or IPv6; they all line
up in a single buffer.

Only by hacking the firmware of a NIC from another vendor (which out of the
box was also far from decent) was I able to spread those flows around so that
the performance of MPLS streams arriving at the compute node was acceptable.
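
A rough way to see this for yourself (the interface name and the counter
names are only illustrative - they vary by driver):

ethtool -x eth0                     # RSS indirection table: how hashed flows map to RX queues
ethtool -n eth0 rx-flow-hash udp4   # header fields hashed for IPv4 UDP; there is usually no MPLS equivalent
ethtool -S eth0 | grep -iE 'rx.*queue'   # per-queue RX counters - with MPLS traffic arriving,
                                         # watch whether only a single queue keeps incrementing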

Best,
R.


On Thu, May 2, 2019 at 5:35 AM Radu-Adrian FEURDEAN <
cisco-...@radu-adrian.feurdean.net> wrote:

> On Wed, May 1, 2019, at 00:15, adamv0...@netconsultings.com wrote:
>
> > Converting DC to standard MPLS network and all your problems are solved.
>
> Just talk about converting DC to MPLS and you will start having other kinds
> of problems.
> IMHO, if DCs didn't massively adopt MPLS, it is because of a lack of
> training and what I call "end-user mentality". In a lot of cases you are
> dealing with people who, when you say "MPLS", understand "site-to-site
> L3VPN using some magic technology" (that they don't understand). Others do
> understand some tiny bits but see it as a "carrier technology". And there's
> enough of them that manufacturers take their opinion into consideration.
> Result => VXLAN.
> That being said, there are DCs that managed to go the MPLS way, but it
> looks more like the exception rather than the rule. Unfortunately.
>
> --
> R.-A. Feurdean


Re: [c-nsp] Cisco IOS ping utility reports lower RTT than possible

2019-05-02 Thread Octavio Alvarez
On 5/2/19 10:11 AM, Martin T wrote:
>>> Gi0/0 in Cisco 1921 ISR has 10.66.66.2/24 configured and eno3 in Linux
>>> server has 10.66.66.1/24 configured. RTT on this link is 10ms:
>>
>> How do you know this to be 100% correct - have you OTDR/iOLM tested this 
>> link?
>>
> I can't OTDR it because this delay is made with a Linux netem qdisc.
> However, I can compare it with, for example, Juniper RPM (the Cisco IP
> SLA analogue): https://i.imgur.com/i8jccwh.png or the Linux ping
> utility: https://i.imgur.com/NeubqAV.png
> On both graphs I have plotted 21600 measurements; none of them are below
> 10ms.

Hi,

I am curious what you would see if you placed a two-NIC laptop in
the middle as a bridge, did a tcpdump capture on the server-facing NIC
and analyzed the times with Wireshark.
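
Roughly something like this, assuming the laptop's two NICs are enp1s0
(router side) and enp2s0 (server side) - the names are illustrative:

ip link add name br0 type bridge
ip link set dev enp1s0 master br0
ip link set dev enp2s0 master br0
ip link set dev enp1s0 up
ip link set dev enp2s0 up
ip link set dev br0 up
# capture on the server-facing NIC, then compare the ICMP request/reply
# timestamps in Wireshark with what the router reports
tcpdump -ni enp2s0 -w rtt.pcap icmp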

Best regards,
Octavio.


Re: [c-nsp] Cisco IOS ping utility reports lower RTT than possible

2019-05-02 Thread Martin T
Hi Adam,

please read my initial e-mail again. The point is that on a link with a
10.0ms RTT the Cisco reports 8ms and even 7ms RTT. When one uses IP SLA
"icmp-echo" probes, the delay is shown to be lower than it actually is.
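
For reference, the kind of probe I mean is a plain ICMP-echo operation along
these lines (the addresses match the topology from my first e-mail; the
frequency is just an example):

ip sla 10
 icmp-echo 10.66.66.1 source-interface GigabitEthernet0/0
 frequency 5
ip sla schedule 10 life forever start-time now

The reported RTT can then be checked with "show ip sla statistics 10".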


Martin


Re: [c-nsp] Migrating a LACP bond on Catalyst to a vPC on Nexus

2019-05-02 Thread Nick Cutting
Just run a trunk between the two with all the VLANs you need while you are
migrating.
It will not be hitless when moving the port-channels, as the LACP IDs will
not match between the 3850 and the vPC member switches.
Be conscious of your gateways, and check that spanning tree is forwarding on
all the correct VLANs on the uplinks/downlinks.

Set up your vPC domain first, with a bunch of test bonded links, before the
migration. vPC will allow each Nexus to send the same LACP ID so that the
downstream device thinks it's talking to a single switch.
Then add the VLANs to be migrated.
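
As a rough skeleton of the vPC side (domain number, keepalive addresses and
port numbers are placeholders only):

feature lacp
feature vpc

vpc domain 10
  peer-keepalive destination 192.0.2.2 source 192.0.2.1

interface port-channel1
  switchport mode trunk
  vpc peer-link

! Host-facing bundle - configuring the same vpc number on both 93180s makes
! the pair present a single LACP system ID to the server.
interface port-channel20
  switchport mode trunk
  vpc 20

interface Ethernet1/20
  switchport mode trunk
  channel-group 20 mode active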

vPC is something you should spend some serious time investigating, as there
are a lot more scenarios that will stop traffic forwarding than with a simple
LACP aggregation.

-Original Message-
From: cisco-nsp  On Behalf Of Giles Coochey
Sent: Wednesday, May 1, 2019 4:53 AM
To: cisco-nsp@puck.nether.net
Subject: [c-nsp] Migrating a LACP bond on Catalyst to a vPC on Nexus


Hi All,

I'm working for a client who needs to migrate a LACP bond from a Catalyst
3850 stack to a vPC bond on a Nexus pair (93180 ToR type).

There is an existing trunk bond between the Catalyst 3850s and the Nexus, so
Layer 2 is present across them and the VLANs in the bond are the same on both
switches. The endpoints on the bonds are hosts, configured for LACP
(channel-group x mode active), 2 ports per channel.

What advice is there for migrating hosts on the 3850 stack to the Nexus, and
can it be performed in a hitless manner?

Many Thanks!

Giles




Re: [c-nsp] 6500 series, FIB exception @ dfc

2019-05-02 Thread Gert Doering
Hi,

On Thu, May 02, 2019 at 08:32:56PM +0300, Adrian Minta wrote:
> It appears you were hit by the "768k day":

"according to documentation", 2T/6T should be able to do 1M, and no
TCAM carving needed...

With IPv6 at ~70k today, there *should* be sufficient headroom... (or,
in other words, even a Sup720-XL would be fine if carved at 800k/100k)
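
The carving itself would be something along these lines on a Sup720-XL
(taking effect only after a reload; the values are just the 800k/100k split
mentioned above):

mls cef maximum-routes ip 800
mls cef maximum-routes ipv6 100

"show mls cef maximum-routes" can be used to verify the split.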

gert
-- 
"If was one thing all people took for granted, was conviction that if you 
 feed honest figures into a computer, honest figures come out. Never doubted 
 it myself till I met a computer with a sense of humor."
 Robert A. Heinlein, The Moon is a Harsh Mistress

Gert Doering - Munich, Germany g...@greenie.muc.de




Re: [c-nsp] 6500 series, FIB exception @ dfc

2019-05-02 Thread Adrian Minta

It appears you were hit by the "768k day":

https://www.youtube.com/watch?v=eTtriDf_2GU

https://motherboard.vice.com/en_us/article/vb9ez9/768k-day-is-as-overhyped-as-y2k-isp-says


--
Best regards,
Adrian Minta




Re: [c-nsp] Cisco IOS ping utility reports lower RTT than possible

2019-05-02 Thread adamv0025



> Martin T
> Sent: Thursday, May 2, 2019 4:12 PM
> 
> On Thu, May 2, 2019 at 11:07 AM James Bensley 
> wrote:
> >
> > On Mon, 29 Apr 2019 at 11:14, Martin T  wrote:
> > >
> > > Hi,
> >
> > Hi Martin,
> >
> > > I have the following very simple network topology:
> > >
> > > CISCO1921[Gi0/0] <-> [eno3]svr
> > >
> > > Gi0/0 in Cisco 1921 ISR has 10.66.66.2/24 configured and eno3 in
> > > Linux server has 10.66.66.1/24 configured. RTT on this link is 10ms:
> >
> > How do you know this to be 100% correct - have you OTDR/iOLM tested
> > this link?
> >
> > Cheers,
> > James.
> 
> Hi James,
> 
> I can't OTDR it because this delay is made with a Linux netem qdisc.
> However, I can compare it with, for example, Juniper RPM (the Cisco IP SLA
> analogue): https://i.imgur.com/i8jccwh.png or the Linux ping
> utility: https://i.imgur.com/NeubqAV.png
> On both graphs I have plotted 21600 measurements; none of them are below
> 10ms.
> 
That's just the 1921 taking its time to reply - it's not its priority to
respond to ping packets, even if it doesn't have anything better to do
(which it almost always has).
Try this setup and the delay disappears:
svr [eno4] <-> [Gi0/1]CISCO1921[Gi0/0] <-> [eno3]svr

adam



Re: [c-nsp] 6500 series, FIB exception @ dfc

2019-05-02 Thread Gert Doering
Hi,

On Thu, May 02, 2019 at 11:54:59AM -0500, Igor Smolov wrote:
> 4. Since we are utilizing over 70% of TCAM, what is the recommended hardware
> platform to move? 1M IPv4 routes are just around the corner... SUP6T-XL has 
> the
> same 1024K limitation. Anything capable of 10gb+, over 1M routes, 1 to 5U?

ASR9001, ASR1000-something (many models, 1RU/2RU, depending on ports), MX204

That the 6500BU decided to build an "XL" version of the 6T with only
1M FIB entries is, indeed, very annoying.  Either declare the platform
dead and be done with it, or build new and interesting functionality
with proper FIB size.  Or declare it "for DC only!" and do not even
offer an "XL" sup...

gert
-- 
"If was one thing all people took for granted, was conviction that if you 
 feed honest figures into a computer, honest figures come out. Never doubted 
 it myself till I met a computer with a sense of humor."
 Robert A. Heinlein, The Moon is a Harsh Mistress

Gert Doering - Munich, Germany g...@greenie.muc.de




[c-nsp] 6500 series, FIB exception @ dfc

2019-05-02 Thread Igor Smolov
Hi all,

We ran into a serious bug the other day with our Cat6504 with an S2T.
This machine has 3 upstreams with full views.  I wanted to get
feedback from the list on what it could be and how to mitigate it.

While running s2t54-adventerprisek9-mz.SPA.152-1.SY5.bin, with over
6 months of uptime, the switch threw the following:

%CFIB-DFC2-7-CFIB_EXCEPTION: FIB TCAM exception, Some entries will be
software switched

Routing issues appeared at the DFC; the S2T itself seemed to be performing
correctly.

#sh platform hardware cef exception status
Current IPv4 FIB exception state = FALSE
Current IPv6 FIB exception state = FALSE
Current MPLS FIB exception state = FALSE
Current EoM/VPLS FIB TCAM exception state = FALSE

#remote command mod 2 sh platform hard cef exception status detail
Current IPv4 FIB exception state = TRUE
...

#remote command mod 2 sh platform hardware cef resource-level
Global watermarks: apply to Fib shared area only.
Protocol watermarks: apply to protocols with non-default max-routes

Fib-size: 1024k (1048576), shared-size: 1016k (1040384), shared-usage:
877k(898387)

Global watermarks:
Red_WM: 95%,   Green_WM: 80%,   Current usage: 86%

Protocol watermarks:

 Protocol   Red_WM(%)  Green_WM(%) Current(%)
    -  --  --
 IPV4-- --  73% (of shared)
 IPV4-MCAST  -- --  0 % (of shared)
 IPV6-- --  12% (of shared)
 IPV6-MCAST  -- --  0 % (of shared)
 MPLS-- --  0 % (of shared)
 EoMPLS  -- --  0 % (of shared)
 VPLS-IPV4-MCAST -- --  0 % (of shared)
 VPLS-IPV6-MCAST -- --  0 % (of shared)

#remote command mod 2 show platform hardware cef maximum-routes usage


 Fib-size: 1024k (1048576), shared-size: 1016k (1040384),
shared-usage: 874k(895227)

 Protocol Max-routes Usage  Usage-from-shared
 --- -- -  -
 IPV4 1017k  762763 (744 k)   761739 (743 k)
 IPV4-MCAST   1017k  6  (0   k)   0  (0   k)
 IPV6 1017k  134512 (131 k)   133488 (130 k)
 IPV6-MCAST   1017k  4  (0   k)   0  (0   k)
 MPLS 1017k  1  (0   k)   0  (0   k)
 EoMPLS   1017k  1  (0   k)   0  (0   k)
 VPLS-IPV4-MCAST  1017k  0  (0   k)   0  (0   k)
 VPLS-IPV6-MCAST  1017k  0  (0   k)   0  (0   k)

Maximum Tcam Routes : 901021
Current Tcam Routes : 897288


The box did not hit any TCAM limits; usage is below the red watermark. The
message comes from a DFC card, similar to this bug:
https://quickview.cloudapps.cisco.com/quickview/bug/CSCun81101


Right after this error we performed a card reseat and an IOS upgrade. Now it's running

Cisco IOS Software, s2t54 Software (s2t54-ADVENTERPRISEK9-M), Version 15.5(1)SY,
RELEASE SOFTWARE (fc6)
System image file is "bootdisk:s2t54-adventerprisek9-mz.SPA.155-1.SY.bin"

After a short while, we got an identical error:

%CFIB-DFC2-7-CFIB_EXCEPTION: FIB TCAM exception, Some entries will be
software switched

Previously, the box was still able to switch (process) packets, but this
time everything froze and all traffic was just dropped.

#sh inventory

NAME: "WS-C6504-E", DESCR: "Cisco Systems Cisco 6500 4-slot Chassis System"
PID: WS-C6504-E, VID: V01, SN: xxx

NAME: "1", DESCR: "VS-SUP2T-10G 5 ports Supervisor Engine 2T 10GE w/
CTS Rev. 1.8"
PID: VS-SUP2T-10G  , VID: V05, SN: xxx

NAME: "msfc sub-module of 1", DESCR: "VS-F6K-MSFC5 CPU Daughterboard Rev. 1.6"
PID: VS-F6K-MSFC5  , VID:, SN: xxx

NAME: "VS-F6K-PFC4XL Policy Feature Card 4 EARL 1 sub-module of 1", DESCR:
"VS-F6K-PFC4XL Policy Feature Card 4 Rev. 1.0"
PID: VS-F6K-PFC4XL , VID: V01, SN: xxx

NAME: "2", DESCR: "WS-X6848-SFP CEF720 48 port 1000mb SFP Rev. 3.1"
PID: WS-X6848-SFP  , VID: V02, SN: xxx

NAME: "WS-F6K-DFC4-AXL Distributed Forwarding Card 4 EARL 1 sub-module of 2",
DESCR: "WS-F6K-DFC4-AXL Distributed Forwarding Card 4 Rev. 2.0"
PID: WS-F6K-DFC4-AXL   , VID: V04, SN: xxx

Qs:

1. What version of IOS has a fix for this bug?

2. If you encountered this bug, which cards/models were you using?

3. If these BGP peers are connected directly to the VS-SUP2T card, is it
correct to assume this bug is not an issue, i.e. that these TCAM entries
won't affect the DFC card?

4. Since we are utilizing over 70% of the TCAM, what is the recommended
hardware platform to move to? 1M IPv4 routes are just around the corner...
The SUP6T-XL has the same 1024K limitation. Anything capable of 10G+, over
1M routes, in 1 to 5U?

Cheers,
Igor Smolov

Re: [c-nsp] Cisco IOS ping utility reports lower RTT than possible

2019-05-02 Thread Martin T
On Thu, May 2, 2019 at 11:07 AM James Bensley  wrote:
>
> On Mon, 29 Apr 2019 at 11:14, Martin T  wrote:
> >
> > Hi,
>
> Hi Martin,
>
> > I have the following very simple network topology:
> >
> > CISCO1921[Gi0/0] <-> [eno3]svr
> >
> > Gi0/0 in Cisco 1921 ISR has 10.66.66.2/24 configured and eno3 in Linux
> > server has 10.66.66.1/24 configured. RTT on this link is 10ms:
>
> How do you know this to be 100% correct - have you OTDR/iOLM tested this link?
>
> Cheers,
> James.

Hi James,

I can't OTDR it because this delay is made with a Linux netem qdisc.
However, I can compare it with, for example, Juniper RPM (the Cisco IP
SLA analogue): https://i.imgur.com/i8jccwh.png or the Linux ping
utility: https://i.imgur.com/NeubqAV.png
On both graphs I have plotted 21600 measurements; none of them are below
10ms.
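
For reference, a minimal way to reproduce such a delay with netem (the exact
parameters in my setup may differ) is:

tc qdisc add dev eno3 root netem delay 10ms
tc qdisc show dev eno3

Only the server's egress leg is delayed, but that still adds 10ms to every
round trip, whichever end initiates the ping.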


thanks,
Martin


Re: [c-nsp] Seamless MPLS interacting with flat LDP domains

2019-05-02 Thread Mark Tinka



On 2/May/19 14:14, adamv0...@netconsultings.com wrote:

> Though it's interesting that the same people who are afraid of the simple
> bit, which is the hop-by-hop transport, are fine with the complex EVPN bit
> on top :)

Which is exactly why the market I was in that was heavily using VPLS at
the time was mostly about bragging rather than actual function.

Anyone that came to me asking for VPLS, I asked them what L3VPN didn't do
well enough. We never deployed a VPLS customer, even though the network
itself was running VPLS for PPPoE backhaul. I mean, I would sort of understand
if your payload was IPX/SPX, DECnet or AppleTalk... but anyone running
such payloads is probably more comfortable building and operating an
X.25, Frame Relay or ATM network!

The constant need to re-invent yourself so that you can appeal to your
customers or employers is what has seen a number of inappropriate
technologies being deployed in Service Provider and Enterprise networks.
In 2014, a customer wanted to nail a 10Gbps EoMPLS service on specific
paths on our IP/MPLS backbone, which meant we'd have had to use RSVP. I
directed them to our EoDWDM service instead. Not every knob needs to be
touched, unless it's the one that rules them all :-).

> But I get it, vendors nowadays assume the crowd's networking knowledge is
> subpar, hence these nicely pre-packaged, click-a-button solutions for DC
> deployments.

I was speaking with a friend in the community a few months ago...
perhaps good old-fashioned routing workshops at NOGs will make a
glorious comeback, since the focus is less on what you can do by hand and
more on what you can click with a mouse (in the process, losing the
basics). But alas, there are no teachers anymore, as we've found. They
are in short supply, busy with day jobs, and the new blood that's coming
up is too far disconnected from the basics of deploying and running an
IP network; they'd rather be fooling around with an app somewhere.

There I go, rambling about the good old days, already :-)...

Mark.



Re: [c-nsp] Seamless MPLS interacting with flat LDP domains

2019-05-02 Thread adamv0025
> Radu-Adrian FEURDEAN
> Sent: Thursday, May 2, 2019 10:34 AM
> 
> On Wed, May 1, 2019, at 00:15, adamv0...@netconsultings.com wrote:
> 
> > Converting DC to standard MPLS network and all your problems are solved.
> 
> Just talk about converting DC to MPLS and you will start having other kinds
> of problems.
> IMHO, if DCs didn't massively adopt MPLS, it is because of a lack of
> training and what I call "end-user mentality". In a lot of cases you are
> dealing with people who, when you say "MPLS", understand "site-to-site
> L3VPN using some magic technology" (that they don't understand).
>
Yeah, that's exactly it. I'm up against this "end-user mentality" on a daily
basis when talking about other "magical" stuff like MD-SAL or YANG.

> Others do understand some tiny bits but see it as a "carrier technology".
> And there's enough of them that manufacturers take their opinion into
> consideration. Result => VXLAN.
>
Though it's interesting that the same people who are afraid of the simple
bit, which is the hop-by-hop transport, are fine with the complex EVPN bit
on top :)
But I get it, vendors nowadays assume the crowd's networking knowledge is
subpar, hence these nicely pre-packaged, click-a-button solutions for DC
deployments.

> That being said, there are DCs that managed to go the MPLS way, but it
> looks more like the exception rather than the rule. Unfortunately.
> 
We'll see what time brings.

adam



Re: [c-nsp] Seamless MPLS interacting with flat LDP domains

2019-05-02 Thread Mark Tinka



On 2/May/19 11:33, Radu-Adrian FEURDEAN wrote:

>
> Just talk about converting DC to MPLS and you will start having other kinds
> of problems.
> IMHO, if DCs didn't massively adopt MPLS, it is because of a lack of
> training and what I call "end-user mentality". In a lot of cases you are
> dealing with people who, when you say "MPLS", understand "site-to-site
> L3VPN using some magic technology" (that they don't understand). Others do
> understand some tiny bits but see it as a "carrier technology". And there's
> enough of them that manufacturers take their opinion into consideration.
> Result => VXLAN.
> That being said, there are DCs that managed to go the MPLS way, but it
> looks more like the exception rather than the rule. Unfortunately.

When exchange points started using it (VPLS) to operate the member
fabric, you know it was downhill from there :-).

And that was way before all this Cloud/DCI/VXLAN/SDN/SD-WAN monstrosity
our industry finds itself in :-).

Mark.


Re: [c-nsp] Seamless MPLS interacting with flat LDP domains

2019-05-02 Thread Radu-Adrian FEURDEAN
On Wed, May 1, 2019, at 00:15, adamv0...@netconsultings.com wrote:

> Converting DC to standard MPLS network and all your problems are solved.

Just talk about converting DC to MPLS and you will start having other kinds
of problems.
IMHO, if DCs didn't massively adopt MPLS, it is because of a lack of training
and what I call "end-user mentality". In a lot of cases you are dealing with
people who, when you say "MPLS", understand "site-to-site L3VPN using some
magic technology" (that they don't understand). Others do understand some
tiny bits but see it as a "carrier technology". And there's enough of them
that manufacturers take their opinion into consideration. Result => VXLAN.
That being said, there are DCs that managed to go the MPLS way, but it looks
more like the exception rather than the rule. Unfortunately.

-- 
R.-A. Feurdean


Re: [c-nsp] Cisco IOS ping utility reports lower RTT than possible

2019-05-02 Thread James Bensley
On Mon, 29 Apr 2019 at 11:14, Martin T  wrote:
>
> Hi,

Hi Martin,

> I have the following very simple network topology:
>
> CISCO1921[Gi0/0] <-> [eno3]svr
>
> Gi0/0 in Cisco 1921 ISR has 10.66.66.2/24 configured and eno3 in Linux
> server has 10.66.66.1/24 configured. RTT on this link is 10ms:

How do you know this to be 100% correct - have you OTDR/iOLM tested this link?

Cheers,
James.


Re: [c-nsp] Seamless MPLS interacting with flat LDP domains

2019-05-02 Thread James Bensley
On Tue, 30 Apr 2019 at 15:04,  wrote:
> > So for the ASR920, you get about 20,000 FIB entries. That's what you want
> > to keep your eye on to determine whether you're at a point where you need
> > to do this.
> >
> > Ideally, you would be carrying IGP and LDP in FIB. With BGP-SD, you can
> > control the number of routes you want in FIB.
> >
> Also with OSPF prefix-suppression you can reduce the OSPF footprint to mere
> loopbacks (i.e. excluding all p2p links).

Originally, I was in support of prefix-suppression; however, Cisco
implemented it on IOS and IOS-XE devices for OSPF and not IS-IS, and for
neither OSPF nor IS-IS on IOS-XR. Later they implemented it on IOS-XR
devices, but for IS-IS only and not OSPF. Another Cisco fail at aligning
their own features across their own products. So, since it can't even be
deployed in an all-Cisco network, let alone a typical multi-vendor network,
I just don't use it anymore.

Having said that, it does work; this is an example on IOS of using
prefix-suppression for OSPF:
https://null.53bits.co.uk/index.php?page=ospf-inter-area-filtering
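
The IOS config itself is minimal - something like this (the process number
and interface name are just examples):

router ospf 1
 prefix-suppression

or, per interface:

interface GigabitEthernet0/1
 ip ospf prefix-suppression

This keeps the transit link prefixes out of the LSDB while the loopbacks are
still advertised.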

IS-IS on Cisco has a method to advertise only the loopback interface in
the LSDB on both IOS/XE and IOS-XR; however, they are two different
methods.

IOS:
router isis
 advertise passive-only
 passive-interface Loopback0

IOS-XR:
router isis
 interface x/y
  suppressed

I've got lots of notes on OSPF and IS-IS scaling in a mixed
Juniper/Cisco environment, but they're very much in draft format; I can
try to make them presentable if I get some free time and there is
demand.

Cheers,
James.