Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-05-02 Thread James Bensley
On 2 May 2017 at 11:30,   wrote:
>> James Bensley
>> Sent: Tuesday, May 02, 2017 9:28 AM
>>
>> Just to clarify, one doesn't need to enable indirect-next-hop because it is
>> enabled by default, but if it were turned off for any reason, I presume it is
>> a requirement for PIC Edge? Or is it really not required at all? If not, how
>> is the Juniper equivalent working?
>>
> It's a requirement for PIC Edge (Egress PE or Egress PE-CE link failure) as
> well as for PIC Core (Ingress PE core link failure).
> To be precise, it is required for in-place modification of the forwarding
> object to the backup/alternate node.
> So in a sense that applies to ECMP/LACP NHs as well; the only difference is
> that both NHs are in use in those cases - but you still need to be able to
> update all FIB records using them at once in case one of the NHs goes down.
>
>
>
>> Looking on juniper.net it looks like one exports multiple routes from the RIB
>> to the FIB; however, assuming the weight == 0x4000, those additional paths
>> won't be used during "normal" operations, only during a failure, so we won't
>> actually get any per-packet load balancing (which would be undesirable for
>> us). Is that correct?
> That's precise.
>
> adam

Thanks!


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-05-02 Thread adamv0025
> James Bensley
> Sent: Tuesday, May 02, 2017 9:28 AM
> 
> Just to clarify, one doesn't need to enable indirect-next-hop because it is
> enabled by default, but if it were turned off for any reason, I presume it is
> a requirement for PIC Edge? Or is it really not required at all? If not, how
> is the Juniper equivalent working?
> 
It's a requirement for PIC Edge (Egress PE or Egress PE-CE link failure) as
well as for PIC Core (Ingress PE core link failure).
To be precise, it is required for in-place modification of the forwarding
object to the backup/alternate node.
So in a sense that applies to ECMP/LACP NHs as well; the only difference is
that both NHs are in use in those cases - but you still need to be able to
update all FIB records using them at once in case one of the NHs goes down.



> Looking on juniper.net it looks like one exports multiple routes from the RIB
> to the FIB; however, assuming the weight == 0x4000, those additional paths
> won't be used during "normal" operations, only during a failure, so we won't
> actually get any per-packet load balancing (which would be undesirable for
> us). Is that correct?
That's precise.
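For illustration, a hypothetical forwarding-table entry with a weighted backup
might look like this (addresses, interfaces, and indexes are made up; the
layout follows the real outputs shown later in this thread):

show route forwarding-table destination 1.2.3.4/32 extensive
  Next-hop type: unilist   Index: 1048575
  Nexthop: 192.168.0.229
  Next-hop interface: ae18.0    Weight: 0x0       << active path
  Nexthop: 192.168.0.237
  Next-hop interface: ae19.0    Weight: 0x4000    << backup, idle until failure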

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::




Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-05-02 Thread James Bensley
On 27 April 2017 at 14:41,   wrote:
>> James Bensley
>> Sent: Thursday, April 27, 2017 9:13 AM
>>
>> It might be worth pointing out that on Cisco you need to enable PIC Core for
>> PIC Edge to work at its best.

> So it's either Core or Core+Edge.

That's pretty much the point I was trying to make, albeit unclearly.

>> For your VPNv4/VPNv6 stuff one must enable PIC Edge with advertise best
>> external or add path etc. However enabling PIC Edge without PIC Core means
>> that backup paths will be pre-computed but not programmed into hardware.
> Again, not sure how you can enable PIC Edge but not PIC Core on Cisco?

We use PIC Core + PIC Edge; however, we still have some 7600s in the mix
which don't support a hierarchical FIB for labelled prefixes without
recirculating all packets (PIC Edge is basically not supported for
VPNv4/VPNv6 prefixes without halving your pps rate). So you end up
with BGP computing a backup path in the BGP RIB (random prefix from the
Internet table shown below as an example) but there is no backup path in
CEF/FIB:

#show bgp ipv4 unicast 1.0.4.0/24
BGP routing table entry for 1.0.4.0/24, version 326263390
BGP Bestpath: compare-routerid
Paths: (3 available, best #3, table default)
  Advertise-best-external
  Advertised to update-groups:
 2  4  10
  Refresh Epoch 3
  3356 174 4826 38803 56203
x.x.x.254 (metric 2) from x.x.x.254 (x.x.x.254)
  Origin incomplete, metric 0, localpref 100, valid, internal
  Community: x:200 x:210
  rx pathid: 0, tx pathid: 0
  Refresh Epoch 1
  6453 3257 4826 38803 56203
195.219.83.137 from 195.219.83.137 (66.110.10.38)
  Origin incomplete, metric 0, localpref 100, valid, external,
backup/repair, advertise-best-external<< PIC backup path
  Community: x:200 x:211 , recursive-via-connected
  rx pathid: 0, tx pathid: 0
  Refresh Epoch 1
  174 4826 38803 56203
10.0.0.7 (metric 1001) from 10.0.0.7 (10.0.0.7)
  Origin incomplete, metric 0, localpref 100, valid, internal,
best<< best path
  Community: x:200 x:212
  rx pathid: 0, tx pathid: 0x0


So one ends up having the next best path learned and computed but not
installed into the FIB. A bit of a corner case, I know, but Cisco knows we
love to juggle more items than we have hands!
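A quick hedged way to confirm what actually made it into the FIB is to look at
CEF directly (output omitted here; on a platform that programs the backup you
would expect a repair/backup entry, while on these 7600s there is none):

#show ip cef 1.0.4.0/24 detail
#show ip cef 1.0.4.0/24 internal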

>> In Juniper land, does one need to activate indirect-next-hop before you can
>> provide PIC Edge for eBGP VPNv4/VPNv6 routes?
>>
> Nope, just load-balancing.
> And then protection under neighbour stanza.
>
>> Is indirect-next-hop enabled by default on newer MX devices / Junos
>> versions?
>>
> Yes.

Just to clarify, one doesn't need to enable indirect-next-hop because
it is enabled by default, but if it were turned off for any reason, I
presume it is a requirement for PIC Edge? Or is it really not required
at all? If not, how is the Juniper equivalent working?

Looking on juniper.net it looks like one exports multiple routes from
the RIB to the FIB; however, assuming the weight == 0x4000, those
additional paths won't be used during "normal" operations, only during a
failure, so we won't actually get any per-packet load balancing (which
would be undesirable for us). Is that correct?

Cheers,
James.


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-27 Thread adamv0025
> James Bensley
> Sent: Thursday, April 27, 2017 9:13 AM
> 
> It might be worth pointing out that on Cisco you need to enable PIC Core for
> PIC Edge to work at its best. PIC Core as already mentioned is just enabling
> the hierarchical FIB. So for your IGP / global routing table prefixes, they
> will be covered by backup paths if they exist (backup path computed and
> installed into the hardware FIB).
> 
Don't know about that; on Cisco, the Edge and Core functionality seems to be
joined together.

For instance, in IOS or XE the FIB hierarchy is enabled by default and can be
disabled using
"cef table output-chain build favor memory-utilization" and re-enabled using
"cef table output-chain build favor convergence-speed".
In detail: convergence-speed and indirection characteristics are enabled by
default for the building of Cisco Express Forwarding table output chains
(since our beloved 12.2(33)SRA).

BGP PIC (Core) is configured using "bgp additional-paths install".
The BGP PIC (Edge+Core) feature is automatically enabled by the BGP Best
External feature.
When you configure the BGP Best External feature using the bgp
advertise-best-external command, you need not enable the BGP PIC feature with
the bgp additional-paths install command.
The two commands cannot be combined: if you try to configure the BGP PIC
feature after configuring the BGP Best External feature, you receive an error.
So it's either Core or Core+Edge.
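A minimal IOS/IOS-XE sketch of those two mutually exclusive options (the AS
number is a placeholder):

router bgp 64512
 address-family ipv4
  ! either:
  bgp additional-paths install        << PIC (Core): program the backup path
  ! or, mutually exclusive with the above:
  bgp advertise-best-external         << Best External, implies PIC (Edge+Core)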

In XR, the cmd "advertise best-external" only advertises the best external path
and does not enable PIC on its own.
You need to use an "additional-paths selection" policy to calculate a backup
and enable the PIC (Edge+Core) functionality.
However, the FIB hierarchy is enabled by default.
Once again, no distinction between Core and Edge.
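A hedged IOS-XR sketch of that (policy name and AS number are placeholders):

route-policy BACKUP-PATH
  set path-selection backup 1 install
end-policy
!
router bgp 64512
 address-family ipv4 unicast
  advertise best-external
  additional-paths selection route-policy BACKUP-PATH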


> For your VPNv4/VPNv6 stuff one must enable PIC Edge with advertise best
> external or add path etc. However enabling PIC Edge without PIC Core means
> that backup paths will be pre-computed but not programmed into hardware.
Again, not sure how you can enable PIC Edge but not PIC Core on Cisco?


> In Juniper land, does one need to activate indirect-next-hop before you can
> provide PIC Edge for eBGP VPNv4/VPNv6 routes?
> 
Nope, just load-balancing.
And then protection under the neighbour stanza.
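In set form, roughly (policy, instance, and group names are placeholders; note
that for the global table this knob only appears post-15.1, as discussed
elsewhere in this thread):

set policy-options policy-statement ECMP then load-balance per-packet
set routing-options forwarding-table export ECMP
set routing-instances CUST protocols bgp group toCE2 family inet unicast protection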

> Is indirect-next-hop enabled by default on newer MX devices / Junos
> versions?
> 
Yes. 


adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::




Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-27 Thread James Bensley
On 19 April 2017 at 17:20, Dragan Jovicic  wrote:
> What Cisco originally calls "PIC Core" is simply indirect-next-hop feature
> on routers, same on Juniper. On "flat" architectures without indirect
> next-hop, a failure of an uplink (a core link) on a PE router would require
> this PE to reprogram all BGP prefixes to new directly connected next-hop.
> Depending on your router and number of prefixes this may very well be
> upward of several dozen seconds, if not a few minutes. With the
> indirect-next-hop feature, a PE router simply updates a pointer from the BGP
> next-hop to the new interface, making this an almost instantaneous operation.
> On older routers without it, you may resort to using multiple equal-cost
> uplinks (or LAG interfaces) since in this case you already have a backup
> next-hop in your forwarding table.
>
> What Cisco originally calls "PIC Edge" is the ability to install an already
> present backup route from another BGP router into the forwarding table.
> For this you need to:
>
> 1) already have a backup route from the control plane in the RIB (using add
> path, iBGP, an additional RR, advertise external, etc),
> 2) install these routes into the forwarding table (this is the main part, as
> this FIB update is the largest piece of the convergence cake).
> On Juniper, the part of importing routes into the FT is, for some reason,
> called "protect core" (and available for the inet.0 table post-15.1), and
> 3) the PE router needs to detect failure of the upstream BGP router or its
> link. One of the ways is to passively include the upstream link in the IGP,
> but there are
> others.
>
> Note the difference - in first case BGP next-hop is unchanged, in the
> second, you have a new BGP next-hop altogether.
>
> What Juniper calls "BGP Edge Link Protection" is something different. It
> allows Edge ASBR router to reroute/tunnel traffic from failed CE link over
> core to another ASBR. For this to work the router must not look at IP
> packet (still pointing to failed PE-CE links), hence per-prefix labels are
> used. Juniper very well mentions this. Also this is available only for
> labeled inet/inet6 traffic, not family inet - at least I don't see it
> available in recent versions.
>
> There is also another technology called "Egress Protection", which is
> something different but quite cool.
>
> @OP, depending on what your topology looks like, you may benefit from simple
> indirect-nh (aka PIC Core) as this might not need an upgrade. For link
> failure detection on the ASBR, you might use BFD, smaller timers, even
> scripting, if LoS is not a viable option. But this still means BGP
> convergence. LoS opens some cool options like using the same BGP next-hop
> pointing over multiple RSVP tunnels ending on multiple routers.
>
> As for the default route, if it's installed in the FT, I don't see why the
> router wouldn't use this entry in the absence of a more specific (bearing all
> other issues with such a setup).
> If you use labeled internet traffic you can resolve remote next-hop of
> static route to get a label for it.
>
> BR
>
> -Dragan
> ccie/jncie

Hi,

It might be worth pointing out that on Cisco you need to enable PIC
Core for PIC Edge to work at its best. PIC Core as already mentioned
is just enabling the hierarchical FIB. So for your IGP / global
routing table prefixes they will be covered by backup paths if they
exist (backup path computed and installed into hardware FIB).

For your VPNv4/VPNv6 stuff one must enable PIC Edge with advertise best
external or add path etc. However, enabling PIC Edge without PIC Core
means that backup paths will be pre-computed but not programmed into
hardware. With PIC Core enabled, the FIB is arranged hierarchically to
support prefix indirection, AND for your IGP (for example), which has
visibility of multiple paths without the need for any additional
features (unlike eBGP, which only sees the best paths by default), a PE
can both calculate AND program the backup path into the FIB. With BGP
PIC Edge and no PIC Core, eBGP backup paths can be received and
computed but the backup path is not pre-programmed into the FIB. There is
still some speed-up from this, but really, if using BGP PIC Edge, PIC Core
should be enabled too.

There are caveats in the Cisco world: 7600s support PIC Core, but
to support PIC Edge they have to recirculate all packets, so you halve
your pps rate for VPNv4/VPNv6 packets. ASR9000s have the hierarchical
FIB enabled by default and I don't think it can be disabled.
ME3600/ME3800 don't have the H-FIB enabled by default, but it can be
enabled and it supports VPNv4/VPNv6 prefixes, and so on.

In Juniper land, does one need to activate indirect-next-hop before
you can provide PIC Edge for eBGP VPNv4/VPNv6 routes?

Is indirect-next-hop enabled by default on newer MX devices / Junos versions?

Cheers,
James.

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-26 Thread Michael Hare
Admittedly, this late-arriving follow-up may not be J-specific.

Our transit extensions aren't really traditional metro ethernet circuits; the
topology looks more like the following:

a ---vlanX--- b --- c ---vlanX--- d

The "shared l2" device connects several .edu institutions into major 
aggregation facilities.  link 'a---b'  is optically protected.  .   'b' to 'c' 
is actually a vlan-ccc so 'b' and 'c' are already tied but the point is moot.  
We run BFD with 'd'.

If I understand correctly, a theoretical eOAM session between 'a' and 'd' could
cause both 'a' and 'd' IFLs to drop on an end-to-end connectivity fault, but
eOAM assumes you manage both eOAM endpoints and is not meant for a cross-domain
situation. Is it the correct conclusion that eOAM between 'a' and 'd' is
unlikely to be supported by any reasonable upstream? In this case 'd' is a
Tier 1.

Using this URL as a starting point for exploring eOAM:
https://www.juniper.net/documentation/en_US/junos12.3/topics/example/layer-2-802-1ag-ethernet-oam-cfm-example-over-bridge-connections-mx-solutions.html
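For reference, a bare-bones sketch along the lines of that example (domain and
association names, level, interface, and MEP IDs are all placeholders, and per
the above it assumes the far end runs a matching MEP):

set protocols oam ethernet connectivity-fault-management action-profile IFL-DOWN event adjacency-loss
set protocols oam ethernet connectivity-fault-management action-profile IFL-DOWN action interface-down
set protocols oam ethernet connectivity-fault-management maintenance-domain MD1 level 5
set protocols oam ethernet connectivity-fault-management maintenance-domain MD1 maintenance-association MA1 continuity-check interval 1s
set protocols oam ethernet connectivity-fault-management maintenance-domain MD1 maintenance-association MA1 mep 1 interface ge-0/0/0.100 direction down
set protocols oam ethernet connectivity-fault-management maintenance-domain MD1 maintenance-association MA1 mep 1 remote-mep 2 action-profile IFL-DOWN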

-Michael


From: Alexander Arseniev [mailto:arsen...@btinternet.com]
Sent: Wednesday, April 19, 2017 11:19 AM
To: Michael Hare <michael.h...@wisc.edu>; juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] improving global unicast convergence (with or without 
BGP-PIC)


Hi Michael,

With multiple full tables from two or more eBGP providers + iBGP peers, Your
ASBR has to go via BGP best path reselection first before it can start
programming the FIB. And the most specific route always wins, even if it is
otherwise inferior, so BGP has to go over 100,000s of prefixes to find the
best among the specific prefixes.

JUNOS INH helps at FIB programming stage, not at BGP best path reselection 
stage. Additionally in recent JUNOS versions, there are improvements made 
regarding FIB programming speed, please ask Your Juniper account team for 
details.

If You would not have full tables over the iBGP peering, then the picture would
be simplified, in the sense that when a full-table eBGP peer goes down its
invalidated prefixes only need to be removed, and the eBGP 0/0 becomes the best
path. But I guess You won't like to run the network that way?

You can sense L2 failures by using either LACP with a single member link
(assuming Your Metro Ethernet provider passes LACP PDUs), or Ethernet OAM
(assuming Your Metro Ethernet provider passes EOAM PDUs), or BFD. I would
personally rate BFD as the tool of last resort as (a) BFD being a UDP/IP
protocol means there are many other failures that affect BFD, like
access-lists, (b) even when BFD is down, the BGP session may still be up,
whereas You want the BFD to follow BGP, and (c) a BFD failure does not bring
the interface down, it just tears down the BGP session, whereas an LACP/EOAM
failure brings the logical interface down. Presumably, someone will point out
uBFD over LAG, but it still requires LACP, so LACP+uBFD is overkill for a
simple network UNLESS You are really into microseconds convergence.

When I said "JUNOS is different from IOS - BGP session will stay up until 
holdtime fires ..." - this is default behavior, You don't need to configure 
anything for it.

HTH

Thx


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-24 Thread adamv0025
Hi Dragan,

> I'm not sure what you mean by indirect next-hop in unilist, mind showing
> what you mean exactly?
Sorry, what I meant was this:

show route table TEST.inet.0 1.2.3.4/32 extensive
#Multipath Preference: 255
Next hop type: List, Next hop index: 1048575

show route forwarding-table destination 1.2.3.4/32 extensive
Next-hop type: unilist   Index: 1048575  Reference: 22347


Regarding the show command that will display whether you are using indirect
NHs:
show route table TEST.inet.0 1.2.3.4/32 extensive
...
KRT in-kernel 1.2.3.4/32 -> {list:10.0.0.99, indirect(1048604)}
...
BGP 
 Indirect next hops: 1
...
Indirect next hop: 1ff02044 1048604 INH Session ID: 0x88



adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::






Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-22 Thread Dragan Jovicic
Hi,

You are absolutely correct and after 13.3 that last command is hidden.


BR,

+Dragan
ccie/jncie

On Sun, Apr 23, 2017 at 12:04 AM, Olivier Benghozi <
olivier.bengh...@wifirst.fr> wrote:

> Hi,
>
> > On 22 apr. 2017 at 22:47, Dragan Jovicic  wrote :
> >
> > From documentation:
> >> On platforms containing only MPCs chained composite next hops are
> enabled by default. With Junos OS Release 13.3, the support for chained
> composite next hops is enhanced to automatically identify the underlying
> platform capability on composite next hops at startup time, without relying
> on user configuration, and to decide the next hop type (composite or
> indirect) to embed in the Layer 3 VPN label.
>
> In fact the most relevant part of this doc is what immediately follows
> that:
> "This enhances the support for back-to-back PE-PE connections in Layer 3
> VPN with composite next hops, and eliminates the need for the
> pe-pe-connection statement."
>
> Actually, only "pe-pe-connection" became useless, if you enable composite
> for l3vpn.
>
>
> > There are quite a few options to configure, and a few scenarios which
> > might affect how they are created, such as if your PE is also a P router,
> > and if you have a degenerate PE-PE connection, to name two:
> > +l3vpn pe-pe-connection;
>
> Since 13.3, only l3vpn.
>


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-22 Thread Olivier Benghozi
Hi,

> On 22 apr. 2017 at 22:47, Dragan Jovicic  wrote :
> 
> From documentation:
>> On platforms containing only MPCs chained composite next hops are enabled by 
>> default. With Junos OS Release 13.3, the support for chained composite next 
>> hops is enhanced to automatically identify the underlying platform 
>> capability on composite next hops at startup time, without relying on user 
>> configuration, and to decide the next hop type (composite or indirect) to 
>> embed in the Layer 3 VPN label.

In fact the most relevant part of this doc is what immediately follows that:
"This enhances the support for back-to-back PE-PE connections in Layer 3 VPN 
with composite next hops, and eliminates the need for the pe-pe-connection 
statement."

Actually, only "pe-pe-connection" became useless, if you enable composite for 
l3vpn.


> There are quite a few options to configure, and a few scenarios which might
> affect how they are created, such as if your PE is also a P router, and if
> you have a degenerate PE-PE connection, to name two:
> +l3vpn pe-pe-connection;

Since 13.3, only l3vpn.
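In set form, that explicit enablement reduces to just this one-liner (a
sketch, per the stanza quoted elsewhere in this thread):

set routing-options forwarding-table chained-composite-next-hop ingress l3vpn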



Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-22 Thread Dragan Jovicic
Hi Alex,

To answer Your above question - when BFD goes down, BGP goes initially down
> too, but then it tries to reestablish without BFD.
> And if it succeeds, then You'd have BFD down but BGP up.
>

Is this a bug or a feature (the eternal question)? Once a client protocol
registers with the BFD process, why should the session be up if BFD is down?

@Luis

> Even on newer Junos if you don't enable the indirect-next-hop toggle
> you'll still see krt entries with 0x2 flags.
>

You might see 0x0, 0x1, 0x2 and 0x3, the last two being on later JUNOS.
0x2 means the feature is not explicitly enabled via configuration. It doesn't
tell you anything about whether you have indirect-nh enabled. MPCs running
Trio can't disable this, but if you are running a mix of MPC and DPC cards
then you have to enable it explicitly. I am not aware of any other command
which will show you if this feature is running on your cards.

@adam

> Nah, the KRT command doesn't tell you much; show route extensive is going
> to tell you if there's an indirect next-hop in the unilist and what
> forwarding next-hop(s) the indirect next-hop is actually pointing to, along
> with its value.
>

mcast, composite, and indirect next-hops (all indirect) point to a unilist,
which points to unicast (or aggregate, which recurses to unicast).
The kernel's show route extensive doesn't show you whether the actual PFE
maintains the indirect next-hop to forwarding next-hop binding on the PFE.

I'm not sure what you mean by indirect next-hop in unilist, mind showing
what you mean exactly?


>From documentation:
> On platforms containing only MPCs chained composite next hops are enabled
> by
> default.
> With Junos OS Release 13.3, the support for chained composite next hops is
> enhanced to automatically identify the underlying platform capability on
> composite next hops at startup time, without relying on user configuration,
> and to decide the next hop type (composite or indirect) to embed in the
> Layer 3 VPN label.
>


This is on recent JUNOS, all MPCs.

Looking at the forwarding table on the routing engine, I see the full
expansion of VPN and transport labels at the last step of indirection, the
unicast next-hop. There are no composite next-hops enabled.

# run show route forwarding-table dest 10.15.208.

Destination:  10.15.208.0/24
  Route type: user
  Route reference: 0   Route interface-index: 0
  Multicast RPF nh index: 0
  P2mpidx: 0
  Flags: sent to PFE
  Next-hop type: indirect  Index: 1049328  Reference: 7
  Next-hop type: unilist   Index: 1049304  Reference: 2
  Nexthop: 192.168.0.229
  Next-hop type: Push 155823, Push 366897(top) Index: 1988 Reference: 2
  Load Balance Label: None
  Next-hop interface: ae18.0Weight: 0x0
  Nexthop: 192.168.0.237
  Next-hop type: Push 155823, Push 322945(top) Index: 2137 Reference: 2
  Load Balance Label: None
  Next-hop interface: ae19.0Weight: 0x0

A look at the PFE level will also show the missing composite next-hops.

Once explicitly enabled, I see composites.

[edit routing-options forwarding-table]
+chained-composite-next-hop {
+ingress {
+l2vpn;
+l2ckt;
+labeled-bgp {
+inet6;
+}
+l3vpn;
+}
+}

There are quite a few options to configure, and a few scenarios which might
affect how they are created, such as if your PE is also a P router, and if
you have a degenerate PE-PE connection, to name two:

[edit routing-options forwarding-table]
+chained-composite-next-hop {
+ingress {
+l3vpn pe-pe-connection;
+}
+}

To recap, I wouldn't assume all these options are configured automatically;
better to check.

BR,

+Dragan
ccie/jncie


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-21 Thread Alexander Arseniev



On 20/04/2017 09:43, adamv0...@netconsultings.com wrote:



(b) even when BFD is down, the BGP session may still be up whereas You
want the BFD to follow BGP

Now how can that happen, other than a bug?


To answer Your above question - when BFD goes down, BGP goes initially 
down too, but then it tries to reestablish without BFD.

And if it succeeds, then You'd have BFD down but BGP up.
Try that in the lab - configure BGP+BFD, bring down BFD by applying ACL 
or deactivating BFD config, go have a cup of coffee and come back - 
You'd see BFD down & BGP up.

HTH
Thx
Alex



Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-20 Thread adamv0025

> -Original Message-
> From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
> Of Luis Balbinot
> Sent: Thursday, April 20, 2017 3:43 PM
> To: Dragan Jovicic
> Cc: juniper-nsp@puck.nether.net; Vincent Bernat
> Subject: Re: [j-nsp] improving global unicast convergence (with or without
> BGP-PIC)
> 
> Even on newer Junos if you don't enable the indirect-next-hop toggle you'll
> still see krt entries with 0x2 flags.
> 
Nah, the KRT command doesn't tell you much; show route extensive is going to
tell you if there's an indirect next-hop in the unilist and what forwarding
next-hop(s) the indirect next-hop is actually pointing to, along with its value.

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::



Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-20 Thread Luis Balbinot
Even on newer Junos if you don't enable the indirect-next-hop toggle
you'll still see krt entries with 0x2 flags.

On Tue, Apr 18, 2017 at 6:30 PM, Dragan Jovicic  wrote:
> As mentioned, on MX Trio indirect-nh is enabled and can't be disabled.
> You could check with > show krt indirect-next-hop protocol-next-hop
> commands (a 0x3 flag should mean it is enabled).
> However this was not the case in older Junos versions, where
> indirect-next-hop was in fact not enabled and had to be enabled even on MX
> MPC (it escapes me when this was, pre-13 or so).
>
> If your uplink fails, with indirect-nh the change is almost instantaneous,
> given your BGP next-hop is unchanged, as only one pointer needs to be
> rewritten (or you have equal-cost uplinks...). However you still need the
> composite-next-hop feature for L3VPN labeled traffic and this is NOT
> enabled by default (might be important if you run lots of routes in a vrf)...
>
> If your BGP next-hop changes and you have routes in rib (add-paths,
> advertise-external, multiple RRs), and you have them installed in FT
> (pre- or post- 15.1), you still rely on failure detection of upstream BGP
> router or upstream link (even slower, but you could put upstream links in
> IGP).
>
> There's also egress-protection for labeled traffic..
>
> Before we implemented bgp pic/add-paths, we used multiple RR and iBGP mesh
> in certain parts and spread BGP partial feeds from multiple upstream
> routers to at least minimize time to update FIB, as none of this required
> any upgrade/maintenance.
>
> If you find your FIB update time is terrible, bgp pic edge will definitely
> help..
>
> BR,
>
>
> -Dragan
>
> ccie/jncie
>
>
>
>
>
> On Tue, Apr 18, 2017 at 10:07 PM, Vincent Bernat  wrote:
>
>>  ❦ 18 avril 2017 21:51 +0200, Raphael Mazelier  :
>>
>> >> Is this the case for chassis MX104 and 80? Is your recommendation to run
>> >> with indirect-next-hop on them as well?
>> >>
>> >
>> > Correct me if I'm wrong but I think this is the default on all the MX
>> > since a long time. There is no downside afaik.
>>
>> Documentation says:
>>
>> > By default, the Junos Trio Modular Port Concentrator (MPC) chipset on
>> > MX Series routers is enabled with indirectly connected next hops, and
>> > this cannot be disabled using the no-indirect-next-hop statement.
>> --
>> Harp not on that string.
>> -- William Shakespeare, "Henry VI"

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-20 Thread adamv0025
> From: Saku Ytti [mailto:s...@ytti.fi]
> Thursday, April 20, 2017 10:08 AM
> 
> The memory is just DRAM on Trio; DRAM isn't a significant bottleneck,
> considering there are/were pathological cases where a router has advertised
> a new path and not programmed it in HW for 30min or more.
> Also somewhere JNPR has gotten the wrong message from customers, as they
> seemed to think this is just about the FIB update being slow, but that's not
> even the main problem,
Agree, there's a solution for that in the form of FRR for IGP/RSVP and BGP, so
in my opinion there's no value in investing time and effort into this.

> main problem is software and hardware being decoupled.
> Blackholing is bad, using old path in software and hardware until you can
> actually program the new entry is acceptable. After this is done, THEN focus
> on making it faster.
> 
IOS-XR has had BGP-RIB Feedback since 4.3.0 (it actually is FIB feedback; the
name is so confusing).
And you also have the periodic Route and Label Consistency Checkers - very
helpful for pointing out HW programming issues.
I can't recall whether Junos has a similar feedback mechanism implemented or
planned.

> The synchronicity guarantees are not a JNPR-specific problem at all; I know
> people see these in some CSCO platforms, and Arista by default does not
> guarantee it - they have a knob for it today. It wasn't entirely obvious to
> me what the guarantee actually does; it wasn't an all-the-way-to-chip
> guarantee, I guess it was a to-the-LC-CPU guarantee.
> 
Good point, I haven't actually checked, but if the feedback doesn't go all the
way down to the NPU lookup memory it wouldn't be of much help.

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::



Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-20 Thread adamv0025
> Dragan Jovicic [mailto:dragan...@gmail.com]
> Sent: Wednesday, April 19, 2017 5:20 PM
> 
> What Juniper calls "BGP Edge Link Protection" is something different. It
> allows Edge ASBR router to reroute/tunnel traffic from failed CE link over
> core to another ASBR. 
Yup, same as Cisco's PIC Edge (or a former feature called
local-reroute/protection or something along those lines).
 
> For this to work the router must not look at IP packet
> (still pointing to failed PE-CE links), hence per-prefix labels are used. 
Although more convenient, it's not a hard requirement; the router can forward
using the IP header, you just need to make sure the backup router prefers
locally introduced eBGP routes over iBGP routes advertised by the primary
router.
Also, I don't ever see a need for per-prefix labels - certainly doing that for
the Internet VRF would be madness.
Using per next-hop labels is sufficient to avoid the lookup in this case - but
then things like IP firewall filters are bypassed.

adam


netconsultings.com
::carrier-class solutions for the telecommunications industry::




Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-20 Thread Saku Ytti
On 20 April 2017 at 11:43,   wrote:

Hey,

> FIB programming time has always been a memory write limitation; router
> memories used for lookup are streamlined for read performance, sacrificing
> write performance to reduce the cost, so there's only so fast you can go, and
> with the forwarding tables ever growing it's a lost battle, and a meaningless
> one as well, since we already have elegant solutions to work around this
> limitation. I mean, it's good they're fixing crappy code to catch up with the
> actual HW limits at hand though.

The memory is just DRAM on Trio; DRAM isn't a significant bottleneck,
considering there are/were pathological cases where a router has
advertised a new path and not programmed it in HW for 30min or more.
Also somewhere JNPR has gotten the wrong message from customers, as they
seemed to think this is just about the FIB update being slow, but that's
not even the main problem; the main problem is software and hardware being
decoupled. Blackholing is bad; using the old path in software and hardware
until you can actually program the new entry is acceptable. After this
is done, THEN focus on making it faster.

Juniper has very good view into the problem, they know how much of
convergence budget is being used in each step, I'm sure account team
can share a deck about it. I know they are working on both problems,
better guarantees that blackholing won't happen, and reducing time
spent in each place in the overall convergence budget.

The synchronicity guarantees are not a JNPR-specific problem at all; I
know people see these in some CSCO platforms, and Arista by default
does not guarantee it - they have a knob for it today. It wasn't entirely
obvious to me what the guarantee actually does; it wasn't an
all-the-way-to-chip guarantee, I guess it was a to-the-LC-CPU guarantee.

-- 
  ++ytti


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-20 Thread adamv0025
> Alexander Arseniev
> Sent: Wednesday, April 19, 2017 5:19 PM
> 
> Hi Michael,
> 
> JUNOS INH helps at FIB programming stage, not at BGP best path reselection
> stage. Additionally in recent JUNOS versions, there are improvements made
> regarding FIB programming speed, please ask Your Juniper account team for
> details.
> 
Yeah I've seen the preso but I'm not convinced. 
FIB programming time has always been a memory write limitation; router memories
used for lookup are streamlined for read performance, sacrificing write
performance to reduce the cost, so there's only so fast you can go, and with
the forwarding tables ever growing it's a lost battle, and a meaningless one as
well, since we already have elegant solutions to work around this limitation.
I mean, it's good they're fixing crappy code to catch up with the actual HW
limits at hand though.

BGP+BFD would be my first choice. 
> would personally rate BFD as the tool of last resort as (a) BFD being a
> UDP/IP protocol means there are many other failures that affect BFD like
> access-lists
Well, a misconfigured ACL is not a failure.

> (b) even when BFD is down, the BGP session may still be up whereas You
> want the BFD to follow BGP
Now how can that happen, other than a bug?

> and (c) a BFD failure does not bring the interface
> down, it just tears down the BGP session, whereas an LACP/EOAM failure
> brings the logical interface down. Presumably, someone will point out
> uBFD over LAG, but it still requires LACP, so
> LACP+uBFD is overkill for a simple network UNLESS You are really into
> microseconds convergence.

In my experience LACP+uBFD or LACP+LFM is BAU, unless you can afford to wait 3
seconds to detect link down in case L1 detection didn't kick in for some reason.
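E.g. a rough Junos sketch of LACP fast periodic plus micro-BFD on a LAG
(interface, addresses, and intervals are placeholders):

set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp periodic fast
set interfaces ae0 aggregated-ether-options bfd-liveness-detection minimum-interval 100
set interfaces ae0 aggregated-ether-options bfd-liveness-detection local-address 192.0.2.0
set interfaces ae0 aggregated-ether-options bfd-liveness-detection neighbor 192.0.2.1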


adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::



Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread Dragan Jovicic
Hello,

Basically I agree - as mentioned, the default should take over in the absence
of a more specific FT entry. But, "acceptable" being a moving target, it's
worth mentioning this as a workaround more than a final solution.

BR

-Dragan
ccie/jncie






On Wed, Apr 19, 2017 at 9:08 PM, Alexander Arseniev  wrote:

> Hi Dragan,
>
>
> As for default route, if its installed in FT, I don't see why the router
> wouldn't use this entry in the absence of more specific (bearing all other
> issues with such setup).
>
> Yes, the 0/0 will be used, BUT when there are 100,000s of more specifics in
> the FIB BEING REMOVED (simplest case, when eBGP+iBGP both supply just one
> 0/0 route each) there will be a period of time when stale specific routes are
> still used for forwarding -> packet loss persists for this period of time.
> BTW, You should deny BGP NH resolution via this 0/0, or the packet loss is
> unnecessarily prolonged.
> The FIB update happens with finite speed, be it route addition, route
> removal or route nexthop rewrite.
> NH rewrite is sped up with INH.
> The performance of the other operations is improved in recent JUNOS (16.1+
> if memory serves, ask Juniper account team for details).
> HTH
> Thx
> Alex
>
>


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread Alexander Arseniev

Hi Dragan,


As for default route, if its installed in FT, I don't see why the 
router wouldn't use this entry in the absence of more specific 
(bearing all other issues with such setup).


Yes, the 0/0 will be used, BUT when there are 100,000s of more specifics
in the FIB BEING REMOVED (simplest case, when eBGP+iBGP both supply just
one 0/0 route each) there will be a period of time when stale specific
routes are still used for forwarding -> packet loss persists for this
period of time.
BTW, You should deny BGP NH resolution via this 0/0, or the packet loss 
is unnecessarily prolonged.
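E.g. one hedged way to deny that BGP NH resolution via 0/0 (the policy name is
made up):

set policy-options policy-statement NO-DEFAULT-RESOLVE term 1 from route-filter 0.0.0.0/0 exact
set policy-options policy-statement NO-DEFAULT-RESOLVE term 1 then reject
set routing-options resolution rib inet.0 import NO-DEFAULT-RESOLVE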
The FIB update happens with finite speed, be it route addition, route 
removal or route nexthop rewrite.

NH rewrite is sped up with INH.
The performance of the other operations is improved in recent JUNOS 
(16.1+ if memory serves, ask Juniper account team for details).

HTH
Thx
Alex



Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread Dragan Jovicic
What Cisco originally calls "PIC Core" is simply indirect-next-hop feature
on routers, same on Juniper. On "flat" architectures without indirect
next-hop, a failure of an uplink (a core link) on a PE router would require
this PE to reprogram all BGP prefixes to new directly connected next-hop.
Depending on your router and number of prefixes this may very well be
upward of several dozen seconds, if not a few minutes. With the
indirect-next-hop feature, a PE router simply updates a pointer from the BGP
next-hop to the new interface, making this an almost instantaneous operation.
On older routers without it, you may resort to using multiple equal-cost
uplinks (or LAG interfaces) since in this case you already have a backup
next-hop in your forwarding table.

What Cisco originally calls "PIC Edge" is the ability to install an already
present backup route from another BGP router into the forwarding table.
For this you need to:

1) already have a backup route from the control plane in the RIB (using add
path, iBGP, an additional RR, advertise external, etc),
2) install these routes into the forwarding table (this is the main part, as
this FIB update is the largest piece of the convergence cake).
On Juniper, the part of importing routes into the FT is, for some reason,
called "protect core" (available for the inet.0 table post-15.1; see the
sketch after this list), and
3) the PE router needs to detect failure of the upstream BGP router or its
link. One of the ways is to passively include the upstream link in the IGP,
but there are others.
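In Junos set form, that "protect core" part is just the following (a sketch;
the instance name is a placeholder, and per item 1 you also need something like
add-path to actually get the second route into the RIB):

set routing-options protect core                          << inet.0, post-15.1
set routing-instances CUST routing-options protect core   << per-instance form
set protocols bgp group IBGP family inet unicast add-path receive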

Note the difference - in first case BGP next-hop is unchanged, in the
second, you have a new BGP next-hop altogether.

What Juniper calls "BGP Edge Link Protection" is something different. It
allows Edge ASBR router to reroute/tunnel traffic from failed CE link over
core to another ASBR. For this to work the router must not look at IP
packet (still pointing to failed PE-CE links), hence per-prefix labels are
used. Juniper very well mentions this. Also this is available only for
labeled inet/inet6 traffic, not family inet - at least I don't see it
available in recent versions.

There is also another technology called "Egress Protection", which is
something different but quite cool.

@OP, depending on what your topology looks like, you may benefit from simple
indirect-nh (aka PIC Core) as this might not need an upgrade. For link
failure detection on the ASBR, you might use BFD, smaller timers, even
scripting, if LoS is not a viable option. But this still means BGP
convergence. LoS opens some cool options like using the same BGP next-hop
pointing over multiple RSVP tunnels ending on multiple routers.

As for the default route, if it's installed in the FT, I don't see why the
router wouldn't use this entry in the absence of a more specific (bearing all
other issues with such a setup).
If you use labeled internet traffic you can resolve remote next-hop of
static route to get a label for it.

BR

-Dragan
ccie/jncie




On Wed, Apr 19, 2017 at 4:41 PM, Michael Hare <michael.h...@wisc.edu> wrote:

> While reading this thread I think I understand that updating the trie is
> expensive such that there is really no way to quickly promote use of the
> default route, so while I still may have use for that default (provider of
> last resort) it won't help with convergence.
>
> In several locations there is an ethernet switch between myself and
> transit/peers.  So I don't always lose local link on end to end path
> failure and if transit networks were in IGP they wouldn't necessarily be
> withdrawn.  FWIW I am currently doing NHS with transit subnets in iBGP (for
> ICMP monitoring).
>
> Alex said: "JUNOS is different from IOS - BGP session will stay up until
> holdtime fires but the protocol NH will disappear, the routes will be
> recalculated and network will reconverge even if BGP session to gone peer
> is still up."
>
> I think I see the same behavior as Alex using "routing-options resolution
> rib", correct?   This is something we are already doing iBGP wise already
> for our default and aggregate announcements that contain our NHS addrs,
> unless there is yet another feature I should be considering?
>
> An enlightening part of this thread is that I didn't realize the
> difference between BGP PIC Core vs BGP PIC Edge, the latter is seemingly
> what I'm most interested in and is seemingly unobtainable at this time.
> Our network is extremely simplified in that we really have two ASBRs so I
> don't think PIC Core would accomplish anything?
>
> -Michael
>
> > -Original Message-
> > From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
> > Of Alexander Arseniev
> > Sent: Wednesday, April 19, 2017 8:12 AM
> > To: adamv0...@netconsultings.com; juniper-nsp@puck.nether.net
> > Subject: Re: [j-nsp] improving global unicast convergence (with or without
> > BGP-PIC)

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread Alexander Arseniev

Hi Michael,

With multiple full tables from two or more eBGP providers + iBGP peers,
Your ASBR has to go via BGP best path reselection first before it can
start programming the FIB. And the most specific route always wins, even
if it is otherwise inferior, so BGP has to go over 100,000s of prefixes
to find the best among the specific prefixes.


JUNOS INH helps at FIB programming stage, not at BGP best path 
reselection stage. Additionally in recent JUNOS versions, there are 
improvements made regarding FIB programming speed, please ask Your 
Juniper account team for details.


If You would not have full tables over the iBGP peering, then the picture
would be simplified, in the sense that when a full-table eBGP peer goes down
its invalidated prefixes only need to be removed, and the eBGP 0/0 becomes
the best path. But I guess You won't like to run the network that way?


You can sense L2 failures by using either LACP with a single member link
(assuming Your Metro Ethernet provider passes LACP PDUs), or Ethernet OAM
(assuming Your Metro Ethernet provider passes EOAM PDUs), or BFD. I would
personally rate BFD as the tool of last resort as (a) BFD being a UDP/IP
protocol means there are many other failures that affect BFD, like
access-lists, (b) even when BFD is down, the BGP session may still be up,
whereas You want the BFD to follow BGP, and (c) a BFD failure does not
bring the interface down, it just tears down the BGP session, whereas an
LACP/EOAM failure brings the logical interface down. Presumably, someone
will point out uBFD over LAG, but it still requires LACP, so LACP+uBFD is
overkill for a simple network UNLESS You are really into microseconds
convergence.
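E.g. a minimal sketch of the LACP-with-single-member-link option (interface
names and address are placeholders):

set chassis aggregated-devices ethernet device-count 1
set interfaces xe-0/0/0 gigether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp periodic fast
set interfaces ae0 unit 0 family inet address 192.0.2.0/31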


When I said "JUNOS is different from IOS - BGP session will stay up 
until holdtime fires ..." - this is default behavior, You don't need to 
configure anything for it.


HTH

Thx

Alex

On 19/04/2017 15:41, Michael Hare wrote:

While reading this thread I think I understand that updating the trie is 
expensive such that there is really no way to quickly promote use of the 
default route, so while I still may have use for that default (provider of last 
resort) it won't help with convergence.

In several locations there is an ethernet switch between myself and 
transit/peers.  So I don't always lose local link on end to end path failure 
and if transit networks were in IGP they wouldn't necessarily be withdrawn.  
FWIW I am currently doing NHS with transit subnets in iBGP (for ICMP 
monitoring).

Alex said: "JUNOS is different from IOS - BGP session will stay up until holdtime 
fires but the protocol NH will disappear, the routes will be recalculated and network 
will reconverge even if BGP session to gone peer is still up."

I think I see the same behavior as Alex using "routing-options resolution rib", 
correct? This is something we are already doing iBGP-wise for our default and
aggregate announcements that contain our NHS addrs, unless there is yet another feature I 
should be considering?

An enlightening part of this thread is that I didn't realize the difference 
between BGP PIC Core vs BGP PIC Edge, the latter is seemingly what I'm most 
interested in and is seemingly unobtainable at this time.  Our network is 
extremely simplified in that we really have two ASBRs so I don't think PIC Core
would accomplish anything?

-Michael


-Original Message-
From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
Of Alexander Arseniev
Sent: Wednesday, April 19, 2017 8:12 AM
To: adamv0...@netconsultings.com; juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] improving global unicast convergence (with or without
BGP-PIC)

Sorry, "Juniper’s “Provider Edge Link Protection for BGP” (Cisco’s BGP
PIC Edge)" is not there in 15.1R5:

[edit]
user@labrouter# set protocols bgp group IBGP family inet unicast protection
  ^
syntax error.

[edit]
user@labrouter# run show version
Hostname: labrouter
Model: mx240
Junos: 15.1R5.5


The "Juniper BGP PIC for inet" (in global table) is definitely there:

https://www.juniper.net/techpubs/en_US/junos/information-
products/topic-collections/release-notes/15.1/topic-83366.html#jd0e6510

So, what feature in the global table You were surmising to helps the OP?

HTH

Thx
Alex


On 19/04/2017 13:42, adamv0...@netconsultings.com wrote:

Wow, hold on a sec, we’re starting to mix things here,

Sorry maybe my bad, cause I’ve been using Cisco terminology,

Let me use juniper terminology:

I’d recommend using Juniper’s “Provider Edge Link Protection for BGP”
(Cisco’s BGP PIC Edge). –which in Junos for some reason was supported
only for eBGP session in routing-instance –that changes since 15.1.

-that’s what me and OP is talking about (at least I think that’s what
OP is talking about)

Cmd:

set routing-instances radium protocols bgp group toCE2 family inet
unicast protection

What you mentioned below 

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread Michael Hare
While reading this thread I think I understand that updating the trie is 
expensive such that there is really no way to quickly promote use of the 
default route, so while I still may have use for that default (provider of last 
resort) it won't help with convergence.  

In several locations there is an ethernet switch between myself and 
transit/peers.  So I don't always lose local link on end to end path failure 
and if transit networks were in IGP they wouldn't necessarily be withdrawn.  
FWIW I am currently doing NHS with transit subnets in iBGP (for ICMP 
monitoring).

Alex said: "JUNOS is different from IOS - BGP session will stay up until 
holdtime fires but the protocol NH will disappear, the routes will be 
recalculated and network will reconverge even if BGP session to gone peer is 
still up."

I think I see the same behavior as Alex using "routing-options resolution rib", 
correct? This is something we are already doing iBGP-wise for our
default and aggregate announcements that contain our NHS addrs, unless there is 
yet another feature I should be considering?

An enlightening part of this thread is that I didn't realize the difference 
between BGP PIC Core vs BGP PIC Edge, the latter is seemingly what I'm most 
interested in and is seemingly unobtainable at this time.  Our network is 
extremely simplified in that we really have two ASBRs so I don't think PIC Core
would accomplish anything?

-Michael

> -Original Message-
> From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
> Of Alexander Arseniev
> Sent: Wednesday, April 19, 2017 8:12 AM
> To: adamv0...@netconsultings.com; juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] improving global unicast convergence (with or without
> BGP-PIC)
> 
> Sorry, "Juniper’s “Provider Edge Link Protection for BGP” (Cisco’s BGP
> PIC Edge)" is not there in 15.1R5:
> 
> [edit]
> user@labrouter# set protocols bgp group IBGP family inet unicast protection
>  ^
> syntax error.
> 
> [edit]
> user@labrouter# run show version
> Hostname: labrouter
> Model: mx240
> Junos: 15.1R5.5
> 
> 
> The "Juniper BGP PIC for inet" (in global table) is definitely there:
> 
> https://www.juniper.net/techpubs/en_US/junos/information-
> products/topic-collections/release-notes/15.1/topic-83366.html#jd0e6510
> 
> So, what feature in the global table You were surmising to helps the OP?
> 
> HTH
> 
> Thx
> Alex
> 
> 
> On 19/04/2017 13:42, adamv0...@netconsultings.com wrote:
> >
> > Wow, hold on a sec, we’re starting to mix things here,
> >
> > Sorry maybe my bad, cause I’ve been using Cisco terminology,
> >
> > Let me use juniper terminology:
> >
> > I’d recommend using Juniper’s “Provider Edge Link Protection for BGP”
> > (Cisco’s BGP PIC Edge). –which in Junos for some reason was supported
> > only for eBGP session in routing-instance –that changes since 15.1.
> >
> > -that’s what me and OP is talking about (at least I think that’s what
> > OP is talking about)
> >
> > Cmd:
> >
> > set routing-instances radium protocols bgp group toCE2 family inet
> > unicast protection
> >
> > What you mentioned below is  Juniper’s “BGP PIC Edge” (Cisco’s BGP PIC
> > Core).
> >
> > Cmd:
> >
> > [edit routing-instances routing-instance-name routing-options]
> >
> > user@host# set protect core
> >
> > adam
> >
> > netconsultings.com
> >
> > ::carrier-class solutions for the telecommunications industry::
> >
> > *From:*Alexander Arseniev [mailto:arsen...@btinternet.com]
> > *Sent:* Wednesday, April 19, 2017 1:28 PM
> > *To:* adamv0...@netconsultings.com; 'Michael Hare';
> > juniper-nsp@puck.nether.net
> > *Subject:* Re: [j-nsp] improving global unicast convergence (with or
> > without BGP-PIC)
> >
> > Hi there,
> >
> > BGP PIC for inet/inet6 is primarily for complete ASBR failure use case:
> >
> > When the BGP Prefix Independent Convergence (PIC) feature is enabled
> > on a router, BGP installs to the Packet Forwarding Engine the second
> > best path in addition to the calculated best path to a destination.
> > The router uses this backup path when an egress router fails in a
> > network and drastically reduces the outage time. You can enable this
> > feature to reduce the network downtime if the egress router fails.
> >
> > https://www.juniper.net/techpubs/en_US/junos/topics/concept/use-
> case-for-bgp-pic-for-inet-inet6-lu.html
> >
> >
> > The original topic was for eBGP peer failure use case.
> >

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread adamv0025
Hmm, must have remembered that incorrectly then,  

Looks like migrating to an L3VPN setup is the only way to get the desired eBGP
FRR on Juniper boxes?

In comparison, Cisco's BGP PIC Edge has been supported for eBGP sessions in the
global routing table since day one; it appears that Junos has some catching up
to do.

 

adam

 

netconsultings.com

::carrier-class solutions for the telecommunications industry::

 

From: Alexander Arseniev [mailto:arsen...@btinternet.com] 
Sent: Wednesday, April 19, 2017 2:12 PM
To: adamv0...@netconsultings.com; juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] improving global unicast convergence (with or without 
BGP-PIC)

 

Sorry, "Juniper’s “Provider Edge Link Protection for BGP” (Cisco’s BGP PIC 
Edge)" is not there in 15.1R5:

[edit]
user@labrouter# set protocols bgp group IBGP family inet unicast protection
^
syntax error.

[edit]
user@labrouter# run show version
  
Hostname: labrouter
Model: mx240
Junos: 15.1R5.5

 

The "Juniper BGP PIC for inet" (in global table) is definitely there:

https://www.juniper.net/techpubs/en_US/junos/information-products/topic-collections/release-notes/15.1/topic-83366.html#jd0e6510

So, what feature in the global table You were surmising to helps the OP?

HTH

Thx
Alex

 

On 19/04/2017 13:42, adamv0...@netconsultings.com wrote:

Wow, hold on a sec, we’re starting to mix things here,

Sorry maybe my bad, cause I’ve been using Cisco terminology,

 

Let me use juniper terminology:

I’d recommend using Juniper’s “Provider Edge Link Protection for BGP” (Cisco’s 
BGP PIC Edge). –which in Junos for some reason was supported only for eBGP 
session in routing-instance –that changes since 15.1. 

-that’s what me and OP is talking about (at least I think that’s what OP is 
talking about)

Cmd:

set routing-instances radium protocols bgp group toCE2 family inet unicast 
protection

 

 

What you mentioned below is  Juniper’s “BGP PIC Edge” (Cisco’s BGP PIC Core). 

Cmd:

[edit routing-instances routing-instance-name routing-options]

user@host# set protect core

 

 

adam

 

netconsultings.com

::carrier-class solutions for the telecommunications industry::

 

From: Alexander Arseniev [mailto:arsen...@btinternet.com] 
Sent: Wednesday, April 19, 2017 1:28 PM
To: adamv0...@netconsultings.com; 'Michael Hare'; juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] improving global unicast convergence (with or without 
BGP-PIC)

 

Hi there,

BGP PIC for inet/inet6 is primarily for complete ASBR failure use case:

When the BGP Prefix Independent Convergence (PIC) feature is enabled on a 
router, BGP installs to the Packet Forwarding Engine the second best path in 
addition to the calculated best path to a destination. The router uses this 
backup path when an egress router fails in a network and drastically reduces 
the outage time. You can enable this feature to reduce the network downtime if 
the egress router fails.

https://www.juniper.net/techpubs/en_US/junos/topics/concept/use-case-for-bgp-pic-for-inet-inet6-lu.html
 

The original topic was for eBGP peer failure use case.

I admit You could make BGP PIC to work for the original topic scenario if You 
don't do eBGP->iBGP NHS on ASBR and inject eBGP peer interface subnet into Your 
IGP and into LDP/RSVP (if LDP/RSVP are in use).

HTH

Thx
Alex

 

On 19/04/2017 13:21, adamv0...@netconsultings.com 
<mailto:adamv0...@netconsultings.com>  wrote:

I see, so it’s sort of a “half way through” solution, where the convergence 
still needs to be done in CP and then when it comes to DP programming –that’s 
going to be fast cause just one INH needs to be reprogramed. 

Not sure I‘m convinced though, would rather recommend upgrading to 15.1 to get 
PIC capability for inet0. 

 

adam 

 

netconsultings.com

::carrier-class solutions for the telecommunications industry::

 

From: Alexander Arseniev [mailto:arsen...@btinternet.com] 
Sent: Wednesday, April 19, 2017 1:09 PM
To: adamv0...@netconsultings.com <mailto:adamv0...@netconsultings.com> ; 
'Michael Hare'; juniper-nsp@puck.nether.net 
<mailto:juniper-nsp@puck.nether.net> 
Subject: Re: [j-nsp] improving global unicast convergence (with or without 
BGP-PIC)

 

Hi there,

The benefit is that value of INH mapped to a 100,000s of prefixes can be 
quickly rewritten into another value - for a different INH pointing to another 
iBGP peer.

Without INH, the forwarding NH value of EACH and EVERY prefix is rewritten 
individually and for longer period of time.

Your example of "correctly programmed INH" with LFA show 2 preprogrammed 
forwarding NHs which is orthogonal to the original topic of this discussion.

INH could be preprogrammed with

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread Alexander Arseniev
Sorry, "Juniper’s “Provider Edge Link Protection for BGP” (Cisco’s BGP 
PIC Edge)" is not there in 15.1R5:


[edit]
user@labrouter# set protocols bgp group IBGP family inet unicast protection
^
syntax error.

[edit]
user@labrouter# run show version
Hostname: labrouter
Model: mx240
Junos: 15.1R5.5


The "Juniper BGP PIC for inet" (in global table) is definitely there:

https://www.juniper.net/techpubs/en_US/junos/information-products/topic-collections/release-notes/15.1/topic-83366.html#jd0e6510

So, what feature in the global table You were surmising helps the OP?

HTH

Thx
Alex



Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread adamv0025
Wow, hold on a sec, we’re starting to mix things here,

Sorry maybe my bad, cause I’ve been using Cisco terminology,

 

Let me use juniper terminology:

I’d recommend using Juniper’s “Provider Edge Link Protection for BGP” (Cisco’s 
BGP PIC Edge). –which in Junos for some reason was supported only for eBGP 
sessions in a routing-instance –that changed in 15.1. 

-that’s what me and OP is talking about (at least I think that’s what OP is 
talking about)

Cmd:

set routing-instances radium protocols bgp group toCE2 family inet unicast 
protection

 

 

What you mentioned below is  Juniper’s “BGP PIC Edge” (Cisco’s BGP PIC Core). 

Cmd:

[edit routing-instances routing-instance-name routing-options]

user@host# set protect core

 

 

adam

 

netconsultings.com

::carrier-class solutions for the telecommunications industry::

 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread Alexander Arseniev

Hi there,

BGP PIC for inet/inet6 is primarily for complete ASBR failure use case:

When the BGP Prefix Independent Convergence (PIC) feature is enabled on 
a router, BGP installs to the Packet Forwarding Engine the second best 
path in addition to the calculated best path to a destination. The 
router uses this backup path when an egress router fails in a network 
and drastically reduces the outage time. You can enable this feature to 
reduce the network downtime if the egress router fails.


https://www.juniper.net/techpubs/en_US/junos/topics/concept/use-case-for-bgp-pic-for-inet-inet6-lu.html 



The original topic was for eBGP peer failure use case.

I admit You could make BGP PIC to work for the original topic scenario 
if You don't do eBGP->iBGP NHS on ASBR and inject eBGP peer interface 
subnet into Your IGP and into LDP/RSVP (if LDP/RSVP are in use).


HTH

Thx
Alex



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread adamv0025
I see, so it’s sort of a “half way through” solution, where the convergence 
still needs to be done in CP and then when it comes to DP programming –that’s 
going to be fast cause just one INH needs to be reprogrammed. 

Not sure I‘m convinced though, would rather recommend upgrading to 15.1 to get 
PIC capability for inet0. 

 

adam 

 

netconsultings.com

::carrier-class solutions for the telecommunications industry::

 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread Alexander Arseniev

Hi there,

The benefit is that value of INH mapped to a 100,000s of prefixes can be 
quickly rewritten into another value - for a different INH pointing to 
another iBGP peer.


Without INH, the forwarding NH value of EACH and EVERY prefix is 
rewritten individually and for longer period of time.


Your example of "correctly programmed INH" with LFA shows 2 preprogrammed 
forwarding NHs which is orthogonal to the original topic of this discussion.


INH could be preprogrammed with one or multiple forwarding NHs, and to 
achieve "multiple forwarding NHs" preprogramming, one uses ECMP, (r)LFA, 
RSVP FRR, etc.


HTH

Thx

Alex



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread adamv0025
> Of Alexander Arseniev
> Sent: Wednesday, April 19, 2017 11:51 AM
> - then 203.0.113.0 will appear as "indirect" and You can have the usual
INH
> benefits. Example from my lab:
> 
> show krt indirect-next-hop | find "203.0.113."
> 
> Indirect Nexthop:
> Index: 1048592 Protocol next-hop address: 203.0.113.0
>RIB Table: inet.0
>Policy Version: 1 References: 1
>Locks: 3  0x9e54f70
>Flags: 0x2
>INH Session ID: 0x185
>INH Version ID: 0
>Ref RIB Table: unknown
>  Next hop: #0 0.0.0.0.0.0 via ae4.100
>  Session Id: 0x182
>IGP FRR Interesting proto count : 1
>Chain IGP FRR Node Num  : 1
>   IGP Resolver node(hex)   : 0xb892f54
>   IGP Route handle(hex): 0x9dc8e14  IGP rt_entry
> protocol: Static
>   IGP Actual Route handle(hex) : 0x0IGP Actual
> rt_entry protocol : Any
> 
> Disclaimer - I haven't tested the actual convergence with this setup.
> 
But what good is an indirect next-hop if it's pointing to just a single
forwarding next-hop??

Example of correctly programmed backup NHs for a BGP route: 
...
#Multipath Preference: 255
Next hop: ELNH Address 0x585e1440 weight 0x1, selected  <<
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread Saku Ytti
On 19 April 2017 at 14:12, Alexander Arseniev  wrote:

Hey,

> Just 1 line triggers/enables the INH on directly-connected eBGP peers:
>
> set protocols bgp group ebgp neighbor 203.0.113.0 multihop

You will lose fast fall over though, eBGP session will remain up when
interface goes down, until hold-time passes. Unsure if solution exists
to gain both.
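
One possible way to get both (a sketch only, untested, reusing the peer
address from Alexander's example) would be to pair the multihop session
with BFD, which detects dataplane failure independently of interface state:

# tear the session down on BFD timeout instead of waiting for hold-time
set protocols bgp group ebgp neighbor 203.0.113.0 bfd-liveness-detection minimum-interval 300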

-- 
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread Alexander Arseniev

Actually, You don't need the unnumbered interface at all.

Just 1 line triggers/enables the INH on directly-connected eBGP peers:

set protocols bgp group ebgp neighbor 203.0.113.0 multihop

You may want to set the local-address and TTL for other reasons but it 
is not necessary for INH enablement in this case.


HTH

Thx

Alex



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-19 Thread Alexander Arseniev

Hello,



> indirect-next-hop being default on MPC but my understanding is this will
> not work for directly connected eBGP peers




Not by default. You can make a directly-connected nexthop appear as 
"indirect" by using unnumbered interface with static /32 route pointing 
to the eBGP peer address.


Example config:

[AS65000]ae4.100{203.0.113.1/31}{203.0.113.0/31}ae0.100[AS65001]

With usual interface-peering configuration, 203.0.113.0 is NOT seen as 
indirect NH on AS65000 side.


If You reconfigure the AS 65000 side as follows:

set interfaces ae4.100 family inet unnumbered-address lo0.0 
preferred-source-address 203.0.113.1


set interfaces lo0.0 family inet address 203.0.113.1/32

set routing-options static route 203.0.113.0/32 qualified-next-hop ae4.100

set protocols bgp group ebgp neighbor 203.0.113.0 multihop ttl 1

- then 203.0.113.0 will appear as "indirect" and You can have the usual 
INH benefits. Example from my lab:


show krt indirect-next-hop | find "203.0.113."

Indirect Nexthop:
Index: 1048592 Protocol next-hop address: 203.0.113.0
  RIB Table: inet.0
  Policy Version: 1 References: 1
  Locks: 3  0x9e54f70
  Flags: 0x2
  INH Session ID: 0x185
  INH Version ID: 0
  Ref RIB Table: unknown
Next hop: #0 0.0.0.0.0.0 via ae4.100
Session Id: 0x182
  IGP FRR Interesting proto count : 1
  Chain IGP FRR Node Num  : 1
 IGP Resolver node(hex)   : 0xb892f54
 IGP Route handle(hex): 0x9dc8e14  IGP rt_entry 
protocol: Static
 IGP Actual Route handle(hex) : 0x0IGP Actual 
rt_entry protocol : Any


Disclaimer - I haven't tested the actual convergence with this setup.

HTH

Thx

Alex


On 18/04/2017 17:50, Michael Hare wrote:

Hello,

Sorry if this is an easy question already covered.  Does anyone on list have an 
understanding of what happens in the FIB in the following circumstance?

Simplified topology;
* Router 1 RIB default points to reject
* Router 1 RIB has default free feed from attached eBGP neighbor A
* Router 1 RIB has default free feed from attached iBGP neighbor B (add-path)

I guess what I'm trying to understand, from the perspective of improving 
upstream convergence for outbound packets from our AS, if my default route 
pointed to a valid next hop of last resort am I likely to see an improvement 
(reduction) in blackholing on router 1 during topology changes?  The thought 
being that if Router 1 FIB invalidates next-hop A quickly (en masse) packets 
could match default route with valid next-hop while FIB is being re-programmed 
with more specifics via B?

I am aware of indirect-next-hop being default on MPC but my understanding is 
this will not work for directly connected eBGP peers?  So if session with A 
drops (BFD, link, whatever) are routes with next hop to neighbor A deprogrammed 
nearly atomically due to some level of indirection or are routes considered one 
by one until all routes (~600K) have been processed?  I suspect the latter but 
perhaps looking for verification.

I am aware of BGP PIC but not yet running 15.X [when internet is not in VRF].  
I am willing to accept that if BGP PIC is the best approach to improving this 
scenario an upgrade is the best path forward.  I'd be curious to hear from 
anyone who is on 15.1 [or newer] and using MPC4 in terms of perceived code 
quality and MPC4 heap utilization before/after.

Historically the AS I primarily manage has been default free (default pointing to 
reject), but I'm considering changing that to improve convergence (aware of the security 
considerations).  As for our "real" topology, adding up all the transit and 
peering we have our RIB is nearing 6M routes.  We are not doing internet in a VRF.  Our 
network has add-path 3 enabled.  In some cases our peers/upstreams are on unprotected 
transport that is longer than I'd like.  Providing a ring and placing the router closer 
would be nice but not necessarily in budget.

I haven't yet approached our account team to ask about this.

Thanks in advance for any suggestions or pointers for further reading.

-Michael
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread adamv0025
> Michael Hare
> Sent: Tuesday, April 18, 2017 5:51 PM
> 
> Hello,
> 
> Sorry if this is an easy question already covered.  Does anyone on list
have an
> understanding of what happens in the FIB in the following circumstance?
> 
> Simplified topology;
> * Router 1 RIB default points to reject
> * Router 1 RIB has default free feed from attached eBGP neighbor A
> * Router 1 RIB has default free feed from attached iBGP neighbor B (add-
> path)
> 
> I guess what I'm trying to understand, from the perspective of improving
> upstream convergence for outbound packets from our AS, if my default
> route pointed to a valid next hop of last resort am I likely to see an
> improvement (reduction) in blackholing on router 1 during topology
> changes?  The thought being that if Router 1 FIB invalidates next-hop A
> quickly (en masse) packets could match default route with valid next-hop
> while FIB is being re-programmed with more specifics via B?
> 
> I am aware of indirect-next-hop being default on MPC but my understanding
> is this will not work for directly connected eBGP peers?  So if session
with A
> drops (BFD, link, whatever) are routes with next hop to neighbor A
> deprogrammed nearly atomically due to some level of indirection or are
> routes considered one by one until all routes (~600K) have been processed?
> I suspect the latter but perhaps looking for verification.
> 
Hmm I'm not sure about the "indirect next-hops for everyone" proclaimed by
documentation and folks here, but I'd be glad to be proven otherwise. 
Just tried to configure static route with primary and backup(metric 100) NH
and I don't see the backup next hop flag or any indirect NHs (and using the
"show krt" cmd doesn't show anything). 
But even then how good is an indirect-NH if it's not pointing to primary and
backup forwarding-NHs. 
Using "show route extensive" or "show krt" I've always seen INHs only for
BGP routes or next-hops. 
So I think that having default route pointing to backup router won't help
with your convergence, cause the BGP NH and static route NH are not going to
be linked together in a primary-backup fashion.  

> I am aware of BGP PIC but not yet running 15.X [when internet is not in
VRF].
> I am willing to accept that if BGP PIC is the best approach to improving
this
> scenario an upgrade is the best path forward.  I'd be curious to hear from
> anyone who is on 15.1 [or newer] and using MPC4 in terms of perceived code
> quality and MPC4 heap utilization before/after.
> 
Yes BGP Edge Link Protection will definitely help (1M prefixes converged
under 500usec -yup not even a millisecond).  But be aware of one catch on
Junos. 
Since Junos iBGP and eBGP have the same protocol preference (how stupid is
that right?), just by enabling "protection" cmd you can end up in loops
(Juniper forgets to mention this), so in addition to enabling PIC edge you
have to improve protocol preference for eBGP routes (make them more
preferred on the backup node), or enable per-prefix/per-NH VPN labels to
avoid L3 lookup -not applicable in your case.
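
For illustration, a minimal sketch of that combination on the backup node
(hypothetical group name, untested, and assuming a release that accepts
"protection" for family inet in the main instance; note the 15.1R5 test
elsewhere in this thread rejected it):

# pre-install the backup path for this eBGP group (PIC edge)
set protocols bgp group ebgp-transit family inet unicast protection
# iBGP and eBGP both default to preference 170 in Junos; lowering the
# eBGP group's value makes its routes win and avoids the loop
set protocols bgp group ebgp-transit preference 165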

Chained composite next-hops were mentioned. 
But this feature places another indirect next hop between VPN-Label and
NH-Label, so not applicable in your case. 
This feature can address a problem of too many VPN-Label to NH-Label pairs. 
So in other words with this feature it doesn't matter how many VPNs (if per
VPN labels are used) or CEs (if per CE VPN-Labels are used) or prefixes (in
VRF if per prefix VPN-Labels are used) there are advertised by the
particular PE all of them will share just one indirect next hop -so in case
of a primary link failure only one indirect NH per PE needs to be updated
with a backup path NH-Label and that affects all the VPNs advertised by that
router, so it only matters now how many PEs a.k.a unique NH-Labels there are
in the network. 
From documentation: 
On platforms containing only MPCs chained composite next hops are enabled by
default. 
With Junos OS Release 13.3, the support for chained composite next hops is
enhanced to automatically identify the underlying platform capability on
composite next hops at startup time, without relying on user configuration,
and to decide the next hop type (composite or indirect) to embed in the
Layer 3 VPN label. 
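
For reference, on platforms or releases where this is not already on by
default, the knob looks roughly like this (a sketch; verify the hierarchy
against your release):

# build chained composite next hops for L3VPN routes at ingress
set routing-options forwarding-table chained-composite-next-hop ingress l3vpn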


adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Dragan Jovicic
As mentioned on mx trio indirect-nh is enabled and can't be disabled.
You could check with > show krt indirect-next-hop protocol-next-hop
commands (0x3 flag should mean it is enabled).
However this was not the case in older Junos versions where
indirect-next-hop was in fact not enabled and had to be enabled even on mx
mpc (it escapes me when this was, pre-13 or so).

If your uplink fails, with indirect-nh change is almost instantaneous,
given your BGP next-hop is unchanged, as only one pointer needs to be
rewritten (or you have equal cost uplinks...). However you still need
composite-next-hop feature for L3VPN labeled traffic and this is NOT
enabled by default (might be important if you run lots of routes in vrf)...

If your BGP next-hop changes and you have routes in rib (add-paths,
advertise-external, multiple RRs), and you have them installed in FT
(pre- or post- 15.1), you still rely on failure detection of upstream BGP
router or upstream link (even slower, but you could put upstream links in
IGP).

There's also egress-protection for labeled traffic..

Before we implemented bgp pic/add-paths, we used multiple RR and iBGP mesh
in certain parts and spread BGP partial feeds from multiple upstream
routers to at least minimize time to update FIB, as none of this required
any upgrade/maintenance.

If you find your FIB update time is terrible, bgp pic edge will definitely
help..

BR,


-Dragan

ccie/jncie





On Tue, Apr 18, 2017 at 10:07 PM, Vincent Bernat  wrote:

>  ❦ 18 April 2017 21:51 +0200, Raphael Mazelier  :
>
> >> Is this the case for chassis MX104 and 80? Is your recommendation to run
> >> with indirect-next-hop on them as well?
> >>
> >
> > Correct me if I'm wrong but I think this is the default on all the MX
> > since a long time. There as no downside afaik.
>
> Documentation says:
>
> > By default, the Junos Trio Modular Port Concentrator (MPC) chipset on
> > MX Series routers is enabled with indirectly connected next hops, and
> > this cannot be disabled using the no-indirect-next-hop statement.
> --
> Harp not on that string.
> -- William Shakespeare, "Henry VI"
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Vincent Bernat
 ❦ 18 April 2017 21:51 +0200, Raphael Mazelier  :

>> Is this the case for chassis MX104 and 80? Is your recommendation to run
>> with indirect-next-hop on them as well?
>>
>
> Correct me if I'm wrong but I think this is the default on all the MX
> since a long time. There is no downside afaik.

Documentation says:

> By default, the Junos Trio Modular Port Concentrator (MPC) chipset on
> MX Series routers is enabled with indirectly connected next hops, and
> this cannot be disabled using the no-indirect-next-hop statement.
-- 
Harp not on that string.
-- William Shakespeare, "Henry VI"
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Michael Hare
Agreeing with Raphael, my reading implies indirect-next-hop cannot be disabled 
on TRIO.  That said I do explicitly configure it on all of our MX gear.

You may also want to look at indirect-next-hop-change-acknowledgements, in my 
case I use LFA and dynamic-rsvp-lsp and have it configured acknowledging (no 
pun intended) it may be adding to my poor convergence woes without BGP PIC.  
FWIW I left krt-nexthop-ack-timeout at its default of 1s.

http://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/indirect-next-hop-change-acknowledgements-edit-routing-options-forwarding-options.html
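
For reference, a sketch of the two statements side by side as I understand
the hierarchy (verify against your release):

# INH itself (on by default and not deselectable on Trio MPCs)
set routing-options forwarding-table indirect-next-hop
# make rpd wait for PFE acknowledgement of INH changes
set routing-options forwarding-table indirect-next-hop-change-acknowledgements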

-Michael

> -Original Message-
> From: Jared Mauch [mailto:ja...@puck.nether.net]
> Sent: Tuesday, April 18, 2017 2:48 PM
> To: Charlie Allom <char...@evilforbeginners.com>
> Cc: Jared Mauch <ja...@puck.nether.net>; Michael Hare
> <michael.h...@wisc.edu>; juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] improving global unicast convergence (with or without
> BGP-PIC)
> 
> On Tue, Apr 18, 2017 at 08:45:17PM +0100, Charlie Allom wrote:
> > On Tue, Apr 18, 2017 at 7:36 PM, Jared Mauch <ja...@puck.nether.net>
> > wrote:
> >
> > > You want to set indirect-next-hop in all use-cases.  This allows
> > > faster FIB convergence upon RIB events because all shared next-hops
> > > can be updated at once.
> > >
> > Is this the case for chassis MX104 and 80? Is your recommendation to run
> > with indirect-next-hop on them as well?
> >
> > ..or are there downsides on these smaller units?
> 
>   Yes, I would use this on all JunOS devices myself.
> 
>   - Jared
> 
> --
> Jared Mauch  | pgp key available via finger from ja...@puck.nether.net
> clue++;  | http://puck.nether.net/~jared/  My statements are only mine.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Raphael Mazelier




> Is this the case for chassis MX104 and 80? Is your recommendation to run
> with indirect-next-hop on them as well?



Correct me if I'm wrong but I think this is the default on all the MX 
since a long time. There is no downside afaik.




--
Raphael Mazelier
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Jared Mauch
On Tue, Apr 18, 2017 at 08:45:17PM +0100, Charlie Allom wrote:
> On Tue, Apr 18, 2017 at 7:36 PM, Jared Mauch  wrote:
> 
> > You want to set indirect-next-hop in all use-cases.  This allows
> > faster FIB convergence upon RIB events because all shared next-hops
> > can be updated at once.
> >
> Is this the case for chassis MX104 and 80? Is your recommendation to run
> with indirect-next-hop on them as well?
> 
> ..or are there downsides on these smaller units?

Yes, I would use this on all JunOS devices myself.

- Jared

-- 
Jared Mauch  | pgp key available via finger from ja...@puck.nether.net
clue++;  | http://puck.nether.net/~jared/  My statements are only mine.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Charlie Allom
On Tue, Apr 18, 2017 at 7:36 PM, Jared Mauch  wrote:

> You want to set indirect-next-hop in all use-cases.  This allows
> faster FIB convergence upon RIB events because all shared next-hops
> can be updated at once.
>
Is this the case for chassis MX104 and 80? Is your recommendation to run
with indirect-next-hop on them as well?

..or are there downsides on these smaller units?
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Jared Mauch
On Tue, Apr 18, 2017 at 04:50:41PM +, Michael Hare wrote:
> Hello,
> 
> Sorry if this is an easy question already covered.  Does anyone on list have 
> an understanding of what happens in the FIB in the following circumstance?
> 
> Simplified topology;
> * Router 1 RIB default points to reject
> * Router 1 RIB has default free feed from attached eBGP neighbor A
> * Router 1 RIB has default free feed from attached iBGP neighbor B (add-path)
> 
> I guess what I'm trying to understand, from the perspective of improving 
> upstream convergence for outbound packets from our AS, if my default route 
> pointed to a valid next hop of last resort am I likely to see an improvement 
> (reduction) in blackholing on router 1 during topology changes?  The thought 
> being that if Router 1 FIB invalidates next-hop A quickly (en masse) packets 
> could match default route with valid next-hop while FIB is being 
> re-programmed with more specifics via B?
> 
> I am aware of indirect-next-hop being default on MPC but my understanding is 
> this will not work for directly connected eBGP peers?  So if session with A 
> drops (BFD, link, whatever) are routes with next hop to neighbor A 
> deprogrammed nearly atomically due to some level of indirection or are routes 
> considered one by one until all routes (~600K) have been processed?  I 
> suspect the latter but perhaps looking for verification.


You want to set indirect-next-hop in all use-cases.  This allows
faster FIB convergence upon RIB events because all shared next-hops can be 
updated
at once.
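
A minimal sketch of that, plus the verification command used elsewhere in
the thread (on Trio MPCs this is already the default and cannot be turned
off):

set routing-options forwarding-table indirect-next-hop

user@router> show krt indirect-next-hop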

> I am aware of BGP PIC but not yet running 15.X [when internet is not in VRF]. 
>  I am willing to accept that if BGP PIC is the best approach to improving 
> this scenario an upgrade is the best path forward.  I'd be curious to hear 
> from anyone who is on 15.1 [or newer] and using MPC4 in terms of perceived 
> code quality and MPC4 heap utilization before/after.  

Since you are running a full RIB+FIB, you want to leverage PIC & INH to
get the full performance feasible from your hardware.

- Jared

-- 
Jared Mauch  | pgp key available via finger from ja...@puck.nether.net
clue++;  | http://puck.nether.net/~jared/  My statements are only mine.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp