On 2012-12-13, at 10:12 AM, Phil Mayers <[email protected]> wrote:

> On 13/12/12 15:04, Jason Lixfeld wrote:
>> 
>> On 2012-12-13, at 9:56 AM, Phil Mayers <[email protected]>
>> wrote:
>> 
>>> On 13/12/12 14:47, Jason Lixfeld wrote:
>>>>> 
>>>>> Yes. In fact, that's *required* if you want to do multi-path.
>>>> 
>>>> I seem to do multi-path just fine with maximum-paths ibgp 2 on my
>>>> RR clients inside a VRF that sees a default sourced from two
>>>> different RRs.  Said VRF has a common RD between the two PEs.
>>>> 
>>>> How is that different?
>>> 
>>> Well, AIUI multipath *ought* to require unique RDs. Obviously not;
>>> I wonder how that's working for you?
>> 
>> So based on the link you posted previously, which I am currently
>> making my way through, what's happening is on my production network
>> where this multi-path stuff is actually working, I'm using XR as my
>> RRs, which has add-path support.
> 
> Presumably the RR clients have add-path too (it's needed at both ends)?

I was just going to reply to my last post and correct myself...  This seems to 
work on the 7600s but not on the ME3600s (both running 15.2(2)S1).  I saw 
multi-path on the 7600s and presumed it was working everywhere.  Both are RR 
clients of the two ASR9k PEs that are sourcing the defaults.

rrc-7600#sh ip bgp vpnv4 vrf Inetv4 0.0.0.0
BGP routing table entry for 1:4:0.0.0.0/0, version 6002
Paths: (2 available, best #2, table Inetv4, not advertised to EBGP peer)
Multipath: iBGP
  Not advertised to any peer
  Refresh Epoch 1
  Local
    1.1.1.11 (metric 20) from 1.1.1.11 (1.1.1.11)
      Origin IGP, metric 0, localpref 100, valid, internal, multipath(oldest)
      Community: 1:65535 no-export
      Extended Community: RT:1:4
      mpls labels in/out nolabel/16000
  Refresh Epoch 1
  Local
    1.1.1.10 (metric 20) from 1.1.1.10 (1.1.1.10)
      Origin IGP, metric 0, localpref 100, valid, internal, multipath, best
      Community: 1:65535 no-export
      Extended Community: RT:1:4
      mpls labels in/out nolabel/289985
rrc-7600#sh ip cef vrf Inetv4 0.0.0.0/0 detail       
0.0.0.0/0, epoch 35, flags rib defined all labels, default route, 
per-destination sharing
  recursive via 1.1.1.10 label 289985
    nexthop 1.1.1.166 TenGigabitEthernet7/0/0
  recursive via 1.1.1.11 label 16000
    nexthop 1.1.1.164 TenGigabitEthernet7/0/1
rrc-7600#


vs.


rrc-3600#sh ip bgp vpnv4 vrf Inetv4 0.0.0.0
BGP routing table entry for 1:4:0.0.0.0/0, version 20719
Paths: (2 available, best #1, table Inetv4, not advertised to EBGP peer)
Multipath: iBGP
  Not advertised to any peer
  Refresh Epoch 1
  Local
    1.1.1.11 (metric 30) from 1.1.1.11 (1.1.1.11)
      Origin IGP, metric 0, localpref 100, valid, internal, best
      Community: 1:65535 no-export
      Extended Community: RT:1:4
      mpls labels in/out nolabel/16000
      rx pathid: 0, tx pathid: 0x0
  Refresh Epoch 1
  Local
    1.1.1.10 (metric 40) from 1.1.1.10 (1.1.1.10)
      Origin IGP, metric 0, localpref 100, valid, internal
      Community: 1:65535 no-export
      Extended Community: RT:1:4
      mpls labels in/out nolabel/289985
      rx pathid: 0, tx pathid: 0
rrc-3600#sh ip cef vrf Inetv4 0.0.0.0/0 detail
0.0.0.0/0, epoch 0, flags rib defined all labels, default route
  recursive via 1.1.1.11 label 16000
    nexthop 1.1.1.197 TenGigabitEthernet0/2 label 36
rrc-3600#
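
For my own notes: as I understand it, what makes this work via the XR RRs is 
the additional-paths knobs under the VPNv4 address family.  Something along 
these lines (a sketch only -- the AS number, policy name and selection policy 
are made up for illustration, not copied from my actual config):

  route-policy ADD-PATH
    set path-selection all advertise
  end-policy
  !
  router bgp 1
   address-family vpnv4 unicast
    additional-paths send
    additional-paths receive
    additional-paths selection route-policy ADD-PATH
   !
  !

and presumably the IOS clients would need to signal receive capability as well 
for it to be negotiated end-to-end.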

> Other explanations might be that by chance one RR had advertised one path and 
> the other RR another path.

This is actually the case.  I should have been clearer on this point: there are 
two PEs, each sourcing a default route.  Each of these two PEs is also an RR 
for downstream PEs, which are RR clients of both RRs.

> But yes, my original email should have been more specific: unless you have 
> add-paths, unique RD is required for multipath.

So... looks like I do need to configure a different RD on my other 
default-route-sourcing PE.  If I'm reading the output of those two commands 
correctly, add-path support seems to be there on the 7600s (albeit in an 
undocumented manner) but not at all on the ME3600s.
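
I.e., something like this on the second ASR9k, keeping the RTs identical on 
both PEs so the routes still land in the same VRF everywhere, but making the RD 
per-PE (a sketch only -- the AS number and RD values here are made up, not my 
real ones):

  router bgp 1
   vrf Inetv4
    rd 1.1.1.11:4
   !
  !
  vrf Inetv4
   address-family ipv4 unicast
    import route-target
     1:4
    !
    export route-target
     1:4
    !
   !
  !

With distinct RDs the RRs see the two defaults as different VPNv4 prefixes and 
reflect both, so the clients can multipath without needing add-path at all.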

This continues to be a really eye-opening thread for me.  Thanks all for taking 
the time to continue to participate.

> _______________________________________________
> cisco-nsp mailing list  [email protected]
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/

