Re: [c-nsp] OSPF equal cost load balancing

2017-09-04 Thread Patrick Cole
James,

That would make more sense - since the ASR1k supports subinterfaces you don't
need to use BDIs anyway, so it's a non-event for us - we terminate our
pseudowires on the ASR1k too and loop the PPPoE traffic back to terminate it.

PC

Mon, Sep 04, 2017 at 08:42:10PM +0100, James Bensley wrote:


> On 1 September 2017 at 02:02, Patrick Cole  wrote:
> > James,
> >
> > Interesting you should mention the PPPoE thing, as all of our ASR920 P/PE are
> > deployed using BDIs for NNI-facing interfaces and we carry bucketloads of
> > PPPoE traffic across them all without any issues.
> >
> > The only thing I had to be wary of was accidentally putting two service
> > instances in the bridge domain for the NNI IP interface, as it would spit an
> > ASIC programming error for FRR and start blackholing some labelled traffic.
> > But as long as you're meticulous about that it seems fine.
> >
> > Thu, Aug 31, 2017 at 09:12:05AM +0100, James Bensley wrote:
> >> https://null.53bits.co.uk/index.php?page=mpls-over-phy-vs-eff-bdi-svi
> 
> Interesting! I may be misremembering as it was about 6+ months ago, but
> we had ASR920s and ME3600s both running pseudowires back to central
> ASR1001s, and maybe the problem was with the ASR1001s instead of the
> ASR920s. Whichever device type it was (I thought it was the ASR920s
> though), I asked TAC and they confirmed it's not supported (pseudowires
> with PPPoE payload when the core-facing interface is a BDI).
> 
> I searched and searched myself and couldn't find any documentation
> saying that it wasn't supported; TAC did eventually show me some
> documentation that said it wasn't supported.
> 
> (In fact yes, it was the ASR1001: re-reading my table linked above, in
> none of the permutations is the ASR1001 listed as working when
> transporting PPPoE with a BDI core interface. I have such a bad
> memory, that is why I have to write this stuff down :S )
> 
> Cheers,
> James.
> ___
> cisco-nsp mailing list  cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
> 

-- 
Patrick Cole 
Senior Network Specialist
World Without Wires
PO Box 869. Palm Beach, QLD, 4221
Ph:  0410 626 630


Re: [c-nsp] OSPF equal cost load balancing

2017-09-04 Thread James Bensley
On 2 September 2017 at 07:21, CiscoNSP List  wrote:
>
> Just a quick update to this - was in the process of converting 2 of the links 
> to a port-chan (removed the IP address from the port taking the majority of the 
> traffic (gi0/0/20)), and noticed it started load-balancing over the now "3" 
> ECMP links far better:
>
>
> sh interfaces gigabitEthernet 0/0/21 | include 30 sec
>   30 second input rate 33251000 bits/sec, 8355 packets/sec
>   30 second output rate 265517000 bits/sec, 30692 packets/sec
> #sh interfaces gigabitEthernet 0/0/22 | include 30 sec
>   30 second input rate 26199000 bits/sec, 4643 packets/sec
>   30 second output rate 84239000 bits/sec, 13864 packets/sec
> #sh interfaces gigabitEthernet 0/0/23 | include 30 sec
>   30 second input rate 12839000 bits/sec, 3794 packets/sec
>   30 second output rate 56293000 bits/sec, 7668 packets/sec
>
> As soon as I re-add the 4th port, balancing goes to crap again, and all is 
> sent via gi0/0/20:
>
>
> #sh interfaces gigabitEthernet 0/0/20 | include 30 sec
>   30 second input rate 16863000 bits/sec, 5516 packets/sec
>   30 second output rate 405225000 bits/sec, 52284 packets/sec
> #sh interfaces gigabitEthernet 0/0/21 | include 30 sec
>   30 second input rate 26944000 bits/sec, 4450 packets/sec
>   30 second output rate 3366000 bits/sec, 417 packets/sec
> #sh interfaces gigabitEthernet 0/0/22 | include 30 sec
>   30 second input rate 17212000 bits/sec, 3911 packets/sec
>   30 second output rate 6943000 bits/sec, 866 packets/sec
> #sh interfaces gigabitEthernet 0/0/23 | include 30 sec
>   30 second input rate 20943000 bits/sec, 4190 packets/sec
>   30 second output rate 518000 bits/sec, 94 packets/sec
>
> So, it does not like balancing over 4 links - 3 links is far better.
>
> So, I also tried reducing it to 2 links - And balance is also much better 
> (Not perfect, but much better than with 4 links)
>
> sh interfaces gigabitEthernet 0/0/22 | include 30 sec
>   30 second input rate 57711000 bits/sec, 8997 packets/sec
>   30 second output rate 10994 bits/sec, 20114 packets/sec
> sh interfaces gigabitEthernet 0/0/23 | include 30 sec
>   30 second input rate 40999000 bits/sec, 9508 packets/sec
>   30 second output rate 346398000 bits/sec, 35224 packets/sec
>
> sh interfaces gigabitEthernet 0/0/22 | include 30 sec
>   30 second input rate 52511000 bits/sec, 8699 packets/sec
>   30 second output rate 126974000 bits/sec, 21239 packets/sec
> sh interfaces gigabitEthernet 0/0/23 | include 30 sec
>   30 second input rate 3791 bits/sec, 9901 packets/sec
>   30 second output rate 334954000 bits/sec, 34687 packets/sec
>
> If it can maintain those types of ratios, I can live with it... why it doesn't 
> like 4 ports, and originally didn't like 2 ports, but now appears to balance 
> over 2 "better", I'd love to know.
>
> Cheers.

So weird. What IOS-XE version are you running on the ASR920? Based on
the output from your previous email it seems like a bug to me. When
you said FRR was also running, I thought that might be interfering.
We've had issues where, when running features X and Y together, things
mostly seem to work with some minor issue, but actually (after much
debating with TAC) the two features aren't supported together, and it
isn't documented anywhere that the combination isn't supported.

I'd be keen to see what TAC say; it seems like a bug. I wonder if a
reboot of the ASR920 would have fixed this?

Perhaps you can squeeze some better commands out of TAC for
troubleshooting this sort of thing. You had multiple paths in CEF in
both software and hardware as far as I can tell (I'm no expert on the
ASR920). As far as I know there is no "test ..." command to test
load-balancing specifically on the ASR920.
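The `loadinfo` output quoted later in this thread shows 16 hash buckets striped across the equal-cost next hops. As a rough, hedged sketch of how that style of per-destination load sharing can polarise (the hash function below is an illustrative stand-in, not the platform's real one):

```python
# Illustrative sketch only: per-destination (flow-hash) load sharing in the
# style of CEF's "16 hash buckets". md5 here is just a deterministic
# stand-in hash; the real platform hash is different.
import hashlib
from collections import Counter

NUM_BUCKETS = 16

def bucket(src: str, dst: str) -> int:
    # Hash the flow key into one of the 16 buckets.
    digest = hashlib.md5(f"{src}->{dst}".encode()).digest()
    return digest[0] % NUM_BUCKETS

def next_hop(src: str, dst: str, paths: list) -> str:
    # Buckets are striped round-robin across the equal-cost paths, so a
    # handful of heavy flows can easily land on the same member link.
    return paths[bucket(src, dst) % len(paths)]

paths = ["gi0/0/20", "gi0/0/21", "gi0/0/22", "gi0/0/23"]
flows = [(f"10.0.0.{i}", "192.0.2.1") for i in range(50)]
print(Counter(next_hop(s, d, paths) for s, d in flows))
```

With 16 buckets, 4 paths divide evenly (4 buckets each) while 3 paths split 6/5/5; either way the bytes per link depend entirely on which flows hash where, which is why a few large flows can max out one member.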

Cheers,
James.


Re: [c-nsp] OSPF equal cost load balancing

2017-09-04 Thread James Bensley
On 1 September 2017 at 02:02, Patrick Cole  wrote:
> James,
>
> Interesting you should mention the PPPoE thing, as all of our ASR920 P/PE are
> deployed using BDIs for NNI-facing interfaces and we carry bucketloads of
> PPPoE traffic across them all without any issues.
>
> The only thing I had to be wary of was accidentally putting two service
> instances in the bridge domain for the NNI IP interface, as it would spit an
> ASIC programming error for FRR and start blackholing some labelled traffic.
> But as long as you're meticulous about that it seems fine.
>
> Thu, Aug 31, 2017 at 09:12:05AM +0100, James Bensley wrote:
>> https://null.53bits.co.uk/index.php?page=mpls-over-phy-vs-eff-bdi-svi

Interesting! I may be misremembering as it was about 6+ months ago, but
we had ASR920s and ME3600s both running pseudowires back to central
ASR1001s, and maybe the problem was with the ASR1001s instead of the
ASR920s. Whichever device type it was (I thought it was the ASR920s
though), I asked TAC and they confirmed it's not supported (pseudowires
with PPPoE payload when the core-facing interface is a BDI).

I searched and searched myself and couldn't find any documentation
saying that it wasn't supported; TAC did eventually show me some
documentation that said it wasn't supported.

(In fact yes, it was the ASR1001: re-reading my table linked above, in
none of the permutations is the ASR1001 listed as working when
transporting PPPoE with a BDI core interface. I have such a bad
memory, that is why I have to write this stuff down :S )

Cheers,
James.


Re: [c-nsp] OSPF equal cost load balancing

2017-09-02 Thread CiscoNSP List
Just a quick update to this - was in the process of converting 2 of the links 
to a port-chan (removed the IP address from the port taking the majority of the 
traffic (gi0/0/20)), and noticed it started load-balancing over the now "3" ECMP 
links far better:


sh interfaces gigabitEthernet 0/0/21 | include 30 sec
  30 second input rate 33251000 bits/sec, 8355 packets/sec
  30 second output rate 265517000 bits/sec, 30692 packets/sec
#sh interfaces gigabitEthernet 0/0/22 | include 30 sec
  30 second input rate 26199000 bits/sec, 4643 packets/sec
  30 second output rate 84239000 bits/sec, 13864 packets/sec
#sh interfaces gigabitEthernet 0/0/23 | include 30 sec
  30 second input rate 12839000 bits/sec, 3794 packets/sec
  30 second output rate 56293000 bits/sec, 7668 packets/sec


As soon as I re-add the 4th port, balancing goes to crap again, and all is sent 
via gi0/0/20:


#sh interfaces gigabitEthernet 0/0/20 | include 30 sec
  30 second input rate 16863000 bits/sec, 5516 packets/sec
  30 second output rate 405225000 bits/sec, 52284 packets/sec
#sh interfaces gigabitEthernet 0/0/21 | include 30 sec
  30 second input rate 26944000 bits/sec, 4450 packets/sec
  30 second output rate 3366000 bits/sec, 417 packets/sec
#sh interfaces gigabitEthernet 0/0/22 | include 30 sec
  30 second input rate 17212000 bits/sec, 3911 packets/sec
  30 second output rate 6943000 bits/sec, 866 packets/sec
#sh interfaces gigabitEthernet 0/0/23 | include 30 sec
  30 second input rate 20943000 bits/sec, 4190 packets/sec
  30 second output rate 518000 bits/sec, 94 packets/sec


So, it does not like balancing over 4 links - 3 links is far better.

So, I also tried reducing it to 2 links - And balance is also much better (Not 
perfect, but much better than with 4 links)

sh interfaces gigabitEthernet 0/0/22 | include 30 sec
  30 second input rate 57711000 bits/sec, 8997 packets/sec
  30 second output rate 10994 bits/sec, 20114 packets/sec
sh interfaces gigabitEthernet 0/0/23 | include 30 sec
  30 second input rate 40999000 bits/sec, 9508 packets/sec
  30 second output rate 346398000 bits/sec, 35224 packets/sec

sh interfaces gigabitEthernet 0/0/22 | include 30 sec
  30 second input rate 52511000 bits/sec, 8699 packets/sec
  30 second output rate 126974000 bits/sec, 21239 packets/sec
sh interfaces gigabitEthernet 0/0/23 | include 30 sec
  30 second input rate 3791 bits/sec, 9901 packets/sec
  30 second output rate 334954000 bits/sec, 34687 packets/sec

If it can maintain those types of ratios, I can live with it... why it doesn't 
like 4 ports, and originally didn't like 2 ports, but now appears to balance 
over 2 "better", I'd love to know.

Cheers.


From: cisco-nsp <cisco-nsp-boun...@puck.nether.net> on behalf of CiscoNSP List 
<cisconsp_l...@hotmail.com>
Sent: Friday, 1 September 2017 8:55 AM
To: Aaron Gould; 'James Bensley'; cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] OSPF equal cost load balancing

Hmm - it can't be - it's not just one next hop that all the traffic is 
heading to... i.e. there are 3 or 4 destination routers (2 ASR1001s, and the ME3600s 
(2 of those))... so 4 next-hop addresses... we can't be that unlucky that every 
one of those addresses is being mapped to gi0/0/20... no, just checked, and it 
arbitrarily changes based on src ip... but that could just be CEF 
mis-reporting. Very frustrating.



From: cisco-nsp <cisco-nsp-boun...@puck.nether.net> on behalf of CiscoNSP List 
<cisconsp_l...@hotmail.com>
Sent: Friday, 1 September 2017 8:45 AM
To: Aaron Gould; 'James Bensley'; cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] OSPF equal cost load balancing

Thanks Aaron... that's what I'm going to try shortly. Very strange how cef 
exact-route reports it as load sharing, but it obviously isn't... and the 
next-hop link you provided I have yet to read, but I think that is what is 
happening...



From: Aaron Gould <aar...@gvtc.com>
Sent: Friday, 1 September 2017 6:37 AM
To: 'CiscoNSP List'; 'James Bensley'; cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] OSPF equal cost load balancing

In my mpls cloud I usually would lag dual gige's together to feed my PE
boxes with more bandwidth.  Worked well for me

-Aaron


Re: [c-nsp] OSPF equal cost load balancing

2017-08-31 Thread Patrick Cole
James,

Interesting you should mention the PPPoE thing, as all of our ASR920 P/PE are 
deployed using BDIs for NNI-facing interfaces and we carry bucketloads of
PPPoE traffic across them all without any issues.

The only thing I had to be wary of was accidentally putting two service
instances in the bridge domain for the NNI IP interface, as it would spit an
ASIC programming error for FRR and start blackholing some labelled traffic.
But as long as you're meticulous about that it seems fine.

PC

Thu, Aug 31, 2017 at 09:12:05AM +0100, James Bensley wrote:
 
> > We're going to be in the same boat soon too.. ASR920's on both sides with
> > OSPF across two physical paths and worried about load sharing. Most of our
> > traffic is MPLS xconnects traversing these links (licensed backhauls).
> 
> This doesn't sound like a good idea to me (depending on your traffic
> requirements). I have had mixed results when using BDIs/SVIs for core
> MPLS-facing interfaces. As an example, PPPoE frames wouldn't forward
> over a pseudowire when the ASR920 used a BDI for the core interface:
> https://null.53bits.co.uk/index.php?page=mpls-over-phy-vs-eff-bdi-svi
> 
> 
> Cheers,
> James.

-- 
Patrick Cole 
Senior Network Specialist
World Without Wires
PO Box 869. Palm Beach, QLD, 4221
Ph:  0410 626 630


Re: [c-nsp] OSPF equal cost load balancing

2017-08-31 Thread CiscoNSP List
Hmm - it can't be - it's not just one next hop that all the traffic is 
heading to... i.e. there are 3 or 4 destination routers (2 ASR1001s, and the ME3600s 
(2 of those))... so 4 next-hop addresses... we can't be that unlucky that every 
one of those addresses is being mapped to gi0/0/20... no, just checked, and it 
arbitrarily changes based on src ip... but that could just be CEF 
mis-reporting. Very frustrating.



From: cisco-nsp <cisco-nsp-boun...@puck.nether.net> on behalf of CiscoNSP List 
<cisconsp_l...@hotmail.com>
Sent: Friday, 1 September 2017 8:45 AM
To: Aaron Gould; 'James Bensley'; cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] OSPF equal cost load balancing

Thanks Aaron... that's what I'm going to try shortly. Very strange how cef 
exact-route reports it as load sharing, but it obviously isn't... and the 
next-hop link you provided I have yet to read, but I think that is what is 
happening...



From: Aaron Gould <aar...@gvtc.com>
Sent: Friday, 1 September 2017 6:37 AM
To: 'CiscoNSP List'; 'James Bensley'; cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] OSPF equal cost load balancing

In my mpls cloud I usually would lag dual gige's together to feed my PE
boxes with more bandwidth.  Worked well for me

-Aaron



Re: [c-nsp] OSPF equal cost load balancing

2017-08-31 Thread CiscoNSP List
Thanks Aaron... that's what I'm going to try shortly. Very strange how cef 
exact-route reports it as load sharing, but it obviously isn't... and the 
next-hop link you provided I have yet to read, but I think that is what is 
happening...



From: Aaron Gould <aar...@gvtc.com>
Sent: Friday, 1 September 2017 6:37 AM
To: 'CiscoNSP List'; 'James Bensley'; cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] OSPF equal cost load balancing

In my mpls cloud I usually would lag dual gige's together to feed my PE
boxes with more bandwidth.  Worked well for me

-Aaron



Re: [c-nsp] OSPF equal cost load balancing

2017-08-31 Thread Aaron Gould
In my mpls cloud I usually would lag dual gige's together to feed my PE
boxes with more bandwidth.  Worked well for me

-Aaron



Re: [c-nsp] OSPF equal cost load balancing

2017-08-31 Thread Aaron Gould
I just read this.  I wonder if it applies.

https://www.cisco.com/en/US/products/hw/modules/ps2033/prod_technical_reference09186a00800afeb7.html

How CEF load balancing works 

….

If the destination is on a remote network reachable via a next-hop router, the 
entry in the route cache consists of the destination network. If parallel 
paths exist this does not provide load balancing, as only one path would be 
used.

….
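A minimal sketch of the failure mode the quoted passage describes (fast-switching-style caching, not CEF; all names below are illustrative): a cache keyed on the destination network resolves to a single next hop, so parallel equal-cost paths carry no share of that traffic.

```python
# Route cache keyed on destination network: the first packet picks a path
# and every subsequent packet to that network reuses it, so equal-cost
# parallel paths see none of the load.
route_cache: dict = {}

def forward(dst_net: str, ecmp_paths: list) -> str:
    if dst_net not in route_cache:
        route_cache[dst_net] = ecmp_paths[0]  # one path cached from then on
    return route_cache[dst_net]

paths = ["gi0/0/22", "gi0/0/23"]
used = {forward("198.51.100.0/24", paths) for _ in range(100)}
print(used)  # a single interface, however many packets are sent
```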

 

-Aaron


Re: [c-nsp] OSPF equal cost load balancing

2017-08-31 Thread CiscoNSP List
  Adj State: COMPLETE(0)   Address: YYY.YYY.230.102
  Interface: GigabitEthernet0/0/23   Protocol: TAG
  mtu:9100, flags:0x0, fixups:0x0, encap_len:14
  Handles (adj_id:0x008f) (PI:0x10648c80) (PD:0x11b41688)
  Rewrite Str: 34:62:88:2a:49:d8:00:a6:ca:cf:2c:97:88:47

  HW Info:
    FID index: 0x6063  EL3 index: 0x1018  EL2 index: 0x
    El2RW: 0x010c  MET index: 0x0003202a  EAID : 0x1011
HW ADJ FLAGS: 0x40
Hardware MAC Rewrite Str: 00:00:00:00:00:00:00:00:00:00:00:00

=== Label OCE ===
  Label flags: 20
  Num Labels: 1
  Num Bk Labels: 1
  Out Labels: 30
  Out Backup Labels: 30
  Next OCE Type: Fast ReRoute OCE; Next OCE handle: 0x123cbd70

=== FRR OCE ===
  FRR type : IP FRR
  FRR state: Primary
  Primary IF's gid : 22 (DPIDX : 0) Backup IF's DPIDX : 0
  Primary FID  : 0x6856
  PPO handle   : 0x
  Next OCE : Adjacency (0x12105c58)
  Bkup OCE : Adjacency (0x123b4c68)
  Primary BDI  : 0 (Index : 0)
  Backup BDI   : 0 (Index : 0)
  FRR Intf info at Primary array index DPIDX 0, FRR count 0, MET 0x 
0x, EAID 0x
  FRR Intf info at Backup array index  DPIDX 0, FRR count 0, MET 0x 
0x, EAID 0x
  Primary HW Info:
     fi_handle FID index: 0x  EAID index: 0x
     nh_handle MET index: 0x  EAID index: 0x
     EL3ID 0x
  Backup HW Info:
     nh_handle MET index: 0x  EAID index: 0x

=== Adjacency OCE ===
  Adj State: COMPLETE(0)   Address: XXX.XXX.67.154
  Interface: GigabitEthernet0/0/21   Protocol: TAG
  mtu:9100, flags:0x0, fixups:0x0, encap_len:14
  Handles (adj_id:0x07db) (PI:0x10729058) (PD:0x12105c58)
  Rewrite Str: 34:62:88:2a:49:d6:00:a6:ca:cf:2c:95:88:47

  HW Info:
    FID index: 0x6597  EL3 index: 0x1016  EL2 index: 0x
    El2RW: 0x0126  MET index: 0x00032047  EAID : 0x1013
HW ADJ FLAGS: 0x40
Hardware MAC Rewrite Str: 00:00:00:00:00:00:00:00:00:00:00:00

=== Adjacency OCE ===
  Adj State: COMPLETE(0)   Address: XXX.XXX.67.156
  Interface: GigabitEthernet0/0/20   Protocol: TAG
  mtu:9100, flags:0x0, fixups:0x0, encap_len:14
  Handles (adj_id:0x07df) (PI:0x107292d8) (PD:0x123b4c68)
  Rewrite Str: 34:62:88:2a:49:d5:00:a6:ca:cf:2c:94:88:47

  HW Info:
    FID index: 0x65bc  EL3 index: 0x1015  EL2 index: 0x
    El2RW: 0x0128  MET index: 0x00032049  EAID : 0x1012
HW ADJ FLAGS: 0x40
Hardware MAC Rewrite Str: 00:00:00:00:00:00:00:00:00:00:00:00



Thanks again for your assistance on this.










From: cisco-nsp <cisco-nsp-boun...@puck.nether.net> on behalf of James Bensley 
<jwbens...@gmail.com>
Sent: Thursday, 31 August 2017 6:12 PM
To: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] OSPF equal cost load balancing

On 31 August 2017 at 01:35, CiscoNSP List <cisconsp_l...@hotmail.com> wrote:
>
> Aah - Thank you James!  So the ASR920 will not ECMP over 2 links, it requires 
> 4... that would explain the difference between egress/ingress (and why the 920 
> is not working particularly well!)

I'm not 100% sure but that is what the docs indicate (and as we know,
Cisco docs aren't the best):
https://www.cisco.com/c/en/us/td/docs/routers/asr920/configuration/guide/mpls/mp-l3-vpns-xe-3s-asr920-book/mp-l3-vpns-xe-3s-asr920-book_chapter_0100.html#reference_EDE971A94BE6443995432BE8D9E82A25




Restrictions for ECMP Load Balancing
-Both 4 ECMP and 8 ECMP paths are supported.
-Load balancing is supported on global IPv4 and IPv6 traffic. For
global IPv4 and IPv6 traffic, the traffic distribution can be equal
among the available 8 links.
-Per packet load balancing is not supported.
-Label load balancing is supported.

> And yes, we are running MPLS over these links (But not a LAG as mentioned) - 
> So does your comment re MPLS hashing still apply to our setup, or only to a 
> LAG?

Hmm, OK, well see above: "Label load balancing is supported." Although it's
not clear, I assume that means MPLS labels? So it seems ECMP
should support MPLS labelled paths and recognise different labelled
paths with the same IGP cost as separate "ECMP" paths.


> #sh ip cef YYY.YYY.229.193 internal
> YYY.YYY.229.192/30, epoch 2, fla

Re: [c-nsp] OSPF equal cost load balancing

2017-08-31 Thread James Bensley
On 31 August 2017 at 01:35, CiscoNSP List  wrote:
>
> Aah - Thank you James!  So the ASR920 will not ECMP over 2 links, it requires 
> 4... that would explain the difference between egress/ingress (and why the 920 
> is not working particularly well!)

I'm not 100% sure but that is what the docs indicate (and as we know,
Cisco docs aren't the best):
https://www.cisco.com/c/en/us/td/docs/routers/asr920/configuration/guide/mpls/mp-l3-vpns-xe-3s-asr920-book/mp-l3-vpns-xe-3s-asr920-book_chapter_0100.html#reference_EDE971A94BE6443995432BE8D9E82A25

Restrictions for ECMP Load Balancing
-Both 4 ECMP and 8 ECMP paths are supported.
-Load balancing is supported on global IPv4 and IPv6 traffic. For
global IPv4 and IPv6 traffic, the traffic distribution can be equal
among the available 8 links.
-Per packet load balancing is not supported.
-Label load balancing is supported.

> And yes, we are running MPLS over these links (But not a LAG as mentioned) - 
> So does your comment re MPLS hashing still apply to our setup, or only to a 
> LAG?

Hmm, OK, well see above: "Label load balancing is supported." Although it's
not clear, I assume that means MPLS labels? So it seems ECMP
should support MPLS labelled paths and recognise different labelled
paths with the same IGP cost as separate "ECMP" paths.
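As a hedged illustration of that reading (this is an assumption about what "label load balancing" means on this platform, not confirmed behaviour): if the hash covers the label stack, two LSPs with the same IGP cost but different labels can be steered to different member links.

```python
# Sketch: hash the MPLS label stack to pick among equal-cost paths.
# zlib.crc32 is an illustrative stand-in; the hardware hash is unspecified.
import zlib

def pick_path(label_stack: list, paths: list) -> str:
    # MPLS labels are 20-bit values; pack each into 3 bytes for the key.
    key = b"".join(label.to_bytes(3, "big") for label in label_stack)
    return paths[zlib.crc32(key) % len(paths)]

paths = ["gi0/0/22", "gi0/0/23"]
# Same transport label (30), different hypothetical VC labels:
print(pick_path([30, 16001], paths), pick_path([30, 16002], paths))
```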


> #sh ip cef YYY.YYY.229.193 internal
> YYY.YYY.229.192/30, epoch 2, flags [rnolbl, rlbls], RIB[B], refcnt 6, 
> per-destination sharing
>   sources: RIB
>   feature space:
> IPRM: 0x00018000
> Broker: linked, distributed at 4th priority
>   ifnums:
> GigabitEthernet0/0/22(29): XXX.XXX.67.152
> GigabitEthernet0/0/23(30): YYY.YYY.230.102
>   path list 3C293988, 35 locks, per-destination, flags 0x26D [shble, hvsh, 
> rif, rcrsv, hwcn, bgp]
> path 3C292714, share 1/1, type recursive, for IPv4
>   recursive via XXX.XXX.76.211[IPv4:Default], fib 3C9AE64C, 1 terminal 
> fib, v4:Default:XXX.XXX.76.211/32
>   path list 3D583FF0, 13 locks, per-destination, flags 0x49 [shble, rif, 
> hwcn]
>   path 3D4A221C, share 0/1, type attached nexthop, for IPv4, flags 
> [has-rpr]
> MPLS short path extensions: MOI flags = 0x21 label explicit-null
> nexthop YYY.YYY.230.102 GigabitEthernet0/0/23 label 
> [explicit-null|explicit-null], IP adj out of GigabitEthernet0/0/23, addr 
> YYY.YYY.230.102 3C287540
>   repair: attached-nexthop XXX.XXX.67.152 GigabitEthernet0/0/22 
> (3D4A44A4)
>   path 3D4A44A4, share 1/1, type attached nexthop, for IPv4, flags 
> [has-rpr]
> MPLS short path extensions: MOI flags = 0x21 label explicit-null
> nexthop XXX.XXX.67.152 GigabitEthernet0/0/22 label 
> [explicit-null|explicit-null], IP adj out of GigabitEthernet0/0/22, addr 
> XXX.XXX.67.152 3CC74980
>   repair: attached-nexthop YYY.YYY.230.102 GigabitEthernet0/0/23 
> (3D4A221C)
>   output chain:
> loadinfo 3D43D410, per-session, 2 choices, flags 0103, 21 locks
>   flags [Per-session, for-rx-IPv4, indirection]
>   16 hash buckets
> < 0 > label [explicit-null|explicit-null]
>   FRR Primary (0x3D51B980)
>  XXX.XXX.67.152 3D643CE0>
>  YYY.YYY.230.102 3CC74300>
> < 1 > label [explicit-null|explicit-null]
>   FRR Primary (0x3D51BA40)
>  YYY.YYY.230.102 3CC74300>
>  XXX.XXX.67.152 3D643CE0>
> < 2 > label [explicit-null|explicit-null]
>   FRR Primary (0x3D51B980)
>  XXX.XXX.67.152 3D643CE0>
>  YYY.YYY.230.102 3CC74300>
> < 3 > label [explicit-null|explicit-null]
>   FRR Primary (0x3D51BA40)
>  YYY.YYY.230.102 3CC74300>
>  XXX.XXX.67.152 3D643CE0>
> < 4 > label [explicit-null|explicit-null]
>   FRR Primary (0x3D51B980)
>  XXX.XXX.67.152 3D643CE0>
>  YYY.YYY.230.102 3CC74300>
> < 5 > label [explicit-null|explicit-null]
>   FRR Primary (0x3D51BA40)
>  YYY.YYY.230.102 3CC74300>
>  XXX.XXX.67.152 3D643CE0>
> < 6 > label [explicit-null|explicit-null]
>   FRR Primary (0x3D51B980)
>  XXX.XXX.67.152 3D643CE0>
>  YYY.YYY.230.102 3CC74300>
> < 7 > label [explicit-null|explicit-null]
>   FRR Primary (0x3D51BA40)
>  YYY.YYY.230.102 3CC74300>
>  XXX.XXX.67.152 3D643CE0>
> < 8 > label [explicit-null|explicit-null]
>   FRR Primary (0x3D51B980)
>  XXX.XXX.67.152 3D643CE0>
>  YYY.YYY.230.102 3CC74300>
> < 9 > label [explicit-null|explicit-null]
>   FRR Primary (0x3D51BA40)
>  YYY.YYY.230.102 3CC74300>
>  XXX.XXX.67.152 3D643CE0>
> <10 > label [explicit-null|explicit-null]
>   FRR Primary (0x3D51B980)
>

Re: [c-nsp] OSPF equal cost load balancing

2017-08-30 Thread CiscoNSP List

Hmm - Well this is just not wanting to play nicely at all


I've added another 2 links (now 4 total), all equal cost - egress load (from 
ASR920->ME3600) went from Gi0/0/22 doing 950Mb/sec and Gi0/0/23 doing 5-10Mb/sec, 
to Gi0/0/20 now taking all the load...


So, we have gi0/0/20,21,22,23 connected to the "corresponding" ports on the 
ME3600 (gi0/20,21,22,23)


Now gi0/0/20 is doing 970Mb/sec... I've tried every combination of 
load-sharing in global conf, and they initially make a bit of a difference 
(i.e. the other ports will do 5-10Mb/sec each), but then revert back to 
Gi0/0/20 being maxed out.


 #show int gigabitEthernet 0/0/20 | inc 30 sec
  30 second input rate 9019 bits/sec, 14882 packets/sec
  30 second output rate 969898000 bits/sec, 144872 packets/sec
 #show int gigabitEthernet 0/0/21 | inc 30 sec
  30 second input rate 74069000 bits/sec, 13780 packets/sec
  30 second output rate 1778000 bits/sec, 312 packets/sec
 #show int gigabitEthernet 0/0/22 | inc 30 sec
  30 second input rate 9676 bits/sec, 15992 packets/sec
  30 second output rate 3067000 bits/sec, 444 packets/sec
 #show int gigabitEthernet 0/0/23 | inc 30 sec
  30 second input rate 103174000 bits/sec, 16690 packets/sec
  30 second output rate 395000 bits/sec, 101 packets/sec


Help?  




From: CBL <alanda...@gmail.com>
Sent: Thursday, 31 August 2017 1:13 PM
To: CiscoNSP List
Cc: James Bensley; cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] OSPF equal cost load balancing

What if you were to setup four BDIs running OSPF/MPLS across these two physical 
interfaces. Two BDIs per physical interface. Would that make ECMP work 
correctly using an ASR920?

We're going to be in the same boat soon too.. ASR920's on both sides with OSPF 
across two physical paths and worried about load sharing. Most of our traffic 
is MPLS xconnects traversing these links (licensed backhauls).


On Wed, Aug 30, 2017 at 6:35 PM, CiscoNSP List 
<cisconsp_l...@hotmail.com<mailto:cisconsp_l...@hotmail.com>> wrote:
Aah - Thank you James!  So the ASR920 will not ECMP over 2 links, it requires 
4... that would explain the difference between egress/ingress (and why the 920 
is not working particularly well!)


Yes, this is ECMP, not LAG - So changing the load sharing algorithm can only be 
done globally (As I tried to do it under the individual interfaces, and was 
only presented with per dst as an option)


(config-if)#ip load-sharing ?
  per-destination  Deterministic distribution


So, changing globally will potentially cause a service disruption? (May need to 
do this in maintenance window) - Do you suggest "include-ports" as a possible 
candidate?


#ip cef load-sharing algorithm ?
  include-ports  Algorithm that includes layer 4 ports
  original   Original algorithm
  tunnel Algorithm for use in tunnel only environments
  universal  Algorithm for use in most environments
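On the "include-ports" question: as a sketch under the assumption that the algorithm simply folds the L4 ports into the hash key (the platform's real hash is not documented here), including ports lets many sessions between the same pair of hosts spread across members instead of collapsing onto one link.

```python
# Compare an IP-only hash key with an IP+L4-port key (illustrative crc32,
# not the platform's real algorithm).
import zlib

def pick(key: bytes, links: list) -> str:
    return links[zlib.crc32(key) % len(links)]

links = ["gi0/0/20", "gi0/0/21", "gi0/0/22", "gi0/0/23"]
src, dst = b"10.0.0.1", b"203.0.113.9"

# IP-only key: every session between this pair of hosts uses one link.
ip_only = {pick(src + dst, links) for _ in range(1000)}

# IP+port key: sessions between the same hosts can use different links.
with_ports = {pick(src + dst + port.to_bytes(2, "big"), links)
              for port in range(1024, 2024)}

print(len(ip_only), len(with_ports))
```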

And yes, we are running MPLS over these links (But not a LAG as mentioned) - So 
does your comment re MPLS hashing still apply to our setup, or only to a LAG?


Thanks again for your response - Extremely helpful!



From: cisco-nsp 
<cisco-nsp-boun...@puck.nether.net<mailto:cisco-nsp-boun...@puck.nether.net>> 
on behalf of James Bensley <jwbens...@gmail.com<mailto:jwbens...@gmail.com>>
Sent: Thursday, 31 August 2017 6:43 AM
To: cisco-nsp@puck.nether.net<mailto:cisco-nsp@puck.nether.net>
Subject: Re: [c-nsp] OSPF equal cost load balancing

I think two Layer-3 ECMP links are being used here, both of which are in
the IGP. Are you running MPLS over these links too?

The ME3600 is able to ECMP over any number of links as far as I know
(up to the max, which is 8 or 16) however I think the ASR920 will only
ECMP over 4 or 8 links (so not 2 as in your case). This could be the
problem here.

Could you also try to change the CEF load balancing algorithm
(assuming this is ECMP and not LAG, this won't affect a LAG):

ASR920(config)#ip cef load-sharing algorithm ?
  include-ports  Algorithm that includes layer 4 ports
  original   Original algorithm
  tunnel Algorithm for use in tunnel only environments
  universal  Algorithm for use in most environments

If it is a LAG then on the ASR920 try to adjust these options:

ASR920(config)#port-channel load-balance-hash-algo ?
  dst-ip Destination IP
  dst-macDestination MAC
  src-dst-ip Source XOR Destination IP Addr
  src-dst-macSource XOR Destination MAC
  src-dst-mixed-ip-port  Source XOR Destination Port, IP addr
  src-ip Source IP
  src-macSource MAC

If you're running MPLS over the LAG, the ASR920 can hash on the MPLS
labels and should hash over 2 links just fine.
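For the LAG case, a hedged sketch of what the src-dst-ip keyword implies (an XOR of the two addresses feeding member selection; the actual ASR920 hash is not specified here):

```python
# src-dst-ip style hashing: XOR the source and destination addresses and
# use the result to select a LAG member (illustrative only).
import ipaddress

def lag_member(src: str, dst: str, members: list) -> str:
    x = int(ipaddress.ip_address(src)) ^ int(ipaddress.ip_address(dst))
    return members[x % len(members)]

members = ["gi0/0/22", "gi0/0/23"]
print(lag_member("10.0.0.1", "10.0.0.9", members))
```

Note that an XOR of the addresses is symmetric, so both directions of a conversation hash to the same member; with only a handful of router addresses on these links the key space is tiny, which is another way traffic can polarise.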

Cheers,
James.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

Re: [c-nsp] OSPF equal cost load balancing

2017-08-30 Thread CBL
What if you were to setup four BDIs running OSPF/MPLS across these two
physical interfaces. Two BDIs per physical interface. Would that make ECMP
work correctly using an ASR920?

We're going to be in the same boat soon too.. ASR920's on both sides with
OSPF across two physical paths and worried about load sharing. Most of our
traffic is MPLS xconnects traversing these links (licensed backhauls).
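
A minimal sketch of the four-BDI idea (the VLAN IDs, BDI numbers and addressing 
below are all hypothetical, and this assumes the 920's ECMP hash actually 
behaves better over four paths than two):

```
! Hedged sketch - two EVCs/BDIs on one physical port; repeat on the second
! port (e.g. BDIs 103/104) for four equal-cost OSPF paths in total.
interface GigabitEthernet0/0/1
 service instance 101 ethernet
  encapsulation dot1q 101
  rewrite ingress tag pop 1 symmetric
  bridge-domain 101
 !
 service instance 102 ethernet
  encapsulation dot1q 102
  rewrite ingress tag pop 1 symmetric
  bridge-domain 102
!
interface BDI101
 ip address 192.0.2.0 255.255.255.254
 ip ospf network point-to-point
 mpls ip
!
interface BDI102
 ip address 192.0.2.2 255.255.255.254
 ip ospf network point-to-point
 mpls ip
```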


On Wed, Aug 30, 2017 at 6:35 PM, CiscoNSP List <cisconsp_l...@hotmail.com>
wrote:

> AAh - Thank you James!  So the ASR920 will not ECMP over 2 links, it
> requires 4...that would explain the difference between egress/ingress (and
> why the 920 is not working particularly well!)
>
>
> Yes, this is ECMP, not LAG - So changing the load sharing algorithm can
> only be done globally (As I tried to do it under the individual interfaces,
> and was only presented with per dst as an option)
>
>
> (config-if)#ip load-sharing ?
>   per-destination  Deterministic distribution
>
>
> So, changing globally will potentially cause a service disruption? (May
> need to do this in maintenance window) - Do you suggest "include-ports" as
> a possible candidate?
>
>
> #ip cef load-sharing algorithm ?
>   include-ports  Algorithm that includes layer 4 ports
>   original   Original algorithm
>   tunnel Algorithm for use in tunnel only environments
>   universal  Algorithm for use in most environments
>
> And yes, we are running MPLS over these links (But not a LAG as mentioned)
> - So does your comment re MPLS hashing still apply to our setup, or only to
> a LAG?
>
>
> Thanks again for your response - Extremely helpful!
>
>
> 
> From: cisco-nsp <cisco-nsp-boun...@puck.nether.net> on behalf of James
> Bensley <jwbens...@gmail.com>
> Sent: Thursday, 31 August 2017 6:43 AM
> To: cisco-nsp@puck.nether.net
> Subject: Re: [c-nsp] OSPF equal cost load balancing
>
> I think two Layer 3 ECMP links are being used here, both of which are in
> the IGP. Are you running MPLS over these links too?
>
> The ME3600 is able to ECMP over any number of links as far as I know
> (up to the max, which is 8 or 16) however I think the ASR920 will only
> ECMP over 4 or 8 links (so not 2 as in your case). This could be the
> problem here.
>
> Could you also try to change the CEF load balancing algorithm
> (assuming this is ECMP and not LAG, this won't affect a LAG):
>
> ASR920(config)#ip cef load-sharing algorithm ?
>   include-ports  Algorithm that includes layer 4 ports
>   original   Original algorithm
>   tunnel Algorithm for use in tunnel only environments
>   universal  Algorithm for use in most environments
>
> If it is a LAG then on the ASR920 try to adjust these options:
>
> ASR920(config)#port-channel load-balance-hash-algo ?
>   dst-ip Destination IP
>   dst-macDestination MAC
>   src-dst-ip Source XOR Destination IP Addr
>   src-dst-macSource XOR Destination MAC
>   src-dst-mixed-ip-port  Source XOR Destination Port, IP addr
>   src-ip Source IP
>   src-macSource MAC
>
> If you're running MPLS over the LAG the ASR920 can hash MPLS over the
> LAG and the ASR920 should hash over 2 links just fine.
>
> Cheers,
> James.
> ___
> cisco-nsp mailing list  cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] OSPF equal cost load balancing

2017-08-30 Thread CiscoNSP List

Hi Pshem - No, only L3VPN and "standard" Inet links


cheers



From: Pshem Kowalczyk <pshe...@gmail.com>
Sent: Thursday, 31 August 2017 6:51 AM
To: CiscoNSP List; cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] OSPF equal cost load balancing

Are you running L2VPN traffic across those ECMP links?

kind regards
Pshem


On Wed, 30 Aug 2017 at 16:59 CiscoNSP List <cisconsp_l...@hotmail.com> wrote:
Hi Everyone,


Have an ASR920 connected to an ME3600 with 2 x 1Gb links with the same OSPF cost 
(It was a single 1Gb, but a secondary 1Gb was added as utilization was getting 
close to 1Gb) - Was hoping for at least a partial balance of traffic across the 
2 links, but egress from the ASR920 to the ME3600 (ingress to customers), we are 
seeing one of the 1Gb links basically maxing out, and the other doing virtually 
nothing (10-15Mb/sec)...in the other direction we are seeing pretty much a 50:50 
balance across the 2 x 1Gb links...I know the per-dest algorithm is used, and 
know that there are only a few big bandwidth users on the ME3600, but I can't 
understand why basically "all" of the traffic is going down one link?


Is there any way to "tweak" the load-sharing of the equal cost paths? (I can only 
see per-dst as an option)


Is an L3 etherchannel going to be any "better" at load-balancing than the 
current OSPF equal cost?  (we have VoIP running over these links, so want to 
avoid packet delivery order issues)


Is TE a potential solution in this case?


We can't go 10G unfortunately, as the ME3600s don't have 10G ports unlocked, and 
they are earmarked for retirement - so we're stuck with multiple 1G links as a 
short-term fix 


Appreciate any feedback/suggestions.


Thanks
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

Re: [c-nsp] OSPF equal cost load balancing

2017-08-30 Thread CiscoNSP List
AAh - Thank you James!  So the ASR920 will not ECMP over 2 links, it requires 
4...that would explain the difference between egress/ingress (and why the 920 
is not working particularly well!)


Yes, this is ECMP, not LAG - So changing the load sharing algorithm can only be 
done globally (As I tried to do it under the individual interfaces, and was 
only presented with per dst as an option)


(config-if)#ip load-sharing ?
  per-destination  Deterministic distribution


So, changing globally will potentially cause a service disruption? (May need to 
do this in maintenance window) - Do you suggest "include-ports" as a possible 
candidate?


#ip cef load-sharing algorithm ?
  include-ports  Algorithm that includes layer 4 ports
  original   Original algorithm
  tunnel Algorithm for use in tunnel only environments
  universal  Algorithm for use in most environments
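
If the global change is attempted, a minimal sketch might look like the 
following (the `source destination` keywords are an assumption - the available 
sub-options vary by IOS/IOS-XE release, so check with `?` first):

```
! Hedged sketch - applied globally, not per interface. Existing flows are
! re-hashed when the algorithm changes, so a maintenance window is prudent.
ASR920(config)# ip cef load-sharing algorithm include-ports source destination
```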

And yes, we are running MPLS over these links (But not a LAG as mentioned) - So 
does your comment re MPLS hashing still apply to our setup, or only to a LAG?


Thanks again for your response - Extremely helpful!



From: cisco-nsp <cisco-nsp-boun...@puck.nether.net> on behalf of James Bensley 
<jwbens...@gmail.com>
Sent: Thursday, 31 August 2017 6:43 AM
To: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] OSPF equal cost load balancing

I think two Layer 3 ECMP links are being used here, both of which are in
the IGP. Are you running MPLS over these links too?

The ME3600 is able to ECMP over any number of links as far as I know
(up to the max, which is 8 or 16) however I think the ASR920 will only
ECMP over 4 or 8 links (so not 2 as in your case). This could be the
problem here.

Could you also try to change the CEF load balancing algorithm
(assuming this is ECMP and not LAG, this won't affect a LAG):

ASR920(config)#ip cef load-sharing algorithm ?
  include-ports  Algorithm that includes layer 4 ports
  original   Original algorithm
  tunnel Algorithm for use in tunnel only environments
  universal  Algorithm for use in most environments

If it is a LAG then on the ASR920 try to adjust these options:

ASR920(config)#port-channel load-balance-hash-algo ?
  dst-ip Destination IP
  dst-macDestination MAC
  src-dst-ip Source XOR Destination IP Addr
  src-dst-macSource XOR Destination MAC
  src-dst-mixed-ip-port  Source XOR Destination Port, IP addr
  src-ip Source IP
  src-macSource MAC

If you're running MPLS over the LAG the ASR920 can hash MPLS over the
LAG and the ASR920 should hash over 2 links just fine.

Cheers,
James.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] OSPF equal cost load balancing

2017-08-30 Thread CiscoNSP List
ll|explicit-null]
  FRR Primary (0x3D51B980)


<15 > label [explicit-null|explicit-null]
  FRR Primary (0x3D51BA40)


  Subblocks:
None


So, all looks OK from a load sharing perspective, but the majority of traffic 
still goes via Gi0/0/22...so I was wondering if an L3 etherchannel may provide 
some "better" balance as it has more balancing algorithm options to choose from? 
(Or potentially setting up TE, but I think this may be overkill, and not 
provide more benefit?)

Cheers




From: Aaron Gould <aar...@gvtc.com>
Sent: Thursday, 31 August 2017 5:19 AM
To: 'CiscoNSP List'; cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] OSPF equal cost load balancing

Are you doing a 2-port etherchannel between the 920 and 3600?  Asking since 
you seem to be asking questions about etherchannel load balancing and hashing

...or...

Are you doing 2 separate layer 3 subnets between the 920 and 3600?  Asking 
since your subject heading (OSPF equal cost LB) implies so.

...you might be confusing/mixing 2 different subjects and how-to's in the same 
explanation.

I think you mentioned the 920 is network side and the 3600 is closer to the 
customer... if so, please go to the 920 and show a customer route on the 3600 
that you wish would load balance... sanitize your output to protect the innocent...

Show ip route a.b.c.d

Show ip arp of next hop

If it goes via L2

Show mac-address-table address ..
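
Written out as concrete commands, the steps above might look like this (the 
prefix, next-hop and MAC address are placeholders):

```
920# show ip route 203.0.113.1
920# show ip cef 203.0.113.1 detail       ! should list both ECMP next hops
920# show ip arp 192.0.2.1
920# show mac-address-table address 0012.3456.789a
```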


-Aaron


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

Re: [c-nsp] OSPF equal cost load balancing

2017-08-30 Thread Pshem Kowalczyk
Are you running L2VPN traffic across those ECMP links?

kind regards
Pshem


On Wed, 30 Aug 2017 at 16:59 CiscoNSP List wrote:

> Hi Everyone,
>
>
> Have an ASR920 connected to an ME3600 with 2 x 1Gb links with the same OSPF
> cost (It was a single 1Gb, but a secondary 1Gb was added as utilization was
> getting close to 1Gb) - Was hoping for at least a partial balance of
> traffic across the 2 links, but egress from the ASR920 to the ME3600 (ingress
> to customers), we are seeing one of the 1Gb links basically maxing out, and
> the other doing virtually nothing (10-15Mb/sec)...in the other direction we
> are seeing pretty much a 50:50 balance across the 2 x 1Gb links...I know the
> per-dest algorithm is used, and know that there are only a few big
> bandwidth users on the ME3600, but I can't understand why basically "all" of
> the traffic is going down one link?
>
>
> Is there any way to "tweak" the load-sharing of the equal cost paths? (I can
> only see per-dst as an option)
>
>
> Is an L3 etherchannel going to be any "better" at load-balancing than the
> current OSPF equal cost?  (we have VoIP running over these links, so want
> to avoid packet delivery order issues)
>
>
> Is TE a potential solution in this case?
>
>
> We can't go 10G unfortunately, as the ME3600s don't have 10G ports
> unlocked, and they are earmarked for retirement - so we're stuck with
> multiple 1G links as a short-term fix 
>
>
> Appreciate any feedback/suggestions.
>
>
> Thanks
> ___
> cisco-nsp mailing list  cisco-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

Re: [c-nsp] OSPF equal cost load balancing

2017-08-30 Thread James Bensley
I think two Layer 3 ECMP links are being used here, both of which are in
the IGP. Are you running MPLS over these links too?

The ME3600 is able to ECMP over any number of links as far as I know
(up to the max, which is 8 or 16) however I think the ASR920 will only
ECMP over 4 or 8 links (so not 2 as in your case). This could be the
problem here.

Could you also try to change the CEF load balancing algorithm
(assuming this is ECMP and not LAG, this won't affect a LAG):

ASR920(config)#ip cef load-sharing algorithm ?
  include-ports  Algorithm that includes layer 4 ports
  original   Original algorithm
  tunnel Algorithm for use in tunnel only environments
  universal  Algorithm for use in most environments

If it is a LAG then on the ASR920 try to adjust these options:

ASR920(config)#port-channel load-balance-hash-algo ?
  dst-ip Destination IP
  dst-macDestination MAC
  src-dst-ip Source XOR Destination IP Addr
  src-dst-macSource XOR Destination MAC
  src-dst-mixed-ip-port  Source XOR Destination Port, IP addr
  src-ip Source IP
  src-macSource MAC

If you're running MPLS over the LAG the ASR920 can hash MPLS over the
LAG and the ASR920 should hash over 2 links just fine.

Cheers,
James.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] OSPF equal cost load balancing

2017-08-30 Thread Aaron Gould
Are you doing a 2-port etherchannel between the 920 and 3600?  Asking since 
you seem to be asking questions about etherchannel load balancing and hashing

...or...

Are you doing 2 separate layer 3 subnets between the 920 and 3600?  Asking 
since your subject heading (OSPF equal cost LB) implies so.

...you might be confusing/mixing 2 different subjects and how-to's in the same 
explanation.

I think you mentioned the 920 is network side and the 3600 is closer to the 
customer... if so, please go to the 920 and show a customer route on the 3600 
that you wish would load balance... sanitize your output to protect the innocent...

Show ip route a.b.c.d

Show ip arp of next hop

If it goes via L2

Show mac-address-table address ..


-Aaron


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] OSPF equal cost load balancing

2017-08-29 Thread CiscoNSP List
Hi Everyone,


Have an ASR920 connected to an ME3600 with 2 x 1Gb links with the same OSPF cost 
(It was a single 1Gb, but a secondary 1Gb was added as utilization was getting 
close to 1Gb) - Was hoping for at least a partial balance of traffic across the 
2 links, but egress from the ASR920 to the ME3600 (ingress to customers), we are 
seeing one of the 1Gb links basically maxing out, and the other doing virtually 
nothing (10-15Mb/sec)...in the other direction we are seeing pretty much a 50:50 
balance across the 2 x 1Gb links...I know the per-dest algorithm is used, and 
know that there are only a few big bandwidth users on the ME3600, but I can't 
understand why basically "all" of the traffic is going down one link?
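
The skew described here is easy to reproduce in a toy model: with 
per-destination hashing and only a handful of heavy flows, the split across two 
links can easily be far from 50:50. A minimal sketch (the CRC32 hash and the 
addresses are illustrative stand-ins, not Cisco's actual CEF algorithm):

```python
import zlib

def pick_link(src, dst, n_links=2):
    # Toy stand-in for a per-destination hash: CRC32 over the flow's endpoints.
    return zlib.crc32(f"{src}->{dst}".encode()) % n_links

# Three made-up "heavy" flows towards customers behind the ME3600.
flows = [("10.0.0.1", "192.0.2.10"),
         ("10.0.0.1", "192.0.2.20"),
         ("10.0.0.1", "192.0.2.30")]

buckets = [0, 0]
for src, dst in flows:
    buckets[pick_link(src, dst)] += 1

# With only three flows the best possible split is 2:1, and 3:0 is entirely
# possible - exactly the "one link maxed out" symptom.
print(buckets)
```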


Is there any way to "tweak" the load-sharing of the equal cost paths? (I can only 
see per-dst as an option)


Is an L3 etherchannel going to be any "better" at load-balancing than the 
current OSPF equal cost?  (we have VoIP running over these links, so want to 
avoid packet delivery order issues)


Is TE a potential solution in this case?


We can't go 10G unfortunately, as the ME3600s don't have 10G ports unlocked, and 
they are earmarked for retirement - so we're stuck with multiple 1G links as a 
short-term fix 


Appreciate any feedback/suggestions.


Thanks
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/