Just a quick update on this - I was in the process of converting 2 of the links 
to a port-channel (removed the IP address from the port taking the majority of 
the traffic, gi0/0/20), and noticed it started load-balancing over the now "3" 
ECMP links far better:


#sh interfaces gigabitEthernet 0/0/21 | include 30 sec
  30 second input rate 33251000 bits/sec, 8355 packets/sec
  30 second output rate 265517000 bits/sec, 30692 packets/sec
#sh interfaces gigabitEthernet 0/0/22 | include 30 sec
  30 second input rate 26199000 bits/sec, 4643 packets/sec
  30 second output rate 84239000 bits/sec, 13864 packets/sec
#sh interfaces gigabitEthernet 0/0/23 | include 30 sec
  30 second input rate 12839000 bits/sec, 3794 packets/sec
  30 second output rate 56293000 bits/sec, 7668 packets/sec


As soon as I re-add the 4th port, balancing falls apart again, and nearly all 
traffic is sent via gi0/0/20:


#sh interfaces gigabitEthernet 0/0/20 | include 30 sec
  30 second input rate 16863000 bits/sec, 5516 packets/sec
  30 second output rate 405225000 bits/sec, 52284 packets/sec
#sh interfaces gigabitEthernet 0/0/21 | include 30 sec
  30 second input rate 26944000 bits/sec, 4450 packets/sec
  30 second output rate 3366000 bits/sec, 417 packets/sec
#sh interfaces gigabitEthernet 0/0/22 | include 30 sec
  30 second input rate 17212000 bits/sec, 3911 packets/sec
  30 second output rate 6943000 bits/sec, 866 packets/sec
#sh interfaces gigabitEthernet 0/0/23 | include 30 sec
  30 second input rate 20943000 bits/sec, 4190 packets/sec
  30 second output rate 518000 bits/sec, 94 packets/sec


So, it does not like balancing over 4 links - 3 links is far better.

So, I also tried reducing it to 2 links - and the balance is also much better 
(not perfect, but much better than with 4 links):

sh interfaces gigabitEthernet 0/0/22 | include 30 sec
  30 second input rate 57711000 bits/sec, 8997 packets/sec
  30 second output rate 109940000 bits/sec, 20114 packets/sec
sh interfaces gigabitEthernet 0/0/23 | include 30 sec
  30 second input rate 40999000 bits/sec, 9508 packets/sec
  30 second output rate 346398000 bits/sec, 35224 packets/sec

sh interfaces gigabitEthernet 0/0/22 | include 30 sec
  30 second input rate 52511000 bits/sec, 8699 packets/sec
  30 second output rate 126974000 bits/sec, 21239 packets/sec
sh interfaces gigabitEthernet 0/0/23 | include 30 sec
  30 second input rate 37910000 bits/sec, 9901 packets/sec
  30 second output rate 334954000 bits/sec, 34687 packets/sec

If it can maintain those kinds of ratios, I can live with it.....why it doesn't 
like 4 ports, and originally didn't like 2 ports, but now appears to balance 
over 2 "better", I'd love to know 😊
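For what it's worth, part of the lopsidedness above may just be inherent to per-flow hashing: even when flows are spread evenly across the members, a handful of elephant flows pinned to one link can dominate the byte counts. Below is a toy simulation of that effect - it is NOT the actual CEF hash (which is platform-specific and mixes in a per-router unique ID); the flow-size distribution and hash are made up purely for illustration:

```python
import random
from collections import defaultdict

def link_loads(n_links: int, n_flows: int = 200, seed: int = 7):
    """Toy model of per-flow ECMP: flows get heavy-tailed sizes and
    each flow is pinned to one link by a hash of its (src, dst).
    Illustrative only - not the real CEF load-sharing algorithm."""
    rng = random.Random(seed)
    loads = defaultdict(int)
    for _ in range(n_flows):
        size = int(rng.paretovariate(1.2) * 1000)  # a few elephant flows
        flow = (rng.getrandbits(32), rng.getrandbits(32))
        loads[hash(flow) % n_links] += size       # flow pinned to one link
    return [loads[i] for i in range(n_links)]

for n in (2, 3, 4):
    loads = link_loads(n)
    print(n, "links -> busiest link carries %.0f%% of bytes"
          % (100 * max(loads) / sum(loads)))
```

With per-flow (as opposed to per-packet) sharing, perfectly even byte counts are never guaranteed - the busiest member's share depends on which elephant flows land where, which is why re-hashing (adding/removing a member) can make things dramatically better or worse.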

Cheers.

________________________________
From: cisco-nsp <cisco-nsp-boun...@puck.nether.net> on behalf of CiscoNSP List 
<cisconsp_l...@hotmail.com>
Sent: Friday, 1 September 2017 8:55 AM
To: Aaron Gould; 'James Bensley'; cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] OSPF equal cost load balancing

Hmm - it can't be - it's not just one next-hop that all the traffic is heading 
to....i.e. there are 3 or 4 destination routers (2 ASR1001s, and the ME3600s 
(2 of those))...so 4 next-hop addresses....we can't be that unlucky that every 
one of those addresses is being mapped to gi0/0/20...no, just checked, and it 
arbitrarily changes based on src IP....but that could just be CEF 
mis-reporting....very frustrating.
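The src-IP dependence seen here is expected behaviour for hash-based ECMP: the chosen member is a function of (src, dst), so changing only the source address can move a flow to a different link, which is what `show ip cef exact-route` is reporting per src/dst pair. A hypothetical stand-in for that selection (the real CEF hash is platform-specific; the CRC32 mix and the TEST-NET addresses below are invented for illustration):

```python
import ipaddress
import zlib

def ecmp_member(src: str, dst: str, n_links: int) -> int:
    """Pick an ECMP member from a hash of (src, dst).
    A hypothetical stand-in for what 'show ip cef exact-route'
    reports - the mixing function here is NOT Cisco's."""
    key = int(ipaddress.ip_address(src)) ^ (int(ipaddress.ip_address(dst)) << 1)
    return zlib.crc32(key.to_bytes(9, "big")) % n_links

dst = "203.0.113.10"  # hypothetical destination
for src in ("198.51.100.1", "198.51.100.2", "198.51.100.3"):
    print(src, "->", "member", ecmp_member(src, dst, 4))
```

The point is only that the mapping is deterministic per (src, dst) pair but looks arbitrary across pairs - so exact-route showing different members for different sources is consistent with load sharing, even when the byte counters say one link is carrying most of the traffic.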


________________________________
From: cisco-nsp <cisco-nsp-boun...@puck.nether.net> on behalf of CiscoNSP List 
<cisconsp_l...@hotmail.com>
Sent: Friday, 1 September 2017 8:45 AM
To: Aaron Gould; 'James Bensley'; cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] OSPF equal cost load balancing

Thanks Aaron....that's what I'm going to try shortly.....Very strange how cef 
exact-route reports it as load sharing, but it obviously isn't.....and the 
next-hop link you provided, I still have to read, but I think that is what is 
happening...


________________________________
From: Aaron Gould <aar...@gvtc.com>
Sent: Friday, 1 September 2017 6:37 AM
To: 'CiscoNSP List'; 'James Bensley'; cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] OSPF equal cost load balancing

In my MPLS cloud I usually would LAG dual GigE links together to feed my PE
boxes with more bandwidth.  Worked well for me.

-Aaron

_______________________________________________
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/