Re: [c-nsp] CPU comparison - bridge vs. route on 7206?

2009-07-02 Thread Sam Stickland

Chris Hale wrote:

We have a set of 7206VXR's, NPE400 CPUs on each end of a point to point OC3
using PA-POS-OC3 cards.  We bridge these circuits through a PA-GE interface
(essentially turning the 7206's into a OC-3 to GigE converter) with a single
bridge group.

We are trying to push nearly 130-140Mbps, but per the MRTG graphs, we seem
to be capping @ ~110Mbps.  The CPU is also averaging 80-90%.  We're seeing a
large number of input errors (ignored, total of 5% of input packets) and a
fair amount of output pauses (0.12% of output packets).
On a slightly different tack, make sure you are using 64-bit counters in 
MRTG or you will never record more than ~114 Mbps (the MRTG graph will 
wrap). (Probably you already know this, but I was struck by the 
similarity between ~110 Mbps and 114 Mbps.)
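
For reference, the wrap math: a 32-bit octet counter polled every 300
seconds can only count 2^32 * 8 / 300 ~= 114.5 Mbps before it rolls over,
which is suspiciously close to your ceiling. Getting the 64-bit ifHC
counters means polling with SNMPv2c, i.e. appending the SNMP version to
the MRTG target line, something like this (ifIndex and community made up):

Target[gige]: 2:public@router:::::2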


Sam
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] CPU comparison - bridge vs. route on 7206?

2009-07-02 Thread Michael Ulitskiy
Could you please elaborate on the PA-GE issues? Or maybe you could provide
some pointers to where they're described? We're using quite a few of those
with traffic rates anywhere from 50M to 100M and I haven't noticed any
issues so far, but the traffic rate is increasing and I'd really like to
know what to expect in the future, especially if there are any known caveats.
Thank you,

Michael

On Wednesday 01 July 2009 01:41:44 pm Rodney Dunn wrote:
 The PA-GE has issues at higher speeds.
 
 You should move to L2TPv3 and see if it's better in regard
 to performance. Your best bet would be pure L3 forwarding.
 
 If the PA-GE is the issue you will have to get off that PA.
 
 What happens if you move it to one of the onboard GigE ports on the NPE-400?
 
 Rodney
 


Re: [c-nsp] CPU comparison - bridge vs. route on 7206?

2009-07-02 Thread Rodney Dunn
Michael,

I can't find the performance document I saw once before. I'm still trying
to track it down.

If you want real GigE you should go with the ASR1000. Even the G1 GE ports
will have problems at high rates with any features enabled.

Rodney



Re: [c-nsp] CPU comparison - bridge vs. route on 7206?

2009-07-02 Thread Rodney Dunn
I found what I was looking for. The test was on older code but in concept it
still applies.

Going from one native GigE port to another native GigE port on the
G1, you are looking at around 470 kpps per direction (double it, 940 kpps,
for bi-directional) at 64-byte packets with NO features.

At 1500-byte packets it can pretty much fill up the gig in both directions
without dropping frames...again with no features.

It appears from the test you can just about fill up the links with 256-byte
packets for native GigE to native GigE.
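
As a rough sanity check: with ~20 bytes of preamble/IFG overhead per frame,
GigE line rate at 256-byte frames works out to

  1,000,000,000 / ((256 + 20) * 8) ~= 453,000 pps

which is right around that 470 kpps figure, so the numbers are at least
self-consistent. (At 64 bytes, line rate would be ~1.49 Mpps, so 470 kpps
is roughly a third of line rate.)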

However, with the PA-GE it appears it's around 127 kpps in one direction
(double to get bi-directional) at 64-byte packets, which ends up being about
400 Mbps total (200 M tx and 200 M rx) going from a native GigE port to the
PA-GE.

These are rough numbers from a lab test with absolutely nothing configured.

And this is also from a test set, where there are no micro-bursts from
real-world traffic flows. We've seen that way too many times: some
L3 forwarding switch is connected and it overruns the GigE ability of the
connecting device. That's why the ASR1k is the suggested platform for that
space now, as it can do line-rate GigE.

Hope this helps. As always with performance numbers, YMMV depending on actual
code, configuration, and design.

Rodney



Re: [c-nsp] CPU comparison - bridge vs. route on 7206?

2009-07-02 Thread Rodney Dunn
One note: I'd be really interested to see how it works if you configure
it as an L2TPv3 tunnel to connect the L2 segments vs. bridging it.
The bridge code was never designed for high-speed switching.

Can you try that?
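
Roughly, something like this on each end, binding the GigE to a cross-connect
instead of a bridge-group (peer address, loopback, and VC ID are made up, so
treat it as a sketch only):

pseudowire-class pos-xover
 encapsulation l2tpv3
 ip local interface Loopback0
!
interface GigabitEthernet1/0
 no ip address
 xconnect 10.0.0.2 100 pw-class pos-xover

with the mirror-image peer address on the far router.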

Rodney



Re: [c-nsp] CPU comparison - bridge vs. route on 7206?

2009-07-02 Thread Chris Hale
Can you give me some sample code for this?  I'm willing to try it, but need
some help!

We moved to routed mode with plain static routing, and the customer is still
seeing issues.  CPU dropped about 15-20%, but we're still being overrun
everywhere...  One side is using the GE on the IO card, and the other side
is using a PA-GE.  I'm trying to muster up some NPE-G1's for testing as
well, but if this is a buffer problem, will there be any difference between
the onboard GigE ports on the NPE-G1 vs. the PA-GE or IO/GE?

navisite#sho proc cpu hist

navisite   11:21:24 AM Sunday Apr 2 2000 UTC

[CPU% per second (last 60 seconds): peaks around 70%]
[CPU% per minute (last 60 minutes): averages around 70%, maximums mostly 60-80%]
[CPU% per hour (last 72 hours): averages roughly 40-70%, maximums up to 100%]

navisite#sh int gigabitEthernet 0/0
GigabitEthernet0/0 is up, line protocol is up
  Hardware is i82543 (Livengood), address is 000f.8f58.3908 (bia 000f.8f58.3908)
  Internet address is 10.10.254.25/30
  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 20/255, rxload 29/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, link type is autonegotiation, media type is T
  output flow-control is XON, input flow-control is XON
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of show interface counters never
  Input queue: 2/75/0/0 (size/max/drops/flushes); Total output drops: 82
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 114705000 bits/sec, 33699 packets/sec
  5 minute output rate 79291000 bits/sec, 32889 packets/sec
     3562588727 packets input, 3062002285 bytes, 0 no buffer
     Received 7861538 broadcasts, 0 runts, 0 giants, 0 throttles
     297165303 input errors, 0 CRC, 0 frame, 5842451 overrun, 291322852 ignored
     0 watchdog, 5171889 multicast, 0 pause input
     0 input packets with dribble condition detected
     1554205161 packets output, 3202662663 bytes, 0 underruns
     10 output errors, 0 collisions, 1 interface resets
     0 babbles, 0 late collision, 0 deferred
     10 lost carrier, 0 no carrier, 56190635 pause output
     0 output buffer failures, 0 output buffers swapped out




Re: [c-nsp] CPU comparison - bridge vs. route on 7206?

2009-07-02 Thread Michael Ulitskiy
Rodney,

Thanks for the reply. Please let me clarify it a little.
So you're saying that switching packets through the PA-GE involves about 3.5
times more processing overhead than switching them through a native port (by
native port you mean the G1/G2 built-in one, right?), hence pps goes down
from 470 kpps to 127 kpps. Is that right?
I always thought that for a software-based platform max pps is a function of
the CPU. Do you think these figures can be improved in a G2 chassis?
Thanks,

Michael


Re: [c-nsp] CPU comparison - bridge vs. route on 7206?

2009-07-02 Thread Christopher E. Brown

IIRC the 7000 series PA buses are derived from classic PCI tech, or
something similar. It is a simplex bus limited to around 600 Mbit.


This imposes a 600 Mbit (minus overhead) simplex burst limit on the bus.


Microbursts are an issue; the bus and the CPU limit how fast the buffers
on the PA can be drained.
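
To put rough numbers on it: a 1 ms burst arriving at GigE line rate is about
125 KB; if the path off the PA drains at best ~600 Mbit, only ~75 KB can
leave in that same millisecond, so the PA has to buffer the ~50 KB
difference or drop. Burst for burst, that deficit is what shows up as
overruns/ignores on the input side. (Illustrative numbers only.)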


Personally, I treat NPE-400 systems as capable of 100Mbit full duplex
average flow and NPE-G1 as capable of 200Mbit.  This leaves some
headroom for peaks/etc, as they both can (more or less) handle twice
that for most traffic mixes (assuming a clean/simple config).


I have seen an NPE-400 doing 250 - 300 one way and 50 - 100 the other
between Gig-IO and PA-GE for an extended period of time, but it was
dropping a couple of packets _every_ burst.



Moral of the story...  If you are connecting to things via line-rate
GigE, and those things are happy doing GigE bursts (just about any
modern PC), use something other than a 7200.



[c-nsp] CPU comparison - bridge vs. route on 7206?

2009-07-01 Thread Chris Hale
We have a set of 7206VXR's, NPE400 CPUs on each end of a point to point OC3
using PA-POS-OC3 cards.  We bridge these circuits through a PA-GE interface
(essentially turning the 7206's into a OC-3 to GigE converter) with a single
bridge group.
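
For reference, the config on each box is essentially just the stock
transparent-bridging setup, along these lines (interface numbers
illustrative):

bridge 1 protocol ieee
!
interface POS1/0
 no ip address
 bridge-group 1
!
interface GigabitEthernet1/0
 no ip address
 bridge-group 1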

We are trying to push nearly 130-140Mbps, but per the MRTG graphs, we seem
to be capping @ ~110Mbps.  The CPU is also averaging 80-90%.  We're seeing a
large number of input errors (ignored, total of 5% of input packets) and a
fair amount of output pauses (0.12% of output packets).

GigabitEthernet1/0 is up, line protocol is up
  Hardware is WISEMAN, address is 0016.46e6.1c1c (bia 0016.46e6.1c1c)
  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
 reliability 255/255, txload 36/255, rxload 16/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, link type is autonegotiation, media type is unknown media type
  output flow-control is XON, input flow-control is XON
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of show interface counters 12w0d
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 208
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 66046000 bits/sec, 29231 packets/sec
  30 second output rate 141617000 bits/sec, 31690 packets/sec
 2816822087 packets input, 1367339773 bytes, 0 no buffer
 Received 7138653 broadcasts, 0 runts, 0 giants, 0 throttles
 143326584 input errors, 0 CRC, 0 frame, 481945 overrun, 142844639 ignored
 0 watchdog, 4536607 multicast, 0 pause input
 0 input packets with dribble condition detected
 3993978307 packets output, 979813878 bytes, 0 underruns
 0 output errors, 0 collisions, 0 interface resets
 0 babbles, 0 late collision, 0 deferred
 4 lost carrier, 0 no carrier, 4808187 pause output
 0 output buffer failures, 0 output buffers swapped out
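
(Those percentages are just the counters above: 142,844,639 ignored /
2,816,822,087 packets input ~= 5.1%, and 4,808,187 pause output /
3,993,978,307 packets output ~= 0.12%.)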

If we move this to a routed infrastructure with CEF, can we expect the CPU
to drop considerably?   The routing will be static only, very simple config
with no ACLs, no policy maps, etc.  We're just trying to get the routers to
let us push as much of the OC3 bandwidth as possible.
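
In other words, nothing fancier than something along these lines on each
router (addresses made up):

ip cef
!
interface POS1/0
 ip address 192.0.2.1 255.255.255.252
!
interface GigabitEthernet1/0
 ip address 192.0.2.5 255.255.255.252
!
ip route 0.0.0.0 0.0.0.0 192.0.2.2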

We would rather not upgrade the NPE400's if possible.  The internal LAN
equipment is Nortel L3 switches which don't seem to support flow-control.

Thanks in advance for any ideas.

Chris

-- 
--
Chris Hale
chal...@gmail.com


Re: [c-nsp] CPU comparison - bridge vs. route on 7206?

2009-07-01 Thread Rodney Dunn
The PA-GE has issues at higher speeds.

You should move to L2TPv3 and see if it's better in regard
to performance. Your best bet would be pure L3 forwarding.

If the PA-GE is the issue you will have to get off that PA.

What happens if you move it to one of the onboard GigE ports on the NPE-400?

Rodney



Re: [c-nsp] CPU comparison - bridge vs. route on 7206?

2009-07-01 Thread Jay Hennigan

Rodney Dunn wrote:

The PA-GE has issues at higher speeds.

You should move to L2TPv3 and see if it's better in regard
to performance. Your best bet would be pure L3 forwarding.

If the PA-GE is the issue you will have to get off that PA.

What happens if you move it to one of the onboard GigE ports on the NPE-400?


There aren't any onboard gigE ports on an NPE-400.  You need NPE-G1 for 
those.


--
Jay Hennigan - CCIE #7880 - Network Engineering - j...@impulse.net
Impulse Internet Service  -  http://www.impulse.net/
Your local telephone and internet company - 805 884-6323 - WB6RDV


Re: [c-nsp] CPU comparison - bridge vs. route on 7206?

2009-07-01 Thread Rodney Dunn
I couldn't remember, so I looked at a picture and thought I saw that it did
have one.

They would need the G1/G2 then.

Or maybe go to routed mode.

Rodney

