We have a box with the above specifications in production. Four interfaces are in use: two ATM, two GigE.

SLOT 0  (RP/LC 0 ): Route Processor
        Route Memory: MEM-GRP-512=
SLOT 2  (RP/LC 2 ): 4 port ATM Over SONET OC12c/STM-4c Multi Mode
        Processor Memory: MEM-GRP/LC-256=
        Packet Memory: MEM-LC1-PKT-256=
  L3 Engine: 2 - Backbone OC48 (2.5 Gbps)
SLOT 5  (RP/LC 5 ): 3 Port Gigabit Ethernet
        Processor Memory: MEM-GRP/LC-256=
        Packet Memory: MEM-LC1-PKT-256=
  L3 Engine: 2 - Backbone OC48 (2.5 Gbps)

For the life of us, we can't get more than 60 Mbps sustained across the ATM links when testing with iperf, so we're trying to figure out whether the GSR simply can't push any more than it's already doing or whether something else is afoot.

CPU doesn't seem to be running too hot:

CPU utilization for five seconds: 6%/0%; one minute: 20%; five minutes: 19%

Interface utilization seems reasonable.

bdr1.nyc-hudson-12008#show int a2/0 load

      Interface                   bits/sec     pack/sec
 --------------------           ------------  ----------
 AT2/0                 Tx           48464000      14099
                       Rx          104808000      18012
bdr1.nyc-hudson-12008#show int a2/1 load

      Interface                   bits/sec     pack/sec
 --------------------           ------------  ----------
 AT2/1                 Tx           57581000      13032
                       Rx          116319000      14466
bdr1.nyc-hudson-12008#show int g5/0 load

      Interface                   bits/sec     pack/sec
 --------------------           ------------  ----------
 Gi5/0                 Tx           56851000       8981
                       Rx           35082000       7833
bdr1.nyc-hudson-12008#show int g5/1 load

      Interface                   bits/sec     pack/sec
 --------------------           ------------  ----------
 Gi5/1                 Tx          166072000      23424
                       Rx           70951000      19116
bdr1.nyc-hudson-12008#

Total Throughput (bits/sec): 656128000
Total PPS (packets/sec):     118963
Average Packet Size (B):     689.4
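For anyone who wants to check the math, the totals above can be reproduced from the per-interface `show int ... load` counters; a minimal sketch (values copied from the output above):

```python
# Recompute the aggregate figures from the per-interface Tx/Rx rates.
# Each entry is (bits/sec, packets/sec), taken from "show int ... load".
rates = {
    "AT2/0": {"tx": (48464000, 14099), "rx": (104808000, 18012)},
    "AT2/1": {"tx": (57581000, 13032), "rx": (116319000, 14466)},
    "Gi5/0": {"tx": (56851000, 8981),  "rx": (35082000, 7833)},
    "Gi5/1": {"tx": (166072000, 23424), "rx": (70951000, 19116)},
}

total_bps = sum(bps for ifc in rates.values() for bps, _ in ifc.values())
total_pps = sum(pps for ifc in rates.values() for _, pps in ifc.values())
avg_size_bytes = (total_bps / 8) / total_pps  # bits -> bytes, then per packet

print(total_bps)                  # 656128000
print(total_pps)                  # 118963
print(round(avg_size_bytes, 1))   # 689.4
```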

We've done our due diligence to ensure the portions of the network between the test machine and the ATM circuit can support 100 Mbps, so we're fairly confident our test setup is adequate. We can get ~97 Mbps across other portions of the network (riding GE and 10GE on completely different devices).

Are we pushing this thing to its limits, taking into consideration the packet size versus the total throughput and total PPS?
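One thing worth keeping in mind when comparing iperf numbers on ATM against GE: the ATM cell tax. A rough back-of-the-envelope sketch, assuming AAL5 encapsulation (8-byte trailer, payload padded to a multiple of 48 bytes, 53-byte cells on the wire) and ignoring any LLC/SNAP header, so this is a floor on the overhead:

```python
import math

# Estimate ATM wire overhead for the observed average packet size (~689 B,
# from the aggregate figures above). AAL5 appends an 8-byte trailer, pads the
# PDU to a multiple of 48 bytes, and each 48-byte payload rides in a 53-byte cell.
avg_packet = 689      # bytes; observed average packet size
aal5_trailer = 8

cells = math.ceil((avg_packet + aal5_trailer) / 48)
wire_bytes = cells * 53
overhead = wire_bytes / avg_packet

print(cells)               # 15 cells per packet
print(wire_bytes)          # 795 bytes on the wire
print(round(overhead, 3))  # ~1.154, i.e. ~15% cell tax
```

So roughly 15% of the ATM circuit's line rate is eaten by cell headers, trailers, and padding before any payload moves, which makes a direct Mbps comparison against the GE paths a bit unfair.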
_______________________________________________
cisco-nsp mailing list  [email protected]
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/