Rodney,
With the PA-MC-T3-EC, any idea how much would be offloaded to the PA?
The router is running at about 75% peak average utilization, which is a
bit high considering it's mostly doing routing and not pushing more than
100 Mbit/s. If this is being interrupt switched, I wouldn't expect the EC
PA to help, right?
-Jason
Rodney Dunn wrote:
On Thu, Mar 26, 2009 at 10:30:08AM -0400, Jason Berenson wrote:
Rodney,
It's running: 12.4(18a). I had to downgrade from the latest about 6
months ago because of a bug where 'show policy' would show no output
even if QoS was working properly.
router#show int mul2 stat
Multilink2
          Switching path    Pkts In   Chars In   Pkts Out  Chars Out
               Processor          0          0          0          0
             Route cache      18049    3982931      25553   14234069
                   Total      18049    3982931      25553   14234069
router#show int mul2 stat
Multilink2
          Switching path    Pkts In   Chars In   Pkts Out  Chars Out
               Processor          0          0          0          0
             Route cache      18601    4110973      26553   14682852
                   Total      18601    4110973      26553   14682852
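As a quick sanity check (a sketch, not part of the original thread), you can diff the two snapshots above to confirm which switching path is actually carrying traffic; the numbers below are the two samples quoted in this message:

```python
# Diff two "show int ... stat" snapshots. Tuples are
# (Pkts In, Chars In, Pkts Out, Chars Out), taken from the thread.
snap1 = {"Processor": (0, 0, 0, 0),
         "Route cache": (18049, 3982931, 25553, 14234069)}
snap2 = {"Processor": (0, 0, 0, 0),
         "Route cache": (18601, 4110973, 26553, 14682852)}

for path in snap1:
    delta = tuple(b - a for a, b in zip(snap1[path], snap2[path]))
    print(path, delta)
# Processor stays at zero; all growth is in the Route cache (CEF/fast) path.
```

A nonzero Processor delta would indicate punts to process switching; here everything lands in the Route cache row, which matches Rodney's reading.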
fonseca#show cef in mul 2
Multilink2 is up (if_number 132)
Corresponding hwidb fast_if_number 132
Corresponding hwidb firstsw->if_number 132
Internet address is 10.3.4.229/30
ICMP redirects are always sent
Per packet load-sharing is disabled
IP unicast RPF check is disabled
Inbound access list is not set
Outbound access list is not set
Interface is marked as point to point interface
Hardware idb is Multilink2
Fast switching type 7, interface type 105
IP CEF switching enabled
IP CEF VPN Feature Fast switching turbo vector
IP Null turbo vector
VPN Forwarding table "nypirg"
Input fast flags 0x1000, Input fast flags2 0x0, Output fast flags 0x4000, Output fast flags2 0x0
ifindex 127(127)
Slot -1 Slot unit 2 Unit 2 VC -1
Transmit limit accumulator 0x0 (0x0)
IP MTU 1500
Does that mean that there's no processor switching going on there?
Yep. It's all being interrupt switched so you should be fine.
Why would a VRF make any difference to the MLPPP?
The forwarding vectors are different. But in this code we have the hooks
to do MPLSoMLPPP if that's what you were doing... which you are not.
A VRF interface on a bundle isn't what we call MPLSoMLPPP; that's when
you enable MPLS on the bundle itself.
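For contrast, a minimal sketch of the MPLSoMLPPP case Rodney describes, i.e. MPLS enabled on the bundle itself rather than VRF forwarding; the interface number and addressing here are hypothetical, not from this router's config:

interface Multilink3
 ip address 192.0.2.1 255.255.255.252
 mpls ip
 ppp multilink
 ppp multilink group 3
!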
I see the same outputs for a non-VRF'd MLPPP.
It's working as it should.
With the new PA the overall CPU load would be lower because the MLPPP work
is offloaded to an ASIC on the PA.
-Jason
Rodney Dunn wrote:
You have it in a VRF which really shouldn't cause an issue as it's
tag2ip and ip2tag.
What code is it?
Make sure it's the latest 12.4 mainline as we did some work in 12.4 to make
this work.
Can you get a 'sh int mul 2 stat' after a clear counters...get it a few
times and send it?
Also, what are the other interface configs feeding this bundle?
It could be features on them causing the punts.
What does 'sh cef int' say?
Rodney
On Wed, Mar 25, 2009 at 05:03:03PM -0400, Jason Berenson wrote:
Here's a sample:
interface Multilink2
ip vrf forwarding VPN1
ip address x.x.x.x 255.255.255.252
no cdp enable
ppp multilink
ppp multilink group 2
service-policy output voice
!
interface Serial6/0/25:0
no ip address
encapsulation ppp
down-when-looped
no cdp enable
ppp multilink
ppp multilink group 2
!
interface Serial6/0/26:0
no ip address
encapsulation ppp
down-when-looped
no cdp enable
ppp multilink
ppp multilink group 2
!
-Jason
Rodney Dunn wrote:
The G1s with MLPPP should not be process switching the traffic.
What is the config?
The EC cards just offload the MLPPP to the new asic on the PA.
Rodney
On Wed, Mar 25, 2009 at 04:35:50PM -0400, Jason Berenson wrote:
Greetings,
I've got a 7206VXR NPE-G1 with a bunch of DS3 cards in it (PA-MC-T3).
There are about 25 multilinks with an average of 2 T1s per bundle. I see
a lot of process switching on the router and I have a feeling it's
because we don't have the PA-MC-T3-EC card so the processor has to step
in for the MLPPP.
Is this the case? If I get some PA-MC-T3-EC cards to swap in, will
that take a lot of load off the NPE-G1? Any output needed, please let
me know.
Thanks,
Jason
_______________________________________________
cisco-nsp mailing list cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/