[c-nsp] ASR9000 vlan rewrite

2013-07-25 Thread Lars Eidsheim
I do some vlan rewrites/aggregation at a POP with the configuration below.
Everything works as expected.
However, it would be nice to see which MAC addresses are mapped to each vlan.
My understanding is that the vlan tag is popped on incoming traffic and added
to outgoing traffic. Is it possible to show the mapping table within IOS-XR?

interface TenGigE0/1/0/0.22001 l2transport
encapsulation dot1q 100-399 exact
rewrite ingress tag pop 1 symmetric
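
If the subinterface is attached to a bridge domain, the learned MAC addresses
can usually be inspected there. A rough IOS-XR-style sketch, where CUST:BD100
and the location 0/1/CPU0 are only placeholders and the exact syntax varies by
release:

show l2vpn forwarding bridge-domain CUST:BD100 mac-address location 0/1/CPU0

Note that this only applies if the subinterface is part of a bridge domain; a
point-to-point xconnect does no MAC learning, so there is no table to show.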


Thanks

Lars Eidsheim | INTELLIT





Re: [c-nsp] QoS

2013-07-25 Thread Tony
Is this a trick question?

Every time it sees a packet that matches the criteria you have specified and is
put into your class, it increments the packets counter by 1 and adds the size
of the packet to the bytes counter.
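
For example, taking the deltas between the successive snapshots quoted below:

  16 -  9 = 7 packets,   905 -  520 = 385 bytes
  23 - 16 = 7 packets,  1290 -  905 = 385 bytes
  30 - 23 = 7 packets,  1674 - 1290 = 384 bytes

Each snapshot is just the running total of matched packets and bytes since the
counters were last cleared.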

What is or isn't happening that you're concerned about?



regards,
Tony.







 From: M K gunner_...@live.com
To: cisco-nsp@puck.nether.net cisco-nsp@puck.nether.net 
Sent: Tuesday, 23 July 2013 8:10 PM
Subject: [c-nsp] QoS
 

Hi all,

I have configured QoS between two sites across my backbone. The classification
was done based on telnet traffic and the marking was done based on the
precedence value. I have configured it to mark all telnet traffic with a
precedence value of 3, and I received it fine without any issues.

Now my question is as below. When I first typed telnet 7.7.7.7 and checked the
output of show policy-map interface fastEthernet 1/0 | inc Class|packet at each
step, I saw:

telnet 7.7.7.7
    Class-map: PRECEDENCE_3 (match-all)
        9 packets, 520 bytes
Username : cisco
    Class-map: PRECEDENCE_3 (match-all)
        16 packets, 905 bytes
Password : cisco
    Class-map: PRECEDENCE_3 (match-all)
        23 packets, 1290 bytes
R7exit
    Class-map: PRECEDENCE_3 (match-all)
        30 packets, 1674 bytes

I want to know what is the methodology used to count these numbers?
Thanks


                          


Re: [c-nsp] BGP export filter config help

2013-07-25 Thread Adam Vitkovsky
The policy should look similar to:

route-policy rp_ramjet-export
  if community matches-any cs_local-aggregates or
     community matches-any cs_customer-routes or
     community matches-any cs_customer-attached-routes then
    pass
  else
    drop
  endif
end-policy
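
To take effect it also has to be applied outbound on the relevant BGP session.
A minimal sketch, with the AS number and neighbor address as placeholders:

router bgp 64512
 neighbor 192.0.2.1
  address-family ipv4 unicast
   route-policy rp_ramjet-export out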


adam



Re: [c-nsp] Cisco trunk port startup delay

2013-07-25 Thread Adam Vitkovsky
 @alan - carrier-delay msec 0 did not seem to make any difference, I still
lost 5 pings. I simply enabled it on the single interface I am testing with
and assumed it would kick in.

Doesn't carrier-delay msec 0 affect only the down events, if the up|down
knob happens to be supported?
Have you tried carrier-delay up 0?
Also, for 0 seconds I'd try it without the msec knob.
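
Something along these lines on the interface in question (the interface name is
just a placeholder, and availability of the up/down form depends on platform
and release):

interface GigabitEthernet0/1
 carrier-delay up 0
 ! or, without the msec/up knobs:
 carrier-delay 0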

adam



Re: [c-nsp] ME3800X/ME3600X/ME3600X-24CX/ASR903/ASR901 Deployment Simplification Feedback

2013-07-25 Thread Leigh Harrison
Hi there Waris,

We've got quite a few of the ME3600's deployed now, which we migrated to from a
legacy 3750ME estate.  The big point for us was to move to MPLS access rather
than have any spanning tree knocking about in the Core.

Favoured points from my team are the ease of configuration and the raw speed.
Downsides are port capacity and buggy software.

A denser system of 48 Gig ports and more 10Gb ports would assist greatly, as we
can fill up 24 1Gb ports quite quickly depending on which PoP the system has
been built for.  We tend to ring the 3600's into ASR9K's, and the more rings we
buy, the more 9K 10Gb ports get taken up.  Additional 10Gb ports would be of
great benefit to increase the capacity of each ring we build, rather than
building new rings.  Our provider connections are also moving from 1Gb up to
10Gb, and I need to be able to cater for this towards the Access rather than
the Core.

I would also like to see more horsepower in the systems.  We recently went to 
implement multicasting in VRF and ran into some odd challenges.   We have the 
3600's set up for routing and are about to push 24,000 IPv4 routes.   In our 
busier boxes we have around 9,000 routes, so I'm more than happy with the 
capacity there.  However, in order to turn on 250 MDT routes, we have to drop 
the IPv4 routes down to 12,000.  A sliding scale would be nice for memory 
allocation, but in the face of having 3600's move from 30% full to 60% full in 
the routing table to add in a new feature, we went for a redesign of how we 
delivered the multicasting.

Leigh


 Hi Everyone,
 I have seen lot of good inputs on this mailer. I am collecting 
 feedback for the existing deployment challenges on the following 
 platforms so that we can address them.
 
 -ME3800X
 -ME3600X
 -ME3600X-24CX
 -ASR903
 -ASR901
 -ME3400E



Re: [c-nsp] MPLS down to the CPE

2013-07-25 Thread Adam Vitkovsky
I see, so the islands are stitched together over the CsC L3VPN; since all the
islands have the same AS, together they act like a common AS.
And the CsC L3VPN is provided by the underlying common backbone, Inter-AS MPLS
Option C style.
Right?

So all access nodes within a particular island have RSVP-TE tunnels to the
ABRs/ASBRs within the island (the ASBRs then provide connectivity to other
islands).
And there's a full mesh of tunnels between all ASBRs.
Right?

I'd like to ask: is there a full mesh of iBGP sessions between the ASBRs, or do
some of the ASBRs act as RRs?

So you have decided to create this sort of overlay AS dedicated to L2
services.
I think I understand your reasoning behind the setup and must say it's very
bold and creative.

See, this is what I was talking about before: back in the old days engineers
had to get very creative and bold to create something extraordinary with such
a limited set of features. With today's boxes you could stack it all up into a
single AS without ever worrying about scalability or convergence times.

Thank you very much for sharing the design with us.

adam
-Original Message-
From: Phil Bedard [mailto:phil...@gmail.com] 
Sent: Thursday, July 11, 2013 3:48 AM
To: Adam Vitkovsky; mark.ti...@seacom.mu
Cc: 'Andrew Miehs'; cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] MPLS down to the CPE



On 7/10/13 4:16 AM, Adam Vitkovsky adam.vitkov...@swan.sk wrote:

 the different network islands are tied together using CsC over a
 common MPLS core.
You got me scared for a moment. CsC would mean running a separate
OSPF/LDP/BGP ASN for each area and doing MP-eBGP between the ASBRs within each
area (Option B), or between RRs in each area (Option C), with the core area/AS
acting as a labeled relay for the ASBRs' loopback addresses. Though I believe
that by common MPLS core you mean a single AS, right?

The islands are actually all in the same ASN; the common core is not in the
same ASN.  It could have been the same ASN, but the reasons for it not being
were more political than technical.  In the end it looks like Option C: the CsC
L3VPN only carries loopbacks and aggregate IP prefixes.  The common core is
RSVP-TE based; if I had my preference today I would build TE tunnels across it
between the islands and then use RFC 3107 to tie it all together end to end.
Years ago when we first built it, some of the feature support wasn't there to
do that.


 At the ABR all of the L2VPN services are stitched, since you are
 entering a different RSVP-TE/MPLS domain; the L3VPN configuration
 exists on these nodes, with the access nodes using
 L2 pseudowires into virtual L3 interfaces.
I see, right, that's a clever way to save some money by pushing the
L3VPN stuff to only a few powerful boxes with high-queue line cards and
L3VPN licenses. Though PWHE (a setup where you can actually terminate
the PW into an L3 interface on the same box) was introduced on Cisco
boxes only recently, so prior to that you'd have to have a separate box
bridging the PW to a sub-interface/service instance on a QinQ trunk that
the L3VPN box would be connected to.

I'm still confused about the TE part.
So I believe you are pushing the PWs directly into TE tunnels, which gives you
the ability to balance the PWs around the ring as well as to use a backup
tunnel via the opposite leg of the circuit. So the TE tunnels are actually
terminated on the PWHE nodes, right? Or do they actually continue into the
backbone area?

The tunnels from the access boxes terminate on the PWHE nodes; they do not
extend beyond that boundary.  There is another set of tunnels which connects
the PWHE nodes together.  This isn't a one-off deployment or anything; there
are other folks out there with basically the same type of deployment.

Phil 
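
For reference, the PWHE termination described above looks roughly like the
following on IOS-XR. This is only a sketch; the interface names, VRF, pw-id
and addresses are made-up placeholders, and the exact syntax varies by release:

generic-interface-list PWHE-CORE
 interface TenGigE0/0/0/0
 interface TenGigE0/0/0/1
!
interface PW-Ether100
 vrf CUSTOMER-A
 ipv4 address 192.0.2.1 255.255.255.252
 attach generic-interface-list PWHE-CORE
!
l2vpn
 xconnect group PWHE
  p2p CUSTOMER-A-100
   interface PW-Ether100
   neighbor ipv4 10.0.0.1 pw-id 100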
 







adam






Re: [c-nsp] ME3800X/ME3600X/ME3600X-24CX/ASR903/ASR901 Deployment Simplification Feedback

2013-07-25 Thread Mattias Gyllenvarg
Good point, SDM is just another gotcha. Allocate according to use and
complain in the log when you're getting close to max.

ME3600x + ASR9k FTW! Just make more physical variants of the ME and lower
the price on ASR9k.





-- 
*Med Vänliga Hälsningar*
*Mattias Gyllenvarg*


Re: [c-nsp] ME3800X/ME3600X/ME3600X-24CX/ASR903/ASR901 Deployment Simplification Feedback

2013-07-25 Thread natacha lebaron
+1 on last comment




[c-nsp] N7k sending LACP PDUs tagged w/ vlan dot1q tag native?

2013-07-25 Thread Phil Mayers

(Apologies to people who've seen the other half of this on j-nsp)

Has anyone seen an N7k sending tagged LACP PDUs with the following config:

vlan dot1Q tag native
int eth3/1
  channel-group 20 mode active
int po20
  switchport
  switchport mode trunk
  switchport trunk native vlan 111
  switchport trunk allowed vlan 999

In this situation, the LACP PDUs are coming out tagged with vlan 111, 
and are being ignored by the neighbouring device (a Juniper SRX).


AFAIK, IOS doesn't do this, even with vlan dot1q tag native set
globally (which we set to avoid half-directional tagged links).


We are able to work around it with this config:

int eth3/1
  channel-group 20 mode active
int po20
  no shut
int po20.999
  encapsulation dot1q 999
  ...

...but I'd prefer to avoid that.

Also unlike IOS, there doesn't seem to be a per-interface [no] 
switchport trunk native vlan tag to override it :o(


This is on NX-OS 5.2(4), with M-series linecards.

Cheers,
Phil


[c-nsp] Multi-Vendor CAPWAP AP Interop Using Cisco 5508 WLC

2013-07-25 Thread Darin Herteen
Greetings List,

I have been trying to hunt down some definitive answers regarding
interoperability when using non-Cisco APs running CAPWAP in conjunction with a
Cisco WLC 5508 running 7.3.101.0.

A TAC case I opened on this issue didn't really give me a definitive answer,
other than to say that a non-Cisco AP could only be used in Workgroup Bridge
Mode, which would not be desired or even applicable to our deployment. (I have
not reached out to our SE as of yet.)

Upon further research I came across a statement from Aruba Networks in 2009
regarding their position on CAPWAP, in which they state that anybody claiming
to be running CAPWAP is using proprietary extensions and does not adhere to
RFC 5415.

Can anybody tell me if this is still the current state of CAPWAP? 

Has anybody seen or had experience running a multi-vendor AP deployment using
Cisco WLCs?

Thanks in advance for any info you could pass my way.

Darin








  


Re: [c-nsp] Multi-Vendor CAPWAP AP Interop Using Cisco 5508 WLC

2013-07-25 Thread A . L . M . Buxey
Hi,

 Can anybody tell me if this is still the current state of CAPWAP? 
 Has anybody seen or had experience running a multi-vendor AP deployment using 
 Cisco WLC's ?

I haven't seen any cross-vendor wireless solution using the CAPWAP 'standard'
at all, let alone working on Cisco controllers... (given their issues getting
their OWN wireless APs working fine with their wireless controllers...)

I would be VERY interested in seeing/knowing of working solutions (even ones
not involving Cisco!)

alan


Re: [c-nsp] ME3800X/ME3600X/ME3600X-24CX/ASR903/ASR901 Deployment Simplification Feedback

2013-07-25 Thread Mark Tinka
On Thursday, July 25, 2013 12:56:52 PM Leigh Harrison wrote:

 I would also like to see more horsepower in the systems. 
 We recently went to implement multicasting in VRF and
 ran into some odd challenges.   We have the 3600's set
 up for routing and are about to push 24,000 IPv4 routes.
   In our busier boxes we have around 9,000 routes, so
 I'm more than happy with the capacity there.  However,
 in order to turn on 250 MDT routes, we have to drop the
 IPv4 routes down to 12,000.  A sliding scale would be
 nice for memory allocation, but in the face of having
 3600's move from 30% full to 60% full in the routing
 table to add in a new feature, we went for a redesign of
 how we delivered the multicasting.

A reasonable use-case for a larger FIB in this platform 
family :-).

Mark.



Re: [c-nsp] Two HUBS-Location Specific Spokes-Redundant to each other

2013-07-25 Thread vasu varma
Hi Lumbis,

Thanks for your response.

It's not all about latency; latency may vary depending on backbone utilization,
irrespective of the closest location.

I want it in such a way that east locations should prefer the default route
from the East HUB, with the West HUB acting as secondary, and west locations
should prefer the default route from the West HUB, with the East HUB acting as
secondary.

One location may be equidistant in terms of latency or distance, but we should
be able to configure it as we desire.
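
Just as a sketch (not from the thread): the usual knob for this is LOCAL_PREF.
Assuming each east location (or the PE serving it) has a distinct BGP session
towards each hub, something like the following on the east side would prefer
the East HUB's default, with the West HUB as fallback at the default
local-preference of 100. The AS number and neighbor address are made-up
placeholders:

ip prefix-list DEFAULT_ONLY seq 5 permit 0.0.0.0/0
!
route-map PREFER_EAST_HUB permit 10
 match ip address prefix-list DEFAULT_ONLY
 set local-preference 200
route-map PREFER_EAST_HUB permit 20
!
router bgp 65000
 neighbor 10.1.1.1 route-map PREFER_EAST_HUB in

The mirror image (preferring the West HUB) would then be applied at the west
locations.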

Regards
Yaswanth


On Tue, Jul 23, 2013 at 8:32 PM, Pete Lumbis alum...@gmail.com wrote:

 If by closest you mean lowest latency, you probably want to look at
 something like PfR to do this dynamically for you.


 On Tue, Jul 23, 2013 at 1:48 AM, vasu varma ypk...@gmail.com wrote:

 Hi Team,

 I have a requirement in such a way that there are two hubs, one in New York
 and the other in Los Angeles. The spoke locations will access whichever hub
 location is closer geographically, and the other acts as the backup for that
 particular site.

 If both hubs inject a default route into the cloud, how can I configure the
 iBGP attributes to select the best path based on the closest physical
 location?

 Ours is an MPLS cloud with multiple customers sharing the same infra.

 Can someone assist me with the solution approach and, most importantly, the
 changes that I need to do in my network.

 Regards
 Yaswanth


