[c-nsp] nexus 5548 versus C4900M

2012-11-21 Thread Holemans Wim
We have a service cluster built around a C4900M: it concentrates a mix of 10G
(intercampus) connections and 1G connections (some backup lines and central
services such as DNS, VPN servers, ...).
This works fine, but to be able to connect all of these I had to add the 20-port
10/100/1000 UTP card and the extra 8x 10G card (with X2 converters to provide
for fiber SFPs). At the time that seemed a good and reasonably priced solution.
This C4900M only does L2 traffic for the moment, but it will do some minor
static (500 Mbps) IPv4 L3 routing in the near future.

Now I have to create a new, similar service cluster. The first idea was to
copy the setup, but as we are also looking at Nexus for our datacenter, I
noticed the Nexus 5548UP. It gives you 32 1G/10G ports out of the box and
costs (based on the prices I have seen) 25% less than the above C4900M
configuration.
Does anyone have a reason why we should stick with the C4900M (or maybe a
similar C4500 solution) and not put a Nexus in place, apart from the obvious
differences between IOS and NX-OS for management?
I think that when adding the L3 card to the Nexus the 25% price difference will
disappear, but are there any limits you see (ARP table size, MAC address table
size, buffering, IPv6 support, ...) that would take the Nexus out of the
picture?

Greetings,

Wim Holemans
Netwerkdienst Universiteit Antwerpen
Network Services University of Antwerp

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] nexus 5548 versus C4900M

2012-11-21 Thread Mike Hale
We have a very similar setup.

Our Nexus 5548s are a pain in our ass.  We have ten dead ports between the
two, and we've encountered huge issues upgrading the code and attempting to
replace them with RMA units from Cisco.  They also run incredibly hot, eat
copper SFPs for breakfast and are in general annoying to work on.

We also have a 4500 acting as a core, and that works wonderfully for us.
We too have a mix of gig and 10G interfaces.  Plus, it handles all the layer 3
stuff we've thrown at it (unlike the Nexus).
On Nov 21, 2012 12:32 AM, Holemans Wim wim.holem...@ua.ac.be wrote:

 We have a service cluster built around a C4900M: it concentrates a mix of
 10G (intercampus) connections and 1G connections (some backup lines and
 central services such as DNS, VPN servers, ...).
 This works fine, but to be able to connect all of these I had to add the
 20-port 10/100/1000 UTP card and the extra 8x 10G card (with X2 converters to
 provide for fiber SFPs). At the time that seemed a good and reasonably priced
 solution. This C4900M only does L2 traffic for the moment, but it will do some
 minor static (500 Mbps) IPv4 L3 routing in the near future.

 Now I have to create a new, similar service cluster. The first idea was to
 copy the setup, but as we are also looking at Nexus for our datacenter, I
 noticed the Nexus 5548UP. It gives you 32 1G/10G ports out of the box and
 costs (based on the prices I have seen) 25% less than the above C4900M
 configuration.
 Does anyone have a reason why we should stick with the C4900M (or maybe a
 similar C4500 solution) and not put a Nexus in place, apart from the obvious
 differences between IOS and NX-OS for management?
 I think that when adding the L3 card to the Nexus the 25% price difference
 will disappear, but are there any limits you see (ARP table size, MAC address
 table size, buffering, IPv6 support, ...) that would take the Nexus out of
 the picture?

 Greetings,

 Wim Holemans
 Netwerkdienst Universiteit Antwerpen
 Network Services University of Antwerp

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Cisco Configuration Engine - Latest Stable?

2012-11-21 Thread Alistair C
Howdy,

We're having some fun and games with CCE; could someone confirm which is the
latest stable version?

I have 3.5 (0.0) on CD but have seen 3.5 (0.3) around, so I would like to
confirm whether 3.5 (0.3) is the latest. Yes, we paid for it, but I'm looking
for a quick answer rather than waiting for account managers / resellers to
respond, and Google is not my friend today. :-)

Many thanks.


-- 
Alistair C.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] nexus 5548 versus C4900M

2012-11-21 Thread Garry

On 21.11.2012 08:55, Holemans Wim wrote:

 We have a service cluster built around a C4900M: it concentrates a mix of 10G
 (intercampus) connections and 1G connections (some backup lines and central
 services such as DNS, VPN servers, ...).
 This works fine, but to be able to connect all of these I had to add the
 20-port 10/100/1000 UTP card and the extra 8x 10G card (with X2 converters to
 provide for fiber SFPs). At the time that seemed a good and reasonably priced
 solution. This C4900M only does L2 traffic for the moment, but it will do some
 minor static (500 Mbps) IPv4 L3 routing in the near future.

 Now I have to create a new, similar service cluster. The first idea was to
 copy the setup, but as we are also looking at Nexus for our datacenter, I
 noticed the Nexus 5548UP. It gives you 32 1G/10G ports out of the box and
 costs (based on the prices I have seen) 25% less than the above C4900M
 configuration.
 Does anyone have a reason why we should stick with the C4900M (or maybe a
 similar C4500 solution) and not put a Nexus in place, apart from the obvious
 differences between IOS and NX-OS for management?
 I think that when adding the L3 card to the Nexus the 25% price difference
 will disappear, but are there any limits you see (ARP table size, MAC address
 table size, buffering, IPv6 support, ...) that would take the Nexus out of
 the picture?

We have a dual-5548P/L3 + quad-2248 setup at a customer site, with some 20
2960 switches (1G and 10G versions) as access switches ... apart from some
initial problems the setup is very nice and performing well ...
When the project was initially looked at, the original setup (only one
5548 + two 2248s) was about half the price of a comparable setup with the
required interface cards on a 6500, except that the Nexus delivers its 960 Gbps
of L2 forwarding non-blocking, which the 6500 setup wouldn't have been able to
do at the time, as its 10G cards are oversubscribed. 4500-series setups will be
cheaper than a 6500 solution, but you will not have the performance of the
Nexus, and I doubt that the price difference would be in favor of the 4500 ...
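
(For what it's worth, the 960 Gbps figure is just line-rate arithmetic; the
decomposition below assumes a fully populated 5548 with 48 10G ports and counts
both directions, so treat it as a sanity check rather than a data-sheet quote.)

# Rough sanity check of the non-blocking figure; the port count and the
# both-directions accounting are assumptions, not data-sheet facts.
ports = 48            # 32 fixed + 16 expansion-module ports, assumed populated
gbps_per_port = 10
duplex_factor = 2     # throughput figures usually count ingress + egress
print(ports * gbps_per_port * duplex_factor, "Gbps")   # -> 960 Gbps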


In general, I reckon your choice depends on the actual usage - as a
datacenter/campus switch, the Nexus has a definite price and performance
advantage. If you need non-Ethernet ports, a modular switch/router like the
Catalyst 4500/6500 will be the better choice ...


-garry
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] nexus 5548 versus C4900M

2012-11-21 Thread Aled Morris
On 21 November 2012 07:55, Holemans Wim wim.holem...@ua.ac.be wrote:

 Now I have to create a new, similar service cluster. The first idea was to
 copy the setup, but as we are also looking at Nexus for our datacenter, I
 noticed the Nexus 5548UP. It gives you 32 1G/10G ports out of the box and
 costs (based on the prices I have seen) 25% less than the above C4900M
 configuration.
 Does anyone have a reason why we should stick with the C4900M (or maybe a
 similar C4500 solution) and not put a Nexus in place, apart from the obvious
 differences between IOS and NX-OS for management?
 I think that when adding the L3 card to the Nexus the 25% price difference
 will disappear, but are there any limits you see (ARP table size, MAC address
 table size, buffering, IPv6 support, ...) that would take the Nexus out of
 the picture?


The 4900M is near end-of-sale; if you want a compact solution, the 4500X is
the better choice.

L3 isn't the N5K's strong point - if you need L3 features or performance,
study the data sheets carefully.

Aled
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Fwd: Re: nexus 5548 versus C4900M

2012-11-21 Thread Mark Lewis

 In general, I reckon your choice depends on the actual usage - as a
 datacenter/campus switch, the Nexus has a definite price and performance
 advantage. If you need non-Ethernet ports, a modular switch/router like the
 Catalyst 4500/6500 will be the better choice ...



And, of course, the 5548UP has unified ports that you can configure as
either 1/10GE or Fibre Channel. Good if you want to toss out a few
MDS/other FC switches :-)



___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Fwd: Re: nexus 5548 versus C4900M

2012-11-21 Thread Deny IP Any Any
On Wed, Nov 21, 2012 at 10:20 AM, Mark Lewis m...@mjlnet.com wrote:


  And, of course, the 5548UP has unified ports that you can configure as
 either 1/10GE or Fibre Channel. Good if you want to toss out a few
 MDS/other FC switches :-)


Changing those ports from 1/10 to FC (or vice versa) requires a reboot of
the Nexus. Not what I'd call 'data-center class'.



-- 
deny ip any any (4393649193 matches)
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] nexus 5548 versus C4900M

2012-11-21 Thread Andrew Miehs
On Wed, Nov 21, 2012 at 6:55 PM, Holemans Wim wim.holem...@ua.ac.be wrote:

 Now I have to create a new, similar service cluster. The first idea was to
 copy the setup, but as we are also looking at Nexus for our datacenter, I
 noticed the Nexus 5548UP. It gives you 32 1G/10G ports out of the box and
 costs (based on the prices I have seen) 25% less than the above C4900M
 configuration.
 Does anyone have a reason why we should stick with the C4900M (or maybe a
 similar C4500 solution) and not put a Nexus in place, apart from the obvious
 differences between IOS and NX-OS for management?
 I think that when adding the L3 card to the Nexus the 25% price difference
 will disappear, but are there any limits you see (ARP table size, MAC address
 table size, buffering, IPv6 support, ...) that would take the Nexus out of
 the picture?


The N5K has a limit of effectively 25K MAC addresses.

If you are even remotely thinking of using L3, go for an N7K, 6500, or 4500.

Andrew
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] sup2t XL with non XL linecards

2012-11-21 Thread . .
Hi,

If we have a 6500 with a Sup2T XL (VS-S2T-10G-XL) that is doing BGP with full
tables (so ~400k routes), does that mean all line cards must be XL as well? The
FAQ here:

http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/qa_c67-362061.html

seems to indicate yes:

Q. What happens if I mix DFC4 and DFC4XL in the same chassis? Is this supported?

A. Yes, this is supported. When mixing DFC4 and DFC4XL, the system will operate
in lower common denominator mode. This means that it will operate in PFC4 mode
to match the lesser capabilities of the DFC4. The consequences of this are that
the larger FIB, ACL, and NetFlow tables of the DFC4XL will not be utilized as
they will need to be programmed to match the smaller DFC4 tables for consistency
within the chassis. See the Cisco IOS® Software Release 12.2(50)SY notes for
further details (Put the URL).

However, it's unclear whether that applies only when mixing linecards, or
whether it includes the supervisor as well.

The linecards will just be doing L2 switching, and any L3 that needs the routes
will go to the supervisor; that's fine, I just don't want the supervisor to
fall back to non-XL mode.
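
As a rough sanity check on why that fallback would hurt (the capacities below
are the commonly quoted ballpark figures, not something I have verified against
the release notes):

# Illustrative arithmetic only; the FIB capacities are assumed figures.
FULL_TABLE = 400_000        # ~ full IPv4 BGP table, as above
PFC4_FIB = 256_000          # commonly quoted non-XL FIB size
PFC4XL_FIB = 1_000_000      # commonly quoted XL FIB size

for name, cap in (("non-XL mode", PFC4_FIB), ("XL mode", PFC4XL_FIB)):
    print(f"{name}: {FULL_TABLE:,} routes vs {cap:,} entries ->",
          "fits" if FULL_TABLE <= cap else "does NOT fit")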

Thanks!

--
This e-mail was brought to you by Cosmo Mail
http://www.cosmo.com/

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] CRC errors on fastethernet interface

2012-11-21 Thread Joe Mays
We have a 7206 connected to a Catalyst 2900XL switch port.

The 2900XL is getting CRC errors on the port at a rate of roughly one every
one to two seconds. I've tried replacing the cable, with no effect.

core-sw1.noc#show int fastethernet0/1
FastEthernet0/1 is up, line protocol is up 
  Hardware is Fast Ethernet, address is 0002.7d2f.bc41 (bia
0002.7d2f.bc41)
  Description: 802.1q trunk to core-gw1.noc.win.net port FastEthernet0/0
  MTU 1500 bytes, BW 10 Kbit, DLY 100 usec, 
 reliability 255/255, txload 51/255, rxload 37/255
  Encapsulation ARPA, loopback not set
  Keepalive not set
  Full-duplex, 100Mb/s, 100BaseTX/FX
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output 00:00:00, output hang never
  Last clearing of show interface counters 00:05:49
  Queueing strategy: fifo
  Output queue 0/40, 0 drops; input queue 0/75, 0 drops
  30 second input rate 14547000 bits/sec, 2327 packets/sec
  30 second output rate 20099000 bits/sec, 3507 packets/sec
 862330 packets input, 682108246 bytes
 Received 398 broadcasts, 0 runts, 0 giants, 0 throttles
 63 input errors, 63 CRC, 0 frame, 64 overrun, 64 ignored
 0 watchdog, 257 multicast
 0 input packets with dribble condition detected
 1262698 packets output, 899402766 bytes, 0 underruns
 0 output errors, 0 collisions, 0 interface resets
 0 babbles, 0 late collision, 0 deferred
 0 lost carrier, 0 no carrier
 0 output buffer failures, 0 output buffers swapped out

Since changing the cable made no difference, it's either a port problem on the
7206 or the 2900XL, or a config problem. Here are the configs for the
interfaces on each end.

(Since the 7206 does not specify 100 Mbps, I had thought it might occasionally
be trying to renegotiate the speed, which could upset the switch end, which is
hardwired to 100/full. The 7206 is only set to full-duplex; the speed command
to force 100 Mbps does not seem to exist on the 7206.)
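
For what it's worth, here is the quick arithmetic I'm using on the counters
above (a throwaway Python sketch that just regexes the pasted show interface
text; the field layout is assumed to be the usual IOS one):

import re

show_int = """
  Last clearing of show interface counters 00:05:49
     862330 packets input, 682108246 bytes
     63 input errors, 63 CRC, 0 frame, 64 overrun, 64 ignored
"""

h, m, s = map(int, re.search(r"counters (\d+):(\d+):(\d+)", show_int).groups())
elapsed = h * 3600 + m * 60 + s                       # seconds since clearing
crc = int(re.search(r"(\d+) CRC", show_int).group(1))
pkts = int(re.search(r"(\d+) packets input", show_int).group(1))
print(f"{crc} CRC errors in {elapsed}s -> one every {elapsed / crc:.1f}s")
print(f"that is {crc / pkts:.1e} of input packets")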

Cisco 7206 --

interface FastEthernet0/0
 description Win.net NOC gateway LAN, 911 Heyburn Bldg (via
core-sw1.noc.win.net)
 ip address nnn.nnn.nnn.nnn 255.255.255.192
 ip access-group block-out-to-dot30 out
 no ip proxy-arp
 ip route-cache same-interface
 ip route-cache flow
 ip ospf message-digest-key 1 md5 7 xxx
 ip ospf cost 2
 ip ospf priority 200
 no ip mroute-cache
 load-interval 60
 duplex full
 no keepalive
 no cdp enable
 standby 1 ip 216.24.30.65
 standby 1 timers 5 15
 standby 1 priority 105
 standby 1 preempt delay minimum 60
 standby 1 authentication dfwmhsrp
 standby 1 track Serial6/0
 crypto map KYtoINvpn
 service-policy output queue-on-dscp

2900XL

interface FastEthernet0/1
 description 802.1q trunk to core-gw1.noc.win.net port FastEthernet0/0
 load-interval 30
 duplex full
 speed 100
 switchport trunk encapsulation dot1q
 switchport mode trunk
 no cdp enable

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] High CPU: punted packets

2012-11-21 Thread Dobbins, Roland

On Nov 22, 2012, at 9:50 AM, Jefri Abdullah wrote:

 We've no idea why the incoming interface is NULL, and why this packet is 
 punted.

Is the destination IP the box itself?

What does the output of sh proc c sorted show?

---
Roland Dobbins rdobb...@arbor.net // http://www.arbornetworks.com

  Luck is the residue of opportunity and design.

   -- John Milton


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] High CPU: punted packets

2012-11-21 Thread Jefri Abdullah

On 2012-11-22 02:05 PM, Dobbins, Roland wrote:


 On Nov 22, 2012, at 9:50 AM, Jefri Abdullah wrote:

  We've no idea why the incoming interface is NULL, and why this packet
  is punted.

 Is the destination IP the box itself?

 What does the output of sh proc c sorted show?



No, the destination IP is a legitimate host directly connected to this 7600.
show proc cpu shows high interrupt utilization (about 80%). Also, I just
noticed that the source MAC address is 00.00.00.00.00.00.


--

Regards,

Jefri Abdullah
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Need for large buffers for 1-to-1 forwarding?

2012-11-21 Thread Saku Ytti
On (2012-11-22 01:05 +0100), Mathias Sundman wrote:

 As we only provide the customer with 1 client port on the remote
 device, there is only one ingress port that needs to forward to one
 egress port. Is there a need for larger buffers for such a usage or
 should any switch with enough pps forwarding capacity do the job for
 even the most demanding traffic?

You're golden. This is the optimum situation and the easiest to handle in
hardware. There is no need for deep buffering.
Microbursts are an issue when the ingress rate is higher than the egress rate,
or when there are more ingress interfaces than egress interfaces.
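
If it helps to see the intuition, here is a toy model (pure illustration: one
unit per port per tick, fixed equal rates, none of the real switch internals):

# 1:1 forwarding at equal rates never builds a queue; N:1 fan-in does.
def worst_queue(n_ingress, ticks=1000):
    depth = worst = 0
    for _ in range(ticks):
        depth = max(0, depth + n_ingress - 1)   # arrivals minus one unit drained
        worst = max(worst, depth)
    return worst

for n in (1, 2, 4):
    print(f"{n} ingress -> 1 egress: worst queue depth {worst_queue(n)} units")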

 It's my believe that it is the client's own switch that aggregates
 multiple ingress ports that need large egress buffers on the
 interface connecting to our switch, right?

I'm losing you. If you have one port to the customer, how can the customer
have many ingress ports? Or were you first talking about the NET-to-CUST
direction and now the CUST-to-NET direction? Surely the customer traffic is
not balanced; that is very untypical, almost always traffic is either
inbound-heavy or outbound-heavy.
But yes, if the customer has many ingress ports to a single egress, it's the
customer who is having the issues.

 I'm also considering turning off mac-learning on the client's
 q-tunnel VLAN, as I can't why you would want to maintain a mac-table
 when you only have two ports to forward between, right? The client's
 switch should never be sending me anything unless he wants it to
 arrive at the remote site.

I don't see a problem there. Just remember to be strict about the allowed-vlan
stanzas on trunk ports.

-- 
  ++ytti
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] High CPU: punted packets

2012-11-21 Thread Saku Ytti
On (2012-11-22 09:50 +0700), Jefri Abdullah wrote:

 --- dump of incoming inband packet ---
 interface NULL, routine mistral_process_rx_packet_inlin, timestamp
 10:21:58.298
 dbus info: src_vlan 0x402(1026), src_indx 0x342(834), len 0x5D(93)
   bpdu 0, index_dir 0, flood 0, dont_lrn 0, dest_indx 0x380(896)
   2D020C01 0402 0342 5D00 0011 2000 
 0380
 mistral hdr: req_token 0x0(0), src_index 0x342(834), rx_offset
 0x76(118)
   requeue 0, obl_pkt 0, vlan 0x402(1026)
 destmac 00.24.C4.C0.0B.40, srcmac 00.00.00.00.00.00, protocol 0800
 protocol ip: version 0x04, hlen 0x05, tos 0xBA, totlen 75, identifier 0
   df 0, mf 0, fo 0, ttl 61, src *.*.*.*, dst *.*.*.*
   udp src 20868, dst 19342 len 55 checksum 0xA915
 
 We've no idea why the incoming interface is NULL, and why this packet
 is punted. Do you guys have any clue about this?

The clue here is 'dest_indx': we can determine this is not an interface LTL
index, as 0x380 divmod 64 gives (14, 0) and we don't have 15 slots.
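
Spelled out (the slot/port split of the LTL index is the rule of thumb I am
using here, not something taken from a spec):

# dest_indx 0x380 from the dump; divmod by 64 gives a (slot, port)-style split.
# Slot 14 would be the 15th slot, which this chassis does not have, so 0x380 is
# a punt/drop index rather than a physical interface.
slot, port = divmod(0x380, 64)
print(f"dest_indx 0x380 -> slot {slot}, port {port}")   # -> slot 14, port 0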

You can do 'remote command switch show platform hardware tycho register 0 1794
| i 000380' to see potential reasons for punt.

On my box, I see:

 0x017F:PP_RF_SRC_IDX0 = 0x0380 [896   ]
 0x03C4:RED_SW_ERR_IDX = 0x0380 [896   ]
 0x0456:RED_FINRST_IDX = 0x0380 [896   ]
 0x045B:  RED_IPV6_SCP_IDX = 0x0380 [896   ]
---

I happen to know by heart that one reason for 0x380 is MTU failure, though that
is not obvious from the above. If there is some way to know for sure which of
those registers the punt is hitting, I don't know it, but I would love to hear.

But as you're seeing a very small frame, I doubt it is an MTU failure. I'm
pretty much out of ideas. You could verify it's not uRPF, but I think uRPF uses
a different index. (Just because uRPF is not currently configured on the
interface in the CLI does not mean uRPF/strict isn't dropping your frames.)

If you can reliably reproduce it and measure it via ping, you could try setting
the mls rate-limiters to 10/10 one by one to see exactly which one it is
hitting; that could give more data.

-- 
  ++ytti
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] High CPU: punted packets

2012-11-21 Thread Saku Ytti

df 0, mf 0, fo 0, ttl 61, src *.*.*.*, dst *.*.*.*

 But as you're seeing a very small frame, I doubt it is an MTU failure.

Actually, come to think of it, sometimes the index is incorrectly programmed
with an MTU of 0, so it could still be an MTU failure.

Try:
sh mls cef lookup DST detail
In the second line of useful output there is an A:number; take that number and run
sh mls cef adjacency entry NUMBER detail

There you should see 'mtu: xyz' on the second line. If it is mtu: 0, you have
found the problem. If not, alas, keep searching. (You could also try adding and
removing uRPF on the interface, to work around another bug.)

If it is an MTU failure, try 'ip mtu 1499' followed by 'ip mtu 1500' (or so) on
the interface, and it should fix itself.
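
If you end up eyeballing a lot of adjacencies, a throwaway sketch like this
works on pasted output (it only greps for the mtu field; nothing else about the
output layout is assumed):

import re

adjacency_output = """
  (paste the output of 'sh mls cef adjacency entry NUMBER detail' here)
"""

m = re.search(r"mtu:\s*(\d+)", adjacency_output)
if m and int(m.group(1)) == 0:
    print("mtu: 0 -> broken adjacency, likely your punt cause")
elif m:
    print(f"mtu {m.group(1)} looks sane, keep searching")
else:
    print("no mtu: field found in the pasted output")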

-- 
  ++ytti
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/