On 3/06/2015 7:59 PM, Nick Cutting wrote:
> Thank you for the suggestion - I've been using these in the lab quite
> a bit lately, as I've lost faith in GNS3 (watching it fall apart while
> showing clients a proof of concept - "this won't happen on the real
> kit.."). However, I am a little scared to run the internet VLAN(s)
> into the ESX estate at this time; there is a rather old-fashioned
> security policy in place. Perhaps if we had dedicated hosts.
>
> I think we can pick up 2 of the new little 3560-CXs for ~£5k each
> with IP Services & NetFlow - just hoping 11k prefixes is enough.

It probably won't be. Here's the output from a WS-C3560CX-12PD-S:

switch-2#show sdm prefer
The current template is "default" template.
The selected template optimizes the resources in
the switch to support this level of features for
8 routed interfaces and 1024 VLANs.

number of unicast mac addresses: 16K
number of IPv4 IGMP groups + multicast routes: 1K
number of IPv4 unicast routes: 5K
number of directly-connected IPv4 hosts: 4K
number of indirect IPv4 routes: 1K
number of IPv6 multicast groups: 1K
number of IPv6 unicast routes: 5K
number of directly-connected IPv6 addresses: 4K
number of indirect IPv6 unicast routes: 1K
number of IPv4 policy based routing aces: 0.25K
number of IPv4/MAC qos aces: 0.375k
number of IPv4/MAC security aces: 0.375k
number of IPv6 policy based routing aces: 0.25K
number of IPv6 qos aces: 0.25K
number of IPv6 security aces: 0.375k

switch-2#

There's only one SDM template at this stage, so there's no way to rebalance the TCAM in favour of more routes.
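If you want to sanity-check a box before trusting it with a feed, comparing the RIB against those limits is easy enough. A rough sketch - "show platform tcam utilization" is from memory of the classic 3560/3750 CLI, so verify it applies on the CX:

switch-2#show ip route summary
switch-2#show platform tcam utilization

"show ip route summary" gives you the total prefix count to hold up against the "indirect IPv4 routes" line above, and the TCAM utilization output shows how close the hardware actually is to punting routes to software.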

A 3650 (not a 3560) would be sufficient though - I think that's as low down the range as you can go while still fitting that many prefixes in TCAM.
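For what it's worth, the 3650/3850 do at least let you choose between SDM templates. A sketch from memory - check the release notes for your train before relying on it:

switch(config)#sdm prefer advanced
switch(config)#end
switch#reload

The template change only takes effect after the reload, at which point "show sdm prefer" will confirm what's active.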

Aside from this, I'm quite disappointed with how the ISR 4300/G3 platform has been put together, particularly the licensing and throughput restrictions. It seems to me that it's a big step backwards from the ISR G2 platform, and the upgrade from G2 to G3 is a hard sell. I would be most interested to see how the sales figures are looking, and to hear other people's thoughts on this.
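For anyone who hasn't run into it yet: the throughput cap is a software shaper that you raise by buying a licence. Something like this on a 4331 (a sketch - the available levels depend on the model and the licence purchased):

router(config)#platform hardware throughput level MB 300
router(config)#end
router#show platform hardware throughput level

The box ships forwarding at the base level (100 Mbps aggregate on the 4331, as I understand it) regardless of what the hardware could actually push, which is what makes the comparison with the G2s so painful.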

Reuben

