Hello, I'm working on options for a small DC switch design. The DC has 5 virtualization hosts with 10-20 guest VMs each. Each server has two quad-port gigabit NICs with 6 of the 8 gig ports connected (3 for iSCSI and 3 for data or management). It also has two 3-node SANs, each with 2 gig ports per node, and a host of other small servers, including voice servers, management servers, an ASA firewall, and a few routers. Total of 50-60 ports as of right now.
Connected to the DC are 7 other buildings, each with its own 1-gig fiber connection, serving about 3000 devices in total including desktops, laptops, IP phones, wireless APs, building automation, alarm panels, etc. Right now each of the 7 buildings has a 3560G as an aggregation switch connected back to the DC. The DC also has a few 3560Gs and 3750Gs for the SANs and servers. The system seems to work OK for the most part, aside from microbursts overwhelming the buffers on these switches and the EtherChannel trunks between them dropping a minor amount of packets. QoS is configured for the voice network and there are little to no complaints.

What I would like to know (cost being the biggest factor) is what would be a better switch design for the current and future traffic in this network. I would need at least 96 ports. Some options I was thinking about are as follows:

Option A is to go with a 4506-E bundle with two 48-port line cards, a Sup6L-E, and a WS-X4712-SFP+E or something of the sort, then upgrade to the Enterprise Services license and do all of the routing and switching for the DC on this one switch. That means little redundancy and no failover.

Option B was to go with the same 4506-E bundle, without the extra license and without the SFP line card, and put in some sort of layer 3 aggregation switch with SFP slots and a layer 3 license.

Option C is to go with the 4503-E, the SFP line card, and the IP Enterprise Services license, plus two top-of-rack switches, either 2360s or 4948s.

I have no experience in this matter, so any other thoughts or suggestions would be appreciated.

Thanks,
Dan

_______________________________________________
cisco-nsp mailing list
[email protected]
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
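On the microburst point: before (or alongside) replacing the 3560G/3750Gs, the egress buffer allocation on those platforms can sometimes be rebalanced so a congested queue can draw more from the shared pool. A minimal sketch, assuming the defaults are still in place on the uplinks; the queue numbers and percentages below are illustrative assumptions, not tested recommendations, so verify against your IOS release and your existing voice QoS policy:

```
! Sketch only: rebalance egress buffers on a 3560G/3750G uplink.
! Queue choice and values are illustrative assumptions.
mls qos
! Shift the per-ASIC buffer shares toward the queue carrying the
! bursty data traffic: queues 1-4 here get 15/25/40/20 percent.
mls qos queue-set output 1 buffers 15 25 40 20
! Raise queue 3's drop thresholds and let it borrow from the
! common pool (reserve 50% of its share, allow up to 400%).
mls qos queue-set output 1 threshold 3 400 400 50 400
!
interface GigabitEthernet0/1
 description EtherChannel member toward building aggregation
 queue-set 1
```

This does not add buffer memory, so it only softens the symptom; any of your 4500-based options would bring substantially deeper per-port buffers than the 3560G/3750G ASICs.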
