On Wednesday, May 16, 2012 05:14:54 AM Dan Letkeman wrote:
Most high-bandwidth traffic is to and from the servers
and SANs, and would stay within the 4500-E; second to
that would be the traffic from all of the users in all
the buildings to and from the servers, and then all of
the internet
This switch will never need to hold a BGP table. I do however want
to do PBR, and I am finding mixed messages on whether it works or not.
And if it does work, will it work in my situation, or will it switch in
software and have poor performance? The idea of using it as an
aggregation switch would
In the absence of Waris chiming in, PBR isn't yet supported on the
ME3600, I believe.
Last posting about this as of Dec 2011 was that PBR was on the roadmap,
and I haven't yet seen it come up as a new feature in any of the
software releases subsequent to this.
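For context, the PBR configuration in question would be the standard IOS form, along these lines (the ACL number, subnet, next hop, and interface below are hypothetical placeholders, not from the thread); the open question is whether the ME3600 would program this in hardware or punt it to the CPU:

```
! Hypothetical example: match traffic from a user subnet
access-list 101 permit ip 10.10.0.0 0.0.255.255 any
!
route-map PBR-EXAMPLE permit 10
 match ip address 101
 set ip next-hop 192.0.2.1
!
interface Vlan100
 ip policy route-map PBR-EXAMPLE
```

On platforms that do support PBR in hardware, traffic matching ACL 101 on Vlan100 is forwarded to 192.0.2.1 regardless of the routing table; unmatched traffic is routed normally.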
You may (or may not) be able
Your size sounds fairly close to our situation... Do you have a spare
fiber pair going to each location?
Right now each of the 7 buildings has a 3560G as an aggregation
switch connected back to the DC. The DC also has a few 3560Gs and
3750Gs for the SANs and servers.
[...]
What I would
On Tuesday, May 15, 2012 05:58:34 PM Jason Gurtz wrote:
For
the core, look at the 4900M or the newer 4500-X; these
two switches are basically a semi-fixed version of the
cat45xx (fixed sup, replaceable line cards).
We quite like the 4500-X jobs for core switching these days,
especially when
Jason,
Thank you for the response. I have a few more questions, and maybe
you could clarify a few things.
On Tue, May 15, 2012 at 10:58 AM, Jason Gurtz jasongu...@npumail.com wrote:
Hello,
I'm working on options for a small DC switch design. This DC has 5
virtual hosts with 10-20 guest VMs each. Each server has two quad-port
gig NICs, with 6 of the 8 gig ports connected (3 for iSCSI and 3
for data or management). It also has two 3-node SANs, each with 2 gig
ports per node,
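Working the numbers from that description (a rough sketch; the host, port, and SAN-node counts are taken from the post above, and all figures assume 1 Gbit/s per port):

```python
# Rough capacity sketch for the DC described above (figures in Gbit/s).
hosts = 5
iscsi_ports_per_host = 3     # gig ports used for iSCSI per host
data_ports_per_host = 3      # gig ports used for data/management per host

san_nodes = 2 * 3            # two 3-node SANs
ports_per_san_node = 2

host_iscsi = hosts * iscsi_ports_per_host    # host-side iSCSI capacity
san_total = san_nodes * ports_per_san_node   # SAN-side port capacity
host_data = hosts * data_ports_per_host      # data/management capacity

print(host_iscsi, san_total, host_data)      # 15 12 15
```

So the hosts can offer up to 15 Gbit/s of iSCSI traffic against 12 Gbit/s of SAN-side ports, plus another 15 Gbit/s of data/management uplink demand, which gives a feel for how much aggregation bandwidth the core switch needs.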