On Thu, May 17, 2012 at 2:30 PM, Tom Miller <[email protected]> wrote:
> My core switch bank is a series of 3COM (HP) 1GIG managed switches.  They've
> worked very well.  I don't think the exact model is made anymore, so I
> cannot add to the current bank.

  Call HP.  See if they offer a specific migration path for the
model(s) you have.  They may have options that will make your path
forward easier.

  I am a big fan of HP ProCurve switches.  Very cost effective, lots
of features, generally rock-solid operation, lifetime warranty.

> Looking at my options, what speeds are you now using for your core
> switches:  1 GB, 10, 100?

  I can tell you about our needs, but your needs may be entirely
different.  There's no generic answer to this one.  But as food for
thought:

  We're 100 meg to most desktops.  1 gig to servers, between switches,
and to heavy users like IT and CAD people.  Link aggregation (2 x 1
gig) between two switches in the server room.  That meets our needs
for now.
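
  For reference, that kind of trunk is only a couple of lines of
config on a ProCurve.  A rough sketch -- the port numbers and VLAN ID
here are made up, so check the manual for your exact model:

```
# Hypothetical example: aggregate ports 25-26 into an LACP trunk (trk1)
trunk 25-26 trk1 lacp
# Carry a server VLAN over the trunk (VLAN 10 is just an assumption)
vlan 10 tagged trk1
```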

  For new construction, I'm having them run CAT6A between switches, to
allow for 10 gig in the future.  But I'm only running 1 gig on it
presently.  I don't think we will need more than that in the
foreseeable future.

> I do have several SANS that are connected to the core.

  SANs, or disk I/O of any kind, are among the things that benefit
the most from higher-speed links.  Since such uses are often confined
to one server room, very high speed links are also a lot more
practical.  (As opposed to a whole building, where both distance and
old wiring often hinder speeds above 1 gig.)


> I haven't run any port stats yet but I will.

  You should start there.  :-)
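
  Once you do pull the counters (via SNMP's IF-MIB, for instance),
turning two samples into a utilization figure is simple arithmetic.
A minimal sketch -- the sample values at the bottom are invented, not
from any real switch:

```python
def utilization_pct(octets_t0, octets_t1, interval_s, link_bps):
    """Percent utilization of a link from two byte-counter samples.

    octets_t0 / octets_t1: ifHCInOctets (or ifHCOutOctets) readings
    taken interval_s seconds apart; link_bps: link speed in bits/sec.
    """
    bits = (octets_t1 - octets_t0) * 8
    return 100.0 * bits / (interval_s * link_bps)

# e.g. 30 MB moved in 60 seconds on a 100 Mbit port:
print(round(utilization_pct(0, 30_000_000, 60, 100_000_000), 1))  # 4.0
```

  If a port sits near its ceiling at peak times, that is your upgrade
candidate; a port averaging a few percent is not.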

> What about port size?  Each of these switches has 24 ports.  I could
> continue with smaller switches or look for a few switches with many ports.

  This depends mostly on the total port count you need.  If you only
need, say, 50 to 100 ports, a modular chassis may well be overkill.

  Stacking smaller switches gives you some fault tolerance: If you
lose an entire switch, you can move critical stuff to the remaining
switches and drop less important nodes.  Or keep an entire spare
switch on hand.

  On the downside, stacking smaller switches may make management
harder, since you now have to deal with several entities rather than
one big chassis.  It may also limit your bandwidth: Most modular
chassis platforms offer more backplane bandwidth than you can get
using interconnect cables between stacked switches.  A chassis may
also be more space- and power-efficient.
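
  As a back-of-the-envelope check on that bandwidth point, compare
total edge capacity against the interconnect capacity.  The numbers
below are illustrative only, not specs for any particular model:

```python
def oversubscription(ports, port_gbps, interconnect_gbps):
    """Ratio of total edge bandwidth to interconnect bandwidth."""
    return (ports * port_gbps) / interconnect_gbps

# Two stacked 24-port gig switches joined by a 2 x 1 gig trunk:
# traffic crossing the stack link is oversubscribed 12:1.
print(oversubscription(24, 1, 2))   # 12.0
# A 24-port gig line card with, say, 80 gig to the backplane
# (hypothetical figure) is not oversubscribed at all:
print(oversubscription(24, 1, 80))  # 0.3
```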

> I recall seeing a Foundry core switch a few years ago and I think it had a
> few hundred ports.

  There are switch platforms that support > 1000 ports in a single
chassis.  It's all about what you need.

-- Ben
