On Sat, Sep 05, 2009 at 10:24:30AM -0400, Edward Ned Harvey spake thusly:
> This comment doesn't make any sense to me, would you care to expand?  If
> you're suggesting AoE and putting the OS and temp/scratch/swap space all on
> centralized storage ...  Then we're talking a single Ethernet bottleneck for
> all machines.

Given that just one ethernet link can easily transport 100MB/s and a
typical SATA hard drive can only do 70MB/s, how is the ethernet a
bottleneck? And I use LACP to bond two links together, so I have 2.5
hard drives' worth of bandwidth. But that's just raw throughput; we
all know that seeks are what usually slow you down. Each of my AoE
servers has 2G of RAM in it for caching.
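
The back-of-the-envelope arithmetic works out roughly like this (a
sketch; the per-link and per-drive figures are the ballpark numbers
above, not measurements):

```python
# Rough throughput comparison: bonded gigabit ethernet vs. SATA spindles.
# Figures are the rule-of-thumb numbers from the text, not benchmarks.
ETHERNET_MB_S = 100   # usable throughput of one gigabit link
SATA_MB_S = 70        # sequential throughput of one 7.2k SATA drive
LINKS = 2             # two links bonded with LACP

bonded = LINKS * ETHERNET_MB_S     # 200 MB/s aggregate
drives_worth = bonded / SATA_MB_S  # ~2.9 drives' worth of raw bandwidth

print(f"bonded link: {bonded} MB/s = {drives_worth:.1f} SATA drives")
```

The ideal ~2.9 figure is a bit higher than the 2.5 quoted above, which
is what you'd expect once protocol overhead and imperfect LACP flow
distribution are taken into account.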

> It may give the flexibility of relocating a virtual server from one
> hardware to another, but at the expense of diminished performance
> compared to local disk.  Here's what I have:

In my experience the performance is better than that of a typical
installation with 1-2 local disks.

> A dozen or so compute nodes, which all do heavy temporary disk IO, compute
> and memory intensive jobs, while under load doing batched jobs.  To
> alleviate the bottleneck of centralized storage, there are two solutions:
> SAN with nonoverlapping LUNS on separate physical disks, or local disk.  The
> local disk is a lot cheaper.  The only thing I see to gain with the SAN is
> the ability to VMotion or Live Migrate to different physical hardware, which
> is an unneeded feature in our environment.

Temporary disk IO? Will they all be doing this temporary disk IO at
the same time? I have at least 200MB/s of bandwidth between any
compute node and any storage node. That's a lot of IO for most
applications. And since the load can be spread out over a number of
disks on the SAN side, plus the cache in the SAN head, the IOPS can
really get up there.
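
To sketch why spreading load over spindles plus cache raises the IOPS
ceiling (the per-drive figures below are common rules of thumb, not
numbers from this setup):

```python
# Rough aggregate-IOPS estimate for a small SAN shelf.
# Per-drive figures are rule-of-thumb values, not measured ones.
IOPS_7200_SATA = 80   # random IOPS a 7.2k SATA drive can sustain
IOPS_15K_SAS = 175    # random IOPS a 15k SAS drive can sustain

def aggregate_iops(n_drives: int, per_drive: int, cache_hit: float = 0.0) -> float:
    """Spindle IOPS scaled up by the fraction of reads served from cache."""
    spindle = n_drives * per_drive
    # Cache hits cost (approximately) nothing, so the spindles only
    # service the misses: effective = spindle / (1 - hit_ratio).
    return spindle / (1.0 - cache_hit)

print(aggregate_iops(8, IOPS_7200_SATA))                 # 640.0
print(aggregate_iops(8, IOPS_7200_SATA, cache_hit=0.5))  # 1280.0
```

Eight ordinary SATA spindles already beat any one or two local disks
on random IO, and a decent cache hit rate in the SAN head multiplies
that further.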

I use Supermicro motherboards and chassis with Seagate SATA 7.2k RPM
drives (for general mass storage) or SAS 15k RPM drives (for database,
swap, and other higher-end storage), connected through HP ProCurve
switches. It is pretty darn cheap by either price/GB or
price/performance. Certainly cheaper and easier than Fibre Channel. I
hate messing around with HBA drivers, Fibre Channel switches, etc.
Doing everything with well-tested ethernet is cheap and easy. Yes, you
could go a lot faster with Fibre Channel, but bang for buck you can't
beat ethernet.
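
For concreteness, exporting a disk over AoE with the standard vblade
target and aoetools initiator looks roughly like this (a sketch; the
interface name eth0 and device /dev/sdb are placeholders):

```shell
# On the storage node: export local disk /dev/sdb as AoE shelf 0, slot 1.
vblade 0 1 eth0 /dev/sdb

# On the compute node: load the AoE initiator and discover targets.
modprobe aoe
aoe-discover

# The exported disk then shows up as /dev/etherd/e0.1 and can be
# partitioned, formatted, or used as swap like any local block device.
```

No HBA drivers, no zoning, no fabric switches; any machine on the
ethernet segment can see the target.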

Sure, if you need gigabytes of IO per second you are going to have to
spend big bucks. But I need availability and price/performance.

-- 
Tracy Reed
http://tracyreed.org


_______________________________________________
Tech mailing list
[email protected]
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/
