aspasia wrote:
> Thank you all so much for all the valuable information!!!  I have downloaded 
> the ZFS Admin Guide and will be indulging in ZFS over the Thanksgiving 
> weekend! ... so basically ... ZFS is the file system and volume manager rolled 
> into one, with a centralized form of FS/storage mgmt capability ... supports 
> RAIDZ and RAIDZ2 (dual parity, like hardware RAID 6) .... 
>
> some questions, if it's not too much to ask ...
>
> a.  ZFS on the Thumper hardware almost reminds me of the old-days 
> architecture of the Sun E4500 with Veritas VxFS, sans the need for "fsck" 
> ?? ... any thoughts, or am I jumping the gun?
>   

Yes and no.  In the sense that in the golden days (oh how I miss them)
of the Sun Midrange, around the turn of the century, you either had a
bunch of fibre channel JBODs (Sun A5200s and A1000s were popular) and
managed storage via VxVM/VxFS, or you had a big array with hardware
controllers (EMC, HDS, NetApp, etc.) that abstracted management into a
proprietary interface. 

In that sense, ya, this is a return to the days of the JBOD as
superior.  It's lower cost and more manageable.  I think the big change
is that there were things you got from an EMC that you didn't get from
VxVM, such as snapshots, clones, etc.  Now that ZFS has as many
features as the big SAN solutions, or more, why bother with them?
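To illustrate what I mean, here is a minimal sketch of those features
in action; the pool and dataset names (tank/home) are just
placeholders:

    # take a point-in-time snapshot of a filesystem
    zfs snapshot tank/home@before-upgrade

    # create a writable clone of that snapshot (unchanged blocks
    # are shared, so it costs almost nothing up front)
    zfs clone tank/home@before-upgrade tank/home-test

    # list snapshots to confirm
    zfs list -t snapshot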

So, ya, I totally see your point.

> b.  is ZFS a general purpose storage FS? ... seems like, when deployed in 
> conjunction with Lustre/Gluster, it would be a good solution for HPC 
> environments - large sequential IO type of environments? ... am I right?
>   

The next major release of Lustre will actually utilize ZFS.   See
http://wiki.lustre.org/index.php?title=Lustre_OSS/MDS_with_ZFS_DMU

ZFS/Lustre will be a prime solution, not only for HPC but for a variety
of enterprise environments as well.  I'm personally interested in using
it to leverage excess capacity on rack servers for backups and such.

> c.  What are the typical application environments you have deployed ZFS on 
> Thumper in (and any favorite clustering FS/SW)?
>   


When I first started with Thumpers I'd just create one big RAIDZ2 pool
(four 11-disk RAIDZ2 vdevs) to maximize capacity and protection, and
then use them as NFS servers.  Huge mistake.  As you can read in the
ZFS manual, RAIDZ fixed the "write hole" in RAID5 by not allowing
partial stripe writes.  This, combined with aggressive prefetch, made
NFS storage of small files (web images, email, etc.) a performance
disaster.
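For reference, that layout looks roughly like the following; the pool
name and the c#t#d# device names are illustrative, and the real names
on an X4500 will differ:

    # one big pool: four 11-disk RAIDZ2 vdevs (44 of the 48 disks;
    # device names are illustrative)
    zpool create tank \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 \
        raidz2 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
        raidz2 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c4t0d0 \
        raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t0d0 c5t1d0 c5t2d0 c5t3d0

    # verify the vdev layout
    zpool status tank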

There are two ways to get the most out of a Thumper:

1) Large sequential workloads, such as video streaming, video editing,
or the like, work very well in the configuration above, because you're
almost always making reads and writes wider than the stripe of a given
RAIDZ.  In these cases ZFS's prefetch really shines.

2) Look at the Thumper not as one big storage device or LUN, but
rather as a bunch of centralized disks that can be sliced up.  That is,
don't create 1 pool, create 20.  If a server or client needs a high
performance NFS server, create a pool on 4 disks or so, so that no one
else is competing for the IO of those disks.  If you, for instance,
used 1 Thumper to create 20 pools, each of which is a mirror, you'd
have a really nice solution.
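
A rough sketch of that slicing, again with made-up pool and device
names:

    # dedicated 2-disk mirrored pool for one demanding NFS client
    zpool create web01-pool mirror c0t2d0 c1t2d0
    zfs set sharenfs=on web01-pool

    # an independent pool; its IO never competes with web01-pool's
    zpool create mail01-pool mirror c0t3d0 c1t3d0
    zfs set sharenfs=on mail01-pool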

There is a lot of grey area in between, and while not many people talk
about it, the Thumper is a platform that can fit almost any situation.
There is, however, no one software configuration (zpool layout) that
solves all problems.

While there are links in the ZFS Admin Guide, I recommend reading
through Roch's blog (http://blogs.sun.com/roch/).  He's in Sun
Performance Engineering, and his blog is required reading for anyone
serious about ZFS deployments.  Get a donut and a pot of coffee and
just read everything he's written; even if you don't understand it all,
just absorb it.


benr.