thanks for the quick response!!!!!

> The next major release of Lustre will actually utilize ZFS.   See
> http://wiki.lustre.org/index.php?title=Lustre_OSS/MDS_with_ZFS_DMU

this would be cool ...

>
> ZFS/Lustre will be a prime solution, not only for HPC, but for a variety
> of enterprise environments as well.  I'm personally interested in using it
> to leverage excess capacity on rack servers for backups and such.
>

I can see it really taking off in HPC .. for example, having a
separate MDS for metadata should improve write performance for very
large apps, since they generate relatively little metadata traffic to
the MDS ...

however, I wonder if it will be a problem in an environment with lots
of very small, random files?  I would assume the MDS will eventually
become a bottleneck if every operation needs IO on both the MDS and
the OSS servers .. i'm guessing, that is.

> When I first started with Thumpers I'd just create a big RAIDZ2 pool
> (4*11 disk RAIDZ) to maximize capacity and protection, and then use them
> as NFS servers.  Huge mistake.   As you can read in the ZFS manual,
> RAIDZ fixed the "write hole" in RAID5 by not allowing partial stripe
> writes.  This, combined with aggressive pre-fetch, made NFS storage of
> small files (web images, email, etc) a performance disaster.

well, as I mentioned above, there does not seem to be a storage
solution (from Sun or anyone else) that specifically addresses the
characteristics of small-file data (static web content and email) ...
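
on the ZFS side, the first knobs I'd reach for (just a sketch - the
pool/dataset names and values below are made-up examples, not tested
recommendations) would be shrinking the recordsize toward the typical
file size and turning off the prefetch you mentioned:

    # hypothetical dataset holding mail/web content
    zfs set recordsize=8K tank/mail    # closer to the typical small-file size
    zfs set atime=off tank/mail        # skip the extra write on every read
    zfs set sharenfs=on tank/mail

    # system-wide Solaris tunable to disable ZFS file-level prefetch,
    # added to /etc/system:
    #   set zfs:zfs_prefetch_disable = 1

no idea yet how much that buys you in practice, though.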


> 1) Large sequential workloads, such as video streaming, video editing,
> or the like, work very well in the configuration above because your
> almost always making reads and writes greater than the width of a given
> RAIDZ.  In these cases ZFS's progressive prefetch really shines.

totally agree!

>
> 2) If you look at Thumper not as one big storage device or LUN... but
> rather as a bunch of centralized disks that can be sliced up.  That is,
> don't create 1 pool, create 20.  If a server or client needs a high
> performance NFS server, create a pool on 4 disks or something so that no
> one else is competing for the IO of those disks.  If you, for instance,
> used 1 thumper to create 20 pools, each of which is a mirror, you'd have
> a really nice solution.
>

so the Thumper with ZFS is flexible and "moldable" .. you can use it
as one big pool/LUN and slice it up, or create separate pools from
subsets of disks, etc. ... all of which can be exported over NFS ....
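
just so I'm picturing the two extremes correctly (a rough sketch -
pool and device names are made up; a real X4500 has its own c#t#d#
layout):

    # one big pool striped across wide raidz2 vdevs: maximum capacity,
    # but every client competes for the same spindles
    zpool create bigtank \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

    # versus many small dedicated pools, each a 2-disk mirror, so a
    # busy NFS client only beats up its own disks
    zpool create web01  mirror c2t0d0 c2t1d0
    zpool create mail01 mirror c3t0d0 c3t1d0
    zpool create db01   mirror c4t0d0 c4t1d0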

The problem with NFS in general is that when a server's storage is
maxed out, you need to deploy another silo, so as you grow you end up
with a "mount map" nightmare ....

Would you say that with Thumper/ZFS/Gluster+Unify - and proper
configuration/tweaking - one would still be able to enjoy the
flexibility and features of ZFS while also presenting a single
namespace??
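
(for context, here's roughly what I imagine the Gluster side would
look like - a sketch from memory of the Unify docs, with made-up host
and volume names, so please correct me if the translator options are
off:)

    # client-side volfile sketch: two thumpers unified into one namespace
    volume thumper1
      type protocol/client
      option transport-type tcp
      option remote-host thumper1.example.com
      option remote-subvolume brick
    end-volume

    volume thumper2
      type protocol/client
      option transport-type tcp
      option remote-host thumper2.example.com
      option remote-subvolume brick
    end-volume

    volume ns
      type protocol/client
      option transport-type tcp
      option remote-host thumper1.example.com
      option remote-subvolume brick-ns   # small volume holding just the directory tree
    end-volume

    volume unify0
      type cluster/unify
      option namespace ns
      option scheduler rr                # round-robin new files across the thumpers
      subvolumes thumper1 thumper2
    end-volume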

> While there are links in the ZFS Admin Guide, I recommend reading
> through Roch's blog (http://blogs.sun.com/roch/).  He's in Sun

this is a GREAT site!!  thanks!

>

Question - has anyone deployed Thumper/ZFS (over NFS or
Gluster/Lustre) in an environment dominated by small random files?
any thoughts?  .. If not, what are the common storage alternatives
for addressing the unique requirements of random small files?

thanks again!