Dominic Kay wrote:
> Hi
>
> Firstly apologies for the spam if you got this email via multiple aliases.
>
> I'm trying to document a number of common scenarios where ZFS is used 
> as part of the solution, such as an email server, $homeserver, RDBMS 
> and so forth, but taken from real implementations where things worked 
> and, equally importantly, threw up things that needed to be avoided 
> (even if that was the whole of ZFS!).
>
> I'm not looking to replace the Best Practices or Evil Tuning guides 
> but to take a slightly different slant.  If you have been involved in 
> a ZFS implementation small or large and would like to discuss it 
> either in confidence or as a referenceable case study that can be 
> written up, I'd be grateful if you'd make contact.
>
> -- 
> Dominic Kay
> http://blogs.sun.com/dom

For all the storage under my management, we are deploying ZFS going 
forward.  There have been issues, to be sure, though none of them 
show-stoppers.  I agree with other posters that the way the z* 
commands lock up on a failed device is really not good, and it would 
be nice to be able to remove devices from a zpool.  There have been 
other performance issues that are more the fault of our SAN nodes 
than of ZFS.  But the ease of management, the effectively unlimited 
nature of everything ZFS (from volume sizes to the number of file 
systems), built-in snapshots, and the confidence we have in our data 
make ZFS a winner.
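To give a flavor of that ease of management, creating a new file 
system and snapshotting it are each a single command.  The pool and 
dataset names below are made up, not our actual layout:

    # create a new file system; it is mounted and ready immediately
    zfs create tank/home/jsmith
    # take a snapshot; it is instant and takes no extra space up front
    zfs snapshot tank/home/jsmith@2008-05-01
    # roll back to the snapshot if something goes wrong
    zfs rollback tank/home/jsmith@2008-05-01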

The way we've deployed ZFS has been to map iSCSI devices from our SAN.  
I know this isn't an ideal way to deploy ZFS, but SANs do offer 
flexibility that direct-attached drives do not.  Performance is now 
sufficient for our needs, but it wasn't at first.  We do everything 
here on the cheap; we have to.  After all, this is University research 
;)  Anyway, we buy commodity x86 servers and use software iSCSI.  Most 
of our iSCSI nodes run Open-E iSCSI-R3.  The latest version is 
actually quite quick, which wasn't always the case.  I am also 
experimenting with ZFS on the iSCSI target, but haven't finished 
validating that yet.
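For anyone curious, the initiator side is just the standard Solaris 
iSCSI plumbing with a zpool on top.  Roughly like this, with the 
address and device names as placeholders rather than our real config:

    # point the Solaris iSCSI initiator at the SAN node
    iscsiadm add discovery-address 192.168.10.1:3260
    iscsiadm modify discovery --sendtargets enable
    # create device nodes for the newly discovered LUNs
    devfsadm -i iscsi
    # build a pool on the iSCSI LUNs; the flat layout here is only an
    # example, since how much redundancy to add in ZFS depends on what
    # the SAN nodes already provide
    zpool create tank c2t0d0 c2t1d0 c2t2d0 c2t3d0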

I've also rebuilt an older 24-disk SATA chassis with the following parts:

Motherboard: Supermicro PDSME+
Processor: Intel Xeon X3210 Kentsfield 2.13GHz, 2 x 4MB L2 cache, LGA 775, quad-core
Disk controllers x3: Supermicro AOC-SAT2-MV8 8-port SATA
Hard disks x24: WD 1TB RE2-GP
RAM: Crucial, 4 x 2GB unbuffered ECC PC2-5300 (8GB total)
New power supplies...

The PDSME+ motherboard is on the Solaris HCL, and it has four PCI-X 
slots, so using three of the Supermicro MV8s is no problem.  This is 
obviously a standalone system, but it will be for nearline backup data 
and doesn't have the same expansion requirements as our other servers.  
The thing about this guy is how smokin' fast it is.  I've set it up on 
snv_86, with 4 x 6-drive raidz2 stripes, and I'm seeing up to 
450MB/sec write and 900MB/sec read speeds.  We can't get data into it 
anywhere near that fast, but the potential is awesome.  And it was 
really cheap for this amount of storage.
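For reference, the pool layout on that box boils down to something 
like the following, with the c#t#d# names standing in for the 24 
drives spread across the three MV8 controllers:

    # four 6-disk raidz2 vdevs striped together in one pool
    zpool create backup \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz2 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
        raidz2 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c3t0d0 c3t1d0 \
        raidz2 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0

Writes stripe across the four vdevs, which is where the aggregate 
throughput comes from, and each vdev can survive two disk failures.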

Our total storage on ZFS is now around 103TB: some user home 
directories, some software distribution, and a whole lot of scientific 
data.  I compress almost everything, since our bandwidth tends to be 
pinched at the SAN rather than at the head nodes, so we can afford it.
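Compression is just a per-dataset property, so it's easy to apply 
broadly and to check whether it's paying off.  Something like this, 
with the dataset name made up:

    # enable compression; only data written after this point is compressed
    zfs set compression=on tank/data
    # see how much space compression is actually saving
    zfs get compressratio tank/data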

I sleep at night, and the users don't see problems.  I'm a happy camper.

Cheers,

Jon