I know there was a thread about this a few months ago.
However, with the cost of SSDs falling the way it has, the idea of an Oracle
X4270 M2/Cisco C210 M2/IBM x3650 M3 class of machine with a 13-drive RAIDZ2
zpool (with 1 hot spare) is really starting to sound alluring to me/us,
especially with drives like the OCZ Deneva 2 (SandForce 2281 with a supercap),
the SanDisk (Pliant) Lightning series, or perhaps the Hitachi SSD400M coming
in at prices that aren't a whole lot more than 600GB 15k drives (from an
enterprise perspective, anyway).
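For concreteness, the layout I have in mind would be created roughly like this. This is just a sketch; the pool name and the c0tXd0-style device names are placeholders, not our actual hardware:

```shell
# 13 SSDs total: one 12-disk RAID-Z2 vdev plus one hot spare.
# Pool name and device names are illustrative placeholders.
zpool create tank raidz2 \
    c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
    c0t6d0 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0 \
    spare c0t12d0

# Confirm the vdev layout and spare assignment
zpool status tank
```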
Our systems with a similar (OLTP) load are frequently I/O bound; e.g., a server
with a Sun 2540 FC array with 11x 300GB 15k SAS drives and 2x Intel X25-E's for
ZIL/L2ARC. The extra bandwidth would be welcome.
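On that existing box, the separate log and cache devices were attached along these lines (again only a sketch, with placeholder device names; how the two X25-E's are actually sliced between slog and L2ARC varies):

```shell
# Mirrored ZIL (slog) plus an L2ARC device; names are placeholders.
zpool add tank log mirror c1t0d0 c1t1d0
zpool add tank cache c1t2d0
```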
Am I crazy for putting something like this into production using Solaris 10/11?
On paper, it really seems ideal for our needs.
Also, maybe I read it wrong, but why were zpools with large numbers of physical
drives (e.g., 20+) frowned upon in the previous thread about hardware RAID and
zpools? I know that ZFS != WAFL, but wide aggregates are so common in the
NetApp world that I was surprised to read that. Would a 20-drive RAID-Z2 pool
really be unable to recover (resilver) from a drive failure? That seems to fly
in the face of the X4500 boxes from a few years ago.
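For what it's worth, here's the rough resilver math as I understand it. A back-of-envelope sketch only; the 50 MB/s effective rebuild rate and 80% pool fullness are assumptions I made up for illustration, and real resilver speed depends heavily on fragmentation and concurrent load:

```python
# Back-of-envelope resilver-time estimate for one failed disk in a
# RAID-Z vdev. The rebuild rate is an ASSUMED effective figure for a
# busy pool of spinning disks, not a measured number.

def resilver_hours(used_gb_per_disk: float, rebuild_mb_s: float) -> float:
    """Hours to reconstruct one failed disk's worth of used data."""
    return used_gb_per_disk * 1024 / rebuild_mb_s / 3600

# 600 GB disk, 80% full, 50 MB/s effective rate under OLTP load
print(round(resilver_hours(600 * 0.8, 50), 1))  # ~2.7 hours
```

Even if the real rate is half that, it doesn't obviously make a wide vdev unrecoverable, which is why the earlier thread's advice surprised me.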
zfs-discuss mailing list