On Mar 15, 2013, at 6:09 PM, Marion Hakanson <hakan...@ohsu.edu> wrote:

> Greetings,
> Has anyone out there built a 1-petabyte pool?

Yes, I've done quite a few.

>  I've been asked to look
> into this, and was told "low performance" is fine, workload is likely
> to be write-once, read-occasionally, archive storage of gene sequencing
> data.  Probably a single 10Gbit NIC for connectivity is sufficient.
> We've had decent success with the 45-slot, 4U SuperMicro SAS disk chassis,
> using 4TB "nearline SAS" drives, giving over 100TB usable space (raidz3).
> Back-of-the-envelope might suggest stacking up eight to ten of those,
> depending if you want a "raw marketing petabyte", or a proper "power-of-two
> usable petabyte".

Yes. NB, for the PHB, 2^N is found to be less impressive than 10^N.
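The back-of-envelope above can be made concrete. This sketch assumes an illustrative raidz3 layout of four 11-disk vdevs per 45-slot chassis (44 disks used, one spare) and ignores ZFS metadata overhead; the layout is my assumption, not necessarily what the original poster ran.

```python
import math

DRIVE_BYTES = 4e12  # 4 TB "nearline SAS" drive, decimal bytes

# Assumed layout: four 11-disk raidz3 vdevs per 45-slot chassis
# (44 disks in vdevs, 1 hot spare) -- illustrative, not prescriptive.
vdevs, width, parity = 4, 11, 3
usable_per_chassis = vdevs * (width - parity) * DRIVE_BYTES  # 128e12 bytes

targets = [
    ("raw marketing petabyte (10^15 B)", 10**15),
    ("power-of-two petabyte (2^50 B)", 2**50),
]
for name, target in targets:
    chassis = math.ceil(target / usable_per_chassis)
    print(f"{name}: {chassis} chassis")
```

With ~128 TB usable per chassis, the two petabyte definitions land at 8 and 9 chassis respectively, consistent with the "eight to ten" estimate above once real-world overhead is added.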

> I get a little nervous at the thought of hooking all that up to a single
> server, and am a little vague on how much RAM would be advisable, other
> than "as much as will fit" (:-).  Then again, I've been waiting for
> something like pNFS/NFSv4.1 to be usable for gluing together multiple
> NFS servers into a single global namespace, without any sign of that
> happening anytime soon.

NFSv4 or DFS (or even a clever sysadmin + automount) offers a single
namespace without the complexity of NFSv4.1/pNFS, Lustre, GlusterFS, etc.
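The automount approach amounts to a couple of map files. A minimal sketch, assuming hypothetical server and export names (nfs1, nfs2, /pool/archive are made up for illustration):

```
# /etc/auto.master -- hand /archive to an indirect autofs map
/archive  /etc/auto.archive

# /etc/auto.archive -- each key mounts from a different NFS server,
# so clients see one /archive namespace spanning several servers
genomics1  -fstype=nfs,ro  nfs1:/pool/archive/genomics1
genomics2  -fstype=nfs,ro  nfs2:/pool/archive/genomics2
```

Each server stays an independent ZFS pool of manageable size; the namespace glue lives entirely on the clients.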

> So, has anyone done this?  Or come close to it?  Thoughts, even if you
> haven't done it yourself?

Don't forget about backups :-)
 -- richard



zfs-discuss mailing list