Well, off the top of my head:

2 x storage heads, 4 x 10G, 256 GB RAM, 2 x Intel E5 CPUs
8 x 60-bay JBODs with 60 x 4TB SAS drives each
RAIDZ2 striped over the 8 JBODs

That should fit comfortably within one rack and provide 1 PB of storage.
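For what it's worth, a quick sanity check of the raw-vs-usable arithmetic for that build (the 10-disk raidz2 vdev layout below is my assumption, not part of the proposal; actual usable space depends on vdev width and pool overhead):

```python
# Back-of-envelope capacity for 8 x 60-bay JBODs of 4TB drives.
# Assumption (mine): 10-disk raidz2 vdevs, i.e. 8 data + 2 parity.
drives = 8 * 60                    # 480 drives total
drive_tb = 4                       # 4 TB (decimal) nearline SAS
raw_tb = drives * drive_tb         # raw capacity before parity

vdev_width, parity = 10, 2
vdevs = drives // vdev_width       # 48 vdevs
usable_tb = vdevs * (vdev_width - parity) * drive_tb

usable_tib = usable_tb * 1e12 / 2**40   # same figure in binary TiB
print(raw_tb, usable_tb, round(usable_tib))   # → 1920 1536 1397
```

So even after raidz2 parity there is roughly 1.5 decimal PB usable, which leaves headroom over the 1 PB target before filesystem overhead and free-space slack.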


Kristoffer Sheather
Cloud Central
Scale Your Data Center In The Cloud 
Phone: 1300 144 007 | Mobile: +61 414 573 130
Skype: kristoffer.sheather

 From: "Marion Hakanson" <hakan...@ohsu.edu>
Sent: Saturday, March 16, 2013 12:12 PM
To: z...@lists.illumos.org
Subject: [zfs] Petabyte pool?


Has anyone out there built a 1-petabyte pool?  I've been asked to look
into this, and was told "low performance" is fine, workload is likely
to be write-once, read-occasionally, archive storage of gene sequencing
data.  Probably a single 10Gbit NIC for connectivity is sufficient.
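On that NIC sizing, a quick back-of-envelope on how long a single 10Gbit link takes to fill a petabyte (assuming decimal units and full line rate, which is optimistic, so treat this as a lower bound):

```python
# Time to write 1 PB over one 10 Gbit/s NIC at line rate,
# ignoring protocol overhead -- a best-case lower bound.
pb_bytes = 1e15                  # 1 decimal petabyte
link_bytes_per_s = 10e9 / 8      # 10 Gbit/s = 1.25 GB/s
seconds = pb_bytes / link_bytes_per_s
days = seconds / 86400
print(round(days, 1))            # → 9.3
```

Call it a week and a half of sustained writes to fill the pool, which seems consistent with a write-once archive workload not needing more than one 10G link.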

We've had decent success with the 45-slot, 4U SuperMicro SAS disk chassis,
using 4TB "nearline SAS" drives, giving over 100TB usable space (raidz3).
Back-of-the-envelope math suggests stacking up eight to ten of those,
depending on whether you want a "raw marketing petabyte" or a proper
"usable petabyte".
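One way that envelope math could go (the 15-disk raidz3 vdev width and the petabyte definitions below are my assumptions, and real pools need extra chassis for overhead and free-space headroom):

```python
# Per-chassis math for a 45-slot chassis of 4TB drives.
# Assumption (mine): 15-disk raidz3 vdevs, i.e. 12 data + 3 parity.
import math

slots, drive_tb = 45, 4
raw_tb = slots * drive_tb             # 180 decimal TB raw per chassis
vdevs = slots // 15                   # 3 vdevs per chassis
usable_tb = vdevs * 12 * drive_tb     # 144 decimal TB usable

pb_tb = 1000                          # "raw marketing petabyte"
pib_tb = 2**50 / 1e12                 # binary PiB, ~1125.9 decimal TB

chassis_raw = math.ceil(pb_tb / raw_tb)         # chassis for a raw PB
chassis_usable = math.ceil(pib_tb / usable_tb)  # chassis for a usable PiB
print(chassis_raw, chassis_usable)              # → 6 8
```

That puts the floor at six to eight chassis before accounting for pool overhead and the usual advice not to run a pool near full, which lines up with the eight-to-ten estimate.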

I get a little nervous at the thought of hooking all that up to a single
server, and am a little vague on how much RAM would be advisable, other
than "as much as will fit" (:-).  Then again, I've been waiting for
something like pNFS/NFSv4.1 to be usable for gluing together multiple
NFS servers into a single global namespace, without any sign of that
happening anytime soon.

So, has anyone done this?  Or come close to it?  Thoughts, even if you
haven't done it yourself?

Thanks and regards,


Archives: https://www.listbox.com/member/archive/182191/=now

zfs-discuss mailing list
