Actually, you could use 3TB drives: with a 6/8 (6 data + 2 parity) RAIDZ2 
stripe you'd still achieve 1080 TB usable.
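A quick back-of-the-envelope check of that figure (a sketch: the 480-drive count is 8 JBODs x 60 bays from the configuration quoted below, "TB" is the drive vendor's decimal TB, and ZFS metadata, slop space, and spares are ignored):

```python
# Sketch: usable capacity of a RAIDZ2 pool built from 6-data/8-disk vdevs.
# Assumes 480 drives (8 JBODs x 60 bays) and vendor-decimal TB.

def usable_tb(drive_count, drive_tb, data_disks, vdev_width):
    """Raw capacity scaled by the data/parity ratio of each vdev."""
    vdevs = drive_count // vdev_width
    return vdevs * data_disks * drive_tb

print(usable_tb(480, 3, 6, 8))  # 3TB drives in 6+2 RAIDZ2 -> 1080
```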

You'll also need 8-16 SAS ports available on each storage head to provide 
redundant, multi-pathed SAS connectivity to the JBODs; I'd recommend LSI 
9207-8E's for those and Intel X520-DA2's for the 10G NICs.

 From: "Kristoffer Sheather @ CloudCentral" 
Sent: Saturday, March 16, 2013 12:21 PM
Subject: re: [zfs] Petabyte pool?

Well, off the top of my head:

2 x Storage Heads, 4 x 10G, 256G RAM, 2 x Intel E5 CPUs
8 x 60-Bay JBODs with 60 x 4TB SAS drives
RAIDZ2 striped over the 8 JBODs

That should fit comfortably within 1 rack and provide 1 PB of storage.
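As a sanity check on that claim (a sketch: the 6+2 RAIDZ2 layout is an assumption consistent with the follow-up, and the petabyte is compared both as 10^15 bytes and as 2^50 bytes):

```python
# Sketch: does 8 JBODs x 60 x 4TB clear a petabyte after RAIDZ2 parity?
# Assumes 6+2 (6 data / 8 disk) vdevs; ignores spares and ZFS overhead.

raw_tb = 8 * 60 * 4              # 1920 decimal TB raw
usable_tb = raw_tb * 6 // 8      # 1440 decimal TB after parity

marketing_pb_tb = 1000           # 10^15 bytes, in decimal TB
binary_pb_tb = 2**50 / 1e12      # ~1125.9 decimal TB

print(usable_tb >= marketing_pb_tb, usable_tb >= binary_pb_tb)  # True True
```

Either way you count a petabyte, 4TB drives leave comfortable headroom.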


Kristoffer Sheather
Cloud Central
Scale Your Data Center In The Cloud 
Phone: 1300 144 007 | Mobile: +61 414 573 130 | Skype: kristoffer.sheather

 From: "Marion Hakanson"
Sent: Saturday, March 16, 2013 12:12 PM
Subject: [zfs] Petabyte pool?


Has anyone out there built a 1-petabyte pool?  I've been asked to look
into this, and was told "low performance" is fine, workload is likely
to be write-once, read-occasionally, archive storage of gene sequencing
data.  Probably a single 10Gbit NIC for connectivity is sufficient.

We've had decent success with the 45-slot, 4U SuperMicro SAS disk chassis,
using 4TB "nearline SAS" drives, giving over 100TB usable space (raidz3).
Back-of-the-envelope might suggest stacking up eight to ten of those,
depending on whether you want a "raw marketing petabyte" or a proper
"usable petabyte".
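That back-of-the-envelope can be made concrete (a sketch: the ~128 TB usable per chassis assumes four 11-wide raidz3 vdevs of 4TB drives plus a spare, which is one hypothetical layout consistent with "over 100TB usable"):

```python
import math

# Sketch: chassis counts for the two petabyte definitions.
# Assumes ~128 TB usable per 45-slot chassis (4 x 11-wide raidz3 vdevs,
# 8 data disks each, 4TB drives, one hot spare) -- a hypothetical layout.

usable_per_chassis_tb = 4 * 8 * 4   # 128 decimal TB usable
raw_per_chassis_tb = 45 * 4         # 180 decimal TB raw

marketing_pb_tb = 1000              # 10^15 bytes, in decimal TB
proper_pb_tb = 2**50 / 1e12         # ~1125.9 decimal TB

print(math.ceil(marketing_pb_tb / usable_per_chassis_tb),  # 8 chassis
      math.ceil(proper_pb_tb / usable_per_chassis_tb))     # 9 chassis
```

Under those assumptions the answer lands squarely in the eight-to-ten range.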

I get a little nervous at the thought of hooking all that up to a single
server, and am a little vague on how much RAM would be advisable, other
than "as much as will fit" (:-).  Then again, I've been waiting for
something like pNFS/NFSv4.1 to be usable for gluing together multiple
NFS servers into a single global namespace, without any sign of that
happening anytime soon.

So, has anyone done this?  Or come close to it?  Thoughts, even if you
haven't done it yourself?

Thanks and regards,


zfs-discuss mailing list