Re: [zfs-discuss] [zfs] Re: Petabyte pool?

2013-03-17 Thread Richard Yao
On 03/16/2013 12:57 AM, Richard Elling wrote:
 On Mar 15, 2013, at 6:09 PM, Marion Hakanson hakan...@ohsu.edu wrote:
 So, has anyone done this?  Or come close to it?  Thoughts, even if you
 haven't done it yourself?
 
 Don't forget about backups :-)
  -- richard

Transferring 1 PB over a 10-gigabit link will take at least 10 days once
protocol overhead is taken into account. The backup system should have a
dedicated 10-gigabit link at a minimum, and incremental send/recv will be
extremely important.
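
For the sake of the arithmetic, a rough sketch (the 80% usable-throughput
figure and the 10 TB/day delta below are assumptions, not measurements):

    # Back-of-the-envelope transfer times for a 1 PB pool over 10 GbE.
    pool_bytes = 1e15                # 1 PB
    line_rate_bps = 10e9             # 10 gigabit/s
    efficiency = 0.8                 # assumed usable fraction of line rate

    full_copy_s = pool_bytes * 8 / (line_rate_bps * efficiency)
    print(f"full copy: {full_copy_s / 86400:.1f} days")       # ~11.6 days

    # Incremental send/recv only moves the delta; assume 10 TB changes/day:
    delta_bytes = 10e12
    incr_s = delta_bytes * 8 / (line_rate_bps * efficiency)
    print(f"daily incremental: {incr_s / 3600:.1f} hours")    # ~2.8 hours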



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zfs] Re: Petabyte pool?

2013-03-17 Thread Trey Palmer
I know it's heresy these days, but given the I/O throughput you're looking for 
and the amount you're going to spend on disks, a T5-2 could make sense when 
it's released (I think) later this month.

Crucial sells RAM they guarantee for use in SPARC T-series, and since you're at 
an edu the academic discount is 35%.  So a T4-2 with 512GB RAM could be had 
for under $35K shortly after release, 4-5 months before the E5 Xeon was 
released.  It seemed a surprisingly good deal to me.

The T5-2 has 32x3.6GHz cores, 256 threads and ~150GB/s aggregate memory 
bandwidth.  In my testing a T4-1 can compete with a 12-core E5 box on I/O and 
memory bandwidth, and this thing is about 5 times bigger than the T4-1.  It 
should have at least 10 PCIe slots and will take 32 DIMMs minimum, maybe 64.  And 
it is likely to cost you less than $50K with aftermarket RAM.

-- Trey



On Mar 15, 2013, at 10:35 PM, Marion Hakanson hakan...@ohsu.edu wrote:

 Ray said:
 Using a Dell R720 head unit, plus a bunch of Dell MD1200 JBODs dual pathed
 to a couple of LSI SAS switches.
 Marion said:
 How many HBA's in the R720?
 Ray said:
 We have qty 2 LSI SAS 9201-16e HBA's (Dell resold[1]).
 
 Sounds similar in approach to the Aberdeen product another sender referred to,
 with SAS switch layout:
  http://www.aberdeeninc.com/images/1-up-petarack2.jpg
 
 One concern I had is that I compared our SuperMicro JBOD with 40x 4TB drives
 in it, connected via a dual-port LSI SAS 9200-8e HBA, to the same pool layout
 on a 40-slot server with 40x SATA drives in it.  But the server uses no
 expanders, instead using SAS-to-SATA octopus cables to connect the drives
 directly to three internal SAS HBA's (2x 9201-16i's, 1x 9211-8i).
 
 What I found was that the internal pool was significantly faster for both
 sequential and random I/O than the pool on the external JBOD.
 
 My conclusion was that I would not want to exceed ~48 drives on a single
 8-port SAS HBA.  So I thought that running the I/O of all your hundreds
 of drives through only two HBA's would be a bottleneck.
 
 LSI's specs say 4800MBytes/sec for an 8-port SAS HBA, but 4000MBytes/sec
 for that card in an x8 PCIe-2.0 slot.  Sure, the newer 9207-8e is rated
 at 8000MBytes/sec in an x8 PCIe-3.0 slot, but it still has only the same
 8 SAS ports going at 4800MBytes/sec.
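 
 For reference, here is the arithmetic behind those ratings (a sketch using
 nominal per-port and per-lane rates; real hardware won't quite reach them):
 
     # Nominal ceilings behind the HBA spec numbers above.
     sas2_port_MBps = 600     # 6 Gb/s SAS-2 port, ~600 MB/s after 8b/10b encoding
     pcie2_lane_MBps = 500    # PCIe 2.0 lane, ~500 MB/s usable
     pcie3_lane_MBps = 1000   # PCIe 3.0 lane, ~1 GB/s usable (approximate)
 
     ports = lanes = 8
     print("SAS side:   ", ports * sas2_port_MBps, "MB/s")   # 4800
     print("PCIe 2.0 x8:", lanes * pcie2_lane_MBps, "MB/s")  # 4000 (9200-8e slot limit)
     print("PCIe 3.0 x8:", lanes * pcie3_lane_MBps, "MB/s")  # 8000 (9207-8e slot rating)
 
     # Per-drive share with ~48 drives behind one 8-port HBA:
     drives = 48
     print("per drive:  ", min(ports * sas2_port_MBps,
                               lanes * pcie2_lane_MBps) // drives, "MB/s")  # ~83 MB/s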
 
 Yes, I know the disks probably can't go that fast.  But in my tests
 above, the internal 40-disk pool measures 2000MBytes/sec sequential
 reads and writes, while the external 40-disk JBOD measures at 1500
 to 1700 MBytes/sec.  Not a lot slower, but significantly slower, so
 I do think the number of HBA's makes a difference.
 
 At the moment, I'm leaning toward piling six, eight, or ten HBA's into
 a server, preferably one with dual IOH's (thus two PCIe busses), and
 connecting dual-path JBOD's in that manner.
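 
 A quick sanity check on that layout (a sketch; the ~4000 MB/s per HBA and
 ~44 drives per HBA are assumed planning numbers, not measurements):
 
     # Rough aggregate ceiling for a multi-HBA, no-SAS-switch build.
     hba_MBps = 4000                 # assumed usable per HBA (PCIe 2.0 x8 slot limit)
     drives_per_hba = 44             # stays under the ~48-drive-per-HBA comfort zone
     for hbas in (6, 8, 10):
         print(f"{hbas} HBAs: ~{hbas * hba_MBps / 1000:.0f} GB/s aggregate, "
               f"~{hbas * drives_per_hba} drives")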
 
 I hadn't looked into SAS switches much, but they do look more reliable
 than daisy-chaining a bunch of JBOD's together.  I just haven't seen
 how to get more bandwidth through them to a single host.
 
 Regards,
 
 Marion
 
 
 
 