Hey Adam,

>> My first posting contained my use-cases, but I'd say that video 
>> recording/serving will dominate the disk utilization - that's why I'm 
>> pushing for 4 striped sets of RAIDZ2 - I think that it would be all 
>> around goodness
>
> It sounds good, that way, but (in theory), you'll see random I/O 
> suffer a bit when using RAID-Z2: the extra parity will drag 
> performance down a bit.
I know what you are saying, but I wonder if it would be noticeable?  I 
think my worst-case scenario would be 3 MythTV frontends watching 1080p 
content while 4 tuners are recording 1080p content - with each 1080p 
stream at 27 Mb/s, that would be 108 Mb/s of writes and 81 Mb/s of reads 
(all sequential I/O) - does that sound like it would even come close to 
pushing a 4(4+2) array?
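Spelled out (a quick sketch in Python; the 27 Mb/s per 1080p stream is my 
working assumption):

    mbps = 27                # assumed bit rate of one 1080p stream, Mb/s
    writes = 4 * mbps        # 4 tuners recording  -> 108 Mb/s
    reads = 3 * mbps         # 3 frontends playing ->  81 Mb/s
    print(writes, reads, writes + reads)   # 108 81 189 (all sequential)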



> The RAS guys will flinch at this, but have you considered 8*(2+1) 
> RAID-Z1?
That configuration showed up in the output of the program I posted back 
in July 
(http://mail.opensolaris.org/pipermail/zfs-discuss/2007-July/041778.html):

    24 bays w/ 500 GB drives having MTBF=5 years
      - can have 8 (2+1) w/ 0 spares providing 8000 GB with MTTDL of 95.05 years
      - can have 6 (2+2) w/ 0 spares providing 6000 GB with MTTDL of 28911.68 years
      - can have 4 (4+1) w/ 4 spares providing 8000 GB with MTTDL of 684.38 years
      - can have 4 (4+2) w/ 0 spares providing 8000 GB with MTTDL of 8673.50 years
      - can have 2 (8+1) w/ 6 spares providing 8000 GB with MTTDL of 380.21 years
      - can have 2 (8+2) w/ 4 spares providing 8000 GB with MTTDL of 416328.12 years

But 8(2+1) is about 91 times more likely to lose data than 4(4+2) (MTTDL 
of 95.05 vs. 8673.50 years), and this system will contain data that I 
don't want to risk losing.
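
For reference, these numbers match the standard MTTDL approximations for 
single and double parity.  Here is a simplified sketch in Python (the 
MTTR values - 48 hours to replace a disk with no hot spare, 4 hours to 
resilver onto one - are assumptions inferred from the output, since they 
reproduce the figures in these tables):

    HOURS_PER_YEAR = 365 * 24

    def mttdl_years(sets, data, parity, mtbf=5.0, mttr_hours=48.0):
        # MTTDL in years for 'sets' striped RAID-Z groups of data+parity disks
        n = data + parity
        mttr = mttr_hours / HOURS_PER_YEAR
        if parity == 1:
            # RAID-Z1: data is lost if a 2nd disk fails during the repair
            per_set = mtbf ** 2 / (n * (n - 1) * mttr)
        else:
            # RAID-Z2: data is lost if a 3rd disk fails during two repairs
            per_set = mtbf ** 3 / (n * (n - 1) * (n - 2) * mttr ** 2)
        return per_set / sets   # any one set failing loses the whole pool

    print(mttdl_years(8, 2, 1))                 # ->    95.05 years
    print(mttdl_years(4, 4, 2))                 # ->  8673.50 years
    print(mttdl_years(2, 8, 2, mttr_hours=4))   # -> 416328.12 years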



> I don't want to over-pimp my links, but I do think my blogged 
> experiences with my server (also linked in another thread) might give 
> you something to think about:
>  http://lindsay.at/blog/archive/tag/zfs-performance/
I see that you also set up a video server (MythTV?).  From your blog, I 
think you are doing 5(2+1) (plus a hot spare?) - here is what my program 
says about a 16-bay system:

    16 bays w/ 500 GB drives having MTBF=5 years
      - can have 5 (2+1) w/ 1 spares providing 5000 GB with MTTDL of 1825.00 years
      - can have 4 (2+2) w/ 0 spares providing 4000 GB with MTTDL of 43367.51 years
      - can have 3 (4+1) w/ 1 spares providing 6000 GB with MTTDL of 912.50 years
      - can have 2 (4+2) w/ 4 spares providing 4000 GB with MTTDL of 2497968.75 years
      - can have 1 (8+1) w/ 7 spares providing 4000 GB with MTTDL of 760.42 years
      - can have 1 (8+2) w/ 6 spares providing 4000 GB with MTTDL of 832656.25 years

Note that your MTTDL isn't nearly as bad as 8(2+1), since you have three 
fewer stripes (and a hot spare, which shortens the MTTR).  Also, it's 
interesting for me to note that you have 5 stripes and my 4(4+2) setup 
would have just one fewer - so the question to answer is whether your 
extra stripe is better than my 2 extra disks in each RAID set.



> Testing 16 disks locally, however, I do run into noticeable I/O 
> bottlenecks, and I believe it's down to the top limits of the PCI-X bus.
Yes, too bad Supermicro doesn't make a PCIe-based version...  But still, 
the limit of a 64-bit, 133.3 MHz PCI-X bus is 1067 MB/s, and a 64-bit, 
100 MHz PCI-X bus is 800 MB/s - either way, it's much faster than my 
worst-case scenario from above, where 7 concurrent 1080p streams would 
total 189 Mb/s...
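
The bus math, for comparison (theoretical peaks, ignoring protocol and 
arbitration overhead; note the Mb/s-to-MB/s conversion):

    # PCI-X theoretical peak: bus width in bytes * clock in MHz = MB/s
    print(8 * 133.33)   # 64-bit @ 133.3 MHz -> ~1067 MB/s
    print(8 * 100)      # 64-bit @ 100 MHz   ->   800 MB/s
    print(189 / 8.0)    # 189 Mb/s worst case -> ~23.6 MB/s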



>> As far as a mobo with "good PCI-X architecture" - check out
>> the latest from Tyan 
>> (http://tyan.com/product_board_detail.aspx?pid=523) - it has three 
>> 133/100MHz PCI-X slots
>
> I use a Tyan in my server, and have looked at a lot of variations, but 
> I hadn't noticed that one. It has some potential.
>
> Still, though, take a look at the block diagram on the datasheet: that 
> actually looks like 1x PCI-X 133MHz slot and a bridge sharing 2x 
> 100MHz slots. My benchmarks so far show that putting a controller on a 
> 100MHz slot is measurably slower than 133MHz, but contention over a 
> single bridge can be even worse.
Hmmm, I hadn't thought about that...  Here is another new mobo from Tyan 
(http://tyan.com/product_board_detail.aspx?pid=517) - its datasheet 
shows the PCI-X buses configured the same way as on your S3892.



Thanks!
Kent
