I read the "ZFS_Best_Practices_Guide" and "ZFS_Evil_Tuning_Guide", and have 
some 
questions:

 1. Cache device for L2ARC
     Say we get a decent SSD, ~500MB/s read/write. If we have a 20-HDD zpool, shouldn't we already be reading in at least the 500MB/s range? Why would we want a ~500MB/s cache device?
 2. ZFS dynamically stripes across the top-level vdevs, and "performance for 1 vdev is equivalent to performance of one drive in that group". Am I correct in thinking this means that, for example, if I have a zpool with a single 14-disk raidz2 vdev and each disk does ~100MB/s, the zpool would theoretically read/write at ~100MB/s max (and what about the real-world average)? If this were RAID6, I think it would theoretically do ~1.4GB/s, and in real life maybe ~1GB/s (i.e. 10x-14x faster than ZFS, while both provide the same amount of redundancy)? Is my thinking off in the RAID6 or RAIDZ2 numbers? Why doesn't ZFS try to dynamically stripe inside vdevs (and if it does, is there an easy-to-understand explanation of why a vdev doesn't read from multiple drives at once when servicing a request, or why a zpool wouldn't make N requests to a vdev, with N being the number of disks in that vdev)? A rough sketch of the arithmetic I'm using follows below.
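
Just to make my assumptions explicit, here is the back-of-the-envelope arithmetic behind question 2 (a sketch in Python, nothing measured; the ~100MB/s per-disk figure and the "one drive per vdev" rule are my assumptions from the guides):

    # Rough numbers for question 2 (assumptions, not measurements).
    PER_DISK_MBS = 100                      # assumed sequential speed of one HDD

    # 14-disk raidz2: if a raidz vdev performs like a single drive,
    # the whole vdev is limited to roughly one disk's throughput.
    raidz2_vdev_mbs = 1 * PER_DISK_MBS      # ~100 MB/s

    # 14-disk RAID6: 2 of the 14 disks hold parity, so 12 data disks
    # can stream in parallel (I loosely rounded this up to ~1.4GB/s
    # above by counting all 14 spindles).
    raid6_mbs = (14 - 2) * PER_DISK_MBS     # ~1200 MB/s theoretical

    print(raidz2_vdev_mbs, raid6_mbs)       # 100 1200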

Since "performance for 1 vdev is equivalent to performance of one drive in that 
group" it seems like the higher raidzN are not very useful. If your using 
raidzN 
your probably looking for a lower than mirroring parity (aka 10%-33%), but if 
you try to use raidz3 with 15% parity your putting 20 HDDs in 1 vdev which is 
terrible (almost unimaginable) if your running at 1/20 the "ideal" performance.
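
The arithmetic behind that last point, just so it's clear what I mean (again only a sketch):

    # For raidzN, the parity fraction of a vdev is N / vdev_width, so a target
    # parity percentage pins down how wide the vdev has to be (width = N / fraction).
    def width_for_parity(n_parity, parity_fraction):
        return round(n_parity / parity_fraction)

    print(width_for_parity(3, 0.15))    # raidz3 at 15% parity -> 20 disks in one vdev
    print(width_for_parity(1, 0.25))    # raidz1 at 25% parity -> 4 disks per vdev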


Main Question:
 3. I am updating my old RAID5 and want to reuse my old drives. I have 8 1.5TB drives and am buying new 3TB drives to fill up the rest of a 20-disk enclosure (Norco RPC-4220); there is also 1 spare, plus the boot drive, so 22 drives total. I want around 20%-25% parity. My system is as follows:

Main Application: Home NAS
* Would like to optimize for max space with 20% (ideal) or 25% parity, and would like 'decent' read performance
  - 'decent' being a max of 10GigE Ethernet; right now it is only 1 gigabit Ethernet, but I hope to leave room to upgrade in the future if 10GigE becomes cheaper. My RAID5 runs at ~500MB/s, so I was hoping to get at least above that with the 20-disk setup.
* 16GB RAM
* Open to using a ZIL/L2ARC, but left out for now: writes don't happen much (~7GB a week, maybe a big burst every couple of months), and I don't really read the same data multiple times.

What would be the best setup? I'm thinking one of the following (a sketch of where my numbers come from follows the list):
    a. 1 vdev of 8 1.5TB disks (raidz2) + 1 vdev of 12 3TB disks (raidz3)? (~200MB/s reading, best reliability)
    b. 1 vdev of 8 1.5TB disks (raidz2) + 3 vdevs of 4 3TB disks (raidz)? (~400MB/s reading, evens out size across vdevs)
    c. 2 vdevs of 4 1.5TB disks (raidz) + 3 vdevs of 4 3TB disks (raidz)? (~500MB/s reading, maximizes vdevs for performance)
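
Here's how I got the rough numbers in parentheses above (a sketch only: it assumes ~100MB/s per vdev, per the "one drive per vdev" rule, and ignores metadata and other overhead when computing usable space):

    # My rough estimates for options a/b/c.
    # Each vdev is described as (disks, disk_size_TB, parity_disks).
    PER_VDEV_MBS = 100    # assumed: each raidz vdev reads like roughly one disk

    options = {
        "a": [(8, 1.5, 2), (12, 3.0, 3)],               # raidz2 + raidz3
        "b": [(8, 1.5, 2)] + [(4, 3.0, 1)] * 3,         # raidz2 + 3x raidz1
        "c": [(4, 1.5, 1)] * 2 + [(4, 3.0, 1)] * 3,     # 5x raidz1
    }

    for name, vdevs in options.items():
        read_mbs = len(vdevs) * PER_VDEV_MBS
        usable_tb = sum((disks - parity) * size for disks, size, parity in vdevs)
        raw_tb = sum(disks * size for disks, size, _ in vdevs)
        parity_pct = 100 * (1 - usable_tb / raw_tb)
        print(f"{name}: ~{read_mbs} MB/s read, {usable_tb:.1f} TB usable, "
              f"{parity_pct:.0f}% parity")
    # a: ~200 MB/s read, 36.0 TB usable, 25% parity
    # b: ~400 MB/s read, 36.0 TB usable, 25% parity
    # c: ~500 MB/s read, 36.0 TB usable, 25% parity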

I am leaning towards "a." since I am thinking "raidz3"+"raidz2" should provide 
a 
little more reliability than 5 "raidz1"'s, but, worried that the real world 
read/write performance will be low (theoridical is ~200MB/s, and, since the 2nd 
vdev is 3x the size as the 1st, I am probably looking at more like 133MB/s?). 
The 12 disk array is also above the "9 disk group max" recommendation in the 
Best Practices guide, so not sure if this affects read performance (if it is 
just resilver time I am not as worried about it as long it isn't like 3x 
longer)?
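
Where I get the ~133MB/s figure (my assumption being that data gets striped across the two vdevs roughly in proportion to their size, so a large read is gated by the bigger vdev):

    # Option a: two vdevs, the second holding ~3x as much data as the first.
    # If a large file is spread 1:3 across them and each vdev reads at ~100 MB/s,
    # the read finishes when the bigger vdev has delivered its 3/4 share.
    vdev_mbs = 100
    share_on_big_vdev = 27 / (9 + 27)    # usable TB: 9 vs 27 -> 0.75
    effective_mbs = vdev_mbs / share_on_big_vdev
    print(round(effective_mbs))          # ~133 MB/s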

I guess I'm hoping "a." really isn't ~200MB/s hehe, if it is I'm leaning 
towards 
"b.", but, if so, all three are downgrades from my initial setup read 
performance wise -_-.

Is someone able to correct my understanding if some of my numbers are off, or does someone have a better raidzN configuration I should consider? Thanks for any help.
