Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Lars-Gunnar Persson
How about this configuration? On the Nexsan SATABeast, add all disks to one RAID 5 or 6 group. Then on the Nexsan define several smaller volumes and then add those volumes to a raidz2/raidz zpool? Could that be a useful configuration? Maybe I'll lose too much space with double RAID 5 or
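A minimal sketch of what that double-RAID layout might look like from the Solaris side (the pool name and cXtYdZ device names are hypothetical; each device is a volume exported by the array's single RAID-5/6 group):

    # each cXtYdZ LUN is a volume carved from the Nexsan RAID group
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
    zpool list tank   # usable space reflects parity paid twice:
                      # once on the array, once in raidz2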

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Lars-Gunnar Persson
I realized that I'll lose too much disk space with the double RAID configuration suggested below. Agree? I've done some performance testing with raidz/raidz1 vs raidz2:

bash-3.00# zpool status -v raid5
  pool: raid5
 state: ONLINE
 scrub: none requested
config:

        NAME
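For context, a hedged sketch of how such a comparison could be set up (pool and device names here are hypothetical, not the poster's actual configuration):

    # raidz1: one parity device per vdev, usable space ~ (N-1) disks
    zpool create raid5 raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
    # raidz2: two parity devices per vdev, usable space ~ (N-2) disks
    zpool create raid6 raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
    zfs list raid5 raid6   # compare the AVAIL column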

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Lars-Gunnar Persson
Test 1:

bash-3.00# echo zfs_nocacheflush/D | mdb -k
zfs_nocacheflush:
zfs_nocacheflush:               0
bash-3.00# ./filesync-1 /raid6 1
Time in seconds to create and unlink 1 files with O_DSYNC: 292.223081
bash-3.00# echo zfs_nocacheflush/W1 | mdb -kw
zfs_nocacheflush:
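For reference, the full toggle sequence looks roughly like this (filesync-1 is the poster's own test program; the final W0 line restoring the default is an assumption, not shown in the original):

    # read the current value of the tunable
    echo zfs_nocacheflush/D | mdb -k
    # write 1: tell ZFS to stop sending cache-flush commands to the array
    echo zfs_nocacheflush/W1 | mdb -kw
    # restore the default when testing is done
    echo zfs_nocacheflush/W0 | mdb -kw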

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Tim
On Tue, Mar 10, 2009 at 3:13 AM, Lars-Gunnar Persson lars-gunnar.pers...@nersc.no wrote: How about this configuration? On the Nexsan SATABeast, add all disks to one RAID 5 or 6 group. Then on the Nexsan define several smaller volumes and then add those volumes to a raidz2/raidz zpool?

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Moore, Joe
Bob Friesenhahn wrote: Your idea to stripe two disks per LUN should work. Make sure to use raidz2 rather than plain raidz for the extra reliability. This solution is optimized for high data throughput from one user. Striping two disks per LUN (RAID0 on 2 disks) and then adding a ZFS form of
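A rough sketch of the layout Bob suggests, assuming hypothetical LUN names (each LUN being a 2-disk RAID-0 stripe defined on the SATABeast):

    # 7 LUNs, each a 2-disk hardware stripe; raidz2 survives the loss
    # of any two LUNs (i.e. any two underlying stripe pairs)
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0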

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Bob Friesenhahn
On Tue, 10 Mar 2009, Lars-Gunnar Persson wrote: My conclusion on raidz1 vs raidz2 would be no difference in performance and a big difference in disk space available. I am not so sure about the big difference in disk space available. Disk capacity is cheap, but failure is not. If you need to
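As a back-of-the-envelope illustration (the disk count and size below are made up, not from the thread):

    raidz1 on 8 x 1 TB disks: usable ~ (8-1) x 1 TB = 7 TB
    raidz2 on 8 x 1 TB disks: usable ~ (8-2) x 1 TB = 6 TB

so the extra parity costs one disk's worth of capacity per vdev.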

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Richard Elling
[amplification of Joe's point below...] Moore, Joe wrote: Bob Friesenhahn wrote: Your idea to stripe two disks per LUN should work. Make sure to use raidz2 rather than plain raidz for the extra reliability. This solution is optimized for high data throughput from one user. Striping

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Bob Friesenhahn
On Tue, 10 Mar 2009, Moore, Joe wrote: As far as workload, any time you use RAIDZ[2], ZFS must read the entire stripe (across all of the disks) in order to verify the checksum for that data block. This means that a 128k read (the default zfs blocksize) requires a 32kb read from each of 6
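The arithmetic behind the quoted figure, assuming a 6-disk raidz2 vdev (4 data + 2 parity disks):

    128 KB block / 4 data disks = 32 KB read from each data disk

Whether the parity disks must also be read to verify the block checksum is exactly what the following messages debate.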

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Mattias Pantzare
On Tue, Mar 10, 2009 at 23:57, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Tue, 10 Mar 2009, Moore, Joe wrote: As far as workload, any time you use RAIDZ[2], ZFS must read the entire stripe (across all of the disks) in order to verify the checksum for that data block. This means

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread A Darren Dunham
On Tue, Mar 10, 2009 at 05:57:16PM -0500, Bob Friesenhahn wrote: On Tue, 10 Mar 2009, Moore, Joe wrote: As far as workload, any time you use RAIDZ[2], ZFS must read the entire stripe (across all of the disks) in order to verify the checksum for that data block. This means that a 128k read

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Bob Friesenhahn
On Tue, 10 Mar 2009, A Darren Dunham wrote: What part isn't true? ZFS has an independent checksum for the data block. But if the data block is spread over multiple disks, then each of the disks has to be read to verify the checksum. I interpreted what you said to imply that RAID6 type

[zfs-discuss] usedby* properties for datasets created before v13

2009-03-10 Thread Gavin Maltby
Hi, The manpage says: Specifically, used = usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots. These properties are only available for datasets created on zpool version 13 pools. ... and I now realize that created at v13 is the
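A quick way to inspect these properties on a dataset (the dataset name below is hypothetical; the usedby* values only appear for datasets created on zpool version 13 or later):

    zfs get used,usedbychildren,usedbydataset,usedbyrefreservation,usedbysnapshots tank/home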