I realized that I'll lose too much disk space with the "double" RAID configuration suggested below. Agree?

I've done some performance testing with raidz/raidz1 vs raidz2:

bash-3.00# zpool status -v raid5
  pool: raid5
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        raid5                                      ONLINE       0     0     0
          raidz1                                   ONLINE       0     0     0
            c7t6000402001FC442C609DC5A300000000d0  ONLINE       0     0     0
            c7t6000402001FC442C609DCA4A00000000d0  ONLINE       0     0     0
            c7t6000402001FC442C609DCA2200000000d0  ONLINE       0     0     0
            c7t6000402001FC442C609DCABF00000000d0  ONLINE       0     0     0
            c7t6000402001FC442C609DCADB00000000d0  ONLINE       0     0     0
            c7t6000402001FC442C609DCAF800000000d0  ONLINE       0     0     0
            c7t6000402001FC442C609F029100000000d0  ONLINE       0     0     0

errors: No known data errors

bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
raid5                  12.6T    141K   12.6T     0%  ONLINE     -

bash-3.00# df -h /raid5
Filesystem             size   used  avail capacity  Mounted on
raid5                   11T    41K    11T     1%    /raid5

bash-3.00# echo zfs_nocacheflush/D | mdb -k
zfs_nocacheflush:
zfs_nocacheflush:               0
bash-3.00# ./filesync-1 /raid5 10000
Time in seconds to create and unlink 10000 files with O_DSYNC: 9.871197

bash-3.00# echo zfs_nocacheflush/W1 | mdb -kw
zfs_nocacheflush:               0               =       0x1
bash-3.00# ./filesync-1 /raid5 10000
Time in seconds to create and unlink 10000 files with O_DSYNC: 7.363303
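(The filesync-1 source isn't included in the thread; as a rough illustration of what such a benchmark does, here is a minimal Python sketch of an O_DSYNC create-and-unlink loop. The function name and defaults are my own, and it falls back to O_SYNC on platforms without O_DSYNC.)

```python
import os
import tempfile
import time

def filesync_bench(directory, count):
    """Time creating and unlinking `count` files opened with O_DSYNC.

    Each file gets one synchronous write, so every iteration forces
    the data (and, with cache flushing enabled, the disk cache) to
    stable storage before continuing.
    """
    sync_flag = getattr(os, "O_DSYNC", os.O_SYNC)  # fallback where O_DSYNC is missing
    start = time.time()
    for i in range(count):
        path = os.path.join(directory, "filesync-%d" % i)
        fd = os.open(path, os.O_CREAT | os.O_WRONLY | sync_flag, 0o644)
        os.write(fd, b"x")   # one synchronous write per file
        os.close(fd)
        os.unlink(path)
    return time.time() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        secs = filesync_bench(d, 1000)
        print("Time in seconds to create and unlink 1000 files "
              "with O_DSYNC: %f" % secs)
```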

Then I destroyed the raid5 pool and created a raid6 pool:

bash-3.00# zpool status -v raid6
  pool: raid6
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        raid6                                      ONLINE       0     0     0
          raidz2                                   ONLINE       0     0     0
            c7t6000402001FC442C609DC5A300000000d0  ONLINE       0     0     0
            c7t6000402001FC442C609DCA4A00000000d0  ONLINE       0     0     0
            c7t6000402001FC442C609DCA2200000000d0  ONLINE       0     0     0
            c7t6000402001FC442C609DCABF00000000d0  ONLINE       0     0     0
            c7t6000402001FC442C609DCADB00000000d0  ONLINE       0     0     0
            c7t6000402001FC442C609DCAF800000000d0  ONLINE       0     0     0
            c7t6000402001FC442C609F029100000000d0  ONLINE       0     0     0

errors: No known data errors

bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
raid6                  12.6T    195K   12.6T     0%  ONLINE     -

bash-3.00# df -h /raid6
Filesystem             size   used  avail capacity  Mounted on
raid6                  8.8T    52K   8.8T     1%    /raid6

bash-3.00# echo zfs_nocacheflush/D | mdb -k
zfs_nocacheflush:
zfs_nocacheflush:               0
bash-3.00# ./filesync-1 /raid6 10000
Time in seconds to create and unlink 10000 files with O_DSYNC: 9.879219

bash-3.00# echo zfs_nocacheflush/W1 | mdb -kw
zfs_nocacheflush:               0               =       0x1
bash-3.00# ./filesync-1 /raid6 10000
Time in seconds to create and unlink 10000 files with O_DSYNC: 7.560435

My conclusion on raidz1 vs raidz2: essentially no difference in performance, but a big difference in available disk space.
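The df numbers above line up with the usual raidz arithmetic: with 7 disks, one disk's worth of capacity goes to parity in raidz1 and two disks' worth in raidz2. A quick sketch of that calculation (the per-disk size is estimated from the 12.6T raw pool; the function is illustrative, not a ZFS API):

```python
def raidz_usable(n_disks, disk_tb, parity):
    """Approximate usable capacity of a single raidz vdev:
    total raw capacity minus `parity` disks' worth of parity."""
    return (n_disks - parity) * disk_tb

disk_tb = 12.6 / 7  # ~1.8 TB per disk, from the 12.6T raw pool

print("raidz1: %.1f TB" % raidz_usable(7, disk_tb, 1))  # → raidz1: 10.8 TB
print("raidz2: %.1f TB" % raidz_usable(7, disk_tb, 2))  # → raidz2: 9.0 TB
```

These rough figures are close to the 11T and 8.8T that df reports (df rounds, and ZFS reserves some space for metadata).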


On 10. mars. 2009, at 09.13, Lars-Gunnar Persson wrote:

How about this configuration?

On the Nexsan SATABeast, add all disks to one RAID 5 or 6 group. Then, on the Nexsan, define several smaller volumes and add those volumes to a raidz or raidz2 zpool?

Could that be a useful configuration? Maybe I'll lose too much space with a "double" RAID 5 or 6 configuration? What about performance?

Regards,

Lars-Gunnar Persson

On 10. mars. 2009, at 00.26, Kees Nuyt wrote:

On Mon, 9 Mar 2009 12:06:40 +0100, Lars-Gunnar Persson
<lars-gunnar.pers...@nersc.no> wrote:

1. On the external disk array, I am not able to configure JBOD or RAID 0
or 1 with just one disk.

In some arrays it seems to be possible to configure separate
disks by offering the array just one disk in one slot at a
time and, very importantly, leaving all other slots empty(!).

Repeat for as many disks as you have, seating each disk in
its own slot, and all other slots empty.

(OK, it's just hearsay, but it might be worth a try with
the first 4 disks or so.)
--
(  Kees Nuyt
)
c[_]
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




.--------------------------------------------------------------------------.
|Lars-Gunnar Persson | |IT- sjef | | | |Nansen senteret for miljø og fjernmåling | |Adresse : Thormøhlensgate 47, 5006 Bergen | |Direkte : 55 20 58 31, sentralbord: 55 20 58 00, fax: 55 20 58 01 | |Internett: http://www.nersc.no, e-post: lars- gunnar.pers...@nersc.no |
'--------------------------------------------------------------------------'

