Hi Robert,

That makes sense. Thank you. :-) Also, it was zpool I was looking at.
zfs always showed the correct size.

-J

On 1/3/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Jason,

Wednesday, January 3, 2007, 11:40:38 PM, you wrote:

JJWW> Just got an interesting benchmark. I made two zpools:

JJWW> RAID-10 (9x 2-way RAID-1 mirrors: 18 disks total)
JJWW> RAID-Z2 (3x 6-disk RAIDZ2 groups: 18 disks total)

JJWW> Copying 38.4GB of data from the RAID-Z2 to the RAID-10 took 307
JJWW> seconds. Deleted the data from the RAID-Z2. Then copying the 38.4GB of
JJWW> data from the RAID-10 to the RAID-Z2 took 258 seconds. I would have
JJWW> expected the RAID-10 to write data more quickly.
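
For reference, a quick back-of-the-envelope in Python (my arithmetic,
not part of the original mail, assuming 1GB = 1024MB):

# Rough throughput implied by the copy times quoted above.
data_mb = 38.4 * 1024
print(round(data_mb / 307))   # ~128 MB/s writing to the RAID-10 pool
print(round(data_mb / 258))   # ~152 MB/s writing to the RAID-Z2 pool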

Actually, with 18 disks in RAID-10 you should in theory get write
performance equal to a stripe of 9 disks. With 18 disks in 3 raidz2
groups of 6 disks each, you should expect something like (6-2)*3 = 12,
i.e. equal to 12 disks in a stripe.
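
A minimal sketch of that arithmetic (mine, not part of the original mail):

# Theoretical "stripe-equivalent" write width of the two 18-disk layouts.
def mirror_stripe_width(total_disks, mirror_way):
    # RAID-10: each N-way mirror contributes one disk's worth of write bandwidth.
    return total_disks // mirror_way

def raidz_stripe_width(groups, disks_per_group, parity):
    # RAIDZ/RAIDZ2: each group contributes (disks - parity) data disks.
    return groups * (disks_per_group - parity)

print(mirror_stripe_width(18, 2))    # 9  -> 9x 2-way mirrors
print(raidz_stripe_width(3, 6, 2))   # 12 -> 3x 6-disk raidz2 groups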

JJWW> It's interesting to me that the RAID-10 pool registered the 38.4GB of
JJWW> data as 38.4GB, whereas the RAID-Z2 registered it as 56.4GB.

If you checked with zpool, then it's "ok" - it reports disk usage
including parity overhead. If zfs list showed you those numbers, then
either you're using old snv bits or s10U2, as this was corrected some
time ago (in U3).
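
As a rough illustration (my numbers, not from the thread), the parity
overhead of a 6-disk raidz2 group roughly accounts for the larger figure
zpool showed:

# Raw space consumed for ~38.4GB of data on a 6-disk raidz2 (2 parity disks).
data_gb = 38.4
disks, parity = 6, 2
raw_gb = data_gb * disks / (disks - parity)   # overhead factor 6/4 = 1.5
print(round(raw_gb, 1))   # 57.6 -- in the same ballpark as the 56.4 reported
                          # by zpool above (exact figure depends on block
                          # sizes and allocation details)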


--
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com


