Thank you, following your suggestion improves things - reading a ZFS
file from a RAID-0 pair now gives me 95 MB/s - about the same as from
/dev/dsk. What I find surprising is that reading from a RAID-1 2-drive
zpool gives me only 56 MB/s - I imagined it would be roughly like
reading from RAID-0. I
I'll make an attempt to keep it simple, and tell what is true in 'most'
cases. For some values of 'most' ;-)
The words used are at times confusing. Block mostly refers to
a logical filesystem block, which can be variable in size.
There's also checksum and parity, which are completely
Ok, I just had an idea. I think I know what happened. I moved the
dump device to an alternate device so that I could give my pool the
whole disks. To do this, I detached one half of the mirror and
then re-attached it as a whole disk, and then did the same with the
other half.
I have this
Darren and Henk;
Firstly, thank you very much for both of your replies. I am very
grateful indeed to you all for taking the time to answer my questions.
I understand RAID-5 quite well, and from both of your RAID-Z
descriptions I see that the RAID-Z parity is also a separate block on a
separate
Steven Sim wrote:
Darren and Henk;
Firstly, thank you very much for both of your replies. I am very
grateful indeed to you all for taking the time to answer my questions.
I understand RAID-5 quite well, and from both of your RAID-Z
descriptions I see that the RAID-Z parity is also a separate
Hi Steven,
Steven Sim wrote:
My confusion is simple. Would this not then give rise also to the
write-hole vulnerability of RAID-5?
Jeff Bonwick states /that there's no way to update two or more disks
atomically, so RAID stripes can become damaged during a crash or power
outage./
If I
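To make the write hole Jeff Bonwick describes concrete: with single parity, a missing data block is recoverable as the XOR of the surviving blocks and the parity, but if a crash lands between the data write and the parity write, the now-stale parity silently reconstructs garbage. A toy Python sketch of my own (illustrative only, nothing like the real ZFS code; block contents are made up):

```python
from functools import reduce

def parity(blocks):
    """XOR parity across equal-sized data blocks (RAID-5/RAID-Z1 style)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def reconstruct(surviving, par):
    """Rebuild a missing block from the surviving blocks plus parity."""
    return parity(surviving + [par])

# A healthy stripe across three data disks.
data = [b'\x01\x01', b'\x02\x02', b'\x04\x04']
p = parity(data)
assert reconstruct([data[1], data[2]], p) == b'\x01\x01'  # rebuild works

# Write hole: disk 0 is rewritten, but the crash happens before
# the parity update reaches disk.
data[0] = b'\xff\xff'                       # new data landed
bad = reconstruct([data[0], data[2]], p)    # p is now stale
assert bad != b'\x02\x02'                   # reconstructed "data" is garbage
```

The stripe itself is still readable after the crash; the damage only surfaces later, when a disk failure forces reconstruction against the stale parity.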
[b]Given[/b]: A Solaris 10 u3 server with an externally attached disk array
with RAID controller(s)
[b]Question[/b]: Is it better to create a zpool from a [u]single[/u] external
LUN on an external disk array, or is it better to use no RAID on the disk array
and just present individual disks
This is not a problem we're trying to solve, but part of a characterization
study of the ZFS implementation. We're currently using the default 8 KB
blocksize for our zvol deployment, and we're performing tests using write block
sizes as small as 4 KB and as large as 1 MB, as previously described
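For what it's worth, the interaction between those write sizes and the 8 KB volblocksize can be sketched with a toy read-modify-write model (my own illustration, not anything measured here; it ignores RAID-Z parity, compression, and metadata):

```python
import math

def write_amplification(io_size, volblock=8192, offset=0):
    """Illustrative model: a zvol write that doesn't cover whole
    volblock-sized blocks forces a read-modify-write of the partial
    blocks, so more bytes move than the application wrote."""
    start = offset - offset % volblock                         # round down
    end = math.ceil((offset + io_size) / volblock) * volblock  # round up
    return (end - start) / io_size        # bytes touched per byte written

# Aligned 4 KB writes against an 8 KB volblocksize each touch a
# whole 8 KB block: 2x amplification.
print(write_amplification(4096))      # → 2.0
# Aligned 1 MB writes cover 128 whole blocks: no amplification.
print(write_amplification(1 << 20))   # → 1.0
```

By this model, only the small end of the 4 KB–1 MB sweep should show block-size effects; once writes are whole multiples of the volblocksize, the partial-block penalty disappears.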
Steven Sim wrote:
I understand RAID-5 quite well, and from both of your RAID-Z
descriptions I see that the RAID-Z parity is also a separate block on a
separate disk. Very well. This is just like RAID-5.
Yup. But there's a little bit of magic, which I'll
try to explain below. With more ascii
Hello Phillip,
Thursday, May 17, 2007, 6:30:38 PM, you wrote:
PF [b]Given[/b]: A Solaris 10 u3 server with an externally attached
PF disk array with RAID controller(s)
PF [b]Question[/b]: Is it better to create a zpool from a
PF [u]single[/u] external LUN on an external disk array, or is it
Hello Henk,
Friday, May 18, 2007, 12:09:40 AM, you wrote:
If I understand correctly, then the parity block for RAID-Z is also
written in a separate atomic operation from the data, as per RAID-5 (the
only difference being that each stripe can be of a different size).
HL As with RAID-5 on a four disk
Hi,
I was wondering if there is any way to read a ZFS snapshot using the
system ZFS library (i.e. refer to it as a block device).
I dug through the libzfs source but could not find anything that would
enable me to 'read' the contents of a
snapshot/filesystem.
What I really want to do would be
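Not the block-device access asked about above, but worth noting as an aside (a general ZFS feature, not part of libzfs): a snapshot's contents are readable as ordinary files under the filesystem's hidden .zfs/snapshot directory (listable when the snapdir property is set to visible). A small sketch, with made-up pool and file names:

```python
import os.path

def snapshot_path(mountpoint, snapshot, relpath):
    """Path of `relpath` as it existed in `snapshot`, via the hidden
    .zfs/snapshot directory that every mounted ZFS filesystem exposes."""
    return os.path.join(mountpoint, '.zfs', 'snapshot', snapshot, relpath)

# e.g. read a file out of a snapshot without any libzfs calls:
# with open(snapshot_path('/tank/home', 'monday', 'notes.txt')) as f:
#     data = f.read()
print(snapshot_path('/tank/home', 'monday', 'notes.txt'))
# → /tank/home/.zfs/snapshot/monday/notes.txt
```

This gives file-level access only; it does not help if the goal really is to treat the snapshot as a raw block device.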