How about this configuration?
On the Nexsan SATABeast, add all disks to one RAID 5 or 6 group. Then
on the Nexsan define several smaller volumes and add those
volumes to a raidz2/raidz zpool?
Could that be a useful configuration? Maybe I'll lose too much space
with double raid 5 or
I realized that I'll lose too much disk space with the double raid
configuration suggested below. Agree?
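(For illustration only, not part of the original message: assuming the
SATABeast held 14 x 1 TB drives, the "double RAID" layout would lose
space roughly like this.)

   hardware RAID 6:             14 TB raw -> 12 TB exported
   6 x 2 TB LUNs in a raidz2:   12 TB     ->  8 TB usable (4 of 6 LUNs carry data)
   net efficiency:              8 / 14    ~= 57% of the raw capacity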
I've done some performance testing with raidz/raidz1 vs raidz2:
bash-3.00# zpool status -v raid5
pool: raid5
state: ONLINE
scrub: none requested
config:
NAME
Test 1:
bash-3.00# echo zfs_nocacheflush/D | mdb -k
zfs_nocacheflush:
zfs_nocacheflush: 0
bash-3.00# ./filesync-1 /raid6 1
Time in seconds to create and unlink 1 files with O_DSYNC:
292.223081
bash-3.00# echo zfs_nocacheflush/W1 | mdb -kw
zfs_nocacheflush:
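(Side note, not part of the quoted test run: the tunable can be put
back to its default, or made persistent, with the usual Solaris
mechanisms.)

bash-3.00# echo zfs_nocacheflush/W0 | mdb -kw
(restores the default: cache flushes enabled)

To keep zfs_nocacheflush=1 across reboots, add this line to /etc/system:
set zfs:zfs_nocacheflush = 1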
On Tue, Mar 10, 2009 at 3:13 AM, Lars-Gunnar Persson
lars-gunnar.pers...@nersc.no wrote:
How about this configuration?
On the Nexsan SATABeast, add all disks to one RAID 5 or 6 group. Then on the
Nexsan define several smaller volumes and then add those volumes to a
raidz2/raidz zpool?
Bob Friesenhahn wrote:
Your idea to stripe two disks per LUN should work. Make sure to use
raidz2 rather than plain raidz for the extra reliability. This
solution is optimized for high data throughput from one user.
Striping two disks per LUN (RAID0 on 2 disks) and then adding a ZFS form of
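(A minimal sketch of that suggestion, assuming the SATABeast exports
seven two-disk RAID 0 LUNs and using hypothetical device names; the
actual cXtYdZ names will depend on how the LUNs show up on the host.)

bash-3.00# zpool create tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0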
On Tue, 10 Mar 2009, Lars-Gunnar Persson wrote:
My conclusion on raidz1 vs raidz2 would be no difference in performance and
a big difference in disk space available.
I am not so sure about the big difference in disk space available.
Disk capacity is cheap, but failure is not.
If you need to
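(Rough numbers, assuming a pool built from 7 equally sized LUNs: raidz1
keeps 6/7, about 86%, of the LUN capacity for data, while raidz2 keeps
5/7, about 71%. The "big difference" is one LUN's worth of space, traded
for surviving a second failure during a rebuild.)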
[amplification of Joe's point below...]
Moore, Joe wrote:
Bob Friesenhahn wrote:
Your idea to stripe two disks per LUN should work. Make sure to use
raidz2 rather than plain raidz for the extra reliability. This
solution is optimized for high data throughput from one user.
Striping
On Tue, 10 Mar 2009, Moore, Joe wrote:
As far as workload, any time you use RAIDZ[2], ZFS must read the
entire stripe (across all of the disks) in order to verify the
checksum for that data block. This means that a 128k read (the
default zfs blocksize) requires a 32kb read from each of 6
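(One way to read those numbers, offered as an interpretation rather
than a quote: a 6-wide raidz2 has 4 data columns, so a 128k block is
stored as 4 x 32k data chunks plus two parity chunks. Verifying the
block checksum needs all 4 data chunks, which is why small random reads
from a raidz vdev tend to perform like a single disk rather than like N
disks.)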
On Tue, Mar 10, 2009 at 23:57, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Tue, 10 Mar 2009, Moore, Joe wrote:
As far as workload, any time you use RAIDZ[2], ZFS must read the entire
stripe (across all of the disks) in order to verify the checksum for that
data block. This means
On Tue, Mar 10, 2009 at 05:57:16PM -0500, Bob Friesenhahn wrote:
On Tue, 10 Mar 2009, Moore, Joe wrote:
As far as workload, any time you use RAIDZ[2], ZFS must read the
entire stripe (across all of the disks) in order to verify the
checksum for that data block. This means that a 128k read
On Tue, 10 Mar 2009, A Darren Dunham wrote:
What part isn't true? ZFS has an independent checksum for the data
block. But if the data block is spread over multiple disks, then each
of the disks has to be read to verify the checksum.
I interpreted what you said to imply that RAID6 type
Hi,
The manpage says
Specifically, used = usedbychildren + usedbydataset +
usedbyrefreservation + usedbysnapshots. These properties
are only available for datasets created on zpool
version 13 pools.
.. and I now realize that created at v13 is the
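(For reference, the individual components can be inspected on a v13+
pool with zfs get; the dataset name below is hypothetical.)

bash-3.00# zfs get used,usedbychildren,usedbydataset,usedbyrefreservation,usedbysnapshots tank/home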