Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-11 Thread Moore, Joe
Lars-Gunnar Persson wrote: > I would like to go back to my question for a second: > > I checked with my Nexsan supplier and they confirmed that access to > every single disk in the SATABeast is not possible. The smallest entities > I can create on the SATABeast are RAID 0 or 1 arrays. With RAID 1 I'll

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-11 Thread Lars-Gunnar Persson
I would like to go back to my question for a second: I checked with my Nexsan supplier and they confirmed that access to every single disk in the SATABeast is not possible. The smallest entities I can create on the SATABeast are RAID 0 or 1 arrays. With RAID 1 I'll lose too much disk space and
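The space penalty he is referring to is straightforward to work out; a rough sketch, assuming the fourteen disks mentioned in the original post:

  # 14 disks as 7 two-disk RAID-1 LUNs: 7 disks usable (50% of raw)
  # raidz across those 7 mirror LUNs loses one more LUN to parity:
  #   6 of 14 disks usable, roughly 43% of raw capacity
  # compare raidz2 over 7 two-disk RAID-0 LUNs: 10 of 14 usable (~71%)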

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Bob Friesenhahn
On Tue, 10 Mar 2009, A Darren Dunham wrote: What part isn't true? ZFS has an independent checksum for the data block. But if the data block is spread over multiple disks, then each of the disks has to be read to verify the checksum. I interpreted what you said to imply that RAID6 type algori

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread A Darren Dunham
On Tue, Mar 10, 2009 at 05:57:16PM -0500, Bob Friesenhahn wrote: > On Tue, 10 Mar 2009, Moore, Joe wrote: > > >As far as workload, any time you use RAIDZ[2], ZFS must read the > >entire stripe (across all of the disks) in order to verify the > >checksum for that data block. This means that a 12

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Mattias Pantzare
On Tue, Mar 10, 2009 at 23:57, Bob Friesenhahn wrote: > On Tue, 10 Mar 2009, Moore, Joe wrote: > >> As far as workload, any time you use RAIDZ[2], ZFS must read the entire >> stripe (across all of the disks) in order to verify the checksum for that >> data block. This means that a 128k read (the

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Bob Friesenhahn
On Tue, 10 Mar 2009, Moore, Joe wrote: As far as workload, any time you use RAIDZ[2], ZFS must read the entire stripe (across all of the disks) in order to verify the checksum for that data block. This means that a 128k read (the default zfs blocksize) requires a 32kb read from each of 6 disk
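The arithmetic behind that figure presumably assumes a 6-disk raidz2 vdev, i.e. 4 data disks plus 2 parity:

  # 128 KiB record / 4 data disks = 32 KiB read from each data disk
  # a small random read still touches every data column in the
  # stripe, so one raidz[2] vdev delivers roughly the random-read
  # IOPS of a single disk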

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Richard Elling
[amplification of Joe's point below...] Moore, Joe wrote: Bob Friesenhahn wrote: Your idea to stripe two disks per LUN should work. Make sure to use raidz2 rather than plain raidz for the extra reliability. This solution is optimized for high data throughput from one user. Striping
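For many concurrent readers, the usual counterpoint is a pool of ZFS mirrors, where each vdev can serve independent reads; a sketch with hypothetical device names, assuming (unlike the SATABeast, apparently) the hardware can present disks individually:

  # seven two-way mirrors: random-read IOPS scale with vdev count,
  # and either side of a mirror can satisfy a read
  zpool create tank \
    mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 \
    mirror c2t4d0 c2t5d0 mirror c2t6d0 c2t7d0 \
    mirror c2t8d0 c2t9d0 mirror c2t10d0 c2t11d0 \
    mirror c2t12d0 c2t13d0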

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Bob Friesenhahn
On Tue, 10 Mar 2009, Lars-Gunnar Persson wrote: My conclusion on raidz1 vs raidz2 would be no difference in performance and big difference in disk space available. I am not so sure about the "big difference" in disk space available. Disk capacity is cheap, but failure is not. If you need t
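For a sense of scale, assuming one 14-wide vdev for illustration:

  # 14-disk raidz:  13 disks of usable space (~93% of raw)
  # 14-disk raidz2: 12 disks of usable space (~86% of raw)
  # the second parity costs one disk, about 7% of raw capacity,
  # in exchange for surviving any two simultaneous failures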

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Moore, Joe
Bob Friesenhahn wrote: > Your idea to stripe two disks per LUN should work. Make sure to use > raidz2 rather than plain raidz for the extra reliability. This > solution is optimized for high data throughput from one user. Striping two disks per LUN (RAID0 on 2 disks) and then adding a ZFS form o
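A minimal sketch of that layout, assuming the array's fourteen disks are paired into seven two-disk RAID-0 LUNs and using hypothetical Solaris device names:

  # each cXtYd0 below stands for one 2-disk RAID-0 LUN from the array
  # raidz2 across 7 such LUNs survives the loss of any two LUNs,
  # i.e. up to two failed RAID-0 pairs
  zpool create tank raidz2 \
    c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0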

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Tim
On Tue, Mar 10, 2009 at 3:13 AM, Lars-Gunnar Persson < lars-gunnar.pers...@nersc.no> wrote: > How about this configuration? > > On the Nexsan SATABeast, add all disk to one RAID 5 or 6 group. Then on the > Nexsan define several smaller volumes and then add those volumes to a > raidz2/raidz zpool?

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Lars-Gunnar Persson
Test 1:
bash-3.00# echo zfs_nocacheflush/D | mdb -k
zfs_nocacheflush:
zfs_nocacheflush:   0
bash-3.00# ./filesync-1 /raid6 1
Time in seconds to create and unlink 1 files with O_DSYNC: 292.223081
bash-3.00# echo zfs_nocacheflush/W1 | mdb -kw
zfs_nocacheflush:   0
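For anyone repeating the test, the full sequence would look roughly like this (the same commands as above, plus restoring the default afterwards):

  # read the current value (0 = cache flushes enabled)
  echo zfs_nocacheflush/D | mdb -k
  # disable cache flushes; only sensible with battery-backed cache
  echo zfs_nocacheflush/W1 | mdb -kw
  # ... re-run the O_DSYNC benchmark here ...
  # restore the default when finished
  echo zfs_nocacheflush/W0 | mdb -kw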

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Lars-Gunnar Persson
I realized that I'll lose too much disk space with the "double" raid configuration suggested below. Agree? I've done some performance testing with raidz/raidz1 vs raidz2:
bash-3.00# zpool status -v raid5
  pool: raid5
 state: ONLINE
 scrub: none requested
config:
        NAME

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Lars-Gunnar Persson
How about this configuration? On the Nexsan SATABeast, add all disks to one RAID 5 or 6 group. Then on the Nexsan define several smaller volumes and then add those volumes to a raidz2/raidz zpool? Could that be a useful configuration? Maybe I'll lose too much space with "double" raid 5 o
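A sketch of what that would look like, with hypothetical device names; the caveat is that every volume carved from the group lives on the same fourteen spindles, so the raidz members share one failure domain:

  # four volumes exported from a single hardware RAID-6 group;
  # raidz adds parity on top of the RAID-6 parity, so capacity
  # is paid for twice
  zpool create tank raidz c3t0d0 c3t0d1 c3t0d2 c3t0d3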

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-09 Thread Kees Nuyt
On Mon, 9 Mar 2009 12:06:40 +0100, Lars-Gunnar Persson wrote: >1. On the external disk array, I am not able to configure JBOD or RAID 0 >or 1 with just one disk. In some arrays it seems to be possible to configure separate disks by offering the array just one disk in one slot at a time, and, ver

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-09 Thread Frank Cusack
On March 9, 2009 12:06:40 PM +0100 Lars-Gunnar Persson wrote: I'm trying to implement a Nexsan SATABeast ... 1. On the external disk array, I am not able to configure JBOD or RAID 0 or 1 with just one disk. exactly why i didn't buy this product.

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-09 Thread Bob Friesenhahn
On Mon, 9 Mar 2009, Lars-Gunnar Persson wrote: 1. On the external disk array, I am not able to configure JBOD or RAID 0 or 1 with just one disk. I can't find any options for my Solaris server to access the disks directly so I have to configure some raids on the SATABeast. I was thinking of stripi

[zfs-discuss] Nexsan SATABeast and ZFS

2009-03-09 Thread Lars-Gunnar Persson
I'm trying to implement a Nexsan SATABeast (an external disk array, read more: http://www.nexsan.com/satabeast.php, 14 disks available) with a Sun Fire X4100 M2 server running Solaris 10 u6 (connected via fiber) and have a couple of questions: (My motivation for this is the corrupted ZFS vo