[zfs-discuss] RAID Z stripes

2010-08-10 Thread Terry Hull
I am wanting to build a server with 16 - 1TB drives with 2 – 8 drive RAID Z2 arrays striped together. However, I would like the capability of adding additional stripes of 2TB drives in the future. Will this be a problem? I thought I read it is best to keep the stripes the same width and was
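
As a sketch, the layout being described and the later expansion would look something like this (device names here are hypothetical):

  # zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
      raidz2 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0

  # later, grow the pool with a third top-level vdev of 2TB drives
  # zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0

ZFS accepts top-level vdevs of different sizes; it biases new allocations toward the vdev with the most free space, so the mixed pool works but I/O spreads less evenly than across identical stripes.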

Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Ian Collins
On 08/10/10 06:21 PM, Terry Hull wrote: I am wanting to build a server with 16 - 1TB drives with 2 – 8 drive RAID Z2 arrays striped together. However, I would like the capability of adding additional stripes of 2TB drives in the future. Will this be a problem? I thought I read it is best to

Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Phil Harman
On 10 Aug 2010, at 08:49, Ian Collins i...@ianshome.com wrote: On 08/10/10 06:21 PM, Terry Hull wrote: I am wanting to build a server with 16 - 1TB drives with 2 – 8 drive RAID Z2 arrays striped together. However, I would like the capability of adding additional stripes of 2TB drives in the

Re: [zfs-discuss] Upgrading 2009.06 to something current

2010-08-10 Thread Dick Hoogendijk
On 1-8-2010 19:57, David Dyer-Bennet wrote: I've kind of given up on that. This is a home production server; it's got all my photos on it. The uncertainty around OpenSolaris made me drop it. I'm very sorry to say, because I loved the system. I do not want to worry all the time though, so

[zfs-discuss] ZFS OpenSolaris and FreeBSD

2010-08-10 Thread Dick Hoogendijk
If I create a ZFS mirrored zpool on FreeBSD (zfs v14) will I be able to boot off an OpenSolaris-b131 CD and copy my data off (another) ZFS mirror created by OpenSolaris (ZFS v22)? A simple question, but my data is precious, so I ask beforehand. ;-)
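
For what it's worth, pool versions are backward compatible: a zpool v22 implementation can import and read a v14 pool, and the on-disk version is only bumped if you explicitly run zpool upgrade. A sketch from the b131 live CD (pool name hypothetical):

  # zpool import          (scans attached disks and lists importable pools)
  # zpool import -f tank  (-f needed if the pool was not cleanly exported from FreeBSD)

The one thing to avoid is running zpool upgrade before the data is copied off, since FreeBSD's v14 code cannot import a v22 pool.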

Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Andrew Gabriel
Phil Harman wrote: On 10 Aug 2010, at 08:49, Ian Collins i...@ianshome.com wrote: On 08/10/10 06:21 PM, Terry Hull wrote: I am wanting to build a server with 16 - 1TB drives with 2 – 8 drive RAID Z2 arrays striped together. However, I would like the capability of adding additional stripes of

Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Ian Collins
On 08/10/10 09:12 PM, Andrew Gabriel wrote: Phil Harman wrote: On 10 Aug 2010, at 08:49, Ian Collins i...@ianshome.com wrote: On 08/10/10 06:21 PM, Terry Hull wrote: I am wanting to build a server with 16 - 1TB drives with 2 – 8 drive RAID Z2 arrays striped together. However, I would like the

Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Phil Harman
On 10 Aug 2010, at 10:22, Ian Collins i...@ianshome.com wrote: On 08/10/10 09:12 PM, Andrew Gabriel wrote: Phil Harman wrote: On 10 Aug 2010, at 08:49, Ian Collins i...@ianshome.com wrote: On 08/10/10 06:21 PM, Terry Hull wrote: I am wanting to build a server with 16 - 1TB drives with 2 – 8

Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Terry Hull I am wanting to build a server with 16 - 1TB drives with 2 – 8 drive RAID Z2 arrays striped together. However, I would like the capability of adding additional stripes of 2TB

[zfs-discuss] Global Spare for 2 pools

2010-08-10 Thread Tony MacDoodle
I have 2 ZFS pools all using the same drive type and size. The question is can I have 1 global hot spare for both of those pools? Thanks

Re: [zfs-discuss] Global Spare for 2 pools

2010-08-10 Thread Andrew Gabriel
Tony MacDoodle wrote: I have 2 ZFS pools all using the same drive type and size. The question is can I have 1 global hot spare for both of those pools? Yes. A hot spare disk can be added to more than one pool at the same time. -- Andrew Gabriel
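
A sketch, assuming a hypothetical spare disk c4t0d0:

  # zpool add pool1 spare c4t0d0
  # zpool add pool2 spare c4t0d0

The same device then shows up under spares in zpool status for both pools; whichever pool faults a disk first puts the spare into service.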

[zfs-discuss] Problems with big ZFS send/receive in b134

2010-08-10 Thread David Dyer-Bennet
My full backup still doesn't complete. However, instead of hanging the entire disk subsystem as it did on 111b, it now issues error messages. Errors at the end. sending from @bup-daily-20100726-10CDT to zp1/ddb@bup-daily-20100727-10cdt received 3.80GB stream in 136 seconds (28.6MB/sec)

Re: [zfs-discuss] Problems with big ZFS send/receive in b134

2010-08-10 Thread David Dyer-Bennet
in this new run, and we'll see what happens at the end of this run. (These are from a bash trace as produced by set -x) + zfs create -p bup-wrack/fsfs/zp1 + zfs send -Rp zp1@bup-20100810-154542gmt + zfs recv -Fud bup-wrack/fsfs/zp1 (The send and the receive are source and sink in a pipeline
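
Reconstructed from the bash trace above, the pipeline on one line (snapshot name as shown in the trace):

  # zfs send -Rp zp1@bup-20100810-154542gmt | zfs recv -Fud bup-wrack/fsfs/zp1

-R sends the whole dataset tree with its snapshots, -F rolls the target back to its most recent snapshot before receiving, -u leaves the received filesystems unmounted, and -d recreates the source dataset names underneath bup-wrack/fsfs/zp1.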

Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Terry Hull
From: Phil Harman phil.har...@gmail.com Date: Tue, 10 Aug 2010 09:24:52 +0100 To: Ian Collins i...@ianshome.com Cc: Terry Hull t...@nrg-inc.com, zfs-discuss@opensolaris.org zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] RAID Z stripes On 10 Aug 2010, at 08:49, Ian Collins

Re: [zfs-discuss] Corrupt file without filename

2010-08-10 Thread Cindy Swearingen
You would look for the device name that might be a problem, like this: # fmdump -eV | grep c2t4d0 vdev_path = /dev/dsk/c2t4d0s0 vdev_path = /dev/dsk/c2t4d0s0 vdev_path = /dev/dsk/c2t4d0s0 vdev_path = /dev/dsk/c2t4d0s0 Then, review the file more closely for the details of these errors, such as
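
The complementary check is zpool status -v (pool name hypothetical here), which lists permanent errors by file name wherever the damaged block can still be traced back to a file:

  # zpool status -v tank

An entry printed as a hex identifier such as <0x...> rather than a path generally means the damaged object no longer maps to a named file, for instance because it has since been deleted.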

Re: [zfs-discuss] Problems with big ZFS send/receive in b134

2010-08-10 Thread Dave Pacheco
David Dyer-Bennet wrote: My full backup still doesn't complete. However, instead of hanging the entire disk subsystem as it did on 111b, it now issues error messages. Errors at the end. [...] cannot receive incremental stream: most recent snapshot of bup-wrack/fsfs/zp1/ddb does not match

Re: [zfs-discuss] Global Spare for 2 pools

2010-08-10 Thread Cindy Swearingen
Yes, as long as the pools are on the same system, you can share a spare between two pools, but we are not recommending sharing spares at this time. We'll keep you posted. Thanks, Cindy On 08/10/10 07:39, Tony MacDoodle wrote: I have 2 ZFS pools all using the same drive type and size. The

[zfs-discuss] zfs replace problems please please help

2010-08-10 Thread seth keith
first off I don't have the exact failure messages here, and I did not take good notes of the failures, so I will do the best I can. Please try and give me advice anyway. I have a 7 drive raidz1 pool with 500G drives, and I wanted to replace them all with 2TB drives. Immediately I ran into

Re: [zfs-discuss] zfs replace problems please please help

2010-08-10 Thread Mark J Musante
On Tue, 10 Aug 2010, seth keith wrote: first off I don't have the exact failure messages here, and I did not take good notes of the failures, so I will do the best I can. Please try and give me advice anyway. I have a 7 drive raidz1 pool with 500G drives, and I wanted to replace them all
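
The basic replace cycle for a whole-pool swap runs one disk at a time (device names hypothetical):

  # zpool replace tank c1t0d0 c2t0d0
  # zpool status tank    (wait here until the resilver completes)
  (repeat for each of the remaining drives)

The extra capacity only appears once every drive in the raidz vdev has been replaced and resilvered.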

Re: [zfs-discuss] ZFS with EMC PowerPath

2010-08-10 Thread Cindy Swearingen
Hi Brian, Is the pool exported before the update/upgrade of PowerPath software? This recommended practice might help the resulting devices to be more coherent. If the format utility sees the devices the same way as ZFS, then I don't see how ZFS can rename the devices. If the format utility

Re: [zfs-discuss] ZFS SCRUB

2010-08-10 Thread Cindy Swearingen
The ZFS Best Practices Guide is here: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide Run zpool scrub on a regular basis to identify data integrity problems. If you have consumer-quality drives, consider a weekly scrubbing schedule. If you have datacenter-quality drives,
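
A sketch of scheduling that from root's crontab, with a hypothetical pool name:

  # weekly scrub, 02:00 every Sunday
  0 2 * * 0 /usr/sbin/zpool scrub tank

zpool scrub returns immediately and the scrub runs in the background; zpool status reports its progress.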

Re: [zfs-discuss] Problems with big ZFS send/receive in b134

2010-08-10 Thread Dave Pacheco
David Dyer-Bennet wrote: On Tue, August 10, 2010 13:23, Dave Pacheco wrote: David Dyer-Bennet wrote: My full backup still doesn't complete. However, instead of hanging the entire disk subsystem as it did on 111b, it now issues error messages. Errors at the end. [...] cannot receive

Re: [zfs-discuss] ZFS with EMC PowerPath

2010-08-10 Thread Brian Kolaci
On Aug 10, 2010, at 4:07 PM, Cindy Swearingen wrote: Hi Brian, Is the pool exported before the update/upgrade of PowerPath software? Yes, that's the standard procedure. This recommended practice might help the resulting devices to be more coherent. If the format utility sees the
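
That procedure, sketched with a hypothetical pool name:

  # zpool export tank     (before upgrading PowerPath)
  (upgrade PowerPath, confirm the devices in format)
  # zpool import tank

Since ZFS identifies pool members by the labels and GUIDs written on the disks rather than by device path, the import should rediscover the pool even if the pseudo-device names come back different.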

Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Ian Collins
On 08/10/10 10:09 PM, Phil Harman wrote: On 10 Aug 2010, at 10:22, Ian Collins i...@ianshome.com wrote: On 08/10/10 09:12 PM, Andrew Gabriel wrote: Another option - use the new 2TB drives to swap out the existing 1TB drives. If you can find another use for the swapped out drives, this

Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Ian Collins
On 08/11/10 05:16 AM, Terry Hull wrote: So do I understand correctly that really the Right thing to do is to build a pool not only with a consistent stripe width, but also to build it with drives of only one size? It also sounds like from a practical point of view that building the pool
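
The swap-out route, sketched with hypothetical device names (autoexpand is a pool property on recent builds; on older ones an export/import after the final replace has the same effect):

  # zpool set autoexpand=on tank
  # zpool replace tank c1t0d0 c3t0d0   (one 1TB-to-2TB swap; wait for the resilver)

As with any raidz, the added space shows up only once every member of the vdev is the larger size.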

[zfs-discuss] Raidz - what is stored in parity?

2010-08-10 Thread Peter Taps
Hi, I am going through understanding the fundamentals of raidz. From the man pages, a raidz configuration of P disks and N parity provides (P-N)*X storage space where X is the size of the disk. For example, if I have 3 disks of 10G each and I configure it with raidz1, I will have 20G of usable

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-10 Thread Eric D. Mudama
On Tue, Aug 10 at 15:40, Peter Taps wrote: Hi, First, I don't understand why parity takes so much space. From what I know about parity, there is typically one parity bit per byte. Therefore, the parity should be taking 1/8 of storage, not 1/3 of storage. What am I missing? Think of it as 1
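
Worked through with the 3 x 10G example: raidz parity is one full parity sector per data stripe, not one bit per byte. On a 3-disk raidz1 every stripe holds 2 data sectors plus 1 parity sector, so

  usable = 30G raw x 2/3 = 20G, with 10G (one disk's worth) consumed by parity.

A parity bit per byte can only detect an error; rebuilding an entire lost disk requires a full redundant sector per stripe.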

[zfs-discuss] OT-ish: Solaris 11

2010-08-10 Thread David Magda
For those who missed it, Oracle/Sun announcement on Solaris 11: Solaris 11 will be based on technologies currently available for preview in OpenSolaris including: * Image packaging system * Crossbow network virtualization * ZFS de-duplication * CIFS file

Re: [zfs-discuss] zfs replace problems please please help

2010-08-10 Thread seth keith
First off, double thanks for replying to my post. I tried to follow your advice but something is way wrong. I have all 2TB drives disconnected, and the 7 500GB drives connected. All 7 show up in the BIOS and in format. Here are all the drives, the original 7 500GB drives: # format Searching for

Re: [zfs-discuss] Problems with big ZFS send/receive in b134

2010-08-10 Thread David Dyer-Bennet
I'm not using -I or -i at all. Here are the commands again, and all the output. + zfs create -p bup-wrack/fsfs/zp1 + zfs send -Rp zp1@bup-20100810-154542gmt + zfs recv -Fud bup-wrack/fsfs/zp1 warning: cannot send 'zp1/ddb@bup-2hr-20100731-12cdt': no such pool or dataset warning: cannot
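
An aside not raised in the thread: snapshot holds can stop a rotation from destroying snapshots out from under a running send. A sketch, using an arbitrary hold tag named keep:

  # for s in $(zfs list -H -r -t snapshot -o name zp1); do zfs hold keep "$s"; done
  (run the send/recv)
  # for s in $(zfs list -H -r -t snapshot -o name zp1); do zfs release keep "$s"; done

While a snapshot is held, zfs destroy on it fails with 'dataset is busy' instead of silently invalidating the replication stream.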

Re: [zfs-discuss] Problems with big ZFS send/receive in b134

2010-08-10 Thread Ian Collins
some of which get deleted every 2 hours). There are also errors relating to incremental streams, which is strange since I'm not using -I or -i at all. Here are the commands again, and all the output. + zfs create -p bup-wrack/fsfs/zp1 + zfs send -Rp zp1@bup-20100810-154542gmt + zfs recv -Fud

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-10 Thread Peter Taps
Hi Eric, Thank you for your help. At least one part is clear now. I still am confused about how the system is still functional after one disk fails. Consider my earlier example of 3 disks zpool configured for raidz-1. To keep it simple let's not consider block sizes. Let's say I send a write
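
A worked example of the recovery arithmetic: with 3 disks in raidz1, a two-sector write stores D1 on disk A, D2 on disk B, and P = D1 XOR D2 on disk C. XOR is its own inverse, so if disk B dies its contents are recomputed from the survivors:

  D1 = 10110010
  D2 = 01101100
  P  = D1 XOR D2 = 11011110
  P XOR D1 = 01101100 = D2   (disk B reconstructed)

Every read of the failed disk costs a read of all surviving disks plus an XOR, which is why a degraded raidz keeps working but runs slower.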

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-10 Thread Erik Trimble
On 8/10/2010 9:57 PM, Peter Taps wrote: Hi Eric, Thank you for your help. At least one part is clear now. I still am confused about how the system is still functional after one disk fails. Consider my earlier example of 3 disks zpool configured for raidz-1. To keep it simple let's not