Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Haudy Kazemi
Peter Taps wrote: Hi Eric, Thank you for your help. At least one part is clear now. I am still confused about how the system is still functional after one disk fails. Consider my earlier example of a 3-disk zpool configured for raidz-1. To keep it simple, let's not consider block sizes.

[zfs-discuss] Need a link on data corruption

2010-08-11 Thread Orvar Korvar
Someone posted about CERN having a bad network card which injected faulty bits into the data stream, and ZFS detected it because of its end-to-end checksums. Does anyone have more information on this?

Re: [zfs-discuss] Need a link on data corruption

2010-08-11 Thread Lassi Tuura
Hi, Someone posted about CERN having a bad network card which injected faulty bits into the data stream. I've regularly heard people mention data corruption in the network layer here - and at other physics sites like FNAL and SLAC - but I don't have details of any specific incident. We do see our

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Thomas Burgess
On Wed, Aug 11, 2010 at 12:57 AM, Peter Taps ptr...@yahoo.com wrote: Hi Eric, Thank you for your help. At least one part is clear now. I am still confused about how the system is still functional after one disk fails. Consider my earlier example of a 3-disk zpool configured for raidz-1. To

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread Mark J Musante
On Tue, 10 Aug 2010, seth keith wrote:

 # zpool status
   pool: brick
  state: UNAVAIL
 status: One or more devices could not be used because the label is missing or
         invalid. There are insufficient replicas for the pool to continue
         functioning.
 action: Destroy and re-create the pool

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Marty Scholes
Erik Trimble wrote: On 8/10/2010 9:57 PM, Peter Taps wrote: Hi Eric, Thank you for your help. At least one part is clear now. I am still confused about how the system is still functional after one disk fails. Consider my earlier example of a 3-disk zpool configured for raidz-1. To

Re: [zfs-discuss] Problems with big ZFS send/receive in b134

2010-08-11 Thread David Dyer-Bennet
On Tue, August 10, 2010 23:13, Ian Collins wrote: On 08/11/10 03:45 PM, David Dyer-Bennet wrote: cannot receive incremental stream: most recent snapshot of bup-wrack/fsfs/zp1/ddb does not match incremental source That last error occurs if the snapshot exists but has changed; it has been
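
When the receiving side really has diverged (something wrote to it, or a snapshot was destroyed on one end), the usual ways out are to roll the target back to the last common snapshot or to let receive do it. A sketch using the destination named in the error above, with invented snapshot names and the assumption that the source dataset is zp1/ddb:

  # zfs send -i zp1/ddb@older zp1/ddb@newer | zfs receive -F bup-wrack/fsfs/zp1/ddb

The -F makes receive roll the destination back to its most recent snapshot before applying the stream, discarding any local changes there; setting readonly=on on the receive side is a common way to avoid the divergence in the first place.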

Re: [zfs-discuss] Problems with big ZFS send/receive in b134

2010-08-11 Thread David Dyer-Bennet
On Tue, August 10, 2010 16:41, Dave Pacheco wrote: David Dyer-Bennet wrote: If that turns out to be the problem, that'll be annoying to work around (I'm making snapshots every two hours and deleting them after a couple of weeks). Locks between admin scripts rarely end well, in my

Re: [zfs-discuss] zpool 'stuck' after failed zvol destroy and reboot

2010-08-11 Thread Zachary Bedell
Just wanted to post a bit of closure to this thread quickly... Most of the "import taking too long" threads I've found on the list tend to fade out without any definitive answer as to what went wrong. I needed something a bit more concrete to make me happy. After zfs send'ing everything to a

[zfs-discuss] autoreplace not kicking in

2010-08-11 Thread Giovanni Tirloni
Hello, In OpenSolaris b111, with autoreplace=on and a pool without spares, ZFS is not kicking off a resilver after a faulty disk is replaced and shows up with the same device name, even after waiting several minutes. The solution is to do a manual `zpool replace`, which returns the following: #

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Eric D. Mudama
On Tue, Aug 10 at 21:57, Peter Taps wrote: Hi Eric, Thank you for your help. At least one part is clear now. I still am confused about how the system is still functional after one disk fails. The data for any given sector striped across all drives can be thought of as: A+B+C = P where
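
For single-parity raidz the '+' here is, to a first approximation, XOR (addition modulo 2 per bit), which is why any one missing column can be rebuilt from the survivors. A throwaway shell illustration with made-up byte values, nothing ZFS-specific:

  A=0xA5; B=0x3C; C=0x5F                 # pretend data bytes from three data columns
  P=$(( A ^ B ^ C ))                     # parity written to the parity column
  printf 'P = 0x%x\n' "$P"               # prints 0xc6
  printf 'B = 0x%x\n' $(( A ^ C ^ P ))   # XOR of the survivors gives back 0x3c

Lose any one of A, B, C or P and the XOR of the remaining three reproduces it.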

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Peter Taps
Thank you all for your help. It appears my understanding of parity was rather limited. I kept thinking of parity in memory, where the extra bit is used to ensure that the total of all 9 bits is always even. In the case of zfs, that type of checking is actually handled by the checksum.
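
For the curious: the checksum that provides that end-to-end checking is an ordinary per-dataset property, so you can inspect or change it (the dataset name below is just a placeholder):

  # zfs get checksum tank/data
  # zfs set checksum=sha256 tank/data

The default is fine for most data; changing the property only affects blocks written after the change.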

[zfs-discuss] strange permission problem with sharing over NFS

2010-08-11 Thread antst
I found a strange issue. Let's say I have a zfs filesystem export/test1, which is shared over NFSv3. Then I run: zfs create export/test1/test2; chown myuser /export/test1/test2; ls -l /export/test1/test2 (the output shows that myuser is the owner). But if I do ls -l on an NFS client where /export/test1 is mounted,
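
Not necessarily the answer here, but a common cause of exactly this symptom: each ZFS dataset is its own filesystem, and an NFSv3 client that has mounted /export/test1 does not cross into the child dataset's mountpoint; it sees the empty underlying directory (owned by root) instead. The child normally has to be shared and mounted in its own right, something like (server and /mnt/test2 are placeholders):

  # zfs set sharenfs=on export/test1/test2
  client# mount -F nfs server:/export/test1/test2 /mnt/test2

NFSv4 mirror mounts, where available, avoid the extra client-side mount.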

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread Mark J Musante
On Wed, 11 Aug 2010, Seth Keith wrote: When I do a zdb -l /dev/rdsk/any device, I get the same output for all my drives in the pool, but I don't think it looks right: # zdb -l /dev/rdsk/c4d0 What about /dev/rdsk/c4d0s0?

Re: [zfs-discuss] autoreplace not kicking in

2010-08-11 Thread Cindy Swearingen
Hi Giovanni, The spare behavior and the autoreplace property behavior are separate but they should work pretty well in recent builds. You should not need to perform a zpool replace operation if the autoreplace property is set. If autoreplace is set and a replacement disk is inserted into the
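
For reference, both paths look something like this (pool and device names invented); the last line is the manual replace that works when autoreplace does not kick in, and giving zpool replace a single device name tells ZFS that the disk in that same slot was swapped:

  # zpool get autoreplace tank
  # zpool set autoreplace=on tank
  # zpool replace tank c1t2d0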

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Marty Scholes
Peter wrote: One question though. Marty mentioned that raidz parity is limited to 3. But in my experiment, it seems I can get parity to any level. You create a raidz zpool as: # zpool create mypool raidzx disk1 disk2 Here, x in raidzx is a numeric value indicating the desired
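
As far as I know the on-disk format only supports up to triple parity, so the accepted vdev keywords are raidz (same as raidz1), raidz2 and raidz3, for example:

  # zpool create mypool raidz3 disk1 disk2 disk3 disk4 disk5

Anything higher should be rejected at create time, so it is worth re-checking what zpool status actually reports for the pool created in that experiment.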

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread seth keith
This is for newbies like myself: I was using 'zdb -l' wrong. Just using the drive name from 'zpool status' or format, which is like c6d1, didn't work. I needed to add s0 to the end: zdb -l /dev/dsk/c6d1s0 gives me a good-looking label (I think). The pool_guid values are the same for all
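
A quick way to compare labels across the suspect devices (slice names here follow the ones mentioned in this thread; adjust as needed):

  # for d in c4d0s0 c6d1s0 c13d0s0; do echo $d; zdb -l /dev/dsk/$d | grep pool_guid; done

All members of the same pool should report the same pool_guid, while each device carries its own per-device guid.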

[zfs-discuss] Performance Testing

2010-08-11 Thread Paul Kraus
I know that performance has been discussed often here, but I have just gone through some testing in preparation for deploying a large configuration (120 drives is a large configuration for me) and I wanted to share my results, both for their own sake and to see if anyone sees

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread Mark J Musante
On Wed, 11 Aug 2010, seth keith wrote:

        NAME       STATE     READ WRITE CKSUM
        brick      DEGRADED     0     0     0
          raidz1   DEGRADED     0     0     0
            c13d0  ONLINE       0     0     0
            c4d0

[zfs-discuss] ZFS and VMware

2010-08-11 Thread Paul Kraus
I am looking for references of folks using ZFS with either NFS or iSCSI as the backing store for VMware (4.x) virtual machines. We asked the local VMware folks and they had not even heard of ZFS. Part of what we are looking for is a recommendation for NFS or iSCSI, and all
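
For what it's worth, the ZFS side of either option is fairly small. A rough sketch (tank/vmware, tank/vmlun and esxhost are placeholders, and the iSCSI half assumes COMSTAR is installed):

  # zfs create tank/vmware
  # zfs set sharenfs=rw,root=esxhost tank/vmware
  # zfs create -V 200G tank/vmlun
  # sbdadm create-lu /dev/zvol/rdsk/tank/vmlun
  # itadm create-target

The first two lines give ESX an NFS datastore (it needs root access to the share); the rest carve a zvol into a SCSI logical unit, which still needs an stmfadm add-view plus the usual ESX-side iSCSI setup before it shows up as a VMFS candidate.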

Re: [zfs-discuss] Problems with big ZFS send/receive in b134

2010-08-11 Thread Paul Kraus
On Wed, Aug 11, 2010 at 10:36 AM, David Dyer-Bennet d...@dd-b.net wrote: On Tue, August 10, 2010 16:41, Dave Pacheco wrote: David Dyer-Bennet wrote: If that turns out to be the problem, that'll be annoying to work around (I'm making snapshots every two hours and deleting them after a couple

[zfs-discuss] BugID 6961707

2010-08-11 Thread John D Groenveld
I'm stumbling over BugID 6961707 on build 134 (OpenSolaris Development snv_134 X86). Via the b134 Live CD, when I try to zpool import -f -F -n rpool I get this helpful panic: panic[cpu4]/thread=ff006cd06c60: zfs: allocating allocated segment(offset=95698377728 size=16384) ff006cd06580

Re: [zfs-discuss] Performance Testing

2010-08-11 Thread Marion Hakanson
p...@kraus-haus.org said: Based on these results, and our capacity needs, I am planning to go with 5 disk raidz2 vdevs. I did similar tests with a Thumper in 2008, with X4150/J4400 in 2009, and more recently comparing X4170/J4400 and X4170/MD1200:
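
For anyone reading along, a pool built from 5-disk raidz2 vdevs just repeats the vdev keyword on the command line, and ZFS stripes across the top-level vdevs. An illustrative create with invented device names:

  # zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0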

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Peter Taps
I am running ZFS file system version 5 on Nexenta. Peter

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Peter Taps
Thank you, Eric. Your explanation is easy to understand. Regards, Peter

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Simone Caldana
Hi Paul, I am using ESXi 4.0 with an NFS-on-ZFS datastore running on OSOL b134. It previously ran on Solaris 10u7 with VMware Server 2.x. Disks are SATA drives in a JBOD over FC. I'll try to summarize my experience here, although our system does not provide services to end users and thus is not very

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Saxon, Will
-Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Paul Kraus Sent: Wednesday, August 11, 2010 3:53 PM To: ZFS Discussions Subject: [zfs-discuss] ZFS and VMware I am looking for references of folks using

Re: [zfs-discuss] autoreplace not kicking in

2010-08-11 Thread Giovanni Tirloni
On Wed, Aug 11, 2010 at 4:06 PM, Cindy Swearingen cindy.swearin...@oracle.com wrote: Hi Giovanni, The spare behavior and the autoreplace property behavior are separate but they should work pretty well in recent builds. You should not need to perform a zpool replace operation if the

Re: [zfs-discuss] Need a link on data corruption

2010-08-11 Thread David Magda
On Aug 11, 2010, at 04:05, Orvar Korvar wrote: Someone posted about CERN having a bad network card which injected faulty bits into the data stream, and ZFS detected it because of end-to-end checksums. Does anyone have more information on this? CERN generally uses Linux AFAICT:

[zfs-discuss] How to obtain vdev information for a zpool?

2010-08-11 Thread Peter Taps
Folks, When I create a zpool, I get to specify the vdev type - mirror, raidz1, raidz2, etc. How do I get back this information for an existing pool? The status command does not reveal this information: # zpool status mypool When this command is run, I can see the disks in use. However, I

Re: [zfs-discuss] How to obtain vdev information for a zpool?

2010-08-11 Thread James C. McPherson
On 12/08/10 09:21 AM, Peter Taps wrote: Folks, When I create a zpool, I get to specify the vdev type - mirror, raidz1, raidz2, etc. How do I get back this information for an existing pool? The status command does not reveal this information: # zpool status mypool When this command is run, I
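
For the record, zpool status does show the vdev type: in the config section, the indented line above each group of disks names it (raidz1, raidz2, mirror; newer builds append an index such as raidz1-0). Illustrative output, not from a real pool:

  # zpool status mypool
    pool: mypool
   state: ONLINE
  config:
          NAME        STATE     READ WRITE CKSUM
          mypool      ONLINE       0     0     0
            raidz1    ONLINE       0     0     0
              c1t0d0  ONLINE       0     0     0
              c1t1d0  ONLINE       0     0     0
              c1t2d0  ONLINE       0     0     0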

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Paul Kraus I am looking for references of folks using ZFS with either NFS or iSCSI as the backing store for VMware (4.x) virtual machines. I'll try to clearly separate what I know,

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Tim Cook
On Wed, Aug 11, 2010 at 7:27 PM, Edward Ned Harvey sh...@nedharvey.comwrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Paul Kraus I am looking for references of folks using ZFS with either NFS or iSCSI as the backing

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Saxon, Will
-Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tim Cook Sent: Wednesday, August 11, 2010 8:46 PM To: Edward Ned Harvey Cc: ZFS Discussions Subject: Re: [zfs-discuss] ZFS and VMware On Wed, Aug 11, 2010

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Tim Cook
This is not entirely correct either. You're not forced to use VMFS. It is entirely true. You absolutely cannot use ESX with a guest on a block device without formatting the LUN with VMFS. You are *FORCED* to use VMFS. You can format the LUN with VMFS, then put VM files inside the VMFS;

Re: [zfs-discuss] ZFS and VMware (and now, VirtualBox)

2010-08-11 Thread Erik Trimble
Actually, this brings up a related issue. Does anyone have experience with running VirtualBox on iSCSI volumes vs NFS shares, both of which would be backed by a ZFS server? -Erik On Wed, 2010-08-11 at 21:41 -0500, Tim Cook wrote: This is not entirely

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Saxon, Will
-Original Message- From: Tim Cook [mailto:t...@cook.ms] Sent: Wednesday, August 11, 2010 10:42 PM To: Saxon, Will Cc: Edward Ned Harvey; ZFS Discussions Subject: Re: [zfs-discuss] ZFS and VMware I still think there are reasons why iSCSI would be better than NFS and vice

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Tim Cook
My understanding is that if you wanted to use MS Cluster Server, you'd need to use a LUN as an RDM for the quorum drive. VMDK files are locked when open, so they can't typically be shared. VMware's Fault Tolerance gets around this somehow, and I have a suspicion that their Lab Manager

Re: [zfs-discuss] zpool 'stuck' after failed zvol destroy and reboot

2010-08-11 Thread Ville Ojamo
I am having a similar issue at the moment... 3 GB RAM under ESXi, but dedup for this zvol (1.2 TB) was turned off and only 300 GB was used. The pool does contain other datasets with dedup turned on, but they are small enough that I'm not hitting the memory limits (been there, tried that, never again

Re: [zfs-discuss] ZFS performance drop with new Xeon 55xx and 56xx cpus

2010-08-11 Thread michael schuster
On 08/12/10 04:16, Steve Gonczi wrote: Greetings, I am seeing an unexplained performance drop using the above CPUs, on a fairly up-to-date build (late 145). Basically, the system seems to be 98% idle, spending most of its time in this stack: unix`i86_mwait+0xd