Re: [zfs-discuss] multiple disk failure (solved?)

2011-02-01 Thread Mike Tancsa
On 1/31/2011 4:19 PM, Mike Tancsa wrote: On 1/31/2011 3:14 PM, Cindy Swearingen wrote: Hi Mike, Yes, this is looking much better. Some combination of removing corrupted files indicated in the zpool status -v output, running zpool scrub and then zpool clear should resolve the corruption,

Re: [zfs-discuss] multiple disk failure (solved?)

2011-02-01 Thread Cindy Swearingen
Excellent. I think you are good for now as long as your hardware setup is stable. You survived a severe hardware failure so say a prayer and make sure this doesn't happen again. Always have good backups. Thanks, Cindy On 02/01/11 06:56, Mike Tancsa wrote: On 1/31/2011 4:19 PM, Mike Tancsa

Re: [zfs-discuss] multiple disk failure (solved?)

2011-02-01 Thread Richard Elling
On Feb 1, 2011, at 5:56 AM, Mike Tancsa wrote: On 1/31/2011 4:19 PM, Mike Tancsa wrote: On 1/31/2011 3:14 PM, Cindy Swearingen wrote: Hi Mike, Yes, this is looking much better. Some combination of removing corrupted files indicated in the zpool status -v output, running zpool scrub and

Re: [zfs-discuss] multiple disk failure

2011-01-31 Thread James Van Artsdalen
He says he's using FreeBSD. ZFS recorded names like ada0, which always refers to a whole disk. In any case, FreeBSD will search all block storage for the ZFS dev components if the cached name is wrong: if the attached disks are connected to the system at all, FreeBSD will find them wherever they may
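The device rescan described above can be sketched with `zpool import`, which scans block devices for pool labels regardless of cached names. This is a hedged sketch, guarded as a function so nothing runs by accident; the pool name tank1 comes from the thread.

```shell
# Sketch (assumed FreeBSD host with the affected pool): 'zpool import'
# rescans block devices for ZFS labels even if cached device names
# are stale. Wrapped in a function so it is not executed here.
rescan_pools() {
  zpool import               # list importable pools found by scanning /dev
  zpool import -d /dev tank1 # explicitly scan /dev for the pool from the thread
}
# rescan_pools   # run manually on the affected system
```

Run only on the host holding the disks; the import scan is read-only until you actually import a pool.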

Re: [zfs-discuss] multiple disk failure (solved?)

2011-01-31 Thread Mike Tancsa
On 1/29/2011 6:18 PM, Richard Elling wrote: On Jan 29, 2011, at 12:58 PM, Mike Tancsa wrote: On 1/29/2011 12:57 PM, Richard Elling wrote: 0(offsite)# zpool status pool: tank1 state: UNAVAIL status: One or more devices could not be opened. There are insufficient replicas for the

Re: [zfs-discuss] multiple disk failure (solved?)

2011-01-31 Thread Cindy Swearingen
Hi Mike, Yes, this is looking much better. Some combination of removing the corrupted files indicated in the zpool status -v output, running zpool scrub, and then zpool clear should resolve the corruption, but it depends on how bad the corruption is. First, I would try the least destructive method:
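The recovery sequence suggested here can be sketched as a short script. This is a hedged, non-authoritative sketch: the pool name tank1 is from the thread, and the commands are wrapped in a function so they only run when invoked on the affected system.

```shell
# Sketch of the least-destructive recovery order from the thread:
# inspect, clear transient errors, scrub, then re-check. Pool name
# "tank1" is taken from the original post.
POOL=tank1
recover() {
  zpool status -v "$POOL"  # list the files flagged as corrupted
  zpool clear "$POOL"      # clear error counters after removing bad files
  zpool scrub "$POOL"      # re-verify every checksum in the pool
  zpool status "$POOL"     # confirm the scrub result
}
# recover   # invoke manually once the hardware is known to be stable
```

The ordering matters: clearing before the scrub ensures the scrub's error counts reflect only problems that persist after the bad files are gone.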

Re: [zfs-discuss] multiple disk failure (solved?)

2011-01-31 Thread Mike Tancsa
On 1/31/2011 3:14 PM, Cindy Swearingen wrote: Hi Mike, Yes, this is looking much better. Some combination of removing corrupted files indicated in the zpool status -v output, running zpool scrub and then zpool clear should resolve the corruption, but it depends on how bad the corruption

Re: [zfs-discuss] multiple disk failure (solved?)

2011-01-31 Thread Richard Elling
On Jan 31, 2011, at 1:19 PM, Mike Tancsa wrote: On 1/31/2011 3:14 PM, Cindy Swearingen wrote: Hi Mike, Yes, this is looking much better. Some combination of removing corrupted files indicated in the zpool status -v output, running zpool scrub and then zpool clear should resolve the

Re: [zfs-discuss] multiple disk failure

2011-01-30 Thread Mike Tancsa
On 1/30/2011 12:39 AM, Richard Elling wrote: Hmmm, doesn't look good on any of the drives. I'm not sure of the way BSD enumerates devices. Some clever person thought that hiding the partition or slice would be useful. I don't find it useful. On a Solaris system, ZFS can show a disk

Re: [zfs-discuss] multiple disk failure

2011-01-30 Thread Richard Elling
On Jan 30, 2011, at 4:31 AM, Mike Tancsa wrote: On 1/30/2011 12:39 AM, Richard Elling wrote: Hmmm, doesn't look good on any of the drives. I'm not sure of the way BSD enumerates devices. Some clever person thought that hiding the partition or slice would be useful. I don't find it useful.

Re: [zfs-discuss] multiple disk failure

2011-01-30 Thread Peter Jeremy
On 2011-Jan-30 13:39:22 +0800, Richard Elling richard.ell...@gmail.com wrote: I'm not sure of the way BSD enumerates devices. Some clever person thought that hiding the partition or slice would be useful. No, there's no hiding. /dev/ada0 always refers to the entire physical disk. If it had
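The naming convention Peter describes can be illustrated briefly. These device paths are illustrative, not taken from a live system: ada0 is the whole physical disk, and partitions or slices get explicit suffixes.

```shell
# FreeBSD device naming, per the discussion above: /dev/ada0 is always
# the entire physical disk; partitions are named explicitly.
whole_disk=/dev/ada0    # entire physical disk
gpt_part=/dev/ada0p1    # first GPT partition on ada0
mbr_slice=/dev/ada0s1   # first MBR slice on ada0
# On a real host, 'gpart show ada0' lists the partitions on the disk.
echo "whole disk: $whole_disk, GPT partition: $gpt_part, MBR slice: $mbr_slice"
```

So a pool built on ada0 genuinely uses the whole disk; nothing is hidden, which is the point being made in reply to the earlier message.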

Re: [zfs-discuss] multiple disk failure

2011-01-30 Thread Richard Elling
On Jan 30, 2011, at 1:09 PM, Peter Jeremy wrote: On 2011-Jan-30 13:39:22 +0800, Richard Elling richard.ell...@gmail.com wrote: I'm not sure of the way BSD enumerates devices. Some clever person thought that hiding the partition or slice would be useful. No, there's no hiding. /dev/ada0

Re: [zfs-discuss] multiple disk failure

2011-01-29 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Mike Tancsa
NAME      STATE    READ WRITE CKSUM
tank1     UNAVAIL     0     0     0  insufficient replicas
  raidz1  ONLINE      0     0     0

Re: [zfs-discuss] multiple disk failure

2011-01-29 Thread Richard Elling
On Jan 28, 2011, at 6:41 PM, Mike Tancsa wrote: Hi, I am using FreeBSD 8.2 and went to add 4 new disks today to expand my offsite storage. All was working fine for about 20min and then the new drive cage started to fail. Silly me for assuming new hardware would be fine :( The new

Re: [zfs-discuss] multiple disk failure

2011-01-29 Thread Mike Tancsa
On 1/29/2011 12:57 PM, Richard Elling wrote: 0(offsite)# zpool status pool: tank1 state: UNAVAIL status: One or more devices could not be opened. There are insufficient replicas for the pool to continue functioning. action: Attach the missing device and online it using 'zpool
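The action line in that status output (attach the missing device and online it) can be sketched as follows. This is a hedged sketch: the device names ada2 through ada5 are assumptions based on the thread, and the commands are wrapped in a function so they only run when invoked.

```shell
# Sketch of the 'action:' advice from zpool status: after reattaching
# the missing disks, bring each member back online. Device names are
# assumed from the thread; substitute the actual pool members.
POOL=tank1
online_members() {
  for d in ada2 ada3 ada4 ada5; do
    zpool online "$POOL" "$d"   # tell ZFS the device is available again
  done
  zpool status "$POOL"          # verify the pool state afterwards
}
# online_members   # run only after the drive cage is known good
```
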

Re: [zfs-discuss] multiple disk failure

2011-01-29 Thread Mike Tancsa
On 1/29/2011 11:38 AM, Edward Ned Harvey wrote: That is precisely the reason why you always want to spread your mirror/raidz devices across multiple controllers or chassis. If you lose a controller or a whole chassis, you lose one device from each vdev, and you're able to continue

Re: [zfs-discuss] multiple disk failure

2011-01-29 Thread Richard Elling
On Jan 29, 2011, at 12:58 PM, Mike Tancsa wrote: On 1/29/2011 12:57 PM, Richard Elling wrote: 0(offsite)# zpool status pool: tank1 state: UNAVAIL status: One or more devices could not be opened. There are insufficient replicas for the pool to continue functioning. action: Attach

Re: [zfs-discuss] multiple disk failure

2011-01-29 Thread Mike Tancsa
On 1/29/2011 6:18 PM, Richard Elling wrote: 0(offsite)# The next step is to run zdb -l and look for all 4 labels. Something like: zdb -l /dev/ada2 If all 4 labels exist for each drive and appear intact, then look more closely at how the OS locates the vdevs. If you can't solve the
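The label check Richard suggests can be sketched as a loop over the former pool members. Each healthy member carries four copies of the ZFS label; the device list here is an assumption based on the thread, and the exact `zdb -l` output format may vary by ZFS version.

```shell
# Sketch of the suggested label check: an intact vdev has 4 ZFS labels
# (two at the front of the device, two at the back). Device names are
# assumptions from the thread; adjust to the actual members.
check_labels() {
  for d in ada2 ada3 ada4 ada5; do
    count=$(zdb -l "/dev/$d" | grep -c '^LABEL')
    echo "$d: $count of 4 labels readable"
  done
}
# check_labels   # expect 4 per disk on an intact member
```

If all four labels are present on every drive, the data is likely intact and the problem is in how the OS locates the vdevs, which is the next thing the thread investigates.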

Re: [zfs-discuss] multiple disk failure

2011-01-29 Thread Richard Elling
On Jan 29, 2011, at 4:14 PM, Mike Tancsa wrote: On 1/29/2011 6:18 PM, Richard Elling wrote: 0(offsite)# The next step is to run zdb -l and look for all 4 labels. Something like: zdb -l /dev/ada2 If all 4 labels exist for each drive and appear intact, then look more closely at how

[zfs-discuss] multiple disk failure

2011-01-28 Thread Mike Tancsa
Hi, I am using FreeBSD 8.2 and went to add 4 new disks today to expand my offsite storage. All was working fine for about 20 minutes, and then the new drive cage started to fail. Silly me for assuming new hardware would be fine :( The failing cage hung the server, and the