Basically, it is complaining that there aren't enough disks to read
the pool metadata.  This would suggest that in your 3-disk RAID-Z
config, either two disks are missing, or one disk is missing *and*
another disk is damaged -- due to prior failed writes, perhaps.

(I know there's at least one disk missing because the failure mode
is errno 6, which is ENXIO -- "no such device or address".)
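
(To double-check the errno mapping, grepping the system headers should
show something like the following; the exact path and spacing may
differ a bit between releases.)

    $ grep -w ENXIO /usr/include/sys/errno.h
    #define ENXIO   6       /* No such device or address */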

Can you tell from /var/adm/messages or fmdump whether there were write
errors to multiple disks, or to just one?
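
For example, something along these lines should narrow it down (the
grep pattern is just a suggestion; the exact ereport fields can vary
a bit between builds):

    # list the error events FMA has logged (time and class)
    fmdump -e

    # dump the full ereports and pull out the affected device paths
    fmdump -eV | grep -i vdev_path

    # per-device soft/hard/transport error counters
    iostat -En

If more than one device shows errors there, that would be consistent
with the pool metadata being unreadable.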

Jeff

On Tue, Sep 18, 2007 at 05:26:16PM -0700, Geoffroy Doucet wrote:
> I have a raid-z ZFS filesystem with 3 disks. The disks were starting to have
> read and write errors.
> 
> The disks were so bad that I started to get trans_err. The server locked up
> and had to be reset. Now, when trying to import the pool, the system
> panics.
> 
> I installed the latest Recommended patches on my Solaris U3 system and also
> installed the latest kernel patch (120011-14).
> 
> But it still panics when trying to do zpool import <pool>.
> 
> I also dd'd the disks and tested them on another server with OpenSolaris B72,
> and still the same thing happens. Here is the panic backtrace:
> 
>                 Stack Backtrace
>                 -----------------
> vpanic()
> assfail3+0xb9(fffffffff7dde5f0, 6, fffffffff7dde840, 0, fffffffff7dde820, 153)
> space_map_load+0x2ef(ffffff008f1290b8, ffffffffc00fc5b0, 1, ffffff008f128d88,
> ffffff008dd58ab0)
> metaslab_activate+0x66(ffffff008f128d80, 8000000000000000)
> metaslab_group_alloc+0x24e(ffffff008f46bcc0, 400, 3fd0f1, 32dc18000,
> ffffff008fbeaa80, 0)
> metaslab_alloc_dva+0x192(ffffff008f2d1a80, ffffff008f235730, 200,
> ffffff008fbeaa80, 0, 0)
> metaslab_alloc+0x82(ffffff008f2d1a80, ffffff008f235730, 200, ffffff008fbeaa80,
> 2, 3fd0f1)
> zio_dva_allocate+0x68(ffffff008f722790)
> zio_next_stage+0xb3(ffffff008f722790)
> zio_checksum_generate+0x6e(ffffff008f722790)
> zio_next_stage+0xb3(ffffff008f722790)
> zio_write_compress+0x239(ffffff008f722790)
> zio_next_stage+0xb3(ffffff008f722790)
> zio_wait_for_children+0x5d(ffffff008f722790, 1, ffffff008f7229e0)
> zio_wait_children_ready+0x20(ffffff008f722790)
> zio_next_stage_async+0xbb(ffffff008f722790)
> zio_nowait+0x11(ffffff008f722790)
> dmu_objset_sync+0x196(ffffff008e4e5000, ffffff008f722a10, ffffff008f260a80)
> dsl_dataset_sync+0x5d(ffffff008df47e00, ffffff008f722a10, ffffff008f260a80)
> dsl_pool_sync+0xb5(ffffff00882fb800, 3fd0f1)
> spa_sync+0x1c5(ffffff008f2d1a80, 3fd0f1)
> txg_sync_thread+0x19a(ffffff00882fb800)
> thread_start+8()
> 
> 
> 
> And here is the panic message buf:
> panic[cpu0]/thread=ffffff0001ba2c80:
> assertion failed: dmu_read(os, smo->smo_object, offset, size, entry_map) == 0
> (0x6 == 0x0), file: ../../common/fs/zfs/space_map.c, line: 339
> 
> 
> ffffff0001ba24f0 genunix:assfail3+b9 ()
> ffffff0001ba2590 zfs:space_map_load+2ef ()
> ffffff0001ba25d0 zfs:metaslab_activate+66 ()
> ffffff0001ba2690 zfs:metaslab_group_alloc+24e ()
> ffffff0001ba2760 zfs:metaslab_alloc_dva+192 ()
> ffffff0001ba2800 zfs:metaslab_alloc+82 ()
> ffffff0001ba2850 zfs:zio_dva_allocate+68 ()
> ffffff0001ba2870 zfs:zio_next_stage+b3 ()
> ffffff0001ba28a0 zfs:zio_checksum_generate+6e ()
> ffffff0001ba28c0 zfs:zio_next_stage+b3 ()
> ffffff0001ba2930 zfs:zio_write_compress+239 ()
> ffffff0001ba2950 zfs:zio_next_stage+b3 ()
> ffffff0001ba29a0 zfs:zio_wait_for_children+5d ()
> ffffff0001ba29c0 zfs:zio_wait_children_ready+20 ()
> ffffff0001ba29e0 zfs:zio_next_stage_async+bb ()
> ffffff0001ba2a00 zfs:zio_nowait+11 ()
> ffffff0001ba2a80 zfs:dmu_objset_sync+196 ()
> ffffff0001ba2ad0 zfs:dsl_dataset_sync+5d ()
> ffffff0001ba2b40 zfs:dsl_pool_sync+b5 ()
> ffffff0001ba2bd0 zfs:spa_sync+1c5 ()
> ffffff0001ba2c60 zfs:txg_sync_thread+19a ()
> ffffff0001ba2c70 unix:thread_start+8 ()
> 
> syncing file systems...
> 
> 
> Is there a way to restore the data? Is there a way to "fsck" the zpool, and 
> correct the error manually?
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
