Thanks Richard and Matthew.

After applying the fix for https://www.illumos.org/issues/5770, I was able to
run zdb and zpool import with -X, -F, -T, etc. Unfortunately, I had no luck
importing the zpool: -F and -T returned "cannot import 'zp13': one or more
devices is currently unavailable" (which appears to fail at the following
code block in spa.c), and -X seemed to run forever, so I killed it.

	/*
	 * Find the best uberblock.
	 */
	vdev_uberblock_load(rvd, ub, &label);

	/*
	 * If we weren't able to find a single valid uberblock, return failure.
	 */
	if (ub->ub_txg == 0) {
		nvlist_free(label);
		return (spa_vdev_err(rvd, VDEV_AUX_CORRUPT_DATA, ENXIO));
	}

Fortunately, we had a backup of the zpool available, so we simply restored
from it.

Thanks again for the tips; they may be useful in the future (I hope we will
not make such a mistake again).

-Youzhong


On Mon, Jan 25, 2016 at 3:50 PM, Youzhong Yang <youzh...@gmail.com> wrote:

> Hi all,
>
> Just wondering if anyone has done similar recovery using txg stuff.
>
> We have a zpool attached to two hosts physically, ideally at any time only
> one host imports this zpool. Due to some operational mistake this zpool was
> corrupted when the two hosts tried to have access to it. Here is the crash
> stack:
>
> Jan 25 10:07:17 batfs0346 genunix: [ID 403854 kern.notice] assertion
> failed: 0 == dmu_bonus_hold(spa->spa_meta_objset, obj, FTAG, &db), file:
> ../../common/fs/zfs/spa.c, line: 1549
> Jan 25 10:07:17 batfs0346 unix: [ID 100000 kern.notice]
> Jan 25 10:07:17 batfs0346 genunix: [ID 802836 kern.notice]
> ffffff017495c920 fffffffffba6b1f8 ()
> Jan 25 10:07:17 batfs0346 genunix: [ID 655072 kern.notice]
> ffffff017495c9a0 zfs:load_nvlist+e8 ()
> Jan 25 10:07:17 batfs0346 genunix: [ID 655072 kern.notice]
> ffffff017495ca90 zfs:spa_load_impl+10bb ()
> Jan 25 10:07:17 batfs0346 genunix: [ID 655072 kern.notice]
> ffffff017495cb30 zfs:spa_load+14e ()
> Jan 25 10:07:17 batfs0346 genunix: [ID 655072 kern.notice]
> ffffff017495cb80 zfs:spa_tryimport+aa ()
> Jan 25 10:07:17 batfs0346 genunix: [ID 655072 kern.notice]
> ffffff017495cbd0 zfs:zfs_ioc_pool_tryimport+51 ()
> Jan 25 10:07:17 batfs0346 genunix: [ID 655072 kern.notice]
> ffffff017495cc80 zfs:zfsdev_ioctl+4a7 ()
> Jan 25 10:07:17 batfs0346 genunix: [ID 655072 kern.notice]
> ffffff017495ccc0 genunix:cdev_ioctl+39 ()
> Jan 25 10:07:17 batfs0346 genunix: [ID 655072 kern.notice]
> ffffff017495cd10 specfs:spec_ioctl+60 ()
> Jan 25 10:07:17 batfs0346 genunix: [ID 655072 kern.notice]
> ffffff017495cda0 genunix:fop_ioctl+55 ()
> Jan 25 10:07:17 batfs0346 genunix: [ID 655072 kern.notice]
> ffffff017495cec0 genunix:ioctl+9b ()
> Jan 25 10:07:17 batfs0346 genunix: [ID 655072 kern.notice]
> ffffff017495cf10 unix:brand_sys_sysenter+1c9 ()
>
> Is it possible to roll back the zpool to its last known good txg? We know
> when the zpool should be in good state.
>
> Any suggestion would be very much appreciated. We can build a kernel if
> needed.
>
> Thanks,
>
> - Youzhong
>
>



-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now