On Wed, Jun 25, 2014 at 12:16 (+0200), Pawel Jakub Dawidek via illumos-zfs wrote:

> On Wed, Jun 25, 2014 at 12:09:07PM +0200, Jan Schmidt via illumos-zfs wrote:
>> It seems that we've hit what is described in
>> https://www.illumos.org/issues/4390. To me it looks like the mentioned
>> fixes only prevent the pool corruption from occurring in the first place.
>>
>> How can we recover a pool with a corrupted space map?
>
> When I had space map corruption I created this evil patch:
>
> http://people.freebsd.org/~pjd/patches/space_map_add_recovery.patch
>
> which did save the pool for me, but your mileage may vary.
That patch looks somewhat promising, though I have not tried it yet. How did
you decide which of the overlapping space map ranges to drop? From my
understanding, either range might be the one that is currently correct, might
it not?

> All in all, the best option would be to try importing the pool
> read-only, backing up the data and recreating the pool.

That gives a different stack trace:

Jun 25 12:25:26 hostname ^Mpanic[cpu3]/thread=ffffff001eaaac40:
Jun 25 12:25:26 hostname genunix: [ID 403854 kern.notice] assertion failed: zio->io_type != ZIO_TYPE_WRITE || spa_writeable(spa), file: .../../common/fs/zfs/zio.c, line: 2460
Jun 25 12:25:26 hostname unix: [ID 100000 kern.notice]
Jun 25 12:25:26 hostname genunix: [ID 802836 kern.notice] ffffff001eaaa9d0 fffffffffba883b8 ()
Jun 25 12:25:26 hostname genunix: [ID 655072 kern.notice] ffffff001eaaaa30 zfs:zio_vdev_io_start+198 ()
Jun 25 12:25:26 hostname genunix: [ID 655072 kern.notice] ffffff001eaaaa70 zfs:zio_execute+88 ()
Jun 25 12:25:26 hostname genunix: [ID 655072 kern.notice] ffffff001eaaab30 genunix:taskq_thread+2d0 ()
Jun 25 12:25:27 hostname genunix: [ID 655072 kern.notice] ffffff001eaaab40 unix:thread_start+8 ()
Jun 25 12:25:27 hostname unix: [ID 100000 kern.notice]

Thanks for your help!
-Jan

_______________________________________________
developer mailing list
[email protected]
http://lists.open-zfs.org/mailman/listinfo/developer
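
[Editor's note: for the archives, the read-only import and backup procedure
discussed above can be sketched with standard ZFS commands as follows. Pool
name "tank", mountpoint "/mnt/recovery", and target pool "backuppool" are
placeholders, not names from the thread.]

```shell
# Attempt a read-only import under an alternate root, so nothing is
# written to the damaged pool (the write attempt is what tripped the
# assertion in zio.c above).
zpool import -o readonly=on -R /mnt/recovery tank

# If the import succeeds, copy the data off before destroying and
# recreating the pool, e.g. via a recursive snapshot and send/receive:
zfs snapshot -r tank@rescue
zfs send -R tank@rescue | zfs receive -d backuppool
```

A plain file-level copy (rsync, tar) from /mnt/recovery would work as well;
send/receive additionally preserves dataset properties and snapshots.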
