how does one free segment(offset=77984887808 size=66560)
on a pool that won't import?

looks like I found
http://bugs.opensolaris.org/view_bug.do?bug_id=6580715
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-September/042541.html

when I luupgraded a ufs partition (a dvd-b62 install that had been
bfu'd to b68) with a b74 dvd, it booted fine, and I was doing the
same thing I had done on another machine (/usr can live on raidz
if boot is ufs).  On a

zfs destroy -r z/snv_68   (lzjb, with {usr var opt} filesystems)

it crashed with:

Oct 11 14:28:11 nas ^Mpanic[cpu0]/thread=b4b6ee00:
   freeing free segment (vdev=1 offset=122842f400 size=10400)
824aabac genunix:vcmn_err+16 (3, f49966e4, 824aab)
824aabcc zfs:zfs_panic_recover+28 (f49966e4, 1, 0, 284)
824aac20 zfs:metaslab_free_dva+1d1 (82a5b980, 824aace0,)
824aac6c zfs:metaslab_free+90 (82a5b980, 824aace0,)
824aac98 zfs:zio_free_blk+2d (82a5b980, 824aace0,)
824aacb4 zfs:zil_free_log_block+20 (c314f440, 824aace0,)
824aad90 zfs:zil_parse+1aa (c314f440, f4974768,)
824aaddc zfs:zil_destroy+dd (c314f440, 0)
824aae00 zfs:dmu_objset_destroy+35 (8e6ef000)
824aae18 zfs:zfs_ioc_destroy+41 (8e6ef000, 5a18, 3, )
824aae40 zfs:zfsdev_ioctl+d8 (2d80000, 5a18, 8046)
824aae6c genunix:cdev_ioctl+2e (2d80000, 5a18, 8046)
824aae94 specfs:spec_ioctl+65 (8773eb40, 5a18, 804)
824aaed4 genunix:fop_ioctl+46 (8773eb40, 5a18, 804)
824aaf84 genunix:ioctl+151 (3, 5a18, 8046ab8, 8)

on reboot I finished the zfs destroy -r z/snv_68 and then ran:

zfs create z/snv_74
zfs create z/snv_74/usr
zfs create z/snv_74/opt
zfs create z/snv_74/var
zfs set compression=lzjb z/snv_74
cd /z/snv_74
ufsdump 0fs - 999999 /usr /var /opt | ufsrestore -rf -

Oct 11 18:10:06 nas ^Mpanic[cpu0]/thread=87a61de0:
   zfs: allocating allocated segment(offset=77984887808 size=66560)
87a6185c genunix:vcmn_err+16 (3, f4571654, 87a618)
87a61874 zfs:zfs_panic_recover+28 (f4571654, 2842f400,)
87a618e4 zfs:space_map_add+13f (8cbc1e78, 2842f400,)
87a6196c zfs:space_map_load+27a (8cbc1e78, 8613b5b0,)
87a6199c zfs:metaslab_activate+44 (8cbc1c40, 0, 800000)
87a619f4 zfs:metaslab_group_alloc+22a (8c8e4d80, 400, 0, 2)
87a61a80 zfs:metaslab_alloc_dva+170 (82a7b900, 86057bc0,)
87a61af0 zfs:metaslab_alloc+80 (82a7b900, 86057bc0,)
87a61b40 zfs:zio_dva_allocate+6b (88e56dc0)
87a61b58 zfs:zio_next_stage+aa (88e56dc0)
87a61b70 zfs:zio_checksum_generate+5e (88e56dc0)
87a61b84 zfs:zio_next_stage+aa (88e56dc0)
87a61bd0 zfs:zio_write_compress+2c8 (88e56dc0)
87a61bec zfs:zio_next_stage+aa (88e56dc0)
87a61c0c zfs:zio_wait_for_children+46 (88e56dc0, 1, 88e56f)
87a61c20 zfs:zio_wait_children_ready+18 (88e56dc0)
87a61c34 zfs:zio_next_stage_async+ac (88e56dc0)
87a61c48 zfs:zio_nowait+e (88e56dc0)
87a61c94 zfs:dmu_objset_sync+184 (85fe96c0, 88757ae0,)
87a61cbc zfs:dsl_dataset_sync+40 (813ad000, 88757ae0,)
87a61d0c zfs:dsl_pool_sync+a3 (8291c0c0, 286de2, 0)
87a61d6c zfs:spa_sync+1fc (82a7b900, 286de2, 0)
87a61dc8 zfs:txg_sync_thread+1df (8291c0c0, 0)
87a61dd8 unix:thread_start+8 ()

on the second reboot it panicked the same way:

Oct 11 18:17:56 nas ^Mpanic[cpu1]/thread=8f334de0:
   zfs: allocating allocated segment(offset=77984887808 size=66560)
8f33485c genunix:vcmn_err+16 (3, f4571654, 8f3348)
8f334874 zfs:zfs_panic_recover+28 (f4571654, 2842f400,)
8f3348e4 zfs:space_map_add+13f (916a2278, 2842f400,)
8f33496c zfs:space_map_load+27a (916a2278, 829d25b0,)
8f33499c zfs:metaslab_activate+44 (916a2040, 0, 800000)
8f3349f4 zfs:metaslab_group_alloc+22a (88ffb100, 400, 0, 2)
8f334a80 zfs:metaslab_alloc_dva+170 (82a7c980, 8ab851d0,)
8f334af0 zfs:metaslab_alloc+80 (82a7c980, 8ab851d0,)
8f334b40 zfs:zio_dva_allocate+6b (8f8286b8)
8f334b58 zfs:zio_next_stage+aa (8f8286b8)
8f334b70 zfs:zio_checksum_generate+5e (8f8286b8)
8f334b84 zfs:zio_next_stage+aa (8f8286b8)
8f334bd0 zfs:zio_write_compress+2c8 (8f8286b8)
8f334bec zfs:zio_next_stage+aa (8f8286b8)
8f334c0c zfs:zio_wait_for_children+46 (8f8286b8, 1, 8f8288)
8f334c20 zfs:zio_wait_children_ready+18 (8f8286b8)
8f334c34 zfs:zio_next_stage_async+ac (8f8286b8)
8f334c48 zfs:zio_nowait+e (8f8286b8)
8f334c94 zfs:dmu_objset_sync+184 (82ad32c0, 8f5ea480,)
8f334cbc zfs:dsl_dataset_sync+40 (8956b1c0, 8f5ea480,)
8f334d0c zfs:dsl_pool_sync+a3 (89ca5340, 286de2, 0)
8f334d6c zfs:spa_sync+1fc (82a7c980, 286de2, 0)
8f334dc8 zfs:txg_sync_thread+1df (89ca5340, 0)
8f334dd8 unix:thread_start+8 ()

upgrading to:

Sun Microsystems Inc.   SunOS 5.11      snv_75  Oct. 09, 2007
SunOS Internal Development:  dm120769 2007-10-09 [onnv_75-tonic]
with a debug kernel and two cpus:
cpu1: x86 (chipid 0x3 GenuineIntel F27 family 15 model 2 step 7 clock 3057 MHz)
kernelbase set to 0x80000000, system is not i386 ABI compliant.
mem = 5242412K (0x3ff8b000)

got the same:

Oct 11 18:58:35 nas ^Mpanic[cpu0]/thread=95425de0:
   zfs: allocating allocated segment(offset=77984887808 size=66560)
954257dc genunix:vcmn_err+16 (3, f4c2bdfc, 954258)
954257f4 zfs:zfs_panic_recover+28 (f4c2bdfc, 2842f400,)
95425874 zfs:space_map_add+153 (94e6da38, 2842f400,)
954258fc zfs:space_map_load+2d8 (94e6da38, 8eebd5b0,)
9542592c zfs:metaslab_activate+64 (94e6d800, 0, 800000)
95425984 zfs:metaslab_group_alloc+22a (94451a80, 400, 0, 2)
95425a10 zfs:metaslab_alloc_dva+1ac (87c74340, 86882e00,)
95425a80 zfs:metaslab_alloc+143 (87c74340, 86882e00,)
95425adc zfs:zio_dva_allocate+132 (9330a6b8)
95425af8 zfs:zio_next_stage+132 (9330a6b8)
95425b3c zfs:zio_checksum_generate+ad (9330a6b8)
95425b54 zfs:zio_next_stage+132 (9330a6b8)
95425bac zfs:zio_write_compress+3b1 (9330a6b8)
95425bcc zfs:zio_next_stage+132 (9330a6b8)
95425bec zfs:zio_wait_for_children+3b (9330a6b8, 1, 9330a8)
95425c0c zfs:zio_wait_children_ready+18 (9330a6b8)
95425c24 zfs:zio_next_stage_async+134 (9330a6b8)
95425c38 zfs:zio_nowait+e (9330a6b8)
95425c84 zfs:dmu_objset_sync+205 (8dff1580, 92ef58d8,)
95425cac zfs:dsl_dataset_sync+5e (8e45ee00, 92ef58d8,)
95425cfc zfs:dsl_pool_sync+a3 (92335040, 286de2, 0)
95425d6c zfs:spa_sync+212 (87c74340, 286de2, 0)
95425dc8 zfs:txg_sync_thread+25a (92335040, 0)
95425dd8 unix:thread_start+8 ()

I'm back up, but without /etc/zfs/zpool.cache I see no way
to poke around in the pool with zdb -ivv
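(If this build's zdb supports the -e flag — an assumption worth checking
against zdb's usage output — it can examine an exported or un-imported
pool by name without going through zpool.cache:

  # examine the un-imported pool by name, bypassing /etc/zfs/zpool.cache
  zdb -e z

  # the -ivv dump mentioned above, against the exported pool
  zdb -e -ivv z

Since zdb only reads, this should be safe to try even on a pool that
panics the kernel on import.)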

-rw-r--r--   1 root     root        448M Oct 11 19:01 vmcore.3
-rw-r--r--   1 root     root        1.1M Oct 11 19:00 unix.3
-rw-r--r--   1 root     root        388M Oct 11 18:20 vmcore.2
-rw-r--r--   1 root     root        1.7M Oct 11 18:19 unix.2
-rw-r--r--   1 root     root        327M Oct 11 18:14 vmcore.1
-rw-r--r--   1 root     root        1.1M Oct 11 18:14 unix.1
-rw-r--r--   1 root     root        774M Oct 11 17:43 vmcore.0
-rw-r--r--   1 root     root        1.2M Oct 11 17:41 unix.0
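(With the dumps saved, mdb can be pointed at a unix/vmcore pair
directly; a generic sketch, nothing specific to these cores:

  # load the most recent crash dump
  mdb unix.3 vmcore.3

  ::status      # panic string and dump summary
  $C            # stack trace of the panicking thread
  ::spa -v      # spa/vdev state at crash time, from the zfs mdb module

)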

# zpool import

   pool: z
     id: 1481707724184457360
  state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
         some features will not be available without an explicit 'zpool upgrade'.
config:
         z           ONLINE
           raidz1    ONLINE
             c1t1d0  ONLINE
             c2t1d0  ONLINE
             c1t6d0  ONLINE
             c1t4d0  ONLINE
             c2t3d0  ONLINE
           raidz1    ONLINE
             c2t4d0  ONLINE
             c2t6d0  ONLINE
             c1t3d0  ONLINE
             c2t5d0  ONLINE
             c1t5d0  ONLINE

ssh access available.

how does one free segment(offset=77984887808 size=66560)
on a pool that won't import?
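One possibility, hedged: zfs_panic_recover(), which sits at the top of
every stack above, only panics when the zfs_recover tunable is 0.
Setting it (plus aok, for assertion failures on debug kernels) in
/etc/system turns these panics into warnings, which might let the pool
import long enough to destroy the damaged dataset and scrub. These are
undocumented debugging tunables — importing a pool with known-bad space
maps this way risks further inconsistency, so back up what you can first:

* /etc/system fragment -- undocumented debug tunables, use at your own risk
* let zfs_panic_recover() log a warning instead of panicking
set zfs:zfs_recover = 1
* on debug kernels, let failed assertions continue instead of panicking
set aok = 1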
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
