Your stack trace matches CR 6616286, which is in fact a variant of CR 6603147, claimed to be fixed in snv_76. Looking again, though, it appears to have re-surfaced; it was logged as CR 6971273 and fixed in snv_146. You can read the publicly available information at:
   http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6616286
   http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6971273

although there's not a lot there, and there is no workaround listed, so I'm not sure how you can proceed from here.
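
If your priority is getting the data off, the first thing I would try (a suggestion only -- I haven't reproduced your pool's state) is a read-only import from the Solaris 11 Express live media. The panic is firing on the allocation (write) path, so a read-only import should avoid it, assuming the zpool in your live environment supports the readonly property (I believe 2010.11 does, but check the man page). Roughly:

   # zpool import                                (lists importable pools and their IDs)
   # zpool import -o readonly=on -R /a -f rpool  (read-only, mounted under the /a altroot)
   # zfs list -r rpool                           (check the datasets are all visible)

The /a altroot is just a convenient choice to keep the pool's mounts out of the way of the live environment. If that works, copy your mail and recent work off to another disk (zfs send/recv or plain cp) before experimenting any further. If the pool name clashes with anything in the live environment, you can import by the numeric pool ID shown by 'zpool import' and give it a temporary name.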

The people on the zfs-discuss mailing list may be able to help you out.
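
One more thing worth asking about there: for this class of panic I have seen people suggest setting the zfs_recover and aok kernel variables, which turn the zfs_panic_recover() check into a warning instead of a panic. This is unsupported and risky, so treat it as a last resort after you have tried to copy the data off read-only. You can't edit /etc/system on a pool you can't import, but you can set the variables in the live environment with mdb before attempting the import -- roughly:

   # zpool import                  (run once with no arguments so the zfs module gets loaded)
   # mdb -kw
   > zfs_recover/W 1
   > aok/W 1
   > $q
   # zpool import -R /a -f rpool

Even if that gets the pool to import, treat it as a salvage operation: copy your data off and re-create the pool rather than continuing to run on it.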

Regards,
Brian


Karel Gardas wrote:
Hello,

during my attempt to update my workstation OS to the latest Solaris 11 Express 2010.11 I have reached the point where the machine no longer boots. This happened right after the final reboot of Solaris 11 Express, when everything had been updated and set up in the usual way. During that last reboot I got the following panic (I didn't catch it the first time, so I booted with -s -k to see it). Please also note that this crash output was taken from the console of a second computer, into which I moved the disks to find out whether my workstation hardware (motherboard) is broken or whether the disks themselves are failing. In any case, the message on the workstation was the same ("zfs: allocating allocated segment", etc.).

panic[cpu0]/thread=ffffff0016e03c40: zfs: allocating allocated 
segment(offset=335092685312 size=4096)


Warning - stack not written to the dump buffer
ffffff0016e03500 genunix:vcmn_err+2c ()
            35f0 zfs:zfs_panic_recover+ae ()
            3690 zfs:space_map_add+fb ()
            3750 zfs:space_map_load+294 ()
            37b0 zfs:metaslab_activate+95 ()
            3870 zfs:metaslab_group_alloc+246 ()
            3930 zfs:metaslab_alloc_dva+2a4 ()
            3a00 zfs:metaslab_alloc+d4 ()
            3a60 zfs:zio_dva_allocate+db ()
            3a90 zfs:zio_execute+8d ()
            3b30 genunix:taskq_thread+248 ()
            3b40 unix:thread_start+8 ()
panic: entering debugger (no dump device, continue to reboot)

Welcome to kmdb
Loaded modules: [ scsi_vhci mac uppc sd unix cpu_ms.AuthenticAMD.15 mpt zfs 
krtld sata apix genunix specfs pcplusmp cpu.generic ]
[0]>

Please note that my workstation (where the issue first appeared, though I only remember the panic's first line) is an Asus P5W64 WS Pro with 6GB ECC RAM and 2x 500GB Hitachi Travelstar 2.5" 7200RPM drives connected to the motherboard's ICHx (I don't know which ICH the i975X chipset provides...). When this happened I put a Supermicro AOC-SAT2-MV8 into the workstation and tested it with 3 old drives on which, luckily, an OS 2009.06 instance was still present. The machine worked as expected, so I hooked my 500GB Hitachis to the card and got the same panic. I should also note that before this I ran 2 full passes of memcheck86 without any memory errors. So it looks more like a drive issue, hence I put the Supermicro card into my testing hp585/4-Opteron box, hooked the 3 old drives to it, verified that they behave as they should, and then tested my workstation's 500GB Hitachis. Again the same panic -- at which point I grabbed the notebook and copied the panic from the hp585 console output -- so the panic above comes from the 2x Hitachi Travelstar 500GB 2.5" 7200RPM drives hooked to the Supermicro AOC-SAT2-MV8 card in a 133MHz PCI-X slot inside the HP585 box, during an attempt to boot Solaris 11 Express 2010.11. Ask if you also need the panic transcribed from the original workstation....

Anyway, now for the sad part. This is my workstation rpool, with 2-3 months of un-backed-up email and the last 3-4 weeks of un-backed-up work. The pool also still holds preserved BEs of OS 2009.11 and OpenSolaris snv_134b. I tried to boot both of them while the drives were still in the workstation hooked to the ICHx, but the panic was the same ("allocating allocated segment"...).
Now, is there any chance to get my data back?

Last note: this pool is a mirror of the two drives -- should I try attaching just one of the drives for a test, and then the other one if that test does not succeed? I have not attempted this yet, as I would like some ZFS expert advice on it first.

Thanks for any idea on how to proceed!
Karel
PS: shall I also file a bug report for it?

--
Brian Ruthven
Solaris Network RPE (Sustaining)
Oracle UK


