Re: [zfs-discuss] zpool import panics

2010-11-11 Thread Stephan Budach

On 11.11.10 11:51, Ville Ojamo wrote:


Some generic ideas:

Looks like the zdb output is cut off. Is it cut because of the mail reader, or 
because zdb died for some reason? If it died, run pstack on the core.


What does the panic stack trace say?

Does zpool import work any better with the -f or -F option? Some have 
in the past suggested giving it -o ro as well, but I don't think that 
would do much (?).
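
For instance, the usual escalation looks something like this ("tank" is 
just a placeholder pool name):

zpool import -f tank        # force the import despite the hostid warning
zpool import -fF tank       # additionally let ZFS rewind to an earlier txg
zpool import -f -o ro tank  # mount everything read-only, just in case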



Well, I had cut the zdb output myself, since it went on listing all sorts 
of other information from the datasets, and I figured that output 
wouldn't be of any use to anyone unless somebody requests it.


I tried to import the pool after booting from the OSol snv_134 live 
CD, with the following options:


echo "aok/W 1" | mdb -kw          # let the kernel continue past failed assertions
echo "zfs_recover/W 1" | mdb -kw  # enable best-effort ZFS recovery mode
zpool import -f -F pool
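
(To double-check that the writes actually took, one can read the 
variables back before importing; /D prints the value in decimal:

echo "aok/D" | mdb -k
echo "zfs_recover/D" | mdb -k
)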

ZFS then read from the devices, and after a couple of seconds the host 
rebooted again with a kernel panic.
Since I booted off the live CD, I guess I don't have any crash dumps at 
hand. Maybe there is one on the other host, which has the same issue with 
another pool; a crash dump should have been written there somewhere, 
although I have been unable to find it.




--
Stephan Budach
Jung von Matt/it-services GmbH
Glashüttenstraße 79
20357 Hamburg

Tel: +49 40-4321-1353
Fax: +49 40-4321-1114
E-Mail: stephan.bud...@jvm.de
Internet: http://www.jvm.com

Geschäftsführer: Ulrich Pallas, Frank Wilhelm
AG HH HRB 98380

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import panics

2010-11-11 Thread Stephan Budach

On 11.11.10 14:26, Steve Gonczi wrote:

dumpadm should tell you how your
dumps are set up.
Also, you could load mdb before importing.
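
Typically dumpadm prints something like this (device and directory will 
differ on your box):

# dumpadm
      Dump content: kernel pages
       Dump device: /dev/dsk/c0t0d0s1 (swap)
Savecore directory: /var/crash/myhost
  Savecore enabled: yes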


I have located the dump; it is called vmdump.0. I also loaded mdb before 
I imported the pool, but that didn't help. Actually, I tried it this way:


mdb -K -F    # load kmdb and drop into it
:c           # continue execution
zpool import -f -o readonly pool

Afterwards I tried to get back into kmdb by hitting F1-A, but that didn't 
work; it only printed 'a' on the console. Otherwise I would have 
tried systemdump, but that didn't come to pass.


Is there anything I can do with the vmdump.0 file? Unfortunately, I am 
not a kernel hacker…


Thanks



Re: [zfs-discuss] zpool import panics

2010-11-11 Thread Khushil Dep
Hi,

# savecore -vf vmdump.0

This should produce two files: unix.0 and vmcore.0

Now we use mdb on these as follows:

# mdb unix.0 vmcore.0

Now, when presented with the '>' prompt, type ::status and send us all the
output, please.
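
For reference, ::status output has roughly this shape (the values below 
are made up; yours will name the real panic):

> ::status
debugging crash dump vmcore.0 (64-bit) from myhost
operating system: 5.11 snv_134 (i86pc)
panic message: BAD TRAP: type=e (#pf Page fault) ...
dump content: kernel pages only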

---
W. A. Khushil Dep - khushil@gmail.com -  07905374843

Visit my blog at http://www.khushil.com/


Re: [zfs-discuss] zpool import panics

2010-11-11 Thread David Blasingame Oracle
The vmdump.0 file is a compressed crash dump. You will need to convert it 
into a format that can be read.


# savecore -f ./vmdump.0 ./

This will create a couple of files; the ones you will need next are 
unix.0 and vmcore.0. Use mdb to print out the stack.


# mdb unix.0 vmcore.0

Run the following to print the stack. That will at least tell you which 
function the system is panicking in. You could then do a SunSolve or 
Google search.


$C

and gather zio_state data

::zio_state

And check the msgbuf to see if there are any hardware problems.

::msgbuf

Then quit mdb. Further drill-down depends on what you see.

::quit
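
Putting it all together, the whole session is just this ('>' marks mdb's 
prompt):

# savecore -f ./vmdump.0 ./    # unpack vmdump.0 into unix.0 and vmcore.0
# mdb unix.0 vmcore.0
> $C              # stack of the panicking thread
> ::zio_state     # outstanding ZIOs
> ::msgbuf        # kernel message buffer
> ::quit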

Dave



Re: [zfs-discuss] zpool import panics

2010-11-11 Thread Stephan Budach

David,

thanks so much (and of course to all other helpful souls here as well) 
for providing such great guidance!

Here we go:

On 11.11.10 16:17, David Blasingame Oracle wrote:
The vmdump.0 file is a compressed crash dump. You will need to convert 
it into a format that can be read.


# savecore -f ./vmdump.0 ./

This will create a couple of files; the ones you will need next are 
unix.0 and vmcore.0. Use mdb to print out the stack.


# mdb unix.0 vmcore.0

Run the following to print the stack. That will at least tell you 
which function the system is panicking in. You could then do a 
SunSolve or Google search.


$C

> $C
ff0023b7d450 zap_leaf_lookup_closest+0x40(ff0588c61750, 0, 0, ff0023b7d470)
ff0023b7d4e0 fzap_cursor_retrieve+0xc9(ff0588c61750, ff0023b7d5c0, ff0023b7d600)
ff0023b7d5a0 zap_cursor_retrieve+0x19a(ff0023b7d5c0, ff0023b7d600)
ff0023b7d780 zfs_purgedir+0x4c(ff0581079260)
ff0023b7d7d0 zfs_rmnode+0x52(ff0581079260)
ff0023b7d810 zfs_zinactive+0xb5(ff0581079260)
ff0023b7d860 zfs_inactive+0xee(ff058118ae00, ff056ac3c108, 0)
ff0023b7d8b0 fop_inactive+0xaf(ff058118ae00, ff056ac3c108, 0)
ff0023b7d8d0 vn_rele+0x5f(ff058118ae00)
ff0023b7dac0 zfs_unlinked_drain+0xaf(ff05874c8b00)
ff0023b7daf0 zfsvfs_setup+0xfb(ff05874c8b00, 1)
ff0023b7db50 zfs_domount+0x17c(ff0588aaf698, ff0580cb3d80)
ff0023b7dc70 zfs_mount+0x1e4(ff0588aaf698, ff0588a9f100, ff0023b7de20, ff056ac3c108)
ff0023b7dca0 fsop_mount+0x21(ff0588aaf698, ff0588a9f100, ff0023b7de20, ff056ac3c108)
ff0023b7de00 domount+0xae3(0, ff0023b7de20, ff0588a9f100, ff056ac3c108, ff0023b7de18)
ff0023b7de80 mount+0x121(ff0580c7e548, ff0023b7de98)
ff0023b7dec0 syscall_ap+0x8c()
ff0023b7df10 _sys_sysenter_post_swapgs+0x149()




and gather zio_state data

::zio_state

> ::zio_state
ADDRESS          TYPE  STAGE    WAITER
ff0584be2348     NULL  OPEN     -
ff0570ebcc88     NULL  OPEN     -




And check the msgbuf to see if there are any hardware problems.

::msgbuf


> ::msgbuf
MESSAGE
QEL qlc(0,0): ql_status_error, check condition sense data, d_id=50925h, lun=0h
70h  0h  5h  0h  0h  0h  0h  ah  0h  0h  0h  0h 26h  0h  0h  0h  0h  0h
QEL qlc(0,0): ql_status_error, check condition sense data, d_id=50aefh, lun=0h
70h  0h  5h  0h  0h  0h  0h  ah  0h  0h  0h  0h 26h  0h  0h  0h  0h  0h
WARNING: pool 'obelixData' could not be loaded as it was last accessed by another system (host: opensolaris hostid: 0x75b3c). See: http://www.sun.com/msg/ZFS-8000-EY
QEL qlc(0,0): ql_status_error, check condition sense data, d_id=50925h, lun=0h
70h  0h  5h  0h  0h  0h  0h  ah  0h  0h  0h  0h 26h  0h  0h  0h  0h  0h
QEL qlc(0,0): ql_status_error, check condition sense data, d_id=50aefh, lun=0h
70h  0h  5h  0h  0h  0h  0h  ah  0h  0h  0h  0h 26h  0h  0h  0h  0h  0h
WARNING: pool 'obelixData' could not be loaded as it was last accessed by another system (host: opensolaris hostid: 0x75b3c). See: http://www.sun.com/msg/ZFS-8000-EY
QEL qlc(0,0): ql_status_error, check condition sense data, d_id=50aefh, lun=0h
70h  0h  5h  0h  0h  0h  0h  ah  0h  0h  0h  0h 26h  0h  0h  0h  0h  0h
QEL qlc(0,0): ql_status_error, check condition sense data, d_id=50925h, lun=0h
70h  0h  5h  0h  0h  0h  0h  ah  0h  0h  0h  0h 26h  0h  0h  0h  0h  0h
WARNING: pool 'obelixData' could not be loaded as it was last accessed by another system (host: opensolaris hostid: 0x75b3c). See: http://www.sun.com/msg/ZFS-8000-EY
pseudo-device: devinfo0
devinfo0 is /pseudo/devi...@0
pcplusmp: asy (asy) instance 0 irq 0x4 vector 0xb0 ioapic 0x0 intin 0x4 is bound to cpu 2
ISA-device: asy0
asy0 is /p...@0,0/i...@1f/a...@1,3f8
pcplusmp: asy (asy) instance 1 irq 0x3 vector 0xb1 ioapic 0x0 intin 0x3 is bound to cpu 3
ISA-device: asy1
asy1 is /p...@0,0/i...@1f/a...@1,2f8
pseudo-device: ucode0
ucode0 is /pseudo/uc...@0
sgen0 at ata0: target 0 lun 0
sgen0 is /p...@0,0/pci-...@1f,2/i...@0/s...@0,0
sgen2 at mega_sas0: target 0 lun 1
sgen2 is /p...@0,0/pci8086,2...@1c/pci1028,1...@0/s...@0,1
QEL qlc(0,0): ql_status_error, check condition sense data, d_id=50925h, lun=0h
70h  0h  5h  0h  0h  0h  0h  ah  0h  0h  0h  0h 26h  0h  0h  0h  0h  0h
QEL qlc(0,0): ql_status_error, check condition sense data, d_id=50925h, lun=0h
70h  0h  5h  0h  0h  0h  0h  ah  0h  0h  0h  0h 20h  0h  0h  0h  0h  0h
sd4 at fp0: unit-address w21d02305ff42,0: 50925
sd4 is /p...@0,0/pci8086,3...@7/pci1077,1...@0/f...@0,0/d...@w21d02305ff42,0
pseudo-device: llc10
llc10 is /pseudo/l...@0
pseudo-device: lofi0
lofi0 is /pseudo/l...@0
pseudo-device: ramdisk1024
ramdisk1024 is /pseudo/ramd...@1024
pseudo-device: dcpc0
dcpc0 is /pseudo/d...@0
pseudo-device: fasttrap0
fasttrap0 is /pseudo/fastt...@0
pseudo-device: fbt0
fbt0 is 

Re: [zfs-discuss] zpool import panics

2010-11-11 Thread David Blasingame Oracle
In this function, the second argument is a pointer to the osname 
(the mount). You can dump out the string it points to.

ff0023b7db50 zfs_domount+0x17c(ff0588aaf698, ff0580cb3d80)

# mdb unix.0 vmcore.0
> ff0580cb3d80/S

That should print out the offending FS. You could then try to import the 
pool read-only (-o ro) and set the file system's readonly property 
(zfs set readonly=on fs).
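
For example, the follow-up would look something like this (the dataset 
name below is made up; substitute whatever /S prints):

# zpool import -f -o ro obelixData
# zfs set readonly=on obelixData/brokenfs   # hypothetical dataset name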


Dave


Re: [zfs-discuss] zpool import panics

2010-11-11 Thread Stephan Budach
David,

thanks a lot for your support. I have been able to get both of my zpools up
again by identifying which ZFS file systems caused these problems.

And... today I also learned at least a bit about zpool troubleshooting.

Thanks
Stephan



--
Sent from my iPhone (iOS 4).
