Rick McNeal wrote:
> I'm looking at this trace and things are weird alright. First, you
> asked in your original email about the sense data. You thought the
> initiator, after receiving a check condition, should perform a request
> sense. That's old school. ;-)

Heh, that's from my experience diagnosing <evil>tape drives</evil>

> Everything is enabled with auto_sense
> such that on a check condition the target automatically returns the
> sense data. In the SCSI Response packet you'll see that the data
> length is set to 0x16. What you don't see is the actual sense data
> and that's because it's in the following two packets. 

I take it that's packets 15-16 in my trace?  They come immediately after the
check condition packet.
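For anyone else reading along, those follow-on packets should carry the
fixed-format sense bytes (in iSCSI the data segment starts with a two-byte
sense-length field, then the sense data itself). Here's a minimal decoding
sketch; the byte offsets follow the SPC fixed-format sense layout, and the
names and the sample buffer are mine, not from the trace:

```python
# Minimal sketch of decoding SPC fixed-format sense data, e.g. the
# bytes carried in the Data-In packets after the SCSI Response.
SENSE_KEYS = {
    0x0: "NO SENSE", 0x1: "RECOVERED ERROR", 0x2: "NOT READY",
    0x3: "MEDIUM ERROR", 0x4: "HARDWARE ERROR", 0x5: "ILLEGAL REQUEST",
    0x6: "UNIT ATTENTION", 0x7: "DATA PROTECT", 0xB: "ABORTED COMMAND",
}

def decode_sense(buf: bytes) -> dict:
    """Decode fixed-format (response code 0x70/0x71) sense data."""
    if len(buf) < 14 or buf[0] & 0x7F not in (0x70, 0x71):
        raise ValueError("not fixed-format sense data")
    key = buf[2] & 0x0F              # sense key: low nibble of byte 2
    return {
        "sense_key": SENSE_KEYS.get(key, hex(key)),
        "asc": buf[12],              # additional sense code
        "ascq": buf[13],             # additional sense code qualifier
    }

# A made-up 18-byte buffer with the sense key Rick describes:
sense = bytes([0x70, 0, 0x04, 0, 0, 0, 0, 0x0A] + [0] * 10)
print(decode_sense(sense)["sense_key"])   # HARDWARE ERROR
```

The ASC/ASCQ pair at bytes 12-13 is worth decoding too; it usually narrows
a HARDWARE_ERROR down to a specific cause.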

> So, I have decoded the sense data and I found the strangest error. The
> sense key is set to HARDWARE_ERROR. I need to dig through the code to
> find that one. I seem to remember only using that sense key when the
> underlying hardware returned an error during I/O.

Well, I *am* running this in a virtual machine, and the zpool that has the
exported volume is built on files.  I can create regular ZFS filesystems and
write and read from them with no trouble, so I believe my zpool is functional.
Still, an indication of hardware error gives me a trail to follow.

Interestingly, the symlinks in /dev/zvol/dsk/data/vol/ appear to be wrong,
which may be the source of the problem.  I cannot run newfs on the zvol
either; it fails with ENOENT when stat()ing the device.

# zfs list -r data
NAME                USED  AVAIL  REFER  MOUNTPOINT
data               28.5M  1008M  42.6K  /data
data/regular       40.4K  1008M  40.4K  /data/regular
data/vol           28.2M  1008M  40.4K  /data/vol
data/vol/testlun1  28.1M  1008M  28.1M  -

# newfs /dev/zvol/rdsk/data/vol/testlun1
newfs: /dev/zvol/rdsk/data/vol/testlun1: No such file or directory

# ls -l /dev/zvol/rdsk/data/vol/
total 3
lrwxrwxrwx   1 root     root          42 Jan 20 12:33 lun001 ->
../../../../../devices/pseudo/z...@0:3c,raw
lrwxrwxrwx   1 root     root          42 Jan 20 16:33 testlun ->
../../../../../devices/pseudo/z...@0:3c,raw
lrwxrwxrwx   1 root     root          42 Jan 21 14:09 testlun1 ->
../../../../../devices/pseudo/z...@0:3c,raw

# ls -l /devices/pseudo/zfs*
brw-------   1 root     sys      182,  1 Jan 23 15:00 /devices/pseudo/z...@0:1c
crw-------   1 root     sys      182,  1 Jan 23 15:00 
/devices/pseudo/z...@0:1c,raw
brw-------   1 root     sys      182,  2 Jan 23 15:00 /devices/pseudo/z...@0:2c
crw-------   1 root     sys      182,  2 Jan 23 15:00 
/devices/pseudo/z...@0:2c,raw
crw-rw-rw-   1 root     sys      182,  0 Jan 23 14:29 /devices/pseudo/z...@0:zfs

There are no "z...@0:3c" device nodes.  A "devfsadm -C" has no effect.  I'm out
of my depth with the zvol device nodes, so maybe someone can enlighten me. :)
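In case it helps anyone reproduce this, links whose targets are missing can
be flushed out mechanically. A small sketch, demonstrated on a throwaway
directory (the file names are made up; pointing the same function at
/dev/zvol/rdsk/data/vol would run the real check):

```python
# Sketch: find dangling symlinks, i.e. links whose targets no longer
# exist -- which is what testlun1 -> z...@0:3c,raw looks like above.
import os
import tempfile

def dangling_links(directory):
    """Return (name, target) pairs for symlinks with missing targets."""
    out = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        # os.path.exists() follows the link, so it is False for a
        # dangling symlink even though the link itself is present.
        if os.path.islink(path) and not os.path.exists(path):
            out.append((name, os.readlink(path)))
    return out

# Demo on a scratch directory with one valid and one dangling link:
d = tempfile.mkdtemp()
open(os.path.join(d, "zfs@0:1c,raw"), "w").close()
os.symlink(os.path.join(d, "zfs@0:1c,raw"), os.path.join(d, "good"))
os.symlink(os.path.join(d, "zfs@0:3c,raw"), os.path.join(d, "stale"))
print([name for name, _ in dangling_links(d)])   # ['stale']
```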

Eric
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
