Hi Sean,

A better way probably exists, but I use fmdump -eV to identify the
pool and the device information (vdev_path), which is listed like this:

# fmdump -eV | more
.
.
.


        pool = test
        pool_guid = 0x6de45047d7bde91d
        pool_context = 0
        pool_failmode = wait
        vdev_guid = 0x2ab2d3ba9fc1922b
        vdev_type = disk
        vdev_path = /dev/dsk/c0t6d0s0

Then you can match the vdev_path device to the device in your storage
pool.
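Since the -eV output can be long, you can filter it down and use the vdev value from the fmdump -v fault URI as the search key; it appears to be the vdev_guid in hex without the 0x prefix (that's an assumption on my part). A minimal sketch, with the fmdump output simulated so it runs standalone (the guid/path pairing below is illustrative, not from a real system):

```shell
# Hypothetical vdev id copied from the `fmdump -v` fault URI
vdev=179e471c0732582

# Simulated `fmdump -eV` output; on a live system you would run:
#   fmdump -eV | grep -A3 "vdev_guid = 0x${vdev}"
sample='        pool = vol02
        pool_guid = 0x6de45047d7bde91d
        vdev_guid = 0x179e471c0732582
        vdev_type = disk
        vdev_path = /dev/dsk/c0t6d0s0'

# Prepend 0x and print the matching vdev_guid plus the two lines after it,
# which include the vdev_path (the actual disk device)
printf '%s\n' "$sample" | grep -A2 "vdev_guid = 0x${vdev}"
```

The vdev_path line in the match is the cXtYdZ device you can then look for in zpool status.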

You can also review the date/time stamps in this output to see how long
the device has had a problem.
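For example, the first and last entries in the fmdump -e summary bracket how long the device has been reporting errors. A sketch with simulated log lines (the dates are made up for illustration):

```shell
# Simulated `fmdump -e` summary output; on a live system you would run:
#   fmdump -e
log='TIME                 CLASS
Oct 19 14:02:11.1021 ereport.fs.zfs.io
Oct 20 08:15:43.5530 ereport.fs.zfs.io
Oct 22 09:29:05.3448 ereport.fs.zfs.io'

# Print the first error entry (line 2, after the header) and the last one
printf '%s\n' "$log" | sed -n '2p;$p'
```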

It's probably a good idea to run a zpool scrub on this pool, too.
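Once the scrub finishes, zpool status reports the result. A sketch that checks the scrub line for errors, using a simulated status report (the pool name, timings, and date below are illustrative):

```shell
# On a live system you would run:
#   zpool scrub test && zpool status test
# Simulated `zpool status test` output:
status='  pool: test
 state: ONLINE
 scrub: scrub completed after 0h12m with 0 errors on Fri Oct 23 10:15:22 2009
errors: No known data errors'

# Flag the pool if the scrub line reports anything other than 0 errors
if printf '%s\n' "$status" | grep -q 'with 0 errors'; then
    echo "scrub clean"
else
    echo "scrub found errors"
fi
```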

Cindy


On 10/23/09 12:04, sean walmsley wrote:
This morning we got a fault management message from one of our production 
servers stating that a fault in one of our pools had been detected and fixed. 
Looking into the error using fmdump gives:

fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
TIME                 UUID                                 SUNW-MSG-ID
Oct 22 09:29:05.3448 90ea244e-1ea9-4bd6-d2be-e4e7a021f006 FMD-8000-4M Repaired
  100%  fault.fs.zfs.device

        Problem in: zfs://pool=vol02/vdev=179e471c0732582
           Affects:   zfs://pool=vol02/vdev=179e471c0732582
               FRU: -
          Location: -

My question is: how do I relate the vdev name above (179e471c0732582) to an actual
drive? I've checked this ID against the device IDs (cXtYdZ - obviously no match)
and against all of the disk serial numbers. I've also tried all of the
"zpool list" and "zpool status" options with no luck.

I'm sure I'm missing something obvious here, but if anyone can point me in the 
right direction I'd appreciate it!
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss