Hi Matthew,

You can use various forms of fmdump to decode this output. It might be
easier to use fmdump -eV and look for the device info in the vdev_path
entry, like the one below.
Also see if the errors on these vdevs are reported in your zpool status
output.

Thanks,

Cindy

# fmdump -eV | more
TIME                           CLASS
Oct 14 2009 09:56:54.639354792 ereport.fs.zfs.vdev.open_failed
nvlist version: 0
        class = ereport.fs.zfs.vdev.open_failed
        ena = 0xd9fa6d282c00001
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = zfs
                pool = 0xacea55024964f6d6
                vdev = 0xf04f53d61ed76317
        (end detector)
        pool = mirpool
        pool_guid = 0xacea55024964f6d6
        pool_context = 0
        pool_failmode = wait
        vdev_guid = 0xf04f53d61ed76317
        vdev_type = disk
        vdev_path = /dev/dsk/c1t226000C0FFA001ABd3s0
        vdev_devid = id1,s...@n600c0ff0000000000001ab23c5606e03/a
        parent_guid = 0x6035386f7936f350

On 10/21/09 10:18, Matthew C Aycock wrote:
I have several of these messages from fmdump:

fmdump -v -u 98abae95-8053-4cdc-d91a-dad89b125db4
TIME                 UUID                                 SUNW-MSG-ID
Sep 18 00:45:23.7621 98abae95-8053-4cdc-d91a-dad89b125db4 ZFS-8000-FD
  100%  fault.fs.zfs.vdev.io

        Problem in: zfs://pool=mzfs/vdev=a414878cf09644a
           Affects: zfs://pool=mzfs/vdev=a414878cf09644a
               FRU: -
          Location: -

Oct 21 10:34:41.8014 98abae95-8053-4cdc-d91a-dad89b125db4 FMD-8000-4M Repaired
  100%  fault.fs.zfs.vdev.io

        Problem in: zfs://pool=mzfs/vdev=a414878cf09644a
           Affects: zfs://pool=mzfs/vdev=a414878cf09644a
               FRU: -
          Location: -

I am trying to determine which of the four vdevs is involved. How do I
translate vdev=a414878cf09644a to a cWtXdYsZ device name?
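The lookup Cindy describes can be scripted. A minimal sketch, assuming the ereports list vdev_guid before vdev_path as in the sample output in this thread (the GUID below is the one from that sample; substitute your own):

```shell
# Hypothetical helper: map a vdev GUID from a fault report to its device
# path by scanning the ereport log. Assumes each ereport prints vdev_guid
# before vdev_path, as in the fmdump -eV sample in this thread.
guid=0xf04f53d61ed76317        # substitute the GUID you are chasing
fmdump -eV | awk -v g="$guid" '
    $1 == "vdev_guid" && $3 == g { hit = 1 }
    $1 == "vdev_path" && hit     { print $3; hit = 0 }
' | sort -u
```

Note that fmdump -v prints the GUID without the 0x prefix (vdev=a414878cf09644a), so you may need to prepend 0x when matching it against the ereport fields.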
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss