Solaris and/or ZFS are badly confused about drive IDs. The "c5t0d0" names are very far removed from the real world, and possibly they've gotten screwed up somehow. Is devfsadm supposed to fix those, or does it only clean up stale entries?
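
For concreteness, here's roughly what I was planning to run, based on my reading of the devfsadm man page; tell me if this isn't the right incantation:

    # recreate /dev/dsk and /dev/rdsk links for everything currently attached
    devfsadm -v

    # cleanup mode (-C): remove dangling links for devices no longer present
    devfsadm -Cv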

Reason I believe it's confused:

zpool status shows mirror-0 on c9t3d0, c9t2d0, and c9t5d0. But format shows the one remaining Seagate 400GB drive at c5t0d0 (my initial pool was two of those; I replaced one with a Samsung 1TB earlier today). Now, the mirror with three drives in it is my very first mirror, which has to contain the one remaining Seagate drive (given that I removed one Seagate; otherwise I could be confusing order of creation with mirror numbering). Yet none of those three c9 names is c5t0d0.
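
To try to pin this down, I was going to cross-check serial numbers against the cXtYdZ names with something like this:

    # show vendor, product, size, and serial number for each disk
    iostat -En

    # compare against what ZFS thinks the mirrors are made of
    zpool status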

I'm thinking either Solaris' appalling mess of device files is somehow scrod, or else ZFS is confused in its reporting (perhaps because of stale zpool.cache contents?). Is there anything I can do about either of these? Does devfsadm really create the appropriate /dev/dsk and /dev/rdsk entries based on what's actually present? Would deleting the cache file while the pool is exported, and then searching for and importing the pool, help?
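
Specifically, the sequence I had in mind is the one below (pool name "zp1" is just a stand-in for mine); is this safe and sensible?

    # unmount the datasets and remove the pool's entry from the cache file
    zpool export zp1

    # move the old cache file aside while the pool is not imported
    mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.old

    # scan /dev/dsk for importable pools and show what's found
    zpool import

    # import the pool by name, letting ZFS rediscover its devices
    zpool import zp1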

How worried should I be?  (I've got current backups).
--
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
