As does fdisk -G:

root@nas:~# fdisk -G /dev/rdsk/c16t5000C5002AA08E4Dd0
* Physical geometry for device /dev/rdsk/c16t5000C5002AA08E4Dd0
* PCYL     NCYL     ACYL     BCYL     NHEAD  NSECT  SECSIZ
  60800    60800    0        0        255    252    512
You have new mail in /var/mail/root
root@nas:~# fdisk -G /dev/rdsk/c16t5000C5005295F727d0
* Physical geometry for device /dev/rdsk/c16t5000C5005295F727d0
* PCYL     NCYL     ACYL     BCYL     NHEAD  NSECT  SECSIZ
  60800    60800    0        0        255    252    512
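[Editor's note: as a quick sanity check, the capacity implied by that reported geometry can be recomputed from cylinders × heads × sectors/track × bytes/sector. A minimal sketch using the figures from the fdisk -G output above:]

```python
# Capacity implied by the fdisk -G geometry (both drives report the same):
#   NCYL=60800, NHEAD=255, NSECT=252, SECSIZ=512
ncyl, nhead, nsect, secsiz = 60800, 255, 252, 512

sectors = ncyl * nhead * nsect   # whole-cylinder sector count
capacity = sectors * secsiz      # bytes

print(sectors)   # 3907008000 -- slightly under the 3907029168 LBAs prtvtoc reports,
print(capacity)  # 2000388096000 bytes, i.e. a nominal 2 TB drive
```

[So the legacy CHS geometry rounds down to whole cylinders; it says nothing about the physical (4K vs. 512-byte) sector size, which is the property at issue here.]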
On Mon, Sep 24, 2012 at 9:01 AM, LIC mesh <licm...@gmail.com> wrote:
> Yet another weird thing - prtvtoc shows both drives as having the same
> sector size, etc:
>
> root@nas:~# prtvtoc /dev/rdsk/c16t5000C5002AA08E4Dd0
> * /dev/rdsk/c16t5000C5002AA08E4Dd0 partition map
> *
> * Dimensions:
> *        512 bytes/sector
> * 3907029168 sectors
> * 3907029101 accessible sectors
> *
> * Flags:
> *   1: unmountable
> *  10: read-only
> *
> * Unallocated space:
> *       First     Sector    Last
> *       Sector    Count     Sector
> *       34        222       255
> *
> *                          First       Sector      Last
> * Partition  Tag  Flags    Sector      Count       Sector    Mount Directory
>       0      4    00       256         3907012495  3907012750
>       8      11   00       3907012751  16384       3907029134
>
> root@nas:~# prtvtoc /dev/rdsk/c16t5000C5005295F727d0
> * /dev/rdsk/c16t5000C5005295F727d0 partition map
> *
> * Dimensions:
> *        512 bytes/sector
> * 3907029168 sectors
> * 3907029101 accessible sectors
> *
> * Flags:
> *   1: unmountable
> *  10: read-only
> *
> * Unallocated space:
> *       First     Sector    Last
> *       Sector    Count     Sector
> *       34        222       255
> *
> *                          First       Sector      Last
> * Partition  Tag  Flags    Sector      Count       Sector    Mount Directory
>       0      4    00       256         3907012495  3907012750
>       8      11   00       3907012751  16384       3907029134
>
> On Mon, Sep 24, 2012 at 12:20 AM, Timothy Coalson <tsc...@mst.edu> wrote:
>> I think you can fool a recent Illumos kernel into thinking a 4k disk is
>> 512 (incurring a performance hit for that disk, and therefore the vdev
>> and pool, but to save a raidz1, it might be worth it):
>>
>> http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks
>> see "Overriding the Physical Sector Size"
>>
>> I don't know what you might have to do to coax it to do the replace with
>> a hot spare (zpool replace? export/import?). Perhaps there should be a
>> feature in ZFS that notifies when a pool is created or imported with a
>> hot spare that can't be automatically used in one or more vdevs?
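[Editor's note: the override Tim points to works by pinning the reported physical block size in the sd driver's configuration. A sketch of what such an entry might look like; the vendor/product inquiry string below is a placeholder, not these drives' actual identity, and must match the spare's real VENDOR/PRODUCT fields (space-padded) per the format described on the linked wiki page:]

```
# /kernel/drv/sd.conf -- illustrative only; replace the inquiry string
# with the spare drive's actual VENDOR/PRODUCT fields.
sd-config-list = "ATA     ST2000XXXXX", "physical-block-size:512";
```

[The sd driver re-reads this at attach, so the disk must be reattached (or the system rebooted) before the override takes effect.]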
>> The whole point of hot spares is to have them automatically swap in when
>> you aren't there to fiddle with things, which is a bad time to find out
>> it won't work.
>>
>> Tim
>>
>> On Sun, Sep 23, 2012 at 10:52 PM, LIC mesh <licm...@gmail.com> wrote:
>>
>>> Well, this is a new one....
>>>
>>> Illumos/OpenIndiana let me add a device as a hot spare that evidently
>>> has a different sector alignment than all of the other drives in the
>>> array.
>>>
>>> So now I'm at the point that I /need/ a hot spare, and it doesn't look
>>> like I have it.
>>>
>>> And, worse, the other spares I have are all the same model as said hot
>>> spare.
>>>
>>> Is there anything I can do with this, or am I just going to be up the
>>> creek when any one of the other drives in the raidz1 fails?
>>>
>>> _______________________________________________
>>> zfs-discuss mailing list
>>> zfs-discuss@opensolaris.org
>>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
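[Editor's note: the mismatch being discussed comes down to ashift. ZFS records each vdev's minimum block size at creation as a power of two, and a replacement disk whose physical sector size exceeds that cannot attach. A minimal illustration of the arithmetic; the helper function is hypothetical, not a ZFS API:]

```python
import math

def ashift(sector_size: int) -> int:
    """Hypothetical helper: ZFS's ashift is log2 of the sector size."""
    return int(math.log2(sector_size))

vdev = ashift(512)    # raidz1 built on 512-byte-sector drives -> ashift 9
spare = ashift(4096)  # 4K Advanced Format spare -> ashift 12

# The spare can only attach if its sector size fits the vdev's ashift:
print(vdev, spare, spare <= vdev)  # 9 12 False -> the replace is refused
```

[This is also why the sector-size override is per-disk rather than per-pool: it changes what the sd driver reports, so the spare appears to satisfy the vdev's existing ashift at the cost of read-modify-write cycles inside the drive.]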
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss