I have 24 identical Western Digital drives connected to a Dell SAS 5/E HBA.
Three of the drives list the following disk information when using the verify utility under the format command:

    Volume name        = < >
    ascii name         = <ATA-WDC WD2002FYPS-0-5G04-1.82TB>
    bytes/sector       = 512
    sectors            = 3907029166
    accessible sectors = 3907029133

    Part      Tag    Flag     First Sector        Size        Last Sector
      0        usr    wm                34      1.82TB         3907012749
      1 unassigned    wm                 0           0                  0
      2 unassigned    wm                 0           0                  0
      3 unassigned    wm                 0           0                  0
      4 unassigned    wm                 0           0                  0
      5 unassigned    wm                 0           0                  0
      6 unassigned    wm                 0           0                  0
      8   reserved    wm        3907012750      8.00MB         3907029133

The remaining 21 drives show this disk info:

    Volume name        = < >
    ascii name         = <ATA-WDC WD2002FYPS-0-5G04-1.82TB>
    bytes/sector       = 512
    sectors            = 3907029166
    accessible sectors = 3907029134

    Part      Tag    Flag     First Sector        Size        Last Sector
      0        usr    wm               256      1.82TB         3907012750
      1 unassigned    wm                 0           0                  0
      2 unassigned    wm                 0           0                  0
      3 unassigned    wm                 0           0                  0
      4 unassigned    wm                 0           0                  0
      5 unassigned    wm                 0           0                  0
      6 unassigned    wm                 0           0                  0
      8   reserved    wm        3907012751      8.00MB         3907029134

I know this isn't going to be an issue when creating a raidz pool, but if I have to replace a failed disk with one that has one fewer accessible sector, won't that cause problems?

According to the ZFS Best Practices Guide:

"The size of the replacement vdev, measured by usable sectors, must be the same or greater than the vdev being replaced. This can be confusing when whole disks are used because different models of disks may provide a different number of usable sectors."

Can anyone shed some light on this?
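For reference, here is the sort of loop I've been using to pull the accessible-sector counts without stepping through format's verify on every disk. This is just a rough sketch: the c1t*d0 device names are made up for illustration, and the s0 slice suffix may need adjusting for your setup:

    #!/bin/sh
    # Print the accessible-sector line from each disk's label so the
    # counts can be compared side by side. Device names below are
    # placeholders; substitute your own c#t#d# names.
    for d in c1t0d0 c1t1d0 c1t2d0
    do
        echo "=== $d ==="
        prtvtoc /dev/rdsk/${d}s0 | grep 'accessible sectors'
    done

My understanding is that if a replacement disk really does come up a sector short, a 'zpool replace <pool> <old-disk> <new-disk>' should simply refuse with a size error rather than do anything destructive, but I'd appreciate confirmation of that.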