Yes, that's exactly what I did. The issue is that I can't get the corrected label to be written once I've zeroed the drive. I get an error from fdisk, which apparently still sees the backup label.
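The backup EFI (GPT) label lives in the last few sectors of the disk, so zeroing the start of the drive leaves the backup copy intact and fdisk keeps finding it. A rough sketch of clearing both copies with dd, using a made-up device name; the seek value has to be worked out from your disk's real sector count, so double-check everything before running something this destructive:

  # wipe the primary label at the front of the disk (MBR + GPT header + entries)
  dd if=/dev/zero of=/dev/rdsk/c2t1d0p0 bs=512 count=34
  # wipe the backup label in the last 34 sectors
  # seek = total sectors - 34; with 976760063 sectors that works out to 976760029
  dd if=/dev/zero of=/dev/rdsk/c2t1d0p0 bs=512 seek=976760029 count=34

On x86 the p0 node addresses the whole disk; adjust the device path for your system.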
--
You mentioned one, so what do you recommend as a workaround? I've tried re-initializing the disks on another system's HW RAID controller, but I still get the same error.
--
Can you recommend a walk-through for this process, or give a bit more of a description? I'm not quite sure how I'd use that utility to repair the EFI label.
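If the utility in question is format(1M), the usual approach is to run it in expert mode and write a fresh label over the damaged one. A rough walk-through from memory, assuming the disk shows up as c2t1d0 (the exact prompts can differ between releases):

  # format -e
  (select c2t1d0 from the disk menu)
  format> label
  [0] SMI Label
  [1] EFI Label
  Specify Label type[1]: 0    <- write an SMI label first to clear the old EFI label
  format> label
  Specify Label type[0]: 1    <- then write a clean EFI label back
  format> quit

Flipping to an SMI label and then back to EFI is a common way to get rid of a stale or corrupt EFI label.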
--
So you're suggesting I buy 750s to replace the 500s, and then if a 750 fails, buy another, bigger drive again?
The drives are RMA replacements for the other disks that faulted in the array before. They are the same brand, model and model number; apparently not so under the label, though, but no way I
Yes, it's the same make and model as most of the other disks in the zpool, and it reports the same number of sectors.
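For what it's worth, the figure that matters is the accessible-sector count in each disk's label; you can compare the old and new drives with prtvtoc (device names here are placeholders, and you may need to point it at a slice such as s0 or s2 depending on how the disk is labeled):

  prtvtoc /dev/rdsk/c1t4d0    # a surviving disk in the pool
  prtvtoc /dev/rdsk/c2t1d0    # the replacement disk

Each report shows an "accessible sectors" line near the top; if the replacement's number is even slightly lower, the replace will be refused as too small.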
--
If so, what should I do to remedy that? Just reformat it?
--
I'm having an issue replacing a failed 500GB disk with a new one: the replace fails with an error saying the disk is too small. The problem is that it isn't. Is there any help anyone can offer here?
I've tried adding it once set as a spare or separate from the pool, and with different formats and configs, all
Volume name =
ascii name          = SAMSUNG-S0VVJ1CP30539-0001-465.76GB
bytes/sector        = 512
sectors             = 976760063
accessible sectors  = 976760030

Part      Tag    Flag     First Sector        Size        Last Sector
  0       usr     wm               256    465.75GB          976743646
  1
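For context, the step that fails is a plain replace; a minimal sketch with made-up pool and device names (the exact error wording can vary between builds):

  # zpool replace tank c1t4d0 c2t1d0
  cannot replace c1t4d0 with c2t1d0: device is too small

The check compares the usable size of the new device against the one it is replacing, so even a handful of missing sectors is enough to trigger it.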