> > One reason to slice comes from recent personal experience.  One disk
> > of a mirror dies.  It's replaced under contract with an identical
> > disk: same model number, same firmware.  Yet when it's plugged into
> > the system, for an unknown reason, it appears 0.001 GB smaller than
> > the old disk, and is therefore unable to attach and un-degrade the
> > mirror.  It seems this problem could have been avoided if the device
> > originally added to the pool had been a slice somewhat smaller than
> > the whole physical device.  Say, a 28G slice out of the 29G physical
> > disk.  Then, when I later get the infinitesimally smaller disk, I can
> > always slice 28G out of it to use as the mirror device.
> >
> 
> What build were you running?  That should have been addressed by
> CR 6844090, which went into build 117.

I'm running Solaris, but that's irrelevant.  The StorageTek array controller
itself reports the new disk as infinitesimally smaller than the one I want
to mirror.  That's the case even before the drive is handed to the OS.  The
server is a Sun X4275.

BTW, the pool is still degraded.  I haven't found an answer yet, and I'm
considering breaking all my mirrors, creating a new pool on the freed disks,
and using partitions on those disks, so that the pool ends up rebuilt on
partitions on all disks (roughly as sketched below).  The performance
problem mentioned earlier scares me less than running with degraded
redundancy.
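
Roughly what I have in mind, command-wise.  The pool and device names below
are placeholders, not my actual layout:

    # Break one side of an existing mirror to free a disk:
    zpool detach tank c1t3d0

    # In format(1M), relabel the freed disk and make slice 0 a bit smaller
    # than the raw capacity (e.g. 28G on a 29G disk), so a marginally
    # smaller replacement can still supply an identical slice later:
    format c1t3d0

    # Build the new pool on the slice rather than the whole disk:
    zpool create newpool c1t3d0s0

    # Later, mirror it with an equally sized slice from another freed disk:
    zpool attach newpool c1t3d0s0 c1t4d0s0

Then migrate the data over and repeat for the remaining disks.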


> It's well documented.  ZFS won't attempt to enable the drive's write
> cache unless it has the whole physical device.  See
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pools

Nice.  Thank you.
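
If I'm reading that page right, the practical difference is roughly this
(pool and device names are again just placeholders):

    # Whole disks: ZFS labels them with EFI and enables the drives' write
    # cache itself.
    zpool create tank mirror c1t2d0 c1t3d0

    # Slices: ZFS leaves the write cache alone.  If the whole disk is still
    # dedicated to ZFS, the cache can be enabled manually, e.g. through the
    # cache / write_cache menu in format -e.
    zpool create tank mirror c1t2d0s0 c1t3d0s0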

