On Thu, Sep 30, 2010 at 1:16 AM, Scott Meilicke
scott.meili...@craneaerospace.com wrote:
Resilver speed has been beaten to death I know, but is there a way to avoid
this? For example, is more enterprise-y hardware less susceptible to
resilvers? This box is used for development VMs, but there is
On Sep 30, 2010, at 2:32 AM, Tuomas Leikola wrote:
If we've found one bad disk, what are our options?
On Thu, Sep 30, 2010 at 10:12 AM, Richard Elling
richard.ell...@gmail.com wrote:
Replace it. Resilvering should not be as painful if all your disks are
functioning normally.
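For anyone finding this thread in the archives, the replace-and-watch sequence is short. A minimal sketch follows; the pool name "tank" and the device names are hypothetical placeholders, not taken from this thread:

```shell
# Swap the failed disk for the new one. "tank", c1t3d0 (failed) and
# c1t4d0 (replacement) are made-up names -- substitute your own from
# the output of `zpool status`.
zpool replace tank c1t3d0 c1t4d0

# Watch the resilver run; re-run until the status output reports that
# the resilver has completed.
zpool status -v tank
```

These need a live pool and root privileges, so treat them as a sketch to adapt, not something to paste blindly.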
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
This must be resilver day :)
I just had a drive failure. The hot spare kicked in, and access to the pool
over NFS was effectively zero for about 45 minutes. Currently the pool is still
resilvering, but for some reason I can access the file system now.
I should add I have 477 snapshots across all file systems. Most of them are
hourly snaps (225 of them anyway).
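As an aside for anyone auditing a similar snapshot count, a quick sketch (standard zfs and awk, no names from this thread assumed) to see which file systems the snapshots pile up on:

```shell
# Tally snapshots per file system. -H drops the header line and -o name
# prints only dataset@snapshot names; awk splits each name on '@' and
# counts occurrences of the dataset part, sorted largest first.
zfs list -H -t snapshot -o name |
  awk -F'@' '{count[$1]++} END {for (fs in count) print count[fs], fs}' |
  sort -rn
```

This only reads pool metadata, so it is safe to run while a resilver is in progress.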
On Sep 29, 2010, at 3:16 PM, Scott Meilicke wrote:
Yeah, I'm having a combination of this and the resilver constantly
restarting issue.
And nothing to free up space.
It was recommended to me to replace any expanders I had between the HBA and
the drives with extra HBAs, but my array doesn't have expanders.
If yours does, you may want to try