Hi Laurent,
I was able to reproduce it on a Solaris 10 5/09 system.
The problem is fixed in current Nevada bits and also in
the upcoming Solaris 10 release.
The bug fix that integrated this change might be this one:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6328632
Thanks a lot, Cindy!
Let me know how it goes or if I can provide more info.
Part of the bad luck I've had with that set is that it reports such errors
about once a month, then everything goes back to normal again. So I'm pretty
sure that I'll be able to try to offline the disk someday.
Hi Laurent,
Yes, you should be able to offline a faulty device in a redundant
configuration as long as enough devices are available to keep
the pool redundant.
On my Solaris Nevada system (latest bits), injecting a fault
into a disk in a RAID-Z configuration and then offlining a disk
works as expected.
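A minimal sketch of that sequence, assuming a pool called tank and a disk
called c1t3d0 (both placeholder names), and noting that zinject is a private
test tool whose options vary between builds:

  # zinject -d c1t3d0 -A fault tank    # mark the disk as faulted
  # zpool status tank                  # c1t3d0 should now show FAULTED
  # zpool offline tank c1t3d0          # succeeds on current Nevada bits
  # zpool clear tank c1t3d0            # clear the injected fault afterwards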
I apologize for being daft here, but I don't find any ambiguity in the
documentation.
This is explicitly
You're right, from the documentation it definitely should work. Still, it
doesn't. At least not in Solaris 10. But I am not a zfs developer, so this
should probably be answered by them. I will give it a try with a recent
OpenSolaris VM and check whether this works in newer implementations of zfs.
FYI:
In b117 it works as expected and stated in the documentation.
Tom
Great news, thanks Tom!
You can't replace it because this disk is still a valid member of the pool,
although it is marked faulty.
Put in a replacement disk, add it to the pool, and replace the faulty one with
the new disk.
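A minimal sketch of that replacement, assuming the pool is called tank, the
faulted disk is c1t2d0 and the new disk shows up as c1t5d0 (all placeholder
names):

  # zpool replace tank c1t2d0 c1t5d0   # resilver onto the new disk
  # zpool status tank                  # watch the resilver progress

Once the resilver finishes, the faulty device is detached from the pool
automatically.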
Regards,
Tom
I don't have a replacement, but I don't want the disk to be used right now by
the volume: how do I do that?
This is exactly the point of the offline command as explained in the
documentation: disabling unreliable hardware, or removing it temporarily.
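For reference, the command described there looks like this (tank and c1t2d0
are placeholder names; the -t flag requests a temporary offline that does not
persist across a reboot):

  # zpool offline tank c1t2d0          # keep the disk offline persistently
  # zpool offline -t tank c1t2d0       # or only until the next reboot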
So this is a huge bug in the documentation?
You could offline the disk if *this* disk (not the pool) had a replica.
Nothing wrong with the documentation. Hmm, maybe it is a little misleading here.
I walked into the same trap.
The pool is not using the disk anymore anyway, so (from the zfs point of view)
there is no need to offline it.
(As I'm not subscribed to this list, you can keep me in CC:, but I'll check out
the Jive thread)
Hi all,
I've seen this question asked several times, but there wasn't any solution
provided.
I'm trying to offline a faulted device in a RAID-Z2 vdev on Solaris 10. This
is done according to the documentation.
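For illustration, the attempt looks roughly like this (pool and device names
are made up here, and the exact error text may differ between releases):

  # zpool offline tank c3t2d0
  cannot offline c3t2d0: no valid replicas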
Yup, just hit exactly the same myself. I have a feeling this faulted disk is
affecting performance, so I tried to remove or offline it:
$ zpool iostat -v 30
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
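If it helps, zpool status -x lists only the pools that currently have
problems, which makes it easy to confirm which device is faulted before
trying to offline or replace it:

  $ zpool status -x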