On a reasonably up-to-date ZFS, if you remove the slog "properly" (not
just physically), the pool will just deal with it. I tested this a couple
of days ago.
Make sure your ZFS version supports slog removal.
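
Roughly, the removal looks like this -- pool name "tank" and log device
"c1t5d0" are hypothetical, yours will differ:

    # list the pool's vdevs; the slog shows up under a "logs" section
    zpool status tank

    # remove the slog (requires a ZFS version with log device removal)
    zpool remove tank c1t5d0

    # confirm the "logs" section is gone
    zpool status tank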

On Thu, Jan 22, 2015 at 3:32 PM, Nicholas Lee <[email protected]> wrote:

>
>
> On 22 January 2015 at 10:54, Greg Zartman via smartos-discuss <
> [email protected]> wrote:
>
>> was your disk marked as bad (and the pool marked degraded) or was there
>>> some other way of determining your disk was "bad" ?
>>> and what does your original test look like (was 90-ish MB/s with "bad"
>>> disk plus slog) without the "bad" disk?
>>>
>>
>>
>> No, it looked fine when I did zpool status.  I was just getting horrible
>> performance in my KVM zone.  I added an Intel 3700 and it didn't make
>> much difference.  SIGXCPU from the IRC channel helped me run an I/O test
>> on all devices in the pool: the bad device was pegged at 100% usage while
>> the slog and other devices were just idling.  I pulled that one device
>> from the pool, the performance in the KVM container shot up, and the
>> iostat output showed pretty equal usage across the pool.
>>
>>
> Was this bad disk the root cause of your KVM I/O issues?  I guess it's a
> bit hard to remove the slog now.
>
> Although won't the pool keep operating if you offline it?
>
>
> Nicholas
>
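
For the record, the per-device check and the offline step look roughly
like this -- pool and device names here are hypothetical:

    # watch per-device utilization; a failing disk tends to sit at
    # 100% busy (%b) while the rest of the pool idles
    iostat -xn 1

    # take the suspect disk offline; the pool keeps running (degraded)
    # as long as its vdev still has redundancy
    zpool offline tank c1t4d0

The offline answers the question above: the pool stays up, but only if
the offlined disk is part of a redundant (mirror/raidz) vdev.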



-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now
RSS Feed: https://www.listbox.com/member/archive/rss/184463/25769125-55cfbc00
