No, I mean Ceph sees it as a failure and marks the OSD out for a while.
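
A couple of knobs that might help (a rough sketch; option names and availability depend on your Ceph release, so double-check against your version's docs, and osd.12 below is just a placeholder ID):

    # Lengthen how long a down OSD stays "in" before the monitors mark it
    # out (default is 600 seconds), so a brief drive dropout doesn't kick
    # off a full rebalance:
    ceph config set mon mon_osd_down_out_interval 1800

    # Or, on recent releases, set noout on just the flapping OSD instead
    # of cluster-wide, and clear it once the drive is dealt with:
    ceph osd add-noout osd.12
    ceph osd rm-noout osd.12

The per-OSD flag leaves the rest of the cluster's mark-out behaviour untouched, though the mons may still surface it as a (scoped) health flag.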

On Thu, Sep 5, 2019 at 11:00 AM Ashley Merrick <[email protected]>
wrote:

> Is your HD actually failing and vanishing from the OS and then coming back
> shortly?
>
> Or do you just mean your OSD is crashing and then restarting itself
> shortly later?
>
>
> On Fri, 06 Sep 2019 01:55:25 +0800, [email protected] wrote:
>
> One of the things I've come to notice is that when HDD drives fail, they
> often recover after a short time and get added back to the cluster. This
> causes the data to rebalance back and forth, and if I set the noout flag
> I get a health warning. Is there a better way to avoid this?
>
>
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
