The problem with the current OSDs was an ill-advised chmod of the OSD
data store.  From what I've pieced together, the chmod was run against a
running OSD.
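
For the archives: if ownership or permissions on an OSD data store need
fixing, it should only be done with the daemon stopped.  A rough sketch
(the OSD id is a placeholder, and this assumes a systemd host and the
ceph:ceph ownership used since Infernalis):

  systemctl stop ceph-osd@<id>
  chown -R ceph:ceph /var/lib/ceph/osd/ceph-<id>
  systemctl start ceph-osd@<id>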

On Tue, Aug 21, 2018 at 1:13 PM Paul Emmerich <[email protected]>
wrote:

> I would continue with the upgrade of all OSDs in this scenario, as the old
> ones are crashing, not the new one.
> Maybe with all the flags set (pause, norecover, ...)
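
(For the record, those flags can be set beforehand and cleared once the
OSDs are back up, roughly like this; which subset makes sense depends on
the situation:

  ceph osd set pause
  ceph osd set noout
  ceph osd set norecover
  ceph osd set nobackfill
  # ... upgrade / restart the OSDs ...
  ceph osd unset nobackfill
  ceph osd unset norecover
  ceph osd unset noout
  ceph osd unset pause
)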
>
>
> Paul
>
> 2018-08-21 19:08 GMT+02:00 Kees Meijs <[email protected]>:
> > Hello David,
> >
> > Thank you and I'm terribly sorry; I was unaware I was starting new
> > threads.
> >
> > Off the top of my head I'd say "yes, it'll fit", but obviously I'll
> > make sure first.
> >
> > Regards,
> > Kees
> >
> > On 21-08-18 16:34, David Turner wrote:
> >>
> >> Ceph does not support downgrading OSDs.  When you removed the single
> >> OSD, it was probably trying to move data onto the other OSDs in the
> >> node with Infernalis OSDs.  I would recommend stopping every OSD in
> >> that node and marking them out so the cluster will rebalance without
> >> them.  Assuming your cluster is able to get healthy after that, we'll
> >> see where things are.
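
(For the archives, that boils down to something like the following, with
the OSD ids as placeholders; the "out" commands are run from an admin node:

  systemctl stop ceph-osd@<id>     # on the affected node, for each OSD
  ceph osd out <id>                # for each stopped OSD
  ceph -s                          # watch the rebalance / cluster health
)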
> >>
> >> Also, please stop opening so many email threads about this same issue.
> >> It makes tracking this in the archives impossible.
> >>
> >
>
>
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
