Re: [ceph-users] active+recovering+degraded after cluster reboot

2018-12-15 Thread David C
Yep, that cleared it. Sorry for the noise!
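
For anyone who finds this thread later: once the flags are cleared, recovery of the
remaining PGs should complete on its own fairly quickly. Progress can be followed with
something like:

  ceph -s
  ceph -w              # streams the cluster log, including recovery progress
  ceph health detail

and the cluster should return to HEALTH_OK once the last PGs are active+clean again.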

On Sun, Dec 16, 2018 at 12:16 AM David C  wrote:

> Hi Paul
>
> Thanks for the response. Not yet, just being a bit cautious ;) I'll go
> ahead and do that.
>
> Thanks
> David
>
>
> On Sat, 15 Dec 2018, 23:39 Paul Emmerich  wrote:
>> Did you unset norecover?
>>
>>
>> Paul
>>
>> --
>> Paul Emmerich
>>
>> Looking for help with your Ceph cluster? Contact us at https://croit.io
>>
>> croit GmbH
>> Freseniusstr. 31h
>> 81247 München
>> www.croit.io
>> Tel: +49 89 1896585 90
>>
>> On Sun, Dec 16, 2018 at 12:22 AM David C  wrote:
>> >
>> > Hi All
>> >
>> > I have what feels like a bit of a rookie question
>> >
>> > I shut down a Luminous 12.2.1 cluster with noout,nobackfill,norecover set
>> >
>> > Before shutting down, all PGs were active+clean
>> >
>> > I brought the cluster up, all daemons started and all but 2 PGs are
>> active+clean
>> >
>> > I have 2 pgs showing: "active+recovering+degraded"
>> >
>> > It's been reporting this for about an hour with no signs of clearing on
>> its own
>> >
>> > Ceph health detail shows: PG_DEGRADED Degraded data redundancy:
>> 2/131709267 objects degraded (0.000%), 2 pgs unclean, 2 pgs degraded
>> >
>> > I've tried restarting MONs and all OSDs in the cluster.
>> >
>> > How would you recommend I proceed at this point?
>> >
>> > Thanks
>> > David
>> >
>> >
>> >
>> >
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] active+recovering+degraded after cluster reboot

2018-12-15 Thread David C
Hi Paul

Thanks for the response. Not yet, just being a bit cautious ;) I'll go
ahead and do that.

Thanks
David


On Sat, 15 Dec 2018, 23:39 Paul Emmerich  wrote:
> Did you unset norecover?
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Sun, Dec 16, 2018 at 12:22 AM David C  wrote:
> >
> > Hi All
> >
> > I have what feels like a bit of a rookie question
> >
> > I shut down a Luminous 12.2.1 cluster with noout,nobackfill,norecover set
> >
> > Before shutting down, all PGs were active+clean
> >
> > I brought the cluster up, all daemons started and all but 2 PGs are
> active+clean
> >
> > I have 2 pgs showing: "active+recovering+degraded"
> >
> > It's been reporting this for about an hour with no signs of clearing on
> its own
> >
> > Ceph health detail shows: PG_DEGRADED Degraded data redundancy:
> 2/131709267 objects degraded (0.000%), 2 pgs unclean, 2 pgs degraded
> >
> > I've tried restarting MONs and all OSDs in the cluster.
> >
> > How would you recommend I proceed at this point?
> >
> > Thanks
> > David
> >
> >
> >
> >
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] active+recovering+degraded after cluster reboot

2018-12-15 Thread Paul Emmerich
Did you unset norecover?
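
For reference, the flags that are currently set cluster-wide show up in ceph -s (and on
the "flags" line of ceph osd dump), and each one is cleared individually, roughly like
this, assuming all OSDs are back up and you actually want recovery and backfill to resume:

  ceph osd dump | grep flags     # shows e.g. noout,nobackfill,norecover
  ceph osd unset norecover
  ceph osd unset nobackfill
  ceph osd unset noout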


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Sun, Dec 16, 2018 at 12:22 AM David C  wrote:
>
> Hi All
>
> I have what feels like a bit of a rookie question
>
> I shut down a Luminous 12.2.1 cluster with noout,nobackfill,norecover set
>
> Before shutting down, all PGs were active+clean
>
> I brought the cluster up, all daemons started and all but 2 PGs are 
> active+clean
>
> I have 2 pgs showing: "active+recovering+degraded"
>
> It's been reporting this for about an hour with no signs of clearing on its
> own
>
> Ceph health detail shows: PG_DEGRADED Degraded data redundancy: 2/131709267 
> objects degraded (0.000%), 2 pgs unclean, 2 pgs degraded
>
> I've tried restarting MONs and all OSDs in the cluster.
>
> How would you recommend I proceed at this point?
>
> Thanks
> David
>
>
>
>
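
For the two degraded PGs described above, a reasonable way to narrow things down before
restarting daemons would be something along these lines (the PG ID below is just a
placeholder):

  ceph health detail               # usually lists the affected PG IDs
  ceph pg dump_stuck degraded      # shows the stuck PGs and their acting OSDs
  ceph pg 1.2f query               # placeholder PG ID; shows recovery state and what it is waiting on

In this case the PGs were simply waiting for recovery to be allowed again, so unsetting
norecover was all that was needed.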
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com