Re: [ceph-users] issue adding OSDs

2018-01-12 Thread Luis Periquito
"ceph versions" returned all daemons as running 12.2.1.

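For reference, "ceph versions" aggregates the running version per
daemon type; the output looks something like this (counts are
illustrative, release hash abridged):

    $ ceph versions
    {
        "mon": {
            "ceph version 12.2.1 (...) luminous (stable)": 3
        },
        "mgr": {
            "ceph version 12.2.1 (...) luminous (stable)": 2
        },
        "osd": {
            "ceph version 12.2.1 (...) luminous (stable)": 36
        },
        "overall": {
            "ceph version 12.2.1 (...) luminous (stable)": 41
        }
    }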

Re: [ceph-users] issue adding OSDs

2018-01-12 Thread Janne Johansson
Running "ceph mon versions" and "ceph osd versions" and so on as you do the
upgrades would have helped I guess.
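
The output is the same style as "ceph versions" but broken down per
daemon type (illustrative, hash abridged):

    $ ceph mon versions
    {
        "ceph version 12.2.1 (...) luminous (stable)": 3
    }
    $ ceph osd versions
    {
        "ceph version 12.2.1 (...) luminous (stable)": 36
    }

A half-upgraded cluster shows more than one version string per daemon
type, which is exactly what you want to catch mid-upgrade.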


-- 
May the most significant bit of your life be positive.


Re: [ceph-users] issue adding OSDs

2018-01-11 Thread Luis Periquito
This was a bit weird, but it is now working... writing it up for
future reference in case someone faces the same issue.

This cluster was upgraded from jewel to luminous following the
recommended process. When the upgrade finished I set require_osd to
luminous, but I hadn't restarted the daemons since then. Simply
restarting all the OSDs made the problem go away.
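
On systemd hosts the restart is something like this, host by host
(the OSD id is just an example):

    # restart every OSD on this host
    systemctl restart ceph-osd.target
    # or a single OSD
    systemctl restart ceph-osd@12

The flag mentioned above is the one set with
"ceph osd require-osd-release luminous".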

How to check whether that was the case? The OSDs now have a "class"
associated with them.
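
For example (illustrative output):

    $ ceph osd crush class ls
    [
        "hdd"
    ]

"ceph osd tree" should now also populate the CLASS column for every
OSD that has been restarted.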



[ceph-users] issue adding OSDs

2018-01-10 Thread Luis Periquito
Hi,

I'm running a cluster with 12.2.1 and adding more OSDs to it.
Everything is running version 12.2.1 and require_osd is set to
luminous.
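
That can be double-checked in the osdmap, something like this
(illustrative):

    $ ceph osd dump | grep require
    require_min_compat_client jewel
    require_osd_release luminous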

One of the pools is replicated with size 2 and min_size 1, and it
seems to be blocking IO while recovering. I have no slow requests,
and the output of "ceph osd perf" looks fine (all the numbers are
below 10).
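
For reference, the pool settings in question (the pool name is just
an example):

    $ ceph osd pool get rbd size
    size: 2
    $ ceph osd pool get rbd min_size
    min_size: 1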

The clients are RBD (OpenStack VMs on KVM), mostly running 10.2.7. I
marked the new OSDs out (the usual command, shown below the log) and
RBD just came back to life. I did have some objects degraded:

2018-01-10 18:23:52.081957 mon.mon0 mon.0 x.x.x.x:6789/0 410414 :
cluster [WRN] Health check update: 9926354/49526500 objects misplaced
(20.043%) (OBJECT_MISPLACED)
2018-01-10 18:23:52.081969 mon.mon0 mon.0 x.x.x.x:6789/0 410415 :
cluster [WRN] Health check update: Degraded data redundancy:
5027/49526500 objects degraded (0.010%), 1761 pgs unclean, 27 pgs
degraded (PG_DEGRADED)
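
Marking them out was just the usual command, one OSD at a time (the
ids are examples):

    ceph osd out 101
    ceph osd out 102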

Any thoughts on what might be happening? I've run operations like
this many times before...

Thanks for any help; I'm grasping at straws trying to figure out
what's happening...