Re: [ceph-users] PGs stuck unclean active+remapped after an osd marked out

2015-03-16 Thread Gregory Farnum
On Wed, Mar 11, 2015 at 3:49 PM, Francois Lafont flafdiv...@free.fr wrote: Hi, I was still in the same situation: I couldn't remove an OSD without having some PGs permanently stuck in the active+remapped state. But I remembered I read on IRC that, before marking out an OSD, it could be
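A minimal sketch of the workaround being quoted here, assuming osd.3 (the OSD mentioned later in the thread) is the one being drained; the exact sequence Francois used is cut off in this preview:

    # Drain the OSD by setting its CRUSH weight to 0 instead of marking it out first.
    ceph osd crush reweight osd.3 0

    # Wait until backfilling finishes and the cluster reports HEALTH_OK again.
    ceph -s

    # Only then mark the OSD out; little or no further data movement should be needed.
    ceph osd out 3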

Re: [ceph-users] PGs stuck unclean active+remapped after an osd marked out

2015-03-16 Thread Craig Lewis
If I remember/guess correctly, if you mark an OSD out it won't necessarily change the weight of the bucket above it (i.e., the host), whereas if you change the weight of the OSD then the host bucket's weight changes. -Greg That sounds right. Marking an OSD out is a ceph osd reweight, not
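A short sketch of the distinction Craig is drawing, using osd.3 as an example OSD (the tail of his message is truncated above, so the exact wording is an assumption):

    # "ceph osd reweight" sets the temporary override weight (0.0 to 1.0); this is
    # effectively what "ceph osd out" does (override weight goes to 0). The CRUSH
    # weight of the OSD, and therefore of its host bucket, is left untouched.
    ceph osd reweight 3 0

    # "ceph osd crush reweight" changes the CRUSH weight itself, so the weight of
    # the host bucket above the OSD shrinks as well and CRUSH remaps accordingly.
    ceph osd crush reweight osd.3 0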

Re: [ceph-users] PGs stuck unclean active+remapped after an osd marked out

2015-03-16 Thread Francois Lafont
Hi, Gregory Farnum wrote: If I remember/guess correctly, if you mark an OSD out it won't necessarily change the weight of the bucket above it (i.e., the host), whereas if you change the weight of the OSD then the host bucket's weight changes. I can just say that, indeed, I have noticed
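One way to observe the behaviour being discussed, a sketch rather than anything quoted from the thread:

    # "ceph osd tree" shows both values per OSD: the weight column is the CRUSH
    # weight (summed into the host bucket), the reweight column is the override
    # used by "ceph osd out" / "ceph osd reweight".
    ceph osd tree

    # After "ceph osd out 3": the reweight of osd.3 drops to 0, the host bucket's
    # weight is unchanged.
    # After "ceph osd crush reweight osd.3 0": the CRUSH weight drops to 0 and the
    # host bucket's weight shrinks by the same amount.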

Re: [ceph-users] PGs stuck unclean active+remapped after an osd marked out

2015-03-11 Thread Francois Lafont
Hi, I was still in the same situation: I couldn't remove an OSD without having some PGs permanently stuck in the active+remapped state. But I remembered I read on IRC that, before marking out an OSD, it could sometimes be a good idea to reweight it to 0. So, instead of doing [1]: ceph osd out
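For context, a sketch of the usual removal steps once the OSD has been drained (assuming osd.3; these are the standard commands, not the exact sequence from the truncated message):

    # After the CRUSH weight has been set to 0 and rebalancing has finished:
    ceph osd out 3                 # mark the OSD out
    # stop the ceph-osd daemon on its host (command depends on the init system in use)
    ceph osd crush remove osd.3    # remove it from the CRUSH map
    ceph auth del osd.3            # remove its authentication key
    ceph osd rm 3                  # remove the OSD from the cluster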

Re: [ceph-users] PGs stuck unclean active+remapped after an osd marked out

2015-03-11 Thread Francois Lafont
On 11/03/2015 at 05:44, Francois Lafont wrote: PS: here is my conf. [...] I have this too: ~# ceph osd crush show-tunables { choose_local_tries: 0, choose_local_fallback_tries: 0, choose_total_tries: 50, chooseleaf_descend_once: 1, chooseleaf_vary_r: 0, straw_calc_version: 1,
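The output above shows chooseleaf_vary_r set to 0 (legacy behaviour). On small Firefly clusters, CRUSH tunables are one thing sometimes checked when PGs stay active+remapped; a sketch of inspecting and changing the profile, not something the thread confirms as the fix:

    # Inspect the active CRUSH tunables.
    ceph osd crush show-tunables

    # Switching to the firefly profile enables chooseleaf_vary_r=1; note that this
    # triggers data movement and requires clients/kernels recent enough to support it.
    ceph osd crush tunables firefly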

[ceph-users] PGs stuck unclean active+remapped after an osd marked out

2015-03-10 Thread Francois Lafont
Hi, I had a ceph cluster in HEALTH_OK state with Firefly 0.80.9. I just wanted to remove an OSD (which worked well). So after: ceph osd out 3 I waited for the rebalancing, but I had PGs stuck unclean: --- ~# ceph -s cluster
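A few commands useful for diagnosing this kind of situation, as a sketch (the PG id used below is hypothetical, not taken from the thread):

    # Overall health plus the list of problem PGs.
    ceph health detail

    # PGs stuck in unclean/remapped states.
    ceph pg dump_stuck unclean

    # Detailed state of one PG, including its up and acting OSD sets
    # (replace 3.5f with a PG id reported above).
    ceph pg 3.5f query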