Thank you. Unfortunately this won't work because 0.21 is already being created:

~# ceph pg force_create_pg 0.21
pg 0.21 already creating
I think, and I am guessing here since I don't know the internals that well, that 0.21 started to be created, but since its OSDs disappeared it never finished and it keeps trying.

On Sun, Jun 7, 2015 at 12:18 AM, Alex Muntada <[email protected]> wrote:
> Marek Dohojda:
>
>> One of the Stuck Inactive PGs is 0.21 and here is the output of ceph pg map:
>>
>> #ceph pg map 0.21
>> osdmap e579 pg 0.21 (0.21) -> up [] acting []
>>
>> #ceph pg dump_stuck stale
>> ok
>> pg_stat state up up_primary acting acting_primary
>> 0.22 stale+active+clean [5,1,6] 5 [5,1,6] 5
>> 0.1f stale+active+clean [2,0,4] 2 [2,0,4] 2
>> <redacted for ease of reading>
>>
>> # ceph osd stat
>> osdmap e579: 14 osds: 14 up, 14 in
>>
>> If I do
>> #ceph pg 0.21 query
>>
>> the command freezes and never returns any output.
>>
>> I suspect that the problem is that these PGs were created but the OSDs
>> that they were initially created under disappeared. So I believe that I
>> should just remove these PGs, but honestly I don't see how.
>>
>> Does anybody have any ideas as to what to do next?
>
> ceph pg force_create_pg 0.21
>
> We've been playing last week with this same scenario: we stopped on
> purpose the 3 OSDs with the replicas of one PG to find out how it affected
> the cluster, and we ended up with a stale PG and 400 requests blocked for
> a long time. After trying several commands to get the cluster back, the one
> that made the difference was force_create_pg, followed by moving the OSD with
> blocked requests out of the cluster.
>
> Hope that helps,
> Alex
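For anyone hitting the same state, the diagnostic and recovery sequence discussed in this thread can be sketched roughly as below. This is only an illustration of the commands already mentioned above, not a verified fix; the OSD id (5) is a placeholder for whichever OSD ends up with blocked requests in your cluster:

```shell
# List PGs stuck in the stale state (as shown in the dump above)
ceph pg dump_stuck stale

# Map a stuck PG to its OSDs; empty up/acting sets ([]) suggest
# the OSDs that held it are gone
ceph pg map 0.21

# Ask the monitors to (re)create the PG
ceph pg force_create_pg 0.21

# If requests remain blocked on a particular OSD afterwards,
# mark it out so data rebalances away from it
# (the id 5 here is a placeholder, not from this thread)
ceph osd out 5
```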
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
