Re: [ceph-users] jewel ceph has PG mapped always to the same OSD's

2018-04-07 Thread Konstantin Danilov
Deep scrub doesn't help. After some steps (not sure of the exact list) Ceph does remap this PG to another OSD, but the PG doesn't move:

    # ceph pg map 11.206
    osdmap e176314 pg 11.206 (11.206) -> up [955,198,801] acting [787,697]

It hangs in this state forever, and 'ceph pg 11.206 query' hangs as well. On Sat,
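The symptom above is that the PG's `up` set differs from its `acting` set and never converges. A minimal sketch of detecting this condition by parsing `ceph pg map` output (the line format is taken from the output quoted above; the regex and helper names are my own assumptions, not part of the Ceph tooling):

```python
import re

# Parse a "ceph pg map <pgid>" output line such as:
#   osdmap e176314 pg 11.206 (11.206) -> up [955,198,801] acting [787,697]
PG_MAP_RE = re.compile(
    r"osdmap e(?P<epoch>\d+) pg (?P<pgid>\S+) \(\S+\) -> "
    r"up \[(?P<up>[\d,]*)\] acting \[(?P<acting>[\d,]*)\]"
)

def parse_pg_map(line):
    """Return (epoch, pgid, up, acting) parsed from a ceph pg map line."""
    m = PG_MAP_RE.match(line.strip())
    if not m:
        raise ValueError("unrecognized pg map line: %r" % line)
    up = [int(x) for x in m.group("up").split(",") if x]
    acting = [int(x) for x in m.group("acting").split(",") if x]
    return int(m.group("epoch")), m.group("pgid"), up, acting

def is_remapping(up, acting):
    """A PG whose up set differs from its acting set is still being
    remapped; if this persists across osdmap epochs, the PG is stuck."""
    return up != acting

line = "osdmap e176314 pg 11.206 (11.206) -> up [955,198,801] acting [787,697]"
epoch, pgid, up, acting = parse_pg_map(line)
print(pgid, up, acting, is_remapping(up, acting))
```

Run periodically (or across `ceph pg dump` snapshots), a check like this flags PGs that stay in the remapping state across many epochs, which matches the hang described here.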

Re: [ceph-users] jewel ceph has PG mapped always to the same OSD's

2018-04-06 Thread Konstantin Danilov
David, > What happens when you deep-scrub this PG? We haven't tried deep-scrubbing it yet; we will try. > What do the OSD logs show for any lines involving the problem PGs? Nothing special was logged about this particular OSD, except that it's degraded. Yet the OSD consumes quite a large portion of its CPU time

Re: [ceph-users] jewel ceph has PG mapped always to the same OSD's

2018-04-06 Thread David Turner
What happens when you deep-scrub this PG? What do the OSD logs show for any lines involving the problem PGs? Was anything happening on your cluster just before this first started? On Fri, Apr 6, 2018 at 2:29 PM Konstantin Danilov wrote: > Hi all, we have a