On Fri, Dec 11, 2015 at 1:37 AM, Matt Conner <matt.con...@keepertech.com> wrote:
> Hi Ilya,
>
> I had already recovered but I managed to recreate the problem again. I ran

How did you recover?

> the commands against rbd_data.f54f9422698a8.0000000000000000 which was one
> of those listed in osdc this time. We have 2048 PGs in the pool so the list
> is long.

That's not going to work - I need it in a consistent "stuck" state so
I can match the outputs.  I understand you can't keep it that way for
long, so can you please reproduce it and email me off-list with the
osdmap, osdc and ceph -s output right away?
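For reference, a sketch of how those three pieces of state could be grabbed while the hang is still in progress (the debugfs path assumes a kernel rbd client with debugfs mounted; output file names are illustrative):

```shell
# Current osdmap in binary form, suitable for later osdmaptool analysis
ceph osd getmap -o osdmap.bin
# Cluster status at the moment of the hang
ceph -s > ceph-status.txt
# In-flight requests as seen by the kernel client
# (path assumes krbd; the per-client directory name varies)
cat /sys/kernel/debug/ceph/*/osdc > osdc.txt
```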

You have a lot of OSDs, but it also doesn't seem to take long to
recreate.  Bump debug ms to 1 on all OSDs.
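One way to bump that setting cluster-wide at runtime, assuming the injectargs mechanism is available on this Ceph version:

```shell
# Raise messenger debug level to 1 on every OSD without restarting them
ceph tell osd.* injectargs '--debug-ms 1'
# Revert once the logs from the reproduction have been captured
ceph tell osd.* injectargs '--debug-ms 0'
```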

>
> As for when I fetched the object using rados, it grabbed it without issue.
>
> osd map:
> osdmap e6247 pool 'NAS-ic2gw01' (3) object
> 'rbd_data.f54f9422698a8.0000000000000000' -> pg 3.cac46c43 (3.443) -> up
> ([33,56], p33) acting ([33,56], p33)
>
> osdmaptool:
> pool 3 pg_num 2048
> 3.0     [23,138]        23

If it's long, better to compress and attach.
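Something along these lines would do; `pg-mappings.txt` is a placeholder for the actual file holding the osdmaptool listing:

```shell
# Stand-in for the real per-PG mapping dump (one line shown in the mail)
printf '3.0\t[23,138]\t23\n' > pg-mappings.txt
# Compress it for attaching; -f overwrites any existing .gz
gzip -f pg-mappings.txt   # yields pg-mappings.txt.gz
```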

Thanks,

                Ilya
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
