Hi Peter,

Relooking at your problem, you might want to keep track of this issue:
http://tracker.ceph.com/issues/22440
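In case it helps, the membership check from the steps I suggested below can be
sketched in Python. The sample `ceph osd tree` output and the helper function
are illustrative only (the CLASS column layout is the usual one, but this is
not output from a real cluster):

```python
# Illustrative sketch: find nvme-backed osds in `ceph osd tree`-style output
# and intersect them with a pg's "up" set. Sample data is made up.

SAMPLE_OSD_TREE = """\
ID CLASS WEIGHT  TYPE NAME  STATUS
 0 nvme  1.00000     osd.0  up
 6 hdd   1.00000     osd.6  up
12 nvme  1.00000     osd.12 up
"""

def osds_of_class(tree_text, device_class):
    """Return the set of osd ids whose CLASS column matches device_class."""
    ids = set()
    for line in tree_text.splitlines():
        fields = line.split()
        # osd rows start with a non-negative integer id; skip headers/buckets
        if len(fields) >= 2 and fields[0].isdigit() and fields[1] == device_class:
            ids.add(int(fields[0]))
    return ids

up_set = {6, 0, 12}                 # "up" set for pg 3.12c, from the thread
nvme_osds = osds_of_class(SAMPLE_OSD_TREE, "nvme")
misplaced = up_set & nvme_osds      # nvme-backed osds in an hdd-rule pool
print(sorted(misplaced))            # → [0, 12] with this sample data
```

With real output you would pipe `ceph osd tree` into the same parse; any
non-empty intersection for an hdd-rule pool points at the mismatch described
below.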

Regards,
Tom

On Wed, Jan 31, 2018 at 11:37 AM, Thomas Bennett <[email protected]> wrote:

> Hi Peter,
>
> From your reply, I see that:
>
>    1. pg 3.12c is part of pool 3.
>    2. The osds in the "up" set for pg 3.12c are: 6, 0, 12.
>
>
> To check on this 'activating' issue, I suggest the following:
>
>    1. Which crush rule should pool 3 follow: 'hybrid', 'nvme' or
>    'hdd'? (Use the *ceph osd pool ls detail* command and look at pool 3's
>    crush rule.)
>    2. Then check whether osds 6, 0 and 12 are backed by nvme's or hdd's.
>    (Use the *ceph osd tree | grep nvme* command to find your nvme-backed
>    osds.)
>
>
> If your problem is similar to mine, you will find nvme-backed osds in a
> pool that should only be backed by hdds; that mismatch was causing a pg to
> go into the 'activating' state and stay there.
>
> Cheers,
> Tom
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com