On Tue, Aug 19, 2014 at 1:22 AM, Riederer, Michael <[email protected]>
wrote:

>
>
> root@ceph-admin-storage:~# ceph pg force_create_pg 2.587
> pg 2.587 now creating, ok
> root@ceph-admin-storage:~# ceph pg 2.587 query
> ...
>           "probing_osds": [
>                 "5",
>                 "8",
>                 "10",
>                 "13",
>                 "20",
>                 "35",
>                 "46",
>                 "56"],
> ...
>
> All OSDs listed under "probing_osds" are up and in, but the cluster cannot
> create the PG, and it cannot be scrubbed, deep-scrubbed, or repaired either.
>


My experience is that as long as down_osds_we_would_probe in the pg query
output is non-empty, ceph pg force_create_pg won't do anything; ceph osd
lost didn't help either. The PGs would go into the creating state, then
revert to incomplete. The only way I was able to get them to stay in the
creating state was to re-create OSDs with all of the IDs listed in
down_osds_we_would_probe.
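
For anyone else hitting this, one quick way to inspect that list is a
sketch like the following, assuming jq is available and using the PG id
from above; the filter just pulls down_osds_we_would_probe out of each
recovery_state entry of the query output:

# ceph pg 2.587 query | jq '.recovery_state[] | .down_osds_we_would_probe? // empty'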

Even then, it wasn't deterministic. I issued the ceph pg force_create_pg
command, and it didn't take effect until sometime in the middle of the
night, after an unrelated OSD went down and came back up.
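
If you want to watch whether the PG sticks in creating or falls back to
incomplete, something like this works (a sketch; pgs_brief is just the
trimmed form of ceph pg dump):

# ceph pg dump pgs_brief | grep -E 'creating|incomplete'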

It was a very frustrating experience.



>  Just to be sure that I did it the right way:
> # stop ceph-osd id=x
> # ceph osd out x
> # ceph osd crush remove osd.x
> # ceph auth del osd.x
> # ceph osd rm x
>



My procedure was the same as yours, with the addition of a ceph osd lost x
before ceph osd rm.
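
For completeness, the full sequence as I ran it (a sketch; replace x with
the OSD id in question, and note that ceph osd lost requires the
--yes-i-really-mean-it flag):

# stop ceph-osd id=x
# ceph osd out x
# ceph osd crush remove osd.x
# ceph auth del osd.x
# ceph osd lost x --yes-i-really-mean-it
# ceph osd rm x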