Oh, sorry, I didn't notice it's in an incomplete state.

What's the output of "ceph -s"? It should show some OSDs down.
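To see how many OSDs are down, you can read the osdmap line of "ceph -s". A minimal sketch of pulling the counts out of that line, assuming a status excerpt in roughly the format older Ceph releases print (the exact wording varies by version):

```python
import re

# Hypothetical excerpt of "ceph -s" output; the exact format varies by release.
status = """\
    health HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive
    osdmap e1234: 45 osds: 42 up, 43 in
"""

def parse_osdmap(text):
    """Extract (total, up, in) OSD counts from a ceph -s style osdmap line."""
    m = re.search(r"(\d+) osds: (\d+) up, (\d+) in", text)
    if not m:
        return None
    total, up, in_ = map(int, m.groups())
    return total, up, in_

total, up, in_ = parse_osdmap(status)
print(f"{total - up} of {total} OSDs are down")
```

If the down count is non-zero, that is usually why peering cannot complete.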

On Wed, Apr 30, 2014 at 5:23 PM, [email protected] <[email protected]> wrote:
> Hello,
> The pg was "incomplete", and I had tried to repair it before, but it did
> nothing.
>
> ________________________________
> [email protected]
>
> From: Haomai Wang
> Date: 2014-04-30 17:14
> To: [email protected]
> CC: ceph-users
> Subject: Re: [ceph-users] how can I repair the pg
> You can find the inconsistent pg via "ceph pg dump" and then run
> "ceph pg repair <pg_name>"
>
> On Wed, Apr 30, 2014 at 5:00 PM, [email protected] <[email protected]>
> wrote:
>> Hi,
>> I have a problem now. A large number of OSDs went down earlier. When some
>> of them came back up, I found a pg was "incomplete". Now this pg's map is
>> [35,29,42].
>> The pg's folders on osd.35 and osd.29 are empty, but there is 9.2G of data
>> on osd.42. Like this:
>>
>> ----here is osd.35
>> [root@ceph952 49.6_head]# ls
>> [root@ceph952 49.6_head]#
>>
>> ----here is osd.42
>> [root@ceph960 49.6_head]# ls
>> DIR_6  DIR_E
>> [root@ceph960 49.6_head]#
>>
>> I want to know how to repair this pg.
>> And I found that when I stop osd.35, the map changes to [0,29,42]. I ran
>> "ceph pg 49.6 query", and it showed me:
>>
>> [root@ceph960 ~]# ceph pg 49.6 query
>> ... ...
>> "probing_osds": [
>>                 "(0,255)",
>>                 "(7,255)",
>>                 "(20,255)",
>>                 "(21,255)",
>>                 "(25,255)",
>>                 "(26,255)",
>>                 "(29,255)",
>>                 "(33,255)",
>>                 "(34,255)",
>>                 "(35,255)",
>>                 "(39,255)",
>>                 "(41,255)",
>>                 "(42,255)"],
>>           "down_osds_we_would_probe": [
>>                 38],
>>           "peering_blocked_by": []},
>>         { "name": "Started",
>>           "enter_time": "2014-04-30 16:52:24.181956"}]}
>>
>> Can I delete all of these "probing_osds" except 42, and set osd.42 as the
>> up_primary?
>>
>> Thanks.
>>
>> ________________________________
>> [email protected]
>>
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
>
> --
> Best Regards,
>
> Wheat

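Rather than trying to edit "probing_osds" by hand, it helps to look at the "down_osds_we_would_probe" field of "ceph pg query", since peering waits on those OSDs (osd.38 in the output quoted above). A minimal sketch of extracting that field, assuming the JSON structure matches the fragment pasted in the thread (the real command returns a much larger document):

```python
import json

# Hypothetical fragment modeled on the "ceph pg 49.6 query" output quoted
# above; field names are taken from that paste.
query_output = json.loads("""
{
  "recovery_state": [
    { "name": "Started/Primary/Peering",
      "probing_osds": ["(0,255)", "(29,255)", "(35,255)", "(42,255)"],
      "down_osds_we_would_probe": [38],
      "peering_blocked_by": [] },
    { "name": "Started",
      "enter_time": "2014-04-30 16:52:24.181956" }
  ]
}
""")

def down_probe_targets(query):
    """Collect OSDs that are down but would be probed during peering."""
    osds = set()
    for state in query.get("recovery_state", []):
        osds.update(state.get("down_osds_we_would_probe", []))
    return sorted(osds)

print(down_probe_targets(query_output))  # [38] for the pasted query
```

If that list is non-empty, bringing those OSDs back up (or marking them lost, with the data-loss risk that implies) is generally what unblocks the pg, rather than forcing a particular primary.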


-- 
Best Regards,

Wheat
