> On 21 Jul 2016, at 21:06, Salwasser, Zac <[email protected]> wrote:
> 
> Thanks for the response!  Long story short, there’s one specific OSD in my 
> cluster that is, according to the dump command, responsible for the two PGs 
> that are still down.
>  
> I wiped the OSD’s data directory and recreated that OSD a couple of days ago, 
> but it is still stuck in the “booting” state.  Any ideas on how I can 
> investigate that particular OSD further?
>  
>  

Did you re-use the OSD's old UUID as recorded in the OSDMap? If the recreated 
OSD has a different UUID, the monitor will reject it and it will stay stuck in 
the booting state.
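A rough sketch of checking the recorded UUID and recreating the OSD with it. The OSD id `12` and the keyring path are placeholders, not taken from the thread; substitute your own, and verify the commands against your Ceph version:

```shell
# Look up the UUID the cluster still has on record for the OSD --
# the OSDMap keeps it even after the data directory is wiped.
ceph osd dump | grep '^osd.12 '

# Recreate the OSD's data directory, passing the SAME UUID back in
# so the monitor accepts the daemon instead of leaving it in "booting".
ceph-osd -i 12 --mkfs --mkkey --osd-uuid <uuid-from-osd-dump>

# Re-register the daemon's key with the cluster, then start osd.12 as usual.
ceph auth add osd.12 osd 'allow *' mon 'allow rwx' \
    -i /var/lib/ceph/osd/ceph-12/keyring
```

The alternative is to remove the old OSD entry entirely (`ceph osd rm`, `ceph osd crush rm`, `ceph auth del`) and create a fresh one, which allocates a new id and UUID cleanly.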

However, wiping an OSD like that is usually not a good idea.

Wido

>  
> From: Gregory Farnum <[email protected]>
> Date: Thursday, July 21, 2016 at 3:01 PM
> To: "Salwasser, Zac" <[email protected]>
> Cc: "[email protected]" <[email protected]>, "Heller, Chris" 
> <[email protected]>
> Subject: Re: [ceph-users] Uncompactable Monitor Store at 69GB -- Re: Cluster 
> in warn state, not sure what to do next.
>  
> On Thu, Jul 21, 2016 at 11:54 AM, Salwasser, Zac <[email protected]> wrote:
> Rephrasing for brevity – I have a monitor store that is 69GB and won’t
> compact any further on restart or with ‘tell compact’.  Has anyone dealt
> with this before?
>  
> The monitor can't trim OSD maps over a period where PGs are unclean;
> you'll likely find that's where all the space has gone. You need to
> resolve your down PGs.
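For reference, the stuck PGs and the OSDs they map to can be listed with standard commands; a sketch, with `<pgid>` left as a placeholder:

```shell
# Show overall health, including which PGs are down/unclean and why.
ceph health detail

# List PGs stuck in unclean or inactive states, with their acting OSD sets.
ceph pg dump_stuck unclean
ceph pg dump_stuck inactive

# For a specific PG, query its full peering state to see what it is
# waiting on (e.g. a down OSD holding the authoritative copy).
ceph pg <pgid> query
```

Once all PGs are active+clean, the monitors can trim the accumulated OSD maps and the store should shrink on the next compaction.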
> -Greg
>  
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com