Although it seemed to be solved yesterday, %USED has grown a lot again
today. See:

~# ceph osd df tree
http://termbin.com/0zhk

~# ceph df detail
http://termbin.com/thox

94% USED while there is about 21TB worth of data; size = 2 means ~42TB of
RAW usage, but the OSDs in that root sum to ~70TB of available space.
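
A quick back-of-envelope check of those numbers (a rough sketch; the 21TB
and ~70TB figures are read off the outputs above):

~# echo "21 * 2" | bc            # stored data x replication size = expected RAW usage
42
~# echo "42 * 100 / 70" | bc     # expected %RAW USED against ~70TB of capacity
60

So roughly 60% RAW used would be expected, nowhere near the 94% being
reported.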



Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*

On Thu, Jan 18, 2018 at 8:21 PM, Webert de Souza Lima <[email protected]> wrote:

> With the help of robbat2 and llua on the IRC channel I was able to solve
> this situation by taking down the host that had only 2 OSDs.
> After CRUSH-reweighting OSDs 8 and 23, on host mia1-master-fe02, to 0,
> ceph df showed the expected storage capacity usage (about 70%).
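>
> For reference, the reweight was done with the standard CLI (a sketch; the
> OSD ids are the ones named above):
>
> ~# ceph osd crush reweight osd.8 0
> ~# ceph osd crush reweight osd.23 0
>
> Setting the CRUSH weight to 0 stops new data from being mapped to those
> OSDs and lets the cluster drain them onto the remaining hosts.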
>
>
> With this in mind, they told me that it is due to the cluster being uneven
> and unable to balance properly. That makes sense, and it worked. But it is
> still very unexpected behaviour for ceph to say that the pools are 100%
> full and that Available Space is 0.
>
> There were 3 hosts and replication size = 2; even if the host with only
> 2 OSDs had been full (it wasn't), ceph could still have used space on the
> OSDs of the other hosts.
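>
> For context, a typical default replicated CRUSH rule looks roughly like
> this (a generic sketch of the usual default, not a dump from this
> cluster's crushmap):
>
> rule replicated_ruleset {
>         ruleset 0
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step chooseleaf firstn 0 type host
>         step emit
> }
>
> With "step chooseleaf firstn 0 type host" and size = 2, the two replicas
> of each PG must land on two different hosts, which is how one uneven,
> nearly-full host can drive the pools' Available Space to 0 while other
> OSDs still have room.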
>
> Regards,
>
> Webert Lima
> DevOps Engineer at MAV Tecnologia
> *Belo Horizonte - Brasil*
> *IRC NICK - WebertRLZ*
>