'MAX AVAIL' in the 'ceph df' output represents the amount of data that can be
written before the first OSD becomes full, not the sum of all free space
across a set of OSDs.
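
A minimal sketch of that behaviour (a simplified Python model, not Ceph source
code; real Ceph also factors in the full ratio and the pool's CRUSH rule):

def max_avail(osds, replica_size):
    # osds: list of (capacity_bytes, used_bytes, crush_weight)
    total_weight = sum(w for _, _, w in osds)
    # New writes land on each OSD in proportion to its weight share,
    # so the OSD with the least headroom per unit of weight fills first.
    limit = min((cap - used) / (w / total_weight)
                for cap, used, w in osds if w > 0)
    # Divide by the replication factor to express usable, not raw, space.
    return limit / replica_size

TB = 10**12
# Two 1 TB OSDs, one already 90% full: prints ~1e11 (100 GB),
# even though the free space across both OSDs sums to ~1.1 TB.
print(max_avail([(1 * TB, 0.9 * TB, 1.0), (1 * TB, 0, 1.0)], replica_size=2))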
-------- Original Message --------
From: Webert de Souza Lima <webert.b...@gmail.com>
To: ceph-users <ceph-users@lists.ceph.com>
Sent: Friday, January 19, 2018, 20:20
Subject: Re: [ceph-users] ceph df shows 100% used

While it seemed to be solved yesterday, today the %USED has grown a lot again. See:
~# ceph osd df tree
http://termbin.com/0zhk

~# ceph df detail
http://termbin.com/thox

It shows 94% USED while there is about 21 TB worth of data; with size = 2 that
means ~42 TB of RAW usage, but the OSDs in that root sum to ~70 TB of
available space.
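
A quick check of those figures (numbers taken from the outputs above):

data_tb = 21                      # data stored in the pool
replica_size = 2
raw_tb = data_tb * replica_size   # ~42 TB of raw usage
root_capacity_tb = 70             # total size of the OSDs in that root
print(raw_tb / root_capacity_tb)  # 0.6 -> roughly 60% raw used, far from 94%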

Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ
On Thu, Jan 18, 2018 at 8:21 PM, Webert de Souza Lima <webert.b...@gmail.com> 
wrote:
With the help of robbat2 and llua on the IRC channel I was able to solve this
situation by taking down the 2-OSD-only hosts.
After crush reweighting OSDs 8 and 23 from host mia1-master-fe02 to 0, ceph df
showed the expected storage capacity usage (about 70%), as sketched below.
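
The exact commands aren't shown in the thread, but the step is equivalent to
something like this (a sketch wrapping the standard ceph CLI):

import subprocess

# Zero the CRUSH weight of OSDs 8 and 23 so no new data maps to them.
for osd in ("osd.8", "osd.23"):
    subprocess.run(["ceph", "osd", "crush", "reweight", osd, "0"], check=True)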


With this in mind, they told me that it is due to the cluster being uneven
and unable to balance properly. It makes sense, and it worked.
But for me it is still very unexpected behaviour for ceph to say that the
pools are 100% full and that Available Space is 0.
There were 3 hosts and repl. size = 2; even if the host with only 2 OSDs had
been full (it wasn't), ceph could still have used space from the OSDs on the
other hosts.
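
A rough sketch of that reasoning (host names besides the fe02 shorthand, and
all capacities, are made up for illustration):

# Failure domain = host, size = 2: each object needs two distinct hosts.
hosts_tb = {"fe01": 30, "fe02": 10, "bs01": 30}  # fe02 = the 2-OSD host

total = sum(hosts_tb.values())
largest = max(hosts_tb.values())
# Usable data is bounded by the raw space (total / 2) and by the space
# outside the largest host (each second replica must avoid its partner's host).
usable = min(total / 2, total - largest)
print(usable)  # 35 (TB): in this ideal model, ~21 TB of data fits easily

In practice, though, CRUSH keeps sending fe02 its weight-proportional share,
so its OSDs fill first and drive MAX AVAIL toward 0, which matches the
imbalance explanation given on IRC.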
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ